Author Archives: Kaylin Trychon

Get ready for the 2021 Google CTF



Are you ready for no sleep, no chill and a lot of hacking? Our annual Google CTF is back!


The competition kicks off on Saturday, July 17 at 00:00:01 UTC and runs through Sunday, July 18 at 23:59:59 UTC. Teams can register at http://goo.gle/ctf.


Just like last year, the top 16 teams will qualify for our Hackceler8 speed run and the chance to take home a total of $30,301.70 in prize money.



As we reminisce on last year’s event, we’d be remiss if we didn’t recognize our 2020 winning teams:


  • Plaid Parliament of Pwning
  • I Use Bing
  • pasten
  • The Flat Network Society

We are eager to see if they can defend their leet status. For those interested, we have published all 2020 Hackceler8 videos for your viewing pleasure here.


Whether you’re a seasoned CTF player or just curious about cyber security and ethical hacking, we want you to join us. Sign up to learn skills, meet new friends in the security community and even watch the pros in action. For the latest announcements, see g.co/ctf, subscribe to our mailing list or follow us on @GoogleVRP. See you there!


P.S. Curious about last year’s Google CTF challenges? We open-sourced them here.

Introducing SLSA, an End-to-End Framework for Supply Chain Integrity



Supply chain integrity attacks—unauthorized modifications to software packages—have been on the rise in the past two years, and are proving to be common and reliable attack vectors that affect all consumers of software. The software development and deployment supply chain is quite complicated, with numerous threats along the source ➞ build ➞ publish workflow. While point solutions do exist for some specific vulnerabilities, there is no comprehensive end-to-end framework that both defines how to mitigate threats across the software supply chain, and provides reasonable security guarantees. There is an urgent need for a solution in the face of the eye-opening, multi-billion dollar attacks in recent months (e.g. SolarWinds, Codecov), some of which could have been prevented or made more difficult had such a framework been adopted by software developers and consumers.


Our proposed solution is Supply chain Levels for Software Artifacts (SLSA, pronounced “salsa”), an end-to-end framework for ensuring the integrity of software artifacts throughout the software supply chain. It is inspired by Google’s internal “Binary Authorization for Borg” which has been in use for the past 8+ years and is mandatory for all of Google's production workloads. The goal of SLSA is to improve the state of the industry, particularly open source, to defend against the most pressing integrity threats. With SLSA, consumers can make informed choices about the security posture of the software they consume.

How SLSA helps

SLSA helps to protect against common supply chain attacks. The following image illustrates a typical software supply chain and includes examples of attacks that can occur at every link in the chain. Each type of attack has occurred over the past several years and, unfortunately, is increasing as time goes on.




Threat A: Submit bad code to the source repository
  • Known example: Linux hypocrite commits – a researcher attempted to intentionally introduce vulnerabilities into the Linux kernel via patches on the mailing list.
  • How SLSA could have helped: Two-person review caught most, but not all, of the vulnerabilities.

Threat B: Compromise source control platform
  • Known example: PHP – an attacker compromised PHP’s self-hosted git server and injected two malicious commits.
  • How SLSA could have helped: A better-protected source code platform would have been a much harder target for the attackers.

Threat C: Build with official process but from code not matching source control
  • Known example: Webmin – an attacker modified the build infrastructure to use source files not matching source control.
  • How SLSA could have helped: A SLSA-compliant build server would have produced provenance identifying the actual sources used, allowing consumers to detect such tampering.

Threat D: Compromise build platform
  • Known example: SolarWinds – an attacker compromised the build platform and installed an implant that injected malicious behavior during each build.
  • How SLSA could have helped: Higher SLSA levels require stronger security controls for the build platform, making it more difficult to compromise and gain persistence.

Threat E: Use bad dependency (i.e. threats A–H, recursively)
  • Known example: event-stream – an attacker added an innocuous dependency and then updated it to add malicious behavior. The update did not match the code submitted to GitHub (i.e. threat F).
  • How SLSA could have helped: Applying SLSA recursively to all dependencies would have prevented this particular vector, because the provenance would have indicated either that the artifact wasn’t built from a proper builder or that the source did not come from GitHub.

Threat F: Upload an artifact that was not built by the CI/CD system
  • Known example: Codecov – an attacker used leaked credentials to upload a malicious artifact to a GCS bucket, from which users download directly.
  • How SLSA could have helped: Provenance of the artifact in the GCS bucket would have shown that the artifact was not built in the expected manner from the expected source repo.

Threat G: Compromise package repository
  • Known example: Attacks on package mirrors – a researcher ran mirrors for several popular package repositories, which could have been used to serve malicious packages.
  • How SLSA could have helped: As with threat F, provenance of the malicious artifacts would have shown that they were not built as expected or from the expected source repo.

Threat H: Trick consumer into using bad package
  • Known example: Browserify typosquatting – an attacker uploaded a malicious package with a name similar to the original.
  • How SLSA could have helped: SLSA does not directly address this threat, but provenance linking back to source control can enable and enhance other solutions.


What is SLSA

In its current state, SLSA is a set of incrementally adoptable security guidelines being established by industry consensus. In its final form, SLSA will differ from a list of best practices in its enforceability: it will support the automatic creation of auditable metadata that can be fed into policy engines to give "SLSA certification" to a particular package or build platform. SLSA is designed to be incremental and actionable, and to provide security benefits at every step. Once an artifact qualifies at the highest level, consumers can have confidence that it has not been tampered with and can be securely traced back to source—something that is difficult, if not impossible, to do with most software today.

SLSA consists of four levels, with SLSA 4 representing the ideal end state. The lower levels represent incremental milestones with corresponding incremental integrity guarantees. The requirements are currently defined as follows.



SLSA 1 requires that the build process be fully scripted/automated and generate provenance. Provenance is metadata about how an artifact was built, including the build process, top-level source, and dependencies. Knowing the provenance allows software consumers to make risk-based security decisions. Though provenance at SLSA 1 does not protect against tampering, it offers a basic level of code source identification and may aid in vulnerability management.
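To make that concrete, here is a sketch of the kind of provenance SLSA 1 calls for, rendered as YAML for readability. The field names loosely follow the in-toto attestation format the SLSA proposal builds on, and every value below is illustrative rather than prescribed:

subject:
- name: my-app.tar.gz              # the artifact this provenance describes
  digest:
    sha256: "3a1f..."
predicate:
  builder:
    id: https://github.com/Attestations/GitHubHostedActions@v1
  materials:                       # top-level source plus the dependencies that went in
  - uri: git+https://github.com/example/my-app@refs/heads/main
    digest:
      sha1: "c27d3..."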


SLSA 2 requires using version control and a hosted build service that generates authenticated provenance. These additional requirements give the consumer greater confidence in the origin of the software. At this level, the provenance prevents tampering to the extent that the build service is trusted. SLSA 2 also provides an easy upgrade path to SLSA 3.


SLSA 3 further requires that the source and build platforms meet specific standards to guarantee the auditability of the source and the integrity of the provenance, respectively. We envision an accreditation process whereby auditors certify that platforms meet the requirements, which consumers can then rely on. SLSA 3 provides much stronger protections against tampering than earlier levels by preventing specific classes of threats, such as cross-build contamination.


SLSA 4 is currently the highest level, requiring two-person review of all changes and a hermetic, reproducible build process. Two-person review is an industry best practice for catching mistakes and deterring bad behavior. Hermetic builds guarantee that the provenance’s list of dependencies is complete. Reproducible builds, though not strictly required, provide many auditability and reliability benefits. Overall, SLSA 4 gives the consumer a high degree of confidence that the software has not been tampered with.


More details on these proposed levels can be found in the GitHub repository, including the corresponding Source and Build/Provenance requirements. We are open to feedback and suggestions for changes on these requirements.

Proof of Concept

Today, we are releasing a proof-of-concept SLSA 1 provenance generator (repo, marketplace). It allows a user to create and upload provenance alongside their build artifacts, thereby achieving SLSA 1. To use it, add the following snippet to your workflow:

- name: Generate provenance
  uses: slsa-framework/github-actions-demo@v0.1
  with:
    artifact_path: <path-to-artifact/directory>
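For context, here is how that snippet might sit in a complete workflow. This is only a sketch: the job layout is ours, and `make dist` stands in for whatever build step produces your artifacts in `dist/`:

name: build-with-provenance
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Build artifact
      run: make dist                # assumption: the build writes its output to dist/
    - name: Generate provenance
      uses: slsa-framework/github-actions-demo@v0.1
      with:
        artifact_path: dist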


Going forward, we plan to work with popular source, build, and packaging platforms to make it as easy as possible to reach higher levels of SLSA. These plans include generating provenance automatically in build systems, propagating provenance natively in package repositories, and adding security features across the major platforms. Our long-term goal is to raise the security bar across the industry so that the default expectation is higher-level SLSA security standards, with minimal effort on the part of software producers.
 
Summary

SLSA is a practical framework for end-to-end software supply chain integrity, based on a model proven to work at scale in one of the world’s largest software engineering organizations. Achieving the highest level of SLSA for most projects may be difficult, but incremental improvements recognized by lower SLSA levels will already go a long way toward improving the security of the open source ecosystem.

We look forward to working with the community on refining the levels as we begin adopting SLSA for our own open source projects. If you are a project maintainer and interested in trying to adopt and provide feedback on SLSA, please reach out or come join the discussions taking place in the OpenSSF Digital Identity Attestation Working Group.

Check out the Know, Prevent, Fix post to read more about Google’s overall approach to open source security.

Verifiable Supply Chain Metadata for Tekton


If you've been paying attention to the news at all lately, you've probably noticed that software supply chain attacks are rapidly becoming a big problem. Whether you're trying to prevent these attacks, responding to an ongoing one or recovering from one, you understand that knowing what is happening in your CI/CD pipeline is critical.

Fortunately, the Kubernetes-native Tekton project – an open-source framework for creating CI/CD systems – was designed with security in mind from Day One, and the new Tekton Chains project is here to help take it to the next level. Tekton Chains securely captures metadata for CI/CD pipeline executions. We made two really important design decisions early on in Tekton that make supply chain security easy: declarative pipeline definitions and explicit state transitions. The next two sections explain what these mean in practice and how they make it easy to build a secure delivery pipeline.


Definitions or “boxes and arrows”
Just like everything in your high school physics class, a CI/CD pipeline can be modeled as a series of boxes. Each box has some inputs, some outputs, and some steps that happen in the middle. Even if you have one big complicated bash script that fetches dependencies, builds programs, runs tests, downloads the internet and deploys to production, you can draw boxes and arrows to represent this flow. The boxes might be really big, but you can do it.

Since the initial whiteboard sketches, the Pipeline and Task CRDs in Tekton were designed to allow users to define each step of their pipeline at a granular level. These types include support for mandatory declared inputs, outputs, and build environments. This means you can track exactly what sources went into a build, what tools were used during the build itself and what artifacts came out at the end. By breaking up a large monolithic pipeline into a series of smaller, reusable steps, you can increase visibility into the overall system. This makes it easier to understand your exposure to supply chain attacks, detect issues when they do happen and recover from them after.
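As an illustration, here is a minimal Task with declared inputs and outputs. The names, image, and arguments are ours, not from the Tekton catalog:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-image
spec:
  params:
  - name: source-url                # declared input: where the source comes from
    type: string
  results:
  - name: IMAGE_DIGEST              # declared output: digest of the image we built
    description: Digest of the image produced by this Task
  steps:
  - name: build
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --context=$(params.source-url)
    - --digest-file=$(results.IMAGE_DIGEST.path)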


Explicit transitions
After a pipeline is defined, there are a few approaches to orchestrating it: level-triggered and edge-triggered. Like most of the Kubernetes ecosystem, Tekton is designed to operate in a level-triggered fashion. This means steps are executed explicitly by a central orchestrator which runs one task, waits for completion, then decides what to do next. In edge-based systems, a pipeline definition would be translated into a set of events and listeners. Each step fires off events when it completes, and these events are then picked up by listeners which run the next set of steps.

Event-based or edge-triggered systems are easy to reason about, but can be tricky to manage at scale. They also make it much harder to track an artifact as it flows through the entire system. Each step in the pipeline only knows about the one immediately before it; no step is responsible for tracking the entire execution. This can become problematic when you try to understand the security posture of your delivery pipeline.

Tekton was designed with the opposite, level-triggered approach in mind. Instead of a Rube Goldberg machine tied together with duct tape and clothespins, Tekton is more like an explicit assembly line. Level-triggered systems like Tekton are moved from state to state in a calculated manner by a central orchestrator. They require more explicit design up front, but they are easier to observe and reason about afterward. Supply chains that use systems like Tekton are more secure.


Secure delivery pipeline through chains and provenance
So how do these two design decisions combine to make supply chain security easier? Enter Tekton Chains.

By observing the execution of a Task or a Pipeline and paying careful attention to the inputs, outputs, and steps along the way, we can make it easier to track down what happened and why later on. This "observer" can be run in a separate trust domain and cryptographically sign all of this captured metadata as it's stored, leaving a tamper-proof activity ledger. This technique is called "verifiable builds." This securely generated metadata can be used in a number of ways, from audit logging to recovering from security breaches to pre-deployment policy enforcement.

You can install Chains into any Tekton-enabled cluster and configure it to generate this cryptographically signed supply chain metadata for your builds. Chains supports pluggable signature systems like PGP, x509 and Cloud KMS. Payloads can be generated in a few different industry-standard formats, like the Red Hat simple signing and the in-toto provenance specifications. The full documentation is available here, but you can get started quickly with something like this:


For this tutorial, you’ll need access to a GKE Kubernetes cluster and a GCR registry with push credentials. The cluster should already have Tekton Pipelines installed.


Install Tekton Chains into your cluster:

$ kubectl apply --filename https://storage.googleapis.com/tekton-releases/chains/latest/release.yaml



Next, you’ll set up registry authentication for the Tekton Chains controller, so that it can push OCI image signatures to your registry. To set up authentication, you’ll create a Service Account and download credentials:

$ export PROJECT_ID=<GCP Project ID>

$ gcloud iam service-accounts create tekton-chains

$ gcloud iam service-accounts keys create credentials.json --iam-account=tekton-chains@${PROJECT_ID}.iam.gserviceaccount.com



Now, create a Kubernetes Secret from your credentials file so the Chains controller can access it:

$ kubectl create secret docker-registry registry-credentials \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-email=<your-email> \
  --docker-password="$(cat credentials.json)" \
  -n tekton-chains

$ kubectl patch serviceaccount tekton-chains-controller \
  -p "{\"imagePullSecrets\": [{\"name\": \"registry-credentials\"}]}" -n tekton-chains



We can use cosign to generate a keypair as a Kubernetes secret, which the Chains controller will use for signing. Cosign will ask for a password, which will be stored in the secret:

$ cosign generate-key-pair -k8s tekton-chains/signing-secrets
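If you would rather not keep a signing key in the cluster, Chains can instead sign with a KMS key, as mentioned above. Here is a sketch of pointing Chains at a Cloud KMS key through its config map; the config key name follows the Chains documentation, and the KMS path is illustrative:

$ kubectl patch configmap chains-config -n tekton-chains \
  -p '{"data": {"signers.kms.kmsref": "gcpkms://projects/<project>/locations/<location>/keyRings/<ring>/cryptoKeys/<key>"}}'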


Next, you’ll need to set up authentication to your GCR registry for the kaniko task as another Kubernetes Secret.

$ export CREDENTIALS_SECRET=kaniko-credentials

$ kubectl create secret generic $CREDENTIALS_SECRET --from-file credentials.json



Now, we’ll create a kaniko-chains task which will build and push a container image to your registry. Tekton Chains will recognize that an image has been built, and sign it automatically.

$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/chains/main/examples/kaniko/gcp/kaniko.yaml
$ cat <<EOF | kubectl apply -f -
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: kaniko-run
spec:
  taskRef:
    name: kaniko-gcp
  params:
  - name: IMAGE
    value: gcr.io/${PROJECT_ID}/kaniko-chains
  workspaces:
  - name: source
    emptyDir: {}
  - name: credentials
    secret:
      secretName: ${CREDENTIALS_SECRET}
EOF



Wait for the TaskRun to complete, and give the Tekton Chains controller a few seconds to sign the image and store the signature. You should be able to verify the signature with cosign and your public key:

$ cosign verify -key cosign.pub gcr.io/${PROJECT_ID}/kaniko-chains


Congratulations! You’ve successfully signed and verified an OCI image with Tekton Chains and cosign.


What's Next
Within Chains, we'll be improving integration with other supply-chain security projects. This includes support for Binary Transparency and Verifiable Builds through integrations with the Sigstore and In-Toto projects. We'll also be improving and providing a set of well-designed, highly secure Tasks and Pipeline definitions in the TektonCD Catalog.

In Tekton Pipelines, we plan on finishing up TEP-0025 (Hermekton) to enable support for hermetic build execution. If you want to play around with it now, hermekton can be run as an alpha feature in experimental mode (a sketch of enabling it follows below). When hermekton is enabled, a build runs in a locked-down environment without network connectivity. Hermetic builds guarantee all inputs have been explicitly declared ahead of time, providing for a more auditable supply chain. Hermetic builds and Chains align well, because the hermeticity property of a build is captured in the full build provenance recorded by Chains. Chains can generate and attest to metadata specifying exactly which sections of a build had network access.

This means policy can be defined around exactly which build tools are allowed to access the network and which ones are not. This metadata can be used in policies at build time (banning compilers with security vulnerabilities) or stored and used by policy engines at deploy time (only code-reviewed and verifiably built containers are allowed to run).
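To experiment with hermekton today, alpha functionality in Tekton Pipelines is gated behind the feature-flags config map. A sketch, with the flag name as of the time of writing (check the Pipelines docs for your version):

$ kubectl patch configmap feature-flags -n tekton-pipelines \
  -p '{"data": {"enable-api-fields": "alpha"}}'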

We believe supply-chain security must be built-in and by default. No task orchestrator can promise perfect supply-chain security, but TektonCD was designed with unique features in mind that make it easier to do the right thing. We're always looking for feedback on the design, goals and requirements. You can reach out on GitHub or the #chains Slack channel.

Announcing New Abuse Research Grants Program

Our Abuse Bug Bounty program has proven tremendously successful in the three years since its introduction – thanks to our incredibly engaged community of researchers. Their contributions have resulted in more than 1,000 valid bugs, helping us raise the bar in combating product abuse.

As a result of this continued success, today we are announcing a new experimental Abuse Research Grants Program in addition to the already existing Vulnerability Research Grants. Similar to other Research Grant Programs, these grants are up-front awards that our top researchers will receive before they ever submit a bug.

Last year, we increased our rewards to recognize the important work of our community. The growth of this program would not have been possible without partners like David (@xdavidhu), Zohar (ehpus.com), and Ademar (@nowaskyjr) who, on top of becoming our top research experts in Product Abuse, regularly contribute to transparency by sharing their work, further inspiring and influencing our community of researchers.

Despite the growth and success of this program, there remains more work to be done.

With our new Abuse Research Grants Program, we hope to bring even more awareness to product abuse by connecting more closely with our experienced researchers – so we can all work together to overcome these challenges, prevent product abuse and keep our users safe. Here’s how the program works:
  • We invite our top abuse researchers to the program.
  • We award grants immediately before research begins, no strings attached.
  • Bug Hunters apply for the targets we share with them and start their research.
  • On top of the grant, researchers are eligible for regular rewards for the bugs they discover in scope of our Bug Bounty program.
To learn more about this and other grant programs, visit our rules page.

Introducing Half-Double: New hammering technique for DRAM Rowhammer bug


Today, we are sharing details around our discovery of Half-Double, a new Rowhammer technique that capitalizes on the worsening physics of some of the newer DRAM chips to alter the contents of memory.

Rowhammer is a DRAM vulnerability whereby repeated accesses to one address can tamper with the data stored at other addresses. Much like speculative execution vulnerabilities in CPUs, Rowhammer is a breach of the security guarantees made by the underlying hardware. As an electrical coupling phenomenon within the silicon itself, Rowhammer allows the potential bypass of hardware and software memory protection policies. This can allow untrusted code to break out of its sandbox and take full control of the system.

Rowhammer was first discussed in a paper in 2014 for what was then the mainstream generation of DRAM: DDR3. The following year, Google’s Project Zero released a working privilege-escalation exploit. In response, DRAM manufacturers implemented proprietary logic inside their chips that attempted to track frequently accessed addresses and reactively mitigate when necessary.

As DDR4 became widely adopted, it appeared as though Rowhammer had faded away thanks in part to these built-in defense mechanisms. However, in 2020, the TRRespass paper showed how to reverse-engineer and neutralize the defense by distributing accesses, demonstrating that Rowhammer techniques are still viable. Earlier this year, the SMASH research went one step further and demonstrated exploitation from JavaScript, without invoking cache-management primitives or system calls.

Traditionally, Rowhammer was understood to operate at a distance of one row: when a DRAM row is accessed repeatedly (the “aggressor”), bit flips were found only in the two adjacent rows (the “victims”). However, with Half-Double, we have observed Rowhammer effects propagating to rows beyond adjacent neighbors, albeit at a reduced strength. Given three consecutive rows A, B, and C, we were able to attack C by directing a very large number of accesses to A, along with just a handful (~dozens) to B. Based on our experiments, accesses to B have a non-linear gating effect, in which they appear to “transport” the Rowhammer effect of A onto C. Unlike TRRespass, which exploits the blind spots of manufacturer-dependent defenses, Half-Double is an intrinsic property of the underlying silicon substrate. This is likely an indication that the electrical coupling responsible for Rowhammer is a property of distance, effectively becoming stronger and longer-ranged as cell geometries shrink down. Distances greater than two are conceivable.




Google has been working with JEDEC, an independent semiconductor engineering trade organization, along with other industry partners, in search of possible solutions for the Rowhammer phenomenon. JEDEC has published two documents about DRAM and system-level mitigation techniques (JEP300-1 and JEP301-1).

We are disclosing this work because we believe that it significantly advances the understanding of the Rowhammer phenomenon, and that it will help both researchers and industry partners to work together, to develop lasting solutions. The challenge is substantial and the ramifications are industry-wide. We encourage all stakeholders (server, client, mobile, automotive, IoT) to join the effort to develop a practical and effective solution that benefits all of our users.

Telegraphing the future of security

This week at the annual RSA Conference, we will hear from industry leaders on a wide range of issues, from the supply chain security crisis to breach disclosure notifications. While it’s important to talk about where we have been and what is happening in the industry right now, it is equally important to think about where we need to go.


At Google, that means creating a safer Internet that is more secure for the next billion users. 


In order to create a safer Internet, our engineers, technologists and product teams look at what we know today and think about how it will change tomorrow – from analyzing trends in attacker methods, to shifts in the threat landscape, to new technologies – and we use those insights to chart the path ahead. 


We recently asked security experts across Google to telegraph the future of security. Here’s a glimpse of their insights:

What do you think the biggest security challenge will be in 10 years? 


“Shifting the focus of security from the technical hygiene of code and configuration to self-defending data will save time and resources while unlocking rapid and safe innovation.


Defense in depth and the control design we have learned from engineering methodologies will finally catch up to the dynamic nature of software. The better analogies will become biological: the immune system, or the combination of organ systems like circulatory and respiratory – independent and constantly evolving, but stronger operating together in the same superorganism.” - Royal Hansen, Vice President, Security


“Developing a global, unified framework for operating in cyberspace will be the biggest security challenge we face in 10 years. 


Data points to the positive effects of standards on innovation and collaboration, specifically through increased interoperability and reduced information inequality. We need to rethink how digital security standards are developed and operationalized, with an emphasis on the root challenge we seek to solve.” - Camille Stewart, Global Head of Product Security Strategy, Google

“Securing open source software will be the biggest security challenge we face in 10 years.


Over the next decade, the heavy use of open source software by billions of devices that fall into the ‘Internet of Things’ category, will cause the number of vulnerabilities to scale dramatically and outpace our ability to fix them before they are exploited.


For too long it has been assumed that open source software is inherently more secure due to its openness – the thought that multiple people were using it, reviewing it and verifying it. That mindset must shift.” - Vint Cerf, Chief Internet Evangelist


“Complexity will be the greatest challenge we face.


So much of what we have to secure are systems made up of other systems. All of those seams increase the opportunity for attacks. This will only ring more true in 10 years when there are projected to be over 25 billion connected devices.” - Toni Gidwani, Security Engineering Manager, Threat Analysis Group


Where do you think the security industry will be in 10 years?


“Phishing will no longer be a successful attack vector for bad actors. Passwords will be a thing of the past as we see widespread adoption of a secure by default framework.


Our advancements in authentication and verification technology will completely transform how users sign in to their accounts, moving from a sea of passwords to continuous, device-based authentication that seamlessly connects us to our content wherever we are.” - Mark Risher, Director of Product Management, Identity and User Security

“Security will be nearly invisible for all users and many of the standalone security tools will disappear. This will be a result of advanced security technologies being built into devices and platforms by default, instead of bolted on as an afterthought. 


We will also see computing platforms based on simpler, similar models that will make them easier to protect, update and support – leading to democratization of security operations and ultimately breaking down the security talent shortage problem.” - Sunil Potti, Vice President and General Manager of Cloud Security


“In 10 years, Private Computing will be ubiquitous. 


Most folks are aware of end-to-end encryption in private messaging and documents — this allows users to retain exclusive control over their private information and reduces risk from breaches and attacks, including ransomware. But the same concept applies to most aspects of personal digital technology, from home healthcare to photos to your private social network feeds. Delivering helpful, delightful, and safe user experiences - within the Private Computing model - is arguably the most important challenge for the tech world to embrace, today.” - Dave Kleidermacher,  Vice President, Engineering, Android Security & Privacy

If you could make one immediate change to security what would it be?


“Risk transparency – organizations need real-time business context for security data. 


Mapping security issues to business context to determine a risk level is a time consuming process. This delay ultimately leaves organizations at more risk for a security incident. 


The good news is that change is on the horizon. Cloud makes risk transparency easier today, with well-lit security paths, declarative approaches like configuration as code, and more precise inventories and diagnostics.” - Phil Venables, Chief Information Security Officer at Google Cloud

“Expedite IT modernization across governments globally to keep pace with the evolving threat landscape. 


Achieving this would improve productivity, increase cost savings, enhance performance and ensure security every step of the way. Rather than continuing to invest in outdated security models, it’s time governments around the world explored options like a multi-vendor ecosystem and zero-trust security principles that allow for flexibility and innovation.” - Jeanette Manfra, Director for Risk and Compliance at Google Cloud



“Build security and digital literacy into the curriculum of every school program globally. 

We need to solve the lack of understanding of the complex digital ecosystems in which we live our lives and address the cybersecurity skills and talent gap.” - Mark Johnston, Head of Security, Google Cloud Asia-Pacific


“If I could make an immediate change to security, I'd have end user security and privacy be a requirement for all devices. 

There aren’t exceptions made for early versions or the less expensive product; security and privacy is a requirement, just like seat belts in cars.” - Maddie Stone, Security Researcher, Project Zero


Making the Internet more secure one signed container at a time


With over 16 million pulls per month, Google’s `distroless` base images are widely used and depended on by large projects like Kubernetes and Istio. These minimal images don’t include common tools like shells or package managers, making their attack surface (and download size!) smaller than traditional base images such as `ubuntu` or `alpine`. Even with this additional protection, users could still fall prey to typosquatting attacks, or end up accidentally running a malicious image if the distroless build process were compromised. This problem isn’t unique to distroless images – until now, there just hasn’t been an easy way to verify that images are what they claim to be.

Introducing Cosign

Cosign simplifies signing and verifying container images, aiming to make signatures invisible infrastructure – basically, it takes over the hard part of signing and verifying software for you.

We developed cosign in collaboration with the sigstore project, a Linux Foundation project and a non-profit service that seeks to improve the open source software supply chain by easing the adoption of cryptographic software signing, backed by transparency log technologies.

We’re excited to announce that all of our distroless images are now signed by cosign! This means that all users of distroless can verify that they are indeed using the base image they intended to before kicking off image builds, making distroless images even more trustworthy. In fact, Kubernetes has already begun performing this check in their builds.
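For example, a verification looks something like the following, where the key file and image tag are illustrative; the distroless README has the canonical command and public key:

$ cosign verify -key cosign.pub gcr.io/distroless/static:latest

Verification succeeds only if the signature stored in the registry matches the image, so a typosquatted or tampered image fails the check.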

“As we look to the future, Kubernetes SIG Release's vision is to establish a consumable, introspectable, and secure supply chain for the project. By collaborating with the sigstore maintainers (who are fellow Kubernetes contributors) to integrate signing and transparency into our supply chain, we hope to be an exemplar for standards in the cloud native (and wider) tech industry,” said Stephen Augustus, co-chair for Kubernetes SIG Release.

How it works

To start signing distroless we integrated cosign into the distroless CI system, which builds and pushes images via Cloud Build. Signing every distroless image was as easy as adding an additional Cloud Build step to the Cloud Build job responsible for building and pushing the images. This additional step uses the cosign container image and a key pair stored in GCP KMS to sign every distroless image. With this additional signing step, users can now verify that the distroless image they’re running was built in the correct CI environment.
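That extra step looks roughly like the following in a Cloud Build config. This is a sketch: the cosign container image is the one published by the sigstore project, and the KMS key path is illustrative:

steps:
- name: gcr.io/projectsigstore/cosign
  args:
  - sign
  - -key
  - gcpkms://projects/<project>/locations/<location>/keyRings/<ring>/cryptoKeys/<key>
  - gcr.io/distroless/static:latest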


Right now, cosign can be run as an image or as a CLI tool. It supports:

  • Hardware and KMS signing
  • Bring-your-own PKI
  • Our free OIDC PKI (Fulcio)
  • Built-in binary transparency and timestamping service (Rekor)

Signing distroless with cosign is just the beginning, and we plan to incorporate other sigstore technologies into distroless to continue to improve it over the next few months. We also can’t wait to integrate sigstore with other critical projects. Stay tuned here for updates! To get started verifying your own distroless images, check out the distroless README, and to learn more about sigstore, check out sigstore.dev.

Google, HTTPS, and device compatibility

Posted by Ryan Hurst, Product Management, Google Trust Services


Encryption is a fundamental building block when you’re on a mission to organize the world’s information and make it universally accessible with strong security and privacy. This is why a little over four years ago we created Google Trust Services—our publicly trusted Certificate Authority (CA).

The road to becoming a publicly trusted certificate authority is a long one - especially if the certificates you issue will be used by some of the most visited sites on the internet.

When we started on this journey, our goal was that within five years our root certificates would be embedded in enough devices that we could do a single transition to our long-term root certificates.

There are still a large number of actively used devices that have not been updated with our root certificates.

To ensure as many clients as possible continue to have secure connections when using Google Trust Services certificates, we needed to obtain a cross-sign. A cross-sign allows a certificate authority that is already trusted by a device to extend its device compatibility to another certificate authority.

The rules governing public CA operations require that the CA being cross-signed meet the same strict operational, technological, and audit criteria as the certificate authority providing the cross-sign. Google Trust Services is already a publicly trusted CA that is operated to the highest standards and is in all major browser trust stores, so we already met the requirements for a cross-sign from another CA. The key was to find one that is already trusted by the devices on which we wanted our certificates to validate. We worked with GMO GlobalSign for this cross-sign because they operate one of the oldest root certificates in wide use today.

Why are we telling you about this now? On December 15, 2021, the primary root certificate we use today (GlobalSign R2, which is owned and operated by Google Trust Services) will expire.

To ensure these older devices continue to work smoothly with the certificates we issue, we will start sending a new cross-signed certificate to services when we issue new certificates.

The good news is that users shouldn’t notice the change. This is because the cross-certificate (GTS Root R1 Cross) we're deploying was signed by a root certificate created and trusted by most devices over 20 years ago.

In summary, when you use certificates from Google Trust Services, you and your customers will continue to get the benefit of the best device compatibility in the industry.

We know you might have some questions about this change. Here are our answers to the most frequent ones:

I am a developer or ISV that uses a Google API. What do I need to do?

Certificate Authorities change which root CA certificates they use from time to time, so we have always provided a list of certificates that we currently use or may use in the future. Anybody using this list won’t have to change anything. If you have not been using this list and updating it based on our published guidance, you will need to update your application to use these roots and regularly update the list you use so future changes go smoothly for your users.
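For instance, a periodic job could refresh your local trust bundle from the published list (URL per pki.goog at the time of writing):

$ curl -sO https://pki.goog/roots.pem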

I am a website operator that uses Google Trust Services certificates. Do I need to change anything?

You do not! Google Trust Services offers certificates to Alphabet products and services including many Google Cloud services. This means that those services are the ones responsible for configuring and managing TLS for you.

When will this change go into effect? 

We will begin rolling out certificate chains that use this cross-certificate in March 2021. We will slowly roll these changes out throughout the rest of the year and will complete them before December 15, 2021.

I use a service or product that uses Google Trust Services. Is there anything I need to change?

No, this change should be invisible to all end users.

How can I test to see if my devices will trust certificates that rely on this cross-sign? 

We operate a test site that uses the cross-certificate that you can visit here. If you see "Google Trust Services Demo Page - Expected Status: good" and some additional certificate information, the new certificate chain works correctly on your device. If you get an error, the list of trusted roots for the device you're testing needs to be updated.
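You can also inspect the chain a server presents directly. Here is a sketch with openssl, where the host placeholder stands in for the test site linked above; in the output, look for the GTS Root R1 Cross certificate issued by the GlobalSign root:

$ openssl s_client -connect <test-site-host>:443 -showcerts </dev/null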

When does this cross-certificate expire and what happens when it does? 

The cross-certificate expires on January 28, 2028. Once it looks like it is no longer needed for broad device compatibility, we will stop providing this extra certificate to certificate requesters.

I use an old device and it does not trust the cross-sign. What should I do? 

Many devices handle root certificate updates as part of their security patching process. If you are running one of these devices, you should make sure you apply all relevant security updates. It is also possible the manufacturer no longer provides security updates for your device. If this is the case you may want to contact your provider or consider replacing your device.

Does this mean you are no longer using the Google Trust Services roots?

We are still using the Google Trust Services roots; they are simply cross-signed. When it is no longer necessary to use the cross-sign, we will no longer distribute it to certificate requesters.

To learn more, visit Google Trust Services.

A Spectre proof-of-concept for a Spectre-proof web

Posted by Stephen Röttger and Artur Janc, Information Security Engineers


Three years ago, Spectre changed the way we think about security boundaries on the web. It quickly became clear that flaws in modern processors undermined the guarantees that web browsers could make about preventing data leaks between applications. As a result, web browser vendors have been continuously collaborating on approaches intended to harden the platform at scale. Nevertheless, this class of attacks still remains a concern and requires web developers to deploy application-level mitigations.


In this post, we will share the results of Google Security Team's research on the exploitability of Spectre against web users, and present a fast, versatile proof-of-concept (PoC) written in JavaScript which can leak information from the browser's memory. We've confirmed that this proof-of-concept, or its variants, function across a variety of operating systems, processor architectures, and hardware generations.


By sharing our findings with the security community, we aim to give web application owners a better understanding of the impact Spectre vulnerabilities can have on the security of their users' data. Finally, this post describes the protections available to web authors and best practices for enabling them in web applications, based on our experience across Google.


A bit of background


The Spectre vulnerability, disclosed to the public in January 2018, makes use of a class of processor (CPU) design vulnerabilities that allow an attacker to change the intended program control flow while the CPU is speculatively executing subsequent instructions. For example, the CPU may speculate that a length check passes, while in practice it will access out-of-bounds memory. While the CPU state is rolled back once the misprediction is noticed, this behavior leaves observable side effects which can leak data to an attacker.


In 2019, the team responsible for V8, Chrome’s JavaScript engine, published a blog post and whitepaper concluding that such attacks can’t be reliably mitigated at the software level. Instead, robust solutions to these issues require security boundaries in applications such as web browsers to be aligned with low-level primitives, for example, process-based isolation.


In parallel, browser vendors and standards bodies developed security mechanisms to protect web users from these classes of attacks. This included both architectural changes which offer default protections enabled in some browser configurations (such as Site Isolation, out-of-process iframes, and Cross-Origin Read Blocking), as well as broadly applicable opt-in security features that web developers can deploy in their applications: Cross-Origin Resource Policy, Cross-Origin Opener Policy, Cross-Origin Embedder Policy, and others.


These mechanisms, while crucially important, don't prevent the exploitation of Spectre; rather, they protect sensitive data from being present in parts of the memory from which they can be read by the attacker. To evaluate the robustness of these defenses, it's therefore important to develop security tools that help security engineers understand the practical implications of speculative execution attacks for their applications.


Demonstrating Spectre in a web browser


Today, we’re sharing proof-of-concept (PoC) code that confirms the practicality of Spectre exploits against JavaScript engines. We use Google Chrome to demonstrate our attack, but these issues are not specific to Chrome, and we expect that other modern browsers are similarly vulnerable to this exploitation vector. We have developed an interactive demonstration of the attack available at https://leaky.page/; the code and a more detailed writeup are published on GitHub here.




The demonstration website can leak data at a speed of 1kB/s when running on Chrome 88 on an Intel Skylake CPU. Note that the code will likely require minor modifications to apply to other CPUs or browser versions; however, in our tests the attack was successful on several other processors, including the Apple M1 ARM CPU, without any major changes.


While experimenting, we also developed other PoCs with different properties. Some examples include:

  • A PoC which can leak 8kB/s of data at a cost of reduced stability using performance.now() as a timer with 5μs precision.

  • A PoC which leaks data at 60B/s using timers with a precision of 1ms or worse.


We chose to release the current PoC since it has a negligible setup time and works in the absence of high precision timers, such as SharedArrayBuffer.


The main building blocks of the PoC are:

  1. A Spectre gadget: code that triggers attacker-controlled transient execution.

  2. A side-channel: a way to observe side effects of the transient execution.


1. The gadget

For the published PoC, we implemented a simple Variant 1 gadget: a JavaScript array is speculatively accessed out of bounds after training the branch predictor that the compiler-inserted length check will succeed. This particular gadget can be mitigated at the software level; however, Chrome's V8 team concluded that this is not the case for other gadgets: “we found that effective mitigation of some variants of Spectre, particularly variant 4, to be simply infeasible in software.”


We invite the security community to extend our research and develop code that makes use of other Spectre gadgets.


2. The side-channel

A common way to leak secret data via speculative execution is to use a cache side-channel. By observing whether a certain memory location is present in the cache, we can infer whether it was accessed during speculative execution. The challenge in JavaScript is to find a high-resolution timer that can distinguish cached from uncached memory accesses, as modern browsers have reduced the timer granularity of the performance.now() API and disabled SharedArrayBuffers in contexts without cross-origin isolation to prevent timing attacks.


As early as 2018, the V8 team shared their observation that reduced timer granularity is not sufficient to mitigate Spectre, since attackers can arbitrarily amplify timing differences. The amplification technique they presented was based on reading the secret data multiple times, which can, however, reduce the effectiveness of the attack if the information leak is probabilistic.


In our PoC, we developed a new technique that overcomes this limitation. By abusing the behavior of the Tree-PLRU cache eviction strategy commonly found in modern CPUs, we were able to significantly amplify the cache timing with a single read of secret data. This allowed us to leak data efficiently even with low precision timers. For technical details, see the demonstration at https://leaky.page/plru.html.


While we don't believe this particular PoC can be re-used for nefarious purposes without significant modifications, it serves as a compelling demonstration of the risks of Spectre. In particular, we hope it provides a clear signal for web application developers that they need to consider this risk in their security evaluations and take active steps to protect their sites.


Deploying web defenses against Spectre


The low-level nature of speculative execution vulnerabilities makes them difficult to fix comprehensively, as a proper patch can require changes to the firmware or hardware on the user's device. While operating system and web browser developers have implemented important built-in protections where possible (including Site Isolation with out-of-process iframes and Cross-Origin Read Blocking in Google Chrome, or Project Fission in Firefox), the design of existing web APIs still makes it possible for data to inadvertently flow into an attacker's process.


With this in mind, web developers should consider more robustly isolating their sites by using new security mechanisms that actively deny attackers access to cross-origin resources. These protections mitigate Spectre-style hardware attacks and common web-level cross-site leaks, but require developers to assess the threat these vulnerabilities pose to their applications and understand how to deploy them. To assist in that evaluation, Chrome's web platform security team has published Post-Spectre Web Development and Mitigating Side-Channel Attacks with concrete advice for developers; we strongly recommend following their guidance and enabling the following protections:



  • Cross-Origin Opener Policy (COOP) lets developers ensure that their application window will not receive unexpected interactions from other websites, allowing the browser to isolate it in its own process. This adds an important process-level protection, particularly in browsers which don't enable full Site Isolation; see web.dev/coop-coep.


  • Cross-Origin Embedder Policy (COEP) ensures that any authenticated resources requested by the application have explicitly opted in to being loaded.  Today, to guarantee process-level isolation for highly sensitive applications in Chrome or Firefox, applications must enable both COEP and COOP; see web.dev/coop-coep.


In addition to enabling these isolation mechanisms, ensure your application also enables standard protections, such as the X-Frame-Options and X-Content-Type-Options headers, and uses SameSite cookies. Many Google applications have already deployed, or are in the process of deploying, these mechanisms, providing a defense against speculative execution bugs in situations where default browser protections are insufficient.
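As a quick check that a deployed page actually sends the isolation headers, you can inspect the response headers directly (example.com stands in for your site). The values shown are the ones the web.dev guidance recommends for enabling cross-origin isolation:

$ curl -sI https://example.com | grep -iE 'cross-origin-(opener|embedder)-policy'
cross-origin-opener-policy: same-origin
cross-origin-embedder-policy: require-corp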


It's important to note that while all of the mechanisms described in this article are important and powerful security primitives, they don't guarantee complete protection against Spectre; they require a considered deployment approach which takes behaviors specific to the given application into account. We encourage security engineers and researchers to use and contribute to our Spectre proof-of-concept to review and improve the security posture of their sites.



Tip: To help you protect your website from Spectre, the Google Security Team has created Spectroscope, a prototype Chrome extension that scans your application and finds resources which may require enabling additional defenses. Consider using it to assist with your deployments of web isolation features.

#ShareTheMicInCyber: Brooke Pearson


In an effort to showcase the breadth and depth of Black+ contributions to security and privacy fields, we’ve launched a profile series that aims to elevate and celebrate the Black+ voices in security and privacy we have here at Google.



Brooke Pearson manages the Privacy Sandbox program at Google, and her team’s mission is to “create a thriving web ecosystem that is respectful of users and private by default.” Brooke lives this mission, and it is what makes her an invaluable asset to the Chrome team and Google.

In addition to her work advancing the fields of security and privacy, she is a fierce advocate for women in the workplace and for elevating the voices of her fellow Black+ practitioners in security and privacy. She has participated and supported the #ShareTheMicInCyber campaign since its inception.

Brooke is passionate about delivering privacy solutions that work and making browsing the web an inherently more private experience for users around the world.

Why do you work in security or privacy?

I work in security and privacy to protect people and their personal information. It’s that simple. Security and privacy are two issues that are core to shaping the future of technology and how we interact with each other over the Internet. The challenges are immense, and yet the ability to impact positive change is what drew me to the field.

Tell us a little bit about your career journey to Google

My career journey into privacy does not involve traditional educational training in the field. In fact, my background is in public policy and communications, but when I transitioned to the technology industry, I realized that the most pressing policy issues for companies like Google surround the nascent field of privacy and the growing field of security.

After I graduated from college at Azusa Pacific University, I was the recipient of a Fulbright scholarship to Macau, where I spent one year studying Chinese and teaching English. I then moved to Washington D.C. where I initially worked for the State Department while finishing my graduate degree in International Public Policy at George Washington University. I had an amazing experience in that role and it afforded me some incredible networking opportunities and the chance to travel the world, as I worked in Afghanistan and Central Asia.

After about five years in the public sector, I joined Facebook as a Program Manager for the Global Public Policy team, initially focused on social good programs like Safety Check and Charitable Giving. Over time, I could see that the security team at Facebook was focused on fighting the proliferation of misinformation, and this called to me as an area where I could put my expertise in communication and geopolitical policy to work. So I switched teams and I've been in the security and privacy field ever since, eventually for Uber and now with Google's Chrome team.

At Google, privacy and security are at the heart of everything we do. Chrome is tackling some of the world's biggest security and privacy problems, and everyday my work impacts billions of people around the world. Most days, that's pretty daunting, but every day it's humbling and inspiring.

What is your security or privacy "soapbox"?

If we want to encourage people to engage in more secure behavior, we have to make it easy to understand and easy to act on. Every day we strive to make our users safer with Google by implementing security and privacy controls that are effective and easy for our users to use and understand.

As a program manager, I’ve learned that it is almost always more effective to offer a carrot than a stick when it comes to security and privacy hygiene. I encourage all of our users to visit our Safety Center to learn all the ways Google helps you stay safe online, every day.

If you are interested in following Brooke’s work here at Google and beyond, please follow her on Twitter @brookelenet. We will be bringing you more profiles over the coming weeks and we hope you will engage with and share these with your network.

If you are interested in participating or learning more about #ShareTheMicInCyber, click here.