Author Archives: Kaylin Trychon

Trick & Treat! Paying Leets and Sweets for Linux Kernel privescs and k8s escapes


Starting today and for the next 3 months (until January 31, 2022), we will pay 31,337 USD to security researchers who exploit a privilege escalation in our lab environment using a patched vulnerability, and 50,337 USD to those who use a previously unpatched vulnerability or a new exploit technique.

We are constantly investing in the security of the Linux Kernel because much of the internet, and Google itself (from the devices in our pockets to the services running on Kubernetes in the cloud), depends on its security. We research its vulnerabilities and attacks, and we study and develop its defenses.

But we know that there is more work to do. That’s why we have decided to build on top of our kCTF VRP from last year and triple our previous reward amounts (for at least the next 3 months).

Our base reward for each publicly patched vulnerability is 31,337 USD (at most one exploit per vulnerability), but the reward can go up to 50,337 USD in two cases:
  • If the vulnerability was otherwise unpatched in the Kernel (0day)
  • If the exploit uses a new attack or technique, as determined by Google
We hope the new rewards will encourage the security community to explore new Kernel exploitation techniques to achieve privilege escalation and drive quicker fixes for these vulnerabilities. It is important to note that the easiest exploitation primitives are not available in our lab environment due to the hardening done on Container-Optimized OS. Note that this program complements Android's VRP rewards, so exploits that work on Android could also be eligible for up to 250,000 USD (in addition to the rewards from this program).

The mechanics are:
  1. Connect to the kCTF VRP cluster, obtain root, and read the flag (read this writeup for how it was done before, and this threat model for inspiration), then submit your flag and a checksum of your exploit in this form (a minimal checksum sketch follows this list).
  2. (If applicable) report vulnerabilities to upstream.
    • We strongly recommend including a patch since that could qualify for an additional reward from our Patch Reward Program, but please report vulnerabilities upstream promptly once you confirm they are exploitable.
  3. Report your finding to Google VRP once all patches are publicly available (we don't want to receive details of unpatched vulnerabilities ahead of the public).
    • Provide the exploit code and the algorithm used to calculate the hash checksum.
    • A rough description of the exploit strategy is welcome.
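For the checksum in step 1, any standard cryptographic hash of your exploit file works; here is a minimal Python sketch using SHA-256 (the hash choice and the file name exploit.c are illustrative assumptions on our part, since the program only asks that you later disclose which algorithm you used):

import hashlib

def exploit_checksum(path: str) -> str:
    # Hash the exact exploit file you will later submit to Google VRP.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

print(exploit_checksum("exploit.c"))  # paste this hex digest into the form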
Reports will be triaged on a weekly basis. If you have problems with the lab environment (unavailability, technical issues, or other questions), contact us on Discord in #kctf. You can read more details about the program here. Happy hunting!

Protecting your device information with Private Set Membership


At Google, keeping you safe online is our top priority, so we continuously build the most advanced privacy-preserving technologies into our products. Over the past few years, we've utilized innovations in cryptographic research to keep your personal information private by design and secure by default. As part of this, we launched Password Checkup, which protects account credentials by notifying you if an entered username and password are known to have been compromised in a prior data breach. Using cryptographic techniques, Password Checkup can do this without revealing your credentials to anyone, including Google. Today, Password Checkup protects users across many platforms including Android, Chrome and Google Password Manager.

Another example is Private Join and Compute, an open source protocol which enables organizations to work together and draw insights from confidential data sets. Two parties are able to encrypt their data sets, join them, and compute statistics over the joint data. By leveraging secure multi-party computation, Private Join and Compute is designed to ensure that the plaintext data sets are concealed from all parties.
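To give a flavor of the cryptography involved, here is a toy Python sketch of the commutative blinding idea that underlies such private joins. This is an illustration with deliberately tiny, insecure parameters and made-up identifiers, not the actual Private Join and Compute protocol:

import hashlib
import random

# Toy parameters: p = 2q + 1 is a "safe prime", so the squares mod p form a
# subgroup of prime order q. Far too small to be secure; illustration only.
p, q = 2039, 1019

def to_group(item: str) -> int:
    # Hash an identifier into the subgroup of squares mod p.
    h = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
    return pow(h % p, 2, p)

# Each party picks a secret exponent. Exponentiation commutes, so
# (H(x)^a)^b == (H(x)^b)^a: doubly blinded values match iff the items match.
a = random.randrange(1, q)  # party A's secret
b = random.randrange(1, q)  # party B's secret

set_a = {"alice@example.com", "bob@example.com"}
set_b = {"bob@example.com", "carol@example.com"}

double_a = {pow(pow(to_group(x), a, p), b, p) for x in set_a}
double_b = {pow(pow(to_group(x), b, p), a, p) for x in set_b}

# Both sides learn a statistic (here, the size of the join), not raw items.
print(len(double_a & double_b))  # expected: 1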

In this post, we introduce the next iteration of our research, Private Set Membership, as well as its open-source availability. At a high level, Private Set Membership considers the scenario in which Google holds a database of items, and user devices need to contact Google to check whether a specific item is found in the database. As an example, users may want to check whether a computer program appears on a block list of known malicious software before executing the program. Often, the set's contents and the queried items are sensitive, so we designed Private Set Membership to perform this task while preserving the privacy of our users.

Protecting your device information during enrollment
Beginning in Chrome 94, Private Set Membership will enable Chrome OS devices to complete the enrollment process in a privacy-preserving manner. Device enrollment is an integral part of the out-of-box experience that welcomes you when getting started with a Chrome OS device.

The device enrollment process requires checking membership of device information in encrypted Google databases, including checking if a device is enterprise enrolled or determining if a device was pre-packaged with a license. The correct end state of your Chrome OS device is determined using the results of these membership checks.

During the enrollment process, Private Set Membership ensures that no information leaves your Chrome OS device that could be decrypted by anyone else. Google will never learn any device information, and devices will not learn any unnecessary information about other devices. To our knowledge, this is the first instance of advanced cryptographic tools being leveraged to protect device information during the enrollment process.

A deeper look at Private Set Membership
Private Set Membership is built upon two cryptographic tools:
  • Homomorphic encryption is a powerful cryptographic tool that enables computation over encrypted data without the need for decryption. As an example, given the encryptions of values X and Y, homomorphic encryption enables computing the encryption of the sum of X and Y without ever needing to decrypt. This preserves privacy as the data remains concealed during the computation. Private Set Membership is built upon Google's open source homomorphic encryption library (a toy sketch of this additive property follows this list).
  • Oblivious hashing is a cryptographic technique that enables two parties to jointly compute a hash, H(K, x), where the sender holds the key, K, and the receiver holds the hash input, x. The receiver will obtain the hash, H(K, x), without learning the key K. At the same time, the input x will be hidden from the sender.
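To make the homomorphic addition example above concrete, here is a toy Python sketch using textbook Paillier encryption with tiny, insecure primes. This is purely an illustration of the "multiply ciphertexts to add plaintexts" idea; it is not the scheme used by Google's open source library:

from math import gcd
import random

# Toy Paillier keypair with tiny, insecure primes (illustration only).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)

def L_func(u: int) -> int:
    return (u - 1) // n

mu = pow(L_func(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def enc(m: int) -> int:
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return (L_func(pow(c, lam, n2)) * mu) % n

# Multiplying ciphertexts adds the underlying plaintexts, without decrypting.
x, y = 41, 1296
assert dec((enc(x) * enc(y)) % n2) == x + y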
Take a look at how Private Set Membership utilizes homomorphic encryption and oblivious hashing to protect data below:

[Diagram: Private Set Membership protocol flow, combining oblivious hashing and homomorphic encryption]

For a deeper look into the technology behind Private Set Membership, you can also access our open source code.

Privacy properties
By using Private Set Membership, the following privacy properties are obtained:
  • No data leaves the device when checking membership. We designed Private Set Membership using advanced cryptographic techniques to ensure that data never leaves the device in an unencrypted manner when performing membership checks. As a result, the data on your device will be concealed from everyone, including Google.
  • Devices learn only membership information and nothing else. Private Set Membership was designed to prevent devices from learning any unnecessary information about other devices when querying. For each query, devices learn only the results of the membership check and no other information.
Using Private Set Membership to solve more problems
Private Set Membership is a powerful tool that solves a fundamental problem in a privacy-preserving manner. This is just the beginning of what’s possible using this technology. Private Set Membership can help preserve user privacy across a wide array of applications. For example:
  • Checking allow or block lists. In this setting, users check membership in an allow or block list to determine whether to proceed with the desired action. Private Set Membership enables this check without any information about the software leaving the device.
  • Control flows with conditional membership checks. Control flows are a common computer science concept that represent arbitrary computer programs with conditional branching. In many cases, the conditional branches require checking membership of sensitive data to determine the next step of the algorithm. By utilizing Private Set Membership, we enable execution of these algorithms while ensuring data never leaves the user’s device.
We still have a ways to go before Private Set Membership is used for general membership checks by devices. At Google, we are exploring a number of potential use cases to protect your privacy using Private Set Membership. We are excited to continue advancing the state-of-the-art cryptographic research to keep you safe.

Acknowledgements


The work in this post is the result of a collaboration between a large group of current and former Google engineers, research scientists and others including: Amr Aboelkher, Asra Ali, Ghous Amjad, Yves Arrouye, Roland Bock, Xi Chen, Maksim Ivanov, Dennis Kalinichenko, Nirdhar Khazanie, Dawon Lee, Tancrède Lepoint, Lawrence Lui, Pavol Marko, Thiemo Nagel, Mariana Raykova, Aaron Segal, Joon Young Seo, Karn Seth, and Jason Wong.

Launching a collaborative minimum security baseline


According to an Opus and Ponemon Institute study, 59% of companies have experienced a data breach caused by one of their vendors or third parties. Outsourcing operations to third-party vendors has become a popular business strategy as it allows organizations to save money and increase operational efficiency. While these are positives for business operations, they do create significant security risks. Because these vendors have access to critical systems and customer data, their security posture becomes equally important.

Up until today, organizations of all sizes have had to design and implement their own security baselines for vendors that align with their risk posture. Unfortunately, this creates an impossible situation for vendors and organizations alike as they try to accommodate thousands of different requirements.

To solve this challenge, organizations across the industry teamed up to design Minimum Viable Secure Product (MVSP) – a vendor-neutral security baseline that is designed to eliminate overhead, complexity and confusion during the procurement, RFP and vendor security assessment process by establishing minimum acceptable security baselines. With MVSP, the industry can increase clarity during each phase so parties on both sides of the equation can achieve their goals, and reduce the onboarding and sales cycle by weeks or even months.

MVSP was developed and is backed by companies across the industry, including Google, Salesforce, Okta, Slack and more. Our goal is to increase the minimum bar for security across the industry while simplifying the vetting process.

MVSP is a collaborative baseline focused on developing a set of minimum security requirements for business-to-business software and business process outsourcing suppliers. Designed with simplicity in mind, it contains only those controls that must, at a minimum, be implemented to ensure a reasonable security posture. MVSP is presented in the form of a minimum baseline checklist that can be used to verify the security posture of a solution.

How can MVSP help you?

Security teams measuring vendor offerings against a set of minimum security baselines

MVSP ensures that vendor selection and RFP include a minimum baseline that is backed by the industry. Communicating minimum requirements up front ensures everyone understands where they stand and that the expectations are clear.

Internal teams looking to measure your security against minimum requirements

MVSP provides a set of minimum security baselines that can be used as a checklist to understand gaps in the security of a product or service. This can be used to highlight opportunities for improvement and raise the visibility of those gaps within the organization, with clearly defined benefits.


Procurement teams gathering information about vendor services

MVSP provides a single set of security-relevant questions that are publicly available and industry-backed. Aligning on a single set of baselines allows clearer understanding from vendors, resulting in a quicker and more accurate response.

Legal teams negotiating contractual controls

MVSP ensures expectations regarding minimum security controls are understood up front, reducing discussions of controls at the contract negotiation stage. Referencing an external baseline helps to simplify contract language and increases familiarity with the requirements.

Compliance teams documenting processes

MVSP provides an externally recognized and adopted set of security baselines on top of which to build your compliance efforts.

We welcome community feedback and interest from other organizations who want to contribute to the MVSP baseline. Together we can raise the minimum bar for security across the industry and make everyone safer.

Acknowledgements

The work in this post is the result of a collaboration between a large number of security practitioners across the industry including: Marat Vyshegorodtsev, Chris John Riley, Gabor Acs-Kurucz, Sebastian Oglaza, Gen Buckley, and Kevin Clark.

Google Protects Your Accounts – Even When You No Longer Use Them


What happens to our digital accounts when we stop using them? It’s a question we should all ask ourselves, because when we are no longer keeping tabs on what’s happening with old accounts, they can become targets for cybercrime.

In fact, quite a few recent high-profile breaches targeted inactive accounts. The Colonial Pipeline ransomware attack came through an inactive account that didn’t use multifactor authentication, according to a consultant who investigated the incident. And in the case of the recent T-Mobile breach this summer, information from inactive prepaid accounts was accessed through old billing files. Inactive accounts can pose a serious security risk.

For Google users, Inactive Account Manager helps with that problem. You can decide when Google should consider your account inactive and whether Google should delete your data or share it with a trusted contact.

Here’s How it Works

Once you sign up for Inactive Account Manager, available in My Account settings, you are asked to decide three things:
  1. When the account should be considered inactive: You can choose 3, 6, 12 or 18 months of inactivity before Google takes action on your account. Google will notify you a month before the designated time via a message sent to your phone and an email sent to the address you provide.
  2. Who to notify and what to share: You can choose up to 10 people for Google to notify once your Google Account becomes inactive (they won’t be notified during setup). You can also give them access to some of your data. If you choose to share data with your trusted contacts, the email will include a list of the selected data you wanted to share with them, and a link they can follow to download that data. This can include things like photos, contacts, emails, documents and other data that you specifically choose to share with your trusted contact. You can also choose to set up a Gmail AutoReply, with a custom subject and message explaining that you’ve ceased using the account.
  3. If your inactive Google Account should be deleted: After your account becomes inactive, Google can delete all its content or send it to your designated contacts. If you’ve decided to allow someone to download your content, they’ll be able to do so for 3 months before it gets deleted. If you choose to delete your Google Account, this will include your publicly shared data (for example, your YouTube videos, or blogs on Blogger). You can review the data associated with your account on the Google Dashboard. If you use Gmail with your account, you'll no longer be able to access that email once your account becomes inactive. You'll also be unable to reuse that Gmail username.
Setting up an Inactive Account plan is a simple step you can take to protect your data, secure your account in case it becomes inactive, and ensure that your digital legacy is shared with your trusted contacts in case you become unable to access your account. Our Privacy Checkup now reminds you to set up a plan for your account, and we’ll send you an occasional reminder about your plan via email.

At Google, we are constantly working to keep you safer online. This October, as we celebrate Cybersecurity Awareness Month, we want to remind our users of the security and privacy controls they have at their fingertips. For more ways to enhance your security check out our top five safety tips and visit our Safety Center to learn all the ways Google helps keep you safer online, every day.

Introducing the Secure Open Source Pilot Program


Over the past year we have made a number of investments to strengthen the security of critical open source projects, and recently announced our $10 billion commitment to cybersecurity defense including $100 million to support third-party foundations that manage open source security priorities and help fix vulnerabilities.

Today, we are excited to announce our sponsorship for the Secure Open Source (SOS) pilot program run by the Linux Foundation. This program financially rewards developers for enhancing the security of critical open source projects that we all depend on. We are starting with a $1 million investment and plan to expand the scope of the program based on community feedback.


Why SOS?

SOS rewards a very broad range of improvements that proactively harden critical open source projects and supporting infrastructure against application and supply chain attacks. SOS complements existing programs that reward vulnerability management by covering a comparatively wider range of work, in order to support project developers.


What projects are in scope?

Since there is no single definition of what makes an open source project critical, our selection process will be holistic. During submission evaluation we will consider the definition established by the National Institute of Standards and Technology in response to the recent Executive Order on Cybersecurity, along with the criteria listed below:
  • The impact of the project:
    • How many and what types of users will be affected by the security improvements?
    • Will the improvements have a significant impact on infrastructure and user security?
    • If the project were compromised, how serious or wide-reaching would the implications be?
  • The project’s rankings in existing open source criticality research.

What security improvements qualify? 

The program is initially focused on rewarding the following work:

  • Software supply chain security improvements including hardening CI/CD pipelines and distribution infrastructure. The SLSA framework suggests specific requirements to consider, such as basic provenance generation and verification.
  • Adoption of software artifact signing and verification. One option to consider is Sigstore's set of utilities (e.g. cosign).
  • Project improvements that produce higher OpenSSF Scorecard results. For example, a contributor can follow remediation suggestions for the following Scorecard checks:
    • Code-Review
    • Branch-Protection
    • Pinned-Dependencies
    • Dependency-Update-Tool
    • Fuzzing
  • Use of OpenSSF Allstar and remediation of discovered issues.
  • Earning a CII Best Practice Badge (which also improves the Scorecard results).
We'll continue adding to the above list, so check our FAQ for updates. You may also submit improvements not listed above, if you provide justification and evidence to help us understand the complexity and impact of the work.

Only work completed after October 1, 2021 qualifies for SOS rewards.

Upfront funding is available on a limited case by case basis for impactful improvements of moderate to high complexity over a longer time span. Such requests should explain why funding is required upfront and provide a detailed plan of how the improvements will be landed.

How to participate

Review our FAQ and fill out this form to submit your application.

Please include as much data or supporting evidence as possible to help us evaluate the significance of the project and your improvements. 


Reward amounts

Reward amounts are determined based on complexity and impact of work:
  • $10,000 or more for complicated, high-impact and lasting improvements that almost certainly prevent major vulnerabilities in the affected code or supporting infrastructure.
  • $5,000-$10,000 for moderately complex improvements that offer compelling security benefits.
  • $1,000-$5,000 for submissions of modest complexity and impact.
  • $505 for small improvements that nevertheless have merit from a security standpoint.

Looking Ahead

The SOS program is part of a broader effort to address a growing truth: the world relies on open source software, but widespread support and financial contributions are necessary to keep that software safe and secure. This $1 million investment is just the beginning—we envision the SOS pilot program as the starting point for future efforts that will hopefully bring together other large organizations and turn it into a sustainable, long-term initiative under the OpenSSF. We welcome community feedback and interest from others who want to contribute to the SOS program. Together we can pool our support to give back to the open source community that makes the modern internet possible.

Announcing New Patch Reward Program for Tsunami Security Scanner


One year ago, we published the Tsunami security scanner with the goal of detecting high severity, actively exploited vulnerabilities with high confidence. In the last several months, the Tsunami scanner team has been working closely with our vulnerability rewards program, Bug Hunters, to further improve Tsunami's security detection capabilities.

Today, we are announcing a new experimental Patch Reward Program for the Tsunami project. Participants in the program will receive patch rewards for providing novel Tsunami detection plugins and web application fingerprints. We hope this program will allow us to quickly extend the detection capabilities of the scanner to better benefit our users and uncover more vulnerabilities in their network infrastructure.

For this launch, we will accept two types of contributions:
  • Vulnerability detection plugins: To expand the Tsunami scanner's detection capabilities, we encourage everyone interested in contributing to this project to add new vulnerability detection plugins. All plugin contributions will be reviewed by panel members from Google's Vulnerability Management team, and the reward amount will be determined by the severity as well as the time sensitivity of the vulnerability.
  • Web application fingerprints: Several months ago, we added new web application fingerprinting capabilities to Tsunami that detect popular off-the-shelf web applications. Tsunami achieves this by matching application fingerprints against a database of known web application fingerprints; more fingerprint data is needed to support more web applications. You will be rewarded a flat amount for each application added to the database (a toy sketch of the matching idea follows this list).
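To illustrate the matching idea mentioned above, here is a Python toy with made-up paths, contents, and version labels. Tsunami plugins themselves are written in Java and use Tsunami's own plugin API; this only shows the concept of hashing static files against a fingerprint database:

import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical database mapping (static path, content hash) -> (application, version).
FINGERPRINTS = {
    ("/static/app.css", sha256_hex(b"body { color: red }")): ("ExampleCMS", "2.1"),
}

def identify(path: str, body: bytes):
    # Return (app, version) if a fetched static file matches a known fingerprint.
    return FINGERPRINTS.get((path, sha256_hex(body)))

print(identify("/static/app.css", b"body { color: red }"))  # ('ExampleCMS', '2.1')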

As with other Security Reward Programs, rewards can be donated to charity—and we'll double your donation if you choose to do so. We'll run this program in iterations so that everyone interested has the opportunity to participate.

To learn more about this program, please check out our official rules and guidelines. And if you have any questions or suggestions for the program, feel free to contact us at [email protected].

Distroless Builds Are Now SLSA 2

A few months ago we announced that we started signing all distroless images with cosign, which allows users to verify that they have the correct image before starting the build process. Signing our images was our first step towards fully securing the distroless supply chain. Since then, we’ve implemented even more accountability in our supply chain and are excited to announce that distroless builds have achieved SLSA 2. SLSA is a framework for improving supply chain security, and Level 2 ensures that the build service is tamper resistant.


This means that in addition to a signature, each distroless image now has an associated signed provenance. This provenance is an in-toto attestation and includes information about how each image was built, what command was run, and what build system was used. It also includes any special parameters that were passed in, the exact commit the images were built at, and more. This provenance is a useful tool for builds that need to be audited in the future.
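As a small illustration of how such provenance can be consumed programmatically, here is a Python sketch that checks whether a given image digest appears among the subject entries of an in-toto attestation like the one shown later in this post (the provenance.json file name is an assumption; the attestation itself can be fetched with the rekor-cli pipeline shown below):

import json

def digest_in_provenance(provenance_path: str, image: str, sha256: str) -> bool:
    # True if (image name, digest) appears in the attestation's subject list.
    with open(provenance_path) as f:
        attestation = json.load(f)
    return any(
        s["name"] == image and s["digest"].get("sha256") == sha256
        for s in attestation.get("subject", [])
    )

print(digest_in_provenance(
    "provenance.json",
    "gcr.io/distroless/base",
    "4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91",
))  # True for the attestation shown below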

Here is how distroless meets each SLSA 2 requirement:
  • Source - Version controlled: Source code is in GitHub.
  • Build - Scripted build: The build script exists as a Tekton Pipeline, invoked as a Google Cloud Build step.
  • Build - Build service: All steps run on Kubernetes with Tekton.
  • Provenance - Available: Provenance is available in the rekor transparency log as an in-toto attestation.
  • Provenance - Authenticated: Provenance is signed with the distroless GCP KMS key.
  • Provenance - Service generated: Provenance is generated by Tekton Chains from a Tekton TaskRun.



Achieving SLSA 2 required some changes to the distroless build pipeline: we set up Tekton Pipelines and Tekton Chains in a GKE cluster to automate building images and generating provenance. Every time a pull request is merged to the distroless GitHub repo, a Tekton Pipeline is triggered. This Pipeline builds the distroless images, and Tekton Chains is responsible for generating signed provenance for each image. Tekton Chains stores the signed provenance alongside the image in an OCI registry and also stores a record of the provenance in the rekor transparency log.

Don't trust us?


You can try the build yourself. Because distroless builds are reproducible, all the information to replicate the build is in the provenance, and you or a trusted third party can rebuild the image and verify the build is correct by matching image digests.


You can verify an attestation for a distroless image with cosign and the distroless public key:

$ cosign verify-attestation -key cosign.pub gcr.io/distroless/base@sha256:4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91


Verification for gcr.io/distroless/base@sha256:4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91 --

The following checks were performed on each of these signatures:

  - The cosign claims were validated

  - The signatures were verified against the specified public key

  - Any certificates were verified against the Fulcio roots.


...


And you can find the provenance for the image in the rekor transparency log with the rekor-cli tool. For example, you could find the provenance for the above image by using the image’s digest and running:

$ rekor-cli search --sha sha256:4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91


af7a9687d263504ccdb2759169c9903d8760775045c6e7554e365ec2bf29f6f8


$ rekor-cli get --uuid af7a9687d263504ccdb2759169c9903d8760775045c6e7554e365ec2bf29f6f8 --format json | jq -r .Attestation | base64 --decode | jq


{

  "_type": "distroless-provenance",

  "predicateType": "https://tekton.dev/chains/provenance",

  "subject": [

    {

      "name": "gcr.io/distroless/base",

      "digest": {

        "sha256": "703a4726aedc9ec7a7e32251087565246db117bb9a141a7993d1c4bb4036660d"

      }

    },

    {

      "name": "gcr.io/distroless/base",

      "digest": {

        "sha256": "d322ed16d530596c37eee3eb57a039677502aa71f0e4739b0272b1ebd8be9bce"

      }

    },

    {

      "name": "gcr.io/distroless/base",

      "digest": {

        "sha256": "2dfdd5bf591d0da3f67a25f3fc96d929b256d5be3e0af084db10952e5da2c661"

      }

    },

    {

      "name": "gcr.io/distroless/base",

      "digest": {

        "sha256": "4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91"

      }

    },

    {

      "name": "gcr.io/distroless/base",

      "digest": {

        "sha256": "dc0a793d83196a239abf3ba035b3d1a0c7a24184856c2649666e84bc82fc5980"

      }

    },

    {

      "name": "gcr.io/distroless/base-debian10",

      "digest": {

        "sha256": "2dfdd5bf591d0da3f67a25f3fc96d929b256d5be3e0af084db10952e5da2c661"

      }

    },

    {

      "name": "gcr.io/distroless/base-debian10",

      "digest": {

        "sha256": "703a4726aedc9ec7a7e32251087565246db117bb9a141a7993d1c4bb4036660d"

      }

    },

    {

      "name": "gcr.io/distroless/base-debian10",

      "digest": {

        "sha256": "4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91"

      }

    },

    {

      "name": "gcr.io/distroless/base-debian10",

      "digest": {

        "sha256": "d322ed16d530596c37eee3eb57a039677502aa71f0e4739b0272b1ebd8be9bce"

      }

    },

    {

      "name": "gcr.io/distroless/base-debian10",

      "digest": {

        "sha256": "dc0a793d83196a239abf3ba035b3d1a0c7a24184856c2649666e84bc82fc5980"

      }

    },

    {

      "name": "gcr.io/distroless/base-debian11",

      "digest": {

        "sha256": "c9507268813f235b11e63a7ae01526b180c94858bd718d6b4746c9c0e8425f7a"

      }

    },

    {

      "name": "gcr.io/distroless/cc",

      "digest": {

        "sha256": "4af613acf571a1b86b1d3c50682caada0b82024e566c1c4c2fe485a70f3af47d"

      }

    },

    {

      "name": "gcr.io/distroless/cc",

      "digest": {

        "sha256": "2c4bb6b7236db0a55ec54ba8845e4031f5db2be957ac61867872bf42e56c4deb"

      }

    },

    {

      "name": "gcr.io/distroless/cc",

      "digest": {

        "sha256": "2c4bb6b7236db0a55ec54ba8845e4031f5db2be957ac61867872bf42e56c4deb"

      }

    },

    {

      "name": "gcr.io/distroless/cc-debian10",

      "digest": {

        "sha256": "4af613acf571a1b86b1d3c50682caada0b82024e566c1c4c2fe485a70f3af47d"

      }

    },

    {

      "name": "gcr.io/distroless/cc-debian10",

      "digest": {

        "sha256": "2c4bb6b7236db0a55ec54ba8845e4031f5db2be957ac61867872bf42e56c4deb"

      }

    },

    {

      "name": "gcr.io/distroless/cc-debian10",

      "digest": {

        "sha256": "2c4bb6b7236db0a55ec54ba8845e4031f5db2be957ac61867872bf42e56c4deb"

      }

    },

    {

      "name": "gcr.io/distroless/java",

      "digest": {

        "sha256": "deb41661be772c6256194eb1df6b526cc95a6f60e5f5b740dda2769b20778c51"

      }

    },

    {

      "name": "gcr.io/distroless/nodejs",

      "digest": {

        "sha256": "927dd07e7373e1883469c95f4ecb31fe63c3acd104aac1655e15cfa9ae0899bf"

      }

    },

    {

      "name": "gcr.io/distroless/nodejs",

      "digest": {

        "sha256": "927dd07e7373e1883469c95f4ecb31fe63c3acd104aac1655e15cfa9ae0899bf"

      }

    },

    {

      "name": "gcr.io/distroless/nodejs",

      "digest": {

        "sha256": "f106757268ab4e650b032e78df0372a35914ed346c219359b58b3d863ad9fb58"

      }

    },

    {

      "name": "gcr.io/distroless/nodejs-debian10",

      "digest": {

        "sha256": "927dd07e7373e1883469c95f4ecb31fe63c3acd104aac1655e15cfa9ae0899bf"

      }

    },

    {

      "name": "gcr.io/distroless/nodejs-debian10",

      "digest": {

        "sha256": "f106757268ab4e650b032e78df0372a35914ed346c219359b58b3d863ad9fb58"

      }

    },

    {

      "name": "gcr.io/distroless/nodejs-debian10",

      "digest": {

        "sha256": "927dd07e7373e1883469c95f4ecb31fe63c3acd104aac1655e15cfa9ae0899bf"

      }

    },

    {

      "name": "gcr.io/distroless/python3",

      "digest": {

        "sha256": "aa8a0358b2813e8b48a54c7504316c7dcea59d6ae50daa0228847de852c83878"

      }

    },

    {

      "name": "gcr.io/distroless/python3-debian10",

      "digest": {

        "sha256": "aa8a0358b2813e8b48a54c7504316c7dcea59d6ae50daa0228847de852c83878"

      }

    },

    {

      "name": "gcr.io/distroless/static",

      "digest": {

        "sha256": "9acfd1fdf62b26cbd4f3c31422cf1edf3b7b01a9ecee00a499ef8b7e3536914d"

      }

    },

    {

      "name": "gcr.io/distroless/static",

      "digest": {

        "sha256": "e50641dbb871f78831f9aa7ffa59ec8f44d4cc33ae4ee992c9f4b046040e97f2"

      }

    },

    {

      "name": "gcr.io/distroless/static-debian10",

      "digest": {

        "sha256": "9acfd1fdf62b26cbd4f3c31422cf1edf3b7b01a9ecee00a499ef8b7e3536914d"

      }

    },

    {

      "name": "gcr.io/distroless/static-debian10",

      "digest": {

        "sha256": "e50641dbb871f78831f9aa7ffa59ec8f44d4cc33ae4ee992c9f4b046040e97f2"

      }

    }

  ],

  "predicate": {

    "invocation": {

      "parameters": [

        "MANIFEST_SUBSECTION={string 0 []}",

        "CHAINS-GIT_COMMIT={string 976c1c9bc178ac0371d8888d69893145c3df09f0 []}",

        "CHAINS-GIT_URL={string https://github.com/GoogleContainerTools/distroless []}"

      ],

      "recipe_uri": "task://distroless-provenance",

      "event_id": "531c282f-806e-41e4-b3ad-b596c4283381",

      "builder.id": "tekton-chains"

    },

    "recipe": {

      "steps": [

        {

          "entryPoint": "#!/bin/sh\nset -ex\n\n# get the digests for a subset of images built, and store in the IMAGES result\ngo run provenance/provenance.go images $(params.MANIFEST_SUBSECTION) > $(results.IMAGES.path)\n",

          "arguments": null,

          "environment": {

            "container": "provenance",

            "image": "docker.io/library/golang@sha256:cb1a7482cb5cfc52527c5cdea5159419292360087d5249e3fe5472f3477be642"

          },

          "annotations": null

        }

      ]

    },

    "metadata": {

      "buildStartedOn": "2021-09-16T00:03:04Z",

      "buildFinishedOn": "2021-09-16T00:04:36Z"

    },

    "materials": [

      {

        "uri": "https://github.com/GoogleContainerTools/distroless",

        "digest": {

          "revision": "976c1c9bc178ac0371d8888d69893145c3df09f0"

        }

      }

    ]

  }

}



As you might guess, our next step is getting distroless to SLSA 3, which will require adding non-falsifiable provenance and isolated builds to the distroless supply chain. Stay tuned for more!

Google Supports Open Source Technology Improvement Fund

 

We recently pledged to provide $100 million to support third-party foundations that manage open source security priorities and help fix vulnerabilities. As part of this commitment, we are excited to announce our support of the Open Source Technology Improvement Fund (OSTIF) to improve security of eight open-source projects.

Google’s support will allow OSTIF to launch the Managed Audit Program (MAP), which will expand in-depth security reviews to critical projects vital to the open source ecosystem. The eight libraries, frameworks and apps that were selected for this round are those that would benefit the most from security improvements and make the largest impact on the open-source ecosystem that relies on them. The projects include:
  • Git - the de facto version control software used in modern DevOps.
  • Lodash - a modern JavaScript utility library with over 200 functions that facilitate web development; it can be found in most environments that support JavaScript, which covers most of the world wide web.
  • Laravel - a PHP web application framework that is used by many modern, full-stack web applications, including integrations with Google Cloud.
  • SLF4J - a logging facade for various Java logging frameworks.
  • Jackson-core & Jackson-databind - the core of Jackson, a JSON processor for Java: its streaming API and shared components, which form the base of the Jackson data-binding package.
  • Httpcomponents-core & Httpcomponents-client - these projects are responsible for creating and maintaining a toolset of low-level Java components focused on HTTP and associated protocols. 
We are excited to help OSTIF build a safer open source environment for everyone. If you are interested in getting involved or learning more please visit the OSTIF blog.

Updates on our continued collaboration with NIST to secure the Software Supply Chain


Yesterday, we were honored to participate in President Biden’s White House Cyber Security Summit where we shared recommendations to advance the administration’s cybersecurity agenda. This included our commitment to invest $10 billion over the next five years to expand zero-trust programs, help secure the software supply chain, and enhance open-source security.

At Google, we’ve long advocated for securing the software supply chain both through our internal best practices and industry efforts that enhance the integrity and security of software. That’s why we're thrilled to collaborate with the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) to support and develop a new framework that will help to improve the security and integrity of the technology supply chain.

This builds on our previous work in June of this year, where we submitted four statements in response to the National Telecommunications and Information Administration (NTIA) and NIST’s call for position papers to help guide adoption of new software supply chain security standards and guidelines that fulfill components of the Executive Order on Improving the Nation’s Cybersecurity.

The papers lay out concrete ways to increase the nation’s cybersecurity, based on Google’s experience building secure by design systems for our users and enterprise customers. Each of the suggestions is an enactable solution for software supply chain security, drawn from Google’s research and innovations in engineering away entire classes of vulnerabilities.

NIST and NTIA also released their guidelines in July for several of the Executive Order’s target areas (SBOM Minimum Elements, Critical Software Guidelines, Developer Verification of Software), incorporating specific recommendations from Google. Below are summaries of each of Google’s position papers, and background on our contributions and impact in each area.

High-Confidence, Scalable Secure Development

Instead of being reactive to vulnerabilities, we should eliminate them proactively with secure languages, platforms, and frameworks that stop entire classes of bugs.

Preventing problems before they leave the developer’s keyboard is safer and more cost effective than trying to fix vulnerabilities and their fallout. (Consider the enormous impact of the SolarWinds attack, which is predicted to take $100 billion to remediate.) Google promotes designs that are secure by default and impervious to simple errors that can lead to security vulnerabilities.

We want to see secure systems used as widely as possible, so we have invested in initiatives such as getting Rust into the Linux Kernel, published research papers, and shared guidance on secure frameworks.

Security Measures for Critical Software

Critical software does not exist in a vacuum; we must also harden the broader systems and runtime environments. Our paper outlines a list of actionable steps for critical software's configuration, the privileges with which it runs, and the network(s) to which it is connected.

Our suggestions are based on practices that have withstood the tests of time and scale, such as in our Google Cloud Products, built on one of the industry’s most trusted clouds.

Google contributes to open-source tools that help maintainers adopt these practices, such as gVisor for sandboxing, and GLOME for authentication and authorization. Additionally, to share the knowledge we have gained securing systems that serve billions of users, we released our book Building Secure and Reliable Systems, a resource for any organization that wants to design systems that are fundamentally secure, reliable, and scalable.
 
Software Source Code Testing

Continuous fuzzing is indispensable for identifying bugs and catching vulnerabilities before attackers do. We also suggest securing dependencies using automated tools such as Scorecards, Dependabot, and OSV.

Google has made huge contributions to the field of fuzzing, and has found tens of thousands of bugs with tools like libFuzzer and ClusterFuzz.

We have made continuous fuzzing available to all developers through OSS-Fuzz, and are funding integration costs and fuzzing internships. We are leading a shift in industry support: on top of bug bounties, which are rewards programs for finding bugs, we have also added patch rewards, money that can help maintainers fund the remediation of uncovered bugs.
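As a minimal example of the kind of harness this tooling builds on, here is a sketch of a fuzz target using Atheris, Google's fuzzing engine for Python (json.loads stands in for whatever parsing code you want to test; a real harness would target your own project):

import sys
import atheris

with atheris.instrument_imports():
    import json

def TestOneInput(data: bytes) -> None:
    # Feed fuzzer-generated bytes to the parser. Crashes and unexpected
    # exceptions (anything other than the anticipated ValueError) are findings.
    try:
        json.loads(data)
    except ValueError:
        pass

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()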

Software Supply Chain Integrity

Google strongly encourages adoption of SLSA, an end-to-end framework for ensuring the integrity of software artifacts throughout the software supply chain. Four “SLSA Levels” provide incrementally adoptable guidelines that each raise the bar on security standards for open-source software.

SLSA is based on Google’s internal framework Binary Authorization for Borg (BAB), which ensures that all software packages used by the company meet high integrity standards. Given BAB’s success, we have adapted the framework to work for systems beyond Google and released it as SLSA to help protect other organizations and platforms.

We have shared many of Google’s practices for security and reliability in our Site Reliability Engineering book. Following our recent introduction of SLSA to the wider public, we are looking forward to making improvements in response to community feedback.

Minimum Requirements for SBOMs

Google submitted an additional paper in response to the NTIA’s request for comments on creating SBOMs, which will give users information about a software package’s contents. Modern development requires different approaches than classic packaged software, which means SBOMs must also deal with intermediate artifacts like containers and library dependencies.

SBOMs need a reasonable signal-to-noise ratio: if they contain too much information, they won’t be useful, so we urge the NTIA to establish both minimum and maximum requirements on granularity and depth for specific use-cases. We also recommend considerations for the creation of trustworthy SBOMs, such as using verifiable data generation methods to capture metadata, and preparing for the automation and tooling technologies that will be key for widespread SBOM adoption.

Improving Everyone’s Security

We are committed to helping advance collective cybersecurity. We also realize that too many guidelines and lists of best practices can become overwhelming, but any incremental changes in the right direction make a real difference. We encourage companies and maintainers to start evaluating today where they stand on the most important security postures, and to make improvements with the guidance of these papers in the areas of greatest risk. No single entity can fix the problems we all face in this area, but by being open about our practices and sharing our research and tools, we can all help raise the standards for our collective security.