Tag Archives: Security

Introducing SLSA, an End-to-End Framework for Supply Chain Integrity



Supply chain integrity attacks—unauthorized modifications to software packages—have been on the rise in the past two years, and are proving to be common and reliable attack vectors that affect all consumers of software. The software development and deployment supply chain is quite complicated, with numerous threats along the source ➞ build ➞ publish workflow. While point solutions do exist for some specific vulnerabilities, there is no comprehensive end-to-end framework that both defines how to mitigate threats across the software supply chain, and provides reasonable security guarantees. There is an urgent need for a solution in the face of the eye-opening, multi-billion dollar attacks in recent months (e.g. SolarWinds, Codecov), some of which could have been prevented or made more difficult had such a framework been adopted by software developers and consumers.


Our proposed solution is Supply chain Levels for Software Artifacts (SLSA, pronounced “salsa”), an end-to-end framework for ensuring the integrity of software artifacts throughout the software supply chain. It is inspired by Google’s internal “Binary Authorization for Borg” which has been in use for the past 8+ years and is mandatory for all of Google's production workloads. The goal of SLSA is to improve the state of the industry, particularly open source, to defend against the most pressing integrity threats. With SLSA, consumers can make informed choices about the security posture of the software they consume.

How SLSA helps

SLSA helps to protect against common supply chain attacks. The following image illustrates a typical software supply chain and includes examples of attacks that can occur at every link in the chain. Each type of attack has occurred over the past several years and, unfortunately, is increasing as time goes on.




Threat A: Submit bad code to the source repository
  • Known example: Linux hypocrite commits. A researcher attempted to intentionally introduce vulnerabilities into the Linux kernel via patches on the mailing list.
  • How SLSA could have helped: Two-person review caught most, but not all, of the vulnerabilities.

Threat B: Compromise source control platform
  • Known example: PHP. An attacker compromised PHP’s self-hosted git server and injected two malicious commits.
  • How SLSA could have helped: A better-protected source code platform would have been a much harder target for the attackers.

Threat C: Build with official process but from code not matching source control
  • Known example: Webmin. An attacker modified the build infrastructure to use source files not matching source control.
  • How SLSA could have helped: A SLSA-compliant build server would have produced provenance identifying the actual sources used, allowing consumers to detect such tampering.

Threat D: Compromise build platform
  • Known example: SolarWinds. An attacker compromised the build platform and installed an implant that injected malicious behavior during each build.
  • How SLSA could have helped: Higher SLSA levels require stronger security controls for the build platform, making it more difficult to compromise and gain persistence.

Threat E: Use bad dependency (i.e. threats A-H, recursively)
  • Known example: event-stream. An attacker added an innocuous dependency and then updated it to add malicious behavior. The update did not match the code submitted to GitHub (i.e. attack F).
  • How SLSA could have helped: Applying SLSA recursively to all dependencies would have prevented this particular vector, because the provenance would have indicated that the package either wasn’t built by a proper builder or that the source did not come from GitHub.

Threat F: Upload an artifact that was not built by the CI/CD system
  • Known example: Codecov. An attacker used leaked credentials to upload a malicious artifact to a GCS bucket, from which users download directly.
  • How SLSA could have helped: Provenance of the artifact in the GCS bucket would have shown that the artifact was not built in the expected manner from the expected source repo.

Threat G: Compromise package repository
  • Known example: Attacks on package mirrors. A researcher ran mirrors for several popular package repositories, which could have been used to serve malicious packages.
  • How SLSA could have helped: As with F, provenance of the malicious artifacts would have shown that they were not built as expected or from the expected source repo.

Threat H: Trick consumer into using bad package
  • Known example: Browserify typosquatting. An attacker uploaded a malicious package with a name similar to the original.
  • How SLSA could have helped: SLSA does not directly address this threat, but provenance linking back to source control can enable and enhance other solutions.


What is SLSA

In its current state, SLSA is a set of incrementally adoptable security guidelines being established by industry consensus. In its final form, SLSA will differ from a list of best practices in its enforceability: it will support the automatic creation of auditable metadata that can be fed into policy engines to give "SLSA certification" to a particular package or build platform. SLSA is designed to be incremental and actionable, and to provide security benefits at every step. Once an artifact qualifies at the highest level, consumers can have confidence that it has not been tampered with and can be securely traced back to source—something that is difficult, if not impossible, to do with most software today.

SLSA consists of four levels, with SLSA 4 representing the ideal end state. The lower levels represent incremental milestones with corresponding incremental integrity guarantees. The requirements are currently defined as follows.



SLSA 1 requires that the build process be fully scripted/automated and generate provenance. Provenance is metadata about how an artifact was built, including the build process, top-level source, and dependencies. Knowing the provenance allows software consumers to make risk-based security decisions. Though provenance at SLSA 1 does not protect against tampering, it offers a basic level of code source identification and may aid in vulnerability management.
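
As a rough sketch, the kind of metadata such provenance captures could be modeled as follows. This is our illustration with hypothetical field names, loosely following the ideas above rather than the normative SLSA schema:

use serde::{Deserialize, Serialize};

// Hypothetical, simplified model of build provenance: who built the
// artifact, how, from which top-level source, and with which dependencies.
#[derive(Serialize, Deserialize, Debug)]
struct Provenance {
    builder_id: String,        // identity of the build service
    build_recipe: String,      // how the build was invoked
    source_uri: String,        // top-level source repository
    source_digest: String,     // commit hash or content digest
    materials: Vec<Material>,  // dependencies that went into the build
}

#[derive(Serialize, Deserialize, Debug)]
struct Material {
    uri: String,
    digest: String,
}

Serialized (here via serde) and signed, records like these are what a policy engine would consume when deciding whether to accept an artifact.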


SLSA 2 requires using version control and a hosted build service that generates authenticated provenance. These additional requirements give the consumer greater confidence in the origin of the software. At this level, the provenance prevents tampering to the extent that the build service is trusted. SLSA 2 also provides an easy upgrade path to SLSA 3.


SLSA 3 further requires that the source and build platforms meet specific standards to guarantee the auditability of the source and the integrity of the provenance, respectively. We envision an accreditation process whereby auditors certify that platforms meet the requirements, which consumers can then rely on. SLSA 3 provides much stronger protections against tampering than earlier levels by preventing specific classes of threats, such as cross-build contamination.


SLSA 4 is currently the highest level, requiring two-person review of all changes and a hermetic, reproducible build process. Two-person review is an industry best practice for catching mistakes and deterring bad behavior. Hermetic builds guarantee that the provenance’s list of dependencies is complete. Reproducible builds, though not strictly required, provide many auditability and reliability benefits. Overall, SLSA 4 gives the consumer a high degree of confidence that the software has not been tampered with.


More details on these proposed levels can be found in the GitHub repository, including the corresponding Source and Build/Provenance requirements. We are open to feedback and suggestions for changes on these requirements.

Proof of Concept

Today, we are releasing a proof-of-concept SLSA 1 provenance generator (repo, marketplace). It allows a user to create and upload provenance alongside their build artifacts, thereby achieving SLSA 1. To use it, add the following snippet to your workflow:

- name: Generate provenance
  uses: slsa-framework/github-actions-demo@v0.1
  with:
    artifact_path: <path-to-artifact/directory>


Going forward, we plan to work with popular source, build, and packaging platforms to make it as easy as possible to reach higher levels of SLSA. These plans include generating provenance automatically in build systems, propagating provenance natively in package repositories, and adding security features across the major platforms. Our long-term goal is to raise the security bar across the industry so that the default expectation is higher-level SLSA security standards, with minimal effort on the part of software producers.
 
Summary

SLSA is a practical framework for end-to-end software supply chain integrity, based on a model proven to work at scale in one of the world’s largest software engineering organizations. Achieving the highest level of SLSA for most projects may be difficult, but incremental improvements recognized by lower SLSA levels will already go a long way toward improving the security of the open source ecosystem.

We look forward to working with the community on refining the levels as we begin adopting SLSA for our own open source projects. If you are a project maintainer and interested in trying to adopt and provide feedback on SLSA, please reach out or come join the discussions taking place in the OpenSSF Digital Identity Attestation Working Group.

Check out the Know, Prevent, Fix post to read more about Google’s overall approach to open source security.

Our latest updates on Fully Homomorphic Encryption

Posted by Miguel Guevara, Product Manager, Privacy and Data Protection Office.

Privacy protection illustration

As developers, it’s our responsibility to help keep our users safe online and protect their data. This starts with building products that are secure by default, private by design, and put users in control. Everything we make at Google is underpinned by these principles, and we’re proud to be an industry leader in developing, deploying, and scaling new privacy-preserving technologies that make it possible to learn valuable insights and create helpful experiences while protecting our users’ privacy.

That’s why today, we are excited to announce that we’re open-sourcing a first-of-its-kind, general-purpose transpiler for Fully Homomorphic Encryption (FHE), which will enable developers to compute on encrypted data without being able to access any personally identifiable information.

A deeper look at the technology

With FHE, encrypted data can travel across the Internet to a server, where it can be processed without being decrypted. Google’s transpiler enables developers to write code for any type of basic computation, such as simple string processing or math, and transforms that code into a version that can run on encrypted data. This allows developers to create new programming applications that don’t need unencrypted data. FHE can also be used to train machine learning models on sensitive data in a private manner.
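
To build some intuition for how computation on ciphertexts can work at all, here is a deliberately insecure toy scheme in Rust, in the spirit of integer-based somewhat-homomorphic encryption. This is purely our illustration, not Google’s transpiler (which operates on C++ code): adding or multiplying ciphertexts adds or multiplies the underlying bits, as long as the accumulated noise stays small.

// Toy, insecure scheme for intuition only: c = bit + 2*noise + p*q,
// where the odd integer p is the secret key. Decrypt with (c mod p) mod 2.
fn encrypt(bit: u64, p: u64, q: u64, noise: u64) -> u64 {
    bit + 2 * noise + p * q
}

fn decrypt(c: u64, p: u64) -> u64 {
    (c % p) % 2
}

fn main() {
    let p = 1001; // secret key (odd); laughably small on purpose
    let c1 = encrypt(1, p, 7, 3);
    let c2 = encrypt(1, p, 4, 2);
    // Adding ciphertexts XORs the plaintext bits; multiplying ANDs them.
    // Both hold only while the noise term stays below p.
    assert_eq!(decrypt(c1 + c2, p), 1 ^ 1);
    assert_eq!(decrypt(c1 * c2, p), 1 & 1);
    println!("homomorphic add and multiply verified");
}

Real FHE schemes add substantial machinery (noise management, bootstrapping) so that arbitrarily deep computations remain decryptable.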

For example, imagine you’re building an application for people with diabetes. This app might collect sensitive information from its users, and you need a way to keep this data private and protected while also sharing it with medical experts to learn valuable insights that could lead to important medical advancements. With Google’s transpiler for FHE, you can encrypt the data you collect and share it with medical experts who, in turn, can analyze the data without decrypting it - providing helpful information to the medical community, all while ensuring that no one can access the data’s underlying information.

In the next 10 years, FHE could even help researchers find associations between specific gene mutations and diseases by analyzing genetic information across thousands of encrypted samples and testing different hypotheses to identify the genes most strongly associated with the diseases they’re studying.

Making more products private by design

Our principle to make our products private by design drives us to build ground-breaking computing technologies that enable personalized experiences while protecting your private information. Privacy-preserving technologies are on the cutting-edge of Google’s innovations, and they have already shown great potential to help shape a more private internet.

In 2016, Google researchers invented Federated Learning, a technique that helps preserve privacy by keeping as much personal information on your device as possible. And in 2019, Google made its differential privacy library, an advanced anonymization technology that enables developers to learn from their data privately, freely available to any organization or developer. No one has scaled the use of Differential Privacy more than we have.

We’ve been thrilled to see these technologies put to use across the globe; in France, for example, a startup called Arkhn has been able to accelerate scientific discovery using differential privacy to share data across hospitals.

We still have a ways to go before most computations happen with FHE -- but much as it took some time for HTTPS to take off and be widely adopted, today’s announcement is an important step towards bringing users helpful products that preserve their privacy and keep their data safe.

At Google, we know that open-sourcing our technologies with the developer community for feedback and use helps make them better. We will continue to invest and lead the privacy-preserving technology field by publishing new work, and open-sourcing it for everyone to use at scale - and we're excited to continue this practice by sharing this latest advancement with developers everywhere. We can't wait to see what you’ll build, and we look forward to collaborating on the journey towards a safer Internet.

Rust/C++ interop in the Android Platform

One of the main challenges of evaluating Rust for use within the Android platform was ensuring we could provide sufficient interoperability with our existing codebase. If Rust is to meet its goals of improving security, stability, and quality Android-wide, we need to be able to use Rust anywhere in the codebase that native code is required. To accomplish this, we need to provide the majority of functionality platform developers use. As we discussed previously, we have too much C++ to consider ignoring it, rewriting all of it is infeasible, and rewriting older code would likely be counterproductive as the bugs in that code have largely been fixed. This means interoperability is the most practical way forward.

Before introducing Rust into the Android Open Source Project (AOSP), we needed to demonstrate that Rust interoperability with C and C++ is sufficient for practical, convenient, and safe use within Android. Adding a new language has costs; we needed to demonstrate that Rust would be able to scale across the codebase and meet its potential in order to justify those costs. This post will cover the analysis we did more than a year ago while we evaluated Rust for use in Android. We also present a follow-up analysis with some insights into how the original analysis has held up as Android projects have adopted Rust.

Language interoperability in Android

Existing language interoperability in Android focuses on well defined foreign-function interface (FFI) boundaries, which is where code written in one programming language calls into code written in a different language. Rust support will likewise focus on the FFI boundary as this is consistent with how AOSP projects are developed, how code is shared, and how dependencies are managed. For Rust interoperability with C, the C application binary interface (ABI) is already sufficient.

Interoperability with C++ is more challenging and is the focus of this post. While both Rust and C++ support using the C ABI, it is not sufficient for idiomatic usage of either language. Simply enumerating the features of each language results in an unsurprising conclusion: many concepts are not easily translatable, nor do we necessarily want them to be. After all, we’re introducing Rust because many features and characteristics of C++ make it difficult to write safe and correct code. Therefore, our goal is not to consider all language features, but rather to analyze how Android uses C++ and ensure that interop is convenient for the vast majority of our use cases.

We analyzed code and interfaces in the Android platform specifically, not codebases in general. While this means our specific conclusions may not be accurate for other codebases, we hope the methodology can help others to make a more informed decision about introducing Rust into their large codebase. Our colleagues on the Chrome browser team have done a similar analysis, which you can find here.

This analysis was not originally intended to be published outside of Google: our goal was to make a data-driven decision on whether or not Rust was a good choice for systems development in Android. While the analysis is intended to be accurate and actionable, it was never intended to be comprehensive, and we’ve pointed out a couple of areas where it could be more complete. However, we also note that initial investigations into these areas showed that they would not significantly impact the results, which is why we decided to not invest the additional effort.

Methodology

Exported functions from Rust and C++ libraries are where we consider interop to be essential. Our goals are simple:

  • Rust must be able to call functions from C++ libraries and vice versa.
  • FFI should require a minimum of boilerplate.
  • FFI should not require deep expertise.

While making Rust functions callable from C++ is a goal, this analysis focuses on making C++ functions available to Rust so that new Rust code can be added while taking advantage of existing implementations in C++. To that end, we look at exported C++ functions and consider existing and planned compatibility with Rust via the C ABI and compatibility libraries. Types are extracted by running objdump on shared libraries to find external C++ functions they use1 and running c++filt to parse the C++ types. This gives functions and their arguments. It does not consider return values, but a preliminary analysis2 of those revealed that they would not significantly affect the results.
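
For concreteness, the extraction step can be approximated with a short program like the one below. This is our sketch of the described methodology, assuming binutils’ objdump and c++filt are on the PATH and using a placeholder library path; the real analysis involved more plumbing:

use std::process::Command;

// List the undefined (imported) C++ symbols of a shared library and
// demangle them, roughly mirroring the objdump + c++filt step above.
fn extern_cpp_functions(lib: &str) -> Vec<String> {
    let dump = Command::new("objdump")
        .args(["-T", lib]) // print the dynamic symbol table
        .output()
        .expect("failed to run objdump");
    String::from_utf8_lossy(&dump.stdout)
        .lines()
        .filter(|line| line.contains("*UND*")) // undefined = imported
        .filter_map(|line| line.split_whitespace().last())
        .filter(|name| name.starts_with("_Z")) // Itanium-mangled C++ names
        .map(|mangled| {
            let demangled = Command::new("c++filt")
                .arg(mangled)
                .output()
                .expect("failed to run c++filt");
            String::from_utf8_lossy(&demangled.stdout).trim().to_string()
        })
        .collect()
}

fn main() {
    for f in extern_cpp_functions("/path/to/libexample.so") {
        println!("{f}");
    }
}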

We then classify each of these types into one of the following buckets:

Supported by bindgen

These are generally simple types involving primitives (including pointers and references to them). For these types, Rust’s existing FFI will handle them correctly, and Android’s build system will auto-generate the bindings.
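
For example, a C declaration like int add(int a, int b) lands in this bucket. The bindings bindgen emits boil down to a plain extern block; the following is a hand-written sketch of that shape (linking against the actual C object is assumed), not bindgen’s verbatim output:

// Shape of bindgen-style output for `int add(int a, int b);`.
// Primitives, and pointers/references to them, map directly to Rust types.
extern "C" {
    fn add(a: i32, b: i32) -> i32;
}

fn main() {
    // FFI calls are unsafe: the Rust compiler cannot verify the C side.
    let sum = unsafe { add(2, 3) };
    assert_eq!(sum, 5);
}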

Supported by cxx compat crate

These are handled by the cxx crate. This currently includes std::string, std::vector, and C++ methods (including pointers/references to these types). Users simply have to define the types and functions they want to share across languages and cxx will generate the code to do that safely.
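
As a small illustration of what such a definition looks like, here is a minimal bridge with hypothetical header, type, and function names; see the cxx documentation for the full feature set:

// A minimal #[cxx::bridge]: declare the C++ types and functions to expose,
// and cxx generates safe glue on both sides, including shared-type support
// for std::string and std::vector.
#[cxx::bridge]
mod ffi {
    unsafe extern "C++" {
        include!("example/include/client.h"); // hypothetical C++ header

        type Client;

        fn new_client() -> UniquePtr<Client>;
        fn send(client: Pin<&mut Client>, data: &CxxVector<u8>) -> usize;
    }
}

Rust code can then call ffi::new_client() and ffi::send() as ordinary functions, with cxx enforcing the type rules at compile time.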

Native support

These types are not directly supported, but the interfaces that use them have been manually reworked to add Rust support. Specifically, this includes types used by AIDL and protobufs.

We have also implemented a native interface for StatsD as the existing C++ interface relies on method overloading, which is not well supported by bindgen and cxx3. Usage of this system does not show up in the analysis because the C++ API does not use any unique types.

Potential addition to cxx

This category currently consists of common data structures such as std::optional and std::chrono::duration, as well as custom string and vector implementations.

These can either be supported natively by a future contribution to cxx, or by using its ExternType facilities. We have only included types in this category that we believe are relatively straightforward to implement and have a reasonable chance of being accepted into the cxx project.
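
For instance, a type that Rust already knows about through other bindings can be declared to cxx via its ExternType facility. A sketch with hypothetical names:

// Telling cxx that this Rust struct is the same type as an existing C++
// type (the "example::Duration" name here is hypothetical), so values can
// flow through a bridge without cxx defining the type itself.
use cxx::{type_id, ExternType};

#[repr(C)]
pub struct Duration {
    secs: i64,
    nanos: i32,
}

unsafe impl ExternType for Duration {
    type Id = type_id!("example::Duration");
    type Kind = cxx::kind::Trivial; // safe to move/copy by memcpy
}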

We don't need/intend to support

Some types are exposed in today’s C++ APIs that are either an implicit part of the API, not an API we expect to want to use from Rust, or are language specific. Examples of types we do not intend to support include:

  • Mutexes - we expect that locking will take place in one language or the other, rather than needing to pass mutexes between languages, as per our coarse-grained philosophy.
  • native_handle - this is a JNI interface type, so it is inappropriate for use in Rust/C++ communication.
  • std::locale& - Android uses a separate locale system from C++ locales. This type primarily appears in output due to e.g., cout usage, which would be inappropriate to use in Rust.

Overall, this category represents types that we do not believe a Rust developer should be using.

HIDL

Android is in the process of deprecating HIDL and migrating to AIDL for HALs for new services. We’re also migrating some existing implementations to stable AIDL. Our current plan is to not support HIDL, preferring to migrate to stable AIDL instead. These types thus currently fall into the “We don't need/intend to support” bucket above, but we break them out to be more specific. If there is sufficient demand for HIDL support, we may revisit this decision later.

Other

This contains all types that do not fit into any of the above buckets. It is currently mostly std::string being passed by value, which is not supported by cxx.

Top C++ libraries

One of the primary reasons for supporting interop is to allow reuse of existing code. With this in mind, we determined the most commonly used C++ libraries in Android: liblog, libbase, libutils, libcutils, libhidlbase, libbinder, libhardware, libz, libcrypto, and libui. We then analyzed all of the external C++ functions used by these libraries and their arguments to determine how well they would interoperate with Rust.

Overall, 81% of types are in the first three categories (which we currently fully support) and 87% are in the first four categories (which includes those we believe we can easily support). Almost all of the remaining types are those we believe we do not need to support.

Mainline modules

In addition to analyzing popular C++ libraries, we also examined Mainline modules. Supporting this context is critical as Android is migrating some of its core functionality to Mainline, including much of the native code we hope to augment with Rust. Additionally, their modularity presents an opportunity for interop support.

We analyzed 64 binaries and libraries in 21 modules. For each library, we examined the external C++ functions it uses and analyzed the types of their arguments to determine how well they would interoperate with Rust, in the same way we did above for the top 10 libraries.

Here 88% of types are in the first three categories and 90% in the first four, with almost all of the remaining being types we do not need to handle.

Analysis of Rust/C++ Interop in AOSP

With almost a year of Rust development in AOSP behind us, and more than a hundred thousand lines of code written in Rust, we can now examine how our original analysis has held up based on how C/C++ code is currently called from Rust in AOSP.4

The results largely match what we expected from our analysis with bindgen handling the majority of interop needs. Extensive use of AIDL by the new Keystore2 service results in the primary difference between our original analysis and actual Rust usage in the “Native Support” category.

A few current examples of interop are:

  • Cxx in Bluetooth - While Rust is intended to be the primary language for Bluetooth, migrating from the existing C/C++ implementation will happen in stages. Using cxx lets the Bluetooth team lean on the existing C++ support to incrementally migrate their service, making it easier to serve legacy protocols like HIDL until they are phased out.
  • AIDL in keystore - Keystore implements AIDL services and interacts with apps and other services over AIDL. Providing this functionality would be difficult to support with tools like cxx or bindgen, but the native AIDL support is simple and ergonomic to use.
  • Manually-written wrappers in profcollectd - While our goal is to provide seamless interop for most use cases, we also want to demonstrate that, even when auto-generated interop solutions are not an option, manually creating them can be simple and straightforward. Profcollectd is a small daemon that only exists on non-production engineering builds. Instead of using cxx it uses some small manually-written C wrappers around C++ libraries that it then passes to bindgen.

Conclusion

Bindgen and cxx provide the vast majority of Rust/C++ interoperability needed by Android. For some of the exceptions, such as AIDL, the native version provides convenient interop between Rust and other languages. Manually written wrappers can be used to handle the few remaining types and functions not supported by other options as well as to create ergonomic Rust APIs. Overall, we believe interoperability between Rust and C++ is already largely sufficient for convenient use of Rust within Android.

If you are considering how Rust could integrate into your C++ project, we recommend doing a similar analysis of your codebase. When addressing interop gaps, we recommend that you consider upstreaming support to existing compat libraries like cxx.

Acknowledgements

Our first attempt at quantifying Rust/C++ interop involved analyzing the potential mismatches between the languages. This led to a lot of interesting information, but was difficult to draw actionable conclusions from. Rather than enumerating all the potential places where interop could occur, Stephen Hines suggested that we instead consider how code is currently shared between C/C++ projects as a reasonable proxy for where we’ll also likely want interop for Rust. This provided us with actionable information that was straightforward to prioritize and implement. Looking back, the data from our real-world Rust usage has reinforced that the initial methodology was sound. Thanks Stephen!

Also, thanks to:

  • Andrei Homescu and Stephen Crane for contributing AIDL support to AOSP.
  • Ivan Lozano for contributing protobuf support to AOSP.
  • David Tolnay for publishing cxx and accepting our contributions.
  • The many authors and contributors to bindgen.
  • Jeff Vander Stoep and Adrian Taylor for contributions to this post.


  1. We used undefined symbols of function type as reported by objdump to perform this analysis. This means that any header-only functions will be absent from our analysis, and internal (non-API) functions which are called by header-only functions may appear in it. 

  2. We extracted return values by parsing DWARF symbols, which give the return types of functions. 

  3. Even without automated binding generation, manually implementing the bindings is straightforward. 

  4. In the case of handwritten C/C++ wrappers, we analyzed the functions they call, not the wrappers themselves. For all uses of our native AIDL library, we analyzed the types used in the C++ version of the library. 

Verifiable Supply Chain Metadata for Tekton


If you've been paying attention to the news at all lately, you've probably noticed that software supply chain attacks are rapidly becoming a big problem. Whether you're trying to prevent these attacks, responding to an ongoing one or recovering from one, you understand that knowing what is happening in your CI/CD pipeline is critical.

Fortunately, the Kubernetes-native Tekton project – an open-source framework for creating CI/CD systems – was designed with security in mind from Day One, and the new Tekton Chains project is here to help take it to the next level. Tekton Chains securely captures metadata for CI/CD pipeline executions. We made two really important design decisions early on in Tekton that make supply chain security easy: declarative pipeline definitions and explicit state transitions. This next section will explain what these mean in practice and how they make it easy to build a secure delivery pipeline.


Definitions or “boxes and arrows”
Just like everything in your high school physics class, a CI/CD pipeline can be modeled as a series of boxes. Each box has some inputs, some outputs, and some steps that happen in the middle. Even if you have one big complicated bash script that fetches dependencies, builds programs, runs tests, downloads the internet and deploys to production, you can draw boxes and arrows to represent this flow. The boxes might be really big, but you can do it.

Since the initial whiteboard sketches, the Pipeline and Task CRDs in Tekton were designed to allow users to define each step of their pipeline at a granular level. These types include support for mandatory declared inputs, outputs, and build environments. This means you can track exactly what sources went into a build, what tools were used during the build itself and what artifacts came out at the end. By breaking up a large monolithic pipeline into a series of smaller, reusable steps, you can increase visibility into the overall system. This makes it easier to understand your exposure to supply chain attacks, detect issues when they do happen and recover from them after.


Explicit transitions
After a pipeline is defined, there are two common approaches to orchestrating it: level-triggered and edge-triggered. Like most of the Kubernetes ecosystem, Tekton is designed to operate in a level-triggered fashion. This means steps are executed explicitly by a central orchestrator which runs one task, waits for completion, and then decides what to do next. In edge-based systems, a pipeline definition would be translated into a set of events and listeners. Each step fires off events when it completes, and these events are then picked up by listeners which run the next set of steps.

Event-based or edge-triggered systems are easy to build, but can be tricky to manage at scale. They also make it much harder to track an artifact as it flows through the entire system. Each step in the pipeline only knows about the one immediately before it; no step is responsible for tracking the entire execution. This can become problematic when you try to understand the security posture of your delivery pipeline.

Tekton was designed with the opposite, level-triggered approach in mind. Instead of a Rube Goldberg machine tied together with duct tape and clothespins, Tekton is more like an explicit assembly line. Level-triggered systems like Tekton move from state to state in a calculated manner under the control of a central orchestrator. They require more explicit design up front, but they are easier to observe and reason about afterward. Supply chains that use systems like Tekton are more secure.


Secure delivery pipeline through chains and provenance
So how do these two design decisions combine to make supply chain security easier? Enter Tekton Chains.

By observing the execution of a Task or a Pipeline and paying careful attention to the inputs, outputs, and steps along the way, we can make it easier to track down what happened and why later on. This "observer" can be run in a separate trust domain and cryptographically sign all of this captured metadata as it's stored, leaving a tamper-proof activity ledger. This technique is called "verifiable builds." This securely generated metadata can be used in a number of ways, from audit logging to recovering from security breaches to pre-deployment policy enforcement.

You can install Chains into any Tekton-enabled cluster and configure it to generate this cryptographically signed supply chain metadata for your builds. Chains supports pluggable signature systems such as PGP, x509, and Cloud KMS. Payloads can be generated in a few different industry-standard formats, such as the Red Hat Simple Signing and the in-toto Provenance specifications. The full documentation is available here, but you can get started quickly with something like this:


For this tutorial, you’ll need access to a GKE Kubernetes cluster and a GCR registry with push credentials. The cluster should already have Tekton Pipelines installed.


Install Tekton Chains into your cluster:

$ kubectl apply --filename https://storage.googleapis.com/tekton-releases/chains/latest/release.yaml



Next, you’ll set up registry authentication for the Tekton Chains controller, so that it can push OCI image signatures to your registry. To set up authentication, you’ll create a Service Account and download credentials:

$ export PROJECT_ID=<GCP Project ID>

$ gcloud iam service-accounts create tekton-chains

$ gcloud iam service-accounts keys create credentials.json --iam-account=tekton-chains@${PROJECT_ID}.iam.gserviceaccount.com



Now, create a Kubernetes Secret from your credentials file so the Chains controller can access it:

$ kubectl create secret docker-registry registry-credentials \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-email=<your-email> \
  --docker-password="$(cat credentials.json)" \
  -n tekton-chains

$ kubectl patch serviceaccount tekton-chains-controller \
  -p "{\"imagePullSecrets\": [{\"name\": \"registry-credentials\"}]}" -n tekton-chains



We can use cosign to generate a keypair as a Kubernetes secret, which the Chains controller will use for signing. Cosign will ask for a password, which will be stored in the secret:

$ cosign generate-key-pair -k8s tekton-chains/signing-secrets


Next, you’ll need to set up authentication to your GCR registry for the kaniko task as another Kubernetes Secret.

$ export CREDENTIALS_SECRET=kaniko-credentials

$ kubectl create secret generic $CREDENTIALS_SECRET --from-file credentials.json



Now, we’ll create a kaniko-chains task which will build and push a container image to your registry. Tekton Chains will recognize that an image has been built, and sign it automatically.

$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/chains/main/examples/kaniko/gcp/kaniko.yaml

$ cat <<EOF | kubectl apply -f -
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: kaniko-run
spec:
  taskRef:
    name: kaniko-gcp
  params:
  - name: IMAGE
    value: gcr.io/${PROJECT_ID}/kaniko-chains
  workspaces:
  - name: source
    emptyDir: {}
  - name: credentials
    secret:
      secretName: ${CREDENTIALS_SECRET}
EOF



Wait for the TaskRun to complete, and give the Tekton Chains controller a few seconds to sign the image and store the signature. You should be able to verify the signature with cosign and your public key:

$ cosign verify -key cosign.pub gcr.io/${PROJECT_ID}/kaniko-chains


Congratulations! You’ve successfully signed and verified an OCI image with Tekton Chains and cosign.


What's Next
Within Chains, we'll be improving integration with other supply-chain security projects. This includes support for Binary Transparency and Verifiable Builds through integrations with the Sigstore and In-Toto projects. We'll also be improving and providing a set of well-designed, highly secure Tasks and Pipeline definitions in the TektonCD Catalog.

In Tekton Pipelines, we plan on finishing up TEP-0025 (Hermekton) to enable support for hermetic build execution. If you want to play around with it now, hermekton can be run as an alpha feature in experimental mode. When hermekton is enabled, a build runs in a locked-down environment without network connectivity. Hermetic builds guarantee all inputs have been explicitly declared ahead-of-time, providing for a more auditable supply chain. Hermetic builds and Chains align well, because the hermeticity of a build is captured in the full build provenance recorded by Chains. Chains can generate and attest to metadata specifying exactly which sections of a build had network access.

This means policy can be defined around exactly which build tools are allowed to access the network and which ones are not. This metadata can be used in policies at build time (banning compilers with security vulnerabilities) or stored and used by policy engines at deploy time (only code-reviewed and verifiably built containers are allowed to run).

We believe supply-chain security must be built-in and by default. No task orchestrator can promise perfect supply-chain security, but TektonCD was designed with unique features in mind that make it easier to do the right thing. We're always looking for feedback on the design, goals and requirements. You can reach out on GitHub or the #chains Slack channel.

Announcing New Abuse Research Grants Program

Our Abuse Bug Bounty program has proved tremendously successful in the three years since its introduction, thanks to our incredibly engaged community of researchers. Their contributions have resulted in more than 1,000 valid bugs, helping us raise the bar in combating product abuse.

As a result of this continued success, today we are announcing a new experimental Abuse Research Grants Program in addition to the already existing Vulnerability Research Grants. Similar to other Research Grant Programs, these grants are up-front awards that our top researchers will receive before they ever submit a bug.

Last year, we increased our rewards to recognize the important work of our community. The growth of this program would not have been possible without partners like David (@xdavidhu), Zohar (ehpus.com), and Ademar (@nowaskyjr) who, on top of becoming our top research experts in Product Abuse, regularly contribute to transparency by sharing their work, further inspiring and influencing our community of researchers.

Despite the growth and success of this program, there remains more work to be done.

With our new Abuse Research Grants Program, we hope to bring even more awareness to product abuse by connecting more closely with our experienced researchers – so we can all work together to overcome these challenges, prevent product abuse and keep our users safe. Here’s how the program works:
  • We invite our top abuse researchers to the program.
  • We award grants immediately before research begins, no strings attached.
  • Bug Hunters apply for the targets we share with them and start their research.
  • On top of the grant, researchers are eligible for regular rewards for the bugs they discover in scope of our Bug Bounty program.
To learn more about this and other grant programs, visit our rules page.

New protections for Enhanced Safe Browsing users in Chrome

In 2020 we launched Enhanced Safe Browsing, which you can turn on in your Chrome security settings, with the goal of substantially increasing safety on the web. These improvements are being built on top of existing security mechanisms that already protect billions of devices. Since the initial launch, we have continuously worked behind the scenes to improve our real-time URL checks and apply machine learning models to warn on previously-unknown attacks. As a result, Enhanced Safe Browsing users are successfully phished 35% less than other users. Starting with Chrome 91, we will roll out new features to help Enhanced Safe Browsing users better choose their extensions, as well as offer additional protections against downloading malicious files on the web.

Chrome extensions - Better protection before installation

Every day millions of people rely on Chrome extensions to help them be more productive, save money, shop or simply improve their browser experience. This is why it is important for us to continuously improve the safety of extensions published in the Chrome Web Store. For instance, through our integration with Google Safe Browsing in 2020, the number of malicious extensions that Chrome disabled to protect users grew by 81%. This comes on top of a number of improvements for more peace of mind when it comes to privacy and security.

Enhanced Safe Browsing will now offer additional protection when you install a new extension from the Chrome Web Store. A dialog will inform you if an extension you’re about to install is not a part of the list of extensions trusted by Enhanced Safe Browsing.

Any extension built by a developer who follows the Chrome Web Store Developer Program Policies will be considered trusted by Enhanced Safe Browsing. For new developers, it will take at least a few months of respecting these conditions to become trusted. Eventually, we strive for all developers with compliant extensions to reach this status. Today, trusted developers account for nearly 75% of all extensions in the Chrome Web Store, and we expect this number to keep growing as new developers become trusted.

Improved download protection

Enhanced Safe Browsing will now offer you even better protection against risky files.

bad_file.exe may be dangerous. Send to Google for scanning?

When you download a file, Chrome performs a first level check with Google Safe Browsing using metadata about the downloaded file, such as the digest of the contents and the source of the file, to determine whether it’s potentially suspicious. For any downloads that Safe Browsing deems risky, but not clearly unsafe, Enhanced Safe Browsing users will be presented with a warning and the ability to send the file to be scanned for a more in-depth analysis (pictured above).

If you choose to send the file, Chrome will upload it to Google Safe Browsing, which will scan it using its static and dynamic analysis classifiers in real time. After a short wait, if Safe Browsing determines the file is unsafe, Chrome will display a warning. As always, you can bypass the warning and open the file without scanning. Uploaded files are deleted from Safe Browsing a short time after scanning.

Introducing Security By Design

Posted by Jon Markoff, Staff Developer Advocate & Sean Smith, Technical Program Manager

Android header graphic

As a developer, are you struggling to figure out when to build security threat protection into your roadmap? Integrating security into your app development lifecycle can save a lot of time, money, and risk. That’s why we’ve launched Security by Design on Google Play Academy to help developers identify, mitigate, and proactively protect against security threats.

The Android ecosystem, including Google Play, has many built-in security features that help protect developers and users. The course Introduction to app security best practices takes these protections one step further by helping you take advantage of additional security features to build into your app. For example, Jetpack Security helps developers properly encrypt their data at rest and provides only safe and well-known algorithms for encrypting Files and SharedPreferences. Are you concerned that rooted or compromised devices may allow a bad actor to use your app in a non-sanctioned way? The SafetyNet Attestation API is a solution to help identify potentially dangerous patterns in usage. There are several common design vulnerabilities to look out for, including use of shared or improper file storage, use of insecure protocols, unprotected components such as Activities, and more. The course also provides methods to test your application, helping you keep apps safe in the wild after launch. Finally, you can set up a Vulnerability Disclosure Program (VDP) to engage security researchers to help.

In the next course, you can learn how to integrate security at every stage of the development process by adopting the Security Development Lifecycle (SDL). The SDL is an industry-standard process, and in this course you’ll learn the fundamentals of setting up a program, getting executive sponsorship, and integrating the SDL into your development lifecycle.

Security development lifecycle graphic

Threat modeling is part of the Security Development Lifecycle, and in this course you will learn to think like an attacker to identify, categorize, and address threats. By doing so early in the design phase of development, you can identify potential threats and start planning for how to mitigate them at a much lower cost, creating a more secure product for your users.

Security design graphic

Improving your app’s security is a never-ending process. Sign up for the Security by Design module, where in a few short courses you will learn how to integrate security into your app development lifecycle, model potential threats, build app security best practices into your app, and avoid potential design pitfalls.


Introducing Half-Double: New hammering technique for DRAM Rowhammer bug


Today, we are sharing details around our discovery of Half-Double, a new Rowhammer technique that capitalizes on the worsening physics of some of the newer DRAM chips to alter the contents of memory.

Rowhammer is a DRAM vulnerability whereby repeated accesses to one address can tamper with the data stored at other addresses. Much like speculative execution vulnerabilities in CPUs, Rowhammer is a breach of the security guarantees made by the underlying hardware. As an electrical coupling phenomenon within the silicon itself, Rowhammer allows the potential bypass of hardware and software memory protection policies. This can allow untrusted code to break out of its sandbox and take full control of the system.

Rowhammer was first discussed in a paper in 2014 for what was then the mainstream generation of DRAM: DDR3. The following year, Google’s Project Zero released a working privilege-escalation exploit. In response, DRAM manufacturers implemented proprietary logic inside their chips that attempted to track frequently accessed addresses and reactively mitigate when necessary.

As DDR4 became widely adopted, it appeared as though Rowhammer had faded away thanks in part to these built-in defense mechanisms. However, in 2020, the TRRespass paper showed how to reverse-engineer and neutralize the defense by distributing accesses, demonstrating that Rowhammer techniques are still viable. Earlier this year, the SMASH research went one step further and demonstrated exploitation from JavaScript, without invoking cache-management primitives or system calls.

Traditionally, Rowhammer was understood to operate at a distance of one row: when a DRAM row is accessed repeatedly (the “aggressor”), bit flips were found only in the two adjacent rows (the “victims”). However, with Half-Double, we have observed Rowhammer effects propagating to rows beyond adjacent neighbors, albeit at a reduced strength. Given three consecutive rows A, B, and C, we were able to attack C by directing a very large number of accesses to A, along with just a handful (~dozens) to B. Based on our experiments, accesses to B have a non-linear gating effect, in which they appear to “transport” the Rowhammer effect of A onto C. Unlike TRRespass, which exploits the blind spots of manufacturer-dependent defenses, Half-Double is an intrinsic property of the underlying silicon substrate. This is likely an indication that the electrical coupling responsible for Rowhammer is a property of distance, effectively becoming stronger and longer-ranged as cell geometries shrink down. Distances greater than two are conceivable.
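
In rough code terms, the access pattern looks something like the sketch below. This is illustrative only: a real attack additionally needs physical-address knowledge, row-conflict timing, and synchronization that we omit, and the pointers here are placeholders.

// Illustrative Half-Double access pattern: hammer the far aggressor A
// heavily while touching the near aggressor B only occasionally, then
// scan the victim row C (not shown) for bit flips.
#[cfg(target_arch = "x86_64")]
unsafe fn half_double_pattern(a: *const u8, b: *const u8, iters: usize) {
    use core::arch::x86_64::_mm_clflush;
    for i in 0..iters {
        a.read_volatile(); // access A...
        _mm_clflush(a);    // ...and flush it so each access reaches DRAM
        if i % 100_000 == 0 {
            b.read_volatile(); // just a handful of accesses to B
            _mm_clflush(b);
        }
    }
}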




Google has been working with JEDEC, an independent semiconductor engineering trade organization, along with other industry partners, in search of possible solutions for the Rowhammer phenomenon. JEDEC has published two documents about DRAM and system-level mitigation techniques (JEP300-1 and JEP301-1).

We are disclosing this work because we believe that it significantly advances the understanding of the Rowhammer phenomenon, and that it will help both researchers and industry partners work together to develop lasting solutions. The challenge is substantial and the ramifications are industry-wide. We encourage all stakeholders (server, client, mobile, automotive, IoT) to join the effort to develop a practical and effective solution that benefits all of our users.

Integrating Rust Into the Android Open Source Project

The Android team has been working on introducing the Rust programming language into the Android Open Source Project (AOSP) since 2019 as a memory-safe alternative for platform native code development. As with any large project, introducing a new language requires careful consideration. For Android, one important area was assessing how to best fit Rust into Android’s build system. Currently this means the Soong build system (where the Rust support resides), but these design decisions and considerations are equally applicable for Bazel when AOSP migrates to that build system. This post discusses some of the key design considerations and resulting decisions we made in integrating Rust support into Android’s build system.

Rust integration into large projects

A RustConf 2019 meeting on Rust usage within large organizations highlighted several challenges, such as the risk that eschewing Cargo in favor of using the Rust Compiler, rustc, directly (see next section) may remove organizations from the wider Rust community. We share this same concern. When changes to imported third-party crates might be beneficial to the wider community, our goal is to upstream those changes. Likewise when crates developed for Android could benefit the wider Rust community, we hope to release them as independent crates. We believe that the success of Rust within Android is dependent on minimizing any divergence between Android and the Rust community at large, and hope that the Rust community will benefit from Android’s involvement.

No nested build systems

Rust provides Cargo as the default build system and package manager, collecting dependencies and invoking rustc (the Rust compiler) to build the target crate (Rust package). Soong takes this role instead in Android and calls rustc directly for several reasons:

  • In Cargo, C dependencies are handled independently in an ad-hoc manner via build.rs scripts. Soong already provides a mechanism for building C libraries and defining them as dependencies, and Android carefully controls the compiler version and global compilation flags to ensure libraries are built a particular way. Relying on Cargo would introduce a second non-Soong mechanism for defining/building C libraries that would not be constrained by the carefully selected compilation controls implemented in Soong. This could also lead to multiple different versions of the same library, negatively impacting memory/disk usage.
  • Calling compilers directly through Soong provides the stability and control Android requires for the variety of build configurations it supports (for example, specifying where target-specific dependencies are and which compilation flags to use). While it would technically be possible to achieve the necessary level of control over rustc indirectly through Cargo, Soong would have no understanding of how the Cargo.toml (the Cargo build file) would influence the commands Cargo emits to rustc. Paired with the fact that Cargo evolves independently, this would severely restrict Soong’s ability to precisely control how build artifacts are created.
  • Builds which are self-contained and insensitive to the host configuration, known as hermetic builds, are necessary for Android to produce reproducible builds. Cargo, which relies on build.rs scripts, doesn’t yet provide hermeticity guarantees.
  • Incremental builds are important to maintain engineering productivity; building Android takes a considerable amount of resources. Cargo was not designed for integration into existing build systems and does not expose its compilation units. Each Cargo invocation builds the entire crate dependency graph for a given Cargo.toml, rebuilding crates multiple times across projects1. This is too coarse for integration into Soong’s incremental build support, which expects smaller compilation units. This support is necessary to scale up Rust usage within Android.

Using the Rust compiler directly allows us to avoid these issues and is consistent with how we compile all other code in AOSP. It provides the most control over the build process and eases integration into Android’s existing build system. Unfortunately, avoiding it introduces several challenges and influences many other build system decisions because Cargo usage is so deeply ingrained in the Rust crate ecosystem.

No build.rs scripts

A build.rs script compiles to a Rust binary which Cargo builds and executes during a build to handle pre-build tasks, commonly setting up the build environment or building libraries in other languages (for example, C/C++). This is analogous to the configure scripts used for other languages.
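
For example, a typical build.rs that compiles a bundled C file might look like the sketch below, using the widely used cc crate (file and library names are hypothetical, and cc must be listed under [build-dependencies]):

// build.rs: Cargo compiles and runs this on the host before building the
// crate itself. Here it compiles a C helper into a static library that
// the crate links against.
fn main() {
    // Re-run this script only when the C source changes.
    println!("cargo:rerun-if-changed=src/helper.c");
    cc::Build::new()
        .file("src/helper.c")
        .compile("helper"); // produces and links libhelper.a
}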

Avoiding build.rs scripts somewhat flows naturally from not relying on Cargo, since supporting these would require replicating Cargo behavior and assumptions. Beyond this, however, there are good reasons for AOSP to avoid build scripts as well:

  • build.rs scripts can execute arbitrary code on the build host. From a security perspective, this introduces an additional burden when adding or updating third-party code, as the build.rs script needs careful scrutiny.
  • Third-party build.rs scripts may not be hermetic or reproducible in potentially subtle ways. It is also common for build.rs files to access files outside the build directory (such as /usr/lib). When they are not hermetic, we would need to either carry a local patch or work with upstream to resolve the issue.
  • The most common task for build.rs is to build C libraries which Rust code depends on. We already support this through Soong.
  • Android likewise avoids running build scripts while building for other languages, instead simply using them to inform the structure of the Android.bp file.

For instances in third-party code where a build script is used only to compile C dependencies, we either use existing cc_library Soong definitions (such as boringssl for quiche) or create new definitions for crate-specific code.

When the build.rs is used to generate source, we try to replicate the core functionality in a Soong rust_binary module for use as a custom source generator. In other cases where Soong can provide the information without source generation, we may carry a small patch that leverages this information.

Why proc_macro but not build.rs?

Why do we support proc_macros, which are compiler plug-ins that execute code on the host within the compiler context, but not build.rs scripts?

While build.rs code is written as one-off code to handle building a single crate, proc_macros define reusable functionality within the compiler which can become widely relied upon across the Rust community. As a result, popular proc_macros are generally better maintained and more scrutinized upstream, which makes the code review process more manageable. They are also more readily sandboxed as part of the build process since they are less likely to have dependencies external to the compiler.

proc_macros are also a language feature rather than a method for building code. These are relied upon by source code, are unavoidable for third-party dependencies, and are useful enough to define and use within our platform code. While we can avoid build.rs by leveraging our build system, the same can’t be said of proc_macros.
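
For contrast with build.rs, here is a minimal sketch of a function-like procedural macro (the macro name is hypothetical); it lives in a crate built with the proc-macro crate type and runs inside the compiler rather than as a separate pre-build step:

use proc_macro::TokenStream;

// Expands make_answer!() into a function definition at the call site.
#[proc_macro]
pub fn make_answer(_input: TokenStream) -> TokenStream {
    "fn answer() -> u32 { 42 }".parse().unwrap()
}

A dependent crate invokes make_answer!(); at module scope and can then call the generated answer() function.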

There is also precedent for compiler plugin support within the Android build system. For example, see Soong’s java_plugin modules.

Generated source as crates

Unlike C/C++ compilers, rustc only accepts a single source file representing an entry point to a binary or library. It expects that the source tree is structured such that all required source files can be automatically discovered. This means that generated source either needs to be placed in the source tree or provided through an include! invocation in the source:

include!("/path/to/hello.rs");

The Rust community depends on build.rs scripts, alongside assumptions about the Cargo build environment, to get around this limitation. When building, the cargo command sets an OUT_DIR environment variable into which build.rs scripts are expected to place generated source code. This source can then be included via:

include!(concat!(env!("OUT_DIR"), "/hello.rs"));

This presents a challenge for Soong as outputs for each module are placed in their own out/ directory [2]; there is no single OUT_DIR where dependencies output their generated source.

For platform code, we prefer to package generated source into a crate that can be imported. There are a few reasons to favor this approach:

  • Prevent generated source file names from colliding.
• Reduce boilerplate code checked in throughout the tree that would otherwise need to be maintained. Any boilerplate necessary to make the generated source compile into a crate can be centrally maintained.
• Avoid implicit [3] interactions between generated code and the surrounding crate.
• Reduce pressure on memory and disk by dynamically linking commonly used generated sources.

    As a result, all of Android’s Rust source generation module types produce code that can be compiled and used as a crate.
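
For example (crate and function names hypothetical), platform code consumes the output of a source-generation module like any other dependency, with no include!() boilerplate:

// hello_gen is a crate produced by a Soong source-generation module.
use hello_gen::message;

fn main() {
    // The generated code sits behind a normal crate boundary, so it cannot
    // implicitly interact with this crate's namespace.
    println!("{}", message());
}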

We still support third-party crates without modification by copying all the generated source dependencies for a module into a single per-module directory, similar to Cargo. Soong then sets the OUT_DIR environment variable to that directory when compiling the module so the generated source can be found. However, we discourage use of this mechanism in platform code unless absolutely necessary, for the reasons described above.

    Dynamic linkage by default

By default, the Rust ecosystem assumes that crates will be statically linked into binaries. The usual benefits of dynamic libraries are upgrades (whether for security or functionality) and decreased memory usage. Rust’s lack of a stable binary interface and its use of cross-crate information flow prevent upgrading libraries without upgrading all dependent code. Even when the same crate is used by two different programs on the system, it is unlikely to be provided by the same shared object [4] due to the precision with which Rust identifies its crates. This makes Rust binaries more portable but also results in larger disk and memory footprints.

This is problematic for Android devices, where resources like memory and disk usage must be carefully managed, because statically linking all crates into Rust binaries would result in excessive code duplication (especially in the standard library). However, our situation is also different from the standard host environment: we build Android using global decisions about dependencies. This means that nearly every crate is shareable between all users of that crate. Thus, we opt to link crates dynamically by default for device targets. This reduces the overall memory footprint of Rust in Android by allowing crates to be reused across multiple binaries which depend on them.

    Since this is unusual in the Rust community, not all third-party crates support dynamic compilation. Sometimes we must carry small patches while we work with upstream maintainers to add support.

    Current Status of Build Support

We support building all output types supported by rustc (rlibs, dylibs, proc_macros, cdylibs, staticlibs, and executables). Rust modules can automatically request the appropriate crate linkage for a given dependency (rlib vs. dylib). C and C++ modules can depend on Rust modules that produce cdylibs or staticlibs the same way they would depend on a C or C++ library.
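
As an illustration of that interoperability direction (the function name is illustrative), a Rust module built as a cdylib or staticlib simply exposes a C ABI:

// Built as a cdylib or staticlib, this is linkable from C/C++ like any
// C library; #[no_mangle] keeps the symbol name stable.
#[no_mangle]
pub extern "C" fn rust_add(a: i32, b: i32) -> i32 {
    a + b
}

A C caller would declare int32_t rust_add(int32_t, int32_t); and link against the resulting library as usual.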

In addition to being able to build Rust code, Android’s build system also provides support for crates generated from protobuf, gRPC, and AIDL definitions. First-class bindgen support makes interfacing with existing C code simple, and we support modules that use cxx for tighter integration with C++ code.
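
A minimal sketch of what a cxx bridge looks like (the header path and function are hypothetical; the generated glue and C++ implementation are assumed to be wired up by the build):

// The ffi module declares the C++ functions Rust may call; cxx generates
// the bindings on both sides of the boundary.
#[cxx::bridge]
mod ffi {
    unsafe extern "C++" {
        include!("example/include/engine.h");
        fn start_engine();
    }
}

fn main() {
    // Calls through the generated bindings into C++.
    ffi::start_engine();
}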

The Rust community produces great tooling for developers, such as the rust-analyzer language server. We have integrated support for rust-analyzer into the build system so that any IDE that supports it can provide code completion and go-to-definition for Android modules.

Source-based code coverage builds are supported to provide platform developers with high-level signals on how well their code is covered by tests. Benchmarks are supported as their own module type, leveraging the criterion crate to provide performance metrics. To maintain a consistent style and level of code quality, a default set of clippy and rustc lints is enabled. Additionally, HWASAN/ASAN fuzzers are supported, with the HWASAN rustc support added upstream.
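
As a sketch of what the benchmark module type builds on (the benchmarked function is illustrative), a criterion benchmark looks like this:

use criterion::{black_box, criterion_group, criterion_main, Criterion};

// An arbitrary function to measure.
fn fibonacci(n: u64) -> u64 {
    (1..n).fold((0, 1), |(a, b), _| (b, a + b)).1
}

fn bench_fib(c: &mut Criterion) {
    // black_box keeps the compiler from constant-folding the input away.
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, bench_fib);
criterion_main!(benches);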

    In the near future, we plan to add documentation to source.android.com on how to define and use Rust modules in Soong. We expect Android’s support for Rust to continue evolving alongside the Rust ecosystem and hope to continue to participate in discussions around how Rust can be integrated into existing build systems.

    Thank you to Matthew Maurer, Jeff Vander Stoep, Joel Galenson, Manish Goregaokar, and Tyler Mandry for their contributions to this post.

    Notes


    1. This can be mitigated to some extent with workspaces, but requires a very specific directory arrangement that AOSP does not conform to. 

    2. This presents no problem for C/C++ and similar languages as the path to the generated source is provided directly to the compiler. 

    3. Since include! works by textual inclusion, it may reference values from the enclosing namespace, modify the namespace, or use constructs like #![foo]. These implicit interactions can be difficult to maintain. Macros should be preferred if interaction with the rest of the crate is truly required.  

    4. While libstd would usually be shareable for the same compiler revision, most other libraries would end up with several copies for Cargo-built Rust binaries, since each build would attempt to use a minimum feature set and may select different dependency versions for the library in question. Since information propagates across crate boundaries, you cannot simply produce a “most general” instance of that library.