Online Security Blog

The latest news and insights from Google on security and safety on the Internet

Scaling Rust Adoption Through Training

Android 14 is the third major Android release with Rust support. We are already seeing a number of benefits.

These positive early results provided an enticing motivation to increase the speed and scope of Rust adoption. We hoped to accomplish this by investing heavily in training to expand from the early adopters.

Scaling up from Early Adopters

Early adopters are often willing to accept more risk to try out a new technology. They know there will be some inconveniences and a steep learning curve but are willing to learn, often on their own time.

Scaling up Rust adoption required moving beyond early adopters. For that, we needed to ensure a baseline level of comfort and productivity within a set period of time. An important part of our strategy for accomplishing this was training. Unfortunately, the type of training we wanted to provide simply didn’t exist, so we decided to write and run our own Rust training.

Training Engineers

Our goals for the training were to:

  • Quickly ramp up engineers: It is hard to take people away from their regular work for a long period of time, so we aimed to provide a solid foundation for using Rust in days, not weeks. We could not make anybody a Rust expert in so little time, but we could give people the tools and foundation needed to be productive while they continued to grow. The goal is to enable people to use Rust to be productive members of their teams. The time constraints meant we couldn’t teach people programming from scratch; we also decided not to teach macros or unsafe Rust in detail.
  • Make it engaging (and fun!): We wanted people to see a lot of Rust while also getting hands-on experience. Given the scope and time constraints mentioned above, the training was necessarily information-dense. This called for an interactive setting where people could quickly ask questions to the instructor. Research shows that retention improves when people can quickly verify assumptions and practice new concepts.
  • Make it relevant for Android: The Android-specific tooling for Rust was already documented, but we wanted to show engineers how to use it via worked examples. We also wanted to document emerging standards, such as using the thiserror and anyhow crates for error handling (see the sketch after this list). Finally, because Rust is a new language in the Android platform (AOSP), we needed to show how to interoperate with existing languages such as Java and C++.
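To make that error-handling guidance concrete, here is a minimal sketch of the pattern, using thiserror for a library-style error type and anyhow for application-level code. The ParseError type and read_config function are hypothetical illustrations, not code from the course or from AOSP.

use anyhow::{Context, Result};
use thiserror::Error;

// A library-style error type: thiserror derives std::error::Error and
// Display from the annotations. (Hypothetical example.)
#[derive(Debug, Error)]
enum ParseError {
    #[error("missing field: {0}")]
    MissingField(String),
    #[error("invalid number")]
    InvalidNumber(#[from] std::num::ParseIntError),
}

fn parse_port(line: &str) -> Result<u16, ParseError> {
    let value = line
        .strip_prefix("port=")
        .ok_or_else(|| ParseError::MissingField("port".to_string()))?;
    Ok(value.trim().parse::<u16>()?)
}

// Application-style code: anyhow::Result erases the concrete error type,
// and .with_context() adds human-readable detail for the final report.
fn read_config(path: &str) -> Result<u16> {
    let contents = std::fs::read_to_string(path)
        .with_context(|| format!("failed to read {path}"))?;
    let port = parse_port(contents.trim())
        .with_context(|| format!("failed to parse {path}"))?;
    Ok(port)
}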

With those three goals as a starting point, we looked at the existing material and available tools.

Existing Material

Documentation is a key value of the Rust community and there are many great resources available for learning Rust. First, there is the freely available Rust Book, which covers almost all of the language. Second, the standard library is extensively documented.

Because we knew our target audience, we could make stronger assumptions than most material found online. We created the course for engineers with at least 2–3 years of coding experience in either C, C++, or Java. This allowed us to move quickly when explaining concepts familiar to our audience, such as “control flow”, “stack vs heap”, and “methods”. People with other backgrounds can learn Rust from the many other resources freely available online.

Technology

For free-form documentation, mdBook has become the de facto standard in the Rust community. It is used for official documentation such as the Rust Book and Rust Reference.

A particularly interesting feature is the ability to embed executable snippets of Rust code. This is key to making the training engaging, since the code can be edited and executed live, directly in the slides.
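For illustration (this is a sketch in the style of the course, not an excerpt from it), a slide's source marks a Rust block as editable; with mdBook's playground feature enabled, the rendered page lets participants modify and re-run the snippet in place:

// In the course's Markdown source, this block would be marked as an
// editable snippet so it renders with a "Run" button in the slides.
fn main() {
    let numbers = vec![1, 2, 3, 4];
    let sum: i32 = numbers.iter().sum();
    println!("sum = {sum}"); // Participants can edit this and re-run it live.
}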

In addition to being a familiar community standard, mdBook offers the following important features:

  • Maintainability: mdbook test compiles and executes every code snippet in the course. This allowed us to evolve the class over time while ensuring that we always showed valid code to the participants.
  • Extensibility: mdBook has a plugin system which allowed us to extend the tool as needed. We relied on this feature for translations and ASCII art diagrams.

These features made it easy for us to choose mdBook. While mdBook is not designed for presentations, the output looked OK on a projector when we limited the vertical size of each page.

Supporting Translations

Android has developers and OEM partners in many countries. It is critical that they can adapt existing Rust code in AOSP to fit their needs. To support translations, we developed mdbook-i18n-helpers. Support for multilingual documentation has been a community wish since 2015 and we are glad to see the plugins being adopted by several other projects to produce maintainable multilingual documentation for everybody.

Comprehensive Rust

With the technology and format nailed down, we started writing the course. We roughly followed the outline from the Rust Book since it covered most of what we needed to cover. This gave us a course we called Rust Fundamentals: three days of five hours each, encompassing Rust syntax, semantics, and important concepts such as traits, generics, and error handling.

We then extended Rust Fundamentals with three deep dives:

  • Rust in Android: a half-day course on using Rust for AOSP development. It includes interoperability with C, C++, and Java.
  • Bare-metal Rust: a full-day class on using Rust for bare-metal development. Android devices ship significant amounts of firmware. These components are often foundational in nature (for example, the bootloader, which establishes the trust for the rest of the system), thus they must be secure.
  • Concurrency in Rust: a full-day class on concurrency in Rust. We cover both multithreading with blocking synchronization primitives (such as mutexes) and async/await concurrency (cooperative multitasking using futures); a small taste of the former is sketched after this list.
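For a flavor of the blocking-synchronization half of that material, here is a small, self-contained example (not taken from the course) that shares a counter between threads using Arc and Mutex:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc gives shared ownership across threads; Mutex gives exclusive
    // access while mutating the shared counter.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // Four threads each add 1,000; the mutex prevents lost updates,
    // so this always prints 4000.
    println!("count = {}", counter.lock().unwrap());
}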

A large set of in-house and community translators have helped translate the course into several languages. Full translations are available in Brazilian Portuguese and Korean, and Simplified Chinese and Traditional Chinese translations are in progress.

Course Reception

We started teaching the class in late 2022. In 2023, we hired a vendor, Immunant, to teach the majority of classes for Android engineers. This was important for scalability and for quality: dedicated instructors soon discovered where the course participants struggled and could adapt the delivery. In addition, over 30 Googlers have taught the course worldwide.

More than 500 Google engineers have taken the class. Feedback has been very positive: 96% of participants agreed it was worth their time. People consistently told us that they loved the interactive style, highlighting how it helped to be able to ask clarifying questions at any time. Instructors noted that people gave the course their undivided attention once they realized it was live-coded. Live-coding demands a lot from the instructor, but it is worth it due to the high engagement it achieves.

Most importantly, people exited this course and were able to be immediately productive with Rust in their day jobs. When participants were asked three months later, they confirmed that they were able to write and review Rust code. This matched the results from the much larger survey we conducted in 2022.

Looking Forward

We have been teaching Rust classes at Google for a year now. There are a few things that we want to improve: better topic ordering, more exercises, and more speaker notes. We would also like to extend the course with more deep dives. Pull requests are very welcome!

The full course is available for free at https://google.github.io/comprehensive-rust/. We are thrilled to see people starting to use Comprehensive Rust for classes around the world. We hope it can be a useful resource for the Rust community and that it will help both small and large teams get started on their Rust journey!

Thanks!

We are grateful to the 190+ contributors from all over the world who created more than 1,000 pull requests and issues on GitHub. Their bug reports, fixes, and feedback improved the course in countless ways. This includes the 50+ people who worked hard on writing and maintaining the many translations.

Special thanks to Andrew Walbran for writing Bare-metal Rust and to Razieh Behjati, Dustin Mitchell, and Alexandre Senges for writing Concurrency in Rust.

We also owe a great deal of thanks to the many volunteer instructors at Google who have been spending their time teaching classes around the globe. Your feedback has helped shape the course.

Finally, thanks to Jeffrey Vander Stoep, Ivan Lozano, Matthew Maurer, Dmytro Hrybenko, and Lars Bergstrom for providing feedback on this post.

Capslock: What is your code really capable of?




When you import a third party library, do you review every line of code? Most software packages depend on external libraries, trusting that those packages aren’t doing anything unexpected. If that trust is violated, the consequences can be huge—regardless of whether the package is malicious, or well-intended but using overly broad permissions, such as with Log4j in 2021. Supply chain security is a growing issue, and we hope that greater transparency into package capabilities will help make secure coding easier for everyone.




Avoiding bad dependencies can be hard without appropriate information on what the dependency’s code actually does, and reviewing every line of that code is an immense task. Every dependency also brings its own dependencies, compounding the need for review across an expanding web of transitive dependencies. But what if there were an easy way to know the capabilities (the privileged operations accessed by the code) of your dependencies?




Capslock is a capability analysis CLI tool that informs users of privileged operations (like network access and arbitrary code execution) in a given package and its dependencies. Last month we published the alpha version of Capslock for the Go language, which can analyze and report on the capabilities that are used beneath the surface of open source software. 




This CLI tool will provide deeper insights into the behavior of dependencies by reporting code paths that access privileged operations in the standard libraries. In upcoming versions we will add support for open source maintainers to prescribe and sandbox the capabilities required for their packages, highlighting to users what capabilities are present and alerting them if they change.




Capabilities vs Vulnerabilities


Vulnerability management is an important part of your supply chain security, but it doesn’t give you a full picture of whether your dependencies are safe to use. Adding capability analysis to your security posture gives you a better idea of the types of behavior you can expect from your dependencies, identifies potential weak points, and allows you to make a more informed choice about using a given dependency.




Capslock is motivated by the belief that the principle of least privilege—the idea that access should be limited to the minimal set that is feasible and practical—should be a first-class design concept for secure and usable software. Applied to software development, this means that a package should be allowed access only to the capabilities that it requires as part of its core behaviors. For example, you wouldn’t expect a data analysis package to need access to the network or a logging library to include remote code execution capabilities. 




Capslock is initially rolling out for Go, a language with a strong security commitment and fantastic tooling for finding known vulnerabilities in package dependencies. When Capslock is used alongside Go’s vulnerability management tools, developers can use the additional, complementary signals to inform how they interpret vulnerabilities in their dependencies. 




These capability signals can be used to:


  • Find code with the highest levels of access to prioritize audits, code reviews and vulnerability patches

  • Compare potential dependencies, or look for alternative packages when an existing dependency is no longer appropriate

  • Surface unwanted capability usage in packages to uncover new vulnerabilities or identify supply chain attacks in progress

  • Monitor for unexpected emerging capabilities due to package version or dependency changes, and even integrate capability monitoring into CI/CD pipelines 

  • Filter vulnerability data to respond to the most relevant cases, such as finding packages with network access during a network-specific vulnerability alert  





Using Capslock






We are looking forward to adding new features in future releases, such as better support for declaring the expected capabilities of a package, and extending support to other programming languages. We are working to apply Capslock at scale and make capability information for open source packages broadly available in various community tools like deps.dev.




You can try Capslock now, and we hope you find it useful for auditing your external dependencies and making informed decisions on your code’s capabilities.




We’ll be at Gophercon in San Diego on Sept 27th, 2023—come and chat with us! 




Android Goes All-in on Fuzzing

Fuzzing is an effective technique for finding software vulnerabilities. Over the past few years Android has been focused on improving the effectiveness, scope, and convenience of fuzzing across the organization. This effort has directly resulted in improved test coverage, fewer security/stability bugs, and higher code quality. Our implementation of continuous fuzzing allows software teams to find new bugs/vulnerabilities, and prevent regressions automatically without having to manually initiate fuzzing runs themselves. This post recounts a brief history of fuzzing on Android, shares how Google performs fuzzing at scale, and documents our experience, challenges, and success in building an infrastructure for automating fuzzing across Android. If you’re interested in contributing to fuzzing on Android, we’ve included instructions on how to get started, and information on how Android’s VRP rewards fuzzing contributions that find vulnerabilities.

A Brief History of Android Fuzzing

Fuzzing has been around for many years, and Android was among the early large software projects to automate fuzzing and prioritize it similarly to unit testing as part of the broader goal to make Android the most secure and stable operating system. In 2019, Android kicked off the fuzzing project with the goal of institutionalizing fuzzing by making it seamless and part of code submission. The Android fuzzing project resulted in an infrastructure consisting of Pixel phones and Google Cloud-based virtual devices that enables scalable fuzzing across the entire Android ecosystem. This project has since grown to become the official internal fuzzing infrastructure for Android and performs thousands of fuzzing hours per day across hundreds of fuzzers.

Under the Hood: How Is Android Fuzzed

Step 1: Define and find all the fuzzers in the Android repo

The first step is to integrate fuzzing into the Android build system (Soong) to enable building fuzzer binaries. While developers are busy adding features to their codebase, they can include a fuzzer to fuzz their code and submit the fuzzer alongside the code they have developed. Android fuzzing uses a build rule called cc_fuzz (see example below). cc_fuzz (we also support rust_fuzz and java_fuzz) defines a Soong module with source file(s) and dependencies that can be built into a binary.

cc_fuzz {
  name: "fuzzer_foo",

  srcs: [
    "fuzzer_foo.cpp",
  ],

  static_libs: [
    "libfoo",
  ],

  host_supported: true,
}

A packaging rule in Soong finds all of these cc_fuzz definitions and builds them automatically. The actual fuzzer structure itself is very simple and consists of one main method (LLVMFuzzerTestOneInput):

#include <stddef.h>
#include <stdint.h>

extern "C" int LLVMFuzzerTestOneInput(
               const uint8_t *data,
               size_t size) {

  // Here you invoke the code to be fuzzed. 
  return 0;
}
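Android also supports Rust fuzz targets via the rust_fuzz module type mentioned above. As a rough sketch (assuming the libfuzzer-sys crate, which provides the fuzz_target! entry-point macro; the parser being exercised is hypothetical), a Rust target mirrors the C++ structure:

#![no_main]
use libfuzzer_sys::fuzz_target;

// The fuzzing engine calls this closure with mutated byte inputs, just as
// it calls LLVMFuzzerTestOneInput in the C++ example above.
fuzz_target!(|data: &[u8]| {
    // Here you invoke the code to be fuzzed, e.g. a hypothetical parser:
    // let _ = libfoo::parse(data);
    let _ = data;
});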

The fuzzer is automatically built into a binary and, along with its static/dynamic dependencies (as specified in the Android build file), packaged into a zip file, which is added to the main zip containing all fuzzers, as shown in the example below.

Step 2: Ingest all fuzzers into Android builds

Once the fuzzers are found in the Android repository and built into binaries, the next step is to upload them to cloud storage in preparation for running them on our backend. This process runs multiple times daily. The Android fuzzing infrastructure uses an open source continuous fuzzing framework, ClusterFuzz, to run fuzzers continuously on Android devices and emulators. To run the fuzzers on ClusterFuzz, the fuzzer zip files are renamed after the build, and the latest build gets run (see the diagram below).

The fuzzer zip file contains the fuzzer binary, a corresponding dictionary, and a subfolder containing its dependencies and the git revision numbers (sourcemap) corresponding to the build. Sourcemaps are used to enhance stack traces and produce crash reports.

Step 3: Run fuzzers continuously and find bugs

Running fuzzers continuously is done through scheduled jobs where each job is associated with a set of physical devices or emulators. A job is also backed by a queue that represents the fuzzing tasks that need to be run. These tasks are a combination of running a fuzzer, reproducing a crash found in an earlier fuzzing run, or minimizing the corpus, among other tasks.

Each fuzzer is run for multiple hours, or until they find a crash. After the run, Haiku takes all of the interesting input discovered during the run and adds it to the fuzzer corpus. This corpus is then shared across fuzzer runs and grows over time. The fuzzer is then prioritized in subsequent runs according to the growth of new coverage and crashes found (if any). This ensures we provide the most effective fuzzers more time to run and find interesting crashes.

Step 4: Generate fuzzers line coverage

What good is a fuzzer if it’s not fuzzing the code you care about? To improve the quality of the fuzzers and to monitor the overall progress of Android fuzzing, two types of coverage metrics are calculated and available to Android developers. The first metric is edge coverage, which refers to edges in the Control Flow Graph (CFG). By instrumenting the fuzzer and the code being fuzzed, the fuzzing engine can track small snippets of code that get triggered every time execution flow reaches them. That way, the fuzzing engine knows exactly which of these instrumentation points were hit (and how many times) on every run, so it can aggregate them and calculate the coverage.

INFO: Seed: 2859304549
INFO: Loaded 1 modules   (773 inline 8-bit counters): 773 [0x5610921000, 0x5610921305),
INFO: Loaded 1 PC tables (773 PCs): 773 [0x5610921308,0x5610924358),
INFO: -max_len is not provided; libFuzzer will not generate inputs larger than 4096 bytes
INFO: A corpus is not provided, starting from an empty corpus
#2      INITED cov: 2 ft: 2 corp: 1/1b lim: 4 exec/s: 0 rss: 24Mb
#413    NEW    cov: 3 ft: 3 corp: 2/9b lim: 8 exec/s: 0 rss: 24Mb L: 8/8 MS: 1 InsertRepeatedBytes-
#3829   NEW    cov: 4 ft: 4 corp: 3/17b lim: 38 exec/s: 0 rss: 24Mb L: 8/8 MS: 1 ChangeBinInt-
...

Line coverage inserts instrumentation points specifying lines in the source code. Line coverage is very useful for developers as they can pinpoint areas in the code that are not covered and update their fuzzers accordingly to hit those areas in future fuzzing runs.

Drilling into any of the folders can show the stats per file:

Further clicking on one of the files shows the lines that were touched and lines that never got coverage. In the example below, the first line has been fuzzed ~5 million times, but the fuzzer never makes it into lines 3 and 4, indicating a gap in the coverage for this fuzzer.

We have dashboards internally that measure our fuzzing coverage across our entire codebase. To generate these coverage dashboards yourself, you can follow these steps.

Another measure of fuzzer quality is how many fuzzing iterations can be done in one second. It is directly related to the available computational power and the complexity of the fuzz target. However, this parameter alone cannot measure how good or effective the fuzzing is.

How we handle fuzzer bugs

Android fuzzing uses the ClusterFuzz infrastructure to handle any crashes it finds and file a ticket with the Android security team. The Android security team assesses the crash based on the Android Severity Guidelines and then routes the vulnerability to the proper team for remediation. This entire process of finding the reproducible crash, routing to Android Security, and then assigning the issue to a responsible team can take as little as two hours, and up to a week, depending on the type of crash and the severity of the vulnerability.

One example of a recent fuzzer success is CVE-2022-20473, where an internal team wrote a 20-line fuzzer and submitted it to run on the Android fuzzing infrastructure. Within a day, the fuzzer was ingested and pushed to our fuzzing infrastructure to begin fuzzing, and it shortly found a critical severity vulnerability! A patch for this CVE has been applied by the service team.

Why Android Continues to Invest in Fuzzing

Protection Against Code Regressions

The Android Open Source Project (AOSP) is a large and complex project with many contributors. As a result, there are thousands of changes made to the project every day. These changes can be anything from small bug fixes to large feature additions, and fuzzing helps to find vulnerabilities that may be inadvertently introduced and not caught during code review.

Continuous fuzzing has helped to find these vulnerabilities before they are introduced in production and exploited by attackers. One real-life example is CVE-2023-21041, a vulnerability discovered by a fuzzer written three years ago. This vulnerability affected Android firmware and could have led to local escalation of privilege with no additional execution privileges needed. The fuzzer had been running for years with limited findings until a code regression led to the introduction of this vulnerability. This CVE has since been patched.

Protection against memory-unsafe language pitfalls

Android has been a huge proponent of Rust, with Android 13 being the first Android release with the majority of new code in a memory safe language. The amount of new memory-unsafe code entering Android has decreased, but there are still millions of lines of code that remain, hence the need for fuzzing persists.

No Code is Safe: Fuzzing code in memory-safe languages

Our work does not stop with memory-unsafe languages; we encourage fuzzer development in memory-safe languages like Rust as well. While fuzzing won’t find the kinds of vulnerabilities you would expect to see in memory-unsafe languages like C/C++, there have been numerous non-security issues discovered and remediated, which contribute to the overall stability of Android.

Fuzzing Challenges

In addition to generic C/C++ binary issues such as missing dependencies, fuzzers can have their own classes of problems:

Low executions per second: in order to fuzz efficiently, the number of mutations has to be on the order of hundreds per second; otherwise, the fuzzing will take a very long time to cover the code. We addressed this issue by adding a set of alerts that continuously monitor the health of the fuzzers, as well as any sudden drop in coverage. Once a fuzzer is identified as underperforming, an automated email is sent to the fuzzer author with details to help them improve the fuzzer.

Fuzzing the wrong code: Like all resources, fuzzing resources are limited. We want to ensure that those resources give us the highest return, and that generally means devoting them towards fuzzing code that processes untrusted (i.e. potentially attacker-controlled) inputs. This can cover any way that the phone can receive input, including Bluetooth, NFC, USB, web, etc. Parsing structured input is particularly interesting, since there is room for programming errors due to the complexity of the specs. Code that generates output is not particularly interesting to fuzz. Similarly, internal code that is not exposed publicly is also less of a security concern. We addressed this issue by identifying the most vulnerable code (see the following section).

What to fuzz

In order to fuzz the most important components of the Android source code, we focus on libraries that have:

  1. A history of vulnerabilities: focus on recent history (roughly the last 12 months) rather than the distant past, since the context may have changed.
  2. Recent code changes: research indicates that more vulnerabilities are found in recently changed code than in code that is more stable.
  3. Remote access: vulnerabilities in code that is reachable remotely can be critical.
  4. Privileged: Similarly to #3, vulnerabilities in code that runs in privileged processes can be critical.

How to submit a fuzzer to AOSP

We’re constantly writing and improving fuzzers internally to cover some of the most sensitive areas of Android, but there is always room for improvement. If you’d like to get started writing your own fuzzer for an area of AOSP, you’re welcome to do so to make Android more secure (example CL):

  1. Get Android source code
  2. Have a testing phone or emulator available
  3. Write a fuzz target (follow guidelines in ‘What to fuzz’ section)
  4. Upload your fuzzer to AOSP.

Get started by reading our documentation on Fuzzing with libFuzzer and check your fuzzer into the Android Open Source project. If your fuzzer finds a bug, you can submit it to the Android Bug Bounty Program and could be eligible for a reward!

AI-Powered Fuzzing: Breaking the Bug Hunting Barrier



Since 2016, OSS-Fuzz has been at the forefront of automated vulnerability discovery for open source projects. Vulnerability discovery is an important part of keeping software supply chains secure, so our team is constantly working to improve OSS-Fuzz. For the last few months, we’ve tested whether we could boost OSS-Fuzz’s performance using Google’s Large Language Models (LLMs).




This blog post shares our experience of successfully applying the generative power of LLMs to improve the automated vulnerability detection technique known as fuzz testing (“fuzzing”). By using LLMs, we’re able to increase the code coverage for critical projects using our OSS-Fuzz service without manually writing additional code. Using LLMs is a promising new way to scale security improvements across the over 1,000 projects currently fuzzed by OSS-Fuzz and to remove barriers to future projects adopting fuzzing. 




LLM-aided fuzzing

We created the OSS-Fuzz service to help open source developers find bugs in their code at scale—especially bugs that indicate security vulnerabilities. After more than six years of running OSS-Fuzz, we now support over 1,000 open source projects with continuous fuzzing, free of charge. As the Heartbleed vulnerability showed us, bugs that could be easily found with automated fuzzing can have devastating effects. For most open source developers, setting up their own fuzzing solution could cost time and resources. With OSS-Fuzz, developers can integrate their project and get free, automated bug discovery at scale.




Since 2016, we’ve found and verified a fix for over 10,000 security vulnerabilities. We also believe that OSS-Fuzz could likely find even more bugs with increased code coverage. The fuzzing service covers only around 30% of an open source project’s code on average, meaning that a large portion of our users’ code remains untouched by fuzzing. Recent research suggests that the most effective way to increase this is by adding additional fuzz targets for every project—one of the few parts of the fuzzing workflow that isn’t yet automated.




When an open source project onboards to OSS-Fuzz, maintainers make an initial time investment to integrate their projects into the infrastructure and then add fuzz targets. The fuzz targets are functions that use randomized input to test the targeted code. Writing fuzz targets is a project-specific and manual process that is similar to writing unit tests. The ongoing security benefits from fuzzing make this initial investment of time worth it for maintainers, but writing a comprehensive set of fuzz targets is a tough expectation for project maintainers, who are often volunteers.




But what if LLMs could write additional fuzz targets for maintainers?



“Hey LLM, fuzz this project for me”

To discover whether an LLM could successfully write new fuzz targets, we built an evaluation framework that connects OSS-Fuzz to the LLM, conducts the experiment, and evaluates the results. The steps look like this:  




  1. OSS-Fuzz’s Fuzz Introspector tool identifies an under-fuzzed, high-potential portion of the sample project’s code and passes the code to the evaluation framework. 

  2. The evaluation framework creates a prompt that the LLM will use to write the new fuzz target. The prompt includes project-specific information.

  3. The evaluation framework takes the fuzz target generated by the LLM and runs the new target. 

  4. The evaluation framework observes the run for any change in code coverage.

  5. In the event that the fuzz target fails to compile, the evaluation framework prompts the LLM to write a revised fuzz target that addresses the compilation errors.





Experiment overview: The experiment pictured above is a fully automated process, from identifying target code to evaluating the change in code coverage.






At first, the code generated from our prompts wouldn’t compile; however, after several rounds of prompt engineering and trying out the new fuzz targets, we saw projects gain between 1.5% and 31% code coverage. One of our sample projects, tinyxml2, went from 38% line coverage to 69% without any interventions from our team. With the LLM-generated fuzz targets added, the majority of tinyxml2’s code is covered.









Example fuzz targets for tinyxml2: Each of the five fuzz targets shown is associated with a different part of the code and adds to the overall coverage improvement. 






To replicate tinyxml2’s results manually would have required at least a day’s worth of work—which would mean several years of work to manually cover all OSS-Fuzz projects. Given tinyxml2’s promising results, we want to implement them in production and to extend similar, automatic coverage to other OSS-Fuzz projects. 




Additionally, in the OpenSSL project, our LLM was able to automatically generate a working target that rediscovered CVE-2022-3602, which was in an area of code that previously did not have fuzzing coverage. Though this is not a new vulnerability, it suggests that as code coverage increases, we will find more vulnerabilities that are currently missed by fuzzing. 




Learn more about our results through our example prompts and outputs or through our experiment report. 




The goal: fully automated fuzzing

In the next few months, we’ll open source our evaluation framework to allow researchers to test their own automatic fuzz target generation. We’ll continue to optimize our use of LLMs for fuzzing target generation through more model finetuning, prompt engineering, and improvements to our infrastructure. We’re also collaborating closely with the Assured OSS team on this research in order to secure even more open source software used by Google Cloud customers.   




Our longer term goals include:



  • Adding LLM fuzz target generation as a fully integrated feature in OSS-Fuzz, with continuous generation of new targets for OSS-Fuzz projects and zero manual involvement.

  • Extending support from C/C++ projects to additional language ecosystems, like Python and Java. 

  • Automating the process of onboarding a project into OSS-Fuzz to eliminate any need to write even initial fuzz targets. 




We’re working towards a future of personalized vulnerability detection with little manual effort from developers. With the addition of LLM generated fuzz targets, OSS-Fuzz can help improve open source security for everyone. 


Toward Quantum Resilient Security Keys



As part of our effort to deploy quantum-resistant cryptography, we are happy to announce the release of the first quantum-resilient FIDO2 security key implementation as part of OpenSK, our open source security key firmware. This open-source, hardware-optimized implementation uses a novel ECC/Dilithium hybrid signature scheme that benefits from the security of ECC against standard attacks and Dilithium’s resilience against quantum attacks. The scheme was co-developed in partnership with ETH Zürich and won the best paper award at the ACNS Secure Cryptographic Implementation workshop.




Quantum processor




As progress toward practical quantum computers accelerates, preparing for their advent is becoming a more pressing issue. In particular, standard public key cryptography, which was designed to protect against traditional computers, will not be able to withstand quantum attacks. Fortunately, with the recent standardization of quantum-resilient public key cryptography, including the Dilithium algorithm, we now have a clear path to secure security keys against quantum attacks.




While quantum attacks are still in the distant future, deploying cryptography at Internet scale is a massive undertaking, which is why doing it as early as possible is vital. In particular, for security keys this process is expected to be gradual, as users will have to acquire new ones once FIDO has standardized post-quantum resilient cryptography and the new standard is supported by major browser vendors.



Hybrid signature scheme

Hybrid signature: Strong nesting with classical and PQC scheme




Our proposed implementation relies on a hybrid approach that combines the battle-tested ECDSA signature algorithm and the recently standardized quantum-resistant signature algorithm, Dilithium. In collaboration with ETH Zürich, we developed this novel hybrid signature scheme that offers the best of both worlds. Relying on a hybrid signature is critical, as the security of Dilithium and other recently standardized quantum-resistant algorithms hasn’t yet stood the test of time, and recent attacks on Rainbow (another quantum-resilient algorithm) demonstrate the need for caution. This cautiousness is particularly warranted for security keys, as most can’t be upgraded, although we are working toward that for OpenSK. The hybrid approach is also used in other post-quantum efforts like Chrome’s support for TLS.
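To make the idea concrete, the sketch below shows one common way to nest two signature schemes: the post-quantum signature also covers the classical signature, so a forger must defeat both. This is a conceptual illustration with placeholder signing and verification functions; it is not OpenSK's actual construction, whose details differ.

/// A hybrid signature carries both a classical and a post-quantum signature.
struct HybridSignature {
    ecdsa: Vec<u8>,
    dilithium: Vec<u8>,
}

fn hybrid_sign(
    msg: &[u8],
    ecdsa_sign: impl Fn(&[u8]) -> Vec<u8>,
    dilithium_sign: impl Fn(&[u8]) -> Vec<u8>,
) -> HybridSignature {
    let ecdsa = ecdsa_sign(msg);
    // Nesting: the Dilithium signature covers the message and the ECDSA
    // signature, binding the two together.
    let mut nested = msg.to_vec();
    nested.extend_from_slice(&ecdsa);
    let dilithium = dilithium_sign(&nested);
    HybridSignature { ecdsa, dilithium }
}

fn hybrid_verify(
    msg: &[u8],
    sig: &HybridSignature,
    ecdsa_verify: impl Fn(&[u8], &[u8]) -> bool,
    dilithium_verify: impl Fn(&[u8], &[u8]) -> bool,
) -> bool {
    let mut nested = msg.to_vec();
    nested.extend_from_slice(&sig.ecdsa);
    // Both signatures must verify; breaking only one scheme is not enough.
    ecdsa_verify(msg, &sig.ecdsa) && dilithium_verify(&nested, &sig.dilithium)
}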




On the technical side, a large challenge was to create a Dilithium implementation small enough to run on security keys’ constrained hardware. Through careful optimization, we were able to develop a memory-optimized Rust implementation that requires only 20 KB of memory, which is sufficiently small. We also spent time ensuring that our implementation’s signature speed was well within the expectations of the security key specification. That said, we believe improving signature speed further by leveraging hardware acceleration would allow keys to be more responsive.




Moving forward, we hope to see this implementation (or a variant of it) standardized as part of the FIDO2 key specification and supported by major web browsers so that users' credentials can be protected against quantum attacks. If you are interested in testing this algorithm or contributing to security key research, head to our open source implementation, OpenSK.

Making Chrome more secure by bringing Key Pinning to Android

Chrome 106 added support for enforcing key pins on Android by default, bringing Android to parity with Chrome on desktop platforms. But what is key pinning anyway?

One of the reasons Chrome implements key pinning is the “rule of two”. This rule is part of Chrome’s holistic secure development process. It says that when you are writing code for Chrome, you can pick no more than two of: code written in an unsafe language, processing untrustworthy inputs, and running without a sandbox. This blog post explains how key pinning and the rule of two are related.

The Rule of Two

Chrome is primarily written in the C and C++ languages, which are vulnerable to memory safety bugs. Mistakes with pointers in these languages can lead to memory being misinterpreted. Chrome invests in an ever-stronger multi-process architecture built on sandboxing and site isolation to help defend against memory safety problems. Android-specific features can be written in Java or Kotlin. These languages are memory-safe in the common case. Similarly, we’re working on adding support to write Chrome code in Rust, which is also memory-safe.

Much of Chrome is sandboxed, but the sandbox still requires a core high-privilege “broker” process to coordinate communication and launch sandboxed processes. In Chrome, the broker is the browser process. The browser process is the source of truth that allows the rest of Chrome to be sandboxed and coordinates communication between the rest of the processes.

If an attacker is able to craft a malicious input to the browser process that exploits a bug and allows the attacker to achieve remote code execution (RCE) in the browser process, that would effectively give the attacker full control of the victim’s Chrome browser and potentially the rest of the device. Conversely, if an attacker achieves RCE in a sandboxed process, such as a renderer, the attacker's capabilities are extremely limited. The attacker cannot reach outside of the sandbox unless they can additionally exploit the sandbox itself.

Because the browser process runs without a sandbox (which would limit the actions an attacker can take) and without memory safety (which would remove the ability of a bug to disrupt the program's intended control flow), the rule of two requires that it not handle untrustworthy inputs. The relative risks between sandboxed processes and the browser process are why the browser process is only allowed to parse trustworthy inputs and specific IPC messages.

Trustworthy inputs are defined extremely strictly: a “trustworthy source” means that Chrome can prove that the data comes from Google. Effectively, this means that in situations where the browser process needs access to data from external sources, it must be read from Google servers. We can cryptographically prove that data came from Google servers if that data comes from:

  • The component updater
  • The variations framework
  • A pinned HTTPS connection to a Google server

The component updater and the variations framework are services specific to Chrome used to ship data-only updates and configuration information. These services both use asymmetric cryptography to authenticate their data, and the public key used to verify data sent by these services is shipped in Chrome.
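As a rough illustration of this pattern (a sketch, not Chrome's actual implementation), a client can treat externally fetched data as trustworthy only if it verifies against a public key compiled into the binary. The verify_signature function below is a hypothetical stand-in for a real signature-verification routine:

```rust
// Public key baked into the client binary at build time (hypothetical value).
const UPDATE_PUBLIC_KEY: &[u8; 32] = &[0u8; 32];

// Hypothetical placeholder for a real asymmetric signature check
// (e.g. an Ed25519 or ECDSA verification provided by a crypto library).
fn verify_signature(_public_key: &[u8; 32], _payload: &[u8], _signature: &[u8]) -> bool {
    unimplemented!("call into a real signature-verification library")
}

// Only hand the payload to the rest of the program if it is provably signed
// by the key we ship, i.e. if it provably came from our own servers.
fn accept_update<'a>(payload: &'a [u8], signature: &[u8]) -> Option<&'a [u8]> {
    if verify_signature(UPDATE_PUBLIC_KEY, payload, signature) {
        Some(payload)
    } else {
        None
    }
}
```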

However, Chrome is a feature-filled browser with many different use cases, and many different features beyond just updating itself. Certain features, such as Sign-In and the Discover Feed, need to communicate with Google. For features like this, that communication can be considered trustworthy if it comes from a pinned HTTPS server.

When Chrome connects to an HTTPS server, the server effectively says, “a third party you trust (a certification authority, or CA) has vouched for my identity.” It does this by presenting a certificate issued by a trusted certification authority, and Chrome verifies the certificate before continuing. The modern web necessarily has a lot of CAs, all of whom can provide authentication for any website. To further ensure that the Chrome browser process is communicating with a trustworthy Google server, we want to verify something more: whether a specific CA is vouching for the server. We do this by building a map of sites → expected CAs directly into Chrome. We call this key pinning, and we call the map the pin set.
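Conceptually, a pin set is a map from hostnames to the set of acceptable public-key hashes, and a connection to a pinned host is trusted only if the verified certificate chain contains at least one pinned key. The Rust sketch below illustrates that check; the host name and hash values are hypothetical, and Chrome's real implementation differs in detail:

```rust
use std::collections::{HashMap, HashSet};

// A pin set: for each pinned host, the set of acceptable SPKI (public key)
// hashes. In Chrome this data is compiled in and updated separately; the
// host and hash values below are hypothetical.
fn pin_set() -> HashMap<&'static str, HashSet<[u8; 32]>> {
    let mut pins = HashMap::new();
    pins.insert("accounts.google.com", HashSet::from([[0u8; 32]]));
    pins
}

// A connection to a pinned host is only considered trustworthy if at least
// one public-key hash in the verified certificate chain matches a pin.
fn chain_matches_pins(host: &str, chain_spki_hashes: &[[u8; 32]]) -> bool {
    match pin_set().get(host) {
        // Host isn't pinned: ordinary certificate verification is the only check.
        None => true,
        Some(pins) => chain_spki_hashes.iter().any(|h| pins.contains(h)),
    }
}
```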

What is Key Pinning?

Key pinning was born as a defense against real attacks seen in the wild: attackers who can trick a CA into issuing a seemingly valid certificate for a server can then impersonate that server. This happened to Google in 2011, when the DigiNotar certification authority was compromised and used to issue malicious certificates for Google services. To defend against this risk, Chrome contains a pin set for all Google properties, and we only consider an HTTPS input trustworthy if it’s authenticated using a key in this pin set. This protects against malicious certificate issuance by third parties.

Key pinning can be brittle, and is rarely worth the risks. Allowing the pin set to get out of date can lead to locking users out of a website or other services, potentially permanently. Whenever pinning, it’s important to have safety-valves such as not enforcing pinning (i.e. failing open) when the pins haven't been updated recently, including a “backup” key pin, and having fallback mechanisms for bootstrapping. It's hard for individual sites to manage all of these mechanisms, which is why dynamic pinning over HTTPS (HPKP) was deprecated. Key pinning is still an important tool for some use cases, however, where there's high-privilege communication that needs to happen between a client and server that are operated by the same entity, such as web browsers, automatic software updates, and package managers.

Security Benefits of Key Pinning in Chrome, Now on Android

By pinning in Chrome, we can protect users from CA compromise. We take steps to prevent an out-of-date pin set from unnecessarily blocking users from accessing Google or Google's services. As both a browser vendor and site operator, however, we have additional tools to ensure we keep our pin sets up to date: if we use a new key or a new domain, we can add it to the pin set in Chrome at the same time. In our original implementation of pinning, the pin set is compiled directly into Chrome, and updating the pin set requires updating the entire Chrome binary. To make sure that users of old versions of Chrome can still talk to Google, pinning isn't enforced if Chrome detects that it is more than 10 weeks old.

Historically, Chrome enforced the age limit by comparing the current time to the build timestamp in the Chrome binary. Chrome did not enforce pinning on Android because the build timestamp on Android wasn’t always reflective of the age of the Chrome pin set, which meant that the chance of a false-positive pin mismatch was higher.

Without enforcing pins on Android, Chrome was limiting the ways engineers could build features that comply with the rule of two. To remove this limitation, we built an improved mechanism for distributing the built-in pin set to Chrome installs, including Android devices. Chrome still contains a built-in pin set compiled into the binary. However, we now additionally distribute the pin set via the component updater, which is a mechanism for Chrome to dynamically push out data-only updates to all Chrome installs without requiring a full Chrome update or restart. The component contains the latest version of the built-in pin set, as well as the certificate transparency log list and the contents of the Chrome Root Store. This means that even if Chrome is out of date, it can still receive updates to the pin set. The component also includes the timestamp the pin list was last updated, rather than relying on the build timestamp. This drastically reduces the false-positive risk of enabling key pinning on Android.
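A minimal sketch of the fail-open freshness check described above, assuming the component updater delivers a last-updated timestamp alongside the pin list (the names and exact policy are illustrative, not Chrome's actual code):

```rust
use std::time::{Duration, SystemTime};

// Ten weeks, the age limit mentioned above (illustrative constant).
const MAX_PIN_SET_AGE: Duration = Duration::from_secs(10 * 7 * 24 * 60 * 60);

// Fail open: only enforce pins when we know the pin set is reasonably fresh.
// `pin_set_updated_at` is the timestamp shipped alongside the pin list by the
// component updater, not the browser's build timestamp.
fn should_enforce_pins(pin_set_updated_at: SystemTime, now: SystemTime) -> bool {
    match now.duration_since(pin_set_updated_at) {
        Ok(age) => age <= MAX_PIN_SET_AGE,
        // Clock skew (timestamp appears to be in the future): treat as fresh.
        Err(_) => true,
    }
}
```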

After we moved the pin set to the component updater, we were able to do a slow rollout of pinning enforcement on Android. Once we determined that the false-positive risk was in line with desktop platforms, we enabled key pinning enforcement by default, starting with Chrome 106, released in September 2022.

This change has been entirely invisible to users of Chrome. While not all of the changes we make in Chrome are flashy, we're constantly working behind the scenes to keep Chrome as secure as possible and we're excited to bring this protection to Android.

Downfall and Zenbleed: Googlers helping secure the ecosystem



Finding and mitigating security vulnerabilities is critical to keeping Internet users safe. However, the more complex a system becomes, the harder it is to secure, and that is also the case for computing hardware and processors, which have developed highly advanced capabilities over the years. This post explores this trend through Downfall and Zenbleed, two new security vulnerabilities (one of which was disclosed today) that, prior to mitigation, had the potential to affect billions of personal and cloud computers, underscoring the importance of vulnerability research and cross-industry collaboration. Had these vulnerabilities been discovered by adversaries instead of Google researchers, they would have enabled attackers to compromise Internet users. For both vulnerabilities, Google worked closely with our industry partners to develop fixes, deploy mitigations, and gather details to share widely to better secure the ecosystem.

What are Downfall and Zenbleed?

Downfall (CVE-2022-40982) and Zenbleed (CVE-2023-20593) are two different vulnerabilities affecting CPUs: Intel Core (6th-11th generation) and AMD Zen 2, respectively. They allow an attacker to violate the software-hardware boundary established in modern processors, which could allow an attacker to access data in internal hardware registers that belongs to other users of the system (both across different virtual machines and across different processes).


These vulnerabilities arise from complex optimizations in modern CPUs that speed up applications: 

  1. Preemptive multitasking and simultaneous multithreading enable users and applications to share CPU cores, while the CPU enforces security boundaries at the architectural level to stop a malicious user from accessing data belonging to other users.

  2. Speculative execution allows the CPU core to execute instructions from a single execution thread without waiting for prior instructions to be completed.

  3. SIMD (single instruction, multiple data) enables data-level parallelism, where one instruction computes the same function multiple times on different data.


Downfall, affecting Intel CPUs, exploits the speculative forwarding of data from the SIMD Gather instruction. The Gather instruction helps software quickly access scattered data in memory, which is crucial for high-performance computing workloads that perform data encoding and processing. Downfall shows that this instruction forwards stale data from internal physical hardware registers to succeeding instructions. Although this data is not directly exposed to software registers, it can trivially be extracted via exploitation techniques similar to those used for Meltdown. Since these physical hardware register files are shared across multiple users of the same CPU core, an attacker can ultimately extract data from other users.
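For readers unfamiliar with Gather, the Rust sketch below models the architectural semantics of the instruction: each output lane is loaded from a different, possibly scattered, memory location, and real hardware performs those loads in parallel. This models only what the instruction computes, not the vulnerable speculative behavior:

```rust
// Scalar model of an 8-lane SIMD gather: out[lane] = data[indices[lane]].
// Real hardware performs the eight loads in parallel, which is what makes
// the instruction attractive for workloads with scattered memory accesses.
fn gather8(data: &[u32], indices: &[usize; 8]) -> [u32; 8] {
    let mut out = [0u32; 8];
    for (lane, &idx) in indices.iter().enumerate() {
        out[lane] = data[idx];
    }
    out
}

fn main() {
    let table = [10, 20, 30, 40, 50, 60, 70, 80, 90];
    let idx = [0, 2, 4, 6, 8, 1, 3, 5];
    assert_eq!(gather8(&table, &idx), [10, 30, 50, 70, 90, 20, 40, 60]);
}
```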


Zenbleed, affecting AMD CPUs, shows that incorrectly implemented speculative execution of the SIMD Zeroupper instruction leaks stale data from physical hardware registers to software registers. Zeroupper instructions should clear the data in the upper half of SIMD registers (e.g., the 256-bit YMM registers), which on Zen 2 processors is done by just setting a flag that marks the upper half of the register as zero. However, if a Zeroupper instruction is mis-speculated in the same cycle as a register-to-register move, the zero flag doesn’t get rolled back properly, leaving the upper half of the YMM register holding stale data rather than the value zero. As with Downfall, leaking stale data from physical hardware registers exposes data belonging to other users who share the same CPU core and its internal physical registers.


Comparison



|                     | Downfall                                               | Zenbleed                                                 |
|---------------------|--------------------------------------------------------|----------------------------------------------------------|
| Affects             | Intel Core (6th-11th Gen)                              | AMD Zen 2                                                |
| Leaks               | Entire XMM/YMM/ZMM register                            | Upper half of 256-bit YMM registers                      |
| Exploit             | Gather Data Sampling                                   | Architectural Data Leak                                  |
| Discovered by       | Microarchitectural analysis                            | Fuzzing                                                  |
| Fix                 | Microcode blocking speculative forwarding from Gather  | Microcode properly wiping the YMM register on Zeroupper  |
| Mitigation overhead | 0-50%, depending on the workload                       | Statistically insignificant                              |
| Reported on         | August 24, 2022                                        | May 15, 2023                                             |
| Fixed on            | August 8, 2023                                         | July 19, 2023                                            |


How did we protect our users?

Vulnerability research continues to be at the heart of our security work at Google. We invest not only in vulnerability research, but in the community as a whole in order to encourage further research that keeps all users safe. These vulnerabilities were no exception, and we worked closely with our industry partners to make them aware of the vulnerabilities, coordinate on mitigations, align on disclosure timelines, and plan how to get details out to the ecosystem.


Upon disclosure, we immediately published Security Bulletins for both Downfall and Zenbleed that detailed how Google responded to each vulnerability and provided guidance for the industry. In addition to our bulletins, we posted technical details with insights on both Downfall and Zenbleed. It’s imperative that vulnerability research continues to be supported by the industry, and we’re dedicated to doing our part to help protect those who do this important work.

Lessons learned 

These long-standing vulnerabilities, their discovery, and the mitigations that followed have provided several lessons that will help the industry move forward in vulnerability research, including:

  • There are fundamental challenges in designing secure hardware that require further research and understanding.

  • There are gaps in automated testing and verification of hardware for vulnerabilities. 

  • Optimization features that are supposed to make computation faster are closely related to security and can introduce new vulnerabilities if not implemented properly.


As Downfall and Zenbleed suggest, computer hardware is only becoming more complex every day, so we will see more vulnerabilities, which is why Google is investing in CPU and hardware security research. We look forward to continuing to share our insights and encourage the wider industry to join us in helping to expand on this work.


Want to learn more?

Downfall will be presented at Black Hat USA 2023 on August 9 at 1:30pm. You can also read more about Zenbleed in this advisory.