Tag Archives: Security

Expanding our Fully Homomorphic Encryption offering

Posted by Miguel Guevara, Product Manager, Privacy and Data Protection Office

At Google, it’s our responsibility to keep users safe online and ensure they’re able to enjoy the products and services they love while knowing their personal information is private and secure. We’re able to do more with less data through the development of our privacy-enhancing technologies (PETs) like differential privacy and federated learning.

And throughout the global tech industry, we’re excited to see that adoption of PETs is on the rise. The UK’s Information Commissioner’s Office (ICO) recently published guidance for how organizations including local governments can start using PETs to aid with data minimization and compliance with data protection laws. Consulting firm Gartner predicts that within the next two years, 60% of all large organizations will be deploying PETs in some capacity.

We’re on the cusp of mainstream adoption of PETs, which is why we also believe it’s our responsibility to share new breakthroughs and applications from our longstanding development and investment in this space. By open sourcing dozens of our PETs over the past few years, we’ve made them freely available for anyone – developers, researchers, governments, business and more – to use in their own work, helping unlock the power of data sets without revealing personal information about users.

As part of this commitment, we open-sourced a first-of-its-kind Fully Homomorphic Encryption (FHE) transpiler two years ago, and have continued to remove barriers to entry along the way. FHE is a powerful technology that allows you to perform computations on encrypted data without being able to access sensitive or personal information and we’re excited to share our latest developments that were born out of collaboration with our developer and research community to expand what can be done with FHE.
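To make the core idea concrete, here is a toy example of computing on encrypted data using the textbook Paillier cryptosystem. Paillier is only additively homomorphic and the parameters below are deliberately tiny and insecure; this is an illustration of the concept, not the schemes or tooling behind our FHE transpiler.

```python
# Toy Paillier demo: multiplying two ciphertexts yields an encryption of the
# sum of the plaintexts, so a party can add numbers it cannot read.
# Tiny, insecure parameters for illustration only.
from math import gcd

p, q = 10007, 10009                            # toy primes; real keys are far larger
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)                           # modular inverse (Python 3.8+)

def encrypt(m: int, r: int) -> int:
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

c1, c2 = encrypt(20, r=1234), encrypt(22, r=5678)
c_sum = (c1 * c2) % n2                         # addition performed on ciphertexts
assert decrypt(c_sum) == 42                    # the computing party never saw 20 or 22
```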

Furthering the adoption of Fully Homomorphic Encryption

Today, we are introducing additional tools to help the community apply FHE technologies to video files. This advancement is important because processing video with FHE can be expensive and incur long run times, limiting the ability to scale FHE use to larger files and new formats.

This will encourage developers to try out more complex applications with FHE. Historically, FHE has been thought of as an intractable technology for large-scale applications. Our results processing large video files show it is possible to use FHE in previously unimaginable domains. Say you’re a developer at a company thinking of processing a large file (on the order of terabytes, whether a video or a sequence of characters) for a given task (e.g., a convolution around specific data points to apply a blur filter to a video or detect object movement): you can now try this task using FHE.

To do so, we are expanding our FHE toolkit in three new ways to make it easier for developers to use FHE for a wider range of applications, such as private machine learning, text analysis, and video processing. As part of our toolkit, we are releasing a TensorFlow-to-FHE compiler, a software cryptography library, and an open source compiler toolchain. Our goal is to provide these new tools to researchers and developers to help advance how FHE is used to protect privacy while simultaneously lowering costs.


Expanding our toolkit

We believe that, with more optimization and specialty hardware, there will be a wider range of use cases for a myriad of similar private machine learning tasks, like privately analyzing more complex files, such as long videos, or processing text documents. That is why we are releasing a TensorFlow-to-FHE compiler that will allow any developer to compile their trained TensorFlow machine learning models into an FHE version of those models.

Once a model has been compiled to FHE, developers can use it to run inference on encrypted user data without having access to the content of the user inputs or the inference results. For instance, our toolchain can be used to compile a TensorFlow Lite model to FHE, producing a private inference in 16 seconds for a 3-layer neural network. This is just one way we are helping researchers analyze large datasets without revealing personal information.

In addition, we are releasing Jaxite, a software library for cryptography that allows developers to run FHE on a variety of hardware accelerators. Jaxite is built on top of JAX, a high-performance cross-platform machine learning library, which allows Jaxite to run FHE programs on graphics processing units (GPUs) and Tensor Processing Units (TPUs). Google originally developed JAX for accelerating neural network computations, and we have discovered that it can also be used to speed up FHE computations.
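To illustrate why JAX is a useful substrate, here is a plain JAX sketch (not Jaxite’s actual API): a function written against jax.numpy is traced and compiled by XLA for whatever backend is available, so the same code runs unchanged on CPU, GPU, or TPU. Polynomial multiplication is shown because similar kernels appear in lattice-based FHE schemes.

```python
# Plain JAX illustration, not Jaxite's API: jax.jit compiles the traced function
# with XLA for whichever backend is present (CPU, GPU, or TPU).
import jax
import jax.numpy as jnp

@jax.jit
def poly_mul(a, b):
    # Coefficients of the product polynomial are the convolution of the inputs.
    return jnp.convolve(a, b)

a = jnp.array([1.0, 2.0, 3.0])   # 1 + 2x + 3x^2
b = jnp.array([4.0, 5.0])        # 4 + 5x
print(poly_mul(a, b))            # [ 4. 13. 22. 15.]
print(jax.devices())             # shows which backend JAX selected
```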

Finally, we are announcing Homomorphic Encryption Intermediate Representation (HEIR), an open-source compiler toolchain for homomorphic encryption. HEIR is designed to enable interoperability of FHE programs across FHE schemes, compilers, and hardware accelerators. Built on top of MLIR, HEIR aims to lower the barriers to privacy engineering and research. We will be working on HEIR with a variety of industry and academic partners, and we hope it will be a hub for researchers and engineers to try new optimizations, compare benchmarks, and avoid rebuilding boilerplate. We encourage anyone interested in FHE compiler development to come to our regular meetings, which can be found on the HEIR website.

Launch diagram

Building advanced privacy technologies and sharing them with others

Organizations and governments around the world continue to explore how to use PETs to tackle societal challenges and help developers and researchers securely process and protect user data and privacy. At Google, we’re continuing to improve and apply these novel data processing techniques across many of our products, and investing in democratizing access to the PETs we’ve developed. We believe that every internet user deserves world-class privacy, and we continue to partner with others to further that goal. We’re excited for new testing and partnerships on our open source PETs and will continue investing in innovations, aiming at releasing more updates in the future.

These principles are the foundation for everything we make at Google and we’re proud to be an industry leader in developing and scaling new privacy-enhancing technologies (PETs) that make it possible to create helpful experiences while protecting our users’ privacy.

PETs are a key part of our Protected Computing effort at Google, which is a growing toolkit of technologies that transforms how, when and where data is processed to technically ensure its privacy and safety. And keeping users safe online shouldn’t stop with Google - it should extend to the whole of the internet. That’s why we continue to innovate privacy technologies and make them widely available to all.

Making Chrome more secure by bringing Key Pinning to Android

Chrome 106 added support for enforcing key pins on Android by default, bringing Android to parity with Chrome on desktop platforms. But what is key pinning anyway?

One of the reasons Chrome implements key pinning is the “rule of two”. This rule is part of Chrome’s holistic secure development process. It says that when you are writing code for Chrome, you can pick no more than two of: code written in an unsafe language, processing untrustworthy inputs, and running without a sandbox. This blog post explains how key pinning and the rule of two are related.

The Rule of Two

Chrome is primarily written in the C and C++ languages, which are vulnerable to memory safety bugs. Mistakes with pointers in these languages can lead to memory being misinterpreted. Chrome invests in an ever-stronger multi-process architecture built on sandboxing and site isolation to help defend against memory safety problems. Android-specific features can be written in Java or Kotlin. These languages are memory-safe in the common case. Similarly, we’re working on adding support to write Chrome code in Rust, which is also memory-safe.

Much of Chrome is sandboxed, but the sandbox still requires a core high-privilege “broker” process to coordinate communication and launch sandboxed processes. In Chrome, the broker is the browser process. The browser process is the source of truth that allows the rest of Chrome to be sandboxed and coordinates communication between the rest of the processes.

If an attacker is able to craft a malicious input to the browser process that exploits a bug and allows the attacker to achieve remote code execution (RCE) in the browser process, that would effectively give the attacker full control of the victim’s Chrome browser and potentially the rest of the device. Conversely, if an attacker achieves RCE in a sandboxed process, such as a renderer, the attacker's capabilities are extremely limited. The attacker cannot reach outside of the sandbox unless they can additionally exploit the sandbox itself.

Without sandboxing, which limits the actions an attacker can take, and without memory safety, which removes the ability of a bug to disrupt the intended control flow of the program, the rule of two requires that the browser process does not handle untrustworthy inputs. The relative risks between sandboxed processes and the browser process are why the browser process is only allowed to parse trustworthy inputs and specific IPC messages.

Trustworthy inputs are defined extremely strictly: A “trustworthy source” means that Chrome can prove that the data comes from Google. Effectively, this means that in situations where the browser process needs access to data from external sources, it must be read from Google servers. We can cryptographically prove that data came from Google servers if that data comes from:

The component updater and the variations framework are services specific to Chrome used to ship data-only updates and configuration information. These services both use asymmetric cryptography to authenticate their data, and the public key used to verify data sent by these services is shipped in Chrome.
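The verification pattern itself is standard public-key signing. Here is a minimal sketch using Ed25519 from the Python cryptography package; the key handling, payload, and function names are illustrative and are not Chrome’s actual component updater or variations formats.

```python
# Sketch of "data authenticated with a public key shipped in the client".
# Key type, payload, and names are illustrative, not Chrome's actual formats.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()    # stands in for the vendor's key
pinned_public_key = signing_key.public_key()  # only this half ships in the client

def is_update_authentic(payload: bytes, signature: bytes) -> bool:
    try:
        pinned_public_key.verify(signature, payload)  # raises on a bad signature
        return True
    except InvalidSignature:
        return False

update = b"data-only component payload"
sig = signing_key.sign(update)
assert is_update_authentic(update, sig)
assert not is_update_authentic(b"tampered payload", sig)
```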

However, Chrome is a feature-filled browser with many different use cases, and many different features beyond just updating itself. Certain features, such as Sign-In and the Discover Feed, need to communicate with Google. For features like this, that communication can be considered trustworthy if it comes from a pinned HTTPS server.

When Chrome connects to an HTTPS server, the server says “a 3rd party you trust (a certification authority; CA) has vouched for my identity.” It does this by presenting a certificate issued by a trusted certification authority. Chrome verifies the certificate before continuing. The modern web necessarily has a lot of CAs, all of whom can provide authentication for any website. To further ensure that the Chrome browser process is communicating with a trustworthy Google server we want to verify something more: whether a specific CA is vouching for the server. We do this by building a map of sites → expected CAs directly into Chrome. We call this key pinning. We call the map the pin set.
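Conceptually, a pin check hashes the SubjectPublicKeyInfo (SPKI) of the certificates in the presented chain and requires at least one hash to appear in the host’s pin set. A simplified sketch follows; the hostname and pin value are placeholders, and Chrome’s real pin set format and matching rules differ.

```python
# Simplified pin check: hash each certificate's SubjectPublicKeyInfo and require
# at least one match in the host's pin set. Hostname and pin are placeholders.
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

PIN_SET = {
    "accounts.example.com": {"<hex sha-256 of an expected CA or leaf SPKI>"},
}

def spki_sha256(cert_der: bytes) -> str:
    cert = x509.load_der_x509_certificate(cert_der)
    spki = cert.public_key().public_bytes(
        Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    return hashlib.sha256(spki).hexdigest()

def chain_matches_pins(host: str, chain_der: list[bytes]) -> bool:
    pins = PIN_SET.get(host)
    if pins is None:
        return True  # unpinned hosts rely on ordinary certificate verification
    return any(spki_sha256(cert_der) in pins for cert_der in chain_der)
```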

What is Key Pinning?

Key pinning was born as a defense against real attacks seen in the wild: attackers who trick a CA into issuing a seemingly valid certificate for a server can then impersonate that server. This happened to Google in 2011, when the DigiNotar certification authority was compromised and used to issue malicious certificates for Google services. To defend against this risk, Chrome contains a pin set for all Google properties, and we only consider an HTTPS input trustworthy if it’s authenticated using a key in this pin set. This protects against malicious certificate issuance by third parties.

Key pinning can be brittle, and is rarely worth the risks. Allowing the pin set to get out of date can lead to locking users out of a website or other services, potentially permanently. Whenever pinning, it’s important to have safety-valves such as not enforcing pinning (i.e. failing open) when the pins haven't been updated recently, including a “backup” key pin, and having fallback mechanisms for bootstrapping. It's hard for individual sites to manage all of these mechanisms, which is why dynamic pinning over HTTPS (HPKP) was deprecated. Key pinning is still an important tool for some use cases, however, where there's high-privilege communication that needs to happen between a client and server that are operated by the same entity, such as web browsers, automatic software updates, and package managers.

Security Benefits of Key Pinning in Chrome, Now on Android

By pinning in Chrome, we can protect users from CA compromise. We take steps to prevent an out-of-date pinset from unnecessarily blocking users from accessing Google or Google's services. As both a browser vendor and site operator, however, we have additional tools to ensure we keep our pin sets up to date—if we use a new key or a new domain, we can add it to the pin set in Chrome at the same time. In our original implementation of pinning, the pin set is directly compiled into Chrome and updating the pin set requires updating the entire Chrome binary. To make sure that users of old versions of Chrome can still talk to Google, pinning isn't enforced if Chrome detects that it is more than 10 weeks old.

Historically, Chrome enforced the age limit by comparing the current time to the build timestamp in the Chrome binary. Chrome did not enforce pinning on Android because the build timestamp on Android wasn’t always reflective of the age of the Chrome pinset, which meant that the chance of a false positive pin mismatch was higher.

Without enforcing pins on Android, Chrome was limiting the ways engineers could build features that comply with the rule of two. To remove this limitation, we built an improved mechanism for distributing the built-in pin set to Chrome installs, including Android devices. Chrome still contains a built-in pin set compiled into the binary. However, we now additionally distribute the pin set via the component updater, which is a mechanism for Chrome to dynamically push out data-only updates to all Chrome installs without requiring a full Chrome update or restart. The component contains the latest version of the built-in pin set, as well as the certificate transparency log list and the contents of the Chrome Root Store. This means that even if Chrome is out of date, it can still receive updates to the pin set. The component also includes the timestamp the pin list was last updated, rather than relying on build timestamp. This drastically reduces the false positive risk of enabling key pinning on Android.
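The fail-open rule can be expressed very simply. In the sketch below, the 10-week threshold mirrors the behavior described in this post, and the names are illustrative rather than Chrome’s actual code.

```python
# Fail-open freshness check: only enforce pins when the pin list (delivered via
# the component updater) is recent enough. Names are illustrative.
from datetime import datetime, timedelta, timezone

MAX_PIN_LIST_AGE = timedelta(weeks=10)

def should_enforce_pins(pin_list_timestamp: datetime) -> bool:
    age = datetime.now(timezone.utc) - pin_list_timestamp
    return age <= MAX_PIN_LIST_AGE
```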

After we moved the pin set to the component updater, we were able to do a slow rollout of pinning enforcement on Android. We determined that the false positive risk was now in line with desktop platforms, and enabled key pinning enforcement by default starting in Chrome 106, released in September 2022.

This change has been entirely invisible to users of Chrome. While not all of the changes we make in Chrome are flashy, we're constantly working behind the scenes to keep Chrome as secure as possible and we're excited to bring this protection to Android.

Downfall and Zenbleed: Googlers helping secure the ecosystem



Finding and mitigating security vulnerabilities is critical to keeping Internet users safe.  However, the more complex a system becomes, the harder it is to secure—and that is also the case with computing hardware and processors, which have developed highly advanced capabilities over the years. This post will detail this trend by exploring Downfall and Zenbleed, two new security vulnerabilities (one of which was disclosed today) that prior to mitigation had the potential to affect billions of personal and cloud computers, signifying the importance of vulnerability research and cross-industry collaboration. Had these vulnerabilities not been discovered by Google researchers, and instead by adversaries, they would have enabled attackers to compromise Internet users. For both vulnerabilities, Google worked closely with our partners in the industry to develop fixes, deploy mitigations and gather details to share widely and better secure the ecosystem.

What are Downfall and Zenbleed?

Downfall (CVE-2022-40982) and Zenbleed (CVE-2023-20593) are two different vulnerabilities affecting CPUs - Intel Core (6th - 11th generation) and AMD Zen2, respectively. They allow an attacker to violate the software-hardware boundary established in modern processors. This could allow an attacker to access data in internal hardware registers that hold information belonging to other users of the system (both across different virtual machines and different processes). 


These vulnerabilities arise from complex optimizations in modern CPUs that speed up applications: 

  1. Preemptive multitasking and simultaneous multithreading enable users and applications to share CPU cores, while the CPU enforces security boundaries at the architecture level to stop a malicious user from accessing other users’ data.

  2. Speculative execution allows the CPU core to execute instructions from a single execution thread without waiting for prior instructions to be completed.

  3. SIMD enables data-level parallelism where an instruction computes the same function multiple times with different data.


Downfall, affecting Intel CPUs, exploits the speculative forwarding of data from the SIMD Gather instruction. The Gather instruction helps the software access scattered data in memory quickly, which is crucial for high-performance computing workloads performing data encoding and processing. Downfall shows that this instruction forwards stale data from the internal physical hardware registers to succeeding instructions. Although this data is not directly exposed to software registers, it can trivially be extracted via similar exploitation techniques as Meltdown. Since these physical hardware register files are shared across multiple users sharing the same CPU core, an attacker can ultimately extract data from other users. 
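For readers unfamiliar with gather, it reads scattered elements by index in a single data-parallel operation. The NumPy sketch below shows the same semantics in software; the hardware instruction performs this inside the CPU’s vector unit, and Downfall showed that its speculative handling can forward stale register contents.

```python
# What "gather" means: read scattered elements by index in one data-parallel
# step. NumPy fancy indexing has the same semantics in software.
import numpy as np

table = np.arange(100, dtype=np.int32)   # data laid out in memory
indices = np.array([7, 42, 3, 99])       # scattered positions to read
gathered = table[indices]                # one vectorized gather
print(gathered)                          # [ 7 42  3 99]
```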


Zenbleed, affecting AMD CPUs, shows that incorrectly implemented speculative execution of the SIMD Zeroupper instruction leaks stale data from physical hardware registers to software registers. Zeroupper instructions should clear the data in the upper half of SIMD registers (e.g., the 256-bit register YMM), which on Zen2 processors is done by just setting a flag that marks the upper half of the register as zero. However, if the Zeroupper instruction is mis-speculated on the same cycle as a register-to-register move, the zero flag doesn’t get rolled back properly, causing the upper half of the YMM register to hold stale data rather than the value of zero. Similar to Downfall, leaking stale data from physical hardware registers exposes the data of other users who share the same CPU core and its internal physical registers.


Comparison



Downfall

  • Affects: Intel Core (6th-11th Gen)
  • Leaks: entire XMM/YMM/ZMM registers
  • Exploit: Gather Data Sampling
  • Discovered by: microarchitectural analysis
  • Fix: microcode blocking speculative forwarding from Gather
  • Mitigation overhead: 0-50% depending on the workload
  • Reported on: August 24, 2022
  • Fixed on: August 8, 2023

Zenbleed

  • Affects: AMD Zen 2
  • Leaks: upper half of 256-bit YMM registers
  • Exploit: architectural data leak
  • Discovered by: fuzzing
  • Fix: microcode properly wiping the YMM register on Zeroupper
  • Mitigation overhead: statistically insignificant
  • Reported on: May 15, 2023
  • Fixed on: July 19, 2023


How did we protect our users?

Vulnerability research continues to be at the heart of our security work at Google. We invest in not only vulnerability research, but in the community as a whole in order to encourage further research that keeps all users safe. These vulnerabilities were no exception, and we worked closely with our industry partners to make them aware of the vulnerabilities, coordinate on mitigations, align on disclosure timelines and a plan to get details out to the ecosystem. 


Upon disclosures, we immediately published Security Bulletins for both Downfall and Zenbleed that detailed how Google responded to each vulnerability, and provided guidance for the industry. In addition to our bulletins, we posted technical details for insights on both Downfall and Zenbleed. It’s imperative that vulnerability research continues to be supported by the industry, and we’re dedicated to doing our part to helping protect those that do this important work.

Lessons learned 

These long existing vulnerabilities, their discovery and the mitigations that followed have provided several lessons learned that will help the industry move forward in vulnerability research, including: 

  • There are fundamental challenges in designing secure hardware that require further research and understanding.

  • There are gaps in automated testing and verification of hardware for vulnerabilities. 

  • Optimization features that are supposed to make computation faster are closely related to security and can introduce new vulnerabilities if not implemented properly.


As Downfall and Zenbleed suggest, computer hardware is only becoming more complex every day, and so we will see more vulnerabilities, which is why Google is investing in CPU/hardware security research. We look forward to continuing to share our insights and encourage the wider industry to join us in helping to expand on this work.


Want to learn more?

Downfall will be presented at Black Hat USA 2023 on August 9 at 1:30pm. You can also read more about Zenbleed in this advisory.

An update on Chrome Security updates – shipping security fixes to you faster

To get security fixes to you faster, starting now in Chrome 116, Chrome is shipping weekly Stable channel updates.

Chrome ships a new milestone release every four weeks. In between those major releases, we ship updates to address security and other high impact bugs. We currently schedule one of these Stable channel updates (or “Stable Refresh”) between each milestone. Starting in Chrome 116, Stable updates will be released every week between milestones.

This should not change how you use or update Chrome, nor is the frequency of milestone releases changing, but it does mean security fixes will get to you faster.

Reducing the Patch Gap

Chromium is the open source project which powers Chrome and many other browsers. Anyone can view the source code, submit changes for review, and see the changes made by anyone else, even security bug fixes. Users of our Canary (and Beta) channels receive those fixes and can sometimes give us early warning of unexpected stability, compatibility, or performance problems in advance of the fix reaching the Stable channel.

This openness has benefits in testing fixes and discovering bugs, but comes at a cost: bad actors could possibly take advantage of the visibility into these fixes and develop exploits to apply against browser users who haven’t yet received the fix. This exploitation of a known and patched security issue is referred to as n-day exploitation.

That’s why we believe it’s really important to ship security fixes as soon as possible, to minimize this “patch gap”.

When a Chrome security bug is fixed, the fix is landed in the public Chromium source code repository. The fix is then publicly accessible and discoverable. After the patch is landed, individuals across Chrome are working to test and verify the patch, and evaluate security bug fixes for backporting to affected release branches. Security fixes impacting Stable channel then await the next Stable channel update once they have been backported. The time between the patch being landed and shipped in a Stable channel update is the patch gap.

Chrome began releasing Stable channel updates every two weeks in 2020, with Chrome 77, as a way to help reduce the patch gap. Before Chrome 77, our patch gap averaged 35 days. Since moving to the biweekly release cadence, the patch gap has been reduced to around 15 days. The switch to weekly updates allows us to ship security fixes even faster, and further reduce the patch gap.

While we can’t fully remove the potential for n-day exploitation, a weekly Chrome security update cadence allows us to ship security fixes 3.5 days sooner on average, greatly reducing the already small window for n-day attackers to develop and use an exploit against potential victims and making their lives much more difficult.
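The 3.5-day figure follows from a simple expected-wait argument (assuming fixes become ready at uniformly random times between releases): the average wait for the next Stable update is half the release cadence, so moving from a 14-day cadence (average wait of 7 days) to a 7-day cadence (average wait of 3.5 days) ships the typical fix 7 - 3.5 = 3.5 days sooner.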

Getting Fixes to You Faster

Not all security bug fixes are used for n-day exploitation. But we don’t know which bugs are exploited in practice, and which aren't, so we treat all critical and high severity bugs as if they will be exploited. A lot of work goes into making sure these bugs get triaged and fixed as soon as possible. Rather than having fixes sitting and waiting to be included in the next bi-weekly update, weekly updates will allow us to get important security bug fixes to you sooner, and better protect you and your most sensitive data.

Reducing Unplanned Updates

As always, we treat any Chrome bug with a known in-the-wild exploit as a security incident of the highest priority and set about fixing the bug and getting a fix out to users as soon as possible. This has meant shipping the fix in an unscheduled update, so that you are protected immediately. By now shipping stable updates weekly, we expect the number of unplanned updates to decrease since we’ll be shipping updates more frequently.

What You Can Do

Keep a lookout for notifications from your desktop or mobile device letting you know an update of Chrome is available. If an update is available, please update immediately each time!

If you are concerned that updating Chrome will interrupt your work or result in lost tabs, not to worry – when relaunching Chrome to update, your open tabs and windows are saved and Chrome re-opens them after restart. If you are browsing in Incognito mode, your tabs will not be saved. You can simply choose to delay restarting by selecting Not now, and the updates will be applied the next time you restart Chrome.

We are exploring improved ways of informing you a new Chrome update is available. Keep a lookout for these new notifications which have been rolled out for Stable experimentation to 1% of users.

Other Chromium-based browsers have varying patch gaps. Chrome does not control the update cadence of other Chromium browsers. The change described here is only applicable to Chrome. If you are using other Chromium browsers, you may want to explore the security update cadence of those browsers.

The rest is on us – with this change we’re dedicated to continuing to work to get security fixes to you as fast as possible.

Android 14 introduces first-of-its-kind cellular connectivity security features

Android is the first mobile operating system to introduce advanced cellular security mitigations for both consumers and enterprises. Android 14 introduces support for IT administrators to disable 2G support in their managed device fleet. Android 14 also introduces a feature that disables support for null-ciphered cellular connectivity.

Hardening network security on Android

The Android Security Model assumes that all networks are hostile to keep users safe from network packet injection, tampering, or eavesdropping on user traffic. Android does not rely on link-layer encryption to address this threat model. Instead, Android establishes that all network traffic should be end-to-end encrypted (E2EE).

When a user connects to cellular networks for their communications (data, voice, or SMS), due to the distinctive nature of cellular telephony, the link layer presents unique security and privacy challenges. False Base Stations (FBS) and Stingrays exploit weaknesses in cellular telephony standards to cause harm to users. Additionally, a smartphone cannot reliably know the legitimacy of the cellular base station before attempting to connect to it. Attackers exploit this in a number of ways, ranging from traffic interception and malware sideloading, to sophisticated dragnet surveillance.

Recognizing the far reaching implications of these attack vectors, especially for at-risk users, Android has prioritized hardening cellular telephony. We are tackling well-known insecurities such as the risk presented by 2G networks, the risk presented by null ciphers, other false base station (FBS) threats, and baseband hardening with our ecosystem partners.

2G and a history of inherent security risk

The mobile ecosystem is rapidly adopting 5G, the latest wireless standard for mobile, and many carriers have started to turn down 2G service. In the United States, for example, most major carriers have shut down 2G networks. However, all existing mobile devices still have support for 2G. As a result, when available, any mobile device will connect to a 2G network. This occurs automatically when 2G is the only network available, but it can also be remotely triggered in a malicious attack, silently inducing devices to downgrade to 2G-only connectivity and thus ignore any non-2G network. This behavior happens regardless of whether local operators have already sunset their 2G infrastructure.

2G networks, first implemented in 1991, do not provide the same level of security as subsequent mobile generations do. Most notably, 2G networks based on the Global System for Mobile Communications (GSM) standard lack mutual authentication, which enables trivial Person-in-the-Middle attacks. Moreover, since 2010, security researchers have demonstrated trivial over-the-air interception and decryption of 2G traffic.

The obsolete security of 2G networks, combined with the ability to silently downgrade the connectivity of a device from both 5G and 4G down to 2G, is the most common use of FBSs, IMSI catchers and Stingrays.

Stingrays are obscure yet very powerful surveillance and interception tools that have been leveraged in multiple scenarios, ranging from potentially sideloading Pegasus malware into journalists’ phones to a sophisticated phishing scheme that allegedly impacted hundreds of thousands of users with a single FBS. This Stingray-based fraud attack, which likely downgraded devices’ connections to 2G to inject SMSishing payloads, has highlighted the risks of 2G connectivity.

To address this risk, Android 12 launched a new feature that enables users to disable 2G at the modem level. Pixel 6 was the first device to adopt this feature and it is now supported by all Android devices that conform to Radio HAL 1.6+. This feature was carefully designed to ensure that users are not impacted when making emergency calls.

Mitigating 2G security risks for enterprises

The industry acknowledged the significant security and privacy benefits and impact of this feature for at-risk users, and we recognized how critical disabling 2G could also be for our Android Enterprise customers.

Enterprises that use smartphones and tablets require strong security to safeguard sensitive data and Intellectual Property. Android Enterprise provides robust management controls for connectivity safety capabilities, including the ability to disable WiFi, Bluetooth, and even data signaling over USB. Starting in Android 14, enterprise customers and government agencies managing devices using Android Enterprise will be able to restrict a device’s ability to downgrade to 2G connectivity.

The 2G security enterprise control in Android 14 enables our customers to configure mobile connectivity according to their risk model, allowing them to protect their managed devices from 2G traffic interception, Person-in-the-Middle attacks, and other 2G-based threats. IT administrators can configure this protection as necessary, always keeping the 2G radio off or ensuring employees are protected when traveling to specific high-risk locations.

These new capabilities are part of the comprehensive set of 200+ management controls that Android provides IT administrators through Android Enterprise. Android Enterprise also provides comprehensive audit logging with over 80 events including these new management controls. Audit logs are a critical part of any organization's security and compliance strategy. They provide a detailed record of all activity on a system, which can be used to track down unauthorized access, identify security breaches, and troubleshoot system problems.

Also in Android 14

The upcoming Android release also tackles the risk of cellular null ciphers. Although all IP-based user traffic is protected and E2EE by the Android platform, cellular networks expose circuit-switched voice and SMS traffic. These two particular traffic types are strictly protected only by the cellular link layer cipher, which is fully controlled by the network without transparency to the user. In other words, the network decides whether traffic is encrypted and the user has no visibility into whether it is being encrypted.

Recent reports identified usage of null ciphers in commercial networks, which exposes user voice and SMS traffic (such as one-time passwords) to trivial over-the-air interception. Moreover, some commercial Stingrays provide functionality to trick devices into believing ciphering is not supported by the network, thus downgrading the connection to a null cipher and enabling traffic interception.

Android 14 introduces a user option to disable support, at the modem-level, for null-ciphered connections. Similarly to 2G controls, it’s still possible to place emergency calls over an unciphered connection. This functionality will greatly improve communication privacy for devices that adopt the latest radio hardware abstraction layer (HAL). We expect this new connectivity security feature to be available in more devices over the next few years as it is adopted by Android OEMs.

Continuing to partner to raise the industry bar for cellular security

Alongside our Android-specific work, the team is regularly involved in the development and improvement of cellular security standards. We actively participate in standards bodies such as GSMA Fraud and Security Group as well as the 3rd Generation Partnership Project (3GPP), particularly its security and privacy group (SA3). Our long-term goal is to render FBS threats obsolete.

In particular, Android security is leading a new initiative within GSMA’s Fraud and Security Group (FASG) to explore the feasibility of modern identity, trust and access control techniques that would enable radically hardening the security of telco networks.

Our efforts to harden cellular connectivity adopt Android’s defense-in-depth strategy. We regularly partner with other internal Google teams as well, including the Android Red Team and our Vulnerability Rewards Program.

Moreover, in alignment with Android’s openness in security, we actively partner with top academic groups in cellular security research. For example, in 2022 we funded via our Android Security and Privacy Research grant (ASPIRE) a project to develop a proof-of-concept to evaluate cellular connectivity hardening in smartphones. The academic team presented the outcome of that project in the last ACM Conference on Security and Privacy in Wireless and Mobile Networks.

The security journey continues

User security and privacy, which includes the safety of all user communications, is a priority on Android. With upcoming Android releases, we will continue to add more features to harden the platform against cellular security threats.

We look forward to discussing the future of telco network security with our ecosystem and industry partners and standardization bodies. We will also continue to partner with academic institutions to solve complex problems in network security. We see tremendous opportunities to curb FBS threats, and we are excited to work with the broader industry to solve them.

Special thanks to our colleagues who were instrumental in supporting our cellular network security efforts: Nataliya Stanetsky, Robert Greenwalt, Jayachandran C, Gil Cukierman, Dominik Maier, Alex Ross, Il-Sung Lee, Kevin Deus, Farzan Karimi, Xuan Xing, Wes Johnson, Thiébaud Weksteen, Pauline Anthonysamy, Liz Louis, Alex Johnston, Kholoud Mohamed, Pavel Grafov

Pixel Binary Transparency: verifiable security for Pixel devices



Pixel Binary Transparency

With Android powering billions of devices, we’ve long put security first. There are the more visible security features you might interact with regularly, like spam and phishing protection, as well as less obvious integrated security features, like daily scans for malware. For example, Android Verified Boot strives to ensure all executed code comes from a trusted source, rather than from an attacker or corruption. And with attacks on software and mobile devices constantly evolving, we’re continually strengthening these features and adding transparency into how Google protects users. This blog post peeks under the hood of Pixel Binary Transparency, a recent addition to Pixel security that puts you in control of checking whether your Pixel is running a trusted installation of its operating system.



Supply Chain Attacks & Binary Transparency

Pixel Binary Transparency responds to a new wave of attacks targeting the software supply chain—that is, attacks on software while in transit to users. These attacks are on the rise, likely in part because of the enormous impact they can have. In recent years, tens of thousands of software users, from Fortune 500 companies to branches of the US government, have been affected by supply chain attacks that targeted the systems that create software, installing a backdoor into the code and allowing attackers to access and steal customer data.




One way Google protects against these types of attacks is by auditing Pixel phone  firmware (also called “factory images”) before release, during which the software is thoroughly checked for backdoors. Upon boot, Android Verified Boot runs a check on your device to be sure that it’s still running the audited code that was officially released by Google. Pixel Binary Transparency now expands on that function, allowing you to personally confirm that the image running on your device is the official factory image—meaning that attackers haven’t inserted themselves somewhere in the source code, build process, or release aspects of the software supply chain. Additionally, this means that even if a signing key were compromised, binary transparency would flag the unofficially signed images, deterring attackers by making their compromises more detectable.



How it works


Pixel Binary Transparency is a public, cryptographic log that records metadata about official factory images. With this log, Pixel users can mathematically prove that their Pixels are running factory images that match what Google released and haven’t been tampered with.




The Pixel Binary Transparency log is cryptographically guaranteed to be append-only, which means entries can be added to the log, but never changed or deleted. Being append-only provides resilience against attacks on Pixel images: attackers know that it’s more difficult to insert malicious code without being caught, since an image that’s been altered will no longer match the metadata Google added to the log. There’s no way to change the information in the log to match the tampered version of the software without detection. (Ideally the metadata represents the entirety of the software, but it cannot attest to the integrity of the build and release processes.)




For those who want to understand more about how this works, the Pixel Binary Transparency log is append-only thanks to a data structure called a Merkle tree, which is also used in blockchain, Git, Bittorrent, and certain NoSQL databases. The append-only property is derived from the single root hash of the Merkle tree—the top level cryptographic value in the tree. The root hash is computed by hashing each leaf node containing data (for example, metadata that confirms the security of your Pixel’s software), and recursively hashing intermediate nodes. 



The root hash of a Merkle tree does not change if, and only if, the leaf nodes do not change. By keeping track of the most recent root hash, you also keep track of all the previous leaves. You can read more about the details in the Pixel Binary Transparency documentation.
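A minimal sketch of that construction is below, using SHA-256 with RFC 6962-style prefixes to separate leaf and interior hashes. It is simplified: odd-sized levels and the exact leaf encoding are handled differently in the real Pixel Binary Transparency log.

```python
# Toy Merkle root: hash each leaf, then hash pairs of nodes up to a single root.
# Prefixes 0x00/0x01 distinguish leaf and interior hashes (RFC 6962 style).
# Simplified: the last node of an odd level is duplicated, which real logs avoid.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [sha256(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])      # simplified odd-level handling
        level = [sha256(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

entries = [b"image-metadata-1", b"image-metadata-2", b"image-metadata-3"]
print(merkle_root(entries).hex())        # changes if any entry changes
```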




Merkle Trees Proofs

There are two important computations that can be performed on a Merkle tree: the consistency proof and inclusion proof. These two proofs together allow you to check whether an entry is included in a transparency log and to trust that the log has not been tampered with.




Before you trust the contents of the log, you should use the consistency proof to check the integrity of the append-only property of the tree. The consistency proof is a set of hashes showing that, when the tree grows, the root hash changes only because new entries were added, and not because previous entries were modified.




Once you have established that the tree has not been tampered with, you can use the inclusion proof to check whether a particular entry is in the tree. In the case of Pixel Binary Transparency, you can check that a certain version of firmware is published in the log (and thus, an official image released by Google) before trusting it.
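Continuing the toy hashing rules from the sketch above, verifying an inclusion proof is a walk from the leaf to the root using the sibling hashes supplied in the proof; if the recomputed root matches the trusted root, the entry is in the tree.

```python
# Toy inclusion-proof check, using the same leaf/interior hashing as above:
# recompute the path from the leaf to the root and compare with the trusted root.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list[tuple[str, bytes]],
                     trusted_root: bytes) -> bool:
    node = sha256(b"\x00" + leaf)
    for side, sibling in proof:          # each step names which side the sibling is on
        if side == "left":
            node = sha256(b"\x01" + sibling + node)
        else:
            node = sha256(b"\x01" + node + sibling)
    return node == trusted_root
```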




You can learn more about Merkle trees on Google’s transparency.dev site, which goes deeper into the same concepts in the context of our Trillian transparency log implementation. 



Try It Out

Most Pixel owners won’t ever need to perform the consistency and inclusion proofs to check their Pixel’s image—Android Verified Boot already has multiple safeguards in place, including verifying the hash of the code and data contents and checking the validity of the cryptographic signature. However, we’ve made the process available to anyone who wants to check themselves—the Pixel Binary Transparency Log Technical Detail Page will walk you through extracting the metadata from your phone and then running the inclusion and consistency proofs to compare against the log.



More Security to Come

The first iteration of Pixel Binary Transparency lays the groundwork for more security checks. For example, building on Pixel Binary Transparency, it will be possible to make even more security data transparent for users, allowing proactive assurance for a device’s other executed code beyond its factory image. We look forward to building further on Pixel Binary Transparency and continually increasing resilience against software supply chain attacks.

The Ups and Downs of 0-days: A Year in Review of 0-days Exploited In-the-Wild in 2022


This is Google’s fourth annual year-in-review of 0-days exploited in-the-wild [2021, 2020, 2019] and builds off of the mid-year 2022 review. The goal of this report is not to detail each individual exploit, but instead to analyze the exploits from the year as a whole, looking for trends, gaps, lessons learned, and successes. 

Executive Summary

41 in-the-wild 0-days were detected and disclosed in 2022, the second-most ever recorded since we began tracking in mid-2014, but down from the 69 detected in 2021.  Although a 40% drop might seem like a clear-cut win for improving security, the reality is more complicated. Some of our key takeaways from 2022 include:




N-days function like 0-days on Android due to long patching times. Across the Android ecosystem there were multiple cases where patches were not available to users for a significant time. Attackers didn’t need 0-day exploits and instead were able to use n-days that functioned as 0-days.




0-click exploits and new browser mitigations drive down browser 0-days. Many attackers have been moving towards 0-click rather than 1-click exploits. 0-clicks usually target components other than the browser. In addition, all major browsers also implemented new defenses that make exploiting a vulnerability more difficult and could have influenced attackers moving to other attack surfaces. 




Over 40% of the 0-days discovered were variants of previously reported vulnerabilities. 17 out of the 41 in-the-wild 0-days from 2022 are variants of previously reported vulnerabilities. This continues the unpleasant trend that we’ve discussed previously in both the 2020 Year in Review report and the mid-way through 2022 report. More than 20% are variants of previous in-the-wild 0-days from 2021 and 2020.




Bug collisions are high. 2022 brought more frequent reports of attackers using the same vulnerabilities as each other, as well as security researchers reporting vulnerabilities that were later discovered to be used by attackers. When an in-the-wild 0-day targeting a popular consumer platform is found and fixed, it's increasingly likely to be breaking another attacker's exploit as well.




Based on our analysis of 2022 0-days, we hope to see continued focus on the following areas across the industry:



  1. More comprehensive and timely patching to address the use of variants and n-days as 0-days.

  2. More platforms following browsers’ lead in releasing broader mitigations to make whole classes of vulnerabilities less exploitable. 

  3. Continued growth of transparency and collaboration between vendors and security defenders to share technical details and work together to detect exploit chains that cross multiple products.

By the Numbers

For the 41 vulnerabilities detected and disclosed in 2022, no single find accounted for a large percentage of all the detected 0-days. We saw them spread relatively evenly across the year: 20 in the first half and 21 in the second half. The combination of these two data points suggests more frequent and regular detections. We also saw the number of organizations credited with in-the-wild 0-day discoveries stay high. Across the 69 detected 0-days from 2021 there were 20 organizations credited. In 2022, across the 41 in-the-wild 0-days, there were 18 organizations credited. It’s promising to see the number of organizations working on 0-day detection staying high because we need as many people working on this problem as possible.




2022 included the detection and disclosure of 41 in-the-wild 0-days, down from the 69 in 2021. While a significant drop from 2021, 2022 is still solidly in second place. All of the 0-days that we’re using for our analysis are tracked in this spreadsheet.  





Limits of Number of 0-days as a Security Metric

The number of 0-days detected and disclosed in-the-wild can’t tell us much about the state of security. Instead we use it as one indicator of many. For 2022, we believe that a combination of security improvements and regressions influenced the approximately 40% drop in the number of detected and disclosed 0-days from 2021 to 2022 and the continued higher than average number of 0-days that we saw in 2022. 




Both positive and negative changes can influence the number of in-the-wild 0-days to both rise and fall. We therefore can’t use this number alone to signify whether or not we’re progressing in the fight to keep users safe. Instead we use the number to analyze what factors could have contributed to it and then review whether or not those factors are areas of success or places that need to be addressed.




Example factors that would cause the number of detected and disclosed in-the-wild 0-days to rise:




Security Improvements - Attackers require more 0-days to maintain the same capability

  • Discovering and fixing 0-days more quickly

  • More entities publicly disclosing when a 0-day is known to be in-the-wild 

  • Adding security boundaries to platforms

Security Regressions - 0-days are easier to find and exploit 

  • Variant analysis is not performed on reported vulnerabilities

  • Exploit techniques are not mitigated

  • More exploitable vulnerabilities are added to code than fixed





Example factors that would cause the number of detected and disclosed in-the-wild 0-days to decline:


Security Improvements - 0-days take more time, money, and expertise to develop for use

  • Fewer exploitable 0-day vulnerabilities exist

  • Each new 0-day requires the creation of a new exploitation technique

  • New vulnerabilities require researching new attack surfaces

Security Regressions - Attackers need fewer 0-days to maintain the same capability

  • Slower to detect in-the-wild 0-days so a bug has a longer lifetime

  • Extended time until users are able to install a patch

  • Less sophisticated attack methods: phishing, malware, n-day exploits are sufficient




Brainstorming the different factors that could lead to this number rising and declining allows us to understand what’s happening behind the numbers and draw conclusions from there. Two key factors contributed to the higher than average number of in-the-wild 0-days for 2022: vendor transparency & variants. The continued work on detection and transparency from vendors is a clear win, but the high percentage of variants that were able to be used in-the-wild as 0-days is not great. We discuss these variants in more depth in the “Déjà vu of Déjà vu-lnerability” section. 




In the same vein, we assess that a few key factors likely led to the drop in the number of in-the-wild 0-days from 2021 to 2022: positives such as fewer exploitable bugs, to the point that many attackers are using the same bugs as each other, and negatives such as less sophisticated attack methods working just as well as 0-day exploits and slower detection of in-the-wild 0-days. The number of in-the-wild 0-days alone doesn’t tell us much about the state of in-the-wild exploitation; the real lessons lie in the variety of factors that influenced this number. We dive into these in the following sections.

Are 0-days needed on Android?

In 2022, across the Android ecosystem we saw a series of cases where the upstream vendor had released a patch for the issue, but the downstream manufacturer had not taken the patch and released the fix for users to apply. Project Zero wrote about one of these cases in November 2022 in their “Mind the Gap” blog post.




These gaps between upstream vendors and downstream manufacturers allow n-days - vulnerabilities that are publicly known - to function as 0-days because no patch is readily available to the user and their only defense is to stop using the device. While these gaps exist in most upstream/downstream relationships, they are more prevalent and longer in Android. 




This is a great case for attackers. Attackers can use the known n-day bug, but have it operationally function as a 0-day since it will work on all affected devices. An example of how this happened on Android in 2022 is CVE-2022-38181, a vulnerability in the ARM Mali GPU. The bug was originally reported to the Android security team in July 2022 by security researcher Man Yue Mo of the GitHub Security Lab. The Android security team labeled the issue “Won’t Fix” because it was “device-specific,” but referred it to ARM. In October 2022, ARM released a new driver version that fixed the vulnerability. In November 2022, TAG discovered the bug being used in-the-wild. While ARM had released the fixed driver version in October 2022, the vulnerability was not fixed by Android until April 2023: 6 months after the initial release by ARM, 9 months after the initial report by Man Yue Mo, and 5 months after it was first found being actively exploited in-the-wild.




  • July 2022: Reported to Android Security team

  • Aug 2022: Android Security labels “Won’t Fix” and sends to ARM

  • Oct 2022: Bug fixed by ARM

  • Nov 2022: In-the-wild exploit discovered

  • April 2023: Included in Android Security Bulletin




In December 2022, TAG discovered another exploit chain targeting the latest version of the Samsung Internet browser. At that time, the latest version of the Samsung Internet browser was running on Chromium 102, which had been released 7 months prior in May 2022. As a part of this chain, the attackers were able to use two n-day vulnerabilities which functioned as 0-days: CVE-2022-3038, which had been patched in Chrome 105 in June 2022, and CVE-2022-22706 in the ARM Mali GPU kernel driver. ARM had released the patch for CVE-2022-22706 in January 2022, and even though it had been marked as exploited in-the-wild, attackers were still able to use it 11 months later as a 0-day. Although this vulnerability was known to be exploited in the wild in January 2022, it was not included in the Android Security Bulletin until June 2023, 17 months after the patch was released and the vulnerability was publicly known to be actively exploited in-the-wild.




These n-days that function as 0-days fall into a gray area of whether or not to track them as 0-days. In the past we have sometimes counted them as 0-days: CVE-2019-2215 and CVE-2021-1048. In the cases of these two vulnerabilities, the bugs had been fixed in the upstream Linux kernel, but without a CVE being assigned, as is Linux’s standard. We included them because they had not been identified as security issues needing to be patched in Android prior to their in-the-wild discovery. In the case of CVE-2022-38181, by contrast, the bug was initially reported to Android, and ARM published security advisories for the issue indicating that downstream users needed to apply those patches. We will continue trying to decipher this “gray area” of bugs, but welcome input on how they ought to be tracked.



Browsers Are So 2021

Similar to the overall numbers, there was a 42% drop in the number of detected in-the-wild 0-days targeting browsers from 2021 to 2022, dropping from 26 to 15. We assess this reflects browsers’ efforts to make exploitation more difficult overall as well as a shift in attacker behavior away from browsers towards 0-click exploits that target other components on the device. 







Advances in the defenses of the top browsers are likely influencing the push to other components as the initial vector in an exploit chain. Throughout 2022 we saw more browsers launching and improving additional defenses against exploitation. For Chrome that’s MiraclePtr, the v8 Sandbox, and libc++ hardening. Safari launched Lockdown Mode and Firefox launched more fine-grained sandboxing. In his April 2023 keynote at Zer0Con, Ki Chan Ahn, a vulnerability researcher and exploit developer at offensive security vendor Dataflow Security, commented on how these types of mitigations are making browser exploitation more difficult and are an incentive for moving to other attack surfaces.




Browsers becoming more difficult to exploit pairs with an evolution in exploit delivery over the past few years to explain the drop in browser bugs in 2022. In 2019 and 2020, a decent percentage of the detected in-the-wild 0-days were delivered via watering hole attacks. A watering hole attack is where an attacker is targeting a group that they believe will visit a certain website. Anyone who visits that site is then exploited and delivered the final payload (usually spyware). In 2021, we generally saw a move to 1-click links as the initial attack vector. Both watering hole attacks and 1-click links use the browser as the initial vector onto the device. In 2022, more attackers began moving to using 0-click exploits instead, exploits that require no user interaction to trigger. 0-clicks tend to target device components other than browsers.




At the end of 2021, Citizen Lab captured a 0-click exploit targeting iMessage, CVE-2021-30860, used by NSO in their Pegasus spyware. Project Zero detailed the exploit in this 2-part blog post series. While no in-the-wild 0-clicks were publicly detected and disclosed in 2022, this does not signal a lack of use. We know that multiple attackers have and are using 0-click exploit chains.

0-clicks are difficult to detect because:



  • They are short lived

  • Often have no visible indicator of their presence

  • Can target many different components and vendors don’t even always realize all the components that are remotely accessible

  • Delivered directly to the target rather than broadly available like in a watering hole attack

  • Often not hosted on a navigable website or server




With 1-click exploits, there is a visible link that has to be clicked by the target to deliver the exploit. This means that the target or security tools may detect the link. The exploits are then hosted on a navigable server at that link.




0-clicks on the other hand often target the code that processes incoming calls or messages, meaning that they can often run prior to an indicator of an incoming message or call ever being shown. This also dramatically shortens their lifetime and the window in which they can be detected “live”. It’s likely that attackers will continue to move towards 0-click exploits and thus we as defenders need to be focused on how we can detect and protect users from these exploits. 



Déjà vu-lnerability: Complete patching remains one of the biggest opportunities

17 out of the 41 0-days discovered in-the-wild in 2022 are variants of previously public vulnerabilities. We first published about this in the 2020 Year in Review report, “Deja vu-lnerability,” identifying that 25% of the in-the-wild 0-days from 2020 were variants of previously public bugs. That number has continued to rise, which could be due to:



  • Defenders getting better at identifying variants,

  • Defenders improving at detecting in-the-wild 0-days that are variants,

  • Attackers exploiting more variants, or

  • Vulnerabilities being fixed less comprehensively, leaving more variants available to exploit.




The answer is likely a combination of all of the above, but we know that the number of variants able to be exploited against users as 0-days is not decreasing. Reducing the number of exploitable variants is one of the biggest areas of opportunity for the tech and security industries to force attackers to work harder for functional 0-day exploits.




Not only were over 40% of the 2022 in-the-wild 0-days variants, but more than 20% of the bugs were variants of previous in-the-wild 0-days: 7 from 2021 and 1 from 2020. When a 0-day is caught in the wild, it’s a gift. Attackers don’t want us to know what vulnerabilities they have and the exploit techniques they’re using. Defenders need to take as much advantage as we can of this gift and make it as hard as possible for attackers to come back with another 0-day exploit. This involves:


  • Analyzing the bug to find the true root cause, not just the way that the attackers chose to exploit it in this case

  • Looking for other locations that the same bug may exist

  • Evaluating any additional paths that could be used to exploit the bug

  • Comparing the patch to the true root cause and determining if there are any ways around it




We consider a patch to be complete only when it is both correct and comprehensive. A correct patch fixes the bug with complete accuracy, meaning the patch no longer allows any exploitation of the vulnerability. A comprehensive patch applies that fix everywhere it needs to be applied, covering all of the variants. For a single vulnerability or bug, there are often multiple ways to trigger it, or multiple paths to reach it. Many times we see vendors block only the path that is shown in the proof-of-concept or exploit sample, rather than fixing the vulnerability as a whole. Similarly, security researchers often report bugs without following up on how the patch works and exploring related attacks.




While the idea that incomplete patches are making it easier for attackers to exploit 0-days may be uncomfortable, the converse of this conclusion can give us hope. We have a clear path toward making 0-days harder. If more vulnerabilities are patched correctly and comprehensively, it will be harder for attackers to exploit 0-days.




We’ve included all identified vulnerabilities that are variants in the table below. For more thorough walk-throughs of how each in-the-wild 0-day is a variant, check out the presentation from the FIRST conference [video, slides], the slides from Zer0Con, the presentation from OffensiveCon [video, slides] on CVE-2022-41073, and this blog post on CVE-2022-22620.




Product | 2022 ITW CVE | Variant
Windows win32k | CVE-2022-21882 | CVE-2021-1732 (2021 itw)
iOS IOMobileFrameBuffer | CVE-2022-22587 | CVE-2021-30983 (2021 itw)
WebKit “Zombie” | CVE-2022-22620 | Bug was originally fixed in 2013, patch was regressed in 2016
Firefox WebGPU IPC | CVE-2022-26485 | Fuzzing crash fixed in 2021
Android in ARM Mali GPU | CVE-2021-39793, CVE-2022-22706 | CVE-2021-28664 (2021 itw)
Sophos Firewall | CVE-2022-1040 | CVE-2020-12271 (2020 itw)
Chromium v8 | CVE-2022-1096 | CVE-2021-30551 (2021 itw)
Chromium | CVE-2022-1364 | CVE-2021-21195
Windows “PetitPotam” | CVE-2022-26925 | CVE-2021-36942 (patch was regressed)
Windows “Follina” | CVE-2022-30190 | CVE-2021-40444 (2021 itw)
Atlassian Confluence | CVE-2022-26134 | CVE-2021-26084 (2021 itw)
Chromium Intents | CVE-2022-2856 | CVE-2021-38000 (2021 itw)
Exchange SSRF “ProxyNotShell” | CVE-2022-41040 | CVE-2021-34473 “ProxyShell”
Exchange RCE “ProxyNotShell” | CVE-2022-41082 | CVE-2023-21529 “ProxyShell”
Internet Explorer JScript9 | CVE-2022-41128 | CVE-2021-34480
Windows “Print Spooler” | CVE-2022-41073 | CVE-2022-37987
WebKit JSC | CVE-2022-42856 | 2016 bug discovered due to test failure




No Copyrights in Exploits

Unlike many commodities in the world, a 0-day itself is not finite. Just because one person has discovered the existence of a 0-day vulnerability and developed it into an exploit doesn’t prevent other people from independently finding it too and using it in their own exploit. Most attackers who do their own vulnerability research and exploit development do not want anyone else to find the same bugs, because independent discovery lowers the exploit’s value and makes the vulnerability more likely to be detected and fixed quickly.




Over the last couple of years we’ve become aware of a trend of a high number of bug collisions, where more than one researcher has found the same vulnerability. This is happening amongst both attackers and security researchers who report the bugs to vendors. While bug collisions have always occurred, and we can’t measure the exact rate at which they occur, several signals suggest they are happening more often: different entities independently being credited for the same vulnerability in security advisories, the same 0-day being found in two different exploits, and conversations with researchers who work on both sides of the fence.




A higher number of bug collisions is a win for defense because it means attackers are overall using fewer 0-days. Limiting attack surfaces and making fewer bug classes exploitable can certainly contribute to researchers finding the same bugs, but more security researchers publishing their research likely contributes as well. People read the same research and it sparks an idea for their next project, and often it sparks similar ideas in many readers. Platforms and attack surfaces are also becoming increasingly complex, so it takes a significant investment of time to build up expertise in a new component or target.




Security researchers and their vulnerability reports are helping to fix the same 0-days that attackers are using, even if those specific 0-days haven’t yet been detected in the wild, thus breaking the attackers’ exploits. We hope that vendors continue supporting researchers and investing in their bug bounty programs, because this helps fix the same vulnerabilities that are likely being used against users. It also highlights why thorough patching of both known in-the-wild bugs and vulnerabilities reported by security researchers is important.



What now?

Looking back on 2022 our overall takeaway is that as an industry we are on the right path, but there are also plenty of areas of opportunity, the largest area being the industry’s response to reported vulnerabilities. 



  • We must get fixes and mitigations to users quickly so that they can protect themselves.

  • We must perform detailed analyses to ensure the root cause of the vulnerability is addressed.

  • We must share as many technical details as possible.

  • We must capitalize on reported vulnerabilities to learn and fix as much as we can from them.



None of this is easy, nor is any of this a surprise to security teams who operate in this space. It requires investment, prioritization, and developing a patching process that balances both protecting users quickly and ensuring it is comprehensive, which can at times be in tension. Required investments depend on each unique situation, but we see some common themes around staffing/resourcing, incentive structures, process maturity, automation/testing, release cadence, and partnerships. 




We’ve detailed some efforts that can help ensure bugs are correctly and comprehensively fixed in this post, including root cause, patch, variant, and exploit technique analyses. We will continue to help with these analyses, and we encourage platform security teams and other independent security researchers to invest in these efforts as well.



Final Thoughts: TAG’s New Exploits Team

Looking into the second half of 2023, we’re excited for what’s to come. You may notice that our previous reports have been on the Project Zero blog. Our 0-days in-the-wild program has moved from Project Zero to TAG in order to combine vulnerability analysis, detection, and threat actor tracking expertise in one team, benefiting from more resources and ultimately becoming TAG Exploits! More to come on that, but we’re really excited about what this means for protecting users from 0-days and making 0-day hard.




One of the intentions of our Year in Review is to make our conclusions and findings “peer-reviewable”. If we want to best protect users from the harms of 0-days and make 0-day exploitation hard, we need all the eyes and brains we can get tackling this problem. We welcome critiques, feedback, and other ideas on our work in this area. Please reach out at 0day-in-the-wild <at> google.com.

Supply chain security for Go, Part 3: Shifting left


Previously in our Supply chain security for Go series, we covered dependency and vulnerability management tools and how Go ensures package integrity and availability as part of the commitment to countering the rise in supply chain attacks in recent years.

In this final installment, we’ll discuss how “shift left” security can help make sure you have the security information you need, when you need it, to avoid unwelcome surprises. 

Shifting left


The software development life cycle (SDLC) refers to the series of steps that a software project goes through, from planning all the way through operation. It’s a cycle because once code has been released, the process continues and repeats through actions like coding new features, addressing bugs, and more. 

Shifting left involves implementing security practices earlier in the SDLC. For example, consider scanning dependencies for known vulnerabilities; many organizations do this as part of continuous integration (CI), which ensures that code has passed security scans before it is released. However, if a vulnerability is first found during CI, significant time has already been invested building code upon an insecure dependency. Shifting left in this case means allowing developers to run vulnerability scans locally, well before the CI-time scan, so they can learn about issues with their dependencies prior to investing time and effort into creating new code built upon vulnerable dependencies or functions.
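For example, a developer can run Go’s govulncheck tool locally before any CI scan ever runs. A minimal sketch of that workflow follows; the exact flags and how you wire the same scan into CI will vary by project:

    # Install the scanner once (requires a recent Go toolchain)
    go install golang.org/x/vuln/cmd/govulncheck@latest

    # Scan the whole module before pushing changes
    govulncheck ./...

Because govulncheck only reports vulnerabilities in code your module actually calls, running it locally gives quick, low-noise feedback long before a CI gate would fail.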

Shifting left with Go

Go provides several features that help you address security early in your process, including govulncheck and pkg.go.dev discussed in Supply chain security for Go, Part 1. Today’s post covers two more features of special interest to supply chain security: the Go extension for Visual Studio Code and built-in fuzz testing. 


VS Code Go extension

The VS Code Go extension helps developers shift left by surfacing problems directly in their code editor. The extension is loaded with features, including built-in testing and debugging and vulnerability information right in your IDE. Having these features at your fingertips while coding means good security practices are incorporated into your project as early as possible. For example, by running the govulncheck integration early and often, you'll know whether you are invoking a vulnerable function before it becomes difficult to extract. Check out the tutorial to get started today.

Fuzz testing in Go

In 2022, Go became the first major programming language to include fuzz testing in its standard toolset with the release of Go 1.18. Fuzzing is a type of automated testing that continuously alters program inputs to find bugs. It plays a huge role in keeping the Go project itself secure: OSS-Fuzz has discovered eight vulnerabilities in the Go standard library since 2020.

Fuzz testing can find security exploits and vulnerabilities in edge cases that humans often miss, not only in your code but also in your dependencies, which means more insight into your supply chain. With fuzzing included in the standard Go tool set, developers can more easily shift left, fuzzing earlier in their development process. Our tutorial walks you through how to set up and run your fuzzing tests.
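As a minimal sketch of what a native Go fuzz test looks like (modeled on the Go fuzzing tutorial; the Reverse function and package name here are illustrative assumptions, not code from this series):

    package reverse

    import "testing"

    // Reverse returns s with its runes in reverse order; it stands in for
    // whatever function you want to fuzz. (In a real project this would live
    // in reverse.go and FuzzReverse in reverse_test.go.)
    func Reverse(s string) string {
        runes := []rune(s)
        for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
            runes[i], runes[j] = runes[j], runes[i]
        }
        return string(runes)
    }

    // FuzzReverse lets the fuzzing engine mutate the seed inputs and checks
    // the invariant that reversing twice returns the original string.
    func FuzzReverse(f *testing.F) {
        f.Add("hello")        // seed corpus entry
        f.Add("héllo, wörld") // non-ASCII seed to exercise rune handling
        f.Fuzz(func(t *testing.T, s string) {
            if got := Reverse(Reverse(s)); got != s {
                t.Errorf("Reverse(Reverse(%q)) = %q, want %q", s, got, s)
            }
        })
    }

Running "go test -fuzz=FuzzReverse" in that package directory keeps generating new inputs until it finds a failing one or you stop the run; without the -fuzz flag, the seed entries still run as ordinary tests.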

If you maintain a Go package, your project may be eligible for free and continuous fuzzing provided by OSS-Fuzz, which supports native Go fuzzing. Fuzzing your project, whether on demand through the standard toolset or continuously through OSS-Fuzz, is a great way to help protect the people and projects who will use your module.

Security at the ecosystem level

In the same way that we’re working toward "secure Go practices" becoming "standard Go practices," the future of software will be more secure for everyone when they’re simply “standard development practices.” Supply chain security threats are real and complex, but we can contribute to solving them by building solutions directly into open source ecosystems.

If you’ve enjoyed this series, come meet the Go team at Gophercon this September! And check out our closing keynote—all about how Go’s vulnerability management can help you write more secure and reliable software.


A look at Chrome’s security review culture



Security reviewers must develop the confidence and skills to make fast, difficult decisions. A simplistic piece of advice to reviewers is “just be confident” but in reality that takes practice and experience. Confidence comes with time, and people are there to support each other as we learn. This post shares advice we give to people doing security reviews for Chrome.

Security Review in Chrome

Chrome has a lightweight launch process. Teams write requirements and design documents outlining why the feature should be built, how the feature will benefit users, and how the feature will be built. Developers write code behind a feature flag and must pass a Launch Review before turning it on. Teams think about security early on and coordinate with the security team. Teams are responsible for the safety of their features and for ensuring that the security team is able to say ‘yes’ to its security review.

Security review focuses on the design of a proposed feature, not its implementation details, and is distinct from code review. Chrome changes need approval from engineers familiar with the code being changed, but not necessarily from security experts. It is not practical for security engineers to scrutinize every change. Instead, we focus on the feature’s architecture and how it might affect people using Chrome.

Reviewers function best in an open and supportive engineering culture. Security review is not an easy task – it applies security engineering insights in a social context that could become adversarial and fractious. Google, and Chrome, embody a security-centric engineering culture, where respectful disagreement is valued, where we learn from mistakes, where decisions can be revisited, and where developers see the security team as a partner that helps them ship features safely. Within the security team we support each other by encouraging questioning & learning, and provide mentorship and coaching to help reviewers enhance their reviewing skills.

Learning security review

Start by shadowing

Start with some help. As a new reviewer, you may not feel you’re 100% ready; don’t let that put you off. The best way to learn is to observe and see what’s involved before easing into doing reviews on your own. Start by shadowing to get a feel for the process. Ask the person you are shadowing how they plan to approach the review, then look at the materials yourself. Concentrate on learning how to review rather than on the details of the thing you are reviewing. Don’t get too involved, but observe how the reviewer does things and ask them why. Next time, try to co-review something: ask the feature team some questions and talk through your thoughts with the other reviewer. Let them make the final approval decision. Do this a few times and you’ll be ready to be the main reviewer, and remember that you can always reach out for help and advice.

Read enough to make a decision

Read a lot, but know when to stop. Understand what the feature is doing, what’s new, and what’s built on existing, approved, mechanisms. Focus on the new things. If you need to educate yourself, skim older docs or code for context. It can help to look at related reviews for repeated issues and solutions. It is tempting to try to understand everything and at first you’ll dig deeper than you need to. You’ll get better at knowing when to stop after a few reviews. Treat existing, approved, features as building blocks that you don’t need to fully understand, but might be useful to skim as background.

Launch review is a gate. It’s ok to ask feature teams to have the materials ready. Try to use your time wisely: if a design doc is very brief and lacks any security discussion, you can quickly say "please add a security considerations section" and stop thinking about it until the team comes back with more complete documentation. If the design document doesn’t fully explain something, that is a sign the document needs to be expanded; if something isn’t clear to you or isn’t covered, then start asking questions. Remember that you’re not looking for every possible bug, but ensuring that major concerns are addressed upfront.

As you’re reading, read actively and write down observations and questions as you go. Cross them off if you find an answer later. For your first reviews this will take a long time. Don’t worry too much about that; you won’t know yet which details matter. Over time you’ll learn where to focus your attention. This is also a good time to pair up with a seasoned reviewer. Schedule a chat to go over your thoughts before you share them with the feature team. This will help you understand the process people go through and allows a safe evaluation of your thoughts before you share them more widely, which will help you build confidence. Next, clarify any questions with the feature team. Try to write a sentence or two describing the feature; if you can’t do this, it indicates you need more information.

Ask questions to improve documentation

You have permission to be ignorant! Use it! Ask questions until you understand areas of uncertainty. Asking questions provides real value, and often triggers the team to realize that something should be done differently. In particular — if it’s confusing to you it’s probably badly explained or badly thought out, or shows that an assumption or tacit knowledge is missing from a design document. If you’re worried about looking ignorant, make use of the more experienced reviewers around you — ask on the chat or book some time to talk over your thoughts one-on-one. This should help you formulate your question so that it’s useful to the feature team. Try to write out what you think is happening, and let the feature team tell you if you’re close or not.

The chances that you’ll understand everything immediately are very low, and that’s ok. In meetings about a feature a favorite question of mine is ‘what are you secretly worried about?’ followed by an awkward pause. People will absolutely tell you things! Sometimes there's a domain-knowledge mismatch when you don't have the right words to ask the question, so you can't get a useful answer. Always ask for a diagram that shows which process or component different parts of a feature are happening in — this helps you hone in on the critical interfaces, and will illustrate the design more clearly than screenfuls of text or code.

Center people in your security analysis

We’re here to help people. Try to center people in your thoughts and arguments. How will people use the feature? Who are they? Who might harm them and how? Are there particular groups of people that might be more vulnerable than others, and what can we do to protect them? How does the feature make people feel? How will their experience of the application change? How will their lives be affected? Think about how a bad actor might abuse the feature. What implicit assumptions is the implementation making about the people using it? What or who are we asking people to trust? What if someone modifies traffic, changes a message, passes in bad data, or tricks someone into using the feature when they don't want to? This is a great thing to discuss when you’re pairing with another reviewer — be sure to ask them what they like to think about.

Think about what can go wrong

Take time to think, bringing an adversarial mindset and a different perspective. In some ways the purpose of a security review is to stop and think before unleashing new ideas on the world. Make focus time in your calendar or sit somewhere unusual to give yourself space to think. A skeptical, enquiring mindset is more useful than deep knowledge. You’re there to ask the questions the feature team won’t have thought about. They will naturally focus on what they need to do to make the feature work. Security review is about thinking about what else might happen when it’s working, or what might happen if someone deliberately tries to do things the designers didn’t expect.

Trust your spidey-senses, even when you can’t quite put your finger on what might go wrong but something feels off. Sometimes a feature is just plain complicated, or in a risky area of code, or feels like it’s been rushed. It can be difficult to articulate these concerns to a team without rubbing people the wrong way. Use people you trust to bounce your thoughts off and home in on what you are worried about. Discuss with other reviewers whether and how these risks can be communicated. Your spidey-senses are probably correct, and they’re as important as any single concrete solvable threat you’ve spotted.

Approve and keep notes

Pause then approve. Once you’ve understood what’s happening and iterated through any concerns you’ve raised you’ll be ready to approve the feature for launch. It’s worth taking a short pause here to let your brain do its thinking in the background before you press the button. Try to concisely describe the feature — if you can’t then go back and ask more questions! It’s important to get questions and concerns to teams quickly but final approval can wait for some digestion time. If you cannot come up with a clear decision then reach out to other reviewers to discuss what to do next. Let the feature team know you’re working on it and when you’ll get back to them. After a pause, if nothing else occurs to you then click Approved and write a short paragraph saying why. Note any follow-on work the team has promised to complete before launching. This is also a great time to leave yourself a short note for your performance review — it’s easy to lose track of what you reviewed and the changes your input led to — having a rolling document will both help you spot patterns, and help you tell the story of the work you’ve done.

Expect to make mistakes, and learn from them

Nothing we do in software is forever, and many mistakes will be found and fixed later. You will make mistakes, mainly small ones that won’t really matter. Security is about evaluating new risks in the context of the value provided to people using a product. This tradeoff extends into the design and launch process, of which you are just a small part. You only have so much time, and it’s inevitable that you might sometimes see things that aren’t there, or not notice things that are. Security reviewers are one element in a layered defense, and the consequences of a mistake will be contained by things you did spot. It’s good to try to find specific problems, but more important to locate and apply general security principles like sandboxing and the rule of two. Sometimes you might think something is fine, but later realize that it isn’t. This often happens when we learn something new about a feature, or discover that an assumption was invalid. This is where careful communication is important. Feature teams will be happy to know about any problems you uncover, and will find time to fix them later if possible. Remember that Looks Good To Me doesn’t mean Looks Perfect To Me.

How to be better

Experienced reviewers can always improve, and apply their insights widely within their organization.

It’s not always easy

It takes time to learn security engineering and build a working knowledge of the architecture of a complex product. Reviewing is different from the normal development journey - when an engineer works on a feature they start in an ambiguous situation and gradually learn or invent everything needed to deeply understand and solve the problem. To be effective as a security reviewer we have to embrace ambiguity and ignorance, and learn how to swiftly learn just enough to have a useful opinion, before starting again for our next review. This may seem daunting - and it is - but over time reviewers get better at knowing where to focus their efforts.

Security reviewing can feel invisible. Security is not an all-or-nothing quality of a feature. Rather it forms one concern that a product must balance while still shipping, adding new features, and appealing to people that use it. Security is an important concern (for Chrome it’s both a critical engineering pillar, and something people say they value when choosing Chrome) but it’s not the only factor. It’s our job to identify and articulate security risks, and advocate for better approaches, but sometimes another concern dominates. If deviations from our advice are well justified we shouldn’t feel ignored - we did our bit.

Your peers are there to help you. If you need support, ask questions on the reviewing team’s chat, or schedule thirty minutes or a coffee with another reviewer to discuss a particular review.

Help teams secure their features

Remember that developers know what they are doing, but might not be thinking about the things you are thinking about. You might not be confident in what you know about their feature, but imagine how the feature team feels coming to the mysterious halls of the security people! Often we’ll ask a team to implement one or more of our layered defenses before they get to launch their feature. This might be the first time they’ve had to write a fuzzer or harden a library. You’ll get requests for examples or help with implementation. Find an expert or spend time doing these things yourself. The security process should be as smooth a speed bump as possible. Any familiarity you have with these techniques will improve our interactions and maintain our reputation as a helpful team. If we ask someone to do something but can’t help them make progress, we will be a source of frustration. If we help people, they will be more likely to approach us early on the next time they have a security question.

Training is available

Develop mind-tricks and frameworks for having difficult conversations. Sometimes (especially when you get involved early in a project’s design phase) you will need to disagree with a feature’s design, or nudge a team in a more secure direction. While a supportive technical culture should make it safe to surface and resolve technical differences, it takes energy and patience to work through these conflicts. It’s harder still to say ‘no’, or ask a team to commit to more work than they were expecting. These are skills you can practice and become more comfortable doing. Look for courses you can take. Some suggestions include “having difficult conversations”, “mentoring”, “coaching”, “persuasive writing”, and “threat modeling”.

Scale your impact

Find ways to scale your impact. Security decisions are made based on judgment and mechanisms but judgment doesn’t scale! To maintain a sustainable security workload for ourselves, and empower feature teams to make their own decisions, we need to make judgment as small a part of the puzzle as possible.

Encourage good patterns. If a design addresses a security concern, say so on the launch bug or a mailing list. This helps for later reviews, and provides useful feedback to the design team. Help newer reviewers see good or bad patterns, and the rhythm of reviews by telling a few stories of what went well and what got missed in the past. Establish architectural patterns that contain the consequences of a problem. Make these easy to follow while preventing anti-patterns - ideally a bad security idea shouldn’t even compile.
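As a generic, hedged sketch of that idea (the package and type names below are hypothetical illustrations, not a Chrome or Google API), a type system can be used so that the unsafe version of an operation simply cannot be expressed:

    package safehtml

    import "html"

    // TrustedHTML can only be created through vetted constructors in this
    // package, because its field is unexported: other packages cannot smuggle
    // raw strings into sinks that require TrustedHTML.
    type TrustedHTML struct {
        value string
    }

    // EscapeUntrusted is the only way for external callers to turn arbitrary
    // input into TrustedHTML; it escapes the input first.
    func EscapeUntrusted(s string) TrustedHTML {
        return TrustedHTML{value: html.EscapeString(s)}
    }

    // Render accepts only TrustedHTML, so passing a raw string is a compile error.
    func Render(t TrustedHTML) string {
        return "<div>" + t.value + "</div>"
    }

Passing a plain string to Render will not compile, so a reviewer only needs to scrutinize the small number of constructors rather than every call site.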

Write guidance or policies. Distill decisions into FAQs, threat models, principles or rules. Get involved with the people building foundational pieces of your product, and get them to own their security guidance so that it gets applied as part of that team’s advice to other teams. A checklist of things to look for in a particular area is a great starting point for the team making the next feature in that space, and for the reviewer that signs off at the end.

Level-up your developers. We can raise the level of expertise across the wider developer community, and reduce the burden of reviewing for security teams. Through repeated engagements with the same team you can start to set expectations - each time, drop some hints about what could be done better next time. Encourage system diagrams, risk assessments, threat modeling or sandboxing. Soon teams will start with these, and reviews will be much smoother.

Anoint security champions. In larger feature teams encourage a couple of security champions within the group to serve as initial points of contact and a first line of review. Support these people! Offer to talk them through their design docs and help them think about security concerns. They will grow into local experts who know when to call on security specialists. They can write security principles for their area, leading to secure features and smooth launch reviews.

Summary

Do a few reviews to develop confidence in your decisions. You won't understand all the details of a feature. You will sometimes say yes to the wrong things or get teams to do unnecessary work. You'll also ask insightful questions and improve designs.

Remember that security reviewing is difficult. Remember that people are there to help you. Remember that every good decision you encourage keeps people safe from harm, and increases their trust in you and your product. As you mature, maintain a supportive culture where reviewers can grow, and where you help other teams develop new features with safety in mind.