Category Archives: Online Security Blog

The latest news and insights from Google on security and safety on the Internet

Making Linux Kernel Exploit Cooking Harder


Posted by Eduardo Vela, Exploit Critic

[Cover image: a medieval-cookbook-style cover titled "Kernel Exploits," featuring a small penguin]




The Linux kernel is a key component for the security of the Internet. Google uses Linux in almost everything, from the computers our employees use, to the products people around the world use daily like Chromebooks, Android on phones, cars, and TVs, and workloads on Google Cloud. Because of this, we have heavily invested in Linux’s security - and today, we’re announcing how we’re building on those investments and increasing our rewards.


In 2020, we launched an open-source Kubernetes-based Capture-the-Flag (CTF) project called kCTF. The kCTF Vulnerability Rewards Program (VRP) lets researchers connect to our Google Kubernetes Engine (GKE) instances, and if they can hack them, they get a flag and are potentially rewarded. All of GKE and its dependencies are in scope, but every flag caught so far has been a container breakout through a Linux kernel vulnerability. We've learned that finding and exploiting heap memory corruption vulnerabilities in the Linux kernel could be made a lot harder. Security mitigations are often hard to quantify; however, we think we've found a way to do so concretely going forward.


When we launched kCTF, we hoped to build a community of Linux kernel exploitation hackers. This worked well, allowing everyone to learn from security researchers like Markak, starlabs, Crusaders of Rust, d3v17, slipper@pangu, valis, kylebot, pqlqpql, and Awarau.


Now, we’re making updates to the kCTF program. First, we are indefinitely extending the increased reward amounts we announced earlier this year, meaning we’ll continue to pay $20,000 - $91,337 USD for vulnerabilities on our lab kCTF deployment to reward the important work being done to understand and improve kernel security. This is in addition to our existing patch rewards for proactive security improvements.


Second, we’re launching new instances with additional rewards to evaluate the latest Linux kernel stable image as well as new experimental mitigations in a custom kernel we've built. Rather than simply learning about the current state of the stable kernels, the new instances will be used to ask the community to help us evaluate the value of both our latest and more experimental security mitigations. 


Today, we are starting with a set of mitigations we believe will make most of the vulnerabilities we received this past year (9 of 10 vulnerabilities, and 10 of 13 exploits) more difficult to exploit. For newly submitted exploits of vulnerabilities that also compromise the latest Linux kernel, we will pay an additional $21,000 USD. For those that compromise our custom Linux kernel with our experimental mitigations, the reward will be another $21,000 USD (if they clearly bypass the mitigations we are testing). This brings the total rewards up to a maximum of $133,337 USD. We hope this will allow us to learn more about how hard (or easy) it is to bypass our experimental mitigations.


The mitigations we've built attempt to tackle the following exploit primitives:

  • Out-of-bounds write on slab

  • Cross-cache attacks

  • Elastic objects

  • Freelist corruption


With the kCTF VRP, we are building a pipeline to analyze, experiment with, measure, and build security mitigations to make the Linux kernel as safe as we can with the help of the security community. We hope that, over time, we will be able to make security mitigations that make exploitation of Linux kernel vulnerabilities as hard as possible.

How Hash-Based Safe Browsing Works in Google Chrome

By Rohit Bhatia, Mollie Bates, Google Chrome Security

There are various threats a user faces when browsing the web. Users may be tricked into sharing sensitive information like their passwords with a misleading or fake website, also called phishing. They may also be led into installing malicious software on their machines, called malware, which can collect personal data and also hold it for ransom. Google Chrome, henceforth called Chrome, enables its users to protect themselves from such threats on the internet. When Chrome users browse the web with Safe Browsing protections, Chrome uses the Safe Browsing service from Google to identify and ward off various threats.

Safe Browsing works in different ways depending on the user's preferences. In the most common case, Chrome uses the privacy-conscious Update API (Application Programming Interface) from the Safe Browsing service. This API was developed with user privacy in mind and ensures Google gets as little information about the user's browsing history as possible. If the user has opted-in to "Enhanced Protection" (covered in an earlier post) or "Make Searches and Browsing Better", Chrome shares limited additional data with Safe Browsing only to further improve user protection.

This post describes how Chrome implements the Update API, with appropriate pointers to the technical implementation and details about the privacy-conscious aspects of the Update API. This should be useful for users to understand how Safe Browsing protects them, and for interested developers to browse through and understand the implementation. We will cover the APIs used for Enhanced Protection users in a future post.

Threats on the Internet

When a user navigates to a webpage on the internet, their browser fetches objects hosted on the internet. These objects include the structure of the webpage (HTML), the styling (CSS), dynamic behavior in the browser (JavaScript), images, downloads initiated by the navigation, and other webpages embedded in the main webpage. These objects, also called resources, have a web address which is called their URL (Uniform Resource Locator). Further, URLs may redirect to other URLs when being loaded. Each of these URLs can potentially host threats such as phishing websites, malware, unwanted downloads, malicious software, unfair billing practices, and more. Chrome with Safe Browsing checks all URLs, including redirects and included resources, to identify such threats and protect users.

Safe Browsing Lists

Safe Browsing provides a list for each threat it protects users against on the internet. A full catalog of lists that are used in Chrome can be found by visiting chrome://safe-browsing/#tab-db-manager on desktop platforms.

A list does not contain unsafe web addresses, also referred to as URLs, in their entirety; it would be prohibitively expensive to keep all of them in a device's limited memory. Instead, it maps a URL, which can be very long, through a cryptographic hash function (SHA-256) to a unique fixed-size string. This distinct fixed-size string, called a hash, allows a list to be stored efficiently in limited memory. The Update API handles URLs only in the form of hashes and is also called the hash-based API in this post.

Further, a list does not store hashes in their entirety either, as even that would be too memory-intensive. Instead, barring a case where data is not shared with Google and the list is small, it contains prefixes of the hashes. We refer to the original hash as a full hash, and a hash prefix as a partial hash.
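To make the relationship concrete, here is a minimal sketch (not Chrome's actual code; the function names, the 4-byte constant, and the OpenSSL dependency are our own choices for illustration) of how a full hash and a partial hash relate:

```
#include <openssl/sha.h>

#include <cstddef>
#include <string>

// Sketch: compute the SHA-256 "full hash" of an already canonicalized URL
// pattern, then keep only its first bytes as the "partial hash" that a
// Safe Browsing list would store.
constexpr std::size_t kPrefixSize = 4;  // Typical prefix length noted above.

std::string FullHash(const std::string& canonical_pattern) {
  unsigned char digest[SHA256_DIGEST_LENGTH];
  SHA256(reinterpret_cast<const unsigned char*>(canonical_pattern.data()),
         canonical_pattern.size(), digest);
  return std::string(reinterpret_cast<char*>(digest), SHA256_DIGEST_LENGTH);
}

std::string PartialHash(const std::string& full_hash) {
  return full_hash.substr(0, kPrefixSize);  // 4 of the 32 digest bytes.
}
```

As described later in Step 3, only partial hashes like these ever need to leave the device.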

A list is updated following the Update API’s request frequency section. Chrome also follows a back-off mode in case of an unsuccessful response. These updates happen roughly every 30 minutes, following the minimum wait duration set by the server in the list update response.
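The exact schedule is defined by the Update API's request frequency and back-off rules; purely as an illustration, a client-side policy of this general shape might look like the sketch below, where all constants are hypothetical rather than the spec's actual values:

```
#include <algorithm>
#include <chrono>
#include <random>

// Illustration only: after N consecutive failed update requests, wait an
// exponentially growing, jittered duration, capped at a maximum. The
// constants here are hypothetical, not the Update API's actual values.
std::chrono::minutes NextUpdateDelay(int consecutive_failures) {
  using std::chrono::minutes;
  static std::mt19937 rng{std::random_device{}()};
  std::uniform_real_distribution<double> jitter(0.0, 1.0);

  if (consecutive_failures == 0)
    return minutes(30);  // Roughly the regular ~30 minute cadence.

  double backoff_minutes = 15.0 *
                           (1 << std::min(consecutive_failures - 1, 10)) *
                           (1.0 + jitter(rng));
  return std::min(minutes(static_cast<int>(backoff_minutes)),
                  minutes(24 * 60));  // Never wait longer than a day.
}
```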

For those interested in browsing relevant source code, here’s where to look:

Source Code

  1. GetListInfos() contains all the lists, along with their associated threat types, the platforms they are used on, and their file names on disk.
  2. HashPrefixMap shows how the lists are stored and maintained. They are grouped by the size of prefixes, and appended together to allow quick binary search based lookups.

How is hash-based URL lookup done?

As an example of a Safe Browsing list, let's say that we have one for malware, containing partial hashes of URLs known to host malware. These partial hashes are generally 4 bytes long, but for illustrative purposes, we show only 2 bytes.

['036b', '1a02', 'bac8', 'bb90']

Whenever Chrome needs to check the reputation of a resource with the Update API, for example when navigating to a URL, it does not share the raw URL (or any piece of it) with Safe Browsing to perform the lookup. Instead, Chrome uses full hashes of the URL (and some combinations) to look up the partial hashes in the locally maintained Safe Browsing list. Chrome sends only these matched partial hashes to the Safe Browsing service. This ensures that Chrome provides these protections while respecting the user’s privacy. This hash-based lookup happens in three steps in Chrome:

Step 1: Generate URL Combinations and Full Hashes

When Google blocks URLs that host potentially unsafe resources by placing them on a Safe Browsing list, the malicious actor can host the resource on a different URL. A malicious actor can cycle through various subdomains to generate new URLs. Safe Browsing uses host suffixes to identify malicious domains that host malware in their subdomains. Similarly, malicious actors can also cycle through various subpaths to generate new URLs. So Safe Browsing also uses path prefixes to identify websites that host malware at various subpaths. This prevents malicious actors from cycling through subdomains or paths for new malicious URLs, allowing robust and efficient identification of threats.

To incorporate these host suffixes and path prefixes, Chrome first computes the full hashes of the URL and some patterns derived from the URL. Following Safe Browsing API's URLs and Hashing specification, Chrome computes the full hashes of URL combinations by following these steps:

  1. First, Chrome converts the URL into a canonical format, as defined in the specification.
  2. Then, Chrome generates up to 5 host suffixes/variants for the URL.
  3. Then, Chrome generates up to 6 path prefixes/variants for the URL.
  4. Then, for the combined host-suffix and path-prefix combinations (up to 30), Chrome generates the full hash for each combination (see the sketch below).
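Below is a deliberately simplified sketch of steps 2-4 (it ignores parts of the specification such as query parameters and IP-address hosts, and its names are illustrative, not Chrome's):

```
#include <cstddef>
#include <string>
#include <vector>

// Simplified sketch of steps 2-4: emit host suffixes and path prefixes for
// an already canonicalized URL, then combine them. Real canonicalization
// and edge cases (query parameters, IP hosts) are handled by the spec.
std::vector<std::string> UrlCombinations(const std::string& host,
                                         const std::string& path) {
  // Host variants: the full host plus up to 4 suffixes formed by dropping
  // leading components, stopping at the registered domain (simplified).
  std::vector<std::string> hosts{host};
  std::string h = host;
  while (hosts.size() < 5) {
    std::size_t dot = h.find('.');
    if (dot == std::string::npos ||
        h.find('.', dot + 1) == std::string::npos) {
      break;
    }
    h = h.substr(dot + 1);
    hosts.push_back(h);
  }

  // Path variants: "/", every intermediate prefix, then the full path.
  std::vector<std::string> paths{"/"};
  for (std::size_t pos = path.find('/', 1);
       pos != std::string::npos && paths.size() < 6;
       pos = path.find('/', pos + 1)) {
    paths.push_back(path.substr(0, pos + 1));
  }
  if (path != "/" && paths.back() != path && paths.size() < 6) {
    paths.push_back(path);
  }

  std::vector<std::string> combinations;
  for (const auto& hv : hosts) {
    for (const auto& pv : paths) {
      combinations.push_back(hv + pv);
    }
  }
  return combinations;
}
```

Feeding host evil.example.com and path /blah through this sketch yields exactly the four combinations used in the example below.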

Source Code

  1. V4LocalDatabaseManager::CheckBrowseURL is an example which performs a hash-based lookup.
  2. V4ProtocolManagerUtil::UrlToFullHashes creates the various URL combinations for a URL, and computes their full hashes.

Example

For instance, let's say that a user is trying to visit https://evil.example.com/blah#frag. The canonical URL is https://evil.example.com/blah. The host suffixes to be tried are evil.example.com and example.com. The path prefixes are / and /blah. The four combined URL combinations are evil.example.com/, evil.example.com/blah, example.com/, and example.com/blah.

url_combinations = ["evil.example.com/", "evil.example.com/blah","example.com/", "example.com/blah"]
full_hashes = ['1a02…28', 'bb90…9f', '7a9e…67', 'bac8…fa']

Step 2: Search Partial Hashes in Local Lists

Chrome then checks the full hashes of the URL combinations against the locally maintained Safe Browsing lists. These lists, which contain partial hashes, do not provide a decisive malicious verdict, but can quickly identify if the URL is considered not malicious. If the full hash of the URL does not match any of the partial hashes from the local lists, the URL is considered safe and Chrome proceeds to load it. This happens for more than 99% of the URLs checked.
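Since the local lists keep fixed-size partial hashes in sorted order (see the HashPrefixMap pointer above), this check boils down to one binary search per generated full hash. A minimal sketch, with illustrative names:

```
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Sketch: the list stores fixed-size partial hashes in sorted order, so a
// full hash can be checked by binary-searching for its prefix.
bool MatchesLocalList(const std::vector<std::string>& sorted_partial_hashes,
                      std::size_t prefix_size, const std::string& full_hash) {
  const std::string prefix = full_hash.substr(0, prefix_size);
  return std::binary_search(sorted_partial_hashes.begin(),
                            sorted_partial_hashes.end(), prefix);
}
```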

Source Code

  1. V4LocalDatabaseManager::GetPrefixMatches gets the matching partial hashes for the full hashes of the URL and its combinations.

Example

Chrome finds that three full hashes 1a02…28, bb90…9f, and bac8…fa match local partial hashes. We note that this is for demonstration purposes, and a match here is rare.

Step 3: Fetch Matching Full Hashes

Next, Chrome sends only the matching partial hash (not the full URL or any particular part of the URL, or even their full hashes), to the Safe Browsing service's fullHashes.find method. In response, it receives the full hashes of all malicious URLs for which the full hash begins with one of the partial hashes sent by Chrome. Chrome checks the fetched full hashes with the generated full hashes of the URL combinations. If any match is found, it identifies the URL with various threats and their severities inferred from the matched full hashes.
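Conceptually, this final check is a set intersection between the full hashes fetched from the service and those generated locally in Step 1. A minimal sketch (illustrative names; the threat-metadata handling is elided):

```
#include <string>
#include <unordered_set>
#include <vector>

// Sketch: the URL is flagged only if a fetched full hash exactly equals one
// of the full hashes generated locally for the URL's combinations.
bool IsUrlFlagged(const std::vector<std::string>& generated_full_hashes,
                  const std::vector<std::string>& fetched_full_hashes) {
  std::unordered_set<std::string> fetched(fetched_full_hashes.begin(),
                                          fetched_full_hashes.end());
  for (const auto& hash : generated_full_hashes) {
    if (fetched.count(hash) > 0) {
      return true;  // Threat type/severity come from the hash's metadata.
    }
  }
  return false;
}
```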

Source Code

  1. V4GetHashProtocolManager::GetFullHashes performs the lookup for the full hashes for the matched partial hashes.

Example

Chrome sends the matched partial hashes 1a02, bb90, and bac8 to fetch the full hashes. The server returns full hashes that match these partial hashes, 1a02…28, bb90…ce, and bac8…01. Chrome finds that one of the full hashes matches with the full hash of the URL combination being checked, and identifies the malicious URL as hosting malware.

Conclusion

Safe Browsing protects Chrome users from various malicious threats on the internet. While providing these protections, Chrome faces challenges such as constraints in memory capacity, network bandwidth usage, and a dynamic threat landscape. Chrome is also mindful of the users’ privacy choices, and shares little data with Google.

In a follow-up post, we will cover the more advanced protections Chrome provides to its users who have opted in to "Enhanced Protection".

DNS-over-HTTP/3 in Android

Posted by Matthew Maurer and Mike Yu, Android team

To help keep Android users’ DNS queries private, Android supports encrypted DNS. In addition to existing support for DNS-over-TLS, Android now supports DNS-over-HTTP/3 which has a number of improvements over DNS-over-TLS.

Most network connections begin with a DNS lookup. While transport security may be applied to the connection itself, that DNS lookup has traditionally not been private by default: the base DNS protocol is raw UDP with no encryption. While the internet has migrated to TLS over time, DNS has a bootstrapping problem. Certificate verification relies on the domain of the other party, which requires either DNS itself, or moves the problem to DHCP (which may be maliciously controlled). This issue is mitigated by central resolvers like Google, Cloudflare, OpenDNS and Quad9, which allow devices to configure a single DNS resolver locally for every network, overriding what is offered through DHCP.

In Android 9.0, we announced the Private DNS feature, which uses DNS-over-TLS (DoT) to protect DNS queries when enabled and supported by the server. Unfortunately, DoT incurs overhead for every DNS request. An alternative encrypted DNS protocol, DNS-over-HTTPS (DoH), is rapidly gaining traction within the industry as DoH has already been deployed by most public DNS operators, including the Cloudflare Resolver and Google Public DNS. While using HTTPS alone will not reduce the overhead significantly, HTTP/3 uses QUIC, a transport that efficiently multiplexes multiple streams over UDP using a single TLS session with session resumption. All of these features are crucial to efficient operation on mobile devices.

DNS-over-HTTP/3 (DoH3) support was released as part of a Google Play system update, so by the time you're reading this, Android devices from Android 11 onwards[1] will use DoH3 instead of DoT for well-known[2] DNS servers which support it. Which DNS service you are using is unaffected by this change; only the transport will be upgraded. In the future, we aim to support DDR, which will allow us to dynamically select the correct configuration for any server. This feature should decrease the performance impact of encrypted DNS.

Performance

DNS-over-HTTP/3 avoids several problems that can occur with DNS-over-TLS operation:

  • As DoT operates on a single stream of requests and responses, many server implementations suffer from head-of-line blocking[3]. This means that if the request at the front of the line takes a while to resolve (possibly because a recursive resolution is necessary), responses for subsequent requests that would have otherwise been resolved quickly are blocked waiting on that first request. DoH3 by comparison runs each request over a separate logical stream, which means implementations will resolve requests out-of-order by default.
  • Mobile devices change networks frequently as the user moves around. With DoT, these events require a full renegotiation of the connection. By contrast, the QUIC transport HTTP/3 is based on can resume a suspended connection in a single RTT.
  • DoT intends for many queries to use the same connection to amortize the cost of TCP and TLS handshakes at the start. Unfortunately, in practice several factors (such as network disconnects or server TCP connection management) make these connections less long-lived than we might like. Once a connection is closed, establishing the connection again requires at least 1 RTT.

    In unreliable networks, DoH3 may even outperform traditional DNS. While unintuitive, this is because the flow control mechanisms in QUIC can alert either party that packets weren’t received. In traditional DNS, the timeout for a query needs to be based on expected time for the entire query, not just for the resolver to receive the packet.

Field measurements during the initial limited rollout of this feature show that DoH3 significantly improves on DoT’s performance. For successful queries, our studies showed that replacing DoT with DoH3 reduces median query time by 24%, and 95th percentile query time by 44%. While it might seem suspect that the reported data is conditioned on successful queries, both DoT and DoH3 resolve 97% of queries successfully, so their metrics are directly comparable. UDP resolves only 83% of queries successfully. As a result, UDP latency is not directly comparable to TLS/HTTP3 latency because non-connection-oriented protocols have a different notion of what a "query" is. We have still included it for rough comparison.

Memory Safety

The DNS resolver processes input that could potentially be controlled by an attacker, both from the network and from apps on the device. To reduce the risk of security vulnerabilities, we chose to use a memory safe language for the implementation.

Fortunately, we’ve been adding Rust support to the Android platform. This effort is intended exactly for cases like this — system level features which need to be performant or low level (both in this case) and which would carry risk to implement in C++. While we’ve previously launched Keystore 2.0, this represents our first foray into Rust in Mainline Modules. Cloudflare maintains an HTTP/3 library called quiche, which fits our use case well, as it has a memory-safe implementation, few dependencies, and a small code size. Quiche also supports use directly from C++. We considered this, but even the request dispatching service had sufficient complexity that we chose to implement that portion in Rust as well.

We built the query engine using the Tokio async framework to simultaneously handle new requests, incoming packet events, control signals, and timers. In C++, this would likely have required multiple threads or a carefully crafted event loop. By leveraging asynchronous Rust, this occurs on a single thread with minimal locking[4]. The DoH3 implementation is 1,640 lines and uses a single runtime thread. By comparison, DoT takes 1,680 lines while managing less and using up to 4 threads per DoT server in use.

Safety and Performance — Together at Last

With the introduction of Rust, we are able to improve both security and the performance at the same time. Likewise, QUIC allows us to improve network performance and privacy simultaneously. Finally, Mainline ensures that such improvements are able to make their way to more Android users sooner.

Acknowledgements

Special thanks to Luke Huang who greatly contributed to the development of this feature, and Lorenzo Colitti for his in-depth review of the technical aspects of this post.


  1. Some Android 10 devices which adopted Google Play system updates early will also receive this feature. 

  2. Google DNS and Cloudflare DNS at launch, others may be added in the future. 

  3. DoT can be implemented in a way that avoids this problem, as the client must accept server responses out of order. However, in practice most servers do not implement this reordering. 

  4. There is a lock used for the SSL context which is accessed once per DNS server, and another on the FFI when issuing a request. The FFI lock could be removed with changes to the C++ side, but has remained because it is low contention. 

Game on! The 2022 Google CTF is here.



Are you ready to put your hacking skills to the test? It’s Google CTF time!

The competition kicks off on July 1, 2022 at 6:00 PM UTC and runs through July 3, 2022 at 6:00 PM UTC. Registration is now open at http://goo.gle/ctf.

In true Google CTF fashion, the top 8 teams will qualify for Hackceler8, our speedrunning-meets-CTF competition. The prize pool stands at more than $40,000, similar to previous years.


We can't wait to see whether PPP will be able to defend their crown. For those of you looking to satisfy your late-night hacking hunger: past years' challenges, including the Hackceler8 2021 matches, are open-sourced here. On top of that, there are hours of Hackceler8 2020 videos to watch!

If you are just starting out in this space, last year’s Beginner’s Quest is a great resource to get started. For later in the year, we have something mysterious planned - stay tuned to find out more!

Whether you're a seasoned CTF player or just curious about cyber security and ethical hacking, we want you to join us. Sign up to expand your skill set, meet new friends in the security community, and even watch the pros in action. For the latest announcements, see g.co/ctf, subscribe to our mailing list, or follow us on @GoogleVRP. Interested in bug hunting for Google? Check out bughunters.google.com. See you there!

SBOM in Action: finding vulnerabilities with a Software Bill of Materials


The past year has seen an industry-wide effort to embrace Software Bills of Materials (SBOMs)—a list of all the components, libraries, and modules that are required to build a piece of software. In the wake of the 2021 Executive Order on Cybersecurity, these ingredient labels for software became popular as a way to understand what’s in the software we all consume. The guiding idea is that it’s impossible to judge the risks of particular software without knowing all of its components—including those produced by others. This increased interest in SBOMs saw another boost after the National Institute of Standards and Technology (NIST) released its Secure Software Development Framework, which requires SBOM information to be available for software. But now that the industry is making progress on methods to generate and share SBOMs, what do we do with them?

Generating an SBOM is only one half of the story. Once an SBOM is available for a given piece of software, it needs to be mapped onto a list of known vulnerabilities to know which components could pose a threat. By connecting these two sources of information, consumers will know not just what's in their software, but also its risks and whether they need to remediate any issues.

In this blog post, we demonstrate the process of taking an SBOM from a large and critical project—Kubernetes—and using an open source tool to identify the vulnerabilities it contains. Our example’s success shows that we don’t need to wait for SBOM generation to reach full maturity before we begin mapping SBOMs to common vulnerability databases. With just a few updates from SBOM creators to address current limitations in connecting the two sources of data, this process is poised to become easily within reach of the average software consumer.

OSV: Connecting SBOMs to vulnerabilities

The following example uses Kubernetes, a major project that makes its SBOM available using the Software Package Data Exchange (SPDX) format—an international open standard (ISO) for communicating SBOM information. The same idea should apply to any project that makes its SBOM available, and for projects that don’t, you can generate your own SBOM using the same bom tool Kubernetes created.

We have chosen to map the SBOM to the Open Source Vulnerabilities (OSV) database, which describes vulnerabilities in a format that was specifically designed to map to open source package versions or commit hashes. The OSV database excels here as it provides a standardized format and aggregates information across multiple ecosystems (e.g., Python, Golang, Rust) and databases (e.g., GitHub Advisory Database (GHSA), Global Security Database (GSD)).

To connect the SBOM to the database, we’ll use the SPDX spdx-to-osv tool. This open source tool takes in an SPDX SBOM document, queries the OSV database of vulnerabilities, and returns an enumeration of vulnerabilities present in the software’s declared components.

Example: Kubernetes’ SBOM

The first step is to download Kubernetes’ SBOM, which is publicly available and contains information on the project, dependencies, versions, and licenses. Anyone can download it with a simple curl command:

```
# Download the Kubernetes SPDX source document
$ curl -L https://sbom.k8s.io/v1.21.3/source > k8s-1.21.3-source.spdx
```


The next step is to use the SPDX spdx-to-osv tool to connect the Kubernetes’ SBOM to the OSV database:


```
# Run the spdx-to-osv tool, taking the information from the SPDX SBOM
# and mapping it to OSV vulnerabilities
$ java -jar ./target/spdx-to-osv-0.0.4-SNAPSHOT-jar-with-dependencies.jar -I k8s-1.21.3-source.spdx -O out-k8s.1.21.3.json

# Show the output OSV vulnerabilities of the spdx-to-osv tool (excerpt)
$ cat out-k8s.1.21.3.json
{
  "id": "GHSA-w73w-5m7g-f7qc",
  "published": "2021-05-18T21:08:21Z",
  "modified": "2021-06-28T21:32:34Z",
  "aliases": [
    "CVE-2020-26160"
  ],
  "summary": "Authorization bypass in github.com/dgrijalva/jwt-go",
  "details": "jwt-go allows attackers to bypass intended access restrictions in situations with []string{} for m[\"aud\"] (which is allowed by the specification). Because the type assertion fails, \"\" is the value of aud. This is a security problem if the JWT token is presented to a service that lacks its own audience check. There is no patch available and users of jwt-go are advised to migrate to [golang-jwt](https://github.com/golang-jwt/jwt) at version 3.2.1",
  "affected": [
    {
      "package": {
        "name": "github.com/dgrijalva/jwt-go",
        "ecosystem": "Go",
        "purl": "pkg:golang/github.com/dgrijalva/jwt-go"
      },
  ...
```

The output of the tool shows that v1.21.3 of Kubernetes contains the CVE-2020-26160 vulnerability. This information can be helpful to determine if any additional action is required to manage the risk of operating this software. For example, if an organization is using v1.21.3 of Kubernetes, measures can be taken to trigger company policy to update the deployment, which will protect the organization against attacks exploiting this vulnerability.

Suggestions for SBOM tooling improvements

To get the spdx-to-osv tool to work we had to make some minor changes to disambiguate the information provided in the SBOM:
  • In the current implementation of the bom tool, the version was included as part of the package name (gopkg.in/square/[email protected]). We needed to trim the suffix to match the SPDX format, which has a different field for version number.
  • The SBOM created by the bom tool does not specify an ecosystem. Without an ecosystem, it's impossible to reliably disambiguate which library or package is affected in an automated way. Vulnerability scanners could return false positives if one ecosystem was affected but not others, so it would be more helpful if the SBOM identified the ecosystem for each library or package.
These are relatively minor hurdles, though, and we were able to successfully run the tool with only small manual adjustments. To make the process easier in the future, we have the following recommendation for improving SBOM generation tooling:

  • SBOM tooling creators should add a reference using an identification scheme such as Purl for all packages included in the software. This type of identification scheme both specifies the ecosystem and also makes package identification easier, since the scheme is more resilient to small deviations in package descriptors like the suffix example above. SPDX supports this via external references to Purl and other package identification schemas.

SBOM in the future

It’s clear that we’re getting very close to achieving the original goal of SBOMs: using them to help manage the risk of vulnerabilities in software. Our example queried the OSV database, but we will soon see the same success in mapping SBOM data to other vulnerability databases and even using them with new standards like VEX, which provides additional context around whether vulnerabilities in software have been mitigated.

Continuing on this path of widespread SBOM adoption and tooling refinement, we will hopefully soon be able to not only request and download SBOMs for every piece of software, but also use them to understand the vulnerabilities affecting any software we consume. This example is a peek into a possible future of what SBOMs can offer when we bridge the gap to connect them with vulnerability databases: a new normal of worrying less about the risks in the software we use.
 
A special thanks to Gary O’Neall of Source Auditor for creating the spdx-to-osv tool and contributing to this blog post.

Announcing the winners of the 2021 GCP VRP Prize


2021 was another record-breaking year for our Vulnerability Rewards Program (VRP). We paid a total of $8.7 million in rewards, our highest amount yet. 2021 saw some amazing work from the security research community. It is worth noting that a significant portion of the reports we received were for findings in Google Cloud Platform (GCP) products. It is heartening to see an increasing number of talented researchers getting involved in cloud security.

We first announced the GCP VRP Prize in 2019 to encourage security researchers to focus on the security of GCP, in turn helping us make GCP more secure for our users, customers, and the internet at large. Even 3 years into the program, the submissions we are getting never cease to amaze us. After careful evaluation of the submissions, we are excited to announce the 2021 winners:

First Prize, $133,337: Sebastian Lutz for the report and write-up Bypassing Identity-Aware Proxy. Sebastian's excellent write-up outlines how he found a bug in Identity-Aware Proxy (IAP) which an attacker could have exploited to gain access to a user's IAP-protected resources by making them visit an attacker-controlled URL and stealing their IAP auth token.

Second Prize, $73,331: Imre Rad for the report and write-up GCE VM takeover via DHCP flood. The flaw described in the write-up would have allowed an attacker to gain access to a Google Compute Engine VM by sending malicious DHCP packets to the VM and impersonating the GCE metadata server.

Third Prize, $73,331: Mike Brancato for the report and write-up Remote Code Execution in Google Cloud Dataflow. Mike's write-up describes how he discovered that Dataflow nodes were exposing an unauthenticated Java JMX port and how an attacker could have exploited this to run arbitrary commands on the VM under some configurations.

Fourth Prize, $31,337: Imre Rad for the write-up The Speckle Umbrella story — part 2 which details multiple vulnerabilities that Imre found in Cloud SQL.

(Remember, you can make multiple submissions for the GCP VRP Prize and be eligible for more than one prize!)

Fifth Prize, $1,001: Anthony Weems for the report and write-up Remote code execution in Managed Anthos Service Mesh control plane. Anthony found a bug in Managed Anthos Service Mesh and came up with a clever exploit to execute arbitrary commands authenticated as a Google-managed per-project service account.

Sixth Prize, $1,000: Ademar Nowasky Junior for the report and write-up Command Injection in Google Cloud Shell. Ademar found a way to bypass some of the validation checks done by Cloud Shell. This would have allowed an attacker to run arbitrary commands in a user's Cloud Shell session by making them visit a maliciously crafted link.

Congratulations to all the winners!

Here's a video with more details about each of the winning submissions:



New Details About 2022 GCP VRP


We will pay out a total of $313,337 to the top seven submissions in the 2022 edition of the GCP VRP Prize. Individual prize amounts will be as follows:

  • 1st prize: $133,337
  • 2nd prize: $73,331
  • 3rd prize: $31,337
  • 4th prize: $31,311
  • 5th prize: $17,311
  • 6th prize: $13,373
  • 7th prize: $13,337

If you are a security researcher, here's how you can enter the competition for the GCP VRP Prize 2022:
  • Find a vulnerability in a GCP product (check out Google Cloud Free Program to get started).
  • Report it to bughunters.google.com. Your bug needs to be awarded a financial reward to be eligible for the GCP VRP Prize (the GCP VRP Prize money will be in addition to what you received for your bug!).
  • Create a public write-up describing your vulnerability report. One of the goals behind the GCP VRP Prize is to promote open research into cloud security.
  • Submit it here.
Make sure to submit your VRP reports and write-ups before January 15, 2023 at 23:59 PT. VRP reports which were submitted in preceding years but fixed only in 2022 are also eligible. You can check out the official rules for the prize here. Good luck!


Retrofitting Temporal Memory Safety on C++


Memory safety in Chrome is an ever-ongoing effort to protect our users. We are constantly experimenting with different technologies to stay ahead of malicious actors. In this spirit, this post is about our journey of using heap scanning technologies to improve memory safety of C++.



Let's start at the beginning though. Throughout the lifetime of an application, its state is generally represented in memory. Temporal memory safety refers to the problem of guaranteeing that memory is always accessed with the most up-to-date information about its structure, i.e. its type. C++ unfortunately does not provide such guarantees. While there is appetite for languages other than C++ with stronger memory safety guarantees, large codebases such as Chromium will use C++ for the foreseeable future.



```
auto* foo = new Foo();
delete foo;
// The memory location pointed to by foo is not representing
// a Foo object anymore, as the object has been deleted (freed).
foo->Process();
```



In the example above, foo is used after its memory has been returned to the underlying system. The out-of-date pointer is called a dangling pointer and any access through it results in a use-after-free (UAF) access. In the best case such errors result in well-defined crashes, in the worst case they cause subtle breakage that can be exploited by malicious actors. 



UAFs are often hard to spot in larger codebases where ownership of objects is transferred between various components. The general problem is so widespread that to this date both industry and academia regularly come up with mitigation strategies. The examples are endless: C++ smart pointers of all kinds are used to better define and manage ownership on application level; static analysis in compilers is used to avoid compiling problematic code in the first place; where static analysis fails, dynamic tools such as C++ sanitizers can intercept accesses and catch problems on specific executions.



Chrome’s use of C++ is sadly no different here and the majority of high-severity security bugs are UAF issues. In order to catch issues before they reach production, all of the aforementioned techniques are used. In addition to regular tests, fuzzers ensure that there’s always new input to work with for dynamic tools. Chrome even goes further and employs a C++ garbage collector called Oilpan which deviates from regular C++ semantics but provides temporal memory safety where used. Where such deviation is unreasonable, a new kind of smart pointer called MiraclePtr was introduced recently to deterministically crash on accesses to dangling pointers when used. Oilpan, MiraclePtr, and smart-pointer-based solutions require significant adoption effort in application code.



Over the last years, another approach has seen some success: memory quarantine. The basic idea is to put explicitly freed memory into quarantine and only make it available when a certain safety condition is reached. In the Linux kernel a probabilistic approach was used where memory was eventually just recycled. A more elaborate approach uses heap scanning to avoid reusing memory that is still reachable from the application. This is similar to a garbage collected system in that it provides temporal memory safety by prohibiting reuse of memory that is still reachable. The rest of this article summarizes our journey of experimenting with quarantines and heap scanning in Chrome.



(At this point, one may ask where pointer authentication fits into this picture – keep on reading!)

Quarantining and Heap Scanning, the Basics

The main idea behind assuring temporal safety with quarantining and heap scanning is to avoid reusing memory until it has been proven that there are no more (dangling) pointers referring to it. To avoid changing C++ user code or its semantics, the memory allocator providing new and delete is intercepted.

Upon invoking delete, the memory is actually put in a quarantine, where it is unavailable for being reused for subsequent new calls by the application. At some point a heap scan is triggered which scans the whole heap, much like a garbage collector, to find references to quarantined memory blocks. Blocks that have no incoming references from the regular application memory are transferred back to the allocator where they can be reused for subsequent allocations.
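In heavily simplified terms (a single-threaded model for illustration, not Chrome's actual implementation), the intercepted allocator behaves roughly like this:

```
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <unordered_set>
#include <vector>

// Heavily simplified, single-threaded model: delete parks blocks in a
// quarantine, and a later conservative scan frees only the blocks that no
// pointer-sized word of application memory still refers to.
class QuarantineAllocator {
 public:
  void* Allocate(std::size_t size) { return std::malloc(size); }

  // Intercepted delete: the block cannot be reused until a scan clears it.
  void Deallocate(void* ptr) { quarantine_.push_back(ptr); }

  // 'heap_words' stands in for every pointer-sized word of application
  // memory, which a real scanner would walk itself (heap and, optionally,
  // stacks).
  void ScanAndSweep(const std::unordered_set<std::uintptr_t>& heap_words) {
    std::vector<void*> still_quarantined;
    for (void* block : quarantine_) {
      if (heap_words.count(reinterpret_cast<std::uintptr_t>(block)) > 0) {
        still_quarantined.push_back(block);  // Possibly still referenced.
      } else {
        std::free(block);  // Proven unreachable: safe to reuse.
      }
    }
    quarantine_.swap(still_quarantined);
  }

 private:
  std::vector<void*> quarantine_;
};
```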



There are various hardening options which come with a performance cost:

  • Overwrite the quarantined memory with special values (e.g. zero);

  • Stop all application threads when the scan is running or scan the heap concurrently;

  • Intercept memory writes (e.g. by page protection) to catch pointer updates;

  • Scan memory word by word for possible pointers (conservative handling) or provide descriptors for objects (precise handling);

  • Segregation of application memory in safe and unsafe partitions to opt-out certain objects which are either performance sensitive or can be statically proven as being safe to skip;

  • Scan the execution stack in addition to just scanning heap memory.



We call the collection of different versions of these algorithms StarScan [stɑː skæn], or *Scan for short.

Reality Check

We apply *Scan to the unmanaged parts of the renderer process and use Speedometer2 to evaluate the performance impact. 



We have experimented with different versions of *Scan. To minimize performance overhead as much as possible, we evaluate a configuration that uses a separate thread to scan the heap and avoids clearing quarantined memory eagerly on delete, instead clearing it when running *Scan. We opt in all memory allocated with new and don’t discriminate between allocation sites and types for simplicity in the first implementation.


Note that the proposed version of *Scan is not complete. Concretely, a malicious actor may exploit a race condition with the scanning thread by moving a dangling pointer from an unscanned to an already scanned memory region. Fixing this race condition requires keeping track of writes into blocks of already scanned memory, e.g. by using memory protection mechanisms to intercept those accesses, or by stopping all application threads in safepoints from mutating the object graph altogether. Either way, solving this issue comes at a performance cost and exhibits an interesting performance and security trade-off. Note that this kind of attack is not generic and does not work for all UAFs. Problems such as the one depicted in the introduction would not be prone to such attacks as the dangling pointer is not copied around.



Since the security benefits really depend on the granularity of such safepoints and we want to experiment with the fastest possible version, we disabled safepoints altogether.



Running our basic version on Speedometer2 regresses the total score by 8%. Bummer…



Where does all this overhead come from? Unsurprisingly, heap scanning is memory bound and quite expensive as the entire user memory must be walked and examined for references by the scanning thread.



To reduce the regression we implemented various optimizations that improve the raw scanning speed. Naturally, the fastest way to scan memory is to not scan it at all and so we partitioned the heap into two classes: memory that can contain pointers and memory that we can statically prove to not contain pointers, e.g. strings. We avoid scanning memory that cannot contain any pointers. Note that such memory is still part of the quarantine, it is just not scanned.
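As an illustration of this split (the names below are ours, not PartitionAlloc's API), the allocation site can route statically pointer-free data to a partition that the scanner skips:

```
#include <cstddef>
#include <cstdlib>

// Illustration of the split (names are ours, not PartitionAlloc's API):
// statically pointer-free data goes to a partition the scanner skips.
enum class Partition { kScanned, kPointerFree };

void* AllocateIn(Partition partition, std::size_t size) {
  // A real allocator would back each partition with separate address
  // ranges so the scanner can tell them apart; malloc stands in here.
  (void)partition;
  return std::malloc(size);
}

char* AllocateStringBuffer(std::size_t length) {
  // Raw character data can never hold pointers, so it is never scanned.
  // It is still quarantined on free, just not walked for references.
  return static_cast<char*>(AllocateIn(Partition::kPointerFree, length));
}
```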



We extended this mechanism to also cover allocations that serve as backing memory for other allocators, e.g., zone memory that is managed by V8 for the optimizing JavaScript compiler. Such zones are always discarded at once (c.f. region-based memory management) and temporal safety is established through other means in V8.



On top, we applied several micro optimizations to speed up and eliminate computations: we use helper tables for pointer filtering; rely on SIMD for the memory-bound scanning loop; and minimize the number of fetches and lock-prefixed instructions.



We also improve upon the initial scheduling algorithm that just starts a heap scan when reaching a certain limit by adjusting how much time we spend scanning compared to actually executing the application code (c.f. mutator utilization in garbage collection literature).



In the end, the algorithm is still memory bound and scanning remains a noticeably expensive procedure. The optimizations helped to reduce the Speedometer2 regression from 8% down to 2%.



While we improved raw scanning time, the fact that memory sits in a quarantine increases the overall working set of a process. To further quantify this overhead, we use a selected set of Chrome’s real-world browsing benchmarks to measure memory consumption. *Scan in the renderer process regresses memory consumption by about 12%. It’s this increase of the working set that leads to more memory being paged in, which is noticeable on application fast paths.


Hardware Memory Tagging to the Rescue

MTE (Memory Tagging Extension) is a new extension of the ARMv8.5-A architecture that helps with detecting errors in software memory use. These errors can be spatial errors (e.g. out-of-bounds accesses) or temporal errors (use-after-free). The extension works as follows: every 16 bytes of memory are assigned a 4-bit tag, and pointers are also assigned a 4-bit tag. The allocator is responsible for returning a pointer with the same tag as the allocated memory. The load and store instructions verify that the pointer and memory tags match; if the tags of the memory location and the pointer do not match, a hardware exception is raised.



MTE doesn't offer deterministic protection against use-after-free. Since the number of tag bits is finite, there is a chance that the tag of the memory and the pointer match due to overflow. With 4 bits, the tags wrap around after only 16 reallocations, at which point they can match again. A malicious actor may exploit this tag bit overflow to get a use-after-free by just waiting until the tag of a dangling pointer matches (again) the memory it is pointing to.



*Scan can be used to fix this problematic corner case. On each delete call the tag for the underlying memory block gets incremented by the MTE mechanism. Most of the time the block will be available for reallocation as the tag can be incremented within the 4-bit range. Stale pointers would refer to the old tag and thus reliably crash on dereference. Upon overflowing the tag, the object is then put into quarantine and processed by *Scan. Once the scan verifies that there are no more dangling pointers to this block of memory, it is returned back to the allocator. This reduces the number of scans and their accompanying cost by ~16x.
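The following sketch models this interplay in plain C++ purely for illustration; in reality the tag bookkeeping is done by the hardware and the allocator, not by code like this:

```
#include <cstdint>

// Plain-C++ model of the MTE/*Scan interplay: each 16-byte granule carries
// a 4-bit memory tag, delete bumps it, and only a tag wrap-around sends
// the block to *Scan's quarantine.
constexpr unsigned kTagBits = 4;
constexpr std::uint8_t kTagMask = (1u << kTagBits) - 1;  // 16 possible tags.

struct Granule {
  std::uint8_t tag = 0;  // A matching copy lives in the pointer's top bits.
};

// Hardware check done on every load/store: a mismatch raises a hardware
// exception (modeled as a bool here).
bool TagCheck(std::uint8_t pointer_tag, const Granule& granule) {
  return pointer_tag == granule.tag;
}

// Returns true if the block can go straight back to the allocator; false
// if the tag overflowed and the block must be quarantined and scanned.
bool OnDelete(Granule& granule) {
  granule.tag = (granule.tag + 1) & kTagMask;
  bool overflowed = (granule.tag == 0);  // Stale tags could match again.
  return !overflowed;
}
```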



The following picture depicts this mechanism. The pointer to foo initially has a tag of 0x0E which allows it to be incremented once again for allocating bar. Upon invoking delete for bar the tag overflows and the memory is actually put into quarantine of *Scan.

We got our hands on some actual hardware supporting MTE and redid the experiments in the renderer process. The results are promising as the regression on Speedometer was within noise and we only regressed memory footprint by around 1% on Chrome’s real-world browsing stories.



Is this some actual free lunch? Turns out that MTE comes with some cost which has already been paid for. Specifically, PartitionAlloc, which is Chrome’s underlying allocator, already performs the tag management operations for all MTE-enabled devices by default. Also, for security reasons, memory should really be zeroed eagerly. To quantify these costs, we ran experiments on an early hardware prototype that supports MTE in several configurations:

  1. MTE disabled and without zeroing memory;

  2. MTE disabled but with zeroing memory;

  3. MTE enabled without *Scan;

  4. MTE enabled with *Scan;



(We are also aware that there’s synchronous and asynchronous MTE which also affects determinism and performance. For the sake of this experiment we kept using the asynchronous mode.) 

The results show that MTE and memory zeroing come with some cost which is around 2% on Speedometer2. Note that neither PartitionAlloc, nor hardware has been optimized for these scenarios yet. The experiment also shows that adding *Scan on top of MTE comes without measurable cost. 


Conclusions

C++ allows for writing high-performance applications, but this comes at a price: security. Hardware memory tagging may fix some security pitfalls of C++ while still allowing high performance. We look forward to seeing broader adoption of hardware memory tagging in the future, and suggest using *Scan on top of hardware memory tagging to fix temporal memory safety for C++. Both the MTE hardware used and the implementation of *Scan are prototypes, and we expect that there is still room for performance optimizations.


Privileged pod escalations in Kubernetes and GKE



At the KubeCon EU 2022 conference in Valencia, security researchers from Palo Alto Networks presented research findings on “trampoline pods”—pods with an elevated set of privileges required to do their job, but that could conceivably be used as a jumping off point to gain escalated privileges.

The research mentions GKE, including how developers should look at the privileged pod problem today, what the GKE team is doing to minimize the use of privileged pods, and actions GKE users can take to protect their clusters.

Privileged pods within the context of GKE security

While privileged pods can pose a security issue, it’s important to look at them within the overall context of GKE security. To use a privileged pod as a “trampoline” in GKE, there is a major prerequisite – the attacker has to first execute a successful application compromise and container breakout attack.

Because the use of privileged pods in an attack requires a first step such as a container breakout to be effective, let’s look at two areas:
  1. features of GKE you can use to reduce the likelihood of a container breakout
  2. steps the GKE team is taking to minimize the use of privileged pods and the privileges needed in them.
Reducing container breakouts

There are a number of features in GKE, along with some best practices, that you can use to reduce the likelihood of a container breakout.

More information can be found in the GKE Hardening Guide.

How GKE is reducing the use of privileged pods

While it’s not uncommon for customers to install privileged pods into their clusters, GKE works to minimize the privilege levels held by our system components, especially those that are enabled by default. However, there are limits as to how many privileges can be removed from certain features. For example, Anthos Config Management requires permissions to modify most Kubernetes objects to be able to create and manage those objects.

Some other privileges are baked into the system, such as those held by Kubelet. Previously, we worked with the Kubernetes community to build the Node Restriction and Node Authorizer features to limit Kubelet's access to highly sensitive objects, such as secrets, adding protection against an attacker with access to the Kubelet credentials.

More recently, we have taken steps to reduce the number of privileged pods across GKE and have added additional documentation on privileges used in system pods as well as information on how to improve pod isolation. Below are the steps we’ve taken:
  1. We have added an admission controller to GKE Autopilot and GKE Standard (on by default) and GKE/Anthos (opt-in) that stops attempts to run as a more privileged service account, which blocks a method of escalating privileges using privileged pods.
  2. We created a permission scanning tool that identifies pods that have privileges that could be used for escalation, and we used that tool to perform an audit across GKE and Anthos.
  3. The permission scanning tool is now integrated into our standard code review and testing processes to reduce the risk of introducing privileged pods into the system. As mentioned earlier, some features require privileges to perform their function.
  4. We are using the audit results to reduce permissions available to pods. For example, we removed “update nodes and pods” permissions from anetd in GKE.
  5. Where privileged pods are required for the operation of a feature, we’ve added additional documentation to illustrate that fact.
  6. We added documentation that outlines how to isolate GKE-managed workloads in dedicated node pools when you’re unable to use GKE Sandbox to reduce the risk of privilege escalation attacks.
In addition to the measures above, we recommend users take advantage of tools that can scan RBAC settings to detect overprivileged pods used in their applications. As part of their presentation, the Palo Alto researchers announced an open source tool, called rbac-police, that can be used for the task. So, while it only takes a single overprivileged workload to trampoline to the cluster, there are a number of actions you can take to minimize the likelihood of the prerequisite container breakout and the number of privileges used by a pod.

I/O 2022: Android 13 security and privacy (and more!)

Every year at I/O we share the latest on privacy and security features on Android. But we know some users like to go a level deeper in understanding how we’re making the latest release safer, and more private, while continuing to offer a seamless experience. So let’s dig into the tools we’re building to better secure your data, enhance your privacy and increase trust in the apps and experiences on your devices.

Low latency, frictionless security

Regardless of whether a smartphone is used for consumer or enterprise purposes, attestation is a key underpinning to ensure the integrity of the device and apps running on the device. Fundamentally, key attestation lets a developer bind a secret or designate data to a device. This is a strong "same user, same device" assertion: as long as the key is available, a cryptographic assertion of integrity can be made.

With Android 13 we have migrated to a new model for the provisioning of attestation keys to Android devices which is known as Remote Key Provisioning (RKP). This new approach will strengthen device security by eliminating factory provisioning errors and providing key vulnerability recovery by moving to an architecture where Google takes more responsibility in the certificate management lifecycle for these attestation keys. You can learn more about RKP here.

We’re also making even more modules updatable directly through Google Play System Updates so we can automatically upgrade more system components and fix bugs, seamlessly, without you having to worry about it. We now have more than 30 components in Android that can be automatically updated through Google Play, including new modules in Android 13 for Bluetooth and ultra-wideband (UWB).

Last year we talked about how the majority of vulnerabilities in major operating systems are caused by undefined behavior in programming languages like C/C++. Rust is an alternative language that provides the efficiency and flexibility required in advanced systems programming (OS, networking) with the added benefit of memory safety. We are happy to report that Rust is being adopted in security-critical parts of Android, such as our key management components and networking stacks.

Hardening the platform doesn’t stop with continual improvements in memory safety and the expansion of anti-exploitation techniques. It also includes hardening our API surfaces to provide a more secure experience for our end users.

In Android 13 we implemented numerous enhancements to help mitigate potential vulnerabilities that app developers may inadvertently introduce. This includes making runtime receivers safer by allowing developers to specify whether a particular broadcast receiver in their app should be exported and visible to other apps on the device. On top of this, intent filters block non-matching intents, which further hardens the app and its components.
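As a sketch of what this looks like for a receiver registered at runtime (the action string and receiver are illustrative), an app targeting Android 13 passes an explicit export flag when registering:

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.IntentFilter

// Illustrative sketch: register a runtime receiver that is not visible to
// other apps. "com.example.ACTION_SYNC_DONE" is a made-up action.
fun registerInternalReceiver(context: Context, receiver: BroadcastReceiver) {
    val filter = IntentFilter("com.example.ACTION_SYNC_DONE")
    // On Android 13 (API 33), apps must state whether a runtime receiver is
    // exported; RECEIVER_NOT_EXPORTED keeps it internal to this app.
    context.registerReceiver(receiver, filter, Context.RECEIVER_NOT_EXPORTED)
}
```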

For enterprise customers who need to meet certain security certification requirements, we’ve updated our security logging reporting to add more coverage and consolidate security logs in one location. This is helpful for companies that need to meet standards like Common Criteria and is useful for partners such as management solutions providers who can review all security-related logs in one place.
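For context, the relevant surface for device owners here is DevicePolicyManager’s security logging APIs. A minimal Kotlin sketch, with a hypothetical admin receiver name, might look like this:

```kotlin
import android.app.admin.DevicePolicyManager
import android.content.ComponentName
import android.content.Context
import android.util.Log

// Illustrative sketch: a device-owner app enabling security logging and
// reading back a batch of events. "com.example.AdminReceiver" is a
// hypothetical DeviceAdminReceiver for this example.
fun collectSecurityLogs(context: Context) {
    val dpm = context.getSystemService(DevicePolicyManager::class.java)
    val admin = ComponentName(context, "com.example.AdminReceiver")

    dpm.setSecurityLoggingEnabled(admin, true)

    // Batches of SecurityEvent entries become available periodically;
    // retrieveSecurityLogs returns null if none are ready yet.
    dpm.retrieveSecurityLogs(admin)?.forEach { event ->
        Log.d("SecLog", "tag=${event.tag} at=${event.timeNanos}")
    }
}
```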

Privacy on your terms

Android 13 brings developers more ways to build privacy-centric apps. Apps can now use the new photo picker, which lets the user select the exact photos or videos they want to share without having to give the app access to their entire media library.
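A minimal sketch of invoking the picker with the AndroidX Activity library (assuming androidx.activity 1.6 or later; the activity name is illustrative):

```kotlin
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

// Illustrative sketch: ShareActivity is a hypothetical screen that lets the
// user pick one image via the system photo picker. The app receives a Uri
// for that item only, with no broad media permission involved.
class ShareActivity : AppCompatActivity() {
    private val pickMedia = registerForActivityResult(
        ActivityResultContracts.PickVisualMedia()
    ) { uri ->
        if (uri != null) { /* use just the item the user chose */ }
    }

    fun onSharePhotoClicked() {
        pickMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }
}
```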

With Android 13, we’re also using the nearby devices permission introduced last year to reduce the number of apps that require your location to function. For example, you won’t have to turn on location to enable Wi-Fi for certain apps and situations. We’ve also changed how storage works, requiring developers to ask for separate permissions to access audio, image and video files.
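A sketch of requesting the new granular media permissions (the activity and grant handling are illustrative):

```kotlin
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

// Illustrative sketch: GalleryActivity is hypothetical. On Android 13,
// audio, image, and video access are separate permissions rather than one
// broad storage permission.
class GalleryActivity : AppCompatActivity() {
    private val requestMedia = registerForActivityResult(
        ActivityResultContracts.RequestMultiplePermissions()
    ) { grants ->
        val images = grants[Manifest.permission.READ_MEDIA_IMAGES] == true
        val videos = grants[Manifest.permission.READ_MEDIA_VIDEO] == true
        // Proceed with whatever subset the user actually granted.
    }

    fun onOpenGalleryClicked() {
        requestMedia.launch(arrayOf(
            Manifest.permission.READ_MEDIA_IMAGES,
            Manifest.permission.READ_MEDIA_VIDEO
        ))
    }
}
```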

Previously, we’ve limited apps from accessing your clipboard in the background and alerted you when an app accessed it. With Android 13, we’re automatically deleting your clipboard history after a short period so apps are blocked from seeing old copied information.
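Relatedly, apps that put sensitive data such as one-time codes on the clipboard can flag it as sensitive so the system treats it accordingly. A hedged Kotlin sketch using the API 33 ClipDescription extra:

```kotlin
import android.content.ClipData
import android.content.ClipDescription
import android.content.ClipboardManager
import android.content.Context
import android.os.PersistableBundle

// Illustrative sketch: copy a one-time code to the clipboard and mark it
// sensitive so the system can redact it from clipboard previews (API 33).
fun copySensitiveText(context: Context, oneTimeCode: String) {
    val clipboard = context.getSystemService(ClipboardManager::class.java)
    val clip = ClipData.newPlainText("otp", oneTimeCode).apply {
        description.extras = PersistableBundle().apply {
            putBoolean(ClipDescription.EXTRA_IS_SENSITIVE, true)
        }
    }
    clipboard.setPrimaryClip(clip)
}
```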

In Android 11, we began automatically resetting permissions for apps you haven’t used for an extended period of time, and have since expanded the feature to devices running Android 6 and above. Since then, we’ve automatically reset over 5 billion permissions.

In Android 13, app makers can go above and beyond, proactively removing permissions on behalf of their users. Developers can provide even more privacy by giving up access to permissions their apps no longer need.
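Android 13 exposes this directly: an app can hand back runtime permissions it was previously granted. A minimal sketch (the permission chosen is illustrative):

```kotlin
import android.Manifest
import android.content.Context
import android.os.Build

// Illustrative sketch: after a one-time flow that needed precise location,
// the app gives the permission back. The revocation takes effect once the
// app's process next ends.
fun dropUnneededPermissions(context: Context) {
    if (Build.VERSION.SDK_INT >= 33) {
        context.revokeSelfPermissionsOnKill(
            listOf(Manifest.permission.ACCESS_FINE_LOCATION)
        )
    }
}
```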

Finally, we know notifications are critical for many apps but are not always of equal importance to users. In Android 13, you’ll have more control over which apps you get alerts from, as new apps on your device are required to ask your permission before they can send you notifications.
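For developers, this surfaces as a new runtime permission. A minimal sketch of requesting it when targeting Android 13 (the activity and flow are illustrative):

```kotlin
import android.Manifest
import android.os.Build
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

// Illustrative sketch: MainActivity is hypothetical. On Android 13, an app
// must hold POST_NOTIFICATIONS before it can post notifications.
class MainActivity : AppCompatActivity() {
    private val askNotifications = registerForActivityResult(
        ActivityResultContracts.RequestPermission()
    ) { granted ->
        if (granted) { /* safe to post notifications now */ }
    }

    fun ensureNotificationPermission() {
        if (Build.VERSION.SDK_INT >= 33) {
            askNotifications.launch(Manifest.permission.POST_NOTIFICATIONS)
        }
    }
}
```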

Apps you can trust

Most app developers build their apps using a variety of software development kits (SDKs) that bundle in pre-packaged functionality. While SDKs provide amazing functionality, app developers typically have little visibility or control over the SDK code or insight into their performance.

We’re working with developers to make their apps more secure with a new Google Play SDK Index that helps them see SDK safety and reliability signals before they build the code into their apps. This ensures we're helping everyone build a more secure and private app ecosystem.

Last month, we also started rolling out a new Data safety section in Google Play to help you understand how apps plan to collect, share, and protect your data before you install them. To instill even more trust in Play apps, we're enabling developers to have their apps independently validated against OWASP’s MASVS, a globally recognized standard for mobile app security.

We’re working with a small group of developers and authorized lab partners to evolve the program. Developers who have completed this independent validation can showcase this on their Data safety section.

Additional mobile security and safety

Just like our anti-malware protection Google Play Protect, which now scans 125 billion apps a day, we believe spam and phishing detection should be built in. We’re proud to announce that in a recent analyst report, Messages was the highest rated built-in messaging app for anti-phishing and scam protection.

Messages is now also helping to protect you against 1.5 billion spam messages per month, so you can avoid both annoying texts and attempts to access your data. These phishing attempts are increasingly how bad actors are trying to get your information, by getting you to click on a link or download an app, so we are always looking for ways to offer another line of defense.

Last year, we introduced end-to-end encryption in Messages to provide more security for your mobile conversations. Later this year, we’ll launch end-to-end encryption for group conversations in beta to ensure your personal messages get even more protection.

As with a lot of the features we build, we try to do so in an open and transparent way. In Android 11, we announced a new platform feature, backed by an ISO standard, that enables the use of digital IDs on a smartphone in a privacy-preserving way. When you hand over your plastic license (or other credential) to someone for verification, it’s all or nothing, which means they have access to your full name, date of birth, address, and other personally identifiable information (PII). The mobile version allows much more fine-grained control: the end user and/or app can select exactly what to share with the verifier, and the verifier must declare whether they intend to retain the data returned. You can also present certain details of your credentials, such as age, without revealing your identity.

Over the last two Android releases we have been improving this API and making it easier for third-party organizations to leverage it for various digital identity use cases, such as driver’s licenses, student IDs, or corporate badges. We’re now announcing that Google Wallet uses Android Identity Credential to support digital IDs and driver’s licenses. We’re working with states in the US and governments around the world to bring digital IDs to Wallet later this year. You can learn more about all of the new enhancements in Google Wallet here.
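For apps that want to integrate with this, the platform entry point is the Identity Credential API (android.security.identity, available since Android 11 on supported devices). A hedged sketch of provisioning, with an illustrative credential name and the ISO 18013-5 mDL document type:

```kotlin
import android.content.Context
import android.security.identity.IdentityCredentialStore

// Illustrative sketch: obtain the platform credential store and create a
// writable credential. "my-mdl" is a hypothetical name; the doc type is the
// ISO 18013-5 mobile driver's license identifier used with this API.
fun provisionDigitalId(context: Context) {
    val store = IdentityCredentialStore.getInstance(context)
    val writable = store.createCredential("my-mdl", "org.iso.18013.5.1.mDL")
    // A real flow would continue with
    // writable.getCredentialKeyCertificateChain(challenge) and then
    // personalization of the data elements to be selectively disclosed.
}
```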

Protected by Android

We don’t think your security and privacy should be hard to understand and control. Later this year, we’ll begin rolling out a new destination in settings on Android 13 devices that puts all your device security and data privacy front and center.

The new Security & Privacy settings page will give you a simple, color-coded way to understand your safety status and will offer clear and actionable guidance to improve it. The page will be anchored by new action cards that notify you of critical steps you should take to address any safety risks. In addition to notifications to warn you about issues, we’ll also provide timely recommendations on how to enhance your privacy.

We know that to feel safe and in control of your data, you need to have a secure foundation you can count on. Because if your device isn’t secure, it’s not private either. We’re working hard to make sure you’re always protected by Android. Learn more about these protections on our website.