Category Archives: Online Security Blog

The latest news and insights from Google on security and safety on the Internet

Adopting the Arm Memory Tagging Extension in Android

As part of our continuous commitment to improving the security of the Android ecosystem, we are partnering with Arm to design the memory tagging extension (MTE). Memory safety bugs, common in C and C++, remain one of the largest classes of vulnerabilities in the Android platform. Despite previous hardening efforts, memory safety bugs comprised more than half of the high-priority security bugs in Android 9. Additionally, memory safety bugs manifest as hard-to-diagnose reliability problems, including sporadic crashes and silent data corruption. This reduces user satisfaction and increases the cost of software development. Software testing tools such as ASAN and HWASAN help, but their applicability on current hardware is limited due to noticeable overheads.

MTE, a hardware feature, aims to further mitigate these memory safety bugs by enabling us to detect them with low overhead. It has two execution modes:

  • Precise mode: Provides more detailed information about the memory violation.
  • Imprecise mode: Has lower CPU overhead and is more suitable for always-on use.

Arm recently published a whitepaper on MTE and has added documentation to the Arm v8.5 Architecture Reference Manual.

We envision several different usage modes for MTE.

  • MTE provides a version of ASAN/HWASAN that is easier to use for testing and fuzzing in laboratory environments. It will find more bugs in a fraction of the time and at a lower cost, reducing the complexity of the development process. In many cases, MTE will allow testing memory safety using the same binary as shipped to production. The bug reports produced by MTE will be as detailed and actionable as those from ASAN and HWASAN.
  • MTE will be used as a mechanism for testing complex software scenarios in production. App Developers and OEMs will be able to selectively turn on MTE for parts of the software stack. Where users have provided consent, bug reports will be available to developers via familiar mechanisms like Google Play Console.
  • MTE can be used as a strong security mitigation in the Android System and applications for many classes of memory safety bugs. For most instances of such vulnerabilities, a probabilistic mitigation based on MTE could prevent exploitation with a higher than 90% chance of detecting each invalid memory access. By implementing these protections and ensuring that attackers can't make repeated attempts to exploit security-critical components, we can significantly reduce the risk to users posed by memory safety issues.
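The "higher than 90%" figure follows directly from the width of the memory tags. Assuming the 4-bit tags described in Arm's MTE design (an assumption for this sketch), the arithmetic looks like:

```python
# Probability that an invalid access goes undetected under MTE,
# assuming 4-bit memory tags (16 possible values) assigned at random.
TAG_BITS = 4
TAG_VALUES = 2 ** TAG_BITS  # 16

# An invalid access is missed only when the pointer's tag happens to
# match the tag of the unrelated memory it touches: 1 chance in 16.
p_miss_single = 1 / TAG_VALUES
p_detect_single = 1 - p_miss_single  # 0.9375, i.e. higher than 90%

def p_evade(n_accesses: int) -> float:
    """Probability that n independent invalid accesses all go undetected."""
    return p_miss_single ** n_accesses

print(f"single-access detection: {p_detect_single:.4f}")
print(f"chance of evading 4 independent checks: {p_evade(4):.6f}")
```

This also illustrates why preventing repeated exploitation attempts matters: the chance of evading detection shrinks geometrically with each invalid access an attacker must land.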

We believe that memory tagging will detect the most common classes of memory safety bugs in the wild, helping vendors identify and fix them and discouraging malicious actors from exploiting them. During the past year, our team has been working to ensure the readiness of the Android platform and application software for MTE. We have deployed HWASAN, a software implementation of the memory tagging concept, to test our entire platform and a few select apps. This deployment has uncovered close to 100 memory safety bugs. The majority of these bugs were detected on HWASAN-enabled phones in everyday use. MTE will greatly improve upon this in terms of overhead, ease of deployment, and scale. In parallel, we have been working on supporting MTE in the LLVM compiler toolchain and in the Linux kernel. The Android platform support for MTE will be complete by the time of silicon availability.

Google is committed to supporting MTE throughout the Android software stack. We are working with select Arm System On Chip (SoC) partners to test MTE support and look forward to wider deployment of MTE in the Android software and hardware ecosystem. Based on the current data points, MTE provides tremendous benefits at acceptable performance costs. We are considering MTE as a possible foundational requirement for certain tiers of Android devices.

Thank you to Mitch Phillips, Evgenii Stepanov, Vlad Tsyrklevich, Mark Brand, and Serban Constantinescu for their contributions to this post.

Titan Security Keys are now available in Canada, France, Japan, and the UK

Posted by Christiaan Brand, Product Manager, Google Cloud


Credential compromise as a result of phishing is one of the most common causes of security breaches. Security keys provide the strongest protection against these types of attacks, and that’s one of the main reasons why Google requires them as a second factor of authentication for our employees.

Last year, we launched Titan Security Keys in the United States and were excited to see strong demand from users and businesses choosing to protect their personal and work Google Accounts. Starting today, Titan Security Keys are also available on the Google Store in Canada, France, Japan, and the United Kingdom (UK).



Titan Security Keys


Titan Security Keys are built with a hardware chip that includes firmware engineered by Google to verify the keys’ integrity. Each key leverages FIDO standards to cryptographically verify your identity and the URL of the login page, preventing an attacker from accessing your account even if you are tricked into providing your username and password. Security keys are appropriate for any security-conscious user or enterprise, and we recommend that all users, especially those at higher risk such as IT administrators, executives, politicians, and activists, consider signing in via security keys.
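The phishing resistance comes from the FIDO protocol binding each signed assertion to the origin the browser actually talked to. A rough sketch of the relying-party-side origin check (illustrative only, not Google's implementation; a real WebAuthn verifier also validates the challenge, signature, and authenticator data):

```python
import json

# The browser embeds the page's actual origin into the signed
# clientDataJSON, so a lookalike site cannot replay the assertion
# against the real service. This check is one piece of a full
# WebAuthn verification flow.
def origin_is_valid(client_data_json: bytes, expected_origin: str) -> bool:
    client_data = json.loads(client_data_json)
    return (client_data.get("type") == "webauthn.get"
            and client_data.get("origin") == expected_origin)

genuine = json.dumps({"type": "webauthn.get",
                      "origin": "https://accounts.google.com",
                      "challenge": "..."}).encode()
phish = json.dumps({"type": "webauthn.get",
                    "origin": "https://accounts.g00gle.com",
                    "challenge": "..."}).encode()

assert origin_is_valid(genuine, "https://accounts.google.com")
assert not origin_is_valid(phish, "https://accounts.google.com")
```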

Bundles of two Titan Security Keys (one USB/NFC and one Bluetooth) are available on the Google Store in Canada, France, Japan, and the UK in addition to the US. To set up your security keys with your personal or work Google Account, sign in and navigate to the 2-Step Verification page. In addition, you can enroll in the Advanced Protection Program, which provides Google’s strongest security for anyone at risk of targeted attacks. Titan Security Keys can also be used anywhere FIDO security keys are supported, including Coinbase, Dropbox, Facebook, GitHub, Salesforce, Stripe, Twitter, and more.

Enterprise administrators can require security keys for their users in G Suite and Google Cloud Platform (GCP). Bulk orders of unbundled Titan Security Keys are available in Canada, Japan, and the US.

Chrome Fuzzer Program Update And How-To

TL;DR We increased the Chrome Fuzzer Program bonus from $500 to $1,000 as part of our recent update of reward amounts.

The Chrome Fuzzer Program is part of the Google Chrome Vulnerability Reward Program that lets security researchers run their fuzzers at scale on the ClusterFuzz infrastructure. It makes bug reporting fully automated, and fuzzer authors receive the same rewards as if they had reported the bugs manually, plus an extra bonus (currently $1,000) for every new vulnerability.

We run fuzzers indefinitely, and some of the fuzzers contributed years ago are still finding security issues in ever-changing Chrome code. This is a win-win for both sides: security researchers do not have to spend time analyzing the crashes, and Chrome developers receive high-quality bug reports automatically.

To learn more about the Chrome Fuzzer Program, let’s talk to Ned Williamson, who’s been a participant since 2017 and now works on the Google Security team.

Q: Hey Ned! It looks like you’ve received over $50,000 by participating in the Google Chrome Vulnerability Reward Program with your quic_stream_factory_fuzzer.

A: Yes, it’s true. I wrote a fuzzer for QUIC which helped me find and report two critical vulnerabilities, each worth $10,000. Because I knew my fuzzer worked well, I submitted it to the Chrome Fuzzer Program. Then, in the next few months, I received that reward three more times (plus a bonus), as the fuzzer caught several security regressions on ClusterFuzz soon after they happened.

Q: Have you intentionally focused on the areas that yield higher severity issues and bigger rewards?

A: Yes. While vulnerabilities in code that is more critical to user security yield larger reward amounts, I actually started by looking at lower severity bugs and incrementally began looking for more severe bugs until I could find critical ones. You can see this progression by looking at the bugs I reported manually as an external researcher.

Q: Would you suggest starting by looking for non-critical bugs?

A: I would say so. Security-critical code is generally better designed and more thoroughly audited, so it might be discouraging to start from there. Finding less critical security bugs and winning bounties is a good way to build confidence and stay motivated.

Q: Can you share an algorithm on how to find security bugs in Chrome?

A: Looking at previous and existing bug reports, even for non-security crashes, is a great way to tell which code is security-critical and potentially buggy. From there, if some code looks like it’s exposed to user inputs, I’d set up a fuzzing campaign against that component. After you gain experience you will not need to rely on existing reports to find new attack surface, which in turn helps you find places that have not been considered by previous researchers. This was the case for my QUIC fuzzer.

Q: How did you learn to write fuzzers?

A: I didn’t have any special knowledge about fuzzing before I started looking for vulnerabilities in Chrome. I followed the documentation in the repository and I still follow the same process today.

Q: Your fuzzer is more complex than many others. How did you arrive at that implementation?

A: The key insight in the QUIC fuzzer was realizing that the parts of the code that handled plaintext messages after decryption were prone to memory corruption. Typically, fuzzing does not perform well with encrypted inputs (it’s pretty hard to “randomly” generate a packet that can be successfully decrypted), so I extended the QUIC testing code to allow for testing with encryption disabled.
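The shape of such a target can be sketched in Python (Chrome's real fuzz targets are C++ libFuzzer entry points; the parser and names below are toy stand-ins, not the QUIC fuzzer itself):

```python
# Illustrative fuzz-target shape. The key idea from the QUIC fuzzer:
# bypass layers such as decryption so that random bytes reach the
# interesting parsing code directly.

def parse_plaintext_frames(data: bytes) -> list[bytes]:
    """Toy stand-in for the code under test: length-prefixed frames."""
    frames, i = [], 0
    while i < len(data):
        n = data[i]
        frames.append(data[i + 1:i + 1 + n])
        i += 1 + n
    return frames

def fuzz_one_input(data: bytes) -> None:
    # The target simply feeds attacker-controlled bytes to the parser;
    # the fuzzing engine supplies `data` and watches for crashes.
    parse_plaintext_frames(data)

fuzz_one_input(b"\x03abc\x02de")
```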

Q: Are there any other good examples of fuzz targets employing a similar logic?

A: Another example is pdf_formcalc_context_fuzzer, which wraps the fuzzing input in a valid hardcoded PDF file, thereby focusing fuzzing on only the XFA script part of it. As a researcher, you just need to choose what exactly you want to fuzz, and then understand how to execute that code properly. Looking at the unit tests is usually the easiest way to get such an understanding.

Happy fuzzing and bug hunting!

Bigger Rewards for Security Bugs

Chrome has always been built with security at its core, by a passionate worldwide community as part of the Chromium open source project. We're proud that this community includes world-class security researchers who help defend Chrome and other Chromium-based browsers.

Back in 2010, we created the Chrome Vulnerability Rewards Program, which provides cash rewards to researchers for finding and reporting security bugs that help keep our users safe. Since its inception, the program has received over 8,500 reports and paid out over five million dollars! A big thank you to every one of the researchers - it's an honor working with you.

Over the years we've expanded the program, including rewarding full chain exploits on Chrome OS, and the Chrome Fuzzer Program, where we run researchers' fuzzers on thousands of Google cores and automatically submit bugs they find for reward.

Today, we're delighted to announce an across the board increase in our reward amounts! Full details can be found on our program rules page but highlights include tripling the maximum baseline reward amount from $5,000 to $15,000 and doubling the maximum reward amount for high quality reports from $15,000 to $30,000. The additional bonus given to bugs found by fuzzers running under Chrome Fuzzer Program is also doubling to $1,000.

We've also clarified what we consider a high quality report, to help reporters get the highest possible reward, and we've updated the bug categories to better reflect the types of bugs that are reported and that we are most interested in.

But that's not all! On Chrome OS, we're increasing our standing reward to $150,000 for exploit chains that can compromise a Chromebook or Chromebox with persistence in guest mode. Security bugs in firmware and lock screen bypasses also get their own reward categories.

These new reward amounts will apply to bugs submitted after today on the Chromium bug tracker using the Security template. As always, see the Chrome Vulnerability Reward Program Rules for full details about the program.

In other news, our friends over at the Google Play Security Reward Program have increased their rewards for remote code execution bugs from $5,000 to $20,000, theft of insecure private data from $1,000 to $3,000, and access to protected app components from $1,000 to $3,000. The Google Play Security Reward Program also pays bonus rewards for responsibly disclosing vulnerabilities to participating app developers. Check out the program to learn more and see which apps are in scope.

Happy bug hunting!

How Google adopted BeyondCorp


It's been almost five years since we released the first of multiple BeyondCorp papers, describing the motivation and design principles that eliminated network-based trust from our internal networks. With that anniversary looming and many organizations actively working to adopt models like BeyondCorp (which has also become known as Zero Trust in the industry), we thought it would be a good time to revisit topics we have previously explored in those papers, share the lessons that we have learned over the years, and describe where BeyondCorp is going as businesses move to the cloud.

This is the first post in a series that will focus on Google’s internal implementation of BeyondCorp, providing necessary context for how Google adopted BeyondCorp.

Why did we adopt BeyondCorp?

With a traditional enterprise perimeter security model, access to services and resources is provided by a device being connected to a privileged network. If an employee is in a corporate office, on the right network, services are directly accessible. If they're outside the office, at home or in a coffee shop, they frequently use a VPN to get access to services behind the enterprise firewall. This is the way most organizations protect themselves.

By 2011, it became clear to Google that this model was problematic, and we needed to rethink how enterprise services are accessed and protected for the following reasons:

Improving productivity
  • A growing number of employees were not in the office at all times. They were working from home, a coffee shop, a hotel or even on a bus or airplane. When they were outside the office, they needed to connect via a VPN, creating friction and extending the network perimeter.
  • The user experience of a VPN client may be acceptable, even if suboptimal, on a laptop. VPN use is less acceptable, from both the employee and admin perspectives, given the growing use of devices such as smartphones and tablets to perform work.
  • A number of users were contractors or other partners who only needed selective access to some of our internal resources, even though they were working in the office.
Keeping Google secure
  • The expanded use of public clouds and software-as-a-service (SaaS) apps meant that some of our corporate services were no longer deployed on-premises, further blurring the traditional perimeter and trust domain. This introduced new attack vectors that needed to be protected against.
  • There was ongoing concern about relying solely on perimeter defense, especially when the perimeter was growing consistently. With the proliferation of laptops and mobile devices, vulnerable and compromised devices were regularly brought within the perimeter.
  • Finally, if a vulnerability was observed or an attack did happen, we wanted the ability to respond as quickly and automatically as possible.

How did we do it?

In order to address these challenges, we implemented a new approach that we called BeyondCorp. Our mission was to have every Google employee work successfully from untrusted networks on a variety of devices without using a client-side VPN. BeyondCorp has three core principles:
  • Connecting from a particular network does not determine which service you can access.
  • Access to services is granted based on what the infrastructure knows about you and your device.
  • All access to services must be authenticated, authorized and encrypted for every request (not just the initial access).
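These principles amount to a per-request policy decision driven by user and device signals rather than network location. A minimal sketch, with entirely hypothetical signals and policy (not Google's actual implementation):

```python
from dataclasses import dataclass

# Hypothetical BeyondCorp-style check: the decision uses what the
# infrastructure knows about the user and device, never the network the
# request arrived from, and is evaluated on every request.
@dataclass
class Device:
    managed: bool
    disk_encrypted: bool
    os_patched: bool

@dataclass
class User:
    authenticated: bool
    groups: frozenset

def authorize(user: User, device: Device, service: str) -> bool:
    if not user.authenticated:
        return False
    device_trusted = (device.managed and device.disk_encrypted
                      and device.os_patched)
    # Illustrative policy: a sensitive service demands a trusted device
    # and membership in a specific group.
    if service == "source-code":
        return device_trusted and "eng" in user.groups
    return device_trusted

laptop = Device(managed=True, disk_encrypted=True, os_patched=True)
eng = User(authenticated=True, groups=frozenset({"eng"}))
assert authorize(eng, laptop, "source-code")
```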


High level architecture for BeyondCorp

BeyondCorp gave us the security that we were looking for along with the user experience that made our employees more productive inside and outside the office.

What lessons did we learn?

Given this was uncharted territory at the time, we had to learn quickly and adapt when we encountered surprises. Here are some key lessons we learned.

Obtain executive support early on and keep it

Moving to BeyondCorp is not a quick, painless exercise. It took us several years just to get most of the basics in place, and to this day we are still continuing to improve and refine our implementation. Before embarking on this journey to implement BeyondCorp, we got buy-in from leadership very early in the project. With a mandate, you can ask for support from lots of different groups along the way.

We make a point to re-validate this buy-in on an ongoing basis, ensuring that the business still understands and values this important shift.

Recognize data quality challenges from the very beginning

Access decisions depend on the quality of your input data. More specifically, it depends on trust analysis, which requires a combination of employee and device data.

If this data is unreliable, the result will be incorrect access decisions, suboptimal user experiences and, in the worst case, an increase in system vulnerability, so the stakes are definitely high.

We put in a lot of work to make sure our data is clean and reliable before making any impactful changes, and we have both workflows and technical measures in place to ensure data quality remains high going forward.

Enable painless migration and usage

The migration should be a zero-touch or invisible experience for your employees, making it easy for them to continue working without interruptions or added steps. If you make it difficult for your employees to migrate or maintain productivity, they might feel frustrated by the process. Complex environments are difficult to fully migrate with initial solutions, so be prepared to review, grant and manage exceptions at least in the early stages. With this in mind, start small, migrate a small number of resources, apps, users and devices, and only increase coverage after confirming the solution is reliable.

Assign employee and helpdesk advocates

We also had employee and helpdesk advocates on the team who represented the user experience from those perspectives. This helped us architect our implementation in a way that avoided putting excess burden on employees or technical support staff.

Clear employee communications

Communicating clearly with employees so that they know what is happening is very important. We sent our employees, partners, and company leaders regular communications whenever we made important changes, ensuring motivations were well understood and there was a window for feedback and iteration prior to enforcement changes.

Run highly reliable systems

Since every request goes through the core BeyondCorp infrastructure, we needed a global, highly reliable and resilient set of services. If these services are degraded, employee productivity suffers.

We used Site Reliability Engineering (SRE) principles to run our BeyondCorp services.

Next time

In the next post in this series, we will go deeper into when you should trust a device, what data you should use to determine whether or not a device should be trusted, and what we have learned by going through that process.

In the meantime, if you want to learn more, you can check out the BeyondCorp research papers. In addition, getting started with BeyondCorp is now easier using zero trust solutions from Google Cloud (context-aware access) and other enterprise providers.

Google Public DNS over HTTPS (DoH) supports RFC 8484 standard



Ever since we launched Google Public DNS in 2009, our priority has been the security of DNS resolution. In 2016, we launched a unique and innovative experimental service -- DNS over HTTPS, now known as DoH. Today we are announcing general availability for our standard DoH service. Now our users can resolve DNS using DoH at the dns.google domain with the same anycast addresses (like 8.8.8.8) as regular DNS service, with lower latency from our edge PoPs throughout the world.

General availability of DoH includes full RFC 8484 support at a new URL path, and continued support for the JSON API launched in 2016. The new endpoints are:

  • https://dns.google/dns-query (RFC 8484 – GET and POST)
  • https://dns.google/resolve (JSON API – GET)
We are deprecating internet-draft DoH support on the /experimental URL path and DoH service from dns.google.com, and will turn down support for them in a few months.

With Google Public DNS, we’re committed to providing fast, private, and secure DNS resolution through both DoH and DNS over TLS (DoT). We plan to support the JSON API until there is a comparable standard for webapp-friendly DoH.


What the new DoH service means for developers

To use our DoH service, developers should configure their applications to use the new DoH endpoints and properly handle HTTP 4xx error and 3xx redirection status codes.
  • Applications should use dns.google instead of dns.google.com. Applications can query dns.google at well-known Google Public DNS addresses, without needing an extra DNS lookup.
  • Developers using the older /experimental internet-draft DoH API need to switch to the new /dns-query URL path and confirm full RFC 8484 compliance. The older API accepts queries using features from early drafts of the DoH standard that are rejected by the new API.
  • Developers using the JSON API can use two new GET parameters that support DNS/DoH proxies and DNSSEC-aware applications.
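For developers experimenting with the RFC 8484 endpoint, a GET request carries the binary DNS query base64url-encoded, without padding, in the dns parameter. A minimal sketch of constructing such a URL by hand (the wire format is assembled manually here for illustration; real applications would typically use a DNS library):

```python
import base64
import struct

def build_doh_get_url(name: str, qtype: int = 1) -> str:
    """Build an RFC 8484 DoH GET URL for the given name (qtype 1 = A)."""
    # DNS header: ID 0 (RFC 8484 recommends 0 for cache friendliness),
    # flags 0x0100 (recursion desired), one question, no other records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME is a sequence of length-prefixed labels ending in a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.rstrip(".").split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # class IN
    wire = header + question
    # base64url without '=' padding, as RFC 8484 requires.
    dns_param = base64.urlsafe_b64encode(wire).rstrip(b"=").decode()
    return f"https://dns.google/dns-query?dns={dns_param}"

print(build_doh_get_url("example.com"))
```

Fetching the resulting URL with an Accept header of application/dns-message returns the DNS response in the same binary wire format.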
Redirection of /experimental and dns.google.com

The /experimental API will be turned down in 30 days and HTTP requests for it will get an HTTP redirect to an equivalent https://dns.google/dns-query URI. Developers should make sure DoH applications handle HTTP redirects by retrying at the URI specified in the Location header.

Turning down the dns.google.com domain will take place in three stages.
  1. The first stage (in 45 days) will update the dns.google.com domain name to return 8.8.8.8 and other Google Public DNS anycast addresses, but continue to return DNS responses to queries sent to former addresses of dns.google.com. This will provide a transparent transition for most clients.
  2. The second stage (in 90 days) will return HTTP redirects to dns.google for queries sent to former addresses of dns.google.com.
  3. The final stage (in 12 months) will send HTTP redirects to dns.google for any queries sent to the anycast addresses using the dns.google.com domain.
We will post timelines for redirections on the public-dns-announce forum and on the DoH migration page. You can find further technical details in our DoH documentation, and if you have a question or problem with our DoH service, you can create an issue on our tracker or ask on our discussion group. As always, please provide as much information as possible to help us investigate the problem!

Helping organizations do more without collecting more data



We continually invest in new research to advance innovations that preserve individual privacy while enabling valuable insights from data. Earlier this year, we launched Password Checkup, a Chrome extension that helps users detect whether a username and password they enter on a website have been compromised. It relies on a cryptographic protocol known as private set intersection (PSI) to match your login’s credentials against an encrypted database of over 4 billion credentials Google knows to be unsafe. At the same time, it ensures that no one – including Google – ever learns your actual credentials.

Today, we’re rolling out the open-source availability of Private Join and Compute, a new type of secure multi-party computation (MPC) that augments the core PSI protocol to help organizations work together with confidential data sets while raising the bar for privacy.


Collaborating with data in privacy-safe ways

Many important research, business, and social questions can be answered by combining data sets from independent parties where each party holds their own information about a set of shared identifiers (e.g. email addresses), some of which are common. But when you’re working with sensitive data, how can one party gain aggregated insights about the other party’s data without either of them learning any information about individuals in the datasets? That’s the exact challenge that Private Join and Compute helps solve.

Using this cryptographic protocol, two parties can encrypt their identifiers and associated data, and then join them. They can then do certain types of calculations on the overlapping set of data to draw useful information from both datasets in aggregate. All inputs (identifiers and their associated data) remain fully encrypted and unreadable throughout the process. Neither party ever reveals their raw data, but they can still answer the questions at hand using the output of the computation. This end result is the only thing that’s decrypted and shared in the form of aggregated statistics. For example, this could be a count, sum, or average of the data in both sets.


A deeper look at the technology 


Private Join and Compute combines two fundamental cryptographic techniques to protect individual data:

  • Private set intersection allows two parties to privately join their sets and discover the identifiers they have in common. We use an oblivious variant which only marks encrypted identifiers without learning any of the identifiers.
  • Homomorphic encryption allows certain types of computation to be performed directly on encrypted data without having to decrypt it first, which preserves the privacy of raw data. Throughout the process, individual identifiers and values remain concealed. For example, you can count how many identifiers are in the common set or compute the sum of values associated with marked encrypted identifiers – without learning anything about individuals. 

This combination of techniques ensures that nothing but the size of the joined set and the statistics (e.g. sum) of its associated values is revealed. Individual items are strongly encrypted with random keys throughout and are not available in raw form to the other party or anyone else.
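The commutative-encryption idea behind the oblivious matching step can be illustrated with a toy Diffie-Hellman-style PSI sketch. The parameters below are illustrative only; the production protocol uses a proper elliptic-curve group and pairs this matching with homomorphic encryption of the associated values:

```python
import hashlib

# Toy DH-style private set intersection: each party "encrypts" hashed
# identifiers by raising them to a secret exponent. Because modular
# exponentiation commutes, double-encrypted values match exactly when
# the underlying identifiers do, revealing only the overlap size.
P = 2**127 - 1  # a Mersenne prime standing in for a secure group modulus

def h(identifier: str) -> int:
    return int.from_bytes(hashlib.sha256(identifier.encode()).digest(), "big") % P

def enc(x: int, key: int) -> int:
    return pow(x, key, P)

alice_key, bob_key = 0x1234567, 0x7654321
alice_ids = {"a@example.com", "b@example.com"}
bob_ids = {"b@example.com", "c@example.com"}

# Each side encrypts its own set once, then re-encrypts the other's.
alice_once = {enc(h(i), alice_key) for i in alice_ids}
bob_once = {enc(h(i), bob_key) for i in bob_ids}
alice_twice = {enc(v, bob_key) for v in alice_once}
bob_twice = {enc(v, alice_key) for v in bob_once}

print(len(alice_twice & bob_twice))  # size of the overlap, nothing more
```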

Private Join and Compute

Using multi-party computation to solve real-world problems


Multi-party computation (MPC) is a field with a long history, but it has typically faced many hurdles to widespread adoption beyond academic communities. Common challenges include finding effective and efficient ways to tailor encryption techniques and tools to solve practical problems.

We’re committed to applying MPC and encryption technologies to more concrete, real-world issues at Google and beyond by making privacy technology more widely available. We are exploring a number of potential use cases at Google across collaborative machine learning, user security, and aggregated ads measurement.

And this is just the beginning of what’s possible. This technology can help advance valuable research in a wide array of fields that require organizations to work together without revealing anything about individuals represented in the data. For example:

  • Public policy - if a government implements new wellness initiatives in public schools (e.g. better lunch options and physical education curriculums), what are the long-term health outcomes for impacted students?
  • Diversity and inclusion - when industries create new programs to close gender and racial pay gaps, how does this impact compensation across companies by demographic?
  • Healthcare - when a new preventative drug is prescribed to patients across the country, does it reduce the incidence of disease? 
  • Car safety standards - when auto manufacturers add more advanced safety features to vehicles, does it coincide with a decrease in reported car accidents?

Private Join and Compute keeps individual information safe while allowing organizations to accurately compute and draw useful insights from aggregate statistics. By sharing the technology more widely, we hope this expands the use cases for secure computing. To learn more about the research and methodology behind Private Join and Compute, read the full paper and access the open source code and documentation. We’re excited to see how other organizations will advance MPC and cryptography to answer important questions while upholding individual privacy.


Acknowledgements


Product Manager - Nirdhar Khazanie
Software Engineers - Mihaela Ion, Benjamin Kreuter, Erhan Nergiz, and Karn Seth
Research Scientist - Mariana Raykova


New Chrome Protections from Deception


Chrome was built with security in mind from the very beginning. Today we’re launching two new features to help protect users from deceptive websites. The Suspicious Site Reporter Extension will improve security for Chrome users by giving power users an easy way to report suspicious sites to Google Safe Browsing. We’re also launching a new warning to protect users from sites with deceptive URLs.

We designed Chrome to be secure by default, and easy to use by everyone. Google Safe Browsing has helped protect Chrome users from phishing attacks for over 10 years, and now helps protect more than 4 billion devices every day across multiple browsers and apps by showing warnings to people before they visit dangerous sites or download dangerous files. We’re constantly improving Safe Browsing, and now you can help.

Safe Browsing works by automatically analyzing the websites that we know about through Google Search’s web crawlers, and creating lists of sites that are dangerous or deceptive. With the Suspicious Site Reporter extension, you can help Safe Browsing protect web users by reporting suspicious sites. You can install the extension to start seeing an icon when you’re on a potentially suspicious site, and more information about why the site might be suspicious. By clicking the icon, you’re now able to report unsafe sites to Safe Browsing for further evaluation. If the site is added to Safe Browsing’s lists, you’ll not only protect Chrome users, but users of other browsers and across the entire web.


Help us protect web users by reporting dangerous or deceptive sites to Google Safe Browsing through the Suspicious Site Reporter extension.

One way that deceptive sites might try to trick you is by using a confusing URL. For example, it’s easy to confuse “go0gle.com” with “google.com”. In Chrome 75, we’re launching a new warning to direct users away from sites that have confusing URLs.


Starting in the current version of Chrome (75), you’ll see a warning when the page URL might be confused for URLs of sites you’ve visited recently.

This new warning works by comparing the URL of the page you’re currently on to URLs of pages you’ve recently visited. If the URL looks similar, and might cause you to be confused or deceived, we’ll show a warning that helps you get back to safety.
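A simplified version of this kind of check reduces hostnames to a "skeleton" by folding visually confusable characters and compares the result against recent history. The mapping below is illustrative only; Chrome's actual heuristics cover far more confusables, including Unicode, and use additional signals:

```python
from typing import Optional

# Toy lookalike-URL check: fold visually confusable character sequences
# to a canonical form, then compare against recently visited hosts.
CONFUSABLES = {"0": "o", "1": "l", "vv": "w", "rn": "m"}

def skeleton(host: str) -> str:
    host = host.lower()
    for lookalike, canonical in CONFUSABLES.items():
        host = host.replace(lookalike, canonical)
    return host

def looks_like_recent_site(host: str, recent_hosts: list) -> Optional[str]:
    """Return the recent host that `host` likely imitates, if any."""
    for visited in recent_hosts:
        if host != visited and skeleton(host) == skeleton(visited):
            return visited
    return None

assert looks_like_recent_site("go0gle.com", ["google.com"]) == "google.com"
assert looks_like_recent_site("google.com", ["google.com"]) is None
```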

We believe that you shouldn't have to be a security expert to feel safe on the web, and that many Chrome power-users share our mission to make the web more secure for everyone. We’ll continue improving Chrome Security to help make Chrome easy to use safely, and are looking forward to collaborating with the community to further that goal. If you'd like to help out, install the new extension and start helping protect the web!

Improving Security and Privacy for Extensions Users

No, Chrome isn’t killing ad blockers -- we’re making them safer

The Chrome Extensions ecosystem has seen incredible advancement, adoption, and growth since its launch over ten years ago. Extensions are a great way for users to customize their experience in Chrome and on the web. As this system grows and expands in both reach and power, user safety and protection remains a core focus of the Chromium project.
In October, we announced a number of changes to improve the security, privacy, and performance of Chrome extensions. These changes include increased user options to control extension permissions, changes to the review process and readability requirements, and requiring two-step verification for developers. In addition, we’ve helped curb abuse through restricting inline installation on websites, preventing the use of deceptive installation practices, and limiting the data collected by extensions. We’ve also made changes to the teams themselves — over the last year, we’ve increased the size of the engineering teams that work on extension abuse by over 300% and the number of reviewers by over 400%.
These and other changes have driven down the rate of malicious installations by 89% since early 2018. Today, we block approximately 1,800 malicious uploads a month, preventing them from ever reaching the store. While the Chrome team is proud of these improvements, the review process alone can't catch all abuse. In order to provide better protection to our users, we need to make changes to the platform as well. This is the suite of changes we’re calling Manifest V3.
This effort is motivated by a desire to keep users safe and to give them more visibility and control over the data they’re sharing with extensions. One way we are doing this is by helping users be deliberate in granting access to sensitive data - such as emails, photos, and access to social media accounts. As we make these changes we want to continue to support extensions in empowering users and enhancing their browsing experience.
To help with this balance, we’re reimagining the way a number of powerful APIs work. Instead of a user granting each extension access to all of their sensitive data, we are creating ways for developers to request access to only the data they need to accomplish the same functionality. One example of this is the introduction of the Declarative Net Request API, which is replacing parts of the Web Request API.
At a high level, this change means that an extension does not need access to all of a user’s sensitive data in order to block content. With the current Web Request API, users grant permission for Chrome to pass all information about a network request - which can include things like emails, photos, or other private information - to the extension. In contrast, the Declarative Net Request API allows extensions to block content without requiring the user to grant access to any sensitive information. Additionally, because we are able to cut substantial overhead in the browser, the Declarative Net Request API can deliver significant, system-level performance benefits over Web Request.
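The difference between the two models can be sketched as follows. This is an illustrative toy, not the real Chrome API: in a webRequest-style design, every request is handed to extension code, whereas in the declarative design the extension only registers patterns up front (the rule shape below is loosely modeled on declarativeNetRequest rules) and the browser does the matching itself:

```python
# Illustrative sketch of the declarative model (not Chrome's actual API).
# The extension declares its blocking rules once; the browser evaluates
# them, so no request data ever reaches extension code.

RULES = [
    {"id": 1, "action": "block", "url_filter": "ads.example.net"},
]

def browser_should_block(request_url: str) -> bool:
    """The 'browser' consults the declared rules directly."""
    return any(rule["url_filter"] in request_url
               for rule in RULES if rule["action"] == "block")

print(browser_should_block("https://ads.example.net/banner.js"))  # True
print(browser_should_block("https://mail.example.com/inbox"))     # False
```

Because the matching happens inside the browser against a fixed rule set, it can be done without waking extension code on every request, which is where the performance benefit comes from.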


This has been a controversial change since the Web Request API is used by many popular extensions, including ad blockers. We are not preventing the development of ad blockers or stopping users from blocking ads. Instead, we want to help developers, including those who build content blockers, write extensions in a way that protects users’ privacy.
You can read more about the Declarative Net Request API and how it compares to the Web Request API here.
We understand that these changes will require developers to update the way in which their extensions operate. However, we think it is the right choice to enable users to limit the sensitive data they share with third-parties while giving them the ability to curate their own browsing experience. We are continuing to iterate on many aspects of the Manifest V3 design, and are working with the developer community to find solutions that both solve the use cases extensions have today and keep our users safe and in control.

Use your Android phone’s built-in security key to verify sign-in on iOS devices


Compromised credentials are one of the most common causes of security breaches. While Google automatically blocks the majority of unauthorized sign-in attempts, adding 2-Step Verification (2SV) considerably improves account security. At Cloud Next ‘19, we introduced a new 2SV method, enabling more than a billion users worldwide to better protect their accounts with a security key built into their Android phones.
This technology can be used to verify your sign-in to Google and Google Cloud services on Bluetooth-enabled Chrome OS, macOS, and Windows 10 devices. Starting today, you can use your Android phone to verify your sign-in on Apple iPads and iPhones as well.
Security keys
FIDO security keys provide the strongest protection against automated bots, bulk phishing, and targeted attacks by leveraging public key cryptography to verify both your identity and the URL of the login page, so that an attacker can’t access your account even if you are tricked into providing your username and password. Learn more by watching our presentation from Cloud Next ‘19.
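The reason this resists phishing is that the security key signs over both the server’s challenge and the origin reported by the browser, so an assertion produced on a lookalike site cannot be replayed against the real one. The sketch below illustrates that origin binding; it uses HMAC purely as a stand-in for the key’s real asymmetric signature, and all names are hypothetical:

```python
import hashlib
import hmac

# Simplified sketch of FIDO's phishing resistance: the signature covers
# both the challenge and the origin. HMAC with a shared secret stands in
# for the asymmetric signature a real security key produces.

DEVICE_SECRET = b"stand-in for the key pair on the phone"

def key_sign(challenge: bytes, origin: str) -> bytes:
    # The browser, not the user, supplies the origin, so a phishing
    # page cannot claim to be accounts.google.com.
    payload = hashlib.sha256(challenge + origin.encode()).digest()
    return hmac.new(DEVICE_SECRET, payload, hashlib.sha256).digest()

def server_verify(challenge: bytes, expected_origin: str,
                  signature: bytes) -> bool:
    payload = hashlib.sha256(challenge + expected_origin.encode()).digest()
    expected = hmac.new(DEVICE_SECRET, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = b"random-server-challenge"
good = key_sign(challenge, "https://accounts.google.com")
phished = key_sign(challenge, "https://accounts.go0gle.com")
print(server_verify(challenge, "https://accounts.google.com", good))     # True
print(server_verify(challenge, "https://accounts.google.com", phished))  # False
```

Because the origin is part of the signed payload, a signature obtained on the phishing domain fails verification at the legitimate server, even though the user typed their real credentials.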


On Chrome OS, macOS, and Windows 10 devices, we leverage the Chrome browser to communicate with your Android phone’s built-in security key over Bluetooth using FIDO’s CTAP2 protocol. On iOS devices, Google’s Smart Lock app is leveraged in place of the browser.


User experience on an iPad with Pixel 3


Until now, there were limited options for using FIDO2 security keys on iOS devices. Now, you can get the strongest 2SV method with the convenience of an Android phone that’s always in your pocket at no additional cost.
It’s easy to get started
Follow these simple steps to protect your Google Account today:
Step 1: Add the security key to your Google Account
  • Add your personal or work Google Account to your Android 7.0+ (Nougat) phone.
  • Make sure you’re enrolled in 2-Step Verification (2SV).
  • On your computer, visit the 2SV settings and click "Add security key".
  • Choose your Android phone from the list of available devices.
Step 2: Use your Android phone's built-in security key
You can find more detailed instructions here. Within enterprise organizations, admins can require the use of security keys for their users in G Suite and Google Cloud Platform (GCP), letting them choose between using a physical security key, an Android phone, or both.
We also recommend that you register a backup hardware security key (from Google or a number of other vendors) for your account and keep it in a safe place, so that you can gain access to your account if you lose your Android phone.