Category Archives: Online Security Blog

The latest news and insights from Google on security and safety on the Internet

System hardening in Android 11

In Android 11 we continue to increase the security of the Android platform. We have moved to safer default settings, migrated to a hardened memory allocator, and expanded the use of compiler mitigations that defend against classes of vulnerabilities and frustrate exploitation techniques.

Initializing memory

We’ve enabled forms of automatic memory initialization in both Android 11’s userspace and the Linux kernel. Uninitialized memory bugs occur in C/C++ when memory is used without having first been initialized to a known safe value. These types of bugs can be confusing, and even the term “uninitialized” is misleading. Uninitialized may seem to imply that a variable has a random value. In reality it isn’t random. It has whatever value was previously placed there. This value may be predictable or even attacker controlled. Unfortunately, this behavior can lead to serious vulnerabilities, such as information disclosure bugs (for example, ASLR bypasses) or control-flow hijacking via a stack or heap spray. Another possible side effect is that advanced compiler optimizations may transform the code unpredictably, because reading uninitialized values is undefined behavior under the relevant C standards.

In practice, uses of uninitialized memory are difficult to detect. Such errors may sit in the codebase unnoticed for years if the memory happens to be initialized with some "safe" value most of the time. When uninitialized memory results in a bug, it is often challenging to identify the source of the error, particularly if it is rarely triggered.
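
As a rough illustration (not code from the post), the sketch below shows the bug class in C: a local variable that is only assigned on some paths, so the error path returns whatever stale value happens to occupy its stack slot.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch of an uninitialized-stack-variable bug: if the
 * argument check fails, the function returns whatever stale value occupies
 * granted's stack slot, which may be predictable or attacker influenced. */
int check_access(const char *user, const char *password) {
    int granted;                        /* never initialized on the error path */
    if (user == NULL || password == NULL)
        return granted;                 /* reads an uninitialized value (UB) */
    granted = (strcmp(password, "expected-secret") == 0);
    return granted;
}

int main(void) {
    /* With automatic variable initialization, granted would start at a known
     * value (0 for zero init, a repeated byte pattern for pattern init)
     * instead of leftover stack contents. */
    printf("%d\n", check_access(NULL, NULL));
    return 0;
}
```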

Eliminating an entire class of such bugs is a lot more effective than hunting them down individually. Automatic stack variable initialization relies on a feature in the Clang compiler that allows local variables to be initialized with either zeros or a pattern.

Initializing to zero provides safer defaults for strings, pointers, indexes, and sizes. The downsides of zero init are less-safe defaults for return values, and exposing fewer bugs where the underlying code relies on zero initialization. Pattern initialization tends to expose more bugs and is generally safer for return values and less safe for strings, pointers, indexes, and sizes.
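
For concreteness, here is a minimal sketch of how the two modes behave. The compile commands in the comment use Clang's -ftrivial-auto-var-init flag; exact spellings and opt-in requirements vary between Clang versions.

```c
/* Sketch of how Clang's automatic variable initialization changes the value
 * an uninitialized local starts with. Compile with, for example:
 *   clang -ftrivial-auto-var-init=pattern demo.c   (pattern init)
 *   clang -ftrivial-auto-var-init=zero    demo.c   (zero init; older Clang
 *                                                   versions required an
 *                                                   extra opt-in flag)     */
#include <stdio.h>

int main(void) {
    int index;          /* deliberately left uninitialized */
    char *name;         /* ditto */
    /* Without the flag: stale stack contents (undefined behavior to read).
     * With =zero:    index == 0, name == NULL, so failures tend to be safe.
     * With =pattern: a repeated byte pattern, which is more likely to crash
     *                loudly and expose the bug during testing. */
    printf("index=%d name=%p\n", index, (void *)name);
    return 0;
}
```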

Initializing Userspace:

Automatic stack variable initialization is enabled throughout the entire Android userspace. During the development of Android 11, we initially selected pattern initialization in order to uncover bugs relying on zero init, and then moved to zero init after a few months for increased safety. Platform OS developers can build with `AUTO_PATTERN_INITIALIZE=true m` if they want help uncovering bugs relying on zero init.

Initializing the Kernel:

Automatic stack and heap initialization were recently merged in the upstream Linux kernel. We have made these features available on earlier versions of Android’s kernel, including 4.14, 4.19, and 5.4. These features enforce initialization of local variables and heap allocations with known values that cannot be controlled by attackers and are useless when leaked. Both features incur a performance overhead, but they also prevent undefined behavior, improving both stability and security.

For kernel stack initialization we adopted the CONFIG_INIT_STACK_ALL configuration option from upstream Linux. It currently relies on Clang pattern initialization for stack variables, although this is subject to change in the future.

Heap initialization is controlled by two boot-time flags, init_on_alloc and init_on_free, with the former wiping freshly allocated heap objects with zeroes (think s/kmalloc/kzalloc in the whole kernel) and the latter doing the same before the objects are freed (this helps to reduce the lifetime of security-sensitive data). init_on_alloc is a lot more cache-friendly and has smaller performance impact (within 2%), therefore it has been chosen to protect Android kernels.
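
A conceptual kernel-style sketch (not taken from the upstream patches) of what init_on_alloc changes in practice: a stale-data read from a freshly allocated slab object now sees zeroes instead of whatever the previous owner left behind.

```c
#include <linux/slab.h>
#include <linux/types.h>

struct session {
    u64 token;    /* security-sensitive value */
    int flags;
};

static int demo(void)
{
    struct session *s;
    int stale;

    s = kmalloc(sizeof(*s), GFP_KERNEL);
    if (!s)
        return -ENOMEM;
    /* Bug: flags is read before it is ever written. With init_on_alloc=1
     * the field reads as 0, as if kzalloc() had been used; init_on_free=1
     * instead scrubs the object when it is handed back to the allocator. */
    stale = s->flags;
    kfree(s);
    return stale;
}
```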

Scudo is now Android's default native allocator

In Android 11, Scudo replaces jemalloc as the default native allocator for Android. Scudo is a hardened memory allocator designed to help detect and mitigate memory corruption bugs in the heap, such as buffer overflows, use-after-free, and double free.

Scudo does not fully prevent exploitation but it does add a number of sanity checks which are effective at strengthening the heap against some memory corruption bugs.

It also proactively organizes the heap in a way that makes exploitation of memory corruption more difficult, by reducing the predictability of the allocation patterns, and separating allocations by sizes.
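
As a rough illustration of the kind of heap misuse such checks are designed to catch, the sketch below shows a classic double free; with a checking allocator like Scudo, the second free() aborts the process instead of silently corrupting allocator metadata.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative double free. A hardened allocator's header and chunk-state
 * checks detect that the chunk is already free and abort, rather than
 * letting the error corrupt heap metadata for later exploitation. */
int main(void) {
    char *buf = malloc(32);
    if (!buf)
        return 1;
    strcpy(buf, "session data");
    free(buf);
    free(buf);   /* double free: detected and aborted by a checking allocator */
    return 0;
}
```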

In our internal testing, Scudo has already proven its worth by surfacing security and stability bugs that were previously undetected.

Finding Heap Memory Safety Bugs in the Wild (GWP-ASan)

Android 11 introduces GWP-ASan, an in-production heap memory safety bug detection tool that's integrated directly into the native allocator Scudo. GWP-ASan probabilistically detects and provides actionable reports for heap memory safety bugs when they occur, works on 32-bit and 64-bit processes, and is enabled by default for system processes and system apps.

GWP-ASan is also available for developer applications via a one line opt-in in an app's AndroidManifest.xml, with no complicated build support or recompilation of prebuilt libraries necessary.
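
For illustration (this is not code from the post), the sketch below shows the kind of heap use-after-free GWP-ASan is built to catch: when the sampled allocation is backed by a guarded page, the read after free faults immediately and produces a report with allocation and deallocation stack traces.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative use-after-free. GWP-ASan samples a small fraction of heap
 * allocations and places them on guarded pages; if this allocation happens
 * to be sampled, the read below crashes with an actionable report instead
 * of silently returning stale data. */
int main(void) {
    char *name = malloc(64);
    if (!name)
        return 1;
    strcpy(name, "cached value");
    free(name);
    return name[0];   /* use after free: caught probabilistically */
}
```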

Software Tag-Based KASAN

Continuing work on adopting the Arm Memory Tagging Extension (MTE) in Android, Android 11 includes support for kernel HWASAN, also known as Software Tag-Based KASAN. Userspace HWASAN has been supported since Android 10.

KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to find out-of-bounds and use-after-free bugs in the Linux kernel. Its Software Tag-Based mode is a software implementation of the memory tagging concept for the kernel. Software Tag-Based KASAN is available in the 4.14, 4.19, and 5.4 Android kernels, and can be enabled with the CONFIG_KASAN_SW_TAGS kernel configuration option. Currently Tag-Based KASAN only supports tagging of slab memory; support for other types of memory (such as stack and globals) will be added in the future.
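
For illustration, here is a conceptual kernel-style sketch (not from the kernel sources) of a slab out-of-bounds write of the kind Software Tag-Based KASAN reports: the pointer returned by kmalloc() carries a tag, and an access past the end of the object typically lands in memory with a different tag.

```c
#include <linux/slab.h>
#include <linux/types.h>

static int demo(void)
{
    u8 *buf;

    buf = kmalloc(16, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;
    buf[16] = 0x41;   /* out of bounds: tag mismatch reported by KASAN */
    kfree(buf);
    return 0;
}
```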

Compared to Generic KASAN, Tag-Based KASAN has significantly lower memory requirements (see this kernel commit for details), which makes it usable on dogfood testing devices. Another use case for Software Tag-Based KASAN is checking the existing kernel code for compatibility with memory tagging. As Tag-Based KASAN is based on concepts similar to those behind the future in-kernel MTE support, making sure that kernel code works with Tag-Based KASAN will ease in-kernel MTE integration in the future.

Expanding existing compiler mitigations

We’ve continued to expand the compiler mitigations that have been rolled out in prior releases as well. This includes adding both integer and bounds sanitizers to some core libraries that were lacking them. For example, the libminikin fonts library and the libui rendering library are now bounds sanitized. We’ve hardened the NFC stack by implementing both integer overflow sanitizer and bounds sanitizer in those components.
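
A minimal sketch of the two bug classes these sanitizers target. The -fsanitize=integer and -fsanitize=bounds flags named in the comment are Clang's; Android typically builds these checks in a trapping configuration, so a failed check aborts the process rather than continuing.

```c
#include <stddef.h>

/* Build with, for example: clang -fsanitize=integer,bounds demo.c
 * The overflowing multiplication and the out-of-range index below each
 * trigger a sanitizer diagnostic at runtime. */
size_t glyph_bytes(size_t count, size_t stride) {
    return count * stride;      /* may overflow: integer overflow sanitizer */
}

int lookup(int idx) {
    int table[8] = {0};
    return table[idx];          /* idx >= 8: bounds sanitizer */
}

int main(void) {
    (void)glyph_bytes((size_t)-1, 16);
    return lookup(9);
}
```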

In addition to the hard mitigations like sanitizers, we also continue to expand our use of CFI as an exploit mitigation. CFI has been enabled in Android’s networking daemon, DNS resolver, and more of our core JavaScript libraries like libv8 and the PacProcessor.

The effectiveness of our software codec sandbox

Prior to the release of Android 10, we announced a new constrained sandbox for software codecs. We’re really pleased with the results. Thus far, Android 10 is the first Android release since the infamous Stagefright vulnerabilities in Android 5.0 with zero critical-severity vulnerabilities in the media frameworks.

Thank you to Jeff Vander Stoep, Alexander Potapenko, Stephen Hines, Andrey Konovalov, Mitch Phillips, Ivan Lozano, Kostya Kortchinsky, Christopher Ferris, Cindy Zhou, Evgenii Stepanov, Kevin Deus, Peter Collingbourne, Elliott Hughes, Kees Cook and Ken Chen for their contributions to this post.

11 Weeks of Android: Privacy and Security

This blog post is part of a weekly series for #11WeeksOfAndroid. Each week of #11WeeksOfAndroid, we’re diving into a key area so you don’t miss anything. This week, we spotlighted Privacy and Security; here’s a look at what you should know.

Privacy and security is core to how we design Android, and with every new release we increase our investment in this space. Android 11 continues to make important strides in these areas, and this week we’ll be sharing a series of updates and resources about Android privacy and security. But first, let’s take a quick look at some of the most important changes we’ve made in Android 11 to protect user privacy and make the platform more secure.

As shared in the “All things privacy in Android 11” video, we’re giving users even more control over sensitive permissions. Throughout the development of this release, we have engaged deeply and frequently with our developer community to design these features in a balanced way - amplifying user privacy while minimizing developer impact. Let’s go over some of these features:

One-time permission: In Android 10, we introduced a granular location permission that allows users to limit access to location only when an app is in use (aka foreground only). When presented with the new runtime permissions options, users choose foreground only location more than 50% of the time. This demonstrated to us that users really wanted finer controls for permissions. So in Android 11, we’ve introduced one-time permissions that let users give an app access to the device microphone, camera, or location, just that one time. As an app developer, there are no changes that you need to make to your app for it to work with one-time permissions, and the app can request permissions again the next time the app is used. Learn more about building privacy-friendly apps with these new changes in this video.

Background location: In Android 10 we added a background location usage reminder so users can see how apps are using this sensitive data on a regular basis. Users who interacted with the reminder either downgraded or denied the location permission over 75% of the time. In addition, we have done extensive research and believe that there are very few legitimate use cases for apps to require access to location in the background.

In Android 11, background location will no longer be a permission that a user can grant via a runtime prompt, and it will require a more deliberate action. If your app needs background location, the system will ensure that the app first asks for foreground location. The app can then broaden its access to background location through a separate permission request, which will cause the system to take the user to Settings in order to complete the permission grant.

In February, we announced that Google Play developers will need to get approval to access background location in their app to prevent misuse. We're giving developers more time to make changes and won't be enforcing the policy for existing apps until 2021. Check out this helpful video to find possible background location usage in your code.

Permissions auto-reset: Most users tend to download and install over 60 apps on their device but interact with only a third of these apps on a regular basis. If users haven’t used an app that targets Android 11 for an extended period of time, the system will “auto-reset” all of the granted runtime permissions associated with the app and notify the user. The app can request the permissions again the next time the app is used. If you have an app that has a legitimate need to retain permissions, you can prompt users to turn this feature OFF for your app in Settings.

Data access auditing APIs: Android encourages developers to limit their access to sensitive data, even if they have been granted permission to do so. In Android 11, developers will have access to new APIs that will give them more transparency into their app’s usage of private and protected data. The APIs will enable apps to track when the system records the app’s access to private user data.

Scoped Storage: In Android 10, we introduced scoped storage which provides a filtered view into external storage, giving access to app-specific files and media collections. This change protects user privacy by limiting broad access to shared storage in many ways including changing the storage permission to only give read access to photos, videos and music and improving app storage attribution. Since Android 10, we’ve incorporated developer feedback and made many improvements to help developers adopt scoped storage, including: updated permission UI to enhance user experience, direct file path access to media to improve compatibility with existing libraries, updated APIs for modifying media, Manage External Storage permission to enable select use cases that need broad files access, and protected external app directories. In Android 11, scoped storage will be mandatory for all apps that target API level 30. Learn more in this video and check out the developer documentation for further details.

Google Play system updates: Google Play system updates were introduced with Android 10 as part of Project Mainline. Their main benefit is to increase the modularity and granularity of platform subsystems within Android so we can update core OS components without needing a full OTA update from your phone manufacturer. Earlier this year, thanks to Project Mainline, we were able to quickly fix a critical vulnerability in the media decoding subsystem. Android 11 adds new modules, and maintains the security properties of existing ones. For example, Conscrypt, which provides cryptographic primitives, maintained its FIPS validation in Android 11 as well.

BiometricPrompt API: Developers can now use the BiometricPrompt API to specify the biometric authenticator strength required by their app to unlock or access sensitive parts of the app. We are planning to add this to the Jetpack Biometric library to allow for backward compatibility and will share further updates on this work as it progresses.

Identity Credential API: This will unlock new use cases such as mobile driver’s licenses, National ID, and Digital ID. It’s being built by our security team to ensure this information is stored safely, using security hardware to secure and control access to the data, in a way that enhances user privacy as compared to traditional physical documents. We’re working with various government agencies and industry partners to make sure that Android 11 is ready for such digital-first identity experiences.

Thank you for your flexibility and feedback as we continue to build an increasingly more private and secure platform. You can learn about more features in the Android 11 Beta developer site. You can also learn about general best practices related to privacy and security.

Please follow Android Developers on Twitter and YouTube to catch helpful content and materials in this area all this week.

Resources

You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!

Making the Advanced Protection Program and Titan Security Keys easier to use on Apple iOS devices

Starting today, we’re rolling out a change that enables native support for the W3C WebAuthn implementation for Google Accounts on Apple devices running iOS 13.3 and above. This capability, available for both personal and work Google Accounts, simplifies your security key experience on compatible iOS devices and allows you to use more types of security keys for your Google Account and the Advanced Protection Program.


Using an NFC security key on iPhone

More security key choices for users
  • Both the USB-A and Bluetooth Titan Security Keys have NFC functionality built-in. This allows you to tap your key to the back of your iPhone when prompted at sign-in.
  • You can use a Lightning security key like the YubiKey 5Ci or any USB security key if you have an Apple Lightning to USB Camera Adapter.
  • You can plug a USB-C security key in directly to an iOS device that has a USB-C port (such as an iPad Pro).
  • We suggest installing the Smart Lock app in order to use Bluetooth security keys and your phone’s built-in security key, which allows you to use your iPhone as an additional security key for your Google Account.
In order to add your Google Account to your iOS device, navigate to “Settings > Passwords & Accounts” on your iOS device or install the Google app and sign in.


Account security best practices


We highly recommend that users at higher risk of targeted attacks get security keys (such as a Titan Security Key or your Android or iOS phone) and enroll in the Advanced Protection Program. If you’re working for political committees in the United States, you may be eligible to request free Titan Security Keys through Defending Digital Campaigns to get help enrolling in Advanced Protection.

You can also use security keys for any site where FIDO security keys are supported for 2FA, including your personal or work Google Account, 1Password, Bitbucket, Bitfinex, Coinbase, Dropbox, Facebook, GitHub, Salesforce, Stripe, Twitter, and more.

The Advanced Protection Program comes to Google Nest



The Advanced Protection Program is our strongest level of Google Account security for people at high risk of targeted online attacks, such as journalists, activists, business leaders, and people working on elections. Anyone can sign up to automatically receive extra safeguards against phishing, malware, and fraudulent access to their data.

Since we launched, one of our goals has been to bring Advanced Protection’s features to other Google products. Over the years, we’ve incorporated many of them into G Suite, Google Cloud Platform, Chrome, and most recently, Android. We want as many users as possible to benefit from the additional levels of security that the Program provides.

Today we’re announcing one of the top requests we’ve received: to bring the Advanced Protection Program to Nest. Now people can seamlessly use their Google Accounts with both Advanced Protection and Google Nest devices; previously, a user could use their Google Account on only one of these at a time.

Feeling safe at home has never been more important, and Nest has announced a variety of new security features this year, including the use of reCAPTCHA Enterprise to significantly lower the likelihood of automated attacks. Today’s improvement adds yet another layer of protection for people with Nest devices.

For more information about using Advanced Protection with Google Nest devices, check out this article in our help center.

Expanding our work with the open source security community



At Google, we’ve always believed in the benefits and importance of using open source technologies to innovate. We enjoy being a part of the community and we want to give back in new ways. As part of this effort, we are excited to announce an expansion of our Google Vulnerability Rewards Program (VRP) to cover all the critical open-source dependencies of Google Kubernetes Engine (GKE). We have designed this expansion with the goal of incentivizing the security community to work even more closely with open source projects, supporting the maintainers whose work we all rely on.

The CNCF, in partnership with Google, recently announced a bug bounty program for Kubernetes that pays up to $10,000 for vulnerabilities discovered within the project. And today, in addition to that, we are expanding the scope of the Google VRP program to also include privilege escalation bugs in a hardened GKE lab cluster we've set up for this purpose. This will cover exploitable vulnerabilities in all dependencies that can lead to a node compromise, such as privilege escalation bugs in the Linux kernel, as well as in the underlying hardware or other components of our infrastructure that could allow for privilege escalation inside a GKE cluster.

How it works
We have set up a lab environment on GKE based on an open-source Kubernetes-based Capture-the-Flag (CTF) project called kCTF. Participants will be required to:
  • Break out of a containerized environment running on a Kubernetes pod and,
  • Read one of two secret flags: One flag is on the same pod, and the other one is in another Kubernetes pod in a different namespace.
Flags will be changed often, and participants need to submit the secret flag as proof of successful exploitation. The lab environment does not store any data (such as the commands or files used to exploit it), so participants need the flags to demonstrate they were able to compromise it.

The rewards will work in the following way:
  • Bugs affecting the lab GKE environment that can lead to stealing both flags will be rewarded up to 10,000 USD, but we will review each report on a case-by-case basis. Any vulnerabilities are in scope, regardless of where they are: Linux, Kubernetes, kCTF, Google, or any other dependency. Instructions on how to submit the flags and exploits are available here.
  • Bugs that are 100% in Google code qualify for an additional Google VRP reward.
  • Bugs that are 100% in Kubernetes code qualify for an additional CNCF Kubernetes reward.
Any vulnerabilities found outside of GKE (like Kubernetes or the Linux kernel) should be reported to the corresponding upstream project security teams. To make this program expansion as efficient as possible for the maintainers, we will only reward vulnerabilities shown to be exploitable by stealing a flag. If your exploit relies on something in upstream Kubernetes, the Linux Kernel, or any other dependency, you need to report it there first, get it resolved, and then report it to Google. See instructions here.

The GKE lab environment is built on top of a CTF infrastructure that we just open-sourced on GitHub. The infrastructure is new, and we are looking forward to receiving feedback from the community before it can be actively used in CTF competitions. By including the CTF infrastructure in the scope of the Google VRP, we want to incentivize the community to help us secure not just the CTF competitions that will use it, but also GKE and the broader Kubernetes ecosystem.

In March 2020, we announced the winner for the first Google Cloud Platform (GCP) VRP Prize and since then we have seen increased interest and research happening on Google Cloud. With this new initiative, we hope to bring even more awareness to Google Cloud by experienced security researchers, so we can all work together to secure our shared open-source foundations.

Enhanced Safe Browsing Protection now available in Chrome

Over the past few years we’ve seen threats on the web becoming increasingly sophisticated. Phishing sites rotate domains very quickly to avoid being blocked, and malware campaigns are directly targeting at-risk users. We’ve realized that to combat these most effectively, security cannot be one-size-fits-all anymore: That’s why today we are announcing Enhanced Safe Browsing protection in Chrome, a new option for users who require or want a more advanced level of security while browsing the web.

Turning on Enhanced Safe Browsing will substantially increase protection from dangerous websites and downloads. By sharing real-time data with Google Safe Browsing, Chrome can proactively protect you against dangerous sites. If you’re signed in, Chrome and other Google apps you use (Gmail, Drive, etc) will be able to provide improved protection based on a holistic view of threats you encounter on the web and attacks against your Google Account. In other words, we’re bringing the intelligence of Google’s cutting-edge security tools directly into your browser.

Over the next year, we’ll be adding even more protections to this mode, including tailored warnings for phishing sites and file downloads and cross-product alerts.

Building upon Safe Browsing

Safe Browsing’s blocklist API is an existing security protocol that protects billions of devices worldwide. Every day, Safe Browsing discovers thousands of new unsafe sites and adds them to the blocklist API that is shared with the web industry. Chrome checks the URL of each site you visit or file you download against a local list, which is updated approximately every 30 minutes. Increasingly, some sophisticated phishing sites slip through that 30-minute refresh window by switching domains very quickly.
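
A simplified sketch of such a local check, assuming the hash-prefix design used by the Safe Browsing Update API (real clients canonicalize URLs and check several host and path expressions, which this omits): only short hash prefixes are stored locally, and a local hit just means a full-hash confirmation is needed.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

#define PREFIX_LEN 4

/* Placeholder prefixes standing in for the locally cached blocklist. */
static const uint8_t local_prefixes[][PREFIX_LEN] = {
    {0x1a, 0x2b, 0x3c, 0x4d},
    {0xde, 0xad, 0xbe, 0xef},
};

/* Hash the URL and compare only the first few bytes against the local list.
 * A match does not mean the URL is bad; it means the client must ask the
 * server for full hashes to confirm. */
static int needs_full_hash_check(const char *url) {
    uint8_t digest[SHA256_DIGEST_LENGTH];
    SHA256((const uint8_t *)url, strlen(url), digest);
    for (size_t i = 0; i < sizeof(local_prefixes) / PREFIX_LEN; i++)
        if (memcmp(digest, local_prefixes[i], PREFIX_LEN) == 0)
            return 1;           /* prefix hit: confirm with the server */
    return 0;                   /* no hit: the URL is considered safe */
}

int main(void) {
    printf("%d\n", needs_full_hash_check("http://example.test/landing"));
    return 0;
}
```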

This protocol is designed so that Google cannot determine the actual URL Chrome visited from this information, and thus by necessity the same verdict is returned regardless of the user’s situation. This means Chrome can’t adjust protection based on what kinds of threats a particular user is seeing or the type of sites they normally visit. So while the Safe Browsing blocklist API remains very powerful and will continue to protect users, we’ve been looking for ways to provide more proactive and tailored protections.

How Enhanced Safe Browsing works

When you switch to Enhanced Safe Browsing, Chrome will share additional security data directly with Google Safe Browsing to enable more accurate threat assessments. For example, Chrome will check uncommon URLs in real time to detect whether the site you are about to visit may be a phishing site. Chrome will also send a small sample of pages and suspicious downloads to help discover new threats against you and other Chrome users.

If you are signed in to Chrome, this data is temporarily linked to your Google Account. We do this so that when an attack is detected against your browser or account, Safe Browsing can tailor its protections to your situation. In this way, we can provide the most precise protection without unnecessary warnings. After a short period, Safe Browsing anonymizes this data so it is no longer connected to your account.

You can opt in to this mode by visiting Privacy and Security settings > Security and selecting the “Enhanced protection” mode under Safe Browsing. It will be rolled out gradually in M83 on desktop platforms, with Android support coming in a future release. Enterprise administrators can control this setting via the SafeBrowsingProtectionLevel policy.

Tailored protections

Chrome’s billions of users are incredibly diverse, with a full spectrum of needs and perspectives in security and privacy. We will continue to invest in both Standard and Enhanced Safe Browsing with the goal to expand Chrome’s security offerings to cover all users.

Introducing portability of Google Authenticator 2SV codes across Android devices


Today is World Password Day, and we found it fitting to release an update that'll make it even easier for users to manage Google Authenticator 2-Step Verification (2SV) codes across multiple devices. We are introducing one of the most anticipated features: the ability to transfer 2SV secrets, the data used to generate 2SV codes, across devices that have Google Authenticator installed, for instance when upgrading from an old phone to a new phone. This feature has started rolling out and is available in the latest version (5.10) of Google Authenticator on Android.



Transferring accounts from one device to another with Google Authenticator

Using 2SV, 2-Factor Authentication (2FA), or Multi-Factor Authentication (MFA) is critical to protecting your accounts from unauthorized access. With these mechanisms, users verify their identity through their password and an additional proof of identity, such as a security key or a passcode.

Google Authenticator makes it easy to use 2SV on accounts. In addition to supplying a password when logging in, a user also enters a code generated by the Google Authenticator app on their phone. This is a safer alternative, used by millions of users, to passcodes sent via text message.
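
For background, here is a minimal sketch of the standard TOTP scheme (RFC 6238) that this class of authenticator app implements: HMAC-SHA1 over a 30-second time counter, dynamically truncated to a 6-digit code. The secret below is a placeholder; this is not Google Authenticator's actual code.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* RFC 6238 TOTP sketch: derive a 6-digit code from a shared secret and the
 * current 30-second time step. */
static uint32_t totp(const uint8_t *secret, size_t secret_len, time_t now) {
    uint64_t counter = (uint64_t)now / 30;      /* 30-second time step */
    uint8_t msg[8];
    for (int i = 7; i >= 0; i--) {              /* big-endian counter */
        msg[i] = counter & 0xff;
        counter >>= 8;
    }
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len = 0;
    HMAC(EVP_sha1(), secret, (int)secret_len, msg, sizeof(msg), mac, &mac_len);
    int offset = mac[mac_len - 1] & 0x0f;       /* dynamic truncation */
    uint32_t bin = ((mac[offset] & 0x7f) << 24) |
                   (mac[offset + 1] << 16) |
                   (mac[offset + 2] << 8)  |
                    mac[offset + 3];
    return bin % 1000000;                       /* 6-digit code */
}

int main(void) {
    const uint8_t secret[] = "placeholder-shared-secret";
    printf("%06u\n", (unsigned)totp(secret, sizeof(secret) - 1, time(NULL)));
    return 0;
}
```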

Users place their trust in Google Authenticator to keep their accounts safe. As a result, security is always a high priority. We made several explicit design decisions to minimize the attack surface while increasing the overall usability of the app. 
  • We ensured that no data is sent to Google’s servers during the transfer; communication is directly between your two devices. Your 2SV secrets can’t be accessed without having physical access to your phone and the ability to unlock it.
  • We implemented a variety of alerting mechanisms and in-app logs to make sure users are aware when the transfer function has been used.

You can find more information about the Google Authenticator and its usage guide here.

Research Grants to support Google VRP Bug Hunters during COVID-19



In 2015, we launched our Vulnerability Research Grant program, which allows us to recognize the time and efforts of security researchers, including the situations where they don't find any vulnerabilities. To support our community of security researchers and to help protect our users around the world during COVID-19, we are announcing a temporary expansion of our Vulnerability Research Grant efforts.

In light of new challenges caused by the coronavirus outbreak, we are expanding this initiative by creating a COVID-19 grant fund. As of today, every Google VRP Bug Hunter who submitted at least two remunerated reports from 2018 through April 2020 will be eligible for a $1,337 research grant. We are dedicating these grants to support our researchers during this time. We are committed to protecting our users and we want to encourage the research community to help us identify threats and to prevent potential vulnerabilities in our products.

We understand that the individual challenges COVID-19 has placed on the research community are different for everyone, and we hope that these grants will allow us to support our Bug Hunters during these uncertain times. Even though our grants are intended to recognize the efforts of our frequent researchers regardless of their results, as always, bugs found during the grant are eligible for regular rewards per the Vulnerability Reward Program (VRP) rules. We are aware that some of our partners might not be interested in monetary grants. In such cases, we will offer the option to donate the grant to an established COVID-19 related charity and, at our discretion, we will monetarily match these charitable donations.

For those of you who recently joined us or are planning to start, it’s never too late. We are committed to continuing the Vulnerability Research Grant program throughout 2020, so stay tuned for future announcements and follow us on @GoogleVRP!

Introducing our new book “Building Secure and Reliable Systems”



For years, I’ve wished that someone would write a book like this. Since their publication, I’ve often admired and recommended the Google Site Reliability Engineering (SRE) books—so I was thrilled to find that a book focused on security and reliability was already underway when I arrived at Google. Ever since I began working in the tech industry, across organizations of varying sizes, I’ve seen people struggling with the question of how security should be organized: Should it be centralized or federated? Independent or embedded? Operational or consultative? Technical or governing? The list goes on and on.

Both SRE and security have strong dependencies on classic software engineering teams. Yet both differ from classic software engineering teams in fundamental ways:
  • Site Reliability Engineers (SREs) and security engineers tend to break and fix, as well as build.
  • Their work encompasses operations, in addition to development.
  • SREs and security engineers are specialists, rather than classic software engineers.
  • They are often viewed as roadblocks, rather than enablers.
  • They are frequently siloed, rather than integrated in product teams.
For many years, my colleagues and I have argued that security should be a first-class and embedded quality of software. I believe that embracing an SRE-inspired approach is a logical step in that direction. As my understanding of the intersection between security and SRE has deepened, I’ve become even more certain that it’s important to more thoroughly integrate security practices into the full lifecycle of software and data services. The nature of the modern hybrid cloud (much of which is based on open source software frameworks that offer interconnected data and microservices) makes tightly integrated security and resilience capabilities even more important.

At the same time, enterprises are at a critical point where cloud computing, various forms of machine learning, and a complicated cybersecurity landscape are together determining where an increasingly digital world is going, how quickly it will get there, and what risks are involved.

The operational and organizational approaches to security in large enterprises have varied dramatically over the past 20 years. The most prominent instantiations include fully centralized chief information security officers and core infrastructure operations that encompass firewalls, directory services, proxies, and much more—teams that have grown to hundreds or thousands of employees. On the other end of the spectrum, federated business information security teams have either the line of business or technical expertise required to support or govern a named list of functions or business operations. Somewhere in the middle, committees, metrics, and regulatory requirements might govern security policies, and embedded Security Champions might either play a relationship management role or track issues for a named organizational unit. Recently, I’ve seen teams riffing on the SRE model by evolving the embedded role into something like a site security engineer, or into a specific Agile scrum role for specialist security teams.

For good reasons, enterprise security teams have largely focused on confidentiality. However, organizations often recognize data integrity and availability to be equally important, and address these areas with different teams and different controls. The SRE function is a best-in-class approach to reliability. However, it also plays a role in the real-time detection of and response to technical issues, including security-related attacks on privileged access or sensitive data. Ultimately, while engineering teams are often organizationally separated according to specialized skill sets, they have a common goal: ensuring the quality and safety of the system or application.

In a world that is becoming more dependent upon technology every year, a book about approaches to security and reliability drawn from experiences at Google and across the industry is an important contribution to the evolution of software development, systems management, and data protection. As the threat landscape evolves, a dynamic and integrated approach to defense is now a basic necessity. In my previous roles, I looked for a more formal exploration of these questions; I hope that a variety of teams inside and outside of security organizations find this discussion useful as approaches and tools evolve. This project has reinforced my belief that the topics it covers are worth discussing and promoting in the industry—particularly as more organizations adopt DevOps, DevSecOps, SRE, and hybrid cloud architectures along with their associated operating models. At a minimum, this book is another step in the evolution and enhancement of system and data security in an increasingly digital world.

The new book can be downloaded for free from the Google SRE website, or purchased as a physical copy from your preferred retailer.