Tag Archives: Security

Safe Browsing: Protecting more than 3 billion devices worldwide, automatically

[Cross-posted from The Keyword]

In 2007, we launched Safe Browsing, one of Google’s earliest anti-malware efforts. To keep our users safe, we’d show them a warning before they visited a site that might’ve harmed their computers.
Computing has evolved a bit in the last decade, though. Smartphones created a more mobile internet, and now AI is increasingly changing how the world interacts with it. Safe Browsing also had to evolve to effectively protect users.

And it has: In May 2016, we announced that Safe Browsing was protecting more than 2 billion devices from badness on the internet. Today we’re announcing that Safe Browsing has crossed the threshold to 3 billion devices. We’re sharing a bit more about how we got here, and where we’re going.

What is Safe Browsing?

You may not know Safe Browsing by name, since most of the time we’re invisibly protecting you, without getting in the way. But you may have seen a warning like this at some point:
This notification is one of the visible parts of Safe Browsing, a collection of Google technologies that hunt badness—typically websites that deceive users—on the internet. We identify sites that might try to phish you, or sites that install malware or other undesirable software. The systems that make up Safe Browsing work together to identify, analyze and continuously keep Safe Browsing’s knowledge of the harmful parts of the internet up to date.

This protective information that we generate—a curated list of places that are dangerous for people and their devices—is used across many of our products. It helps keep search results safe and keep ads free from badness; it’s integral to Google Play Protect and keeps you safe on Android; and it helps Gmail shield you from malicious messages.

And Safe Browsing doesn’t protect only Google’s products. For many years, Safari and Firefox have protected their users with Safe Browsing as well. If you use an up-to-date version of Chrome, Firefox or Safari, you’re protected by default. Safe Browsing is also used widely by web developers and app developers (including Snapchat), who integrate our protections by checking URLs before they’re presented to their users.

Protecting more people with fewer bits

In the days when web browsers were used only on personal computers, we didn’t worry much about the amount of data Safe Browsing sent over the internet to keep your browser current. Mobile devices changed all that: Slow connections, expensive mobile data plans, and scarce battery capacity became important new considerations.

So over the last few years, we’ve rethought how Safe Browsing delivers data. We built new technologies to make its data as compact as possible: We only send the information that’s most protective to a given device, and we make sure this data is compressed as tightly as possible. (All this work benefits desktop browsers, too!)

We initially introduced our new mobile-optimized method in late 2015 with Chrome on Android, and made it more broadly available in mid-2016, when we also started actively encouraging Android developers to integrate it. With the release of iOS 10 in September 2016, Safari began using our new, efficient Safe Browsing update technology, giving iOS users a protection boost.

Safe Browsing in an AI-first world

The internet is at the start of another major shift. Safe Browsing has already been using machine learning for many years to detect many kinds of badness. We're continually evaluating and integrating cutting-edge new approaches to improve Safe Browsing.

Protecting all users across all their platforms makes the internet safer for everyone. Wherever the future of the internet takes us, Safe Browsing will be there, continuing to evolve, expand, and protect people wherever they are.

Chrome’s Plan to Distrust Symantec Certificates

This post is a broader announcement of plans already finalized on the blink-dev mailing list.

At the end of July, the Chrome team and the PKI community converged upon a plan to reduce, and ultimately remove, trust in Symantec’s infrastructure in order to uphold users’ security and privacy when browsing the web. This plan, arrived at after significant debate on the blink-dev forum, would allow reasonable time for a transition to new, independently-operated Managed Partner Infrastructure while Symantec modernizes and redesigns its infrastructure to adhere to industry standards. This post reiterates this plan and includes a timeline detailing when site operators may need to obtain new certificates.

On January 19, 2017, a public posting to the mozilla.dev.security.policy newsgroup drew attention to a series of questionable website authentication certificates issued by Symantec Corporation’s PKI. Symantec’s PKI business, which operates a series of Certificate Authorities under various brand names, including Thawte, VeriSign, Equifax, GeoTrust, and RapidSSL, had issued numerous certificates that did not comply with the industry-developed CA/Browser Forum Baseline Requirements. During the subsequent investigation, it was revealed that Symantec had entrusted several organizations with the ability to issue certificates without the appropriate or necessary oversight, and had been aware of security deficiencies at these organizations for some time.

This incident, while distinct from a previous incident in 2015, was part of a continuing pattern of issues over the past several years that has caused the Chrome team to lose confidence in the trustworthiness of Symantec’s infrastructure, and as a result, the certificates that have been or will be issued from it.

After our agreed-upon proposal was circulated, Symantec announced the selection of DigiCert to run this independently-operated Managed Partner Infrastructure, as well as their intention to sell their PKI business to DigiCert in lieu of building a new trusted infrastructure. This post outlines the timeline for that transition and the steps that existing Symantec customers should take to minimize disruption to their users.

Information For Site Operators

Starting with Chrome 66, Chrome will remove trust in Symantec-issued certificates issued prior to June 1, 2016. Chrome 66 is currently scheduled to be released to Chrome Beta users on March 15, 2018 and to Chrome Stable users around April 17, 2018.

If you are a site operator with a certificate issued by a Symantec CA prior to June 1, 2016, then prior to the release of Chrome 66, you will need to replace the existing certificate with a new certificate from any Certificate Authority trusted by Chrome.

Additionally, by December 1, 2017, Symantec will transition issuance and operation of publicly-trusted certificates to DigiCert infrastructure, and certificates issued from the old Symantec infrastructure after this date will not be trusted in Chrome.

Around the week of October 23, 2018, Chrome 70 will be released, which will fully remove trust in Symantec’s old infrastructure and all of the certificates it has issued. This will affect any certificate chaining to Symantec roots, except for the small number issued by the independently-operated and audited subordinate CAs previously disclosed to Google.

Site operators that need to obtain certificates from Symantec’s existing root and intermediate certificates may do so from the old infrastructure until December 1, 2017, although these certificates will need to be replaced again prior to Chrome 70. Additionally, certificates issued from Symantec’s infrastructure will have their validity limited to 13 months. Alternatively, site operators may obtain replacement certificates from any other Certificate Authority currently trusted by Chrome, which are unaffected by this distrust or validity period limit.
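For site operators scripting an inventory of affected certificates, the key question is whether a certificate's notBefore date falls before the June 1, 2016 cutoff. Here is a minimal sketch of that date comparison in C; extracting notBefore from an actual certificate (for example, with a TLS library) is out of scope, and the function name is illustrative only.

```c
#include <time.h>

/* Sketch: does a certificate's notBefore date fall under the Chrome 66
 * distrust (Symantec-issued, issued before June 1, 2016)? Only the date
 * comparison is shown. */
static int distrusted_by_chrome66(struct tm not_before)
{
    struct tm cutoff = {0};
    cutoff.tm_year = 2016 - 1900;   /* years since 1900 */
    cutoff.tm_mon  = 5;             /* June (tm_mon is 0-based) */
    cutoff.tm_mday = 1;
    return mktime(&not_before) < mktime(&cutoff);
}
```

Both timestamps are converted the same way, so the comparison is stable regardless of time zone.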
Reference Timeline

The following is a timeline of relevant dates associated with this plan, which distills the various requirements and milestones into an actionable set of information for site operators. As always, Chrome release dates can vary by a number of days, but upcoming release dates can be tracked here.

Now through ~March 15, 2018
Site Operators using Symantec-issued TLS server certificates issued before June 1, 2016 should replace these certificates. These certificates can be replaced by any currently trusted CA.
~October 24, 2017
Chrome 62 released to Stable, which will add alerting in DevTools when evaluating certificates that will be affected by the Chrome 66 distrust.
December 1, 2017
According to Symantec, DigiCert’s new “Managed Partner Infrastructure” will at this point be capable of full issuance. Any certificates issued by Symantec’s old infrastructure after this point will cease working in a future Chrome update.

From this date forward, Site Operators can obtain TLS server certificates from the new Managed Partner Infrastructure that will continue to be trusted after Chrome 70 (~October 23, 2018).

December 1, 2017 does not mandate any certificate changes, but represents an opportunity for site operators to obtain TLS server certificates that will not be affected by Chrome 70’s distrust of the old infrastructure.
~March 15, 2018
Chrome 66 released to beta, which will remove trust in Symantec-issued certificates with a not-before date prior to June 1, 2016. As of this date Site Operators must be using either a Symantec-issued TLS server certificate issued on or after June 1, 2016 or a currently valid certificate issued from any other trusted CA as of Chrome 66.

Site Operators that obtained a certificate from Symantec’s old infrastructure after June 1, 2016 are unaffected by Chrome 66 but will need to obtain a new certificate by the Chrome 70 dates described below.
~April 17, 2018
Chrome 66 released to Stable.
~September 13, 2018
Chrome 70 released to Beta, which will remove trust in the old Symantec-rooted Infrastructure. This will not affect any certificate chaining to the new Managed Partner Infrastructure, which Symantec has said will be operational by December 1, 2017.

Only TLS server certificates issued by Symantec’s old infrastructure will be affected by this distrust regardless of issuance date.
~October 23, 2018
Chrome 70 released to Stable.

Hardening the Kernel in Android Oreo

Posted by Sami Tolvanen, Senior Software Engineer, Android Security

The hardening of Android's userspace has increasingly made the underlying Linux kernel a more attractive target to attackers. As a result, more than a third of Android security bugs were found in the kernel last year. In Android 8.0 (Oreo), significant effort has gone into hardening the kernel to reduce the number and impact of security bugs.

Android Nougat worked to protect the kernel by isolating it from userspace processes with the addition of SELinux ioctl filtering and requiring seccomp-bpf support, which allows apps to filter access to available system calls when processing untrusted input. Android 8.0 focuses on kernel self-protection with four security-hardening features backported from upstream Linux to all Android kernels supported in devices that first ship with this release.

Hardened usercopy

Usercopy functions are used by the kernel to transfer data from user space to kernel space memory and back again. Since 2014, missing or invalid bounds checking has caused about 45% of Android's kernel vulnerabilities. Hardened usercopy adds bounds checking to usercopy functions, which helps developers spot misuse and fix bugs in their code. Also, if obscure driver bugs slip through, hardening these functions prevents the exploitation of such bugs.

This feature was introduced in the upstream kernel version 4.8, and we have backported it to Android kernels 3.18 and above.

int buggy_driver_function(void __user *src, size_t size)
{
    /* potential size_t overflow (don't do this) */
    u8 *buf = kmalloc(size * N, GFP_KERNEL);

    /* results in buf smaller than size, and a heap overflow */
    if (copy_from_user(buf, src, size))
        return -EFAULT;

    /* never reached with CONFIG_HARDENED_USERCOPY=y */
    return 0;
}

An example of a security issue that hardened usercopy prevents.

Privileged Access Never (PAN) emulation

While hardened usercopy functions help find and mitigate security issues, they can only help if developers actually use them. Currently, all kernel code, including drivers, can access user space memory directly, which can lead to various security issues.

To mitigate this, CPU vendors have introduced features such as Supervisor Mode Access Prevention (SMAP) in x86 and Privileged Access Never (PAN) in ARM v8.1. These features prevent the kernel from accessing user space directly and ensure developers go through usercopy functions. Unfortunately, these hardware features are not yet widely available in devices that most Android users have today.

Upstream Linux introduced software emulation for PAN in kernel version 4.3 for ARM and 4.10 for ARM64. We have backported both features to Android kernels starting from 3.18.

Together with hardened usercopy, PAN emulation has helped find and fix bugs in four kernel drivers in Pixel devices.

int buggy_driver_copy_data(struct mydata *src, void __user *ptr)
{
    /* failure to keep track of user space pointers */
    struct mydata *dst = (struct mydata *)ptr;

    /* read/write from/to an arbitrary user space memory location */
    dst->field = … ;    /* use copy_(from|to)_user instead! */

    /* never reached with PAN (emulation) or SMAP */
    return 0;
}

An example of a security issue that PAN emulation mitigates.

Kernel Address Space Layout Randomization (KASLR)

Android has included support for Address Space Layout Randomization (ASLR) for years. Randomizing memory layout makes code reuse attacks probabilistic and therefore more difficult for an attacker to exploit, especially remotely. Android 8.0 brings this feature to the kernel. While Linux has supported KASLR on x86 since version 3.14, KASLR for ARM64 has only been available upstream since Linux 4.6. Android 8.0 makes KASLR available in Android kernels 4.4 and newer.

KASLR helps mitigate kernel vulnerabilities by randomizing the location where kernel code is loaded on each boot. On ARM64, for example, it adds 13–25 bits of entropy depending on the memory configuration of the device, which makes code reuse attacks more difficult.
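To put those entropy figures in concrete terms, here is a small illustrative sketch (the function name is hypothetical) of how many equally likely kernel base addresses an attacker must guess among:

```c
#include <stdint.h>

/* Sketch: n bits of KASLR entropy means 2^n equally likely kernel base
 * addresses, so a single blind guess succeeds with probability 1 in 2^n. */
static uint64_t kaslr_slots(unsigned entropy_bits)
{
    return (uint64_t)1 << entropy_bits;
}
```

At 13 bits that is 8,192 candidate bases; at 25 bits, over 33 million. Since a wrong guess typically crashes the kernel, even the low end makes blind code reuse attacks impractical.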

Post-init read-only memory

The final hardening feature extends existing memory protections in the kernel by creating a memory region that's marked read-only after the kernel has been initialized. This makes it possible for developers to improve protection on data that needs to be writable during initialization, but shouldn't be modified after that. Having less writable memory reduces the internal attack surface of the kernel, making exploitation harder.

Post-init read-only memory was introduced in upstream kernel version 4.6 and we have backported it to Android kernels 3.18 and newer. While we have applied these protections to some data structures in the core kernel, this feature is extremely useful for developers working on kernel drivers.
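In driver code, this protection is applied through the kernel's __ro_after_init annotation (declared in the kernel's <linux/init.h>). The sketch below stubs the attribute out so the fragment stands alone; in a real driver the annotation moves the variable into a section the kernel write-protects once initialization finishes, and the variable name here is illustrative.

```c
/* Stub so this fragment compiles outside the kernel tree; the real
 * definition places the data in the .data..ro_after_init section. */
#ifndef __ro_after_init
#define __ro_after_init
#endif

/* Writable while the driver initializes, read-only afterwards. */
static int driver_debug_mask __ro_after_init = 0x3;
```

Any write to such a variable after init triggers a kernel fault instead of silently corrupting state.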


Android Oreo includes mitigations for the most common source of security bugs in the kernel. This is especially relevant because 85% of kernel security bugs in Android have been in vendor drivers that tend to get much less scrutiny. These updates make it easier for driver developers to discover common bugs during development, stopping them before they can reach end user devices.

Certified Android devices: Safe and secure

Android is an open-source platform with an ecosystem that started with one device, one carrier and one manufacturer in 2008. Today, there are over 2 billion active devices worldwide. The pace of innovation has never been greater, offering users choice and diversity.

Google provides certification for Android devices to make sure users receive secure and stable experiences. We work with manufacturers across the globe to run hundreds of compatibility tests that ensure devices adhere to the Android security and permissions model. These tests also verify that the Google apps pre-installed on devices are authentic, and that apps from the Play Store can work as intended.

Certified devices also come with Google Play Protect out of the box, providing users with a suite of security features that include automatic device scanning for malware. This provides baseline protection against malware, privacy hacks and more.

Today, hundreds of licensed manufacturers offer a great experience by getting their devices certified. You can find the list of our partners here. And when you shop for a new Android phone or tablet, we recommend asking for a certified device or looking for the Google Play Protect logo on the box, to make sure it brings the benefits of certification and additional layers of security provided by Play Protect.

Please visit https://www.android.com/intl/en_in/certified/, a new website we launched today, to learn more about certified Android devices.


Making it safer to get apps on Android O

Posted by Edward Cunningham. Product Manager, Android Security

Eagle-eyed users of Android O will have noticed the absence of the 'Allow unknown sources' setting, which has existed since the earliest days of Android to facilitate the installation of apps from outside of Google Play and other preloaded stores. In this post we'll talk about the new Install unknown apps permission and the security benefits it brings for both Android users and developers.

Earlier this year we introduced Google Play Protect - comprehensive security services that are always at work to protect your device from harm. Google Play continues to be one of the safest places for Android users to download their apps, with the majority of Potentially Harmful Apps (PHAs) originating from third-party sources.

A common strategy employed by PHA authors is to deliver their apps via a hostile downloader. For example, a gaming app might not contain malicious code but instead might notify the user to install a PHA that masquerades as an important security update. (You can read more about hostile downloaders in the Android Security 2016 Year in Review). Users who have enabled the installation of apps from unknown sources leave themselves vulnerable to this deceptive behavior.

Left (pre-Android O): The install screen for a PHA masquerading as a system update.
Right (Android O): Before the PHA is installed, the user must first grant permission to the app that triggered the install.

In Android O, the Install unknown apps permission makes it safer to install apps from unknown sources. This permission is tied to the app that prompts the install—just like other runtime permissions—and ensures that the user grants permission to use the install source before it can prompt the user to install an app. On a device running Android O and higher, hostile downloaders cannot trick the user into installing an app without having first been given the go-ahead.

This new permission provides users with transparency, control, and a streamlined process to enable installs from trusted sources. The Settings app shows the list of apps that the user has approved for installing unknown apps. Users can revoke the permission for a particular app at any time.

To make the permission-granting process easier, app developers can choose to direct users to their permission screen as part of the setup flow.

Developer changes

To take advantage of this new behavior, developers of apps that require the ability to download and install other apps via the Package Installer may need to make some changes. If an app uses a targetSdkVersion of 26 or above and prompts the user to install other apps, its manifest file needs to include the REQUEST_INSTALL_PACKAGES permission:

<uses-permission android:name="android.permission.REQUEST_INSTALL_PACKAGES" />

Apps that haven't declared this permission cannot install other apps, a handy security protection for apps that have no intention of doing so. You can choose to pre-emptively direct your users to the Install unknown apps permission screen using the ACTION_MANAGE_UNKNOWN_APP_SOURCES Intent action. You can also query the state of this permission using the PackageManager canRequestPackageInstalls() API.

Remember that Play policies still apply to apps distributed on Google Play if those apps can install and update other apps. In the majority of cases, such behavior is inappropriate; you should instead provide a deep link to the app's listing on the Play Store.

Be sure to check out the updated publishing guide that provides more information about installing unknown apps, and stay tuned for more posts on security hardening in Android O.

Android Bug Swatting with Sanitizers

Posted by Dan Austin, Android Security team

LLVM, the compiler infrastructure used to build Android, contains multiple components that perform static and dynamic analysis. One set of these components that have been used extensively when analyzing Android are the sanitizers, specifically AddressSanitizer, UndefinedBehaviorSanitizer and SanitizerCoverage. These sanitizers are compiler-based instrumentation components contained in compiler-rt that can be used in the development and testing process to push out bugs and make Android better. The sanitizers that are currently available in Android can discover and diagnose many memory misuse bugs and undefined behavior and can give code coverage metrics to ensure that your test suite is as complete as possible.

This blog post details the internals of the current Android sanitizers—AddressSanitizer, UndefinedBehaviorSanitizer and SanitizerCoverage—and shows how they can be used within the Android build system.

Address Sanitizer

AddressSanitizer (ASan) is a compiler-based instrumentation capability that allows for runtime detection of many types of memory errors in C/C++ code. In Android, checks for the following classes of memory errors have been tested:

  • Out-of-bounds accesses to heap, stack and globals
  • Use-after-free
  • Use-after-return (runtime flag ASAN_OPTIONS=detect_stack_use_after_return=1)
  • Use-after-scope (clang flag -fsanitize-address-use-after-scope)
  • Double-free, invalid free

Android allows for full build instrumentation by ASan, and also allows for ASan instrumentation at the app level through asanwrapper. Instructions for both instrumentation techniques can be found on source.android.com.

AddressSanitizer is built upon two high-level concepts. The first is instrumentation of all memory-related function calls, including alloca, malloc, and free, with information to track memory allocation, freeing, and usage statistics. This instrumentation allows ASan to detect invalid memory usage bugs, including double-free, use-after-scope, use-after-return, and use-after-free. The second is padding: ASan detects reads and writes that occur out of bounds of defined memory regions by padding all allocated memory buffers and variables. If a read or write to this padding region occurs, ASan catches it and outputs information useful for diagnosing the memory violation. This padding is known as poisoned memory in ASan terms. Here is an example of what poisoned memory padding looks like with stack-allocated variables:

Figure 1. Example of ASANified stack variables with an int8_t array of 8 elements, a uint32_t, and an int8_t array of 16 elements. The memory layout after compiling with ASAN is on the right, with padding between each variable. For each stack variable, there are 32 bytes of padding before and after the variable. If the object size of a variable is not 32 bytes, then an additional 32 - n bytes of padding are inserted, where n is the object size.

ASan uses shadow memory to keep track of which bytes are normal memory and which bytes are poisoned memory. Bytes can be marked as completely normal (marked as 0 in shadow memory), completely poisoned (high bit of the corresponding shadow byte is set), or the first k bytes are unpoisoned (shadow byte value is k). If shadow memory indicates a byte is poisoned, then ASan crashes the program and outputs information useful for debugging purposes, including the call stack, shadow memory map, the type of memory violation, what was read or written, PC that caused the violation and the memory contents.

AddressSanitizer: heap-buffer-overflow on address 0xe6146cf3 at pc 0xe86eeb3c bp 0xffe67348 sp 0xffe66f14
WRITE of size 39 at 0xe6146cf3 thread T0
    #0 0xe86eeb3b  (/system/lib/libclang_rt.asan-arm-android.so+0x64b3b)
    #1 0xaddc5d27  (/data/simple_test_fuzzer+0x4d27)
    #2 0xaddd08b9  (/data/simple_test_fuzzer+0xf8b9)
    #3 0xaddd0a97  (/data/simple_test_fuzzer+0xfa97)
    #4 0xaddd0fbb  (/data/simple_test_fuzzer+0xffbb)
    #5 0xaddd109f  (/data/simple_test_fuzzer+0x1009f)
    #6 0xaddcbfb9  (/data/simple_test_fuzzer+0xafb9)
    #7 0xaddc9ceb  (/data/simple_test_fuzzer+0x8ceb)
    #8 0xe8655635  (/system/lib/libc.so+0x7a635)
0xe6146cf3 is located 0 bytes to the right of 35-byte region [0xe6146cd0,0xe6146cf3)
allocated by thread T0 here:
    #0 0xe87159df  (/system/lib/libclang_rt.asan-arm-android.so+0x8b9df)
    #1 0xaddc5ca7  (/data/simple_test_fuzzer+0x4ca7)
    #2 0xaddd08b9  (/data/simple_test_fuzzer+0xf8b9)
SUMMARY: AddressSanitizer: heap-buffer-overflow (/system/lib/libclang_rt.asan-arm-android.so+0x64b3b) 
Shadow bytes around the buggy address:
  0x1cc28d40: fa fa 00 00 00 00 07 fa fa fa fd fd fd fd fd fd
  0x1cc28d50: fa fa 00 00 00 00 07 fa fa fa fd fd fd fd fd fd
  0x1cc28d60: fa fa 00 00 00 00 00 02 fa fa fd fd fd fd fd fd
  0x1cc28d70: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x1cc28d80: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x1cc28d90: fa fa fa fa fa fa fa fa fa fa 00 00 00 00[03]fa
  0x1cc28da0: fa fa 00 00 00 00 07 fa fa fa 00 00 00 00 03 fa
  0x1cc28db0: fa fa fd fd fd fd fd fa fa fa fd fd fd fd fd fa
  0x1cc28dc0: fa fa 00 00 00 00 00 02 fa fa fd fd fd fd fd fd
  0x1cc28dd0: fa fa 00 00 00 00 00 02 fa fa fd fd fd fd fd fd
  0x1cc28de0: fa fa 00 00 00 00 00 02 fa fa fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb

More information on what each part of the report means, and how to make it more user-friendly, can be found on the LLVM website and on GitHub.
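The shadow-byte rules in the legend above can be read mechanically. Here is a simplified sketch of the per-byte check ASan performs (the function name is illustrative; real ASan computes the shadow address from the access address and handles multi-byte accesses in compiler-inserted code):

```c
#include <stdint.h>

/* One shadow byte covers an 8-byte granule: 0 means fully addressable,
 * a small positive k means only the first k bytes are addressable, and a
 * negative value (high bit set, e.g. 0xfa for a heap redzone) means the
 * whole granule is poisoned. */
static int asan_byte_is_poisoned(int8_t shadow, unsigned offset_in_granule)
{
    if (shadow == 0)
        return 0;                       /* whole granule addressable */
    if (shadow < 0)
        return 1;                       /* redzone, freed memory, etc. */
    return offset_in_granule >= (unsigned)shadow;  /* partial granule */
}
```

In the report above, the buggy address maps to the shadow byte 03: the first 3 bytes of that granule are addressable, and the 39-byte write ran past them into the fa redzone.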

Sometimes, the bug discovery process can appear to be non-deterministic, especially when bugs require special setup or more advanced techniques, such as heap priming or race condition exploitation. Many of these bugs are not immediately apparent, and could surface thousands of instructions away from the memory violation that was the actual root cause. As ASan instruments all memory-related functions and pads data with areas that cannot be accessed without triggering an ASan-related callback, memory violations are caught the instant they occur, instead of waiting for a crash-inducing corruption. This is extremely useful in bug discovery and root cause diagnosis. In addition, ASan is an extremely useful tool for fuzzing, and has been used in many fuzzing efforts on Android.


UndefinedBehaviorSanitizer

UndefinedBehaviorSanitizer (UBSan) performs compile-time instrumentation to check for various types of undefined behavior. Device manufacturers can include it in their test builds by including LOCAL_SANITIZE:=default-ub in their makefiles or default-ub: true in the sanitize block of blueprint files. While UBSan can detect many kinds of undefined behavior, Android's build system directly supports:

  • bool
  • integer-divide-by-zero
  • return
  • returns-nonnull-attribute
  • shift-exponent
  • unreachable
  • vla-bound

UBSan's integer overflow checking is also used in Android's build system. UBSan also supports unsigned-integer-overflow, which is not technically undefined behavior but is included in the sanitizer anyway. These checks can be enabled in makefiles by setting LOCAL_SANITIZE to signed-integer-overflow, unsigned-integer-overflow, or the combination flag integer, which enables signed-integer-overflow, unsigned-integer-overflow, integer-divide-by-zero, shift-base, and shift-exponent. They can be enabled in blueprint files by setting Misc_undefined to the desired flags. These UBSan targets, especially unsigned-integer-overflow, are used extensively in the mediaserver components to eliminate latent integer overflow vulnerabilities.
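The wrap-around these checks flag at runtime can be reproduced in ordinary code with the GCC/Clang __builtin_add_overflow builtin. This sketch (the wrapper name is illustrative) shows the same class of unsigned 32-bit overflow that UBSan reports:

```c
#include <stdint.h>

/* Returns nonzero if a + b wraps past UINT32_MAX, i.e. the condition
 * UBSan's unsigned-integer-overflow check would report at runtime. */
static int u32_add_overflows(uint32_t a, uint32_t b, uint32_t *sum)
{
    return __builtin_add_overflow(a, b, sum);
}
```

Unlike UBSan, which instruments every arithmetic operation at compile time, this builtin checks a single operation explicitly; the two are complementary.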

The default behavior on Android is to abort the program when undefined behavior is encountered. However, starting in October 2016, UBSan on Android has an optional runtime library that gives more detailed error reporting, including the type of undefined behavior encountered and the file and source line where it occurred.

In Android.mk files, this is enabled with:

LOCAL_SANITIZE:=unsigned-integer-overflow signed-integer-overflow
LOCAL_SANITIZE_DIAG:=unsigned-integer-overflow signed-integer-overflow

And in Android.bp files, it is enabled with:

sanitize: {
    misc_undefined: [
        "unsigned-integer-overflow",
        "signed-integer-overflow",
    ],
    diag: {
        misc_undefined: [
            "unsigned-integer-overflow",
            "signed-integer-overflow",
        ],
    },
},

Here is an example of the information provided by the UBSan runtime library:

external/icu/icu4c/source/common/ucnv.c:1193:23: runtime error: unsigned integer overflow: 4291925010 + 2147483647 cannot be represented in type 'unsigned int'
external/icu/icu4c/source/common/cstring.c:288:16: runtime error: unsigned integer overflow: 0 - 1 cannot be represented in type 'uint32_t' (aka 'unsigned int')
external/harfbuzz_ng/src/hb-private.hh:894:16: runtime error: unsigned integer overflow: 72 - 55296 cannot be represented in type 'unsigned int'
external/harfbuzz_ng/src/hb-set-private.hh:82:24: runtime error: unsigned integer overflow: 32 - 562949953421312 cannot be represented in type 'unsigned long'
system/keymaster/authorization_set.cpp:500:37: runtime error: unsigned integer overflow: 6843601868186924302 * 24 cannot be represented in type 'unsigned long'


The sanitizer tools also have a simple code coverage facility, SanitizerCoverage, built in. SanitizerCoverage allows for code coverage at the call level, basic block level, or edge level. It can be used as a standalone instrumentation technique or in conjunction with any of the sanitizers, including AddressSanitizer and UndefinedBehaviorSanitizer. To use the new guard-based coverage, set -fsanitize-coverage=trace-pc-guard. This causes the compiler to insert __sanitizer_cov_trace_pc_guard(&guard_variable) on every edge, where each edge has its own uint32_t guard_variable. In addition, a module constructor, __sanitizer_cov_trace_pc_guard_init(uint32_t* start, uint32_t* stop), is generated. All of the __sanitizer_cov_ functions must be provided by the user. You can follow the example in Tracing PCs with guards.

In addition to control flow tracing, SanitizerCoverage allows for data flow tracing. This is activated with -fsanitize-coverage=trace-cmp and is implemented by instrumenting all switch and comparison instructions with __sanitizer_cov_trace_* functions. Similar functionality exists for integer division and GEP instructions, activated with -fsanitize-coverage=trace-div and -fsanitize-coverage=trace-gep respectively. This is an experimental interface that is not thread-safe and could change at any time; however, it is available and functional in Android builds.

During a coverage sanitizer session, the coverage information is recorded in two files: a .sancov file and a sancov.map file. The first contains all instrumented points in the program, and the second contains the execution trace, represented as a sequence of indices into the first file. By default, these files are stored in the current working directory, with one created for each executable and shared object that ran during the execution.


ASan, UBSan, and SanitizerCoverage are just the beginning of LLVM sanitizer use in Android; more LLVM sanitizers are being integrated into the Android build system. The sanitizers described here serve as a code health and system stability mechanism, and are actively used by Android Security to find and prevent security bugs.

From Chrysaor to Lipizzan: Blocking a new targeted spyware family

Android Security is always developing new ways of using data to find and block potentially harmful apps (PHAs) from getting onto your devices. Earlier this year, we announced we had blocked Chrysaor targeted spyware, believed to be written by NSO Group, a cyber arms company. In the course of our Chrysaor investigation, we used similar techniques to discover a new and unrelated family of spyware called Lipizzan. Lipizzan’s code contains references to a cyber arms company, Equus Technologies.

Lipizzan is a multi-stage spyware product capable of monitoring and exfiltrating a user’s email, SMS messages, location, voice calls, and media. We have found 20 Lipizzan apps distributed in a targeted fashion to fewer than 100 devices in total and have blocked the developers and apps from the Android ecosystem. Google Play Protect has notified all affected devices and removed the Lipizzan apps.

We’ve enhanced Google Play Protect’s capabilities to detect the targeted spyware used here and will continue to use this framework to block more targeted spyware. To learn more about the methods Google uses to find targeted mobile spyware like Chrysaor and Lipizzan, attend our BlackHat talk, Fighting Targeted Malware in the Mobile Ecosystem.

How does Lipizzan work?

Getting on a target device

Lipizzan was a sophisticated two-stage spyware tool. The first stage found by Google Play Protect was distributed through several channels, including Google Play, and typically impersonated an innocuous-sounding app such as a “Backup” or “Cleaner” app. Upon installation, Lipizzan would download and load a second “license verification” stage, which would survey the infected device and validate certain abort criteria. If given the all-clear, the second stage would then root the device with known exploits and begin to exfiltrate device data to a Command & Control server.

Once implanted on a target device

The Lipizzan second stage was capable of performing and exfiltrating the results of the following tasks:

  • Call recording
  • VoIP recording
  • Recording from the device microphone
  • Location monitoring
  • Taking screenshots
  • Taking photos with the device camera(s)
  • Fetching device information and files
  • Fetching user information (contacts, call logs, SMS, application-specific data)

The PHA had specific routines to retrieve data from each of the following apps:

  • Gmail
  • Hangouts
  • KakaoTalk
  • LinkedIn
  • Messenger
  • Skype
  • Snapchat
  • StockEmail
  • Telegram
  • Threema
  • Viber
  • WhatsApp

We saw all of this behavior in a standalone stage 2 app, com.android.mediaserver (not related to Android MediaServer). This app shared a signing certificate with one of the stage 1 applications, com.app.instantbackup, indicating that both were written by the same author. A code snippet from the 2nd stage (com.android.mediaserver) allowed us to draw ties to the stage 1 applications.

Morphing first stage

After we blocked the first set of apps on Google Play, new apps were uploaded with a similar format but a couple of differences:

  • The apps changed from “backup” apps to looking like a “cleaner”, “notepad”, “sound recorder”, or “alarm manager” app. The new apps were uploaded within a week of the takedown, showing that the authors have a method of easily changing the branding of the implant apps.
  • The apps changed from downloading an unencrypted stage 2 to including stage 2 as an encrypted blob. The new stage 1 would only decrypt and load the 2nd stage if it received an intent with an AES key and IV.

Despite changing the type of app and the method to download stage 2, we were able to catch the new implant apps soon after upload.

How many devices were affected?

There were fewer than 100 devices that checked into Google Play Protect with the apps listed below, meaning the family affected only about 0.000007% of Android devices. Since identifying Lipizzan, Google Play Protect has removed it from affected devices and actively blocks installs on new devices.

What can you do to protect yourself?

  • Ensure you are opted into Google Play Protect
  • Exclusively use the Google Play store. The chance you will install a PHA is much lower on Google Play than using other install mechanisms.
  • Keep “unknown sources” disabled while not using it.
  • Keep your phone patched to the latest Android security update.

List of samples

1st stage

Newer version 

Standalone 2nd stage

Seccomp filter in Android O

Posted by Paul Lawrence, Android Security Engineer
In Android-powered devices, the kernel does the heavy lifting to enforce the Android security model. As the security team has worked to harden Android's userspace and isolate and deprivilege processes, the kernel has become the focus of more security attacks. System calls are a common way for attackers to target the kernel.
All Android software communicates with the Linux kernel using system calls, or syscalls for short. The kernel provides many device- and SOC-specific syscalls that allow userspace processes, including apps, to interact directly with the kernel. All apps rely on this mechanism to access kernel functionality, such as opening a file or sending a Binder message. However, many of these syscalls are not used or officially supported by Android.
Android O takes advantage of a Linux feature called seccomp that makes unused system calls inaccessible to application software. Because these syscalls cannot be accessed by apps, they can't be exploited by potentially harmful apps.

seccomp filter

Android O includes a single seccomp filter installed into zygote, the process from which all Android applications are derived. Because the filter is installed into zygote, and therefore into all apps, the Android security team took extra care not to break existing apps. The seccomp filter allows:
  • all the syscalls exposed via bionic (the C runtime for Android). These are defined in bionic/libc/SYSCALLS.TXT.
  • syscalls to allow Android to boot
  • syscalls used by popular Android applications, as determined by running Google's full app compatibility suite
Android O's seccomp filter blocks certain syscalls, such as swapon/swapoff, which have been implicated in some security attacks, and the key control syscalls, which are not useful to apps. In total, the filter blocks 17 of 271 syscalls in arm64 and 70 of 364 in arm.


Test your app for illegal syscalls on a device running Android O.

Detecting an illegal syscall

In Android O, the system crashes an app that uses an illegal syscall. The log printout shows the illegal syscall, for example:
03-09 16:39:32.122 15107 15107 I crash_dump32: performing dump of process 14942 (target tid = 14971)
03-09 16:39:32.127 15107 15107 F DEBUG   : *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
03-09 16:39:32.127 15107 15107 F DEBUG   : Build fingerprint: 'google/sailfish/sailfish:O/OPP1.170223.013/3795621:userdebug/dev-keys'
03-09 16:39:32.127 15107 15107 F DEBUG   : Revision: '0'
03-09 16:39:32.127 15107 15107 F DEBUG   : ABI: 'arm'
03-09 16:39:32.127 15107 15107 F DEBUG   : pid: 14942, tid: 14971, name: WorkHandler  >>> com.redacted <<<
03-09 16:39:32.127 15107 15107 F DEBUG   : signal 31 (SIGSYS), code 1 (SYS_SECCOMP), fault addr --------
03-09 16:39:32.127 15107 15107 F DEBUG   : Cause: seccomp prevented call to disallowed system call 55
03-09 16:39:32.127 15107 15107 F DEBUG   :     r0 00000091  r1 00000007  r2 ccd8c008  r3 00000001
03-09 16:39:32.127 15107 15107 F DEBUG   :     r4 00000000  r5 00000000  r6 00000000  r7 00000037
Affected developers should rework their apps to not call the illegal syscall.

Toggling seccomp filters during testing

In addition to logging errors, the seccomp installer respects setenforce on devices running userdebug and eng builds, which allows you to test whether seccomp is responsible for an issue. If you type:
adb shell setenforce 0 && adb shell stop && adb shell start
then no seccomp policy will be installed into zygote. Because you cannot remove a seccomp policy from a running process, you have to restart the runtime for this option to take effect.

Device manufacturers

Because Android O includes the relevant seccomp filters at //bionic/libc/seccomp, device manufacturers don't need to do any additional implementation. However, there is a CTS test that checks for seccomp at //cts/tests/tests/security/jni/android_security_cts_SeccompTest.cpp. The test checks that add_key and keyctl syscalls are blocked and openat is allowed, along with some app-specific syscalls that must be present for compatibility.

Final removal of trust in WoSign and StartCom Certificates

As previously announced, Chrome has been in the process of removing trust from certificates issued by the CA WoSign and its subsidiary StartCom, as a result of several incidents not in keeping with the high standards expected of CAs.

We started the phase-out in Chrome 56 by trusting only certificates issued prior to October 21, 2016, and subsequently restricted trust to a set of whitelisted hostnames based on the Alexa Top 1M. We have been reducing the size of the whitelist over the course of several Chrome releases.

Beginning with Chrome 61, the whitelist will be removed, resulting in full distrust of the existing WoSign and StartCom root certificates and all certificates they have issued.

Based on the Chromium Development Calendar, this change is visible in the Chrome Dev channel now, will reach the Chrome Beta channel around late July 2017, and will be released to Stable around mid-September 2017.

Sites still using StartCom or WoSign-issued certificates should consider replacing these certificates as a matter of urgency to minimize disruption for Chrome users.