How we’re blocking malware on Android Enterprise devices

October is the time when ghosts do their best haunting. Yet there are other unwelcome guests who cause havoc year-round, trying to sneak into your company phone and make real-world mischief.

Malware and other threats are always on the mind of mobile security professionals, which is why the issue gets special attention every October for Cybersecurity Awareness Month. While malware is something to guard against for all smartphone users, enterprises must be especially mindful of protecting sensitive corporate data.

That’s why Android ships with strong protections enabled through Google security services: anti-phishing protection from Google Safe Browsing, and Google Play Protect continuously monitoring for malware and removing any that it discovers. With so much work taking place on mobile devices, built-in security features are essential.

We are committed to providing transparency into our efforts to reduce Potentially Harmful Application (PHA) rates on devices and in Google Play. Recently, we specifically analyzed Android Enterprise devices to gauge how they fare when they use Google security services and Android Enterprise as their sole anti-malware solution. We were pleased to find only 0.003 percent of devices with any active PHAs. That’s less likely than being hit by a comet or asteroid in the United States!

How our security services deliver for enterprises

We’ve been able to keep the PHA rate on enterprise devices very low by combining built-in malware defense with management APIs so admins have the controls they need to minimize threats to their device fleets. 

Google Play Protect scans over 100 billion apps every day, identifying PHAs in real time, notifying users of potential threats, and removing harmful apps if necessary. Google Play Protect also works when the device is offline, and users can always perform a manual scan.

With machine learning and human analysis, we work to prevent the spread and impact of malicious apps.

Android Enterprise admins further tighten controls with managed Google Play. By using blocklists and allowlists, they can control exactly which apps are allowed on devices, closing another potential opening for malware.

Admins can take other proactive steps such as forcing devices to install operating system updates and disabling the ability to install applications from unknown sources. EMM partners can leverage our APIs to implement security compliance into their mobility offerings. The SafetyNet Verify Apps API, for example, taps into our malware intelligence to help detect if any malware resides on the device.
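As a minimal sketch of how these controls might be applied, assuming a device-owner EMM agent with Google Play services available (AdminReceiver and reportCompliance are hypothetical names for this illustration):

import android.app.admin.DeviceAdminReceiver;
import android.app.admin.DevicePolicyManager;
import android.app.admin.SystemUpdatePolicy;
import android.content.ComponentName;
import android.content.Context;
import android.os.UserManager;
import com.google.android.gms.safetynet.SafetyNet;

public class PolicyEnforcer {
    // Sketch only: assumes this app is the device owner.
    public static class AdminReceiver extends DeviceAdminReceiver {}

    public static void harden(Context context) {
        DevicePolicyManager dpm =
                (DevicePolicyManager) context.getSystemService(Context.DEVICE_POLICY_SERVICE);
        ComponentName admin = new ComponentName(context, AdminReceiver.class);

        // Force devices to install OS updates as soon as they are available.
        dpm.setSystemUpdatePolicy(admin, SystemUpdatePolicy.createAutomaticInstallPolicy());

        // Disable installing applications from unknown sources.
        dpm.addUserRestriction(admin, UserManager.DISALLOW_INSTALL_UNKNOWN_SOURCES);

        // Ask Verify Apps (via the SafetyNet API) whether known malware is
        // present; requires Verify Apps to be enabled on the device.
        SafetyNet.getClient(context).listHarmfulApps()
                .addOnSuccessListener(response ->
                        reportCompliance(response.getHarmfulAppsList().isEmpty()));
    }

    private static void reportCompliance(boolean clean) {
        // Hypothetical hook: report the device's state to the EMM backend.
    }
}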

The integration of our security services and Android Enterprise management tools gives our partners and customers the security they need for today’s threats. Last year we also introduced the App Defense Alliance, in which we partner with industry leaders to stop bad apps before they reach users’ devices. We’re pleased to see such positive results from these efforts and tangible evidence of how Android Enterprise management, paired with our security services, helps provide comprehensive device protection. Learn more about Android Enterprise security and how to take advantage of these essential security services.

How Android Enterprise supports a Zero Trust security model

The surge in remote and mobile working has put an increased emphasis on how organizations should best manage and secure device access to critical information. New research from Omdia, in a survey of 700 IT decision makers, found that businesses are expanding and strengthening access controls now that many employees spend very little or no time in the office.

This has piqued interest in the Zero Trust security model, which is built on the premise that access to corporate resources should be continuously verified. In the Omdia survey, 31 percent of respondents are currently using a Zero Trust model, with another 47 percent planning to adopt one in the near future.

Understanding the Zero Trust security model

The Zero Trust security model enables a mobile and remote workforce to securely connect to company resources from virtually anywhere. Devices are vetted before being granted access to company resources. Companies can use tight, granular controls to specify the level of access, whether devices connect from a corporate network, from home, or from elsewhere.

An effective Zero Trust implementation requires numerous device signals, context and controls to make intelligent decisions about access. A key piece of a Zero Trust architecture is the enforcement point, which is the identity or network component that grants or denies access based on the various device and user signals that are available. For example, the enforcement point may decline access to devices that do not have the most recent security patch or show signs of running a compromised operating system.

A Zero Trust diagram showing how various device and user signals feed contextual rules that dictate the access control applied.

How Android enables a Zero Trust security approach

Android has a wealth of platform features and APIs that our enterprise mobility management and security partners leverage to safeguard backend services and resources. Let’s look at how Android provides the building blocks you need for a Zero Trust deployment.

Android provides a variety of device signals that administrators can use in building systems to verify the security and integrity of devices. In a Zero Trust model, these signals are used to assess whether a device should be allowed to access corporate information.

The first signals to check are the device’s OS version and security patch level. The SafetyNet Attestation API verifies that a device has not been rooted, while the SafetyNet Verify Apps API checks for the presence of malware. Admins can also confirm whether applications comply with Android security standards. The NetworkEvent and SecurityLog APIs provide data for spotting suspicious activity or anomalies on devices.
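As an illustration, here is a minimal sketch of requesting one such signal, a SafetyNet attestation verdict. The nonce and API key handling are placeholders, and in practice the signed result should be verified server-side:

import android.content.Context;
import com.google.android.gms.safetynet.SafetyNet;

public class AttestationSignal {
    // Sketch only: apiKey is a placeholder, and the nonce must be a fresh,
    // server-generated value bound to this specific request.
    static void requestVerdict(Context context, byte[] nonce, String apiKey) {
        SafetyNet.getClient(context).attest(nonce, apiKey)
                .addOnSuccessListener(response -> {
                    // The JWS is a signed statement about device integrity.
                    // The enforcement point should verify its signature
                    // before trusting the verdict it carries.
                    String jws = response.getJwsResult();
                    forwardToEnforcementPoint(jws); // hypothetical hook
                });
    }

    private static void forwardToEnforcementPoint(String jws) { /* ... */ }
}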

The next aspect is context: 

  • Who is trying to access a particular resource—are we sure that this is in fact the right device and person?

  • What resource are they trying to access—is this resource restricted to a select audience or region?

  • When are they trying to access it—is this during work hours or after hours?

  • Where are they trying to access it from—are they in their normal region or traveling?

  • How are they attempting to access it—are they accessing this from a web app or native app, is the device fully managed or BYOD?

  • Why do they need to access it—is this someone who typically accesses this information?

Putting Zero Trust to work for you

Now that we have the device security signals and the context, we can decide how to control access to the information.

Here are some examples, followed by a sketch of how an enforcement point might encode them:

  • If a user is on a rooted device — no access.

  • If a user is traveling internationally — limited access.

  • If a user is trying to access a resource for the first time in a while — prompt for a second factor during the authentication flow.
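A sketch of how an enforcement point might encode such rules; all of the types, names, and thresholds here are illustrative, not any specific product’s API:

// Illustrative only: the signal and context interfaces are hypothetical.
interface DeviceSignals { boolean isRooted(); }
interface RequestContext { boolean inHomeRegion(); long daysSinceLastAccess(); }

enum Access { FULL, LIMITED, STEP_UP_AUTH, DENIED }

final class AccessPolicy {
    static Access evaluate(DeviceSignals device, RequestContext ctx) {
        if (device.isRooted()) return Access.DENIED;        // rooted device: no access
        if (!ctx.inHomeRegion()) return Access.LIMITED;     // traveling internationally
        if (ctx.daysSinceLastAccess() > 30) {
            return Access.STEP_UP_AUTH;                     // prompt for a second factor
        }
        return Access.FULL;
    }
}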

The Android platform provides the signals and intelligence needed to understand context and define appropriate controls in a Zero Trust deployment. What makes Android unique as a Zero Trust endpoint is that access to these signals can be delegated: whichever component acts as the enforcement point, whether it is the identity provider or the network access control component, can collect all of the necessary information directly from the endpoint device. On other operating systems, you must instead rely on the Enterprise Mobility Management (EMM) solution to gather device signals and attributes and integrate with a multitude of backend systems.

If you are currently using Zero Trust or moving in that direction, make sure to confirm that your EMM or your enforcement point can access the plethora of signals directly from the device. And check out the Omdia security report to learn more about growing adoption of Zero Trust security.

Trust but verify attestation with revocation

Posted by Rob Barnes & Shawn Willden, Android Security & Privacy Team
[Cross-posted from the Android Developers Blog]

Billions of people rely on their Android-powered devices to securely store their sensitive information. A vital component of the Android security stack is the key attestation system. Android devices since Android 7.0 are able to generate an attestation certificate that attests to the security properties of the device’s hardware and software. OEMs producing devices with Android 8.0 or higher must install a batch attestation key provided by Google on each device at the time of manufacturing.
These keys might need to be revoked for a number of reasons including accidental disclosure, mishandling, or suspected extraction by an attacker. When this occurs, the affected keys must be immediately revoked to protect users. The security of any Public-Key Infrastructure system depends on the robustness of the key revocation process.
All of the attestation keys issued so far include an extension that embeds a certificate revocation list (CRL) URL in the certificate. We found that the CRL (and online certificate status protocol) system was not flexible enough for our needs. So we set out to replace the revocation system for Android attestation keys with something that is flexible and simple to maintain and use.
Our solution is a single TLS-secured URL (https://android.googleapis.com/attestation/status) that returns a list containing all revoked Android attestation keys. This list is encoded in JSON and follows a strict format defined by JSON schema. Only keys that have non-valid status appear in the list, so it is not an exhaustive list of all issued keys.
This system allows us to express more nuance about the status of a key and the reason for the status. A key can have a status of REVOKED or SUSPENDED, where revoked is permanent and suspended is temporary. The reason for the status is described as either KEY_COMPROMISE, CA_COMPROMISE, SUPERSEDED, or SOFTWARE_FLAW. A complete, up-to-date list of statuses and reasons can be found in the developer documentation.
The CRL URLs embedded in existing batch certificates will continue to operate. Going forward, attestation batch certificates will no longer contain a CRL extension. The status of these legacy certificates will also be included in the attestation status list, so developers can safely switch to using the attestation status list for both current and legacy certificates. An example of how to correctly verify Android attestation keys is included in the Key Attestation sample.
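A minimal sketch of consuming the status list, assuming the published schema in which the "entries" object is keyed by lowercase-hex certificate serial numbers; consult the developer documentation for the authoritative format:

import java.io.InputStream;
import java.net.URL;
import java.util.Scanner;
import org.json.JSONObject;

public class AttestationStatusCheck {
    static final String STATUS_URL = "https://android.googleapis.com/attestation/status";

    // Returns true if the certificate's serial number appears on the list.
    // Only keys with non-valid status appear, so absence means "good".
    public static boolean isRevokedOrSuspended(String serialHex) throws Exception {
        try (InputStream in = new URL(STATUS_URL).openStream();
             Scanner s = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
            JSONObject entries = new JSONObject(s.next()).getJSONObject("entries");
            return entries.has(serialHex.toLowerCase());
        }
    }
}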

Adopting the Arm Memory Tagging Extension in Android

As part of our continuous commitment to improving the security of the Android ecosystem, we are partnering with Arm to design the Memory Tagging Extension (MTE). Memory safety bugs, common in C and C++, remain one of the largest classes of vulnerabilities in the Android platform; despite previous hardening efforts, they comprised more than half of the high-priority security bugs in Android 9. Additionally, memory safety bugs manifest as hard-to-diagnose reliability problems, including sporadic crashes and silent data corruption. This reduces user satisfaction and increases the cost of software development. Software testing tools such as ASAN and HWASAN help, but their applicability on current hardware is limited due to noticeable overheads.

MTE, a hardware feature, aims to further mitigate these memory safety bugs by enabling us to detect them with low overhead. It has two execution modes:

  • Precise mode: Provides more detailed information about the memory violation
  • Imprecise mode: Has lower CPU overhead and is more suitable for always-on use.

Arm recently published a whitepaper on MTE and has added documentation to the Arm v8.5 Architecture Reference Manual.

We envision several different usage modes for MTE.

  • MTE provides a version of ASAN/HWASAN that is easier to use for testing and fuzzing in laboratory environments. It will find more bugs in a fraction of the time and at a lower cost, reducing the complexity of the development process. In many cases, MTE will allow testing memory safety using the same binary as shipped to production. The bug reports produced by MTE will be as detailed and actionable as those from ASAN and HWASAN.
  • MTE will be used as a mechanism for testing complex software scenarios in production. App Developers and OEMs will be able to selectively turn on MTE for parts of the software stack. Where users have provided consent, bug reports will be available to developers via familiar mechanisms like Google Play Console.
  • MTE can be used as a strong security mitigation in the Android System and applications for many classes of memory safety bugs. For most instances of such vulnerabilities, a probabilistic mitigation based on MTE could prevent exploitation with a higher than 90% chance of detecting each invalid memory access. By implementing these protections and ensuring that attackers can't make repeated attempts to exploit security-critical components, we can significantly reduce the risk to users posed by memory safety issues.

We believe that memory tagging will detect the most common classes of memory safety bugs in the wild, helping vendors identify and fix them and discouraging malicious actors from exploiting them. During the past year, our team has been working to ensure readiness of the Android platform and application software for MTE. We have deployed HWASAN, a software implementation of the memory tagging concept, to test our entire platform and a few select apps. This deployment has uncovered close to 100 memory safety bugs. The majority of these bugs were detected on HWASAN-enabled phones in everyday use. MTE will greatly improve upon this in terms of overhead, ease of deployment, and scale. In parallel, we have been working on supporting MTE in the LLVM compiler toolchain and in the Linux kernel. The Android platform support for MTE will be complete by the time of silicon availability.

Google is committed to supporting MTE throughout the Android software stack. We are working with select Arm System On Chip (SoC) partners to test MTE support and look forward to wider deployment of MTE in the Android software and hardware ecosystem. Based on the current data points, MTE provides tremendous benefits at acceptable performance costs. We are considering MTE as a possible foundational requirement for certain tiers of Android devices.

Thank you to Mitch Phillips, Evgenii Stepanov, Vlad Tsyrklevich, Mark Brand, and Serban Constantinescu for their contributions to this post.

Chrome Fuzzer Program Update And How-To

TL;DR We increased the Chrome Fuzzer Program bonus from $500 to $1,000 as part of our recent update of reward amounts.

The Chrome Fuzzer Program is part of the Google Chrome Vulnerability Reward Program that lets security researchers run their fuzzers at scale on the ClusterFuzz infrastructure. Bug reporting is fully automated, and fuzzer authors get the same rewards as if they had reported the bugs manually, plus an extra bonus ($1,000 as of now) on top for every new vulnerability.

We run fuzzers indefinitely, and some of the fuzzers contributed years ago are still finding security issues in ever-changing Chrome code. This is a win-win for both sides: security researchers do not have to spend time analyzing the crashes, and Chrome developers receive high-quality bug reports automatically.

To learn more about the Chrome Fuzzer Program, let’s talk to Ned Williamson, who’s been a participant since 2017 and now works on the Google Security team.

Q: Hey Ned! It looks like you’ve received over $50,000 by participating in the Google Chrome Vulnerability Reward Program with your quic_stream_factory_fuzzer.

A: Yes, it’s true. I wrote a fuzzer for QUIC which helped me find and report two critical vulnerabilities, each worth $10,000. Because I knew my fuzzer worked well, I submitted it to the Chrome Fuzzer Program. Then, in the next few months, I received that reward three more times (plus a bonus), as the fuzzer caught several security regressions on ClusterFuzz soon after they happened.

Q: Have you intentionally focused on the areas that yield higher severity issues and bigger rewards?

A: Yes. While vulnerabilities in code that is more critical to user security yield larger reward amounts, I actually started by looking at lower severity bugs and incrementally began looking for more severe bugs until I could find critical ones. You can see this progression by looking at the bugs I reported manually as an external researcher.

Q: Would you suggest starting by looking for non-critical bugs?

A: I would say so. Security-critical code is generally better designed and more thoroughly audited, so it might be discouraging to start from there. Finding less critical security bugs and winning bounties is a good way to build confidence and stay motivated.

Q: Can you share an algorithm on how to find security bugs in Chrome?

A: Looking at previous and existing bug reports, even for non-security crashes, is a great way to tell which code is security-critical and potentially buggy. From there, if some code looks like it’s exposed to user inputs, I’d set up a fuzzing campaign against that component. After you gain experience you will not need to rely on existing reports to find new attack surface, which in turn helps you find places that have not been considered by previous researchers. This was the case for my QUIC fuzzer.

Q: How did you learn to write fuzzers?

A: I didn’t have any special knowledge about fuzzing before I started looking for vulnerabilities in Chrome. I followed the documentation in the repository and I still follow the same process today.

Q: Your fuzzer is more complex than many others. How did you arrive at that implementation?

A: The key insight in the QUIC fuzzer was realizing that the parts of the code that handled plaintext messages after decryption were prone to memory corruption. Typically, fuzzing does not perform well with encrypted inputs (it’s pretty hard to “randomly” generate a packet that can be successfully decrypted), so I extended the QUIC testing code to allow for testing with encryption disabled.

Q: Are there any other good examples of fuzz targets employing a similar logic?

A: Another example is pdf_formcalc_context_fuzzer, which wraps the fuzzing input in a valid hardcoded PDF file, thereby focusing fuzzing on just the XFA script part of it. As a researcher, you just need to choose what exactly you want to fuzz, and then understand how to execute that code properly. Looking at the unit tests is usually the easiest way to gain such an understanding.

Happy fuzzing and bug hunting!

Improving Security and Privacy for Extensions Users

No, Chrome isn’t killing ad blockers -- we’re making them safer

The Chrome Extensions ecosystem has seen incredible advancement, adoption, and growth since its launch over ten years ago. Extensions are a great way for users to customize their experience in Chrome and on the web. As this system grows and expands in both reach and power, user safety and protection remain a core focus of the Chromium project.
In October, we announced a number of changes to improve the security, privacy, and performance of Chrome extensions. These changes include increased user options to control extension permissions, changes to the review process and readability requirements, and requiring two-step verification for developers. In addition, we’ve helped curb abuse through restricting inline installation on websites, preventing the use of deceptive installation practices, and limiting the data collected by extensions. We’ve also made changes to the teams themselves — over the last year, we’ve increased the size of the engineering teams that work on extension abuse by over 300% and the number of reviewers by over 400%.
These and other changes have driven down the rate of malicious installations by 89% since early 2018. Today, we block approximately 1,800 malicious uploads a month, preventing them from ever reaching the store. While the Chrome team is proud of these improvements, the review process alone can't catch all abuse. In order to provide better protection to our users, we need to make changes to the platform as well. This is the suite of changes we’re calling Manifest V3.
This effort is motivated by a desire to keep users safe and to give them more visibility and control over the data they’re sharing with extensions. One way we are doing this is by helping users be deliberate in granting access to sensitive data - such as emails, photos, and access to social media accounts. As we make these changes we want to continue to support extensions in empowering users and enhancing their browsing experience.
To help with this balance, we’re reimagining the way a number of powerful APIs work. Instead of a user granting each extension access to all of their sensitive data, we are creating ways for developers to request access to only the data they need to accomplish the same functionality. One example of this is the introduction of the Declarative Net Request API, which is replacing parts of the Web Request API.
At a high level, this change means that an extension does not need access to all of a user’s sensitive data in order to block content. With the current Web Request API, users grant permission for Chrome to pass all information about a network request, which can include things like emails, photos, or other private information, to the extension. In contrast, the Declarative Net Request API allows extensions to block content without requiring the user to grant access to any sensitive information. Additionally, because we are able to cut substantial overhead in the browser, the Declarative Net Request API can have significant, system-level performance benefits over Web Request.
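For illustration, a static blocking rule in the declarativeNetRequest JSON rule format might look like the following; the filter pattern is a made-up example:

[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||ads.example.com^",
      "resourceTypes": ["script", "image", "sub_frame"]
    }
  }
]

Because the matching happens inside the browser, the extension declares its intent up front and never needs to see the contents of the request itself.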


This has been a controversial change, since the Web Request API is used by many popular extensions, including ad blockers. We are not preventing the development of ad blockers or stopping users from blocking ads. Instead, we want to help developers, including those who build content blockers, write extensions in a way that protects users’ privacy.
You can read more about the Declarative Net Request API and how it compares to the Web Request API here.
We understand that these changes will require developers to update the way in which their extensions operate. However, we think it is the right choice to enable users to limit the sensitive data they share with third parties while giving them the ability to curate their own browsing experience. We are continuing to iterate on many aspects of the Manifest V3 design, and are working with the developer community to find solutions that both solve the use cases extensions have today and keep our users safe and in control.

Use your Android phone’s built-in security key to verify sign-in on iOS devices


Compromised credentials are one of the most common causes of security breaches. While Google automatically blocks the majority of unauthorized sign-in attempts, adding 2-Step Verification (2SV) considerably improves account security. At Cloud Next ‘19, we introduced a new 2SV method, enabling more than a billion users worldwide to better protect their accounts with a security key built into their Android phones.
This technology can be used to verify your sign-in to Google and Google Cloud services on Bluetooth-enabled Chrome OS, macOS, and Windows 10 devices. Starting today, you can use your Android phone to verify your sign-in on Apple iPads and iPhones as well.
Security keys
FIDO security keys provide the strongest protection against automated bots, bulk phishing, and targeted attacks by leveraging public key cryptography to verify your identity and the URL of the login page, so that an attacker can’t access your account even if you are tricked into providing your username and password. Learn more by watching our presentation from Cloud Next ‘19.


On Chrome OS, macOS, and Windows 10 devices, we leverage the Chrome browser to communicate with your Android phone’s built-in security key over Bluetooth using FIDO’s CTAP2 protocol. On iOS devices, Google’s Smart Lock app is leveraged in place of the browser.


User experience on an iPad with Pixel 3


Until now, there were limited options for using FIDO2 security keys on iOS devices. Now, you can get the strongest 2SV method with the convenience of an Android phone that’s always in your pocket at no additional cost.
It’s easy to get started
Follow these simple steps to protect your Google Account today:
Step 1: Add the security key to your Google Account
  • Add your personal or work Google Account to your Android 7.0+ (Nougat) phone.
  • Make sure you’re enrolled in 2-Step Verification (2SV).
  • On your computer, visit the 2SV settings and click "Add security key".
  • Choose your Android phone from the list of available devices.
Step 2: Use your Android phone's built-in security key
You can find more detailed instructions here. Within enterprise organizations, admins can require the use of security keys for their users in G Suite and Google Cloud Platform (GCP), letting them choose between using a physical security key, an Android phone, or both.
We also recommend that you register a backup hardware security key (from Google or a number of other vendors) for your account and keep it in a safe place, so that you can gain access to your account if you lose your Android phone.

PHA Family Highlights: Triada



We continue our PHA family highlights series with the Triada family, which was first discovered early in 2016. The main purpose of Triada apps was to install spam apps that display ads on the device. The creators of Triada collected revenue from the ads displayed by these spam apps. The methods Triada used were complex and unusual for these types of apps. Triada apps started as rooting trojans, but as Google Play Protect strengthened defenses against rooting exploits, Triada apps were forced to adapt, progressing to a system image backdoor. However, thanks to OEM cooperation and our outreach efforts, OEMs prepared system images with security updates that removed the Triada infection.

History of Triada

Triada was first described in a blog post on the Kaspersky Lab website in March 2016 and in a follow-up blog post in June 2016. Back then, it was a rooting trojan that tried to exploit the device and, after gaining elevated privileges, performed a host of different actions. To hide these actions from analysts, Triada used a combination of dynamic code loading and additional app installs. The Kaspersky posts detail the code injection technique used by Triada and provide some statistics on infected devices at the time. In this post, we’ll focus on the peculiar encryption routine and the unusual binary files used by Triada.
Triada’s first action was to install a type of superuser (su) binary file. This su binary allowed other apps on the device to use root permissions. The su binary used by Triada required a password, making it unique compared to the regular su binaries common on other Linux systems.
The binary accepted two passwords, od2gf04pd9 and ac32dorbdq. This is illustrated in the IDA screenshot below. Depending on which one was provided, the binary either 1) ran the command given as an argument as root or 2) concatenated all of the arguments, prefixed the concatenation with sh, and ran it as root. Either way, the app had to know the correct password to run the command as root.
su binary accepts two passwords
This Triada rooting trojan was mainly used to install apps and display ads. This trojan targeted older devices because the rooting exploits didn’t work on newer ones. Therefore, the trojan implemented a weight watching feature to decide if old apps needed to be deleted to make space for new installs.
Weight watching included several steps and attempted to free up space on the device’s user partition and system partition. Using a blacklist and a whitelist of apps, it first removed all the apps on its blacklist. If more free space was required, it removed all other apps, leaving only the apps on the whitelist. This process freed space while ensuring the apps needed for the phone to function properly were not removed.
Every app on the system partition had a number, or weight, associated with it. The weight was the sum of the number of apps installed on the same date as the app in question and the number of apps signed with the same certificate. The apps with the lowest weight were those installed in isolation (that is, not on the day the device system image was created) and not signed by the OEM or part of a developer bundle. In the weight watching process, these apps were removed first, until enough space was made for the new app.
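A sketch of the weighting as described, with a hypothetical AppInfo type standing in for the metadata Triada read from each system app:

import java.util.List;

final class WeightWatcher {
    // Hypothetical record of the fields the heuristic needs.
    static class AppInfo {
        String packageName;
        long installDate;   // day granularity
        String certDigest;  // signing certificate digest
    }

    // Weight = apps installed on the same date + apps signed with the same
    // cert. Low-weight apps (installed in isolation, not sharing an OEM
    // cert) are deleted first to free space for new installs.
    static int weight(AppInfo target, List<AppInfo> systemApps) {
        int score = 0;
        for (AppInfo app : systemApps) {
            if (app == target) continue; // skip the app itself
            if (app.installDate == target.installDate) score++;
            if (app.certDigest.equals(target.certDigest)) score++;
        }
        return score;
    }
}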
In addition to installing apps that display ads, Triada injected code into four web browsers: AOSP (com.android.browser), 360 Secure (com.qihoo.browser), Cheetah (com.ijinshan.browser_fast), and Oupeng (com.oupeng.browser). The code was injected using the same technique described in our blog post about the Zen PHA family and in previously mentioned Kaspersky blog posts.
The web browser injection was done to overwrite the URLs and substitute ad banners on websites with ads benefiting the Triada authors.
Triada also used a peculiar and complex communication encryption routine. Whenever it had to send a request to the Command and Control (C&C) server, it encrypted the request using two XOR loops with different passwords. Because of how XOR works, wherever the two passwords had the same character in the same position, the corresponding bytes were left unencrypted. The encrypted request was saved to a file whose name was the file’s size. Finally, the file was zipped and sent to the C&C server in the POST request body.
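A sketch of the double XOR loop as described; the keys are placeholders. Applying two repeating XOR streams is equivalent to applying their combination, which is why any position where the two streams match passes through unencrypted:

final class DoubleXor {
    // Encrypt (and, by XOR symmetry, decrypt) with two repeating keys.
    static byte[] apply(byte[] data, byte[] key1, byte[] key2) {
        byte[] out = data.clone();
        for (int i = 0; i < out.length; i++) out[i] ^= key1[i % key1.length];
        for (int i = 0; i < out.length; i++) out[i] ^= key2[i % key2.length];
        return out;
    }
}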
The example below illustrates one such request. The yellow bytes are the zip file’s signature of the central directory file header. The red bytes show the uncompressed file size of 0x0952. The blue bytes show the file name length (4) and the name itself (2386, a decimal version of 0x0952).
09 00 00 50 4B 01 02 14 00 14 00 08 00 08 00 4F ...PK..........O
91 F3 48 AE CF 91 D5 B1 04 00 00 52 09 00 00 04 ..H........R....
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 32 33 38 36 50 4B 05 06 00 00 00 00 01 00 01 .2386PK.........
00 32 00 00 00 E3 04 00 00 00 00 .2.........
The underlying data protocol changed periodically. It was either simple JSON, a list of key-value pairs similar to a properties file, or a proprietary format as shown below.
[collect_Head]device=Nexus 5X
[collect_Space]xadevicekey=xxxxx

[collect_Space]collentmod=opappresultmode
[collect_Space]registerUser=true
[collect_End]
When Triada was discovered, we implemented detection that removed Triada samples from all devices with Google Play Protect. This implementation, combined with the increased security on newer Android devices, made it significantly harder for Triada to infect devices.

When rooting doesn’t work…

During the summer of 2017 we noticed a change in new Triada samples. Instead of rooting the device to obtain elevated privileges, Triada evolved to become a pre-installed Android framework backdoor. The changes to Triada included an additional call in the Android framework log function, demonstrated below with a highlighted configuration string.
LABEL_13:
v18 = -1;
LABEL_18:
j___config_log_println(v7, v6, v10, v11, "cf89450001");
if ( v10 )
This backdoored log function version of Triada was first described by Dr.Web in July 2017. The blog post includes a description of Triada code injection methods.
By backdooring the log function, the additional code executes every time the log method is called (that is, every time any app on the phone tries to log something). These log attempts happen many times per second, so the additional code is running non-stop. The additional code also executes in the context of the app logging a message, so Triada can execute code in any app context. The code injection framework in early versions of Triada worked on Android releases prior to Marshmallow.
The main purpose of the backdoor function was to execute code in another app’s context, attempting to do so every time the app logged something. Triada developers created a new file format, which we called MMD, based on the file header.
The MMD format was an encrypted version of a DEX file, which was then executed in the app context. The encryption algorithm was the same double XOR loop with two different passwords.
Each MMD file had a specific file name of the format <MD5 of the process name>36.jmd. By using the MD5 of the process name, the Triada authors tried to obscure the injection target. However, the pool of all available process names is fairly small, so this hash was easily reversible.
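A sketch of why this obfuscation was weak: with a small pool of candidate process names, the file name is reversed by brute force.

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;

final class MmdNameReverser {
    static String md5Hex(String s) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(s.getBytes(StandardCharsets.UTF_8));
        return String.format("%032x", new BigInteger(1, digest));
    }

    // Try every plausible process name until one matches the MMD file name.
    static String findTarget(String mmdFileName, List<String> candidates) throws Exception {
        for (String name : candidates) {
            if (mmdFileName.equals(md5Hex(name) + "36.jmd")) return name;
        }
        return null; // no candidate matched
    }
}

For example, hashing a candidate list containing com.android.systemui and com.android.vending immediately reveals the two targets described next.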
We identified two code injection targets: com.android.systemui (the System UI app) and com.android.vending (the Google Play app). The first target was injected to get the GET_REAL_TASKS permission. This is a signature-level permission, which means that it can’t be held by ordinary Android apps.
Starting with Android Lollipop, the getRecentTasks() method is deprecated to protect users' privacy. However, apps holding the GET_REAL_TASKS permission can get the result of this method call. To hold the GET_REAL_TASKS permission, an app has to be signed with a specific certificate, the device’s platform cert, which is held by the OEM. Triada didn’t have access to this cert. Instead it executed additional code in the System UI app, which has the GET_REAL_TASKS permission.
The injected code reported the app running on top (the activity running in the foreground and being actively used by the device user) to other apps on the device. This information was exposed through two channels: an intent and a socket created for this purpose. When an app on the device sent the intent or wrote to the socket created by Triada’s code injection, it received the package name of the app running on top. Triada used the package name to determine whether to display an ad. The assumption was that if the app running on top was a browser, the user would expect to see some ads, so Triada displayed ads from the background.
The second injection target was the Google Play app. This injection supported five commands, each with a corresponding response. The supported commands are shown below in Chinese, the language used throughout the Triada app and injection, with English translations in parentheses.
  1. 下载请求 (download request)
  2. 下载结果 (download result)
  3. 安装请求 (install request)
  4. 安装结果 (installation result)
  5. 激活请求 (activation request)
  6. 激活结果 (activation result)
  7. 拉活请求 (pull/heartbeat request)
  8. 拉活结果 (pull/heartbeat result)
  9. 卸载请求 (uninstall request)
  10. 卸载结果 (uninstall result)
The commands trigger the heartbeat (pull request), download, installation, uninstallation (in the Google Play app context), and activation (the first execution) of the apps. In the Google Play app context, installation meant that Triada didn’t have to turn on installation from unknown sources and all app installs looked like they were from Google Play.
The apps were downloaded from the C&C server and the communication with the C&C was encrypted using the same custom encryption routine using double XOR and zip. The downloaded and installed apps used the package names of unpopular apps available on Google Play. They didn’t have any relation to the apps on Google Play apart from the same package name.
The last piece of the puzzle was the way the backdoor in the log function communicated with the installed apps. This communication prompted the investigation: the change in Triada behavior mentioned at the beginning of this section made it appear that there was another component on the system image. The apps could communicate with the Triada backdoor by logging a line with a specific predefined tag and message.
The reverse communication was more complicated. The backdoor used Java properties to relay a message to the app. These properties were key-value pairs similar to Android system properties, but they were scoped to a specific process. Setting one of these properties in one app context ensures that other apps won’t see this property. Despite that, some versions of Triada indiscriminately created the properties in every single app process.
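An illustrative sketch of the two channels; the tag and property names here are hypothetical stand-ins for Triada’s actual predefined values:

import android.util.Log;

final class BackdoorChannel {
    // App -> backdoor: every Log call runs through the backdoored framework
    // log function, so a line with the predefined tag acts as a message.
    static void sendToBackdoor(String message) {
        Log.d("PREDEFINED_TAG", message); // hypothetical tag
    }

    // Backdoor -> app: Java system properties are per-process, so a reply
    // set by the backdoor here is visible only inside this app's process.
    static String readReply() {
        return System.getProperty("predefined.reply.key"); // hypothetical key
    }
}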
The diagram below illustrates the communication mechanisms of the Triada backdoor.
Communication mechanisms of Triada

Reverse engineering countermeasures and development

The Triada backdoor was hidden to make the analysis harder. The strings in the Android framework library that related to Triada activities were encrypted, as shown below.
Android framework strings
The strings were encrypted using the same double XOR loop algorithm. However, the first highlighted string, 36.jmd, wasn’t encrypted. This is the MMD file name string mentioned earlier.
Another anti-analysis measure implemented by the Triada authors was function padding, including additional exported functions that don't serve any purpose apart from making the file size bigger and the function layout more random with every compilation. Four types of these functions are shown in the screenshots below.
Example of function padding
One final interesting feature of Triada worth mentioning is its development cycle. By analyzing subsequent versions of the Triada backdoor (up to 1.5.1), we saw the code change over time. The newest version replaced MD5 with SHA-1 for hashing the filenames, which come from a restricted pool of values. It also encrypted the 36.jmd string and introduced changes to the code for compatibility with Android Nougat.
There are also code stubs pointing at the modification of the SystemUI and WebView Android framework elements. We couldn’t find the code that was executed by these modifications, just code stubs suggesting more development in the future.

OEM outreach

Triada infects device system images through a third party during the production process. Sometimes OEMs want to include features that aren’t part of the Android Open Source Project, such as face unlock. The OEM might partner with a third party that can develop the desired feature and send the whole system image to that vendor for development.
Based on analysis, we believe that a vendor using the name Yehuo or Blazefire infected the returned system image with Triada.
Production process with malicious party
We coordinated with the affected OEMs to provide system updates and remove traces of Triada. We also scan for Triada and similar threats on all Android devices.
OEMs should ensure that all third-party code is reviewed and can be tracked to its source. Additionally, any functionality added to the system image should only support requested features. It’s a good practice to perform a security review of a system image after adding third-party code.

Summary

Triada was inconspicuously included in the system image as third-party code for additional features requested by the OEMs. This highlights the need for thorough, ongoing security reviews of system images before a device is sold to users, as well as any time it is updated over the air (OTA).
By working with the OEMs and supplying them with instructions for removing the threat from devices, we reduced the spread of preinstalled Triada variants and removed infections from the devices through the OTA updates.
The Triada case is a good example of how Android malware authors are becoming more adept. It also shows that it has become harder to infect Android devices, especially when the malware author requires privilege elevation.
We are also performing a security review of system images through the Build Test Suite. You can read more about this program in the Android Security 2018 Year in Review report. Triada indicators of compromise are one of many signatures included in the system image scan. Additionally, Google Play Protect continues to track and remove any known versions of Triada and Triada-related apps it detects from user devices.