
How Google adopted BeyondCorp


It's been almost five years since we released the first of multiple BeyondCorp papers, describing the motivation and design principles that eliminated network-based trust from our internal networks. With that anniversary looming and many organizations actively working to adopt models like BeyondCorp (which has also become known as Zero Trust in the industry), we thought it would be a good time to revisit topics we have previously explored in those papers, share the lessons that we have learned over the years, and describe where BeyondCorp is going as businesses move to the cloud.

This is the first post in a series focusing on Google’s internal implementation of BeyondCorp, providing the context for how and why Google adopted it.

Why did we adopt BeyondCorp?

With a traditional enterprise perimeter security model, access to services and resources is provided by a device being connected to a privileged network. If an employee is in a corporate office, on the right network, services are directly accessible. If they're outside the office, at home or in a coffee shop, they frequently use a VPN to get access to services behind the enterprise firewall. This is the way most organizations protect themselves.

By 2011, it became clear to Google that this model was problematic, and we needed to rethink how enterprise services are accessed and protected for the following reasons:

Improving productivity
  • A growing number of employees were not in the office at all times. They were working from home, a coffee shop, a hotel or even on a bus or airplane. When they were outside the office, they needed to connect via a VPN, creating friction and extending the network perimeter.
  • The user experience of a VPN client may be acceptable, even if suboptimal, from a laptop. VPN use is far less acceptable, from both employees’ and admins’ perspectives, on devices such as smartphones and tablets, which are increasingly used for work.
  • A number of users were contractors or other partners who only needed selective access to some of our internal resources, even though they were working in the office.
Keeping Google secure
  • The expanded use of public clouds and software-as-a-service (SaaS) apps meant that some of our corporate services were no longer deployed on-premises, further blurring the traditional perimeter and trust domain. This introduced new attack vectors that needed to be protected against.
  • There was ongoing concern about relying solely on perimeter defense, especially as the perimeter kept growing. With the proliferation of laptops and mobile devices, vulnerable and compromised devices were regularly brought inside the perimeter.
  • Finally, if a vulnerability was observed or an attack did happen, we wanted the ability to respond as quickly and automatically as possible.

How did we do it?

In order to address these challenges, we implemented a new approach that we called BeyondCorp. Our mission was to have every Google employee work successfully from untrusted networks on a variety of devices without using a client-side VPN. BeyondCorp has three core principles:
  • Connecting from a particular network does not determine which service you can access.
  • Access to services is granted based on what the infrastructure knows about you and your device.
  • All access to services must be authenticated, authorized and encrypted for every request (not just the initial access).


High level architecture for BeyondCorp
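
To make the second and third principles concrete, here is a minimal sketch of a per-request access decision. Every name, field, and rule in it is a hypothetical illustration, not Google's actual policy engine, which combines far richer signals from device inventory and trust inference.

# Hypothetical sketch of a per-request, BeyondCorp-style access decision.
# Field names, trust tiers, and rules are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Device:
    inventory_id: str
    disk_encrypted: bool
    os_patched: bool

@dataclass
class User:
    username: str
    groups: set

def trust_tier(device: Device) -> str:
    # A trust inferer derives a tier from what we know about the device.
    return "high" if device.disk_encrypted and device.os_patched else "low"

def authorize(user: User, device: Device, service: str) -> bool:
    # Evaluated on every request; the originating network is never an input.
    if service == "payroll":
        return "finance" in user.groups and trust_tier(device) == "high"
    return trust_tier(device) == "high"

laptop = Device("asset-1234", disk_encrypted=True, os_patched=True)
alice = User("alice", {"finance"})
print(authorize(alice, laptop, "payroll"))  # True

Even in this toy, the essential property is visible: user and device state are the only inputs to the decision, and the decision is re-evaluated for every request.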

BeyondCorp gave us the security that we were looking for along with the user experience that made our employees more productive inside and outside the office.

What lessons did we learn?

Given this was uncharted territory at the time, we had to learn quickly and adapt when we encountered surprises. Here are some key lessons we learned.

Obtain executive support early on and keep it

Moving to BeyondCorp is not a quick, painless exercise. It took us several years just to get most of the basics in place, and to this day we are still improving and refining our implementation. Before embarking on this journey, we got buy-in from leadership very early in the project. With that mandate, you can ask for support from many different groups along the way.

We make a point to re-validate this buy-in on an ongoing basis, ensuring that the business still understands and values this important shift.

Recognize data quality challenges from the very beginning

Access decisions depend on the quality of your input data; more specifically, they depend on trust analysis, which requires a combination of employee and device data.

If this data is unreliable, the result will be incorrect access decisions, suboptimal user experiences and, in the worst case, an increase in system vulnerability, so the stakes are definitely high.
We put in a lot of work to make sure our data is clean and reliable before making any impactful changes, and we have both workflows and technical measures in place to ensure data quality remains high going forward.

Enable painless migration and usage

The migration should be a zero-touch or invisible experience for your employees, making it easy for them to continue working without interruptions or added steps. If you make it difficult for your employees to migrate or maintain productivity, they might feel frustrated by the process. Complex environments are difficult to fully migrate with initial solutions, so be prepared to review, grant and manage exceptions at least in the early stages. With this in mind, start small, migrate a small number of resources, apps, users and devices, and only increase coverage after confirming the solution is reliable.

Assign employee and helpdesk advocates

We also had employee and helpdesk advocates on the team who represented the user experience from those perspectives. This helped us architect our implementation in a way that avoided putting excess burden on employees or technical support staff.

Clear employee communications

Communicating clearly with employees so that they know what is happening is very important. We sent our employees, partners, and company leaders regular communications whenever we made important changes, ensuring motivations were well understood and there was a window for feedback and iteration prior to enforcement changes.

Run highly reliable systems

Since every request goes through the core BeyondCorp infrastructure, we needed a global, highly reliable and resilient set of services. If these services are degraded, employee productivity suffers.

We used Site Reliability Engineering (SRE) principles to run our BeyondCorp services.

Next time

In the next post in this series, we will go deeper into when you should trust a device, what data you should use to determine whether or not a device should be trusted, and what we have learned by going through that process.

In the meantime, if you want to learn more, you can check out the BeyondCorp research papers. In addition, getting started with BeyondCorp is now easier using zero trust solutions from Google Cloud (context-aware access) and other enterprise providers.

Google Public DNS over HTTPS (DoH) supports RFC 8484 standard



Ever since we launched Google Public DNS in 2009, our priority has been the security of DNS resolution. In 2016, we launched an innovative experimental service, DNS over HTTPS, now known as DoH. Today we are announcing general availability of our standard DoH service. Our users can now resolve DNS using DoH at the dns.google domain with the same anycast addresses (like 8.8.8.8) as our regular DNS service, with lower latency from our edge PoPs throughout the world.

General availability of DoH includes full RFC 8484 support at a new URL path, and continued support for the JSON API launched in 2016. The new endpoints are:

  • https://dns.google/dns-query (RFC 8484 – GET and POST)
  • https://dns.google/resolve (JSON API – GET)
We are deprecating internet-draft DoH support on the /experimental URL path and DoH service from dns.google.com, and will turn down support for them in a few months.
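
To illustrate the two endpoints, here is a minimal sketch using only the Python standard library. It builds the RFC 8484 wire-format query by hand and also calls the JSON API; error handling is pared down for brevity.

# A minimal sketch of querying both endpoints with the standard library.
import base64, json, struct, urllib.request

def wire_query(name: str, qtype: int = 1) -> bytes:
    # DNS header (id=0 as RFC 8484 recommends for cache friendliness, the
    # RD flag set, one question), then QNAME labels, QTYPE (A=1), QCLASS (IN=1).
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)

# RFC 8484 GET: the wire-format query travels base64url encoded, unpadded,
# in the ?dns= parameter.
q = base64.urlsafe_b64encode(wire_query("example.com")).rstrip(b"=").decode()
req = urllib.request.Request("https://dns.google/dns-query?dns=" + q,
                             headers={"Accept": "application/dns-message"})
print(urllib.request.urlopen(req).read()[:2].hex())  # DNS response ID: 0000

# JSON API GET: human-readable answers, no wire format required.
with urllib.request.urlopen(
        "https://dns.google/resolve?name=example.com&type=A") as resp:
    print(json.loads(resp.read())["Answer"])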

With Google Public DNS, we’re committed to providing fast, private, and secure DNS resolution through both DoH and DNS over TLS (DoT). We plan to support the JSON API until there is a comparable standard for webapp-friendly DoH.


What the new DoH service means for developers

To use our DoH service, developers should configure their applications to use the new DoH endpoints and properly handle HTTP 4xx error and 3xx redirection status codes.
  • Applications should use dns.google instead of dns.google.com. Applications can query dns.google at well-known Google Public DNS addresses, without needing an extra DNS lookup.
  • Developers using the older /experimental internet-draft DoH API need to switch to the new /dns-query URL path and confirm full RFC 8484 compliance. The older API accepts queries using features from early drafts of the DoH standard that are rejected by the new API.
  • Developers using the JSON API can take advantage of two new GET parameters intended for DNS/DoH proxies and DNSSEC-aware applications.
Redirection of /experimental and dns.google.com

The /experimental API will be turned down in 30 days and HTTP requests for it will get an HTTP redirect to an equivalent https://dns.google/dns-query URI. Developers should make sure DoH applications handle HTTP redirects by retrying at the URI specified in the Location header.
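
For POST requests, some HTTP clients will not automatically re-send the request body on a redirect, so a DoH client may need to retry manually. Here is a sketch of that retry logic, assuming the wire-format query bytes are already built (for example by the wire_query helper sketched earlier).

# A sketch of manual redirect handling for DoH POST requests.
import urllib.error, urllib.request

def doh_post(url: str, wire: bytes, hops: int = 3) -> bytes:
    req = urllib.request.Request(
        url, data=wire, headers={"Content-Type": "application/dns-message"})
    try:
        return urllib.request.urlopen(req).read()
    except urllib.error.HTTPError as err:
        if 300 <= err.code < 400 and hops > 0:
            # Retry at the URI given in the Location header.
            return doh_post(err.headers["Location"], wire, hops - 1)
        raise

# Example: doh_post("https://dns.google/dns-query", wire_query("example.com"))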

Turning down the dns.google.com domain will take place in three stages.
  1. The first stage (in 45 days) will update the dns.google.com domain name to return 8.8.8.8 and other Google Public DNS anycast addresses, but continue to return DNS responses to queries sent to former addresses of dns.google.com. This will provide a transparent transition for most clients.
  2. The second stage (in 90 days) will return HTTP redirects to dns.google for queries sent to former addresses of dns.google.com.
  3. The final stage (in 12 months) will send HTTP redirects to dns.google for any queries sent to the anycast addresses using the dns.google.com domain.
We will post timelines for redirections on the public-dns-announce forum and on the DoH migration page. You can find further technical details in our DoH documentation, and if you have a question or problem with our DoH service, you can create an issue on our tracker or ask on our discussion group. As always, please provide as much information as possible to help us investigate the problem!

Security Crawl Maze: An Open Source Tool to Test Web Security Crawlers

Scanning modern web applications for security vulnerabilities can be a difficult task, especially if they are built with JavaScript frameworks, so crawlers have to use a multi-stage crawling approach to discover all the resources on modern websites.

In a time of rapidly changing specifications and constantly appearing new frameworks, we often have to adjust our crawlers so that they can discover the new ways in which developers link resources from their applications. The issue we face in such situations is measuring whether changes to crawling logic improve effectiveness. While working on replacing a crawler for a web security scanner that had been in use for a number of years, we found we needed a universal test bed, both to test our current capabilities and to discover cases we currently miss. Inspired by Firing Range, today we’re announcing the open-source release of Security Crawl Maze – a universal test bed for web security crawlers.

Security Crawl Maze is a simple Python application built with the Flask framework that contains a wide variety of cases for the ways in which a web-based application can link other resources on the Web. We also provide a Dockerfile which allows you to build a Docker image and deploy it to an environment of your choice. While the initial release covers the most important cases for HTTP crawling, it’s a subset of what we want to achieve in the near future. You’ll soon be able to test whether your crawler is able to discover known files (robots.txt, sitemap.xml, etc.) or crawl modern single-page applications written with the most popular JS frameworks (Angular, Polymer, etc.).
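
To give a flavor of what a test case looks like, here is an illustrative Flask route in the spirit of the project; the route path and marker file name are hypothetical, not taken from the Security Crawl Maze source. A crawler passes the case if it requests the uniquely named resource that the page links only through executed JavaScript.

# An illustrative test case in the spirit of Security Crawl Maze; names
# are hypothetical placeholders, not the project's actual routes.
from flask import Flask

app = Flask(__name__)

@app.route("/javascript/img-dynamic-src/")
def img_dynamic_src():
    # A static HTML parser will miss this link; a crawler that executes
    # JavaScript should request /expected/img-dynamic-src.found, which is
    # how the test bed detects a successful discovery.
    return """<html><body>
      <img id="probe">
      <script>
        document.getElementById('probe').src = '/expected/img-dynamic-src.found';
      </script>
    </body></html>"""

if __name__ == "__main__":
    app.run(port=8080)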

Security crawlers are mostly interested in code coverage, not in content coverage, which means the deduplication logic has to be different. This is why we plan to add cases which allow for testing if your crawler deduplicates URLs correctly (e.g. blog posts, e-commerce). If you believe there is something else, feel free to add a test case for it – it’s super simple! Code is available on GitHub and through a public deployed version.

We hope that others will find it helpful in evaluating the capabilities of their crawlers, and we certainly welcome any contributions and feedback from the broader security research community.

By Maciej Trzos, Information Security Engineer

Helping organizations do more without collecting more data



We continually invest in new research to advance innovations that preserve individual privacy while enabling valuable insights from data. Earlier this year, we launched Password Checkup, a Chrome extension that helps users detect if a username and password they enter on a website have been compromised. It relies on a cryptographic protocol known as private set intersection (PSI) to match your login credentials against an encrypted database of over 4 billion credentials Google knows to be unsafe. At the same time, it ensures that no one – including Google – ever learns your actual credentials.

Today, we’re rolling out the open-source availability of Private Join and Compute, a new type of secure multi-party computation (MPC) that augments the core PSI protocol to help organizations work together with confidential data sets while raising the bar for privacy.


Collaborating with data in privacy-safe ways

Many important research, business, and social questions can be answered by combining data sets from independent parties where each party holds their own information about a set of shared identifiers (e.g. email addresses), some of which are common. But when you’re working with sensitive data, how can one party gain aggregated insights about the other party’s data without either of them learning any information about individuals in the datasets? That’s the exact challenge that Private Join and Compute helps solve.

Using this cryptographic protocol, two parties can encrypt their identifiers and associated data, and then join them. They can then do certain types of calculations on the overlapping set of data to draw useful information from both datasets in aggregate. All inputs (identifiers and their associated data) remain fully encrypted and unreadable throughout the process. Neither party ever reveals their raw data, but they can still answer the questions at hand using the output of the computation. This end result is the only thing that’s decrypted and shared in the form of aggregated statistics. For example, this could be a count, sum, or average of the data in both sets.


A deeper look at the technology 


Private Join and Compute combines two fundamental cryptographic techniques to protect individual data:

  • Private set intersection allows two parties to privately join their sets and discover the identifiers they have in common. We use an oblivious variant which only marks encrypted identifiers without learning any of the identifiers.
  • Homomorphic encryption allows certain types of computation to be performed directly on encrypted data without having to decrypt it first, which preserves the privacy of raw data. Throughout the process, individual identifiers and values remain concealed. For example, you can count how many identifiers are in the common set or compute the sum of values associated with marked encrypted identifiers – without learning anything about individuals. 

This combination of techniques ensures that nothing but the size of the joined set and the statistics (e.g. sum) of its associated values is revealed. Individual items are strongly encrypted with random keys throughout and are not available in raw form to the other party or anyone else.
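
As rough intuition for the private set intersection step, the toy sketch below uses commutative encryption (Diffie-Hellman-style exponentiation) to compute only the size of the intersection. It is a simplification for illustration: the real protocol is the oblivious variant described above and additionally carries homomorphically encrypted values so that sums over the intersection can be computed.

# A toy sketch of the commutative-encryption idea behind the PSI step.
# Illustrative only; omits the homomorphic encryption of associated values.
import hashlib

P = 2**255 - 19  # a large prime modulus

def to_group(identifier: str) -> int:
    return int.from_bytes(hashlib.sha256(identifier.encode()).digest(), "big") % P

class Party:
    def __init__(self, identifiers, secret):
        self.ids, self.k = identifiers, secret

    def encrypt_own(self):
        # First pass: raise each hashed identifier to a private exponent.
        return [pow(to_group(i), self.k, P) for i in self.ids]

    def encrypt_theirs(self, values):
        # Second pass over the peer's values; exponentiation commutes, so
        # doubly encrypted identifiers match iff the originals match.
        return {pow(v, self.k, P) for v in values}

a = Party(["alice@example.com", "bob@example.com"], secret=0xA5A5F00D)
b = Party(["bob@example.com", "carol@example.com"], secret=0x5A5ABEEF)
print(len(a.encrypt_theirs(b.encrypt_own()) &
          b.encrypt_theirs(a.encrypt_own())))  # 1: one shared identifier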

Watch the video or view the full infographic to learn how Private Join and Compute works.

Using multi-party computation to solve real-world problems


Multi-party computation (MPC) is a field with a long history, but it has typically faced many hurdles to widespread adoption beyond academic communities. Common challenges include finding effective and efficient ways to tailor encryption techniques and tools to solve practical problems.

We’re committed to applying MPC and encryption technologies to more concrete, real-world issues at Google and beyond by making privacy technology more widely available. We are exploring a number of potential use cases at Google across collaborative machine learning, user security, and aggregated ads measurement.

And this is just the beginning of what’s possible. This technology can help advance valuable research in a wide array of fields that require organizations to work together without revealing anything about individuals represented in the data. For example:

  • Public policy - if a government implements new wellness initiatives in public schools (e.g. better lunch options and physical education curriculums), what are the long-term health outcomes for impacted students?
  • Diversity and inclusion - when industries create new programs to close gender and racial pay gaps, how does this impact compensation across companies by demographic?
  • Healthcare - when a new preventative drug is prescribed to patients across the country, does it reduce the incidence of disease? 
  • Car safety standards - when auto manufacturers add more advanced safety features to vehicles, does it coincide with a decrease in reported car accidents?

Private Join and Compute keeps individual information safe while allowing organizations to accurately compute and draw useful insights from aggregate statistics. By sharing the technology more widely, we hope this expands the use cases for secure computing. To learn more about the research and methodology behind Private Join and Compute, read the full paper and access the open source code and documentation. We’re excited to see how other organizations will advance MPC and cryptography to answer important questions while upholding individual privacy.


Acknowledgements


Product Manager - Nirdhar Khazanie
Software Engineers - Mihaela Ion, Benjamin Kreuter, Erhan Nergiz, and Karn Seth
Research Scientist - Mariana Raykova


New Chrome Protections from Deception


Chrome was built with security in mind from the very beginning. Today we’re launching two new features to help protect users from deceptive websites. The Suspicious Site Reporter Extension will improve security for Chrome users by giving power users an easy way to report suspicious sites to Google Safe Browsing. We’re also launching a new warning to protect users from sites with deceptive URLs.

We designed Chrome to be secure by default, and easy to use by everyone. Google Safe Browsing has helped protect Chrome users from phishing attacks for over 10 years, and now helps protect more than 4 billion devices every day across multiple browsers and apps by showing warnings to people before they visit dangerous sites or download dangerous files. We’re constantly improving Safe Browsing, and now you can help.

Safe Browsing works by automatically analyzing the websites that we know about through Google Search’s web crawlers, and creating lists of sites that are dangerous or deceptive. With the Suspicious Site Reporter extension, you can help Safe Browsing protect web users by reporting suspicious sites. You can install the extension to start seeing an icon when you’re on a potentially suspicious site, and more information about why the site might be suspicious. By clicking the icon, you’re now able to report unsafe sites to Safe Browsing for further evaluation. If the site is added to Safe Browsing’s lists, you’ll not only protect Chrome users, but users of other browsers and across the entire web.


Help us protect web users by reporting dangerous or deceptive sites to Google Safe Browsing through the Suspicious Site Reporter extension.

One way that deceptive sites might try to trick you is by using a confusing URL. For example, it’s easy to confuse “go0gle.com” with “google.com”. In Chrome 75, we’re launching a new warning to direct users away from sites that have confusing URLs.


Starting in the current version of Chrome (75), you’ll see a warning when the page URL might be confused for URLs of sites you’ve visited recently.

This new warning works by comparing the URL of the page you’re currently on to URLs of pages you’ve recently visited. If the URL looks similar, and might cause you to be confused or deceived, we’ll show a warning that helps you get back to safety.
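
A highly simplified sketch of that comparison is below. Chrome's actual heuristics are more elaborate, and the confusable map here is a tiny hypothetical subset, but it shows the idea of reducing hostnames to a canonical skeleton before comparing them against recently visited sites.

# A simplified sketch of lookalike detection; the confusable map is a tiny
# hypothetical subset and Chrome's real heuristics are more elaborate.
CONFUSABLES = {"0": "o", "1": "l", "vv": "w", "rn": "m"}

def skeleton(host: str) -> str:
    # Reduce a hostname to a canonical form in which lookalike character
    # sequences collapse to the characters they imitate.
    host = host.lower()
    for lookalike, canonical in CONFUSABLES.items():
        host = host.replace(lookalike, canonical)
    return host

def confusable_with_recent(host: str, recent_hosts) -> bool:
    # Warn when the skeletons match but the hostnames themselves differ.
    return any(skeleton(host) == skeleton(r) and host != r for r in recent_hosts)

print(confusable_with_recent("go0gle.com", ["google.com"]))  # True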

We believe that you shouldn't have to be a security expert to feel safe on the web, and that many Chrome power-users share our mission to make the web more secure for everyone. We’ll continue improving Chrome Security to help make Chrome easy to use safely, and are looking forward to collaborating with the community to further that goal. If you'd like to help out, install the new extension and start helping protect the web!

Improving Security and Privacy for Extensions Users

No, Chrome isn’t killing ad blockers -- we’re making them safer

The Chrome Extensions ecosystem has seen incredible advancement, adoption, and growth since its launch over ten years ago. Extensions are a great way for users to customize their experience in Chrome and on the web. As this system grows and expands in both reach and power, user safety and protection remains a core focus of the Chromium project.
In October, we announced a number of changes to improve the security, privacy, and performance of Chrome extensions. These changes include increased user options to control extension permissions, changes to the review process and readability requirements, and requiring two-step verification for developers. In addition, we’ve helped curb abuse through restricting inline installation on websites, preventing the use of deceptive installation practices, and limiting the data collected by extensions. We’ve also made changes to the teams themselves — over the last year, we’ve increased the size of the engineering teams that work on extension abuse by over 300% and the number of reviewers by over 400%.
These and other changes have driven down the rate of malicious installations by 89% since early 2018. Today, we block approximately 1,800 malicious uploads a month, preventing them from ever reaching the store. While the Chrome team is proud of these improvements, the review process alone can't catch all abuse. In order to provide better protection to our users, we need to make changes to the platform as well. This is the suite of changes we’re calling Manifest V3.
This effort is motivated by a desire to keep users safe and to give them more visibility and control over the data they’re sharing with extensions. One way we are doing this is by helping users be deliberate in granting access to sensitive data - such as emails, photos, and access to social media accounts. As we make these changes we want to continue to support extensions in empowering users and enhancing their browsing experience.
To help with this balance, we’re reimagining the way a number of powerful APIs work. Instead of a user granting each extension access to all of their sensitive data, we are creating ways for developers to request access to only the data they need to accomplish the same functionality. One example of this is the introduction of the Declarative Net Request API, which is replacing parts of the Web Request API.
At a high level, this change means that an extension does not need access to all a user’s sensitive data in order to block content. With the current Web Request API, users grant permission for Chrome to pass all information about a network request - which can include things like emails, photos, or other private information - to the extension. In contrast, the Declarative Net Request API allows extensions to block content without requiring the user to grant access to any sensitive information. Additionally, because we are able to cut substantial overhead in the browser, the Declarative Net Request API can have significant, system-level performance benefits over Web Request.
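
For a concrete sense of the difference, here is a minimal sketch of a Declarative Net Request rule, generated as the JSON rules file an extension would ship. The host and file name are illustrative; consult the chrome.declarativeNetRequest documentation for the full schema.

# Emits a minimal declarativeNetRequest rules file. The extension declares
# what should be blocked; the browser applies the rule, and the extension
# never sees the request contents. Host and file name are illustrative.
import json

rules = [{
    "id": 1,
    "priority": 1,
    "action": {"type": "block"},
    "condition": {
        "urlFilter": "||ads.example.com^",      # requests to this host...
        "resourceTypes": ["script", "image"],   # ...of these types are blocked
    },
}]

with open("rules.json", "w") as f:
    json.dump(rules, f, indent=2)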


This has been a controversial change since the Web Request API is used by many popular extensions, including ad blockers. We are not preventing the development of ad blockers or stopping users from blocking ads. Instead, we want to help developers, including content blockers, write extensions in a way that protects users’ privacy.
You can read more about the Declarative Net Request API and how it compares to the Web Request API here.
We understand that these changes will require developers to update the way in which their extensions operate. However, we think it is the right choice to enable users to limit the sensitive data they share with third-parties while giving them the ability to curate their own browsing experience. We are continuing to iterate on many aspects of the Manifest V3 design, and are working with the developer community to find solutions that both solve the use cases extensions have today and keep our users safe and in control.

Use your Android phone’s built-in security key to verify sign-in on iOS devices


Compromised credentials are one of the most common causes of security breaches. While Google automatically blocks the majority of unauthorized sign-in attempts, adding 2-Step Verification (2SV) considerably improves account security. At Cloud Next ‘19, we introduced a new 2SV method, enabling more than a billion users worldwide to better protect their accounts with a security key built into their Android phones.
This technology can be used to verify your sign-in to Google and Google Cloud services on Bluetooth-enabled Chrome OS, macOS, and Windows 10 devices. Starting today, you can use your Android phone to verify your sign-in on Apple iPads and iPhones as well.
Security keys
FIDO security keys provide the strongest protection against automated bots, bulk phishing, and targeted attacks by leveraging public key cryptography to verify your identity and URL of the login page, so that an attacker can’t access your account even if you are tricked into providing your username and password. Learn more by watching our presentation from Cloud Next ‘19.
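
The origin binding is what defeats phishing. The sketch below shows a simplified relying-party check using the pyca/cryptography library: the signed data covers a server-chosen challenge and the origin the browser actually contacted, so an assertion collected on a lookalike site never verifies. Real WebAuthn responses use CBOR and base64url encodings that this sketch glosses over.

# Simplified relying-party verification; real WebAuthn parses CBOR
# authenticator data and base64url fields, which are glossed over here.
import hashlib, json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_assertion(public_key, authenticator_data: bytes,
                     client_data_json: bytes, signature: bytes,
                     expected_origin: str, expected_challenge: str) -> bool:
    client_data = json.loads(client_data_json)
    if client_data["origin"] != expected_origin:
        return False  # signed on a lookalike site: phishing attempt fails
    if client_data["challenge"] != expected_challenge:
        return False  # stale or replayed response
    signed = authenticator_data + hashlib.sha256(client_data_json).digest()
    try:
        public_key.verify(signature, signed, ec.ECDSA(hashes.SHA256()))
        return True
    except Exception:
        return False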


On Chrome OS, macOS, and Windows 10 devices, we leverage the Chrome browser to communicate with your Android phone’s built-in security key over Bluetooth using FIDO’s CTAP2 protocol. On iOS devices, Google’s Smart Lock app is leveraged in place of the browser.


User experience on an iPad with Pixel 3


Until now, there were limited options for using FIDO2 security keys on iOS devices. Now, you can get the strongest 2SV method with the convenience of an Android phone that’s always in your pocket at no additional cost.
It’s easy to get started
Follow these simple steps to protect your Google Account today:
Step 1: Add the security key to your Google Account
  • Add your personal or work Google Account to your Android 7.0+ (Nougat) phone.
  • Make sure you’re enrolled in 2-Step Verification (2SV).
  • On your computer, visit the 2SV settings and click "Add security key".
  • Choose your Android phone from the list of available devices.
Step 2: Use your Android phone's built-in security key
You can find more detailed instructions here. Within enterprise organizations, admins can require the use of security keys for their users in G Suite and Google Cloud Platform (GCP), letting them choose between using a physical security key, an Android phone, or both.
We also recommend that you register a backup hardware security key (from Google or a number of other vendors) for your account and keep it in a safe place, so that you can gain access to your account if you lose your Android phone.

PHA Family Highlights: Triada



We continue our PHA family highlights series with the Triada family, which was first discovered early in 2016. The main purpose of Triada apps was to install spam apps on a device, which then displayed ads. The creators of Triada collected revenue from the ads displayed by these spam apps. The methods Triada used were complex and unusual for these types of apps. Triada apps started as rooting trojans, but as Google Play Protect strengthened defenses against rooting exploits, Triada apps were forced to adapt, progressing to a system image backdoor. However, thanks to OEM cooperation and our outreach efforts, OEMs prepared system images with security updates that removed the Triada infection.

History of Triada

Triada was first described in a blog post on the Kaspersky Lab website in March 2016 and in a follow-up blog post in June 2016. Back then, it was a rooting trojan that tried to exploit the device and, after gaining elevated privileges, performed a host of different actions. To hide these actions from analysts, Triada used a combination of dynamic code loading and additional app installs. The Kaspersky posts detail the code injection technique used by Triada and provide some statistics on infected devices at the time. In this post, we’ll focus on the peculiar encryption routine and the unusual binary files used by Triada.
Triada’s first action was to install a type of superuser (su) binary file. This su binary allowed other apps on the device to use root permissions. Unlike the regular su binaries common on other Linux systems, the one used by Triada required a password.
The binary accepted two passwords, od2gf04pd9 and ac32dorbdq. This is illustrated in the IDA screenshot below. Depending on which one was provided, the binary either 1) ran the command given as an argument as root or 2) concatenated all of the arguments, ran that concatenation preceded by sh, then ran them as root. Either way, the app had to know the correct password to run the command as root.
This Triada rooting trojan was mainly used to install apps and display ads. This trojan targeted older devices because the rooting exploits didn’t work on newer ones. Therefore, the trojan implemented a weight watching feature to decide if old apps needed to be deleted to make space for new installs.
Weight watching included several steps and attempted to free up space on the device’s user partition and system partition. Using a blacklist and whitelist of apps, it first removed all the apps on its blacklist. If more free space was required, it removed all other apps, leaving only the apps on the whitelist. This process freed space while ensuring the apps needed for the phone to function properly were not removed.
Every app on the system partition had a number, or weight, associated with it. The weight was a sum of the number of apps installed on the same date as the app in question and the number of apps signed with the same certificate. The apps with the lowest weight were installed in isolation (that is, not on a day that the device system image was created) and weren’t signed by the OEM or weren’t part of a developer bundle. In the weight watching process, these apps were removed first, until enough space was made for the new app.
su binary accepts two passwords
In addition to installing apps that display ads, Triada injected code into four web browsers: AOSP (com.android.browser), 360 Secure (com.qihoo.browser), Cheetah (com.ijinshan.browser_fast), and Oupeng (com.oupeng.browser). The code was injected using the same technique described in our blog post about the Zen PHA family and in previously mentioned Kaspersky blog posts.
The web browser injection was done to overwrite the URLs and substitute ad banners on websites with ads benefiting the Triada authors.
Triada also used a peculiar and complex communication encryption routine. Whenever it had to send a request to the Command and Control (C&C) server, it encrypted the request using two XOR loops with different passwords. Because of XOR rules, if the passwords had the same character in the same position, those characters weren’t encrypted. The encrypted request was saved to a file, which had the same name as its size. Finally, the file was zipped and sent to the C&C server in the POST request body.
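
The sketch below reproduces that double-XOR routine with placeholder passwords (the actual Triada keys were different) and demonstrates the cancellation property described above.

# The double-XOR routine with placeholder passwords. Where the two
# passwords share a character at the same position, the XORs cancel and
# the plaintext byte passes through unchanged.
def xor_loop(data: bytes, password: bytes) -> bytes:
    return bytes(b ^ password[i % len(password)] for i, b in enumerate(data))

def triada_style_encrypt(request: bytes, pw1: bytes, pw2: bytes) -> bytes:
    return xor_loop(xor_loop(request, pw1), pw2)

# pw1 and pw2 differ only at position 2, so only the third byte is masked:
print(triada_style_encrypt(b"secret", b"abcdef", b"abXdef"))  # b'seXret'
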
The example below illustrates one such request. The yellow bytes are the zip file’s signature of the central directory file header. The red bytes show the uncompressed file size of 0x0952. The blue bytes show the file name length (4) and the name itself (2386, a decimal version of 0x0952).
09 00 00 50 4B 01 02 14 00 14 00 08 00 08 00 4F ...PK..........O
91 F3 48 AE CF 91 D5 B1 04 00 00 52 09 00 00 04 ..H........R....
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 32 33 38 36 50 4B 05 06 00 00 00 00 01 00 01 .2386PK.........
00 32 00 00 00 E3 04 00 00 00 00 .2.........
The underlying data protocol changed periodically. It was either a simple JSON, a list of key-value pairs similar to the properties file, or a proprietary format as shown below.
[collect_Head]device=Nexus 5X
[collect_Space]xadevicekey=xxxxx

[collect_Space]collentmod=opappresultmode
[collect_Space]registerUser=true
[collect_End]
When Triada was discovered, we implemented detection that removed Triada samples from all devices with Google Play Protect. This implementation, combined with the increased security on newer Android devices, made it significantly harder for Triada to infect devices.

When rooting doesn’t work…

During the summer of 2017 we noticed a change in new Triada samples. Instead of rooting the device to obtain elevated privileges, Triada evolved into a pre-installed Android framework backdoor. The changes to Triada included an additional call in the Android framework log function, demonstrated below with a highlighted configuration string.
LABEL_13:
v18 = -1;
LABEL_18:
j___config_log_println(v7, v6, v10, v11, "cf89450001");
if ( v10 )
This backdoored log function version of Triada was first described by Dr.Web in July 2017. The blog post includes a description of Triada code injection methods.
By backdooring the log function, the additional code executes every time the log method is called (that is, every time any app on the phone tries to log something). These log attempts happen many times per second, so the additional code is running non-stop. The additional code also executes in the context of the app logging a message, so Triada can execute code in any app context. The code injection framework in early versions of Triada worked on Android releases prior to Marshmallow.
The main purpose of the backdoor function was to execute code in another app’s context. To carry this code, Triada developers created a new file format, which we called MMD, based on the file header.
The MMD format was an encrypted version of a DEX file, which was then executed in the app context. The encryption algorithm was a double XOR loop with two different passwords. The format is illustrated below.
Each MMD file had a specific file name of the format <MD5 of the process name>36.jmd. By using the MD5 of the process name, the Triada authors tried to obscure the injection target. However, the pool of all available process names is fairly small, so this hash was easily reversible.
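
Recovering the injection target is therefore just a dictionary lookup over plausible process names, as this sketch illustrates (the candidate list here is abbreviated).

# Sketch: brute-forcing the injection target from an MMD file name. The
# candidate list is abbreviated; any process-name dictionary would work.
import hashlib

def mmd_name(process: str) -> str:
    return hashlib.md5(process.encode()).hexdigest() + "36.jmd"

candidates = ["com.android.systemui", "com.android.vending", "com.android.phone"]
observed = mmd_name("com.android.vending")      # as if recovered from a device
print(next(p for p in candidates if mmd_name(p) == observed))
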
We identified two code injection targets: com.android.systemui (the System UI app) and com.android.vending (the Google Play app). The first target was injected to get the GET_REAL_TASKS permission. This is a signature-level permission, which means that it can’t be held by ordinary Android apps.
Starting with Android Lollipop, the getRecentTasks() method is deprecated to protect users' privacy. However, apps holding the GET_REAL_TASKS permission can get the result of this method call. To hold the GET_REAL_TASKS permission, an app has to be signed with a specific certificate, the device’s platform cert, which is held by the OEM. Triada didn’t have access to this cert. Instead it executed additional code in the System UI app, which has the GET_REAL_TASKS permission.
The injected code returned the app running on top (the activity running in the foreground and being actively used by the device user) to other apps on the device. This app was exposed using two methods: an intent or a socket created for this purpose. When an app on the device sent the intent or wrote to a socket created by Triada’s code injection, it received the package name of the app running on top. Triada used the package name to determine if an ad was displayed. The assumption was that if the app running on top was a browser, the user would expect to see some ads, so Triada displayed ads from the background.
The second injection target was the Google Play app. This injection supported five commands and responses to them. The supported commands are shown below in Chinese, the language used throughout the Triada app and injection, with English translations alongside.
  1. 下载请求 (download request)
  2. 下载结果 (download result)
  3. 安装请求 (install request)
  4. 安装结果 (installation result)
  5. 激活请求 (activation request)
  6. 激活结果 (activation result)
  7. 拉活请求 (pull request)
  8. 拉活结果 (pull result)
  9. 卸载请求 (uninstall request)
  10. 卸载结果 (uninstall result)
The commands trigger the heartbeat (pull request), download, installation, uninstallation (in the Google Play app context), and activation (the first execution) of the apps. In the Google Play app context, installation meant that Triada didn’t have to turn on installation from unknown sources and all app installs looked like they were from Google Play.
The apps were downloaded from the C&C server and the communication with the C&C was encrypted using the same custom encryption routine using double XOR and zip. The downloaded and installed apps used the package names of unpopular apps available on Google Play. They didn’t have any relation to the apps on Google Play apart from the same package name.
The last piece of the puzzle was the way the backdoor in the log function communicated with the installed apps. This communication prompted the investigation: the change in Triada behavior mentioned at the beginning of this section made it appear that there was another component on the system image. The apps could communicate with the Triada backdoor by logging a line with a specific predefined tag and message.
The reverse communication was more complicated. The backdoor used Java properties to relay a message to the app. These properties were key-value pairs similar to Android system properties, but they were scoped to a specific process. Setting one of these properties in one app context ensures that other apps won’t see this property. Despite that, some versions of Triada indiscriminately created the properties in every single app process.
The diagram below illustrates the communication mechanisms of the Triada backdoor.
Communication mechanisms of Triada

Reverse engineering countermeasures and development

The Triada backdoor was hidden to make the analysis harder. The strings in the Android framework library that related to Triada activities were encrypted, as shown below.
Android framework strings
The strings were encrypted using the algorithm of two XOR loops. However, the first highlighted string, 36.jmd, wasn’t encrypted. This is the MMD file name string mentioned before.
Another anti-analysis measure implemented by the Triada authors was function padding, including additional exported functions that don't serve any purpose apart from making the file size bigger and the function layout more random with every compilation. Four types of these functions are shown in the screenshots below.
Example of function padding
One final interesting feature of Triada worth mentioning is its development cycle. By analyzing subsequent versions of the Triada backdoor (up to 1.5.1), we saw how the code changed. In the newest version, the authors replaced MD5 with SHA1 for hashing the filenames, which come from a restricted pool of values. The newest version also encrypted the 36.jmd string and introduced changes to the code for compatibility with Android Nougat.
There are also code stubs pointing at the modification of the SystemUI and WebView Android framework elements. We couldn’t find the code that was executed by these modifications, just code stubs suggesting more development in the future.

OEM outreach

Triada infects device system images through a third party during the production process. Sometimes OEMs want to include features that aren’t part of the Android Open Source Project, such as face unlock. The OEM might partner with a third party that can develop the desired feature and send the whole system image to that vendor for development.
Based on analysis, we believe that a vendor using the name Yehuo or Blazefire infected the returned system image with Triada.
Production process with malicious party
We coordinated with the affected OEMs to provide system updates and remove traces of Triada. We also scan for Triada and similar threats on all Android devices.
OEMs should ensure that all third-party code is reviewed and can be tracked to its source. Additionally, any functionality added to the system image should only support requested features. It’s a good practice to perform a security review of a system image after adding third-party code.

Summary

Triada was inconspicuously included in the system image as third-party code for additional features requested by the OEMs. This highlights the need for thorough, ongoing security reviews of system images before devices are sold to users, as well as any time they are updated over the air (OTA).
By working with the OEMs and supplying them with instructions for removing the threat from devices, we reduced the spread of preinstalled Triada variants and removed infections from the devices through the OTA updates.
The Triada case is a good example of how Android malware authors are becoming more adept. It also shows that it has become harder to infect Android devices, especially if the malware author requires privilege elevation.
We are also performing a security review of system images through the Build Test Suite. You can read more about this program in the Android Security 2018 Year in Review report. Triada indicators of compromise are one of many signatures included in the system image scan. Additionally, Google Play Protect continues to track and remove any known versions of Triada and Triada-related apps it detects from user devices.

New research: How effective is basic account hygiene at preventing hijacking


Every day, we protect users from hundreds of thousands of account hijacking attempts. Most attacks stem from automated bots with access to third-party password breaches, but we also see phishing and targeted attacks. Earlier this year, we suggested how just five simple steps like adding a recovery phone number can help keep you safe, but we wanted to prove it in practice.
We teamed up with researchers from New York University and the University of California, San Diego to find out just how effective basic account hygiene is at preventing hijacking. The year-long study, on wide-scale attacks and targeted attacks, was presented on Wednesday at a gathering of experts, policy makers, and users called The Web Conference.
Our research shows that simply adding a recovery phone number to your Google Account can block up to 100% of automated bots, 99% of bulk phishing attacks, and 66% of targeted attacks that occurred during our investigation.


Google’s automatic, proactive hijacking protection
We provide an automatic, proactive layer of security to better protect all our users against account hijacking. Here’s how it works: if we detect a suspicious sign-in attempt (say, from a new location or device), we’ll ask for additional proof that it’s really you. This proof might be confirming that you have access to a trusted phone or answering a question to which only you know the correct response.
If you’ve signed into your phone or set up a recovery phone number, we can provide a similar level of protection to 2-Step Verification via device-based challenges. We found that an SMS code sent to a recovery phone number helped block 100% of automated bots, 96% of bulk phishing attacks, and 76% of targeted attacks. On-device prompts, a more secure replacement for SMS, helped prevent 100% of automated bots, 99% of bulk phishing attacks and 90% of targeted attacks.


Both device- and knowledge-based challenges help thwart automated bots, while device-based challenges help thwart phishing and even targeted attacks.

If you don’t have a recovery phone number established, then we might fall back on the weaker knowledge-based challenges, like recalling your last sign-in location. This is an effective defense against bots, but protection rates for phishing can drop to as low as 10%. The same vulnerability exists for targeted attacks. That’s because phishing pages and targeted attackers can trick you into revealing any additional identifying information we might ask for.
Given the security benefits of challenges, one might ask why we don’t require them for all sign-ins. The answer is that challenges introduce additional friction and increase the risk of account lockout. In an experiment, 38% of users did not have access to their phone when challenged. Another 34% of users could not recall their secondary email address.
If you lose access to your phone, or can’t solve a challenge, you can always return to a trusted device you previously logged in from to gain access to your account.


Digging into “hack for hire” attacks
While most bot and phishing attacks are blocked by our automatic protections, targeted attacks are more pernicious. As part of our ongoing efforts to monitor hijacking threats, we have been investigating emerging “hack for hire” criminal groups that purport to break into a single account for a fee on the order of $750 USD. These attackers often rely on spear phishing emails that impersonate family members, colleagues, government officials, or even Google. If the target doesn’t fall for the first spear phishing attempt, follow-on attacks persist for upwards of a month.


Example man-in-the-middle phishing attack that checks for password validity in real-time. Afterwards, the page prompts victims to disclose SMS authentication codes to access the victim’s account.

We estimate just one in a million users face this level of risk. Attackers don’t target random individuals though. While the research shows that our automatic protections can help delay, and even prevent as many as 66% of the targeted attacks that we studied, we still recommend that high-risk users enroll in our Advanced Protection Program. In fact, zero users that exclusively use security keys fell victim to targeted phishing during our investigation.



Take a moment to help keep your account secure
Just like buckling a seat belt, take a moment to follow our five tips to help keep your account secure. As our research shows, one of the easiest things you can do to protect your Google Account is to set up a recovery phone number. For high-risk users—like journalists, activists, business leaders, and political campaign teams—our Advanced Protection Program provides the highest level of security. You can also help protect your non-Google accounts from third-party password breaches by installing the Password Checkup Chrome extension.

Advisory: Security Issue with Bluetooth Low Energy (BLE) Titan Security Keys

We’ve become aware of an issue that affects the Bluetooth Low Energy (BLE) version of the Titan Security Key available in the U.S. and are providing users with the immediate steps they need to take to protect themselves and to receive a free replacement key. This bug affects Bluetooth pairing only, so non-Bluetooth security keys are not affected. Current users of Bluetooth Titan Security Keys should continue to use their existing keys while waiting for a replacement, since security keys provide the strongest protection against phishing.

What is the security issue?

Due to a misconfiguration in the Titan Security Keys’ Bluetooth pairing protocols, it is possible for an attacker who is physically close to you at the moment you use your security key (within approximately 30 feet) to (a) communicate with your security key, or (b) communicate with the device to which your key is paired. In order for the misconfiguration to be exploited, an attacker would have to align a series of events in close coordination:

  • When you’re trying to sign into an account on your device, you are normally asked to press the button on your BLE security key to activate it. An attacker in close physical proximity at that moment in time can potentially connect their own device to your affected security key before your own device connects. In this set of circumstances, the attacker could sign into your account using their own device if the attacker somehow already obtained your username and password and could time these events exactly.
  • Before you can use your security key, it must be paired to your device. Once paired, an attacker in close physical proximity to you could use their device to masquerade as your affected security key and connect to your device at the moment you are asked to press the button on your key. After that, they could attempt to change their device to appear as a Bluetooth keyboard or mouse and potentially take actions on your device.

This security issue does not affect the primary purpose of security keys, which is to protect you against phishing by a remote attacker. Security keys remain the strongest available protection against phishing; it is still safer to use a key that has this issue, rather than turning off security key-based two-step verification (2SV) on your Google Account or downgrading to less phishing-resistant methods (e.g. SMS codes or prompts sent to your device). This local proximity Bluetooth issue does not affect USB or NFC security keys.

Am I affected?

This issue affects the BLE version of Titan Security Keys. To determine if your key is affected, check the back of the key. If it has a “T1” or “T2” on the back of the key, your key is affected by the issue and is eligible for free replacement.

Steps to protect yourself

If you want to minimize the remaining risk until you receive your replacement keys, you can perform the following additional steps:

iOS devices:

On devices running iOS version 12.2 or earlier, we recommend using your affected security key in a private place where a potential attacker is not within close physical proximity (approximately 30 feet). After you’ve used your key to sign into your Google Account on your device, immediately unpair it. You can use your key in this manner again while waiting for your replacement, until you update to iOS 12.3.

Once you update to iOS 12.3, your affected security key will no longer work. You will not be able to use your affected key to sign into your Google Account, or any other account protected by the key, and you will need to order a replacement key. If you are already signed into your Google Account on your iOS device, do not sign out because you won’t be able to sign in again until you get a new key. If you are locked out of your Google Account on your iOS device before your replacement key arrives, see these instructions for getting back into your account. Note that you can continue to sign into your Google Account on non-iOS devices.

On Android and other devices:

We recommend using your affected security key in a private place where a potential attacker is not within close physical proximity (approximately 30 feet). After you’ve used your affected security key to sign into your Google Account, immediately unpair it. Android devices updated with the upcoming June 2019 Security Patch Level (SPL) and beyond will automatically unpair affected Bluetooth devices, so you won’t need to unpair manually. You can also continue to use your USB or NFC security keys, which are supported on Android and not affected by this issue.

How to get a replacement key

We recommend that everyone with an affected BLE Titan Security Key get a free replacement by visiting google.com/replacemykey.

Is it still safe to use my affected BLE Titan Security Key?

It is much safer to use the affected key instead of no key at all. Security keys are the strongest protection against phishing currently available.