Author Archives: Aaron Stein

Google CTF 2019 is here



June has become the month when we invite thousands of security aficionados to put their skills to the test...

In 2018, 23,563 people submitted at least one flag on their hunt for the secret cake recipe in the Beginner’s Quest. While 330 teams competed for a place in the CTF Finals, the lucky 10 winning teams got a trip to London to play with fancy tools, solve mysterious videos and dine in Churchill’s old chambers.

This June, we will be hosting our fourth annual Capture the Flag event. Teams of security researchers will again come together from all over the globe for one weekend to eat, sleep and breathe security puzzles and challenges - some of them working together around the clock to solve some of the toughest security challenges on the planet.

Up for grabs this year is $31,337.00 in prize money and the title of Google CTF Champion.

Ready? Here are the details:


  1. The qualification round will take place online on Saturday and Sunday, June 22 and 23, 2019
  2. The top 10 teams will qualify for the onsite final (location and details coming soon)
  3. Players from the Beginner's Quest can enter the draw for 10 tickets to witness the Google CTF finals

Whether you’re a seasoned CTF player or just curious about cyber security and ethical hacking, we want you to join us. If you’re just starting out, the “Beginner's Quest” is perfect for you. Sign up to learn skills, meet new friends in the security community and even watch the pros in action. See you there! For the latest announcements, see g.co/ctf, subscribe to our mailing list or follow us on @GoogleVRP.


Disclosing vulnerabilities to protect users across platforms



On Wednesday, February 27th, we reported two 0-day vulnerabilities — previously publicly unknown vulnerabilities — one affecting Google Chrome and the other affecting Microsoft Windows, that were being exploited together.

To remediate the Chrome vulnerability (CVE-2019-5786), Google released an update for all Chrome platforms on March 1; this update was pushed through Chrome auto-update. We encourage users to verify that Chrome auto-update has already updated Chrome to 72.0.3626.121 or later.

The second vulnerability was in Microsoft Windows. It is a local privilege escalation in the Windows win32k.sys kernel driver that can be used as a security sandbox escape. The vulnerability is a NULL pointer dereference in win32k!MNGetpItemFromIndex, triggered when the NtUserMNDragOver() system call is invoked under specific circumstances.

We strongly believe this vulnerability may only be exploitable on Windows 7 due to recent exploit mitigations added in newer versions of Windows. To date, we have only observed active exploitation against Windows 7 32-bit systems.

Pursuant to Google’s vulnerability disclosure policy, when we discovered the vulnerability we reported it to Microsoft. Today, also in compliance with our policy, we are publicly disclosing its existence, because it is a serious vulnerability in Windows that we know was being actively exploited in targeted attacks. The unpatched Windows vulnerability can still be used to elevate privileges or be combined with another browser vulnerability to evade security sandboxes. Microsoft has told us they are working on a fix.

As mitigation advice for this vulnerability, users should consider upgrading to Windows 10 if they are still running an older version of Windows, and applying Windows patches from Microsoft when they become available. We will update this post when those patches are available.

Open sourcing ClusterFuzz



[Cross-posted from the Google Open-Source Blog]

Fuzzing is an automated method for detecting bugs in software that works by feeding unexpected inputs to a target program. It is effective at finding memory corruption bugs, which often have serious security implications. Manually finding these issues is both difficult and time consuming, and bugs often slip through despite rigorous code review practices. For software projects written in an unsafe language such as C or C++, fuzzing is a crucial part of ensuring their security and stability.
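
To make this concrete, here is a deliberately naive random fuzzer sketched in Kotlin. The parseRecord target and its planted bug are hypothetical, and real fuzzers are typically coverage-guided rather than purely random:

```kotlin
import kotlin.random.Random

// Hypothetical target: a parser that should tolerate arbitrary input.
fun parseRecord(data: ByteArray): Int {
    if (data.size < 4) return 0
    val len = data[0].toInt() and 0xFF
    var sum = 0
    // Planted bug: trusts the length field without checking the real size,
    // so a malformed input triggers ArrayIndexOutOfBoundsException.
    for (i in 1..len) sum += data[i].toInt()
    return sum
}

fun main() {
    repeat(100_000) { iteration ->
        val input = Random.nextBytes(Random.nextInt(0, 64))
        try {
            parseRecord(input)
        } catch (e: Exception) {
            println("crash at iteration $iteration: $e")
        }
    }
}
```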

In order for fuzzing to be truly effective, it must be continuous, done at scale, and integrated into the development process of a software project. To provide these features for Chrome, we wrote ClusterFuzz, a fuzzing infrastructure running on over 25,000 cores. Two years ago, we began offering ClusterFuzz as a free service to open source projects through OSS-Fuzz.

Today, we’re announcing that ClusterFuzz is now open source and available for anyone to use.

We developed ClusterFuzz over eight years to fit seamlessly into developer workflows, and to make it dead simple to find bugs and get them fixed. ClusterFuzz provides end-to-end automation, from bug detection, to triage (accurate deduplication, bisection), to bug reporting, and finally to automatic closure of bug reports.
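
As a rough illustration of the triage step (not ClusterFuzz's actual algorithm), deduplication can be approximated by deriving a signature from the top frames of a crash stack, so crashes that share a signature are filed as one bug:

```kotlin
// Toy crash deduplication: two crashes with the same top-of-stack frames are
// assumed to be the same underlying bug. Illustrative only.
fun crashSignature(stackFrames: List<String>, topN: Int = 3): String =
    stackFrames.take(topN).joinToString(" <- ")

fun main() {
    val crashA = listOf("parseRecord", "readChunk", "handleRequest", "main")
    val crashB = listOf("parseRecord", "readChunk", "handleRequest", "fuzzDriver")
    // Same signature: reported as one bug, not two.
    println(crashSignature(crashA) == crashSignature(crashB))  // true
}
```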

ClusterFuzz has found more than 16,000 bugs in Chrome and more than 11,000 bugs in over 160 open source projects integrated with OSS-Fuzz. It is an integral part of the development process of Chrome and many other open source projects. ClusterFuzz is often able to detect bugs hours after they are introduced and verify the fix within a day.

Check out our GitHub repository. You can try ClusterFuzz locally by following these instructions. In production, ClusterFuzz depends on some key Google Cloud Platform services, but you can use your own compute cluster. We welcome your contributions and look forward to any suggestions to help improve and extend this infrastructure. Through open sourcing ClusterFuzz, we hope to encourage all software developers to integrate fuzzing into their workflows.

Android Pie à la mode: Security & Privacy

Posted by Vikrant Nanda and René Mayrhofer, Android Security & Privacy Team

[Cross-posted from the Android Developers Blog]


There is no better time to talk about Android dessert releases than the holidays because who doesn't love dessert? And what is one of our favorite desserts during the holiday season? Well, pie of course.

In all seriousness, pie is a great analogy because of how the various ingredients turn into multiple layers of goodness: right from the software crust on top to the hardware layer at the bottom. Read on for a summary of security and privacy features introduced in Android Pie this year.
Platform hardening
With Android Pie, we updated File-Based Encryption to support external storage media (such as expandable storage cards). We also introduced support for metadata encryption where hardware support is present. With filesystem metadata encryption, a single key present at boot time encrypts whatever content is not encrypted by file-based encryption (such as directory layouts, file sizes, permissions, and creation/modification times).

Android Pie also introduced a BiometricPrompt API that apps can use to provide biometric authentication dialogs (such as a fingerprint prompt) on a device in a modality-agnostic fashion. This functionality creates a standardized look, feel, and placement for the dialog. This kind of standardization gives users more confidence that they're authenticating against a trusted biometric credential checker.
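
A minimal Kotlin sketch of the platform API; the strings and callback handling here are illustrative:

```kotlin
import android.content.Context
import android.hardware.biometrics.BiometricPrompt
import android.os.CancellationSignal

// Sketch: show the standardized biometric dialog (API 28+).
fun promptForBiometric(context: Context) {
    val prompt = BiometricPrompt.Builder(context)
        .setTitle("Confirm it's you")
        .setDescription("Authenticate to continue")
        .setNegativeButton("Cancel", context.mainExecutor) { _, _ ->
            // User tapped the negative button.
        }
        .build()

    prompt.authenticate(
        CancellationSignal(),
        context.mainExecutor,
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(
                result: BiometricPrompt.AuthenticationResult) {
                // Authenticated against a trusted biometric credential checker.
            }
            override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
                // Biometrics unavailable or the prompt was aborted.
            }
        }
    )
}
```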

New protections and test cases for the Application Sandbox help ensure all non-privileged apps targeting Android Pie (and all future releases of Android) run in stronger SELinux sandboxes. By providing per-app cryptographic authentication to the sandbox, this protection improves app separation, prevents overriding safe defaults, and (most significantly) prevents apps from making their data widely accessible.
Anti-exploitation improvements
With Android Pie, we expanded our compiler-based security mitigations, which instrument runtime operations to fail safely when undefined behavior occurs.

Control Flow Integrity (CFI) is a security mechanism that disallows changes to the original control flow graph of compiled code. In Android Pie, it has been enabled by default within the media frameworks and other security-critical components, such as for Near Field Communication (NFC) and Bluetooth protocols. We also implemented support for CFI in the Android common kernel, continuing our efforts to harden the kernel in previous Android releases.

Integer Overflow Sanitization is a security technique used to mitigate memory corruption and information disclosure vulnerabilities caused by integer operations. We've expanded our use of Integer Overflow sanitizers by enabling their use in libraries where complex untrusted input is processed or where security vulnerabilities have been reported.
Continued investment in hardware-backed security

One of the highlights of Android Pie is Android Protected Confirmation, the first major mobile OS API that leverages a hardware-protected user interface (Trusted UI) to perform critical transactions completely outside the main mobile operating system. Developers can use this API to display a trusted UI prompt to the user, requesting approval via a physically protected input (such as a button on the device). The resulting cryptographically signed statement allows the relying party to reaffirm that the user would like to complete a sensitive transaction through their app.
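
Here is a sketch of the flow using the android.security.ConfirmationPrompt API; the prompt text and nonce handling are illustrative:

```kotlin
import android.content.Context
import android.security.ConfirmationCallback
import android.security.ConfirmationPrompt

// Sketch: hand control to the Trusted UI to confirm a sensitive transaction.
fun confirmTransfer(context: Context, nonce: ByteArray) {
    if (!ConfirmationPrompt.isSupported(context)) return  // fall back to other UX

    val prompt = ConfirmationPrompt.Builder(context)
        .setPromptText("Send \$100 to Alice?")
        .setExtraData(nonce)  // e.g. a server-provided nonce bound to this transaction
        .build()

    prompt.presentPrompt(context.mainExecutor, object : ConfirmationCallback() {
        override fun onConfirmed(dataThatWasConfirmed: ByteArray) {
            // Sign dataThatWasConfirmed with a confirmation-bound key and send
            // the signed statement to the relying party.
        }
        override fun onDismissed() {
            // The user declined the transaction.
        }
    })
}
```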

We also introduced support for a new Keystore type that provides stronger protection for private keys by leveraging tamper-resistant hardware with dedicated CPU, RAM, and flash memory. StrongBox Keymaster is an implementation of the Keymaster hardware abstraction layer (HAL) that resides in a hardware security module. This module is designed and required to have its own processor, secure storage, True Random Number Generator (TRNG), side-channel resistance, and tamper-resistant packaging.
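
As a sketch, binding a new key to StrongBox takes one extra builder flag at generation time; devices without the dedicated hardware throw StrongBoxUnavailableException:

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator

// Sketch: generate an EC signing key whose private half lives in StrongBox.
fun generateStrongBoxKey(alias: String) {
    val spec = KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_SIGN)
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setIsStrongBoxBacked(true)  // requires a StrongBox Keymaster on the device
        .build()

    val generator = KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
    generator.initialize(spec)
    generator.generateKeyPair()  // throws StrongBoxUnavailableException if absent
}
```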

Other Keystore features (as part of Keymaster 4) include Keyguard-bound keys, Secure Key Import, 3DES support, and version binding. Keyguard-bound keys enable use restriction so as to protect sensitive information. Secure Key Import facilitates secure key use while protecting key material from the application or operating system. You can read more about these features in our recent blog post as well as the accompanying release notes.
Enhancing user privacy

User privacy has been boosted with several behavior changes, such as limiting the access background apps have to the camera, microphone, and device sensors. New permission rules and permission groups have been created for phone calls, phone state, and Wi-Fi scans, as well as restrictions around information retrieved from Wi-Fi scans. We have also added associated MAC address randomization, so that a device can use a different network address when connecting to a Wi-Fi network.

On top of that, Android Pie added support for encrypting Android backups with the user's screen lock secret (that is, PIN, pattern, or password). By design, this means that an attacker would not be able to access a user's backed-up application data without specifically knowing their passcode. Auto backup for apps has been enhanced by providing developers a way to specify conditions under which their app's data is excluded from auto backup. For example, Android Pie introduces a new flag to determine whether a user's backup is client-side encrypted.

As part of a larger effort to move all web traffic away from cleartext (unencrypted HTTP) and towards being secured with TLS (HTTPS), we changed the defaults for Network Security Configuration to block all cleartext traffic. We're protecting users with TLS by default, unless an app explicitly opts in to cleartext for specific domains. Android Pie also adds built-in support for DNS over TLS, automatically upgrading DNS queries to TLS if a network's DNS server supports it. This protects information about IP addresses visited from being sniffed or intercepted on the network level.
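
Apps can query the resulting policy at runtime with the platform API; on Android Pie the default answer is false unless the app has opted in for that domain. A minimal sketch:

```kotlin
import android.security.NetworkSecurityPolicy

// Sketch: check whether the platform would permit cleartext HTTP to a host.
// On Android Pie, this returns false by default.
fun mayUseCleartext(host: String): Boolean =
    NetworkSecurityPolicy.getInstance().isCleartextTrafficPermitted(host)
```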


We believe that the features described in this post advance the security and privacy posture of Android, but you don't have to take our word for it. Year after year our continued efforts are demonstrably resulting in better protection as evidenced by increasing exploit difficulty and independent mobile security ratings. Now go and enjoy some actual pie while we get back to preparing the next Android dessert release!



Acknowledgements: This post leveraged contributions from Chad Brubaker, Janis Danisevskis, Giles Hogben, Troy Kensinger, Ivan Lozano, Vishwath Mohan, Frank Salim, Sami Tolvanen, Lilian Young, and Shawn Willden.

Combating Potentially Harmful Applications with Machine Learning at Google: Datasets and Models



[Cross-posted from the Android Developers Blog]

In a previous blog post, we talked about using machine learning to combat Potentially Harmful Applications (PHAs). This blog post covers how Google uses machine learning techniques to detect and classify PHAs. We'll discuss the challenges in the PHA detection space, including the scale of data, the correct identification of PHA behaviors, and the evolution of PHA families. Next, we will introduce the two datasets that make the training and implementation of machine learning models possible: app analysis data and Google Play data. Finally, we will present some of the approaches we use, including logistic regression and deep neural networks.

Using Machine Learning to Scale

Detecting PHAs is challenging and requires a lot of resources. Our security experts need to understand how apps interact with the system and the user, analyze complex signals to find PHA behavior, and evolve their tactics to stay ahead of PHA authors. Every day, Google Play Protect (GPP) analyzes over half a million apps, which generates a lot of new data for our security experts to process.

Leveraging machine learning helps us detect PHAs faster and at a larger scale. We can detect more PHAs just by adding additional computing resources. In many cases, machine learning can find PHA signals in the training data without human intervention. Sometimes, those signals are different than signals found by security experts. Machine learning can take better advantage of this data, and discover hidden relationships between signals more effectively.

There are two major parts of Google Play Protect's machine learning protections: the data and the machine learning models.

Data Sources

The quality and quantity of the data used to create a model are crucial to the success of the system. For the purpose of PHA detection and classification, our system mainly uses two anonymous data sources: data from analyzing apps and data from how users experience apps.

App Data

Google Play Protect analyzes every app that it can find on the internet. We created a dataset by decomposing each app's APK and extracting PHA signals with deep analysis. We execute various processes on each app to find particular features and behaviors that are relevant to the PHA categories in scope (for example, SMS fraud, phishing, privilege escalation). Static analysis examines the different resources inside an APK file while dynamic analysis checks the behavior of the app when it's actually running. These two approaches complement each other. For example, dynamic analysis requires the execution of the app regardless of how obfuscated its code is (obfuscation hinders static analysis), and static analysis can help detect cloaking attempts in the code that may in practice bypass dynamic analysis-based detection. In the end, this analysis produces information about the app's characteristics, which serve as a fundamental data source for machine learning algorithms.

Google Play Data

In addition to analyzing each app, we also try to understand how users perceive that app. User feedback (such as the number of installs, uninstalls, user ratings, and comments) collected from Google Play can help us identify problematic apps. Similarly, information about the developer (such as the certificates they use and their history of published apps) contributes valuable knowledge that can be used to identify PHAs. All these metrics are generated when developers submit a new app (or a new version of an app) and by millions of Google Play users every day. This information helps us to understand the quality, behavior, and purpose of an app so that we can identify new PHA behaviors or identify similar apps.

In general, our data sources yield raw signals, which then need to be transformed into machine learning features for use by our algorithms. Some signals, such as the permissions that an app requests, have a clear semantic meaning and can be used directly. In other cases, we need to engineer our data to make new, more powerful features. For example, we can aggregate the ratings of all apps that a particular developer owns, calculate a rating per developer, and use it to validate future apps. We also employ several techniques to focus on interesting data. To create compact representations for sparse data, we use embeddings. To help streamline the data and make it more useful to models, we use feature selection. Depending on the target, feature selection helps us keep the most relevant signals and remove irrelevant ones.
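
As a toy Kotlin illustration of that per-developer aggregation (AppRecord and the sample values are hypothetical, not a real schema):

```kotlin
// Sketch: derive a per-developer rating feature from raw per-app signals.
data class AppRecord(val developer: String, val rating: Double)

fun developerAverageRating(apps: List<AppRecord>): Map<String, Double> =
    apps.groupBy { it.developer }
        .mapValues { (_, records) -> records.map { it.rating }.average() }

fun main() {
    val apps = listOf(
        AppRecord("dev-a", 4.5), AppRecord("dev-a", 3.9), AppRecord("dev-b", 1.2))
    // A future submission from "dev-b" starts with a weaker prior.
    println(developerAverageRating(apps))  // {dev-a=4.2, dev-b=1.2}
}
```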

By combining our different datasets and investing in feature engineering and feature selection, we improve the quality of the data that can be fed to various types of machine learning models.

Models

Building a good machine learning model is like building a skyscraper: quality materials are important, but a great design is also essential. Like the materials in a skyscraper, good datasets and features are important to machine learning, but a great algorithm is essential to identify PHA behaviors effectively and efficiently.

We train models to identify PHAs that belong to a specific category, such as SMS-fraud or phishing. Such categories are quite broad and contain a large number of samples given the number of PHA families that fit the definition. Alternatively, we also have models focusing on a much smaller scale, such as a family, which is composed of a group of apps that are part of the same PHA campaign and that share similar source code and behaviors. On the one hand, having a single model to tackle an entire PHA category may be attractive in terms of simplicity but precision may be an issue as the model will have to generalize the behaviors of a large number of PHAs believed to have something in common. On the other hand, developing multiple PHA models may require additional engineering efforts, but may result in better precision at the cost of reduced scope.

We use a variety of modeling techniques to build our machine learning approach, including both supervised and unsupervised ones.

One supervised technique we use is logistic regression, which has been widely adopted in the industry. These models have a simple structure and can be trained quickly. Logistic regression models can be analyzed to understand the importance of the different PHA and app features they are built with, allowing us to improve our feature engineering process. After a few cycles of training, evaluation, and improvement, we can launch the best models in production and monitor their performance.
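
Here is a from-scratch sketch of the technique; the two features and labels are invented for illustration, and production models are trained on far richer signals:

```kotlin
import kotlin.math.exp

// Minimal sketch of a logistic regression classifier trained by gradient descent.
class LogisticRegression(private val dim: Int, private val lr: Double = 0.1) {
    val weights = DoubleArray(dim)
    var bias = 0.0

    fun predict(x: DoubleArray): Double {
        var z = bias
        for (i in 0 until dim) z += weights[i] * x[i]
        return 1.0 / (1.0 + exp(-z))  // probability the app is a PHA
    }

    fun train(xs: List<DoubleArray>, ys: List<Double>, epochs: Int = 500) {
        repeat(epochs) {
            for ((x, y) in xs.zip(ys)) {
                val err = predict(x) - y  // gradient of the log-loss
                for (i in 0 until dim) weights[i] -= lr * err * x[i]
                bias -= lr * err
            }
        }
    }
}

fun main() {
    // Two toy features, e.g. "requests SMS permission" and "developer rating".
    val xs = listOf(doubleArrayOf(1.0, 0.2), doubleArrayOf(0.0, 0.9),
                    doubleArrayOf(1.0, 0.1), doubleArrayOf(0.0, 0.8))
    val ys = listOf(1.0, 0.0, 1.0, 0.0)
    val model = LogisticRegression(dim = 2)
    model.train(xs, ys)
    // Inspecting the learned weights shows which features drive the decision.
    println(model.weights.toList())
}
```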

For more complex cases, we employ deep learning. Compared to logistic regression, deep learning is good at capturing complicated interactions between different features and extracting hidden patterns. The millions of apps in Google Play provide a rich dataset, which is advantageous to deep learning.

In addition to our targeted feature engineering efforts, we experiment with many aspects of deep neural networks. For example, a deep neural network can have multiple layers and each layer has several neurons to process signals. We can experiment with the number of layers and neurons per layer to change model behaviors.
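
As a structural sketch of what varying layers and neurons per layer means (weights here are random; a real network is trained):

```kotlin
import kotlin.math.max
import kotlin.random.Random

// Sketch: a forward pass through a configurable stack of dense layers.
class DenseLayer(inputs: Int, val neurons: Int) {
    private val w = Array(neurons) { DoubleArray(inputs) { Random.nextDouble(-1.0, 1.0) } }
    private val b = DoubleArray(neurons)
    fun forward(x: DoubleArray) = DoubleArray(neurons) { n ->
        max(0.0, b[n] + w[n].indices.sumOf { i -> w[n][i] * x[i] })  // ReLU activation
    }
}

fun main() {
    // Three layers with 16, 16, and 4 neurons: both counts are tunable.
    val layers = listOf(DenseLayer(8, 16), DenseLayer(16, 16), DenseLayer(16, 4))
    var activations = DoubleArray(8) { Random.nextDouble() }  // toy app feature vector
    for (layer in layers) activations = layer.forward(activations)
    println(activations.toList())
}
```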

We also adopt unsupervised machine learning methods. Many PHAs use similar abuse techniques and tricks, so they look almost identical to each other. An unsupervised approach helps define clusters of apps that look or behave similarly, which allows us to mitigate and identify PHAs more effectively. We can automate the process of categorizing that type of app if we are confident in the model or can request help from a human expert to validate what the model found.
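
A toy k-means sketch captures the idea: apps with similar feature vectors land in the same cluster (real clustering uses far richer features at far larger scale):

```kotlin
// Sketch: group apps with similar feature vectors into k clusters.
fun kMeans(points: List<DoubleArray>, k: Int, iterations: Int = 20): List<Int> {
    var centroids = points.shuffled().take(k).map { it.copyOf() }
    var assignment = IntArray(points.size)
    repeat(iterations) {
        // Assign each app to its nearest centroid.
        assignment = IntArray(points.size) { p ->
            centroids.indices.minByOrNull { c -> dist(points[p], centroids[c]) }!!
        }
        // Move each centroid to the mean of its cluster.
        centroids = centroids.indices.map { c ->
            val members = points.filterIndexed { p, _ -> assignment[p] == c }
            if (members.isEmpty()) centroids[c]
            else DoubleArray(members[0].size) { d -> members.sumOf { it[d] } / members.size }
        }
    }
    return assignment.toList()
}

fun dist(a: DoubleArray, b: DoubleArray) =
    a.indices.sumOf { (a[it] - b[it]) * (a[it] - b[it]) }

fun main() {
    val apps = listOf(
        doubleArrayOf(0.9, 0.1), doubleArrayOf(0.8, 0.2),  // one behavior profile
        doubleArrayOf(0.1, 0.9), doubleArrayOf(0.2, 0.8))  // another profile
    println(kMeans(apps, k = 2))  // e.g. [0, 0, 1, 1]
}
```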

PHAs are constantly evolving, so our models need constant updating and monitoring. In production, models are fed with data from recent apps, which help them stay relevant. However, new abuse techniques and behaviors need to be continuously detected and fed into our machine learning models to be able to catch new PHAs and stay on top of recent trends. This is a continuous cycle of model creation and updating that also requires tuning to ensure that the precision and coverage of the system as a whole matches our detection goals.

Looking forward

As part of Google's AI-first strategy, our work leverages many machine learning resources across the company, such as tools and infrastructures developed by Google Brain and Google Research. In 2017, our machine learning models successfully detected 60.3% of PHAs identified by Google Play Protect, covering over 2 billion Android devices. We continue to research and invest in machine learning to scale and simplify the detection of PHAs in the Android ecosystem.

Acknowledgements

This work was developed in joint collaboration with Google Play Protect, Safe Browsing and Play Abuse teams with contributions from Andrew Ahn, Hrishikesh Aradhye, Daniel Bali, Hongji Bao, Yajie Hu, Arthur Kaiser, Elena Kovakina, Salvador Mandujano, Melinda Miller, Rahul Mishra, Damien Octeau, Sebastian Porst, Chuangang Ren, Monirul Sharif, Sri Somanchi, Sai Deep Tetali, Zhikun Wang, and Mo Yu.

Android Protected Confirmation: Taking transaction security to the next level



[Cross-posted from the Android Developers Blog]

In Android Pie, we introduced Android Protected Confirmation, the first major mobile OS API that leverages a hardware protected user interface (Trusted UI) to perform critical transactions completely outside the main mobile operating system. This Trusted UI protects the choices you make from fraudulent apps or a compromised operating system. When an app invokes Protected Confirmation, control is passed to the Trusted UI, where transaction data is displayed and user confirmation of that data's correctness is obtained.
Once confirmed, your intention is cryptographically authenticated and unforgeable when conveyed to the relying party, for example, your bank. Protected Confirmation increases the bank's confidence that it acts on your behalf, providing a higher level of protection for the transaction.
Protected Confirmation provides additional security relative to other forms of secondary authentication, such as a One Time Password or Transaction Authentication Number. These mechanisms can be frustrating for mobile users and also fail to protect against a compromised device that can corrupt transaction data or intercept one-time confirmation text messages.
Once the user approves a transaction, Protected Confirmation digitally signs the confirmation message. Because the signing key never leaves the Trusted UI's hardware sandbox, neither app malware nor a compromised operating system can fool the user into authorizing anything. Protected Confirmation signing keys are created using Android's standard AndroidKeyStore API. Before it can start using Android Protected Confirmation for end-to-end secure transactions, the app must enroll the public KeyStore key and its Keystore Attestation certificate with the remote relying party. The attestation certificate certifies that the key can only be used to sign Protected Confirmations.
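
A sketch of that key-generation and enrollment step; the alias, challenge, and enrollment transport are placeholders:

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator
import java.security.KeyStore

// Sketch: create a signing key that can only sign user-confirmed data, then
// fetch its attestation chain for enrollment with the relying party.
fun generateConfirmationKey(alias: String, serverChallenge: ByteArray) {
    val spec = KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_SIGN)
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setUserConfirmationRequired(true)         // bind the key to Protected Confirmation
        .setAttestationChallenge(serverChallenge)  // proves freshness to the relying party
        .build()

    KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
        .apply { initialize(spec) }
        .generateKeyPair()

    val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    val chain = keyStore.getCertificateChain(alias)
    // Enrollment: send this certificate chain to the relying party, which
    // verifies that the key requires user confirmation before use.
}
```
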
There are many possible use cases for Android Protected Confirmation. At Google I/O 2018, the What's new in Android security session showcased partners planning to leverage Android Protected Confirmation in a variety of ways, including Royal Bank of Canada for person-to-person money transfers; Duo Security, Nok Nok Labs, and ProxToMe for user authentication; and Insulet Corporation and Bigfoot Biomedical for medical device control.
Insulet, a leading global manufacturer of tubeless patch insulin pumps, has demonstrated how it can modify its FDA-cleared Omnipod DASH™ Insulin Management System in a test environment to leverage Protected Confirmation to confirm the amount of insulin to be injected. This technology holds the promise of improved quality of life and reduced cost by enabling a person with diabetes to use their convenient, familiar, and secure smartphone for control, rather than having to rely on a secondary, obtrusive, and expensive remote control device. (Note: The Omnipod DASH™ System is not cleared for use with the Pixel 3 mobile device or Protected Confirmation.)

This work is fulfilling an important need in the industry. Since smartphones do not fit the mold of an FDA-approved medical device, we've been working with the FDA as part of DTMoSt, an industry-wide consortium, to define a standard for phones to safely control medical devices, such as insulin pumps. A technology like Protected Confirmation plays an important role in gaining higher assurance of user intent and medical safety.
To integrate Protected Confirmation into your app, check out the Android Protected Confirmation training article. Android Protected Confirmation is an optional feature in Android Pie. Because it has low-level hardware dependencies, Protected Confirmation may not be supported by all devices running Android Pie. Google Pixel 3 and 3XL devices are the first to support Protected Confirmation, and we are working closely with other manufacturers to adopt this market-leading security innovation on more devices.

Trustworthy Chrome Extensions, by Default



[Cross-posted from the Chromium blog]

Incredibly, it’s been nearly a decade since we launched the Chrome extensions system. Thanks to the hard work and innovation of our developer community, there are now more than 180,000 extensions in the Chrome Web Store, and nearly half of Chrome desktop users actively use extensions to customize Chrome and their experience on the web.

The extensions team's dual mission is to help users tailor Chrome's functionality to their individual needs and interests, and to empower developers to build rich and useful extensions. But, first and foremost, it's crucial that users be able to trust that the extensions they install are safe, privacy-preserving, and performant. Users should always have full transparency about the scope of their extensions' capabilities and data access.

We’ve recently taken a number of steps toward improved extension security with the launch of out-of-process iframes, the removal of inline installation, and significant advancements in our ability to detect and block malicious extensions using machine learning. Looking ahead, there are more fundamental changes needed so that all Chrome extensions are trustworthy by default.

Today we’re announcing some upcoming changes and plans for the future:

User controls for host permissions

Beginning in Chrome 70, users will have the choice to restrict extension host access to a custom list of sites, or to configure extensions to require a click to gain access to the current page.


While host permissions have enabled thousands of powerful and creative extension use cases, they have also led to a broad range of misuse - both malicious and unintentional - because they allow extensions to automatically read and change data on websites. Our aim is to improve user transparency and control over when extensions are able to access site data. In subsequent milestones, we’ll continue to optimize the user experience toward this goal while improving usability. If your extension requests host permissions, we encourage you to review our transition guide and begin testing as soon as possible.

Changes to the extensions review process

Going forward, extensions that request powerful permissions will be subject to additional compliance review. We’re also looking very closely at extensions that use remotely hosted code, with ongoing monitoring. Your extension’s permissions should be as narrowly-scoped as possible, and all your code should be included directly in the extension package, to minimize review time.
New code reliability requirements

Starting today, Chrome Web Store will no longer allow extensions with obfuscated code. This includes code within the extension package as well as any external code or resource fetched from the web. This policy applies immediately to all new extension submissions. Existing extensions with obfuscated code can continue to submit updates over the next 90 days, but will be removed from the Chrome Web Store in early January if not compliant.

Today over 70% of malicious and policy-violating extensions that we block from Chrome Web Store contain obfuscated code. At the same time, because obfuscation is mainly used to conceal code functionality, it adds a great deal of complexity to our review process. This is no longer acceptable given the aforementioned review process changes.

Additionally, since JavaScript code is always running locally on the user's machine, obfuscation is insufficient to protect proprietary code from a truly motivated reverse engineer. Obfuscation techniques also come with hefty performance costs such as slower execution and increased file and memory footprints.

Ordinary minification, on the other hand, typically speeds up code execution as it reduces code size, and is much more straightforward to review. Thus, minification will still be allowed, including the following techniques:

  • Removal of whitespace, newlines, code comments, and block delimiters
  • Shortening of variable and function names
  • Collapsing multiple JavaScript files into one
If you have an extension in the store with obfuscated code, please review our updated content policies as well as our recommended minification techniques for Google Developers, and submit a new compliant version before January 1st, 2019.


Required 2-step verification

In 2019, enrollment in 2-Step Verification will be required for Chrome Web Store developer accounts. If your extension becomes popular, it can attract attackers who want to steal it by hijacking your account, and 2-Step Verification adds an extra layer of security by requiring a second authentication step from your phone or a physical security key. We strongly recommend that you enroll as soon as possible.

For even stronger account security, consider the Advanced Protection Program. Advanced protection offers the same level of security that Google relies on for its own employees, requiring a physical security key to provide the strongest defense against phishing attacks.


Looking ahead: Manifest v3

In 2019 we will introduce the next extensions manifest version. Manifest v3 will entail additional platform changes that aim to create stronger security, privacy, and performance guarantees. We want to help all developers fall into the pit of success; writing a secure and performant extension in Manifest v3 should be easy, while writing an insecure or non-performant extension should be difficult.

Some key goals of manifest v3 include:
  • More narrowly-scoped and declarative APIs, to decrease the need for overly-broad access and enable more performant implementation by the browser, while preserving important functionality
  • Additional, easier mechanisms for users to control the permissions granted to extensions
  • Modernizing to align with new web capabilities, such as supporting Service Workers as a new type of background process
We intend to make the transition to manifest v3 as smooth as possible and we’re thinking carefully about the rollout plan. We’ll be in touch soon with more specific details.

We recognize that some of the changes announced today may require effort in the future, depending on your extension. But we believe the collective result will be worth that effort for all users, developers, and for the long term health of the Chrome extensions ecosystem. We’re committed to working with you to transition through these changes and are very interested in your feedback. If you have questions or comments, please get in touch with us on the Chromium extensions forum.

Leveraging AI to protect our users and the web



Recent advances in AI are transforming how we combat fraud and abuse and implement new security protections. These advances are critical to meeting our users’ expectations and keeping increasingly sophisticated attackers at bay, but they come with brand new challenges as well.

This week at RSA, we explored the intersection between AI, anti-abuse, and security in two talks.

Our first talk provided a concise overview of how we apply AI to fraud and abuse problems. The talk started by detailing the fundamental reasons why AI is key to building defenses that keep up with user expectations and combat increasingly sophisticated attacks. It then delved into the top 10 anti-abuse specific challenges encountered while applying AI to abuse fighting and how to overcome them. Check out the infographic at the end of the post for a quick overview of the challenges we covered during the talk.

Our second talk looked at attacks on ML models themselves and the ongoing effort to develop new defenses.

It covered attackers’ attempts to recover private training data, to introduce examples into the training set of a machine learning model to cause it to learn incorrect behaviors, to modify the input that a machine learning model receives at classification time to cause it to make a mistake, and more.

Our talk also looked at various defense solutions, including differential privacy, which provides a rigorous theoretical framework for preventing attackers from recovering private training data.

Hopefully you were able to join us at RSA! But if not, here are the re-recording and slides of our first talk on applying AI to abuse-prevention, along with the slides from our second talk about protecting ML models.

Today’s CPU vulnerability: what you need to know



[Google Cloud, G Suite, and Chrome customers can visit the Google Cloud blog for details about those products]
[For more technical details about this issue, please read Project Zero's blog post]

Last year, Google’s Project Zero team discovered serious security flaws caused by “speculative execution,” a technique used by most modern processors (CPUs) to optimize performance.

The Project Zero researcher, Jann Horn, demonstrated that malicious actors could take advantage of speculative execution to read system memory that should have been inaccessible. For example, an unauthorized party may read sensitive information in the system’s memory such as passwords, encryption keys, or sensitive information open in applications. Testing also showed that an attack running on one virtual machine was able to access the physical memory of the host machine, and through that, gain read-access to the memory of a different virtual machine on the same host.

These vulnerabilities affect many CPUs, including those from AMD, ARM, and Intel, as well as the devices and operating systems running on them.

As soon as we learned of this new class of attack, our security and product development teams mobilized to defend Google’s systems and our users’ data. We have updated our systems and affected products to protect against this new type of attack. We also collaborated with hardware and software manufacturers across the industry to help protect their users and the broader web. These efforts have included collaborative analysis and the development of novel mitigations.

We are posting before an originally coordinated disclosure date of January 9, 2018 because of existing public reports and growing speculation in the press and security research community about the issue, which raises the risk of exploitation. The full Project Zero report is forthcoming (update: this has been published; see above).

Mitigation status for Google products

A list of affected Google products and their current status of mitigation against this attack appears here. As this is a new class of attack, our patch status refers to our mitigation for currently known vectors for exploiting the flaw. The issue has been mitigated in many products (or wasn’t a vulnerability in the first place). In some instances, users and customers may need to take additional steps to ensure they’re using a protected version of a product. This list and a product’s status may change as new developments warrant. In the case of new developments, we will post updates to this blog.

  • All Google products not explicitly listed below require no user or customer action.
  • Android
    • Devices with the latest security update are protected. Furthermore, we are unaware of any successful reproduction of this vulnerability that would allow unauthorized information disclosure on ARM-based Android devices.
    • Supported Nexus and Pixel devices with the latest security update are protected.
    • Further information is available here.
  • Google Apps / G Suite (Gmail, Calendar, Drive, Sites, etc.):
    • No additional user or customer action needed.
  • Google Chrome
    • Some user or customer action needed. More information here.
  • Google Chrome OS (e.g., Chromebooks):
    • Some additional user or customer action needed. More information here.
  • Google Cloud Platform
    • Google App Engine: No additional customer action needed.
    • Google Compute Engine: Some additional customer action needed. More information here.
    • Google Kubernetes Engine: Some additional customer action needed. More information here.
    • Google Cloud Dataflow: Some additional customer action needed. More information here.
    • Google Cloud Dataproc: Some additional customer action needed. More information here.
    • All other Google Cloud products and services: No additional action needed.
  • Google Home / Chromecast:
    • No additional user action needed.
  • Google Wifi/OnHub:
    • No additional user action needed.
Multiple methods of attack

To take advantage of this vulnerability, an attacker first must be able to run malicious code on the targeted system.

The Project Zero researchers discovered three methods (variants) of attack, which are effective under different conditions. All three attack variants can allow a process with normal user privileges to perform unauthorized reads of memory data, which may contain sensitive information such as passwords, cryptographic key material, etc.

In order to improve performance, many CPUs may choose to speculatively execute instructions based on assumptions that are considered likely to be true. During speculative execution, the processor is verifying these assumptions; if they are valid, then the execution continues. If they are invalid, then the execution is unwound, and the correct execution path can be started based on the actual conditions. It is possible for this speculative execution to have side effects which are not restored when the CPU state is unwound, and can lead to information disclosure.
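
The well-known bounds-check-bypass pattern (variant 1) illustrates the mechanics. The Kotlin below is purely illustrative; the JVM's bounds checks make it architecturally safe, and the point is only the shape of code an affected CPU may execute speculatively in equivalent native form:

```kotlin
// Illustrative only: the shape of a bounds-check-bypass (variant 1) gadget.
// Nothing here leaks on the JVM; the leak occurs at the microarchitectural
// level on affected CPUs running equivalent native code.
val array1 = ByteArray(16)          // attacker can influence the index x
val array2 = ByteArray(256 * 4096)  // probe array used as a cache side channel

fun victim(x: Int) {
    if (x < array1.size) {
        // If the branch predictor assumes "taken" for an out-of-bounds x, the
        // CPU may speculatively read past array1 and touch a cache line of
        // array2 indexed by the byte it fetched. The registers are rolled back
        // when the misprediction is detected, but the cache line stays warm,
        // and timing accesses to array2 can reveal the byte.
        val leaked = array2[array1[x].toInt() * 4096]
    }
}
```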

There is no single fix for all three attack variants; each requires protection independently. Many vendors have patches available for one or more of these attacks.

We will continue our work to mitigate these vulnerabilities and will update both our product support page and this blog post as we release further fixes. More broadly, we appreciate the support and involvement of all the partners and Google engineers who worked tirelessly over the last few months to make our users and customers safe.

Blog post update log

  • Added link to Project Zero blog
  • Added link to Google Cloud blog

Broadening HSTS to secure more of the Web


The security of the Web is of the utmost importance to Google. One of the most powerful tools in the Web security toolbox is ensuring that connections to websites are encrypted using HTTPS, which prevents Web traffic from being intercepted, altered, or misdirected in transit. We have taken many actions to make the use of HTTPS more widespread, both within Google and on the larger Internet.

We began in 2010 by defaulting to HTTPS for Gmail and starting the transition to encrypted search by default. In 2014, we started encouraging other websites to use HTTPS by giving secure sites a ranking boost in Google Search. In 2016, we became a platinum sponsor of Let’s Encrypt, a service that provides simple and free SSL certificates. Earlier this year we announced that Chrome will start displaying warnings on insecure sites, and we recently introduced fully managed SSL certificates in App Engine. And today we’re proud to announce that we are beginning to use another tool in our toolbox, the HTTP Strict Transport Security (HSTS) preload list, in a new and more impactful way.

The HSTS preload list is built into all major browsers (Chrome, Firefox, Safari, Internet Explorer/Edge, and Opera). It consists of a list of hostnames for which browsers automatically enforce HTTPS-secured connections. For example, gmail.com is on the list, which means that the aforementioned browsers will never make insecure connections to Gmail; if the user types http://gmail.com, the browser first changes it to https://gmail.com before sending the request. This provides greater security because the browser never loads an http-to-https redirect page, which could be intercepted.
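
A toy sketch of that browser-side rewrite; the two-entry preload list is illustrative, and the real list ships with the browser and also covers subdomains and entire TLDs, as described below:

```kotlin
// Sketch: upgrade http:// URLs to https:// before any request is sent,
// mimicking how browsers consult the built-in HSTS preload list.
val hstsPreload = setOf("gmail.com", "google.com")  // toy list

fun upgradeIfPreloaded(url: String): String {
    if (!url.startsWith("http://")) return url
    val host = url.removePrefix("http://").substringBefore('/')
    return if (host in hstsPreload)
        "https://" + url.removePrefix("http://")  // no insecure redirect to intercept
    else url
}

fun main() {
    println(upgradeIfPreloaded("http://gmail.com/mail"))  // https://gmail.com/mail
}
```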

The HSTS preload list can contain individual domains or subdomains and even top-level domains (TLDs), which are added through the HSTS website. The TLD is the last part of the domain name, e.g., .com, .net, or .org. Google operates 45 TLDs, including .google, .how, and .soy. In 2015 we created the first secure TLD when we added .google to the HSTS preload list, and we are now rolling out HSTS for a larger number of our TLDs, starting with .foo and .dev.

The use of TLD-level HSTS allows such namespaces to be secure by default. Registrants receive guaranteed protection for themselves and their users simply by choosing a secure TLD for their website and configuring an SSL certificate, without having to add individual domains or subdomains to the HSTS preload list. Moreover, since it typically takes months between adding a domain name to the list and browser upgrades reaching a majority of users, using an already-secured TLD provides immediate protection rather than eventual protection. Adding an entire TLD to the HSTS preload list is also more efficient, as it secures all domains under that TLD without the overhead of having to include all those domains individually.

We hope to make some of these secure TLDs available for registration soon, and would like to see TLD-wide HSTS become the security standard for new TLDs.