
A look at Chrome’s security review culture



Security reviewers must develop the confidence and skills to make fast, difficult decisions. A simplistic piece of advice to reviewers is “just be confident” but in reality that takes practice and experience. Confidence comes with time, and people are there to support each other as we learn. This post shares advice we give to people doing security reviews for Chrome.

Security Review in Chrome

Chrome has a lightweight launch process. Teams write requirements and design documents outlining why the feature should be built, how the feature will benefit users, and how the feature will be built. Developers write code behind a feature flag and must pass a Launch Review before turning it on. Teams think about security early on and coordinate with the security team. Teams are responsible for the safety of their features and for ensuring that the security team is able to say ‘yes’ in its security review.

Security review focuses on the design of a proposed feature, not its implementation details, and is distinct from code review. Chrome changes need approval from engineers familiar with the code being changed, but not necessarily from security experts. It is not practical for security engineers to scrutinize every change. Instead we focus on the feature’s architecture, and how it might affect people using Chrome.

Reviewers function best in an open and supportive engineering culture. Security review is not an easy task – it applies security engineering insights in a social context that could become adversarial and fractious. Google, and Chrome, embody a security-centric engineering culture, where respectful disagreement is valued, where we learn from mistakes, where decisions can be revisited, and where developers see the security team as a partner that helps them ship features safely. Within the security team we support each other by encouraging questioning & learning, and provide mentorship and coaching to help reviewers enhance their reviewing skills.

Learning security review

Start by shadowing

Start with some help. As a new reviewer, you may not feel you’re 100% ready — don’t let that put you off. The best way to learn is to observe and see what’s involved before easing into doing reviews on your own. Start by shadowing to get a feel for the process. Ask the person you are shadowing how they plan to approach the review, then look at the materials yourself. Concentrate on learning how to review rather than on the details of the thing you are reviewing. Don’t get too involved, but observe how the reviewer does things and ask them why. Next time, try to co-review something - ask the feature team some questions and talk through your thoughts with the other reviewer. Let them make the final approval decision. Do this a few times and you’ll be ready to be the main reviewer, and remember that you can always reach out for help and advice.

Read enough to make a decision

Read a lot, but know when to stop. Understand what the feature is doing, what’s new, and what’s built on existing, approved mechanisms. Focus on the new things. If you need to educate yourself, skim older docs or code for context. It can help to look at related reviews for repeated issues and solutions. It is tempting to try to understand everything, and at first you’ll dig deeper than you need to. You’ll get better at knowing when to stop after a few reviews. Treat existing, approved features as building blocks that you don’t need to fully understand, but that might be useful to skim as background.

Launch review is a gate. It’s ok to ask feature teams to have the materials ready. Try to use your time wisely — if a design doc is very brief and lacks any security discussion you can quickly say "please add a security considerations section" and stop thinking about it until the team comes back with more complete documentation. If the design document doesn’t fully explain something that is a sign the document needs to be expanded — if something isn’t clear to you or isn’t covered then start asking questions. Remember that you’re not looking for every possible bug, but ensuring that major concerns are addressed upfront.

As you’re reading, read actively and write down observations and questions as you go. Cross them off if you find an answer later. For your first reviews this will take a long time. Don’t worry too much about that - you won't know yet which details matter. Over time you’ll learn where to focus your attention. This is also a good time to pair up with a seasoned reviewer. Schedule a chat to go over your thoughts before you share them with the feature team. This will help you understand the process people go through and allows a safe evaluation of your thoughts before you share them more widely - this will help you build confidence. Next, clarify any questions with the feature team. Try to write a sentence or two describing the feature - if you can’t do this it indicates you need more information.

Ask questions to improve documentation

You have permission to be ignorant! Use it! Ask questions until you understand areas of uncertainty. Asking questions provides real value, and often triggers the team to realize that something should be done differently. In particular — if it’s confusing to you it’s probably badly explained or badly thought out, or shows that an assumption or tacit knowledge is missing from a design document. If you’re worried about looking ignorant, make use of the more experienced reviewers around you — ask on the chat or book some time to talk over your thoughts one-on-one. This should help you formulate your question so that it’s useful to the feature team. Try to write out what you think is happening, and let the feature team tell you if you’re close or not.

The chances that you’ll understand everything immediately are very low, and that’s ok. In meetings about a feature a favorite question of mine is ‘what are you secretly worried about?’ followed by an awkward pause. People will absolutely tell you things! Sometimes there's a domain-knowledge mismatch when you don't have the right words to ask the question, so you can't get a useful answer. Always ask for a diagram that shows which process or component different parts of a feature are happening in — this helps you home in on the critical interfaces, and will illustrate the design more clearly than screenfuls of text or code.

Center people in your security analysis

We’re here to help people. Try to center people in your thoughts and arguments. How will people use the feature? Who are they? Who might harm them and how? Are there particular groups of people that might be more vulnerable than others, and what can we do to protect them? How does the feature make people feel? How will their experience of the application change? How will their lives be affected? Think about how a bad actor might abuse the feature. What implicit assumptions is the implementation making about the people using it? What or who are we asking people to trust? What if someone modifies traffic, changes a message, passes in bad data, or tricks someone into using the feature when they don't want to? This is a great thing to discuss when you’re pairing with another reviewer — be sure to ask them what they like to think about.

Think about what can go wrong

Take time to think, bringing an adversarial mindset and a different perspective. In some ways the purpose of a security review is to stop and think before unleashing new ideas on the world. Make focus time in your calendar or sit somewhere unusual to give yourself space to think. A skeptical, enquiring mindset is more useful than deep knowledge. You’re there to ask the questions the feature team won’t have thought about. They will naturally focus on what they need to do to make the feature work. Security review is about thinking about what else might happen when it’s working, or what might happen if someone deliberately tries to do things the designers didn’t expect.

Trust your spidey-senses, even when you can't quite put your finger on what might go wrong but something feels off. Sometimes a feature is just plain complicated, or in a risky area of code, or feels like it's been rushed. It can be difficult to articulate these concerns to a team without rubbing people the wrong way. Use people you trust to bounce your thoughts off and home in on what you are worried about. Discuss with other reviewers whether and how these risks can be communicated. Your spidey-senses are probably correct, and they're as important as any single concrete solvable threat you've spotted.

Approve and keep notes

Pause then approve. Once you’ve understood what’s happening and iterated through any concerns you’ve raised you’ll be ready to approve the feature for launch. It’s worth taking a short pause here to let your brain do its thinking in the background before you press the button. Try to concisely describe the feature — if you can’t then go back and ask more questions! It’s important to get questions and concerns to teams quickly but final approval can wait for some digestion time. If you cannot come up with a clear decision then reach out to other reviewers to discuss what to do next. Let the feature team know you’re working on it and when you’ll get back to them. After a pause, if nothing else occurs to you then click Approved and write a short paragraph saying why. Note any follow-on work the team has promised to complete before launching. This is also a great time to leave yourself a short note for your performance review — it’s easy to lose track of what you reviewed and the changes your input led to — having a rolling document will both help you spot patterns, and help you tell the story of the work you’ve done.

Expect to make mistakes, and learn from them

Nothing we do in software is forever, and many mistakes will be found and fixed later. You will make mistakes. Mainly small ones that won’t really matter. Security is about evaluating new risks in the context of the value provided to people using a product. This tradeoff extends into the design and launch process, of which you are just a small part. You only have so much time, and it’s inevitable that you might sometimes see things that aren’t there, or not notice things that are. Security reviewers are one element in a layered defense, and the consequences of a mistake will be contained by things you did spot. It’s good to try to find specific problems, but more important to locate and apply general security principles like sandboxing and the rule of two. Sometimes you might think something is fine, but later realize that it isn’t. This often happens when we learn something new about a feature, or discover that an assumption was invalid. This is where careful communication is important. Feature teams will be happy to know about any problems you uncover, and will find time to fix them later if possible. Remember that Looks Good To Me doesn’t mean Looks Perfect To Me.

How to be better

Experienced reviewers can always improve, and apply their insights widely within their organization.

It’s not always easy

It takes time to learn security engineering and build a working knowledge of the architecture of a complex product. Reviewing is different from the normal development journey - when an engineer works on a feature they start in an ambiguous situation and gradually learn or invent everything needed to deeply understand and solve the problem. To be effective as a security reviewer we have to embrace ambiguity and ignorance, and learn how to swiftly learn just enough to have a useful opinion, before starting again for our next review. This may seem daunting - and it is - but over time reviewers get better at knowing where to focus their efforts.

Security reviewing can feel invisible. Security is not an all-or-nothing quality of a feature. Rather it forms one concern that a product must balance while still shipping, adding new features, and appealing to people that use it. Security is an important concern (for Chrome it’s both a critical engineering pillar, and something people say they value when choosing Chrome) but it’s not the only factor. It’s our job to identify and articulate security risks, and advocate for better approaches, but sometimes another concern dominates. If deviations from our advice are well justified we shouldn’t feel ignored - we did our bit.

Your peers are there to help you. If you need support, ask questions on the reviewing team’s chat, or schedule thirty minutes or a coffee with another reviewer to discuss a particular review.

Help teams secure their features

Remember that developers know what they are doing, but might not be thinking about the things you are thinking about. You might not be confident in what you know about their feature, but imagine how the feature team feels coming to the mysterious halls of the security people! Often we’ll ask a team to implement one or more of our layered defenses before they get to launch their feature. This might be the first time they’ve had to write a fuzzer or harden a library. You’ll get requests for examples or help with implementation. Find an expert or spend time doing these things yourself. The security process should be as smooth a speed bump as possible. Any familiarity you have with these techniques will improve our interactions and maintain our reputation as a helpful team. If we ask someone to do something but can’t help them make progress we will be a source of frustration. If we help people they will be likely to approach us early on next time they have a security question.

Training is available

Develop mind-tricks and frameworks for having difficult conversations. Sometimes (especially when you get involved early in a project’s design phase) you will need to disagree with a feature’s design, or nudge a team in a more secure direction. While a supportive technical culture should make it safe to surface and resolve technical differences, it takes energy and patience to work through these conflicts. It’s harder still to say ‘no’, or ask a team to commit to more work than they were expecting. These are skills you can practice and become more comfortable doing. Look for courses you can take. Some suggestions include “having difficult conversations”, “mentoring”, “coaching”, “persuasive writing”, and “threat modeling”.

Scale your impact

Find ways to scale your impact. Security decisions are made based on judgment and mechanisms but judgment doesn’t scale! To maintain a sustainable security workload for ourselves, and empower feature teams to make their own decisions, we need to make judgment as small a part of the puzzle as possible.

Encourage good patterns. If a design addresses a security concern, say so on the launch bug or a mailing list. This helps for later reviews, and provides useful feedback to the design team. Help newer reviewers see good or bad patterns, and the rhythm of reviews by telling a few stories of what went well and what got missed in the past. Establish architectural patterns that contain the consequences of a problem. Make these easy to follow while preventing anti-patterns - ideally a bad security idea shouldn’t even compile.

Write guidance or policies. Distill decisions into FAQs, threat models, principles or rules. Get involved with the people building foundational pieces of your product, and get them to own their security guidance so that it gets applied as part of that team’s advice to other teams. A checklist of things to look for in a particular area is a great starting point for the team making the next feature in that space, and for the reviewer that signs off at the end.

Level-up your developers. We can raise the level of expertise across the wider developer community, and reduce the burden of reviewing for security teams. Through repeated engagements with the same team you can start to set expectations - each time, drop some hints about what could be done better next time. Encourage system diagrams, risk assessments, threat modeling or sandboxing. Soon teams will start with these, and reviews will be much smoother.

Anoint security champions. In larger feature teams encourage a couple of security champions within the group to serve as initial points of contact and a first line of review. Support these people! Offer to talk them through their design docs and help them think about security concerns. They will grow into local experts who know when to call on security specialists. They can write security principles for their area, leading to secure features and smooth launch reviews.

Summary

Do a few reviews to develop confidence in your decisions. You won't understand all the details of a feature. You will sometimes say yes to the wrong things or get teams to do unnecessary work. You'll also ask insightful questions and improve designs.

Remember that security reviewing is difficult. Remember that people are there to help you. Remember that every good decision you encourage keeps people safe from harm, and increases their trust in you and your product. As you mature, maintain a supportive culture where reviewers can grow, and where you help other teams develop new features with safety in mind.

An important step towards secure and interoperable messaging

Most modern consumer messaging platforms (including Google Messages) support end-to-end encryption, but users today are limited to communicating with contacts who use the same platform. This is why Google is strongly supportive of regulatory efforts that require interoperability for large end-to-end encrypted messaging platforms.

For interoperability to succeed in practice, however, regulations must be combined with open, industry-vetted standards, particularly in the areas of privacy, security, and end-to-end encryption. Without robust standardization, the result will be a spaghetti of ad hoc middleware that could lower security standards to cater for the lowest common denominator and raise implementation costs, particularly for smaller providers. Lack of standardization would also make advanced features such as end-to-end encrypted group messaging impossible in practice – group messages would have to be encrypted and delivered multiple times to cater for every different protocol.

With the recent publication of the IETF’s Message Layer Security (MLS) specification RFC 9420, messaging users can look forward to this reality. For the first time, MLS enables practical interoperability across services and platforms, scaling to groups of thousands of multi-device users. It is also flexible enough to allow providers to address emerging threats to user privacy and security, such as quantum computing.

By ensuring a uniformly high security and privacy bar that users can trust, MLS will unleash a huge field of new opportunities for the users and developers of interoperable messaging services that adopt it. This is why we intend to build MLS into Google Messages and support its wide deployment across the industry by open sourcing our implementation in the Android codebase.

Protect and manage browser extensions using Chrome Browser Cloud Management

Browser extensions, while offering valuable functionalities, can seem risky to organizations. One major concern is the potential for security vulnerabilities. Poorly designed or malicious extensions could compromise data integrity and expose sensitive information to unauthorized access. Moreover, certain extensions may introduce performance issues or conflicts with other software, leading to system instability. Therefore, many organizations find it crucial to have visibility into the usage of extensions and the ability to control them. Chrome browser offers these extension management capabilities and reporting via Chrome Browser Cloud Management. In this blog post, we will walk you through how to utilize these features to keep your data and users safe.

Visibility into Extensions being used in your environment

Having visibility into what and how extensions are being used enables IT and security teams to assess potential security implications, ensure compliance with organizational policies, and mitigate potential risks. There are three ways you can get critical information about extensions in your organization:

1. App and extension usage reporting

Organizations can gain visibility into every Chrome extension that is installed across an enterprise’s fleet in Chrome App and Extension Usage Reporting.

2. Extension Risk Assessment

CRXcavator and Spin.AI Risk Assessment are tools used to assess the risks of browser extensions and minimize the risks associated with them. We are making extension scores via these two platforms available directly in Chrome Browser Cloud Management, so security teams can have an at-a-glance view of risk scores of the extensions being used in their browser environment.

3. Extension event reporting

Extension install events are now available to alert IT and security teams of new extension usage in their environment.

Organizations can send critical browser security events to their chosen solution providers, such as Splunk, Crowdstrike, Palo Alto Networks, and Google solutions, including Chronicle, Cloud PubSub, and Google Workspace, for further analysis. You can also view the event logs directly in Chrome Browser Cloud Management.

Extension controls for added security

By having control over extensions, organizations can maintain a secure and stable browsing environment, safeguard sensitive data, and protect their overall digital infrastructure. There are several ways IT and security teams can set and enforce controls over extensions:

1. Extension permission

Organizations can control whether users can install apps or extensions based on the information an extension can access—also known as permissions. For example, you might want to prevent users from installing any extension that wants permission to see a device location.

2. Extension workflow

Extension workflow allows end users to request extensions for review. From there, IT can either approve or deny them. This is a great approach for teams that want to control the extensions in their environment, but allow end users to have some say over the extensions they use.

We recently added a prompt for business justification for documenting why users are requesting the extension. This gives admins more information as to why the extension may or may not be helpful for their workforce, and can help speed approvals for business users.

3. Force install/block extensions

In the requests tab, you can select a requested extension, approve the request, and force install it, so that the extension shows up automatically on all the managed browsers in that specific Organizational Unit or group.

Admins also have the ability to block extensions in their browser environment.

4. Extension version pinning

Admins have the ability to pin extensions to specific versions. As a result, the pinned extensions will not be upgraded unless the admin explicitly changes the version. This gives organizations the ability to evaluate new extension versions before they decide to allow them in their environment.
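
The permission-based blocking, force-install, and block controls above map onto Chrome’s ExtensionSettings enterprise policy. The sketch below is a minimal, hypothetical illustration of that policy’s JSON shape (the extension IDs are placeholders, and version pinning is configured separately in the Admin console rather than through this policy); consult the current policy documentation for the full set of supported fields.

    import json

    # Minimal sketch of a Chrome ExtensionSettings policy value.
    # The 32-character extension IDs below are placeholders, not real extensions.
    extension_settings = {
        # Defaults for every extension not listed explicitly.
        "*": {
            "installation_mode": "allowed",
            # Block any extension that requests the geolocation permission.
            "blocked_permissions": ["geolocation"],
        },
        # Force-install a vetted extension across the Organizational Unit.
        "aaaabbbbccccddddeeeeffffgggghhhh": {
            "installation_mode": "force_installed",
            "update_url": "https://clients2.google.com/service/update2/crx",
        },
        # Explicitly block a known-bad extension.
        "bbbbccccddddeeeeffffgggghhhhiiii": {
            "installation_mode": "blocked",
        },
    }

    # The JSON string an admin would supply as the ExtensionSettings policy value.
    print(json.dumps(extension_settings, indent=2))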

Extensions play a huge role in user productivity for many organizations, and Chrome is committed to helping enterprises securely take advantage of them. If you haven’t already, you can sign up for Chrome Browser Cloud Management and start managing the extensions being used in your organization at no additional cost.

If you’re already using Chrome Browser Cloud Management, but want to see how you can get the most out of it, check out our newly released Beyond Browsing demo series where Chrome experts share demos on some of our frequently asked management questions.

Announcing the Chrome Browser Full Chain Exploit Bonus

For 13 years, a key pillar of the Chrome Security ecosystem has been encouraging security researchers to find security vulnerabilities in Chrome browser and report them to us through the Chrome Vulnerability Rewards Program.

Starting today and until 1 December 2023, the first security bug report we receive with a functional full chain exploit, resulting in a Chrome sandbox escape, is eligible for triple the full reward amount. Your full chain exploit could result in a reward up to $180,000 (potentially more with other bonuses).

Any subsequent full chains submitted during this time are eligible for double the full reward amount!

We have historically put a premium on reports with exploits – “high quality reports with a functional exploit” is the highest tier of reward amounts in our Vulnerability Rewards Program. Over the years, the threat model of Chrome browser has evolved as features have matured and new features and new mitigations, such as MiraclePtr, have been introduced. Given these evolutions, we’re always interested in explorations of new and novel approaches to fully exploit Chrome browser, and we want to provide opportunities to better incentivize this type of research. These exploits provide us valuable insight into the potential attack vectors for exploiting Chrome, and allow us to identify strategies for better hardening specific Chrome features and ideas for future broad-scale mitigation strategies.

The full details of this bonus opportunity are available on the Chrome VRP rules and rewards page. The summary is as follows:

  • The bug reports may be submitted in advance while exploit development continues during this 180-day window. The functional exploits must be submitted to Chrome by the end of the 180-day window to be eligible for the triple or double reward.
  • The first functional full chain exploit we receive is eligible for the triple reward amount.
  • The full chain exploit must result in a Chrome browser sandbox escape, with a demonstration of attacker control / code execution outside of the sandbox.
  • Exploitation must be able to be performed remotely, with no or very limited reliance on user interaction.
  • The exploit must have been functional in an active release channel of Chrome (Dev, Beta, Stable, Extended Stable) at the time of the initial reports of the bugs in that chain. Please do not submit exploits developed from publicly disclosed security bugs or other artifacts in old, past versions of Chrome.

As is consistent with our general rewards policy, if the exploit allows for remote code execution (RCE) in the browser or another highly-privileged process, such as the network or GPU process, resulting in a sandbox escape without the need for a first-stage bug, the reward amount for renderer RCE “high quality report with functional exploit” would be granted and included in the calculation of the bonus reward total.

Based on our current Chrome VRP reward matrix, your full chain exploit could result in a total reward of over $165,000 - $180,000 for the first full chain exploit and over $110,000 - $120,000 for subsequent full chain exploits we receive in the six-month window of this reward opportunity.

We’d like to thank our entire Chrome researcher community for your past and ongoing efforts and security bug submissions! You’ve truly helped us make Chrome more secure for all users.

Happy Hunting!

Adding Chrome Browser Cloud Management remediation actions in Splunk using Alert Actions

Introduction

Chrome is trusted by millions of business users as a secure enterprise browser. Organizations can use Chrome Browser Cloud Management to help manage Chrome browsers more effectively. As an admin, you can use the Google Admin console to have Chrome report critical security events to third-party service providers such as Splunk® to create custom enterprise security remediation workflows.

Security remediation is the process of responding to security events that have been triggered by a system or a user. Remediation can be done manually or automatically, and it is an important part of an enterprise security program.

Why is Automated Security Remediation Important?

When a security event is identified, it is imperative to respond as soon as possible to prevent data exfiltration and to prevent the attacker from gaining a foothold in the enterprise. Organizations with mature security processes utilize automated remediation to improve their security posture by reducing the time it takes to respond to security events. This allows the usually overburdened Security Operations Center (SOC) teams to avoid alert fatigue.

Automated Security Remediation using Chrome Browser Cloud Management and Splunk

Chrome integrates with Chrome Enterprise Recommended partners such as Splunk® using Chrome Enterprise Connectors to report security events such as malware transfers, unsafe site visits, and password reuse. Other supported events can be found on our support page.

The Splunk integration with Chrome browser allows organizations to collect, analyze, and extract insights from security events. The extended security insights into managed browsers will enable SOC teams to perform better informed automated security remediations using Splunk® Alert Actions.

Splunk Alert Actions are a great capability for automating security remediation tasks. By creating alert actions, enterprises can automate the process of identifying, prioritizing, and remediating security threats.

In Splunk®, SOC teams can use alerts to monitor for and respond to specific Chrome Browser Cloud Management events. Alerts use a saved search to look for events in real time or on a schedule, and can trigger an Alert Action when search results meet specific conditions.

Use Case

If a user downloads a malicious file after bypassing a Chrome “Dangerous File” message, their managed browser or managed CrOS device should be quarantined.

Prerequisites

Setup

  1. Install the Google Chrome Add-on for Splunk App

    Please follow installation instructions here depending on your Splunk Installation to install the Google Chrome Add-on for Splunk App.

  2. Setting up Chrome Browser Cloud Management and Splunk Integration

    Please follow the guide here to set up Chrome Browser Cloud Management and Splunk® integration.

  3. Setting up Chrome Browser Cloud Management API access

    To call the Chrome Browser Cloud Management API, use a service account properly configured in the Google Admin console. Create a new service account (or use an existing one) and download the JSON representation of its key. A minimal example of authenticating with this key and calling the API is sketched after the setup steps below.

    Create a new role (or use an existing one) in the Admin console with all of the “Chrome Management” privileges.

    Assign the created role to the service account using the “Assign service accounts” button.

  4. Setting up Chrome Browser Cloud Management App in Splunk®

    Install the App (i.e. the Alert Action) from our GitHub page. Note that the Splunk App follows a particular directory structure; please take some time to understand how it is laid out.

  5. Setting up a Quarantine OU in Chrome Browser Cloud Management

    Create a “Quarantine” OU to move managed browsers into. Apply restrictive policies to this OU, which will then be applied to managed browsers and managed CrOS devices that are moved into it. In our case, we applied restrictive policies to our “Quarantine” OU, called Investigate. These policies ensure that a quarantined CrOS device or browser can only open a limited set of approved URLs.
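
For reference, the sketch below shows what the remediation call itself could look like: authenticating with the service account key from step 3 and asking the Chrome Browser Cloud Management API to move a managed browser into the Quarantine OU. This is a minimal sketch using the google-auth Python library; the moveChromeBrowsersToOu endpoint, OAuth scope, and request field names reflect our reading of the API and should be verified against the current API documentation, and the customer ID, device ID, and key path are placeholders.

    from google.oauth2 import service_account
    from google.auth.transport.requests import AuthorizedSession

    # Placeholders -- substitute values from your own environment.
    SERVICE_ACCOUNT_KEY = "service_account_key.json"
    CUSTOMER_ID = "C0xxxxxxx"                # Workspace customer ID
    BROWSER_DEVICE_ID = "browser-device-id"  # ID of the managed browser to quarantine
    QUARANTINE_OU = "/Investigate"           # OU created in step 5

    # OAuth scope for managing Chrome browsers via the Directory API (assumed).
    SCOPES = ["https://www.googleapis.com/auth/admin.directory.device.chromebrowsers"]

    credentials = service_account.Credentials.from_service_account_file(
        SERVICE_ACCOUNT_KEY, scopes=SCOPES
    )
    session = AuthorizedSession(credentials)

    # Move the managed browser into the Quarantine OU (endpoint and fields assumed).
    url = (
        "https://admin.googleapis.com/admin/directory/v1.1beta1/"
        f"customer/{CUSTOMER_ID}/devices/chromebrowsers/moveChromeBrowsersToOu"
    )
    response = session.post(
        url,
        json={"org_unit_path": QUARANTINE_OU, "resource_ids": [BROWSER_DEVICE_ID]},
    )
    response.raise_for_status()
    print("Moved browser to", QUARANTINE_OU)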

Configuration

  1. Start with a search for the Chrome Browser Cloud Management events in the Google Chrome Add-on for Splunk App. For our instance, we used a search query that looks for known malicious file download events.
  2. Save the search as an alert. The alert uses the saved search to check for events. Adjust the alert type to configure how often the search runs: use a scheduled alert to check for events on a regular basis, or a real-time alert to monitor for events continuously. An alert does not have to trigger every time it generates search results; set trigger conditions to manage when the alert triggers, and customize the alert settings as per your enterprise security policies. For our example, we used a real-time alert with a per-result trigger.

  3. We then configured the Chrome Browser Cloud Management Remediation Alert Action App with the following values (a minimal sketch of what such an alert action script looks like follows this list):

    • The OU Path of the Quarantine OU i.e. /Investigate
    • The Customer Id of the workspace domain
    • Service Account Key JSON value
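
To make the wiring concrete, here is a heavily trimmed sketch of what a custom Splunk alert action script of this kind typically looks like: Splunk invokes the script from the app’s bin/ directory with an --execute flag and passes a JSON payload describing the triggering alert on stdin. The configuration key names (ou_path, customer_id, service_account_key) and the result field carrying the device ID are illustrative rather than the exact names used by the app on GitHub, and move_to_quarantine stands in for the API call sketched in the setup section.

    import json
    import sys

    def move_to_quarantine(ou_path, customer_id, service_account_key, device_id):
        """Placeholder for the Chrome Browser Cloud Management API call
        sketched in the setup section above."""
        pass

    if __name__ == "__main__":
        # Splunk runs custom alert actions as: <script> --execute
        # and writes a JSON payload describing the alert to stdin.
        if len(sys.argv) > 1 and sys.argv[1] == "--execute":
            payload = json.loads(sys.stdin.read())

            # Parameters entered in the alert action UI (illustrative key names).
            config = payload.get("configuration", {})
            ou_path = config.get("ou_path")
            customer_id = config.get("customer_id")
            service_account_key = config.get("service_account_key")

            # The triggering event; which field carries the browser/device ID
            # depends on the search and the Chrome add-on's field extractions.
            result = payload.get("result", {})
            device_id = result.get("device_id")

            move_to_quarantine(ou_path, customer_id, service_account_key, device_id)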

Test the setup

Use the testsafebrowsing website to generate sample security events to test the setup.

  1. Open the testsafebrowsing website.
  2. Click the link for line item 4 under the Desktop Download Warnings section, i.e. “Should show an "uncommon" warning, for .exe”.
  3. You will see a Dangerous Download blocked warning giving you two options to either Discard or Keep the downloaded file. Click on Keep.
  4. This will trigger the alert action and move your managed browser or managed CrOS device to the “Quarantine” OU (OU name Investigate in our example) with restricted policies.

Conclusion

Security remediation is vital to any organization’s security program. In this blog we discussed configuring automated security remediation of Chrome Browser Cloud Management security events using Splunk alert actions. This scalable approach can be used to protect a company from online security threats by detecting and quickly responding to high-fidelity Chrome Browser Cloud Management security events, thereby greatly reducing the time to respond.

Our team will be at the Gartner Security and Risk Management Summit in National Harbor, MD, next week. Come see us in action if you’re attending the summit.

How the Chrome Root Program Keeps Users Safe

What is the Chrome Root Program?

A root program is one of the foundations for securing connections to websites. The Chrome Root Program was announced in September 2022. If you missed it, don’t worry - we’ll give you a quick summary below!

Chrome Root Program: TL;DR

Chrome uses digital certificates (often referred to as “certificates,” “HTTPS certificates,” or “server authentication certificates”) to ensure the connections it makes for its users are secure and private. Certificates are issued by trusted entities called “Certification Authorities” (CAs). The collection of digital certificates, CA systems, and other related online services is the foundation of HTTPS and is often referred to as the “Web PKI.”

Before issuing a certificate to a website, the CA must verify that the certificate requestor legitimately controls the domain whose name will be represented in the certificate. This process is often referred to as “domain validation” and there are several methods that can be used. For example, a CA can specify a random value to be placed on a website, and then perform a check to verify the value’s presence. Typically, domain validation practices must conform with a set of security requirements described in both industry-wide and browser-specific policies, like the CA/Browser Forum “Baseline Requirements” and the Chrome Root Program policy.
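
As a concrete illustration of the random-value method, the sketch below shows the kind of check a CA might perform over HTTP once the applicant has published the value. It is a simplified, hypothetical example (the well-known path and token format are placeholders) and not the Baseline Requirements’ exact procedure, which also constrains things like request paths, ports, and redirect handling.

    import secrets
    import urllib.request

    def issue_challenge():
        """CA side: generate an unpredictable random value for the applicant."""
        return secrets.token_urlsafe(32)

    def check_domain_control(domain, token,
                             path="/.well-known/pki-validation/token.txt"):
        """CA side: fetch the agreed-upon path over the public internet and
        confirm the random value is present, demonstrating control of the site."""
        with urllib.request.urlopen(f"http://{domain}{path}", timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        return token in body

    # Illustrative flow (the domain is a placeholder):
    token = issue_challenge()
    # ...the applicant places `token` at the agreed path on example.com, then...
    # domain_validated = check_domain_control("example.com", token)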

Upon connecting to a website, Chrome verifies that a recognized (i.e., trusted) CA issued its certificate, while also performing additional evaluations of the connection’s security properties (e.g., validating data from Certificate Transparency logs). Once Chrome determines that the certificate is valid, Chrome can use it to establish an encrypted connection to the website. Encrypted connections prevent attackers from being able to intercept (i.e., eavesdrop) or modify communication. In security speak, this is known as confidentiality and integrity.
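
The standard ssl module can illustrate the first part of this check: a TLS handshake only succeeds if the server presents a certificate that chains to a CA in the client’s trusted root set (here the operating system’s store rather than the Chrome Root Store) and that matches the hostname being visited. This is only a rough sketch of the idea; Chrome’s additional evaluations, such as Certificate Transparency checks, are not shown.

    import socket
    import ssl

    hostname = "www.google.com"

    # create_default_context() loads the platform's trusted roots and enables
    # certificate and hostname verification by default.
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # If we get here, the handshake already verified the chain; inspect the leaf.
            cert = tls.getpeercert()
            print("Issuer:", dict(pair[0] for pair in cert["issuer"]))
            print("Valid until:", cert["notAfter"])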

The Chrome Root Program, led by members of the Chrome Security team, provides governance and security review to determine the set of CAs trusted by default in Chrome. This set of so-called "root certificates" is known as the Chrome Root Store.

How does the Chrome Root Program keep users safe?

The Chrome Root Program keeps users safe by ensuring the CAs Chrome trusts to validate domains are worthy of that trust. We do that by:

  • administering policy and governance activities to manage the set of CAs trusted by default in Chrome,
  • evaluating impact and corresponding security implications related to public security incident disclosures by participating CAs, and
  • leading positive change to make the ecosystem more resilient.

Policy and Governance

The Chrome Root Program policy defines the minimum requirements a CA owner must meet for inclusion in the Chrome Root Store. It incorporates the industry-wide CA/Browser Forum Baseline Requirements and further adds security controls to improve Chrome user security.

The CA application process includes a public discussion phase, where members of the Web PKI community are free to raise well-founded, fact-based concerns related to an applicant on an open discussion forum.

We consider public discussion valuable because it:

  • improves security, transparency, and interoperability, and
  • highlights concerning behavior, practices, or ownership background information not readily available through public audits, policy reviews, or other application process inputs.

For a CA owner’s inclusion request to be accepted, it must clearly demonstrate that the value proposition for the security and privacy of Chrome’s end users exceeds the corresponding risk of inclusion.

Once a CA is trusted, it can issue certificates for any website on the internet; thus, each newly added CA represents an additional attack surface, and the Web PKI is only as safe as its weakest link. For example, in 2011 a compromised CA led to a large-scale attack on web users in Iran.

Incident Management

No CA is perfect. When a CA owner violates the Chrome Root Program policy – or experiences any other situation that affects the CA’s integrity, trustworthiness, or compatibility – we call it an incident. Incidents can happen. They are an expected part of building a secure Web PKI. All the same, incidents represent opportunities to improve practices, systems, and understanding. Our program is committed to continuous improvement and participates in a public Web PKI incident management process.

When incidents occur, we expect CA owners to identify the root cause and remediate it to help prevent similar incidents from happening again. CA owners record the incident in a report that the Chrome Root Program and the public can review, which encourages an understanding of all contributing factors to reduce the probability of its reoccurrence in the Web PKI.

The Chrome Root Program prioritizes the security and privacy of its users and is unwilling to compromise on these values. In rare cases, incidents may result in the Chrome Root Program losing confidence in the CA owner’s ability to operate securely and reliably. This may happen when there is evidence of a CA owner:

  • knowingly violating requirements or obfuscating incidents,
  • demonstrating sustained patterns of failure, untimely and opaque communications, or an unwillingness to improve elements that are critical to security, or
  • performing other actions that negatively impact or otherwise degrade the security of the Web.

In these cases, Chrome may distrust a CA – that is, remove the CA from the Chrome Root Store. Depending on the circumstance, Chrome may also block the certificate with a non-bypassable error page.

The above cases are only illustrative, and considerations for CA distrust are not limited to these examples. The Chrome Root Program may remove certificates from the Chrome Root Store, as it deems appropriate and at its sole discretion, to enhance security and promote interoperability in Chrome.

Positive Ecosystem Change

The Chrome Root Program collaborates with members of the Web PKI ecosystem in various forums (e.g., the CA/Browser Forum) and committees (e.g., the CCADB Steering Committee). We share best practices, advocate for and develop new standards to promote user security, and seek ecosystem participant feedback on proposed initiatives. Collectively, ecosystem participants contributing to these working groups are protecting the Web.

In June 2022, we announced the “Moving Forward, Together” initiative that shared our vision of the future Web PKI that includes modern, reliable, agile, and purpose-driven architectures with a focus on automation, simplicity, and security. The initiative represents the goals and priorities of the Chrome Root Program and reinforces our commitment to working alongside CA owners to make the Web a safer place.

Some of our current priorities include:

  • reducing misissuance of certificates that do not comply with the Baseline Requirements, a CA’s own policies, or the Chrome Root Program policy,
  • increasing accountability and ecosystem integrity with high-quality, independent audits,
  • automating certificate issuance and strengthening the domain validation process, and
  • preparing for a “post-quantum” world.

We believe implementing proposals related to these priorities will help manage risk and make the Web a safer place for everyone.

However, as the name suggests, we can only realize these opportunities to improve with the collective contributions of the community. We understand CAs to be an essential element of the Web PKI, and we are encouraged by continued feedback and participation from existing and future CA owners in our program.

The Chrome Root Program is committed to openness and transparency, and we are optimistic we can achieve this shared vision. If you’re interested in seeing what new initiatives are being explored by the Chrome Root Program to keep Chrome users safe - you can learn more here.

New Android & Google Device Vulnerability Reward Program Initiatives

As technology continues to advance, so do efforts by cybercriminals who look to exploit vulnerabilities in software and devices. This is why at Google and Android, security is a top priority, and we are constantly working to make our products more secure. One way we do this is through our Vulnerability Reward Programs (VRP), which incentivize security researchers to find and report vulnerabilities in our operating system and devices.

We are pleased to announce that we are implementing a new quality rating system for security vulnerability reports to encourage more security research in higher impact areas of our products and ensure the security of our users. This system will rate vulnerability reports as High, Medium, or Low quality based on the level of detail provided in the report. We believe that this new system will encourage researchers to provide more detailed reports, which will help us address reported issues more quickly and enable researchers to receive higher bounty rewards.

The highest quality and most critical vulnerabilities are now eligible for larger rewards of up to $15,000!

There are a few key elements we are looking for:

Accurate and detailed description: A report should clearly and accurately describe the vulnerability, including the device name and version. The description should be detailed enough to easily understand the issue and begin working on a fix.

Root cause analysis: A report should include a full root cause analysis that describes why the issue is occurring and what Android source code should be patched to fix it. This analysis should be thorough and provide enough information to understand the underlying cause of the vulnerability.


Proof-of-concept: A report should include a proof-of-concept that effectively demonstrates the vulnerability. This can include video recordings, debugger output, or other relevant information. The proof-of-concept should be of high quality and include the minimum amount of code possible to demonstrate the issue.

Reproducibility: A report should include a step-by-step explanation of how to reproduce the vulnerability on an eligible device running the latest version. This information should be clear and concise and should allow our engineers to easily reproduce the issue and begin working on a fix.

Evidence of reachability: Finally, a report should include evidence or analysis that demonstrates the type of issue and the level of access or execution achieved.

*Note: These criteria may change over time. For the most up-to-date information, please refer to our public rules page.

Additionally, starting March 15th, 2023, Android will no longer assign Common Vulnerabilities and Exposures (CVEs) to most moderate severity issues. CVEs will continue to be assigned to critical and high severity vulnerabilities.

We believe that incentivizing researchers to provide high-quality reports will benefit both the broader security community and our ability to take action. We look forward to continuing to work with researchers to make the Android ecosystem more secure.

If you would like more information on the Android & Google Device Vulnerability Reward Program, please visit our public rules page to learn more!

I/O 2023: What’s new in Android security and privacy


Android is built with multiple layers of security and privacy protections to help keep you, your devices, and your data safe. Most importantly, we are committed to transparency, so you can see your device safety status and know how your data is being used.

Android uses the best of Google’s AI and machine learning expertise to proactively protect you and help keep you out of harm’s way. We also empower you with tools that help you take control of your privacy.

I/O is a great moment to show how we bring these features and protections all together to help you stay safe from threats like phishing attacks and password theft, while remaining in charge of your personal data.

Safe Browsing: faster, more intelligent protection

Android uses Safe Browsing to protect billions of users from web-based threats, like deceptive phishing sites. This happens in Chrome, the default browser on Android, and also in Android WebView when you open web content from apps.

Safe Browsing is getting a big upgrade with a new real-time API that helps ensure you’re warned about fast-emerging malicious sites. With the newest version of Safe Browsing, devices will do real-time blocklist checks for low-reputation sites. Our internal analysis has found that a significant number of phishing sites only exist for less than ten minutes to try and stay ahead of blocklists. With this real-time detection, we expect we’ll be able to block an additional 25 percent of phishing attempts every month in Chrome and Android.1
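
At the heart of these blocklist checks are hash prefixes: the browser hashes expressions derived from the URL it is about to load and compares short prefixes of those hashes against a blocklist, only asking the Safe Browsing service about full hashes when a prefix matches. The sketch below is a conceptual illustration of a prefix check only; real Safe Browsing canonicalizes URLs, generates many host and path expressions per URL, and, in the real-time mode described above, sends the prefixes to the server over a privacy-preserving channel.

    import hashlib

    def url_prefix(url_expression, length=4):
        """SHA-256 the URL expression and keep only a short prefix."""
        return hashlib.sha256(url_expression.encode("utf-8")).digest()[:length]

    # Toy blocklist of 4-byte hash prefixes (contents are illustrative).
    blocklist_prefixes = {
        url_prefix("malicious.example/login.html"),
    }

    def needs_full_hash_check(url_expression):
        """A prefix match is not yet a verdict: the client would fetch the
        matching full hashes before deciding to warn the user."""
        return url_prefix(url_expression) in blocklist_prefixes

    print(needs_full_hash_check("malicious.example/login.html"))  # True -> check further
    print(needs_full_hash_check("safe.example/index.html"))       # False -> proceed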

Safe Browsing isn’t just getting faster at warning users. We’ve also been building in more intelligence, leveraging Google’s advances in AI. Last year, Chrome browser on Android and desktop started utilizing a new image-based phishing detection machine learning model to visually inspect fake sites that try to pass themselves off as legitimate log-in pages. By leveraging a TensorFlow Lite model, we’re able to find 3x more phishing sites2 compared to previous machine learning models and help warn you before you get tricked into signing in. This year, we're expanding the coverage of the model to detect hundreds more phishing campaigns and leverage new ML technologies.

This is just one example of how we use our AI expertise to keep your data safe. Last year, Android used AI to protect users from 100 billion suspected spam messages and calls.3

Passkeys help move users beyond passwords

For many, passwords are the primary protection for their online life. In reality, they are frustrating to create and remember, and are easily hacked. But hackers can’t phish a password that doesn’t exist, which is why we are excited to share another major step forward in our passwordless journey: Passkeys.

Passkeys combine the advanced security of 2-Step Verification with the convenience of simply unlocking your device — so signing in is as easy as glancing at your phone or scanning your fingerprint. And because they use cutting-edge cryptography to create a “key” that is unique between you and a specific app or website, passkeys can’t be stolen by hackers the way that passwords can.
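
Under the hood, a passkey is a public/private key pair: the private key stays on the user’s device (or in their encrypted sync), the website stores only the public key, and signing a fresh server challenge proves possession without any shared secret a phisher could capture. The sketch below illustrates that sign-and-verify round trip with the Python cryptography library; it is a conceptual illustration only, not the actual WebAuthn/FIDO2 protocol, which additionally binds the signature to the site’s identity and other authenticator data.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Registration: the device creates a key pair scoped to one site;
    # only the public key is sent to, and stored by, the server.
    private_key = ec.generate_private_key(ec.SECP256R1())
    public_key = private_key.public_key()

    # Sign-in: the server sends a fresh random challenge...
    challenge = os.urandom(32)

    # ...the device signs it after the user unlocks their screen...
    signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # ...and the server verifies the signature with the stored public key
    # (verify() raises InvalidSignature on failure).
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("challenge signed and verified")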

Last week, we announced you can use a passkey to log in to your Google Account on all major platforms. We’re the first major tech company to simplify sign-in with passkeys across our own platform. You can also use passkeys on services like PayPal, Shopify, and Docusign, with many more on the way. Start saying goodbye to passwords and try it today.

To help support developers as they incorporate passkeys, we’ve launched a Credential Manager Jetpack API that brings together multiple sign-in methods, such as passkeys, passwords, and federated sign-in, into a unified interface for users and a single API for developers.

Better protections for apps

Accessibility services are helpful for people with disabilities, but their broad powers can be used by malware and bad apps to read screen content. In Android 14, we’re introducing a new API that lets developers limit accessibility services from interacting with their apps. Now, with a new app attribute, developers can limit access to only those apps that have declared themselves as accessibility tools and have been validated by Google Play Protect. This adds more protection from side-loaded apps that may get installed and try to access sensitive data.

In Android 14, we’re preventing apps that target an SDK level lower than 23 from being installed. This is because malware often targets older levels to get around newer security and privacy protections. This won’t affect existing apps on your device, but new installs will have to meet this requirement.

More transparency around how your data is used

We launched the Data safety section in Google Play last year to help you see how developers collect, share, and protect user data. Every day, millions of users use the Data safety section information to evaluate an app’s safety before installing it.

In Android 14, we’re extending this transparency to permission dialogs, starting with location data usage. So every time an app asks for permission to use location data, you’ll be able to see right away if the app shares the location data with third parties.

And if an app changes its data sharing practices, for example, to start using it for ads purposes, we’ll notify you through a new monthly notification. As with the permissions dialogs, we’re starting with location data but will be expanding to other permission types in future releases.



We’re also empowering you with greater clarity and control over your account data by making it easier to delete accounts that you’ve created in apps. Developers will soon need to provide ways for you to ask for your account and data to be deleted via the app and the app’s Data safety section in Google Play, giving you more control both inside and outside of apps. They can also offer you an option to clean up your account and ask for other data, like activity history or images, to be deleted instead of your entire account.

Better control and protection over your photos and videos

Last year, we announced the Android Photo Picker, a new tool that apps can use to request access to specific photos and videos instead of requesting permission to a user's entire media library. We’re updating Photo Picker through Google Play services to support older devices going back to Android 4.4.

With Android 14, we modified the photo/video permissions to let you choose only specific media to share, even if an app hasn’t opted into Photo Picker. You can still decide to allow or deny all access to photos but this provides more granular control.

We’re also introducing a new API that will enable developers to recognize screenshots without requiring them to get access to your photos. This helps limit media access for developers while still providing them with the tools they need to detect screenshots in their apps.

Android remains committed to protecting users by combining advanced security and AI with thoughtful privacy controls and transparency to protect billions of users around the world. Stay tuned for more upcoming protections we’ll be launching throughout the year and learn more about how Android keeps you safe at android.com/safety.

Notes


  1. Based on estimated daily increase across desktop and mobile comparing Safe Browsing API 5 to API 4 

  2. Based on internal data from January to May 2023.

  3. Estimating from annual and monthly spam call and spam messaging data 

Google and Apple lead initiative for an industry specification to address unwanted tracking

Companies welcome input from industry participants and advocacy groups on a draft specification to alert users in the event of suspected unwanted tracking

Location-tracking devices help users find personal items like their keys, purse, luggage, and more through crowdsourced finding networks. However, they can also be misused for unwanted tracking of individuals.

Today Google and Apple jointly submitted a proposed industry specification to help combat the misuse of Bluetooth location-tracking devices for unwanted tracking. The first-of-its-kind specification will allow Bluetooth location-tracking devices to be compatible with unauthorized tracking detection and alerts across Android and iOS platforms. Samsung, Tile, Chipolo, eufy Security, and Pebblebee have expressed support for the draft specification, which offers best practices and instructions for manufacturers, should they choose to build these capabilities into their products.

“Bluetooth trackers have created tremendous user benefits but also bring the potential of unwanted tracking, which requires industry-wide action to solve,” said Dave Burke, Google’s vice president of Engineering for Android. “Android has an unwavering commitment to protecting users and will continue to develop strong safeguards and collaborate with the industry to help combat the misuse of Bluetooth tracking devices.”

“Apple launched AirTag to give users the peace of mind knowing where to find their most important items,” said Ron Huang, Apple’s vice president of Sensing and Connectivity. “We built AirTag and the Find My network with a set of proactive features to discourage unwanted tracking, a first in the industry, and we continue to make improvements to help ensure the technology is being used as intended. This new industry specification builds upon the AirTag protections, and through collaboration with Google, results in a critical step forward to help combat unwanted tracking across iOS and Android.”

In addition to incorporating feedback from device manufacturers, input from various safety and advocacy groups has been integrated into the development of the specification.

“The National Network to End Domestic Violence has been advocating for universal standards to protect survivors – and all people – from the misuse of bluetooth tracking devices. This collaboration and the resulting standards are a significant step forward. NNEDV is encouraged by this progress,” said Erica Olsen, the National Network to End Domestic Violence’s senior director of its Safety Net Project. “These new standards will minimize opportunities for abuse of this technology and decrease burdens on survivors in detecting unwanted trackers. We are grateful for these efforts and look forward to continuing to work together to address unwanted tracking and misuse.”

“Today’s release of a draft specification is a welcome step to confront harmful misuses of Bluetooth location trackers,” said Alexandra Reeve Givens, the Center for Democracy & Technology’s president and CEO. “CDT continues to focus on ways to make these devices more detectable and reduce the likelihood that they will be used to track people. A key element to reducing misuse is a universal, OS-level solution that is able to detect trackers made by different companies on the variety of smartphones that people use every day. We commend Apple and Google for their partnership and dedication to developing a uniform solution to improve detectability. We look forward to the specification moving through the standardization process and to further engagement on ways to reduce the risk of Bluetooth location trackers being misused.”

The specification has been submitted as an Internet-Draft via the Internet Engineering Task Force (IETF), a leading standards development organization. Interested parties are invited and encouraged to review and comment over the next three months. Following the comment period, Google and Apple will partner to address feedback and will release a production implementation of the specification for unwanted tracking alerts by the end of 2023 that will then be supported in future versions of Android and iOS.

Google will share more on how we’re combatting unwanted tracking at I/O 2023.

Secure mobile payment transactions enabled by Android Protected Confirmation

Unlike other mobile OSes, Android is built with a transparent, open-source architecture. We firmly believe that our users and the mobile ecosystem at-large should be able to verify Android’s security and safety and not just take our word for it.

We’ve demonstrated our deep belief in security transparency by investing in features that enable users to confirm that what they expect to be happening on their device is actually happening.

The Assurance of Android Protected Confirmation

One of those features is Android Protected Confirmation, an API that enables developers to utilize Android hardware to provide users even more assurance that a critical action has been executed securely. Using a hardware-protected user interface, Android Protected Confirmation can help developers verify a user’s action intent with a very high degree of confidence. This can be especially useful in a number of user moments – like during mobile payment transactions - that greatly benefit from additional verification and security.

We’re excited to see that Android Protected Confirmation is now gaining ecosystem attention as an industry-leading method for confirming critical user actions via hardware. Recently, UBS Group AG and the Bern University of Applied Sciences, co-financed by Innosuisse and UBS Next, announced they’re working with Google on a pilot project to establish Protected Confirmation as a common application programmable interface standard. In a pilot planned for 2023, UBS online banking customers with Pixel 6 or 7 devices can use Android Protected Confirmation backed by StrongBox, a certified hardware vault with physical attack protections, to confirm payments and verify online purchases through a hardware-based confirmation in their UBS Access App.


Demonstrating Real-World Use for Android Protected Confirmation

We’ve been working closely with UBS to bring this pilot to life and ensure they’re able to test it on Google Pixel devices. Demonstrating real-world use cases that are enabled by Android Protected Confirmation unlocks the promise of this technology by delivering improved and innovative experiences for our users. We’re seeing interest in Android Protected Confirmation across the industry, and OEMs are increasingly looking at how to build even more hardware-based confirmation into critical security user moments. We look forward to forming an industry alliance that will work together to strengthen mobile security and home in on protected confirmation support.