Tag Archives: chrome security

Vulnerability Reward Program: 2023 Year in Review

Last year, we again witnessed the power of community-driven security efforts as researchers from around the world contributed to help us identify and address thousands of vulnerabilities in our products and services. Working with our dedicated bug hunter community, we awarded $10 million to our 600+ researchers based in 68 countries.

New Resources and Improvements

Just like every year, 2023 brought a series of changes and improvements to our vulnerability reward programs:

  • Through our new Bonus Awards program, we now periodically offer time-limited, extra rewards for reports to specific VRP targets.
  • We expanded our exploit reward program to Chrome and Cloud through the launch of v8CTF, a CTF focused on V8, the JavaScript engine that powers Chrome.
  • We launched Mobile VRP which focuses on first-party Android applications.
  • Our new Bughunters blog shared ways in which we make the internet, as a whole, safer, and what that journey entails. Take a look at our ever-growing repository of posts!
  • To further our engagement with top security researchers, we also hosted our yearly security conference ESCAL8 in Tokyo. It included live hacking events and competitions, student training with our init.g workshops, and talks from researchers and Googlers. Stay tuned for details on ESCAL8 2024.

As in past years, we are sharing our 2023 Year in Review statistics across all of our programs. We would like to give a special thank you to all of our dedicated researchers for their continued work with our programs - we look forward to more collaboration in the future!

Android and Google Devices

In 2023, the Android VRP achieved significant milestones, reflecting our dedication to securing the Android ecosystem. We awarded over $3.4 million in rewards to researchers who uncovered remarkable vulnerabilities within Android, and we increased our maximum reward to $15,000 for critical vulnerabilities. We also saw a sharpened focus on higher severity issues as a result of our changes incentivizing report quality and increasing rewards for high and critical severity issues.

Expanding the program’s scope, we added Wear OS to further incentivize research into new wearable technology and help ensure users’ safety.

Working closely with top researchers at the ESCAL8 conference, we also hosted a live hacking event for Wear OS and Android Automotive OS which resulted in $70,000 rewarded to researchers for finding over 20 critical vulnerabilities!

We would also like to spotlight the hardwear.io security conferences. Hardwear.io gave us a platform to engage with top hardware security researchers who uncovered over 50 vulnerabilities in Nest, Fitbit, and Wearables, and received a total of $116,000 last year!

The Google Play Security Reward Program continued to foster security research across popular Android apps on Google Play.

A huge thank you to the researchers who made our program such a success. A special shout out to Zinuo Han (@ele7enxxh) of OPPO Amber Security Lab and Yu-Cheng Lin (林禹成) (@AndroBugs) for your hard work and continuing to be some of the top researchers contributing to Android VRPs!

Chrome

2023 was a year of changes and experimentation for the Chrome VRP. In Chrome milestone 116, MiraclePtr was launched across all Chrome platforms. This raised the difficulty of discovering fully exploitable non-renderer use-after-free (UAF) bugs in Chrome, and MiraclePtr-protected UAFs are now treated as highly mitigated security bugs with correspondingly lower reward amounts. Because code protected by MiraclePtr is expected to be resilient to the exploitation of non-renderer UAFs, the Chrome VRP also launched the MiraclePtr Bypass Reward to incentivize research into potential bypasses of this protection.

The Chrome VRP also launched the Full Chain Exploit Bonus, offering triple the standard full reward amount for the first Chrome full-chain exploit reported and double the standard full reward amount for any follow-up reports. While both of these large incentives have gone unclaimed, we are leaving the door open in 2024 for any researchers looking to take on these challenges.

In 2023, the Chrome VRP also introduced increased rewards for V8 bugs in older channels of Chrome, with an additional bonus for bugs that existed before M105. This resulted in a few very impactful reports of long-existing V8 bugs, including one report of a V8 JIT optimization bug present in Chrome since at least M91, which earned that researcher a $30,000 reward.

All of this resulted in $2.1M in rewards to security researchers for 359 unique reports of Chrome Browser security bugs. We were also able to meet some of our top researchers from previous years who were invited to participate in bugSWAT as part of Google’s ESCAL8 event in Tokyo in October. We capped off the year by publicly announcing our 2023 Top 20 Chrome VRP reporters who received a bonus reward for their contributions.

Thank you to the Chrome VRP security researcher community for your contributions and efforts to help us make Chrome more secure for everyone!

Generative AI

Last year, we also ran a bugSWAT live-hacking event targeting LLM products. Apart from fun, sun, and a lot to do, we also got 35 reports, totaling more than $87,000 - and discovered issues like Johann, Joseph, and Kai’s “Hacking Google Bard - From Prompt Injection to Data Exfiltration” and Roni, Justin, and Joseph’s “We Hacked Google A.I. for $50,000”.

To help AI-focused bughunters know what’s in scope and what’s not, we recently published our criteria for bugs in AI products. These criteria aim to facilitate testing for traditional security vulnerabilities as well as risks specific to AI systems, and they are one way that we are implementing the voluntary AI commitments that Google made at the White House in July.

Looking Forward

We remain committed to fostering collaboration, innovation, and transparency with the security community. Our ongoing mission is to stay ahead of emerging threats, adapt to evolving technologies, and continue to strengthen the security posture of Google’s products and services. We look forward to continuing to drive greater advancements in the world of cybersecurity.

A huge thank you to our bug hunter community for helping to make Google products and platforms more safe and secure for our users around the world!

Thank you to Adam Bacchus, Dirk Göhmann, Eduardo Vela, Sarah Jacobus, Amy Ressler, Martin Straka, Jan Keller, Tony Mendez.

Qualified certificates with qualified risks

Improving the interoperability of web services is an important and worthy goal. We believe that it should be easier for people to maintain and control their digital identities. And we appreciate that policymakers working on European Union digital certificate legislation, known as eIDAS, are working toward this goal. However, a specific part of the legislation, Article 45, hinders browsers’ ability to enforce certain security requirements on certificates, potentially holding back advances in web security for decades. We and many past and present leaders in the international web community have significant concerns about Article 45's impact on security.

We urge lawmakers to heed the calls of scientists and security experts to revise this part of the legislation rather than erode users’ privacy and security on the web.

Making Chrome more secure by bringing Key Pinning to Android

Chrome 106 added support for enforcing key pins on Android by default, bringing Android to parity with Chrome on desktop platforms. But what is key pinning anyway?

One of the reasons Chrome implements key pinning is the “rule of two”. This rule is part of Chrome’s holistic secure development process. It says that when you are writing code for Chrome, you can pick no more than two of: code written in an unsafe language, processing untrustworthy inputs, and running without a sandbox. This blog post explains how key pinning and the rule of two are related.
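
As an illustrative sketch only (this is not Chrome code), the rule can be expressed as a simple predicate over the three risky properties named above; the function name and structure here are ours:

```python
# Illustrative only: the "rule of two" as a predicate. Chrome enforces this
# through design and security review, not through code like this.
def allowed_by_rule_of_two(unsafe_language: bool,
                           untrustworthy_input: bool,
                           no_sandbox: bool) -> bool:
    """At most two of the three risky properties may be true."""
    return sum([unsafe_language, untrustworthy_input, no_sandbox]) <= 2

# A sandboxed C++ renderer parsing untrustworthy web content: allowed.
assert allowed_by_rule_of_two(True, True, False)
# Unsandboxed C++ parsing untrustworthy input: not allowed.
assert not allowed_by_rule_of_two(True, True, True)
```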

The Rule of Two

Chrome is primarily written in the C and C++ languages, which are vulnerable to memory safety bugs. Mistakes with pointers in these languages can lead to memory being misinterpreted. Chrome invests in an ever-stronger multi-process architecture built on sandboxing and site isolation to help defend against memory safety problems. Android-specific features can be written in Java or Kotlin. These languages are memory-safe in the common case. Similarly, we’re working on adding support to write Chrome code in Rust, which is also memory-safe.

Much of Chrome is sandboxed, but the sandbox still requires a core high-privilege “broker” process to coordinate communication and launch sandboxed processes. In Chrome, the broker is the browser process. The browser process is the source of truth that allows the rest of Chrome to be sandboxed and coordinates communication between the rest of the processes.

If an attacker is able to craft a malicious input to the browser process that exploits a bug and allows the attacker to achieve remote code execution (RCE) in the browser process, that would effectively give the attacker full control of the victim’s Chrome browser and potentially the rest of the device. Conversely, if an attacker achieves RCE in a sandboxed process, such as a renderer, the attacker's capabilities are extremely limited. The attacker cannot reach outside of the sandbox unless they can additionally exploit the sandbox itself.

Without sandboxing, which limits the actions an attacker can take, and without memory safety, which removes the ability of a bug to disrupt the intended control flow of the program, the rule of two requires that the browser process does not handle untrustworthy inputs. The relative risks between sandboxed processes and the browser process are why the browser process is only allowed to parse trustworthy inputs and specific IPC messages.

Trustworthy inputs are defined extremely strictly: a “trustworthy source” means that Chrome can prove that the data comes from Google. Effectively, this means that in situations where the browser process needs access to data from external sources, it must be read from Google servers. We can cryptographically prove that data came from Google servers if that data comes from:

  • The component updater
  • The variations framework

The component updater and the variations framework are services specific to Chrome used to ship data-only updates and configuration information. These services both use asymmetric cryptography to authenticate their data, and the public key used to verify data sent by these services is shipped in Chrome.
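
The verification step can be sketched as follows. This is a minimal illustration assuming an Ed25519 key for simplicity; the actual signature scheme and key material used by Chrome's component updater may differ:

```python
# Illustrative sketch: verifying a data-only update against a public key that
# ships with the client. Assumes Ed25519; Chrome's actual scheme may differ.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def is_update_authentic(public_key: Ed25519PublicKey,
                        payload: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

# Demo with a locally generated key pair; in the real design the public key
# ships inside the Chrome binary and the private key stays with the service.
signing_key = Ed25519PrivateKey.generate()
shipped_public_key = signing_key.public_key()
payload = b'{"pins_version": 42}'
assert is_update_authentic(shipped_public_key, payload, signing_key.sign(payload))
```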

However, Chrome is a feature-filled browser with many different use cases, and many different features beyond just updating itself. Certain features, such as Sign-In and the Discover Feed, need to communicate with Google. For features like this, that communication can be considered trustworthy if it comes from a pinned HTTPS server.

When Chrome connects to an HTTPS server, the server says “a 3rd party you trust (a certification authority; CA) has vouched for my identity.” It does this by presenting a certificate issued by a trusted certification authority. Chrome verifies the certificate before continuing. The modern web necessarily has a lot of CAs, all of whom can provide authentication for any website. To further ensure that the Chrome browser process is communicating with a trustworthy Google server, we want to verify something more: whether a specific CA is vouching for the server. We do this by building a map of sites → expected CAs directly into Chrome. We call this key pinning. We call the map the pin set.
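
Conceptually, the check looks something like the sketch below. Chrome's real implementation is in C++ and matches SHA-256 hashes of public keys (SPKI) anywhere in the verified certificate chain; the host name and hash values here are placeholders, not real pins:

```python
# Conceptual sketch of a pin-set check (not Chrome's implementation).
# A pin is the SHA-256 hash of a certificate's SubjectPublicKeyInfo (SPKI).
import base64
import hashlib

# host -> set of acceptable SPKI hashes (placeholder values).
PIN_SET = {
    "accounts.google.com": {"sha256/<base64-hash-of-expected-ca-key>"},
}

def spki_fingerprint(spki_der: bytes) -> str:
    return "sha256/" + base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def chain_matches_pins(host: str, chain_spki_ders: list[bytes]) -> bool:
    pins = PIN_SET.get(host)
    if pins is None:
        return True  # host is not pinned; normal certificate verification applies
    return any(spki_fingerprint(spki) in pins for spki in chain_spki_ders)
```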

What is Key Pinning?

Key pinning was born as a defense against real attacks seen in the wild: attackers who can trick a CA to issue a seemingly-valid certificate for a server, and then the attacker can impersonate that server. This happened to Google in 2011, when the DigiNotar certification authority was compromised and used to issue malicious certificates for Google services. To defend against this risk, Chrome contains a pin set for all Google properties, and we only consider an HTTPS input trustworthy if it’s authenticated using a key in this pin set. This protects against malicious certificate issuance by third parties.

Key pinning can be brittle, and is rarely worth the risks. Allowing the pin set to get out of date can lead to locking users out of a website or other services, potentially permanently. Whenever pinning, it’s important to have safety-valves such as not enforcing pinning (i.e. failing open) when the pins haven't been updated recently, including a “backup” key pin, and having fallback mechanisms for bootstrapping. It's hard for individual sites to manage all of these mechanisms, which is why dynamic pinning over HTTPS (HPKP) was deprecated. Key pinning is still an important tool for some use cases, however, where there's high-privilege communication that needs to happen between a client and server that are operated by the same entity, such as web browsers, automatic software updates, and package managers.

Security Benefits of Key Pinning in Chrome, Now on Android

By pinning in Chrome, we can protect users from CA compromise. We take steps to prevent an out-of-date pin set from unnecessarily blocking users from accessing Google or Google's services. As both a browser vendor and site operator, however, we have additional tools to ensure we keep our pin sets up to date—if we use a new key or a new domain, we can add it to the pin set in Chrome at the same time. In our original implementation of pinning, the pin set is directly compiled into Chrome and updating the pin set requires updating the entire Chrome binary. To make sure that users of old versions of Chrome can still talk to Google, pinning isn't enforced if Chrome detects that it is more than 10 weeks old.

Historically, Chrome enforced the age limit by comparing the current time to the build timestamp in the Chrome binary. Chrome did not enforce pinning on Android because the build timestamp on Android wasn’t always reflective of the age of the Chrome pinset, which meant that the chance of a false positive pin mismatch was higher.

Without enforcing pins on Android, Chrome was limiting the ways engineers could build features that comply with the rule of two. To remove this limitation, we built an improved mechanism for distributing the built-in pin set to Chrome installs, including Android devices. Chrome still contains a built-in pin set compiled into the binary. However, we now additionally distribute the pin set via the component updater, which is a mechanism for Chrome to dynamically push out data-only updates to all Chrome installs without requiring a full Chrome update or restart. The component contains the latest version of the built-in pin set, as well as the certificate transparency log list and the contents of the Chrome Root Store. This means that even if Chrome is out of date, it can still receive updates to the pin set. The component also includes the timestamp the pin list was last updated, rather than relying on build timestamp. This drastically reduces the false positive risk of enabling key pinning on Android.
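
Putting these pieces together, the enforcement decision can be sketched as below. This assumes the same 10-week staleness threshold described above is applied to the component's last-updated timestamp; Chrome's actual thresholds and logic live in its C++ network stack and may differ:

```python
# Sketch: only enforce pins when the pin list is known to be fresh.
# Fail open (skip enforcement) if the pin list may be out of date.
from datetime import datetime, timedelta, timezone

MAX_PIN_LIST_AGE = timedelta(weeks=10)  # assumed threshold, per the text above

def should_enforce_pins(pin_list_last_updated: datetime) -> bool:
    age = datetime.now(timezone.utc) - pin_list_last_updated
    return age < MAX_PIN_LIST_AGE
```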

After we moved the pin set to the component updater, we were able to do a slow rollout of pinning enforcement on Android. We determined that the false positive risk was now in line with desktop platforms, and key pinning enforcement has been enabled by default since Chrome 106, released in September 2022.

This change has been entirely invisible to users of Chrome. While not all of the changes we make in Chrome are flashy, we're constantly working behind the scenes to keep Chrome as secure as possible and we're excited to bring this protection to Android.

An update on Chrome Security updates – shipping security fixes to you faster

To get security fixes to you faster, starting now in Chrome 116, Chrome is shipping weekly Stable channel updates.

Chrome ships a new milestone release every four weeks. In between those major releases, we ship updates to address security and other high impact bugs. We currently schedule one of these Stable channel updates (or “Stable Refresh”) between each milestone. Starting in Chrome 116, Stable updates will be released every week between milestones.

This should not change how you use or update Chrome, nor is the frequency of milestone releases changing, but it does mean security fixes will get to you faster.

Reducing the Patch Gap

Chromium is the open source project which powers Chrome and many other browsers. Anyone can view the source code, submit changes for review, and see the changes made by anyone else, even security bug fixes. Users of our Canary (and Beta) channels receive those fixes and can sometimes give us early warning of unexpected stability, compatibility, or performance problems in advance of the fix reaching the Stable channel.

This openness has benefits in testing fixes and discovering bugs, but comes at a cost: bad actors could possibly take advantage of the visibility into these fixes and develop exploits to apply against browser users who haven’t yet received the fix. This exploitation of a known and patched security issue is referred to as n-day exploitation.

That’s why we believe it’s really important to ship security fixes as soon as possible, to minimize this “patch gap”.

When a Chrome security bug is fixed, the fix is landed in the public Chromium source code repository. The fix is then publicly accessible and discoverable. After the patch lands, engineers across Chrome work to test and verify it and to evaluate security bug fixes for backporting to affected release branches. Once backported, security fixes impacting the Stable channel await the next Stable channel update. The time between the patch being landed and shipped in a Stable channel update is the patch gap.

Chrome began releasing Stable channel updates every two weeks in 2020, with Chrome 77, as a way to help reduce the patch gap. Before Chrome 77, our patch gap averaged 35 days. Since moving to the biweekly release cadence, the patch gap has been reduced to around 15 days. The switch to weekly updates allows us to ship security fixes even faster, and further reduce the patch gap.

While we can’t fully remove the potential for n-day exploitation, a weekly Chrome security update cadence allows us to ship security fixes 3.5 days sooner on average, greatly reducing the already small window for n-day attackers to develop and use an exploit against potential victims and making their lives much more difficult.
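
That 3.5-day figure follows directly from the cadence change: a fix that lands at a uniformly random point between scheduled updates waits, on average, half the release interval.

```python
# Average wait for the next scheduled Stable update, assuming fixes land at
# uniformly random times between releases.
biweekly_avg_wait_days = 14 / 2  # 7.0 days
weekly_avg_wait_days = 7 / 2     # 3.5 days

print(biweekly_avg_wait_days - weekly_avg_wait_days)  # 3.5 days sooner on average
```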

Getting Fixes to You Faster

Not all security bug fixes are used for n-day exploitation. But we don’t know which bugs are exploited in practice, and which aren't, so we treat all critical and high severity bugs as if they will be exploited. A lot of work goes into making sure these bugs get triaged and fixed as soon as possible. Rather than having fixes sitting and waiting to be included in the next bi-weekly update, weekly updates will allow us to get important security bug fixes to you sooner, and better protect you and your most sensitive data.

Reducing Unplanned Updates

As always, we treat any Chrome bug with a known in-the-wild exploit as a security incident of the highest priority and set about fixing the bug and getting a fix out to users as soon as possible. This has meant shipping the fix in an unscheduled update, so that you are protected immediately. Now that we are shipping Stable updates weekly, we expect the number of unplanned updates to decrease, since scheduled updates will go out more frequently.

What You Can Do

Keep a lookout for notifications from your desktop or mobile device letting you know an update of Chrome is available. If an update is available, please update immediately each time!

If you are concerned that updating Chrome will interrupt your work or result in lost tabs, not to worry – when relaunching Chrome to update, your open tabs and windows are saved and Chrome re-opens them after restart. If you are browsing in Incognito mode, your tabs will not be saved. You can simply choose to delay restarting by selecting Not now, and the updates will be applied the next time you restart Chrome.

We are exploring improved ways of informing you a new Chrome update is available. Keep a lookout for these new notifications which have been rolled out for Stable experimentation to 1% of users.

Other Chromium-based browsers have varying patch gaps. Chrome does not control the update cadence of other Chromium browsers. The change described here is only applicable to Chrome. If you are using other Chromium browsers, you may want to explore the security update cadence of those browsers.

The rest is on us – with this change we’re dedicated to continuing to work to get security fixes to you as fast as possible.

A look at Chrome’s security review culture



Security reviewers must develop the confidence and skills to make fast, difficult decisions. A simplistic piece of advice to reviewers is “just be confident” but in reality that takes practice and experience. Confidence comes with time, and people are there to support each other as we learn. This post shares advice we give to people doing security reviews for Chrome.

Security Review in Chrome

Chrome has a lightweight launch process. Teams write requirements and design documents outlining why the feature should be built, how the feature will benefit users, and how the feature will be built. Developers write code behind a feature flag and must pass a Launch Review before turning it on. Teams think about security early-on and coordinate with the security team. Teams are responsible for the safety of their features and ensuring that the security team is able to say ‘yes’ to its security review.

Security review focuses on the design of a proposed feature, not its implementation details, and is distinct from code review. Chrome changes need approval from engineers familiar with the code being changed, but not necessarily from security experts. It is not practical for security engineers to scrutinize every change. Instead we focus on the feature’s architecture, and how it might affect people using Chrome.

Reviewers function best in an open and supportive engineering culture. Security review is not an easy task – it applies security engineering insights in a social context that could become adversarial and fractious. Google, and Chrome, embody a security-centric engineering culture, where respectful disagreement is valued, where we learn from mistakes, where decisions can be revisited, and where developers see the security team as a partner that helps them ship features safely. Within the security team we support each other by encouraging questioning & learning, and provide mentorship and coaching to help reviewers enhance their reviewing skills.

Learning security review

Start by shadowing

Start with some help. As a new reviewer, you may not feel you’re 100% ready — don’t let that put you off. The best way to learn is to observe and see what’s involved before easing in to doing reviews on your own. Start by shadowing to get a feel for the process. Ask the person you are shadowing how they plan to approach the review, then look at the materials yourself. Concentrate on learning how to review rather than on the details of the thing you are reviewing. Don’t get too involved but observe how the reviewer does things and ask them why. Next time try to co-review something - ask the feature team some questions and talk through your thoughts with the other reviewer. Let them make the final approval decision. Do this a few times and you’ll be ready to be the main reviewer, and remember that you can always reach out for help and advice.

Read enough to make a decision

Read a lot, but know when to stop. Understand what the feature is doing, what’s new, and what’s built on existing, approved, mechanisms. Focus on the new things. If you need to educate yourself, skim older docs or code for context. It can help to look at related reviews for repeated issues and solutions. It is tempting to try to understand everything and at first you’ll dig deeper than you need to. You’ll get better at knowing when to stop after a few reviews. Treat existing, approved, features as building blocks that you don’t need to fully understand, but might be useful to skim as background.

Launch review is a gate. It’s ok to ask feature teams to have the materials ready. Try to use your time wisely — if a design doc is very brief and lacks any security discussion you can quickly say "please add a security considerations section" and stop thinking about it until the team comes back with more complete documentation. If the design document doesn’t fully explain something that is a sign the document needs to be expanded — if something isn’t clear to you or isn’t covered then start asking questions. Remember that you’re not looking for every possible bug, but ensuring that major concerns are addressed upfront.

As you’re reading, read actively and write down observations and questions as you go. Cross them off if you find an answer later. For your first reviews this will take a long time. Don’t worry too much about that - you won't know yet which details matter. Over time you’ll learn where to focus your attention. This is also a good time to pair up with a seasoned reviewer. Schedule a chat to go over your thoughts before you share them with the feature team. This will help you understand the process people go through and allows a safe evaluation of your thoughts before you share them more widely - this will help you build confidence. Next, clarify any questions with the feature team. Try to write a sentence or two describing the feature - if you can’t do this it indicates you need more information.

Ask questions to improve documentation

You have permission to be ignorant! Use it! Ask questions until you understand areas of uncertainty. Asking questions provides real value, and often triggers the team to realize that something should be done differently. In particular — if it’s confusing to you it’s probably badly explained or badly thought out, or shows that an assumption or tacit knowledge is missing from a design document. If you’re worried about looking ignorant, make use of the more experienced reviewers around you — ask on the chat or book some time to talk over your thoughts one-on-one. This should help you formulate your question so that it’s useful to the feature team. Try to write out what you think is happening, and let the feature team tell you if you’re close or not.

The chances that you’ll understand everything immediately are very low, and that’s ok. In meetings about a feature a favorite question of mine is ‘what are you secretly worried about?’ followed by an awkward pause. People will absolutely tell you things! Sometimes there's a domain-knowledge mismatch when you don't have the right words to ask the question, so you can't get a useful answer. Always ask for a diagram that shows which process or component different parts of a feature are happening in — this helps you hone in on the critical interfaces, and will illustrate the design more clearly than screenfuls of text or code.

Center people in your security analysis

We’re here to help people. Try to center people in your thoughts and arguments. How will people use the feature? Who are they? Who might harm them and how? Are there particular groups of people that might be more vulnerable than others, and what can we do to protect them? How does the feature make people feel? How will their experience of the application change? How will their lives be affected? Think about how a bad actor might abuse the feature. What implicit assumptions is the implementation making about the people using it? What or who are we asking people to trust? What if someone modifies traffic, changes a message, passes in bad data, or tricks someone into using the feature when they don't want to? This is a great thing to discuss when you’re pairing with another reviewer — be sure to ask them what they like to think about.

Think about what can go wrong

Take time to think and bring an adversarial mindset and bring a different perspective. In some ways the purpose of a security review is to stop and think before unleashing new ideas on the world. Make focus time in your calendar or sit somewhere unusual to give yourself space to think. A skeptical, enquiring mindset is more useful than deep knowledge. You’re there to ask the questions the feature team won’t have thought about. They will naturally focus on what they need to do to make the feature work. Security review is about thinking about what else might happen when it’s working, or what might happen if someone deliberately tries to do things the designers didn’t expect. Try to take a different perspective.

Trust your spidey-senses. Sometimes you can't quite put your finger on what might go wrong, but something feels off. Sometimes a feature is just plain complicated, or in a risky area of code, or feels like it's been rushed. It can be difficult to articulate these concerns to a team without rubbing people the wrong way. Use people you trust to bounce your thoughts off and hone in on what you are worried about. Discuss with other reviewers whether and how these risks can be communicated. Your spidey-senses are probably correct, and they're as important as any single concrete solvable threat you've spotted.

Approve and keep notes

Pause then approve. Once you’ve understood what’s happening and iterated through any concerns you’ve raised you’ll be ready to approve the feature for launch. It’s worth taking a short pause here to let your brain do its thinking in the background before you press the button. Try to concisely describe the feature — if you can’t then go back and ask more questions! It’s important to get questions and concerns to teams quickly but final approval can wait for some digestion time. If you cannot come up with a clear decision then reach out to other reviewers to discuss what to do next. Let the feature team know you’re working on it and when you’ll get back to them. After a pause, if nothing else occurs to you then click Approved and write a short paragraph saying why. Note any follow-on work the team has promised to complete before launching. This is also a great time to leave yourself a short note for your performance review — it’s easy to lose track of what you reviewed and the changes your input led to — having a rolling document will both help you spot patterns, and help you tell the story of the work you’ve done.

Expect to make mistakes, and learn from them

Nothing we do in software is forever, and many mistakes will be found and fixed later. You will make mistakes. Mainly small ones that won’t really matter. Security is about evaluating new risks in the context of the value provided to people using a product. This tradeoff extends into the design and launch process of which you are just a small part. You only have so much time, and it’s inevitable that you might sometimes see things that aren’t there, or not notice things that are. Security reviewers are one element in a layered defense and the consequences of a mistake will be contained by things you did spot. It’s good to try to find specific problems, but more important to locate and apply general security principles like sandboxing and the rule of two. Sometimes you might think something is fine, but later realize that it isn’t. This often happens when we learn something new about a feature, or discover that an assumption was invalid. This is where careful communication is important. Feature teams will be happy to know about any problems you uncover, and will find time to fix them later if possible. Remember that Looks Good To Me doesn’t mean Looks Perfect To Me.

How to be better

Experienced reviewers can always improve, and apply their insights widely within their organization.

It’s not always easy

It takes time to learn security engineering and build a working knowledge of the architecture of a complex product. Reviewing is different from the normal development journey - when an engineer works on a feature they start in an ambiguous situation and gradually learn or invent everything needed to deeply understand and solve the problem. To be effective as a security reviewer we have to embrace ambiguity and ignorance, and learn how to swiftly learn just enough to have a useful opinion, before starting again for our next review. This may seem daunting - and it is - but over time reviewers get better at knowing where to focus their efforts.

Security reviewing can feel invisible. Security is not an all-or-nothing quality of a feature. Rather it forms one concern that a product must balance while still shipping, adding new features, and appealing to people that use it. Security is an important concern (for Chrome it’s both a critical engineering pillar, and something people say they value when choosing Chrome) but it’s not the only factor. It’s our job to identify and articulate security risks, and advocate for better approaches, but sometimes another concern dominates. If deviations from our advice are well justified we shouldn’t feel ignored - we did our bit.

Your peers are there to help you. If you need support, ask questions on the reviewing team’s chat, or schedule thirty minutes or a coffee with another reviewer to discuss a particular review.

Help teams secure their features

Remember that developers know what they are doing, but might not be thinking about the things you are thinking about. You might not be confident in what you know about their feature, but imagine how the feature team feels coming to the mysterious halls of the security people! Often we’ll ask a team to implement one or more of our layered defenses before they get to launch their feature. This might be the first time they’ve had to write a fuzzer or harden a library. You’ll get requests for examples or help with implementation. Find an expert or spend time doing these things yourself. The security process should be as smooth a speed bump as possible. Any familiarity you have with these techniques will improve our interactions and maintain our reputation as a helpful team. If we ask someone to do something but can’t help them make progress we will be a source of frustration. If we help people they will be likely to approach us early-on next time they have a security question.

Training is available

Develop mind-tricks and frameworks for having difficult conversations. Sometimes (especially when you get involved early in a project’s design phase) you will need to disagree with a feature’s design, or nudge a team in a more secure direction. While a supportive technical culture should make it safe to surface and resolve technical differences, it takes energy and patience to work through these conflicts. It’s harder still to say ‘no’, or ask a team to commit to more work than they were expecting. These are skills you can practice and become more comfortable doing. Look for courses you can take. Some suggestions include “having difficult conversations”, “mentoring”, “coaching”, “persuasive writing”, and “threat modeling”.

Scale your impact

Find ways to scale your impact. Security decisions are made based on judgment and mechanisms but judgment doesn’t scale! To maintain a sustainable security workload for ourselves, and empower feature teams to make their own decisions, we need to make judgment as small a part of the puzzle as possible.

Encourage good patterns. If a design addresses a security concern, say so on the launch bug or a mailing list. This helps for later reviews, and provides useful feedback to the design team. Help newer reviewers see good or bad patterns, and the rhythm of reviews by telling a few stories of what went well and what got missed in the past. Establish architectural patterns that contain the consequences of a problem. Make these easy to follow while preventing anti-patterns - ideally a bad security idea shouldn’t even compile.

Write guidance or policies. Distill decisions into FAQs, threat models, principles or rules. Get involved with the people building foundational pieces of your product, and get them to own their security guidance so that it gets applied as part of that team’s advice to other teams. A checklist of things to look for in a particular area is a great starting point for the team making the next feature in that space, and for the reviewer that signs off at the end.

Level-up your developers. We can raise the level of expertise across the wider developer community, and reduce the burden of reviewing for security teams. Through repeated engagements with the same team you can start to set expectations - each time, drop some hints about what could be done better next time. Encourage system diagrams, risk assessments, threat modeling or sandboxing. Soon teams will start with these, and reviews will be much smoother.

Anoint security champions. In larger feature teams encourage a couple of security champions within the group to serve as initial points of contact and a first line of review. Support these people! Offer to talk them through their design docs and help them think about security concerns. They will grow into local experts who know when to call on security specialists. They can write security principles for their area, leading to secure features and smooth launch reviews.

Summary

Do a few reviews to develop confidence in your decisions. You won't understand all the details of a feature. You will sometimes say yes to the wrong things or get teams to do unnecessary work. You'll also ask insightful questions and improve designs.

Remember that security reviewing is difficult. Remember that people are there to help you. Remember that every good decision you encourage keeps people safe from harm, and increases their trust in you and your product. As you mature, maintain a supportive culture where reviewers can grow, and where you help other teams develop new features with safety in mind.

Protect and manage browser extensions using Chrome Browser Cloud Management

Browser extensions, while offering valuable functionalities, can seem risky to organizations. One major concern is the potential for security vulnerabilities. Poorly designed or malicious extensions could compromise data integrity and expose sensitive information to unauthorized access. Moreover, certain extensions may introduce performance issues or conflicts with other software, leading to system instability. Therefore, many organizations find it crucial to have visibility into the usage of extensions and the ability to control them. Chrome browser offers these extension management capabilities and reporting via Chrome Browser Cloud Management. In this blog post, we will walk you through how to utilize these features to keep your data and users safe.

Visibility into Extensions being used in your environment

Having visibility into what and how extensions are being used enables IT and security teams to assess potential security implications, ensure compliance with organizational policies, and mitigate potential risks. There are three ways you can get critical information about extensions in your organization:

1. App and extension usage reporting

Organizations can gain visibility into every Chrome extension that is installed across an enterprise’s fleet in Chrome App and Extension Usage Reporting.

2. Extension Risk Assessment

CRXcavator and Spin.AI Risk Assessment are tools used to assess the risks of browser extensions and minimize the risks associated with them. We are making extension scores via these two platforms available directly in Chrome Browser Cloud Management, so security teams can have an at-a-glance view of risk scores of the extensions being used in their browser environment.

3. Extension event reporting

Extension install events are now available to alert IT and security teams of new extension usage in their environment.

Organizations can send critical browser security events to their chosen solution providers, such as Splunk, Crowdstrike, Palo Alto Networks, and Google solutions, including Chronicle, Cloud PubSub, and Google Workspace, for further analysis. You can also view the event logs directly in Chrome Browser Cloud Management.

Extension controls for added security

By having control over extensions, organizations can maintain a secure and stable browsing environment, safeguard sensitive data, and protect their overall digital infrastructure. There are several ways IT and security teams can set and enforce controls over extensions:

1. Extension permission

Organizations can control whether users can install apps or extensions based on the information an extension can access—also known as permissions. For example, you might want to prevent users from installing any extension that wants permission to see a device location.
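
For illustration, the same control can be expressed as Chrome's ExtensionSettings policy. The sketch below writes it as a local managed-policy file on Linux; cloud-managed fleets would configure the equivalent setting in the Admin console, and "geolocation" is one example permission name from Chrome's documented list.

```python
# Sketch: block installation of any extension that requests the "geolocation"
# permission by writing an ExtensionSettings policy. On Linux, Chrome reads
# managed policies from /etc/opt/chrome/policies/managed/ (requires root).
import json
import pathlib

policy = {
    "ExtensionSettings": {
        "*": {  # the default entry applies to every extension
            "blocked_permissions": ["geolocation"],
        }
    }
}

policy_dir = pathlib.Path("/etc/opt/chrome/policies/managed")
policy_dir.mkdir(parents=True, exist_ok=True)
(policy_dir / "extension_settings.json").write_text(json.dumps(policy, indent=2))
```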

2. Extension workflow

Extension workflow allows end users to request extensions for review. From there, IT can either approve or deny them. This is a great approach for teams that want to control the extensions in their environment, but allow end users to have some say over the extensions they use.

We recently added a prompt for business justification for documenting why users are requesting the extension. This gives admins more information as to why the extension may or may not be helpful for their workforce, and can help speed approvals for business users.

3. Force install/block extensions

In the requests tab, you can select a requested extension, and approve that extension request and force install it, so the extension shows up automatically on all the managed browsers in that specific Organizational Unit or group.

Admins also have the ability to block extensions in their browser environment.

4. Extension version pinning

Admins have the ability to pin extensions to specific versions. As a result, the pinned extensions will not be upgraded unless the admin explicitly changes the version. This gives organizations the ability to evaluate new extension versions before they decide to allow them in their environment.

Extensions play a huge role in user productivity for many organizations, and Chrome is committed to helping enterprises securely take advantage of them. If you haven’t already, you can sign up for Chrome Browser Cloud Management and start managing the extensions being used in your organization at no additional cost.

If you’re already using Chrome Browser Cloud Management, but want to see how you can get the most out of it, check out our newly released Beyond Browsing demo series where Chrome experts share demos on some of our frequently asked management questions.

Announcing the Chrome Browser Full Chain Exploit Bonus

For 13 years, a key pillar of the Chrome Security ecosystem has been encouraging security researchers to find security vulnerabilities in Chrome browser and report them to us through the Chrome Vulnerability Rewards Program.

Starting today and until 1 December 2023, the first security bug report we receive with a functional full chain exploit, resulting in a Chrome sandbox escape, is eligible for triple the full reward amount. Your full chain exploit could result in a reward up to $180,000 (potentially more with other bonuses).

Any subsequent full chains submitted during this time are eligible for double the full reward amount!

We have historically put a premium on reports with exploits – “high quality reports with a functional exploit” is the highest tier of reward amounts in our Vulnerability Rewards Program. Over the years, the threat model of Chrome browser has evolved as features have matured and new features and new mitigations, such as MiraclePtr, have been introduced. Given these evolutions, we’re always interested in explorations of new and novel approaches to fully exploit Chrome browser and we want to provide opportunities to better incentivize this type of research. These exploits provide us valuable insight into the potential attack vectors for exploiting Chrome, and allow us to identify strategies for better hardening specific Chrome features and ideas for future broad-scale mitigation strategies.

The full details of this bonus opportunity are available on the Chrome VRP rules and rewards page. The summary is as follows:

  • The bug reports may be submitted in advance while exploit development continues during this 180-day window. The functional exploits must be submitted to Chrome by the end of the 180-day window to be eligible for the triple or double reward.
    • The first functional full chain exploit we receive is eligible for the triple reward amount.
  • The full chain exploit must result in a Chrome browser sandbox escape, with a demonstration of attacker control / code execution outside of the sandbox.
  • Exploitation must be able to be performed remotely, with no or very limited reliance on user interaction.
  • The exploit must have been functional in an active release channel of Chrome (Dev, Beta, Stable, Extended Stable) at the time of the initial reports of the bugs in that chain. Please do not submit exploits developed from publicly disclosed security bugs or other artifacts in old, past versions of Chrome.

As is consistent with our general rewards policy, if the exploit allows for remote code execution (RCE) in the browser or other highly-privileged process, such as network or GPU process, to result in a sandbox escape without the need of a first stage bug, the reward amount for renderer RCE “high quality report with functional exploit” would be granted and included in the calculation of the bonus reward total.

Based on our current Chrome VRP reward matrix, your full chain exploit could result in a total reward of $165,000 - $180,000 for the first full chain exploit and $110,000 - $120,000 for subsequent full chain exploits we receive during the six-month window of this reward opportunity.
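
Working backwards from those totals, the arithmetic is straightforward; the base range below is inferred from the stated figures, not quoted from the reward matrix itself.

```python
# Inferred from the stated totals (assumption): a standard full-chain reward
# of roughly $55,000 - $60,000, tripled for the first chain, doubled after.
for base in (55_000, 60_000):
    print("first full chain (3x):", 3 * base)    # 165000 / 180000
    print("subsequent chains (2x):", 2 * base)   # 110000 / 120000
```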

We’d like to thank our entire Chrome researcher community for your past and ongoing efforts and security bug submissions! You’ve truly helped us make Chrome more secure for all users.

Happy Hunting!

Adding Chrome Browser Cloud Management remediation actions in Splunk using Alert Actions

Introduction

Chrome is trusted by millions of business users as a secure enterprise browser. Organizations can use Chrome Browser Cloud Management to help manage Chrome browsers more effectively. Admins can use the Google Admin console to have Chrome report critical security events to third-party service providers such as Splunk® and build custom enterprise security remediation workflows.

Security remediation is the process of responding to security events that have been triggered by a system or a user. Remediation can be done manually or automatically, and it is an important part of an enterprise security program.

Why is Automated Security Remediation Important?

When a security event is identified, it is imperative to respond as soon as possible to prevent data exfiltration and to prevent the attacker from gaining a foothold in the enterprise. Organizations with mature security processes utilize automated remediation to improve their security posture by reducing the time it takes to respond to security events. This allows the usually overburdened Security Operations Center (SOC) teams to avoid alert fatigue.

Automated Security Remediation using Chrome Browser Cloud Management and Splunk

Chrome integrates with Chrome Enterprise Recommended partners such as Splunk® using Chrome Enterprise Connectors to report security events such as malware transfers, unsafe site visits, and password reuse. Other supported events can be found on our support page.

The Splunk integration with Chrome browser allows organizations to collect, analyze, and extract insights from security events. The extended security insights into managed browsers will enable SOC teams to perform better informed automated security remediations using Splunk® Alert Actions.

Splunk Alert Actions are a great capability for automating security remediation tasks. By creating alert actions, enterprises can automate the process of identifying, prioritizing, and remediating security threats.

In Splunk®, SOC teams can use alerts to monitor for and respond to specific Chrome Browser Cloud Management events. Alerts use a saved search to look for events in real time or on a schedule and can trigger an Alert Action when search results meet specific conditions as outlined in the diagram below.

Use Case

If a user downloads a malicious file after bypassing a Chrome “Dangerous File” message, their managed browser or managed CrOS device should be quarantined.

Prerequisites

Setup

  1. Install the Google Chrome Add-on for Splunk App

    Please follow installation instructions here depending on your Splunk Installation to install the Google Chrome Add-on for Splunk App.

  2. Setting up Chrome Browser Cloud Management and Splunk Integration

    Please follow the guide here to set up Chrome Browser Cloud Management and Splunk® integration.

  3. Setting up Chrome Browser Cloud Management API access

    To call the Chrome Browser Cloud Management API, use a service account properly configured in the Google Admin console. Create a new (or use an existing) service account and download the JSON representation of its key; a minimal sketch of calling the API this way appears after this list.

    Create a (or use an existing) role in the admin console with all the “Chrome Management” privileges as shown below.

    Assign the created role to the service account using the “Assign service accounts” button.

  4. Setting up Chrome Browser Cloud Management App in Splunk®

    Install the app (i.e. the Alert Action) from our GitHub page. You will notice that the Splunk App uses the directory structure shown below. Please take some time to understand the layout.

  5. Setting up a Quarantine OU in Chrome Browser Cloud Management

    Create a “Quarantine” OU to move managed browsers into. Apply restrictive policies to this OU, which will then be applied to managed browsers and managed CrOS devices that are moved into it. In our case we set the below policies for our “Quarantine” OU, called Investigate. These policies ensure that the quarantined CrOS device/browser can only open a limited set of approved URLs.
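
For step 3 above, a minimal sketch of calling the Chrome Browser Cloud Management API with the downloaded service account key is shown below. The endpoint path, request body fields, and OAuth scope are assumptions to verify against the API documentation; authentication uses the google-auth client library.

```python
# Sketch: move a managed browser into the Quarantine OU using a service
# account. Endpoint path, body fields, and scope are assumptions to check
# against the Chrome Browser Cloud Management API documentation.
from google.auth.transport.requests import AuthorizedSession
from google.oauth2 import service_account

SCOPES = ["https://www.googleapis.com/auth/admin.directory.device.chromebrowsers"]

def quarantine_browser(key_file: str, customer_id: str, machine_id: str,
                       ou_path: str = "/Investigate") -> None:
    creds = service_account.Credentials.from_service_account_file(
        key_file, scopes=SCOPES)
    session = AuthorizedSession(creds)
    url = ("https://admin.googleapis.com/admin/directory/v1.1beta1/"
           f"customer/{customer_id}/devices/chromebrowsers/moveChromeBrowsersToOu")
    response = session.post(url, json={"org_unit_path": ou_path,
                                       "resource_ids": [machine_id]})
    response.raise_for_status()
```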

Configuration

  1. Start with a search for the Chrome Browser Cloud Management events in the Google Chrome Add-on for Splunk App. For our instance we used the below search query to search for known malicious file download events.
  2. Save the search as an alert. The alert uses the saved search to check for events. Adjust the alert type to configure how often the search runs. Use a scheduled alert to check for events on a regular basis. Use a real-time alert to monitor for events continuously. An alert does not have to trigger every time it generates search results. Set trigger conditions to manage when the alert triggers. Customize the alert settings as per enterprise security policies. For our example we used a real time alert with a per-result trigger. The setup we used is as shown below.

  3. As seen in the screenshot, we have configured the Chrome Browser Cloud Management Remediation Alert Action App with the following (a sketch of the alert action handler appears after this list):

    • The OU Path of the Quarantine OU i.e. /Investigate
    • The Customer Id of the workspace domain
    • Service Account Key JSON value
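
When the alert fires, Splunk runs the alert action script and passes its configuration plus the triggering result as a JSON payload on stdin. A minimal sketch of such a handler follows; the parameter and field names are illustrative and would need to match the app's configuration and the Chrome event schema.

```python
#!/usr/bin/env python3
# Minimal sketch of a Splunk custom alert action handler. Splunk invokes the
# script with --execute and writes a JSON payload (configuration plus the
# triggering result) to stdin. Field names here are illustrative.
import json
import sys

def main() -> None:
    if len(sys.argv) < 2 or sys.argv[1] != "--execute":
        sys.stderr.write("FATAL expected --execute\n")
        sys.exit(1)

    payload = json.load(sys.stdin)
    config = payload.get("configuration", {})
    result = payload.get("result", {})

    ou_path = config.get("ou_path", "/Investigate")
    customer_id = config.get("customer_id")
    key_json = config.get("service_account_key")
    machine_id = result.get("device_id")  # depends on the Chrome event schema

    # Here the handler would authenticate with key_json and call the Chrome
    # Browser Cloud Management API to move the browser into the quarantine OU
    # (see the earlier quarantine_browser() sketch).
    sys.stderr.write(f"INFO quarantining {machine_id} into {ou_path}\n")

if __name__ == "__main__":
    main()
```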

Test the setup

Use the testsafebrowsing website to generate sample security events to test the setup.

  1. Open the testsafebrowsing website
  2. Click the link for line item 4 under the Desktop Download Warnings section, i.e. “Should show an "uncommon" warning, for .exe”
  3. You will see a Dangerous Download blocked warning giving you two options to either Discard or Keep the downloaded file. Click on Keep
  4. This will trigger the alert action and move your managed browser or managed CrOS device to the “Quarantine” OU (OU name Investigate in our example) with restricted policies.

Conclusion

Security remediation is vital to any organization’s security program. In this blog we discussed configuring automated security remediation of Chrome Browser Cloud Management security events using Splunk alert actions. This scalable approach can be used to protect a company from online security threats by detecting and quickly responding to high-fidelity Chrome Browser Cloud Management security events, thereby greatly reducing the time to respond.

Our team will be at the Gartner Security and Risk Management Summit in National Harbor, MD, next week. Come see us in action if you’re attending the summit.

Thank you and goodbye to the Chrome Cleanup Tool

Starting in Chrome 111 we will begin to turn down the Chrome Cleanup Tool, an application distributed to Chrome users on Windows to help find and remove unwanted software (UwS).

Origin story

The Chrome Cleanup Tool was introduced in 2015 to help users recover from unexpected settings changes, and to detect and remove unwanted software. To date, it has performed more than 80 million cleanups, helping to pave the way for a cleaner, safer web.

A changing landscape

In recent years, several factors have led us to reevaluate the need for this application to keep Chrome users on Windows safe.

First, the user perspective – Chrome user complaints about UwS have continued to fall over the years, averaging out to around 3% of total complaints in the past year. Commensurate with this, we have observed a steady decline in UwS findings on users' machines. For example, last month just 0.06% of Chrome Cleanup Tool scans run by users detected known UwS.

Next, several positive changes in the platform ecosystem have contributed to a more proactive safety stance than a reactive one. For example, Google Safe Browsing as well as antivirus software both block file-based UwS more effectively now, which was originally the goal of the Chrome Cleanup Tool. Where file-based UwS migrated over to extensions, our substantial investments in the Chrome Web Store review process have helped catch malicious extensions that violate the Chrome Web Store's policies.

Finally, we've observed changing trends in the malware space with techniques such as Cookie Theft on the rise – as such, we've doubled down on defenses against such malware via a variety of improvements including hardened authentication workflows and advanced heuristics for blocking phishing and social engineering emails, malware landing pages, and downloads.

What to expect

Starting in Chrome 111, users will no longer be able to request a Chrome Cleanup Tool scan through Safety Check or leverage the "Reset settings and cleanup" option offered in chrome://settings on Windows. Chrome will also remove the component that periodically scans Windows machines and prompts users for cleanup should it find anything suspicious.

Even without the Chrome Cleanup Tool, users are automatically protected by Safe Browsing in Chrome. Users also have the option to turn on Enhanced protection by navigating to chrome://settings/security – this mode substantially increases protection from dangerous websites and downloads by sharing real-time data with Safe Browsing.

While we'll miss the Chrome Cleanup Tool, we wanted to take this opportunity to acknowledge its role in combating UwS for the past 8 years. We'll continue to monitor user feedback and trends in the malware ecosystem, and when adversaries adapt their techniques again – which they will – we'll be at the ready.

As always, please feel free to send us feedback or find us on Twitter @googlechrome.

8 ways to secure Chrome browser for Google Workspace users

1. Bring Chrome under Cloud Management

Your journey towards keeping your Google Workspace users and data safe starts with bringing your Chrome browsers under Cloud Management at no additional cost. Chrome Browser Cloud Management is a single destination for applying Chrome browser policies and security controls across Windows, Mac, Linux, iOS and Android. You also get deep visibility into your browser fleet, including which browsers are out of date and which extensions your users are using, bringing insight into potential security blind spots in your enterprise.

Managing Chrome from the cloud allows Google Workspace admins to apply enterprise protections and policies to the whole browser on fully managed devices, without requiring users to sign in to Chrome to have policies enforced. You can also enforce policies that apply when your managed users sign in to Chrome browser on any Windows, Mac, or Linux computer (via Chrome Browser user-level management), not just on corporate managed devices.

This enables you to keep your corporate data and users safe, whether they are accessing work resources from fully managed, personal, or unmanaged devices used by your vendors.

Getting started is easy. If your organization hasn’t already, check out this guide for steps on how to enroll your devices.
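
As a concrete illustration, here is a minimal sketch of one enrollment path: dropping an enrollment token into Chrome's Linux managed-policy directory using the documented CloudManagementEnrollmentToken policy. The token value and file name below are placeholders, and other platforms (Windows Group Policy or the registry, macOS configuration profiles) use their own mechanisms, so treat the enrollment guide as the source of truth.

```python
#!/usr/bin/env python3
"""Minimal sketch: deliver a Chrome Browser Cloud Management enrollment token
on a Linux machine via Chrome's managed-policy directory, using the documented
CloudManagementEnrollmentToken policy. The token and file name are placeholders."""
import json
import pathlib

POLICY_DIR = pathlib.Path("/etc/opt/chrome/policies/managed")  # documented Linux policy path
ENROLLMENT_TOKEN = "00000000-0000-0000-0000-000000000000"      # placeholder: copy from the Admin console


def write_enrollment_policy() -> None:
    # Chrome reads every *.json file in this directory as machine-level policy.
    POLICY_DIR.mkdir(parents=True, exist_ok=True)  # requires root
    policy = {"CloudManagementEnrollmentToken": ENROLLMENT_TOKEN}
    (POLICY_DIR / "cbcm_enrollment.json").write_text(json.dumps(policy, indent=2))


if __name__ == "__main__":
    write_enrollment_policy()
```

Once the browser checks in with the token, it appears in the Admin console and picks up any cloud policies you have configured.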

2. Enforce built-in protections against Phishing, Ransomware & Malware

Chrome uses Google’s Safe Browsing technology to help protect billions of devices every day by showing warnings to users when they attempt to navigate to dangerous sites or download dangerous files. Safe Browsing is enabled by default for all users when they download Chrome. As an administrator, you can prevent your users from disabling Safe Browsing by enforcing the SafeBrowsingProtectionLevel policy.

Over the past few years, we’ve seen threats on the web become increasingly sophisticated. Turning on Enhanced Safe Browsing will substantially increase protection from dangerous websites, malicious downloads and extensions. For the strongest protection against web-based attacks Google has to offer, enforce Enhanced Safe Browsing for your users.
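
For illustration, the same Linux policy-file mechanism shown earlier can carry this enforcement. This is a minimal sketch assuming the documented SafeBrowsingProtectionLevel values (0 = no protection, 1 = standard, 2 = enhanced); the file name is illustrative, and in Chrome Browser Cloud Management the equivalent setting is applied from the Admin console with no files involved.

```python
# Minimal sketch: enforce Enhanced Safe Browsing via a Linux managed-policy file.
# Requires root; the file name is illustrative.
import json
import pathlib

policy = {
    "SafeBrowsingProtectionLevel": 2,  # 0 = no protection, 1 = standard, 2 = enhanced
}

policy_dir = pathlib.Path("/etc/opt/chrome/policies/managed")
policy_dir.mkdir(parents=True, exist_ok=True)
(policy_dir / "safe_browsing.json").write_text(json.dumps(policy, indent=2))
```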

3. Enable Enterprise Credential Protections in Chrome

Enterprise password reuse introduces significant security risks. Quite often, employees reuse corporate credentials as personal logins and vice versa. Occasionally, employees even enter their corporate passwords into phishing websites. Reused employee logins give criminals easy paths to access corporate data.

Chrome Enterprise Password Reuse detection helps enterprises avoid identity theft and employee and organizational data breaches by detecting when an employee enters their corporate credentials into any other website.

Google Password Manager in Chrome also has a built-in Password Checkup feature that alerts users when Google discovers that a saved username and password have been exposed in a public data breach.

Password alerts are surfaced in the audit logs and the Security Investigation Tool, where admins can create automated rules or take appropriate action, such as asking affected users to reset their passwords.
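
For admins who prefer to roll this out as machine policy rather than through the Admin console, a hedged sketch of the documented password-protection policies is below. The URLs are placeholders, and the exact meaning of each PasswordProtectionWarningTrigger value should be confirmed against the Chrome policy list.

```python
# Minimal sketch: enterprise password-reuse protections via Chrome policy.
# URLs are placeholders; verify the enum values against the Chrome policy documentation.
import json
import pathlib

policy = {
    # Warn when a protected (corporate) password is reused on another site.
    "PasswordProtectionWarningTrigger": 2,
    # Pages where users legitimately enter their corporate password.
    "PasswordProtectionLoginURLs": ["https://login.example.com"],
    # Where users should go to change a compromised password.
    "PasswordProtectionChangePasswordURL": "https://passwords.example.com/change",
}

policy_dir = pathlib.Path("/etc/opt/chrome/policies/managed")
policy_dir.mkdir(parents=True, exist_ok=True)
(policy_dir / "password_protection.json").write_text(json.dumps(policy, indent=2))
```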

4. Gain insights into critical security events via Audit Logs, Google Security Center or your SIEM of choice

IT teams can gain useful insights about potential security threats and events that your Google Workspace users encounter when browsing the web with Chrome, and can take preventive measures against those threats through Security Reporting.

In the Google Workspace Admin console, organizations can enroll their Chrome browsers and get detailed information about their browser deployment. IT teams can also set policies, manage extensions, and more. Chrome management policies can be set to work alongside any end-user policies that may be in place.

Once you’ve enabled Security events reporting, you can view those events in the audit logs. Google Workspace Enterprise Plus or Education Plus users can use the Workspace Security Investigation Tool to identify, triage, and act on potential security threats.

As of today, Chrome can report on when users:

  • Navigate to a known malicious site.
  • Download or upload files containing known malware.
  • Reuse corporate passwords on non-approved sites.
  • Change corporate passwords after reusing them on non-approved sites.
  • Install extensions.

In addition to Google Workspace, you can also export these events to other Google Cloud products, such as Google Cloud Pub/Sub and Chronicle, or to leading third-party products such as Splunk, CrowdStrike, and Palo Alto Networks.
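
If you route these events to Google Cloud Pub/Sub, a small consumer can triage or forward them. Here is a minimal sketch using the google-cloud-pubsub client; the project ID, subscription ID, and event field names are placeholders, so inspect the payloads your own export actually produces.

```python
# Minimal sketch: pull Chrome security-reporting events from a Pub/Sub subscription.
# pip install google-cloud-pubsub ; PROJECT_ID, SUBSCRIPTION_ID and field names are placeholders.
import json

from google.cloud import pubsub_v1

PROJECT_ID = "my-gcp-project"
SUBSCRIPTION_ID = "chrome-security-events"


def handle_event(message):
    event = json.loads(message.data.decode("utf-8"))
    # Field names are illustrative; inspect the real payloads from your export.
    print("Chrome security event:", event)
    message.ack()


def main():
    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)
    streaming_pull = subscriber.subscribe(subscription_path, callback=handle_event)
    print(f"Listening for messages on {subscription_path} ...")
    try:
        streaming_pull.result()  # block until interrupted
    except KeyboardInterrupt:
        streaming_pull.cancel()
        streaming_pull.result()


if __name__ == "__main__":
    main()
```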

5. Mitigate risk by keeping your browsers up to date with latest security updates

Modern web browsers, like any other software, can have "zero-day" vulnerabilities: undiscovered flaws in the software that attackers can exploit until they are identified and resolved. Fortunately, Chrome is known for patching zero-day vulnerabilities quickly. To take advantage of this, however, IT teams have to ensure that every browser in the fleet is up to date. Our enterprise tools provide a smooth and seamless browser update process, enabling user productivity while maintaining optimal security. By leveraging these tools, businesses can ensure their users are safe and protected from potential security threats.

  • Version Report: Easily see all the versions of Chrome in your fleet across various operating systems in a daily report.
  • Force Auto Updates in Chrome: Trigger updates to newer versions of Chrome as soon as they’re available, and force users to relaunch Chrome to pick up updates more rapidly using enterprise policies (see the policy sketch after this list). This keeps users on the latest version of Chrome, with the latest security fixes.
  • Controlling legacy browser usage: Some users continue to need access to old web applications that use plugins and ActiveX technology not supported by modern browsers. Legacy Browser Support functionality is integrated into Chrome, and reduces the time users spend with less secure browsers.
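
The relaunch behavior above can be enforced with the documented RelaunchNotification and RelaunchNotificationPeriod policies. A minimal sketch using the same Linux policy-file approach follows; the 24-hour grace period is just an example, and the file name is illustrative.

```python
# Minimal sketch: require a Chrome relaunch within 24 hours of an update landing.
# Requires root; the file name is illustrative.
import json
import pathlib

policy = {
    "RelaunchNotification": 2,               # 1 = recommend a relaunch, 2 = require it
    "RelaunchNotificationPeriod": 86400000,  # grace period in milliseconds (24 hours)
}

policy_dir = pathlib.Path("/etc/opt/chrome/policies/managed")
policy_dir.mkdir(parents=True, exist_ok=True)
(policy_dir / "relaunch.json").write_text(json.dumps(policy, indent=2))
```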

6. Ensure employees only use vetted extensions

Extensions pose a large security risk. Many extensions request powerful permissions that, if misused, could lead to security breaches or data loss. However, due to strong end-user demand, it’s often not possible to fully block the installation of extensions (a vetted-allowlist policy sketch follows the list below).

  • Apps & Extensions usage report: Provides visibility into every Chrome extension that is installed across an enterprise’s fleet. Admins can force install or block any extension across any segment of their fleet.
  • Extensions workflow: Admins can decide under which circumstances an extension install needs to be reviewed by IT. A review workflow in the Google Admin console makes it easy for admins to review and approve install requests for specific users requesting an extension, or for their broader fleet.
  • Extensions details: Admins can see additional details about an extension’s permissions, and other relevant metadata. This info is surfaced in the Extensions list and Extensions workflow pages to make it easier for administrators to manage extensions.
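
One common way to express the "force install or block" controls above as machine policy is a default-deny posture with an explicit allowlist. This is a minimal sketch using the documented ExtensionInstall* policies; the extension IDs are placeholders standing in for whatever your review workflow approves.

```python
# Minimal sketch: block all extensions by default, allow and force-install vetted ones.
# Extension IDs are placeholders; requires root, and the file name is illustrative.
import json
import pathlib

policy = {
    "ExtensionInstallBlocklist": ["*"],  # default-deny: block everything not allowlisted
    "ExtensionInstallAllowlist": [
        "aaaabbbbccccddddeeeeffffgggghhhh",  # placeholder ID of a vetted extension
    ],
    "ExtensionInstallForcelist": [
        # Format is "<extension-id>;<update-url>"; the Chrome Web Store update URL is typical.
        "aaaabbbbccccddddeeeeffffgggghhhh;https://clients2.google.com/service/update2/crx",
    ],
}

policy_dir = pathlib.Path("/etc/opt/chrome/policies/managed")
policy_dir.mkdir(parents=True, exist_ok=True)
(policy_dir / "extensions.json").write_text(json.dumps(policy, indent=2))
```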

7. Ensure your Google Workspace resources are only accessed from Managed Chrome Browsers with protections enabled

Context-Aware Access ensures only the right people, under the right conditions, access confidential information. Using Context-Aware Access, you can create granular access control policies for apps that access Workspace data based on attributes, such as user identity, location, device security status, and IP address.

To ensure that your Google Workspace resources are only accessed from managed Chrome browsers with protections enabled, you can create custom access levels in Advanced mode, using Common Expression Language (CEL). Learn more about managed queries in this help center article.
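
As a rough illustration of what such a custom access level can look like, the snippet below composes a CEL expression that requires a cloud-managed Chrome browser on an encrypted device. The attribute and enum names are assumptions and should be checked against the help center article before pasting the expression into the Advanced mode editor.

```python
# Minimal sketch: compose a CEL expression for a custom access level that only
# admits managed Chrome browsers. Attribute and enum names are illustrative;
# confirm the exact schema in the Context-Aware Access documentation.
CEL_EXPRESSION = (
    "device.chrome.management_state == "
    "ChromeManagementState.CHROME_MANAGEMENT_STATE_BROWSER_MANAGED "
    "&& device.encryption_status == DeviceEncryptionStatus.ENCRYPTED"
)

if __name__ == "__main__":
    print(CEL_EXPRESSION)
```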

8. Enable BeyondCorp Enterprise Threat and Data Protections

Organizations that want to take an even more proactive approach to data security can deploy BeyondCorp Enterprise to protect their information and enable data loss prevention (including control over upload, download, print, save, copy and paste), real-time phishing protection, malware deep scanning, and Zero Trust access to SaaS applications. Since BeyondCorp Enterprise is built into Chrome, organizations can implement it frictionlessly without having to install additional agents.

Learn more about how Google supports today’s workforce with secure enterprise browsing here.