Google Online Security Blog

The latest news and insights from Google on security and safety on the Internet

Celebrating SLSA v1.0: securing the software supply chain for everyone

Last week the Open Source Security Foundation (OpenSSF) announced the release of SLSA v1.0, a framework that helps secure the software supply chain. Ten years of using an internal version of SLSA at Google has shown that it’s crucial to warding off tampering and keeping software secure. It’s especially gratifying to see SLSA reaching v1.0 as an open source project—contributors have come together to produce solutions that will benefit everyone.

SLSA for safer supply chains

Developers and organizations that adopt SLSA will be protecting themselves against a variety of supply chain attacks, which have continued rising since Google first donated SLSA to OpenSSF in 2021. In that time, the industry has also seen a U.S. Executive Order on Cybersecurity and the associated NIST Secure Software Development Framework (SSDF) to guide national standards for software used by the U.S. government, as well as the Network and Information Security (NIS2) Directive in the European Union. SLSA offers not only an onramp to meeting these standards, but also a way to prepare for a climate of increased scrutiny on software development practices.

As organizations benefit from using SLSA, it’s also up to them to shoulder part of the burden of spreading these benefits to open source projects. Many maintainers of the critical open source projects that underpin the internet are volunteers; they cannot be expected to do all the work when so many of the rewards of adopting SLSA roll out across the supply chain to benefit everyone.

Supply chain security for all

That’s why beyond contributing to SLSA, we’ve also been laying the foundation to integrate supply chain solutions directly into the ecosystems and platforms used to create open source projects. We’re also directly supporting open source maintainers, who often cite lack of time or resources as limiting factors when making security improvements to their projects.

Our Open Source Security Upstream Team consists of developers who spend 100% of their time contributing to critical open source projects to make security improvements. For open source developers who choose to adopt SLSA on their own, we’ve funded the Secure Open Source Rewards Program, which pays developers directly for these types of security improvements.

Currently, open source developers who want to secure their builds can use the free SLSA L3 GitHub Builder, which requires only a one-time adjustment to the traditional build process implemented through GitHub Actions. There’s also the SLSA Verifier tool for software consumers. Users of npm—or Node Package Manager, the world’s largest software repository—can take advantage of their recently released beta SLSA integration, which streamlines the process of creating and verifying SLSA provenance through the npm command line interface. We’re also supporting the integration of Sigstore into many major package ecosystems, meaning that users can sign and verify artifacts directly from package management tooling, without having to manage keys. Our intention is to continue to expand these types of integrations across open source ecosystems so supply chain security solutions are universal and easily accessible.
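
As a rough sketch of what verification looks like on the consumer side (the flags follow the SLSA Verifier documentation at the time of writing, and the artifact and repository names here are hypothetical):

# Verify that a downloaded artifact was built from the expected repository
$ slsa-verifier verify-artifact my-binary \
    --provenance-path my-binary.intoto.jsonl \
    --source-uri github.com/myorg/myrepo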

We’re also making it easier for everyone to understand their dependencies. Vulnerabilities like Log4Shell have shown the importance (and difficulty) of knowing what projects you depend on and where their security weaknesses might be. Developers can use the deps.dev API to generate real dependency graphs, with OpenSSF Scorecard security scores and other security metadata for each dependency they use. They can also use OSV-Scanner to generate a high quality list of actionable vulnerabilities in those dependencies. In the future, we hope to support automatic remediation and patching through the OSV database service, minimizing the effort that open source developers spend on securing their projects.

Continued community contributions

Ultimately, our goal is to make supply chain security invisible and available to everyone, built directly into each ecosystem for frictionless adoption. To get there, we’ll continue contributing to these efforts and encouraging other organizations who rely on open source to similarly dedicate developers to upstream support. The internet as we know it today wouldn’t be available without open source software, and it’s in everyone’s best interests to give back to the communities that make modern software development possible.

Securely Hosting User Data in Modern Web Applications

Many web applications need to display user-controlled content. This can be as simple as serving user-uploaded images (e.g. profile photos), or as complex as rendering user-controlled HTML (e.g. a web development tutorial). This has always been difficult to do securely, so we’ve worked to find easy, but secure solutions that can be applied to most types of web applications.

Classical Solutions for Isolating Untrusted Content

The classic solution for securely serving user-controlled content is to use what are known as “sandbox domains”. The basic idea is that if your application's main domain is example.com, you could serve all untrusted content on exampleusercontent.com. Since these two domains are cross-site, any malicious content on exampleusercontent.com can’t impact example.com.

This approach can be used to safely serve all kinds of untrusted content including images, downloads, and HTML. While it may not seem like it is necessary to use this for images or downloads, doing so helps avoid risks from content sniffing, especially in legacy browsers.

Sandbox domains are widely used across the industry and have worked well for a long time. But, they have two major downsides:

  1. Applications often need to restrict content access to a single user, which requires implementing authentication and authorization. Since sandbox domains purposefully do not share cookies with the main application domain, this is very difficult to do securely. To support authentication, sites either have to rely on capability URLs, or they have to set separate authentication cookies for the sandbox domain. This second method is especially problematic in the modern web where many browsers restrict cross-site cookies by default.
  2. While user content is isolated from the main site, it isn’t isolated from other user content. This creates the risk of malicious user content attacking other data on the sandbox domain (e.g. via reading same-origin data).

It is also worth noting that sandbox domains help mitigate phishing risks since resources are clearly segmented onto an isolated domain.

Modern Solutions for Serving User Content

Over time the web has evolved, and there are now easier, more secure ways to serve untrusted content. There are many different approaches here, so we will outline two solutions that are currently in wide use at Google.

Approach 1: Serving Inactive User Content

If a site only needs to serve inactive user content (i.e. content that is not HTML/JS, for example images and downloads), this can now be safely done without an isolated sandbox domain. There are two key steps:

  1. Always set the Content-Type header to a well-known MIME type that is supported by all browsers and guaranteed not to contain active content (when in doubt, application/octet-stream is a safe choice).
  2. In addition, always set the below response headers to ensure that the browser fully isolates the response.

  • X-Content-Type-Options: nosniff (prevents content sniffing)
  • Content-Disposition: attachment; filename="download" (triggers a download rather than rendering)
  • Content-Security-Policy: sandbox (sandboxes the content as if it were served on a separate domain)
  • Content-Security-Policy: default-src 'none' (disables JS execution and the inclusion of any subresources)
  • Cross-Origin-Resource-Policy: same-site (prevents the page from being included cross-site)

This combination of headers ensures that the response can only be loaded as a subresource by your application, or downloaded as a file by the user. Furthermore, the headers provide multiple layers of protection against browser bugs through the CSP sandbox header and the default-src restriction. Overall, the setup outlined above provides a high degree of confidence that responses served in this way cannot lead to injection or isolation vulnerabilities.
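
As a quick sanity check on a deployment, you can inspect the headers returned by a user-content endpoint from the command line (the URL below is hypothetical):

# Print the response headers served for a piece of inactive user content
$ curl -s -D - -o /dev/null https://media.example.com/blob/abc123 | \
    grep -iE '^(content-type|content-disposition|x-content-type-options|content-security-policy|cross-origin-resource-policy):'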

Defense In Depth

While the above solution represents a generally sufficient defense against XSS, there are a number of additional hardening measures that you can apply to provide additional layers of security:

  • Set an X-Content-Security-Policy: sandbox header for compatibility with IE11
  • Set a Content-Security-Policy: frame-ancestors 'none' header to block the endpoint from being embedded
  • Sandbox user content on an isolated subdomain by:
    • Serving user content on an isolated subdomain (e.g. Google uses domains such as product.usercontent.google.com)
    • Setting Cross-Origin-Opener-Policy: same-origin and Cross-Origin-Embedder-Policy: require-corp to enable cross-origin isolation

Approach 2: Serving Active User Content

Safely serving active content (e.g. HTML or SVG images) can also be done without the weaknesses of the classic sandbox domain approach.

The simplest option is to take advantage of the Content-Security-Policy: sandbox header to tell the browser to isolate the response. While not all web browsers currently implement process isolation for sandbox documents, ongoing refinements to browser process models are likely to improve the separation of sandboxed content from embedding applications. If SpectreJS and renderer compromise attacks are outside of your threat model, then using CSP sandbox is likely a sufficient solution.

At Google, we’ve developed a solution that can fully isolate untrusted active content by modernizing the concept of sandbox domains. The core idea is to:

  1. Create a new sandbox domain that is added to the public suffix list. For example, by adding exampleusercontent.com to the PSL, you can ensure that foo.exampleusercontent.com and bar.exampleusercontent.com are cross-site and thus fully isolated from each other.
  2. URLs matching *.exampleusercontent.com/shim are all routed to a static shim file. This shim file contains a short HTML/JS snippet that listens for the message event and renders any content it receives.
  3. To use this, the product creates either an iframe or a popup to $RANDOM_VALUE.exampleusercontent.com/shim and uses postMessage to send the untrusted content to the shim for rendering.
  4. The rendered content is transformed to a Blob and rendered inside a sandboxed iframe.

Compared to the classic sandbox domain approach, this ensures that all content is fully isolated on a unique site. And, by having the main application deal with retrieving the data to be rendered, it is no longer necessary to use capability URLs.

Conclusion

Together, these two solutions make it possible to migrate off of classic sandbox domains like googleusercontent.com to more secure solutions that are compatible with third-party cookie blocking. At Google, we’ve already migrated many products to use these solutions and have more migrations planned for the next year. We hope that by sharing these solutions, we can help other websites easily serve untrusted content in a secure manner.

Supply chain security for Go, Part 1: Vulnerability management

High profile open source vulnerabilities have made it clear that securing the supply chains underpinning modern software is an urgent, yet enormous, undertaking. As supply chains get more complicated, enterprise developers need to manage the tidal wave of vulnerabilities that propagate up through dependency trees. Open source maintainers need streamlined ways to vet proposed dependencies and protect their projects. A rise in attacks coupled with increasingly complex supply chains means that supply chain security problems need solutions on the ecosystem level.

One way developers can manage this enormous risk is by choosing a more secure language. As part of Google’s commitment to advancing cybersecurity and securing the software supply chain, Go maintainers are focused this year on hardening supply chain security, streamlining security information to our users, and making it easier than ever to make good security choices in Go.

This is the first in a series of blog posts about how developers and enterprises can secure their supply chains with Go. Today’s post covers how Go helps teams with the tricky problem of managing vulnerabilities in their open source packages.

Extensive Package Insights

Before adopting a dependency, it’s important to have high-quality information about the package. Seamless access to comprehensive information can be the difference between an informed choice and a future security incident from a vulnerability in your supply chain. Along with providing package documentation and version history, the Go package discovery site links to Open Source Insights. The Open Source Insights page includes vulnerability information, a dependency tree, and a security score provided by the OpenSSF Scorecard project. Scorecard evaluates projects on more than a dozen security metrics, each backed up with supporting information, and assigns the project an overall score out of ten to help users quickly judge its security stance (example). The Go package discovery site puts all these resources at developers’ fingertips when they need them most—before taking on a potentially risky dependency.

Curated Vulnerability Information

Large consumers of open source software must manage many packages and a high volume of vulnerabilities. For enterprise teams, filtering out noisy, low quality advisories and false positives from critical vulnerabilities is often the most important task in vulnerability management. If it is difficult to tell which vulnerabilities are important, it is impossible to properly prioritize their remediation. With granular advisory details, the Go vulnerability database removes barriers to vulnerability prioritization and remediation.

All vulnerability database entries are reviewed and curated by the Go security team. As a result, entries are accurate and include detailed metadata to improve the quality of vulnerability scans and to make vulnerability information more actionable. This metadata includes information on affected functions, operating systems, and architectures. With this information, vulnerability scanners can reduce the number of false positives using symbol information to filter out vulnerabilities that aren’t called by client code.

Consider the case of GO-2022-0646, which describes an unfixed vulnerability present in all versions of the package. It can only be triggered, though, if a particular, deprecated function is called. For the majority of users, this vulnerability is a false positive—but every user would need to spend time and effort to manually determine whether they’re affected if their vulnerability database doesn’t include function metadata. This amounts to enormous wasted effort that could be spent on more productive security efforts.

The Go vulnerability database streamlines this process by including accurate affected function level metadata for GO-2022-0646. Vulnerability scanners can then use static analysis to accurately determine if the project uses the affected function. Because of Go’s high quality metadata, a vulnerability such as this one can automatically be excluded with less frustration for developers, allowing them to focus on more relevant vulnerabilities. And for projects that do incorporate the affected function, Go’s metadata provides a remediation path: at the time of writing, it’s not possible to upgrade the package to fix the vulnerability, but you can stop using the vulnerable function. Whether or not the function is called, Go’s high quality metadata provides the user with the next step.

Entries in the Go vulnerability database are served as JSON files in the OSV format from vuln.go.dev. The OSV format is a minimal and precise industry-accepted reporting format for open source vulnerabilities that has coverage across 16 ecosystems. OSV treats open source as a first class citizen by including information specific to open source, like git commit hashes. The OSV format ensures that the vulnerability information is both machine readable and easy for developers to understand. That means that not only are the database entries easy to read and browse, but the format is also compatible with automated tools like scanners. Go provides such a scanner that intelligently matches vulnerabilities to Go codebases.
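
For example, individual entries can be fetched directly as JSON (the URL pattern below reflects vuln.go.dev at the time of writing):

# Fetch the OSV-format entry for GO-2022-0646 and print its ID and summary
$ curl -s https://vuln.go.dev/ID/GO-2022-0646.json | jq '.id, .summary'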

Low noise, reliable vulnerability scanning

The Go team released a new command line tool, govulncheck, last September. Govulncheck does more than simply match dependencies to known vulnerabilities in the Go vulnerability database; it uses the additional metadata to analyze your project’s source code and narrow results to vulnerabilities that actually affect the application. This cuts down on false positives, reducing noise and making it easier to prioritize and fix issues.

You can run govulncheck as a command-line tool throughout your development process to see if a recent change introduced a new exploitable path. Fortunately, it’s easy to run govulncheck directly from your editor using the latest VS Code Go extension. Users have even incorporated govulncheck into their CI/CD pipeline. Finding new vulnerabilities early can help you fix them before they’re in production.
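
Getting started takes two commands, assuming a working Go toolchain:

# Install govulncheck, then analyze every package in the current module
$ go install golang.org/x/vuln/cmd/govulncheck@latest
$ govulncheck ./...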

The Go team has been collaborating with the OSV team to bring source analysis capabilities to OSV-Scanner through a beta integration with govulncheck. OSV-Scanner is a general purpose, multi-ecosystem, vulnerability scanner that matches project dependencies to known vulnerabilities. Go vulnerabilities can now be marked as “unexecuted” thanks to govulncheck’s analysis.

Govulncheck is under active development, and the team appreciates feedback from users. Go package maintainers are also encouraged to contribute vulnerability reports to the Go vulnerability database.

Additionally, you can report a security bug in the Go project itself, following the Go Security Policy. These may be eligible for the Open Source Vulnerability Rewards Program, which gives financial rewards for vulnerabilities found in Google’s open source projects. These contributions improve security for all users and reports are always appreciated.

Security across the supply chain

Google is committed to helping developers use Go software securely across the end-to-end supply chain, connecting users to dependable data and tools throughout the development lifecycle. As supply chain complexities and threats continue to increase, Go’s mission is to provide the most secure development environment for software engineering at scale.

Our next installment in this series on supply chain security will cover how Go’s checksum database can help protect users from compromised dependencies. Watch for it in the coming weeks!

Announcing the deps.dev API: critical dependency data for secure supply chains

Today, we are excited to announce the deps.dev API, which provides free access to the deps.dev dataset of security metadata, including dependencies, licenses, advisories, and other critical health and security signals for more than 50 million open source package versions.

Software supply chain attacks are increasingly common and harmful, with high profile incidents such as Log4Shell, Codecov, and the recent 3CX hack. The overwhelming complexity of the software ecosystem causes trouble for even the most diligent and well-resourced developers.

We hope the deps.dev API will help the community make sense of complex dependency data so that they can respond to—or even prevent—these types of attacks. By integrating this data into tools, workflows, and analyses, developers can more easily understand the risks in their software supply chains.

The power of dependency data

As part of Google’s ongoing efforts to improve open source security, the Open Source Insights team has built a reliable view of software metadata across 5 packaging ecosystems. The deps.dev data set is continuously updated from a range of sources: package registries, the Open Source Vulnerability database, code hosts such as GitHub and GitLab, and the software artifacts themselves. This includes 5 million packages and more than 50 million versions from the Go, Maven, PyPI, npm, and Cargo ecosystems, and the dataset continues to grow as new versions are published.

We collect and aggregate this data and derive transitive dependency graphs, advisory impact reports, OpenSSF Security Scorecard information, and more. Where the deps.dev website allows human exploration and examination, and the BigQuery dataset supports large-scale bulk data analysis, this new API enables programmatic, real-time access to the corpus for integration into tools, workflows, and analyses.

The API is used by a number of teams internally at Google to support the security of our own products. One of the first publicly visible uses is the GUAC integration, which uses the deps.dev data to enrich SBOMs. We have more exciting integrations in the works, but we’re most excited to see what the greater open source community builds!

We see the API as being useful for tool builders, researchers, and tinkerers who want to answer questions like:

  • What versions are available for this package?
  • What are the licenses that cover this version of a package—or all the packages in my codebase?
  • How many dependencies does this package have? What are they?
  • Does the latest version of this package include changes to dependencies or licenses?
  • What versions of what packages correspond to this file?

Taken together, this information can help answer the most important overarching question: how much risk would this dependency add to my project?

The API can help surface critical security information where and when developers can act. This data can be integrated into:

  • IDE Plugins, to make dependency and security information immediately available.
  • CI/CD integrations to prevent rolling out code with vulnerability or license problems.
  • Build tools and policy engine integrations to help ensure compliance.
  • Post-release analysis tools to detect newly discovered vulnerabilities in your codebase.
  • Tools to improve inventory management and mystery file identification.
  • Visualizations to help you discover what your dependency graph actually looks like.

Unique features

The API has a couple of great features that aren’t available through the deps.dev website.

Hash queries

A unique feature of the API is hash queries: you can look up the hash of a file's contents and find all the package versions that contain that file. This can help figure out what version of which package you have even absent other build metadata, which is useful in areas such as SBOMs, container analysis, incident response, and forensics.
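
As a sketch, assuming the v3alpha query endpoint (the file name is a placeholder, and field names may change while the API is in alpha):

# Look up package versions by the SHA-256 of a file's contents
# (hash.value is the base64 encoding of the raw digest, URL-encoded)
$ DIGEST=$(sha256sum mystery.jar | cut -d' ' -f1 | xxd -r -p | base64 | sed 's/+/%2B/g; s,/,%2F,g; s/=/%3D/g')
$ curl -s "https://api.deps.dev/v3alpha/query?hash.type=SHA256&hash.value=$DIGEST" | jq .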

Real dependency graphs

The deps.dev dependency data is not just what a package declares (its manifests, lock files, etc.), but rather a full dependency graph computed using the same algorithms as the packaging tools (Maven, npm, Pip, Go, Cargo). This gives a real set of dependencies similar to what you would get by actually installing the package, which is useful when a package changes but the developer doesn’t update the lock file. With the deps.dev API, tools can assess, monitor, or visualize expected (or unexpected!) dependencies.
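
For example, under the v3alpha surface (the package and version are arbitrary; the :dependencies method name follows the API documentation at the time of writing):

# Fetch the resolved dependency graph for a specific npm package version
$ curl -s "https://api.deps.dev/v3alpha/systems/npm/packages/react/versions/18.2.0:dependencies" \
    | jq '.nodes | length'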

API in action

For a demonstration of how the API can help software supply chain security efforts, consider the questions it could answer in a situation like the Log4Shell discovery:

  • Am I affected? - A CI/CD integration powered by the free API would automatically detect that a new, critical vulnerability is affecting your codebase, and alert you to act.
  • Where? - A dependency visualization tool pulling from the deps.dev API transitive dependency graphs would help you identify whether you can update one of your direct dependencies to fix the issue. If you were blocked, the tool would point you at the package(s) that are yet to be patched, so you could contribute a PR and help unblock yourself further up the tree.
  • Where else? - You could query the API with hashes of vendored JAR files to check if vulnerable log4j versions were unexpectedly hiding therein.
  • How much of the ecosystem is impacted? - Researchers, package managers, and other interested observers could use the API to understand how their ecosystem has been affected, as we did in this blog post about Log4Shell’s impact.

Getting started

The API service is globally replicated and highly available, meaning that you and your tools can depend on it being there when you need it.

It's also free and immediately available—no need to register for an API key. It's just a simple, unauthenticated HTTPS API that returns JSON objects:

# List the advisories affecting log4j 1.2.17
$ curl https://api.deps.dev/v3alpha/systems/maven/packages/log4j%3Alog4j/versions/1.2.17 \
    | jq '.advisoryKeys[].id'
"GHSA-2qrg-x229-3v8q"
"GHSA-65fg-84f6-3jq3"
"GHSA-f7vh-qwp3-x37m"
"GHSA-fp5r-v3w9-4333"
"GHSA-w9p3-5cr8-m3jj"

A single API call to list all the GHSA advisories affecting a specific version of log4j.

Check out the API Documentation to get started, or jump straight into the code with some examples.

Securing supply chains

Software supply chain security is hard, but it’s in all our interests to make it easier. Every day, Google works hard to create a safer internet, and we’re proud to be releasing this API to help do just that, and make this data universally accessible and useful to everyone.

We look forward to seeing what you might do with the API, and would appreciate your feedback. (What works? What doesn't? What makes it better?) You can reach us at [email protected], or by filing an issue on our GitHub repo.

OSV and the Vulnerability Life Cycle

It is an interesting time for everyone concerned with open source vulnerabilities. The U.S. Executive Order on Improving the Nation's Cybersecurity requirements for vulnerability disclosure programs and assurances for software used by the US government will go into effect later this year. Finding and fixing security vulnerabilities has never been more important, yet with increasing interest in the area, the vulnerability management space has become fragmented—there are a lot of new tools and competing standards.

In 2021, we announced the launch of OSV, a database of open source vulnerabilities built partially from vulnerabilities found through Google’s OSS-Fuzz program. OSV has grown since then and now includes a widely adopted OpenSSF schema and a vulnerability scanner. In this blog post, we’ll cover how these tools help maintainers track vulnerabilities from discovery to remediation, and how to use OSV together with other SBOM and VEX standards.

Vulnerability Databases

The lifecycle of a known vulnerability begins when it is discovered. To reach developers, the vulnerability needs to be added to a database. CVEs are the industry standard for describing vulnerabilities across all software, but there was no open source-centric database. As a result, several independent vulnerability databases sprang up across different ecosystems.

To address this, we announced the OSV Schema to unify open source vulnerability databases. The schema is machine readable, and is designed so dependencies can be easily matched to vulnerabilities using automation. The OSV Schema remains the only widely adopted schema that treats open source as a first class citizen. Since becoming a part of OpenSSF, the OSV Schema has seen adoption from services like GitHub, ecosystems such as Rust and Python, and Linux distributions such as Rocky Linux.

Thanks to such wide community adoption of the OSV Schema, OSV.dev is able to provide a distributed vulnerability database and service that pulls from language specific authoritative sources. In total, the OSV.dev database now includes 43,302 vulnerabilities from 16 ecosystems as of March 2023. Users can check OSV for a comprehensive view of all known vulnerabilities in open source.

Every vulnerability in OSV.dev contains package manager versions and git commit hashes, so open source users can easily determine if their packages are impacted using the versioning schemes they already know. Maintainers also benefit from OSV’s community-driven, distributed collaboration on the development of OSV’s database, tools, and schema.
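
Checking the database is a single call to the osv.dev API; for example:

# Ask OSV.dev for known vulnerabilities affecting jinja2 2.4.1 on PyPI
$ curl -s -d '{"version": "2.4.1", "package": {"name": "jinja2", "ecosystem": "PyPI"}}' \
    "https://api.osv.dev/v1/query" | jq '.vulns[].id'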

Matching

The next step in managing vulnerabilities is to determine project dependencies and their associated vulnerabilities. Last December we released OSV-Scanner, a free, open source tool which scans software projects’ lockfiles, SBOMs, or git repositories to identify vulnerabilities found in the OSV.dev database. When a project is scanned, the user gets a list of all known vulnerabilities in the project.
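
Typical usage looks like this (paths are placeholders):

# Recursively scan a directory for lockfiles, SBOMs, and git repositories
$ osv-scanner -r /path/to/your/project

# Or scan a single lockfile
$ osv-scanner --lockfile=/path/to/package-lock.json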

In the two months since launch, OSV-Scanner has seen positive reception from the community, including over 4,600 stars and 130 PRs from 29 contributors. Thank you to the community, which has been incredibly helpful in identifying bugs, supporting new lockfile formats, and helping us prioritize new features for the tool.

Remediation

Once a vulnerability has been identified, it needs to be remediated. Removing a vulnerability through upgrading the package is often not as simple as it seems. Sometimes an upgrade will break your project or cause another dependency to not function correctly. These complex dependency graph constraints can be difficult to resolve. We’re currently working on building features in OSV-Scanner to improve this process by suggesting minimal upgrade paths.

Sometimes, it isn’t even necessary to upgrade a package. A vulnerable component may be present in a project, but that doesn’t mean it is exploitable. For example, it may not be necessary to update a vulnerable component if it is never called. In cases like this, a VEX (Vulnerability Exploitability eXchange) statement can provide this justification and help teams prioritize remediation.

Manually generating VEX statements is time intensive and complex, requiring deep expertise in the project’s codebase and libraries included in its dependency tree. These costs are barriers to VEX adoption at scale, so we’re working on the ability to auto-generate high quality VEX statements based on static analysis and manual ignore files. The format for this will likely be one or more of the current emerging VEX standards.

Compatibility

Not only are there multiple emerging VEX standards (such as OpenVEX, CycloneDX, and CSAF), there are also multiple advisory formats (CVE, CSAF) and SBOM formats (CycloneDX, SPDX). Compatibility is a concern for project maintainers and open source users throughout the process of identifying and fixing project vulnerabilities. A developer may be obligated to use another standard and wonder if OSV can be used alongside it.

Fortunately, the answer is generally yes! OSV provides a focused, first-class experience for describing open source vulnerabilities, while providing an easy bridge to other standards.

CVE 5.0

The OSV team has directly worked with the CVE Quality Working Group on a key new feature of the latest CVE 5.0 standard: a new versioning schema that closely resembles OSV’s own versioning schema. This will enable easy conversion from OSV to CVE 5.0, and vice versa. It also enables OSV to contribute high quality metadata directly back to CVE, and drive better machine readability and data quality across the open source ecosystem.

Other emerging standards

Not all standards will convert as effortlessly as CVE to OSV. Emerging standards like CSAF are comparatively complicated because they support broader use cases. These standards often need to encode affected proprietary software, and CSAF includes rich mechanisms to express complicated nested product trees that are unnecessary for open source. As a result, the spec is roughly six times the size of OSV and difficult to use directly for open source.

OSV Schema's strong adoption shows that the open source community prefers a lightweight standard, tailored for open source. However, the OSV Schema maintains compatibility with CSAF for identification of packages through the Package URL and vers standards. CSAF records that use these mechanisms can be directly converted to OSV, and all OSV entries can be converted to CSAF.

SBOM and VEX standards

Similarly, all emerging SBOM and VEX standards maintain compatibility with OSV through the Package URL specification. OSV-Scanner today also already provides scanning support for the SPDX and CycloneDX SBOM standards.

OSV in 2023

OSV already provides straightforward compatibility with established standards such as CVE, SPDX, and CycloneDX. While it’s not clear yet which other emerging SBOM and VEX formats will become the standard, OSV has a clear path to supporting all of them. Open source developers and ecosystems will likely find OSV to be convenient for recording and consuming vulnerability information given OSV’s focused, minimal design.

OSV is not just built for open source, it is an open source project. We aim to build tools that fit easily into your workflow and help you identify and fix vulnerabilities in your projects. Your input, through contributions, questions, and feedback, is very valuable to us as we work towards that goal. Questions can be asked by opening an issue, and all of our projects (OSV.dev, OSV-Scanner, OSV-Schema) welcome contributors.

Want to keep up with the latest OSV developments? We’ve just launched a project blog! Check out our first major post, all about how VEX could work at scale.

Thank you and goodbye to the Chrome Cleanup Tool

Starting in Chrome 111 we will begin to turn down the Chrome Cleanup Tool, an application distributed to Chrome users on Windows to help find and remove unwanted software (UwS).

Origin story

The Chrome Cleanup Tool was introduced in 2015 to help users recover from unexpected settings changes, and to detect and remove unwanted software. To date, it has performed more than 80 million cleanups, helping to pave the way for a cleaner, safer web.

A changing landscape

In recent years, several factors have led us to reevaluate the need for this application to keep Chrome users on Windows safe.

First, the user perspective – Chrome user complaints about UwS have continued to fall over the years, averaging out to around 3% of total complaints in the past year. Commensurate with this, we have observed a steady decline in UwS findings on users' machines. For example, last month just 0.06% of Chrome Cleanup Tool scans run by users detected known UwS.

Next, several positive changes in the platform ecosystem have contributed to a more proactive safety stance than a reactive one. For example, Google Safe Browsing as well as antivirus software both block file-based UwS more effectively now, which was originally the goal of the Chrome Cleanup Tool. Where file-based UwS migrated over to extensions, our substantial investments in the Chrome Web Store review process have helped catch malicious extensions that violate the Chrome Web Store's policies.

Finally, we've observed changing trends in the malware space with techniques such as Cookie Theft on the rise – as such, we've doubled down on defenses against such malware via a variety of improvements including hardened authentication workflows and advanced heuristics for blocking phishing and social engineering emails, malware landing pages, and downloads.

What to expect

Starting in Chrome 111, users will no longer be able to request a Chrome Cleanup Tool scan through Safety Check or leverage the "Reset settings and cleanup" option offered in chrome://settings on Windows. Chrome will also remove the component that periodically scans Windows machines and prompts users for cleanup should it find anything suspicious.

Even without the Chrome Cleanup Tool, users are automatically protected by Safe Browsing in Chrome. Users also have the option to turn on Enhanced protection by navigating to chrome://settings/security – this mode substantially increases protection from dangerous websites and downloads by sharing real-time data with Safe Browsing.

While we'll miss the Chrome Cleanup Tool, we wanted to take this opportunity to acknowledge its role in combating UwS for the past 8 years. We'll continue to monitor user feedback and trends in the malware ecosystem, and when adversaries adapt their techniques again – which they will – we'll be at the ready.

As always, please feel free to send us feedback or find us on Twitter @googlechrome.

Google Trust Services now offers TLS certificates for Google Domains customers

We’re excited to announce changes that make getting Google Trust Services TLS certificates easier for Google Domains customers. With this integration, all Google Domains customers will be able to acquire public certificates for their websites at no additional cost, whether the site runs on a Google service or uses another provider. Additionally, Google Domains is now making an API available to allow for DNS-01 challenges with Google Domains DNS servers to issue and renew certificates automatically.

Like the existing Google Cloud integration, the Automatic Certificate Management Environment (ACME) protocol is used to enable seamless automatic lifecycle management of TLS certificates.

These certificates are issued by the same Certificate Authority (CA) Google uses for its own sites, so they are widely supported across the entire spectrum of devices used to access your services.

How do I use it?

Using ACME ensures your certificates are renewed automatically, and many hosting services already support ACME. If you're running your own web servers or services, there are ACME clients that integrate easily with common servers. To use this feature, you will need an API key called an External Account Binding (EAB) key. This enables your certificate requests to be associated with your Google Domains account. You can get an API key by visiting Google Domains and navigating to the Security page for your domain. There you’ll see a section for Google Trust Services where you can get your EAB key.

Example of EAB Credentials in Google Domains

As an example, with the popular Certbot ACME client, the configuration to register an account looks like:

certbot register --email <CONTACT_EMAIL> --no-eff-email --server "https://dv.acme-v02.api.pki.goog/directory" --eab-kid "<EAB_KEY_ID>" --eab-hmac-key "<EAB_HMAC_KEY>"

The EAB_KEY_ID and EAB_HMAC_KEY are both provided on your Google Domains security page.

After the account is created, you may issue certificates by running:

certbot certonly -d <domain.com> --server "https://dv.acme-v02.api.pki.goog/directory" --standalone

Then follow the prompts to complete validation and download your certificate. If you need additional information, please visit the Google Domains help center.

Google Domains and ACME DNS-01

ACME uses challenges to validate domain control before issuing certificates. The ACME DNS-01 challenge can be an efficient way for users to automate the validation process and integrate with existing websites and web hosting services.

Google Domains now provides an API for ACME DNS-01 challenges that helps streamline the process for users to authenticate domain control quickly and securely. This is now offered in some popular ACME clients, including Certbot (via a plugin), Caddy, Certify The Web, and Posh-ACME. You can find additional information on the Google Domains site.

Example of DNS API Access Token in Google Domains

To set up automatic certificate provisioning with ACME and DNS-01, follow these steps:

  1. Sign in to Google Domains.
  2. Select the domain that you want to use.
  3. At the top left, click “Menu” and select “Security”.
  4. Under the “ACME DNS API” section, click “Create token”.
  5. A dialog box will appear with an “API Token”. This is the value you will need to enter into your ACME client; copy it by clicking the copy button next to the token.
    • NOTE: This value is only shown once. After the dialog box is closed you will not be able to see this API Token again. Store it in a safe place, since anyone who has it can modify some DNS TXT records for your domain.
    • If you did not save this value before closing the dialog box, you can easily delete the token and create a new one.
    • A limit of 10 API tokens per domain can exist at a time.
  6. Once the dialog box is closed, the token will appear in the list. You can delete it at any time to revoke its access.
  7. The API token can now be used in an ACME client that supports the Google Domains ACME DNS API, as sketched below. Each ACME client differs slightly in how the token is specified, so consult the documentation for your ACME client.
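
As an illustrative sketch with Certbot and the community certbot-dns-google-domains plugin (the option and credential-key names follow that plugin’s documentation at the time of writing; check your ACME client’s docs for authoritative usage):

# /path/to/credentials.ini (keep this file private):
#   dns_google_domains_access_token = <API_TOKEN>
$ certbot certonly \
    --authenticator dns-google-domains \
    --dns-google-domains-credentials /path/to/credentials.ini \
    --server "https://dv.acme-v02.api.pki.goog/directory" \
    -d example.com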

Regardless of which ACME client you use, Google Domains and Google Trust Services are excited to offer a reliable option for no-cost TLS certificates. This continues the mission of helping build a safer internet by providing a transparent, trusted, and reliable Certificate Authority.

8 ways to secure Chrome browser for Google Workspace users

1. Bring Chrome under Cloud Management

Your journey towards keeping your Google Workspace users and data safe starts with bringing your Chrome browsers under Cloud Management at no additional cost. Chrome Browser Cloud Management is a single destination for applying Chrome browser policies and security controls across Windows, Mac, Linux, iOS and Android. You also get deep visibility into your browser fleet, including which browsers are out of date and which extensions your users are running, bringing insight into potential security blind spots in your enterprise.

Managing Chrome from the cloud allows Google Workspace admins to enforce enterprise protections and policies for the whole browser on fully managed devices, which no longer requires a user to sign in to Chrome to have policies enforced. You can also enforce policies that apply when your managed users sign in to Chrome browser on any Windows, Mac, or Linux computer (via Chrome Browser user-level management), not just on corporate managed devices.

This enables you to keep your corporate data and users safe, whether they are accessing work resources from fully managed, personal, or unmanaged devices used by your vendors.

Getting started is easy. If your organization hasn’t already, check out this guide for steps on how to enroll your devices.

2. Enforce built-in protections against Phishing, Ransomware & Malware

Chrome uses Google’s Safe Browsing technology to help protect billions of devices every day by showing warnings to users when they attempt to navigate to dangerous sites or download dangerous files. Safe Browsing is enabled by default for all users when they download Chrome. As an administrator, you can prevent your users from disabling Safe Browsing by enforcing the SafeBrowsingProtectionLevel policy.

Over the past few years, we’ve seen threats on the web becoming increasingly sophisticated. Turning on Enhanced Safe Browsing will substantially increase protection from dangerous websites, malicious downloads and extensions. For the best protections against web based attacks Google has to offer, enforce Enhanced Safe Browsing for your users.
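
For example, here is a minimal sketch of enforcing Enhanced Safe Browsing on Linux through a machine-level policy file (Chrome Browser Cloud Management can push the same policy centrally):

# SafeBrowsingProtectionLevel: 0 = no protection, 1 = standard, 2 = enhanced
$ cat <<'EOF' | sudo tee /etc/opt/chrome/policies/managed/safe_browsing.json
{
  "SafeBrowsingProtectionLevel": 2
}
EOF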

3. Enable Enterprise Credential Protections in Chrome

Enterprise password reuse introduces significant security risks. Quite often, employees reuse corporate credentials as personal logins and vice versa. Occasionally, employees even enter their corporate passwords into phishing websites. Reused employee logins give criminals easy paths to access corporate data.

Chrome Enterprise Password Reuse detection helps enterprises avoid identity theft and employee and organizational data breaches by detecting when an employee enters their corporate credentials into any other website.

Google Password Manager in Chrome also has a built-in Password Checkup feature that alerts users when Google discovers a username and password has been exposed in a public data breach.

Password alerts are surfaced in the audit logs and the Security Investigation Tool, which helps admins create automated rules or take appropriate steps to mitigate the risk, such as asking users to reset their passwords.

4. Gain insights into critical security events via Audit Logs, Google Security Center or your SIEM of choice

IT teams can gain useful insights about the potential security threats and events their Google Workspace users encounter when browsing the web with Chrome, and can take preventive measures against those threats through Security Reporting.

In the Google Workspace Admin console, organizations can enroll their Chrome browser and get detailed information about their browser deployment. IT teams can also set policies, manage extensions, and more. The Chrome management policies can be set to work alongside any end user-based policies that may be in place.

Once you’ve enabled Security events reporting, you can then view reporting events within audit logs. Google Workspace Enterprise Plus or Education Plus users can use the Workspace Security Investigation Tool to identify, triage, and act on potential security threats.

As of today, Chrome can report on when users:

  • Navigate to a known malicious site.
  • Download or upload files containing known malware.
  • Reuse corporate passwords on non-approved sites.
  • Change corporate passwords after reusing them on non-approved sites.
  • Install extensions.

In addition to Google Workspace, you can also export these events into other Google Cloud products, such as Google Cloud Pub/Sub or Chronicle, or leading third-party products such as Splunk, CrowdStrike, and Palo Alto Networks.

5. Mitigate risk by keeping your browsers up to date with latest security updates

Modern web browsers, like any other software, can have "zero day" vulnerabilities, which are undiscovered flaws in the software that can be exploited by attackers until they are identified and resolved. Fortunately, among all the browsers, Chrome is known to patch zero day vulnerabilities quickly. However, to take advantage of this, IT teams have to ensure that all browsers in their environment are up to date. Our enterprise tools provide a smooth and seamless browser update process, enabling user productivity while maintaining optimal security. By leveraging these tools, businesses can ensure their users are safe and protected from potential security threats.

  • Version Report: Easily see all the versions of Chrome in your fleet across various operating systems in a daily report.
  • Force Auto Updates in Chrome: Trigger updates to newer versions of Chrome as soon as they’re available. Force users to relaunch Chrome to take updates more rapidly using enterprise policies. This keeps users on the latest version of Chrome, with the latest security fixes.
  • Controlling legacy browser usage: Some users continue to need access to old web applications that use plugins and ActiveX technology not supported by modern browsers. Legacy Browser Support functionality is integrated into Chrome, and reduces the time users spend with less secure browsers.

6. Ensure employees only use vetted extensions

Extensions pose a large security risk. Many extensions request powerful permissions that if misused, could lead to security breaches or data loss. However, due to strong end user demand, it’s often not possible to fully block the installation of extensions.

  • Apps & Extensions usage report: Provides visibility into every Chrome extension that is installed across an enterprise’s fleet. Admins can force install or block any extension across any segment of their fleet.
  • Extensions workflow: Admins can decide under which circumstances an extension install needs to be reviewed by IT. A review workflow in the Google Admin console makes it easy for admins to review and approve install requests for specific users requesting an extension, or for their broader fleet.
  • Extensions details: Admins can see additional details about an extension’s permissions, and other relevant metadata. This info is surfaced in the Extensions list and Extensions workflow pages to make it easier for administrators to manage extensions.

7. Ensure your Google Workspace resources are only accessed from Managed Chrome Browsers with protections enabled

Context-Aware Access ensures only the right people, under the right conditions, access confidential information. Using Context-Aware Access, you can create granular access control policies for apps that access Workspace data based on attributes, such as user identity, location, device security status, and IP address.

To ensure that your Google Workspace resources are only accessed from managed Chrome browsers with protections enabled, you can create custom access levels in Advanced mode using the Common Expression Language (CEL). Learn more about managed queries in this help center article.

8. Enable BeyondCorp Enterprise Threat and Data Protections

Organizations that want to take an even more proactive approach to data security can deploy BeyondCorp Enterprise to protect their information and enable data loss prevention (including control over upload, download, print, save, copy and paste), real-time phishing protection, malware deep scanning, and Zero Trust access to SaaS applications. Since BeyondCorp Enterprise is already built into Chrome, organizations can implement it frictionlessly without having to install additional agents.

Learn more about how Google supports today’s workforce with secure enterprise browsing here.

Our commitment to fighting invalid traffic on Connected TV

Connected TV (CTV) has not only transformed the entertainment world, it has also created a vibrant new platform for digital advertising. However, as with any innovative space, there are challenges that arise, including the emergence of bad actors aiming to siphon money away from advertisers and publishers through fraudulent or invalid ad traffic. Invalid traffic is an evolving challenge that has the potential to affect the integrity and health of digital advertising on CTV. Fortunately, there are steps the industry can take to combat invalid traffic and foster a clean, trustworthy, and sustainable ecosystem.

Information sharing and following best practices

Every player across the digital advertising ecosystem has the opportunity to help reduce the risk of CTV ad fraud. It starts by spreading awareness across the industry and building a commitment among partners to share best practices for defending against invalid traffic. Greater transparency and communication are crucial to creating lasting solutions.

One key best practice is contributing to and using relevant industry standards. We encourage CTV inventory providers to follow the CTV/OTT Device & App Identification Guidelines and IFA Guidelines. These guidelines, both of which were developed by the IAB Tech Lab, foster greater transparency, which in turn reduces the risk of invalid traffic on CTV. More information and details about using these resources can be found in the following guide: Protecting your ad-supported CTV experiences.

Collaborating on standards and solutions

No single company or industry group can solve this challenge on its own; we need to work collaboratively to solve the problem. Fortunately, we’re already seeing constructive efforts in this direction with industry-wide standards.

For example, the broad implementation of the IAB Tech Lab’s app-ads.txt and its web counterpart, ads.txt, have brought greater transparency to the digital advertising supply chain and have helped combat ad fraud by allowing advertisers to verify the sellers from whom they buy inventory. In 2021, the IAB Tech Lab extended the app-ads.txt standard to CTV in order to better protect and support CTV advertisers. This update is the first of several industry-wide steps that have been taken to further protect CTV advertising. In early 2022, the IAB Tech Lab released the ads.cert 2.0 “protocol suite,” along with a proposal to utilize this new standard to secure server-side connections (including for server-side ad insertion). Ads.cert 2.0 will also power future industry standards focused on securing the supply chain and preventing misrepresentation.

In addition to these efforts, the Media Rating Council (MRC) also engaged with stakeholders to develop its Server-Side Ad Insertion and OTT (Over-the-Top) Guidance, which provides a consistent set of guidelines specific to CTV for organizations that seek MRC accreditation for invalid traffic detection and filtration. We’re also seeing key partners tackle this challenge through informal working groups. For example, we collaborated with various CTV and security partners across our industry on a solution that allows companies to ensure video ad requests are coming from a valid Roku device.

But more work is needed. Players across the digital advertising ecosystem need to continue to build momentum through opportunities and initiatives that enable further collaboration on solutions.

Our ongoing investment in invalid traffic defenses

At Google, we’ve been defending our ad systems against invalid traffic for nearly two decades. By striking the right balance between automation and human expertise, we’ve developed a comprehensive set of measures to respond to threats like botnets, click farms, domain misrepresentation, and more. We’re now applying a similar approach to minimize the risk of CTV ad fraud, balancing innovation with tried-and-true technologies.

We’ve developed a machine learning platform built on TensorFlow, which has enabled us to expand the amount of inventory we can review and scale our defenses against invalid traffic to include additional surfaces, such as CTV. While machine learning has allowed us to better analyze ad traffic in new and diverse ways, we’ve also continued to leverage the work of research analysts and industry experts to ensure our automated enforcement systems are running effectively on CTV.

In addition to setting up new defenses for CTV, we’re also taking a more conservative approach with the CTV inventory we make available. This ensures that we aren’t exposing advertisers to unnecessary risk while CTV standards and best practices continue to evolve and mature, and while their adoption by the industry increases. 

Evolving and adapting

We know that bad actors continuously evolve and adapt their methods to evade detection and enforcement of our policies. The tactics behind invalid traffic and ad fraud will inevitably become more sophisticated with the growth of CTV. However, if the industry pulls together, we’ll be in a better position to not only address these new threats head on, but stay one step ahead of them while building a CTV advertising ecosystem that is safe and sustainable for everyone.