Tag Archives: vulnerabilities

A new chapter for Google’s Vulnerability Reward Program



A little over 10 years ago, we launched our Vulnerability Rewards Program (VRP). Our goal was to establish a channel for security researchers to report bugs to Google and offer an efficient way for us to thank them for helping make Google, our users, and the Internet a safer place. To recap our progress on these goals, here is a snapshot of what VRP has accomplished with the community over the past 10 years:
  • Total bugs rewarded: 11,055
  • Number of rewarded researchers: 2,022
  • Representing 84 different countries
  • Total rewards: $29,357,516
To celebrate our anniversary and ensure the next 10 years are just as (or even more) successful and collaborative, we are excited to announce the launch of our new platform, bughunters.google.com.

This new site brings all of our VRPs (Google, Android, Abuse, Chrome and Play) closer together and provides a single intake form that makes it easier for bug hunters to submit issues. Other improvements you will notice include:
  • More opportunities for interaction and a bit of healthy competition through gamification, per-country leaderboards, awards/badges for certain bugs and more!
  • A more functional and aesthetically pleasing leaderboard. We know a lot of you are using your achievements in the VRP to find jobs (we’re hiring!) and we hope this acts as a useful resource.
  • A stronger emphasis on learning: Bug hunters can improve their skills through the content available in our new Bug Hunter University
  • Streamlined publication process: we know the value that knowledge sharing brings to our community. That’s why we want to make it easier for you to publish your bug reports.
  • Swag will now be supported for special occasions (we heard you loud and clear!)

We also want to take a moment to shine a light on some aspects of the VRP that are not yet well known.


When we launched our very first VRP, we had no idea how many valid vulnerabilities - if any - would be submitted on the first day. Everyone on the team put in their estimate, with predictions ranging from zero to 20. In the end, we actually received more than 25 reports, taking all of us by surprise.

Since its inception, the VRP has not only grown significantly in terms of report volume, but the team of security engineers behind it has also expanded – including almost 20 bug hunters who reported vulnerabilities to us and ended up joining the Google VRP team.

That is why we are thrilled to bring you this new platform, continue to grow our community of bug hunters and support the skill development of up-and-coming vulnerability researchers.

Thanks again to the entire Google bug hunter community for making our vulnerability rewards program successful. As you continue to play around with the new site and reporting system, tell us about it - we would love to hear your feedback. Until next time, keep on finding those bugs!

Announcing a unified vulnerability schema for open source


In recent months, Google has launched several efforts to strengthen open-source security on multiple fronts. One important focus is improving how we identify and respond to known security vulnerabilities without doing extensive manual work. It is essential to have a precise common data format to triage and remediate security vulnerabilities, particularly when communicating about risks to affected dependencies—it enables easier automation and empowers consumers of open-source software to know when they are impacted and make security fixes as soon as possible.

We released the Open Source Vulnerabilities (OSV) database in February with the goal of automating and improving vulnerability triage for developers and users of open source software. This initial effort was bootstrapped with a dataset of a few thousand vulnerabilities from the OSS-Fuzz project. Implementing OSV to communicate precise vulnerability data for hundreds of critical open-source projects proved the success and utility of the format, and garnered feedback to help us improve the project; for example, we dropped the Cloud API key requirement, making the database even easier to access by more users. The community response also showed that there was broad interest in extending the effort further.

Today, we’re excited to announce a new milestone in expanding OSV to several key open-source ecosystems: Go, Rust, Python, and DWF. This expansion unites and aggregates four important vulnerability databases, giving software developers a better way to track and remediate the security issues that affect them. Our effort also aligns with the recent US Executive Order on Improving the Nation’s Cybersecurity, which emphasized the need to remove barriers to sharing threat information in order to strengthen national infrastructure. This expanded shared vulnerability database marks an important step toward creating a more secure open-source environment for all users.
A simple, unified schema for describing vulnerabilities precisely

As with open source development, vulnerability databases in open source follow a distributed model, with many ecosystems and organizations creating their own database. Since each uses its own format to describe vulnerabilities, a client tracking vulnerabilities across multiple databases must handle each completely separately. Sharing of vulnerabilities between databases is also difficult.

The Google Open Source Security team, Go team, and the broader open-source community have been developing a simple vulnerability interchange schema for describing vulnerabilities that’s designed from the beginning for open-source ecosystems. After starting work on the schema a few months ago, we requested public feedback and received hundreds of comments. We have incorporated the input from readers to arrive at the current schema:

{
        "id": string,
        "modified": string,
        "published": string,
        "withdrawn": string,
        "aliases": [ string ],
        "related": [ string ],
        "package": {
                "ecosystem": string,
                "name": string,
                "purl": string,
        },
        "summary": string,
        "details": string,
        "affects": [ {
                "ranges": [ {
                        "type": string,
                        "repo": string,
                        "introduced": string,
                        "fixed": string
                } ],
                "versions": [ string ]
        } ],
        "references": [ {
                "type": string,
                "url": string
        } ],
        "ecosystem_specific": { see spec },
        "database_specific": { see spec },
}
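
To make the schema concrete, here is a hypothetical entry in which every value is invented for illustration (the identifier, package, versions, and URL do not describe a real advisory); only the field layout follows the spec above. It is written as a short Python snippet so the entry can be dumped straight to JSON:

    # A hypothetical OSV entry following the schema above; every value is
    # made up purely for illustration and does not describe a real advisory.
    import json

    example_entry = {
        "id": "EXAMPLE-2021-0001",
        "modified": "2021-06-24T01:31:57Z",
        "published": "2021-06-24T01:31:57Z",
        "aliases": ["CVE-2021-00000"],
        "package": {
            "ecosystem": "PyPI",
            "name": "examplepkg",
            "purl": "pkg:pypi/examplepkg",
        },
        "summary": "Path traversal during archive extraction",
        "details": "Crafted archive entries could escape the extraction directory.",
        "affects": [{
            "ranges": [{"type": "SEMVER", "introduced": "1.2.0", "fixed": "1.4.1"}],
            "versions": ["1.2.0", "1.3.0", "1.4.0"],
        }],
        "references": [
            {"type": "FIX", "url": "https://example.com/examplepkg/commit/abc123"}
        ],
    }
    print(json.dumps(example_entry, indent=2))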



This new vulnerability schema aims to address some key problems with managing vulnerabilities in open source. We found that there was no existing standard format which:

  • Enforces version specification that precisely matches naming and versioning schemes used in actual open source package ecosystems. For instance, matching a vulnerability such as a CVE to a package name and set of versions in a package manager is difficult to do in an automated way using existing mechanisms such as CPEs.
  • Can be used to describe vulnerabilities in any open source ecosystem, while not requiring ecosystem-dependent logic to process them.
  • Is easy to use by both automated systems and humans.

With this schema we hope to define a format that all vulnerability databases can export. A unified format means that vulnerability databases, open source users, and security researchers can easily share tooling and consume vulnerabilities across all of open source. This means a more complete view of vulnerabilities in open source for everyone, as well as faster detection and remediation times resulting from easier automation.

The current state


The vulnerability schema spec has gone through several iterations, and we are inviting further feedback as it gets closer to being finalized. A number of public vulnerability databases are already exporting this format today, with more in the pipeline.

The OSV service has also aggregated all of these vulnerability databases, which are viewable in our web UI. They can also be queried with a single command via the same existing APIs:

  curl -X POST -d \
      '{"commit": "a46c08c533cfdf10260e74e2c03fa84a13b6c456"}' \
      "https://api.osv.dev/v1/query"

  curl -X POST -d \
      '{"version": "2.4.1", "package": {"name": "jinja2", "ecosystem": "PyPI"}}' \
      "https://api.osv.dev/v1/query"


Automating vulnerability database maintenance


Producing quality vulnerability data is also difficult. In addition to OSV’s existing automation, we built more automation tools for vulnerability database maintenance, and used these tools to bootstrap the community Python advisory database. This automation takes existing feeds, accurately matches them to packages, and generates entries containing precise, validated version ranges with minimal human intervention. We plan to extend this tooling to other ecosystems for which there is no existing vulnerability database, or little support for ongoing database maintenance.
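
As a rough sketch of what this kind of automation involves (not OSV's actual tooling), the snippet below takes a hypothetical advisory feed entry that names the introducing and fixing versions, compares it against a package's published release list, and derives the precise set of affected versions that an OSV range would record. The advisory and release list are invented for the example, and real version schemes are considerably messier than simple dotted integers:

    # Toy version of "match an advisory feed entry to precise package versions".
    # Assumes simple dotted-integer versions; real ecosystems are messier.

    def parse_version(v: str) -> tuple:
        """Turn '2.4.1' into (2, 4, 1) so versions compare numerically."""
        return tuple(int(part) for part in v.split("."))

    def affected_versions(published: list, introduced: str, fixed: str) -> list:
        """All published versions in the half-open range [introduced, fixed)."""
        lo, hi = parse_version(introduced), parse_version(fixed)
        return [v for v in published if lo <= parse_version(v) < hi]

    # Hypothetical inputs: an advisory feed entry and the package's release history.
    advisory = {"name": "somepkg", "introduced": "2.0.0", "fixed": "2.4.2"}
    releases = ["1.9.0", "2.0.0", "2.3.0", "2.4.0", "2.4.1", "2.4.2", "2.5.0"]

    print(affected_versions(releases, advisory["introduced"], advisory["fixed"]))
    # -> ['2.0.0', '2.3.0', '2.4.0', '2.4.1']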


Get involved


Thank you to all the open source developers who have provided feedback and adopted this format. We’re continuing to work with open source communities to develop this further and earn more widespread adoption in all ecosystems. If you are interested in adopting this format, we’d appreciate any feedback on our public spec.

Mitigating Memory Safety Issues in Open Source Software


Memory-safety vulnerabilities have dominated the security field for years and often lead to issues that can be exploited to take over entire systems. 


A recent study found that “~70% of the vulnerabilities addressed through a security update each year continue to be memory safety issues.” Another analysis of security issues in the ubiquitous curl command-line tool showed that 53 out of 95 bugs would have been completely prevented by using a memory-safe language.


Software written in unsafe languages often contains hard-to-catch bugs that can result in severe security vulnerabilities, and we take these issues seriously at Google. That’s why we’re expanding our collaboration with the Internet Security Research Group to support the reimplementation of critical open-source software in memory-safe languages. We previously worked with the ISRG to help secure the Internet by making TLS certificates available to everyone for free, and we're looking forward to continuing to work together on this new initiative.


It's time to start taking advantage of memory-safe programming languages that prevent these errors from being introduced. At Google, we understand the value of the open source community and in giving back to support a strong ecosystem. 


To date, our free OSS-Fuzz service has found over 5,500 vulnerabilities across 375 open source projects caused by memory safety errors, and our Rewards Program helps encourage adoption of fuzzing through financial incentives. We've also released other projects like Syzkaller to detect bugs in operating system kernels, and sandboxes like gVisor to reduce the impact of bugs when they are found.


The ISRG's approach of working directly with maintainers to support rewriting tools and libraries incrementally falls directly in line with our perspective here at Google. 


The new Rust-based HTTP and TLS backends for curl and now this new TLS library for Apache httpd are an important starting point in this overall effort. These codebases sit at the gateway to the internet and their security is critical in the protection of data for millions of users worldwide. 


We'd like to thank the maintainers of these projects for working on such widely-used and important infrastructure, and for participating in this effort.


We're happy to be able to support these communities and the ISRG to make the Internet a safer place. We appreciate their leadership in this area and we look forward to expanding this program in 2021.


Open source security is a collaborative effort. If you're interested in learning more about our efforts, please join us in the Securing Critical Projects Working Group of the Open Source Security Foundation.

Launching OSV – Better vulnerability triage for open source



We are excited to launch OSV (Open Source Vulnerabilities), our first step towards improving vulnerability triage for developers and consumers of open source software. The goal of OSV is to provide precise data on where a vulnerability was introduced and where it was fixed, thereby helping consumers of open source software accurately identify whether they are impacted and then make security fixes as quickly as possible. We have started OSV with a data set of fuzzing vulnerabilities found by the OSS-Fuzz service. The OSV project evolved from our recent efforts to improve vulnerability management in open source (the "Know, Prevent, Fix" framework).

Vulnerability management can be painful for both consumers and maintainers of open source software, with tedious manual work involved in many cases.

For consumers of open source software, it is often difficult to map a vulnerability such as a Common Vulnerabilities and Exposures (CVE) entry to the package versions they are using. This comes from the fact that versioning schemes in existing vulnerability standards (such as Common Platform Enumeration (CPE)) do not map well with the actual open source versioning schemes, which are typically versions/tags and commit hashes. The result is missed vulnerabilities that affect downstream consumers.

Similarly, it is time consuming for maintainers to determine an accurate list of affected versions or commits across all their branches for downstream consumers after a vulnerability is fixed, in addition to the process required for publication. Unfortunately, many open source projects, including ones that are critical to modern infrastructure, are under resourced and overworked. Maintainers don't always have the bandwidth to create and publish thorough, accurate information about their vulnerabilities even if they want to.

These challenges result in open source consumers not incorporating important security fixes promptly. OSV aims to:
  1. Reduce the work required by maintainers to publish vulnerabilities, and
  2. Improve the accuracy of vulnerability queries for downstream consumers by providing precise vulnerability metadata in an easy-to-query database (complementing existing vulnerability databases).

Automation

OSV aims to simplify the vulnerability reporting process for an open source package maintainer by accurately determining the list of affected versions and commits. This requires providing both the commits that introduce and fix the bugs. If that information is not available, OSV requires providing a reproduction test case and steps to generate an application build, and then it performs bisection to find these commits in an automated fashion. OSV takes care of the rest of the analysis to figure out impacted commit ranges (accounting for cherry picks) and versions/tags.
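
To make the bisection step concrete, here is a minimal sketch of how an automated bisection over a linear commit history can locate the commit that introduced a bug, assuming a hypothetical reproduces(commit) check that stands in for "build the project at this commit and run the reproduction test case". This illustrates the general technique only; OSV's real analysis also has to deal with non-linear history and cherry-picked fixes, as noted above:

    # Illustrative binary search over a linear, oldest-to-newest commit list to
    # find the first commit at which a reproduction test case starts failing.
    # reproduces() is a stand-in for "build this commit and run the test case".

    def find_introducing_commit(commits, reproduces):
        lo, hi = 0, len(commits) - 1
        if not reproduces(commits[hi]):
            return None            # bug not present even at the newest commit
        if reproduces(commits[lo]):
            return commits[lo]     # bug already present at the start of the range
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if reproduces(commits[mid]):
                hi = mid           # bug present here; the culprit is here or earlier
            else:
                lo = mid           # bug not yet present; the culprit is later
        return commits[hi]

    # Hypothetical usage: a fake history where the bug appears at commit "c5".
    history = ["c%d" % i for i in range(10)]
    print(find_introducing_commit(history, lambda c: int(c[1:]) >= 5))  # -> c5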

How OSV works


OSV automates the triage workflow for an open source package consumer by providing an API to query for vulnerabilities. A typical OSV workflow for a package consumer looks like this:
  1. A package consumer sends a query to OSV with a package version or commit hash as input.

     curl -X POST -d \
         '{"commit": "6879efc2c1596d11a6a6ad296f80063b558d5e0f"}' \
         'https://api.osv.dev/v1/query?key=$API_KEY'

     curl -X POST -d \
         '{"version": "1.0.0", "package": {"name": "pkg", "ecosystem": "PyPI"}}' \
         'https://api.osv.dev/v1/query?key=$API_KEY'

  2. OSV looks up the set of vulnerabilities affecting that particular version and returns a list of vulnerabilities impacting the package. The vulnerability metadata is returned in a machine-readable JSON format.
  3. The package consumer uses this information to either cherry-pick security fixes (based on precise fix metadata) or update to a later version, as sketched below.
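
As a rough illustration of steps 2 and 3, the snippet below sends the same kind of query from Python and prints each returned vulnerability's id, summary, and any recorded fix versions so a consumer can decide whether to cherry-pick or upgrade. The package name and version are placeholders, the API key is read from a hypothetical OSV_API_KEY environment variable, and the response is assumed to follow the vulnerability format described in this post:

    # Sketch of an OSV consumer: query by package version, list fixes to consider.
    # "pkg"/"1.0.0" are placeholders and OSV_API_KEY is an assumed env variable.
    import json
    import os
    import urllib.request

    query = {"version": "1.0.0",
             "package": {"name": "pkg", "ecosystem": "PyPI"}}
    url = "https://api.osv.dev/v1/query?key=" + os.environ["OSV_API_KEY"]

    req = urllib.request.Request(url, data=json.dumps(query).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    for vuln in result.get("vulns", []):
        # Collect any "fixed" commits or versions recorded in the affected ranges.
        fixes = [r["fixed"]
                 for affected in vuln.get("affects", [])
                 for r in affected.get("ranges", [])
                 if r.get("fixed")]
        print(vuln.get("id"), "-", vuln.get("summary", "no summary"))
        print("  fixed in:", ", ".join(fixes) if fixes else "no fix recorded yet")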

Ongoing work

OSV currently provides access to thousands of vulnerabilities from 380+ critical OSS projects integrated with OSS-Fuzz. We are planning to work with open source communities to extend OSV with data from various language ecosystems (e.g. NPM, PyPI) and work out a pipeline for package maintainers to submit vulnerabilities with minimal work.

Our goal with OSV is to rethink and promote better, scalable vulnerability tracking for open source. In an ideal world, vulnerability management should be done closer to the actual open source development process, aided by automated infrastructure. Projects that depend on open source should be notified promptly and able to take up fixes quickly when a vulnerability is reported.

You can access the OSV website and documentation at https://osv.dev. You can explore the open source repo or contribute to the project on GitHub, and join the mailing list to stay up to date with OSV and share your thoughts on vulnerability tracking. 

By Oliver Chang and Kim Lewandowski, Google Security Team

Know, Prevent, Fix: A framework for shifting the discussion around vulnerabilities in open source

Executive Summary:
The security of open source software has rightfully garnered the industry’s attention, but solutions require consensus about the challenges and cooperation in the execution. The problem is complex and there are many facets to cover: supply chain, dependency management, identity, and build pipelines. Solutions come faster when the problem is well-framed; we propose a framework (“Know, Prevent, Fix”) for how the industry can think about vulnerabilities in open source and concrete areas to address first, including:
  • Consensus on metadata and identity standards: We need consensus on fundamentals to tackle these complex problems as an industry. Agreements on metadata details and identities will enable automation, reduce the effort required to update software, and minimize the impact of vulnerabilities.
  • Increased transparency and review for critical software: For software that is critical to security, we need to agree on development processes that ensure sufficient review, avoid unilateral changes, and transparently lead to well-defined, verifiable official versions.
The following framework and goals are proposed with the intention of sparking industry-wide discussion and progress on the security of open source software.


Due to recent events, the software world has gained a deeper understanding of the real risk of supply-chain attacks. Open source software should be less risky on the security front, as all of the code and dependencies are in the open and available for inspection and verification. And while that is generally true, it assumes people are actually looking. With so many dependencies, it is impractical to monitor them all, and many open source packages are not well maintained.

It is common for a program to depend, directly or indirectly, on thousands of packages and libraries. For example, Kubernetes now depends on about 1,000 packages. Open source likely makes more use of dependencies than closed source, and from a wider range of suppliers; the number of distinct entities that need to be trusted can be very high. This makes it extremely difficult to understand how open source is used in products and what vulnerabilities might be relevant. There is also no assurance that what is built matches the source code.

Taking a step back, although supply-chain attacks are a risk, the vast majority of vulnerabilities are mundane and unintentional—honest errors made by well-intentioned developers. Furthermore, bad actors are more likely to exploit known vulnerabilities than to find their own: it’s just easier. As such, we must focus on making fundamental changes to address the majority of vulnerabilities, as doing so will move the entire industry far along in addressing the complex cases as well, including supply-chain attacks.

Few organizations can verify all of the packages they use, let alone all of the updates to those packages. In the current landscape, tracking these packages takes a non-trivial amount of infrastructure, and significant manual effort. At Google, we have those resources and go to extraordinary lengths to manage the open source packages we use—including keeping a private repo of all open source packages we use internally—and it is still challenging to track all of the updates. The sheer flow of updates is daunting. A core part of any solution will be more automation, and this will be a key theme for our open source security work in 2021 and beyond.

Because this is a complex problem that needs industry cooperation, our purpose here is to focus the conversation around concrete goals. Google co-founded the OpenSSF to be a focal point for this collaboration, but to make progress, we need participation across the industry, and agreement on what the problems are and how we might address them. To get the discussion started, we present one way to frame this problem, and a set of concrete goals that we hope will accelerate industry-wide solutions.

We suggest framing the challenge as three largely independent problem areas, each with concrete objectives:
  1. Know about the vulnerabilities in your software
  2. Prevent the addition of new vulnerabilities, and
  3. Fix or remove vulnerabilities.
A related but separate problem, which is critical to securing the supply chain, is improving the security of the development process. We’ve outlined the challenges of this problem and proposed goals in the fourth section, Prevention for Critical Software.

Know your Vulnerabilities

Knowing your vulnerabilities is harder than expected for many reasons. Although there are mechanisms for reporting vulnerabilities, it is hard to know if they actually affect the specific versions of software you are using.

Goal: Precise Vulnerability Data
First, it is crucial to capture precise vulnerability metadata from all available data sources. For example, knowing which version introduced a vulnerability helps determine if one's software is affected, and knowing when it was fixed results in accurate and timely patching (and a reduced window for potential exploitation). Ideally, this triaging workflow should be automated.

Second, most vulnerabilities are in your dependencies, rather than the code you write or control directly. Thus, even when your code is not changing, there can be a constant churn in your vulnerabilities: some get fixed and others get added.

Goal: Standard Schema for Vulnerability Databases
Infrastructure and industry standards are needed to track and maintain open source vulnerabilities, understand their consequences, and manage their mitigations. A standard vulnerability schema would allow common tools to work across multiple vulnerability databases and simplify the task of tracking, especially when vulnerabilities touch multiple languages or subsystems.

Goal: Accurate Tracking of Dependencies
Better tooling is needed to understand quickly what software is affected by a newly discovered vulnerability, a problem made harder by the scale and dynamic nature of large dependency trees. Current practices also often make it difficult to predict exactly what versions are used without actually doing an installation, as the software for version resolution is only available through the installer.

Prevent New Vulnerabilities

It would be ideal to prevent vulnerabilities from ever being created, and although testing and analysis tools can help, prevention will always be a hard problem. Here we focus on two specific aspects:
  • Understanding risks when deciding on a new dependency
  • Improving development processes for security-critical software
Goal: Understand the Risks for New Dependencies
The first category is essentially knowing about vulnerabilities at the time you decide to use a package. Taking on a new dependency has inherent risk and it needs to be an informed decision. Once you have a dependency, it generally becomes harder to remove over time. Knowing about vulnerabilities is a great start, but there is more that we can do.

Announcing the launch of the Android Partner Vulnerability Initiative

Posted by Kylie McRoberts, Program Manager and Alec Guertin, Security Engineer


Google’s Android Security & Privacy team has launched the Android Partner Vulnerability Initiative (APVI) to manage security issues specific to Android OEMs. The APVI is designed to drive remediation and provide transparency to users about issues we have discovered at Google that affect device models shipped by Android partners.

Another layer of security

Android incorporates industry-leading security features and every day we work with developers and device implementers to keep the Android platform and ecosystem safe. As part of that effort, we have a range of existing programs to enable security researchers to report security issues they have found. For example, you can report vulnerabilities in Android code via the Android Security Rewards Program (ASR), and vulnerabilities in popular third-party Android apps through the Google Play Security Rewards Program. Google releases ASR reports in Android Open Source Project (AOSP) based code through the Android Security Bulletins (ASB). These reports are issues that could impact all Android based devices. All Android partners must adopt ASB changes in order to declare the current month’s Android security patch level (SPL). But until recently, we didn’t have a clear way to process Google-discovered security issues outside of AOSP code that are unique to a much smaller set of specific Android OEMs. The APVI aims to close this gap, adding another layer of security for this targeted set of Android OEMs.

Improving Android OEM device security

The APVI covers Google-discovered issues that could potentially affect the security posture of an Android device or its user and is aligned to ISO/IEC 29147:2018 Information technology -- Security techniques -- Vulnerability disclosure recommendations. The initiative covers a wide range of issues impacting device code that is not serviced or maintained by Google (these are handled by the Android Security Bulletins).

Protecting Android users

The APVI has already processed a number of security issues, improving user protection against permissions bypasses, execution of code in the kernel, credential leaks and generation of unencrypted backups. Below are a few examples of what we’ve found, the impact and OEM remediation efforts.

Permission Bypass

In some versions of a third-party pre-installed over-the-air (OTA) update solution, a custom system service in the Android framework exposed privileged APIs directly to the OTA app. The service ran as the system user and did not require any permissions to access, instead checking for knowledge of a hardcoded password. The operations available varied across versions, but always allowed access to sensitive APIs, such as silently installing/uninstalling APKs, enabling/disabling apps and granting app permissions. This service appeared in the code base for many device builds across many OEMs, however it wasn’t always registered or exposed to apps. We’ve worked with impacted OEMs to make them aware of this security issue and provided guidance on how to remove or disable the affected code.

Credential Leak

A popular web browser pre-installed on many devices included a built-in password manager for sites visited by the user. The interface for this feature was exposed to WebView through JavaScript loaded in the context of each web page. A malicious site could have accessed the full contents of the user’s credential store. The credentials are encrypted at rest, but used a weak algorithm (DES) and a known, hardcoded key. This issue was reported to the developer and updates for the app were issued to users.

Overly-Privileged Apps

The checkUidPermission method in the PackageManagerService class was modified in the framework code for some devices to allow special permissions access to some apps. In one version, the method granted apps with the shared user ID com.google.uid.shared any permission they requested and apps signed with the same key as the com.google.android.gsf package any permission in their manifest. Another version of the modification allowed apps matching a list of package names and signatures to pass runtime permission checks even if the permission was not in their manifest. These issues have been fixed by the OEMs.

More information

Keep an eye out at https://bugs.chromium.org/p/apvi/ for future disclosures of Google-discovered security issues under this program, or find more information there on issues that have already been disclosed.

Acknowledgements: Scott Roberts, Shailesh Saini and Łukasz Siewierski, Android Security and Privacy Team
