Airbnb uses Jetpack Compose to empower devs to do their best work

How Compose enables Airbnb to create better host and guest experiences

Since 2007, Airbnb has grown to connect more than 4 million hosts with more than 1 billion guests across the globe. One of the reasons behind the app’s success is that its developers aim to achieve engineering excellence by focusing on two main principles: using technology that sparks innovative development and empowering the engineers behind the work.

Jetpack Compose, Android’s modern UI-building toolkit, directly supports both of Airbnb’s development principles. Compose provided a solid foundation for adaptable, quality engineering and reduced boilerplate code, so developers could focus on delivering a great user experience — and advance their two-fold pursuit of engineering excellence.

Image with Airbnb tech lead

Airbnb started testing Compose in 2020 when it was in developer preview. As an early adopter, the Airbnb team was eager to use the various new features and simplify their workflow. Now, having gained confidence using Compose in production, Airbnb engineers continue to be satisfied with how it has improved their development process.

Equipping engineers for success

Compose’s deterministic testing helped ensure Airbnb’s engineers had tight control over the UI tests they ran and eliminated common flakiness, thereby strengthening their confidence in the quality of every part of their app and the user experiences they were creating. Engineers can now also use Compose to test animations they previously couldn't.
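
To illustrate the idea (not Airbnb’s actual test code), here is a minimal sketch of a deterministic animation test using Compose’s virtual-clock test APIs; FadeInLabel is a hypothetical composable defined inline:

```kotlin
import androidx.compose.animation.core.animateFloatAsState
import androidx.compose.animation.core.tween
import androidx.compose.material.Text
import androidx.compose.runtime.*
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.alpha
import androidx.compose.ui.test.assertIsDisplayed
import androidx.compose.ui.test.junit4.createComposeRule
import androidx.compose.ui.test.onNodeWithText
import org.junit.Rule
import org.junit.Test

// Hypothetical composable under test: fades its text in over 500 ms.
@Composable
fun FadeInLabel(text: String) {
    var visible by remember { mutableStateOf(false) }
    val alpha by animateFloatAsState(if (visible) 1f else 0f, tween(500))
    LaunchedEffect(Unit) { visible = true }
    Text(text, modifier = Modifier.alpha(alpha))
}

class FadeInLabelTest {
    @get:Rule
    val composeTestRule = createComposeRule()

    @Test
    fun label_isDisplayedAfterFadeIn() {
        // Pause the virtual clock so the test, not wall time, drives frames.
        composeTestRule.mainClock.autoAdvance = false
        composeTestRule.setContent { FadeInLabel("Welcome") }

        // Advance just past the 500 ms animation; no real time elapses,
        // which is what removes the flakiness of sleep-based UI tests.
        composeTestRule.mainClock.advanceTimeBy(600L)

        composeTestRule.onNodeWithText("Welcome").assertIsDisplayed()
    }
}
```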

Similarly, Airbnb developers used Compose to add automated screenshot tests to their codebase. Because they didn’t need to write the code for screenshot testing, engineers could go straight into using it to catch bugs and regressions. This gave them more time to review and guarantee feature functionality and UI appearance across a variety of devices.

Compose is great to use alongside Views. This interoperability made it easy for Airbnb engineers to onboard and test the new UI toolkit at their own pace, so they were able to experience the benefits of Compose without having to migrate entire features.
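
For teams considering a similar incremental adoption, a minimal interop sketch might look like the following, with ComposeView hosting one migrated piece of UI inside an otherwise View-based screen; ProfileFragment and GreetingCard are illustrative names:

```kotlin
import android.os.Bundle
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.platform.ComposeView
import androidx.fragment.app.Fragment

class ProfileFragment : Fragment() {
    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View = ComposeView(requireContext()).apply {
        // Only this piece of UI is migrated; the rest of the screen
        // (and the surrounding navigation) can stay in the View system.
        setContent { GreetingCard(name = "Host") }
    }
}

@Composable
fun GreetingCard(name: String) {
    Text(text = "Welcome back, $name!")
}
```

The same mechanism works in the other direction via AndroidView, which embeds an existing View inside a composable.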

These engineering improvements gave them the solid technical foundations they needed to serve users in fresh and improved ways.

Engineering efficiencies improve user experiences

Airbnb keeps hosts and guests at the heart of their decisions. The engineering team was excited to adopt Compose when they learned about how it would enable them to more easily and efficiently produce UI, resulting in better experiences for their end users.

Because Compose made Airbnb’s features require significantly less code to write and manage, the Airbnb team boosted their efficiency. All of this meant the team could focus its energy on executing the complex tasks involved in developing the innovative features that could best serve users.

Because their features now require less code, the Airbnb team will be able to slow the growth of their app size in the long run. Providing a smaller app is important to Airbnb as an organization with users across the globe that looks to ensure all hosts and guests can easily download and access their app — especially those with older devices or logging on from countries with high data costs.

Using Compose’s engineering enhancements, the Airbnb team was able to put user needs first.

Improve developer productivity with Compose

Compose simplified UI development to allow Airbnb engineers the freedom to focus on more dynamic and innovative features that benefit the app’s hosts and guests.

Learn how you can improve your team’s productivity with Jetpack Compose.

Set up SSO profiles for multiple third-party identity providers with the Multi-IdP SSO beta launch

What’s changing 

For over a decade, we have given admins the ability to configure authentication through a third-party identity provider. In 2021, we expanded this capability by making it possible to choose between third-party identity provider and Google authentication for specific groups or organizational units (OUs).


Now, you can further customize authentication by setting up single sign-on (SSO) profiles for multiple identity providers and then configuring authentication for each group or OU. This feature is available beginning today as an open beta, which means you can use it without enrolling in a specific beta program.


You can now set up SSO profiles for multiple third-party identity providers

Who’s impacted

Admins

Why you’d use it

Currently, you can configure SSO with a third-party identity provider to apply to your entire domain and then require a subset of your users, such as vendors or contractors, to authenticate with Google instead. However, if you have more than one identity provider, you might require greater customization of authentication options. For example, your company might be migrating from one provider to another, or it might have acquired another company that uses a different provider.


The Multi-IdP SSO beta lets you set up SSO profiles for each of your third-party identity providers, giving you the flexibility to specify the authentication method for various users in your organization as needed.

Getting started

  • Admins: In the Admin console, navigate to Security > Settings > Set up single sign-on (SSO) with a third party IdP > Manage SSO Profile assignments. Visit the Help Center to learn more about setting up SSO for your organization.


Go to the Security settings to set up SSO profiles for third-party identity providers

  • End users: There is no end user setting for this feature.

Rollout pace

  • This feature is available now for all users.


Availability

  • Available to Google Workspace Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, Frontline, and Nonprofits, as well as legacy G Suite Basic and Business customers
  • Available to all Cloud Identity customers
  • Not available to Google Workspace Essentials customers
  • Not available to users with personal Google Accounts

Dev Channel Update for Desktop

The Dev channel has been updated to 103.0.5056.0 for Linux and Mac, and 103.0.5057.3 for Windows.

A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Prudhvikumar Bommana

Google Chrome

Stable Channel Update for Desktop

The Stable channel has been updated to 101.0.4951.67 for Windows, which will roll out over the coming days/weeks.

A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Prudhvikumar Bommana
Google Chrome

Challenges in Multi-objective Optimization for Automatic Wireless Network Planning

Economics, combinatorics, physics, and signal processing conspire to make it difficult to design, build, and operate high-quality, cost-effective wireless networks. The radio transceivers that communicate with our mobile phones, the equipment that supports them (such as power and wired networking), and the physical space they occupy are all expensive, so it’s important to be judicious in choosing sites for new transceivers. Even when the set of available sites is limited, there are exponentially many possible networks that can be built. For example, given only 50 sites, there are 2⁵⁰ (over a million billion) possibilities!

Further complicating things, for every location where service is needed, one must know which transceiver provides the strongest signal and how strong it is. However, the physical characteristics of radio propagation in an environment containing buildings, hills, foliage, and other clutter are incredibly complex, so accurate predictions require sophisticated, computationally-intensive models. Building all possible sites would yield the best coverage and capacity, but even if this were not prohibitively expensive, it would create unacceptable interference among nearby transceivers. Balancing these trade-offs is a core mathematical difficulty.

The goal of wireless network planning is to decide where to place new transceivers to maximize coverage and capacity while minimizing cost and interference. Building an automatic network planning system (a.k.a., auto-planner) that quickly solves national-scale problems at fine-grained resolution without compromising solution quality has been among the most important and difficult open challenges in telecom research for decades.

To address these issues, we are piloting network planning tools built using detailed geometric models derived from high-resolution geographic data, which feed into radio propagation models powered by distributed computing. This system provides fast, high-accuracy predictions of signal strength. Our optimization algorithms then intelligently sift through the exponential space of possible networks to output a small menu of candidate networks that each achieve different desirable trade-offs among cost, coverage, and interference, while ensuring enough capacity to meet demand.

Example auto-planning project in Charlotte, NC. Blue dots denote selected candidate sites. The heat map indicates signal strength from the propagation engine. The inset emphasizes the non-isotropic path loss in downtown.

Radio Propagation
The propagation of radio waves near Earth's surface is complicated. Like ripples in a pond, they decay with distance traveled, but they can also penetrate, bounce off, or bend around obstacles, further weakening the signal. Computing radio wave attenuation across a real-world landscape (called path loss) is a hybrid process combining traditional physics-based calculations with learned corrections accounting for obstruction, diffraction, reflection, and scattering of the signal by clutter (e.g., trees and buildings).
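
For intuition, the physics-based part of such a calculation typically starts from the textbook free-space path loss, with the learned clutter corrections layered on top. A minimal sketch of that baseline (not the production model):

```kotlin
import kotlin.math.log10

// Textbook free-space path loss (FSPL) in dB for distance in km and
// frequency in MHz: FSPL = 20*log10(d) + 20*log10(f) + 32.44.
// Real engines (like the one described here) add obstruction, diffraction,
// reflection, and learned clutter corrections on top of a baseline like this.
fun freeSpacePathLossDb(distanceKm: Double, frequencyMhz: Double): Double =
    20 * log10(distanceKm) + 20 * log10(frequencyMhz) + 32.44

fun main() {
    // A handset ~2 km from a 2600 MHz transceiver, ignoring all clutter.
    println("FSPL = %.1f dB".format(freeSpacePathLossDb(2.0, 2600.0)))
}
```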

We have developed a radio propagation modeling engine that leverages the same high-res geodata that powers Google Earth, Maps and Street View to map the 3D distribution of vegetation and buildings. While accounting for signal origin, frequency, broadcast strength, etc., we train signal correction models using extensive real-world measurements, which account for diverse propagation environments — from flat to hilly terrain and from dense urban to sparse rural areas.

While such hybrid approaches are common, using detailed geodata enables accurate path loss predictions below one-meter resolution. Our propagation engine provides fast point-to-point path loss calculations and scales massively via distributed computation. For instance, computing coverage for 25,000 transceivers scattered across the continental United States can be done at 4-meter resolution in only 1.5 hours, using 1,000 CPU cores.

Photorealistic 3D model in Google Earth (top-left) and corresponding clutter height model (top-right). Path profile through buildings and trees from a source to destination in the clutter model (bottom). Gray denotes buildings and green denotes trees.

Auto-Planning Inputs
Once accurate coverage estimates are available, we can use them to optimize network planning, for example, deciding where to place hundreds of new sites to maximize network quality. The auto-planning solver addresses large-scale combinatorial optimization problems such as these, using a fast, robust, scalable approach.

Formally, an auto-planning input instance contains a set of demand points (usually a square grid) where service is to be provided, a set of candidate transceiver sites, and predicted signal strengths from candidate sites to demand points (supplied by the propagation model). Each demand point includes a demand quantity (e.g., estimated from the population of wireless users), and each site includes a cost and capacity. Signal strengths below some threshold are omitted. Finally, the input may include an overall cost budget.

Data Summarization for Large Instances
Auto-planning inputs can be huge, not just because of the number of candidate sites (tens of thousands) and demand points (billions), but also because they include signal strengths to all demand points from all nearby candidate sites. Simple downsampling is insufficient because population density may vary widely over a given region. Therefore, we apply methods like priority sampling to shrink the data. This technique produces a low-variance, unbiased estimate of the original data, preserving an accurate view of the network traffic and interference statistics, and shrinking the input data enough that a city-size instance fits into memory on one machine.
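
As a sketch of the idea (with illustrative types, not the production implementation), priority sampling keeps k weighted demand points and gives each survivor an unbiased weight estimate:

```kotlin
import kotlin.random.Random

data class DemandPoint(val id: Long, val demand: Double)

// Priority sampling (Duffield-Lund-Thorup): draw u_i ~ Uniform(0, 1],
// assign priority q_i = w_i / u_i, keep the k largest priorities, and set
// each kept item's estimated weight to max(w_i, tau), where tau is the
// (k+1)-th largest priority. The estimates are unbiased.
fun prioritySample(
    points: List<DemandPoint>,
    k: Int,
    rng: Random = Random
): List<Pair<DemandPoint, Double>> {
    require(k < points.size)
    val prioritized = points.map { p ->
        val u = 1.0 - rng.nextDouble() // in (0, 1], avoids division by zero
        p to p.demand / u
    }.sortedByDescending { it.second }

    val tau = prioritized[k].second // (k+1)-th largest priority
    return prioritized.take(k).map { (p, _) -> p to maxOf(p.demand, tau) }
}
```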

Multi-objective Optimization via Local Search
Combinatorial optimization remains a difficult task, so we created a domain-specific local search algorithm to optimize network quality. The local search algorithmic paradigm is widely applied to address computationally-hard optimization problems. Such algorithms move from one solution to another through a search space of candidate solutions by applying small local changes, stopping at a time limit or when the solution is locally optimal. To evaluate the quality of a candidate network, we combine the different objective functions into a single one, as described in the following section.
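
In skeleton form, the paradigm looks roughly like this sketch, where the neighborhood and scoring functions stand in for the domain-specific pieces described next:

```kotlin
// Generic local search skeleton: repeatedly move to the best improving
// neighbor, stopping at a time limit or at a local optimum. The neighbors
// and score functions are placeholders for the domain-specific logic.
fun <S> localSearch(
    start: S,
    neighbors: (S) -> Sequence<S>,
    score: (S) -> Double,
    deadlineMillis: Long
): S {
    var current = start
    val deadline = System.currentTimeMillis() + deadlineMillis
    while (System.currentTimeMillis() < deadline) {
        val best = neighbors(current).maxByOrNull(score) ?: break
        if (score(best) <= score(current)) break // local optimum reached
        current = best
    }
    return current
}
```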

The number of local steps to reach a local optimum, number of candidate moves we evaluate per step, and time to evaluate each candidate can all be large when dealing with realistic networks. To achieve a high-quality algorithm that finishes within hours (rather than days), we must address each of these components. Fast candidate evaluation benefits greatly from dynamic data structures that maintain the mapping between each demand point and the site in the candidate solution that provides the strongest signal to it. We update this “strongest-signal” map efficiently as the candidate solution evolves during local search. The following observations help limit both the number of steps to convergence and evaluations per step.
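
A simplified sketch of that bookkeeping (illustrative types; a production version would add further indexing):

```kotlin
// Maintains, for each demand point, the selected site with the strongest
// signal, updated incrementally as local search adds and removes sites.
class StrongestSignalMap(
    // signal[site] maps each reachable demand point to predicted strength;
    // entries below the reporting threshold are simply absent.
    private val signal: Map<Int, Map<Int, Double>>
) {
    private val selected = mutableSetOf<Int>()
    private val best = HashMap<Int, Pair<Int, Double>>() // demand -> (site, strength)

    fun addSite(site: Int) {
        selected += site
        // An add only touches the demand points this site can reach.
        for ((demand, strength) in signal.getValue(site)) {
            val cur = best[demand]
            if (cur == null || strength > cur.second) best[demand] = site to strength
        }
    }

    fun removeSite(site: Int) {
        selected -= site
        for (demand in signal.getValue(site).keys) {
            if (best[demand]?.first != site) continue
            // Rescan only the demand points this site was actually serving.
            best.remove(demand)
            for (s in selected) {
                val strength = signal.getValue(s)[demand] ?: continue
                val cur = best[demand]
                if (cur == null || strength > cur.second) best[demand] = s to strength
            }
        }
    }

    fun strongestSiteFor(demand: Int): Int? = best[demand]?.first
}
```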

Bipartite graph representing candidate sites (left) and demand points (right). Selected sites are circled in red, and each demand point is assigned to its strongest available connection. The topmost demand point has no service because the only site that can reach it was not selected. The third and fourth demand points from the top may have high interference if the signal strengths attached to their gray edges are only slightly lower than the ones on their red edges. The bottommost site has high congestion because many demand points get their service from that site, possibly exceeding its capacity.

Selecting two nearby sites is usually not ideal because they interfere. Our algorithm explicitly forbids such pairs of sites, thereby steering the search toward better solutions while greatly reducing the number of moves considered per step. We identify pairs of forbidden sites based on the demand points they cover, as measured by the weighted Jaccard index. This captures their functional proximity much better than simple geographic distance does, especially in urban or hilly areas where radio propagation is highly non-isotropic.
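
Concretely, the weighted Jaccard index compares two sites’ per-demand-point coverage as the ratio of elementwise minima to elementwise maxima; a small sketch, with an illustrative threshold:

```kotlin
import kotlin.math.max
import kotlin.math.min

// Weighted Jaccard index over coverage maps (demand point -> coverage
// weight): J(a, b) = sum(min(a_i, b_i)) / sum(max(a_i, b_i)).
fun weightedJaccard(a: Map<Int, Double>, b: Map<Int, Double>): Double {
    var minSum = 0.0
    var maxSum = 0.0
    for (demand in a.keys union b.keys) {
        val x = a[demand] ?: 0.0
        val y = b[demand] ?: 0.0
        minSum += min(x, y)
        maxSum += max(x, y)
    }
    return if (maxSum == 0.0) 0.0 else minSum / maxSum
}

// Sites whose coverage overlaps too much are functionally redundant and
// likely to interfere; 0.5 is an illustrative cutoff, not the tuned value.
fun isForbiddenPair(a: Map<Int, Double>, b: Map<Int, Double>): Boolean =
    weightedJaccard(a, b) > 0.5
```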

Breaking the local search into epochs also helps. The first epoch mostly adds sites to increase the coverage area while avoiding forbidden pairs. As we approach the cost budget, we begin a second epoch that includes swap moves between forbidden pairs to fine-tune the interference. This restriction limits the number of candidate moves per step, while focusing on those that improve interference with less change to coverage.

Three candidate local search moves. Red circles indicate selected sites and the orange edge indicates a forbidden pair.

Outputting a Diverse Set of Good Solutions
As mentioned before, auto-planning must balance three competing objectives: maximizing coverage, while minimizing interference and capacity violations, subject to a cost budget. There is no single correct tradeoff, so our algorithm delegates the final decision to the user by providing a small menu of candidate networks with different emphases. We apply a multiplier to each objective and optimize the sum; raising the multiplier for a component guides the algorithm to emphasize it. We perform a grid search over multipliers and budgets, generating a large number of solutions, filter out any that are worse than another solution along all four components (including cost), and finally select a small subset that represents different tradeoffs.
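
A sketch of the scalarized objective and the dominance filter (field names and weights are illustrative):

```kotlin
// One candidate network's metrics: coverage is better when larger;
// interference, capacity violation, and cost are better when smaller.
data class Solution(
    val coverage: Double,
    val interference: Double,
    val capacityViolation: Double,
    val cost: Double
)

// Scalarized objective used during search; the multipliers steer emphasis.
fun objective(s: Solution, wCov: Double, wInt: Double, wCap: Double): Double =
    wCov * s.coverage - wInt * s.interference - wCap * s.capacityViolation

// a dominates b if it is at least as good on every component and differs.
fun dominates(a: Solution, b: Solution): Boolean =
    a.coverage >= b.coverage &&
        a.interference <= b.interference &&
        a.capacityViolation <= b.capacityViolation &&
        a.cost <= b.cost &&
        a != b

// Keep only non-dominated solutions (the Pareto frontier of the menu).
fun paretoFilter(solutions: List<Solution>): List<Solution> =
    solutions.filter { s -> solutions.none { other -> dominates(other, s) } }
```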

Menu of candidate solutions, one per row, displaying metrics. Clicking on a solution turns the selected sites pink and displays a plot of the interference distribution across covered area and demand. Sites not selected are blue.

Conclusion
We described our efforts to address the most vexing challenges facing telecom network operators. Using combinatorial optimization in concert with geospatial and radio propagation modeling, we built a scalable auto-planner for wireless telecommunication networks. We are actively exploring how to expand these capabilities to best meet the needs of our customers. Stay tuned!

For questions and other inquiries, please reach out to [email protected].

Acknowledgements
These technological advances were enabled by the tireless work of our collaborators: Aaron Archer, Serge Barbosa Da Torre, Imad Fattouch, Danny Liberty, Pishoy Maksy, Zifei Tong, and Mat Varghese. Special thanks to Corinna Cortes, Mazin Gilbert, Rob Katcher, Michael Purdy, Bea Sebastian, Dave Vadasz, Josh Williams, and Aaron Yonas, along with Serge and especially Aaron Archer for their assistance with this blog post.

Source: Google AI Blog


Stadia Savepoint: April updates

It’s time for another round of our Savepoint series, where we recap the new games, features and updates available on Stadia.

A video recapping Stadia's highlights from April 2022

In April, players took a swing at the leaderboard in Golf With Your Friends. Meanwhile, those who wanted a bit more action fought the undead in House of the Dead: REMAKE, a callback to the original 1997 arcade classic. Like any game on Stadia, players could jump into these titles right away, without waiting for downloads or installs.

Active Stadia Pro subscribers enjoyed four additions to the Pro games library, including two new titles that launched on Stadia — World War Z: Aftermath and City Legends - The Curse of the Crimson Shadow. Pro subscribers could also claim Chicken Police - Paint it RED! and Ys IX: Monstrum Nox at no cost to add to their growing library of titles. It’s easy to try out new games like these in the Pro library, especially since creating a new Stadia account includes a one-month trial of Stadia Pro.

We’re continuing to make it easier for players to try Stadia in seconds, without having to create an account. In April, we doubled the number of timed trials for full games on Stadia, adding seven new trials — including World War Z: Aftermath, HUMANKIND and DRAGON QUEST XI S: Echoes of an Elusive Age. You can now choose from 13 trials to play, all at no cost.

A GIF shows a user progressing through black Stadia menus and entering a blue gameplay environment in Risk of Rain 2, controlling a red character as they attack blue enemies firing orange fireballs.

Stadia feature updates

  • Store game page refresh: Check out all content bundles, sales, available trials and add-on content for Stadia store games with an updated look on web and mobile web.
The Stadia store page for Ys IX: Monstrum Nox shows a trial, bundled content, and other actions for users to try.
  • Public parties game search: Create a public party for any multiplayer game when playing with a new game search bar on web.

Stadia Pro updates

April content launches on Stadia:

Stadia announcements in April:

  • DEATHRUN TV
  • Five Nights at Freddy’s: Security Breach

As always, we’ll be back next month to share another recap. In the meantime, keep an eye on the Stadia Community Blog, Facebook, YouTube and Twitter for the latest on new games, features and updates.

Now in Android – a new, open source, real-world sample app

Posted by Paris Hsu, Product & Design, Android and Don Turner, Developer Relations Engineer, Android

Now in Android Splash logo

The Now in Android app is now on GitHub!

For two years, 'Now in Android' has been a popular blog and YouTube series, providing you with the latest and greatest developer news from the Android team. Starting today, you can check out the alpha version of the Now in Android app on GitHub!

The app has two goals:

Firstly, it showcases best practices, opinionated designs, and solutions to complex real-world problems which other sample apps don’t handle. It does so with an open source implementation of a real-world app.

Secondly, it helps you (the developer) keep up to date with the areas of Android development which interest you most. It is a working app planned for publication on the Play Store.

Now in Android app screen designs

For this first alpha release, the Now in Android app includes:

As well as these features, we are also documenting the learning journeys that led us to certain decisions in the app’s design and implementation. Check out our first journey on the app’s Architecture here.

The Now in Android screens adapt based on device screen size

Since this is an alpha release, we expect that there will be bugs and missing features, and we would greatly appreciate your feedback. We have some exciting features planned, such as user authentication and loading data from a real backend. We can’t wait for you to check out the app and let us know what you think!

Finally, if you want to learn about the tools we used to build the app and how we target multiple screen sizes, check out these talks from this year's Google I/O:

Shared success in building a safer open source community

Today we joined the Open Source Security Foundation (OpenSSF), Linux Foundation and industry leaders for a meeting to continue progressing the open source software security initiatives discussed during January’s White House Summit on Open Source Security. During this meeting, Google announced the creation of its new “Open Source Maintenance Crew” — a dedicated staff of Google engineers who will work closely with upstream maintainers on improving the security of critical open source projects. In addition to this initiative, we contributed ideas and participated in discussions on improving the security and trustworthiness of open source software.

Amid all this momentum and progress, it is important to take stock of how far we’ve come as a community over the past year and a half. In this post, we provide an update on some major milestones and projects that have launched, and look toward the future and the work that still needs to be done.

Know, Prevent, Fix

A little over a year ago we published Know, Prevent, Fix, which laid out a framework for how the software industry could address vulnerabilities in open source software. At the time, there was a growing interest in the topic and the hope was to generate momentum in the cause of advancing and improving software supply-chain security.

The landscape has changed greatly since then:

  • Prominent attacks and vulnerabilities in critical open source libraries such as Log4j and Codecov made headline news, bringing a new level of awareness to the issue and unifying the industry to address the problem.
  • The US government formalized the push for higher security standards in the May 2021 Executive Order on Cybersecurity. The release of the Secure Software Development Framework, a set of guidelines for national security standards on software development, sparked an industry-wide discussion about how to implement them.
  • Last August, technology leaders including Google, Apple, IBM, Microsoft, and Amazon invested in improving cybersecurity — and Google alone pledged $10 billion over the next five years to strengthen cybersecurity, including $100 million to support third-party foundations, like OpenSSF, that manage open source security priorities and help fix vulnerabilities.

In light of these changes, the Know, Prevent, Fix framework proved prescient: beyond just the increased discussion about open source security, we’re witnessing real progress in the industry to act on those discussions. In particular, the OpenSSF has become a community town hall for driving security engineering efforts, discussions, and industry-wide collaboration.

These successes have also surfaced new challenges, though, and we believe the next step is to increase accessibility. Security tools should be more easily adopted into common developer workflows, more integrated across the ecosystem, and simpler to connect into projects. Underlying all of this is a need to streamline the process of matching projects with available funds and resources to enable security improvements.

This follow-up blog post discusses Google’s efforts in collaboration with the open source community to make progress on security goals in the past year, the lessons learned and how the industry can build on this momentum.

Know

Our goals for “Know” were to capture more precise data about vulnerabilities, establish a standard schema to track vulnerabilities across databases, and create tooling for better tracking of dependencies.

Over the past year the community’s investment in Open Source Vulnerabilities (OSV) has resulted in a new vulnerability format. The format was developed and adopted by several open source ecosystems (Python, Rust, Go), as well as vulnerability databases such as GitHub’s Security Advisories (GHSA) and the Global Security Database. Google also worked closely with MITRE on the new CVE 5.0 JSON schema to simplify future interoperability. OSV.dev also supports a searchable vulnerability database that, thanks to the standardized format, aggregates vulnerabilities from all other databases into one easily searched location.
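
As an example of how this aggregated database can be consumed, the following sketch queries OSV.dev’s public v1 API for advisories affecting a known-vulnerable Log4j release:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Query OSV.dev's v1 API for vulnerabilities affecting a specific package
// version. Log4j 2.14.0 (affected by Log4Shell) is used as the example.
fun main() {
    val body = """
        {"version": "2.14.0",
         "package": {"name": "org.apache.logging.log4j:log4j-core",
                     "ecosystem": "Maven"}}
    """.trimIndent()

    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://api.osv.dev/v1/query"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())

    // Each entry in the returned "vulns" array is an advisory in the
    // standardized OSV format discussed above.
    println(response.body())
}
```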

During the Log4j vulnerability response, the Google-supported Open Source Insights project helped the community understand the impact of the vulnerability. This project analyzes open source packages and provides detailed graphs of dependencies and their properties. With this information, developers can understand how their software is put together and the consequences of changes in their dependencies—which, as Log4j showed, can be severe when affected dependencies are many layers deep in the dependency graph. Today, we’re also making the data powering Open Source Insights available as a public Google Cloud Dataset.

The OSV project showed that connecting a CVE to the vulnerability patch development workflow can be difficult without precise vulnerability metadata. It will take cooperation across disparate development communities to reap the full benefits of the progress, but with collaboration OSV can scale quickly across language and project ecosystems.

We believe the next major goal is to lower the barrier of entry for users. Integrating with developer tools and processes will bring high quality information to where it is most useful. For instance, OSV findings can be everywhere from code editors (e.g., when deciding whether to include a library) to deployment (e.g., stopping vulnerable workloads from deploying).

Prevent

“Prevent” was conceived to help users understand the risks of new dependencies so they can make informed decisions about the packages and components they consume.

We’ve seen strong community involvement in the prevention of vulnerabilities, particularly in the Security Scorecards project. Scorecards evaluates a project’s adherence to security best practices and assigns scores that developers can consult before consuming a dependency. Users can choose to avoid projects that, for example, don’t use branch protection settings or employ dangerous workflows (which make projects vulnerable to malicious commits), and gravitate to projects that follow strong security practices like signing their releases and using fuzzing. Thanks to contributions from Cisco, Datto and several other open source contributors, there are now regular Scorecard scans of 1 million projects, and Scorecards has developed from a command line tool into an automated GitHub Action that runs after any change to a GitHub project. More organizations are adopting Scorecards, and the Scorecard GitHub Action has been installed on over 1,000 projects, with continued growth. With increased adoption, overall security will improve across entire ecosystems.

Additionally, Sigstore is helping prevent attacks by creating new tools for signing, verifying and protecting software. Recently, Kubernetes announced that it is using sigstore to sign its releases, showing that artifact signing at a large scale is now within reach. As adoption expands, we can expect stronger links between published source code and the binaries that run it.

Community collaborators like Citi, Chainguard, DataDog, VMWare and others have actively contributed to the OpenSSF’s SLSA framework. This project is based on Google’s internal Binary Authorization for Borg (BAB), which for more than a decade has been mitigating the risk of source and production attacks at Google. SLSA lays out an actionable path for organizations to increase their overall software supply-chain security by providing step-by-step guidelines and practical goals for protecting source and build system integrity. The SLSA framework addresses a limitation of Software Bills of Materials (SBOMs), which on their own do not provide sufficient information about integrity and provenance. An SBOM created using SLSA provenance and metadata is more complete and addresses both source code and build threat vectors. Using SLSA may also help users implement Secure Software Development Framework (SSDF) requirements.

Continued improvements to the OSS-Fuzz service for open source developers have helped get over 2300 vulnerabilities fixed across 500+ projects in the past year. Google has also been heavily investing in expanding the scope of fuzzing through adding support for new languages such as Java and Swift and developing bug detectors to find issues like Log4shell.
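
To make the new Java support concrete, here is a minimal sketch of a JVM fuzz target in the style OSS-Fuzz runs through Jazzer, its Java fuzzing engine; parseConfig is a hypothetical stand-in for real library code under test:

```kotlin
import com.code_intelligence.jazzer.api.FuzzedDataProvider

// Hypothetical library entry point standing in for the code under test.
fun parseConfig(config: String) {
    require(config.startsWith("[")) { "config must start with '['" }
}

object ConfigParserFuzzer {
    // Jazzer repeatedly invokes this entry point with generated inputs.
    @JvmStatic
    fun fuzzerTestOneInput(data: FuzzedDataProvider) {
        val input = data.consumeRemainingAsString()
        try {
            parseConfig(input) // exercise the code under test
        } catch (expected: IllegalArgumentException) {
            // Rejecting malformed input is fine; crashes, hangs, or
            // unexpected exceptions are what the fuzzer reports as findings.
        }
    }
}
```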

Through the Linux Kernel Self-Protection Project, Google has been providing a steady stream of changes to overhaul internal kernel APIs so that the compiler can detect and stop buffer overflows in fragile areas that have seen repeated vulnerabilities. For everyone in the ecosystem staying current on Linux kernel versions, this removes a large class of flaws that could lead to security exploits.

Looking ahead, this area’s rapid growth highlights the community’s concern about integrity in software supply chains. Users are searching for solutions that they can trust across ecosystems, such as provenance metadata that connects deployed software to its original source code. Additionally, we expect increased scrutiny of development processes to ensure that software is built in the most secure way possible.

The next goals for open source software security should involve broad adoption of best practices and scalability. Increasing the use of these tools will multiply the positive effects as more projects become secured, but adoption needs to happen in a scalable way across ecosystems (e.g., via the OpenSSF Securing Package Repositories Working Group focused on improving security in centralized package managers). Education will be a driving force to speed the shift from project-by-project adoption to broadscale ecosystem conversion: greater awareness will bring greater momentum for change.

Fix

“Fix” was conceived to help users understand their options to remove vulnerabilities, enable notifications that help speed repairs, and fix widely used versions of affected software, not just the most recent versions.

Google supported open source innovation, security, collaboration, and sustainability through our programs and services by giving $15 million to open source last year. This includes $7.5 million to targeted security efforts in areas such as supply chain security, fuzzing, kernel security and critical infrastructure security. For example, $2.5 million of the security funding went to the Alpha-Omega project, which made its first grant to the Node.js foundation to strengthen its security team and support vulnerability remediation.

Other security investments include $1 million to SOS Rewards, and $300,000 to the Internet Security Research Group to improve memory safety by incorporating Rust into the Linux kernel. The remaining funding supports security audits, package signing, fuzzing, reproducible builds, infrastructure security and security research.

Beyond financial investments, Google employees contribute their hours, effort, and code to tens of thousands of open source repositories each year. One issue frequently cited by open source maintainers is limited time. Since under-maintained, critical open source components are a security risk, Google is starting a new Open Source Maintenance Crew, a dedicated staff of Google engineers who will work closely with upstream maintainers on improving the security of critical open source projects. We hope that other enterprises that rely on open source will invest in similar efforts to help accelerate security improvements in the open source ecosystem.

Up Next

The amount of progress in the past year is very encouraging: we as an industry have come together to discuss, fund, and make headway on many of the difficult problems that affect us all. The solutions are not just being talked about, but also built, refined, and applied. Now we need to magnify this progress by integrating these solutions with tooling and language ecosystems: every open source developer should have effortless access to end-to-end security by default.

Google is committed to continuing our work with the OpenSSF to achieve these goals. To learn more about the OpenSSF foundation and join its efforts, check it out here.