Tag Archives: Open source

Introducing Eclipsa Audio: immersive audio for everyone

In the real world, we hear sounds from all around us. Some sounds are ahead of us, some are to our sides, some are behind us, and - yes - some are above or below us. Spatial audio technology brings an immersive audio experience that goes beyond traditional stereo sound. It creates a 3D soundscape, making you feel like sounds are coming from all around you, not just from the left and right speakers.

Spatial audio technologies were first developed over 50 years ago, and playback has been available to consumers for over a decade, but creating spatial audio has been mostly limited to professionals in the movie or music industries. That’s why Google and Samsung are releasing Eclipsa Audio, an open source spatial audio format for everyone.


From Creation to Distribution to Experience

Eclipsa Audio is based on Immersive Audio Model and Formats (IAMF), an audio format developed by Google, Samsung, and other key contributors within the Alliance for Open Media (AOM), and released under the AOM royalty-free license. Because IAMF is open source, Eclipsa Audio files can be created by anyone using freely available audio tools, which support a wide variety of workflows:

A diagram shows three different workflows for encoding video and audio using `iamf_tools` and `ffmpeg` to create MP4 files with IAMF audio and video. Each workflow handles a different input type, including ADM-BWF, WAV files, Textproto, and Video.
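As a concrete illustration of the final muxing stage shared by the workflows in the diagram above, the sketch below assembles an ffmpeg command that combines a pre-encoded IAMF audio stream with a video track into a single MP4, copying both streams without re-encoding. The file names are hypothetical, and you would need an IAMF-capable ffmpeg build to run the printed command:

```python
# Hedged sketch: build (but don't execute) an ffmpeg argv that remuxes
# IAMF audio alongside a video track into one MP4 container.
# File names below are placeholders, not from the original post.

def build_mux_command(iamf_audio: str, video: str, output: str) -> list[str]:
    """Return an ffmpeg argv that copies both input streams into one MP4."""
    return [
        "ffmpeg",
        "-i", iamf_audio,   # IAMF audio, e.g. produced with iamf_tools
        "-i", video,        # the accompanying video track
        "-c", "copy",       # stream copy: no re-encoding, just remuxing
        output,
    ]

cmd = build_mux_command("mix.iamf", "video.mp4", "eclipsa_out.mp4")
print(" ".join(cmd))
```

Separating command construction from execution keeps the sketch testable; in practice you would pass the list to `subprocess.run`.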

An open source reference renderer [1] is freely available for standalone spatial audio playback, or you can test your Eclipsa Audio files right in your browser at the Binaural Web Demo Application.

Starting in 2025, creators will be able to upload videos with Eclipsa Audio tracks to YouTube. As the first in the industry to adopt Eclipsa Audio, Samsung is integrating the technology across its 2025 TV lineup — from the Crystal UHD series to the premium flagship Neo QLED 8K models — to ensure that consumers who want to experience this advanced technology can choose from a wide range of options. Google and Samsung will be launching a certification and brand licensing program in 2025 to provide quality assurance to manufacturers and consumers for products that support Eclipsa Audio.


Next Steps

To simplify the creation of Eclipsa Audio files, later this spring we will release a free Eclipsa Audio plugin for AVID Pro Tools Digital Audio Workstation. We also plan to bring native Eclipsa Audio playback to the Google Chrome browser as well as to TVs and Soundbars from multiple manufacturers later in 2025. Eclipsa Audio support will also arrive in an upcoming Android AOSP release; stay tuned for more information.

We believe that Eclipsa Audio has the potential to change the way we experience sound. We are excited to see how it is used to create new and innovative audio experiences.

By Matt Frost, Jani Huoponen, Jan Skoglund, Roshan Baliga – the Open Audio team

[1] Special thanks to Arm for providing high performance optimizations to the IAMF reference software.

Google Summer of Code 2025 is here!

Level up your coding skills in Google Summer of Code 2025

Get ready for the 2025 Google Summer of Code (GSoC) program! We started this adventure in 2005 and over the past 20 years we have welcomed over 21,000 new contributors to open source through the program under the guidance of 20,000+ mentors from over 1,000 open source organizations. Check out the video below to learn more about the impact GSoC has made over the last two decades.

Our mission since day one has been to foster the next generation of open source contributors. Participants are immersed in a supportive environment where they spend 3+ months collaborating on real-world projects alongside experienced mentors. This deep dive into open source not only builds valuable coding skills, but cultivates a strong understanding of community dynamics and best practices, empowering them to become impactful contributors.

2024 was a milestone year with 1,127 GSoC contributors completing their projects with 195 open source organizations. We hope to surpass these numbers in 2025!


Be a GSoC 2025 mentoring organization

Application period: January 27 – February 11

Interested organizations can learn more by visiting our website; there you’ll find supportive materials to get started.

An invaluable resource is our Mentor Guide, which is a quick way to familiarize yourself with the GSoC program. You’ll find tips on how to engage your community, suggestions on how to present achievable project ideas, and guidance on applying these to your communities.

We are happy to welcome organizations new to GSoC each year. Typically, 20-30 organizations join us for the first time, and we encourage you to apply. In 2025, we're particularly excited to expand our reach in the Security and Machine Learning domains.


Learn more about being a GSoC mentoring organization

Join us in our first information session of the year:

  • Organization Applications Tips on Tuesday, January 22, 17:00 UTC

Be a GSoC 2025 contributor

Application period: March 24 - April 8

If you are a beginner to open source development or a student interested in learning about open source, this is your chance to get involved and gain experience on real-world software development projects. Follow these quick steps to set yourself up for success:


Learn more about being a GSoC contributor

Join one of our upcoming information sessions:

  • Contributor Talk #1 on Wednesday, February 19, 16:00 UTC
  • Contributor Talk #2 on Tuesday, February 25, 2:00 UTC
  • Contributor Talk #3 on Thursday, March 6, 16:00 UTC

Please help us spread the word about GSoC 2025 to your peers, family members, colleagues, universities and anyone interested in making a difference in the open source community. Join us and help shape the future of open source!

By Stephanie Taylor, Mary Radomile, and Lucila Ortiz


Kubernetes 1.32 is now available on GKE


Kubernetes 1.32 is now available in the Google Kubernetes Engine (GKE) Rapid Channel, just one week after the OSS release! For more information about the content of Kubernetes 1.32, read the official Kubernetes 1.32 Release Notes and the specific GKE 1.32 Release Notes.

This release consists of 44 enhancements. Of those enhancements, 13 have graduated to Stable, 12 are entering Beta, and 19 have entered Alpha.


Kubernetes 1.32: Key Features


Dynamic Resource Allocation graduated to beta

  • Dynamic Resource Allocation graduated to beta, enabling advanced selection, configuration, scheduling and sharing of accelerators and other devices. As a beta API, using it in GKE clusters requires opt-in. You must also deploy a DRA-compatible kubelet plugin for your devices and use the DRA API instead of the traditional extended resource API used for the existing Device Plugin.

Support for more efficient API streaming

  • The Streaming lists operation has graduated to beta and is enabled by default; the new operation supplies the initial list needed by the list + watch data access pattern over a watch stream and improves kube-apiserver stability and resource usage by enabling informers to receive a continuous data stream. See k8s blog for more information.

Recovery from volume expansion failure

  • Support for recovery from volume expansion failure graduated to beta and is enabled by default. If a user initiates an invalid volume resize, for example by specifying a new size that is too big to be satisfied by the underlying storage system, expansion of PVC will continuously be retried and fail. With this new feature, such a PVC can now be edited to request a smaller size to unblock the PVC. The PVC can be monitored by watching .status.allocatedResourceStatuses and events on the PVC.
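To make the recovery path above concrete, the sketch below builds the JSON merge patch one might apply (for example via `kubectl patch pvc <name> --type merge -p <patch>`) to lower a PVC's requested size after a failed expansion. The PVC name and sizes are hypothetical, not from the release notes:

```python
# Hedged sketch: construct the merge patch that edits a
# PersistentVolumeClaim to request a smaller storage size after an
# expansion attempt has failed. Sizes here are illustrative only.
import json

def shrink_pvc_patch(new_size: str) -> str:
    """Build a JSON merge patch lowering spec.resources.requests.storage."""
    patch = {"spec": {"resources": {"requests": {"storage": new_size}}}}
    return json.dumps(patch)

# e.g. the user mistakenly asked for 100Ti; retry with 100Gi instead
print(shrink_pvc_patch("100Gi"))
```

After applying such a patch, progress can be followed by watching `.status.allocatedResourceStatuses` and the events on the PVC, as described above.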

Job API for management by external controllers

  • Support in the Job API for the managed-by mechanism graduated to beta and is enabled by default. This enables integration with external controllers like MultiKueue.

Improved scheduling performance

  • The Kubernetes QueueingHint feature enhances scheduling throughput by preventing unnecessary scheduling retries. It’s achieved by allowing scheduler plugins to provide per-plugin callback functions that make efficient requeuing decisions.
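The real QueueingHint API lives in the kube-scheduler's Go plugin framework; the toy Python model below only illustrates the idea: each plugin registers a callback that inspects a cluster event and decides whether a previously rejected pod is now worth retrying, instead of requeuing on every event. The event shapes and field names are invented for illustration:

```python
# Conceptual model (NOT the real kube-scheduler Go API) of a
# QueueingHint callback: requeue a pod only when an event could
# actually make it schedulable.
QUEUE = "queue"   # retry scheduling this pod now
SKIP = "skip"     # this event can't help the pod; avoid a wasted retry

def node_resources_hint(event: dict, pod: dict) -> str:
    """Requeue only if a node gained enough capacity for the pod."""
    if (event["type"] == "NodeResourcesIncreased"
            and event["free_cpu"] >= pod["cpu_request"]):
        return QUEUE
    return SKIP

pod = {"name": "web-1", "cpu_request": 2}
events = [
    {"type": "NodeResourcesIncreased", "free_cpu": 1},  # still too small
    {"type": "PodDeleted", "free_cpu": 0},              # irrelevant event
    {"type": "NodeResourcesIncreased", "free_cpu": 4},  # enough CPU now
]
print([node_resources_hint(e, pod) for e in events])
```

Only the last event triggers a requeue, which is exactly the reduction in wasted scheduling retries the feature is after.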

Acknowledgements

As always, we want to thank all the Googlers that provide their time, passion, talent and leadership to keep making Kubernetes the best container orchestration platform. We would like to especially mention the Googlers who helped drive the features covered in this blog (John Belamaric, Wojciech Tyczyński, Michelle Au, Matthew Cary, Aldo Culquicondor, Tim Hockin, Maciej Skoczeń, and Michał Woźniak) and the Googlers who helped bring 1.32 to GKE in record time.

By Federico Bongiovanni, Benjamin Elder, and Sen Lu – Google Kubernetes Engine

Google Season of Docs announces results of 2024 program

Google Season of Docs is happy to announce the 2024 program results, including the project case studies.

Google Season of Docs is a grant-based program where open source organizations apply for US$5-15,000 to hire technical writers to complete documentation projects. At the end of the six-month documentation development phase, organizations submit a case study to outline the problems their documentation project was intended to solve, how they are measuring the success of their documentation project, and what they learned during the project. The case studies are publicly available and are intended to help other open source organizations learn best practices in open source documentation.

The 2024 Google Season of Docs documentation development phase began on April 10 and ended November 22, 2024. Participants in the 2024 program will also answer three follow-up surveys in 2025, in order to better track the impact of these documentation projects over time.


Key Takeaways from 2024 Google Season of Docs

Eleven different organizations participated in the 2024 program. These organizations represented a variety of open source project types, including databases, AI/ML, cloud infrastructure, programming languages, and science and medicine. The documentation projects hoped to address a variety of different problems. The most common challenges addressed by the Season of Docs projects were:

  • Project documentation disorganized
  • Potential project users having difficulty installing, using, or integrating the project
  • Project documentation outdated
  • Project documentation needs to be converted to a different tool, platform, or format
  • Potential contributors having difficulty onboarding to the project
  • Project documentation lacking for a specific key use case
  • Potential project users lack fundamental understanding of the project domain

Program participants learned a lot from their projects. These lessons are detailed in the published case studies, to help other open source organizations who are interested in taking on their own documentation projects. Some highlights include:

“Putting time and effort into your project’s infrastructure, such as communication channels and onboarding processes, is really valuable work."
“Perhaps the key piece of advice that we came away with that could be useful for other projects is to be flexible in what you set out to accomplish: what looks like the top-five items on the to-do list on day one may not be what you think is the most important at the end."
“This particular experiment with a different medium has turned out to be successful and we encourage other communities to also explore different media depending on their audience and information needs."

“Developing documentation also helped us identify ambiguities in the interface and other areas of the site design or features that needed refinement or decisions made in order to document them properly. Now, with a base site and well-established documentation workflow, we are documenting more features as they are developed."

Take a look at the participant list to see the initial project plans and case studies for all of the participating projects!


What’s next?

Stay tuned for information about Google Season of Docs 2025 – watch for posts on this blog and sign up for the announcements email list.

By Elena Spitzer and Erin McKean, Google Open Source Programs Office

BazelCon 2024: A celebration of community and the launch of Bazel 8


The Bazel community celebrated a landmark year at BazelCon 2024. With a record-breaking 330+ attendees, 125+ talk proposal submissions, and a renewed focus on community-driven development, BazelCon marked a significant step forward for the build system and its users.


BazelCon 2024: Key highlights

A cross section of the audience facing the stage at BazelCon 2024

The 8th annual build conference was held at the Computer History Museum in Mountain View, CA, on October 14 - 15, 2024. This was the first BazelCon not solely organized by Google; instead, it was organized by The Linux Foundation together with sponsors Google, BuildBuddy, EngFlow, NativeLink, AspectBuild, Gradle, Modus Create, and VirtusLab. The conference welcomed build enthusiasts from around the world to explore the latest advancements in build technologies, share learnings, and connect with each other.

The conference kicked off with an opening keynote delivered by Mícheál Ó Foghlú and Tobias Werth (Google), Alex Eagle (Aspect Build Systems), Helen Altshuler (EngFlow), and Chuck Grindel (Reveal Technology). The keynote highlighted the vital role of community contributions and charted a course for a future where Bazel thrives through shared stewardship.


Following the keynote, John Field and Tobias Werth (Engineering Managers at Google) delivered a state-of-the-union address, celebrating the year's top contributors and highlighting key achievements within the Bazel ecosystem.


Over the course of the conference, members of the Bazel community showcased their expertise and shared key insights through a series of live presentations. Some highlights include:

  • Spotify's compelling Bazel adoption journey
  • EngFlow's insightful post-mortems on remote execution
  • Explorations of cutting-edge features like BuildBuddy's "Remote Bazel”

Take a look at our playlist of BazelCon 2024 Talks at your convenience.

In addition to main stage talks, BazelCon provided ample opportunities for attendees to connect and collaborate. Birds of a Feather sessions fostered lively discussions on topics ranging from generating SBOMs using Bazel, to IDE integrations, to external dependency management, allowing community members to provide direct feedback and shape the future of Bazel. Make sure to check out the raw BazelCon '24 Birds of a Feather notes from these sessions.

BazelCon 2024 also served as the launchpad for Bazel 8, a long-term support (LTS) release that brings significant enhancements to modularity, performance, and dependency management.

Bazel 8 logo

What’s new in Bazel 8?

  • Starlark-powered modularity: Many core language rules traditionally shipped with Bazel are now Starlarkified and split into their own modules, including all Android, Java, Protocol Buffers, Python, and shell rules.
  • WORKSPACE deprecation: The legacy WORKSPACE mechanism for external dependency management is disabled by default in Bazel 8, and is slated for removal in Bazel 9. Bzlmod, the default since Bazel 7, is the recommended solution going forward.
  • Symbolic macros: Bazel 8 introduces a new way to write macros for build files, addressing many pitfalls and footguns of legacy macros. Symbolic macros offer better visibility encapsulation, type safety, and are amenable to lazy evaluation, coming in a future Bazel release.

Read the full release notes for Bazel 8.


Stay connected with the Bazel community

We extend our gratitude to everyone that contributed to the success of BazelCon 2024! We look forward to seeing you again next year.

To stay informed about the latest developments in the Bazel world, connect with us through the following channels:

We encourage you to share your Bazel projects and experiences with us at [email protected]. We're always excited to hear from you!

By Keerthana Kumar and Xudong Yang, on behalf of the Google Bazel Team

A Robust Open Ecosystem for All: Accelerating AI Infrastructure


JAX now runs on AWS Trainium: Open Source Fuels AI Innovation

Open source software is the foundation of machine learning. It accelerates innovation through an ethos of flexibility and collaboration. This philosophy drives the open development of JAX, our high-performance array computing library, as well as OpenXLA, the compiler and runtime infrastructure it relies on.

Today we're excited to highlight how this commitment to openness, together with JAX and OpenXLA's modular designs, enables seamless integration of AWS Trainium and Trainium2 accelerator chips into the JAX ecosystem. Users get more portability, more choice, and faster progress.


JAX and OpenXLA, abstraction and modularity

JAX is a Python library for high-performance, large-scale numerical computing and machine learning. Its unique compiler-oriented design makes numerical computation familiar and portable while also being accelerator-friendly and scalable. It combines a NumPy-like API with composable transformations for automatic differentiation, vectorization, parallelization, and more. Under the hood, JAX leverages the XLA compiler to optimize and scale computations over a broad set of backends.
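A minimal sketch of what that composability looks like in practice: `grad` differentiates a NumPy-style function, `vmap` vectorizes it over a batch, and `jit` compiles the result through XLA. The toy function and inputs are illustrative only:

```python
# Minimal JAX sketch: NumPy-like code plus composable transformations.
import jax
import jax.numpy as jnp

def loss(x):
    return jnp.sum(x ** 2)          # a NumPy-style scalar computation

grad_loss = jax.grad(loss)          # d(loss)/dx = 2x, derived automatically
batched = jax.vmap(grad_loss)       # map the gradient over a batch axis
fast = jax.jit(batched)             # compile the batched gradient with XLA

x = jnp.arange(3.0)                 # [0., 1., 2.]
print(fast(jnp.stack([x, x + 1])))  # gradients for two batched inputs
```

Because the transformations compose, the same `loss` runs unmodified on CPUs, GPUs, or TPUs; only the XLA backend changes.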

This abstraction layer is key to its portability: JAX presents a consistent interface while XLA optimizes performance, whether you're running on CPUs, GPUs, TPUs, or something new.

In fact, OpenXLA infrastructure is designed to be modular and extensible to new platforms. By developing a PJRT plugin and leveraging existing XLA compiler components, JAX code can target new platforms, even when scaling from a single device to thousands.


Enter AWS Trainium and Inferentia

We are excited to announce that AWS Trainium is the latest platform to embrace JAX and OpenXLA. With the JAX Neuron plugin, AWS Trainium and Inferentia can be used as native JAX devices.

This new backend demonstrates how abstraction and modularity make JAX and OpenXLA especially extensible and amenable to collaboration, even on new hardware. We're thrilled to have diverse hardware partners like AMD, Arm, Intel, Nvidia, and AWS taking advantage of JAX's portability and performance. If you're interested in bringing new platforms into the JAX and OpenXLA ecosystem, please reach out!

A multi-platform ecosystem fosters open collaboration in advancing AI infrastructure. Our goal is to drive continuous development of open standards and to accelerate progress. And if you're a machine learning developer or numerical computing user, we're excited for you to try JAX on any platform you choose.

By Matthew Johnson - Principal Scientist, with additional contributors: Aditi Joshi, Fenghui Zhang, Roy Frostig, and Carlos Araya

My review is stuck – what now?

Troubleshooting your code reviews

Google engineers contribute thousands of open source pull requests every month! With that experience, we've learned that a stuck review isn't the end of the world. Here are a few techniques for moving a change along when the frustration starts to wear on you.

You've been working on a pull request for weeks - it implements a feature you're excited about, and sure, you've hit a few bumps along the review path, but you've been dealing with them and making changes to your code, and surely all this hard work will pay off. But now, you're not sure what to do: your pull request hasn't moved forward in some time, you dread working on another revision just to be pushed back on, and you're not sure your reviewers are even on your side anymore. What happened? Your feature is so cool - how are you going to land it?

Reviews can stall for a number of reasons:

  • There may be multiple solutions to the problem you're trying to solve, but none of them are ideal - or, all of them are ideal, but it’s hard to tell which one is most ideal. Or, your reviewers don't agree on which solution is right.
  • You may not be getting any response from reviewers whatsoever. Either your previously active reviewers have disappeared, or you have had trouble attracting any in the first place.
  • The goalposts have moved; you thought you were just going to implement a feature, but then your reviewer pointed out that your tests are poor style, and in fact, the tests in that whole suite are poor style, and you need to refactor them all before moving forward. Or, you implemented it with one approach, and your reviewer asked for a different one; after a few rounds with that different approach, another reviewer suggests you try… the approach you started with in the first place!
  • The reviewer is asking you to make a change that you think is a bad idea, and you aren't convinced by their arguments. Or maybe it's the second reviewer who isn't convinced, and you aren't sure which side to take in their disagreement - and which approach to implement in your PR.
  • You think the reviewer's suggestion is a good one, but when you sit down to write the code, it's toilsome, or ugly, or downright impossible to make work.
  • The reviewer is pointing out what's wrong, but not explaining what they want instead; the reviews you are receiving aren't actionable, clear, or consistent.

These things happen all the time, especially in large projects where review consensus is a bit muddy. But you can't change the project overnight - and anyway, you just want your code to land! What can you do to get past these obstacles?


Step Back and Summarize the Discussion

As the author of your change, it's likely that you're thinking about this change all the time. You spent hours working on it; you spent hours looking through reviewer comments; you were personally impacted by the lack of this change in the codebase, because you ran into a bug or needed some functionality that didn't exist. It's easy to assume that your reviewers have the same context that you do - but they probably don't!

You can get a lot of mileage out of reminding your reviewers what the goal of your change is in the first place. It can be really helpful to take a step back and summarize the change and discussion so far for your reviewers. For the most impact, make sure you're covering these key points:


What is the goal of your pull request?

Avoid being too specific, as you can limit your options: the goal isn't to add a button that does X, the goal is to make users who need X feel less pain. If your goal is to solve a problem for your day job, make sure you explain that, too - don't just say "my company needs X," say "my company's employees use this project in this way with these scaling or resource constraints, and we need X in order to do this thing."

There will also be some goals inherent to the project. Most of the time, there is an unstated goal to keep the codebase maintainable and clean. You may need to ensure your new feature is accessible to users with disabilities. Maybe your new feature needs to perform well on low-end devices, or compile on a platform that you don't use.

Not all of these constraints will apply to every change - but some of them will be highly relevant to your work. Make sure you restate the constraints that need to be considered for this pull request.


What constraints have your reviewers stated?

During the life of the pull request until now, you've probably received feedback from at least one reviewer. What does it seem like the reviewer cares about? Beyond individual pieces of feedback, what are their core concerns? Is your reviewer asking whether your feature is at the appropriate level because they're in the middle of a long campaign to organize the project's architecture? Are they nitpicking your error messages because they care about translatability?

Try to summarize the root of the concerns your reviewers are discussing. By summarizing them in one place, it becomes pretty clear when there's a conflict; you can lay out constraints which mutually exclude each other here, and ask for help determining whether any may be worth discarding - or whether there's a way to meet both constraints that you hadn't considered yet.


What alternative approaches have you considered?

Before you wrote the code in the first place, you may have considered a couple different ways of approaching the problem. You probably discarded the paths not taken for a reason, but your reviewers may have never seen that reasoning. You may also have received feedback suggesting you reimplement your change with a different approach; you may or may not think that approach is better.

Summarize all the possible approaches that you considered, and some of their pros and cons. Even better, since you've already summarized the goals and constraints that have been uncovered so far during your review, you can weigh each approach against these criteria. Perhaps it turns out the approach you discarded early on is actually the most performant one, and performance is a higher priority than you had realized at first. The community may even chime in with an approach you hadn't considered at all, but which is much more architecturally acceptable to the project.


List assumptions and remaining open questions

Through the process of writing this recap, you may come to some conclusions. State those explicitly - don't rely on your reviewers to come to the same conclusions you did. Now you have all the reasoning out in the open to defend your point.

You may also be left with some open questions - like that two reviewers are in disagreement without a clear solution, or that you aren't sure how to meet a given constraint. List those questions, too, and ask for your co-contributors' help answering them.

Here's an example of a time the author did this on the Git project - it's not in a pull request, but the same techniques are used. Sometimes these summaries are so comprehensive that it's worth checking them into the project to remind future contributors how we arrived at certain decisions.


Pushing Back

Contributors - especially ones who are new to the project, but even long-time members - often feel like the reviewer has all the power and authority. Surely the wise veteran reviewing my code is infallible; whatever she says must be true and the best approach. But it's not so! Your reviewers are human, and they want to work together with you to come to a good solution.

Pushing back isn't always the answer. For example, if you disagree with the project's established style, or with a direction that's already been discussed and agreed upon (like avoiding adding new dependencies unless completely necessary), trying to push back against the will of the entire project is likely fruitless - especially if you're trying to get an exception for your PR. Or if the root of your disagreement is "it works on my machine, so who cares," you probably need to think a little more broadly before arguing with your reviewer who says the code isn't portable to another platform or maintainable in the long term. Reviewers and project maintainers have to live with your code for a long time; you may be only sending one patch before moving on to another project. So in matters of maintainability, it's often best to defer to the reviewer.

However, there are times when it could be the right move:

  • Your reviewer's top goal is avoiding code churn, but your top goal is a tidy end state.
  • When you try out the approach your reviewer recommended, but it is difficult for a reason they may not have anticipated.
  • When the reviewer's feedback truly doesn't have any improvements over your existing approach, and it's purely a matter of taste. (Use caution here - push back by asking for more justification, in case you missed something!)

Sometimes, after hearing pushback, your reviewer may reply pointing out something that you hadn't considered which converts you over to their way of thinking. This isn't a loss - you understand their goals better, and the quality of your PR goes up as a result. But sometimes your reviewer comes over to your side instead, and agrees that their feedback isn't actually necessary.


Lean on Reviewers for Help

Code reviews can feel a little bit like homework. You send your draft in; you get feedback on it; you make a bunch of changes on your own; rinse and repeat. But they don't have to be this way - it's much better to think of reviews as a team effort, where you and the reviewers are all pitching in to produce a high-quality code change. You're not being graded!

If implementing changes based on the feedback you received is difficult - like it seems that you'll need to do a large-scale refactor to make the function interface change you were asked for - or you're running into challenges that are hard to figure out on your own - like that one test failure doesn't make any sense - you can ask for more help! Just keep in mind the usual rules of asking for help online:

  • Use a lot of detail to describe where you're having trouble - cite the exact test that's failing or error case you're seeing.
  • Explain what you've tried so far, and what you think the issue is.
  • Include your in-progress code so that your reviewers can see how far you've gotten.
  • Be polite; you're asking for help, not demanding the author change their feedback or implement it for you.

Remember that if you need to, you can also ask your reviewer to switch to a different medium. It might be easier to debug a failing test over half an hour of instant messaging instead of over a week of GitHub comments; it might be easier to reason through a difficult API change with a video conference and a whiteboard. Your reviewer may be busy, but they're still a human, and they're invested in the success of your patch, too. (And if you don't think they're invested - did you try summarizing the issue, as described above? Make sure they understand where you're coming from!) Don't be afraid to ask for a bit more bandwidth if you think it'd be valuable.


Get a Second Opinion

When you're really, truly stuck - nobody agrees and your pushback isn't working and you met with your reviewer and they're stumped too - remember that most of the time, you and your reviewer aren't the only people working on the project. It's often possible to ask for a second opinion.

It may sound transactional, but it's okay to offer review-for-review: "Hi Alice, I know you're quite busy, but I see that you have a PR out now; I can make time to review it, but it'd help me out a lot if you can review mine, too. Bob and I are stuck because…." This can be a win for both you and Alice!

If you're stuck and can't come to a decision, you can also escalate. Escalation sounds scary, but it is a healthy part of a well-run project. You can talk to the project or subsystem maintainer, share a link to the conversation, and mention that you need help coming to a decision. Or, if your disagreement is with the maintainer, you can raise it with another trusted and experienced contributor the same way.

Sometimes you may also receive comments that feel particularly harsh or insulting, especially when they're coming from a contributor you've never spoken with face-to-face. In cases like this, it can be helpful just to get someone else to read the comment and tell you if they find it rude, too. You can ask just about anybody - your friend, your teammate at your day job - but it's best if you ask someone who has prior context with the project or with the person who made the comment. Or, if you'd rather dig, you can examine the same reviewer's comments on other contributions - you may find that there's a language barrier, or that they're incredibly active and pressed for time to reply to your review, or that they're just an equal-opportunity jerk (or not-so-equal-opportunity), and your review isn't the exception.

If you can't find another person to help, you can read through past closed and merged PRs to see the comments - do any of them apply to your PR, and perhaps the maintainer just hasn't had time to give you the same feedback? Remember, this might be your first PR in the project, but the maintainer might think, "oh, I've given this feedback a thousand times."


When to Give Up

All these techniques help put more agency back in your hands as the author of a pull request, but they're not a sure thing. You might find that no matter how many of these techniques you use, you're still stuck - or that none of these techniques seem applicable. How do you know when to quit? And when you do want to, how can you do it without burning any bridges?


What did I want out of this PR, anyway?

There are many reasons to contribute to open source, and not all of us have the same motivations. Make sure you know why you sent this change. Were you personally or professionally affected by the bug or lack of feature? Did you want to learn what it's like to contribute to open source software? Do you love the community and just feel a need to give back to it?

No matter what your goal is, there's usually a way to achieve it without landing your pull request. If you need the change, it's always an option to fork - more on that in a moment. If you want to try out contributing for the first time, but this project just doesn't seem to be working out, you can learn from the experience and try a different project instead - most projects love new contributors! If you're trying to help the community, but they don't want this change, then continuing to push it through isn't actually all that helpful.

Like when you re-summarized the review, or when you pushed back on your reviewer's opinions, think deeply about your own goals - and think whether this PR is the best way to meet them. Above all, if you're finding that you dread coming back to the project, and nothing is making that feeling go away, there's absolutely nothing wrong with moving on instead.


Taking a break

It's not necessary for you to definitively decide to stop working on your PR. You can take a break from it! Although you'll need to rebase your work when you come back, your idea won't go stale from a few weeks or months in timeout, and if any strong emotions (yours or the reviewers') were playing a role, they'll have calmed down by then. You may even come back to an email from your long-lost reviewer, saying they've been out on parental leave for the last two months and are happy to get back to your change now. And once the idea has been seeded in the project, it's not unusual for community members to say weeks or months later, talking about a different problem, "Yeah, this would be a lot easier if we picked up something like <your abandoned pr>, actually. Maybe we should revive that effort."

These breaks can be helpful for everyone - but it's best practice to make sure the project knows you're stepping away. Leave a comment saying that you'll be quiet for a while; this way, a reviewer who may not know you feel stuck isn't left hanging waiting for you to resume your work.


Forking

For smaller projects, you often have the option of forking - a huge benefit of using open source software! You can decide whether or not to share your forked change with the world (in accordance with the license); "forking" can even mean that you just compile the project from source and use it with your own modifications on your local machine.

And if you were working on this PR for your day job, you can maintain a soft fork of the project with those changes applied. Google even has a tool, Copybara, for doing this efficiently with monorepos.

Bonus: if you use your change yourself (or give it to your users) for a period of time, and come back to the PR after a bit of a break, you now have information about your code's proven stability in the wild; this can strengthen the case for your change when you resume work on it.


Giving up responsibly

If you do decide you're done with the effort, it's helpful to the project for you to say so explicitly. That way, if someone wants to take over your code change and finish pushing it through, they don't need to worry about stepping on your toes in the process. If you're comfortable doing so, it can be helpful to say why:

"After 15 iterations, I've decided to instead invest my spare time in something that is moving more quickly. If anybody wants to pick up this series and run with it, you have my blessing."
"It seems like Alice and I have an intractable disagreement. Since we're not able to resolve it, I'm going to let this change drop for now."
"My obligations at home are ramping up for the summer and I won't have time to continue working on this until at least September. If anybody wants to take over, feel free; I'll be available to provide review, but please don't block submission on my behalf, as I'll be quite busy."

Avoid the temptation to flounce, though. The samples above explain your circumstances and reasoning in a way that's easy to accept; it's less productive to say something like, "It's obvious that this project has no respect for its developers. Further work on this effort is a waste of my time!" Think about how you would react to reading such an email. You'd probably dismiss it as coming from someone who's easily frustrated, maybe feel a little guilty, but ultimately it's unlikely that you'd make any changes.


You Are the Main Driver of Your PR

In the end, it's important to remember that the author of a pull request is usually the one most invested in its success. When you run into roadblocks, remember these techniques to overcome them, and know that you have the power to keep your code moving forward.

By Emily Shaffer – Staff Software Engineer

Emily Shaffer is a Staff Software Engineer at Google. They are the tech lead of Google's development efforts on the first-party Git client, and help advise Google on participation in other open source projects in the version control space, including jj, JGit, and Copybara. They formerly worked as a subsystem maintainer within the OpenBMC project. You can recognize Emily on most social media under the handle nasamuffin.

Celebrating 20 Years of Google Summer of Code


Nurturing the Next Generation of Open Source Contributors


In the ever-evolving landscape of technology, open source software development plays a pivotal role in fostering innovation and collaboration on a global scale. For 20 years the Google Summer of Code (GSoC) program has introduced and nurtured new contributors entering the open source community. At All Things Open 2024, we’re excited to celebrate the 20th anniversary of GSoC, and reflect on some of the contributions this initiative has made to the world of software development.


About the Google Summer of Code program

Since its inception in 2005, one of GSoC's goals has been to act as a bridge between aspiring developers and the open source ecosystem. The program's core principle revolves around mentorship: pairing participants with experienced developers from open source organizations of all shapes and sizes. GSoC has facilitated the connection of over 21,000 contributors from 123 countries. In 2005, the program reached over 200 contributors from 51 countries - a 10,400% increase in 20 years! This global reach underscores the program's commitment to fostering an inclusive and diverse open source community.

Over the years, participants have collectively produced over 43 million lines of code, and contributed to the development and health of over 1,000 open source organizations. This substantial body of work not only strengthens the foundation of open source projects, but also shows the program's effectiveness by empowering new developers to make meaningful contributions.


GSoC's Impact: Bridging the gap between aspiring developers and open source

GSoC has far-reaching positive effects extending beyond the participants accepted into the program. For organizations, the GSoC application process itself acts as a catalyst for positive change in their communities. To apply, they must refine their documentation, develop newcomer-friendly tasks, and foster a collaborative environment to define potential projects. These efforts strengthen their communities, enhance organization, and create a more welcoming space for new members. Even organizations and contributors that aren't accepted into the program continue to benefit from these improvements.

Similarly, developers who apply to GSoC gain valuable insights. They discover open source projects aligned with their interests and realize the vast and exciting landscape of open source work. Many even go on to contribute to these projects independently, outside the formal GSoC structure.

The impact of GSoC extends beyond numbers. Participants tell us that one of the most significant aspects of GSoC is the invaluable learning experience it offers. Through their 12+ week programming projects, contributors gain exposure to real-world software development practices, coding standards, and collaboration techniques. The guidance and mentorship provided by seasoned open source developers enables participants to hone their skills, build confidence, and develop a deeper understanding of the open source ethos.


The value of mentorship and learning within the open source community

For participating organizations, GSoC serves as a valuable pipeline for identifying and attracting fresh perspectives. Many GSoC contributors continue to engage with their new communities long after the program concludes, becoming active members, maintainers, and even mentors themselves; in fact, GSoC has had more than 20,000 mentors hailing from 138 countries. This cycle of learning and contribution perpetuates the growth and sustainability of the open source ecosystem. We're excited to build an even deeper connection with our GSoC alumni in 2025 to further strengthen the long-term contributor and maintainer community.


Celebrating 20 Years of GSoC

With the 20th anniversary of Google Summer of Code, we celebrate the program's enduring tradition as a catalyst for open source innovation. By providing a platform for collaboration, mentorship, and skill development, GSoC has empowered countless individuals to embark on fulfilling careers in software development while simultaneously enriching the open source ecosystem.

Join me at Google’s All Things Open keynote to learn more about GSoC and celebrate its 20th anniversary.

Posted by Timothy Jordan – Director, Developer Relations & Open Source, Google

Keys to a resilient Open Source future


In today's world, free and open source software is a critical component of almost every software solution. Research shows that 70% of modern software relies on open source components, with over 97% of applications leveraging open source code. Unsurprisingly, 90% of companies use open source code in some form.


These statistics highlight the importance of open source software in modern technology and software development. At the same time, they demonstrate that as its relevance grows, so do the challenges associated with keeping it safe. At Open Source Summit EU, we discussed these challenges and how open source security could be improved. Let’s begin by breaking down the landscape of open source.

The open source ecosystem is fragmented, with diverse languages, build systems, and testing pipelines, making it difficult to maintain consistent security standards. This fragmentation forces developers to juggle multiple roles, such as managing security vulnerabilities, often without adequate tools and support. As a result, inconsistencies and security gaps arise, leaving open source projects vulnerable to attacks. Creating consistent security practices across the board is key: standardization helps minimize vulnerabilities while streamlining the development process.

Google’s SLSA (Supply Chain Levels for Software Artifacts) framework and OSV (Open Source Vulnerabilities) schemas are prime examples of how de facto standardization can transform open source security. SLSA has united several companies to create a standard that enables developers to improve their supply chain security posture, helping prevent attacks like those experienced by SolarWinds and Codecov.

The OSV schema has also been successful, with more than 20 language ecosystems adopting it. This schema allows vulnerabilities to be exported in a precise, machine-readable format, making them easier to manage and address. Thanks to its standardized format, over 150,000 vulnerabilities in open source software have been aggregated and made accessible to anyone in the world via a single API call.
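To make the "single API call" concrete, here is a minimal sketch of how a query to the public OSV API at api.osv.dev could be constructed; the package and version below are arbitrary examples, not taken from the article.

```python
import json


def build_osv_query(ecosystem: str, name: str, version: str) -> str:
    """Return the JSON body for a POST to https://api.osv.dev/v1/query,
    asking which known vulnerabilities affect one package version."""
    payload = {
        "package": {"ecosystem": ecosystem, "name": name},
        "version": version,
    }
    return json.dumps(payload)


# Example query: vulnerabilities affecting jinja2 2.4.1 on PyPI.
body = build_osv_query("PyPI", "jinja2", "2.4.1")
print(body)
# POSTing this body to the endpoint returns a JSON object whose "vulns" list
# contains the matching records in the standardized OSV schema.
```

Because every ecosystem exports vulnerabilities in the same schema, the response can be parsed by the same tooling regardless of whether the package came from PyPI, npm, crates.io, or elsewhere.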

However, many tasks remain manual, making them time-consuming and prone to human error. Developers must integrate multiple tools at different stages of the software development cycle. The future of open source security lies in a fully integrated platform: a tool suite that brings together best-in-industry tools and solutions and provides simple hooks for continuous operation in the CI/CD system. Automation is crucial.


The key to revolutionizing open source security is AI, as it can automate manual and error-prone tasks, and reduce the burden on developers.

Google has already started leveraging AI in open source security by successfully using it to write and improve fuzzer unit tests. Google's OSS-Fuzz has been a game changer, delivering a 40% increase in code coverage for over 160 projects. Since its inception, it has identified over 12,000 vulnerabilities with a 90% fix rate. Its effectiveness is due to its close integration with the developer's workflow, testing the latest commits and helping to fix regressions quickly.
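For readers unfamiliar with fuzzing, here is a toy sketch of the core idea: feed a function many random inputs and flag any crash outside its documented failure modes. OSS-Fuzz itself uses coverage-guided engines such as libFuzzer; everything below (the parser and the loop) is an illustrative stand-in, not Google's implementation.

```python
import random
import string


def parse_key_value(line: str) -> tuple[str, str]:
    """A tiny function under test: parse a 'key=value' line."""
    key, _, value = line.partition("=")
    if not key:
        raise ValueError("empty key")  # documented failure mode
    return key, value


def fuzz(iterations: int = 1000, seed: int = 0) -> int:
    """Feed random strings to the parser; count unexpected crashes."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(iterations):
        line = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 20))
        )
        try:
            parse_key_value(line)
        except ValueError:
            pass  # expected, documented failure - not a bug
        except Exception:
            crashes += 1  # unexpected crash: a bug worth reporting
    return crashes


print(fuzz())
```

A real fuzzer adds coverage feedback (mutating inputs that reach new code paths) and runs continuously against the latest commits, which is what lets OSS-Fuzz catch regressions quickly.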

While AI remains an area of active research, and it has not yet solved all security challenges, Google is eager to collaborate with the community to push the boundaries of what AI can achieve in open source security.

Google's approach to open source security is now focused on long-term thinking and scalable solutions. To make a meaningful difference at scale, it is focusing on three key aspects:

  • Simplifying and applying security best practices consistently: Common, usable standards are key to reducing vulnerabilities and maintaining a secure ecosystem.
  • Developing an intelligent and integrated platform: A seamless, integrated platform that automates security tasks and naturally integrates into the developer workflow.
  • Leveraging AI to accelerate and enhance security: Reducing the workload on developers and catching vulnerabilities that might go undetected.

By maintaining this focus and continuing to collaborate with the community, Google and the open source ecosystem can ensure that FOSS remains a secure, reliable foundation for the software solutions of tomorrow.

By Abhishek Arya – Principal Engineer, Open Source and Supply Chain Security