StyleDrop: Text-to-image generation in any style

Text-to-image models trained on large volumes of image-text pairs have enabled the creation of rich and diverse images encompassing many genres and themes. Moreover, popular styles such as “anime” or “steampunk”, when added to the input text prompt, may translate to specific visual outputs. While much effort has gone into prompt engineering, a wide range of styles are simply hard to describe in text form due to the nuances of color schemes, illumination, and other characteristics. As an example, “watercolor painting” may refer to various styles, and using a text prompt that simply says “watercolor painting style” may either result in one specific style or an unpredictable mix of several.

When we refer to "watercolor painting style," which do we mean? Instead of specifying the style in natural language, StyleDrop allows the generation of images that are consistent in style by referring to a style reference image*.

In this blog we introduce “StyleDrop: Text-to-Image Generation in Any Style”, a tool that enables a significantly higher level of stylized text-to-image synthesis. Instead of seeking text prompts to describe the style, StyleDrop uses one or more style reference images that describe the style for text-to-image generation. By doing so, StyleDrop enables the generation of images in a style consistent with the reference, while effectively circumventing the burden of text prompt engineering. This is done by efficiently fine-tuning a pre-trained text-to-image generation model via adapter tuning on a few style reference images. Moreover, by iteratively fine-tuning StyleDrop on a set of images it generated, it achieves style-consistent image generation from text prompts.


Method overview

StyleDrop is a text-to-image generation model that allows generation of images whose visual styles are consistent with the user-provided style reference images. This is achieved by a couple of iterations of parameter-efficient fine-tuning of pre-trained text-to-image generation models. Specifically, we build StyleDrop on Muse, a text-to-image generative vision transformer.


Muse: text-to-image generative vision transformer

Muse is a state-of-the-art text-to-image generation model based on the masked generative image transformer (MaskGIT). Unlike diffusion models, such as Imagen or Stable Diffusion, Muse represents an image as a sequence of discrete tokens and models their distribution using a transformer architecture. Compared to diffusion models, Muse is known to be faster while achieving competitive generation quality.
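To make the decoding idea concrete, here is a minimal sketch of MaskGIT-style parallel decoding, assuming a stand-in `model` that maps a token sequence to per-position logits; the cosine schedule, the dedicated mask id, and all names are illustrative, not Muse's actual implementation.

```python
import math
import torch

def maskgit_decode(model, seq_len, vocab_size, num_steps=12):
    """Sketch: all tokens start masked; each step commits the most
    confident predictions and re-masks the rest."""
    mask_id = vocab_size  # dedicated "masked" token id (assumption)
    tokens = torch.full((seq_len,), mask_id, dtype=torch.long)
    for step in range(num_steps):
        logits = model(tokens)                   # (seq_len, vocab_size), assumed
        conf, pred = logits.softmax(-1).max(-1)  # per-position confidence/argmax
        still_masked = tokens == mask_id
        # Cosine schedule: the masked fraction decays to zero by the last step.
        frac = math.cos(math.pi / 2 * (step + 1) / num_steps)
        num_unmask = int(still_masked.sum()) - int(frac * seq_len)
        if num_unmask <= 0:
            continue
        # Compete only among still-masked positions; committed tokens are kept.
        conf = conf.masked_fill(~still_masked, float("-inf"))
        unmask = conf.topk(num_unmask).indices
        tokens[unmask] = pred[unmask]
    return tokens
```

Because many tokens are committed in parallel at each step, generation takes a dozen or so forward passes rather than one pass per token as in autoregressive models, which is where much of the speed advantage comes from.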


Parameter-efficient adapter tuning

StyleDrop is built by fine-tuning the pre-trained Muse model on a few style reference images and their corresponding text prompts. There have been many works on parameter-efficient fine-tuning of transformers, including prompt tuning and Low-Rank Adaptation (LoRA) of large language models. Among those, we opt for adapter tuning, which has been shown to be effective at fine-tuning a large transformer network for language and image generation tasks in a parameter-efficient manner. For example, it introduces fewer than one million trainable parameters to fine-tune a 3B-parameter Muse model, and it requires only 1,000 training steps to converge.

Parameter-efficient adapter tuning of Muse.
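To illustrate the mechanism, below is a hedged sketch of the general bottleneck-adapter pattern in PyTorch: the backbone is frozen, and a small residual MLP attached to each block is the only part that trains. The `blocks` attribute, the adapter placement, and the sizes are assumptions for illustration, not StyleDrop's exact architecture.

```python
import torch
from torch import nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

class BlockWithAdapter(nn.Module):
    """Wrap a frozen transformer block, adapting its output."""
    def __init__(self, block, dim, bottleneck=64):
        super().__init__()
        self.block = block
        self.adapter = Adapter(dim, bottleneck)

    def forward(self, x):
        return self.adapter(self.block(x))

def attach_adapters(model, dim, bottleneck=64):
    for p in model.parameters():  # freeze the entire backbone
        p.requires_grad = False
    model.blocks = nn.ModuleList(  # `blocks` is an assumed attribute
        BlockWithAdapter(b, dim, bottleneck) for b in model.blocks
    )
    # Only the freshly added adapter weights remain trainable.
    return [p for p in model.parameters() if p.requires_grad]
```

The zero-initialized up-projection means the adapted model initially computes exactly what the pre-trained model computes, which helps keep few-shot fine-tuning stable.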

Iterative training with feedback

While StyleDrop is effective at learning styles from a few style reference images, it is still challenging to learn from a single style reference image. This is because the model may not effectively disentangle the content (i.e., what is in the image) and the style (i.e., how it is being presented), leading to reduced text controllability in generation. For example, as shown below in Steps 1 and 2, a generated image of a chihuahua from StyleDrop trained on a single style reference image shows a leakage of content (i.e., the house) from the style reference image. Furthermore, a generated image of a temple looks too similar to the house in the reference image (concept collapse).

We address this issue by training a new StyleDrop model on a subset of synthetic images generated by the first round of the StyleDrop model (trained on a single image) and chosen either by the user or by image-text alignment models (e.g., CLIP). By training on multiple image-text aligned synthetic images, the model can more easily disentangle the style from the content, thus achieving improved image-text alignment.

Iterative training with feedback*. The first round of StyleDrop may result in reduced text controllability, such as content leakage or concept collapse, due to the difficulty of content-style disentanglement. Iterative training using synthetic images, generated by previous rounds of StyleDrop models and chosen by humans or image-text alignment models, improves the text adherence of stylized text-to-image generation.
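As a concrete sketch of the automated variant of this feedback step, the snippet below scores round-1 samples with the open source CLIP model and keeps the best image-text aligned pairs for the next round of training. The model choice ("ViT-B/32"), file names, and threshold are illustrative assumptions, not the paper's exact setup.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def image_text_score(image_path, prompt):
    """Cosine similarity between CLIP image and text embeddings."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    text = clip.tokenize([prompt]).to(device)
    with torch.no_grad():
        img = model.encode_image(image)
        txt = model.encode_text(text)
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return (img @ txt.T).item()

# Hypothetical round-1 outputs and their prompts; keep the well-aligned ones
# (the 0.3 threshold is made up) and retrain StyleDrop on the survivors.
synthetic_pairs = [("sample_0.png", "a chihuahua"), ("sample_1.png", "a temple")]
kept = [(p, t) for p, t in synthetic_pairs if image_text_score(p, t) > 0.3]
```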

Experiments


StyleDrop gallery

We show the effectiveness of StyleDrop by running experiments on 24 distinct style reference images. As shown below, the images generated by StyleDrop are highly consistent in style with each other and with the style reference image, while depicting various contexts, such as a baby penguin, banana, piano, etc. Moreover, the model can render alphabet images with a consistent style.

Stylized text-to-image generation. Style reference images* are on the left inside the yellow box. Text prompts used are:
First row: a baby penguin, a banana, a bench.
Second row: a butterfly, an F1 race car, a Christmas tree.
Third row: a coffee maker, a hat, a moose.
Fourth row: a robot, a towel, a wood cabin.
Stylized visual character generation. Style reference images* are on the left inside the yellow box. Text prompts used are: (first row) letter 'A', letter 'B', letter 'C', (second row) letter 'E', letter 'F', letter 'G'.

Generating images of my object in my style

Below we show generated images by sampling from two personalized generation distributions, one for an object and another for the style.

Images at the top in the blue border are object reference images from the DreamBooth dataset (teapot, vase, dog and cat), and the image on the left at the bottom in the red border is the style reference image*. Images in the purple border (i.e. the four lower right images) are generated from the style image of the specific object.

Quantitative results

For the quantitative evaluation, we synthesize images from a subset of Parti prompts and measure the image-to-image CLIP score for style consistency and the image-to-text CLIP score for text consistency. We study non–fine-tuned models of Muse and Imagen. Among fine-tuned models, we compare to DreamBooth on Imagen, a state-of-the-art personalized text-to-image method for subjects. We show two versions of StyleDrop, one trained from a single style reference image, and another, “StyleDrop (HF)”, that is trained iteratively using synthetic images with human feedback as described above. As shown below, StyleDrop (HF) shows a significantly improved style consistency score over its non–fine-tuned counterpart (0.694 vs. 0.556), as well as over DreamBooth on Imagen (0.694 vs. 0.644). We also observe an improved text consistency score with StyleDrop (HF) over StyleDrop (0.322 vs. 0.313). In addition, in a human preference study between DreamBooth on Imagen and StyleDrop on Muse, we found that 86% of the human raters preferred StyleDrop on Muse over DreamBooth on Imagen in terms of consistency to the style reference image.
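For reference, both metrics reduce to cosine similarities between CLIP embeddings. A minimal sketch, again with an illustrative model choice and placeholder file names (the paper's exact evaluation protocol may differ):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed_image(path):
    x = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        e = model.encode_image(x)
    return e / e.norm(dim=-1, keepdim=True)

def embed_text(prompt):
    t = clip.tokenize([prompt]).to(device)
    with torch.no_grad():
        e = model.encode_text(t)
    return e / e.norm(dim=-1, keepdim=True)

# Style consistency: image-to-image score against the style reference.
style_score = (embed_image("generated.png") @ embed_image("style_ref.png").T).item()
# Text consistency: image-to-text score against the prompt.
text_score = (embed_image("generated.png") @ embed_text("a baby penguin").T).item()
```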


Conclusion

StyleDrop achieves style consistency at text-to-image generation using a few style reference images. Google’s AI Principles guided our development of StyleDrop, and we urge the responsible use of the technology. StyleDrop was adapted to create a custom style model in Vertex AI, and we believe it could be a helpful tool for art directors and graphic designers — who might want to brainstorm or prototype visual assets in their own styles, to improve their productivity and boost their creativity — or businesses that want to generate new media assets that reflect a particular brand. As with other generative AI capabilities, we recommend that practitioners ensure they align with copyrights of any media assets they use. More results are found on our project website and YouTube video.


Acknowledgements

This research was conducted by Kihyuk Sohn, Nataniel Ruiz, Kimin Lee, Daniel Castro Chin, Irina Blok, Huiwen Chang, Jarred Barber, Lu Jiang, Glenn Entis, Yuanzhen Li, Yuan Hao, Irfan Essa, Michael Rubinstein, and Dilip Krishnan. We thank owners of images used in our experiments (links for attribution) for sharing their valuable assets.


*See image sources 

Source: Google AI Blog


Google Workspace Updates Weekly Recap – December 15, 2023

2 New updates

Unless otherwise indicated, the features below are available to all Google Workspace customers, and are fully launched or in the process of rolling out. Rollouts should take no more than 15 business days to complete if launching to both Rapid and Scheduled Release at the same time. If not, each stage of rollout should take no more than 15 business days to complete.


We have begun enforcing 2-step verification for all admin accounts 
Two-step verification (2SV) is a critical security measure that has been proven to reduce password-based hijacking by more than 50%. We are committed to protecting the security of our users and are taking additional steps to help customers guard against data compromise and prevent account takeovers.

We have begun enforcing 2SV for all admin accounts and will continue this enforcement on an ongoing basis. As of December 2023, this change is already in effect for some customers. When this goes into effect for your organization, you will receive the following notifications:
  • 30 days prior to enforcement in your domain: Super admins will receive various email and in-app notifications informing them of the forthcoming enforcement, encouraging them to verify their admins’ 2SV status. 
  • Once enforcement goes into effect in your domain: All admins will receive email and in-app notifications upon signing into their accounts for the next thirty days. If they do not enable 2SV within this time period, they will be locked out and will need to follow these steps to recover an administrator account.
We highly encourage all administrators to turn on 2SV as soon as possible. Visit the Help Center for more details and further guidance.



Dynamic groups limit increased to 500 
We’re increasing the number of dynamic groups a customer can have from 100 to 500. Dynamic groups are defined as groups whose membership is managed automatically based on specific criteria, such as a user’s department or location. This increase gives admins more flexibility to create dynamic groups as needed and cuts down on manual group management tasks that would otherwise be required. | Rolling out now to Rapid Release and Scheduled Release domains at a gradual pace (up to 15 days for feature visibility). | Available for Google Workspace Frontline Standard, Enterprise Standard and Enterprise Plus, Education Standard and Education Plus, Enterprise Essentials Plus, and Cloud Identity Premium customers only. | Learn more about dynamic groups.


Previous announcements

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.


Meet Add-ons SDK available in Developer Preview 
The Google Meet Web Add-ons SDK is available through our Developer Preview Program. Developers can use the SDK to bring their app experience right into Meet. End users can install, open, and collaborate in apps right inside a meeting, either as the meeting focal point, or in the sidebar — all without ever leaving Meet. | Learn more about Meet Add-ons SDK.

Huddly cameras bring continuous framing to Google Meet Series One room kits 
As part of our initiative to bring adaptive framing to Google Meet meeting rooms, we’re proud to announce that you can now access Huddly’s continuous framing capability as part of the Series One room kit hardware devices. | Available to all Google Workspace customers using Google Meet Series One room kits only. | Learn more about Google Meet Series One.

Record and share your name pronunciation across Google Workspace products 
From your Google account settings, you can now record your name and share its pronunciation with other users. The pronunciation can be played from your profile card across various Google Workspace tools such as Gmail or Google Docs on web or mobile devices. | Available to Google Workspace Business Starter, Business Standard, Business Plus, Essentials Starter, Enterprise Essentials, Enterprise Essentials Plus, Enterprise Standard, Enterprise Plus, Frontline Starter, Frontline Standard, and Nonprofits customers only. | Learn more about name pronunciation. 

Easy access to people, documents, building blocks and more in Google Docs 
When moving to a blank line within your Doc, you will see an “@” button with the option to select, search, and insert smart chips (such as people, dates, timers, or files), building blocks, calendar events, groups, and more. | Learn more about bringing smart canvas features to the forefront of your workflow.

Excuse assignments in Google Classroom 
Teachers can mark an assignment for a particular student as “Excused” instead of giving it a 0-100 score. This will exclude that particular assignment from the student’s overall grade. | Learn more about excusing assignments. 

Introducing interactive questions for YouTube videos in Google Classroom 
Educators can now turn any YouTube video into an interactive lesson by adding questions for their students to answer throughout the video. | Available to Education Plus and the Teaching and Learning Upgrade only. | Learn more about interactive videos. 

Introducing the Bitbucket app for Google Chat 
We’re adding Bitbucket for Google Chat. Bitbucket is a Git-based code and CI/CD tool optimized for teams using Atlassian’s Jira. | Learn more about Bitbucket app for Google Chat. 

Use “Profile Discovery” to display basic information only in search results, available in open beta 
Google Workspace admins can now turn on “Profile discovery” for their users. When turned on, users can customize how they appear across Google products to people who search for them by their phone number or email. Specifically, users can choose how their name and profile picture are displayed. | Learn more about Profile Discovery.



For a recap of announcements in the past six months, check out What’s new in Google Workspace (recent releases).

Congratulations to the winners of Google’s Immersive Geospatial Challenge

Posted by Bradford Lee – Product Marketing Manager, Augmented Reality, and Ahsan Ashraf – Product Marketing Manager, Google Maps Platform

In September, we launched Google's Immersive Geospatial Challenge on Devpost where we invited developers and creators from all over the world to create an AR experience with Geospatial Creator or a virtual 3D immersive experience with Photorealistic 3D Tiles.

"We were impressed by the innovation and creativity of the projects submitted. Over 2,700 participants across 100+ countries joined to build something they were truly passionate about and to push the boundaries of what is possible. Congratulations to all the winners!" 

 Shahram Izadi, VP of AR at Google

We judged all submissions on five key criteria:

  • Functionality - How are the APIs used in the application?
  • Purpose - What problem is the application solving?
  • Content - How creative is the application?
  • User Experience - How easy is the application to use?
  • Technical Execution - How well does the application showcase Geospatial Creator and/or Photorealistic 3D Tiles?

Many of the entries are working prototypes, which our judges thoroughly enjoyed experiencing and interacting with. Thank you to everyone who participated in this hackathon.



From our outstanding list of submissions, here are the winners of Google’s Immersive Geospatial Challenge:


Category: Best of Entertainment and Events

Winner, AR Experience: World Ensemble

Description: World Ensemble is an audio-visual app that positions sound objects in 3D, creating an immersive audio-visual experience.


Winner, Virtual 3D Experience: Realistic Event Showcaser

Description: Realistic Event Showcaser is a fully configurable and immersive platform to customize your event experience and showcase its unique location stories and charm.


Winner, Virtual 3D Experience: navigAtoR

Description: navigAtoR is an augmented reality app that is changing the way you navigate through cities by providing a 3-dimensional map of your surroundings.



Category: Best of Commerce

Winner, AR Experience: love ya

Description: love ya showcases three user scenarios for a special time of year that connect local businesses with users.



Category: Best of Travel and Local Discovery

Winner, AR Experience: Sutro Baths AR Tour

Description: This experience guides users through the Sutro Baths historical landmark using an illuminated walking path, information panels with text and images, and a 3D rendering of how the Sutro Baths swimming pool complex would have appeared to those attending.


Winner, Virtual 3D Experience: Hyper Immersive Panorama

Description: Hyper Immersive Panorama uses real-time facial detection to allow the user to look left, right, up, or down in the virtual 3D environment.


Winner, Virtual 3D Experience: The World is Flooding!

Description: The World is Flooding! allows you to visualize a 3D, realistic flooding view of your neighborhood.


Category: Best of Productivity and Business

Winner, AR Experience: GeoViz

Description: GeoViz revolutionizes architectural design, allowing users to create, modify, and visualize architectural designs in their intended context. The platform facilitates real-time collaboration, letting multiple users contribute to designs and view them in AR on location.



Category: Best of Sustainability

Winner, AR Experience: Geospatial Solar

Description: Geospatial Solar combines the Google Geospatial API with the Google Solar API for instant analysis of a building's solar potential by simply tapping it.


Winner, Virtual 3D Experience: EarthLink - Geospatial Social Media

Description: EarthLink is the first geospatial social media platform that uses 3D photorealistic tiles to enable users to create and share immersive experiences with their friends.


Honorable Mentions

In addition, we have five projects that earned honorable mentions:

  1. Simmy
  2. FrameView
  3. City Hopper
  4. GEOMAZE - The Urban Quest
  5. Geospatial Route Check

Congratulations to the winners and thank you to all the participants! Check out all the amazing projects submitted. We can't wait to see you at the next hackathon.

Open sourcing tools for Google Cloud performance and resource optimization

Over the years, we at Google have identified common requests from customers to optimize their Kubernetes clusters on Google Cloud. Today, we are releasing a set of open source tools to help with these tasks, including bin packing, load testing, and performance benchmarking. These tools are designed to help customers optimize their clusters for cost, performance, and scalability.

The common requests we have identified center on the following use cases:

  1. Does Google Cloud have a bin packing recommendation feature or tool to optimize node usage on GKE Standard?
  2. How can we easily run Aerospike, Cassandra, PgBench, or other popular benchmarking tools on Google Cloud?
  3. How can we load test an application running on Google Cloud? How many requests per second could the app handle given the current size of the existing Google Cloud infrastructure?

The underlying motivation is that customers want evidence-based tooling to help them optimize their Google Cloud resources, optimize for cost, run benchmarks, identify performance bottlenecks, or simply start a performance discussion.

For the use cases above, we are open sourcing a set of tools that the public can self-install; each application comprises UI and backend components deployable to your own Google Cloud project. We call this collection of tools sa-tools.


BinPacker

BinPacker recommender for GKE node size
There are currently no bin packing recommendation features available in the GCP Cloud Console. We are open sourcing a tool that visually scans your GKE cluster and recommends the optimal node size for bin packing. Users can opt to select services that are grouped together to be on the same node. The installation guide can be found here.
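The tool's own recommendation logic is in the repository and is not reproduced here; as a rough illustration of the underlying idea, the sketch below uses the classic first-fit-decreasing heuristic to estimate how many nodes of a candidate size a set of pod CPU requests would need. All numbers are made up.

```python
def first_fit_decreasing(pod_requests_mcpu, node_capacity_mcpu):
    """Pack pod CPU requests (millicores) onto nodes of a fixed capacity.

    Returns a list of nodes, each a list of the requests placed on it.
    """
    nodes = []
    for req in sorted(pod_requests_mcpu, reverse=True):  # biggest pods first
        for node in nodes:
            if sum(node) + req <= node_capacity_mcpu:
                node.append(req)  # fits on an existing node
                break
        else:
            nodes.append([req])   # open a new node
    return nodes

pods = [500, 250, 1200, 800, 300, 700]  # hypothetical CPU requests (mCPU)
for capacity in (2000, 4000):           # candidate node sizes to compare
    print(capacity, "mCPU nodes needed:", len(first_fit_decreasing(pods, capacity)))
```

Comparing the node counts (and the resulting headroom) across candidate sizes is the essence of a bin packing recommendation, though the real tool works from live cluster data rather than a hard-coded list.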

Perfkit Benchmarker with UI


What if you could install an easy-to-use version of Perfkit Benchmarker (PKB) with a click-and-select UI?

With this version, you simply select the benchmark tool you want to use from a dropdown menu and provide a YAML configuration file. PKB then automatically spins up a GKE Autopilot cluster with the configuration you have provided and runs the benchmark. You can then view the performance metrics results in the UI.

This easy-to-use version of PKB makes it easier to run benchmarks and compare the performance of different systems, even if you don't have much technical experience. The installation guide can be found here.


Web Performance Testing

gTools Performance Testing

We built an open source UI wrapper on top of Locust, running inside your GCP project. You can run a Locust farm instance for a specific group of users, in contrast to the generic Locust setup where everyone is able to access the Locust instance. The installation guide can be found here.
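For orientation, this is what a minimal Locust test definition looks like; Locust is the open source load-testing framework the wrapper builds on, and the endpoint and timings below are placeholders.

```python
# locustfile.py
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)  # seconds each simulated user idles between tasks

    @task
    def index(self):
        self.client.get("/")   # every virtual user repeatedly fetches this path
```

Pointed at a host (for example, `locust -f locustfile.py --host https://your-app.example.com`), Locust then drives the simulated traffic; the wrapper adds the GCP deployment and per-group access on top.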

For more info you may reach us via the contributor list in the repository.

By Yudy Hendry, Anant Damle, Kozzy Hasebe, Jun Sheng, and Chuan Chen – Cloud Solutions Architects Team

Caritas of Austin: Alleviating Homelessness and Creating a Connected Community


In each of our cities, Google Fiber works with incredible community partners and organizations on digital inclusion and equity issues. In Texas, we’re working with Caritas of Austin to help bring fast, reliable internet to the residents of Espero at Rutland, an affordable and supportive housing community and our newest Gigabit Community. GFiber is providing access to high-speed internet and digital literacy classes at no cost to residents. In today’s guest post, Rachel Hanover, Deputy Director of Espero Rutland Housing Services, shares what this represents for the community.

At Caritas of Austin, we believe all people deserve to have their basic needs met and a stable place to call home. We use a multi-layered approach to make homelessness rare, brief and nonrecurring in Central Texas by helping the unhoused population attain proper housing, employment, education, food and a supportive community.

As technology advances and society transitions to “paperless,” an internet connection is vital for finding permanent housing, applying for jobs and accessing other supplemental benefits like unemployment, food assistance, and health insurance. But for tens of millions of Americans, a high-speed internet connection is a luxury they can’t afford. This barrier makes life considerably more challenging to navigate, which is especially true for people experiencing homelessness.

Espero Rutland


In a joint venture to help unhoused individuals find permanent housing, we partnered with The Vecino Group and Austin Housing Finance Corporation to develop Espero Rutland, an affordable and intensely supportive housing community that is scheduled to open early next year.


Espero Rutland consists of 171 studio apartments and features many amenities, including an indoor community room, business center, gym and yoga studio, community dining room, and an outdoor courtyard area with lawn games, gazebo, BBQ stations and community garden.
We employ onsite case managers who work closely with residents to curate a personalized plan to help them manage personal finances, develop vocational skills and apply for supplemental benefit programs. To offer these services, it is imperative that residents have a stable internet connection. 

Creating a connected community with Google Fiber



Caritas of Austin is excited to partner with Google Fiber to provide access to a free, high-speed internet connection to every residential unit and property amenity at Espero Rutland. This partnership, which is part of GFiber’s Gigabit Communities program, will provide broadband internet free of charge to very low-income households.

In addition to providing internet services at no cost to residents of Caritas Espero Rutland, GFiber will help to provide laptops and digital literacy classes to our residents. The virtual and onsite classes will help residents learn how to use their new laptops to access job applications, healthcare and supplemental benefits.
At Caritas of Austin, we are committed to ending homelessness by creating a connected, supported community. Homelessness is a complex issue with no “one size fits all” solution. Through partnerships with local organizations like GFiber, we can help our clients build a solid foundation for their future. 

Empowering those experiencing homelessness transforms individual lives, which contributes to the overall well-being of society, building a stronger, more connected community for everyone.
Posted by Rachel Hanover, Deputy Director of Espero Rutland Housing Services


Chrome Dev for Desktop Update

The Dev channel has been updated to 122.0.6182.0 for Windows, Mac and Linux.

A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Prudhvi Bommana
Google Chrome

Google Open Source Peer Bonus program announces second group of 2023 winners



We are excited to announce the second group of winners for the 2023 Google Open Source Peer Bonus Program! This program recognizes external open source contributors who have been nominated by Googlers for their exceptional contributions to open source projects.

The Google Open Source Peer Bonus Program is a key part of Google's ongoing commitment to open source software. By supporting the development and growth of open source projects, Google is fostering a more collaborative and innovative software ecosystem that benefits everyone.

This cycle’s Open Source Peer Bonus Program received 163 nominations, and the winners come from 35 different countries around the world, reflecting the program’s global reach and the immense impact of open source software. Community collaboration is a key driver of innovation and progress, and we are honored to support and celebrate the contributions of these talented individuals from around the world through this program.

We would like to extend our congratulations to the winners! Included below are those who have agreed to be named publicly.

Winner – Open Source Project

Tim Dettmers – 8-bit CUDA functions for PyTorch
Odin Asbjørnsen – Accompanist
Lazarus Akelo – Android FHIR
Khyati Vyas – Android FHIR
Fikri Milano – Android FHIR
Veyndan Stuart – AndroidX
Alex Van Boxel – Apache Beam
Dezső Biczó – Apigee Edge Drupal module
Felix Yan – Arch Linux
Gerlof Langeveld – atop
Fabian Meumertzheim – Bazel
Keith Smiley – Bazel
Andre Brisco – Bazel Build Rules for Rust
Cecil Curry – beartype
Paul Marcombes – bigfunctions
Lucas Yuji Yoshimine – Camposer
Anita Ihuman – CHAOSS
Jesper van den Ende – Chrome DevTools
Aboobacker MK – CircuitVerse.org
Aaron Ballman – Clang
Alejandra González – Clippy
Catherine Flores – Clippy
Rajasekhar Kategaru – Compose Actors
Olivier Charrez – comprehensive-rust
John O'Reilly – Confetti
James DeFelice – container-storage-interface
Akihiro Suda – containerd, runc, OCI specs, Docker, Kubernetes
Neil Bowers – CPAN
Aleksandr Mikhalitsyn – CRIU
Daniel Stenberg – curl
Ryosuke TOKUAMI – Dataform
Salvatore Bonaccorso – Debian
Moritz Muehlenhoff – Debian
Sylvestre Ledru – Debian, LLVM
Andreas Deininger – Docsy
Róbert Fekete – Docsy
David Sherret – dprint
Justin Grant – ECMAScript Time Zone Canonicalization Proposal
Chris White – EditorConfig
Charles Schlosser – Eigen
Daniel Roe – Elk - Mastodon Client
Christopher Quadflieg – FakerJS
Ostap Taran – Firebase Apple SDK
Frederik Seiffert – Firebase C++ SDK
Juraj Čarnogurský – firebase-tools
Callum Moffat – Flutter
Anton Borries – Flutter
Tomasz Gucio – Flutter
Chinmoy Chakraborty – Flutter
Daniil Lipatkin – Flutter
Tobias Löfstrand – Flutter go_router package
Ole André Vadla Ravnås – Frida
Jaeyoon Choi – Fuchsia
Jeuk Kim – Fuchsia
Dongjin Kim – Fuchsia
Seokhwan Kim – Fuchsia
Marcel Böhme – FuzzBench
Md Awsafur Rahman – GCViT-tf, TransUNet-tf, Kaggle
Qiusheng Wu – GEEMap
Karsten Ohme – GlobalPlatform
Sacha Chua – GNU Emacs
Austen Novis – Goblet
Tiago Temporin – Golang
Josh van Leeuwen – Google Certificate Authority Service Issuer for cert-manager
Dustin Walker – google-cloud-go
Parth Patel – GUAC
Kevin Conner – GUAC
Dejan Bosanac – GUAC
Jendrik Johannes – Guava
Chao Sun – Hive, Spark
Sean Eddy – hmmer
Paulus Schoutsen – Home Assistant
Timo Lassmann – Kalign
Stephen Augustus – Kubernetes
Vyom Yadav – Kubernetes
Meha Bhalodiya – Kubernetes
Madhav Jivrajani – Kubernetes
Priyanka Saggu – Kubernetes
DANIEL FINNERAN – kubeVIP
Junfeng Li – LanguageClient-neovim
Andrea Fioraldi – LibAFL
Dongjia Zhang – LibAFL
Addison Crump – LibAFL
Yuan Tong – libavif
Gustavo A. R. Silva – Linux kernel
Mathieu Desnoyers – Linux kernel
Nathan Chancellor – Linux Kernel, LLVM
Gábor Horváth – LLVM / Clang
Martin Donath – Material for MkDocs
Jussi Pakkanen – Meson Build System
Amos Wenger – Mevi
Anders F Björklund – minikube
Maksim Levental – MLIR
Andrzej Warzynski – MLIR, IREE
Arnaud Ferraris – Mobian
Rui Ueyama – mold
Ryan Lahfa – nixpkgs
Simon Marquis – Now in Android
William Cheng – OpenAPI Generator
Kim O'Sullivan – OpenFIPS201
Yigakpoa Laura Ikpae – Oppia
Aanuoluwapo Adeoti – Oppia
Philippe Antoine – oss-fuzz
Tornike Kurdadze – Pinput
Andrey Sitnik – Postcss (and others: Autoprefixer, postcss, browserslist, logux)
Marc Gravell – protobuf-net
Jean Abou Samra – Pygments
Qiming Sun – PySCF
Trey Hunner – Python
Will Constable – PyTorch/XLA
Jay Berkenbilt – qpdf
Ahmed El-Helw – Quran App for Android
Jan Gorecki – Reproducible benchmark of database-like ops
Ralf Jung – Rust
Frank Steffahn – Rust, ICU4X
Bhaarat Krishnan – Serverless Web APIs Workshop
Maximilian Keppeler – Sheets-Compose-Dialogs
Cory LaViska – Shoelace
Carlos Panato – Sigstore
Keith Zantow – spdx/tools-golang
Hayley Patton – Steel Bank Common Lisp
Qamar Safadi – Sunflower
Victor Julien – Suricata
Eyoel Defare – textfield_tags
Giedrius Statkevičius – Thanos
Michael Park – The Good Docs Project
Douglas Theobald – Theseus
David Blevins – Tomee
Anthony Fu – Vitest
Ryuta Mizuno – Volcago
Nicolò Ribaudo – WHATWG HTML Living Standard; ECMAScript Language Specification
Antoine Martin – xpra
Toru Komatsu – youki

We are incredibly proud of all of the nominees for their outstanding contributions to open source, and we look forward to seeing even more amazing contributions in the years to come. An additional thanks to Maria Tabak, who has helped lay the groundwork for and manage this program for the past 5 years!

By Mike Bufano, Google Open Source Peer Bonus Program Lead