USB Keystroke Injection Protection

USB keystroke injection attacks have been an issue for a long time: keystroke injection tools are cheap and widely available, and the attacks they enable are hard to defend against. These attacks send keystrokes immensely fast, within a human eyeblink, while remaining effectively invisible to the victim. Keystroke injection was initially proposed to ease system administration tasks, but attackers learned to use the same technology to compromise user systems. Here is an attack example, with a more or less benign payload:

[Video: a keystroke injection attack delivering its payload]

To make the life of an attacker harder, we propose a tool that measures the timing of incoming keystrokes and, based on predefined heuristics, determines whether they constitute an attack, without involving the user in the decision. In contrast to the successful “attack” shown above, the following shows the same payload with the tool installed on the system:

[Video: the same payload blocked with the tool installed]

Choosing the RUN mode

The tool offers two modes of operation: MONITOR and HARDENING. In MONITOR mode, it won’t block a device that was classified as malicious, but will write a log line with information about the device to syslog. In HARDENING mode, it will immediately block a device that was classified as malicious/attacking. Out of the box, the tool ships in HARDENING mode.

Investigation

If the tool is running in MONITOR mode, it logs information to syslog. For one-time inspection, this log can be read using journalctl:
journalctl -u ukip.service
If the tool is rolled out to more machines in a network, it makes sense to collect the syslogs at a centralized place for investigation.
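
For live investigation while tuning, the same log can be followed as new entries arrive (standard journalctl usage):
journalctl -u ukip.service -f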

Choosing the heuristics

A challenge when running the tool is the proper selection of the two main heuristic variables, KEYSTROKE_WINDOW and ABNORMAL_TYPING, which control the tool’s behaviour and detection capabilities.

The first variable is the number of keystrokes the tool looks at to determine whether it’s dealing with an attack. The lower the number, the higher the false positive rate: if the number is 2, the tool only looks at a single interarrival time (the time between two keystrokes), and since users sometimes hit two keys almost simultaneously, this leads to false positives. Based on internal observations, 5 is an effective value, but it should be adjusted to the specific user’s experience and typing behavior.

The second variable specifies the interarrival time below which typing is classified as malicious. A higher number produces more false positives (normal typing speed will be classified as malicious), while a lower number produces more false negatives (even very fast injection attacks will be classified as benign). That said, the preset of 50000 after initial installation is a safe default, but it should be changed to a number reflecting the typing speed of the user. The proper speed can be found in two ways: 1) by using one of the various online tools to measure typing speed, or 2) by running the tool in MONITOR mode for a few days (or even weeks) and gradually tuning the value until the false positive rate is gone.
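
To make the interplay of the two variables concrete, here is a simplified Python sketch. It is illustrative only, not the tool’s actual implementation, and it assumes the 50000 preset denotes microseconds between keystrokes:

KEYSTROKE_WINDOW = 5      # keystrokes to inspect (4 interarrival times)
ABNORMAL_TYPING = 50000   # interarrival threshold, assumed microseconds

def is_attack(timestamps_us):
    # Classify the most recent keystrokes given their arrival times.
    # Flags an attack only if every interarrival time in the window is
    # below the threshold, so one fast key pair alone (two keys hit
    # almost simultaneously) does not cause a false positive.
    if len(timestamps_us) < KEYSTROKE_WINDOW:
        return False  # not enough keystrokes observed yet
    window = timestamps_us[-KEYSTROKE_WINDOW:]
    deltas = [b - a for a, b in zip(window, window[1:])]
    return all(delta < ABNORMAL_TYPING for delta in deltas)

Under this logic, a positive classification would be logged in MONITOR mode, or lead to the device being blocked in HARDENING mode.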

Getting it up and running

The README on GitHub contains a step-by-step guide to prepare the tool, set it up, and run it as a systemd daemon that is enabled on reboot. Over time it may be necessary to revise the heuristic variables; simply adjust the values at the top of /usr/sbin/ukip and restart the daemon:
sudo systemctl restart ukip.service

A note on silver bullets

The tool is not a silver bullet against USB-based attacks or keystroke injection attacks: an attacker with access to a user’s machine (which USB-based keystroke injection requires) can do worse things if the machine is left unlocked. The tool is meant to provide another layer of protection and to defend a user sitting in front of their unlocked machine. Either the injected keystrokes are delayed enough to circumvent the tool’s logic, in which case the user can see the attack happening on screen, or they arrive fast enough to be detected, in which case the tool blocks the device by unbinding its driver and logs information to syslog.

Keystroke injection attacks are difficult to detect and prevent since they’re delivered over USB (the most widely used computer peripheral connector) and require a Human Interface Device driver (available on likely every operating system for mouse and keyboard input). The proposed tool raises the bar for attackers while removing the user from the decision about whether a device is malicious or benign, apart from the refinement of the heuristic variables mentioned above. The tool can be complemented with other Linux tools, such as fine-grained udev rules, or with open source projects like USBGuard, to make successful attacks even more challenging. The latter lets users define policies and allow/block specific USB devices, or block USB devices while the screen is locked. That feature is especially useful, since an attacker could plug in a device while the user is away from the keyboard and launch an attack once they are back. With USBGuard in place, the device would need to be replugged while the system is unlocked in order to work.

By Sebastian Neuner, Google Information Security Engineering Team

Season of Docs announces the successful 2019 long-running projects


Season of Docs is happy to announce that all eight of the 2019 long-running documentation projects have finished successfully!

The successful long-running documentation projects are (in alphabetical order):

Apache Cassandra (Project Report, Project Description)

CERN-HSF (Project Report, Project Description)

Kolibri (Project Report, Project Description)

Mattermost (Project Report, Project Description)

MDAnalysis (Project Report, Project Description)

Open Food Facts (Project Report, Project Description)

Open Source Geospatial Foundation (Project Report, Project Description)

Tor Project (Project Report, Project Description)

Congratulations to the technical writers and organization mentors on these successful projects!

During the program, technical writers spent a few months working closely with an open source community. They brought their technical writing expertise to improve the project's documentation while the open source projects provided mentors to introduce the technical writers to open source tools, workflows, and the project's technology.

The technical writers and their mentors did a fantastic job with the inaugural year of Season of Docs! Participants in the 2019 program represented countries across all continents except for Antarctica!

You can view a list of the 44 successfully completed technical writing projects and read their project reports on the Season of Docs website.

What’s next?

Program participants should expect an email in the next few weeks about how to get their Season of Docs 2019 t-shirt (sure to be a collector’s item)!

If you’re interested in participating in a future Season of Docs, stay tuned for more information shortly—watch for posts on this blog and sign up for the announcements email list.

By Erin McKean and Kassandra Dhillon, Google Open Source


FuzzBench: Fuzzer Benchmarking as a Service

We are excited to launch FuzzBench, a fully automated, open source, free service for evaluating fuzzers. The goal of FuzzBench is to make it painless to rigorously evaluate fuzzing research and make fuzzing research easier for the community to adopt.

Fuzzing is an important bug finding technique. At Google, we’ve found tens of thousands of bugs (1, 2) with fuzzers like libFuzzer and AFL. There are numerous research papers that either improve upon these tools (e.g. MOpt-AFL, AFLFast, etc.) or introduce new techniques (e.g. Driller, QSYM, etc.) for bug finding. However, it is hard to know how well these new tools and techniques generalize on a large set of real world programs. Though research normally includes evaluations, these often have shortcomings: they don’t use a large and diverse set of real world benchmarks, they use few trials, they use short trials, or they lack statistical tests to illustrate if findings are significant. This is understandable, since full scale experiments can be prohibitively expensive for researchers. For example, a 24-hour, 10-trial, 10-fuzzer, 20-benchmark experiment comprises 10 × 10 × 20 = 2,000 trials; with each trial occupying one CPU for 24 hours, it would require 2,000 CPUs to complete in a day.

To help solve these issues, the OSS-Fuzz team is launching FuzzBench, which provides a framework for painlessly evaluating fuzzers in a reproducible way. To use FuzzBench, researchers can simply integrate a fuzzer, and FuzzBench will run an experiment for 24 hours with many trials and real world benchmarks. Based on data from this experiment, FuzzBench will produce a report comparing the performance of the fuzzer to others and give insights into the strengths and weaknesses of each fuzzer. This should allow researchers to focus more of their time on perfecting techniques and less time on setting up evaluations and dealing with existing fuzzers.

Integrating a fuzzer with FuzzBench is simple, as most integrations are less than 50 lines of code (example). Once a fuzzer is integrated, it can fuzz almost all 250+ OSS-Fuzz projects out of the box. We have already integrated ten fuzzers, including AFL, libFuzzer, Honggfuzz, and several academic projects such as QSYM and Eclipser.
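
To give a sense of what an integration looks like, here is a minimal sketch of a fuzzer.py module following the build()/fuzz() interface from the FuzzBench documentation. The engine binary and its flags are hypothetical placeholders, not a real fuzzer:

# fuzzer.py: minimal sketch of a FuzzBench fuzzer integration.
import subprocess

from fuzzers import utils  # FuzzBench helper module


def build():
    # Build the OSS-Fuzz benchmark with this fuzzer's instrumentation.
    utils.build_benchmark()


def fuzz(input_corpus, output_corpus, target_binary):
    # Run the engine until FuzzBench ends the 24-hour trial.
    # './my_fuzzer' and its flags are placeholders for illustration.
    subprocess.call(
        ['./my_fuzzer', '-i', input_corpus, '-o', output_corpus, target_binary])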

Reports include statistical tests to give an idea of how likely it is that performance differences between fuzzers are simply due to chance, as well as the raw data, so researchers can do their own analysis. Performance is determined by the number of covered program edges, though we plan on adding crashes as a performance metric. You can view a sample report here.
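
FuzzBench’s exact methodology is documented alongside its reports, but as a rough illustration of the idea, a non-parametric test such as the Mann-Whitney U test can indicate whether two fuzzers’ per-trial coverage results differ by more than chance. The coverage numbers below are made up:

# Sketch: compare two fuzzers' final edge coverage across 10 trials.
from scipy.stats import mannwhitneyu

fuzzer_a = [2110, 2154, 2098, 2203, 2187, 2145, 2172, 2130, 2190, 2160]
fuzzer_b = [2301, 2275, 2340, 2298, 2312, 2287, 2330, 2305, 2295, 2320]

# A small p-value suggests the difference is unlikely to be chance alone.
stat, p_value = mannwhitneyu(fuzzer_a, fuzzer_b, alternative='two-sided')
print(f'U={stat}, p={p_value:.4f}')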

How to Participate

Our goal is to develop FuzzBench with community contributions and input so that it becomes the gold standard for fuzzer evaluation. We invite members of the fuzzing research community to contribute their fuzzers and techniques, even while they are in development. Better evaluations will lead to more adoption and greater impact for fuzzing research.

We also encourage contributions of better ideas and techniques for evaluating fuzzers. Though we have made some progress on this problem, we have not solved it and we need the community’s help in developing these best practices.

Please join us by contributing to the FuzzBench repo on GitHub.

By: Jonathan Metzman, Abhishek Arya, Google OSS-Fuzz Team and László Szekeres‎, Google Software Analysis Team

Google Summer of Code 2020 mentoring orgs announced!

We are delighted to announce the open source projects and organizations that have been accepted for Google Summer of Code (GSoC) 2020, the 16th year of the program!

After careful review, we have chosen 200 open source projects to be mentor organizations this year, 30 of which are new to the program. Please see the program website for a complete list of the accepted organizations.

Are you a student interested in participating in GSoC this year? We will begin accepting student applications on Monday, March 16, 2020 at 18:00 UTC and the deadline to apply is Tuesday, March 31, 2020 at 18:00 UTC.


The most successful applications come from students who start preparing now, before the application period begins. You can find more information on our website, which includes a full timeline of important dates. We also highly recommend perusing the FAQ and Program Rules and watching some of our other videos with more details about GSoC for students and mentors.

A hearty congratulations—and thank you—to all of our mentor organizations! We look forward to working with all of you during Google Summer of Code 2020.

By Stephanie Taylor, Google Open Source

AutoFlip: An Open Source Framework for Intelligent Video Reframing

Originally posted on the AI Blog

Videos filmed and edited for television and desktop are typically created and viewed in landscape aspect ratios (16:9 or 4:3). However, with an increasing number of users creating and consuming content on mobile devices, historical aspect ratios don’t always fit the display being used for viewing. Traditional approaches for reframing video to different aspect ratios usually involve static cropping, i.e., specifying a camera viewport, then cropping visual contents that are outside. Unfortunately, these static cropping approaches often lead to unsatisfactory results due to the variety of composition and camera motion styles. More bespoke approaches, however, typically require video curators to manually identify salient contents on each frame, track their transitions from frame-to-frame, and adjust crop regions accordingly throughout the video. This process is often tedious, time-consuming, and error-prone.

To address this problem, we are happy to announce AutoFlip, an open source framework for intelligent video reframing. AutoFlip is built on top of the MediaPipe framework that enables the development of pipelines for processing time-series multimodal data. Taking a video (casually shot or professionally edited) and a target dimension (landscape, square, portrait, etc.) as inputs, AutoFlip analyzes the video content, develops optimal tracking and cropping strategies, and produces an output video with the same duration in the desired aspect ratio.
Left: Original video (16:9). Middle: Reframed using a standard central crop (9:16). Right: Reframed with AutoFlip (9:16). By detecting the subjects of interest, AutoFlip is able to avoid cropping off important visual content.

AutoFlip Overview

AutoFlip provides a fully automatic solution to smart video reframing, making use of state-of-the-art ML-enabled object detection and tracking technologies to intelligently understand video content. AutoFlip detects changes in the composition that signify scene changes in order to isolate scenes for processing. Within each shot, video analysis is used to identify salient content before the scene is reframed by selecting a camera mode and path optimized for the contents.

Shot (Scene) Detection

A scene or shot is a continuous sequence of video without cuts (or jumps). To detect the occurrence of a shot change, AutoFlip computes the color histogram of each frame and compares this with prior frames. If the distribution of frame colors changes at a different rate than a sliding historical window, a shot change is signaled. AutoFlip buffers the video until the scene is complete before making reframing decisions, in order to optimize the reframing for the entire scene.
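
As a rough sketch of this idea (not AutoFlip’s actual implementation), a per-frame color histogram can be computed and compared with the previous frame using OpenCV; a sudden drop in similarity signals a cut. The threshold is illustrative:

import cv2

def is_shot_change(prev_hist, frame, threshold=0.7):
    # Compute a 2D hue/saturation histogram for the frame and compare
    # it with the previous frame's histogram; a correlation below the
    # (illustrative) threshold is treated as a shot change.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
    cv2.normalize(hist, hist)
    if prev_hist is None:
        return False, hist
    similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
    return similarity < threshold, hist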

Video Content Analysis

We utilize deep learning-based object detection models to find interesting, salient content in the frame. This content typically includes people and animals, but other elements may be identified, depending on the application, including text overlays and logos for commercials, or motion and ball detection for sports.

The face and object detection models are integrated into AutoFlip through MediaPipe, which uses TensorFlow Lite on CPU. This structure allows AutoFlip to be extensible, so developers may conveniently add new detection algorithms for different use cases and video content. Each object type is associated with a weight value, which defines its relative importance — the higher the weight, the more influence the feature will have when computing the camera path.


Left: People detection on sports footage. Right: Two face boxes (‘core’ and ‘all’ face landmarks). In narrow portrait crop cases, often only the core landmark box can fit.

Reframing

After identifying the subjects of interest on each frame, logical decisions about how to reframe the content for a new view can be made. AutoFlip automatically chooses an optimal reframing strategy — stationary, panning or tracking — depending on the way objects behave during the scene (e.g., moving around or stationary). In stationary mode, the reframed camera viewport is fixed in a position where important content can be viewed throughout the majority of the scene. This mode can effectively mimic professional cinematography in which a camera is mounted on a stationary tripod or where post-processing stabilization is applied. In other cases, it is best to pan the camera, moving the viewport at a constant velocity. The tracking mode provides continuous and steady tracking of interesting objects as they move around within the frame.

Based on which of these three reframing strategies the algorithm selects, AutoFlip then determines an optimal cropping window for each frame, while best preserving the content of interest. While the bounding boxes track the objects of focus in the scene, they typically exhibit considerable jitter from frame-to-frame and, consequently, are not sufficient to define the cropping window. Instead, we adjust the viewport on each frame through the process of Euclidean-norm optimization, in which we minimize the residuals between a smooth (low-degree polynomial) camera path and the bounding boxes.
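
A minimal sketch of that smoothing step, assuming the per-frame horizontal centers of the detected content have already been extracted: fitting a low-degree polynomial by least squares minimizes the Euclidean norm of the residuals and yields a smooth viewport trajectory.

import numpy as np

def smooth_camera_path(centers, degree=2):
    # Fit a low-degree polynomial to noisy per-frame box centers.
    # np.polyfit minimizes the sum of squared residuals, i.e. the
    # Euclidean norm of the distance to the jittery detections.
    frames = np.arange(len(centers))
    coeffs = np.polyfit(frames, centers, deg=degree)
    return np.polyval(coeffs, frames)

# Example: jittery detections around a slow left-to-right pan.
noisy_centers = [100, 104, 98, 107, 111, 109, 115, 118, 114, 121]
print(smooth_camera_path(noisy_centers).round(1))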

Top: Camera paths resulting from following the bounding boxes from frame-to-frame. Bottom: Final smoothed camera paths generated using Euclidean-norm path formation. Left: Scene in which objects are moving around, requiring a tracking camera path. Right: Scene where objects stay close to the same position; a stationary camera covers the content for the full duration of the scene.

AutoFlip’s configuration graph provides settings for either best-effort or required reframing. If it becomes infeasible to cover all the required regions (for example, when they are too spread out on the frame), the pipeline will automatically switch to a less aggressive strategy by applying a letterbox effect, padding the image to fill the frame. For cases where the background is detected as being a solid color, this color is used to create seamless padding; otherwise a blurred version of the original frame is used.

AutoFlip Use Cases

We are excited to release this tool directly to developers and filmmakers, reducing the barriers to their design creativity and reach through the automation of video editing. The ability to adapt any video format to various aspect ratios is becoming increasingly important as the diversity of devices for video content consumption continues to rapidly increase. Whether your use case is portrait to landscape, landscape to portrait, or even small adjustments like 4:3 to 16:9, AutoFlip provides a solution for intelligent, automated and adaptive video reframing.


What’s Next?

Like any machine learning algorithm, AutoFlip can benefit from an improved ability to detect objects relevant to the intent of the video, such as speaker detection for interviews or animated face detection on cartoons. Additionally, a common issue arises when input video has important overlays on the edges of the screen (such as text or logos) as they will often be cropped from the view. By combining text/logo detection and image inpainting technology, we hope that future versions of AutoFlip can reposition foreground objects to better fit the new aspect ratios. Lastly, in situations where padding is required, deep uncrop technology could provide improved ability to expand beyond the original viewable area.

While we work to improve AutoFlip internally at Google, we encourage contributions from developers and filmmakers in the open source communities.

Acknowledgments

We would like to thank our colleagues who contributed to AutoFlip: Alexander Panagopoulos, Jenny Jin, Brian Mulford, Yuan Zhang, Alex Chen, Xue Yang, Mickey Wang, Justin Parra, Hartwig Adam, Jingbin Wang, and Weilong Yang; and the MediaPipe team who helped with open sourcing: Jiuqiang Tang, Tyler Mullen, Mogan Shieh, Ming Guang Yong, and Chuo-Ling Chang.

By Nathan Frey, Senior Software Engineer, Google Research, Los Angeles and Zheng Sun, Senior Software Engineer, Google Research, Mountain View

HarbourBridge: From PostgreSQL to Cloud Spanner

Would you like to try out Cloud Spanner with data from an existing PostgreSQL database? Maybe you’ve wanted to ‘kick the tires’ on Spanner, but have been discouraged by the effort involved?

Today, we’re announcing a tool that makes trying out Cloud Spanner using PostgreSQL data simple and easy.

HarbourBridge is a tool that loads Spanner with the contents of an existing PostgreSQL database. It requires zero configuration—no manifests or data maps to write. Instead, it ingests pg_dump output, automatically builds a Spanner schema, and creates a new Spanner database populated with data from pg_dump.

HarbourBridge is part of the Cloud Spanner Ecosystem, a collection of public, open source repositories contributed to, owned, and maintained by the Cloud Spanner user community. None of these repositories are officially supported by Google as part of Cloud Spanner.

Get up and running fast

HarbourBridge is designed to simplify Spanner evaluation, and in particular to bootstrap the process by getting moderate-size PostgreSQL datasets into Spanner (up to a few GB). Many features of PostgreSQL, especially those that don't map directly to Spanner features, are ignored, e.g. (non-primary) indexes, functions and sequences.

View HarbourBridge as a way to get up and running fast, so you can focus on critical things like tuning performance and getting the most out of Spanner. Expect that you’ll need to tweak and enhance what HarbourBridge produces; more on this below.

Quick-start guide

The HarbourBridge README contains a step-by-step quick-start guide. We’ll quickly review the main steps. Before you begin, you'll need a Cloud Spanner instance, Cloud Spanner API enabled for your Google Cloud project, authentication credentials configured to use the Cloud API, and Go installed on your development machine.

To download HarbourBridge and install it, run
go get -u github.com/cloudspannerecosystem/harbourbridge
The tool should now be installed as $GOPATH/bin/harbourbridge. To use HarbourBridge on a PostgreSQL database called mydb, run
pg_dump mydb | $GOPATH/bin/harbourbridge
The tool will use the cloud project specified by the GCLOUD_PROJECT environment variable, automatically determine the Cloud Spanner instance associated with this project, convert the PostgreSQL schema for mydb to a Spanner schema, create a new Cloud Spanner database with this schema, and finally, populate this new database with the data from mydb. HarbourBridge also generates several files when it runs: a schema file, a report file (with details of the conversion), and a bad data file (if any data is dropped). See Files Generated by HarbourBridge.

Take care with ACLs

Note that PostgreSQL table-level and row-level ACLs are dropped during conversion since they are not supported by Spanner (Spanner manages access control at the database level). All data written to Spanner will be visible to anyone who can access the database created by HarbourBridge (which inherits default permissions from your Cloud Spanner instance).

Next steps

The tables created by HarbourBridge provide a starting point for evaluation of Spanner. While they preserve much of the core structure of your PostgreSQL schema and data, many important PostgreSQL features have been dropped.

In particular, HarbourBridge preserves primary keys but drops all other indexes. This means that the out-of-the-box performance you get from the tables created by HarbourBridge can be significantly slower than PostgreSQL performance. If HarbourBridge has dropped indexes that are important to the performance of your SQL queries, consider adding Secondary Indexes to the tables created by HarbourBridge. Use the existing PostgreSQL indexes as a guide. In addition, Spanner's Interleaved Tables can provide a significant performance boost.

Other dropped features include functions, sequences, procedures, triggers, and views. In addition, types have been mapped based on the types supported by Spanner. Types such as integers, floats, char/text, bools, timestamps and (some) array types map fairly directly to Spanner, but many other types do not and instead are mapped to Spanner's STRING(MAX). See Schema Conversion for details of the type conversions and their tradeoffs.

Recap

HarbourBridge automates much of the manual work of trying out Cloud Spanner using PostgreSQL data. The goal is to bootstrap your evaluation and help get you to the meaty issues as quickly as possible. The tables generated by HarbourBridge provide a starting point, but they will likely need to be tweaked and enhanced to support a full evaluation.

We encourage you to try out the tool, send feedback, file issues, fork and modify the codebase, and send PRs for fixes and new functionality. Our plans and aspirations for developing HarbourBridge further are outlined in the HarbourBridge Whitepaper. HarbourBridge is part of the Cloud Spanner Ecosystem, owned and maintained by the Cloud Spanner user community. It is not officially supported by Google as part of Cloud Spanner.

By Nevin Heintze, Cloud Spanner

Importing SA360 WebQuery reports to BigQuery

Context

Search Ads 360 (SA360) is an enterprise-class search campaign management platform used by marketers to manage global ad campaigns across multiple engines. It offers powerful reporting capabilities through WebQuery reports, an API, and BigQuery and Data Studio connectors.

Effective ad campaign management requires multi-dimensional analysis of campaign data along with customers’ first-party data, by building custom reports whose dimensions combine paid-search reports and business data.

Customers’ business data resides in a data warehouse, which is designed for analysis, insights, and reporting. To integrate ads data into the data warehouse, the usual approach is to load the campaign data into the warehouse. To achieve this, SA360 offers various options to retrieve paid-search data, each with its own capabilities.

Comparison Area              | WebQuery | BQ Connector  | Data Studio Connector | API
Technical complexity         | Low      | Medium        | Medium                | High
Ease of report customization | High     | Medium        | Low                   | High
Reporting details            | Complete | Limited*      | Limited*              | Limited*
Possible data warehouse      | Any**    | BigQuery ONLY | None                  | Any

* Reports not supported on the API (e.g., location targets, remarketing targets, audience reports) are not available.
** The report is generic and needs to be loaded into the data warehouse using the DW’s custom loading methods.
Comparing these approaches in terms of the technical knowledge required, as well as the data warehousing solutions supported, the easiest one is the WebQuery report, for which a marketer can build a report by choosing the dimensions/metrics they want in the SA360 user interface.

The BigQuery data-transfer service is limited to importing data into BigQuery, and the Data Studio connector does not allow retrieving data at all.

WebQuery offers a simpler and more customizable method than the other alternatives, and also offers more options for the kind of data retrieved (vs. the BQ transfer service, which does not bring business data from SA360 to BigQuery). The WebQuery report was originally designed to provide Microsoft Excel with an updatable view of a report. In the era of cloud computing, a tool was needed to consume the report and make it available on an analytical platform or in a cloud data warehouse like BigQuery.

Solution Approach



This tool showcases how to bridge this gap and bring SA360 data to a data warehouse in a generic fashion: the report is fetched from SA360 in XML format and converted into a CSV file using SAX parsers. The CSV file is then transferred to staging storage, to be finally ETLed into the data warehouse.
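
A minimal sketch of the XML-to-CSV step, assuming a report layout with one <row> element per record and one <cell> element per value (the element names are hypothetical; the actual report schema may differ):

import csv
import xml.sax

class ReportHandler(xml.sax.ContentHandler):
    # Streams an XML report into CSV rows without loading the whole
    # document into memory, which is what lets a SAX parser handle
    # arbitrarily large reports.
    def __init__(self, writer):
        super().__init__()
        self.writer = writer
        self.row = []
        self.text = []
        self.in_cell = False

    def startElement(self, name, attrs):
        if name == 'cell':  # hypothetical per-value element
            self.in_cell = True
            self.text = []

    def characters(self, content):
        if self.in_cell:
            self.text.append(content)

    def endElement(self, name):
        if name == 'cell':
            self.row.append(''.join(self.text).strip())
            self.in_cell = False
        elif name == 'row':  # hypothetical per-record element
            self.writer.writerow(self.row)
            self.row = []

with open('report.csv', 'w', newline='') as out:
    xml.sax.parse('report.xml', ReportHandler(csv.writer(out)))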

As a concrete example, we chose to showcase a solution with BigQuery as the destination (cloud) data warehouse, though the solution architecture is flexible for any other system.

Conclusion

The tool helps marketers bring advertising data closer to their analytical systems, helping them derive better insights. If you use BigQuery as your data warehouse, you can use this tool as-is. You can also adapt it by adding components for the analytical/data-warehousing systems you use, and improve it for the larger community.

To get started, follow our step-by-step guide.
Notable features of the tool include:
  • Modular authorization module
  • Handles arbitrarily large WebQuery reports
  • Batch mode to process multiple reports in a single call
  • Can be used as part of an ETL workflow (Airflow compatible)
By Anant Damle, Solutions Architect and Meera Youn, Technical Partnership Lead

Announcing our Google Code-in 2019 Winners!

Google Code-in (GCI) 2019 was epic in every regard. Not only did we celebrate 10 years of the Google Code-in program, but we also broke all of our previous records for the program. It was a very, very busy seven weeks for everyone—we had 3,566 students from 76 countries complete 20,840 tasks with a record 29 open source organizations!

We want to congratulate all of the students who took part in this year’s 10th anniversary of Google Code-in. Great job!

Today we are excited to announce the Grand Prize Winners, Runners Up, and Finalists for each organization.

The 58 Grand Prize Winners completed an impressive 2,158 tasks while also helping other students.

Each of the Grand Prize Winners will be awarded a four-day trip to Google’s campus in northern California to meet with Google engineers, one of the mentors they worked with during the contest, and enjoy some fun in California with the other winners. We look forward to seeing these winners in a few months!

Grand Prize Winners

The Grand Prize Winners hail from 21 countries, listed by full name alphabetically below:
Name | Organization | Country
Aayushman Choudhary | JBoss Community | India
Abdur-Raheem Idowu | Haiku | Norway
Abhinav Kaushlya | The Julia Programming Language | India
Aditya Vardhan Singh | The ns-3 Network Simulator project | India
Anany Sachan | OpenWISP | India
Andrea Gonzales | Sugar Labs | Malaysia
Anmol Jhamb | Fedora Project | India
Aria Vikram | Open Roberta | India
Artur Grochal | Drupal | Poland
Bartłomiej Pacia | Systers, An AnitaB.org Community | Poland
Ben Houghton | Wikimedia | United Kingdom
Benjamin Amos | The Terasology Foundation | United Kingdom
Chamindu Amarasinghe | SCoRe Lab | Sri Lanka
Danny Lin | CCExtractor Development | United States
Diogo Fernandes | Apertium | Luxembourg
Divyansh Agarwal | AOSSIE | India
Duc Minh Nguyen | Metabrainz Foundation | Vietnam
Dylan Iskandar | Liquid Galaxy | United States
Emilie Ma | Liquid Galaxy | Canada
Himanshu Sekhar Nayak | BRL-CAD | India
Jayaike Ndu | CloudCV | Nigeria
Jeffrey Liu | BRL-CAD | United States
Joseph Semrai | SCoRe Lab | United States
Josh Heng | Circuitverse.org | United Kingdom
Kartik Agarwala | The ns-3 Network Simulator project | India
Kartik Singhal | AOSSIE | India
Kaustubh Maske Patil | CloudCV | India
Kim Fung | The Julia Programming Language | United Kingdom
Kumudtiha Karunarathna | FOSSASIA | Sri Lanka
M.Anantha Vijay | Circuitverse.org | India
Maathavan Nithiyananthan | Apertium | Sri Lanka
Manuel Alcaraz Zambrano | Wikimedia | Spain
Naman Modani | Copyleft Games | India
Navya Garg | OSGeo | India
Neel Gopaul | Drupal | Mauritius
Nils André | CCExtractor Development | United Kingdom
Paraxor | Fedora Project | United Arab Emirates
Paweł Sadowski | OpenWISP | Poland
Pola Łabędzka | Systers, An AnitaB.org Community | Poland
Pranav Karthik | FOSSASIA | Canada
Pranay Joshi | OSGeo | India
Prathamesh Mutkure | OpenMRS | India
Pratish Rai | R Project for Statistical Computing | India
Pun Waiwitlikhit | The Mifos Initiative | Thailand
Rachit Gupta | The Mifos Initiative | India
Rafał Bernacki | Haiku | Poland
Ray Ma | OpenMRS | New Zealand
Rick Wierenga | TensorFlow | Netherlands
Sayam Sawai | JBoss Community | India
Sidaarth “Sid” Sabhnani | Copyleft Games | United States
Srevin Saju | Sugar Labs | Bahrain
Susan He | Open Roberta | Australia
Swapneel Singh | The Terasology Foundation | India
Sylvia Li | Metabrainz Foundation | New Zealand
Umang Majumder | R Project for Statistical Computing | India
Uzay Girit | Public Lab | France
Vladimir Mikulic | Public Lab | Bosnia and Herzegovina
William Zhang | TensorFlow | United States

Runners Up

And a big kudos to our 58 Runners Up from 20 countries. They will receive a GCI backpack, jacket, and t-shirt. The Runners Up are listed alphabetically by first name below:
Name | Organization
Adev Saputra | Drupal
Adrian Serapio | R Project for Statistical Computing
Alberto Navalón Lillo | Apertium
Alvii_07 | Liquid Galaxy
Amar Fadil | OpenWISP
Ananya Gangavarapu | TensorFlow
Andrey Shcherbakov | Wikimedia
Antara Bhattacharya | Metabrainz Foundation
Anthony Zhou | Public Lab
Bartosz Dokurno | Circuitverse.org
Ching Lam Choi | The Julia Programming Language
Chirag Bhansali | AOSSIE
Chiranjiv Singh Malhi | BRL-CAD
Daksha Aeer | Systers, An AnitaB.org Community
Devansh Khetan | OpenMRS
Dhanus SL | OSGeo
Dhhyey Desai | AOSSIE
Eric Xue | Copyleft Games
Eryk Mikołajek | BRL-CAD
Hannah Guo | The Terasology Foundation
Harsh Khandeparkar | Public Lab
Hirochika Matsumoto | CloudCV
Ilya Maier | Systers, An AnitaB.org Community
Irvan Ayush Chengadu | Drupal
Jakub Niklas | The Terasology Foundation
Jun Rong Lam | Circuitverse.org
Karol Ołtarzewski | OpenWISP
Kripa Kini | Liquid Galaxy
Krzysztof Krysiński | CCExtractor Development
Kunal Bhatia | SCoRe Lab
Laxya Pahuja | The Mifos Initiative
Łukasz Zbrzeski | SCoRe Lab
Madhav Mehndiratta | Fedora Project
Marcus Chong | Sugar Labs
Mateusz Samkiewicz | JBoss Community
Maya Farber Brodsky | CCExtractor Development
Michał Piechowiak | Fedora Project
Moodhunt | Metabrainz Foundation
Muhammad Wasif | FOSSASIA
name not shown | Haiku
Nathan Taylor | Sugar Labs
Nishanth Thumma | Open Roberta
Panagiotis Vasilopoulos | Haiku
Rachin Kalakheti | TensorFlow
Regan Iwadha | JBoss Community
Ribhav Sharma | OpenMRS
Richard Botez | Open Roberta
Rishabh Verma | The Mifos Initiative
Rishank Kanaparti | Copyleft Games
Rishi R | R Project for Statistical Computing
Sai Putravu | The ns-3 Network Simulator project
Samuel Sloniker | Apertium
Shivam Rai | OSGeo
Siddharth Sinha | FOSSASIA
Soumitra Shewale | The Julia Programming Language
Stanisław Howard | The ns-3 Network Simulator project
Suryansh Pathak | CloudCV
Taavi Väänänen | Wikimedia

Finalists

And a hearty congratulations to our 58 Finalists from 20 countries. The Finalists will win a special GCI jacket and t-shirt. They are listed alphabetically by first name below:
Name | Organization
Abinav Chari | CloudCV
Andre Christoga Pramaditya | CloudCV
Anish Agnihotri | OSGeo
Aryan Gulati | FOSSASIA
Ayush Sharma | Fedora Project
Ayush Sharma | SCoRe Lab
Daniel Oluojomu | JBoss Community
Dhruv Baronia | TensorFlow
Diana Hernandez | Systers, An AnitaB.org Community
Gambali Seshasai Chaitanya | Apertium
Hao Liu | R Project for Statistical Computing
Hardik Jhalani | Systers, An AnitaB.org Community
Hrishikesh Patil | OpenMRS
Jackson Lewis | The ns-3 Network Simulator project
Jan Rosa | Wikimedia
Janiru Hettiarachchi | Liquid Galaxy
Janiru Wijekoon | Metabrainz Foundation
Joshua Yang | Apertium
Kevin Liu | Open Roberta
Krishna Rama Rao | AOSSIE
Li Chen | Fedora Project
Madhav Shekhar Sharma | The Julia Programming Language
Mbah Javis | TensorFlow
Merul Dhiman | Liquid Galaxy
Michelle (Wai Man) Lo | OpenMRS
Mihir Bhave | OpenWISP
Mohit S A | Circuitverse.org
Mokshit Jain | Drupal
Mudit Somani | The Julia Programming Language
Musab Kılıç | CCExtractor Development
Nail Anıl Örcün | The Terasology Foundation
Natalie Shapiro | Circuitverse.org
Nate Clark | The Terasology Foundation
Nicholas Gregory | Wikimedia
Nikita Ermishin | OpenWISP
Nishith P | FOSSASIA
Oliver Fogelin | R Project for Statistical Computing
Oussama Hassini | The Mifos Initiative
Param Nayar | Copyleft Games
Peter Terpstra | The ns-3 Network Simulator project
Piyush Sharma | The Mifos Initiative
Robert Chen | Public Lab
Rohan Cherivirala | Open Roberta
Ruixuan Tu | Haiku
Saptashwa Mandal | Drupal
Sashreek Magan | Sugar Labs
Sauhard Jain | AOSSIE
Sharman Maheshwari | SCoRe Lab
Sumagna Das | BRL-CAD
Tanvir Singh | OSGeo
Techno-Disaster | CCExtractor Development
Thusal Ranawaka | BRL-CAD
Vivek Mishra | Copyleft Games
Yu Fai Wong | JBoss Community
Yuqi Qiu | Metabrainz Foundation
Zakhar Vozmilov | Public Lab
Zakiyah Hasanah | Sugar Labs
Zoltán Szatmáry | Haiku

Our 794 mentors, the heart and soul of GCI, are the reason the contest thrives. Mentors volunteer their time to help these bright students become open source contributors. They spend hundreds of hours during their holiday breaks answering questions, reviewing submitted tasks, and welcoming the students to their communities. GCI would not be possible without their dedication, patience and tireless efforts.

We will post more numbers from GCI 2019 here on the Google Open Source Blog over the next few weeks, so please stay tuned.

Congratulations to our Grand Prize Winners, Runners Up, Finalists, and all of the students who spent the last couple of months learning about, and contributing to, open source. We hope they will continue their journey in open source!

By Stephanie Taylor, Google Open Source

Announcing the 2019 second cycle Google Open Source Peer Bonus winners


We are happy to announce the 2019 second cycle winners of the Google Open Source Peer Bonus! This cohort represents the largest number of winners to date, with 115 awardees from 26 countries, including: Australia, Austria, Belgium, Canada, China, Colombia, Denmark, Finland, France, Germany, India, Ireland, Israel, Italy, Japan, Republic of Korea, Mexico, Netherlands, Poland, Portugal, Russia, Spain, Sweden, Switzerland, United Kingdom, and the United States.

The Google Open Source Peer Bonus is an award for open source contributors who are not employed by Google but are nominated by Googlers for their exceptional contributions to open source. The program began as a way to reward developers; however, it has evolved into one that supports all contributors to open source, from technical writers and designers to operations.

Below is the list of winners who gave us permission to thank them publicly:
Winner | Open Source Project
Miina Sikk | AMP Plugin for WordPress, AMP Stories
Ryan Kienstra | AMP Plugin for WordPress, AMP Stories
Joost Koehoorn | Angular
Ash Berlin-Taylor | Apache Airflow
Jarek Potiuk | Apache Airflow
Kamil Bregula | Apache Airflow
Ismael Mejia | Apache Beam, Avro
Jose Fonseca | APITrace
Lars Zawallich | Appleseed
Maximilian Michels | Beam, Flink
Roman Lebedev | benchmark
Ben Manes | Caffeine
Yang Luo | casbin; npcap; nmap
Sedat Dilek | ClangBuiltLinux
Nathan Chancellor | ClangBuiltLinux
Pablo Galindo Salgado | CPython
Karthikeyan Singaravelan | CPython
Tobe Osakwe | Dart build system
Drew Banin | DBT
Michael Johnson | Discourse - Google+ Import Script
Philip Rebohle | dxvk
Mike Blandford | er9x/ersky9x radio firmware
Simon Edwards | Extraterm
Ethan Lee | FNA, FAudio, SDL2
Vasco Asturiano | force-graph
Alexandre Alapetite | FreshRSS
Jenny Bryan | gargle: an R package for calling Google APIs from R, including auth
Patrick Mulhall | Gerrit Code Review
Gert van Dijk | Gerrit Code Review
Rafael Ascensão | Git
Arnold Robbins | GNU awk
Alberto Donizetti | Go
Alessandro Arzilli | Go
Tobias Klauser | Go
Emmanuel Odeke | Go
Brian Kessler | Go
Giovanni Bajo | Go compiler
Glenn Lewis | go-github
Cedric Staub | go-jose
Paul Jolly | go-tools
Daniel Martí | go-tools
Dominik Honnef | go-tools
Muir Manders | go-tools
Billie Cleek | go-tools
Ramya Rao | go-tools
John Paton | Google Cloud Python client libraries and Pandas GBQ
Krystian Kuźniarek | googletest
Gernot Vormayr | GoPacket
Johan Brandhorst | grpc-gateway
Mike Jumper | Guacamole
Willy Tarreau | HAProxy
Mike McQuaid | Homebrew
Joachim Viide | HTM
Serguei Bezverkhi | nftables
Kalle Persson | Inbox Theme for Gmail
Artem Gusev | ios-webkit-debug-proxy
Morven Cao | Istio Operator
Karol Lassak | Jenkins GCE plugin
Sebastien Goasguen | Knative
Joan Edwards | Knative
Markus Thömmes | Knative
Ashleigh Brennan | Knative
Cornelius Weig | Krew
Josh Bottum | Kubeflow
Kam Kasravi | kubeflow/kubeflow, kubeflow/manifests
Rune Mehlsen | lit-analyzer
Roman Lebedev | LLVM
Jonas Bernoulli | Magit
Jaeyoung Tae | Material Components Web/Material Components Web React
Maximilian Hils | mitmproxy
Brijesh Bittu | monaco-vim
Rich Felker | musl
Tim Neutkens | Next.js
Gordon Lyon | nmap
Ryan Gordon | Numerous open source games and engines
Carlos Alberto Cortez | OpenTelemetry
Roch Devost | OpenTelemetry
Ted Young | OpenTelemetry
Joshua MacDonald | OpenTelemetry/opentelemetry-go
Daniel Khan | OpenTelemetry
Brandon Gonzalez | OpenTelemetry
Valentin Marchaud | OpenTelemetry and OpenCensus
Olivier Albertini | OpenTelemetry and OpenCensus
Armin Ruech | OpenTelemetry-Java
Tyler Benson | OpenTelemetry-Java
Paulo Janotti | OpenTelemetry-Service
Akshay Anand | Oppia
James Marca | OR-Tools
Max Dymond | OSS-Fuzz
Ignazio Palmisano | OWL API
Marcos Caceres | Payment Request API
Jovi De Croock | Preact
Leah Ullmann | Preact
Hervé Bredin | pyannote
Tomohiko Kinebuchi | Python official document Japanese translation project
Gabriela de Queiroz | R
Baldur Karlsson | RenderDoc
Fabian Henneke | Secure Shell
Sam Aaron | Sonic Pi
Greg Roth | Spiregg (SPIR-V Backend in DirectXShaderCompiler)
Erica Sadun | Swift Evolution
Sean Morgan | tensorflow/addons
Yong Tang | tensorflow/io
Shree Kumar | Tesseract
Seth Larson | urllib3
Michael Tüxen | usrsctp
Felix Weinrank | usrsctp
Qiuyi Zhang | V8
Sébastien Helleu | WeeChat
Wesley Shields | YARA

Congratulations to the winners! Open source is a shared effort that is only possible with everyone’s commitment to build better solutions for the world. Thank you for partnering with us in this mission. We look forward to more collaborations in the months to come!

By María Cruz, Google Open Source