
DC-SCM compatible FPGA based open source BMC hardware platform

Open source software is omnipresent in the server and cloud world, and is giving rise to impressive successes in the SaaS space where useful products can be rapidly created from open source components: operating systems, container runtimes, frameworks for device management, monitoring and data pipelining, workload execution, etc.

Mirroring this trend in software, cloud providers and users are increasingly looking at building their servers using open source hardware, collaborating in initiatives such as the Open Compute Project or OpenPOWER.

With Google, Antmicro has developed two open source hardware FPGA-based Baseboard Management Controller (BMC) platforms compliant with OCP’s DC-SCM ver 1.0 specification to help increase the security, configurability and flexibility of server management and monitoring infrastructure. These have since been adopted by OpenPOWER’s LibreBMC workgroup as the base hardware platform.

The DC-SCM spec

The Datacenter-ready Secure Control Module (DC-SCM) specification aims to move common server management, security and control features from a typical motherboard into a module designed in a normalized form factor, which can be used across various datacenter platforms.

Currently rolling out in the first DC-SCM compliant servers, the spec will help cloud providers share costs and risks and increase reuse of the critical BMC component. Coupling it with a fully open source implementation based on popular, inexpensive FPGA platforms will not only allow for more configurability and tighter integration between hardware and software, but also tap into the momentum behind the broader open source hardware community via groups like CHIPS Alliance, OpenPOWER and RISC-V.

The hardware

Antmicro has developed two implementations of the DC-SCM-compatible BMC. Both designs meet the Open Compute Project specification for a Horizontal Form Factor 90x120 mm DC-SCM ver 1.0.

The BMC's role is central to the server's faultless operation: it is responsible for monitoring the system while preventing and mitigating failures, essentially acting as an external watchdog.

To provide this functionality, the module offers the feature-packed Secure Control Interface required by the specification for communication with the host platform, including:
  • PCI Express
  • USB
  • QSPI
  • SGPIO
  • NCSI
  • multiple I2C, I3C and UART channels
A unique property of Antmicro's implementation of the standard is the merging of DC-SCM's central Baseboard Management Controller and the usual programmable SCM CPLD block into one powerful FPGA. This solution was chosen to give the design greater flexibility, allowing remote updates of DC-SCM peripherals and, most notably, the use of OpenPOWER or RISC-V IP cores as the central processing units of the module.

One of our designs is based on the Xilinx Artix-7, while the other features a Lattice ECP5; both are low-cost, open source friendly FPGAs supported by the open source F4PGA toolchain project.

The FPGA is complemented by 512 MB of DDR3 memory and 16 GB of eMMC flash, as well as a dedicated Gigabit Ethernet interface. To ensure the security of the module, external cryptographic components, a Root of Trust (RoT) and a Trusted Platform Module (TPM), can be connected. This will allow future integration with open hardware Root of Trust projects such as OpenTitan, as well as implementing various boot flow and authentication approaches.

Use in LibreBMC / OpenPOWER

Open source, configurable hardware platforms based on FPGAs, open tooling and standards can make BMCs more flexible for tomorrow's challenges. Antmicro's DC-SCM boards have been adopted by the LibreBMC Workgroup, operating under the umbrella of the OpenPOWER Foundation, in a push to build a complete, fully transparent BMC solution. The workgroup, with participation from IBM, Google and Antmicro, among others, will create the FPGA gateware and software needed to make the hardware fully operational in real-world server solutions.

Variants involving both Linux (in its default open source BMC distribution, OpenBMC) and Zephyr RTOS, as well as both POWER and RISC-V cores, are planned; thanks to the flexibility of the FPGA, all of those options will be just one gateware update away. Of course, both the gateware and software will be open source as well.

If you’re looking to develop a secure and transparent DC-SCM spec-compatible BMC solution, reach out to [email protected] to see how you can collaborate with partners such as Antmicro, Google, and IBM around this open source FPGA-based hardware platform.


By Peter Katarzynski – Antmicro

Vectorized and performance-portable Quicksort

Today we're sharing open source code that can sort arrays of numbers about ten times as fast as the C++ std::sort, and outperforms state of the art architecture-specific algorithms, while being portable across all modern CPU architectures. Below we discuss how we achieved this.

First, some background. There is a recent trend towards columnar databases that consecutively store all values from a particular column, as opposed to storing all fields of a record or "row" before those of the next record. This can be faster to filter or sort, which are key building blocks for SQL queries; thus we focus on this data layout.

Given that sorting has been heavily studied, how can we possibly find a 10x speedup? The answer lies in SIMD/vector instructions. These carry out operations on multiple independent elements in a single instruction—for example, operating on 16 float32 at once when using the AVX-512 instruction set, or four on Arm NEON:
[Image: the Summit supercomputer]
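
As a concrete illustration of the data parallelism involved, here is a short, hedged C++ sketch that uses AVX-512 intrinsics to add 16 float32 values with a single vector instruction; the function name and the buffer-length assumption are ours:

```cpp
#include <immintrin.h>

// Adds 16 float32 lanes at once using AVX-512 (compile with -mavx512f).
// Illustrative only: assumes a, b and out each point to at least 16 floats.
void Add16(const float* a, const float* b, float* out) {
  const __m512 va = _mm512_loadu_ps(a);      // load 16 floats
  const __m512 vb = _mm512_loadu_ps(b);
  const __m512 sum = _mm512_add_ps(va, vb);  // one instruction, 16 additions
  _mm512_storeu_ps(out, sum);
}
```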

If you are already familiar with SIMD, you may have heard of it being used in supercomputers, linear algebra for machine learning applications, video processing, or image codecs such as JPEG XL. But if SIMD operations only involve independent elements, how can we use them for sorting, which involves re-arranging adjacent array elements?

Imagine we have some special way to sort, for instance, 256-element arrays. Then, the Quicksort algorithm for sorting a larger array consists of partitioning it into two sub-arrays: those less than a "pivot" value (ideally the median), and all others; then recursing until a sub-array is at most 256 elements large, and using our special method for sorting those. Partitioning accounts for most of the CPU time, so if we can speed it up using SIMD, we have a fast sort.
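
A minimal scalar sketch of this structure follows; the function names, threshold constant, and pivot handling are ours, and the real implementation vectorizes both the partition loop and the small-array sort:

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>

constexpr size_t kSmallThreshold = 256;  // hand off small sub-arrays here

// Stand-in for the special small-array sorting method.
void SortSmall(float* data, size_t n) { std::sort(data, data + n); }

void Quicksort(float* data, size_t n) {
  if (n <= kSmallThreshold) {
    SortSmall(data, n);
    return;
  }
  // Move a pivot element out of the way, partition the rest, put it back.
  std::swap(data[n / 2], data[n - 1]);  // real code estimates the median
  const float pivot = data[n - 1];
  size_t border = 0;  // number of elements known to be < pivot
  for (size_t i = 0; i + 1 < n; ++i) {
    if (data[i] < pivot) std::swap(data[border++], data[i]);
  }
  std::swap(data[border], data[n - 1]);  // pivot lands in its final position
  Quicksort(data, border);
  Quicksort(data + border + 1, n - border - 1);
}
```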

Happily, modern instruction sets (Arm SVE, RISC-V V, x86 AVX-512) include a special instruction suitable for partitioning. Given a separate input of yes/no values (whether an element is less than the pivot), this "compress-store" instruction stores to consecutive memory only the elements whose corresponding input is "yes". We can then logically negate the yes/no values and apply the instruction again to write the elements to the other partition. This strategy has been used in an AVX-512-specific Quicksort. But what about other instruction sets such as AVX2 that don't have compress-store? Previous work has shown how to emulate this instruction using permute instructions.
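
To make the partition step concrete, here is a hedged sketch written directly with AVX-512 intrinsics rather than the portable implementation described below; it assumes the element count is a multiple of 16 and writes the two partitions to separate buffers, whereas production code partitions in place and handles remainders:

```cpp
#include <immintrin.h>
#include <cstddef>

// Illustrative compress-store partition (compile with -mavx512f -mpopcnt).
// Returns the number of elements written to `left`; the rest go to `right`.
size_t PartitionAVX512(const float* in, size_t n, float pivot,
                       float* left, float* right) {
  const __m512 vpivot = _mm512_set1_ps(pivot);
  size_t num_left = 0, num_right = 0;
  for (size_t i = 0; i < n; i += 16) {
    const __m512 v = _mm512_loadu_ps(in + i);
    // One mask bit per lane: set if the element is less than the pivot.
    const __mmask16 is_less = _mm512_cmp_ps_mask(v, vpivot, _CMP_LT_OQ);
    const __mmask16 is_ge = static_cast<__mmask16>(~is_less);
    // Compress-store packs only the selected lanes into consecutive memory.
    _mm512_mask_compressstoreu_ps(left + num_left, is_less, v);
    _mm512_mask_compressstoreu_ps(right + num_right, is_ge, v);
    const size_t count_less = static_cast<size_t>(_mm_popcnt_u32(is_less));
    num_left += count_less;
    num_right += 16 - count_less;
  }
  return num_left;
}
```

On instruction sets without a native compress-store, the same effect can be achieved with the permute-based emulation mentioned above.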

We build on these techniques to achieve the first vectorized Quicksort that is portable to six instruction sets across three architectures, and in fact outperforms prior architecture-specific sorts. Our implementation uses Highway's portable SIMD functions, so we do not have to re-implement about 3,000 lines of C++ for each platform. Highway uses compress-store when available and otherwise the equivalent permute instructions. In contrast to the previous state of the art—which was also specific to 32-bit integers—we support a full range of 16-128 bit inputs.

Despite our single portable implementation, we reach record-setting speeds on AVX2 and AVX-512 (Intel Skylake), as well as Arm NEON (Apple M1). For one million 32/64/128-bit numbers, our code running on Apple M1 can produce sorted output at rates of 499/471/466 MB/s. On a 3 GHz Skylake with AVX-512, the speeds are 1123/1119/1120 MB/s. Interestingly, AVX-512 is 1.4-1.6 times as fast as AVX2, a worthwhile speedup for zero additional effort (Highway checks what instructions are available on the CPU and uses the best available ones). When running on AVX2, we measure 798 MB/s, whereas the prior state of the art optimized for AVX2 only manages 699 MB/s. By comparison, the standard library reaches 58/128/117 MB/s on the same CPU, so we have managed a 9-19x speedup depending on the type of numbers.

Sorting has previously been considered expensive. We are interested to see what new applications and capabilities will be unlocked by being able to sort at 1 GB/s on a single CPU core. The Apache2-licensed source code is available on GitHub (feel free to open an issue if you have any questions or comments), and our paper offers a detailed explanation and evaluation of the implementation (including the special case for 256 elements).


By Jan Wassenberg – Brain Computer Architecture Research

Build Open Silicon with Google

TL;DR: The Google Hardware Toolchains team is launching a new developer portal, developers.google.com/silicon, to help the developer community get started with its Open MPW shuttle program, which allows anyone to submit open source integrated circuit designs to be manufactured at no cost.

Since November 2020, when Skywater Technologies announced their partnership with Google to open source their Process Design Kit for the SKY130 process node, the Hardware Toolchains team here at Google has been on a journey to make building open silicon accessible to all developers. Having access to an open source and manufacturable PDK changes the status quo in the custom silicon design industry and academia:
  • Designers are now free to start their projects liberated from NDAs and usage restrictions
  • Researchers are able to make their research reproducible by their fellow peers
  • Open source EDA tools can integrate deeply with the manufacturing process
Together we've built a community of more than 3,000 members, where hardware designers and software developers alike can contribute in their own way to advancing the state of the art of open silicon design.

Between the opposing trends of Moore's law coming to an end and the exponential growth of connected devices (IoT), there is a real need to find more sustainable ways to scale computing. We need to go beyond cramming more transistors into smaller areas and toward more efficient, dedicated hardware accelerators. Given the recent global chip supply chain struggles, with lead times for popular ICs sometimes exceeding a year, we need to do this by leveraging more of the existing global foundry capacity that provides access to older, proven process node technologies.

Mature process nodes like SKY130 (a 130nm technology) offer a great way to prototype IoT applications that often need to balance cost and power with performance and leverage a mix of analog blocks and digital logic in their designs. They offer a faster turnaround than bleeding-edge process nodes for a fraction of the price, reducing the temporal and financial cost of making the right mistakes necessary to converge on an optimal design.

By combining open access to PDKs with recent advances in open source ASIC toolchains like OpenROAD and OpenLane, and higher-level synthesis toolchains like XLS, we are getting one step closer to bringing software-like development methodologies and fast iteration cycles to the silicon design world.

Free and open source licensing, community collaboration, and fast iteration transformed the way we all develop software. We believe we are at the edge of a similar revolution for custom accelerator development, where hardware designers compete by building on each other's works rather than reinventing the wheel.

Towards this goal, we've been sponsoring a series of Open MPW shuttles on the Efabless platform, allowing around 250 open source projects to manufacture their own silicon. 

[Images: MPW silicon wafer (zoomed view) and MPW chip dies]

With the latest MPW-5 shuttle, which closed in March this year, we've seen a record level of engagement: 78 open silicon projects from 19 different countries were submitted for inclusion.

Each project gets a fixed 2.92 mm x 3.52 mm user area and 38 I/O pins in a predefined harness in which to harden its design, and is also provided with the necessary test infrastructure to validate chip specifications and behavior before being submitted for tapeout.

We've seen a wide variety of design submissions to previous editions of the shuttle, as illustrated by the MPW-5 floor plan below.
[Image: floor plan for MPW-5 submissions]

Our partner Efabless announced that the next MPW-6 shuttle will accept open source project submissions until June 8, 2022. We can't wait to see the variety of projects the open silicon community creates, building on top of the corpus of open source designs that grows steadily from one Open MPW shuttle to the next.

To help you get on board with future shuttles, we created a new developer portal that provides pointers for getting started with the various tools of the open silicon ecosystem, so make sure to check out the portal and start your open silicon journey!




By Johan Euphrosine – Google Hardware Toolchains


GSoC 2022 accepted Contributors announced!

May is here and we’re pleased to announce the Google Summer of Code (GSoC) Contributors for 2022. Our 196 mentoring organizations have spent the last few weeks making the difficult decisions about which applicants they will be mentoring this year as GSoC Contributors.


Some notable results from this year’s application period:
  • Over 4,000 applicants from 96 countries
  • 5,155 proposals submitted
  • 1,209 GSoC contributors accepted from 62 countries
  • 1,882 mentors and organization administrators
For the next few weeks our GSoC 2022 Contributors will be actively engaging with their new open source community and learning the ins and outs of how their new community works. Mentors will help guide them through the documentation and processes the community uses, as well as help the GSoC Contributors plan their milestones and projects for the summer. This Community Bonding period helps familiarize the GSoC Contributors with the languages and tools they will need to successfully complete their projects. Coding begins June 13th and for most folks will wrap up September 5th; however, this year GSoC Contributors can request a longer coding period and wrap up their projects by mid-November.

Thank you to all the applicants who reached out to our mentoring organizations to learn more about the work they do and for the time they spent crafting their project proposals. We hope you all learned more about open source and maybe even found a community you want to contribute to even outside of GSoC. Staying connected with the community or reaching out to other organizations is a great way to set the stage for future opportunities. Open source communities are always looking for new, excited contributors to bring fresh perspectives and ideas to the table. We hope you connect with an open source community or apply to a future GSoC.

There are many changes to this 18th year of GSoC and we are excited to see how our GSoC Contributors and mentoring organizations take advantage of these adjustments. A big thank you to all our mentors and organization administrators who make this program so special.

GSoC Contributors—have fun this summer and keep learning! Your mentors and community members have dozens, and in some cases hundreds, of years of experience; let them share their knowledge with you and help you become awesome open source contributors!

By Stephanie Taylor, Google Open Source

Season of Docs announces participating organizations for 2022


Season of Docs provides support for open source projects to improve their documentation and gives professional technical writers an opportunity to gain experience in open source. Together we raise awareness of open source, of docs, and of technical writing. 

For 2022, Season of Docs is pleased to announce that 31 organizations will be participating in the program! The list of participating organizations can be viewed on the website.

The project development phase now begins. Organizations and the technical writers they hire will work on their documentation projects from now until November 15th. For organizations who are still looking to hire a technical writer, the hiring deadline is May 16th.

How do I take part in Season of Docs as a technical writer?

Start by reading the technical writer guide and FAQs which give information about eligibility and choosing a project. Next, technical writers interested in working with accepted open source organizations can share their contact information via the Season of Docs GitHub repository; or they may submit a statement of interest directly to the organizations. We recommend technical writers reach out to organizations before submitting a statement of interest to discuss the project they’ll be working on and gain a better understanding of the organization. Technical writers do not need to submit a formal application through Season of Docs, so reach out to the organizations as soon as possible!

Will technical writers be paid while working with organizations accepted into Season of Docs?

Yes. Participating organizations will transfer funds directly to the technical writer via OpenCollective. Technical writers should review the organization's proposed project budget and discuss their compensation and payment schedule with the organization before being hired. Check out our technical writer payment process guide for more details.

General Timeline

  • May 16: Technical writer hiring deadline
  • June 15: Organization administrators start reporting on their project status via monthly evaluations
  • November 15: Organization administrators submit their case study and final project evaluation
  • December 14: Google publishes the 2022 Season of Docs case studies and aggregate project data
  • May 2, 2023: Organizations begin to participate in post-program followup surveys

See the full timeline for details.

Care to join us?

Explore the Season of Docs website at g.co/seasonofdocs to learn more about the program. Use our logo and other promotional resources to spread the word. Review the timeline, check out the FAQ, and reach out to organizations now!

If you have any questions about the program, please email us at [email protected].

By Romina Vicente and Erin McKean, Google Open Source Programs Office

Google Summer of Code 2022: Contributor applications now open

Contributor applications for Google Summer of Code (GSoC) 2022 are now open!

Google Summer of Code is a global, online program focused on bringing new contributors into open source software development. GSoC contributors work with an open source organization on a 12+ week programming project under the guidance of mentors. 

Since 2005, GSoC has welcomed new developers into the open source community every year. The GSoC program has brought over 18,000 contributors from 112 countries together, with over 17,000 mentors from 746 open source organizations.

For 2022, GSoC made significant changes to expand the reach and flexibility of the program. The following are the key changes:
  • All newcomers and beginners to open source 18 years and older may now apply to GSoC
  • GSoC now supports both medium sized projects (~175 hours) and large projects (~350 hours)
  • Projects can be spread out over 10–22 weeks
We invite students, graduates, and folks at various stages of their career to check out Google Summer of Code. Now that applications are open, please keep a few helpful tips in mind:
  • Narrow down your list to 2-4 organizations and review their ideas list
  • Reach out to the organizations via their contact methods listed on the GSoC site
  • Engage with your organization early and often
Contributors may register and submit project proposals on the GSoC site from now until Tuesday, April 19th at 18:00 UTC.

Best of luck to all our applicants!

Romina Vicente, Program Manager – Google Open Source

Announcing First Group of Google Open Source Peer Bonus Winners in 2022

After receiving over 200 nominations from Googlers, we are very pleased to announce our biggest group of winners to date for the Google Open Source Peer Bonus Program.

We are honored to present 154 contributors from 29 countries with peer bonuses, representing more than 80 open source projects.

The Google Open Source Peer Bonus program was launched in 2011 and has, over the years, become a much-loved initiative within the open source community. Many teams at Google rely on open source projects in their work and are very keen to support contributors who devote their time and energy to these projects. Here are some quotes from our winners about what the program means to them.

“Google's OSS Peer Bonus program recognizes the fantastic work done by people who volunteer their time tirelessly to contribute to open source projects. Society at large benefits from having a strong community of contributors to open source software. I'm humbled to receive the OSPB award.” – Robert A. van Engelen, ugrep contributor

“It is a very motivating program, rewarding and acknowledging important work [for open source].” - Christoph Gorgulla, VirtualFlow contributor

“Open source is a great chance to work on worldwide use products with other developers. It was a pleasure and, hope I made Firebase a bit better. Thanks a lot!” - Andrey Uryadov, Firebase iOS SDK contributor

“The Angular team is incredibly welcoming and supportive to open source contributors, the support and appreciation they give to any sort of contribution, no matter on size or relevance is really impressive and heartwarming. It is a pleasure and an honor to be able to interact with such wonderful people and of course awe-inspiring software engineers.” - Dario Piotrowicz, Angular contributor

Below is the list of winners who gave us permission to thank them publicly:

  • altair – Christopher Davis
  • altair – Mattijn van Hoek
  • Android FHIR SDK – Aditya Kurkure
  • Android FHIR SDK – Ephraim Kigamba
  • AndroidX – Simon Schiller
  • AndroidX, Jetpack – Eli Hart
  • Angular – Dario Piotrowicz
  • Apache Airflow – Ash Berlin-Taylor
  • Apache Airflow – Kaxil Naik
  • Apache Beam – Alex Kosolapov
  • Apache Beam – Alex Van Boxel
  • Apache Beam – Austin Bennett
  • Apache Beam – Calvin Leung
  • Apache Beam – Chun Yang
  • Apache Beam – Matthias Baetens
  • Apache Beam, Hop – Matt Casters
  • Apache Cassandra – Dinesh Joshi
  • Apache Log4J – Ralph Goers
  • apache/pinot, evidentlyai/evidently – Nadcharin Silaphung
  • ASF Diversity and Inclusion Committee – Katia Rojas
  • Bazel – Brentley Jones
  • Bazel – Fabian Meumertzheim
  • bazel-zig-cc – Motiejus Jakštys
  • Buefy – Walter Tommasi
  • caps-rs – Luca Bruno
  • Chrome DevTools – Jesper van den Ende
  • Chrome OS – Álvaro Guzmán Parrochia
  • Chromium – Jinyoung Hur
  • Cirq – Victory Omole
  • conda-forge package maintenance – Mark Harfouche
  • ContainerSSH – Sanja Bonic
  • coreboot – Elyes Haouas
  • coreboot – Felix Held
  • coreboot – Felix Singer
  • coreboot – Matt DeVillier
  • COVID-19 scenario modeling hub – Matteo Chinazzi
  • Docsy – Andreas Deininger
  • Docsy – Franz Steininger
  • Docsy – Gareth Watts
  • Docsy – Patrice Chalin
  • DoIT – Eduardo Naufel Schettino
  • Eleventy – Zach Leatherman
  • Firebase iOS SDK – Artem Volkov
  • Firebase iOS SDK – Florian Schweizer
  • Firebase iOS SDK – Morten Bek Ditlevsen
  • Firebase iOS SDK – Akira Matsuda
  • Firebase iOS SDK – Andrey Uryadov
  • Firebase iOS SDK – Ashleigh Kaffenberger
  • Firebase iOS SDK – Kamil Powałowski
  • Firebase iOS SDK – Marina Gornostaeva
  • Firebase iOS SDK – Paul Harter
  • Firebase iOS SDK – Yakov Manshin
  • Flutter – Alex Li
  • Flutter – Xu Baolin
  • Flutter DevTools – Bruno Leroux
  • Fuchsia – Fabio D'Urso
  • Gentoo – Agostino Sarubbo
  • Gentoo – Toralf Förster
  • Go – Rhys Hiltner
  • Good Docs Project – Carrie Crowe
  • Halide – Alex Reinking
  • HTTP Archive – Barry Pollard
  • classgraph – Luke Hutchison
  • Istio – Rama Chavali
  • Jest mock library for Google Maps JavaScript – Eric Egli
  • jupyter_bbox_widget – Daria Vasyukova
  • karatelabs – Dinesh Arora
  • KDE Frameworks 6 – Volker Krause
  • Knative – Dave Protasowski
  • Knative – Evan Anderson
  • Kubernetes – Adolfo García Veytia
  • Kubernetes – Rey Lejano
  • libsodium – Frank Denis
  • Linux, LLVM – Nathan Chancellor
  • LLVM – Sylvestre Ledru
  • LLVM – Zhiqian Xia
  • Mediawiki – Soham Parekh
  • mold – Rui Ueyama
  • Multiscale modeling of brain circuits – Salvador Dura Bernal
  • OpenJDK – Aleksey Shipilëv
  • OpenROAD – Matt Liberty
  • oreboot – Danny Milosavljevic
  • ostreedev/ostree – Colin Walters
  • p5.js – Lauren Lee McCarthy
  • PepTrans: SARS-CoV-2 Peptidic Drug Discovery – Ahmed Elnaggar
  • protoc-jar-maven-plugin – Oliver Suciu
  • PyBaMM – Priyanshu Agarwal
  • RDKit – Greg Landrum
  • regex-automata – Andrew Gallant
  • rgs1 – Raul Gutierrez Segales
  • Robolectric – Junyi Wang
  • Ruby for Good – Gia Coelho
  • Ruby for Good – Sean Marcia
  • Sass – Christophe Coevoet
  • Screenity, Omni, Mapus, Flowy – Alyssa X
  • sigstore – Carlos Panato
  • SLF4j, Logback, reload4j Java Logging Frameworks – Ceki Gülcü
  • Smithay – Victor Berger
  • Sollya – Christoph Lauter
  • Sollya – Mioara Joldes
  • Sollya – Sylvain Chevillard
  • Spanish Open Source Distributed Systems Seminar – Ricardo Zavaleta
  • strict-csp and html-webpack-plugin – Jan Nicklas
  • Tekton Pipelines – Aiden De Loryn
  • Tekton Pipelines – Eugene McArdle
  • TFX – Gerard Casas Saez
  • TFX – Vincent Nguyen
  • The Good Docs Project – Chris Ganta
  • The Good Docs Project – Deanna Thompson
  • The Good Docs Project – Gayathri Krishnaswamy
  • The Good Docs Project – Nelson Guya
  • TL Draw – Steve Ruiz
  • Trust-DNS – Benjamin Fry
  • ugrep – Robert van Engelen
  • virtio-iommu – Jean-Philippe Brucker
  • VirtualFlow – Christoph Gorgulla
  • Vite, Vitest – Matias Capeletto
  • Vite, Vitest – Anthony Fu
  • Vue, Stylelint – Yosuke Ota
  • Vuls – Kota Kanbe
  • wails – Lea Anthony
  • WalkingPad controller – Dušan Klinec
  • Web Almanac – David Fox
  • WebRTC – Philipp Hancke
  • What we teach about race and gender: Representation in images and text of children books – Teodora Szasz
  • Zig – Andrew Kelley


Thank you for your contributions to open source! Congratulations!

By Maria Tabak – Google Open Source

Rewarding Rust contributors with Google Open Source Peer Bonuses

We are very excited to reward 25 open source contributors specifically for their work on Rust projects!

At Google, Open Source lies at the core of not only our processes but also many of our products. While we try to directly contribute upstream as much as possible, the Google Open Source Peer Bonus program is designed to reward external open source contributors for their outstanding contributions to FOSS, whether the contribution benefits Google in some way or not.

The Rust programming language is an open source systems programming language with a strong focus on memory safety. The language has a caring community and a robust package ecosystem, which have heavily contributed to its growing popularity. The Rust community shows dedication to maintaining quality packages and tooling, which we at Google are quite thankful for.

Among many other things, Google uses Rust in some open source projects, including Android, Fuchsia, and ICU4X; and has been participating in the efforts to evaluate Rust in the Linux Kernel. Google is also a founding member of the Rust Foundation.

Below is the list of winners who gave us permission to thank them publicly:

  • antoyo – For work on rustc_codegen_gcc
  • Asherah Connor – For maintaining comrak
  • David Hewitt – For maintaining PyO3
  • Dirkjan Ochtman – For maintaining rustls and quinn
  • Frank Denis – For maintaining rust-ed25519-compact
  • Gary Guo – For maintaining Rust for Linux
  • Jack Grigg – For integrating RustCrypto into Fuchsia
  • Jack Huey – For highly involved rust compiler work fixing a large number of crashes around higher-kinded types
  • Joe Birr-Pixton – For building rustls
  • Joshua Nelson – For improving the developer workflow for contributing to Rust itself
  • Lokathor – For creating tinyvec and bytemuck
  • Mara Bos – For work on the Rust Libraries Team and the 2021 Rust Edition
  • Nikita Popov – For maintaining the Rust compiler’s LLVM backend
  • Pietro Albini – For maintaining crucial Rust infrastructure and working on the Rust core team
  • Ricky Hosfelt – For maintaining cargo-outdated
  • Sébastien Crozet – For creating dimforge
  • Simonas Kazlauskas – For maintaining the Rust compiler’s LLVM backend


Thank you for your contributions to the Rust projects and ecosystem! Congratulations!

By Maria Tabak, Google Open Source and Manish Goregaokar, Rustacean Googler

How to integrate your web app with Google Ads

TL;DR: You can now have a web application integrated with Google Ads in just a few minutes!

Google Ads
Google Ads is an online advertising platform where advertisers can create and manage their Google marketing campaigns. The Google Ads API is the modern programmatic interface to Google Ads and the next generation of the AdWords API. It enables developers to interact directly with the Google Ads platform, vastly increasing the efficiency of managing large or complex Google Ads accounts and campaigns.

A typical use case is when a company wants to offer Google ads natively on their platform to their users. For example, customers who have an online store with Shopify can promote their business using Google ads, with just a few clicks and without needing to go to the Google Ads platform. They’re able to do it directly on Shopify’s platform—the Google Ads API makes this possible.

Demo App
Francisco Blasco, Strategic Technical Solutions Manager at Google, designed and built an open source web application that is integrated with Google Ads and Business Profile (aka Google My Business).

Anyone can use the app, called Fran Ads, to save significant time on product development. Just follow the simple installation steps in the README files (frontend README file and backend README file) on the GitHub repo! The app uses React for the frontend and Django for the backend, two of the most popular web frameworks.



Check out a product demo here! You can have this app running on your local machine in a few minutes. To learn how, check out the video tutorial.

Blasco acts as an external Product Manager for Google’s strategic partners, driving the entire product development lifecycle. He created this project to help Google’s partners and businesses seeking to offer Google Ads to their users.

The goal is to accelerate the Google Ads integration process and decrease associated development costs. Some companies are using Fran Ads to see what an integration looks like, while others are using the technical guide to learn how to start using the Google Ads API.

In general, companies can use Fran Ads as an SDK to begin working with elements within the Google Ads API and as a guidance system for integrating with Google. This project will minimize the number of times the wheel needs to be reinvented, accelerating innovation and facilitating adoption. Developers can clone the code repositories, follow the steps, and have a web app integrated with Google Ads in just a few minutes. They can adapt and build on top of this project, or they can just use the functions they need for the features they want to develop.



[Image: app architecture diagram]

Furthermore, you will learn how to create credentials to consume Google APIs; specifically, the README files show how to create a project on Google Cloud Platform (GCP) and how to set it up correctly so a web app can consume the Google Ads API and Business Profile APIs.

Also, you will learn how refresh tokens work for Google APIs, and how to manage them for your web application.

Francisco wrote a detailed technical guide explaining how to build every feature of the app. Some of the most important features are:
        1. Create a new Google Ads account
        2. Link an existing Google Ads account
        3. OAuth authentication & authorization
        4. Refresh token management
        5. List of Google Ads accounts associated with Google account
        6. Reporting on performance for all campaign types
        7. Create Smart Campaign (automated ads on Google and across the web)
        8. Edit Smart Campaign settings

As you can see from the list above, the app will create Smart Campaigns, a simplified, automated campaign type designed for new advertisers and SMBs.

Google has made its suggestion services public through the Google Ads API. Fran Ads uses those services to recommend keyword themes, headlines and descriptions for the ad, and a budget. These recommendations are specific to each advertiser, depending on several factors such as type of business, location, and keyword themes.



[Image: an example of three Google recommendations for an advertiser]


The image above shows the final step of creating a Smart Campaign in Fran Ads. In this step, users have to set a daily budget for the campaign. Not only will you receive recommendations for the budget, but also an estimate of how many ad clicks you will get per month. This is a great feature for users who are new to digital marketing and aren’t aware of their spending needs.

You can also see an alert message that the budget can be changed anytime, so users can pause spending on the campaign. This is important because many new users, especially SMBs, have doubts about spending on something new. Therefore, it is important to communicate to them that the decision they are making at that moment is not set in stone.

When you start using Fran Ads, you will see there is guidance to help users complete the tasks they want.


[Image: guidance on how to complete tasks based on Google’s best practices]


Furthermore, the app is designed based on Google’s best practices. For example, when users are creating a Smart Campaign, in step three (see the above image) they need to select keyword themes (group of keywords). If you choose “bakery” as the keyword theme, your ad is eligible to show when people search for “bakery near me”, “local bakery”, and “cake shop”.

Google’s best practices suggest that advertisers use between seven and ten keyword themes per campaign. Therefore, Fran Ads is designed for users to select up to seven keyword themes (see the image of step three when creating a Smart Campaign in Fran Ads); however, you can raise the limit to ten if you like.

The technical guide also provides:

        1. Production-ready code for both the frontend and backend
        2. Engineering flow diagrams
        3. Best practices
        4. High-fidelity mockups
        5. App architecture and structure diagrams
        6. Workarounds to current bugs on Google Ads API v9
        7. Important information on how to handle important tasks necessary for integrating your platform with Google Ads
        8. Help with the design strategy for the UX and design elements of the UI.

Important resources

Below is a list of the important resources that will help you integrate with Google Ads more easily, quickly, and effectively.
        1. Frontend repo: all the code for the frontend of Fran Ads.
        2. Backend repo: all the code for the backend of Fran Ads.
        3. Technical guide: three sections: ‘Before Starting’, ‘Configurations & Installation’, and ‘Build web app’. Section 3 explains how to build all the features of the app.
        4. Product demo: 15-minute demo of Fran Ads showing many core features.
        5. Video tutorial: 17-minute tutorial on how to set up and run Fran Ads.


By Francisco Blasco – Launch, Channel Partners

Google Summer of Code 2022 mentoring orgs revealed!


After reviewing over 350 mentoring organization applications, we are excited to announce that 203 open source projects have been selected for Google Summer of Code (GSoC) 2022. This year we are welcoming 32 new organizations to mentor GSoC contributors.

Visit our new program site to view the complete list of GSoC 2022 accepted mentoring organizations. You can drill down into the details for each organization on their program page, including the project ideas they would like GSoC contributors to work on this year.

Are you a developer new to open source interested in participating in GSoC?
If you are a new or beginner open source contributor over 18 years old, we welcome you to apply for GSoC 2022! Contributor applications will open on Monday, April 4, 2022 at 18:00 UTC with Tuesday, April 19, 2022 18:00 UTC being the deadline to submit your application (which includes your project proposal).

The most successful applications come from students who start preparing now. We can’t say this enough—if you want to significantly increase your chances of being selected as a 2022 GSoC Contributor, we recommend you prepare early. Below are some things prospective contributors can do before the application period opens in early April:

  • Watch our short videos: What is GSoC? and Being a GSoC Contributor
  • Check out the Contributor/Student Guide and Advice for Applying to GSoC doc.
  • Review the list of accepted organizations and find two to four that interest you and read through their Project Ideas lists.
  • When you see an idea that piques your interest, reach out to the organization via their preferred communication methods (listed on their org page on the GSoC program site).
  • Talk with the mentors and community to determine if this project idea is something you would enjoy working on during the program. Find a project that motivates you, otherwise it may be a challenging summer for you and your mentor.
  • Use the information you received during your communications with the mentors and other org community members to write up your proposal.
You can find more information about the program on our website which includes a full timeline of important dates. We also highly recommend reading the FAQ and Program Rules and watching some of our other videos with more details about GSoC for contributors and mentors.

A hearty welcome—and thank you—to all of our mentor organizations! We look forward to working with all of you during Google Summer of Code 2022.

By Stephanie Taylor – Google Open Source