
Authenticating to HashiCorp Vault using GCE Signed Metadata

Applications often require access to small pieces of sensitive data at build or run time, referred to as secrets. Secrets are generally more sensitive than other environment variables or parts of your repository as they may grant access to additional data, such as user information.

HashiCorp Vault is a popular open source tool for secret management, which allows a developer to store, manage and control access to tokens, passwords, certificates, API keys and other secrets. Vault has authentication backends, which allow developers to use many kinds of identities to access Vault, including tokens, or usernames and passwords.

Today, we’re announcing a new Google Compute Engine signed metadata authentication backend for Vault. This allows a developer to use an existing running instance to authenticate directly to Vault, using a JWT of this instance’s signed metadata as its identity. Learn more about the latest release of Vault.


The following example shows how a user can enable and configure the GCP auth backend and then authenticate an instance with Vault. See the docs to learn more.

In the following example, an admin mounts the backend at auth/gcp and adds:
  • Credentials that the backend can use to verify the instance (requiring some read-only permissions)
  • A Vault-specific role (roles determine what Vault secrets the authenticating GCE VM instance can access, and can restrict login to a set of instances by zone or region, instance group, or GCP labels)
# Mount the backend at 'auth/gcp'.
$ vault auth-enable 'gcp'
...

# Add configuration. This can include credentials Vault will
# use to make calls to GCP APIs to verify authenticating instances.
$ vault write auth/gcp/config credentials=@path/to/creds.json
...

# Add a role. Roles determine which Vault policies an authenticating
# GCE instance receives, and can restrict login to a set of instances
# by zone or region, instance group, or labels.
$ vault write auth/gcp/role/my-gce-role \
    type="gce" \
    policies="dev,prod" \
    project_id="project-123456" \
    zone="us-central1-c" \
    instance_group="my-instance-group" \
    labels="foo:bar,prod:true,..." \
    ...
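
As an optional sanity check (not part of the original flow), the role can be read back to confirm it was stored as intended:

# Read back the role to verify its configuration.
$ vault read auth/gcp/role/my-gce-role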

Then, a Compute Engine instance can authenticate to the Vault server using the following script:

#! /bin/bash

# [START PARAMS]
# Substitute a real Vault address or environment variable.
VAULT_ADDR="https://vault.rocks"

# Substitute another service account for the VM instance or use the
# built-in default.
SERVICE_ACCOUNT="default"

# The instance will attempt login under this role.
ROLE="my-gce-role"
# [END PARAMS]


# Generate a metadata identity token JWT with the expected audience.
# The GCP backend expects a JWT 'aud' claim ending in "vault/$ROLE".
AUD="$VAULT_ADDR/vault/$ROLE"
TOKEN="$(curl -H "Metadata-Flavor: Google" \
  "http://metadata/computeMetadata/v1/instance/service-accounts/$SERVICE_ACCOUNT/identity?audience=$AUD&format=full")"
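
# (Optional sanity check, assuming the jq tool is available: decode the
# JWT payload to confirm the 'aud' claim. Note that the base64url
# payload may need '=' padding appended before decoding.)
# echo "$TOKEN" | cut -d '.' -f 2 | base64 -d 2>/dev/null | jq .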


# Attempt login against Vault.
# Option 1: using the HTTP API:

cat > payload.json <<-END
{
"role": "$ROLE",
"jwt": "$TOKEN"
}
END

RESP="$(curl \
  --request POST \
  --data @payload.json \
  "$VAULT_ADDR/v1/auth/gcp/login")"

# OR

# Option 2: use the Vault CLI, if installed:
vault write auth/gcp/login role="$ROLE" jwt="$TOKEN" > response

# Get the auth token from the JSON response and use it to
# read your secrets from Vault!
# ...
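
As a sketch of that last step (assuming the jq tool is available and the API-based login above was used), the client token can be extracted from the response and sent on subsequent requests for secrets:

# Pull the Vault client token out of the JSON login response.
VAULT_TOKEN="$(echo "$RESP" | jq -r '.auth.client_token')"

# Use it to read a secret. The path secret/my-app is illustrative;
# it assumes a secret was written there beforehand.
curl -H "X-Vault-Token: $VAULT_TOKEN" \
  "$VAULT_ADDR/v1/secret/my-app"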

A few weeks ago, we also announced a Google Cloud Platform IAM authentication backend for HashiCorp Vault, allowing authentication using an existing IAM service account identity. For further guidance on using HashiCorp Vault with Google Cloud, take a look at today’s Google Cloud blog post.

By Emily Ye, Software Engineer

Announcing Google Code-in 2017: The Latest and Greatest for Year Eight

We are excited to announce the 8th consecutive year of the Google Code-in (GCI) contest! Students ages 13 through 17 from around the world can learn about open source development by working on real open source projects, with mentorship from active developers. GCI begins on Tuesday, November 28, 2017 and runs for seven weeks, through Wednesday, January 17, 2018.


Google Code-in is unique because not only do the students choose what they want to work on from the 2,000+ tasks created by open source organizations, but they also have mentors available to help answer their questions as they work on each of their tasks.

Starting to work on open source software can be a daunting task in and of itself. How do I get started? Does the organization want my help? Am I too inexperienced? These are all questions that developers (of all ages) might consider before contributing to an open source organization.

The beauty of GCI is that participating open source organizations realize teens are often first time contributors, and the volunteer mentors are equipped with the patience and the experience to help these young minds become part of the open source community.

Open source communities thrive when there is a steady flow of new contributors who bring new perspectives, ideas, and enthusiasm. Over the last 7 years, GCI open source organizations have helped over 4,500 students from 99 countries become contributors. Many of these students are still contributing to open source years later. Dozens have gone on to become Google Summer of Code (GSoC) students and even mentors for other students.

The tasks open source organizations create vary in skill set and level, including beginner tasks any student can take on, such as “set up your development environment.” With tasks in five different categories, there’s something to fit almost any student’s skills:

  • Code: writing or refactoring 
  • Documentation/Training: creating/editing documents and helping others learn more
  • Outreach/Research: community management, marketing, or studying problems and recommending solutions
  • Quality Assurance: testing and ensuring code is of high quality
  • User Interface: user experience research, user interface design, or graphic design

Open source organizations can apply to participate in Google Code-in starting on Monday, October 9, 2017. Google Code-in starts for students November 28th!

Visit the contest site g.co/gci to learn more about the contest and find flyers, slide decks, timelines and more.

By Stephanie Taylor, Google Open Source

Making the Google Developers Documentation Style Guide Public

Cross-posted on the Google Developers Blog

You can now use our developer-documentation style guide for open source documentation projects.

For some years now, our technical writers at Google have used an internal-only editorial style guide for most of our developer documentation. In order to better support external contributors to our open source projects, such as Kubernetes, AMP, or Dart, and to allow for more consistency across developer documentation, we're now making that style guide public.

If you contribute documentation to projects like those, you now have direct access to useful guidance about voice, tone, word choice, and other style considerations. It can be useful for general issues, like reminders to use second person, present tense, active voice, and the serial comma; it can also be great for checking very specific issues, like whether to write "app" or "application" when you want to be consistent with the Google Developers style.

The style guide is a reference document, so instead of reading through it in linear order, you can use it to look things up as needed. For matters of punctuation, grammar, and formatting, you can do a search-in-page to find items like "Commas," "Lists," and "Link text" in the left nav. For specific terms and phrases, you can look at the word list.

Keep an eye on the guide's release notes page for updates and developments, and send us your comments and suggestions via the Send Feedback link on each page of the guide—we want to hear from you as we continue to evolve the style guide.

Posted by Jed Hartman, Technical Writer

Google Summer of Code 2017 Student Curtain Call

Back in early May we announced the students accepted into the 13th edition of the Google Summer of Code (GSoC) program, our largest program ever! Today we are pleased to announce the 1,128 (86.2%*) students from 68 countries who successfully completed the 2017 GSoC. Great job, students!

Students worked diligently with 201 open source organizations and over 1,600 mentors to learn to work with internationally distributed online teams, write great code and help their mentoring org enhance, extend and refine their codebases. Students have also become an important part of these communities. We feel strongly that to keep open source organizations thriving and evolving, they need new ideas - GSoC students help to bring fresh perspectives to these important projects.

We look forward to seeing even more from the 2017 students. Many will go on to become GSoC mentors in future programs and many more will become committers to these and other open source organizations. Some may even create their own open source projects! These students have a bright future ahead of them in technology and open source.

Interested in what the students worked on this summer? Check out their work as well as statistics on past programs.

A big thank you to our mentors and organization administrators who make this program possible. Their dedication to welcoming new student contributors into their communities and teaching them the fundamentals of open source is awesome and inspiring. Thank you all!

Congratulations to all of the GSoC 2017 students and the mentors who made this our biggest and best Google Summer of Code yet.

By Stephanie Taylor, Google Open Source

* 1,309 students started the coding period on May 30th, stats are based upon that number.

The Mentors of Google Summer of Code 2017

Every year, we pore over oodles of data to extract the most interesting and relevant statistics about the Google Summer of Code (GSoC) mentors. Mentors are the bread and butter of our program - without their hard work and dedication, there would be no GSoC. These volunteers spend 12 weeks (plus a month of community bonding) tirelessly guiding their students to create the best quality project possible and welcoming them into their communities - answering questions and providing help at all hours.

Here’s a quick snapshot of our 2017 group:
  • Total mentors: 3,439
  • Mentors assigned to an active project: 1,647
  • Mentors who have participated in GSoC over 10 years: 22
  • Percentage of new mentors: 49%
GSoC 2017 mentors are a worldly group, hailing from 69 countries on 6 continents - we’re still waiting on a mentor from Antarctica… Anyone?

Interested in the data? Check out the full list of countries.
Some interesting factoids about our mentors:
  • Average age: 39
  • Youngest: 15*
  • Oldest: 68
  • Most common first name: Michael (there are 40!)
GSoC mentors help to introduce the next generation to the world of open source software development — for that we are very grateful. To show our appreciation, we invite two mentors from each of the 201 participating organizations to attend the annual mentor summit at the Google campus in Sunnyvale, California. It’s three days of food, community building, lively debate and lots of fun.

Thank you to everyone involved in Google Summer of Code. Cheers to yet another great year!

By Mary Radomile, Google Open Source

* Say what? 15 years old!? Yep! We had 12 GSoC mentors under the age of 18. This group of enthusiastic teens started their journey in our sister program, Google Code-in, an open source coding competition for 13-17 year olds. You can read more about it at g.co/gci.

Bringing Real-time Spatial Audio to the Web with Songbird

For a virtual scene to be truly immersive, stunning visuals need to be accompanied by true spatial audio to create a realistic and believable experience. Spatial audio tools allow developers to include sounds that can come from any direction, and that are associated in 3D space with audio sources, thus completely enveloping the user in 360-degree sound.

Spatial audio helps draw the user into a scene and creates the illusion of entering an entirely new world. To make this possible, the Chrome Media team has created Songbird, an open source, spatial audio encoding engine that works in any web browser by using the Web Audio API.

The Songbird library takes in any number of mono audio streams and allows developers to programmatically place them in 3D space around the user. Songbird allows you to create immersive soundscapes, realistically reproducing reflection and reverb for the space you describe. Sounds bounce off walls and reflect off materials just as they would in real life, capturing truly 360-degree sound. Songbird creates an ambisonic soundfield that can then be rendered in real-time for use in your application. We’ve partnered with the Omnitone project, which we blogged about last year, to add higher-order ambisonic support to Omnitone’s binaural renderer to produce far more accurate sounding audio than ever before.

Songbird encapsulates Omnitone and, with it, developers can now add interactive, full-sphere audio to any web-based application. Songbird can scale to any ambisonic order, creating more realistic sound and higher performance than is achievable through the standard Web Audio API.
Songbird Audio Processing Diagram
The implementation of Songbird is based on the Google spatial media specification. It expects mono input and outputs an ambisonic (multichannel) signal in ACN channel layout with SN3D normalization. Detailed documentation may be found here.

As the web emerges as an important VR platform for delivering content, spatial audio will play a vital role in users’ embrace of this new medium. Songbird and Omnitone are key tools in enabling spatial audio on the web platform and establishing it as a preeminent platform for compelling VR experiences. Combining these audio experiences with 3D JavaScript libraries like three.js gives a glimpse into the future on the web.
Demo combining spatial sound in 3D environment
This project was made possible through close collaboration with Google’s Daydream and Web Audio teams. This collaboration allowed us to deliver audio capabilities to the web similar to those available to developers creating Daydream applications.

We look forward to seeing what people do with Songbird now that it's open source. Check out the code on GitHub and let us know what you think. Also available are a number of demos on creating full spherical audio with Songbird.

By Jamieson Brettle and Drew Allen, Chrome Media Team

Authenticating to HashiCorp Vault using Google Cloud IAM

Applications often require access to small pieces of sensitive data at build or run time, referred to as secrets. Secrets are generally more sensitive than other environment variables or parts of your repository as they may grant access to additional data, such as user data.

HashiCorp Vault is a popular open source tool for secret management, which allows a developer to store, manage and control access to tokens, passwords, certificates, API keys and other secrets. Vault has many options for authentication, called authentication backends. These allow developers to use many kinds of identities to access Vault, including tokens, or usernames and passwords. As the number of developers on a team grows, these kinds of authentication options become impractical, and in enterprise scenarios, managing and auditing these identities becomes burdensome.

Today, we are pleased to announce a Google Cloud Platform IAM authentication backend for Vault. This allows a developer to use an existing IAM identity to authenticate to Vault. Using a service account, you can sign a JWT to show it came from a particular account, and use that to authenticate to Vault. Learn more in the documentation.


The following example in Go shows how a user can authenticate with Vault using this backend. It assumes the backend has already been mounted at auth/gcp on the Vault server and configured.
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"time"

	vaultapi "github.com/hashicorp/vault/api"
	"golang.org/x/oauth2"
	"golang.org/x/oauth2/google"
	iam "google.golang.org/api/iam/v1"
)

func main() {
	// Start [PARAMS]
	project := "project-123456"
	serviceAccount := "my-service-account@project-123456.iam.gserviceaccount.com"
	credsPath := "path/to/creds.json"

	os.Setenv("VAULT_ADDR", "https://vault.mycompany.com")
	defer os.Setenv("VAULT_ADDR", "")
	// End [PARAMS]

	// Start [GCP IAM Setup]
	jsonBytes, err := ioutil.ReadFile(credsPath)
	if err != nil {
		log.Fatal(err)
	}
	config, err := google.JWTConfigFromJSON(jsonBytes, iam.CloudPlatformScope)
	if err != nil {
		log.Fatal(err)
	}

	httpClient := config.Client(oauth2.NoContext)
	iamClient, err := iam.New(httpClient)
	if err != nil {
		log.Fatal(err)
	}
	// End [GCP IAM Setup]

	// 1. Generate signed JWT using IAM.
	resourceName := fmt.Sprintf("projects/%s/serviceAccounts/%s", project, serviceAccount)
	jwtPayload := map[string]interface{}{
		"aud": "auth/gcp/login",
		"sub": serviceAccount,
		"exp": time.Now().Add(time.Minute * 10).Unix(),
	}

	payloadBytes, err := json.Marshal(jwtPayload)
	if err != nil {
		log.Fatal(err)
	}
	signJwtReq := &iam.SignJwtRequest{
		Payload: string(payloadBytes),
	}

	resp, err := iamClient.Projects.ServiceAccounts.SignJwt(
		resourceName, signJwtReq).Do()
	if err != nil {
		log.Fatal(err)
	}

	// 2. Send signed JWT in login request to Vault.
	vaultClient, err := vaultapi.NewClient(vaultapi.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	vaultResp, err := vaultClient.Logical().Write(
		"auth/gcp/login",
		map[string]interface{}{
			"role": "test",
			"jwt":  resp.SignedJwt,
		})
	if err != nil {
		log.Fatal(err)
	}

	// 3. Use auth token from response.
	log.Printf("Access token %s", vaultResp.Auth.ClientToken)
	vaultClient.SetToken(vaultResp.Auth.ClientToken)
	// ...
}
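
For quick manual testing, the same login can also be performed with the Vault CLI once a signed JWT is in hand (a sketch; it assumes the JWT produced in step 1 has been saved into a SIGNED_JWT environment variable):

# Log in with the Vault CLI using the IAM-signed JWT. SIGNED_JWT is a
# hypothetical variable holding the output of step 1 above.
vault write auth/gcp/login role="test" jwt="$SIGNED_JWT"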

Vault is just one way of managing secrets in development. For further reading on choosing a solution that’s right for you, see Google Cloud Platform’s documentation on Secret Management.

By Emily Ye, Software Engineer

Making Great Mobile Games with Firebase

So much goes into building and maintaining a mobile game. Let’s say you want to ship it with a level builder for sharing content with other players and, looking forward, you want to roll out new content and unlockables linked with player behavior. Of course, you also need players to be able to easily sign into your soon-to-be hit game.

With a DIY approach, you’d be faced with having to build user management, data storage, server-side logic, and more. This would take a lot of your time and, more importantly, divert critical resources from what you really want to do: build that amazing new mobile game!

Our Firebase SDKs for Unity and C++ provide you with the tools you need to add these features and more to your game with ease. Plus, to help you better understand how Firebase can help you build your next chart-topper, we’ve built a sample game in Unity and open sourced it: MechaHamster. Check it out on Google Play or download the project from GitHub to see how easy it is to integrate Firebase into your game.
Before you dive into the code for MechaHamster, here’s a rundown of the Firebase products that can help your game be successful.

Analytics

One of the best tools you have to maintain a high-performing game is your analytics. With Google Analytics for Firebase, you can see where your players might be struggling and make adjustments as needed. Analytics also integrates with AdWords and other major ad networks to maximize your campaign performance. If you monetize your game using AdMob, you can link your two accounts and see the lifetime value (LTV) of your players, from in-game purchases and AdMob, right from your Analytics console. And with StreamView, you can see how players are interacting with your game in real time.

Test Lab for Android - Game Loop Test

Before releasing updates to your game, you’ll want to make sure it works correctly. However, manual testing can be time-consuming when faced with a large variety of target devices. To help solve this, we recently launched Firebase Test Lab for Android Game Loop Test at Google I/O. If you add a demo mode to your game, Test Lab will automatically verify your game is working on a wide range of devices. You can read more in our deep dive blog post here.

Authentication

Another thing you’ll want to be sure to take care of before launch is easy sign-in, so your users can start playing as quickly as possible. Firebase Authentication can help by handling all sign-in and authentication, from simple email + password logins to support for common identity providers like Google, Facebook, Twitter, and GitHub. Just announced recently at I/O, Firebase also now supports phone number authentication. And Firebase Authentication shares state cross-device, so your users can pick up where they left off, no matter what platforms they’re using.

Remote Config

As more players start using your game, you realize that there are a few spots that are frustrating for your audience. You may even see churn rates start to rise, so you decide that you need to push some adjustments. With Firebase Remote Config, you can change values in the console and push them out to players. Some players having trouble navigating levels? You can adjust the difficulty and update remotely. Remote Config can even benefit your development cycle; team members can tweak and test parameters without having to make new builds.

Realtime Database

Now that you have a robust player community, you’re probably starting to see a bunch of great player-built levels. With Firebase Realtime Database, you can store player data and sync it in real-time, meaning that the level builder you’ve built can store and share data easily with other players. You don't need your own server and it’s optimized for offline use. Plus, Realtime Database integrates with Firebase Auth for secure access to user specific data.

Cloud Messaging & Dynamic Links

A few months go by and your game is thriving, with high engagement and an active community. You’re ready to release your next wave of new content, but how can you efficiently get the word out to your users? Firebase Cloud Messaging lets you target messages to player segments, without any coding required. And Firebase Dynamic Links allow your users to share this new content — or an invitation to your game — with other players. Dynamic Links survive the app install process, so a new player can install your app and then dive right into the piece of content that was shared with him or her.

At Firebase, our mission is to help mobile developers build better apps and grow successful businesses. When it comes to games, that means taking care of the boring stuff, so you can focus on what matters — making a great game. Our mobile SDKs for C++ and Unity are available now at firebase.google.com/games.

By Darin Hilton, Art Director

Professors from Around the World Get Their Students into HFOSS

Over the last four years instructors from around the world have gathered for the Professors’ Open Source Software Experience (POSSE) workshop to integrate open source concepts into their curriculum. At each event, professors make more progress toward providing students with hands on experience via contributions to humanitarian free and open source software (HFOSS).

This year Google was proud to not only host a workshop at our San Francisco office in April, but also to collaborate with the organizers to bring a POSSE workshop to Europe for the first time.
POSSE workshop leaders, from left to right: Clif Kussmaul (Muhlenberg College), Lori Postner (Nassau Community College), Stoney Jackson (Western New England University), Heidi Ellis (Western New England University), Greg Hislop (Drexel University), and Darci Burdge (Nassau Community College).
The workshop in Italy was led by Dr. Gregory Hislop from Drexel University, and Drs. Heidi Ellis and Stoney Jackson from Western New England University, and brought together 20 instructors from Germany, Hungary, India, Italy, Macedonia, Qatar, Spain, Swaziland, the United Kingdom, and the United States. This was the most geographically diverse workshop to date!
Group photos in San Francisco, USA on April 22, 2017 (left) and Bologna, Italy on July 1, 2017 (right).
What’s next for POSSE? University instructors from institutions in the US can apply now to participate in the next workshop, November 16-18 in Raleigh, NC and join their peers in the community of instructors weaving HFOSS into their curriculum.

By Helen Hu, Google Open Source

Facets: An Open Source Visualization Tool for Machine Learning Training Data

Cross-posted on the Google Research Blog

Getting the best results out of a machine learning (ML) model requires that you truly understand your data. However, ML datasets can contain hundreds of millions of data points, each consisting of hundreds (or even thousands) of features, making it nearly impossible to understand an entire dataset in an intuitive fashion. Visualization can help unlock nuances and insights in large datasets. A picture may be worth a thousand words, but an interactive visualization can be worth even more.

Working with the PAIR initiative, we’ve released Facets, an open source visualization tool to aid in understanding and analyzing ML datasets. Facets consists of two visualizations that allow users to see a holistic picture of their data at different granularities. Get a sense of the shape of each feature of the data using Facets Overview, or explore a set of individual observations using Facets Dive. These visualizations allow you to debug your data which, in machine learning, is as important as debugging your model. They can easily be used inside of Jupyter notebooks or embedded into webpages. In addition to the open source code, we’ve also created a Facets demo website. This website allows anyone to visualize their own datasets directly in the browser without any software installation or setup, and without the data ever leaving your computer.
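
If you’d like to try the visualizations locally in Jupyter rather than on the demo website, one way to get started looks roughly like this (a sketch: the facets-dist/ path reflects the repository layout at the time of writing, so check the project README for the authoritative steps):

# Fetch the Facets code and install the visualizations as a Jupyter
# notebook extension (assumes Jupyter is already installed; the
# facets-dist/ path is taken from the repository layout).
git clone https://github.com/PAIR-code/facets.git
cd facets
jupyter nbextension install facets-dist/ --user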

Facets Overview

Facets Overview automatically gives users a quick understanding of the distribution of values across the features of their datasets. Multiple datasets, such as a training set and a test set, can be compared on the same visualization. Common data issues that can hamper machine learning are pushed to the forefront, such as: unexpected feature values, features with high percentages of missing values, features with unbalanced distributions, and feature distribution skew between datasets.
Facets Overview visualization of the six numeric features of the UCI Census datasets[1]. The features are sorted by non-uniformity, with the feature with the most non-uniform distribution at the top. Numbers in red indicate possible trouble spots, in this case numeric features with a high percentage of values set to 0. The histograms at right allow you to compare the distributions between the training data (blue) and test data (orange).

Facets Overview visualization showing two of the nine categorical features of the UCI Census datasets[1]. The features are sorted by distribution distance, with the feature with the biggest skew between the training (blue) and test (orange) datasets at the top. Notice in the “Target” feature that the label values differ between the training and test datasets, due to a trailing period in the test set (“<=50K” vs “<=50K.”). This can be seen in the chart for the feature and also in the entries in the “top” column of the table. This label mismatch would cause a model trained and tested on this data to not be evaluated correctly.

Facets Dive

Facets Dive provides an easy-to-customize, intuitive interface for exploring the relationship between the data points across the different features of a dataset. With Facets Dive, you control the position, color and visual representation of each data point based on its feature values. If the data points have images associated with them, the images can be used as the visual representations.
Facets Dive visualization showing all 16,281 data points in the UCI Census test dataset[1]. The animation shows a user coloring the data points by one feature (“Relationship”), faceting in one dimension by a continuous feature (“Age”) and then faceting in another dimension by a discrete feature (“Marital Status”).
Facets Dive visualization of a large number of face drawings from the “Quick, Draw!” Dataset, showing the relationship between the number of strokes and points in the drawings and the ability for the “Quick, Draw!” classifier to correctly categorize them as faces.

Fun Fact: In large datasets, such as the CIFAR-10 dataset[2], a small human labeling error can easily go unnoticed. We inspected the CIFAR-10 dataset with Dive and were able to catch a frog-cat – an image of a frog that had been incorrectly labeled as a cat!
Exploration of the CIFAR-10 dataset using Facets Dive. Here we facet the ground truth labels by row and the predicted labels by column. This produces a confusion matrix view, allowing us to drill into particular kinds of misclassifications. In this particular case, the ML model incorrectly labels some small percentage of true cats as frogs. The interesting thing we find by putting the real images in the confusion matrix is that one of these "true cats" that the model predicted was a frog is actually a frog from visual inspection. With Facets Dive, we can determine that this one misclassification wasn't a true misclassification of the model, but instead incorrectly labeled data in the dataset.
Can you spot the frog-cat?
We’ve gotten great value out of Facets inside of Google and are excited to share the visualizations with the world. We hope they can help you discover new and interesting things about your data that lead you to create more powerful and accurate machine learning models. And since they are open source, you can customize the visualizations for your specific needs or contribute to the project to help us all better understand our data. If you have feedback about your experience with Facets, please let us know what you think.

By James Wexler, Senior Software Engineer, Google Big Picture Team

Acknowledgments

This work is a collaboration between Mahima Pushkarna, James Wexler and Jimbo Wilson, with input from the entire Big Picture team. We would also like to thank Justine Tunney for providing us with the build tooling.

References

[1] Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml/datasets/Census+Income]. Irvine, CA: University of California, School of Information and Computer Science.

[2] Krizhevsky, A. (2009). Learning Multiple Layers of Features from Tiny Images.