Tag Archives: releases

Google Cardboard XR Plugin for Unity

Late in 2019, we decided to open source Google Cardboard. Since then, our developer community has been able to create a wide range of experiences for both iOS and Android, reaching millions of users around the world. While the open source release has been a success, we also promised that we would release a plugin for Unity. Our users have long preferred developing Cardboard experiences in Unity, so we made a Unity SDK a priority. Today, we have fulfilled that promise: the Google Cardboard open source plugin for Unity is now available via the Unity Asset Store.

What's Included in the Cardboard Unity SDK

Today, we’re releasing the Cardboard Unity SDK to our users so that they can continue creating smartphone XR experiences using Unity. Unity is one of the most popular 3D and XR development platforms in the world, and our release of this SDK will give our content creators a smoother workflow with Unity when developing for Cardboard.

In addition to the Unity SDK, we are also providing a sample application for iOS/Android, which will be a great aid for developers trying to debug their own creations. This release not only fulfills a promise we made to our Cardboard community, but also shows our support, as we move away from smartphone VR and leave it in the more-than-capable hands of our development community.



If you’re interested in learning how to develop with the Cardboard open source project, please see our developer documentation or visit the Google VR GitHub repo to access source code, build the project, and download the latest release.

By Jonathan Goodlow, Product Manager, AR & VR

The Tekton Pipelines Beta release

Tekton is a powerful and flexible open-source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premise systems. The project recently released its Beta, which increases stability by bringing the best features into the Pipelines Beta and gives users more confidence in the features they rely on.


Tekton is used for infrastructure development on top of Kubernetes; it provides an open source framework for creating CI/CD systems that lets developers easily build, test, and deploy applications across cloud providers and on-premise systems.

With the new Beta functionality, users can rest assured that Beta features will not be removed, and that there will be a 9-month window dedicated to finding solutions for incompatible API changes. Since many in the Tekton community are building on the Tekton Pipelines API, this release helps guarantee that new development on top of Tekton is reliable, with a budget of several months to make any necessary adjustments when incompatible changes do occur.

As platform builders require a stable API and feature set, the Beta launch includes Tasks, ClusterTasks, TaskRuns, Pipelines, and PipelineRuns, providing a foundation that users can rely on. Google created working groups in conjunction with contributors from various companies to drive the Beta release. The team continues to deliver new Pipeline features toward a GA launch at the end of the year, while also focusing on bringing other components, such as metadata storage, Triggers, and the Catalog, to Beta.


Tekton initially started as part of the Knative project from Google, built in collaboration with developers from other organizations, and was donated to the Continuous Delivery Foundation (CDF) in early 2019. Tekton's initial interface design was even inspired by the Cloud Build API, and to this day Google remains heavily involved in Tekton's development, participating in the governing board and staffing a dedicated team invested in the success of the project. These characteristics make Tekton a prime example of collaboration in open source.

Since its launch in February 2019, Tekton has had 3712 pull requests from 262 contributors across 39 companies spanning 16 countries. Many widely used projects across the open source industry are built on Tekton:
  • Puppet Project Nebula
  • Jenkins X
  • Red Hat OpenShift Pipelines
  • IBM Cloud Continuous Delivery
  • Kabanero – open source project led by IBM
  • Rio – open source project led by Rancher
  • Kf – open source project led by Google
Interested in trying out Tekton yourself? To install Tekton in your own Kubernetes cluster (version 1.15 or newer), use kubectl to install the latest Tekton release:

kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

You can jump right in by saving this Task to a file called task.yaml:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-world
spec:
  steps:
  - image: ubuntu
    script: |
      echo "hello world"


Tasks are one of the most important building blocks of Tekton! Head over to tektoncd/catalog for more examples of reusable Tasks.

To run the hello-world Task, first apply it to your cluster with kubectl:

kubectl apply -f task.yaml

The easiest way to start running our Task is to use the Tekton command line tool, tkn. Install tkn using the right method for your OS, and you can run your Task with:

tkn task start hello-world --showlog

That’s just a taste of Tekton! At tekton.dev/try the community is hard at work adding interactive tutorials that let you try Tekton in a virtual environment. You can dive straight into the docs at tekton.dev/docs and join the Tekton community at github.com/tektoncd/community.

Congratulations to all the contributors who made this Beta release possible!

By Radha Jhatakia and Christie Wilson, Google Open Source

Truth 1.0: Fluent Assertions for Java and Android Tests

Software testing is important—and sometimes frustrating. The frustration can come from working on innately hard domains, like concurrency, but too often it comes from a thousand small cuts:
assertEquals("Message has been sent", getString(notification, EXTRA_BIG_TEXT));
assertTrue(
    getString(notification, EXTRA_TEXT)
        .contains("Kurt Kluever <[email protected]>"));
The two assertions above test almost the same thing, but they are structured differently. The difference in structure makes it hard to identify the difference in what's being tested.
A better way to structure these assertions is to use a fluent API:
assertThat(getString(notification, EXTRA_BIG_TEXT))
    .isEqualTo("Message has been sent");
assertThat(getString(notification, EXTRA_TEXT))
    .contains("Kurt Kluever <[email protected]>");
A fluent API naturally leads to other advantages:
  • IDE autocompletion can suggest assertions that fit the value under test, including rich operations like containsExactly(permission.SEND_SMS, permission.READ_SMS).
  • Failure messages can include the value under test and the expected result. Contrast this with the assertTrue call above, which lacks a failure message entirely.
Google's fluent assertion library for Java and Android is Truth. We're happy to announce that we've released Truth 1.0, which stabilizes our API after years of fine-tuning.



Truth started in 2011 as a Googler's personal open source project. Later, it was donated back to Google and cultivated by the Java Core Libraries team, the people who bring you Guava.
You might already be familiar with assertion libraries like Hamcrest and AssertJ, which provide similar features. We've designed Truth to have a simpler API and more readable failure messages. For example, here's a failure message from AssertJ:
java.lang.AssertionError:
Expecting:
  <[year: 2019
month: 7
day: 15
]>
to contain exactly in any order:
  <[year: 2019
month: 6
day: 30
]>
elements not found:
  <[year: 2019
month: 6
day: 30
]>
and elements not expected:
  <[year: 2019
month: 7
day: 15
]>
And here's the equivalent message from Truth:
value of:
    iterable.onlyElement()
expected:
    year: 2019
    month: 6
    day: 30

but was:
    year: 2019
    month: 7
    day: 15
For more details, read our comparison of the libraries, and try Truth for yourself.

Also, if you're developing for Android, try AndroidX Test. It includes Truth extensions that make assertions even easier to write and failure messages even clearer:
assertThat(notification).extras().string(EXTRA_BIG_TEXT)
    .isEqualTo("Message has been sent");
assertThat(notification).extras().string(EXTRA_TEXT)
    .contains("Kurt Kluever <[email protected]>");
Coming soon: Kotlin users of Truth can look forward to Kotlin-specific enhancements.
By Chris Povirk, Java Core Libraries

Open sourcing ClusterFuzz

Fuzzing is an automated method for detecting bugs in software that works by feeding unexpected inputs to a target program. It is effective at finding memory corruption bugs, which often have serious security implications. Manually finding these issues is both difficult and time consuming, and bugs often slip through despite rigorous code review practices. For software projects written in an unsafe language such as C or C++, fuzzing is a crucial part of ensuring their security and stability.
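As a toy illustration of that idea (and not how ClusterFuzz itself drives its targets), a random-input fuzzer fits in a few lines of Python: generate arbitrary byte strings, feed them to the code under test, and flag any input that makes it crash. The parse_header function below is a deliberately buggy stand-in for a real target.

import random

def parse_header(data: bytes) -> int:
    """A deliberately buggy stand-in for real parsing code under test."""
    if data[:1] == b"\x7f":
        return data[4]  # raises IndexError on short inputs that start with 0x7f
    return len(data)

def fuzz(target, iterations=100000, max_len=16):
    """Feed random byte strings to the target; return the first crashing input."""
    for _ in range(iterations):
        data = bytes(random.randrange(256) for _ in range(random.randrange(max_len)))
        try:
            target(data)
        except Exception as error:
            return data, error
    return None

print(fuzz(parse_header))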

In order for fuzzing to be truly effective, it must be continuous, done at scale, and integrated into the development process of a software project. To provide these features for Chrome, we wrote ClusterFuzz, a fuzzing infrastructure running on over 25,000 cores. Two years ago, we began offering ClusterFuzz as a free service to open source projects through OSS-Fuzz.

Today, we’re announcing that ClusterFuzz is now open source and available for anyone to use.



We developed ClusterFuzz over eight years to fit seamlessly into developer workflows, and to make it dead simple to find bugs and get them fixed. ClusterFuzz provides end-to-end automation, from bug detection, to triage (accurate deduplication, bisection), to bug reporting, and finally to automatic closure of bug reports.

ClusterFuzz has found more than 16,000 bugs in Chrome and more than 11,000 bugs in over 160 open source projects integrated with OSS-Fuzz. It is an integral part of the development process of Chrome and many other open source projects. ClusterFuzz is often able to detect bugs hours after they are introduced and verify the fix within a day.

Check out our GitHub repository. You can try ClusterFuzz locally by following these instructions. In production, ClusterFuzz depends on some key Google Cloud Platform services, but you can use your own compute cluster. We welcome your contributions and look forward to any suggestions to help improve and extend this infrastructure. Through open sourcing ClusterFuzz, we hope to encourage all software developers to integrate fuzzing into their workflows.

By Abhishek Arya, Oliver Chang, Max Moroz, Martin Barbella and Jonathan Metzman, ClusterFuzz team

Dopamine 2.0: providing more flexibility in reinforcement learning research

Reinforcement learning (RL) has become one of the most popular fields of machine learning, and has seen a number of great advances over the last few years. As a result, there is a growing need from both researchers and educators to have access to a clear and reliable framework for RL research and education.

Last August, we announced Dopamine, our framework for flexible reinforcement learning. For the initial version we decided to focus on a specific type of RL research: value-based agents evaluated on the Atari 2600 games supported by the Arcade Learning Environment. We were thrilled to see how well it was received by the community, including a live coding session, its inclusion in a recently-announced benchmark for RL, being named a top “Cool new open source project of 2018” in GitHub's Octoverse report, and over 7K GitHub stars on our repository.

One of the most common requests we have received is support for more environments. This confirms what we have seen internally, where simpler environments, such as those supported by OpenAI’s Gym, are incredibly useful when testing out new algorithms. We are happy to announce Dopamine 2.0, which includes support for discrete-domain Gym environments (e.g. discrete states and actions). The core of the framework remains unchanged; we have simply generalized the interface with the environment. For backwards compatibility, users will still be able to download version 1.0.

We include default configurations for two classic control environments, CartPole and Acrobot; on these environments one can train a Dopamine agent in minutes. Compared with the training time for a standard Atari 2600 game (around 5 days on a standard GPU), these environments allow researchers to iterate much faster on research ideas before testing them out on larger Atari games. We also include a Colaboratory that illustrates how to train an agent on CartPole and Acrobot. Finally, our GymPreprocessing class serves as an example of how to use Dopamine with other custom environments.
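To make the discrete-domain idea concrete, here is a minimal sketch of the CartPole control loop that an agent plugs into, using OpenAI Gym's classic API; random actions stand in for a trained Dopamine agent's policy, and CartPole-v0 is Gym's standard CartPole task.

import gym

# CartPole: a 4-dimensional observation and 2 discrete actions (push left / push right).
env = gym.make('CartPole-v0')
observation = env.reset()

total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()  # a trained Dopamine agent would choose the action here
    observation, reward, done, info = env.step(action)
    total_reward += reward

print('Episode return:', total_reward)
env.close()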

We are excited by the new opportunities enabled by Dopamine 2.0, and look forward to seeing what the research community creates with it!

By Pablo Samuel Castro and Marc G. Bellemare, Dopamine Team

TF-Ranking: a scalable TensorFlow library for learning-to-rank

Cross-posted from the Google AI Blog.

Ranking, the process of ordering a list of items in a way that maximizes the utility of the entire list, is applicable in a wide range of domains, from search engines and recommender systems to machine translation, dialogue systems and even computational biology. In applications like these (and many others), researchers often utilize a set of supervised machine learning techniques called learning-to-rank. In many cases, these learning-to-rank techniques are applied to datasets that are prohibitively large — scenarios where the scalability of TensorFlow could be an advantage. However, there is currently no out-of-the-box support for applying learning-to-rank techniques in TensorFlow. To the best of our knowledge, there are also no other open source libraries that specialize in applying learning-to-rank techniques at scale.

Today, we are excited to share TF-Ranking, a scalable TensorFlow-based library for learning-to-rank. As described in our recent paper, TF-Ranking provides a unified framework that includes a suite of state-of-the-art learning-to-rank algorithms, and supports pairwise or listwise loss functions, multi-item scoring, ranking metric optimization, and unbiased learning-to-rank.

TF-Ranking is fast and easy to use, and creates high-quality ranking models. The unified framework gives ML researchers, practitioners and enthusiasts the ability to evaluate and choose among an array of different ranking models within a single library. Moreover, we strongly believe that a key to a useful open source library is not only providing sensible defaults, but also empowering our users to develop their own custom models. Therefore, we provide flexible APIs, within which users can define and plug in their own customized loss functions, scoring functions and metrics.

Existing Algorithms and Metrics Support

The objective of learning-to-rank algorithms is to minimize a loss function defined over a list of items, so as to optimize the utility of the list ordering for any given application. TF-Ranking supports a wide range of standard pointwise, pairwise and listwise loss functions as described in prior work. This ensures that researchers using the TF-Ranking library are able to reproduce and extend previously published baselines, and practitioners can make the most informed choices for their applications. Furthermore, TF-Ranking can handle sparse features (like raw text) through embeddings and scales to hundreds of millions of training instances. Thus, anyone interested in building real-world, data-intensive ranking systems, such as web search or news recommendation, can use TF-Ranking as a robust, scalable solution.
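The three loss families differ in what they treat as a training example: a single item, a pair of items, or the whole list. The toy NumPy computation below (not TF-Ranking's own API) contrasts them on a single three-item list with hypothetical scores and relevance labels.

import numpy as np

scores = np.array([2.0, 0.5, 1.0])  # model scores for three items in one list
labels = np.array([1.0, 0.0, 1.0])  # relevance labels for the same items

# Pointwise: treat each item independently (sigmoid cross-entropy).
probs = 1 / (1 + np.exp(-scores))
pointwise = -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

# Pairwise: penalize each pair of items ranked in the wrong order (hinge loss).
pairwise = np.mean([max(0.0, 1.0 - (scores[i] - scores[j]))
                    for i in range(len(scores)) for j in range(len(scores))
                    if labels[i] > labels[j]])

# Listwise: compare the whole predicted ordering to the labels (softmax cross-entropy).
target = labels / labels.sum()
listwise = -np.sum(target * np.log(np.exp(scores) / np.exp(scores).sum()))

print(pointwise, pairwise, listwise)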

Empirical evaluation is an important part of any machine learning or information retrieval research. To ensure compatibility with prior work,  we support many of the commonly used ranking metrics, including Mean Reciprocal Rank (MRR) and Normalized Discounted Cumulative Gain (NDCG). We also make it easy to visualize these metrics at training time on TensorBoard, an open source TensorFlow visualization dashboard.
An example of the NDCG metric (Y-axis) over training steps (X-axis) displayed in TensorBoard. It shows the overall progress of the metrics during training; different methods can be compared directly on the dashboard, and the best models can be selected based on the metric.
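For reference, NDCG can be computed in a few lines; the sketch below is a plain NumPy implementation of the standard formula (not TF-Ranking's own metric code), while MRR is simply the average over queries of 1/rank of the first relevant result.

import numpy as np

def dcg_at_k(relevances, k):
    """Discounted cumulative gain of the top-k items, in ranked order."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return np.sum((2 ** rel - 1) / discounts)

def ndcg_at_k(relevances, k):
    """DCG normalized by the DCG of the ideal (relevance-sorted) ordering."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Graded relevance labels in the order the model ranked the items.
print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=6))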

Multi-Item Scoring

TF-Ranking supports a novel scoring mechanism wherein multiple items (e.g., web pages) can be scored jointly, an extension of the traditional scoring paradigm in which single items are scored independently. One challenge in multi-item scoring is inference, where items have to be grouped and scored in subgroups; the scores are then accumulated per item and used for sorting. To make these complexities transparent to the user, TF-Ranking provides a List-In-List-Out (LILO) API that wraps all of this logic in the exported TF models.
The TF-Ranking library supports a multi-item scoring architecture, an extension of traditional single-item scoring.
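A framework-free toy sketch of the group/score/accumulate idea is shown below; score_group is a trivial stand-in for a learned joint scoring function, not TF-Ranking's actual scorer.

import numpy as np
from itertools import combinations

def score_group(group_features):
    """Stand-in joint scorer: one score per item, conditioned on the whole group."""
    return group_features - group_features.mean()

def multi_item_scores(features, group_size=2):
    """Score every subgroup jointly, then accumulate per-item scores for sorting."""
    n = len(features)
    totals, counts = np.zeros(n), np.zeros(n)
    for group in combinations(range(n), group_size):
        for idx, s in zip(group, score_group(features[list(group)])):
            totals[idx] += s
            counts[idx] += 1
    return totals / counts

features = np.array([0.2, 1.5, 0.7, 0.1])
print(np.argsort(-multi_item_scores(features)))  # item indices in ranked order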
As we demonstrate in recent work, multi-item scoring is competitive with state-of-the-art learning-to-rank models such as RankNet, MART, and LambdaMART on a public LETOR benchmark.

Ranking Metric Optimization

An important research challenge in learning-to-rank is direct optimization of ranking metrics (such as the previously mentioned NDCG and MRR). These metrics, while measuring the performance of ranking systems better than standard classification metrics like Area Under the Curve (AUC), have the unfortunate property of being either discontinuous or flat, so their gradients are undefined or zero almost everywhere. Therefore, standard stochastic gradient descent optimization of these metrics is problematic.

In recent work, we proposed a novel method, LambdaLoss, which provides a principled probabilistic framework for ranking metric optimization. In this framework, metric-driven loss functions can be designed and optimized by an expectation-maximization procedure. The TF-Ranking library integrates the recent advances in direct metric optimization and provides an implementation of LambdaLoss. We are hopeful that this will encourage and facilitate further research advances in the important area of ranking metric optimization.

Unbiased Learning-to-Rank

Prior research has shown that given a ranked list of items, users are much more likely to interact with the first few results, regardless of their relevance. This observation has inspired research interest in unbiased learning-to-rank, and led to the development of unbiased evaluation and several unbiased learning algorithms based on re-weighting training instances. In the TF-Ranking library, metrics are implemented to support unbiased evaluation, and losses are implemented for unbiased learning by natively supporting re-weighting to overcome the inherent biases in user interaction datasets.
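A common form of such re-weighting is inverse propensity weighting: a click observed at a position that users rarely examine counts for more than a click at the top of the list. The snippet below is a generic illustration with made-up propensities, not TF-Ranking's API.

# Click log entries: (position the result was shown at, whether it was clicked).
clicks = [(1, True), (2, False), (3, True), (5, False)]

# Examination propensities per position, e.g. estimated from a randomization
# experiment (these numbers are made up for illustration).
propensity = {1: 0.9, 2: 0.6, 3: 0.4, 5: 0.2}

# Inverse propensity weights: clicks at rarely-examined positions are up-weighted,
# counteracting position bias when these weights feed into a ranking loss.
weights = [1.0 / propensity[pos] if clicked else 0.0 for pos, clicked in clicks]
print(weights)  # [1.11..., 0.0, 2.5, 0.0]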

Getting Started with TF-Ranking

TF-Ranking implements the TensorFlow Estimator interface, which greatly simplifies machine learning programming by encapsulating training, evaluation, prediction and export for serving. TF-Ranking is well integrated with the rich TensorFlow ecosystem. As described above, you can use TensorBoard to visualize ranking metrics like NDCG and MRR, as well as to pick the best model checkpoints using these metrics. Once your model is ready, it is easy to deploy it in production using TensorFlow Serving.

If you’re interested in trying TF-Ranking for yourself, please check out our GitHub repo, and walk through the tutorial examples. TF-Ranking is an active research project, and we welcome your feedback and contributions. We are excited to see how TF-Ranking can help the information retrieval and machine learning research communities.

By Xuanhui Wang and Michael Bendersky, Software Engineers, Google AI

Acknowledgements

This project was only possible thanks to the members of the core TF-Ranking team: Rama Pasumarthi, Cheng Li, Sebastian Bruch, Nadav Golbandi, Stephan Wolf, Jan Pfeifer, Rohan Anil, Marc Najork, Patrick McGregor and Clemens Mewald‎. We thank the members of the TensorFlow team for their advice and support: Alexandre Passos, Mustafa Ispir, Karmel Allison, Martin Wicke, and others. Finally, we extend our special thanks to our collaborators, interns and early adopters: Suming Chen, Zhen Qin, Chirag Sethi, Maryam Karimzadehgan, Makoto Uchida, Yan Zhu, Qingyao Ai, Brandon Tran, Donald Metzler, Mike Colagrosso, and many others at Google who helped in evaluating and testing the early versions of TF-Ranking.

Outline: secure access to the open web

Censorship and surveillance are challenges that many journalists around the world face on a daily basis. Some of them use a virtual private network (VPN) to provide safer access to the open internet, but not all VPNs are equally reliable and trustworthy, and even fewer are open source.

That’s why Jigsaw created Outline, a new open source, independently audited platform that lets any organization easily create and operate their own VPN.

Outline’s most striking feature is arguably how easy it is to use. An organization starts by downloading the Outline Manager app, which lets them sign in to DigitalOcean, where they can host their own VPN, and set it up with just a few clicks. They can also easily use other cloud providers, provided they have shell access to run the installation script. Once an Outline server is set up, the server administrator can create access credentials and share them with their network of contacts, who can then use the Outline clients to connect to the server.


A core element of any VPN’s security is the protocol that the server and clients use to communicate. When we looked at the existing protocols, we realized that many of them were easily identifiable by network adversaries looking to spot and block VPN traffic. To make Outline more resilient against this threat, we chose Shadowsocks, a secure, handshake-less, and open source protocol that is known for its strength and performance, and enjoys the support of many developers worldwide. Shadowsocks is a combination of a simplified SOCKS5-like routing protocol, running on top of an encrypted channel. We chose the AEAD_CHACHA20_POLY1305 cipher, which is an IETF standard and provides the security and performance users need.
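Setting the Shadowsocks wire protocol aside, the AEAD cipher itself is straightforward to demonstrate. The sketch below uses Python's third-party cryptography package (an illustrative choice, not part of Outline's codebase) to show how ChaCha20-Poly1305 provides both confidentiality and integrity in one operation.

import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()   # 256-bit key
aead = ChaCha20Poly1305(key)
nonce = os.urandom(12)                  # 96-bit nonce; must never be reused with the same key

# Encryption produces ciphertext plus an authentication tag; tampering with either
# the ciphertext or the associated data makes decryption fail.
ciphertext = aead.encrypt(nonce, b"hello open web", b"associated data")
plaintext = aead.decrypt(nonce, ciphertext, b"associated data")
print(plaintext)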

Another important component to security is running up-to-date software. We package the server code as a Docker image, enabling us to run on multiple platforms, and allowing for automatic updates using Watchtower. On DigitalOcean installations, we also enable automatic security updates on the host machine.

If security is one of the most critical parts of creating a better VPN, usability is the other. We wanted Outline to offer a consistent, simple user experience across platforms, and to be easy for developers around the world to contribute to. With that in mind, we use the cross-platform development framework Apache Cordova for Android, iOS, macOS and ChromeOS, and Electron for Windows. The application logic is a web application written in TypeScript, while the networking code had to be written in native code for each platform. This setup allows us to reuse most of the code and create consistent user experiences across diverse platforms.

In order to encourage a robust developer community we wanted to strike a balance between simplicity, reproducibility, and automation of future contributions. To that end, we use Travis for continuous builds and to generate the binaries that are ultimately uploaded to the app stores. Thanks to its cross-platform support, any team member can produce a macOS or Windows binary with a single click. We also use Docker to package the build tools for client platforms, and thanks to Electron, developers familiar with the server's Node.js code base can also contribute to the Outline Manager application.

You can find our code in the Outline GitHub repositories and more information on the Outline website. We hope that more developers join the project to build technology that helps people connect to the open web and stay safer online.

By Vinicius Fortuna, Jigsaw

Introducing the Tink cryptographic software library

Cross-posted on the Google Security Blog

At Google, many product teams use cryptographic techniques to protect user data. In cryptography, subtle mistakes can have serious consequences, and understanding how to implement cryptography correctly requires digesting decades' worth of academic literature. Needless to say, many developers don’t have time for that.

To help our developers ship secure cryptographic code, we’ve developed Tink, a multi-language, cross-platform cryptographic library. We believe in open source and want Tink to become a community project, so Tink has been available on GitHub since the early days of the project, and it has already attracted several external contributors. At Google, Tink is already being used to secure data in many products, such as AdMob, Google Pay, Google Assistant, Firebase, and the Android Search App. After nearly two years of development, today we’re excited to announce Tink 1.2.0, the first version that supports cloud, Android, iOS, and more!

Tink aims to provide cryptographic APIs that are secure, easy to use correctly, and hard(er) to misuse. Tink is built on top of existing libraries such as BoringSSL and Java Cryptography Architecture, but includes countermeasures to many weaknesses in these libraries, which were discovered by Project Wycheproof, another project from our team.

With Tink, many common cryptographic operations such as data encryption, digital signatures, etc. can be done with only a few lines of code. Here is an example of encrypting and decrypting with our AEAD interface in Java:
import com.google.crypto.tink.Aead;
import com.google.crypto.tink.KeysetHandle;
import com.google.crypto.tink.aead.AeadFactory;
import com.google.crypto.tink.aead.AeadKeyTemplates;

// 1. Generate the key material.
KeysetHandle keysetHandle = KeysetHandle.generateNew(
    AeadKeyTemplates.AES256_EAX);

// 2. Get the primitive.
Aead aead = AeadFactory.getPrimitive(keysetHandle);

// 3. Use the primitive to encrypt and decrypt.
byte[] plaintext = ...;
byte[] additionalData = ...;
byte[] ciphertext = aead.encrypt(plaintext, additionalData);
byte[] decrypted = aead.decrypt(ciphertext, additionalData);
Tink aims to eliminate as many potential misuses as possible. For example, if the underlying encryption mode requires nonces and nonce reuse makes it insecure, then Tink does not allow the user to pass nonces. Interfaces have security guarantees that must be satisfied by each primitive implementing the interface. This may exclude some encryption modes. Rather than adding them to existing interfaces and weakening the guarantees of the interface, it is possible to add new interfaces and describe the security guarantees appropriately.

We’re cryptographers and security engineers working to improve Google’s product security, so we built Tink to make our job easier. Tink shows the claimed security properties (e.g., safe against chosen-ciphertext attacks) right in the interfaces, allowing security auditors and automated tools to quickly discover usages where the security guarantees don’t match the security requirements. Tink also isolates APIs for potentially dangerous operations (e.g., loading cleartext keys from disk), which allows discovering, restricting, monitoring and logging their usage.

Tink provides support for key management, including key rotation and phasing out deprecated ciphers. For example, if a cryptographic primitive is found to be broken, you can switch to a different primitive by rotating keys, without changing or recompiling code.

Tink is also extensible by design: it is easy to add a custom cryptographic scheme or an in-house key management system so that it works seamlessly with other parts of Tink. No part of Tink is hard to replace or remove. All components are composable, and can be selected and assembled in various combinations. For example, if you need only digital signatures, you can exclude symmetric key encryption components to minimize code size in your application.

To get started, please check out our HOW-TO for Java, C++ and Obj-C. If you'd like to talk to the developers or get notified about project updates, you may want to subscribe to our mailing list. To join, simply send an empty email to [email protected]. You can also post your questions to StackOverflow, just remember to tag them with tink.

We’re excited to share this with the community, and welcome your feedback!

By Thai Duong, Information Security Engineer, on behalf of Tink team

How we brought the latest version of Python to App Engine and Cloud Functions

At Cloud Next 2018, we added Python 3.7 support to Cloud Functions and now we’ve announced Python 3.7 support for the App Engine standard environment. These new runtimes allow you to write Python functions and apps using the latest version of Python and the rich ecosystem of packages available on Python Packaging Index (PyPI).

This new runtime marks a significant update to App Engine and was enabled by new open source software that we recently released: gVisor and FTL.

Python, straight from the source

Running Python 3.7 on App Engine and Cloud Functions required us to fundamentally rethink our infrastructure. Traditionally, meeting Google Cloud’s security requirements meant that we had to run a modified version of the Python interpreter. However, using a modified interpreter constrained some language features and only allowed us to support a limited set of whitelisted Python libraries.

Thanks to gVisor, a container sandbox that provides improved security and process isolation, we can now run the unmodified Python 3.7.0 interpreter. We’ve done extensive testing to make sure Python 3.7 is compatible with gVisor. As part of our compatibility testing, we run Python’s full suite of language tests, and tests for Python packages that are popular on PyPI. We’re committed to ensuring that everything you’ve come to know and love about Python is supported on our platform.

Seamless deployments

Most importantly, this change in our infrastructure makes it easier to take advantage of Python’s vast ecosystem. As a developer, you just add project dependencies to a requirements.txt file and deploy.

During deployment, FTL, a tool for building containers, fetches the dependencies listed in your requirements.txt file and installs them alongside your app or function. FTL also includes a short-lived dependency cache, which speeds up repeated deployments if no changes are detected in your requirements.txt file. This is particularly useful if you just need to re-deploy because you found a typo.
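As a sketch of that workflow (the dependency choice here is illustrative), a minimal Python 3.7 App Engine app is just a main.py plus a requirements.txt listing what FTL should fetch from PyPI; an app.yaml declaring runtime: python37 and a gcloud app deploy complete the deployment.

# main.py -- a minimal app for the Python 3.7 App Engine standard runtime.
# requirements.txt (installed by FTL at deploy time) would contain a single line: Flask
from flask import Flask

app = Flask(__name__)  # by default, the runtime serves the WSGI object named `app` in main.py

@app.route('/')
def hello():
    return 'Hello from Python 3.7!'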

Keeping up with the Pythonistas

In making these changes, we also decided to expand the list of system packages that are included with each runtime’s Ubuntu 18.04 distribution. We think that will make life just a little bit easier for developers working with the latest release of Python.

Looking forward, we’re excited about how these changes will allow us to keep up with the Python community’s progress as they release new versions and libraries. Please let us know what you think and if you run into any challenges.

You can learn more about how to get started with Python 3.7 on App Engine and Cloud Functions in our documentation. We can’t wait to see what you build with Python 3.7.

By Stewart Reichling, Product Manager

Introducing the new lead for Android Open Source Project

This week began with the announcement of Android 9 Pie and, as usual, the subsequent upstreaming of code to the Android Open Source Project (AOSP). But the release of Android 9 isn’t the only important Android news!

Tucked away in the announcement to the Android Building mailing list was this note:

“I also wanted to take a moment to introduce myself as the new Tech Lead / Manager for AOSP. My name is Jeff Bailey, and I’ve been involved in the Open Source community for more than two decades. Since I joined the Android team a few months ago, I’ve been learning how we do things and getting an understanding of how we could work better with the community. I’d love to hear from you: @JeffBaileyAOSP on Twitter or [email protected]. Be well!”

As Jeff notes in his introduction, he has a history in free and open source software (FOSS). He’s been an avid user, contributor, and maintainer since before the Open Source Definition was inked!

Jeff co-founded Savannah, where GNU software is developed and distributed, spent 15 years working on Debian, and has been an Ubuntu core developer. Further, he spent some time on the Google Open Source team and was involved in open sourcing Android back in 2008.

Open source projects, even those which originate inside of companies, are powered by the community of users and contributors that surround them. And those communities thrive when they have stewards who are steeped in the traditions of free and open source software. We’re excited for AOSP as Jeff takes the reins. He brings both technical and cultural skills to the table, and he’s been involved with the project since the beginning!

Suffice it to say, AOSP is in good hands. We welcome Jeff to his new role and, as he said in his introduction, he’d love to hear from the community: you can reach Jeff on Twitter and via email.

By Josh Simmons, Google Open Source