Tag Archives: releases

TF-Ranking: a scalable TensorFlow library for learning-to-rank

Cross-posted from the Google AI Blog.

Ranking, the process of ordering a list of items in a way that maximizes the utility of the entire list, is applicable in a wide range of domains, from search engines and recommender systems to machine translation, dialogue systems and even computational biology. In applications like these (and many others), researchers often utilize a set of supervised machine learning techniques called learning-to-rank. In many cases, these learning-to-rank techniques are applied to datasets that are prohibitively large — scenarios where the scalability of TensorFlow could be an advantage. However, there is currently no out-of-the-box support for applying learning-to-rank techniques in TensorFlow. To the best of our knowledge, there are also no other open source libraries that specialize in applying learning-to-rank techniques at scale.

Today, we are excited to share TF-Ranking, a scalable TensorFlow-based library for learning-to-rank. As described in our recent paper, TF-Ranking provides a unified framework that includes a suite of state-of-the-art learning-to-rank algorithms, and supports pairwise or listwise loss functions, multi-item scoring, ranking metric optimization, and unbiased learning-to-rank.

TF-Ranking is fast and easy to use, and creates high-quality ranking models. The unified framework gives ML researchers, practitioners and enthusiasts the ability to evaluate and choose among an array of different ranking models within a single library. Moreover, we strongly believe that a key to a useful open source library is not only providing sensible defaults, but also empowering our users to develop their own custom models. Therefore, we provide flexible APIs within which users can define and plug in their own customized loss functions, scoring functions and metrics.

Existing Algorithms and Metrics Support

The objective of learning-to-rank algorithms is to minimize a loss function defined over a list of items, optimizing the utility of the list ordering for the given application. TF-Ranking supports a wide range of standard pointwise, pairwise and listwise loss functions as described in prior work. This ensures that researchers using the TF-Ranking library are able to reproduce and extend previously published baselines, and that practitioners can make the most informed choices for their applications. Furthermore, TF-Ranking can handle sparse features (like raw text) through embeddings and scales to hundreds of millions of training instances. Thus, anyone interested in building real-world, data-intensive ranking systems, such as web search or news recommendation, can use TF-Ranking as a robust, scalable solution.

Empirical evaluation is an important part of any machine learning or information retrieval research. To ensure compatibility with prior work, we support many of the commonly used ranking metrics, including Mean Reciprocal Rank (MRR) and Normalized Discounted Cumulative Gain (NDCG). We also make it easy to visualize these metrics at training time on TensorBoard, an open source TensorFlow visualization dashboard.
An example of the NDCG metric (Y-axis) along training steps (X-axis) displayed in TensorBoard. It shows the overall progress of the metric during training. Different methods can be compared directly on the dashboard, and the best models can be selected based on the metric.
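
As a rough sketch of how these losses and metrics are selected in code (module and key names follow the tensorflow_ranking package; exact APIs may differ between releases):

import tensorflow as tf
import tensorflow_ranking as tfr

# Choose a standard loss by key; other keys cover pointwise and listwise
# losses (e.g. a listwise softmax loss).
loss_fn = tfr.losses.make_loss_fn(
    tfr.losses.RankingLossKey.PAIRWISE_LOGISTIC_LOSS)

# Ranking metrics such as NDCG@5 and MRR, reported during evaluation
# and visible on TensorBoard.
eval_metric_fns = {
    'metric/ndcg@5': tfr.metrics.make_ranking_metric_fn(
        tfr.metrics.RankingMetricKey.NDCG, topn=5),
    'metric/mrr': tfr.metrics.make_ranking_metric_fn(
        tfr.metrics.RankingMetricKey.MRR),
}

# A ranking head bundles the loss and metrics for use with tf.estimator.
ranking_head = tfr.head.create_ranking_head(
    loss_fn=loss_fn,
    eval_metric_fns=eval_metric_fns,
    train_op_fn=lambda loss: tf.train.AdagradOptimizer(0.1).minimize(
        loss, global_step=tf.train.get_global_step()))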

Multi-Item Scoring

TF-Ranking supports a novel scoring mechanism wherein multiple items (e.g., web pages) can be scored jointly, an extension of the traditional scoring paradigm in which single items are scored independently. One challenge in multi-item scoring is that inference becomes harder: items have to be grouped and scored in subgroups, and the scores are then accumulated per item and used for sorting. To make these complexities transparent to the user, TF-Ranking provides a List-In-List-Out (LILO) API that wraps all of this logic in the exported TF models.
The TF-Ranking library supports multi-item scoring architecture, an extension of traditional single-item scoring.
As we demonstrate in recent work, multi-item scoring is competitive in performance with state-of-the-art learning-to-rank models such as RankNet, MART, and LambdaMART on a public LETOR benchmark.
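
A minimal sketch of wiring up a groupwise (multi-item) scoring function, assuming the make_groupwise_ranking_fn helper and the ranking head from the earlier sketch (signatures follow the TF-Ranking examples and may differ between versions):

import tensorflow as tf
import tensorflow_ranking as tfr

def group_score_fn(context_features, group_features, mode, params, config):
    # Jointly score a group of items; the library forms the subgroups at
    # inference time and accumulates the per-item scores for sorting.
    inputs = tf.concat(
        [tf.layers.flatten(t) for t in group_features.values()], axis=1)
    hidden = tf.layers.dense(inputs, units=64, activation=tf.nn.relu)
    return tf.layers.dense(hidden, units=2)  # one logit per item in the group

model_fn = tfr.model.make_groupwise_ranking_fn(
    group_score_fn=group_score_fn,
    group_size=2,  # score items in pairs rather than one at a time
    ranking_head=ranking_head)
estimator = tf.estimator.Estimator(model_fn=model_fn)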

Ranking Metric Optimization

An important research challenge in learning-to-rank is the direct optimization of ranking metrics (such as the previously mentioned NDCG and MRR). These metrics, while able to measure the performance of ranking systems better than standard classification metrics like Area Under the Curve (AUC), have the unfortunate property of being either discontinuous or flat. Therefore, standard stochastic gradient descent optimization of these metrics is problematic.
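
For reference, NDCG at a cutoff k is commonly defined as

\mathrm{DCG@}k = \sum_{i=1}^{k} \frac{2^{\mathrm{rel}_i} - 1}{\log_2(i + 1)}, \qquad \mathrm{NDCG@}k = \frac{\mathrm{DCG@}k}{\mathrm{IDCG@}k},

where rel_i is the relevance of the item at rank i and IDCG@k is the DCG of the ideal ordering. The ranks are induced by sorting items on their predicted scores, so a small change to the scores either leaves the metric unchanged (flat) or swaps two items (a jump); this is exactly what makes gradient-based optimization problematic.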

In recent work, we proposed a novel method, LambdaLoss, which provides a principled probabilistic framework for ranking metric optimization. In this framework, metric-driven loss functions can be designed and optimized by an expectation-maximization procedure. The TF-Ranking library integrates the recent advances in direct metric optimization and provides an implementation of LambdaLoss. We are hopeful that this will encourage and facilitate further research advances in the important area of ranking metric optimization.

Unbiased Learning-to-Rank

Prior research has shown that, given a ranked list of items, users are much more likely to interact with the first few results, regardless of their relevance. This observation has inspired research interest in unbiased learning-to-rank, and led to the development of unbiased evaluation and several unbiased learning algorithms based on re-weighting of training instances. In the TF-Ranking library, metrics are implemented to support unbiased evaluation, and losses natively support re-weighting for unbiased learning, to overcome the inherent biases in user interaction datasets.
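
The core re-weighting idea is inverse propensity scoring: each clicked training instance is weighted by the inverse of the probability that its position was examined at all, so that clicks at heavily examined top positions do not dominate the loss. Schematically,

\hat{L}(f) = \sum_{i\,:\,\text{clicked}} \frac{\ell(f(x_i), y_i)}{P(\text{position of } x_i \text{ examined})},

with the examination propensities estimated from interaction logs.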

Getting Started with TF-Ranking

TF-Ranking implements the TensorFlow Estimator interface, which greatly simplifies machine learning programming by encapsulating training, evaluation, prediction and export for serving. TF-Ranking is well integrated with the rich TensorFlow ecosystem. As described above, you can use TensorBoard to visualize ranking metrics like NDCG and MRR, as well as to pick the best model checkpoints using these metrics. Once your model is ready, it is easy to deploy it in production using TensorFlow Serving.

If you’re interested in trying TF-Ranking for yourself, please check out our GitHub repo, and walk through the tutorial examples. TF-Ranking is an active research project, and we welcome your feedback and contributions. We are excited to see how TF-Ranking can help the information retrieval and machine learning research communities.

By Xuanhui Wang and Michael Bendersky, Software Engineers, Google AI

Acknowledgements

This project was only possible thanks to the members of the core TF-Ranking team: Rama Pasumarthi, Cheng Li, Sebastian Bruch, Nadav Golbandi, Stephan Wolf, Jan Pfeifer, Rohan Anil, Marc Najork, Patrick McGregor and Clemens Mewald‎. We thank the members of the TensorFlow team for their advice and support: Alexandre Passos, Mustafa Ispir, Karmel Allison, Martin Wicke, and others. Finally, we extend our special thanks to our collaborators, interns and early adopters: Suming Chen, Zhen Qin, Chirag Sethi, Maryam Karimzadehgan, Makoto Uchida, Yan Zhu, Qingyao Ai, Brandon Tran, Donald Metzler, Mike Colagrosso, and many others at Google who helped in evaluating and testing the early versions of TF-Ranking.

Outline: secure access to the open web

Censorship and surveillance are challenges that many journalists around the world face on a daily basis. Some of them use a virtual private network (VPN) to provide safer access to the open internet, but not all VPNs are equally reliable and trustworthy, and even fewer are open source.

That’s why Jigsaw created Outline, a new open source, independently audited platform that lets any organization easily create and operate their own VPN.

Outline’s most striking feature is arguably how easy it is to use. An organization starts by downloading the Outline Manager app, which lets them sign in to DigitalOcean, where they can host their own VPN, and set it up with just a few clicks. They can also use other cloud providers, provided they have shell access to run the installation script. Once an Outline server is set up, the server administrator can create access credentials and share them with their network of contacts, who can then use the Outline clients to connect to it.


A core element of any VPN’s security is the protocol that the server and clients use to communicate. When we looked at the existing protocols, we realized that many of them were easily identifiable by network adversaries looking to spot and block VPN traffic. To make Outline more resilient against this threat, we chose Shadowsocks, a secure, handshake-less, open source protocol that is known for its strength and performance and enjoys the support of many developers worldwide. Shadowsocks is a combination of a simplified SOCKS5-like routing protocol running on top of an encrypted channel. We chose the AEAD_CHACHA20_POLY1305 cipher, which is an IETF standard and provides the security and performance users need.

Another important component to security is running up-to-date software. We package the server code as a Docker image, enabling us to run on multiple platforms, and allowing for automatic updates using Watchtower. On DigitalOcean installations, we also enable automatic security updates on the host machine.

If security is one of the most critical parts of creating a better VPN, usability is the other. We wanted Outline to offer a consistent, simple user experience across platforms, and to be easy for developers around the world to contribute to. With that in mind, we use the cross-platform development framework Apache Cordova for Android, iOS, macOS and ChromeOS, and Electron for Windows. The application logic is a web application written in TypeScript, while the networking code had to be written in native code for each platform. This setup allows us to reuse most of the code and create consistent user experiences across diverse platforms.

In order to encourage a robust developer community we wanted to strike a balance between simplicity, reproducibility, and automation of future contributions. To that end, we use Travis for continuous builds and to generate the binaries that are ultimately uploaded to the app stores. Thanks to its cross-platform support, any team member can produce a macOS or Windows binary with a single click. We also use Docker to package the build tools for client platforms, and thanks to Electron, developers familiar with the server's Node.js code base can also contribute to the Outline Manager application.

You can find our code in the Outline GitHub repositories and more information on the Outline website. We hope that more developers join the project to build technology that helps people connect to the open web and stay safer online.

By Vinicius Fortuna, Jigsaw

Introducing the Tink cryptographic software library

Cross-posted on the Google Security Blog

At Google, many product teams use cryptographic techniques to protect user data. In cryptography, subtle mistakes can have serious consequences, and understanding how to implement cryptography correctly requires digesting decades' worth of academic literature. Needless to say, many developers don’t have time for that.

To help our developers ship secure cryptographic code we’ve developed Tink—a multi-language, cross-platform cryptographic library. We believe in open source and want Tink to become a community project—thus Tink has been available on GitHub since the early days of the project, and it has already attracted several external contributors. At Google, Tink is already being used to secure the data of many products, such as AdMob, Google Pay, Google Assistant, Firebase, and the Android Search App. After nearly two years of development, today we’re excited to announce Tink 1.2.0, the first version that supports cloud, Android, iOS, and more!

Tink aims to provide cryptographic APIs that are secure, easy to use correctly, and hard(er) to misuse. Tink is built on top of existing libraries such as BoringSSL and Java Cryptography Architecture, but includes countermeasures to many weaknesses in these libraries, which were discovered by Project Wycheproof, another project from our team.

With Tink, many common cryptographic operations, such as data encryption and digital signatures, can be done with only a few lines of code. Here is an example of encrypting and decrypting with our AEAD interface in Java:
import com.google.crypto.tink.Aead;
import com.google.crypto.tink.KeysetHandle;
import com.google.crypto.tink.aead.AeadFactory;
import com.google.crypto.tink.aead.AeadKeyTemplates;

// 1. Generate the key material.
KeysetHandle keysetHandle = KeysetHandle.generateNew(
    AeadKeyTemplates.AES256_EAX);

// 2. Get the primitive.
Aead aead = AeadFactory.getPrimitive(keysetHandle);

// 3. Use the primitive to encrypt...
byte[] plaintext = ...;
byte[] additionalData = ...;
byte[] ciphertext = aead.encrypt(plaintext, additionalData);

// ...and to decrypt.
byte[] decrypted = aead.decrypt(ciphertext, additionalData);
Tink aims to eliminate as many potential misuses as possible. For example, if the underlying encryption mode requires nonces and nonce reuse makes it insecure, then Tink does not allow the user to pass nonces. Interfaces have security guarantees that must be satisfied by each primitive implementing the interface. This may exclude some encryption modes; rather than adding them to existing interfaces and weakening those guarantees, such modes can be exposed through new interfaces whose security guarantees are described appropriately.

We’re cryptographers and security engineers working to improve Google’s product security, so we built Tink to make our job easier. Tink shows the claimed security properties (e.g., safe against chosen-ciphertext attacks) right in the interfaces, allowing security auditors and automated tools to quickly discover usages where the security guarantees don’t match the security requirements. Tink also isolates APIs for potentially dangerous operations (e.g., loading cleartext keys from disk), which allows discovering, restricting, monitoring and logging their usage.

Tink provides support for key management, including key rotation and phasing out deprecated ciphers. For example, if a cryptographic primitive is found to be broken, you can switch to a different primitive by rotating keys, without changing or recompiling code.

Tink is also extensible by design: it is easy to add a custom cryptographic scheme or an in-house key management system so that it works seamlessly with other parts of Tink. No part of Tink is hard to replace or remove. All components are composable, and can be selected and assembled in various combinations. For example, if you need only digital signatures, you can exclude symmetric key encryption components to minimize code size in your application.

To get started, please check out our HOW-TO for Java, C++ and Obj-C. If you'd like to talk to the developers or get notified about project updates, you may want to subscribe to our mailing list. To join, simply send an empty email to tink-users+subscribe@googlegroups.com. You can also post your questions to StackOverflow, just remember to tag them with tink.

We’re excited to share this with the community, and welcome your feedback!

By Thai Duong, Information Security Engineer, on behalf of Tink team

How we brought the latest version of Python to App Engine and Cloud Functions

At Cloud Next 2018, we added Python 3.7 support to Cloud Functions and now we’ve announced Python 3.7 support for the App Engine standard environment. These new runtimes allow you to write Python functions and apps using the latest version of Python and the rich ecosystem of packages available on Python Packaging Index (PyPI).

This new runtime marks a significant update to App Engine and was enabled by new open source software that we recently released: gVisor and FTL.

Python, straight from the source

Running Python 3.7 on App Engine and Cloud Functions required us to fundamentally rethink our infrastructure. Traditionally, meeting Google Cloud’s security requirements meant that we had to run a modified version of the Python interpreter. However, using a modified interpreter constrained some language features and only allowed us to support a limited set of whitelisted Python libraries.

Thanks to gVisor, a container sandbox that provides improved security and process isolation, we can now run the unmodified Python 3.7.0 interpreter. We’ve done extensive testing to make sure Python 3.7 is compatible with gVisor. As part of our compatibility testing, we run Python’s full suite of language tests, and tests for Python packages that are popular on PyPI. We’re committed to ensuring that everything you’ve come to know and love about Python is supported on our platform.

Seamless deployments

Most importantly, this change in our infrastructure makes it easier to take advantage of Python’s vast ecosystem. As a developer, you just add project dependencies to a requirements.txt file and deploy.

During deployment, FTL, a tool for building containers, fetches the dependencies listed in your requirements.txt file and installs them alongside your app or function. FTL also includes a short-lived dependency cache, which speeds up repeated deployments if no changes are detected in your requirements.txt file. This is particularly useful if you just need to re-deploy because you found a typo.
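
For instance, a deployable App Engine app can be as small as the sketch below; the file names are the standard ones, the Flask version pin is illustrative, and the whole thing ships with gcloud app deploy:

# main.py -- a minimal App Engine standard (Python 3.7) app.
# Alongside it:
#   app.yaml          contains the single line:  runtime: python37
#   requirements.txt  contains, for example:     Flask==1.0.2
# FTL fetches and caches the requirements at deploy time.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello from Python 3.7!'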

Keeping up with the Pythonistas

In making these changes, we also decided to expand the list of system packages that are included with each runtime’s Ubuntu 18.04 distribution. We think that will make life just a little bit easier for developers working with the latest release of Python.

Looking forward, we’re excited about how these changes will allow us to keep up with the Python community’s progress as they release new versions and libraries. Please let us know what you think and if you run into any challenges.

You can learn more about getting started on App Engine and Cloud Functions in our documentation. We can’t wait to see what you build with Python 3.7.

By Stewart Reichling, Product Manager

Introducing the new lead for Android Open Source Project

This week began with the announcement of Android 9 Pie and, as usual, the subsequent upstreaming of code to the Android Open Source Project (AOSP). But the release of Android 9 isn’t the only important Android news!

Tucked away in the announcement to the Android Building mailing list was this note:

“I also wanted to take a moment to introduce myself as the new Tech Lead / Manager for AOSP. My name is Jeff Bailey, and I’ve been involved in the Open Source community for more than two decades. Since I joined the Android team a few months ago, I’ve been learning how we do things and getting an understanding of how we could work better with the community. I’d love to hear from you: @JeffBaileyAOSP on Twitter or jeffbailey+aosp@google.com. Be well!”

As Jeff notes in his introduction, he has a history in free and open source software (FOSS). He’s been an avid user, contributor, and maintainer since before the Open Source Definition was inked!

Jeff co-founded Savannah, where GNU software is developed and distributed, spent 15 years working on Debian, and has been an Ubuntu core developer. Further, he spent some time on the Google Open Source team and was involved in open sourcing Android back in 2008.

Open source projects, even those which originate inside of companies, are powered by the community of users and contributors that surround them. And those communities thrive when they have stewards who are steeped in the traditions of free and open source software. We’re excited for AOSP as Jeff takes the reins. He brings both technical and cultural skills to the table, and he’s been involved with the project since the beginning!

Suffice it to say, AOSP is in good hands. We welcome Jeff to his new role and, as he said in his introduction, he’d love to hear from the community: you can reach Jeff on Twitter and via email.

By Josh Simmons, Google Open Source

Announcing Cirq: an open source framework for NISQ algorithms

Cross-posted from the Google AI Blog

Over the past few years, quantum computing has experienced a growth not only in the construction of quantum hardware, but also in the development of quantum algorithms. With the availability of Noisy Intermediate Scale Quantum (NISQ) computers (devices with ~50–100 qubits and high-fidelity quantum gates), the development of algorithms to understand the power of these machines is of increasing importance. However, a common problem when designing a quantum algorithm on a NISQ processor is how to take full advantage of these limited quantum devices: spending resources on the hardest part of the problem rather than on overheads from poor mappings between the algorithm and the hardware. Furthermore, some quantum processors have complex geometric constraints and other nuances; ignoring these will result in either faulty quantum computation or a computation that is modified and sub-optimal.*

Today at the First International Workshop on Quantum Software and Quantum Machine Learning (QSML), the Google AI Quantum team announced the public alpha of Cirq, an open source framework for NISQ computers. Cirq is focused on near-term questions and helping researchers understand whether NISQ quantum computers are capable of solving computational problems of practical importance. Cirq is licensed under Apache 2, and is free to be modified or embedded in any commercial or open source package.

Once installed, Cirq enables researchers to write quantum algorithms for specific quantum processors. Cirq gives users fine-tuned control over quantum circuits: they can specify gate behavior using native gates, place these gates appropriately on the device, and schedule their timing within the constraints of the quantum hardware. Data structures are optimized for writing and compiling these quantum circuits, allowing users to get the most out of NISQ architectures. Cirq supports running these algorithms locally on a simulator, and is designed to easily integrate with future quantum hardware or larger simulators via the cloud.
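
Here is a minimal sketch of what that looks like: a Bell-state circuit run on the local simulator (exact APIs may shift while Cirq is in alpha):

import cirq

# Two qubits on a line.
q0, q1 = cirq.LineQubit.range(2)

# Hadamard, then CNOT, then measurement: a Bell-state circuit.
circuit = cirq.Circuit([
    cirq.H(q0),
    cirq.CNOT(q0, q1),
    cirq.measure(q0, q1, key='result'),
])

# Run locally on the built-in simulator.
result = cirq.Simulator().run(circuit, repetitions=100)
print(result.histogram(key='result'))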


We are also announcing the release of OpenFermion-Cirq, an example of a Cirq-based application enabling near-term algorithms. OpenFermion is a platform for developing quantum algorithms for chemistry problems, and OpenFermion-Cirq is an open source library that compiles quantum simulation algorithms to Cirq. The new library uses the latest advances in building low-depth quantum algorithms for quantum chemistry problems, enabling users to go from the details of a chemical problem to highly optimized quantum circuits customized to run on particular hardware. For example, the library can be used to easily build quantum variational algorithms for simulating properties of molecules and complex materials.

Quantum computing will require strong cross-industry and academic collaborations if it is going to realize its full potential. In building Cirq, we worked with early testers to gain feedback and insight into algorithm design for NISQ computers. Below are some examples of Cirq work resulting from these early adopters:
To learn more about how Cirq is helping enable NISQ algorithms, please visit the links above where many of the adopters have provided example source code for their implementations.

Today, the Google AI Quantum team is using Cirq to create circuits that run on Google’s Bristlecone processor. In the future, we plan to make this processor available in the cloud, and Cirq will be the interface in which users write programs for this processor. In the meantime, we hope Cirq will improve the productivity of NISQ algorithm developers and researchers everywhere. Please check out the GitHub repositories for Cirq and OpenFermion-Cirq — pull requests welcome!

By Alan Ho, Product Lead and Dave Bacon, Software Lead, Google AI Quantum Team

Acknowledgements
We would like to thank Craig Gidney for leading the development of Cirq, Ryan Babbush and Kevin Sung for building OpenFermion-Cirq and a whole host of code contributors to both frameworks.



* An analogous situation is how early classical programmers needed to run complex programs in very small memory spaces by paying careful attention to the lowest level details of the hardware.

Introducing Data Transfer Project: an open source platform promoting universal data portability

In 2007, a small group of engineers in our Chicago office formed the Data Liberation Front, a team that believed consumers should have better tools to put their data where they want, when they want, and even move it to a different service. This idea, called “data portability,” gives people greater control of their information, and pushes us to develop great products because we know they can pack up and leave at any time.

In 2011, we launched Takeout, a new way for Google users to download or transfer a copy of the data they store or create in a variety of industry-standard formats. Since then, we've continued to invest in Takeout—we now call it Download Your Data—and today, our users can download a machine-readable copy of the data they have stored in 50+ Google products, with more on the way.

Now, we’re taking our commitment to portability a step further. In tandem with Microsoft, Twitter, and Facebook, we’re announcing the Data Transfer Project, an open source initiative dedicated to developing tools that will enable consumers to transfer their data directly from one service to another, without needing to download and re-upload it. Download Your Data users can already do this; they can transfer their information directly to their Dropbox, Box, MS OneDrive, and Google Drive accounts today. With this project, whose development we mentioned in our blog post about preparations for the GDPR, we’re looking forward to working with companies across the industry to bring this type of functionality to individuals across the web.

Our approach

The organizations involved with this project are developing tools that can convert any service's proprietary APIs to and from a small set of standardized data formats that can be used by anyone. This makes it possible to transfer data between any two providers using existing industry-standard infrastructure and authorization mechanisms, such as OAuth. So far, we have developed adapters for seven different service providers across five different types of consumer data; we think this demonstrates that the approach can scale to a large number of use cases.
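
The hypothetical Python sketch below illustrates the adapter idea only; the project's real interfaces are different (and written in Java). Each provider writes one exporter and one importer against a shared data model, so N services need N adapters rather than N-squared pairwise integrations.

from dataclasses import dataclass
from typing import List

@dataclass
class Photo:  # a standardized data format shared by all adapters
    title: str
    url: str
    album_id: str

class PhotoExporter:  # hypothetical adapter interface, for illustration
    def export_photos(self, auth_token: str) -> List[Photo]:
        raise NotImplementedError  # calls the source service's own API

class PhotoImporter:
    def import_photos(self, auth_token: str, photos: List[Photo]) -> None:
        raise NotImplementedError  # calls the destination service's API

def transfer(exporter: PhotoExporter, importer: PhotoImporter,
             src_token: str, dst_token: str) -> None:
    # Data flows service -> standard format -> service, without passing
    # through the consumer's own device or bandwidth.
    importer.import_photos(dst_token, exporter.export_photos(src_token))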

Consumers will benefit from improved flexibility and control over their data. They will be able to import their information into any participating service that offers compelling features—even brand new ones that could rely on powerful, cloud-based infrastructure rather than the consumers’ potentially limited bandwidth and capability to transfer files. Services will benefit as well, as they will be able to compete for users that can move their data more easily.

Protecting users’ data and keeping them in control

Data security and privacy are foundational to the design of the Data Transfer Project. Services must first agree to allow data transfer between them, and then they will require that individuals authenticate each account independently. All credentials and user data will be encrypted both in transit and at rest. The protocol uses a form of perfect forward secrecy where a new unique key is generated for each transfer. Additionally, the framework allows partners to support any authorization mechanism they choose. This enables partners to leverage their existing security infrastructure when authorizing accounts.

As it is an open source product, anyone can inspect the code to verify that data isn't being collected or used for profiling purposes. Tech savvy consumers are also free to download and run an instance of the framework themselves. Interested parties can learn more at the Data Transfer Project website, which explains the technical foundations behind the project and goes into greater detail on how it works.

How to get involved

It is very early days for the Data Transfer Project and we encourage the developer community to join us and help extend the platform to support many more data types, service providers, and hosting solutions.

The Data Transfer Project’s open source code can be found at datatransferproject.dev and you can learn more about Google’s approach to portability in our paper, where we describe our history with this topic and the values and principles that motivated us to invest in the Data Transfer Project. Our prototype already supports data transfer for several product verticals including: photos, mail, contacts, calendar, and tasks. These are enabled by existing, publicly available APIs from Google, Microsoft, Twitter, Flickr, Instagram, Remember the Milk, and Smugmug.

Data portability makes it easy for consumers to try new services and use the ones that they like best. We’re thrilled to help drive an initiative that incentivizes companies large and small to continue innovating across the internet. We’re just getting started and we’re looking forward to what comes next.

By Brian Willard, Software Engineer and Greg Fair, Product Manager

Automating your app releases with Google Play

Posted by Nicholas Lativy, Software Engineer

At Google I/O we shared how Google's own apps make use of Google Play for successful launches and updates and introduced the new Google Play Developer Publishing API Version 3.

The Publishing API enables you to integrate publishing operations into your existing release process or automated workflows by providing the ability to upload APKs and roll out releases. Here's an overview of some of the improvements you can now take advantage of in Version 3 of the API.

Releases in the API

The Publishing API now uses the release model you are familiar with from the Play Console.

{
  "track": "production",
  "releases": [
    {
      "name": "Release One", 
      "versionCodes": ["100"],
      "status": "completed"
    }
  ]
}

This gives you full control over releases via the API, allowing a number of operations that were previously available only in the Play Console. For example, you can now control the name of releases created via the API, and we have relaxed the constraints on what can be rolled out via the API to match the Play Console.

Additional testing tracks

The API now supports releasing to any of the testing tracks you have configured for your application as well as the production track. This makes it possible to configure your continuous integration system to push a new build to your internal test track as soon as it's ready for QA.
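
As a sketch of what that continuous integration step might look like with the Google API Python client (the package name and APK path are placeholders, and credential setup is omitted):

from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# Assumes service-account credentials with Play Console access are configured.
service = build('androidpublisher', 'v3')
package = 'com.example.app'  # placeholder

# Every change happens inside an "edit" and is committed atomically.
edit_id = service.edits().insert(packageName=package, body={}).execute()['id']
apk = service.edits().apks().upload(
    packageName=package, editId=edit_id,
    media_body=MediaFileUpload('app-release.apk',
                               mimetype='application/octet-stream')).execute()
service.edits().tracks().update(
    packageName=package, editId=edit_id, track='internal',
    body={'track': 'internal',
          'releases': [{'versionCodes': [str(apk['versionCode'])],
                        'status': 'completed'}]}).execute()
service.edits().commit(packageName=package, editId=edit_id).execute()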

Staged rollout

Staged rollouts are the recommended way to deploy new versions of your app. They allow you to make your new release available to a small percentage of users and gradually increase this percentage as your confidence in the release grows.

Staged rollouts are now represented directly in the API as inProgress releases.

{
  "track": "production",
  "releases": [
    {
      "versionCodes": ["100"],
      "status": "completed"
    },
    {
      "versionCodes": ["200"],
      "status": "inProgress",
      "userFraction": 0.1
    }
  ]
}

You can now halt a staged rollout via the API by changing its status to halted. This makes it possible to automatically respond to any problems you detect while performing a rollout. If it turns out to be a false alarm, the API now also allows you to resume a halted release by changing its status back to inProgress.
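
Continuing the sketch above, an automated monitor could halt an in-progress rollout like so (the field names follow the JSON examples in this post):

# Halt the staged rollout of version 200 in response to an alert.
service.edits().tracks().update(
    packageName=package, editId=edit_id, track='production',
    body={'track': 'production',
          'releases': [{'versionCodes': ['200'],
                        'status': 'halted',
                        'userFraction': 0.1}]}).execute()
service.edits().commit(packageName=package, editId=edit_id).execute()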

Release notes

Release notes are a useful way to communicate new features to users. In V3 we have simplified how these are specified via the API by adding the releaseNotes field to the release object.

{
  "track": "production",
  "releases": [
    {
      "versionCodes": ["100"],
      "status": "completed",
      "releaseNotes": [
        {
          "language": "en-US",
          "text": "Now it's easier to specify release notes."
        },
        {
          "language": "it-IT",
          "text": "Ora è più semplice specificare le note sulla versione."
        }
      ]
    }
  ]
}

Draft releases

We know that while many developers are comfortable deploying test builds automatically, they like using the Play Console when rolling out to production.

So, in the V3 API we have added the ability to create and manage Draft Releases.

{
  "track": "production",
  "releases": [
    {
      "name": "Big Launch",
      "versionCodes": ["200"],
      "status": "draft"
    }
  ]
}

This allows you to upload APKs or App Bundles and create a draft release from your continuous integration system, and then have your product manager log in, check that everything looks good, and hit "Confirm and Rollout".

We hope you find these features useful and take advantage of them for successful launches and updates with Google Play. If you're interested in some of the other great tools for distributing your apps, check out the I/O sessions which have now been posted to the Android Developers YouTube Channel.


OpenCensus’s journey ahead: platforms and languages

We recently blogged about the value of OpenCensus and how Google uses Census internally. Today, we want to share more about our long-term vision for OpenCensus.

The goal of OpenCensus is to be a ubiquitous observability framework that allows developers to automatically collect, aggregate, and export traces, metrics, and other telemetry from their applications. We plan on getting there by building easy-to-use libraries and automatically integrating with as many technologies and frameworks as possible.

Our roadmap has two themes: increased language, framework, and platform coverage, and the addition of more powerful features. Today, we’ll discuss the first theme: increased coverage.

Increasing Coverage

More Language Coverage

In January, we released OpenCensus for Java, Go, and C++ as well as tracing support for Python, PHP, and Ruby. We’re about to start development of OpenCensus for Node.js and .NET, and you’ll see activity on these repositories ramp up in the coming quarter.

Integration with more Frameworks, Platforms, and Clients

We want to provide a great out-of-the-box experience, so we need to automatically capture traces and metrics with as little developer effort as possible. To achieve this, we’ll be creating integrations for popular web frameworks, RPC frameworks, and storage clients. This will enable automatic context propagation, span creation, and trace annotations, without requiring extra work on behalf of developers.

As a basic example, OpenCensus already integrates with Go’s default gRPC and HTTP handlers to generate spans (with relevant annotations) and to pass context.
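
Manual instrumentation remains available for anything the integrations don't cover. A minimal sketch with the opencensus Python package, using the default sampler and exporter (API details may vary by release):

from opencensus.trace.tracer import Tracer

tracer = Tracer()  # default sampler and exporter

# Spans nest, carrying timing and annotations; the framework integrations
# create spans like these automatically and propagate context for you.
with tracer.span(name='fetch_user'):
    with tracer.span(name='query_database'):
        pass  # application work goes here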

More complex integrations will provide more information to developers. Here’s an example of a trace captured with our upcoming MongoDB instrumentation, shown on Stackdriver Trace and AWS X-Ray:
A MongoDB trace shown in Stackdriver Trace

The same trace captured in X-Ray

Istio

OpenCensus will soon have out-of-the-box tracing and metrics collection in Istio. We’re currently working through our initial designs and implementation for integrations with the Envoy sidecar and the Istio Mixer service. Our goal is to provide Istio users with a great out-of-the-box observability experience.

Kubernetes

We have two primary use cases in mind for Kubernetes deployments: providing cluster-wide visibility via z-pages, and better labeling of traces, stats, and metrics. Cluster-wide z-pages will allow developers to view telemetry in real time across an entire Kubernetes deployment, independently of their back-end. This is incredibly useful when debugging immediate high-impact issues like service outages.

Client Application Support

OpenCensus currently provides observability into back-end services; however, this doesn’t tell the whole story about end-to-end application performance. Throughout 2018, we plan to add instrumentation for client and front-end web applications, so developers can get traces that begin on customers’ devices and reflect actual perceived latency, along with metrics captured from client code.

We aim to add support for instrumenting Android, iOS, and front-end JavaScript, though this list may grow or change. Expect to hear more about this later in 2018.

Next Up

Next week we’ll discuss some of the new features that we’re looking to bring to OpenCensus, including notable enhancements to the trace sampling logic.

None of this is possible without the support and participation from the community. Please check out our repository and start contributing; we welcome contributions of any size, however you want to take part. You can join other developers and users on the OpenCensus Gitter channel. We’d love to hear from you!

By Pritam Shah and Morgan McLean, Census team

Open sourcing Seurat: bringing high-fidelity scenes to mobile VR

Crossposted from the Google Developers Blog

Great VR experiences make you feel like you’re really somewhere else. To create deeply immersive experiences, there are a lot of factors that need to come together: amazing graphics, spatialized audio, and the ability to move around and feel like the world is responding to you.

Last year at I/O, we announced Seurat as a powerful tool to help developers and creators bring high-fidelity graphics to standalone VR headsets with full positional tracking, like the Lenovo Mirage Solo with Daydream. Seurat is a scene simplification technology designed to process very complex 3D scenes into a representation that renders efficiently on mobile hardware. Here’s how ILMxLAB was able to use Seurat to bring an incredibly detailed ‘Rogue One: A Star Wars Story’ scene to a standalone VR experience.

Today, we’re open sourcing Seurat to the developer community. You can now use Seurat to bring visually stunning scenes to your own VR applications and have the flexibility to customize the tool for your own workflows.

Behind the scenes: how Seurat works

Seurat works by taking advantage of the fact that VR scenes are typically viewed from within a limited viewing region, and leverages this to optimize the geometry and textures in your scene. It takes RGBD images (color and depth) as input and generates a textured mesh, targeting a configurable number of triangles, texture size, and fill rate, to simplify scenes beyond what traditional methods can achieve.


To demonstrate what Seurat can do, here’s a snippet from Blade Runner: Revelations, which launched today with the Lenovo Mirage Solo.

Blade Runner: Revelations by Alcon Interactive and Seismic Games
The Blade Runner universe is known for its stunning worlds, and in Revelations, you get to unravel a mystery around fugitive Replicants in the futuristic but gritty streets. To create the look and feel for Revelations, Seismic used Seurat to bring a scene of 46.6 million triangles down to only 307,000, improving performance by more than 100x with almost no loss in visual quality:

Original scene:

Seurat-processed scene: 

If you’re interested in learning more about Seurat or trying it out yourself, visit the Seurat GitHub page to access the documentation and source code. We’re looking forward to seeing what you build!

By Manfred Ernst, Software Engineer