Author Archives: Google Devs

Fun new ways developers are experimenting with voice interaction

Posted by Amit Pitaru, Creative Lab

Voice interaction has the potential to simplify the way we use technology. And with Dialogflow, Actions on Google, and Speech Synthesis API, it's becoming easier for any developer to create voice-based experiences. That's why we've created Voice Experiments, a site to showcase how developers are exploring voice interaction in all kinds of exciting new ways.

The site includes a few experiments that show how voice interaction can be used to explore music, gaming, storytelling, and more. MixLab makes it easier for anyone to create music using simple voice commands. Mystery Animal puts a new spin on a classic game. And Story Speaker lets you create interactive, spoken stories just by writing in a Google Doc – no coding required.

You can try the experiments through the Google Assistant on your phone and on voice-activated speakers like the Google Home. Or you can try them on the web using a browser like Chrome.

It's still early days for voice interaction, and we're excited to see what you will make. Visit g.co/VoiceExperiments to play with the experiments or submit your own.

Announcing TensorFlow r1.4

Posted by the TensorFlow Team

TensorFlow release 1.4 is now public - and this is a big one! So we're happy to announce a number of new and exciting features we hope everyone will enjoy.

Keras

In 1.4, Keras has graduated from tf.contrib.keras to the core package tf.keras. Keras is a hugely popular machine learning framework, consisting of high-level APIs that minimize the time between your ideas and working implementations. Keras integrates smoothly with other core TensorFlow functionality, including the Estimator API. In fact, you can construct an Estimator directly from any Keras model by calling the tf.keras.estimator.model_to_estimator function. With Keras now in TensorFlow core, you can rely on it for your production workflows.
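For example, here's a minimal sketch (our own, not from the release notes) of converting a small Keras model into an Estimator; the model architecture, shapes, and loss are illustrative assumptions:

import tensorflow as tf

# A toy Keras model; the layer sizes and input shape are made up for illustration.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Wrap the compiled Keras model as an Estimator for use with the Estimator API.
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)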

To get started with Keras, please read:

To get started with Estimators, please read:

Datasets

We're pleased to announce that the Dataset API has graduated to the core package tf.data (from tf.contrib.data). The 1.4 version of the Dataset API also adds support for Python generators. We strongly recommend using the Dataset API to create input pipelines for TensorFlow models because:

  • The Dataset API provides more functionality than the older APIs (feed_dict or the queue-based pipelines).
  • The Dataset API performs better.
  • The Dataset API is cleaner and easier to use.

We're going to focus future development on the Dataset API rather than the older APIs.
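As a quick illustration, here's a minimal input-pipeline sketch (our own example, with made-up data) that uses tf.data, including the new Python generator support:

import tensorflow as tf

# A Python generator producing toy (feature, label) pairs; the values are illustrative.
def gen():
    for i in range(100):
        yield (i, i * 2)

# Build, shuffle, and batch a Dataset from the generator (generator support is new in 1.4).
dataset = tf.data.Dataset.from_generator(gen, output_types=(tf.int64, tf.int64))
dataset = dataset.shuffle(buffer_size=100).batch(32)

# Get a (features, labels) tensor pair to feed into a model or input_fn.
iterator = dataset.make_one_shot_iterator()
features, labels = iterator.get_next()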

To get started with Datasets, please read:

Distributed Training & Evaluation for Estimators

Release 1.4 also introduces the utility function tf.estimator.train_and_evaluate, which simplifies training, evaluation, and exporting Estimator models. This function enables distributed execution for training and evaluation, while still supporting local execution.
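Here's a minimal sketch of how that looks (our own example; the estimator and input functions are assumed to be defined elsewhere):

import tensorflow as tf

# Sketch only: `estimator`, `train_input_fn`, and `eval_input_fn` are assumed to exist.
train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=10000)
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn)

# Runs training with periodic evaluation; the same call supports distributed
# execution when the TF_CONFIG environment variable is configured.
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)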

Other Enhancements

Beyond the features called out in this announcement, 1.4 also introduces a number of additional enhancements, which are described in the Release Notes.

Installing TensorFlow 1.4

TensorFlow release 1.4 is now available via standard pip installation.

# Note: the following command will overwrite any existing TensorFlow
# installation.
$ pip install --ignore-installed --upgrade tensorflow
# Use pip for Python 2.7
# Use pip3 instead of pip for Python 3.x

We've updated the documentation on tensorflow.org to 1.4.

TensorFlow depends on contributors for enhancements. A big thank you to everyone helping out developing TensorFlow! Don't hesitate to join the community and become a contributor by developing the source code on GitHub or helping out answering questions on Stack Overflow.

We hope you enjoy all the features in this release.

Happy TensorFlow Coding!

Resonance Audio: Multi-platform spatial audio at scale

Posted by Eric Mauskopf, Product Manager

As humans, we rely on sound to guide us through our environment, help us communicate with others and connect us with what's happening around us. Whether walking along a busy city street or attending a packed music concert, we're able to hear hundreds of sounds coming from different directions. So when it comes to AR, VR, games and even 360 video, you need rich sound to create an engaging immersive experience that makes you feel like you're really there. Today, we're releasing a new spatial audio software development kit (SDK) called Resonance Audio. It's based on technology from Google's VR Audio SDK, and it works at scale across mobile and desktop platforms.

Experience spatial audio in our Audio Factory VR app for Daydream and SteamVR

Performance that scales on mobile and desktop

Bringing rich, dynamic audio environments into your VR, AR, gaming, or video experiences without affecting performance can be challenging. There are often few CPU resources allocated for audio, especially on mobile, which can limit the number of simultaneous high-fidelity 3D sound sources for complex environments. The SDK uses highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality, even on mobile. We're also introducing a new feature in Unity for precomputing highly realistic reverb effects that accurately match the acoustic properties of the environment, reducing CPU usage significantly during playback.

Using geometry-based reverb by assigning acoustic materials to a cathedral in Unity

Multi-platform support for developers and sound designers

We know how important it is that audio solutions integrate seamlessly with your preferred audio middleware and sound design tools. With Resonance Audio, we've released cross-platform SDKs for the most popular game engines, audio engines, and digital audio workstations (DAW) to streamline workflows, so you can focus on creating more immersive audio. The SDKs run on Android, iOS, Windows, MacOS and Linux platforms and provide integrations for Unity, Unreal Engine, FMOD, Wwise and DAWs. We also provide native APIs for C/C++, Java, Objective-C and the web. This multi-platform support enables developers to implement sound designs once, and easily deploy their project with consistent sounding results across the top mobile and desktop platforms. Sound designers can save time by using our new DAW plugin for accurately monitoring spatial audio that's destined for YouTube videos or apps developed with Resonance Audio SDKs. Web developers get the open source Resonance Audio Web SDK that works in the top web browsers by using the Web Audio API.

DAW plugin for sound designers to monitor audio destined for YouTube 360 videos or apps developed with the SDK

Cutting-edge features for modeling complex sound environments

By providing powerful tools for accurately modeling complex sound environments, Resonance Audio goes beyond basic 3D spatialization. The SDK enables developers to control the direction in which acoustic waves propagate from sound sources. For example, when standing behind a guitar player, it can sound quieter than when standing in front. And when facing the direction of the guitar, it can sound louder than when your back is turned.

Controlling sound wave directivity for an acoustic guitar using the SDK

Another SDK feature is automatically rendering near-field effects when sound sources get close to a listener's head, providing an accurate perception of distance, even when sources are close to the ear. The SDK also enables sound source spread, by specifying the width of the source, allowing sound to be simulated from a tiny point in space up to a wall of sound. We've also released an Ambisonic recording tool to spatially capture your sound design directly within Unity, save it to a file, and use it anywhere Ambisonic soundfield playback is supported, from game engines to YouTube videos.

If you're interested in creating rich, immersive soundscapes using cutting-edge spatial audio technology, check out the Resonance Audio documentation on our developer site, let us know what you think through GitHub, and show us what you build with #ResonanceAudio on social media; we'll be resharing our favorites.

Google Developers Launchpad Studio works with top startups to tackle healthcare challenges with machine learning

Posted by Malika Cantor, Developer Relations Program Manager

Google is an artificial intelligence-first company. Machine Learning (ML) and Cloud are deeply embedded in our product strategies and have been crucial thus far in our efforts to tackle some of humanity's greatest challenges - like bringing high-quality, affordable, and specialized healthcare to people globally.

In that spirit, we're excited to announce the first four startups to join Launchpad Studio, our 6-month mentorship program tailored to help applied-ML startups build great products using the most advanced tools and technologies available. Working side by side with experts from across Google product and research teams – including Google Cloud, Verily, X, Brain, and ML Research – we intend to support these startups on their journey to build successful applications, and to explore leveraging Google Cloud Platform, TensorFlow, Android, and other Google platforms. Launchpad Studio has also enlisted the expertise of a number of top industry practitioners and thought leaders to ensure Studio startups are successful both in practice and over the long term.

These four startups were selected based on the novel ways they've found to apply ML to important challenges in the Healthcare industry. Namely:

  1. Reducing doctor burnout and increasing doctor productivity (Augmedix)
  2. Regaining movement in paralyzed limbs (BrainQ)
  3. Accelerating clinical trials and enabling value-based healthcare (Byteflies)
  4. Detecting sepsis (CytoVale)

Let's take a closer look:

Reducing Doctor Burnout and Increasing Doctor Productivity

Numerous studies have shown that primary care physicians currently spend about half of their workday on the computer, documenting in electronic health records (EHRs).

Augmedix is on a mission to reclaim this time and repurpose it for what matters most: patient care. When doctors use the service by wearing Alphabet's Glass hardware, their documentation and administrative load is almost entirely alleviated. This saves doctors 2-3 hours per day and dramatically improves the doctor-patient experience.

Augmedix has started leveraging advances in deep learning and natural language understanding to accelerate these efficiencies and offer additional value that further improves patient care.

Regaining Movement in Paralyzed Limbs

Motor disability following neuro-disorders such as stroke, spinal cord injury, and traumatic brain injury affects tens of millions of people each year worldwide.

BrainQ's mission is to help these patients back on their feet, restoring their ability to perform activities of daily living. BrainQ is currently conducting clinical trials in leading hospitals in Israel.

The company is developing a medical device that uses artificial intelligence tools to identify high-resolution spectral patterns in patients' brain waves, observed via electroencephalogram (EEG) sensors. These patterns are then translated into a personalized electromagnetic treatment protocol aimed at facilitating targeted neuroplasticity and enhancing patients' recovery.

Accelerating Clinical Trials and Enabling Value-Based Healthcare

Today, sensors are making it easier to collect data about health and diseases. However, building a new wearable health application that is clinically validated and end-user friendly is still a daunting task. Byteflies' modular platform makes this whole process much easier and more cost-effective. Through their medical and signal-processing expertise, Byteflies has made advances in the interpretation of multiple synchronized vital signs. This multimodal, high-resolution vital sign data is very useful for healthcare and clinical trial applications. With that level of data ingestion comes a great need for automated data processing. Byteflies plans to use ML to transform these data streams into actionable, personalized, and medically relevant data.

Early Sepsis Detection

Research suggests that sepsis kills more Americans than breast cancer, prostate cancer, and AIDS combined. Fortunately, sepsis can often be quickly mitigated if caught early on in patient care.

CytoVale is developing a medical diagnostics platform based on cell mechanics, initially for use in early detection of sepsis in the emergency room setting. It analyzes thousands of cells' mechanical properties using ultra high speed video to diagnose disease in a few minutes. Their technology also has applications in immune activation, cancer detection, research tools, and biodefense.

CytoVale is leveraging recent advances in ML and computer vision in conjunction with their unique measurement approach to facilitate this early detection of sepsis.

More about the Program

Each startup will get tailored, equity-free support, with the goal of successfully completing an ML-focused project during the term of the program. To support this process, we provide resources, including deep engagement with engineers in Google Cloud, Google X, and other product teams, as well as Google Cloud credits. We also include both Google Cloud Platform and G Suite training in our engagement with all Studio startups.

Join Us

Based in San Francisco, Launchpad Studio is a fully tailored product development acceleration program that matches top ML startups and experts from Silicon Valley with the best of Google - its people, network, and advanced technologies - to help accelerate applied ML and AI innovation. The program's mandate is to support the growth of the ML ecosystem, and to develop product methodologies for ML.

Launchpad Studio is looking to work with the best and most game-changing ML startups from around the world. While we're currently focused on working with startups in the Healthcare and Biotech space, we'll soon be announcing other industry verticals, and any startup applying AI / ML technology to a specific industry vertical can apply on a rolling basis.

Eager Execution: An imperative, define-by-run interface to TensorFlow

Posted by Asim Shankar and Wolff Dobson, Google Brain Team

Today, we introduce eager execution for TensorFlow.

Eager execution is an imperative, define-by-run interface where operations are executed immediately as they are called from Python. This makes it easier to get started with TensorFlow, and can make research and development more intuitive.

The benefits of eager execution include:

  • Fast debugging with immediate run-time errors and integration with Python tools
  • Support for dynamic models using easy-to-use Python control flow
  • Strong support for custom and higher-order gradients
  • Almost all of the available TensorFlow operations

Eager execution is available now as an experimental feature, so we're looking for feedback from the community to guide our direction.

To understand this all better, let's look at some code. This gets pretty technical; familiarity with TensorFlow will help.

Using Eager Execution

When you enable eager execution, operations execute immediately and return their values to Python without requiring a Session.run(). For example, to multiply two matrices together, we write this:

import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()

x = [[2.]]
m = tf.matmul(x, x)

It's straightforward to inspect intermediate results with print or the Python debugger.


print(m)
# The 1x1 matrix [[4.]]

Dynamic models can be built with Python flow control. Here's an example of the Collatz conjecture using TensorFlow's arithmetic operations:

a = tf.constant(12)
counter = 0
while not tf.equal(a, 1):
  if tf.equal(a % 2, 0):
    a = a / 2
  else:
    a = 3 * a + 1
  print(a)

Here, the use of the tf.constant(12) Tensor object promotes all math operations to tensor operations, and as such all return values will be tensors.

Gradients

Most TensorFlow users are interested in automatic differentiation. Because different operations can occur during each call, we record all forward operations to a tape, which is then played backwards when computing gradients. After we've computed the gradients, we discard the tape.

If you're familiar with the autograd package, the API is very similar. For example:

def square(x):
  return tf.multiply(x, x)

grad = tfe.gradients_function(square)

print(square(3.))  # [9.]
print(grad(3.))    # [6.]

The gradients_function call takes a Python function square() as an argument and returns a Python callable that computes the partial derivatives of square() with respect to its inputs. So, to get the derivative of square() at 3.0, invoke grad(3.0), which returns 6.

The same gradients_function call can be used to get the second derivative of square:

gradgrad = tfe.gradients_function(lambda x: grad(x)[0])

print(gradgrad(3.)) # [2.]

As we noted, control flow can cause different operations to run, such as in this example.

def abs(x):
  return x if x > 0. else -x

grad = tfe.gradients_function(abs)

print(grad(2.0))   # [1.]
print(grad(-2.0))  # [-1.]

Custom Gradients

Users may want to define custom gradients for an operation, or for a function. This may be useful for multiple reasons, including providing a more efficient or more numerically stable gradient for a sequence of operations.

Here is an example that illustrates the use of custom gradients. Let's start by looking at the function log(1 + e^x), which commonly occurs in the computation of cross entropy and log likelihoods.

def log1pexp(x):
  return tf.log(1 + tf.exp(x))

grad_log1pexp = tfe.gradients_function(log1pexp)

# The gradient computation works fine at x = 0.
print(grad_log1pexp(0.))
# [0.5]
# However it returns a `nan` at x = 100 due to numerical instability.
print(grad_log1pexp(100.))
# [nan]

We can use a custom gradient for the above function that analytically simplifies the gradient expression. Notice how the gradient function implementation below reuses an expression (tf.exp(x)) that was computed during the forward pass, making the gradient computation more efficient by avoiding redundant computation.

@tfe.custom_gradient
def log1pexp(x):
  e = tf.exp(x)
  def grad(dy):
    return dy * (1 - 1 / (1 + e))
  return tf.log(1 + e), grad

grad_log1pexp = tfe.gradients_function(log1pexp)

# Gradient at x = 0 works as before.
print(grad_log1pexp(0.))
# [0.5]
# And now gradient computation at x=100 works as well.
print(grad_log1pexp(100.))
# [1.0]

Building models

Models can be organized in classes. Here's a model class that creates a (simple) two layer network that can classify the standard MNIST handwritten digits.

class MNISTModel(tfe.Network):
  def __init__(self):
    super(MNISTModel, self).__init__()
    self.layer1 = self.track_layer(tf.layers.Dense(units=10))
    self.layer2 = self.track_layer(tf.layers.Dense(units=10))

  def call(self, input):
    """Actually runs the model."""
    result = self.layer1(input)
    result = self.layer2(result)
    return result

We recommend using the classes (not the functions) in tf.layers since they create and contain model parameters (variables). Variable lifetimes are tied to the lifetime of the layer objects, so be sure to keep track of them.

Why are we using tfe.Network? A Network is a container for layers and is a tf.layers.Layer itself, allowing Network objects to be embedded in other Network objects. It also contains utilities to assist with inspection, saving, and restoring.

Even without training the model, we can imperatively call it and inspect the output:

# Let's make up a blank input image
model = MNISTModel()
batch = tf.zeros([1, 1, 784])
print(batch.shape)
# (1, 1, 784)
result = model(batch)
print(result)
# tf.Tensor([[[ 0. 0., ...., 0.]]], shape=(1, 1, 10), dtype=float32)

Note that we do not need any placeholders or sessions. The first time we pass in the input, the sizes of the layers' parameters are set.

To train any model, we define a loss function to optimize, calculate gradients, and use an optimizer to update the variables. First, here's a loss function:

def loss_function(model, x, y):
  y_ = model(x)
  return tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_)

And then, our training loop:

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
for (x, y) in tfe.Iterator(dataset):
  grads = tfe.implicit_gradients(loss_function)(model, x, y)
  optimizer.apply_gradients(grads)

implicit_gradients() calculates the derivatives of loss_function with respect to all the TensorFlow variables used during its computation.

We can move computation to a GPU the same way we've always done with TensorFlow:

with tf.device("/gpu:0"):
  for (x, y) in tfe.Iterator(dataset):
    optimizer.minimize(lambda: loss_function(model, x, y))

(Note: here we skip storing the loss and call optimizer.minimize directly, but you could also use the apply_gradients() method shown above; they are equivalent.)

Using Eager with Graphs

Eager execution makes development and debugging far more interactive, but TensorFlow graphs have a lot of advantages with respect to distributed training, performance optimizations, and production deployment.

The same code that executes operations when eager execution is enabled will construct a graph describing the computation when it is not. To convert your models to graphs, simply run the same code in a new Python session where eager execution hasn't been enabled, as seen, for example, in the MNIST example. The value of model variables can be saved and restored from checkpoints, allowing us to move between eager (imperative) and graph (declarative) programming easily. With this, models developed with eager execution enabled can be easily exported for production deployment.
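As a minimal sketch of the same idea (our own example, not from the post): if eager execution has not been enabled, the identical operations simply build a graph that you then run with a Session.

import tensorflow as tf

# Without tfe.enable_eager_execution(), these lines construct graph nodes.
x = tf.constant([[2.]])
m = tf.matmul(x, x)  # `m` is a symbolic Tensor here, not a concrete value yet

# Executing the graph produces the value.
with tf.Session() as sess:
  print(sess.run(m))  # [[4.]]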

In the near future, we will provide utilities to selectively convert portions of your model to graphs. In this way, you can fuse parts of your computation (such as the internals of a custom RNN cell) for high performance, while keeping the flexibility and readability of eager execution.

How does my code change?

Using eager execution should be intuitive to current TensorFlow users. There are only a handful of eager-specific APIs; most of the existing APIs and operations work with eager enabled. Some notes to keep in mind:

  • As with TensorFlow generally, we recommend that if you have not yet switched from queues to using tf.data for input processing, you should. It's easier to use and usually faster. For help, see this blog post and the documentation page.
  • Use object-oriented layers, like tf.layers.Conv2D() or Keras layers; these have explicit storage for variables.
  • For most models, you can write code so that it will work the same for both eager execution and graph construction. There are some exceptions, such as dynamic models that use Python control flow to alter the computation based on inputs.
  • Once you invoke tfe.enable_eager_execution(), it cannot be turned off. To get graph behavior, start a new Python session.

Getting started and the future

This is still a preview release, so you may hit some rough edges. To get started today:

There's a lot more to talk about with eager execution and we're excited… or, rather, we're eager for you to try it today! Feedback is absolutely welcome.

Gmail Add-ons framework now available to all developers

Originally posted by Wesley Chun, G Suite Developer Advocate on the G Suite Blog

Email remains at the heart of how companies operate. That's why earlier this year, we previewed Gmail Add-ons—a way to help businesses speed up workflows. Since then, we've seen partners build awesome applications, and beginning today, we're extending the Gmail add-on preview to include all developers. Now anyone can start building a Gmail add-on.

Gmail Add-ons let you integrate your app into Gmail and extend Gmail to handle quick actions.

They are built using native UI context cards that can include simple text dialogs, images, links, buttons and forms. The add-on appears when relevant, and the user is just a click away from your app's rich and integrated functionality.

Gmail Add-ons are easy to create. You only have to write code once for your add-on to work on both web and mobile, and you can choose from a rich palette of widgets to craft a custom UI. Create an add-on that contextually surfaces cards based on the content of a message. Check out this video to see how we created an add-on to collate email receipts and expedite expense reporting.

As the video shows, there are three components to the app's core functionality. The first component is getContextualAddOn() – this is the entry point for all Gmail Add-ons, where data is compiled to build the card and render it within the Gmail UI. Since the add-on is processing expense reports from email receipts in your inbox, createExpensesCard() parses the relevant data from the message and presents it in a form so your users can confirm or update values before submitting. Finally, submitForm() takes the data and writes a new row in an "expenses" spreadsheet in Google Sheets, which you can edit, tweak, and submit for approval to your boss.

Check out the documentation to get started with Gmail Add-ons, or if you want to see what it's like to build an add-on, go to the codelab to build ExpenseIt step-by-step. While you can't publish your add-on just yet, you can fill out this form to get notified when publishing is opened. We can't wait to see what Gmail Add-ons you build!

Introducing the Mobile Excellence Award to celebrate great work on Mobile Web

Posted by Shane Cassells, mSite Product Lead, EMEA

We recently partnered with Awwwards, an awards platform for web development and web design, to launch a Mobile Excellence Badge on awwwards.com and a Mobile Excellence Award to recognize great mobile web experiences.

Starting this month, every agency and digital professional that submits their website to Awwwards can be eligible for a Mobile Excellence Badge, a guarantee of the performance of their mobile version. The mobile website's performance will be evaluated by a group of experts and measured against specific criteria based on Google's mobile principles on speed and usability. When a site achieves a minimum score, it will be recognized with the new Mobile Excellence Badge. All criteria are listed at the Mobile Guidelines.

The highest scoring sites with the Mobile Excellence Badge will be nominated for Mobile Site of the Week. One of them will then go on to win Mobile Site of the Month.

All Mobile Sites of the Month will be candidates for Mobile Site of the Year, with the winner receiving a physical award at the Awwwards Conference in Berlin, 8-9 February 2018.

At a time when mobile plays a dominant role in how people access the web, it is essential that web developers and web designers build websites that meet users' expectations. Today, 53% of mobile site visits are abandoned if pages take longer than 3 seconds to load [1], yet despite the explosion of mobile usage, the performance and usability of existing mobile sites remain poor and far from meeting those expectations. At the moment, the average page load time is 22s globally [2], which represents a massive missed opportunity for many companies, given the impact of speed on conversion and bounce rates [3].

If you've created a great mobile web experience and want it to receive a Mobile Excellence Badge and compete for the Mobile Excellence Award, submit your request here.

Notes


  1. Google Data, Aggregated, anonymized Google Analytics data from a sample of mWeb sites opted into sharing benchmark data, n=3.7K, Global, March 2016 

  2. Google Research, Webpagetest.org, Global, sample of more than 900,000 mWeb sites across Fortune 1000 and Small Medium Businesses. Testing was performed using Chrome and emulating a Nexus 5 device on a globally representative 3G connection. 1.6Mbps download speed, 300ms Round-Trip Time (RTT). Tested on EC2 on m3.medium instances, similar in performance to high-end smartphones, Jan. 2017. 

  3. Akamai.com, Online Retail Experience Report 2017 

Playtime 2017: Find success on Google Play and grow your business with new Play Console features


Originally Posted by Vineet Buch, Director of Product Management, Google Play Apps & Games on the Android Developers Blog
Today we kicked off our annual global Playtime series with back-to-back events in Berlin and San Francisco. Over the next month, we'll be hearing from many app and game developers in cities around the world. It has been an amazing 2017 for developers on Google Play – there are now more than 8 billion new installs per month globally.

To help you continue to take advantage of this opportunity, we're announcing innovations on Google Play and new features in the Play Console. Follow us on Medium where presenters will be posting their strategies, best practices, and examples to help you achieve your business objectives. As Google Play continues to grow rapidly, we want to help people understand our business. That's why we're also publishing the State of Play 2017 report that will be updated annually to help you stay informed about our progress and how we’re helping developers succeed.


Apps and games on Google Play bring your devices to life, whether they're phones and tablets, Wear devices, TVs, Daydream, or Chromebooks like the new Google Pixelbook. We're making it even easier for people to discover and re-engage with great content on the Play Store.



Recognizing the best

We're investing in curation and editorial to showcase the highest quality apps and games we love. The revamped Editors' Choice is now live in 17 countries and Android Excellence recently welcomed new apps and games. We also continue to celebrate and support indie games, recently announcing winners of the Indie Games Festival in San Francisco and opening the second Indie Games Contest in Europe for nominations.



Discovering great games

We've launched an improved home for games with trailers and screenshots of gameplay and two new browse destinations are coming soon, 'New' (for upcoming and trending games) and 'Premium' (for paid games).



Going beyond installs

We’re showing reminders to try games you’ve recently installed and we’re expanding our successful ‘live operations’ banners on the Play Store, telling you about major in-game events in popular games you’ve got on your device. We're also excited to integrate Android Instant Apps with a 'Try it Now' button on store listings. With a single tap, people can jump right into the app experience without installing.



The new games experience on Google Play


The Google Play Console offers tools which help you and your team members at every step of an app’s lifecycle. Use the Play Console to improve app quality, manage releases with confidence, and increase business performance.



Focus on quality

Android vitals were introduced at I/O 2017 and already 65% of top developers are using the dashboard to understand their app's performance. We're adding five new Android vitals and increasing device coverage to help you address issues relating to battery consumption, crashes, and render time. Better performing apps are favored by Google Play's search and discovery algorithms.
We're improving pre-launch reports and enabling them for all developers with no need to opt-in. When you upload an alpha or beta APK, we'll automatically install and test your app on physical, popular devices powered by Firebase Test Lab. The report will tell you about crashes, display issues, security vulnerabilities, and now, performance issues encountered.
When you install a new app, you expect it to open and perform normally. To ensure people installing apps and games from Google Play have a positive experience and developers benefit from being part of a trusted ecosystem, we are introducing a policy to disallow apps which consistently exhibit broken experiences on the majority of devices, such as crashing, closing, freezing, or otherwise functioning abnormally. Learn more in the policy center.




Release with confidence

Beta testing lets trusted users try your app or game before it goes to production so you can iterate on your ideas and gather feedback. You can now target alpha and beta tests to specific countries. This allows you to, for example, beta test in a country you're about to launch in, while people in other countries receive your production app. We'll be bringing country-targeting to staged rollouts soon.
We've also made improvements to the device catalog. Over 66% of top developers are using the catalog to ensure they provide a great user experience on the widest range of devices. You can now save device searches and see why a specific device doesn't support your app. Navigate to the device catalog and review the terms of service to get started.




Grow your subscriptions business

At I/O 2017 we announced that both the number of subscribers on Play and subscriptions business revenue doubled in the preceding year. We're making it easier to set up and manage your subscription service with the Play Billing Library and, soon, new test instruments to simplify testing your flows for successful and unsuccessful payments.
We're helping you acquire and retain more subscribers. You can offer shorter free trials, at a minimum of three days, and we will now enforce one free trial at the app level to reduce the potential for abuse. You can opt in to receive notifications when someone cancels their subscription, and we're making it easier for people to restore a canceled subscription. Account hold is now generally available, letting you block access to your service while we work with the user to fix a renewal payment issue. Finally, from January 2018 we're also updating our transaction fee for subscribers who are retained for more than 12 months.




Announcing the Google Play Security Reward Program

At Google, we have long enjoyed a close relationship with the security research community. Today we're introducing the Google Play Security Reward Program to incentivize security research into popular Android apps, including Google's own apps. The program will help us find vulnerabilities and notify developers via security recommendations on how to fix them. We hope to build on the success of our other reward programs, and we invite developers and the research community to work with us on proactively improving the security of the Google Play ecosystem.



Grow with Google scholarships for US Android and web developers

Posted by Peter Lubbers, Head of Google Developer Training
Today, we are excited to announce that we are offering 50,000 Udacity Challenge Scholarships in the United States through the Grow with Google initiative!
In case you missed the announcements in Pittsburgh earlier, the Grow with Google initiative represents Google's commitment to help drive the economic potential of technology through education. In addition to the Nanodegree scholarships, we are offering grants to organizations that train job-seekers with the digital tools they need.
Visit Grow with Google to learn more about this exciting initiative.
The Google-Udacity curriculum is designed to help developers get the training they need to enter the workforce as Android or mobile web developers. Whether you're an experienced programmer looking for a career change or a novice looking for a start, the courses and the Nanodegree programs are built with your career goals in mind and prepare you for Google's Associate Android Developer and Mobile Web Specialist developer certifications.
Of the 50,000 Challenge Scholarships available, 25,000 will be available for aspiring developers with no experience. We've split the curriculum for new developers between these two courses:
We've also dedicated 25,000 scholarships for developers with more than one year of experience. For these developers, the curriculum will be divided between these two courses:
The top 5,000 students at the end of the challenge will earn a full Nanodegree scholarship to one of the four Nanodegree programs in Android or web development.
The application period closes on November 30th. To learn more about the scholarships and to apply, visit www.udacity.com/grow-with-google.

Introducing Dialogflow, the new name for API.AI

Posted by Ilya Gelfenbeyn, Lead Product Manager, on behalf of the entire Dialogflow team

When we started API.AI, our goal was to provide developers like you with an API to add natural language processing capabilities to your applications, services and devices. We've worked hard towards that goal and accomplished a lot partnering with all of you. But as we've taken a look at our work over the past year and where we're heading, from new features like our Analytics tool to the 33 prebuilt agents, we realized that we were doing so much more than just providing an API. So with that, we'd like to introduce Dialogflow – the new name for API.AI.

Our new name doesn't change the work we're doing with you or our mission. Our mission remains the same: Dialogflow is your end-to-end platform for building great conversational experiences, and our team will help you share what you've built with millions of users. In fact, here are two new features we've just launched to help you build those great experiences:

  1. In-line code editor: you can now write fulfillment logic, test, and implement a functional webhook directly in the console.
  2. Multi-lingual agent support: building for multiple languages is now easier than ever. You can now add additional languages and locales to your new or existing agent.

Thanks for being a part of API.AI – we can't wait to see what we do together with Dialogflow. Head over to your developer console and give these new features a try. And, as always, contact us if you have any questions.

Hi from the Dialogflow team!