Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Introducing our new developer YouTube Series: “Build Out”

Posted by Reto Meier & Colt McAnlis: Developer Advocates

Ever found yourself trying to figure out the right way to combine mobile, cloud, and web technologies, only to be lost in the myriad of available offerings? It can be challenging to know the best way to combine all the options to build products that solve problems for your users.

That's why we created Build Out, a new YouTube series where real engineers face off building fake products.

Each month, we (Reto Meier and Colt McAnlis) will present competing architectures to show how Google's developer products can be combined to solve challenging problems for your users. Each solution incorporates a wide range of technologies, including Google Cloud, Android, Firebase, and TensorFlow (just to name a few).

Since we're engineers at heart, we enjoy a challenge—so each solution goes well past minimum viable product, and explores some of the more advanced possibilities available to solve the problem creatively.

Now, here's the interesting part. When we're done presenting, you get to decide which of us solved the problem better, by posting a comment to the video on YouTube. If you've already got a better solution—or think you know one—tell us about it in the comments, or respond with your own Build Out video to show us how it's done!

Episode #1: The Smart Garden.

In which we explore designs for gardens that care for themselves. Each design must be fully autonomous, learn from experience, and scale from a backyard up to large-scale commercial gardens.

You can get the full technical details on each Smart Garden solution in this Medium article, including alternative approaches and best practices.

You can also listen to the Build Out Rewound Podcast, to hear us discuss our choices.

Launchpad comes to Africa to support tech startups! Apply to join the first accelerator class

Posted by Andy Volk, Sub-Saharan Africa Ecosystem Regional Manager & Josh Yellin, Program Manager of Launchpad Accelerator

Earlier this year at Google for Nigeria, our CEO Sundar Pichai made a commitment to support African entrepreneurs building successful technology companies and products. Following up on that commitment, we're excited to announce Google Developers Launchpad Africa, our new hands-on, comprehensive mentorship program tailored exclusively to startups based in Africa.

Building on the success of our global Launchpad Accelerator program, Launchpad Africa will kick off as a three-month accelerator that provides African startups with over $3 million in equity-free support, working space, travel and PR backing, and access to expert advisers from Google, Africa, and around the world.

The first application period is now open through December 11, 9am PST, and the first class will start in early 2018. More classes will be hosted in 2018 and beyond.

What do we look for when selecting startups?

Each startup that applies to Launchpad Africa is evaluated carefully. Below are general guidelines behind our process to help you understand what we look for in our candidates.

All startups in the program must:

  • Be a technology startup.
  • Be based in Ghana, Kenya, Nigeria, South Africa, Tanzania, or Uganda (stay tuned for future classes, as we hope to add more countries).
  • Have already raised seed funding.

Additionally, we also consider:

  • The problem you're trying to solve. How does it create value for users? How are you addressing a real challenge for your home city, country, or Africa broadly?
  • Will you share what you learn for the benefit of other startups in your local ecosystem?

Anyone who spends time in the African technology space knows that the continent is home to some exciting innovations. Over the years, Google has worked with some incredible startups across Africa, tackling everything from healthcare and education to e-commerce and the food supply chain. We very much look forward to welcoming the first cohort of innovators to Launchpad Africa and to continuing to work together to drive innovation in the African market.

Help users find, interact & re-engage with your app on the Google Assistant

Posted by Brad Abrams, Product Manager
Every day, users are discovering new ways the Google Assistant and your apps can help them get things done. Today we're announcing a set of new features to make it easier for users to find, interact with, and re-engage with your app.

Helping users find your apps

With more international support and updates to the Google Assistant, it's easier than ever for users to find your app.
  • Updates to the app directory: We're adding "what's new" and "what's trending" sections in the app directory within the Assistant experience on your phone. These dynamic sections will constantly change and evolve, creating more opportunities for your app to be discovered by users in all supported locales where the Google Assistant and Actions on Google are available. We're also introducing autocomplete in the directory's search box, so if a user doesn't quite remember the name of your app, its name will populate as they type.
  • New subcategories: We've created subcategories in the app directory, so if you click on a category like "Food & Drink", apps are broken down into additional subcategories, like "Order Food" or "View a Menu." We're using your app's description and sample invocations to map users' natural search queries to the new task-based subcategories. The updated labelling taxonomy improves discovery for your app; it will now surface for users in all relevant subcategories depending on its various capabilities. This change will help you communicate to users everything your app can do, and creates new avenues for your app to be discovered – learn more here.
  • Implicit discovery: Implicit discovery is when a user is connected to your app using contextual queries (e.g., "book an appointment to fix my bike"), as opposed to calling for your app by name. We've created a new discovery section of the console to help improve your app's implicit discovery, providing instructions for creating precise action invocation phrases so your app will surface even when a user can't remember its name. Go here to learn more.
  • Badges for family-friendly apps: We're launching a new "For Families" badge on the Google Assistant, designed to help users find apps that are appropriate for all ages. All existing apps in the Apps for Families program will get the badge automatically. Learn about how your app can qualify for the "For Families" badge here.
  • International support: Users will soon be able to find your apps in even more languages because starting today, you can build apps in Spanish (US, MX and ES), Italian, Portuguese (BR) and English (IN). And in the UK, developers can now start building apps that have transactional capabilities. Watch the internationalization video to learn how to support multiple languages with Actions on Google.

Creating a more interactive user experience

Helping users find your app is one thing, but making sure they have a compelling, meaningful experience once they begin talking to your app is equally important – we're releasing some new features to help:
  • Speaker to phone transfer: We're launching a new API so you can develop experiences that start with the Assistant on voice-activated speakers like Google Home and can be passed off to users' phones. Need to send a map or complete a transaction using a phone? Check out the example below and click here to learn more.
  • Build personalized apps: To create a more personal experience for users, you can now enable your app to remember select information and preferences. Learn more here.
  • Better SSML: We recently rolled out an update to the web simulator, which includes a new SSML audio design experience. We now give you more options for creating natural, quality dialog using newly supported SSML tags, including <prosody>, <emphasis>, <audio> and others. The new tag <par> is coming soon and lets you add mood and richness, so you can play background music and ambient sounds while a user is having a conversation with your app. To help you get started, we've added over 1,000 sounds to the sound library. Listen to a brief SSML audio experiment that shows off some of the new features here 🔊; a minimal markup sketch follows this list.
  • Cancel event: Today when a user says "cancel" to end the conversation, your app never gets a chance to respond with a polite farewell message. Now you can get one last request to your webhook that you can use to clean up your fulfillment logic and respond to the user before they exit.
  • Account linking in conversation: Until today, users had to link their account to your app at the beginning of the interaction, before they had a chance to decide whether or not account linking was the right choice. With the updated AskForSignIn API, we're giving you the option of prompting users to link their account to your app at the most appropriate time of the experience.
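To make the SSML features above more concrete, here is a minimal Python sketch of a webhook reply that uses some of the newly supported tags. This is a hedged illustration rather than the official client library: the helper name build_ssml_response is hypothetical, and the JSON field names follow our understanding of the Actions on Google conversation webhook format, so check them against the current reference.

# Minimal sketch (not the official client library): an Actions on Google
# webhook reply that speaks SSML with <prosody>, <emphasis> and <audio>.
# The helper name and exact JSON field names are assumptions.
import json

def build_ssml_response(chime_url):
    ssml = (
        "<speak>"
        '<audio src="{url}">a welcome chime</audio>'
        '<prosody rate="medium" pitch="+2st">'
        "Welcome back! "
        '<emphasis level="moderate">What would you like to do today?</emphasis>'
        "</prosody>"
        "</speak>"
    ).format(url=chime_url)

    # expectUserResponse keeps the microphone open for the user's answer.
    return json.dumps({
        "expectUserResponse": True,
        "expectedInputs": [{
            "inputPrompt": {
                "richInitialPrompt": {
                    "items": [{"simpleResponse": {"ssml": ssml}}]
                }
            },
            "possibleIntents": [{"intent": "actions.intent.TEXT"}]
        }]
    })

print(build_ssml_response("https://example.com/sounds/chime.ogg"))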

Re-engaging with your users

To keep users coming back to your app, day after day, we're adding some additional features that you can experiment with – these are available this week for you to start testing and will roll out to users soon.
  • Daily updates: At the end of a great interaction with your app, a user might want to be notified of similar content from your app every day. To enable that we will add a suggestion chip prompting the user to sign up for a daily update. Check out the example below and go to the discovery section of the console to configure daily updates.
  • Push notifications: We're launching a new push notification API, enabling your app to push asynchronous updates to users. For the day trader who's looking for the best time to sell stock options, or the frugal shopper waiting for the big sale to buy a new pair of shoes, these alerts will show up as system notifications on the phone (and later to the Assistant on voice-activated speakers like Google Home).
  • Directory analytics: To give you more insight into how users interact with your app's listing in the mobile directory, we've updated the analytics tools in the console. You will be able to find information about your app's rating and the number of pageviews, along with the number of conversations initiated from your app directory listing.
Phew! I know that was a lot to cover, but that was only a brief overview of the updates we've made and we can't wait to see how you'll use these tools to unlock the Google Assistant's potential in new and creative ways.

Announcing TensorFlow Lite

Posted by the TensorFlow team
Today, we're happy to announce the developer preview of TensorFlow Lite, TensorFlow’s lightweight solution for mobile and embedded devices! TensorFlow has always run on many platforms, from racks of servers to tiny IoT devices, but as the adoption of machine learning models has grown exponentially over the last few years, so has the need to deploy them on mobile and embedded devices. TensorFlow Lite enables low-latency inference of on-device machine learning models.

It is designed from scratch to be:
  • Lightweight: Enables inference of on-device machine learning models with a small binary size and fast initialization/startup.
  • Cross-platform: A runtime designed to run on many different platforms, starting with Android and iOS.
  • Fast: Optimized for mobile devices, including dramatically improved model loading times and support for hardware acceleration.
More and more mobile devices today incorporate purpose-built custom hardware to process ML workloads more efficiently. TensorFlow Lite supports the Android Neural Networks API to take advantage of these new accelerators as they become available.
TensorFlow Lite falls back to optimized CPU execution when accelerator hardware is not available, which ensures your models can still run fast on a large set of devices.

Architecture

The following diagram shows the architectural design of TensorFlow Lite:
The individual components are:
  • TensorFlow Model: A trained TensorFlow model saved on disk.
  • TensorFlow Lite Converter: A program that converts the model to the TensorFlow Lite file format.
  • TensorFlow Lite Model File: A model file format based on FlatBuffers that has been optimized for maximum speed and minimum size.
The TensorFlow Lite Model File is then deployed within a Mobile App, where:
  • Java API: A convenience wrapper around the C++ API on Android
  • C++ API: Loads the TensorFlow Lite Model File and invokes the Interpreter. The same library is available on both Android and iOS
  • Interpreter: Executes the model using a set of operators. The interpreter supports selective operator loading; with no operators loaded it is only 70KB, and 300KB with all of the operators loaded. This is a significant reduction from the 1.5MB required by TensorFlow Mobile (with a normal set of operators).
  • On select Android devices, the Interpreter will use the Android Neural Networks API for hardware acceleration, falling back to CPU execution if it is not available.
Developers can also implement custom kernels using the C++ API and make them available to the Interpreter.
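To make the interpreter flow above concrete, here is a minimal Python sketch of loading a TensorFlow Lite model file and running inference. It uses the tf.lite.Interpreter Python API from later TensorFlow releases (the module path in this developer preview differs), and the model file name is a placeholder, so treat the details as assumptions rather than the preview's exact API.

# Minimal sketch: running inference against a TensorFlow Lite model file.
# Uses the tf.lite.Interpreter API from later TensorFlow releases; the
# module path in the developer preview differs, and "mobilenet.tflite"
# is a placeholder file name.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet.tflite")
interpreter.allocate_tensors()  # allocate buffers for all tensors

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build an input with the shape and dtype the model expects
# (e.g. a 1x224x224x3 image batch for MobileNet).
shape = input_details[0]["shape"]
image = np.random.random_sample(shape).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()  # run the model

scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class:", int(np.argmax(scores)))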

Models

TensorFlow Lite already has support for a number of models that have been trained and optimized for mobile:
  • MobileNet: A class of vision models able to identify objects across 1,000 different classes, specifically designed for efficient execution on mobile and embedded devices
  • Inception v3: An image recognition model, similar in functionality to MobileNet, that offers higher accuracy but also has a larger size
  • Smart Reply: An on-device conversational model that provides one-touch replies to incoming conversational chat messages. First-party and third-party messaging apps use this feature on Android Wear.
Inception v3 and MobileNets have been trained on the ImageNet dataset. You can easily retrain these on your own image datasets through transfer learning.
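As a rough illustration of that transfer-learning workflow, here is a minimal tf.keras sketch that swaps MobileNet's ImageNet head for a new classifier. The class count, input size, and training settings are placeholder assumptions, and the exact contents of tf.keras.applications depend on your TensorFlow version.

# Minimal sketch: retraining MobileNet on your own images via transfer
# learning. Class count, input size and training settings are placeholders;
# x_train / y_train stand in for your own image data and integer labels.
import tensorflow as tf

NUM_CLASSES = 5  # e.g. five kinds of flowers

# Load MobileNet pretrained on ImageNet, without its classification head.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False,
    weights="imagenet", pooling="avg")
base.trainable = False  # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(x_train, y_train, epochs=5, batch_size=32)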

What About TensorFlow Mobile?

As you may know, TensorFlow already supports mobile and embedded deployment of models through the TensorFlow Mobile API. Going forward, TensorFlow Lite should be seen as the evolution of TensorFlow Mobile, and as it matures it will become the recommended solution for deploying models on mobile and embedded devices. With this announcement, TensorFlow Lite is made available as a developer preview, and TensorFlow Mobile is still there to support production apps.
The scope of TensorFlow Lite is large and still under active development. With this developer preview, we have intentionally started with a constrained platform to ensure performance on some of the most important common models. We plan to prioritize future functional expansion based on the needs of our users. The goals for our continued development are to simplify the developer experience, and enable model deployment for a range of mobile and embedded devices.
We are excited that developers are getting their hands on TensorFlow Lite. We plan to support and address our external community with the same intensity as the rest of the TensorFlow project. We can't wait to see what you can do with TensorFlow Lite.
For more information, check out the TensorFlow Lite documentation pages.
Stay tuned for more updates.
Happy TensorFlow Lite coding!

Reminder: Grow with Google scholarship window closes soon

Posted by Peter Lubbers, Head of Google Developer Training

Last month, we announced the 50,000 Grow with Google scholarship challenge in partnership with Udacity. And today, we want to remind you to apply for the programs before the application window closes in just over a week on November 30th.

In case you missed the announcement details, the Google-Udacity curriculum was created to help developers get the training they need to enter the workforce as Android or mobile web developers. Whether you're an experienced programmer looking for a career change or a novice looking for a start, the courses and the Nanodegree programs are built with your career goals in mind and prepare you for Google's Associate Android Developer and Mobile Web Specialist developer certifications.

The scholarship challenge is an exciting chance to learn valuable skills to launch or advance your career as a mobile or web developer. The program leverages world-class curriculum, developed by experts from Google and Udacity. These courses are completely free, and as a reminder the top 5,000 students at the end of the challenge will earn a full Nanodegree scholarship to one of the four Nanodegree programs in Android or web development.

To learn more visit udacity.com/grow-with-google and submit your application before the scholarship window closes!

Best practices to succeed with Universal App campaigns

Posted by Sissie Hsiao, VP of Product, Mobile App Advertising

It's almost time to move all your AdWords app install campaigns to Universal App campaigns (UAC). Existing Search, Display and YouTube app promo campaigns will stop running on November 15th, so it's important to start upgrading to UAC as soon as possible.

With UAC, you can reach the right people across all of Google's largest properties like Google Search, Google Play, YouTube and the Google Display Network — all from one campaign. Marketers who are already using UAC to optimize in-app actions are seeing 140% more conversions per dollar, on average, than other Google app promotion products.1

One of my favorite apps, Maven, a car sharing service from General Motors (GM), is already seeing great success with UAC. According to Kristen Alexander, Marketing Manager: "Maven believes in connecting people with the moments that matter to them. This car sharing audience is largely urban millennials and UAC helps us find this unique, engaged audience across the scale of Google. UAC for Actions helped us increase monthly Android registrations in the Maven app by 51% between April and June."

Join Kristen and others who are already seeing better results with UAC by following some best practices, which I've shared in these blog posts:

Steer Performance with Goals

Create a separate UAC for each type of app user that you'd like to acquire — whether that's someone who will install your app or someone who will perform an in-app action after they've installed. Then increase the daily campaign budget for the UAC that's more important right now.

Optimize for the Right In-app Action

Track all important conversion events in your app to learn how users engage with it. Then pick an in-app action that's valuable to your business and is completed by at least 10 different people every day. This will give UAC enough data to find more users who will most likely complete the same in-app action.

Steer Performance with Creative Assets

Supply a healthy mix of creative assets (text, images and videos) that UAC can use to build ads optimized for your goal. Then use the Creative Asset Report to identify which assets are performing "Best" and which ones you should replace.

Follow these and other best practices to help you get positive results from your Universal App campaigns once you upgrade.

Notes


  1. Google Internal Data, July 2017 

Fun new ways developers are experimenting with voice interaction

Posted by Amit Pitaru, Creative Lab

Voice interaction has the potential to simplify the way we use technology. And with Dialogflow, Actions on Google, and Speech Synthesis API, it's becoming easier for any developer to create voice-based experiences. That's why we've created Voice Experiments, a site to showcase how developers are exploring voice interaction in all kinds of exciting new ways.

The site includes a few experiments that show how voice interaction can be used to explore music, gaming, storytelling, and more. MixLab makes it easier for anyone to create music, using simple voice commands. Mystery Animal puts a new spin on a classic game. And Story Speaker lets you create interactive, spoken stories by just writing in a Google Doc – no coding required.

You can try the experiments through the Google Assistant on your phone and on voice-activated speakers like the Google Home. Or you can try them on the web using a browser like Chrome.

It's still early days for voice interaction, and we're excited to see what you will make. Visit g.co/VoiceExperiments to play with the experiments or submit your own.

Announcing TensorFlow r1.4

Posted by the TensorFlow Team

TensorFlow release 1.4 is now public - and this is a big one! So we're happy to announce a number of new and exciting features we hope everyone will enjoy.

Keras

In 1.4, Keras has graduated from tf.contrib.keras to core package tf.keras. Keras is a hugely popular machine learning framework, consisting of high-level APIs to minimize the time between your ideas and working implementations. Keras integrates smoothly with other core TensorFlow functionality, including the Estimator API. In fact, you may construct an Estimator directly from any Keras model by calling the tf.keras.estimator.model_to_estimator function. With Keras now in TensorFlow core, you can rely on it for your production workflows.
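As a quick illustration, here is a minimal sketch of turning a compiled tf.keras model into an Estimator with tf.keras.estimator.model_to_estimator. The toy model is an assumption for illustration only, and your input_fn must feed a feature dict keyed by the model's input names.

# Minimal sketch: converting a compiled tf.keras model into an Estimator.
# The toy two-layer model below is an assumption for illustration only.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

estimator = tf.keras.estimator.model_to_estimator(keras_model=model)

# The resulting Estimator trains, evaluates and exports like any other;
# its input_fn must return a features dict keyed by the Keras input names
# (see model.input_names) alongside the labels.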

To get started with Keras, please read:

To get started with Estimators, please read:

Datasets

We're pleased to announce that the Dataset API has graduated to core package tf.data (from tf.contrib.data). The 1.4 version of the Dataset API also adds support for Python generators. We strongly recommend using the Dataset API to create input pipelines for TensorFlow models because:

  • The Dataset API provides more functionality than the older APIs (feed_dict or the queue-based pipelines).
  • The Dataset API performs better.
  • The Dataset API is cleaner and easier to use.

We're going to focus future development on the Dataset API rather than the older APIs.
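For illustration, here is a minimal input pipeline written in the TensorFlow 1.4 graph-mode style; the CSV file name, column layout, and batching parameters are placeholder assumptions.

# Minimal sketch: a tf.data input pipeline in the TensorFlow 1.4 style
# (graph mode, one-shot iterator). File name, column layout and batch
# size are placeholder assumptions.
import tensorflow as tf

def parse_line(line):
    # Split a CSV line into four float features and an integer label.
    fields = tf.decode_csv(line, record_defaults=[[0.0]] * 4 + [[0]])
    return tf.stack(fields[:4]), fields[4]

dataset = (tf.data.TextLineDataset("train.csv")
           .map(parse_line)
           .shuffle(buffer_size=1000)
           .batch(32)
           .repeat())

features, labels = dataset.make_one_shot_iterator().get_next()

with tf.Session() as sess:
    batch_x, batch_y = sess.run([features, labels])
    print(batch_x.shape, batch_y.shape)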

To get started with Datasets, please read:

Distributed Training & Evaluation for Estimators

Release 1.4 also introduces the utility function tf.estimator.train_and_evaluate, which simplifies training, evaluation, and exporting Estimator models. This function enables distributed execution for training and evaluation, while still supporting local execution.
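Here is a minimal sketch of that utility using a canned DNNClassifier. The feature columns, input functions, and step counts are placeholder assumptions; the same call works locally and picks up cluster configuration from TF_CONFIG when run distributed.

# Minimal sketch: training and evaluating an Estimator with
# tf.estimator.train_and_evaluate. The model, input functions and step
# counts are placeholder assumptions; random tensors stand in for data.
import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column("x", shape=[10])]
estimator = tf.estimator.DNNClassifier(
    hidden_units=[32, 16], feature_columns=feature_columns,
    model_dir="/tmp/demo_model")

def train_input_fn():
    data = {"x": tf.random_normal([100, 10])}
    labels = tf.zeros([100], dtype=tf.int32)
    dataset = tf.data.Dataset.from_tensor_slices((data, labels))
    return dataset.batch(16).repeat().make_one_shot_iterator().get_next()

def eval_input_fn():
    data = {"x": tf.random_normal([20, 10])}
    labels = tf.zeros([20], dtype=tf.int32)
    dataset = tf.data.Dataset.from_tensor_slices((data, labels))
    return dataset.batch(16).make_one_shot_iterator().get_next()

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=1000)
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn)

# Runs locally as written; when launched on a cluster it reads the
# distributed configuration from the TF_CONFIG environment variable.
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)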

Other Enhancements

Beyond the features called out in this announcement, 1.4 also introduces a number of additional enhancements, which are described in the Release Notes.

Installing TensorFlow 1.4

TensorFlow release 1.4 is now available using standard pip installation.

# Note: the following command will overwrite any existing TensorFlow
# installation.
$ pip install --ignore-installed --upgrade tensorflow
# Use pip for Python 2.7
# Use pip3 instead of pip for Python 3.x

We've updated the documentation on tensorflow.org to 1.4.

TensorFlow depends on contributors for enhancements. A big thank you to everyone helping develop TensorFlow! Don't hesitate to join the community and become a contributor by developing the source code on GitHub or helping answer questions on Stack Overflow.

We hope you enjoy all the features in this release.

Happy TensorFlow Coding!

Resonance Audio: Multi-platform spatial audio at scale

Posted by Eric Mauskopf, Product Manager

As humans, we rely on sound to guide us through our environment, help us communicate with others and connect us with what's happening around us. Whether walking along a busy city street or attending a packed music concert, we're able to hear hundreds of sounds coming from different directions. So when it comes to AR, VR, games and even 360 video, you need rich sound to create an engaging immersive experience that makes you feel like you're really there. Today, we're releasing a new spatial audio software development kit (SDK) called Resonance Audio. It's based on technology from Google's VR Audio SDK, and it works at scale across mobile and desktop platforms.

Experience spatial audio in our Audio Factory VR app for Daydream and SteamVR

Performance that scales on mobile and desktop

Bringing rich, dynamic audio environments into your VR, AR, gaming, or video experiences without affecting performance can be challenging. There are often few CPU resources allocated for audio, especially on mobile, which can limit the number of simultaneous high-fidelity 3D sound sources for complex environments. The SDK uses highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality, even on mobile. We're also introducing a new feature in Unity for precomputing highly realistic reverb effects that accurately match the acoustic properties of the environment, reducing CPU usage significantly during playback.

Using geometry-based reverb by assigning acoustic materials to a cathedral in Unity

Multi-platform support for developers and sound designers

We know how important it is that audio solutions integrate seamlessly with your preferred audio middleware and sound design tools. With Resonance Audio, we've released cross-platform SDKs for the most popular game engines, audio engines, and digital audio workstations (DAW) to streamline workflows, so you can focus on creating more immersive audio. The SDKs run on Android, iOS, Windows, MacOS and Linux platforms and provide integrations for Unity, Unreal Engine, FMOD, Wwise and DAWs. We also provide native APIs for C/C++, Java, Objective-C and the web. This multi-platform support enables developers to implement sound designs once, and easily deploy their project with consistent sounding results across the top mobile and desktop platforms. Sound designers can save time by using our new DAW plugin for accurately monitoring spatial audio that's destined for YouTube videos or apps developed with Resonance Audio SDKs. Web developers get the open source Resonance Audio Web SDK that works in the top web browsers by using the Web Audio API.

DAW plugin for sound designers to monitor audio destined for YouTube 360 videos or apps developed with the SDK

Cutting-edge features to model complex sound environments

By providing powerful tools for accurately modeling complex sound environments, Resonance Audio goes beyond basic 3D spatialization. The SDK enables developers to control the direction in which acoustic waves propagate from sound sources. For example, when you stand behind a guitar player, the guitar can sound quieter than when you stand in front, and when you face the guitar it can sound louder than when your back is turned.

Controlling sound wave directivity for an acoustic guitar using the SDK

Another SDK feature is automatically rendering near-field effects when sound sources get close to a listener's head, providing an accurate perception of distance, even when sources are close to the ear. The SDK also enables sound source spread, by specifying the width of the source, allowing sound to be simulated from a tiny point in space up to a wall of sound. We've also released an Ambisonic recording tool to spatially capture your sound design directly within Unity, save it to a file, and use it anywhere Ambisonic soundfield playback is supported, from game engines to YouTube videos.

If you're interested in creating rich, immersive soundscapes using cutting-edge spatial audio technology, check out the Resonance Audio documentation on our developer site, let us know what you think through GitHub, and show us what you build with #ResonanceAudio on social media; we'll be resharing our favorites.