Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Google Developers Launchpad introduces The Lever, sharing applied-Machine Learning best practices

Posted by Malika Cantor, Program Manager for Launchpad

The Lever is Google Developers Launchpad's new resource for sharing applied Machine Learning (ML) content to help startups innovate and thrive. It is operated by Launchpad, Google's global startup acceleration program, in partnership with experts and leaders across Google and Alphabet. The Lever will publish the Launchpad community's experiences of integrating ML into products, including case studies, insights from mentors, and best practices from Google and global thought leaders.

Peter Norvig, Google ML Research Director, and Cassie Kozyrkov, Google Cloud Chief Decision Scientist, are editors of the publication. Hear from them and other Googlers on the importance of developing and sharing applied ML product and business methodologies:

Peter Norvig (Google ML Research, Director): "The software industry has had 50 years to perfect a methodology of software development. In Machine Learning, we've only had a few years, so companies need to pay more attention to the process in order to create products that are reliable, up-to-date, have good accuracy, and are respectful of their customers' private data."

Cassie Kozyrkov (Chief Decision Scientist, Google Cloud): "We live in exciting times where the contributions of researchers have finally made it possible for non-experts to do amazing things with Artificial Intelligence. Now that anyone can stand on the shoulders of giants, process-oriented avenues of inquiry around how to best apply ML are coming to the forefront. Among these is decision intelligence engineering: a new approach to ML, focusing on how to discover opportunities and build towards safe, effective, and reliable solutions. The world is poised to make data more useful than ever before!"

Clemens Mewald (Lead, Machine Learning X and TensorFlow X): "ML/AI has had a profound impact in many areas, but I would argue that we're still very early in this journey. Many applications of ML are incremental improvements on existing features and products. Video recommendations are more relevant, ads have become more targeted and personalized. However, as Sundar said, AI is more profound than electricity (or fire). Electricity enabled modern technology, computing, and the internet. What new products will be enabled by ML/AI? I am convinced that the right ML product methodologies will help lead the way to magical products that have previously been unthinkable."

We invite you to follow the publication, and actively comment on our blog posts to share your own experience and insights.

5 Tips for Developing Actions with the New Actions Console

Posted by Zachary Senzer, Product Manager

A couple of months ago at Google I/O, we announced a redesigned Actions console that makes developing your Actions easier than ever before. The new console offers a more seamless development experience that tailors your workflow from onboarding through deployment, along with analytics to help you manage your Actions post-launch. Simply select your use case during onboarding, and the Actions console will guide you through the different stages of development.

Here are 5 tips to help you create the best Actions for your content using our new console.

1. Optimize your Actions for new surfaces with theme customization

Part of what makes the Actions on Google ecosystem so special is the vast array of devices that people can use to interact with your Actions. Some of these devices, including phones and our new Smart Displays, allow users to have rich visual interactions with your content. To help your Actions stand out, you can customize how these visual experiences appear to users. Simply visit the "Build" tab in the Actions console and go to theme customization, where you can specify background images, typography, colors, and more for your Actions.

2. Start to make your Actions easier to discover with built-in intents

Conversational experiences introduce complexity in how people ask to complete a task related to your Action: a user could ask for a game in thousands of different ways ("play a game for me", "find a maps quiz", "I want some trivia"). Figuring out all of the ways a user might ask for your Action is difficult, so we're beginning to map those phrasings into a taxonomy of built-in intents that abstracts this difficulty away.

We'll start to use the built-in intent you associate with your Action to help users discover your content more easily as we begin testing built-in intents against users' queries. We'll continue to add many more built-in intents over the coming months to cover a variety of use cases. In the Actions console, go to the "Build" tab, click "Actions", then "Add Action", and select a built-in intent to get started.
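If you build with the Actions SDK rather than the console UI, the same association lives in your action package. Below is a rough sketch of what such a fragment might look like, assuming actions.intent.PLAY_GAME as the built-in intent; the action name, description, and conversation name are placeholders:

{
  "actions": [
    {
      "description": "Launch my trivia game",
      "name": "PLAY_GAME",
      "fulfillment": { "conversationName": "my_trivia_game" },
      "intent": { "name": "actions.intent.PLAY_GAME" }
    }
  ]
}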

3. Promote your Actions with Action Links

While we'll continue to improve the ways users find your Actions within the Assistant, we've also made it easier for users to find your Actions outside the Assistant. Driving new traffic to your Actions is as easy as a click with Action Links. You now have the ability to define hyperlinks for each of your Actions to be used on your website, social media, email newsletters, and more. These links will launch users directly into your Action. If used on a desktop, the link will take users to the directory page for your Action, where they'll have the ability to choose the device they want to try your Action on. To configure Action Links in the console, visit the "Build" tab, choose "Actions", and select the Action for which you would like to create a link. That's it!
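Note that the console generates each Action Link for you, so there is nothing to hand-craft. For reference, a generated link takes roughly this shape (the identifier and intent name below are placeholders, not a real Action):

https://assistant.google.com/services/invoke/uid/0000001234abcdef?intent=Play%20Game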

4. Ensure your Actions are high-quality by testing using our web simulator and alpha/beta environments

The best way to make sure that your Actions work as intended is to test them in our updated web simulator. In the simulator, you can run through conversational user flows on phone, speaker, and even smart display device types. After you issue a request, you can see the visual response, the request and response JSON, and any errors. For further help with debugging, you can also view logs for your Actions.

Another great way to test your Actions is to deploy them to limited audiences in alpha and beta environments. Actions deployed to the alpha environment don't need to go through the review process, so you can quickly test with your users. Once you've deployed to the beta environment, you can launch your Actions to production whenever you like without additional review. To use the alpha and beta environments, go to the "Deploy" tab and click "Release" in the Actions console.

5. Measure your success using analytics

After you deploy your Actions, it's equally important to measure their performance. By visiting the "Measure" tab and clicking "Analytics" in the Actions console, you can view rich analytics on usage, health, and discovery. You can easily see how many people are using and returning to your Actions, how many errors users are encountering, the phrases users are saying to discover your Actions, and much, much more. These insights can help you improve your Actions.


If you're new to the Actions console and looking for a quick way to get started, watch this video for an overview of the development process.

We're so excited to see how you will use the new Actions console to create even more Actions for more use cases, with additional tools to improve and iterate. Happy building!

Designing for the Google Assistant on Smart Displays

Posted by Saba Zaidi, Senior Interaction Designer, Google Assistant

Earlier this year we announced Smart Displays, a new category of devices with the Google Assistant built in that augment voice experiences with immersive visuals. These new, highly visual devices can make it easier to convey complex information, suggest Actions, support transactions, and express your brand. Starting today, Smart Displays are available for purchase at major US retailers, both in-store and online.

Interacting through voice is fast and easy because speaking comes naturally to people, and, unlike traditional visual interfaces, language doesn't constrain people to predefined paths. However, in audio-only interfaces, it can be difficult to communicate detailed information like lists or tables, and nearly impossible to represent rich content like images, charts, or a visual brand identity. Smart Displays allow you to create Actions for the Assistant that can respond to natural conversation, and also display information and represent your brand in an immersive, visual way.

Today we're announcing consumer availability of rich responses optimized for Smart Displays. With rich responses, you can use basic cards, lists, tables, carousels, and suggestion chips, giving you an array of visual interactions for your Action, with more visual components coming soon. You can also create custom themes to customize your Action's look and feel more deeply.
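If your Action is fulfilled by a webhook, a rich response is simply part of the JSON your webhook returns. As a minimal sketch, here is the Python dictionary a Dialogflow webhook might serialize to pair a spoken prompt with a basic card and suggestion chips; the text, title, and image URL are placeholders:

response = {
    'payload': {
        'google': {
            'expectUserResponse': True,
            'richResponse': {
                'items': [
                    # The first item must be a simple (spoken) response.
                    {'simpleResponse': {
                        'textToSpeech': 'Here is the anthem you asked for.'}},
                    {'basicCard': {
                        'title': 'The Star-Spangled Banner',
                        'image': {
                            'url': 'https://example.com/us-flag.png',
                            'accessibilityText': 'US flag'}}},
                ],
                # Suggestion chips surface quick follow-ups the user can tap.
                'suggestions': [
                    {'title': 'Surprise me'},
                    {'title': 'Most popular'}],
            },
        },
    },
}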

If you've already built a voice-centric Action for the Google Assistant, not to worry, it'll work automatically on Smart Displays. But we highly recommend adding rich responses and custom themes to make your Action even more visually engaging and useful to your users on Smart Displays. Here are a few tips to get you started:

1. Consider using visual components instead of complex voice prompts

Smart Displays offer several visual formats for displaying information and facilitating user input. A carousel of images, a list or a table can help users scan information efficiently and then interact with a quick tap or swipe.

For example, consider a long, spoken prompt like: "Welcome to National Anthems! You can play the national anthems from 20 different countries, including the United States, Canada and the United Kingdom. Which would you like to hear?"

Instead of merely showing the transcript of that whole spoken prompt on the screen, a carousel of country flags makes it easy for users to scroll and tap the anthem they want to hear.

2. Use visual suggestions to streamline the conversation

Suggestion chips are a great way to surface recommendations, aid feature discovery and keep the conversation moving on Smart Displays.

In this example, suggestion chips can help users find the "surprise me" feature, find the most popular anthems, or filter anthems by region.

3. Express your brand with themes

You can take advantage of new custom themes to differentiate your experience and represent your brand's persona, choosing a custom voice, background image or color, font style, or the shape of your cards to match your branding.

For example, an Action like California Surf Report could be themed in a more immersive and customized way.

4. Check out our library of developer resources

We offer more tips on designing and building for Smart Displays and other visual devices on our conversation design site and in our talk from I/O about how to design Actions across devices.

Then head to our documentation to learn how to customize the visual appearance of your Actions with rich responses. You can also test and tinker with customizations for Smart Displays in the Actions Console simulator.

Don't forget that once you publish your first Action you can join our community program* and receive your exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit.

We can't wait to see—quite literally—what you build next! Thanks for being a part of our community, and as always, if you have ideas or requests that you'd like to share with our team, don't hesitate to join the conversation.


*Some countries are not eligible to participate in the developer community program; please review the terms and conditions.

New AIY Edge TPU Boards

Posted by Billy Rutledge, Director of AIY Projects

Over the past year and a half, we've seen more than 200K people build, modify, and create with our Voice Kit and Vision Kit products. Today at Cloud Next we announced two new devices to help professional engineers build new products with on-device machine learning (ML) at their core: the AIY Edge TPU Dev Board and the AIY Edge TPU Accelerator. Both are powered by Google's Edge TPU and represent our first steps towards expanding AIY into a platform for experimentation with on-device ML.

The Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite ML models on your device. We've learned that performance-per-watt and performance-per-dollar are critical benchmarks when processing neural networks within a small footprint. The Edge TPU delivers both in a package that's smaller than the head of a penny. It can accelerate ML inferencing on device, or can pair with Google Cloud to create a full cloud-to-edge ML stack. In either configuration, by processing data directly on-device, a local ML accelerator increases privacy, removes the need for persistent connections, reduces latency, and allows for high performance using less power.
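To make that concrete, here is a minimal sketch of on-device inferencing with the TensorFlow Lite Python interpreter. The model path is a placeholder, and on Edge TPU hardware the model would additionally be compiled for, and delegated to, the TPU rather than run on the CPU:

import numpy as np
import tensorflow as tf

# Load a TensorFlow Lite model and allocate its input/output tensors.
interpreter = tf.lite.Interpreter(model_path='mobilenet_v1.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run inference on one (dummy) input, e.g. a preprocessed camera frame.
frame = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]['index'])
print('Top class index:', scores.argmax())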

The AIY Edge TPU Dev Board is an all-in-one development board that allows you to prototype embedded systems that demand fast ML inferencing. The baseboard provides all the peripheral connections you need to prototype your device effectively, including a 40-pin GPIO header to integrate with various electrical components. The board also features a removable system-on-module (SOM) daughter board that can be directly integrated into your own hardware once you're ready to scale.

The AIY Edge TPU Accelerator is a neural network coprocessor for your existing system. This small USB-C stick can connect to any Linux-based system to perform accelerated ML inferencing. The casing includes mounting holes for attachment to host boards such as a Raspberry Pi Zero or your custom device.

On-device ML is still in its early days, and we're excited to see how these two products can be applied to solve real world problems — such as increasing manufacturing equipment reliability, detecting quality control issues in products, tracking retail foot-traffic, building adaptive automotive sensing systems, and more applications that haven't been imagined yet.

Both devices will be available online this fall in the US with other countries to follow shortly.

For more product information visit g.co/aiy and sign up to be notified as products become available.

New Dialogflow features: how to use them to expand your Actions’ customer support capabilities

Posted by Mary Chen, Product Marketing Manager, and Ralfi Nahmias, Product Manager, Dialogflow

Today at Google Cloud Next '18, Dialogflow is introducing several new beta features to expand conversational capabilities for customer support and contact centers. Let's take a look at how three of these features can be used with the Google Assistant to improve the customer care experience for your Actions.

Create Actions smarter and faster with Knowledge Connectors Beta

Building conversational Actions for content-heavy use cases, such as FAQ or knowledge base answers, is difficult. Such content is often dense and unstructured, making accurate intent modeling time-consuming and prone to error. Dialogflow's Knowledge Connectors feature simplifies the development process by understanding and automatically curating questions and responses from the content you provide. It can add thousands of extracted responses directly to your conversational Action built with Dialogflow, giving you more time for the fun parts – building rich and engaging user experiences.

Try out Knowledge Connectors in this bike shop sample
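Once a knowledge connector is enabled, queries flow through the same detect-intent path as any other Dialogflow request. Here's a minimal sketch using the Dialogflow v2beta1 Python client, assuming a GCP project whose agent has a knowledge connector configured; the project ID, session ID, and query are placeholders:

import dialogflow_v2beta1 as dialogflow

# Send a single text query to the agent; answers extracted by knowledge
# connectors are returned in the query result alongside intent matches.
session_client = dialogflow.SessionsClient()
session = session_client.session_path('my-gcp-project', 'session-123')
text_input = dialogflow.types.TextInput(
    text='What are your store hours?', language_code='en-US')
query_input = dialogflow.types.QueryInput(text=text_input)
response = session_client.detect_intent(
    session=session, query_input=query_input)
print(response.query_result.fulfillment_text)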

Understand user texts better with Automatic Spelling Correction

When users interact with the Google Assistant through text, it's common and natural to make spelling and grammar mistakes. When typos occur, Actions may not understand the user's intent, resulting in a poor follow-up experience. With Dialogflow's Automatic Spelling Correction, Actions built with Dialogflow can automatically correct spelling mistakes, which significantly improves intent and entity matching. Automatic Spelling Correction uses technology similar to what's used in Google Search and other Google products.

Enable Automatic Spelling Correction to improve intent and entity matching

Assign a phone number to your Action with Phone Gateway Beta

Your Action can now be used as a virtual phone agent with Dialogflow's new Phone Gateway integration. Assign a working phone number to your Action built with Dialogflow, and it can start taking calls immediately. Phone Gateway allows you to easily implement virtual agents without needing to stitch together multiple services required for building phone applications.

Set up Phone Gateway in 3 easy steps

Dialogflow's Knowledge Connectors, Automatic Spelling Correction, and Phone Gateway are free for Standard Edition agents up to certain limits; for enterprise needs, see here for more options.

We look forward to the Actions you'll build with these new Dialogflow features. Give the features a try with the Cloud Next FAQ Action we made:

  • Download the GitHub sample
  • Say "Hey Google, talk to Next helper" on your Google Assistant-enabled device
  • Call +1 317-978-0364 (which uses Dialogflow's Phone Gateway)

And if you're new to developing for the Google Assistant, join our Cloud Next talk this Thursday at 9am – see you on the livestream or in person!

10 must-see G Suite developer sessions at Google Cloud Next ‘18

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Google Cloud Next '18 is only a few days away, and this year, there are over 500 sessions covering all aspects of cloud computing, from G Suite to the Google Cloud Platform. This is your chance to learn first-hand how to build custom solutions in G Suite alongside other developers from Independent Software Vendors (ISVs), systems integrators (SIs), and industry enterprises.

G Suite's intelligent productivity apps are secure, smart, and simple to use, so why not integrate your apps with them? If you're planning to attend the event and are wondering which sessions you should check out, here are some sessions to consider:

  • "Power Your Apps with Gmail, Google Drive, Calendar, Sheets, Slides, and More!" on Tuesday, July 24th. Join me as I lead this session that provides a high-level technical overview of the various ways you can build with G Suite. This is a great place to start before attending deeper technical sessions.
  • "Power your apps with Gmail, Google Drive, Calendar, Sheets, Slides and more" on Monday, July 23rd and Friday, July 27th. Join me for one of our half-day bootcamps! Both are identical and bookend the conference—one on Monday and another on Friday, meaning you can do either one and still make it to all the other conference sessions. While named the same as the technical overview above, the bootcamps dive a bit deeper and feature more detailed tech talks on Google Apps Script, the G Suite REST APIs, and App Maker. The three (or more!) hands-on codelabs will leave you with working code that you can start customizing for your own apps on the job! Register today to ensure you get a seat.
  • "Automating G Suite: Apps Script & Sheets Macro Recorder" and "Enhancing the Google Apps Script Developer Experience" both on Tuesday, July 24th. Interested in Google Apps Script, our customized serverless JavaScript runtime used to automate, integrate, and extend G Suite? The first session introduces developers and ITDMs to new features as well as real business use cases while the other dives into recent features that make Apps Script more friendly for the professional developer.
  • "G Suite + GCP: Building Serverless Applications with All of Google Cloud" on Wednesday, July 25th. This session is your chance to attend one of the few hybrid talks that look at how you can build applications on both the GCP and G Suite platforms. Learn about serverless (a topic that's become more and more popular over the past year) and see examples on both platforms with a pair of demos that showcase how you can take advantage of GCP tools from a G Suite serverless app, and how you can process G Suite data driven by GCP serverless functions. I'm also leading this session and eager to show how you can leverage the strengths of each platform together in the same applications.
  • "Build apps your business needs, with App Maker" and "How to Build Enterprise Workflows with App Maker" on Tuesday, July 24th and Thursday, July 26th, respectively. Google App Maker is a new low-code development environment that makes it easy to build custom apps for work. It's great for business analysts, technical managers, or data scientists who may not have software engineering resources. With a drag-and-drop UI, built-in templates, and point-and-click data modeling, App Maker lets you go from idea to app in minutes! Learn all about it in our pair of App Maker talks featuring our Developer Advocate, Chris Schalk.
  • "The Google Docs, Sheets & Slides Ecosystem: Stronger than ever, and growing" and "Building on the Docs Editors: APIs and Apps Script" on Wednesday, July 25th and Thursday, July 26th, respectively. Check out this pair of talks to learn more about how to write apps that integrate with the Google Docs editors (Docs, Sheets, Slides, Forms). The first describes the G Suite productivity tools' growing interoperability in the enterprise, while the second focuses on the different integration options available to developers, using either Google Apps Script or the REST APIs.
  • "Get Productive with Gmail Add-ons" on Tuesday, July 24th. We launched Gmail Add-ons less than a year ago to help developers integrate their apps alongside Gmail. Check out this video I made to help you get up to speed on Gmail Add-ons! This session is for developers who are new to Gmail Add-ons or who want to hear the latest from the Gmail Add-ons and API team.

I look forward to meeting you in person at Next '18. In the meantime, check out the entire session schedule to find out everything it has to offer. Don't forget to swing by our "Meet the Experts" office hours (Tue-Thu), G Suite "Collaboration & Productivity" showcase demos (Tue-Thu), the G Suite Birds-of-a-Feather meetup (Wed), and the Google Apps Script & G Suite Add-ons meetup (just after the BoF on Wed). I'm excited at how we can use "all the tech" to change the world. See you soon!

DevFest 2018 Kickoff!

Posted by Erica Hanson, Program Manager in Developer Relations

Google Developers is proud to announce DevFest 2018, the largest annual community event series for the Google Developer Groups (GDG) program. Hundreds of GDG chapters around the world will host their biggest and most exciting developer event of the year. These are often all-day or multi-day events with many speakers and workshops, highlighting a wide range of Google developer products. DevFest season runs from August to November 2018.

Our GDG organizers and communities are getting ready for the season, and are excited to host an event near you!

Whether you're an established developer, new to tech, or just curious about the community, come check out #DevFest18. Everyone is invited!

For more information on DevFest 2018 and to find an event near you, visit the site.

Hangouts Chat alerts & notifications… with asynchronous messages

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

While most chatbots respond to user requests in a synchronous way, there are scenarios when bots don't perform actions based on an explicit user request, such as for alerts or notifications. In today's DevByte video, I'm going to show you how to send messages asynchronously to rooms or direct messages (DMs) in Hangouts Chat, the team collaboration and communication tool in G Suite.

What comes to mind when you think of a bot in a chat room? Perhaps a user wants the last quarter's European sales numbers, or maybe, they want to look up local weather or the next movie showtime. Assuming there's a bot for whatever the request is, a user will either send a direct message (DM) to that bot or @mention the bot from within a chat room. The bot then fields the request (sent to it by the Hangouts Chat service), performs any necessary magic, and responds back to the user in that "space," the generic nomenclature for a room or DM.

Our previous DevByte video for the Hangouts Chat bot framework shows developers what bots and the framework are all about, as well as how to build one of these bots in both Python and JavaScript. However, those bots respond synchronously to a user request. That doesn't suffice when users want to be notified when a long-running background job has completed, when a late bus or train will be arriving soon, or when one of their servers has just gone down. Such alerts can come from a bot, but they can also come from a monitoring application. In the latest episode of the G Suite Dev Show, learn how to integrate this functionality into either type of application.

From the video, you can see that alerts and notifications are "out-of-band" messages, meaning they can come in at any time. The Hangouts Chat bot framework provides several ways to send asynchronous messages to a room or DM, generically referred to as a "space." The first is the HTTP-based REST API. The other way is using what are known as "incoming webhooks."

The REST API is used by bots to send messages into a space. Since a bot will never be a human user, a Google service account is required. Once you create a service account for your Hangouts Chat bot in the developers console, you can download the credentials it needs to communicate with the API. Below is a short Python snippet that uses the API to send a message asynchronously to a space.

from httplib2 import Http
from googleapiclient import discovery
from oauth2client.service_account import ServiceAccountCredentials

# Authorize as the bot's service account; the chat.bot scope lets the
# bot post messages into any space it has been added to.
SCOPES = 'https://www.googleapis.com/auth/chat.bot'
creds = ServiceAccountCredentials.from_json_keyfile_name(
    'svc_acct.json', SCOPES)
CHAT = discovery.build('chat', 'v1', http=creds.authorize(Http()))

# Send a simple text message into a room or DM, asynchronously.
room = 'spaces/<ROOM-or-DM>'
message = {'text': 'Hello world!'}
CHAT.spaces().messages().create(parent=room, body=message).execute()

The alternative to using the API with service accounts is the concept of incoming webhooks. Webhooks are a quick and easy way to send messages into any room or DM without configuring a full bot, which makes them a good fit for monitoring apps. Webhooks also allow you to integrate your custom workflows, such as when a new customer is added to the corporate CRM (customer relationship management system), as well as the other scenarios mentioned above. Below is a Python snippet that uses an incoming webhook to post into a space asynchronously.

import requests

# An incoming webhook URL is bound to a specific room or DM; the
# (truncated) URL below also targets an existing thread via thread_key.
URL = 'https://chat.googleapis.com/...&thread_key=T12345'
message = {'text': 'Hello world!'}
requests.post(URL, json=message)  # serializes the payload as JSON

Since incoming webhooks are merely endpoints you HTTP POST to, you can even use curl to send a message to a Hangouts Chat space from the command-line:

curl \
-X POST \
-H 'Content-Type: application/json' \
'https://chat.googleapis.com/...&thread_key=T12345' \
-d '{"text": "Hello!"}'

To get started, take a look at the Hangouts Chat developer documentation, especially the specific pages linked to above. We hope this video helps you take your bot development skills to the next level by showing you how to send messages to the Hangouts Chat service asynchronously.

Launching the Indie Games Accelerator in Asia – helping gaming startups find success on Google Play

Posted by Anuj Gulati, Developer Marketing Manager, Google Play and Sami Kizilbash, Developer Relations Program Manager, Google

Emerging markets now account for more than 40% of game installs on Google Play. Rapid smartphone adoption in these regions presents a new base of engaged gamers that are looking for high quality mobile gaming experiences. At Google Play, we are focused on helping local game developers from these markets achieve their full potential and make the most of this opportunity.

Indie Games Accelerator is a new initiative to support top indie game startups from India, Indonesia, Malaysia, Pakistan, the Philippines, Singapore, Thailand, and Vietnam that are looking to supercharge their growth on Android. This four-month program is a special edition of Launchpad Accelerator, designed in close collaboration with Google Play, featuring a comprehensive gaming curriculum and mentorship from top mobile gaming experts.

Successful participants will be invited to attend two all-expense-paid gaming bootcamps at the Google Asia-Pacific office in Singapore, where they will receive personalized mentorship from Google teams and industry experts. Additional benefits include Google Cloud Platform credits, invites to exclusive Google and industry events, and more.

Visit the program website to find out more and apply now.

Flutter Release Preview 1: Live from GMTC in Beijing

Posted by the Flutter Team at Google

Today at the GMTC front-end conference in Beijing, we announced Flutter Release Preview 1, signaling a new phase of development for Flutter as we move into the final stages of stabilization for 1.0.

Google I/O last month was something of a celebration for the Flutter team: having reached beta, it was good to meet with many developers who are learning, prototyping, or building with Flutter. Since Google I/O, we've continued to see rapid growth in the Flutter ecosystem, with a 50% increase in active Flutter users. We've also seen over 150 individual Flutter events taking place across fifty countries: from New York City to Uyo, Nigeria; from Tokyo and Osaka in Japan to Nuremberg, Germany.

One common measure of community momentum is the number of GitHub stars, and we've also seen tremendous growth here, with Flutter becoming one of the top 100 software repos on GitHub in May.

Announcing Flutter Release Preview 1

Today we're taking another big step forward with the immediate availability of Flutter Release Preview 1. It seems particularly auspicious to make this announcement in Beijing at the GMTC Global Front-End Conference: China has the third-largest population of developers using Flutter, after the USA and India. Companies such as Alibaba and Tencent are already adopting Flutter for production apps, and a growing local community is translating content and adding packages and mirrors for Chinese developers.

The shift from beta to release preview with this release signals our confidence in the stability and quality of what we have with Flutter, and our focus on bug fixing and stabilization.

We've posted a longer article with details on what's new in Flutter Release Preview 1 over at our Medium channel. You can download Flutter Release Preview 1 directly from the Flutter website, or simply run flutter upgrade from an existing installation.

It's been fun to watch others encounter Flutter for the first time. This article from an iOS developer who has recently completed porting an iOS app to Flutter is a positive endorsement of the project's readiness for real-world production usage:

"I haven't been this excited about a technology since Ruby on Rails or Go… After dedicating years to learning iOS app dev in-depth, it killed me that I was alienating so many Android friends out there. Also, learning other cross platform frameworks at the time was super unattractive to me because of what was available… Writing a Flutter app has been a litmus test and Flutter passed the test. Flutter is something I feel like I can really invest in and most importantly, really enjoy using."

As we get ever closer to publishing our first release from the "stable" channel, we're ready for more developers to build and deploy solutions that use this Release Preview. There are plenty of training offerings to help you learn Flutter: from I/O sessions to newsletters to hands-on videos to developer shows. We're excited to see what you build!