Tag Archives: actions on google

International Women’s Day’19 featuring Actions on Google

Posted by Marisa Pareti, Rubi Martinez & Jessica Earley-Cha

In celebration of International Women’s Day, Women Techmakers hosted its sixth annual summit series to acknowledge and celebrate women in the tech industry, and to create a space for attendees to build community, hear from industry leaders, and learn new skills. The series featured 19 summits and 305 meetups across 87 countries.

This year, Women Techmakers partnered with the Actions on Google team to host technical workshops at these events so attendees could learn the fundamental concepts for developing Actions for the Google Assistant. Together, we created hundreds of new Actions for the Assistant. Check out some of the highlights of this year’s summit in the video below:

Technical Workshop Details

If you couldn’t attend any of our meetups this past year, we’ll cover our technical workshops now so you can start building for the Assistant from home. The technical workshop kicked off by introducing Actions on Google — the platform that enables developers to build Actions for the Google Assistant. Participants got hands-on experience building their first Action with the following features:

  • Users can start a conversation by explicitly calling the Action by name, which then responds with a greeting message.
  • Once in conversation, users are prompted to provide their favorite color. The Action parses the user’s input to extract the information it needs (namely, the color parameter).
  • If a color is provided, the Action processes the color parameter to auto-generate a “lucky number” to send back to the user and the conversation ends.
  • If no color is provided, the Action sends the user additional prompts until the parameter is extracted.
  • Users can explicitly leave the conversation at any time.

During Codelab level 1, participants learned how to parse the user’s input using Dialogflow, a machine-learning-powered tool that served as their natural language processor (NLP). Dialogflow processes what the user says and extracts the important information from that input to identify how to fulfill the user’s request. Participants configured Dialogflow and connected it to their back-end code using Dialogflow’s inline editor. In the editor, participants added their code and tested their Action in the Action Simulator.
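Conceptually, the fulfillment behind this first Action is small. Here is a library-free sketch of the logic (the real codelab uses the actions-on-google Node.js client library; the handler shape and the length-based lucky-number rule here are illustrative assumptions):

```javascript
// Simplified sketch of the codelab's fulfillment logic, written without
// the actions-on-google client library. The handler shape and the
// length-based "lucky number" rule are illustrative assumptions.

// Turn the extracted color parameter into a "lucky number".
function luckyNumber(color) {
  return color.length;
}

// Handle the favorite-color intent: if Dialogflow extracted a color,
// respond and end the conversation; otherwise, prompt again.
function favoriteColorHandler(params) {
  if (params.color) {
    return {
      response: `Your lucky number is ${luckyNumber(params.color)}.`,
      endConversation: true,
    };
  }
  return {
    response: 'What is your favorite color?',
    endConversation: false,
  };
}
```

In the codelab itself, this logic runs inside Dialogflow's inline editor (a Cloud Function under the hood), with the client library handling request parsing and response formatting.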

In Codelab level 2, participants continued building on their Action, adding features so that it:

  • Supports deep links to directly launch the user into certain points of dialog
  • Uses utilities provided by the Actions on Google platform to fetch the user’s name and address them personally
  • Responds with follow-up questions to further the conversation
  • Presents users with a rich visual response complete with sound effects

Instead of using Dialogflow’s inline editor, participants set up Cloud Functions for Firebase as their server.

You can learn more about developing your own Actions here. To support developers’ efforts in building great Actions for the Google Assistant, the team also has a developer community program.

Alex Eremia, a workshop attendee, reflected, “I think voice applications will have a huge impact on society both today and in the future. It will become a natural way we interact with the items around us.”

From keynotes and fireside chats to interactive workshops, Women Techmakers summit attendees enjoyed a mix of technical and inspirational content. If you’re interested in learning more and getting involved, follow WTM on Twitter, check out our website, and sign up to become a member.

To learn more about Actions on Google and how to build for the Google Assistant, be sure to follow us on Twitter, and join our Reddit community!

Developer Preview of Local Home SDK

Posted by Toni Klopfenstein

Recently at Google I/O, we gave you a sneak peek at our new Local Home SDK, a suite of local technologies to enhance your smart home integrations. Today, the SDK is live as a developer preview. We've been working hard testing the platform with our partners, including GE, LIFX, Philips Hue, TP-Link, and Wemo, and are excited to bring you these additional technologies for connecting smart devices to the Google Assistant.

Figure 1: The local execution path

This SDK enables developers to more deeply integrate their smart devices into the Assistant by building upon the existing Smart Home platform to create a local execution path via Google Home smart speakers and Nest smart displays. Developers can now run their business logic to control new and existing smart devices in JavaScript that executes on the smart speakers and displays, benefitting users with reduced latency and higher reliability.

How it works:

The SDK introduces two new intents, IDENTIFY and REACHABLE_DEVICES. The local home platform scans the user's home network via mDNS, UDP, or UPnP to discover any smart devices connected to the Assistant, and triggers IDENTIFY to verify that the device IDs match those returned from the familiar Smart Home API SYNC intent. If the detected device is a hub or bridge, REACHABLE_DEVICES is triggered and treats the hub as the proxy device for communicating locally. Once the local execution path from Google Home to a device is established, the device properties are updated in Home Graph.
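As a rough illustration of the IDENTIFY step, the handler's job is to match a locally discovered device back to an ID already reported via SYNC. The request and response shapes below are simplified stand-ins for the real Local Home SDK types, and the serial-number scan field is an assumption:

```javascript
// Illustrative IDENTIFY handler: match a locally discovered device to a
// device ID previously reported through the SYNC intent. The real Local
// Home SDK wraps this in smarthome.App handlers; the request/response
// shapes and the serial-number scan field here are simplified.
function identifyHandler(request, knownDeviceIds) {
  // Assume the mDNS scan data carries the device's serial number.
  const serial = request.inputs[0].payload.device.mdnsScanData.serialNumber;
  if (!knownDeviceIds.includes(serial)) {
    throw new Error(`Unknown device: ${serial}`);
  }
  return {
    requestId: request.requestId,
    intent: 'IDENTIFY',
    payload: {device: {id: serial}},
  };
}
```

Throwing for an unrecognized device tells the platform there is no local path for it, so requests for that device continue through cloud fulfillment.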

Figure 2: The intents used for each execution path

When a user triggers a smart home Action that has a local execution path, the Assistant sends the EXECUTE intent to the Google Nest device rather than the developer's cloud fulfillment. The developer's JavaScript app is invoked, which then triggers the Local Home SDK to send control commands to the smart device over TCP, UDP socket, or HTTP/HTTPS requests. By defaulting to local execution rather than the cloud, users experience faster fulfillment of their requests. The execution requests can still be sent to the cloud path in case local execution fails. This redundancy minimizes the possibility of a failed request, and improves the overall user experience.
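The command-building step of EXECUTE can be pictured as a small translation layer from an Assistant command to the device's local wire format. The JSON-over-UDP scheme below is purely hypothetical; each device maker defines its own protocol:

```javascript
// Illustrative EXECUTE step: translate an Assistant EXECUTE command into
// a raw payload for the device's local protocol (here, a hypothetical
// JSON scheme sent over the local transport; real devices define their
// own wire formats).
function buildLocalCommand(executeCommand) {
  const {command, params} = executeCommand.execution[0];
  if (command === 'action.devices.commands.OnOff') {
    return JSON.stringify({power: params.on ? 'on' : 'off'});
  }
  throw new Error(`Unsupported command: ${command}`);
}
```

The unsupported-command error is the local analogue of the fallback described above: when the JavaScript app cannot fulfill a command locally, the request can still be routed to cloud fulfillment.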

Additional features of the Local Home platform include:

  • Support for all Wi-Fi-enabled device types and device traits without two-factor authentication enabled.
  • No user action required to deploy Local Home benefits to all devices.
  • Easy configuration of discovery protocols and the hosted JavaScript app URL through the Actions console.

Figure 3: Local Home configuration tool in the Actions console

JavaScript apps can be tested on-device, allowing developers to employ familiar tools like Chrome Developer Console for debugging. Because the Local Home SDK works with the existing smart home framework, you can self-certify new apps through the Test suite for smart home as well.

Get started

To learn more about the Local Home platform, check out the API reference, and get started adding local execution with the developer guide and samples. For general information covering how you can connect smart devices to the Google Assistant, visit the Smart Home documentation, or check out the Local Technologies for the Smart Home talk from Google I/O this year.

You can send us any feedback you have through the bug tracker, or engage with the community at /r/GoogleAssistantDev. You can tag your posts with the flair local-home-sdk to help organize discussion.

Actions on Google at I/O 2019: New tools for web, mobile, and smart home developers

Posted by Chris Turkstra, Director, Actions on Google

People are using the Assistant every day to get things done more easily, creating lots of opportunities for developers on this quickly growing platform. And we’ve heard from many of you who want easier ways to connect your content across the Assistant.

At I/O, we’re announcing new solutions for Actions on Google that were built specifically with you in mind. Whether you build for web, mobile, or smart home, these new tools will help make your content and services available to people who want to use their voice to get things done.

Enhance your presence in Search and the Assistant

Help people with their “how to” questions

Every day, people turn to the internet to ask “how to” questions, like how to tie a tie, how to fix a faucet, or how to install a dog door. At I/O, we’re introducing support for How-to markup that lets you power richer and more helpful results in Search and the Assistant.

Adding How-to markup to your pages will enable the page to appear as a rich result on mobile Search and on Google Assistant Smart Displays. This is an incredibly lightweight way for web developers and creators to connect with millions of people, giving them helpful step-by-step instructions with video, images and text. You can start seeing How-to markup results on Search today, and your content will become available on the Smart Displays in the coming months.
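How-to markup is schema.org structured data (typically JSON-LD) embedded in the page. A minimal sketch, with placeholder values, might look like the following; see the official structured-data documentation for the full set of required and recommended properties:

```json
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to fix a leaky faucet",
  "step": [
    {
      "@type": "HowToStep",
      "name": "Shut off the water",
      "text": "Close the supply valves under the sink.",
      "image": "https://example.com/photos/step1.jpg"
    },
    {
      "@type": "HowToStep",
      "name": "Replace the washer",
      "text": "Remove the handle, swap the worn washer, and reassemble."
    }
  ]
}
```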

Here’s an example where DIY Network added markup to their existing content on the web to provide a more helpful, interactive result on both Google Search and the Assistant:

Mobile Search result showing How-to markup for installing a dog door

For content creators who don’t maintain a website, we created a How-to Video Template: video creators upload a simple spreadsheet with titles, text, and timestamps for their YouTube video, and we’ll handle the rest. This is a simple way to transform your existing how-to videos into interactive, step-by-step tutorials across Google Assistant Smart Displays and Android phones.

Check out how REI is getting extra mileage out of their YouTube video:

How-to Video Template for REI’s compass video, from laptop to Home Hub

How-to Video Templates are in developer preview so you can start building today, and your content will become available on Android phones and Smart Displays in the coming months.

Easier engagement with your apps

Help people quickly get things done with App Actions

If you’re an app developer, people are turning to your apps every day to get things done. And we see people turn to the Assistant every day for a natural way to ask for help via voice. This offers an opportunity to use intents to create voice-based entry points from the Assistant to the right spot in your app.

Last year, we previewed App Actions, a simple mechanism for Android developers that uses intents from the Assistant to deep link to exactly the right spot in your app. At I/O, we are announcing the release of built-in intents for four new App Action categories: Health & Fitness, Finance and Banking, Ridesharing, and Food Ordering. Using these intents, you can integrate with the Assistant in no time.

If I wanted to track my run with Nike Run Club, I could just say “Hey Google, start my run in Nike Run Club” and the app will automatically start tracking my run. Or, let’s say I just finished dinner with my friend Chad and we're splitting the check. I can say "Hey Google, send $15 to Chad on PayPal" and the Assistant takes me right into PayPal, I log in, and all of my information is filled in – all I need to do is hit send.

Google Pixel showing the Nike Run Club App Action

Each of these integrations was completed in less than a day with the addition of an Actions.xml file that handles the mapping of intents between your app and the Actions platform. You can start building with these new intents today and deploy to Assistant users on Android in the coming months. This is a huge opportunity to offer your fans an effortless way to engage more frequently with your apps.
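As a rough sketch, an Actions.xml entry maps a built-in intent to a deep link into the app. The intent name follows the built-in intent naming scheme, while the URL template and parameter names below are placeholders:

```xml
<!-- Illustrative Actions.xml: map the built-in START_EXERCISE intent to a
     deep link in the app. The urlTemplate and parameter names are
     placeholders, not a specific app's scheme. -->
<actions>
  <action intentName="actions.intent.START_EXERCISE">
    <fulfillment urlTemplate="myapp://exercise{?exerciseType}">
      <parameter-mapping
          intentParameter="exercise.name"
          urlParameter="exerciseType" />
    </fulfillment>
  </action>
</actions>
```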

Build for devices in the home

Take advantage of Smart Displays’ interactive screens

Last year, we saw the introduction of the Smart Display as a new device category. The interactive visual surface opens up many new possibilities for developers.

Today, we’re introducing a developer preview of Interactive Canvas which lets you create full-screen experiences that combine the power of voice, visuals and touch. Canvas works across Smart Displays and Android phones, and it uses open web technologies you’re likely already familiar with, like HTML, CSS and JavaScript.

Here’s an example of what you can build when you can leverage the full screen of a Smart Display:

Full screen of a Smart Display

Interactive Canvas is available for building games starting today, and we’ll be adding more categories soon. Visit the Actions Console to be one of the first to try it out.

Enable smart home devices to communicate locally

There are now more than 30,000 connected devices that work with the Assistant across 3,500 brands, and today, we’re excited to announce a new suite of local technologies that are specifically designed to create an even better smart home.

Introducing a preview of the Local Home SDK, which enables you to run your smart home code locally on Google Home speakers and Nest displays and use their radios to communicate locally with your smart devices. This reduces cloud hops and brings a new level of speed and reliability to the smart home. We’ve been working with some amazing partners including Philips, Wemo, TP-Link, and LIFX on testing this SDK and we’re excited to open it up for all developers next month.

Flowchart of Local Home SDK

Make setup more seamless

And, through the Local Home SDK, we’re providing users with a more seamless device setup experience, something we launched in partnership with GE smart lights this past October. So far, people have loved the ability to set up their lights in less than a minute in the Google Home app. We’re now scaling this to more partners, so go here if you’re interested.

Make your devices smart with Assistant Connect

Also, at CES earlier this year we previewed Google Assistant Connect which leverages the Local Home SDK. Assistant Connect enables smart home and appliance developers to easily add Assistant functionality into their devices at low cost. It does this by offloading a lot of work onto the Assistant to complete Actions, display content and respond to commands. We've been hard at work developing the platform along with the first products built on it by Anker, Leviton and Tile. We can't wait to show you more about Assistant Connect later this year.

New device types and traits

For those of you creating Actions for the smart home, we’re also releasing 16 new device types and three new device traits including LockUnlock, ArmDisarm, and Timer. Head over to our developer documentation for the full list of 38 device types and 18 device traits, and check out our sample project on GitHub to start building.

Get started with our new tools for all types of developers

Whether you’re looking to extend the reach of your content, drive more usage in your apps, or build custom Assistant-powered experiences, you now have more tools to do so.

If you want to learn more about how you can start building with these tools, check out our website to get started and our schedule so you can tune in to all of our developer talks that we’ll be hosting throughout the week.

We can’t wait to build together with you!

Check out the Google Assistant talks at I/O 2019

Posted by Mary Chen, Strategy Lead, Actions on Google

This year at Google I/O, the Actions on Google team is sharing new ways developers of all types can use the Assistant to help users get things done. Whether you’re making Android apps, websites, web content, Actions, or IoT devices, you’ll see how the Assistant can help you engage with users in natural and conversational ways.

Tune in to our announcements during the developer keynote, and then dive deeper with our technical talks. We listed the talks out below by area of interest. Make sure to bookmark them and reserve your seat if you’re attending live, or check back for livestream details if you’re joining us online.


For anyone new to building for the Google Assistant


For Android app developers


For webmasters, web developers, and content creators


For smart home developers


For anyone building an Action from scratch


For insight and beyond


In addition to these sessions, stay tuned for interactive demos and codelabs that you can try at I/O and at home. Follow @ActionsOnGoogle for updates and highlights before, during, and after the festivities.

See you soon!

Build Actions for the next billion users

Posted by Brad Abrams, Group Product Manager, Actions on Google

Before we look forward and discuss updates to Actions on Google for 2019, we wanted to recognize our global developer community for your tremendous work in 2018. We saw more than 4 times the number of projects created with Actions on Google this past year. And some of the most popular Action categories include Games and Trivia, Home Control, Music, Actions for Families, and Education – well done!

We hope to carry this enthusiasm forward, and at Mobile World Congress, we're announcing new tools so you can reach and engage with more people around the globe.

Building for the next billion users

The Google Assistant's now available in more than 80 countries in nearly 30 languages, and you've been busy making your Actions accessible in many of those locales.

One of the most exciting things we've seen in the last couple of years is happening in places where the next billion users are coming online for the first time. In these fast-growing countries like India, Indonesia, Brazil, and Mexico, voice is often the primary way users interact with their devices because it's natural, universal, and the most accessible input method for people who are starting to engage with technology for the first time in their lives.

Actions on Google coming to KaiOS and Android (Go Edition)

As more of these countries come online, we want to make it easy for you to reach and engage with these users as they adopt the Google Assistant into their everyday lives. There are tens of millions of users on Android Go and KaiOS in over 100 countries.

We'll be making your Actions available to Android Go and KaiOS devices in the next few months, so you should start thinking now about how to build for these platforms and users. Without any additional work required, your Actions will work on both operating systems at launch (unless, of course, your Action requires a screen with touch input). We'll also be launching a simulator so you can test your Actions to see how they look on entry-level Android Go smartphones and KaiOS feature phones.

A couple of partners have already built Actions with these new audiences in mind. Hello English, for example, created an Action to offer English lessons for users that speak Hindi, to create more opportunities for people through language learning. And Where is My Train? (WIMT) was built for the millions of Indians commuting daily, offering real-time locations and times for trains accessible by voice. Check out our developer docs for KaiOS and Android Go Edition, and start building for the next billion users.

Expanding capabilities to more languages and countries

And we're not just focused on a handful of emerging countries. We're always working to enable all of Actions on Google's tools so users can enjoy the best experience possible regardless of the country they live in or the language they speak—our work here never ends! Here's a snapshot of some of the progress we've made this past year:

  • New locales: Since last MWC, we've launched Actions on Google support for more languages and locales. You can now build Actions in 19 languages across 28 locales.
  • WaveNet voices: As we've launched Actions on Google in more languages, we've added more text-to-speech voice options for your Actions. And thanks to WaveNet advancements, we're introducing improved, more natural-sounding TTS voices for English (en-US, en-GB and en-AU), Dutch, French (fr-FR and fr-CA), German, Italian, Russian, Portuguese (Brazilian), Japanese, Korean, Polish, Danish and Swedish. You can listen to the upgraded voices here, and they'll start rolling out to your Actions in the coming weeks.
  • Transactions: You can now offer transactional experiences in 22 markets, up from just one at last MWC. If you're looking to incorporate transactions in your Actions, check out these tips.
  • Templates for the next billion users: If you're not yet familiar with templates, you can fill in a Google Sheet and publish an Action within minutes. Trivia and Personality Quiz templates are available in English (en-US and en-GB), French, German, Italian, Japanese, Korean, Portuguese, Spanish, Hindi, and Indonesian. All you have to do is upload a Sheet in any of the languages above and your Actions will be live in those languages.

We've already talked about how busy the development community was this past year, and we've been hard at work to keep up! If you're looking to reach and engage with millions—even billions more users—now's a good time to start thinking about how your Action can make a difference in people's lives around the globe.

Five new investments for the Google Assistant Investments program

Posted by Ilya Gelfenbeyn, Head of the Google Assistant Investments program

Last year, we announced the Google Assistant Investments program with the goal to help pioneering startups bring their ideas to life in the digital assistant ecosystem. Not only have we invested in some really great startups, we've also been working closely with these companies to make their services available to more users.

We're excited to be back to announce five new portfolio companies and catch up on the progress some of them have made this past year. With the next batch of investments, we're helping companies explore how digital assistants can improve the hospitality, insurance, fashion and education industries, and we have something for sports fans too.

Welcome to our new portfolio investments

First up, AskPorter. This London-based team was founded to make managing spaces simple, providing every property manager and occupant with a digital personal assistant. AskPorter is an AI-powered property management platform with a digital assistant called Porter. Porter takes care of all aspects of property management, such as guiding inspections, arranging viewings, troubleshooting maintenance issues, and chasing payments.

GradeSlam is an on-demand, chat-based, personalized learning and tutoring service available across all subject areas. Sessions are conducted via chat, creating a learning environment that allows students to interact freely and personally with qualified educators. The Montreal-based team's service is already used by more than 150,000 students, teachers, and administrators.

Aiva Health puts smart speakers in hospitals and senior communities to reduce response times and improve satisfaction for patients, seniors, and caregivers alike. Aiva understands patient requests and routes them to the most appropriate caregiver so they can respond instantly via their mobile app. The Aiva platform provides centralized IoT management, powering Smart Hospitals and Smart Communities.

StyleHacks (formerly Maison Me) was founded with a goal of empowering people to take back control of their style and wardrobe. With a conversational interface and personalized AI-powered recommendations, they're helping people live their most stylish lives. The team launched the "StyleHacks" Action for phones and Smart Displays in December 2018, helping people decide what to wear by providing personalized recommendations based on the weather and their preferences. And in the next few months, StyleHacks will also be able to help you shop for clothes you will actually wear. Just ask StyleHacks what to wear today.

StatMuse turns the biggest sports stars into your own personal sports commentator. Powered by the personalities of more than 25 sports superstars including Peyton Manning, Jerry Rice and Scott Van Pelt, fans can get scores, stats and recaps for the NBA, NFL, NHL and MLB dating back to 1876. To try it out, just say, "Hey Google, talk to StatMuse."

It's been almost a year since we launched the Investments program and we're happy to see how some of these companies are already using voice to broaden the Google Assistant's capabilities. If you're working on new ways for people to use their voice to get things done, or building new hardware devices for digital assistants, we'd like to hear from you.

Recap: Build Actions For Your Community

Posted by Leon Nicholls, Developer Programs Engineer

In March, we announced the "Build Actions for Your Community" Event Series. These events are run by Google Developers Groups (GDG) and other community groups to educate developers about Actions on Google through local meetup events.

The event series has now ended, spanning 66 countries with a total of 432 events. These events reached 19,400 developers, 21% of whom were women.

Actions on Google is of interest to developers globally, from Benin City, Nigeria, to Valparaíso, Chile, Hyderabad, India, Košice, Slovakia, and Omaha, Nebraska.

Developers in these cities experienced hands-on learning, including codelabs and activities to design and create Actions for their communities.

Developers see creating Actions for the Google Assistant as a way of applying machine learning to solve real-world problems. Here, for example, are the winners of the #IndiaBuildsActions Campaign:

You can try Meditation Daily to help you relax, English King to learn about grammar, or Voice Cricket to play a game of cricket.

We also got valuable feedback directly from developers about how to improve the Actions on Google APIs and documentation. We learned that developers want to build Actions for feature phones and want the Assistant to support more languages. Developers also asked for more codelabs, more workshops, and more samples (subsequently, we've added a third codelab).

It was exciting to see how many developers shared their experiences on social media.

"Event series was impressive, Awesome and amazing. Knowledge well acquired" (Nigeria)

"The experience I had with the participants was unforgettable. Thank you" (Philippines)

It was also very encouraging to see that 76% of developers are likely to build new Actions and that most developers rated the Actions on Google platform better than other platforms.

Thanks to everybody who organized, presented, and attended these events all around the world. For even more events, join a local GDG DevFest to share ideas and learn about developing with Google's technologies. We can't wait to see what kinds of Actions you create for the Google Assistant!

Want more? Head over to the Actions on Google community to discuss Actions with other developers. Join the Actions on Google developer community program and you could earn a $200 monthly Google Cloud credit and an Assistant t-shirt when you publish your first app.

Four tips for building great transactional experiences for the Google Assistant

Posted by Mikhail Turilin, Product Manager, Actions on Google

Building engaging Actions for the Google Assistant is just the first step in your journey for delivering a great experience for your users. We also understand how important it is for many of you to get compensated for your hard work by enabling quick, hands-free transactional experiences through the Google Assistant.

Let's take a look at some of the best practices you should consider when adding transactions to your Actions!

1. Use Google Sign-In for the Assistant

Traditional account linking requires the user to open a web browser and manually log in to a merchant's website. This can lead to higher abandonment rates for a couple of reasons:

  1. Users need to enter a username and password, which they often can't remember
  2. Even if the user started the conversation on Google Home, they have to use a mobile phone to log in to the merchant's website

Our new Google Sign-In for the Assistant flow solves this problem. By implementing this authentication flow, your users will only need to tap twice on the screen to link their accounts or create a new account on your website. Connecting individual user profiles to your Actions gives you an opportunity to personalize your customer experience based on your existing relationship with a user.

And if you already have a loyalty program in place, users can accrue points and access discounts with account linking with OAuth and Google Sign-In.

Head over to our step-by-step guide to learn how to incorporate Google Sign-In.

2. Simplify the order process with a re-ordering flow

Most people prefer to use the Google Assistant quickly, whether they're at home or on the go. So if you're a merchant, you should look for opportunities to simplify the ordering process.

Choosing a product from a list of many dozens of items takes a really long time. That's why many consumers enjoy the ability to quickly reorder items when shopping online. Implementing reordering with Google Assistant provides an opportunity to solve both problems at the same time.

Reordering is based on the user's history of previous purchases. You will need to implement account linking to identify returning users. Once the account is linked, fetch the order history from your backend and present the choices to the user.
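The backend side of this flow reduces to surfacing a user's most recent distinct purchases as quick choices. A minimal sketch, with illustrative field names rather than any specific API:

```javascript
// Illustrative reordering helper: given a linked user's order history,
// surface their most recent distinct items as quick reorder choices.
// The {item, timestamp} record shape is an assumption for illustration.
function reorderChoices(orderHistory, maxChoices = 3) {
  const seen = new Set();
  const choices = [];
  // Newest orders first.
  const sorted = [...orderHistory].sort((a, b) => b.timestamp - a.timestamp);
  for (const order of sorted) {
    if (!seen.has(order.item)) {
      seen.add(order.item);
      choices.push(order.item);
    }
    if (choices.length === maxChoices) break;
  }
  return choices;
}
```

Keeping the list short matters doubly on voice-only surfaces, where a long spoken menu quickly becomes unusable.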

Just Eat, an online food ordering and delivery service in the UK, focuses on reordering as one of their core flows because they expect their customers to use the Google Assistant to reorder their favorite meals.

3. Use Google Pay for a more seamless checkout

Once a user has decided they're ready to make a purchase, it's important to provide a quick checkout experience. To help, we've expanded payment options for transactions to include Google Pay, a fast, simple way to pay online, in stores, and in the Google Assistant.

Google Pay reduces customer friction during checkout because it's already connected to users' Google accounts. Users don't need to go back and forth between the Google Assistant and your website to add a payment method. Instead, users can share the payment method that they have on file with Google Pay.

Best of all, it's simple to integrate – just follow the instructions in our transactions docs.

4. Support voice-only Actions on the Google Home

At I/O, we announced that voice-only transactions for Google Home are now supported in the US, UK, Canada, Germany, France, Australia, and Japan. A completely hands-free experience will give users more ways to complete transactions with your Actions.

Here are a few things to keep in mind when designing your transactions for voice-only surfaces:

  • Build easy-to-follow dialogue, because users won't see the dialogue or suggestion chips available on phones.
  • Avoid inducing choice paralysis. Focus on a few simple choices based on customer preferences collected during their previous orders.
  • Localize your transactional experiences for new regions – learn more here.
  • Don't forget to enable your transactions to work on smart speakers in the console.

Learn more tips in our Conversation Design Guidelines.

As we expand support for transactions in new countries and on new Google Assistant surfaces, now is the perfect time to make sure your transactional experiences are designed with users in mind so you can increase conversion and minimize drop-off.

Make money from your Actions, create better user experiences

Posted by Tarun Jain, Group PM, Actions on Google

The Google Assistant helps you get things done across the devices you have at your side throughout your day--a bedside smart speaker, your mobile device while on the go, or even your kitchen Smart Display when winding down in the evening.

One of the common questions we get from developers is: how do I create a seamless path for users to complete purchases across all these types of devices? We also get asked by developers: how can I better personalize my experience for users on the Assistant with privacy in mind?

Today, we're making these easier for developers with support for digital goods and subscriptions, and Google Sign-in for the Assistant. We're also giving the Google Assistant a complete makeover on mobile phones, enabling developers to create even more visually rich integrations.

Start earning money with premium experiences for your users

While we've offered transactions for physical goods for some time, starting today, you will also be able to offer digital goods, including one time purchases like upgrades--expansion packs or new levels, for example--and even recurring subscriptions directly within your Action.

Starting today, users can complete these transactions while in conversation with your Action through speakers, phones, and Smart Displays. This will be supported in the U.S. to start, with more locales coming soon.
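In your fulfillment, starting a digital purchase means handing the Assistant a purchase spec. The sketch below builds the data a webhook would send for the `actions.intent.COMPLETE_PURCHASE` intent; the field names mirror the digital purchase flow, but treat the exact shape (especially the `@type` string) as an assumption to confirm against the current documentation:

```javascript
// Sketch of the value spec a fulfillment webhook sends to kick off a
// digital purchase. Field names follow the digital purchase API; the
// product ID and package name are placeholders.
function buildCompletePurchase(productId, skuType, packageName) {
  return {
    intent: 'actions.intent.COMPLETE_PURCHASE',
    data: {
      '@type': 'type.googleapis.com/google.actions.transactions.v3.CompletePurchaseValueSpec',
      skuId: {
        skuType,     // e.g. 'SKU_TYPE_IN_APP' or 'SKU_TYPE_SUBSCRIPTION'
        id: productId,    // the product ID configured in the Play Console
        packageName,      // the Android package the goods are managed under
      },
    },
  };
}

const spec = buildCompletePurchase('premium_pack', 'SKU_TYPE_IN_APP', 'com.example.app');
console.log(spec.data.skuId.id); // 'premium_pack'
```

Because digital goods are managed in the Play Console, the same `packageName` and product IDs serve both your Android app and your Action.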

Headspace, for example, now offers Android users an option to subscribe to their plans, meaning users can purchase a subscription and immediately see an upgraded experience while talking to their Action. Try it for yourself by telling your Google Assistant, "meditate with Headspace."

Volley added digital goods to their role-playing game Castle Master so users could enhance their experience by purchasing upgrades. Try it yourself, by asking your Google Assistant to, "play Castle Master."

You can also ensure a seamless premium experience as users move between your Android app and Action for Assistant by letting users access their digital goods across their relationship with you, regardless of where the purchase originated. You can manage your digital goods for both your app and your Action in one place, in the Play Console.

Simplified account linking and user personalization

Once your users have access to a premium experience with digital goods, you will want to make sure your Action remembers them. To help with that, we're also introducing Google Sign-In for the Assistant, a secure authentication method that simplifies account linking for your users and reduces drop-off at login. Google Sign-In provides the most convenient way to log in, with just a few taps. On smart speakers, users can even link accounts with the Assistant using only their voice.

In the past, account linking could be a frustrating experience for your users; having to manually type a username and password--or worse, create a new account--breaks the natural conversational flow. With Google Sign-In, users can now create a new account with just a tap or confirmation through their voice. Most users can even link to their existing accounts with your service using their verified email address.

For developers, Google Sign-In also makes it easier to support login and personalize your Action for users. Previously, developers needed to build an account system and support OAuth-based account linking in order to personalize their Action. Now, you have the option to use Google Sign-In to support login for any user with a Google account.
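In practice, your fulfillment branches on the result of the sign-in helper. The status values (`'OK'`, `'CANCELLED'`, `'ERROR'`) and the decoded profile fields follow the actions-on-google Node.js client library; the plain function below is an illustrative stand-in for an intent handler, with hypothetical wording:

```javascript
// Sketch of handling the sign-in helper result. If the user signs in,
// the verified Google profile carries their name and email, so the
// Action can link or create an account without manual entry.
function greetAfterSignIn(signin, profile) {
  if (signin.status !== 'OK') {
    // The user declined or sign-in failed; continue without personalization.
    return 'No problem, you can still order as a guest.';
  }
  return `Welcome back, ${profile.given_name}! Would you like your usual order?`;
}

console.log(greetAfterSignIn(
  {status: 'OK'},
  {given_name: 'Ada', email: 'ada@example.com'}
));
```

Falling back gracefully when `status` is not `'OK'` keeps the conversation flowing for users who decline to link an account.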

Starbucks added Google Sign-In for the Assistant to enable users of their Action to access their Starbucks Rewards™ accounts and earn stars for their purchases. Since adding Google Sign-In for the Assistant, they've seen login conversion nearly double for their users versus their previous implementation that required manual account entry.

Check out our guide on the different authentication options available to you, to understand which best meets your needs.

A new visual experience for the phone

Today, we're launching the first major makeover for the Google Assistant on phones, bringing a richer, more interactive interface to the devices we carry with us throughout the day.

Since the Google Assistant made its debut, we've noticed that nearly half of all interactions with the Assistant today include both voice and touch. With this redesign, we're making the Assistant more visually assistive for users, combining voice with touch in a way that gives users the right controls in the right situations.

For developers, we've also made it easy to bring great multimodal experiences to life on the phone and other Assistant-enabled devices with screens, including Smart Displays. This presents a new opportunity to express your brand through richer visuals and with greater real estate in your Action.

To get started, you can now add rich responses to customize your Action for visual interfaces. With rich responses you can build visually engaging Actions for your users with a set of plug-and-play visual components for different types of content. If you've already added rich responses to your Action, these will work automatically on the new mobile redesign. Be sure to also check out our guidance on how and when to use visuals in your Action.
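For orientation, here is what a minimal rich response looks like in the `payload.google` section of a Dialogflow webhook reply. A rich response needs at least one simple response; the basic card and suggestion chips render on screened surfaces such as phones and Smart Displays (the card content and URL below are placeholders):

```javascript
// A minimal rich response for the Assistant, as returned by a Dialogflow
// fulfillment webhook. Card text and image URL are hypothetical.
const webhookReply = {
  payload: {
    google: {
      expectUserResponse: true,
      richResponse: {
        items: [
          // The spoken (and displayed) part of the turn.
          {simpleResponse: {textToSpeech: 'Here is a recipe you might like.'}},
          // A visual card, shown only on surfaces with a screen.
          {basicCard: {
            title: 'Roast Chicken',
            formattedText: 'A simple weeknight roast.',
            image: {
              url: 'https://example.com/chicken.jpg',
              accessibilityText: 'A roast chicken',
            },
          }},
        ],
        // Tappable chips that steer the next turn of the conversation.
        suggestions: [{title: 'More recipes'}, {title: 'Save it'}],
      },
    },
  },
};

console.log(webhookReply.payload.google.richResponse.items.length); // 2
```

On a voice-only speaker, only the simple response is used, so the Action still works everywhere.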

Below you can find some examples of the ways some partners and developers have already started to make use of rich responses to provide more visually interactive experiences for Assistant users on phones.

You can try these yourself by asking your Google Assistant to, "order my usual from Starbucks," "ask H&M Home to give inspiration for my kitchen," "ask Fitstar to workout," or "ask Food Network for chicken recipes."

Ready to get building? Check out our documentation on how to add digital goods and Google Sign-In for Assistant to create premium and personalized experiences for your users across devices.

To improve your visual experience for phone users, check out our conversation design site, our documentation on different surfaces, and our documentation and sample on how you can use rich responses to build with visual components. You can also test and debug your different types of voice, visual, and multimodal experiences in the Actions simulator.

Good luck building, and please continue to share your ideas and feedback with us. Don't forget that once you publish your first Action you can join our community program* and receive your exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit.


*Some countries are not eligible to participate in the developer community program; please review the terms and conditions.

5 Tips for Developing Actions with the New Actions Console

Posted by Zachary Senzer, Product Manager

A couple of months ago at Google I/O, we announced a redesigned Actions console that makes developing your Actions easier than ever. The new console offers a more seamless development experience that guides your workflow from onboarding through deployment, with tailored analytics to help you manage your Actions post-launch. Simply select your use case during onboarding and the Actions console will guide you through the different stages of development.

Here are 5 tips to help you create the best Actions for your content using our new console.

1. Optimize your Actions for new surfaces with theme customization

Part of what makes the Actions on Google ecosystem so special is the vast array of devices that people can use to interact with your Actions. Some of these devices, including phones and our new smart displays, allow users to have rich visual interactions with your content. To help your Actions stand out, you can customize how these visual experiences appear to users of these devices. Simply visit the "Build" tab and go to theme customization in the Actions console where you can specify background images, typography, colors, and more for your Actions.

2. Start to make your Actions easier to discover with built-in intents

Conversational experiences can introduce complexity in how people ask to complete a task related to your Action--a user could ask for a game in thousands of different ways ("play a game for me", "find a maps quiz", "I want some trivia"). Figuring out all the ways a user might ask for your Action is difficult, so we're beginning to map these phrasings into a taxonomy of built-in intents that abstracts the complexity away for you.

We'll start to use the built-in intent you associate with your Action to help users more easily discover your content as we begin testing built-in intents against users' queries. We'll continue to add many more built-in intents over the coming months to cover a variety of use cases. In the Actions console, go to the "Build" tab, click "Actions", then "Add Action" and select a built-in intent to get started.
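The console handles this association for you; for reference, if you build with the Actions SDK instead, the same association appears in your action package (`action.json`). The intent name `actions.intent.PLAY_GAME` is a real built-in intent, but treat the overall shape as a sketch, and the conversation name and URL as placeholders:

```json
{
  "actions": [
    {
      "name": "MAIN",
      "intent": {"name": "actions.intent.MAIN"},
      "fulfillment": {"conversationName": "trivia"}
    },
    {
      "name": "PLAY_GAME",
      "intent": {"name": "actions.intent.PLAY_GAME"},
      "fulfillment": {"conversationName": "trivia"}
    }
  ],
  "conversations": {
    "trivia": {"name": "trivia", "url": "https://example.com/fulfillment"}
  }
}
```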

3. Promote your Actions with Action Links

While we'll continue to improve the ways users find your Actions within the Assistant, we've also made it easier for users to find your Actions outside the Assistant. Driving new traffic to your Actions is as easy as a click with Action Links. You now have the ability to define hyperlinks for each of your Actions to be used on your website, social media, email newsletters, and more. These links will launch users directly into your Action. If used on a desktop, the link will take users to the directory page for your Action, where they'll have the ability to choose the device they want to try your Action on. To configure Action Links in the console, visit the "Build" tab, choose "Actions", and select the Action for which you would like to create a link. That's it!

4. Ensure your Actions are high-quality by testing using our web simulator and alpha/beta environments

The best way to make sure that your Actions are working as intended is to test them using our updated web simulator. In the simulator, you can run through conversational user flows on phone, speaker, and even smart display device types. After you issue a request, you can see the visual response, the request and response JSON, and any errors. For further help with debugging, you can also view logs for your Actions.

Another great way to test your Actions is to deploy them to limited audiences in alpha and beta environments. Actions deployed to the alpha environment don't need to go through the review process, so you can quickly test with your users. After deploying to the beta environment, you can launch your Actions to production whenever you like without additional review. To use alpha and beta environments, go to the "Deploy" tab and click "Release" in the Actions console.

5. Measure your success using analytics

After you deploy your Actions, it's equally important to measure their performance. By visiting the "Measure" tab and clicking "Analytics" in the Actions console, you will be able to view rich analytics on usage, health, and discovery. You can easily see how many people are using and returning to your Actions, how many errors users are encountering, the phrases users are saying to discover your Actions, and much more. These insights can help you improve your Actions.


If you're new to the Actions console and looking for a quick way to get started, watch this video for an overview of the development process.

We're so excited to see how you will use the new Actions console to create even more Actions for more use cases, with additional tools to improve and iterate. Happy building!