Tag Archives: actions on google

New Analytics updates in Actions on Google Console

Posted by Mandy Chan, Developer Advocate

Have you built an Action for the Google Assistant and wondered how many people are using it? Or how many of your users are returning users? In this blog post, we will dive into 5 improvements that the Actions on Google Console team has made to give you more insight into how your Action is being used.

1. Multiple improvements for readability

We've updated three areas of the Actions Console for readability: Active Users Chart, Date Range Selection, and Filter Options. With these new updates, you can now better customize the data to analyze the usage of your Actions.

Active Users Chart

The labels at the top of the Active Users chart now read Daily, Weekly and Monthly, instead of the previous 1 Day, 7 Days and 28 Days labels. We've also made the individual date labels at the bottom of the chart clearer. You'll also notice a quick insight at the bottom of the chart that shows the number of unique users during this time period.

Before and after: Active Users chart

Date Range Selection

Previously, the date range selectors applied globally to all the charts. These selectors are now local to each chart, allowing you more control over how you view your data.

The date selector provides the following ranges:

  • Daily (last 7 days, last 30 days, last 90 days)
  • Weekly (last 4 weeks, last 8 weeks, last 12 weeks, last 24 weeks)
  • Monthly (last 3 months, last 6 months, last 12 months)
Date Selector

Filter Options

Previously, when you added a filter, it was applied to all the charts on the page. Now, filters apply only to the chart you're viewing. We've also enhanced the filtering options available for the ‘Surface’ filter, such as mobile devices, smart speakers, and smart displays.

Before:

Filter Options Before

After:

Filter Options After

The filter feature also lets you show data breakdowns over different dimensions. By default, the chart shows a single consolidated line, a result of all the filters applied. You can now select the ‘Show breakdown by’ option to see how the components of that data contribute to the totals based on the dimension you selected.

2. Introducing Retention metrics (New!)

A brand new addition to analytics is the retention metrics chart, which helps you understand how well your Action retains users. The chart shows how many users you had in a given week and how many of them returned in each of the following weeks, for up to 5 weeks. The higher the percentage week after week, the better your retention.

When you hover over each cell in the chart, you can see the exact number of users from the original week who returned in that week.
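For intuition, here's a rough sketch of how cohort retention percentages like these are derived, assuming activity is logged as user/week pairs. All names here are illustrative; the console computes this for you.

```javascript
// Sketch: compute weekly retention percentages for one cohort from
// (userId, week) activity records. Names are illustrative.
function weeklyRetention(activity, cohortWeek, numWeeks) {
  // The cohort is everyone active in the starting week.
  const cohort = new Set(
    activity.filter(a => a.week === cohortWeek).map(a => a.userId));
  const rates = [];
  for (let w = 1; w <= numWeeks; w++) {
    // Count cohort members who came back in week cohortWeek + w.
    const returned = new Set(
      activity
        .filter(a => a.week === cohortWeek + w && cohort.has(a.userId))
        .map(a => a.userId));
    rates.push(Math.round((returned.size / cohort.size) * 100));
  }
  return rates; // percentage of the original cohort returning each week
}
```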

Retention Metrics

3. Improvements to Conversation Metrics

Finally, we’ve consolidated the conversation metrics and brought them together into a single chart with separate tabs (‘Conversations’, ‘Messages’, ‘Avg Length’ and ‘Abort rate’) for easier comparison and visibility of trends over time. We’ve also updated the chart labels and tooltips for better interpretation.

Before:

Conversation Metrics Before

After:

Conversation Metrics After

Next steps

To learn more about what each metric means, you can check out our documentation.

Try out these new improvements to see how your Actions are performing with your users, and let us know if you have any feedback or suggestions about metrics you need to improve your Action. Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!

Announcing Dynamic Modes and Toggles

Posted by Dave Smith, Developer Advocate

Modes and toggles let you define the configurable attributes of your device that may exist outside the standard grammar for device control traits (such as On/Off or Start/Stop). This feature is often used to express device-specific settings, such as the "load size" for a clothes washer or the "cooking mode" for an oven.

When we initially introduced modes and toggles, we supported a whitelisted set of names and synonyms to ensure the most accurate responses and best user experience. Over time, we continued to add support based on the community's requests, but getting these requests approved has been a common pain point for many of you.

Starting today, you no longer have to get the names and synonyms provided in your SYNC response approved. The Google Assistant dynamically determines the necessary grammar for users to invoke these traits. If you're not already familiar with modes and toggles, here is an example using these traits to add support for custom cooking modes to an oven.

{
  availableModes: [{
    name: 'cook',
    name_values: [{
      name_synonym: ['cook setting'],
      lang: 'en'
    }],
    settings: [{
      setting_name: 'pizza',
      setting_values: [{
        setting_synonym: ['pizza'],
        lang: 'en'
      }]
    }, {
      setting_name: 'pasta',
      setting_values: [{
        setting_synonym: ['pasta'],
        lang: 'en'
      }]
    }]
  }]
}

Example modes in SYNC response


Controlling a device using modes and toggles
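When a user then says something like "Set the oven to pizza mode," the Assistant sends your fulfillment an EXECUTE intent carrying a SetModes command. Here's a simplified sketch of the relevant part of that payload, per the Modes trait, using the mode and setting names from the SYNC example above:

```json
{
  "command": "action.devices.commands.SetModes",
  "params": {
    "updateModeSettings": {
      "cook": "pizza"
    }
  }
}
```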

We're excited to see what you build with these improved modes and toggles! For more details on using these features, see the updated guides for the Modes Trait and Toggles Trait. To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on.

International Women’s Day’19 featuring Actions on Google

Posted by Marisa Pareti, Rubi Martinez & Jessica Earley-Cha

In celebration of International Women’s Day, Women Techmakers hosted its sixth annual summit series to acknowledge and celebrate women in the tech industry, and to create a space for attendees to build community, hear from industry leaders, and learn new skills. The series featured 19 summits and 305 meetups across 87 countries.

This year, Women Techmakers partnered with the Actions on Google team to host technical workshops at these events so attendees could learn the fundamental concepts to develop Actions for the Google Assistant. Together, we created hundreds of new Actions for the Assistant. Check out some of the highlights of this year’s summit in the video below:

Technical Workshop Details

If you couldn’t attend any of our meetups this past year, we’ll cover our technical workshops now so you can start building for the Assistant from home. The technical workshop kicked off by introducing Actions on Google — the platform that enables developers to build Actions for the Google Assistant. Participants got hands-on experience building their first Action with the following features:

  • Users can start a conversation by explicitly calling the Action by name, which then responds with a greeting message.
  • Once in conversation, users are prompted to provide their favorite color. The Action parses the user’s input to extract the information it needs (namely, the color parameter).
  • If a color is provided, the Action processes the color parameter to auto-generate a “lucky number” to send back to the user and the conversation ends.
  • If no color is provided, the Action sends the user additional prompts until the parameter is extracted.
  • Users can explicitly leave the conversation at any time.

During Codelab level 1, participants learned how to parse the user’s input using Dialogflow, a machine-learning-powered tool that served as their natural language processor (NLP). Dialogflow processes what the user says and extracts the important information from that input to identify how to fulfill the user’s request. Participants configured Dialogflow and connected it to their code’s back-end using Dialogflow’s inline editor. In the editor, participants added their code and tested their Action in the Action Simulator.
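The heart of that level-1 fulfillment is a small handler. Here's a sketch of its logic with the library wiring omitted; in the codelab, this runs inside an actions-on-google Dialogflow intent handler, and the "lucky number" is derived from the length of the color name.

```javascript
// Sketch of the level-1 codelab's fulfillment logic. In the codelab,
// this body runs inside a Dialogflow intent handler and `color` is the
// parameter Dialogflow extracted from the user's utterance.
function favoriteColorResponse(color) {
  if (!color) {
    // No color parameter extracted yet: keep prompting the user.
    return 'What is your favorite color?';
  }
  // The codelab derives the "lucky number" from the color name's length.
  const luckyNumber = color.length;
  return `Your lucky number is ${luckyNumber}.`;
}
```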

In Codelab level 2, participants continued building on their Action, adding features such as:

  • Supports deep links to directly launch the user into certain points of dialog
  • Uses utilities provided by the Actions on Google platform to fetch the user’s name and address them personally
  • Responds with follow-up questions to further the conversation
  • Presents users with a rich visual response complete with sound effects

Instead of using Dialogflow’s inline editor, participants set up Cloud Functions for Firebase as their back-end.

You can learn more about developing your own Actions here. To support developers’ efforts in building great Actions for the Google Assistant, the team also has a developer community program.

Alex Eremia, a workshop attendee, reflected, “I think voice applications will have a huge impact on society both today and in the future. It will become a natural way we interact with the items around us.”

From keynotes and fireside chats to interactive workshops, Women Techmakers summit attendees enjoyed a mixture of technical and inspirational content. If you’re interested in learning more and getting involved, follow WTM on Twitter, check out our website, and sign up to become a member.

To learn more about Actions on Google and how to build for the Google Assistant, be sure to follow us on Twitter, and join our Reddit community!

Developer Preview of Local Home SDK

Posted by Toni Klopfenstein

Recently at Google I/O, we gave you a sneak peek at our new Local Home SDK, a suite of local technologies to enhance your smart home integrations. Today, the SDK is live as a developer preview. We've been working hard testing the platform with our partners, including GE, LIFX, Philips Hue, TP-Link, and Wemo, and are excited to bring you these additional technologies for connecting smart devices to the Google Assistant.

Figure 1: The local execution path

This SDK enables developers to more deeply integrate their smart devices into the Assistant by building upon the existing Smart Home platform to create a local execution path via Google Home smart speakers and Nest smart displays. Developers can now run their business logic to control new and existing smart devices in JavaScript that executes on the smart speakers and displays, benefitting users with reduced latency and higher reliability.

How it works:

The SDK introduces two new intents, IDENTIFY and REACHABLE_DEVICES. The local home platform scans the user's home network via mDNS, UDP, or UPnP to discover any smart devices connected to the Assistant, and triggers IDENTIFY to verify that the device IDs match those returned from the familiar Smart Home API SYNC intent. If the detected device is a hub or bridge, REACHABLE_DEVICES is triggered and treats the hub as the proxy device for communicating locally. Once the local execution path from Google Home to a device is established, the device properties are updated in Home Graph.
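Conceptually, the IDENTIFY handler's main job is matching a locally discovered device against the IDs you returned in SYNC. A minimal sketch of that matching logic follows; the field names are illustrative, not the exact Local Home SDK types.

```javascript
// Sketch: match a device found by a local mDNS scan to a device ID from
// the SYNC response. Field names are illustrative, not exact SDK types.
function identifyDevice(scanData, syncedDeviceIds) {
  // e.g. an mDNS service name like 'my-light-abc123.local'
  const localId = scanData.mdnsScanData.serviceName.split('.')[0];
  // Only claim the device if it matches an ID we reported in SYNC.
  return syncedDeviceIds.includes(localId) ? {id: localId} : null;
}
```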

Figure 2: The intents used for each execution path

When a user triggers a smart home Action that has a local execution path, the Assistant sends the EXECUTE intent to the Google Nest device rather than the developer's cloud fulfillment. The developer's JavaScript app is invoked, which then triggers the Local Home SDK to send control commands to the smart device over TCP, UDP socket, or HTTP/HTTPS requests. By defaulting to local execution rather than the cloud, users experience faster fulfillment of their requests. The execution requests can still be sent to the cloud path in case local execution fails. This redundancy minimizes the possibility of a failed request, and improves the overall user experience.

Additional features of the Local Home platform include:

  • Support for all Wi-Fi-enabled device types and device traits without two-factor authentication enabled.
  • No user action required to deploy Local Home benefits to all devices.
  • Easy configuration of discovery protocols and the hosted JavaScript app URL through the Actions console.

Figure 3: Local Home configuration tool in the Actions console

JavaScript apps can be tested on-device, allowing developers to employ familiar tools like Chrome Developer Console for debugging. Because the Local Home SDK works with the existing smart home framework, you can self-certify new apps through the Test suite for smart home as well.

Get started

To learn more about the Local Home platform, check out the API reference, and get started adding local execution with the developer guide and samples. For general information covering how you can connect smart devices to the Google Assistant, visit the Smart Home documentation, or check out the Local Technologies for the Smart Home talk from Google I/O this year.

You can send us any feedback you have through the bug tracker, or engage with the community at /r/GoogleAssistantDev. You can tag your posts with the flair local-home-sdk to help organize discussion.

Actions on Google at I/O 2019: New tools for web, mobile, and smart home developers

Posted by Chris Turkstra, Director, Actions on Google

People are using the Assistant every day to get things done more easily, creating lots of opportunities for developers on this quickly growing platform. And we’ve heard from many of you who want easier ways to connect your content across the Assistant.

At I/O, we’re announcing new solutions for Actions on Google that were built specifically with you in mind. Whether you build for web, mobile, or smart home, these new tools will help make your content and services available to people who want to use their voice to get things done.

Enhance your presence in Search and the Assistant

Help people with their “how to” questions

Every day, people turn to the internet to ask “how to” questions, like how to tie a tie, how to fix a faucet, or how to install a dog door. At I/O, we’re introducing support for How-to markup that lets you power richer and more helpful results in Search and the Assistant.

Adding How-to markup to your pages will enable the page to appear as a rich result on mobile Search and on Google Assistant Smart Displays. This is an incredibly lightweight way for web developers and creators to connect with millions of people, giving them helpful step-by-step instructions with video, images and text. You can start seeing How-to markup results on Search today, and your content will become available on the Smart Displays in the coming months.

Here’s an example where DIY Network added markup to their existing content on the web to provide a more helpful, interactive result on both Google Search and the Assistant:

Mobile Search screenshot showing How-to markup for installing a dog door
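Under the hood, How-to markup is schema.org structured data (typically JSON-LD) added to the page. A minimal, hedged sketch for the dog-door example; the step names and text are illustrative:

```json
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to install a dog door",
  "step": [{
    "@type": "HowToStep",
    "name": "Measure your dog",
    "text": "Measure your dog to choose the right door size."
  }, {
    "@type": "HowToStep",
    "name": "Cut the opening",
    "text": "Trace the template on the door and cut along the outline."
  }]
}
```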

For content creators who don’t maintain a website, we created a How-to Video Template where video creators can upload a simple spreadsheet with titles, text and timestamps for their YouTube video, and we’ll handle the rest. This is a simple way to transform your existing how-to videos into interactive, step-by-step tutorials across Google Assistant Smart Displays and Android phones.

Check out how REI is getting extra mileage out of their YouTube video:

Laptop to Home Hub displaying How To Template for the REI compass

How-to Video Templates are in developer preview so you can start building today, and your content will become available on Android phones and Smart Displays in the coming months.

Easier engagement with your apps

Help people quickly get things done with App Actions

If you’re an app developer, people are turning to your apps every day to get things done. And we see people turn to the Assistant every day for a natural way to ask for help via voice. This offers an opportunity to use intents to create voice-based entry points from the Assistant to the right spot in your app.

Last year, we previewed App Actions, a simple mechanism for Android developers that uses intents from the Assistant to deep link to exactly the right spot in your app. At I/O, we are announcing the release of built-in intents for four new App Action categories: Health & Fitness, Finance and Banking, Ridesharing, and Food Ordering. Using these intents, you can integrate with the Assistant in no time.

If I wanted to track my run with Nike Run Club, I could just say “Hey Google, start my run in Nike Run Club” and the app would automatically start tracking my run. Or, let’s say I just finished dinner with my friend Chad and we're splitting the check. I can say "Hey Google, send $15 to Chad on PayPal" and the Assistant takes me right into PayPal; I log in, and all of my information is filled in. All I need to do is hit send.

Google Pixel showing App Actions Nike Run Club

Each of these integrations was completed in less than a day with the addition of an Actions.xml file that handles the mapping of intents between your app and the Actions platform. You can start building with these new intents today and deploy to Assistant users on Android in the coming months. This is a huge opportunity to offer your fans an effortless way to engage more frequently with your apps.
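As a sketch of what that file involves, an Actions.xml mapping for a fitness intent might look like the following. This is illustrative only: the deep-link URL scheme and parameter names are assumptions, while actions.intent.START_EXERCISE is one of the built-in intents.

```xml
<actions>
  <!-- Map the built-in "start exercise" intent to a deep link in the app.
       The URL template and parameter names below are hypothetical. -->
  <action intentName="actions.intent.START_EXERCISE">
    <fulfillment urlTemplate="myapp://exercise{?exerciseType}">
      <parameter-mapping
          intentParameter="exercise.name"
          urlParameter="exerciseType" />
    </fulfillment>
  </action>
</actions>
```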

Build for devices in the home

Take advantage of Smart Displays’ interactive screens

Last year, we saw the introduction of the Smart Display as a new device category. The interactive visual surface opens up many new possibilities for developers.

Today, we’re introducing a developer preview of Interactive Canvas, which lets you create full-screen experiences that combine the power of voice, visuals, and touch. Canvas works across Smart Displays and Android phones, and it uses open web technologies you’re likely already familiar with, like HTML, CSS, and JavaScript.

Here’s an example of what you can build when you can leverage the full screen of a Smart Display:

Full screen of a Smart Display

Interactive Canvas is available for building games starting today, and we’ll be adding more categories soon. Visit the Actions Console to be one of the first to try it out.

Enable smart home devices to communicate locally

There are now more than 30,000 connected devices that work with the Assistant across 3,500 brands, and today, we’re excited to announce a new suite of local technologies that are specifically designed to create an even better smart home.

We're introducing a preview of the Local Home SDK, which enables you to run your smart home code locally on Google Home speakers and Nest displays and use their radios to communicate locally with your smart devices. This reduces cloud hops and brings a new level of speed and reliability to the smart home. We’ve been working with some amazing partners including Philips, Wemo, TP-Link, and LIFX on testing this SDK, and we’re excited to open it up for all developers next month.

Flowchart of Local Home SDK

Make setup more seamless

And, through the Local Home SDK, we’re making device setup more seamless, building on the experience we launched in partnership with GE smart lights this past October. So far, people have loved the ability to set up their lights in less than a minute in the Google Home app. We’re now scaling this to more partners, so go here if you’re interested.

Make your devices smart with Assistant Connect

Also, at CES earlier this year we previewed Google Assistant Connect which leverages the Local Home SDK. Assistant Connect enables smart home and appliance developers to easily add Assistant functionality into their devices at low cost. It does this by offloading a lot of work onto the Assistant to complete Actions, display content and respond to commands. We've been hard at work developing the platform along with the first products built on it by Anker, Leviton and Tile. We can't wait to show you more about Assistant Connect later this year.

New device types and traits

For those of you creating Actions for the smart home, we’re also releasing 16 new device types and three new device traits including LockUnlock, ArmDisarm, and Timer. Head over to our developer documentation for the full list of 38 device types and 18 device traits, and check out our sample project on GitHub to start building.

Get started with our new tools for all types of developers

Whether you’re looking to extend the reach of your content, drive more usage in your apps, or build custom Assistant-powered experiences, you now have more tools to do so.

If you want to learn more about how you can start building with these tools, check out our website to get started and our schedule so you can tune in to all of our developer talks that we’ll be hosting throughout the week.

We can’t wait to build together with you!

Check out the Google Assistant talks at I/O 2019

Posted by Mary Chen, Strategy Lead, Actions on Google

This year at Google I/O, the Actions on Google team is sharing new ways developers of all types can use the Assistant to help users get things done. Whether you’re making Android apps, websites, web content, Actions, or IoT devices, you’ll see how the Assistant can help you engage with users in natural and conversational ways.

Tune in to our announcements during the developer keynote, and then dive deeper with our technical talks. We listed the talks out below by area of interest. Make sure to bookmark them and reserve your seat if you’re attending live, or check back for livestream details if you’re joining us online.


For anyone new to building for the Google Assistant


For Android app developers


For webmasters, web developers, and content creators


For smart home developers


For anyone building an Action from scratch


For insight and beyond


In addition to these sessions, stay tuned for interactive demos and codelabs that you can try at I/O and at home. Follow @ActionsOnGoogle for updates and highlights before, during, and after the festivities.

See you soon!

Build Actions for the next billion users

Posted by Brad Abrams, Group Product Manager, Actions on Google

Before we look forward and discuss updates to Actions on Google for 2019, we wanted to recognize our global developer community for your tremendous work in 2018. We saw more than 4 times the number of projects created with Actions on Google this past year. And some of the most popular Action categories include Games and Trivia, Home Control, Music, Actions for Families, and Education – well done!

We hope to carry this enthusiasm forward, and at Mobile World Congress, we're announcing new tools so you can reach and engage with more people around the globe.

Building for the next billion users

The Google Assistant's now available in more than 80 countries in nearly 30 languages, and you've been busy making your Actions accessible in many of those locales.

One of the most exciting things we've seen in the last couple of years is happening in places where the next billion users are coming online for the first time. In these fast-growing countries like India, Indonesia, Brazil, and Mexico, voice is often the primary way users interact with their devices because it's natural, universal, and the most accessible input method for people who are starting to engage with technology for the first time in their lives.

Actions on Google coming to KaiOS and Android (Go Edition)

As more countries come online, we want to make it easy for you to reach and engage with these users as they adopt the Google Assistant into their everyday lives. There are tens of millions of users on Android Go and KaiOS in over 100 countries.

We'll be making your Actions available to Android Go and KaiOS devices in the next few months, so you should start thinking now about how to build for these platforms and users. Without any additional work required, your Actions will work on both operating systems at launch (unless, of course, your Action requires a screen with touch input). We'll also be launching a simulator so you can test your Actions to see how they look on entry-level Android Go smartphones and KaiOS feature phones.

A couple of partners have already built Actions with these new audiences in mind. Hello English, for example, created an Action to offer English lessons for users that speak Hindi, to create more opportunities for people through language learning. And Where is My Train? (WIMT) was built for the millions of Indians commuting daily, offering real-time locations and times for trains accessible by voice. Check out our developer docs for KaiOS and Android Go Edition, and start building for the next billion users.

Expanding capabilities to more languages and countries

And we're not just focused on a handful of emerging countries. We're always working to enable all of Actions on Google's tools so users can enjoy the best experience possible regardless of the country they live in or the language they speak—our work here never ends! Here's a snapshot of some of the progress we've made this past year:

  • New locales: Since last MWC, we've launched Actions on Google support for more languages and locales. You can now build Actions in 19 languages across 28 locales.
  • Wavenet voices: As we've launched Actions on Google in more languages, we've added more text-to-speech voice options for your Actions. And thanks to Wavenet advancements, we're introducing improved, more natural-sounding TTS voices for English (en-US, en-GB and en-AU), Dutch, French (fr-FR and fr-CA), German, Italian, Russian, Portuguese (Brazilian), Japanese, Korean, Polish, Danish and Swedish. You can listen to the upgraded voices here, and they'll start rolling out to your Actions in the coming weeks.
  • Transactions: You can now offer transactional experiences in 22 markets, up from just one at last MWC. If you're looking to incorporate transactions in your Actions, check out these tips.
  • Templates for the next billion users: If you're not yet familiar with templates, you can fill in a Google Sheet and publish an Action within minutes. Trivia and Personality Quiz templates are available in English (en-US and en-GB), French, German, Italian, Japanese, Korean, Portuguese, Spanish, Hindi and Indonesian. All you have to do is upload a Sheet in any of the languages above and your Actions will be live in those languages.

We've already talked about how busy the development community was this past year, and we've been hard at work to keep up! If you're looking to reach and engage with millions—even billions more users—now's a good time to start thinking about how your Action can make a difference in people's lives around the globe.

Five new investments for the Google Assistant Investments program

Posted by Ilya Gelfenbeyn, Head of the Google Assistant Investments program

Last year, we announced the Google Assistant Investments program with the goal to help pioneering startups bring their ideas to life in the digital assistant ecosystem. Not only have we invested in some really great startups, we've also been working closely with these companies to make their services available to more users.

We're excited to be back to announce five new portfolio companies and catch up on the progress some of them have made this past year. With the next batch of investments, we're helping companies explore how digital assistants can improve the hospitality, insurance, fashion and education industries, and we have something for sports fans too.

Welcome to our new portfolio investments

First up, AskPorter. This London-based team was founded to make managing spaces simple, providing every property manager and occupant with a digital personal assistant. AskPorter is an AI-powered property management platform with a digital assistant called Porter. Porter assists with all aspects of property management, such as guiding inspections, arranging viewings, troubleshooting maintenance issues and chasing payments.

GradeSlam is an on-demand, chat-based, personalized learning and tutoring service available across all subject areas. Sessions are conducted via chat, creating a learning environment that allows students to interact freely and personally with qualified educators. The Montreal-based team's service is already used by more than 150,000 students, teachers and administrators.

Aiva Health puts smart speakers in hospitals and senior communities to reduce response times and improve satisfaction for patients, seniors, and caregivers alike. Aiva understands patient requests and routes them to the most appropriate caregiver so they can respond instantly via their mobile app. The Aiva platform provides centralized IoT management, powering Smart Hospitals and Smart Communities.

StyleHacks (formerly Maison Me) was founded with the goal of empowering people to take back control of their style and wardrobe. With a conversational interface and personalized AI-powered recommendations, they're helping people live their most stylish lives. The team launched the "StyleHacks" Action for phones and Smart Displays in December 2018, helping people decide what to wear by providing personalized recommendations based on the weather and their preferences. And in the next few months, StyleHacks will also be able to help you shop for clothes you will actually wear. Just ask StyleHacks what to wear today.

StatMuse turns the biggest sports stars into your own personal sports commentator. Powered by the personalities of more than 25 sports superstars including Peyton Manning, Jerry Rice and Scott Van Pelt, fans can get scores, stats and recaps for the NBA, NFL, NHL and MLB dating back to 1876. To try it out, just say, "Hey Google, talk to StatMuse."

It's been almost a year since we launched the Investments program and we're happy to see how some of these companies are already using voice to broaden the Google Assistant's capabilities. If you're working on new ways for people to use their voice to get things done, or building new hardware devices for digital assistants, we'd like to hear from you.

Recap: Build Actions For Your Community

Posted by Leon Nicholls, Developer Programs Engineer

In March, we announced the "Build Actions for Your Community" Event Series. These events are run by Google Developers Groups (GDG) and other community groups to educate developers about Actions on Google through local meetup events.

The event series has now ended, spanning 66 countries with a total of 432 events. These events reached 19,400 developers, 21% of whom were women.

Actions on Google is of interest to developers globally, from Benin City, Nigeria, to Valparaíso, Chile, Hyderabad, India, Košice, Slovakia, and Omaha, Nebraska.

Developers in these cities experienced hands-on learning, including codelabs and activities to design and create Actions for their communities.

Developers see creating Actions for the Google Assistant as a way of applying machine learning to solve real-world problems. Here, for example, are the winners of the #IndiaBuildsActions Campaign:

You can try Meditation Daily to help you relax, English King to learn about grammar, or Voice Cricket to play a game of cricket.

We also got valuable feedback directly from developers about how to improve the Actions on Google APIs and documentation. We learned that developers want to build Actions for feature phones and want the Assistant to support more languages. Developers also asked for more codelabs, more workshops and more samples (subsequently, we've added a third codelab).

It was exciting to see how many developers shared their experiences on social media.

"Event series was impressive, Awesome and amazing. Knowledge well acquired" (Nigeria)

"The experience I had with the participants was unforgettable. Thank you" (Philippines)

It was also very encouraging to see that 76% of developers are likely to build new Actions and that most developers rated the Actions on Google platform better than other platforms.

Thanks to everybody who organized, presented, and attended these events all around the world. For even more events, join a local GDG DevFest to share ideas and learn about developing with Google's technologies. We can't wait to see what kinds of Actions you create for the Google Assistant!

Want more? Head over to the Actions on Google community to discuss Actions with other developers. Join the Actions on Google developer community program and you could earn a $200 monthly Google Cloud credit and an Assistant t-shirt when you publish your first app.

Four tips for building great transactional experiences for the Google Assistant

Posted by Mikhail Turilin, Product Manager, Actions on Google

Building engaging Actions for the Google Assistant is just the first step in your journey for delivering a great experience for your users. We also understand how important it is for many of you to get compensated for your hard work by enabling quick, hands-free transactional experiences through the Google Assistant.

Let's take a look at some of the best practices you should consider when adding transactions to your Actions!

1. Use Google Sign-In for the Assistant

Traditional account linking requires the user to open a web browser and manually log in to a merchant's website. This can lead to higher abandonment rates for a couple of reasons:

  1. Users need to enter a username and password, which they often can't remember
  2. Even if the user started the conversation on Google Home, they have to use a mobile phone to log in to the merchant's website

Our new Google Sign-In for the Assistant flow solves this problem. By implementing this authentication flow, your users will only need to tap twice on the screen to link their accounts or create a new account on your website. Connecting individual user profiles to your Actions gives you an opportunity to personalize your customer experience based on your existing relationship with a user.

And if you already have a loyalty program in place, users can accrue points and access discounts with account linking with OAuth and Google Sign-In.

Head over to our step-by-step guide to learn how to incorporate Google Sign-In.
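Once the account is linked, your fulfillment receives a signed Google ID token whose payload identifies the user. As a minimal sketch of what that payload carries (the `sub`, `email`, and `name` fields are standard ID-token claims; in production you must verify the token's signature against Google's public certificates, for example via the official client libraries, rather than decoding it blindly as shown here):

```javascript
// Decode the payload segment of a JWT-style Google ID token.
// NOTE: illustration only -- a real webhook must verify the signature
// against Google's public certs before trusting any claim.
function decodeIdTokenPayload(idToken) {
  const payloadSegment = idToken.split('.')[1];
  const json = Buffer.from(payloadSegment, 'base64url').toString('utf8');
  return JSON.parse(json);
}

// Simulated token: header.payload.signature, with ID-token-style claims.
const fakePayload = Buffer.from(
  JSON.stringify({ sub: '12345', email: 'user@example.com', name: 'Ada' })
).toString('base64url');
const fakeToken = `xxx.${fakePayload}.yyy`;

const profile = decodeIdTokenPayload(fakeToken);
console.log(profile.email); // user@example.com
```

With the decoded profile in hand, you can match the Google account to an existing customer record or create a new one, which is what enables the two-tap linking flow described above.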

2. Simplify the order process with a re-ordering flow

Most people prefer to use the Google Assistant quickly, whether they're at home or on the go. So if you're a merchant, you should look for opportunities to simplify the ordering process.

Choosing a product from a list of dozens of items takes a long time. That's why many consumers enjoy the ability to quickly reorder items when shopping online. Implementing reordering with the Google Assistant solves both problems at the same time.

Reordering is based on the user's purchase history. You will need to implement account linking to identify returning users. Once the account is linked, look up the order history on your backend and present the choices to the user.
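One way to turn that history into a short, voice-friendly list is to surface the most recent distinct items. A minimal sketch, assuming your backend returns records with `item` and `orderedAt` fields (both names are illustrative, not part of any Actions on Google API):

```javascript
// Sketch: derive quick-reorder suggestions from a linked account's
// order history. Field names ("item", "orderedAt") are illustrative --
// adapt them to whatever your backend actually returns.
function reorderSuggestions(orderHistory, limit = 3) {
  const seen = new Set();
  const suggestions = [];
  // Most recent orders first, skipping duplicate items.
  const sorted = [...orderHistory].sort((a, b) => b.orderedAt - a.orderedAt);
  for (const order of sorted) {
    if (!seen.has(order.item)) {
      seen.add(order.item);
      suggestions.push(order.item);
    }
    if (suggestions.length === limit) break;
  }
  return suggestions;
}

const history = [
  { item: 'Margherita pizza', orderedAt: 3 },
  { item: 'Pad thai', orderedAt: 5 },
  { item: 'Margherita pizza', orderedAt: 9 },
  { item: 'Burrito bowl', orderedAt: 7 },
];
console.log(reorderSuggestions(history));
// [ 'Margherita pizza', 'Burrito bowl', 'Pad thai' ]
```

Capping the list at a handful of items keeps the spoken prompt short, which matters most on voice-only surfaces.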

Just Eat, an online food ordering and delivery service in the UK, focuses on reordering as one of their core flows because they expect their customers to use the Google Assistant to reorder their favorite meals.

3. Use Google Pay for a more seamless checkout

Once a user has decided they're ready to make a purchase, it's important to provide a quick checkout experience. To help, we've expanded payment options for transactions to include Google Pay, a fast, simple way to pay online, in stores, and in the Google Assistant.

Google Pay reduces customer friction during checkout because it's already connected to users' Google accounts. Users don't need to go back and forth between the Google Assistant and your website to add a payment method. Instead, users can share the payment method that they have on file with Google Pay.

Best of all, it's simple to integrate: just follow the instructions in our transactions docs.

4. Support voice-only Actions on the Google Home

At I/O, we announced that voice-only transactions for Google Home are now supported in the US, UK, Canada, Germany, France, Australia, and Japan. A completely hands-free experience will give users more ways to complete transactions with your Actions.

Here are a few things to keep in mind when designing your transactions for voice-only surfaces:

  • Build easy-to-follow dialogue, because users won't see the visual dialogue or suggestion chips available on phones.
  • Avoid inducing choice paralysis. Focus on a few simple choices based on customer preferences collected during their previous orders.
  • Localize your transactional experiences for new regions – learn more here.
  • Don't forget to enable your transactions to work on smart speakers in the console.
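The first two points above can be sketched as a surface-aware prompt builder. This is a hand-rolled illustration, not an Actions on Google API: the `hasScreen` flag and the returned `speech`/`chips` shape are assumptions standing in for however your fulfillment detects surface capabilities and builds responses.

```javascript
// Sketch: tailor a prompt to the surface. On screens you can lean on
// suggestion chips; on a smart speaker the spoken prompt must carry the
// choices itself, kept short to avoid choice paralysis.
function buildPrompt(choices, hasScreen, maxSpokenChoices = 3) {
  if (hasScreen) {
    // Visual surface: short speech, choices rendered as chips.
    return { speech: 'What would you like?', chips: choices };
  }
  // Voice-only surface: speak only a few choices, no chips.
  const spoken = choices.slice(0, maxSpokenChoices);
  return {
    speech: `Would you like ${spoken.join(', ')}, or something else?`,
    chips: [],
  };
}

const choices = ['Margherita pizza', 'Pad thai', 'Burrito bowl', 'Ramen'];
console.log(buildPrompt(choices, false).speech);
// Would you like Margherita pizza, Pad thai, Burrito bowl, or something else?
```

Seeding `choices` from the user's previous orders, as described in the reordering tip, keeps the spoken list both short and relevant.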

Learn more tips in our Conversation Design Guidelines.

As we expand support for transactions in new countries and on new Google Assistant surfaces, now is the perfect time to make sure your transactional experiences are designed with users in mind so you can increase conversion and minimize drop-off.