Author Archives: Google Developers

New Dashboard in Google Developer Profiles

Posted by Chris Demeke, Product Manager; Amani Newton, Technical Writer

The last time you signed into your Google Developer Profile, you may have noticed a change. Now, after signing in, you’re invited to view your personal dashboard.

Google Developer Profile dashboard

This feature provides a personalized view of your activity and learning content across Google Developers, Firebase, and Android (more to come soon). With the Developer Dashboard, you can see a history of your earned badges and other activity, continue any in-progress codelabs or pathways, or start a new pathway.

This feature is just the latest step in Google’s ongoing plan to maintain a thriving ecosystem for developers and provide more tailored experiences. Here’s what’s new:

Create and organize a personalized list of developer pages that you care about

Animation of saved pages

Quickly access the pages you visit frequently using the Saved Pages feature. Select the bookmark icon next to any developer reference page to add it to your Saved Pages, then return to it from your dashboard whenever you need it.

Keep track of what you learn across Google Developers, Firebase, and Android

image showing how to keep track of what you learn on your developer profile

Your dashboard will keep track of your in-progress codelabs and pathways for you, eliminating the trouble of forgetting to bookmark. You can view a history of all of your developer achievements and activities, across Google Developers, Firebase, and Android.

Collect badges and share your achievements

3 examples of badges you can earn

Show off your familiarity with the latest technologies by passing short assessments, and earn badges that you can share on LinkedIn, Twitter, or Facebook.

Sign in today to view your personal dashboard, or to get started, head to developers.google.com/profile and click Create profile.

Updated Google Pay button increases click-through rates

Posted by Soc Sieng, Developer Advocate, Google Pay

Google Pay header

An improved Google Pay button works wonders for click-through rates and the checkout experience.

The updated Google Pay button displays a user's card information, which makes the user 30% more likely to use it and increases conversions by 3.6%.

The display of the card's type and last four digits reminds the user that they already saved a payment card to their Google Account, which makes them more likely to opt for the quick and easy checkout process that Google Pay provides.

How it works

If a user configured an eligible payment method in their Google Account at the time of purchase, the Google Pay button displays the type and last four digits of their most-recently used card.

Dynamic Google Pay button

Figure 1. An example of the Google Pay button with the additional information.

Buy with Google Pay button

Figure 2. An example of the Google Pay button without the additional information.

How to enable card information

If you use the createButton API with default button options, your Google Pay button is automatically updated to include the user's card network and last four digits.

If you customized the createButton API and set buttonType to plain or short, set it to buy to make your Google Pay button display the user's card information.

If you haven’t integrated with the createButton API yet, consider doing so now so that the user knows that their payment details are a click away.
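As a rough sketch of what a customized integration looks like, the options object below passes buttonType: 'buy' to createButton. The onClick handler and container element name are illustrative placeholders, not from the original post; in the browser, the object would be handed to the standard pay.js PaymentsClient.

```javascript
// Sketch of the options passed to createButton. Setting buttonType to 'buy'
// (rather than 'plain' or 'short') lets the rendered button display the
// user's card network and last four digits.
const buttonOptions = {
  buttonType: 'buy',       // enables the dynamic card display
  buttonColor: 'default',
  onClick: () => {
    // Placeholder: start the payment flow here,
    // e.g. paymentsClient.loadPaymentData(paymentDataRequest)
  },
};

// In the browser, this object is passed to the Google Pay client:
//   const button = paymentsClient.createButton(buttonOptions);
//   document.getElementById('gpay-container').appendChild(button);
```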

See it in action

To see the Google Pay button with other button options, try out the interactive button-customization tool.

Next steps

To get started with Google Pay, visit Google Pay's Business Console. Make sure to use the createButton API to benefit from the new features. If you have any questions, tweet @GooglePayDevs on Twitter and use #AskGooglePayDevs.

Tips and shortcuts for a more productive spring

Posted by Bruno Panara, Google Registry Team

An animation of a person at a desk using a laptop and drinking out of a mug while different domain names pop up.

In my previous life as a startup entrepreneur, I found that life was more manageable when I was able to stay organized — a task that’s easier said than done. At Google Registry, we've been keeping an eye out for productivity and organization tools, and we’re sharing a few of our favorites with you today, just in time for spring cleaning.

.new shortcuts to save you time

Since launching .new shortcuts last year, we’ve seen a range of companies use .new domains to help their users get things done faster on their websites.

  • If your digital workspace looks anything like mine, you’ll love these shortcuts: action.new creates a new Workona workspace to organize your Chrome tabs, and task.new helps keep track of your to-dos and projects in Asana.
  • Bringing together notes and ideas can make it easier to get work done: coda.new creates a new Coda document to collect all your team’s thoughts, and jam.new starts a new collaborative Google Jamboard session.
  • Spring cleaning wouldn’t be complete without a tidy cupboard: With sell.new you can create an eBay listing in minutes and free up some closet space. And if you own or manage a business, stay on top of your orders and keep services flowing by giving the shortcut — invoice.new — a try.

Visit whats.new to browse all the .new shortcuts, including our Spring Spotlights section.

Six startups helping you increase productivity

We recently sat down with six startups to learn how they’re helping their clients be more productive. From interviewing and hiring, to managing teamwork, calendars and meetings, check out these videos to learn how you can make the most of your time:

Arc.dev connects developers with companies hiring remotely, helping them find their next opportunity.

The founders of byteboard.dev, who came through Area 120, Google’s in-house incubator for experimental projects, thought that technical interviews were inefficient. So they redesigned them from the ground up to be more fair and relevant to real-world jobs.

To run more efficient meetings, try fellow.app. Streamlining agendas, note taking, action items and decision recording can help your team build great meeting habits.

Friday.app helps you organize your day so you can stay focused while sharing and collaborating with remote teammates.

Manage your time productively using inmotion.app, a browser extension that is a search bar, calendar, tab manager and distraction blocker, all in one.

No time to take your pet to the groomers? Find a groomer who will come to you and treat your pet to an in-home grooming session with pawsh.app.

Whether you’re a pet parent, a busy professional or just looking to sell your clutter online, we hope these tools help you organize and save time this season.

Everything Assistant at I/O

Posted by Mike Bifulco

Google I/O banner

We’re excited to host the first ever virtual Google I/O Conference this year, from May 18-20, 2021 – and everyone's invited! Developers around the world will join us for keynotes, technical sessions, codelabs, demos, meetups, workshops, and Ask Me Anything (AMA) sessions hosted by Googlers whose teams have been hard at work preparing new features, APIs, and tools for you to try out. We can’t wait for you to explore everything Google has to share. Given the sheer amount of content being shared across those 3 days, this guide will help you find the sessions most relevant to building and integrating with Google Assistant.

With that in mind, here’s a rundown of everything Assistant at Google I/O 2021:

Keynote: What’s New in Google Assistant (register)

We’ll kick off news from Assistant with our keynote session, which will be livestreamed on May 19th at 9:45am PST. Expect to hear about what’s happened in Assistant over the past year, new product announcements, feature updates, and tooling changes.

Keynote: What’s New in Smart Home (register)

In celebration of Google Assistant's 5th birthday, we'll share our Smart Home journey and the things we’ve learned along the way. We'll also dive into product vision, new product announcements, and showcase great Assistant experiences built by our developer community. Catch the Smart Home keynote on May 19th at 4:15pm PST.

Technical Sessions

Technical sessions are 15-minute deep dives into new features, tools, and other announcements from product teams. These 4 sessions will be available on demand, so you can watch them any time after they officially launch during the event.

Driving a Successful Launch for Conversational Actions (register)

In this session, we’ll discuss marketing activities that will help users discover and engage with what you’ve built on Google Assistant. Learn some of the basics of putting together a marketing team, a go-to-market plan, and some recommended activities for promoting engagement with your Conversational Actions.

How to Voicify Your Android App (register)

In this session, you’ll learn how to implement voice capabilities in your Android App. Get users into your app with a voice command using App Actions.

Android Shortcuts for Assistant (register)

Now that you've added a layer of voice interaction to your Android app, learn what's new with Android Shortcuts and how they can be extended to the Google Assistant.

Refreshing Widgets (register)

Widgets in Android 12 are coming with a fresh new look and feel. Come to this session to learn how you can make the most of what’s coming to Widgets, while also making them more useful and discoverable through integrations with Assistant and Assistant Auto.

Ask Me Anything (AMA)

AMAs are a great opportunity for you to have your questions fielded by Googlers. If you register for I/O, you’ll be able to pre-submit questions to any of these AMAs. Teams of Googlers will be answering audience questions live during I/O. All AMA sessions will be livestreamed at specific dates and times, so be sure to add them to your calendar.

App Actions: Ask Me Anything

May 19th, 10:15am PST (register)

This is the place to bring all of your burning questions about App Actions for Android. Our App Actions team will include Program Managers, Developer Advocates, and Engineers who are looking forward to answering your questions. Maybe you’re building an app which uses Custom Intents, or you’ve got questions about some of the new feature announcements from our Technical Sessions (see above!) - the team is looking forward to helping.

Games on Google Assistant: Ask Me Anything

May 19th, 11:00pm PST (register)

Join a panel of Googlers to ask your questions about building Games with Google Assistant. Our team of Product Managers and Game developers are here to help you - from designing and building games, to toolchain questions, to figuring out what types of games people are playing on their smart devices.

Workshops

This year, our workshops will be conducted online via livestream. Each workshop will be led by a Googler providing instruction alongside a team of Googler TAs, who will be there to answer your questions via live chat. Workshops will show you how to apply the things you learn at I/O by giving you hands-on experience with new tools and APIs. Each workshop has limited space for registrations, so be sure to sign up early if you’re interested.

Extend an Android app to Google Assistant with App Actions

May 19th, 11:00am PST (register)

Learn to develop App Actions using common built-in intents in this intermediate codelab, enabling users to open app features and search for in-app content with Google Assistant.

Debugging the Smart Home

May 19th, 11:30pm PST (register)

Improve your products' reliability and user experience with Google's new smart home quality tools in this intermediate codelab. Learn how to view, analyze, debug and fix issues with your smart home integrations.

Meetups

Women in Voice Meetup

May 20th, 4:00pm PST (register)

This meetup will be a chance for developers to share influential work by women in Voice AI and to discuss ways allies can help women in Voice to be more successful while building a more inclusive ecosystem.

Smart Home Developer Meetup

[Americas] May 18, 3:00pm PST (register)
[APAC] May 19th, 9:00pm PST (register)
[EMEA] May 20th, 6:00am PST (register)

This meetup will be a chance for developers interested in Smart Home to chat with the Smart Home partner engineering team about developing and debugging smart home integrations, share projects, or ask questions.

Register now

Registration for Google I/O 2021 is now open - and attending I/O 2021 is entirely free and open to all. We hope to see you there, and can’t wait to share what we’ve been working on with you. To register for the event, head over to the Google I/O registration page.

Recommended strategies and best practices for designing and developing games and stories on Google Assistant

Posted by Wally Brill and Jessica Dene Earley-Cha

Illustration of pink car collecting coins

Since we launched Interactive Canvas, and especially in the last year, we have been helping developers create great storytelling and gaming experiences for Google Assistant on smart displays. Along the way we’ve learned a lot about what does and doesn’t work. Building these kinds of interactive voice experiences is still a relatively new endeavor, so we want to share what we've learned to help you build the next great gaming or storytelling experience for Assistant.

Here are three key things to keep in mind when you’re designing and developing interactive games and stories. We selected these three from a longer list of lessons learned (the full list of 10+ lessons is linked at the end) because they depend on Actions Builder/SDK functionality and can differ slightly from traditional conversation design for voice-only experiences.

1. Keep the Text-To-Speech (TTS) brief

Text-to-speech, or computer-generated voice, has improved dramatically in the last few years, but it isn’t perfect. Through user testing, we’ve learned that users (especially kids) don’t like listening to long TTS messages. Of course, some content (like interactive stories) should not be reduced. For games, though, try to keep your script simple. Wherever possible, leverage the power of the visual medium and show, don’t tell. Consider providing a skip button on the screen so that users can read ahead and move forward without waiting for the TTS to finish. The TTS and the text on screen don’t always need to mirror each other. For example, the TTS may say "Great job! Let's move to the next question. What’s the name of the big red dog?" while the text on screen simply says "What is the name of the big red dog?"

Implementation

You can provide different audio and screen-based prompts by using a simple response, which allows different verbiage in the speech and text sections of the response. With Actions Builder, you can do this using the node client library or in the JSON response. The following code samples show you how to implement the example discussed above:

candidates:
  - first_simple:
      variants:
        - speech: Great job! Let's move to the next question. What’s the name of the big red dog?
          text: What is the name of the big red dog?

Note: implementation in YAML for Actions Builder

const { conversation, Simple } = require('@assistant/conversation');
const app = conversation();

app.handle('yourHandlerName', conv => {
  conv.add(new Simple({
    speech: 'Great job! Let\'s move to the next question. What’s the name of the big red dog?',
    text: 'What is the name of the big red dog?',
  }));
});

Note: implementation with node client library

2. Consider both first-time and returning users

Frequent users don't need to hear the same instructions repeatedly, so optimize the experience for returning users. If it's a user's first visit, explain the full context. If they revisit your Action, acknowledge their return with a "Welcome back" message, and shorten (or taper) the instructions. If the user has returned more than three or four times, get to the point as quickly as possible.

An example of tapering:

  • Instructions to first time users: “Just say words you can make from the letters provided. Are you ready to begin?”
  • For a returning user: “Make up words from the jumbled letters. Ready?”
  • For a frequent user: “Are you ready to play?”

Implementation

You can check the lastSeenTime property in the User object of the HTTP request. lastSeenTime is a timestamp of the user's last interaction with your Action; if this is their first interaction, the field is omitted. Because it’s a timestamp, you can vary the message for users whose last interaction was more than three months, three weeks, or three days ago. Below is an example with a tapered default message: if the lastSeenTime property is omitted, meaning it's the user's first interaction with the Action, the message is replaced with the longer version containing more details.

const { conversation } = require('@assistant/conversation');
const app = conversation();

app.handle('greetingInstructions', conv => {
  let message = 'Make up words from the jumbled letters. Ready?';
  if (!conv.user.lastSeenTime) {
    message = 'Just say words you can make from the letters provided. Are you ready to begin?';
  }
  conv.add(message);
});

Note: implementation with node client library
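To taper further by recency, one option is to compare lastSeenTime against the current time and pick a prompt per bucket. This is a sketch extending the idea above; the 90-day threshold and helper name are illustrative assumptions, not from the original post.

```javascript
// Illustrative sketch: choose a greeting based on how long ago the user was
// last seen. lastSeenTime is an ISO 8601 timestamp string, or undefined on a
// first visit (the field is omitted from the request).
function greetingFor(lastSeenTime, now = Date.now()) {
  if (!lastSeenTime) {
    // First interaction: give the full instructions.
    return 'Just say words you can make from the letters provided. Are you ready to begin?';
  }
  const daysAway = (now - new Date(lastSeenTime).getTime()) / (24 * 60 * 60 * 1000);
  if (daysAway > 90) {
    // Long absence: welcome the user back and briefly restate the rules.
    return 'Welcome back! Make up words from the jumbled letters. Ready?';
  }
  // Recent, frequent user: get straight to the point.
  return 'Are you ready to play?';
}
```

Inside a handler, this would be called as conv.add(greetingFor(conv.user.lastSeenTime)).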

3. Support strongly recommended intents

There are some commonly used intents which really enhance the user experience by providing some basic commands to interact with your voice app. If your action doesn’t support these, users might get frustrated. These intents help create a basic structure to your voice user interface, and help users navigate your Action.

  • Exit / Quit

    Closes the action

  • Repeat / Say that again

    Makes it easy for users to hear immediately preceding content at any point

  • Play Again

    Gives users an opportunity to re-engage with their favorite experiences

  • Help

    Provides more detailed instructions for users who may be lost. Depending on the type of Action, this may need to be context specific. Defaults returning users to where they left off in game play after a Help message plays.

  • Pause, Resume

    Provides a visual indication that the game has been paused, and provides both visual and voice options to resume.

  • Skip

    Moves to the next decision point.

  • Home / Menu

    Moves to the home or main menu of an action. Having a visual affordance for this is a great idea. Without visual cues, it’s hard for users to know that they can navigate through voice even when it’s supported.

  • Go back

    Moves to the previous page in an interactive story.

Implementation

Actions Builder and the Actions SDK support system intents that cover a few of these use cases and come with Google-supplied training phrases:

  • Exit / Quit -> actions.intent.CANCEL This intent is matched when the user wants to exit your Actions during a conversation, such as a user saying, "I want to quit."
  • Repeat / Say that again -> actions.intent.REPEAT This intent is matched when a user asks the Action to repeat.

For the remaining intents, you can create User Intents and you have the option of making them Global (where they can be triggered at any Scene) or add them to a particular scene. Below are examples from a variety of projects to get you started:
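As one hedged sketch of the pattern, a custom "Repeat / Say that again" handler can be built by remembering the last prompt in session storage and replaying it when a global "repeat" intent matches. The function names and session parameter below are illustrative assumptions, not from an official sample.

```javascript
// Illustrative sketch: remember the last spoken prompt so a global "repeat"
// user intent can replay it. `conv` mimics the shape of the node client
// library's conversation object; in a real webhook these bodies would live
// inside app.handle(...) handlers.
function ask(conv, text) {
  conv.session.params.lastPrompt = text; // stash the prompt for "repeat"
  conv.add(text);
}

function handleRepeat(conv) {
  // Replay the stored prompt, or fall back if nothing has been said yet.
  conv.add(conv.session.params.lastPrompt || "Sorry, there's nothing to repeat yet.");
}
```

The repeat handler would be wired to a global user intent with training phrases like "say that again" so it can match in any scene.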

So there you have it. Three suggestions to keep in mind for making amazing interactive games and story experiences that people will want to use over and over again. To check out the full list of our recommendations go to the Lessons Learned page.

Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!

Recommended strategies and best practices for designing and developing games and stories on Google Assistant

Posted by Wally Brill and Jessica Dene Earley-Cha

Illustration of pink car collecting coins

Since we launched Interactive Canvas, and especially in the last year we have been helping developers create great storytelling and gaming experiences for Google Assistant on smart displays. Along the way we’ve learned a lot about what does and doesn’t work. Building these kinds of interactive voice experiences is still a relatively new endeavor, and so we want to share what we've learned to help you build the next great gaming or storytelling experience for Assistant.

Here are three key things to keep in mind when you’re designing and developing interactive games and stories. These three were selected from a longer list of lessons learned (stay tuned to the end for the link for the 10+ lessons) because they are dependent on Action Builder/SDK functionality and can be slightly different for the traditional conversation design for voice only experiences.

1. Keep the Text-To-Speech (TTS) brief

Text-to-speech, or computer generated voice, has improved exponentially in the last few years, but it isn’t perfect. Through user testing, we’ve learned that users (especially kids) don’t like listening to long TTS messages. Of course, some content (like interactive stories) should not be reduced. However, for games, try to keep your script simple. Wherever possible, leverage the power of the visual medium and show, don’t tell. Consider providing a skip button on the screen so that users can read and move forward without waiting until the TTS is finished. In many cases the TTS and text on a screen won’t always need to mirror each other. For example the TTS may say "Great job! Let's move to the next question. What’s the name of the big red dog?" and the text on screen may simply say "What is the name of the big red dog?"

Implementation

You can provide different audio and screen-based prompts by using a simple response, which allows different verbiage in the speech and text sections of the response. With Actions Builder, you can do this using the node client library or in the JSON response. The following code samples show you how to implement the example discussed above:

candidates:
- first_simple:
variants:
- speech: Great job! Let's move to the next question. What’s the name of the big red dog?
text: What is the name of the big red dog?

Note: implementation in YAML for Actions Builder

app.handle('yourHandlerName', conv => {
conv.add(new Simple({
speech: 'Great job! Let\'s move to the next question. What’s the name of the big red dog?',
text: 'What is the name of the big red dog?'
}));
});

Note: implementation with node client library

2. Consider both first-time and returning users

Frequent users don't need to hear the same instructions repeatedly. Optimize the experience for returning users. If it's a user's first time experience, try to explain the full context. If they revisit your action, acknowledge their return with a "Welcome back" message, and try to shorten (or taper) the instructions. If you noticed the user has returned more than 3 or 4 times, try to get to the point as quickly as possible.

An example of tapering:

  • Instructions to first time users: “Just say words you can make from the letters provided. Are you ready to begin?”
  • For a returning user: “Make up words from the jumbled letters. Ready?”
  • For a frequent user: “Are you ready to play?”

Implementation

You can check the lastSeenTime property in the User object of the HTTP request. The lastSeenTime property is a timestamp of the last interaction with this particular user. If this is the first time a user is interacting with your Action, this field is omitted. Since it’s a timestamp, you can show different messages to a user whose last interaction was more than three months, three weeks, or three days ago. Below is an example with a tapered default message. If the lastSeenTime property is omitted, meaning it's the first time the user is interacting with this Action, the message is replaced with the longer version containing more details.

const { conversation } = require('@assistant/conversation');
const app = conversation();

app.handle('greetingInstructions', conv => {
  // lastSeenTime is omitted for first-time users.
  let message = 'Make up words from the jumbled letters. Ready?';
  if (!conv.user.lastSeenTime) {
    message = 'Just say words you can make from the letters provided. Are you ready to begin?';
  }
  conv.add(message);
});

Note: implementation with node client library
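Since lastSeenTime is a timestamp, you can also taper by recency rather than just by presence. A minimal sketch of that idea, using a plain function in place of a handler (`lastSeenTime` stands in for conv.user.lastSeenTime; the 30-day threshold and messages are illustrative):

```javascript
// Choose a greeting based on how recently the user was seen.
// lastSeenTime is an RFC 3339 timestamp string, absent for new users.
function taperedGreeting(lastSeenTime, now = Date.now()) {
  if (!lastSeenTime) {
    // First interaction: give the full instructions.
    return 'Just say words you can make from the letters provided. Are you ready to begin?';
  }
  const daysAway = (now - new Date(lastSeenTime).getTime()) / (1000 * 60 * 60 * 24);
  return daysAway > 30
    ? 'Welcome back! Make up words from the jumbled letters. Ready?'
    : 'Are you ready to play?';
}
```

In a real handler you would call conv.add(taperedGreeting(conv.user.lastSeenTime)).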

3. Support strongly recommended intents

There are some commonly used intents which really enhance the user experience by providing some basic commands to interact with your voice app. If your action doesn’t support these, users might get frustrated. These intents help create a basic structure to your voice user interface, and help users navigate your Action.

  • Exit / Quit

    Closes the action

  • Repeat / Say that again

    Makes it easy for users to hear immediately preceding content at any point

  • Play Again

    Gives users an opportunity to re-engage with their favorite experiences

  • Help

Provides more detailed instructions for users who may be lost. Depending on the type of Action, this may need to be context specific. After the help message plays, return users to where they left off in gameplay.

  • Pause, Resume

    Provides a visual indication that the game has been paused, and provides both visual and voice options to resume.

  • Skip

    Moves to the next decision point.

  • Home / Menu

    Moves to the home or main menu of an action. Having a visual affordance for this is a great idea. Without visual cues, it’s hard for users to know that they can navigate through voice even when it’s supported.

  • Go back

    Moves to the previous page in an interactive story.

Implementation

Actions Builder and the Actions SDK support system intents that cover a few of these use cases and come with Google-supported training phrases:

  • Exit / Quit -> actions.intent.CANCEL This intent is matched when the user wants to exit your Action during a conversation, such as a user saying, "I want to quit."
  • Repeat / Say that again -> actions.intent.REPEAT This intent is matched when a user asks the Action to repeat what it just said.
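Note that actions.intent.REPEAT only handles the matching; your webhook still decides what to replay. One common pattern, sketched here with a plain object standing in for conv.session.params (function and property names are hypothetical):

```javascript
// Remember each prompt before sending it, so a REPEAT match can replay it.
function say(session, prompt) {
  session.lastPrompt = prompt; // store what was just said
  return prompt;
}

// Handler body for the REPEAT intent: replay the stored prompt if any.
function repeat(session) {
  return session.lastPrompt
    ? 'Sure, one more time: ' + session.lastPrompt
    : "I haven't said anything yet.";
}
```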

For the remaining intents, you can create User Intents and make them Global (so they can be triggered in any Scene) or add them to a particular scene. Below are examples from a variety of projects to get you started:
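As one hedged illustration of a global intent, a Help handler might route to scene-specific guidance. The scene names and messages below are hypothetical; in a real handler you would read the current scene from conv.scene.name:

```javascript
// Context-specific help, keyed by the scene the user is currently in.
const HELP = {
  menu: 'Say "play" to start a game, or "scores" to hear the leaderboard.',
  game: 'Guess a word made from the letters shown, or say "skip" for new letters.',
};

// Fall back to generic guidance for scenes without a tailored message.
function helpFor(sceneName) {
  return HELP[sceneName] || 'You can say "menu" to go back to the main menu.';
}
```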

So there you have it. Three suggestions to keep in mind for making amazing interactive games and story experiences that people will want to use over and over again. To check out the full list of our recommendations go to the Lessons Learned page.

Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!

Google Developer Group Spotlight: A conversation with Cloud Architect, Ilias Papachristos

Posted by Jennifer Kohl, Global Program Manager, Google Developer Communities

The Google Developer Groups Spotlight series interviews inspiring leaders of community meetup groups around the world. Our goal is to learn more about what developers are working on, how they’ve grown their skills with the Google Developer Group community, and what tips they might have for us all.

We recently spoke with Ilias Papachristos, Google Developer Group Cloud Thessaloniki Lead in Greece. Check out our conversation with Ilias on Cloud architecture, reading official documentation, and suggested resources to help developers grow professionally.

Tell us a little about yourself?

I’m a family man, ex-army helicopter pilot, Kendo sensei, beta tester at Coursera, Lead of the Google Developer Group Cloud Thessaloniki community, Google Cloud Professional Architect, and a Cloud Board Moderator on the Google Developers Community Leads Platform (CLP).

I love outdoor activities, reading books, listening to music, and cooking for my family and friends!

Can you explain your work in Cloud technologies?

Over my career, I have used Compute Engine for an e-shop, AutoML Tables for an HR company, and have architected the migration of a company in Mumbai. Now I’m consulting for a company on two of their projects: one that uses Cloud Run and another that uses Kubernetes.

Both of them have Cloud SQL and the Kubernetes project will use the AI Platform. We might even end up using Dataflow with BigQuery for the streaming and Scheduler or Manager, but I’m still working out the details.

I love the chance to share knowledge with the developer community. Many days, I open my PC, read the official Google Cloud blog, and share interesting articles on the CLP Cloud Board and GDG Cloud Thessaloniki’s social media accounts. Then, I check Google Cloud’s Medium publication for extra articles. Read, comment, share, repeat!

How did the Google Developer Group community help your Cloud career?

My overall knowledge of Google Cloud has to do with my involvement with Google Developer Groups. It is not just one thing. It’s about everything! At the first European GDG Leads Summit, I met so many people who were sharing their knowledge and offering their help. For a newbie like me it was, and still is, something that I keep in my heart as a treasure.

I’ve also received so many informative lessons on public speaking from Google Developer Group and Google Developer Student Club Leads. They always motivate me to continue talking about the things I love!

What has been the most inspiring part of being a part of your local Google Developer Group?

Collaboration with the rest of the DevFest Hellas Team! For this event, I was a part of a small group of 12 organizers, all of whom never had hosted a large meetup before. With the help of Google Developer Groups, we had so much fun while creating a successful DevFest learning program for 360 people.

What are some technical resources you have found the most helpful for your professional development?

Besides all of the amazing tricks and tips you can learn from the Google Cloud training team and courses on the official YouTube channel, I had the chance to hear a talk by Wietse Venema on Cloud Run. I also have learned so much about AI from Dale Markovitz’s videos on Applied AI. And of course, I can’t leave out Priyanka Vergadia’s posts, articles, and comic-videos!

Official documentation has also been a super important part of my career. Here are five links that I am using right now as an Architect:

  1. Google Cloud Samples
  2. Cloud Architecture Center
  3. Solve with Google Cloud
  4. Google Cloud Solutions
  5. 13 sample architectures to kickstart your Google Cloud journey

How did you become a Google Developer Group Lead?

I am a member of the Digital Analytics community in Thessaloniki, Greece. Their organizer asked me to write articles to start motivating young people. I translated one of the blogs into English and published it on Medium. The Lead of GDG Thessaloniki read them and asked me to become a facilitator for a Cloud Study Jams (CSJ) workshop. I accepted and then traveled to Athens to train three people so that they could also become CSJ facilitators. At the end of the CSJ, I was asked if I wanted to lead a Google Developer Group chapter. I agreed. Maria Encinar and Katharina Lindenthal interviewed me, and I got it!

What would be one piece of advice you have for someone looking to learn more about a specific technology?

Learning has to be an amusing and fun process. And that’s how it’s done with Google Developer Groups all over the world. Join mine, here. It’s the best one. (Wink, wink.)

Want to start growing your career and coding knowledge with developers like Ilias? Then join a Google Developer Group near you, here.

Celebrating Earth Day with our inaugural Google for Startups Accelerator: Climate Change cohort

Posted by Jason Scott, Head of Startup Developer Ecosystem, USA | Nick Zakrasek, Global Product Lead, Sustainability

GIF of Climate Change Class Announcement

Today, people across the world will celebrate and participate in Earth Day. In line with Google’s broader commitment to address climate change, we are proud to join in this celebration by announcing the first cohort for our Google for Startups Accelerator: Climate Change program. The 10-week digital accelerator is designed to help North American sustainable technology startups take their businesses to the next level.

Meet the cohort of 11 companies, who are collectively leveraging technology and data to combat the challenge of climate change:

75F, Bloomington, Minnesota, USA

75F is a vertically-integrated building intelligence company using smart sensors, controllers and software to make commercial buildings more efficient and comfortable.

BlocPower, Brooklyn, New York, USA

BlocPower is providing software and financial tools to analyze, finance, and manage the challenge of converting millions of urban buildings off of fossil fuels.

CarbiCrete, Montreal, Quebec, Canada

CarbiCrete's concrete-making solution completely eliminates the need for cement, making it cheaper and stronger than traditional concrete, all through an overall carbon-negative process.

Enexor BioEnergy, Franklin, Tennessee, USA

Enexor BioEnergy delivers renewable electricity and thermal energy using organic, biomass, and plastic feedstocks, helping to mitigate climate change while addressing global waste overabundance challenges.

FARM-TRACE, Vancouver, British Columbia, Canada

FARM-TRACE is a software platform which delivers verified reforestation impacts created by farmers to brands wanting to reduce their climate footprints.

Fermata Energy, Charlottesville, Virginia, USA

Fermata Energy designs, supplies, and operates technology that turns electric vehicles into energy storage assets that combat climate change, increase resilience, and dramatically lower the cost of ownership.

Flair, San Francisco, California, USA

Flair makes buildings more comfortable using less energy while promoting energy efficiency, electrification, and smart grid integration.

Heatworks, Mt. Pleasant, South Carolina, USA

Heatworks uses electronic controls and graphite electrodes to heat water instantly, endlessly, and precisely, without energy loss.

Wild Earth, Berkeley, California, USA

Wild Earth is a plant-based pet food company that harnesses biotech to create cruelty free products with less environmental impact.

Yard Stick PBC, Cambridge, Massachusetts, USA

Yard Stick fights climate change by measuring soil carbon accurately, instantly, and affordably, providing the “missing link” to carbon sequestration on the gigaton per year scale.

Zauben, Chicago, Illinois, USA

Zauben is designing the world's smartest green products, like sensor-driven, IoT-connected green roofs and living walls, to create healthier and happier environments for humans and our planet.

The program kicks off on Monday, June 7th and will focus on product design, technical infrastructure, customer acquisition, and leadership development, granting our founders access to an expansive network of mentors, senior executives, and industry leaders.

We are incredibly excited to support this group of entrepreneurs over the next three months and beyond, connecting them with the best of our people, products, and programming to advance their companies and solutions.

Be sure to join us as we showcase their accomplishments on Thursday, August 12th from 12:30pm - 2:00pm EST at our Google for Startups Accelerator: Climate Change Demo Day.