Tag Archives: actions on google

Announcing Enhanced Smart Home Analytics

Posted by Toni Klopfenstein, Developer Advocate

When creating scalable applications, consistent and reliable monitoring of resources is a valuable tool for any developer. Today we are releasing enhanced analytics and logging for Smart Home Actions. This feature enables you to more quickly identify and respond to errors or quality issues that may arise.

Request Latency Dashboard

You can now access the smart home dashboard with pre-populated metrics charts for your Actions on the Analytics tab in the Actions Console, or through Cloud Monitoring. These metrics help you quantify the health and usage of your Action, and gain insight into how users engage with your Action. You can view:

  • Execution types and device traits used
  • Daily users and request counts
  • User query response latency
  • Success rate for Smart Home engagements
  • Comparison of cloud and local fulfillment interactions

Successful Requests Dashboard

Cloud Logging provides detailed logs based on the events observed in Cloud Monitoring.

We've added additional features to the error logs to help you quickly debug why intents fail, which particular device commands malfunction, or whether your local fulfillment falls back to cloud fulfillment.

New details added to the event logs include:

  • Cloud vs. local fulfillment
  • EXECUTE vs. QUERY intents
  • Locale of request
  • Device type

You can additionally export these logs through Cloud Pub/Sub, and build log-based metrics and alerts for your development teams to gain insights into common issues.

For more guidance on accessing your Smart Home Action analytics and logs, check out the developer guide or watch the video.

We want to hear from you! Continue sharing your feedback with us through the issue tracker, and engage with other smart home developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!

Join the "Hey Google" Smart Home Virtual Summit

Posted by Toni Klopfenstein, Developer Relations

Over the past year, we've been focused on building new tools and features to support our smart home developer community. Though we weren't able to engage with you in person at Google I/O, we are pleased to announce the "Hey Google" Smart Home Virtual Summit on July 8th - an opportunity for us to come together and dive into the exciting new and upcoming features for smart home developers and users.

Join us for the keynote, where Michele Turner, Product Management Director of the Smart Home Ecosystem, will share our recent smart home product initiatives and how developers can benefit from these capabilities. She will also introduce new tools that make it easier for you to develop with Google Assistant. We will also be hosting a partner panel, where you can hear from industry leaders about how they have navigated the impact of COVID-19 and their thoughts on the state of the industry.

Registration is FREE! Head on over to the Summit website to register and check out the schedule. Events will be held at EMEA-, APAC-, and AMER-friendly times. We hope to see you and your colleagues there!

Local Home SDK support on Nest Wifi

Posted by Toni Klopfenstein, Developer Advocate

Today, we're expanding support of the Local Home SDK to Google Nest Wifi routers with the latest firmware update to M81. The recently launched Local Home SDK allows you to create a local fulfillment path for your smart home Action, providing lower latency and higher reliability.

By adding support for the Node.js runtime on Nest Wifi routers, the Local Home platform is now compatible with the full Nest Wifi system. This update means your local execution application can run on a self-healing mesh wireless network, and your users gain the benefits of expanded, reliable home automation coverage.

To support this additional runtime, we've updated the Actions Console to enable you to add the Node.js on-device testing URL. Nest Wifi routers will automatically receive the Node-targeted bundle.js files you've already uploaded during deployment of your Action. Since Chrome DevTools has built-in Node.js support, your development flow doesn't require any additional tools for inspecting your Node.js app or debugging your smart home Action.

We have updated the developer guide and tools to help guide you through the various local fulfillment runtimes and the features of this tooling. For additional guidance on enabling local fulfillment for your smart home Action, check out the Enable local fulfillment for smart home Actions codelab. The API reference and samples can also help you build your first local fulfillment app.

We want to hear from you! Continue sharing your feedback with us through the issue tracker, and engage with other smart home developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!

Announcing Actions Builder & Actions SDK: New tools optimized for the Google Assistant

Posted by the Assistant Developer Platform team

Since the launch of the Google Assistant, our developer ecosystem has been instrumental in delivering compelling voice experiences to more than 500 million active users. Today, we’re taking a major step forward in helping you build these custom voice apps and services by introducing a suite of new and improved developer tools: Actions Builder and Actions SDK. These tools make building Conversational Actions for the Assistant easier and more streamlined than ever.

Better design and development tools

Actions Builder is a web-based IDE that lets you develop, test, and deploy directly in the Actions console. The graphical interface lets you visualize the conversational flow, manage Natural Language Understanding (NLU) training data, and debug with advanced tools.

For those of you who prefer local IDEs, the updated Actions SDK provides a file-based representation of your Actions project. This lets you author NLU training data and conversational flows locally, as well as bulk import and export training data. We've also updated the CLI that accompanies the Actions SDK, so you can build and manage Actions projects completely with code, using your favorite source control and continuous integration tools.

Together, Actions Builder and Actions SDK create a seamless, consolidated development experience. No matter what tool you start with, you can switch between them based on what works best for your workflow. For example, you can use Actions Builder to lay out conversational flows and provide NLU training data, Actions SDK to write fulfillment code, and the CLI to synchronize the two. These tools create an environment where all team members can contribute effectively and focus on what they do best: design and code.

New interaction model

A new, powerful interaction model lets you design conversations quickly and efficiently. Intents and scenes let you define robust NLU training data and behavior for specific conversational contexts. Using scenes as building blocks, you define active intents, declare context specific error handling, collect data through slot filling, and respond with prompts.

Scenes also separate conversational flow definitions from fulfillment logic, so you can reuse the same flows across multiple conversations. Transitions between scenes let you define when one conversational context switches to another. All your scenes and transitions describe a full conversational flow and all possible dialog turns.

You can express the entire interaction model with either the Actions Builder or Actions SDK. A typical way to develop is to use Actions Builder to view and edit your scenes and then use Actions SDK to sync changes to your local file system. This lets you version control your project, modify your project files, and build fulfillment in your favorite development environment.

Faster and smarter runtime engine

Under the hood, we also made a lot of improvements that your users will appreciate. We sped up the Assistant runtime engine, so users get faster responses and a smoother experience. We’ve also made the runtime engine smarter, so your Actions can understand users better with the same amount of training data.

Production ready platform

We've worked with Pretzel Labs and Galinha Pintadinha to test the capabilities of the new platform and to refine the interaction model and runtime engine improvements.

Pretzel Labs built Kids Court with Actions Builder, creating a full conversational flow with no code and adding fulfillment for advanced functionality.

"Having the combination of a visual layout with webhook blocks for code helps us collaborate clearly and more efficiently. Something I liked very much about this was the separation between the designer and the developers' parts, making it very intuitive to make design changes without affecting backend logic."
-- Adva Levin, founder of Pretzel Labs

Galinha Pintadinha runs one of the biggest YouTube channels and built one of the most popular Conversational Actions in their country. Their development team migrated to the new platform to optimize their workflow and simplify future Action development. Galinha Pintadinha's Actions now contain half the number of intents and have a radically simplified conversation tree. Using features like contextual error handling, they were able to improve the user experience and quality with little to no cost.

"Actions Builder is a robust and well designed toolbox for developing conversational apps. The concept of scenes and transitions helped us define the flow of our Action in a much more streamlined way."
-- Mário Neto, engineer at Galinha Pintadinha

Get started

To learn more about Actions Builder and the Actions SDK, and to start developing your next Action, check out our new developer resources. Our codelabs will walk you through using the new tooling and interaction model. Samples for all major features are also available, so you can start playing with code immediately. See the full set of documentation to start building today.

Stay tuned for more platform updates and happy coding!

Local Home SDK Ready for Actions

Posted by Dave Smith, Developer Advocate

Last year we introduced the developer preview of the Local Home SDK, a suite of local technologies to enhance your smart home integration with Google Assistant by adding local fulfillment. Since then, we've been hard at work incorporating your feedback and getting the experience ready for production. Starting today, we're exiting developer preview and allowing you to submit local fulfillment apps along with your smart home Action through the Actions console using Local Home SDK v1.0.

Adding local fulfillment for your smart home Action.

As part of the Smart Home platform, local fulfillment extends your smart home Action and routes commands to devices through the local network, benefitting users with reduced latency and higher reliability. If a local path cannot be successfully established, commands fall back to your cloud fulfillment.

The Local Home SDK v1.0 supports discovery of local devices over Wi-Fi using the mDNS, UDP, or UPnP protocols. Once a local path is established, apps can send commands to devices using TCP, UDP, or HTTP. For more details on the API changes in SDK v1.0, check out the changelog.

Multi-scan configurations

Along with this release, we've also improved the scan configurations in the Actions console based on your feedback. You can now enter multiple scan configurations for a given project, enabling your local fulfillment app to handle multiple device families that may be using different discovery protocols.

New multi-scan configuration UI.

The new interface groups scan attributes by protocol and highlights required fields, making it clearer how to properly configure your project.

Submit your app

The Local Home SDK configuration page in the Actions console now accepts JavaScript bundles for your local fulfillment app. When you are ready to publish your app, upload your JavaScript files to the console and submit your Action. For more details on submitting your smart home Action for review, see the smart home launch guide.

Upload your local fulfillment app.

We've updated the test suite for smart home to support local fulfillment as well. Be sure to self-test your local fulfillment before submitting your updated smart home Action for review. You must provide updated test suite results with your certification request when you submit.

Get started

To learn more about enhancing your smart home Actions with local fulfillment, check out the Introduction to Local Home SDK and the developer guide. Build your first local fulfillment app with the codelab, and go deeper with the samples and API reference.

We want to hear from you, so continue sharing your feedback with us through the issue tracker, and engage with other smart home developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!

New Analytics updates in Actions on Google Console

Posted by Mandy Chan, Developer Advocate

Have you built an Action for the Google Assistant and wondered how many people are using it? Or how many of your users are returning users? In this blog post, we will dive into 5 improvements that the Actions on Google Console team has made to give you more insight into how your Action is being used.

1. Multiple improvements for readability

We've updated three areas of the Actions Console for readability: Active Users Chart, Date Range Selection, and Filter Options. With these new updates, you can now better customize the data to analyze the usage of your Actions.

Active Users Chart

The labels at the top of the Active Users chart now read Daily, Weekly, and Monthly instead of the previous 1 Day, 7 Days, and 28 Days. We've also made the individual date labels at the bottom of the chart easier to read. You'll also notice a quick insight at the bottom of the chart that shows the number of unique users during this time period.

Before and after views of the Active Users chart

Date Range Selection

Previously, the date range selectors applied globally to all the charts. These selectors are now local to each chart, allowing you more control over how you view your data.

The date selector provides the following ranges:

  • Daily (last 7 days, last 30 days, last 90 days)
  • Weekly (last 4 weeks, last 8 weeks, last 12 weeks, last 24 weeks)
  • Monthly (last 3 months, last 6 months, last 12 months)
Date Selector

Filter Options

Previously when you added a filter, it was applied to all the charts on the page. Now, the filters apply only to the chart you're viewing. We've also enhanced the filtering options available for the ‘Surface’ filter, such as mobile devices, smart speakers, and smart displays.

Before:

Filter Options Before

After:

filter options after

The filter feature also lets you show data breakdowns over different dimensions. By default, the chart shows a single consolidated line, a result of all the filters applied. You can now select the ‘Show breakdown by’ option to see how the components of that data contribute to the totals based on the dimension you selected.

2. Introducing Retention metrics (New!)

A brand new addition to analytics is the retention metrics chart, which helps you understand how well your Action is retaining users. This chart shows you how many users you had in a given week and how many of them returned in each of the following weeks, for up to 5 weeks. The higher the percentage week after week, the better your retention.

When you hover over a cell in the chart, you can see the exact number of users who returned that week.

Retention Metrics

3. Improvements to Conversation Metrics

Finally, we've consolidated the conversation metrics into a single chart with separate tabs (‘Conversations’, ‘Messages’, ‘Avg Length’ and ‘Abort rate’) for easier comparison and better visibility of trends over time. We've also updated the chart labels and tooltips for better interpretation.

Before:

Conversation Metrics Before

After:

Conversation Metrics After

Next steps

To learn more about what each metric means, you can check out our documentation.

Try out these new improvements to see how your Actions are performing with your users, and let us know if there are other metrics you need to improve your Actions. Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!

Announcing Dynamic Modes and Toggles

Posted by Dave Smith, Developer Advocate

Modes and toggles let you define the configurable attributes of your device that may exist outside the standard grammar for device control traits (such as On/Off or Start/Stop). This feature is often used to express device-specific settings, such as the "load size" for a clothes washer or the "cooking mode" for an oven.

When we initially introduced modes and toggles, we supported a whitelisted set of names and synonyms to ensure the most accurate responses and best user experience. Over time, we continued to add support based on the community's requests, but getting these requests approved has been a common pain point for many of you.

Starting today, you no longer have to get the names and synonyms provided in your SYNC response approved. The Google Assistant dynamically determines the necessary grammar for users to invoke these traits. If you're not already familiar with modes and toggles, here is an example using these traits to add support for custom cooking modes to an oven.

{
  availableModes: [{
    name: 'cook',
    name_values: [{
      name_synonym: ['cook setting'],
      lang: 'en'
    }],
    settings: [{
      setting_name: 'pizza',
      setting_values: [{
        setting_synonym: ['pizza'],
        lang: 'en'
      }]
    }, {
      setting_name: 'pasta',
      setting_values: [{
        setting_synonym: ['pasta'],
        lang: 'en'
      }]
    }]
  }]
}

Example modes in SYNC response

Controlling a device using modes and toggles

We're excited to see what you build with these improved modes and toggles! For more details on using these features, see the updated guides for the Modes Trait and Toggles Trait. To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on.

International Women’s Day’19 featuring Actions on Google

Posted by Marisa Pareti, Rubi Martinez & Jessica Earley-Cha

In celebration of International Women’s Day, Women Techmakers hosted its sixth annual summit series to acknowledge and celebrate women in the tech industry, and to create a space for attendees to build community, hear from industry leaders, and learn new skills. The series featured 19 summits and 305 meetups across 87 countries.

This year, Women Techmakers partnered with the Actions on Google team to host technical workshops at these events so attendees could learn the fundamental concepts to develop Actions for the Google Assistant. Together, we created hundreds of new Actions for the Assistant. Check out some of the highlights of this year’s summit in the video below:

Technical Workshop Details

If you couldn’t attend any of our meetups this past year, we’ll cover our technical workshops now so you can start building for the Assistant from home. The technical workshop kicked off by introducing Actions on Google — the platform that enables developers to build Actions for the Google Assistant. Participants got hands-on experience building their first Action with the following features:

  • Users can start a conversation by explicitly calling the Action by name, which then responds with a greeting message.
  • Once in conversation, users are prompted to provide their favorite color. The Action parses the user’s input to extract the information it needs (namely, the color parameter).
  • If a color is provided, the Action processes the color parameter to auto-generate a “lucky number” to send back to the user and the conversation ends.
  • If no color is provided, the Action sends the user additional prompts until the parameter is extracted.
  • Users can explicitly leave the conversation at any time.

During Codelab level 1, participants learned how to parse the user’s input by using Dialogflow, a tool that uses machine learning and acts as a Natural Language Processor (NLP). Dialogflow processes what the user says and extracts important information from that input to identify how to fulfill the user’s request. Participants configured Dialogflow and connected it to their code’s back end using Dialogflow’s inline editor. In the editor, participants added their code and tested their Action in the Actions simulator.

In Codelab level 2, participants continued building on their Action, adding features such as:

  • Supports deep links to directly launch the user into certain points of dialog
  • Uses utilities provided by the Actions on Google platform to fetch the user’s name and address them personally
  • Responds with follow-up questions to further the conversation
  • Presents users with a rich visual response complete with sound effects

Instead of using Dialogflow’s inline editor, participants set up Cloud Functions for Firebase as their server.

You can learn more about developing your own Actions here. To support developers’ efforts in building great Actions for the Google Assistant, the team also has a developer community program.

Alex Eremia, a workshop attendee, reflected, “I think voice applications will have a huge impact on society both today and in the future. It will become a natural way we interact with the items around us.”

Across keynotes, fireside chats, and interactive workshops, Women Techmakers summit attendees enjoyed a mixture of technical and inspirational content. If you’re interested in learning more and getting involved, follow WTM on Twitter, check out the website, and sign up to become a member.

To learn more about Actions on Google and how to build for the Google Assistant, be sure to follow us on Twitter and join our Reddit community!

Developer Preview of Local Home SDK

Posted by Toni Klopfenstein

Recently at Google I/O, we gave you a sneak peek at our new Local Home SDK, a suite of local technologies to enhance your smart home integrations. Today, the SDK is live as a developer preview. We've been working hard testing the platform with our partners, including GE, LIFX, Philips Hue, TP-Link, and Wemo, and are excited to bring you these additional technologies for connecting smart devices to the Google Assistant.

Figure 1: The local execution path

This SDK enables developers to more deeply integrate their smart devices into the Assistant by building upon the existing Smart Home platform to create a local execution path via Google Home smart speakers and Nest smart displays. Developers can now run their business logic to control new and existing smart devices in JavaScript that executes on the smart speakers and displays, benefitting users with reduced latency and higher reliability.

How it works:

The SDK introduces two new intents, IDENTIFY and REACHABLE_DEVICES. The local home platform scans the user's home network via mDNS, UDP, or UPnP to discover any smart devices connected to the Assistant, and triggers IDENTIFY to verify that the device IDs match those returned from the familiar Smart Home API SYNC intent. If the detected device is a hub or bridge, REACHABLE_DEVICES is triggered and treats the hub as the proxy device for communicating locally. Once the local execution path from Google Home to a device is established, the device properties are updated in Home Graph.

Figure 2: The intents used for each execution path

When a user triggers a smart home Action that has a local execution path, the Assistant sends the EXECUTE intent to the Google Nest device rather than the developer's cloud fulfillment. The developer's JavaScript app is invoked, which then triggers the Local Home SDK to send control commands to the smart device over TCP, UDP socket, or HTTP/HTTPS requests. By defaulting to local execution rather than the cloud, users experience faster fulfillment of their requests. The execution requests can still be sent to the cloud path in case local execution fails. This redundancy minimizes the possibility of a failed request, and improves the overall user experience.

Additional features of the Local Home platform include:

  • Support for all Wi-Fi-enabled device types and device traits without two-factor authentication enabled.
  • No user action required to deploy Local Home benefits to all devices.
  • Easily configure discovery protocols and the hosted JavaScript app URL through the Actions console.

Figure 3: Local Home configuration tool in the Actions console

JavaScript apps can be tested on-device, allowing developers to employ familiar tools like Chrome Developer Console for debugging. Because the Local Home SDK works with the existing smart home framework, you can self-certify new apps through the Test suite for smart home as well.

Get started

To learn more about the Local Home platform, check out the API reference, and get started adding local execution with the developer guide and samples. For general information covering how you can connect smart devices to the Google Assistant, visit the Smart Home documentation, or check out the Local Technologies for the Smart Home talk from Google I/O this year.

You can send us any feedback you have through the bug tracker, or engage with the community at /r/GoogleAssistantDev. You can tag your posts with the flair local-home-sdk to help organize discussion.

Actions on Google at I/O 2019: New tools for web, mobile, and smart home developers

Posted by Chris Turkstra, Director, Actions on Google

People are using the Assistant every day to get things done more easily, creating lots of opportunities for developers on this quickly growing platform. And we’ve heard from many of you who want easier ways to connect your content across the Assistant.

At I/O, we’re announcing new solutions for Actions on Google that were built specifically with you in mind. Whether you build for web, mobile, or smart home, these new tools will help make your content and services available to people who want to use their voice to get things done.

Enhance your presence in Search and the Assistant

Help people with their “how to” questions

Every day, people turn to the internet to ask “how to” questions, like how to tie a tie, how to fix a faucet, or how to install a dog door. At I/O, we’re introducing support for How-to markup that lets you power richer and more helpful results in Search and the Assistant.

Adding How-to markup to your pages enables them to appear as rich results on mobile Search and on Google Assistant Smart Displays. This is an incredibly lightweight way for web developers and creators to connect with millions of people, giving them helpful step-by-step instructions with video, images, and text. You can start seeing How-to markup results on Search today, and your content will become available on Smart Displays in the coming months.

Here’s an example where DIY Network added markup to their existing content on the web to provide a more helpful, interactive result on both Google Search and the Assistant:

Mobile Search and Assistant results showing How-to markup for installing a dog door

For content creators who don’t maintain a website, we created a How-to Video Template: video creators can upload a simple spreadsheet with titles, text, and timestamps for their YouTube video, and we’ll handle the rest. This is a simple way to transform your existing how-to videos into interactive, step-by-step tutorials across Google Assistant Smart Displays and Android phones.

Check out how REI is getting extra mileage out of their YouTube video:

Laptop to Home Hub displaying How To Template for the REI compass

How-to Video Templates are in developer preview so you can start building today, and your content will become available on Android phones and Smart Displays in the coming months.

Easier engagement with your apps

Help people quickly get things done with App Actions

If you’re an app developer, people are turning to your apps every day to get things done. And we see people turn to the Assistant every day for a natural way to ask for help via voice. This offers an opportunity to use intents to create voice-based entry points from the Assistant to the right spot in your app.

Last year, we previewed App Actions, a simple mechanism for Android developers that uses intents from the Assistant to deep link to exactly the right spot in your app. At I/O, we are announcing the release of built-in intents for four new App Action categories: Health & Fitness, Finance and Banking, Ridesharing, and Food Ordering. Using these intents, you can integrate with the Assistant in no time.

If I wanted to track my run with Nike Run Club, I could just say “Hey Google, start my run in Nike Run Club” and the app will automatically start tracking my run. Or, let’s say I just finished dinner with my friend Chad and we're splitting the check. I can say "Hey Google, send $15 to Chad on PayPal" and the Assistant takes me right into PayPal; I log in, and all of my information is filled in – all I need to do is hit send.

Google Pixel showing App Actions Nike Run Club

Each of these integrations was completed in less than a day with the addition of an Actions.xml file that handles the mapping of intents between your app and the Actions platform. You can start building with these new intents today and deploy to Assistant users on Android in the coming months. This is a huge opportunity to offer your fans an effortless way to engage more frequently with your apps.

Build for devices in the home

Take advantage of Smart Displays’ interactive screens

Last year, we saw the introduction of the Smart Display as a new device category. The interactive visual surface opens up many new possibilities for developers.

Today, we’re introducing a developer preview of Interactive Canvas which lets you create full-screen experiences that combine the power of voice, visuals and touch. Canvas works across Smart Displays and Android phones, and it uses open web technologies you’re likely already familiar with, like HTML, CSS and Javascript.

Here’s an example of what you can build when you can leverage the full screen of a Smart Display:

Full screen of a Smart Display

Interactive Canvas is available for building games starting today, and we’ll be adding more categories soon. Visit the Actions Console to be one of the first to try it out.

Enable smart home devices to communicate locally

There are now more than 30,000 connected devices that work with the Assistant across 3,500 brands, and today, we’re excited to announce a new suite of local technologies that are specifically designed to create an even better smart home.

Introducing a preview of the Local Home SDK, which enables you to run your smart home code locally on Google Home speakers and Nest displays and use their radios to communicate locally with your smart devices. This reduces cloud hops and brings a new level of speed and reliability to the smart home. We’ve been working with some amazing partners including Philips, Wemo, TP-Link, and LIFX on testing this SDK and we’re excited to open it up for all developers next month.

Flowchart of Local Home SDK

Make setup more seamless

And, through the Local Home SDK, we’re providing users with a more seamless device setup experience, something we launched in partnership with GE smart lights this past October. So far, people have loved the ability to set up their lights in less than a minute in the Google Home app. We’re now scaling this to more partners, so go here if you’re interested.

Make your devices smart with Assistant Connect

Also, at CES earlier this year we previewed Google Assistant Connect which leverages the Local Home SDK. Assistant Connect enables smart home and appliance developers to easily add Assistant functionality into their devices at low cost. It does this by offloading a lot of work onto the Assistant to complete Actions, display content and respond to commands. We've been hard at work developing the platform along with the first products built on it by Anker, Leviton and Tile. We can't wait to show you more about Assistant Connect later this year.

New device types and traits

For those of you creating Actions for the smart home, we’re also releasing 16 new device types and three new device traits including LockUnlock, ArmDisarm, and Timer. Head over to our developer documentation for the full list of 38 device types and 18 device traits, and check out our sample project on GitHub to start building.

Get started with our new tools for all types of developers

Whether you’re looking to extend the reach of your content, drive more usage in your apps, or build custom Assistant-powered experiences, you now have more tools to do so.

If you want to learn more about how you can start building with these tools, check out our website to get started and our schedule so you can tune in to all of our developer talks that we’ll be hosting throughout the week.

We can’t wait to build together with you!