Tag Archives: Google I/O

Assistant Recap Google I/O 2021

Written by: Jessica Dene Earley-Cha, Mike Bifulco and Toni Klopfenstein, Developer Relations Engineers for Google Assistant

Now that we’ve packed up all of the virtual stages from Google I/O 2021, let's take a look at some of the highlights and new product announcements for App Actions, Conversational Actions, and Smart Home Actions. We also held a number of amazing live events and meetups during I/O, which we’ll summarize as well.

App Actions

App Actions lets developers extend their Android apps to Google Assistant. For our Android developers, we are happy to announce that App Actions is now part of the Android framework. With the introduction of the beta shortcuts.xml configuration resource and our latest Google Assistant plugin for Android Studio, App Actions is moving closer to the Android platform.

Capabilities

Capabilities is a new Android framework API that allows you to declare the types of actions users can take to launch your app and jump directly to performing a specific task. Assistant provides the first available concrete implementation of the capabilities API. You can utilize capabilities by creating shortcuts.xml resources and defining your capabilities there. Each capability specifies two things: how it is triggered and what to do when it is triggered. To add a capability, use built-in intents (BIIs), which are pre-built intents that provide all the natural language understanding needed to map the user's input to individual fields. When a BII is matched by the user’s speech, your capability triggers an Android intent that delivers the understood BII fields to your app, so you can determine what to show in response.
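
As a rough illustration, here is a minimal Kotlin sketch of the receiving side, assuming a capability whose intent deep-links into an activity and carries a BII field as a query parameter (the activity, URL, and parameter names here are hypothetical):

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class OrderActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // A capability match arrives as a plain Android intent; the BII fields
        // are carried in whatever deep-link URI your shortcuts.xml maps them to.
        val menuItem = intent.data?.getQueryParameter("menuItemName") // hypothetical parameter
        if (menuItem != null) {
            showOrderScreen(menuItem)
        }
    }

    private fun showOrderScreen(item: String) {
        // Navigate to the ordering flow, pre-filled with the requested item.
    }
}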

This framework integration is in the Beta release stage, and will eventually replace the original implementation of App Actions that uses actions.xml. If your app provides both the new shortcuts.xml and old actions.xml, the latter will be disregarded.

Voice shortcuts for Discovery

Google Assistant suggests relevant shortcuts to users and has made it easier for users to discover and add shortcuts by saying “Hey Google, shortcuts.”

Image of Google Assistant voice shortcuts

You can use the Google Shortcuts Integration library, currently in beta, to push an unlimited number of dynamic shortcuts to Google, making your shortcuts visible to users as voice shortcuts. Assistant can then suggest relevant shortcuts to users, making it more convenient for them to interact with your Android app.
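
With the integration library added to your project, pushing a shortcut looks roughly like the following Kotlin sketch, which binds a dynamic shortcut to the ORDER_MENU_ITEM built-in intent (the deep-link URL and item naming are hypothetical):

import android.content.Context
import android.content.Intent
import android.net.Uri
import androidx.core.content.pm.ShortcutInfoCompat
import androidx.core.content.pm.ShortcutManagerCompat

// Push a dynamic shortcut for an action the user just performed, so
// Assistant can surface it later as a suggested voice shortcut.
fun pushOrderShortcut(context: Context, itemName: String) {
    val shortcut = ShortcutInfoCompat.Builder(context, "order_$itemName")
        .setShortLabel(itemName)
        .setIntent(
            Intent(
                Intent.ACTION_VIEW,
                Uri.parse("https://example.com/order?menuItemName=$itemName") // hypothetical deep link
            )
        )
        // Bind the shortcut to a capability so Assistant can match it to speech.
        .addCapabilityBinding("actions.intent.ORDER_MENU_ITEM", "menuItem.name", listOf(itemName))
        .build()

    ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)
}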

In-App Promo SDK

Not only can Assistant suggest shortcuts; with the In-App Promo SDK, now in beta, you can proactively suggest shortcuts in your app for actions that the user can repeat with a voice command to Assistant. The SDK allows you to check whether the shortcut you want to suggest already exists for that user and, if not, prompt the user to create it.

Animation of the In-App Promo SDK

Google Assistant plugin for Android Studio

To support testing capabilities, we launched the Google Assistant plugin for Android Studio. It contains an updated App Action Test Tool that creates a preview of your App Action, so you can test an integration before publishing it to the Play Store.

New App Actions resources

Learn more with new or updated content:

Conversational Actions

During the What's New in Google Assistant keynote, Rebecca Nathenson, Director of Product for the Google Assistant Developer Platform, announced several upcoming updates and changes for Conversational Actions.

Updates to Interactive Canvas

Over the coming weeks, we’ll introduce new functionality to Interactive Canvas. Canvas developers will be able to manage intent fulfillment client-side, removing the need for intermediary webhooks in some cases. For use cases which require server-side fulfillment, like transactions and account linking, developers will be able to opt-in to server-side fulfillment as needed.

We’re also introducing a new function, outputTts(), which allows you to trigger Text to Speech client-side. This should help reduce latency for end users.

Additionally, there will be updates to the APIs available to get and set storage for both the home and individual users, allowing for client-side storage of user information. You’ll be able to persist user information within your web app, which was previously only available for access by webhook.


These new features for Interactive Canvas will be made available soon as part of a developer preview for Conversational Actions Developers. For more details on these new features, check out the preview page.

Updates to Transaction UX for Smart Displays

Also coming soon to Conversational Actions: we’re updating the workflow for completing transactions, allowing users to complete purchases from their Smart Displays by confirming the CVC code of their chosen payment method. Watch our demo video showing new transaction features on smart devices to get a feel for these changes.

Tips on Launching your Conversational Action

Make sure to catch our technical session Driving a successful launch for Conversational Actions to learn about some strategies for putting together a marketing team and go-to-market plan for releasing your Conversational Action.

AMA: Games on Google Assistant

If you’re interested in building Games for Google Assistant with Conversational Actions, you should check out the recording of our AMA, where Googlers answered questions from I/O attendees about designing, building, and launching games.


Smart Home Actions

The What's new in Smart Home keynote covered several updates for Smart Home Actions. Following our continued emphasis on quality smart home integrations with the updated policy launch, we added new features to help you build engaging, reliable Actions for your users.

Test Suite and Analytics

The updated Test Suite for Smart Home now supports automatic testing, without the use of TTS. Additionally, the Analytics dashboards have been expanded with more detailed logs and in-depth error reporting to help you more quickly identify any potential issues with your Action. For a deeper dive into these enhancements, try out the Debugging the Smart Home workshop. There are also two new debugging codelabs to help you get more familiar with using these tools to improve the quality of your Action.

Notifications

We expanded support for proactive notifications to include the device traits RunCycle and SensorState, so users can now be proactively notified about more kinds of device events. We also announced the release of follow-up responses, which enable your smart devices to notify users asynchronously when a device change succeeds or fails.

WebRTC

We added support for WebRTC to the CameraStream trait. Smart camera users can now benefit from lower latency and full-duplex talk between devices. As mentioned in the keynote, we will also be making updates to the other currently supported protocols for smart cameras.

Bluetooth Seamless Setup

To improve the onboarding experience, developers can now enable BLE (Bluetooth Low Energy) for device onboarding with Bluetooth Seamless Setup. Google Home and Nest devices can act as local hubs to provision and register nearby devices for any Action configured with local fulfillment.

Matter

Project CHIP has officially rebranded as Matter. Once the IP-based connectivity protocol officially launches, we will be supporting devices running the protocol. Watch the Getting started with Project CHIP tech session to learn more.

Ecosystem and Community

The women building voice AI and their role in the voice revolution

Voice AI is fundamentally changing how we interact with technology, and its future will be a product of the people who build it. Watch this session to hear about the talented women shaping the Voice AI field, including an interview with Lilian Rincon, Sr. Director of Product Management at Google. Host Leslie Garcia-Amaya also discusses strategies for achieving equal gender representation in Voice AI, an ambitious but essential goal.

AMA: How the Assistant Investment Program can help fund your startup

This "Ask Me Anything" session was hosted by the all-star team who runs the Google for Startups Accelerator: Voice AI. The team fielded questions from startups and investors around the world who are interested in building businesses based on voice technology. Check out the recording of this event here. The day after the AMA session, the 2021 cohort for the Voice AI accelerator had their demo day - you can catch the recording of their presentations here.

Image from the AMA titled: How the Assistant Investment Program can help fund your startup

Women in Voice Meetup

We connected with amazing women in Voice AI and discussed ways allies can help women in Voice be more successful while building a more inclusive ecosystem. The meetup was hosted by Leslie Garcia-Amaya, Jessica Dene Earley-Cha, Karina Alarcon, Mike Bifulco, Cathy Pearl, Toni Klopfenstein, Shikha Kapoor, and Walquiria Saad.

Smart home developer Meetups

One of the perks of I/O being virtual this year was the ability to connect with students, hobbyists, and developers around the globe to discuss the current state of Smart Home, as well as some of the upcoming features. We hosted 3 meetups for the APAC, Americas, and EMEA regions and gathered some great feedback from the community.

Assistant Google Developers Experts Meetup

Every year we host an Assistant Google Developer Expert meetup to connect and share knowledge. This year we were able to invite everyone who is interested in building for Google Assistant to network and connect with one another. At the end, several attendees came together at the Assistant Sandbox for a virtual photo!

Image of the Google I/O Assistant meetup

Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!

Android 12 Beta 2 Update

Posted by Dave Burke, VP of Engineering

Android 12 logo

Just a few weeks ago at Google I/O we unwrapped the first beta of Android 12, focusing on a new UI that adapts to you, improved performance, and privacy and security at the core. For developers, Android 12 gives you better tools to build delightful experiences for people on phones, laptops, tablets, wearables, TVs, and cars.

Today we’re releasing the second Beta of Android 12 for you to try. Beta 2 adds new privacy features like the Privacy Dashboard and continues our work of refining the release.

End-to-end there’s a lot for developers in Android 12 - from the redesigned UI and app widgets, to rich haptics, improved video and image quality, privacy features like approximate location, and much more. For a quick look at related Google I/O sessions, see Android 12 at Google I/O later in the post.

You can get Beta 2 today on your Pixel device by enrolling here for over-the-air updates, and if you previously enrolled for Beta 1, you’ll automatically get today’s update. Android 12 Beta is also available on select devices from several of our partners - learn more at android.com/beta.

Visit the Android 12 developer site for details on how to get started.

What’s new in Beta 2?

Beta 2 includes several of the new privacy features we talked about at Google I/O, as well as various feature updates to improve functionality, stability, and performance. Here are a few highlights.

Privacy Dashboard - We’ve added a Privacy Dashboard to give users better visibility over the data that apps are accessing. The dashboard offers a simple and clear timeline view of all recent app accesses to microphone, camera, and location. Users can also request details from an app on why it has accessed sensitive data, and developers can provide this information in an activity by handling a new system intent, ACTION_VIEW_PERMISSION_USAGE_FOR_PERIOD. We recommend that apps take advantage of this intent to proactively help users understand accesses in the given time period. To help you track these accesses in your code and any third-party libraries, we recommend using the Data Auditing APIs. More here.
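
To give a feel for the Data Auditing APIs mentioned above, here is a minimal Kotlin sketch, assuming API level 30+ and that logging each access is all you need:

import android.app.AppOpsManager
import android.app.AsyncNotedAppOp
import android.app.SyncNotedAppOp
import android.util.Log
import java.util.concurrent.Executor

// Register a data access auditing callback so every private-data access
// (mic, camera, location, ...) made by your app or its libraries can be
// attributed - useful for verifying what the Privacy Dashboard will show.
fun registerDataAccessAudit(appOps: AppOpsManager, executor: Executor) {
    appOps.setOnOpNotedCallback(executor, object : AppOpsManager.OnOpNotedCallback() {
        override fun onNoted(op: SyncNotedAppOp) {
            Log.d("DataAudit", "Sync access: ${op.op}")
        }

        override fun onSelfNoted(op: SyncNotedAppOp) {
            Log.d("DataAudit", "Self access: ${op.op}")
        }

        override fun onAsyncNoted(op: AsyncNotedAppOp) {
            Log.d("DataAudit", "Async access: ${op.op}, message: ${op.message}")
        }
    })
}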

Privacy Dashboard and location access gif

Privacy dashboard and location access timeline.

Mic and camera indicators - We’ve added indicators to the status bar to let users know when apps are using the device camera or microphone. Users can go to Quick Settings to see which apps are accessing their camera or microphone data and manage permissions if needed. For developers, we recommend reviewing your app’s uses of the microphone and camera and removing any that users would not expect. More here.

Microphone & camera toggles - We’ve added Quick Settings toggles on supported devices that make it easy for users to instantly disable app access to the microphone and camera. When the toggles are turned off, an app accessing these sensors will receive blank camera and audio feeds, and the system handles notifying the user to enable access to use the app’s features. Developers can use a new API, SensorPrivacyManager, to check whether toggles are supported on the device. The microphone and camera controls apply to all apps regardless of their platform targeting. More here.

Clipboard read notification - To give users more transparency on when apps are reading from the clipboard, Android 12 now displays a toast at the bottom of the screen each time an app calls getPrimaryClip(). Android won’t show the toast if the clipboard was copied from the same app. We recommend minimizing your app’s reads from the clipboard, and making sure that you only access the clipboard when it will be expected by users. More here.
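
One pattern for minimizing clipboard reads is to inspect the clip description, which does not count as a read, before calling getPrimaryClip(). A small Kotlin sketch, assuming your feature only cares about plain text:

import android.content.ClipDescription
import android.content.ClipboardManager

// Only the primaryClip access counts as a clipboard read (and may show
// the toast); checking the description first lets us skip reads that
// could never be useful to the feature.
fun readClipboardTextIfAny(clipboard: ClipboardManager): CharSequence? {
    val description = clipboard.primaryClipDescription ?: return null
    if (!description.hasMimeType(ClipDescription.MIMETYPE_TEXT_PLAIN)) return null
    return clipboard.primaryClip?.getItemAt(0)?.text
}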

More intuitive connectivity experience - To help users understand and manage their network connections better, we’re introducing a simpler and more intuitive connectivity experience across the Status Bar, Quick Settings, and Settings. The new Internet Panel helps users switch between their Internet providers and troubleshoot network connectivity issues more easily. Let us know what you think!

Quick Settings controls

New Internet controls through Quick Settings.

Visit the Android 12 developer site to learn more about all of the new features in Android 12.

Android 12 at Google I/O

At Google I/O we talked about everything that’s new in Android for developers - from Android 12 to Modern Android Development tools, new form factors like Wear and foldables, and Google Play. Here are the top 3 things to know about Android 12 at Google I/O.

#1 A new UI for Android - Android 12 brings the biggest design change in Android's history. We rethought the entire experience, from the colors to the shapes, light and motion, making Android 12 more expressive, dynamic, and personal, under a single design language called Material You.

#2 Performance - With Android 12, we made significant and deep investments in performance, from foundational system performance and battery life to foreground service changes, media quality and performance, and new tools to optimize apps.

#3 Privacy and security - In Android 12 we’re continuing to give users more transparency and control while keeping their devices and data secure.

For an overview of Android 12 for developers, watch this year’s What's new in Android talk, and check out Top 12 tips to get ready for Android 12 for an overview of where to test your app for compatibility. The full list of Android content at Google I/O is here.

App compatibility

With more early-adopter users and developers getting Android 12 beta on Pixel and other devices, now is the time to make sure your apps are ready!

To test your app for compatibility, install the published version from Google Play or another source onto a device or emulator running Android 12 Beta. Work through all of the app’s flows and watch for functional or UI issues. Review the behavior changes to focus your testing. There’s no need to change your app’s targetSdkVersion at this time, so when you’ve resolved any issues, publish an update as soon as possible for your Android 12 Beta users.

timeline for Android 12

With Beta 2, Android 12 is closing in on Platform Stability in August 2021. Starting then, app-facing system behaviors, SDK/NDK APIs, and non-SDK lists will be finalized. At that time, you should finish up your final compatibility testing and release a fully compatible version of your app, SDK, or library. More on the timeline for developers is here.

Get started with Android 12!

Today’s Beta release has everything you need to try the latest Android 12 features, test your apps, and give us feedback. Just enroll any supported Pixel device to get the update over-the-air. To get started developing, set up the Android 12 SDK.

You can also get Android 12 Beta 2 on devices from some of our top device-maker partners like Sharp. Visit android.com/beta to see the full list of partners participating in Android 12 Beta. For even broader testing, you can try Android 12 Beta on Android GSI images, and if you don’t have a device you can test on the Android Emulator.

Beta 2 is also available for Android TV, so you can check out the latest TV features and test your apps on the all-new Google TV experience. Try it out with the ADT-3 developer kit. More here.

For complete details on Android 12 Beta, visit the Android 12 developer site.

What’s new in Android TV (and Google TV!)

Posted by Ben Serridge, Director of Product Management - TV Platforms and Dan Aharon, Product Manager

Android TV

Today at Google I/O 2021, we announced a significant milestone for our team: we have over 80 million monthly active devices on Android TV OS, with more than 80% growth in the US alone. We would not be here without the hard work of the developer community, so a huge and heartfelt thank you to you all.

Android TV OS is the operating system that powers a number of devices around the world, including the new Google TV experience launched last fall. Google TV has generated a lot of excitement from consumers, developers, and industry partners alike, offering a content-forward TV experience that helps users discover more of the movies and shows they love. Google TV is available on streaming devices like the Chromecast with Google TV, smart TVs from Sony (and soon TCL!), and as an app on Android devices. Check out this presentation on how to get your app ready for Google TV.

Our goal is to always enable you to build better and more engaging experiences on Android TV OS. One example of this is the widely utilized Watch Next API, which increases app re-engagement by ~30% in certain cases¹. Well over 100 major media partners are already using the Watch Next API, and you can learn more about how to add it to your app here.
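
One way to publish a "continue watching" entry is through the androidx.tvprovider support library; here is a hedged Kotlin sketch (titles and URIs are placeholders):

import android.content.Context
import android.net.Uri
import androidx.tvprovider.media.tv.TvContractCompat
import androidx.tvprovider.media.tv.WatchNextProgram

// Insert a "continue watching" entry so the user can resume your content
// straight from the home screen.
fun addToWatchNext(context: Context, title: String, playbackUri: Uri): Uri? {
    val program = WatchNextProgram.Builder()
        .setType(TvContractCompat.WatchNextPrograms.TYPE_TV_EPISODE)
        .setWatchNextType(TvContractCompat.WatchNextPrograms.WATCH_NEXT_TYPE_CONTINUE)
        .setTitle(title)
        .setLastEngagementTimeUtcMillis(System.currentTimeMillis())
        .setIntentUri(playbackUri) // deep link back into playback
        .build()

    return context.contentResolver.insert(
        TvContractCompat.WatchNextPrograms.CONTENT_URI,
        program.toContentValues()
    )
}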

We are also announcing several new tools and helpful features to make developing for Android TV OS easier and enable you to create engaging experiences for your users. Some are already available and some will be available soon:

  • Cast Connect with Stream Transfer and Stream Expansion: Cast Connect allows users to cast from their phone/tablet or Chrome browser onto your app on Android TV. Stream Transfer and Stream Expansion allow users to transfer media to other devices and/or play audio on multiple devices.
  • Emulator updates: To help you make your app work better on Google TV without requiring new hardware, we are now making our first Google TV Emulator available, running on Android 11. There will also be an Android 11 image with the traditional Android TV experience. You can now also use a remote that more closely mimics TV remotes directly within the Emulator.
  • Firebase Test Lab: Firebase Test Lab runs millions of tests every week on behalf of developers. Following requests from developers, we are excited to share that Firebase Test Lab is adding Android TV support. Firebase Test Lab Virtual Devices run your app in the cloud on Android TV emulators and allow you to scale your test across hundreds or thousands of virtual devices. Physical Devices will be coming soon.
  • Android 12 Beta 1: We are making the Android 12 Beta 1 available for TV on ADT-3 today. With this release the developer community will be able to take advantage of many of the changes and improvements coming with Android 12. We encourage you to try it and provide us with feedback.

Thank you for your continued support of the Android TV OS platform. The future of TV is bright and we can’t wait to see what you build next!

¹ Average gain in number of days active in the app in a 28-day period amongst app 28DAUs, based on 3 apps analyzed during the 11/2020 - 2/2021 period.

Google releases source code for Google I/O 2019 for Android

Posted by Takeshi Hagikura, Developer Programs Engineer

Today we're releasing the source code for the official Google I/O 2019 Android app.

This year's app substantially modified existing functionality and added several new features. In this post, we’ll highlight several notable changes.

Android Q out of the box

  • Gesture navigation

Android Q introduced an option for fully gestural navigation, allowing the user to navigate back and to the home screen using only gestures. To support gesture navigation, app developers need to do two things:

  1. Extend app content to draw edge-to-edge
  2. Handle any conflicting app gestures

The Google I/O 2019 app was one of the first apps to fully support gestural navigation. For more details, check out this series of blog posts about gesture navigation and the commit in the Google I/O app repository that extended the content to draw edge-to-edge.

Gesture navigation navigating back and to the home screen

  • Dark theme

Another new feature introduced with Android Q was the new system Dark theme that applies to both the Android system UI and apps running on Android devices. Dark theme brings many benefits to developers, including being able to reduce power usage and improve visibility for users with low vision and those who are sensitive to bright light.

To support the dark theme, you must set the app’s theme to inherit from a DayNight theme.

<style name="AppTheme" parent="Theme.AppCompat.DayNight">
OR
<style name="AppTheme" parent="Theme.MaterialComponents.DayNight">


You also need to avoid hard-coded colors and icons. Use theme attributes (such as ?android:attr/textColorPrimary) or night-qualified resources (such as colors defined in both res/values/colors.xml and res/values-night/colors.xml) instead. Check out the Google I/O talk about Dark Theme & Gesture Navigation for more details, or the series of commits (1, 2, 3) in the Google I/O 2019 app repository for how we implemented the dark theme in a real app.
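
For example, a color can be resolved from the current theme at runtime rather than hard-coded; a minimal Kotlin sketch:

import android.content.Context
import android.util.TypedValue
import androidx.core.content.ContextCompat

// Resolve a color through the theme instead of hard-coding it, so the
// value automatically flips between the light and dark variants.
fun themeTextColorPrimary(context: Context): Int {
    val value = TypedValue()
    context.theme.resolveAttribute(android.R.attr.textColorPrimary, value, true)
    return if (value.resourceId != 0) {
        ContextCompat.getColor(context, value.resourceId)
    } else {
        value.data // the attribute resolved directly to a color value
    }
}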

Schedule UI in dark theme

Improved schedule screen

In 2018, we adopted a tabbed interface for the schedule UI with horizontal swiping, where each tab represented a conference day. In 2019, we changed the UI to address some usability and performance problems. For example, the views in all tabs were rendered at the same time when the schedule UI became visible, which caused a noticeable UI slowdown, especially on low-end devices.

The new schedule UI is a single stream, allowing the app to render only visible content and users to easily jump to another conference day by choosing a day at the top of the UI. Check out the series of commits (1, 2) for how we revamped the schedule UI.

This year’s schedule UI jumping to another conference day

Navigation component

We introduced Navigation component to simplify this year’s app into a Single Activity app and observed the following benefits:

  • Being able to see all the transitions at a glance in the navigation editor, which simplified launching Session Details and the Map from launch actions
  • Removing boilerplate code for handling Up and Back navigation
  • Statically typed arguments between fragments, via the Safe Args Gradle plugin (see the sketch below)

Check out the getting started guide for how you can start introducing the Navigation component in your app and the series of commits (1, 2, 3, 4) in the Google I/O 2019 app repository for the usage in a real app.
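
As a sketch of the Safe Args point above: navigation with a generated directions class is statically typed, so a wrong argument type fails at compile time. The class and action names below are hypothetical stand-ins for what the plugin generates from your navigation graph:

import androidx.fragment.app.Fragment
import androidx.navigation.fragment.findNavController

class ScheduleFragment : Fragment() {
    fun openSessionDetail(sessionId: String) {
        // ScheduleFragmentDirections.toSessionDetail() is generated by the
        // Safe Args Gradle plugin; the argument is typed as a String.
        val directions = ScheduleFragmentDirections.toSessionDetail(sessionId)
        findNavController().navigate(directions)
    }
}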

All transitions in the navigation editor

Full Text Search with Room

For this year’s app we added a search feature for users to quickly find sessions, speakers, and codelabs. To accomplish this, we used the Full Text Search feature of the Room Jetpack component. Whenever the conference data is fetched from the server, we update the session, speaker, and codelab data in the Room tables, which have corresponding FTS mapping tables. When a user starts typing in the search box, the search term is used to query the session title and description, speaker names, and codelab title. The search results are shown almost instantly, which allows the search results to be updated with each character typed in the search field. The user can then tap on a search result to navigate to see the details on the session, speaker, or codelab. Check out the series of commits (1, 2, 3, 4) for how we achieved the Full Text Search feature.
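
A simplified Kotlin sketch of the Room FTS setup (the entity and table names here are illustrative, not the app's actual schema):

import androidx.room.Dao
import androidx.room.Entity
import androidx.room.Fts4
import androidx.room.PrimaryKey
import androidx.room.Query

@Entity(tableName = "sessions")
data class Session(
    @PrimaryKey val id: String,
    val title: String,
    val description: String
)

// Room keeps this FTS mapping table in sync with the content entity.
@Fts4(contentEntity = Session::class)
@Entity(tableName = "sessionsFts")
data class SessionFts(
    val title: String,
    val description: String
)

@Dao
interface SessionDao {
    // MATCH runs the full-text query; joining on rowid maps the hits
    // back to full session rows.
    @Query(
        """
        SELECT sessions.* FROM sessions
        JOIN sessionsFts ON sessions.rowid = sessionsFts.rowid
        WHERE sessionsFts MATCH :query
        """
    )
    fun searchSessions(query: String): List<Session>
}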

Searching for a session and a speaker

Lots of improvements

These were the biggest changes we made to the app, but we improved a lot of little things as well. We added the new Home UI, which surfaces time-relevant information to the user during the conference, and the Codelab UI, which gives users more information about codelabs at I/O and how to participate in them.

Home UI and Codelabs UI

We also introduced Firebase Remote Config to toggle the visibility of each feature by updating boolean values in the Remote Config console, without shipping an app update, and removed the hard-coded values that were used to represent the start and end time of each event in the Agenda UI.
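
Feature gating with Remote Config looks roughly like this Kotlin sketch (the "search_enabled" key is a hypothetical flag name):

import com.google.firebase.ktx.Firebase
import com.google.firebase.remoteconfig.ktx.remoteConfig

// Gate a feature behind a Remote Config boolean so it can be toggled
// server-side without publishing a new app version.
fun fetchFeatureFlags(onReady: (searchEnabled: Boolean) -> Unit) {
    val remoteConfig = Firebase.remoteConfig
    remoteConfig.setDefaultsAsync(mapOf("search_enabled" to false))
    remoteConfig.fetchAndActivate().addOnCompleteListener {
        onReady(remoteConfig.getBoolean("search_enabled"))
    }
}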

Go explore the code

If you’re interested, go check out the code and let us know what you think. If you have any questions or issues, please let us know via the issue tracker on GitHub.


Search at Google I/O 2019

Google I/O is our yearly developer conference where we have the pleasure of announcing some exciting new Search-related features and capabilities. A good place to start is Google Search: State of the Union, which explains how to take advantage of the latest capabilities in Google Search:

We also gave more details on how JavaScript and Google Search work together and what you can do to make sure your JavaScript site performs well in Search.

Try out new features today

Here are some of the new features, codelabs, and documentation that you can try out today:
The Google I/O sign at Shoreline Amphitheatre at Mountain View, CA

Be among the first to test new features

Your help is invaluable to making sure our products work for everyone. We shared some new features that we're still testing and would love your feedback and participation.
A large crowd at Google I/O

Learn more about what's coming soon

I/O is a place where we get to showcase new Search features, so we're excited to give you a heads up on what's next on the horizon:
Two people posing for a photo at Google I/O, forming a heart with their arms

We hope these cool announcements help & inspire you to create even better websites that work well in Search. Should you have any questions, feel free to post in our webmaster help forums, contact us on Twitter, or reach out to us at any of the next events we're at.

Flutter and Chrome OS: Better Together

Posted by the Flutter and Chrome OS teams

Chrome OS is the fast, simple, and secure operating system that powers Chromebooks, including the Google Pixelbook and millions of devices used by consumers and students every day. The latest Flutter release adds support for building beautiful, tailored Chrome OS applications, including rich support for keyboard and mouse, and tooling to ensure that your app runs well on a Chromebook. Furthermore, Chrome OS is a great developer workstation for building general-purpose Flutter apps, thanks to its support for developing and running Flutter apps locally on the same device.

Flutter is a great way to build Chrome OS apps

Since its inception, Flutter has shared many of the same principles as Chrome OS: productive, fast, and beautiful experiences. Flutter allows developers to build beautiful, fast UIs, while also providing a high degree of developer productivity, and a completely open-source engine, framework and tools. In short, it’s the ideal modern toolkit for building multi-platform apps, including apps for Chrome OS.

Flutter initially focused on providing a UI toolkit for building apps for mobile devices, which typically feature touch input and small screens. However, we’ve been building keyboard and mouse support into Flutter since before our 1.0 release last December. And today, we’re pleased to announce that Flutter for Chrome OS is now stronger with scroll wheel support, hover management, and better keyboard event support. In addition, Flutter has always been great at allowing you to build apps that run at any size (large screen or small), with seamless resizing, as shown here in the Chrome OS Best Practices Sample:

The Chrome OS best practices sample in action

The Chrome OS best practices sample in action

The Chrome OS Hello World sample is an app built with Flutter that is optimized for Chrome OS. This includes a responsive UI to showcase how to reposition items and have layouts that respond well to changes in size from mobile to desktop.

Because Chrome OS runs Android apps, targeting Android is the way to build Chrome OS apps. However, while building Chrome OS apps on Android has always been possible, as described in these guidelines, it’s often difficult to know whether your Android app is going to run well on Chrome OS. To help with that problem, today we are adding a new set of lint rules to the Flutter tooling to catch violations of the most important Chrome OS best practice guidelines:

The Flutter Chrome OS lint rules in action

The Flutter Chrome OS lint rules in action

Once you put these Chrome OS lint rules in place, you’ll quickly see any problems in your Android app that would hamper it when running on Chrome OS. To learn how to take advantage of these rules, see the linting docs for Flutter Chrome OS.

But all of that is just the beginning -- the Flutter tools allow you to develop and test your apps directly on Chrome OS as well.

Chrome OS is a great developer platform to build Flutter apps

No matter what platform you're targeting, Flutter has support for rich IDEs and programming tools like Android Studio and Visual Studio Code. Over the last year, Chrome OS has been building support for running the Linux version of these tools with the beta of Linux on Chrome OS (aka Crostini). And, because Chrome OS also supports Android natively, you can configure the Flutter tooling to run your Android apps directly without an emulator involved.

The Flutter development tools running on Chrome OS

The Flutter development tools running on Chrome OS

All of the great productivity of Flutter is available, including Stateful Hot Reload, seamless resizing, keyboard and mouse support, and so on. Recent improvements in Crostini, such as high DPI support, Crostini file system integration, easier adb, and so on, have made this experience even better! Of course, you don’t have to test against the Android container running on Chrome OS; you can also test against Android devices attached to your Chrome OS box. In short, Chrome OS is the ideal environment in which to develop and test your Flutter apps, especially when you’re targeting Chrome OS itself.

Customers love Flutter on Chrome OS

With its unique combination of simplicity, security, and capability, Chrome OS is an increasingly popular platform for enterprise applications. These apps often work with large quantities of data, whether it’s a chart, or a graph for visualization, or lists and forms for data entry. The support in Flutter for high quality graphics, large screen layout, and input features (like text selection, tab order and mousewheel), make it an ideal way to port mobile applications for the enterprise. One purveyor of such apps is AppTree, who use Flutter and Chrome OS to solve problems for their enterprise customers.

“Creating a Chrome OS version of our app took very little effort. In 10 minutes we tweaked a few values and now our users have access to our app on a whole new class of devices. This is a huge deal for our enterprise customers who have been wanting access to our app on Desktop devices.”
--Matthew Smith, CTO, AppTree Software

By using Flutter to target Chrome OS, AppTree was able to start with their existing Flutter mobile app and easily adapt it to take advantage of the capabilities of Chrome OS.

Try Flutter on Chrome OS today!

If you’d like to target Chrome OS with Flutter, you can do so today simply by installing the latest version of Flutter. If you’d like to run the Flutter development tools on Chrome OS, you can follow these instructions to get started fast. To see a real-world app built with Flutter that has been optimized for Chrome OS, check out the Developer Quest sample that the Flutter DevRel team launched at the 2019 Google I/O conference. And finally, don’t forget to try out the Flutter Chrome OS linting rules to make sure that your Chrome OS apps are following the most important practices.

Flutter and Chrome OS go great together. What are you going to build?

Actions on Google at I/O 2019: New tools for web, mobile, and smart home developers

Posted by Chris Turkstra, Director, Actions on Google

People are using the Assistant every day to get things done more easily, creating lots of opportunities for developers on this quickly growing platform. And we’ve heard from many of you who want easier ways to connect your content across the Assistant.

At I/O, we’re announcing new solutions for Actions on Google that were built specifically with you in mind. Whether you build for web, mobile, or smart home, these new tools will help make your content and services available to people who want to use their voice to get things done.

Enhance your presence in Search and the Assistant

Help people with their “how to” questions

Every day, people turn to the internet to ask “how to” questions, like how to tie a tie, how to fix a faucet, or how to install a dog door. At I/O, we’re introducing support for How-to markup that lets you power richer and more helpful results in Search and the Assistant.

Adding How-to markup to your pages will enable the page to appear as a rich result on mobile Search and on Google Assistant Smart Displays. This is an incredibly lightweight way for web developers and creators to connect with millions of people, giving them helpful step-by-step instructions with video, images and text. You can start seeing How-to markup results on Search today, and your content will become available on the Smart Displays in the coming months.

Here’s an example where DIY Network added markup to their existing content on the web to provide a more helpful, interactive result on both Google Search and the Assistant:

Mobile Search screenshot showing how to install a dog door How-to Markup of how to install a dog door

For content creators that don’t maintain a website, we created a How-to Video Template where video creators can upload a simple spreadsheet with titles, text and timestamps for their YouTube video, and we’ll handle the rest. This is a simple way to transform your existing how-to videos into interactive, step-by-step tutorials across Google Assistant Smart Displays and Android phones.

Check out how REI is getting extra mileage out of their YouTube video:

Laptop to Home Hub displaying How To Template for the REI compass

How-to Video Templates are in developer preview so you can start building today, and your content will become available on Android phones and Smart Displays in the coming months.

Easier engagement with your apps

Help people quickly get things done with App Actions

If you’re an app developer, people are turning to your apps every day to get things done. And we see people turn to the Assistant every day for a natural way to ask for help via voice. This offers an opportunity to use intents to create voice-based entry points from the Assistant to the right spot in your app.

Last year, we previewed App Actions, a simple mechanism for Android developers that uses intents from the Assistant to deep link to exactly the right spot in your app. At I/O, we are announcing the release of built-in intents for four new App Action categories: Health & Fitness, Finance and Banking, Ridesharing, and Food Ordering. Using these intents, you can integrate with the Assistant in no time.

If I wanted to track my run with Nike Run Club, I could just say “Hey Google, start my run in Nike Run Club” and the app will automatically start tracking my run. Or, let’s say I just finished dinner with my friend Chad and we're splitting the check. I can say "Hey Google, send $15 to Chad on PayPal" and the Assistant takes me right into PayPal, I log in, and all of my information is filled in – all I need to do is hit send.

Google Pixel showing App Actions Nike Run Club

Each of these integrations was completed in less than a day with the addition of an Actions.xml file that handles the mapping of intents between your app and the Actions platform. You can start building with these new intents today and deploy to Assistant users on Android in the coming months. This is a huge opportunity to offer your fans an effortless way to engage more frequently with your apps.

Build for devices in the home

Take advantage of Smart Displays’ interactive screens

Last year, we saw the introduction of the Smart Display as a new device category. The interactive visual surface opens up many new possibilities for developers.

Today, we’re introducing a developer preview of Interactive Canvas, which lets you create full-screen experiences that combine the power of voice, visuals, and touch. Canvas works across Smart Displays and Android phones, and it uses open web technologies you’re likely already familiar with, like HTML, CSS, and JavaScript.

Here’s an example of what you can build when you can leverage the full screen of a Smart Display:

Full screen of a Smart Display

Interactive Canvas is available for building games starting today, and we’ll be adding more categories soon. Visit the Actions Console to be one of the first to try it out.

Enable smart home devices to communicate locally

There are now more than 30,000 connected devices that work with the Assistant across 3,500 brands, and today, we’re excited to announce a new suite of local technologies that are specifically designed to create an even better smart home.

Introducing a preview of the Local Home SDK, which enables you to run your smart home code locally on Google Home speakers and Nest displays and use their radios to communicate locally with your smart devices. This reduces cloud hops and brings a new level of speed and reliability to the smart home. We’ve been working with some amazing partners including Philips, Wemo, TP-Link, and LIFX on testing this SDK, and we’re excited to open it up for all developers next month.

Flowchart of Local Home SDK

Make setup more seamless

And, through the Local Home SDK, we’re making device setup more seamless, something we launched in partnership with GE smart lights this past October. So far, people have loved the ability to set up their lights in less than a minute in the Google Home app. We’re now scaling this to more partners, so go here if you’re interested.

Make your devices smart with Assistant Connect

Also, at CES earlier this year we previewed Google Assistant Connect which leverages the Local Home SDK. Assistant Connect enables smart home and appliance developers to easily add Assistant functionality into their devices at low cost. It does this by offloading a lot of work onto the Assistant to complete Actions, display content and respond to commands. We've been hard at work developing the platform along with the first products built on it by Anker, Leviton and Tile. We can't wait to show you more about Assistant Connect later this year.

New device types and traits

For those of you creating Actions for the smart home, we’re also releasing 16 new device types and three new device traits including LockUnlock, ArmDisarm, and Timer. Head over to our developer documentation for the full list of 38 device types and 18 device traits, and check out our sample project on GitHub to start building.

Get started with our new tools for all types of developers

Whether you’re looking to extend the reach of your content, drive more usage in your apps, or build custom Assistant-powered experiences, you now have more tools to do so.

If you want to learn more about how you can start building with these tools, check out our website to get started and our schedule so you can tune in to all of our developer talks that we’ll be hosting throughout the week.

We can’t wait to build together with you!

Google I/O 2019 – What sessions should SEOs and webmasters watch?

Google I/O 2019 is starting tomorrow and will run for 3 days, until Thursday. Google I/O is our yearly developers festival, where product announcements are made, new APIs and frameworks are introduced, and Product Managers present the latest from Google to an audience of 7,000+ developers who fly to California.

However, you don't have to physically attend the event to take advantage of this once-a-year opportunity: many talks are live streamed on YouTube for anyone to watch. Browse the full schedule of events, including a list of talks that we think will be interesting for webmasters to watch (all talks are in English). All the links shared below will bring you to pages with more details about each talk, and links to watch the sessions will display on the day of each event. All times are Pacific Time (California time).



This list is only a small part of the agenda that we think is useful to webmasters and SEOs. There are many more sessions that you could find interesting! To learn about those other talks, check out the full list of “web” sessions, design sessions, Cloud sessions, machine learning sessions, and more. Use the filtering function to toggle the sessions on and off.

We hope you can make the time to watch the talks online and participate in the excitement of I/O! The videos will also be available on YouTube after the event, in case you can't tune in live.

Posted by Vincent Courson, Search Outreach Specialist

Check out the Google Assistant talks at I/O 2019

Posted by Mary Chen, Strategy Lead, Actions on Google

This year at Google I/O, the Actions on Google team is sharing new ways developers of all types can use the Assistant to help users get things done. Whether you’re making Android apps, websites, web content, Actions, or IoT devices, you’ll see how the Assistant can help you engage with users in natural and conversational ways.

Tune in to our announcements during the developer keynote, and then dive deeper with our technical talks. We listed the talks out below by area of interest. Make sure to bookmark them and reserve your seat if you’re attending live, or check back for livestream details if you’re joining us online.


For anyone new to building for the Google Assistant


For Android app developers


For webmasters, web developers, and content creators


For smart home developers


For anyone building an Action from scratch


For insight and beyond


In addition to these sessions, stay tuned for interactive demos and codelabs that you can try at I/O and at home. Follow @ActionsOnGoogle for updates and highlights before, during, and after the festivities.

See you soon!

Google Search at I/O 2018

With the eleventh annual Google I/O wrapped up, it’s a great time to reflect on some of the highlights.

What we did at I/O


The event was a wonderful way to meet many great people from various communities across the globe, exchange ideas, and gather feedback. Besides many great web sessions, codelabs, and office hours, we shared a few things with the community in two sessions specific to Search:




The sessions included the launch of JavaScript error reporting in the Mobile Friendly Test tool, dynamic rendering (we will discuss this in more detail in a future post), and an explanation of how CMSes can use the Indexing and Search Console APIs to provide users with insights. For example, Wix lets their users submit their homepage to the index and see it in Search results instantly, and Squarespace created a Google Search keywords report to help webmasters understand what prospective users search for.

During the event, we also presented the new Search Console in the Sandbox area for people to try and were happy to get a lot of positive feedback, from people being excited about the AMP Status report to others exploring how to improve their content for Search.

Hands-on codelabs, case studies and more


We presented the Structured Data Codelab that walks you through adding and testing structured data. We were really happy to see that it ended up being one of the top 20 codelabs by completions at I/O. If you want to learn more about the benefits of using Structured Data, check out our case studies.



During the in-person office hours we saw a lot of interest around HTTPS, mobile-first indexing, AMP, and many other topics. The in-person Office Hours were a wonderful addition to our monthly Webmaster Office Hours hangout. The questions and comments will help us adjust our documentation and tools by making them clearer and easier to use for everyone.

Highlights and key takeaways


We also repeated a few key points that web developers should have an eye on when building websites, such as:


  • Indexing and rendering don’t happen at the same time. We may defer the rendering to a later point in time.
  • Make sure the content you want in Search has metadata, correct HTTP statuses, and the intended canonical tag.
  • Hash-based routing (URLs with "#") should be deprecated in favour of the JavaScript History API in Single Page Apps.
  • Links should have an href attribute pointing to a URL, so Googlebot can follow the links properly.

Make sure to watch this talk for more on indexing, dynamic rendering, and troubleshooting your site. If you want to learn more about things to do as a CMS developer or theme author, or about Structured Data, watch this talk.

We were excited to meet some of you at I/O as well as the global I/O extended events and share the latest developments in Search. To stay in touch, join the Webmaster Forum or follow us on Twitter, Google+, and YouTube.