Tag Archives: developers

3 new tools to help improve your Apps Script development and management experience



Apps Script has come a long way since we first launched scripting with Google Sheets. Now, Apps Script supports more than 5 million weekly active scripts that are integrated with a host of G Suite apps, and more than 1 billion daily executions.

As developers increasingly rely on Apps Script for mission-critical enterprise applications, we've redoubled our efforts to improve its power, reliability and operational monitoring, like our recently announced integration with Stackdriver for logging and error reporting. Today, we’re providing three new tools to help further improve your workflows and manage Apps Script projects:

  1. Apps Script dashboard, to help you manage, debug and monitor all of your projects in one place. 
  2. Apps Script API, so you can programmatically manage Apps Script source files, versions and deployments. 
  3. Apps Script Command Line Interface, for easy access to Apps Script API functionality from your terminal and shell scripts. 

Apps Script dashboard 

Over the next few weeks we’ll be making a new dashboard available to help you manage, debug and monitor all of your Apps Script projects from one place.
In this new dashboard—available at script.google.com—you will be able to:
  • View and search all of your projects. 
  • Monitor the health and usage of projects you care about. 
  • View details about individual projects. 
  • View a log of project executions and terminate long-running executions. 
Check out the documentation for more detail on the dashboard. If you encounter any issues, please use the feedback link in the left column of the new dashboard or file a bug.

Apps Script API 

The new Apps Script dashboard is built on top of a powerful new Apps Script API which replaces and extends the Execution API. This new Apps Script API provides a RESTful interface for developers to create, manage, deploy and execute their scripts from their preferred programming language. This gives you control to create development workflows and deployment strategies that fit your needs. With this new API, you can:
  • Create, read, update and delete script projects, source files and versions. 
  • Manage project deployments and entry points (web app, add-on, execution). 
  • Obtain project execution metrics and process data. 
  • Run script functions. 
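For a sense of what calling the API looks like, here is a minimal Node.js sketch that invokes the scripts.run method over REST. It is only a sketch: the script ID, function name and OAuth access token are placeholders, and the target script must be deployed for API execution with matching scopes.

// Minimal sketch (placeholders throughout): call a function in an Apps Script
// project through the Apps Script API's scripts.run method.
const https = require('https');

const body = JSON.stringify({
  function: 'myFunction',        // name of a function inside the script project (placeholder)
  parameters: ['some argument'], // arguments passed to that function
  devMode: true                  // run the latest saved code rather than a deployed version
});

const req = https.request({
  hostname: 'script.googleapis.com',
  path: '/v1/scripts/SCRIPT_ID:run',          // SCRIPT_ID is a placeholder
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ACCESS_TOKEN',   // OAuth 2.0 token with the script's scopes
    'Content-Type': 'application/json'
  }
}, (res) => {
  let data = '';
  res.on('data', (chunk) => data += chunk);
  res.on('end', () => console.log(JSON.parse(data)));
});

req.end(body);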
To learn more about the new Apps Script API, check out the documentation. If you encounter any issues please ask a question on Stack Overflow or file a bug.

Apps Script Command Line Interface 

Lastly, we’re pleased to introduce the first open-source client of the Apps Script API, a command-line interface tool called clasp (Command Line Apps Script Projects). clasp allows you to access the management functionality of the Apps Script API with intuitive terminal commands and is available as an open-source project on GitHub.
clasp allows developers to create, pull and push Apps Script projects, plus manage deployments and versions with terminal commands and shell scripts. clasp also allows you to write and maintain your Apps Script projects using the development tools of your choice including your native IDE, Git and GitHub.

To get started, try the clasp codelab. You can file issues or ask questions on the clasp project GitHub page.

We’re doubling down on powerful platforms like Apps Script. We hope these new additions help ease your development process.

Hangouts Meet now available in the Google Calendar API



Thousands of developers use Google Calendar API to read, create and modify Google Calendar events, and quite often, these events represent meetings happening not just face-to-face but also remotely. We introduced Hangouts Meet earlier this year to give users richer conference experiences, adding video call links and phone numbers for G Suite Enterprise. Starting today, we are making it possible to access all that conference information through the Google Calendar API. With this update, developers can now:
  • Read conference data associated with events 
  • Copy conference data from one event to another 
  • Request new conference generation for an event 
The API supports all Hangouts versions.

Reading conference data 

Conference information is stored in a new event attribute called conferenceData. conferenceData provides information about the solution that was used to create the conference (such as Hangouts Meet) and a set of entry points (like a video call link and phone number). Everything the user needs to know to join a conference call is there.
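For example, assuming the Google API JavaScript client (gapi) is loaded and authorized for the Calendar API, a minimal sketch for fetching an event and checking its conference data might look like this (the event ID is a placeholder):

// Minimal sketch: fetch an event and inspect its conference data.
// Assumes gapi.client.calendar is loaded and the user is authorized.
gapi.client.calendar.events.get({
  calendarId: "primary",
  eventId: "YOUR_EVENT_ID"   // placeholder
}).execute(function(event) {
  if (event.conferenceData) {
    console.log("Conference ID: " + event.conferenceData.conferenceId);
  } else {
    console.log("This event has no conference attached.");
  }
});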

To help you build even better user experiences, we also give you access to icons and user-readable labels that you can use in your products. In JSON format, conferenceData looks something like this (of course, your actual meeting IDs and phone numbers will vary):
"conferenceData": {
"entryPoints": [
{
"entryPointType": "video",
"uri": "https://meet.google.com/wix-pvpt-njj",
"label": "meet.google.com/wix-pvpt-njj"
},
{
"entryPointType": "more",
"uri": "https://tel.meet/wix-pvpt-njj?pin=1701789652855",
"pin": "1701789652855"
},
{
"entryPointType": "phone",
"uri": "tel:+44-20-3873-7654",
"label": "+44 20 3873 7654",
"pin": "6054226"
}
],
"conferenceSolution": {
"key": {
"type": "hangoutsMeet"
},
"name": "Hangouts Meet",
"iconUri": "https://lh5.googleusercontent.com/proxy/bWvYBOb7O03a7HK5iKNEAPoUNPEXH1CHZjuOkiqxHx8OtyVn9sZ6Ktl8hfqBNQUUbCDg6T2unnsHx7RSkCyhrKgHcdoosAW8POQJm_ZEvZU9ZfAE7mZIBGr_tDlF8Z_rSzXcjTffVXg3M46v"
},
"conferenceId": "wix-pvpt-njj",
"signature": "ADwwud9tLfjGQPpT7bdP8f3bq3DS"
}

And this, for example, is how you would retrieve and display the conference solution’s name and icon:
var solution = event.conferenceData.conferenceSolution;

var content = document.getElementById("content");
var text = document.createTextNode("Join " + solution.name);
var icon = document.createElement("img");
icon.src = solution.iconUri;

content.appendChild(icon);
content.appendChild(text);
The result of the code above will look like this in the user interface:

Copying conferences across events 

Sometimes displaying information is not enough—you might want to update it as well. This is especially true when scheduling multiple Calendar events with the same conference details. Say you’re developing a recruiting application that sets up separate events for the candidate and the interviewer; you want to protect the interviewer’s identity, but you also want to make sure all participants join the same conference call. To do this, you can now copy conference information from one event to another by simply writing to conferenceData.

To ensure that only existing Hangouts conferences are copied, and to help safeguard your users against malicious actors, copied conference data will always be verified by the Google Calendar API using the signature field, so don’t forget to copy it too.
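As a rough sketch of what that copy could look like with the JavaScript client (the event IDs are placeholders), you would read conferenceData from the source event and patch it onto the target, setting conferenceDataVersion to 1:

// Minimal sketch: copy conference details (including the signature) from one
// event to another. Event IDs are placeholders; gapi must be loaded and authorized.
gapi.client.calendar.events.get({
  calendarId: "primary",
  eventId: "SOURCE_EVENT_ID"
}).execute(function(sourceEvent) {
  gapi.client.calendar.events.patch({
    calendarId: "primary",
    eventId: "TARGET_EVENT_ID",
    resource: {conferenceData: sourceEvent.conferenceData},  // the signature comes along
    conferenceDataVersion: 1
  }).execute(function(targetEvent) {
    console.log("Conference copied to: %s", targetEvent.htmlLink);
  });
});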

Creating a new conference for an event 

Finally, the API allows developers to request conference creation. Simply provide a conferenceData.createRequest and set the conferenceDataVersion request parameter to 1 when creating or updating events. Conferences are created asynchronously, but you can always check the status of your request to let your users know what’s happening. For example, to request conference generation for an existing event (again, your request and event IDs will be different):
var eventPatch = {
  conferenceData: {
    createRequest: {requestId: "7qxalsvy0e"}
  }
};

gapi.client.calendar.events.patch({
  calendarId: "primary",
  eventId: "7cbh8rpc10lrc0ckih9tafss99",
  resource: eventPatch,
  sendNotifications: true,
  conferenceDataVersion: 1
}).execute(function(event) {
  console.log("Conference created for event: %s", event.htmlLink);
});
The immediate response to this call might not yet contain the fully populated conferenceData; this is indicated by a statusCode of pending:
"conferenceData": {
"createRequest": {
"requestId": "7qxalsvy0e",
"conferenceSolutionKey": {
"type": "hangoutsMeet"
},
"status": {
"statusCode": "pending"
}
}
}
Once the statusCode changes to success, the conference information is populated. Finally, if you are developing a Google Calendar client, you might also want to know beforehand which of the three Hangouts solutions (consumer Hangouts, classic Hangouts and Hangouts Meet) will be used to create the conference. You can get that information by checking allowedConferenceSolutionTypes in a calendar’s conferenceProperties.
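For instance, a minimal sketch for checking which conference solutions a calendar allows, again assuming an authorized gapi client, could be:

// Minimal sketch: inspect which conference solutions a calendar supports.
gapi.client.calendar.calendars.get({
  calendarId: "primary"
}).execute(function(calendar) {
  var allowed = calendar.conferenceProperties.allowedConferenceSolutionTypes;
  console.log("Allowed conference solutions: " + allowed.join(", "));
});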

To get started, check out the documentation page for managing conference data. We can’t wait to see what you build with these new features in the Google Calendar API.

The new maker toolkit: IoT, AI and Google Cloud Platform

Voice interaction is everywhere these days—via phones, TVs, laptops and smart home devices that use technology like the Google Assistant. And with the availability of maker-friendly offerings like Google AIY’s Voice Kit, the maker community has been getting in on the action and adding voice to their Internet of Things (IoT) projects.

As avid makers ourselves, we wrote an open-source, maker-friendly tutorial to show developers how to piggyback on a Google Assistant-enabled device (Google Home, Pixel, Voice Kit, etc.) and add voice to their own projects. We also created an example application to help you connect your project with GCP-hosted web and mobile applications, or tap into sophisticated AI frameworks that can provide more natural conversational flow.

Let’s take a look at what this tutorial, and our example application, can help you do.

Particle Photon: the brains of the operation

The Photon microcontroller from Particle is an easy-to-use IoT prototyping board that comes with onboard Wi-Fi and USB support, and is compatible with the popular Arduino ecosystem. It’s also a great choice for internet-enabled projects: every Photon gets its own webhook in Particle Cloud, and Particle provides a host of additional integration options with its web-based IDE, JavaScript SDK and command-line interface. Most importantly for the maker community, Particle Photons are super affordable, starting at just $19.

[Diagram: Voice Kit, Google Cloud Platform and Particle]

Connecting the Google Assistant and Photon: Actions on Google and Dialogflow

The Google Assistant (via Google Home, Pixel, Voice Kit, etc.) responds to your voice input, and the Photon (through Particle Cloud) reacts to your application’s requests (in this case, turning an LED on and off). But how do you tie the two together? Let’s take a look at all the moving parts:


  • Actions on Google is the developer platform for the Google Assistant. With Actions on Google, developers build apps to help answer specific queries and connect users to products and services. Users interact with apps for the Assistant through a conversational, natural-sounding back-and-forth exchange, and your Action passes those user requests on to your app.

  • Dialogflow (formerly API.AI) lets you build even more engaging voice and text-based conversational interfaces powered by AI, and sends out request data via a webhook.

  • A server (or service) running Node.js handles the resulting user queries.


Along with some sample applications, our guide includes a Dialogflow agent, which lets you parse queries and route actions back to users (by voice and/or text) or to other applications. Dialogflow provides a variety of interface options, from an easy-to-use web-based GUI to a robust Node.js-powered SDK for interacting with both your queries and the outside world. In addition, its powerful machine learning tools add intelligence and natural language processing. Your applications can learn queries and intents over time, exposing even more powerful options and providing better results along the way. (The recently announced Dialogflow Enterprise Edition offers greater flexibility and support to meet the needs of large-scale businesses.)
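To make the flow concrete, here is a rough fulfillment sketch rather than the tutorial's actual code. It assumes the Dialogflow v2 webhook format, an Express-style HTTP handler (as used by Cloud Functions), and a Particle device function named "led"; the device ID and access token are placeholders.

// Rough sketch of a Dialogflow fulfillment webhook that toggles an LED through
// Particle Cloud. The device ID, access token and "led" function are placeholders
// defined by your own Photon firmware and Particle account.
const https = require('https');
const querystring = require('querystring');

exports.fulfillment = (req, res) => {
  // Dialogflow v2 sends the matched intent in queryResult.intent.displayName.
  const intent = req.body.queryResult.intent.displayName;   // e.g. "led-on" / "led-off"
  const state = intent.endsWith('on') ? 'on' : 'off';

  // Call the cloud-exposed device function on the Photon via Particle Cloud.
  const body = querystring.stringify({
    access_token: 'PARTICLE_ACCESS_TOKEN',   // placeholder
    arg: state                               // argument passed to the device function
  });
  const request = https.request({
    hostname: 'api.particle.io',
    path: '/v1/devices/PARTICLE_DEVICE_ID/led',   // placeholder device ID and function name
    method: 'POST',
    headers: {'Content-Type': 'application/x-www-form-urlencoded'}
  }, () => {
    // Tell the Assistant (via Dialogflow) what happened.
    res.json({fulfillmentText: 'OK, turning the LED ' + state + '.'});
  });
  request.end(body);
};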


Backend infrastructure: GCP

It’s a no-brainer to build your IoT apps on a Google Cloud Platform (GCP) backend, as you can use a single Google account to sign into your voice device, create Actions on Google apps and Dialogflow agents, and host the web services. To help get you up and running, we developed two sample web applications based on different GCP technologies that you can use as inspiration when creating a voice-powered IoT app:


  • Cloud Functions for Firebase. If your goal is quick deployment and iteration, Cloud Functions for Firebase is a simple, low-cost and powerful option—even if you don’t have much server-side development experience. It integrates quickly and easily with the other tools used here. Dialogflow, for example, now allows you to drop Cloud Functions for Firebase code directly into its graphical user interface.

  • App Engine. For those of you with more development experience and/or curiosity, App Engine is just as easy to deploy and scale, but includes more options for integrations with your other applications, additional programming language/framework choices, and a host of third-party add-ons. App Engine is a great choice if you already have a Node.js application to which you want to add voice actions, you want to tie into more of Google’s machine learning services, or you want to get deeper into device connection and management.


Next steps

As makers, we’ve only just scratched the surface of what we can do with tools like IoT, AI and the cloud. Check out our full tutorials, and grab the code on GitHub. With these examples to build from, we hope we’ve made it easier for you to add voice powers to your maker project. For some extra inspiration, check out what other makers have built with the AIY Voice Kit. And for even more ways to add machine learning to your maker project, check out the AIY Vision Kit, which just went on pre-sale today.

We can’t wait to see what you build!

Source: Google Cloud


With Google Maps APIs, Toyota Europe keeps teen drivers safe and sound



Editor’s note: Today’s post is from Christophe Hardy, Toyota Motor Europe’s Manager of Social Business. He’ll explain how Toyota used Google Maps APIs to build an Android app to keep teen drivers safe.

It’s a milestone that teenagers celebrate and parents fear: getting that first driver’s license. For teens, a license means freedom and a gateway to adulthood. For parents, it means worrying about their kid’s safety, with no way to make sure they’re doing the right thing behind the wheel.

We know that the risk of motor vehicle crashes is higher among 16-19-year-olds than any other age group, and that speeding and using smartphones are two of the main causes. So as part of Toyota's efforts to eliminate accidents and fatalities, we worked with Molamil and MapsPeople to build Safe and Sound, an Android app for European teen drivers. It takes a lighthearted but effective approach to help young drivers stay focused on speed limits and the rules of the road, not on their cellphones. And it can be used by anyone, not just Toyota owners.

One way Safe and Sound combats speeding and distracted driving is by using music. Before parents turn over their car keys, parents and teens download and run the app to set it up. The app syncs with Spotify, and uses the Google Maps Roads API to monitor a teen’s driving behavior. If Safe and Sound determines the teen is speeding, it’ll override the teen’s music with a Spotify playlist specifically chosen by the parent—and the teen can’t turn it off. As any parent knows, parents and kids don’t always agree on music. And there’s nothing less cool to a teen than being forced to listen to folk ballads or ‘70s soft rock. (The embarrassment doubles if their friends are in the car.) The parents’ playlist turns off and switches back to the teen’s only when the teen drives at the speed limit.

The app also helps prevent distracted driving. When it detects the car is moving above nine miles an hour, it switches on a “do not disturb” mode that blocks social media notifications, incoming and outgoing texts, and phone calls. If the teen touches the phone, the app will detect that too, and play the parents' Spotify playlist until the teen removes his or her hand. At the end of the drive, Safe and Sound alerts parents to how many times their teen exceeded the speed limit or touched the phone. Parents can also tap a link in the app that displays the route the teen drove in Google Maps.

Google Maps provided the ideal platform for building Safe and Sound. It has accurate, up-to-date and comprehensive map data, including road speed limits. The documentation is great, which made using the Google Maps Roads API simple. It also scales to handle millions of users, an important consideration as we roll out the app to more of Europe.
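As a simple illustration of the kind of request involved (not Toyota's actual code), the Roads API exposes a speedLimits endpoint that returns posted limits for road segments; the coordinates and API key below are placeholders, and speed-limit data requires a suitably licensed key:

// Illustrative sketch only: query posted speed limits along a short path.
// The coordinates and API key are placeholders. Requires fetch (browser or Node 18+).
const path = '38.75807,-9.03741|38.6896537,-9.1770515';
const url = 'https://roads.googleapis.com/v1/speedLimits' +
            '?path=' + encodeURIComponent(path) +
            '&key=YOUR_API_KEY';

fetch(url)
  .then((response) => response.json())
  .then((data) => {
    data.speedLimits.forEach((limit) => {
      console.log(limit.placeId + ': ' + limit.speedLimit + ' ' + limit.units);
    });
  });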

Safe and Sound is currently available in English throughout the continent, with a Spanish version launching soon in Spain, and Dutch and French versions coming to Belgium. And we’re looking to localize Safe and Sound into even more languages.

We hope Safe and Sound helps keep more teens safe, and brings more parents peace of mind. Plus, there’s never been a better use for that playlist of yacht rock classics.

Committed to storage APIs, retiring Realtime API


We launched Google Realtime API in 2013 to help developers build collaborative apps using familiar JSON-based data models, while leaving the complexities of real-time synchronization to the API. Since then, we've developed other fast, flexible cloud-based storage solutions like Google Cloud SQL and Google Cloud Firestore. As a result, we’ve decided to deprecate Realtime API in favor of these new, powerful solutions.

We’re investing heavily in Google Cloud Platform, as well as Firebase—our mobile development platform—to help developers build scalable, performant applications. While these solutions aren't a direct analog to the Drive Realtime API, we're confident they can meet most of your needs:
  • Google Cloud SQL: Fully-managed database service that makes it easy to set up, maintain, manage, and administer your relational PostgreSQL and MySQL databases in the cloud. 
  • Firebase Realtime Database: Cloud-hosted NoSQL database that lets you store and sync data between your users in real-time. 
  • Cloud Firestore: We recently announced Cloud Firestore to help developers build responsive apps that work regardless of network latency or Internet connectivity. If you're curious about Firebase Realtime Database vs. Cloud Firestore, we've got you covered.
Existing Realtime API client applications will continue to work normally until December 11, 2018, but we are no longer accepting new clients of the API. After the API is decommissioned, to facilitate migration, we will continue to provide a mechanism for applications to access document contents as JSON.

More specific deprecation timelines 

We know developers and partners have come to rely on Realtime API and that migration may be a significant effort. We hope that the deprecation timelines summarized below allow for a smooth transition.


  • November 28, 2017: Realtime API is no longer available for new projects.[1] 
  • December 11, 2018: Realtime API documents become read-only, and attempts to modify document contents using the API fail. 
  • January 15, 2019: Realtime API is shut down, but a JSON export API remains available. 

[1] Projects which accessed the Realtime API prior to November 28, 2017 will continue to function as before. All other projects, including new projects, will be blocked from accessing the Realtime API.

Migration tips 

Applications using the Realtime API will need to migrate to another data store. Our migration guide provides instructions on how to export Realtime document data and also how that data can be imported into Google Cloud Firestore. After Realtime API is shut down, we will continue to provide a means for exporting Realtime document contents as JSON.
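As a very rough sketch of the shape of such a migration (the authoritative steps are in the migration guide; the file ID, access token and Firestore layout below are placeholders), you might export a document's contents as JSON over REST and write them into Cloud Firestore with the Admin SDK:

// Rough sketch only: export a Realtime document as JSON and store it in Cloud
// Firestore. FILE_ID and ACCESS_TOKEN are placeholders, and the Firestore layout
// is an arbitrary choice for illustration; follow the migration guide for the
// supported procedure.
const https = require('https');
const admin = require('firebase-admin');

admin.initializeApp();   // assumes application default credentials

function exportRealtimeDoc(fileId, accessToken) {
  return new Promise((resolve, reject) => {
    https.get({
      hostname: 'www.googleapis.com',
      path: '/drive/v2/files/' + fileId + '/realtime',
      headers: {'Authorization': 'Bearer ' + accessToken}
    }, (res) => {
      let data = '';
      res.on('data', (chunk) => data += chunk);
      res.on('end', () => resolve(JSON.parse(data)));
    }).on('error', reject);
  });
}

exportRealtimeDoc('FILE_ID', 'ACCESS_TOKEN')
  .then((contents) => admin.firestore()
      .collection('migratedRealtimeDocs')
      .doc('FILE_ID')
      .set({data: contents}))
  .then(() => console.log('Document migrated.'));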

Additional information and support 

You can read more about the deprecation in our documentation. If you have questions that aren’t answered there, see the support page for how to get help.

Getting your Android app ready for Autofill

Posted by Wojtek Kalicinski, Android Developer Advocate, Akshay Kannan, Product Manager for Android Authentication, and Felipe Leme, Software Engineer on Android Frameworks

Starting in Oreo, Autofill makes it easy for users to provide credit cards, logins, addresses, and other information to apps. Forms in your apps can now be filled automatically, and your users no longer have to remember complicated passwords or type the same bits of information more than once.

Users can choose from multiple Autofill services (similar to keyboards today). By default, we include Autofill with Google, but users can also select any third-party Autofill app of their choice. Users can manage this from Settings -> System -> Languages -> Advanced -> Autofill service.

What's available today

Today, Autofill with Google supports filling in credit cards, addresses, logins, names, and phone numbers. When logging in or creating an account for the first time, Autofill also allows users to save the new credentials to their account. If you use WebViews in your app, which many apps do for logins and other screens, your users can now also benefit from Autofill support, as long as they have Chrome 61 or later installed.

The Autofill API is open for anyone to implement a service. We are actively working with 1Password, Dashlane, Keeper, and LastPass to help them with their implementations towards becoming certified on Android. We will be certifying password managers and adding them to a curated section in the Play Store, which the "Add service" button in settings will link to. If you are a password manager and would like to be certified, please get in touch.

What you need to do as a developer

As an app developer, there are a few simple things you can do to take advantage of this new functionality and make sure that it works in your apps:

Test your app and annotate your views if needed

In many cases, Autofill may work in your app without any effort. But to ensure consistent behavior, we recommend providing explicit hints to tell the framework about the contents of your field. You can do this using either the android:autofillHints attribute or the setAutofillHints() method.

Similarly, with WebViews in your apps, you can use HTML Autocomplete Attributes to provide hints about fields. Autofill will work in WebViews as long as you have Chrome 61 or later installed on your device. Even if your app is using custom views, you can also define the metadata that allows autofill to work.

For views where Autofill does not make sense, such as a Captcha or a message compose box, you can explicitly mark the view as IMPORTANT_FOR_AUTOFILL_NO (or IMPORTANT_FOR_AUTOFILL_NO_EXCLUDE_DESCENDANTS in the root of a view hierarchy). Use this field responsibly, and remember that users can always bypass this by long pressing an EditText and selecting "Autofill" in the overflow menu.

Affiliate your website and mobile app

Autofill with Google can seamlessly share logins across websites and mobile apps ‒ passwords saved through Chrome can also be provided to native apps. But in order for this to work, as an app developer, you must explicitly declare the association between your website and your mobile app. This involves two steps:

Step 1: Host a JSON file at yourdomain.com/.well-known/assetlinks.json

If you've used technologies like App Links or Google Smart Lock before, you might have heard about the Digital Asset Links (DAL) file. It's a JSON file placed at a well-known location on your website that lets you make public, verifiable statements about other apps or websites.

You should follow the Smart Lock for Passwords guide for information about how to create and host the DAL file correctly on your server. Even though Smart Lock is a more advanced way of signing users into your app, our Autofill service uses the same infrastructure to verify app-website associations. What's more, because DAL files are public, third-party Autofill service developers can also use the association information to secure their implementations.
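For reference, a DAL file for credential sharing typically looks something like the snippet below; the domain, package name and certificate fingerprint are placeholders, and the Smart Lock documentation remains the authoritative source for the exact statements your app needs.

[
  {
    "relation": ["delegate_permission/common.get_login_creds"],
    "target": {"namespace": "web", "site": "https://yourdomain.com"}
  },
  {
    "relation": ["delegate_permission/common.get_login_creds"],
    "target": {
      "namespace": "android_app",
      "package_name": "com.example.yourapp",
      "sha256_cert_fingerprints": ["AA:BB:CC:..."]
    }
  }
]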

Step 2: Update your App's Manifest with the same information

Once again, follow the Smart Lock for Passwords guide to do this, under "Declare the association in the Android app."

You'll need to update your app's manifest file with an asset_statements resource, which links to the URL where your assetlinks.json file is hosted. Once that's done, you'll need to submit your updated app to the Play Store, and fill out the Affiliation Submission Form for the association to go live.

When using Android Studio 3.0, the App Links Assistant can generate all of this for you. When you open the DAL generator tool (Tools -> App Links Assistant -> Open Digital Asset Links File Generator), simply make sure you enable the new checkbox labeled "Support sharing credentials between the app and website".

Then, click on "Generate Digital Asset Links file", and copy the preview content to the DAL file hosted on your server and in your app. Please remember to verify that the selected domain names and certificates are correct.

Future work

It's still very early days for Autofill in Android. We are continuing to make some major investments going forward to improve the experience, whether you use Autofill with Google or a third party password manager.

Some of our key areas of investment include:

  1. Autofill with Google: We want to provide a great experience out of the box, so we include Autofill with Google with all Oreo devices. We're constantly improving our field detection and data quality, as well as expanding our support for saving more types of data.
  2. WebView support: We introduced initial support for filling WebViews in Chrome 61, and we'll be continuing to test, harden, and make improvements to this integration over time, so if your app uses WebViews you'll still be able to benefit from this functionality.
  3. Third party app support: We are working with the ecosystem to make sure that apps work as intended with the Autofill framework. We urge you as developers to give your app a spin on Android Oreo and make sure that things work as expected with Autofill enabled. For more info, see our full documentation on the Autofill Framework.

If you encounter any issues or have any suggestions for how we can make this better for you, please send us feedback.

Start building apps for the Indian Google Assistant

Whether you’re searching for a great restaurant for biryani, mapping your travels or helping the kids with their homework, your Google Assistant is always ready to help. You can ask about your day or your commute, explore your favorite topics, and get answers to hundreds of small and big questions during your day. But to be truly successful, your Google Assistant should be able to connect you across the apps and services in your life. So starting today, developers and companies can build apps to engage with Indian users through Actions on Google, the developer platform for the Google Assistant. And as a user, you’ll soon be able to access more of your favorite services and content straight through your Google Assistant.
For anyone who wants to build for the Assistant, resources such as developer tools, documentation and a simulator are available on the Actions on Google developer website, making it easy to create, test and deploy apps. Eligible developers will also be invited to join the Google Assistant Developer Community Program to get support for their efforts.


Now that we’re launching the ability for everyone to create apps on the Google Assistant, Indians will soon have easy and fast access to all types of apps. Once an app works with the Assistant, you can just tell your Assistant to connect you with the app of your choice with a simple voice command – whether it’s on eligible Android phones or iPhones. And the best thing? You won’t need to install anything extra, we’ll connect you straight with the app you’d like to interact with. For example, just say “Ok Google, talk to….” to fire up an app.


Stay tuned as local Indian apps roll out soon, and dive in today by trying one of the many apps already available.

We hope that this growing platform will give more Indians the help they need, at home or on the go – from the morning rush hour to the weekend unwind. Together with developers from Kashmir to Kanyakumari, we look forward to exploring and delivering these new possibilities for the Indian Google Assistant.


Posted by Brad Abrams, Group Product Manager, Google Assistant

Gmail add-ons framework now available to all developers



Email remains at the heart of how companies operate. That’s why earlier this year, we previewed Gmail Add-ons—a way to help businesses speed up workflows. Since then, we’ve seen partners build awesome applications, and beginning today, we’re extending the Gmail add-on preview to include all developers. Now anyone can start building a Gmail add-on.

Gmail Add-ons let you integrate your app into Gmail and extend Gmail to handle quick actions. They are built using native UI context cards that can include simple text dialogs, images, links, buttons and forms. The add-on appears when relevant, and the user is just a click away from your app's rich and integrated functionality.

Gmail Add-ons are easy to create. You only have to write code once for your add-on to work on both web and mobile, and you can choose from a rich palette of widgets to craft a custom UI. Create an add-on that contextually surfaces cards based on the content of a message. Check out this video to see how we created an add-on to collate email receipts and expedite expense reporting.

As the video shows, there are three components to the app’s core functionality. The first is getContextualAddOn()—the entry point for all Gmail Add-ons, where data is compiled to build the card and render it within the Gmail UI. Since the add-on processes expense reports from email receipts in your inbox, createExpensesCard() parses the relevant data from the message and presents it in a form so your users can confirm or update values before submitting. Finally, submitForm() takes the data and writes a new row in an “expenses” spreadsheet in Google Sheets, which you can edit and tweak, and submit to your boss for approval.
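As a simplified sketch of the entry-point piece only (the card contents and manifest wiring here are placeholders rather than the actual ExpenseIt code), a contextual trigger function might look like this:

// Simplified sketch of a contextual trigger (Apps Script). The manifest would
// name this function in gmail.contextualTriggers; the card contents here are
// placeholders rather than the real expense-parsing logic.
function getContextualAddOn(e) {
  // Grant the add-on short-lived access to the open message, then load it.
  GmailApp.setCurrentMessageAccessToken(e.messageMetadata.accessToken);
  var message = GmailApp.getMessageById(e.messageMetadata.messageId);

  var card = CardService.newCardBuilder()
      .setHeader(CardService.newCardHeader().setTitle('Expense: ' + message.getSubject()))
      .addSection(CardService.newCardSection()
          .addWidget(CardService.newTextParagraph()
              .setText('From: ' + message.getFrom())))
      .build();

  // A contextual trigger returns an array of cards to render in the add-on panel.
  return [card];
}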

Check out the documentation to get started with Gmail Add-ons, or if you want to see what it's like to build an add-on, go to the codelab to build ExpenseIt step-by-step. While you can't publish your add-on just yet, you can fill out this form to get notified when publishing is opened. We can’t wait to see what Gmail Add-ons you build!