Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Born in Detroit, Accelerated with Google

Posted by Ajeet Mirwani, Program Manager, Developer Relations

StockX is a Detroit-based tech leader focused on the large and growing online resale marketplace for sneakers, apparel, accessories, collectibles, and electronics. Its innovative marketplace enables users to anonymously buy and sell high-demand consumer products with stock market-like visibility. StockX employs over 800 people in more than 13 offices and authentication centers around the world, and facilitates sales in more than 200 countries and territories.

StockX has been selected for Google’s Late-Stage Accelerator, which offers specialized programs in the areas of tech, design, product, and people operations to enable high growth startups. This accelerator is built using the fundamentals of the Google for Startups Accelerator that runs across the globe.

Every single item sold on StockX is shipped to one of its six global authentication centers and verified by a human to ensure the item is brand new, authentic, and has no manufacturing defects, providing confidence that resale market transactions are safe and secure.

The partnership between StockX and Google came about as StockX started looking for technology to enhance its authentication process. Today this process is managed by the StockX team, with "authenticators" (i.e., employees specially trained to spot fakes, manufacturing defects, and the like) doing the work.

With this problem statement in mind, we gathered experts from the Google Cloud AI team to help StockX use machine learning to improve the speed and accuracy of authentication, spotting which items are fake or have a manufacturing defect. This is a perfect problem for AI: StockX captures large amounts of information about every item and whether it passed or failed authentication, enabling the team to quickly gather training data. StockX and the Accelerator team started collaborating early in the process, planning the project phases together and bringing Google's experience and expertise in solving these types of problems to bear. The teams meet weekly, sharing data, insights, and feedback to enable fast iteration.

Google’s experts in applied machine learning (ML) from the Late-Stage Accelerator have already saved the StockX technical team significant time on model architecture and data management. Both teams are looking forward to moving this collaboration to the next stage of model development, training and serving into production. More to come!

Learn the steps to build an app that detects crop diseases

Posted by Laurence Moroney, TensorFlow Developer Advocate at Google

On October 16-18, thousands of developers from all over the world are coming together for DevFest 2020, the largest virtual weekend of community-led learning on Google technologies.

For DevFest this year, a few familiar faces from Google and the community came together to show you how to build an app using multiple Google Developer tools to detect crop diseases, from scratch, in just a few minutes. This is one example of how developers can leverage a number of Google tools to solve a real-world problem. Watch the full demo video here or learn more below.

Creating the Android app

Image of Chet Haase

Chet Haase, Android Developer Advocate, begins by creating an Android app that recognizes information about plants. To do that, he needs camera functionality, and also machine learning inference.

The app is written in Kotlin, uses CameraX to take the pictures and ML Kit for on-device machine learning analysis. The core functionality revolves around taking a picture, analyzing it, and displaying the results.

[Code showing how the app takes a picture, analyzes it, and displays the results.]

ML Kit makes it easy to recognize the contents of an image using its ImageLabeler object, so Chet just grabs a frame from CameraX and uses that. When this succeeds, the app receives a collection of ImageLabels, which it turns into text strings and displays in a toast.
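The shape of that flow can be sketched in plain Python (the actual app is Kotlin, and `labels_to_text` is a hypothetical helper, not code from the demo): take the label/confidence pairs the labeler returns, keep the confident ones, and format them for display.

```python
# Hypothetical sketch (the real app is Kotlin): turn ML Kit-style
# label results into a single display string for a toast.
def labels_to_text(labels, min_confidence=0.7):
    """labels: list of (text, confidence) pairs, confidence in [0, 1]."""
    kept = [(text, conf) for text, conf in labels if conf >= min_confidence]
    return ', '.join('%s (%.0f%%)' % (text, conf * 100) for text, conf in kept)

print(labels_to_text([('Plant', 0.98), ('Leaf', 0.91), ('Table', 0.40)]))
# Plant (98%), Leaf (91%)
```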

[Demo of the app detecting that the image is a plant.]

Setting up the Machine Learning model

To dig a little deeper, Gus Martins, Google Developer Advocate for TensorFlow, shows us how to set up a Machine Learning model to detect diseases in bean plants.

Gus uses Google Colab, a cloud-hosted development tool, to do transfer learning from an existing ML model hosted on TensorFlow Hub.

He then puts it all together and uses a tool called TensorFlow Lite Model Maker to train the model using our custom dataset.
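In outline, that step might look like the sketch below. This is not the code from the video: the dataset path is a placeholder, and the API calls follow the tflite_model_maker library's documented image-classifier workflow.

```python
# Sketch of the TensorFlow Lite Model Maker step. The dataset directory is a
# placeholder; imports are deferred so the outline loads without TensorFlow.
def train_bean_model(data_dir, export_dir='.'):
    from tflite_model_maker import image_classifier
    from tflite_model_maker.image_classifier import DataLoader

    data = DataLoader.from_folder(data_dir)      # one subfolder per class label
    train_data, test_data = data.split(0.9)      # 90% train / 10% test
    model = image_classifier.create(train_data)  # transfer learning on a TF Hub base model
    print(model.evaluate(test_data))             # loss and accuracy on held-out data
    model.export(export_dir=export_dir)          # writes model.tflite with metadata
```

The exported `model.tflite` carries the metadata that lets Android Studio generate wrapper classes, which is what the next section relies on.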

Setting up the Android app to recognize and build classes

The model Gus created includes all the metadata needed for Android Studio to recognize it and build classes that can run inference on it using TensorFlow Lite. To do so, Annyce Davis, Google Developer Expert for Android, updates the app to use TensorFlow Lite.

Image of Annyce Davis

She uses the model with an image from the camera to get an inference about a bean leaf to see if it is diseased or not.

Now, when we run our app, instead of telling us it’s looking at a leaf, it can tell us if our bean is healthy or, if not, can give us a diagnosis.

(Demo of the app detecting whether or not the plant is healthy)

Transforming the demo into a successful app using Firebase, Design, and Responsible AI principles

This is just a raw demo. But to transform it into a successful app, Todd Kerpelman, Google Developer Advocate for Firebase, suggests using the Firebase plugin for Android Studio to add some Analytics, so we can find out exactly how our users are interacting with our app.

Image of Todd Kerpelman

There are many ways to get at this data: it will start showing up in the Firebase dashboard, but one really fun way of viewing it is StreamView, which gives you a real-time sample of the analytics events coming in.

[Firebase Streamview allows you to view real-time analytics.]

Using Firebase, you could also, for example, add A/B testing to your app to choose the best model for your users, use Remote Config to keep your app up to date, add easy sign-in if you want users to log in, and a whole lot more!

Di Dang, UX Designer & Design Advocate, reminds us that if we were to productize this app, it’s important to keep in mind how our AI design decisions impact users.

Image of Di Dang

For instance, we need to consider whether and how it makes sense to display confidence intervals. Or consider how you design the onboarding experience to set user expectations for the capabilities and limitations of your ML-based app, which is vital to app adoption and engagement. For more guidance on AI design decisions, check out the People + AI Guidebook.

[You can learn more about AI design decisions at the People & AI Guidebook]

This use case focuses on plant diseases, but for this case and others, where our ML-based predictions intersect with people or communities, we absolutely need to think about responsible AI themes like privacy and fairness. Learn more here.

Building a Progressive Web App

Paul Kinlan, Developer Advocate for Web, reminds us to not forget about the web!

Image of Paul Kinlan

Paul shows us how to build a PWA, installable across all platforms, that combines the camera with TensorFlow.js to build an amazing machine learning experience that runs in the browser, with no additional download required.

After setting up the project with a standard layout (with an HTML file, manifest, and Service Worker to make it a PWA) and a data folder that contains our TensorFlow configuration, we’ll wait until all of the JS and CSS has loaded in order to initialize the app. We then set up the camera with our helper object, and load the TensorFlow model. After it becomes active, we can then set up the UI.

The PWA is now ready and waiting for us to use.

PWA image

(The PWA tells us whether or not the plant is healthy - no app download necessary!)

The importance of Open Source

And finally, Pujaa Rajan, Google Developer Expert for TensorFlow and Women Techmakers lead, reminds us that we might also want to open source this project, so that developers can suggest improvements, optimizations, and even additional features by filing an issue or sending a pull request. It's a great way to get your hard work in front of even more people. You can learn more about starting an open source project here.

Image of Pujaa Rajan

In fact, we’ve already open sourced this project, which you can find here.

So now you have the platform for building a real app: with the tooling from Android Studio, CameraX, Jetpack, ML Kit, Colab, TensorFlow, Firebase, Chrome, and Google Cloud, you have a lot of things that just work better together. This isn't a finished project by any means, just a proof of concept for how a minimum viable product with a roadmap to completion can be put together using Google's Developer Tools.


Join us online this weekend at a DevFest near you. Sign up here.

Image archive, analysis, and report generation with Google APIs

Posted by Wesley Chun, Developer Advocate, Google Cloud

File backup isn't the most exciting topic, while analyzing images with AI/ML is far more interesting, so combining them probably isn't a workflow you think about often. However, by augmenting the former with the latter, you can build a more useful solution than either alone. Google provides a diverse array of developer tools you can use to realize this ambition; in fact, you can craft such a workflow with Google Cloud products alone. More compellingly, the basic principle of mixing and matching Google technologies can be applied to many other challenges faced by you, your organization, or your customers.

The sample app presented uses Google Drive and Sheets plus Cloud Storage and Vision to make it happen. The use-case: Google Workspace (formerly G Suite) users who work in industries like architecture or advertising, where multimedia files are constantly generated. Every client job results in yet another Drive subfolder and collection of asset files. Successive projects lead to even more files and folders. At some point, your Drive becomes a "hot mess," making users increasingly inefficient, requiring them to scroll endlessly to find what they're looking for.

Image of a user and their google drive files

A user and their Google Drive files

How can Google Cloud help? Like Drive, Cloud Storage provides file (and generic blob) storage in the cloud. (More on the differences between Drive & Cloud Storage can be found in this video.)

Cloud Storage provides several storage classes depending on how often you expect to access your archived files. The less often files are accessed, the "colder" the storage, and the lower the cost. As users progress from one project to another, they're less likely to need older Drive folders, and those make great candidates to back up to Cloud Storage.
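As a rough rule of thumb, expected access frequency maps to a storage class. The helper below is hypothetical (not part of the sample app); the thresholds follow the minimum storage durations in the Cloud Storage storage-class documentation.

```python
# Hypothetical helper: pick a Cloud Storage class from how many days you
# expect between accesses. Thresholds mirror the documented minimum
# storage durations: Nearline 30 days, Coldline 90 days, Archive 365 days.
def pick_storage_class(days_between_accesses):
    if days_between_accesses >= 365:
        return 'ARCHIVE'
    if days_between_accesses >= 90:
        return 'COLDLINE'
    if days_between_accesses >= 30:
        return 'NEARLINE'
    return 'STANDARD'

print(pick_storage_class(7), pick_storage_class(180))
# STANDARD COLDLINE
```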

First challenge: determine the security model. When working with Google Cloud APIs, you generally select OAuth client IDs to access data owned by users and service accounts for data owned by applications/projects. The former is typically used with Workspace APIs while the latter is the primary way to access Google Cloud APIs. Since we're using APIs from both product groups, we need to make a decision (for now and change later if desired).

Since the goal is a simple proof-of-concept, user auth suffices. OAuth client IDs are standard for Drive & Sheets API access, and the Vision API only needs API keys so the more-secure OAuth client ID is more than enough. The only IAM permissions to acquire are for the user running the script to get write access to the destination Cloud Storage bucket. Lastly, Workspace APIs don't have their own product client libraries (yet), so the lower-level Google APIs "platform" client libraries serve as a "lowest common denominator" to access all four REST APIs. Those who have written Cloud Storage or Vision code using the Cloud client libraries will see something different.
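With user auth chosen, the setup might look like the sketch below. The scope strings and helper are assumptions for illustration, not code from the post; it uses the lower-level `googleapiclient` discovery interface the post describes, with the import deferred so the sketch loads without the library installed.

```python
# Hypothetical setup sketch: one OAuth client ID credential shared across
# all four REST APIs via the platform client library. Scope choices here
# are assumptions (e.g., read-only Drive access suffices for downloads).
SCOPES = (
    'https://www.googleapis.com/auth/drive.readonly',
    'https://www.googleapis.com/auth/spreadsheets',
    'https://www.googleapis.com/auth/devstorage.read_write',
    'https://www.googleapis.com/auth/cloud-vision',
)

def build_services(creds):
    """Return (DRIVE, GCS, VISION, SHEETS) service endpoints."""
    from googleapiclient import discovery  # deferred so the sketch loads standalone
    return (
        discovery.build('drive', 'v3', credentials=creds),
        discovery.build('storage', 'v1', credentials=creds),
        discovery.build('vision', 'v1', credentials=creds),
        discovery.build('sheets', 'v4', credentials=creds),
    )
```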

The prototype is a command-line script. In real life, it would likely be an application in the cloud, executing as a Cloud Function or a Cloud Task running as determined by Cloud Scheduler. In that case, it would use a service account with Workspace domain-wide delegation to act on behalf of an employee to backup their files. See this page in the documentation describing when you'd use this type of delegation and when not to.

Our simple prototype targets individual image files, but you can continue to evolve it to support multiple files, movies, folders, and ZIP archives if desired. Each function calls a different API, creating a "service pipeline" with which to process the images. The first pair of functions are drive_get_img() and gcs_blob_upload(). The former queries for the image on Drive, grabs pertinent metadata (filename, ID, MIME type, size), downloads the binary "blob," and returns all of that to the caller. The latter uploads the binary along with relevant metadata to Cloud Storage. The script is written in Python for brevity, but the client libraries support most popular languages. Below is the aforementioned function pseudocode:

def drive_get_img(fname):
    # Query Drive for the named image and grab pertinent metadata
    rsp = DRIVE.files().list(q="name='%s'" % fname,
            fields='files(id,name,mimeType,modifiedTime)').execute().get('files')[0]
    fileId, fname, mtype = rsp['id'], rsp['name'], rsp['mimeType']
    # Download the image binary itself
    blob = DRIVE.files().get_media(fileId=fileId).execute()
    return fname, mtype, rsp['modifiedTime'], blob

def gcs_blob_upload(fname, folder, bucket, blob, mimetype):
    # Upload the binary plus metadata to the Cloud Storage bucket
    body = {'name': folder+'/'+fname, 'uploadType': 'multipart',
            'contentType': mimetype}
    return GCS.objects().insert(bucket, body, blob).execute()

Next, vision_label_img() passes the binary to the Vision API and formats the results. Finally, that information, along with the file's archived Cloud Storage location, is written as a single row of data in a Google Sheet via sheet_append_row().

def vision_label_img(img):
    # Send the image binary to the Vision API for label detection
    body = {'requests': [{'image': {'content': img},
            'features': [{'type': 'LABEL_DETECTION'}]}]}
    rsp = VISION.images().annotate(body=body).execute().get('responses')[0]
    return ', '.join('(%.2f%%) %s' % (label['score']*100.,
            label['description']) for label in rsp['labelAnnotations'])

def sheet_append_row(sheet_id, row):
    # Append one row of data to the Sheet and report how many cells changed
    rsp = SHEETS.spreadsheets().values().append(spreadsheetId=sheet_id,
            range='Sheet1', valueInputOption='USER_ENTERED',
            body={'values': [row]}).execute()
    return rsp.get('updates').get('updatedCells')

Finally, a "main" program that drives the workflow is needed. It comes with a pair of utility functions, _k_ize() to turn file sizes into kilobytes and _linkify() to build a valid Cloud Storage hyperlink as a spreadsheet formula. These are featured here:

def _k_ize(nbytes):  # bytes to KBs (not KiBs) as str
    return '%6.2fK' % (nbytes/1000.)

def _linkify(bucket, folder, fname):  # make GCS hyperlink to bucket/folder/file
    tmpl = '=HYPERLINK("https://storage.cloud.google.com/{0}/{1}/{2}", "{2}")'
    return tmpl.format(bucket, folder, fname)

def main(fname, bucket, SHEET_ID, folder):
    fname, mtype, ftime, data = drive_get_img(fname)
    gcs_blob_upload(fname, folder, bucket, data, mtype)
    info = vision_label_img(data)
    sheet_append_row(SHEET_ID, [folder, _linkify(bucket, folder, fname), mtype,
            ftime, _k_ize(len(data)), info])

While this post may feature just pseudocode, a barebones working version can be accomplished in roughly 80 lines of actual Python; the code not shown consists of constants and other auxiliary support. The application gets kicked off with a call to main(), passing in a filename, the Cloud Storage bucket to archive it to, a Drive file ID for the Sheet, and a "folder name," e.g., a directory or ZIP archive. Running it on several images results in a spreadsheet that looks like this:

Image archive report in Google Sheets

Developers can build this application step-by-step with our "codelab" (a free, online, self-paced tutorial), which can be found here. As you journey through this tutorial, its corresponding open source repo features separate folders for each step so you know what state your app should be in after every implemented function. (NOTE: Files are not deleted, so your users have to decide when to cleanse their Drive folders.) For backwards compatibility, the script is implemented using the older Python auth client libraries, but the repo has an "alt" folder featuring alternative versions of the final script that use service accounts, Google Cloud client libraries, and the newer Python auth client libraries.

Finally to save you some clicks, here are links to the API documentation pages for Google Drive, Cloud Storage, Cloud Vision, and Google Sheets. While this sample app deals with a constrained resource issue, we hope it inspires you to consider what's possible with Google developer tools so you can build your own solutions to improve users' lives every day!

Building for a more productive inbox with AMP

Posted by Jon Harmer, Product Manager, Google Workspace

With today being the start of AMP Fest, quite naturally AMP is on our minds. One of the ways that AMP shines is through email. With AMP for Email, brands can change triggered emails from being just another notification to an easy way for a user to always have realtime and relevant context.

Expanding the AMP Ecosystem

We’re excited to be partnering with Verizon Media and Salesforce Marketing Cloud to build for a future in which every message and touchpoint is an opportunity to make a delightful impression with rich, web-like experiences.

“The motivation to join the AMP for email project was simple: Allowing brands to send richer and more engaging emails to our users. This in turn creates a much better user experience. This also enables features and functionality right within the email environment which are on par with other native web or app experiences. It’s a perfect fit with our mission ... to create the best consumer email experience.” said Nirmal Thangaraj, Engineer on Verizon Media Mail, which powers AOL and Yahoo! Mail.

Making things even easier for email senders, Salesforce announced at AMP Fest that early next year, senders will be able to send AMP emails from the Marketing Cloud. With Salesforce Marketing Cloud enabling AMP emails, senders can add one to two actionable steps into their emails and store that information back in Salesforce Marketing Cloud.

AMP for Productivity

Another area where AMP can really make an impact is in the office. With the influx of applications in the workplace, companies are using new SaaS applications to simplify individual processes, but this comes with a downside: it complicates a worker's day by requiring employees to jump from app to app to get work done. With context-aware content that's dynamically populated and updated in real time, AMP helps make email a place where work gets done.

Let’s take a look at a couple partners who have been building AMP emails, and how they’ve gone about implementing AMP as part of their email strategy.

Guru

Guru sends tens of thousands of notification emails each day, and while helpful, there were limitations to their effectiveness. Here’s Jason Maynard, Guru’s VP of Product on AMP:

“Static emails are helpful for giving a user awareness of a necessary task, but they also require that user to navigate away from their inbox to our web app in order to review knowledge cards and take specific actions. Their workflow is interrupted. Thus, we decided to leverage AMP in hopes of alleviating this user friction with a goal of fostering engagement within an email thread and reducing context switching.”

And the process and results also were in Guru’s favor: “AMP’s predefined components, documented examples, and testing playgrounds were all development resources that enabled us to deploy AMP payloads very quickly. The new implementation has resulted in users now being able to interact with these notifications to a much greater extent. Users can now expand and read knowledge cards within their email thread. They can also complete actions such as card verifications and reply comments. Emails are now much more stateful and relevant to users.”

After deploying AMP, Guru saw a noticeable uptick in email-driven actions resulting in a 2.5x increase in the number of card comment actions and a 75% increase in card verification. These are thousands of new actions that helped teams manage their knowledge base, all without leaving their inbox.

Amp gif

VOGSY

VOGSY, the Professional Services Automation Cloud App for Workspace, sends approval and notification emails that have multiple conversion paths. Historically, these actions would take a day to complete. With AMP, they've seen an 80% improvement in completion speed. Reaching this success was a smooth and pleasant journey.

“Our developers and our users love AMP technology. Developers truly enjoy building engaging emails with personalized content that is securely and dynamically updated every time you open the email. User adoption is 100%. Completing a workflow can be done without leaving your inbox. That is a huge improvement in user experience. Because of its fast adoption, we expect to send more than 2 million AMP emails in the first year,” said Leo Koster, Founder of VOGSY.

Copper

Amp with Copper image

Copper is a CRM designed for people whose business relies on relationship-building; it functions seamlessly in the background while employees spend time on what matters: customers. Email is obviously a big part of how organizations communicate, plan, and collaborate. And up to now, email has mostly been used as a gateway to other applications where users can take action or complete their task.

“This is why the idea of dynamic emails intrigued us... Supercharging the receivers’ experience to provide up to date information that you can interact with from your inbox. Instead of receiving static email notifications each time you are tagged, we leveraged AMP for email to give users a single, dynamic email where they can see relevant information about the opportunity. They can then respond to comments from their teammates—bringing our users the most seamless experience possible wherever they like to work,” said Sefunmi Osinaike, Product Manager at Copper.

And best of all, the process was simple: “Our developers described the documentation as enjoyable because it helped us add rich components without the overhead of figuring out how to make them work in email with basic HTML. The ease of use of lists, inputs and tooltips accelerated the rate we prototyped our feature and it saved us a lot of time. We also got a ton of support on stack overflow with a response rate in less than 24 hours.”

For Copper, AMP has allowed them to take the experiences that always existed in Copper, but move them closer to the employee’s day-to-day workflow by allowing them to take those actions from email.

Stripo

As an email design platform, Stripo.email has seen over 1,000 different companies create AMP email campaigns with carousels, feedback forms, and Net Promoter Score forms in one month alone. Stripo was able to implement AMP so users could fill out forms without having to leave their inbox. The strategy drove a 5x lift in effectiveness over traditional questionnaires.

We’re excited about AMP and all of the great use cases partners are implementing to modernize the capabilities of email. To learn more about AMP for Email, click here and be sure to check out AMP Fest.

Building for a more productive inbox with AMP

Posted by Jon Harmer, Product Manager, Google Workspace

With today being the start of AMP Fest, quite naturally AMP is on our minds. One of the ways that AMP shines is through email. With AMP for Email, brands can change triggered emails from being just another notification to an easy way for a user to always have realtime and relevant context.

Expanding the AMP Ecosystem

We’re excited to be partnering with Verizon Media and Salesforce Marketing Cloud to build for a future in which every message and touchpoint is an opportunity to make a delightful impression with rich, web-like experiences.

“The motivation to join the AMP for email project was simple: Allowing brands to send richer and more engaging emails to our users. This in turn creates a much better user experience. This also enables features and functionality right within the email environment which are on par with other native web or app experiences. It’s a perfect fit with our mission ... to create the best consumer email experience.” said Nirmal Thangaraj, Engineer on Verizon Media Mail, which powers AOL and Yahoo! Mail.

Making things even easier for email senders, Salesforce announced at AMP Fest that early next year, senders will be able to send AMP emails from the Marketing Cloud. With Salesforce Marketing Cloud enabling AMP emails, senders can add one to two actionable steps into their emails and store that information back in Salesforce Marketing Cloud.

AMP for Productivity

Another area where AMP can really make an impact is in the office. With the influx of applications in the workplace, companies are using new SaaS applications to simplify individual processes - but it comes with a downside of complicating a workers day by requiring that employee jump from app to app to get work done. With context aware content that's dynamically populated and updated in real-time, AMP helps make email a place where work gets done .

Let’s take a look at a couple partners who have been building AMP emails, and how they’ve gone about implementing AMP as part of their email strategy.

Guru

Guru sends tens of thousands of notification emails each day, and while helpful, there were limitations to their effectiveness. Here’s Jason Maynard, Guru’s VP of Product on AMP:

“Static emails are helpful for giving a user awareness of a necessary task, but they also require that user to navigate away from their inbox to our web app in order to review knowledge cards and take specific actions. Their workflow is interrupted. Thus, we decided to leverage AMP in hopes of alleviating this user friction with a goal of fostering engagement within an email thread and reducing context switching.”

And the process and results also were in Guru’s favor: “AMP’s predefined components, documented examples, and testing playgrounds were all development resources that enabled us to deploy AMP payloads very quickly.The new implementation has resulted in users now being able to interact with these notifications to a much greater extent. Users can now expand and read knowledge cards within their email thread. They can also complete actions such as card verifications and reply comments. Emails are now much more stateful and relevant to users.”

After deploying AMP, Guru saw a noticeable uptick in email-driven actions resulting in a 2.5x increase in the number of card comment actions and a 75% increase in card verification. These are thousands of new actions that helped teams manage their knowledge base, all without leaving their inbox.

Amp gif

VOGSY

VOGSY, the Professional Services Automation Cloud App for Workspace, sends approval and notification emails that have multiple conversion paths. Historically, these actions would take a day to complete. With AMP, they've seen an 80% improvement in completion speed. Reaching this success was a smooth and pleasant journey.


Building for a more productive inbox with AMP

Posted by Jon Harmer, Product Manager, Google Workspace

With today being the start of AMP Fest, quite naturally AMP is on our minds. One of the ways that AMP shines is through email. With AMP for Email, brands can turn triggered emails from just another notification into an easy way for users to always have real-time, relevant context.

Expanding the AMP Ecosystem

We’re excited to be partnering with Verizon Media and Salesforce Marketing Cloud to build for a future in which every message and touchpoint is an opportunity to make a delightful impression with rich, web-like experiences.

“The motivation to join the AMP for email project was simple: Allowing brands to send richer and more engaging emails to our users. This in turn creates a much better user experience. This also enables features and functionality right within the email environment which are on par with other native web or app experiences. It’s a perfect fit with our mission ... to create the best consumer email experience.” said Nirmal Thangaraj, Engineer on Verizon Media Mail, which powers AOL and Yahoo! Mail.

Making things even easier for email senders, Salesforce announced at AMP Fest that early next year, senders will be able to send AMP emails from the Marketing Cloud. With Salesforce Marketing Cloud enabling AMP emails, senders can add one to two actionable steps into their emails and store that information back in Salesforce Marketing Cloud.

AMP for Productivity

Another area where AMP can really make an impact is in the office. With the influx of applications in the workplace, companies are using new SaaS applications to simplify individual processes, but this comes with a downside: it complicates a worker’s day by requiring employees to jump from app to app to get work done. With context-aware content that’s dynamically populated and updated in real time, AMP helps make email a place where work gets done.

Let’s take a look at a couple partners who have been building AMP emails, and how they’ve gone about implementing AMP as part of their email strategy.

Guru

Guru sends tens of thousands of notification emails each day, and while helpful, there were limitations to their effectiveness. Here’s Jason Maynard, Guru’s VP of Product on AMP:

“Static emails are helpful for giving a user awareness of a necessary task, but they also require that user to navigate away from their inbox to our web app in order to review knowledge cards and take specific actions. Their workflow is interrupted. Thus, we decided to leverage AMP in hopes of alleviating this user friction with a goal of fostering engagement within an email thread and reducing context switching.”

And the process and results also were in Guru’s favor: “AMP’s predefined components, documented examples, and testing playgrounds were all development resources that enabled us to deploy AMP payloads very quickly. The new implementation has resulted in users now being able to interact with these notifications to a much greater extent. Users can now expand and read knowledge cards within their email thread. They can also complete actions such as card verifications and reply comments. Emails are now much more stateful and relevant to users.”

After deploying AMP, Guru saw a noticeable uptick in email-driven actions resulting in a 2.5x increase in the number of card comment actions and a 75% increase in card verification. These are thousands of new actions that helped teams manage their knowledge base, all without leaving their inbox.

Amp gif

VOGSY

VOGSY, the Professional Services Automation Cloud App for Workspace, sends approval and notification emails that have multiple conversion paths. Historically, these actions would take a day to complete. With AMP, they've seen an 80% improvement in completion speed. Reaching this success was a smooth and pleasant journey.

“Our developers and our users love AMP technology. Developers truly enjoy building engaging emails with personalized content that is securely and dynamically updated every time you open the email. User adoption is 100%. Completing a workflow can be done without leaving your inbox. That is a huge improvement in user experience. Because of its fast adoption, we expect to send more than 2 million AMP emails in the first year,” said Leo Koster, Founder of VOGSY.

Copper

Amp with Copper image

Copper is a CRM designed for people whose business relies on relationship-building; it functions seamlessly in the background while employees spend time on what matters: customers. Email is a big part of how organizations communicate, plan, and collaborate. Yet until now, email has mostly been used as a gateway to other applications where users can take action or complete their tasks.

“This is why the idea of dynamic emails intrigued us... Supercharging the receivers’ experience to provide up to date information that you can interact with from your inbox. Instead of receiving static email notifications each time you are tagged, we leveraged AMP for email to give users a single, dynamic email where they can see relevant information about the opportunity. They can then respond to comments from their teammates—bringing our users the most seamless experience possible wherever they like to work,” said Sefunmi Osinaike, Product Manager at Copper.

And best of all, the process was simple: “Our developers described the documentation as enjoyable because it helped us add rich components without the overhead of figuring out how to make them work in email with basic HTML. The ease of use of lists, inputs and tooltips accelerated the rate we prototyped our feature and it saved us a lot of time. We also got a ton of support on Stack Overflow, with responses in less than 24 hours.”

AMP has allowed Copper to take experiences that already existed in its product and move them closer to the employee’s day-to-day workflow by letting users take those actions directly from email.

Stripo

As an email design platform, Stripo.email has seen over 1,000 different companies create AMP email campaigns with Carousels, Feedback Forms, and Net Promoter Score forms, all in one month alone. Stripo implemented AMP so users could fill out forms without having to leave their inbox. The strategy drove a 5x lift in effectiveness over traditional questionnaires.

We’re excited about AMP and all of the great use cases partners are implementing to modernize the capabilities of email. To learn more about AMP for Email, click here and be sure to check out AMP Fest.

Top brands integrate Google Assistant with new tools and features for Android apps and Smart Displays

Posted by Baris Gultekin and Payam Shodjai, Directors of Product Management

Top brands turn to Google Assistant every day to help their users get things done on their phones and on Smart Displays, such as playing games, finding recipes, or checking investments, just by using their voice. In fact, over the last year, the number of Actions completed by third-party developers has more than doubled.

We want to support our developer ecosystem as they continue building the best experiences for smart displays and Android phones. That’s why today at Google Assistant Developer Day, we introduced:

  • New App Actions built-in intents, to let Android developers easily integrate Google Assistant with their apps
  • New discovery features, such as suggestions and shortcuts, to help users easily discover and engage with Android apps
  • New developer tools and features, such as a testing API, improved voices, and frameworks for game development, to help you build high-quality native experiences for Smart Displays
  • New discovery and monetization improvements, to help users discover and engage with developers’ experiences on Assistant

Now, all Android Developers can bring Google Assistant to their apps

Now, every Android app developer can make it easier for their users to find what they’re looking for by fast-forwarding them to the app’s key functionality using just their voice. With App Actions, top app developers such as Yahoo Mail, Fandango, and ColorNote are creating these natural and engaging experiences by mapping their users’ intents to specific functionality within their apps. Instead of having to navigate through each app to get tasks done, users can simply say “Hey Google” plus the outcome they want, such as “find Motivation Mix on Spotify.”

Here are a few updates we’re introducing today to App Actions.

Quickly open and search within apps with common intents

Every day, people ask Google Assistant to open their favorite apps. Today, we are building on this functionality to open specific pages within apps and also search within apps. Starting today, you can use the GET_THING intent to search within apps and the OPEN_APP_FEATURE intent to open specific pages in apps; offering more ways to easily connect users to your app through Assistant.

Many top brands such as eBay and Kroger are already using these intents. If you have the eBay app on your Android phone, try saying “Hey Google, find baseball cards on eBay” to try the GET_THING intent.

If you have the Kroger app on your Android phone, try saying “Hey Google, open Kroger pay” to try the OPEN_APP_FEATURE intent.

It's easy to implement all these common intents to your Android apps. You can simply declare support for these capabilities in your Actions.xml file to get started. For searching, you can provide a deep link that will allow Assistant to pass a search term into your app. For opening pages, you can provide a deep link with the corresponding name for Assistant to match users' requests.

Vertical specific built-in intents

For a deeper integration, we offer vertical-specific built-in intents (BIIs) that let Google take care of all the Natural Language Understanding (NLU) so you don’t have to. We first piloted App Actions in some of the most popular app verticals such as Finance, Ridesharing, Food Ordering, and Fitness. Today, we are announcing that we have grown our catalog to cover more than 60 intents across 10 verticals, adding new categories like Social, Games, Travel & Local, Productivity, Shopping, and Communications.

For example, Twitter and Wayfair have already implemented these vertical built-in intents. So, if you have the Twitter app on your Android phone, try saying “Hey Google, post a Tweet” to see a Social vertical BII in action.

If you have the Wayfair app on your Android phone, try saying “Hey Google, buy accent chairs on Wayfair” to see a Shopping vertical BII in action.

Check out how you can get started with these built-in intents or explore creating custom intents today.

Custom Intents to highlight unique app experiences

Every app is unique, with its own features and capabilities that may not match the list of available App Actions built-in intents. For cases where there isn’t a built-in intent for your app functionality, you can instead create a custom intent. Like BIIs, custom intents follow the actions.xml schema and act as connection points between Assistant and your defined fulfillments.
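As a sketch, a custom intent entry might look like the following; the intent name, query-patterns resource, and deep link here are hypothetical:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical sketch: the intent name, queryPatterns resource,
     and deep link are placeholders. -->
<actions>
  <action
      intentName="custom.actions.intent.RESERVE_SLOT"
      queryPatterns="@array/ReserveSlotQueries">
    <fulfillment urlTemplate="https://example.com/reserve" />
  </action>
</actions>
```

The queryPatterns attribute points to an Android string-array resource of example phrasings, which is how Assistant learns the queries that should trigger the custom intent.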

Snapchat and Walmart use custom intents to extend their app’s functionality to Google Assistant. For example, if you have the Snapchat app on your Android phone, just say, “Hey Google, send a Snap using the cartoon face lens” to try their Custom Intent.

Or, if you have the Walmart app on your Android phone, just say, “Hey Google, reserve a time slot with Walmart” to schedule your next grocery pickup.

With more common, built-in, and custom intents available, every Android developer can now enable their app to fulfill Assistant queries tailored to exactly what their app offers. Developers can also use familiar tools such as Android Studio, and with just a few days of work, easily integrate their Android apps with Google Assistant.

Suggestions and Shortcuts for improving user discoverability

We are excited about these new improvements to App Actions, but we also understand that it’s equally important for people to be able to discover your App Actions. We’re designing new touch points to help users easily learn about Android apps that support them. For example, we’ll recommend relevant App Actions even when the user doesn’t mention the app name explicitly, by showing suggestions. If you say broadly “Hey Google, show me Taylor Swift”, we’ll highlight a suggestion chip that guides you to open the search result in Twitter. Google Assistant will also suggest apps proactively, depending on individual app usage patterns.

Android users will also be able to customize their experience, creating their own way to automate their most common tasks with app shortcuts, which let people set up quick phrases for app functions they frequently use. For example, you can create a MyFitnessPal shortcut to easily track your calories throughout the day and customize the query to say what you want, such as “Hey Google, check my calories.”

By simply saying “Hey Google, shortcuts”, users can set up and explore suggested shortcuts in the settings screen. We’ll also make proactive suggestions for shortcuts throughout the Assistant mobile experience, tailored to how you use your phone.

Build high quality conversational Actions for Smart Displays

Back in June, we launched new developer tools such as Actions Builder and Actions SDK, making it easier to design and build conversational Actions on Assistant, like games, for Smart Displays. Many partners, such as Cool Games and Sony, have already been building with these. We’re excited to share new updates that enable developers to build more high-quality native Assistant experiences with new game development frameworks and better testing tools, and that make user discovery of those experiences better than ever.

New developer tools and features

Improved voices

We’ve heard your feedback that you need better voices to match the quality of the experiences you’re delivering on the Assistant. We’ve released two new English voices that take advantage of an improved prosody model to make Assistant sound more natural. Give it a listen.

These voices are now available and you can leverage them in your existing Actions by simply making the change in the Actions Console.

Interactive Canvas expansion

But what can you build with these new voices? Last year, we introduced Interactive Canvas, an API that lets you build custom experiences for the Assistant that can be controlled via both touch and voice using simple technologies like HTML, CSS, and JavaScript.

We’re expanding Interactive Canvas to Actions in the education and storytelling verticals, in addition to games. Whether you’re building an Action that teaches someone to cook, explains the phases of the moon, helps a family member with grammar, or takes you through an interactive adventure, you’ll have access to the full visual power of Interactive Canvas.
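To ground the web side of this, here is a minimal Interactive Canvas sketch; the “scene” element id and “phase” state field are hypothetical, and the `interactiveCanvas` global is supplied by the Canvas client library loaded on the Action’s web app page:

```javascript
// Callbacks the Canvas library invokes as the conversation progresses.
// The "scene" element id and "phase" state field are illustrative only.
const callbacks = {
  onUpdate(data) {
    // `data` is the state array sent from the Action's webhook.
    const state = (data && data[0]) || {};
    if (state.phase) {
      document.getElementById('scene').textContent =
          'Moon phase: ' + state.phase;
    }
  },
};

// Register with the library when running inside the Canvas web app.
if (typeof interactiveCanvas !== 'undefined') {
  interactiveCanvas.ready(callbacks);
}
```

Because the webhook pushes state and the page reacts in `onUpdate`, the same page can respond to both voice turns and touch events.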

Improved testing to deliver high quality experiences

The Actions Testing API is a new programmatic way to test your critical user journeys and ensure there aren’t any broken conversation paths. Using this framework, you can run end-to-end tests in an isolated preview environment, run regression tests, and add continuous testing to your arsenal. This API is being released to general availability soon.

New Dialogflow migration tool

For those of you who built experiences using Dialogflow, we want you to enjoy the benefits of the new platform without having to build from scratch. That’s why we’re offering a migration tool inside the Actions Console that automates much of the work to move projects to the improved platform.

New site for game developers

Game developers, we built a new resource hub just for you. Boost your game design expertise with full source code to games, design best practices, interviews with game developers, tools, and everything you need to create voice-enabled games for Smart Displays.

Discovery

With more incredible experiences being built, we know it can be challenging to help users discover them and drive engagement. To make it easier for people to discover and engage with your experiences, we have invested in a slew of new discovery features:

New Built-in intents and the Learning Hub

We’ll soon be opening two new set Built-in intents (BIIs) for public registration: Education and Storytelling. Registering your Actions for these intents allows users to discover them in a simple, natural way through general requests to Google Assistant. These new BIIs cover a range of intents in the Education and Storytelling domains and join Games as principal areas of investment for the developer ecosystem.

People will then be able to say "Hey Google, teach me something new" and be presented with a Learning Hub where they can browse different education experiences. For stories, users can simply say "Hey Google, tell me a story". Developers can soon register for both new BIIs to get their experiences listed in these browsable catalogs.

Household Authentication token and improving transactions

One of the exciting things about the Smart Display is that it’s an inherently communal device. So if you’re offering an experience that is meant to be enjoyed collaboratively, you need a way to share state between household members and between multiple devices. Let’s say you’re working on a puzzle and your roommate wants to help with a few pieces on the Smart Display. We’re introducing household authentication tokens so all users in a home can now share these types of experiences. This feature will be available soon via the Actions console.

Finally, we're making improvements to the transaction flow on Smart Displays. We want to make it easier for you to add seamless voice-based and display-based monetization capabilities to your experience. We've started by supporting voice-match as an option for payment authorization. And early next year, we'll also launch an on-display CVC entry.

Simplifying account linking and authentication

Once you build personalized and premium experiences, you need to make it as easy as possible to connect with existing accounts. To help streamline this process, we’re opening two betas: Link with Google and App Flip, for improved account linking flows to allow simple, streamlined authentication via apps.

Link with Google enables anyone with an Android or iOS app where they are already logged in to complete the linking flow with just a few clicks, without needing to re-enter credentials.

App Flip helps you build a better mobile account linking experience and decrease drop-off rates. App Flip allows your users to seamlessly link their accounts to Google without having to re-enter their credentials.

Assistant links

In addition to launching new channels of discovery for developer Actions, we also want to provide more control over how you and your users reach your Actions. Action links, a way to deep-link to your conversational Action, have been used with great success by partners like Sushiro, Caixa, and Giallo Zafferano. Now we are reintroducing this feature as Assistant links, which enable partners such as TD Ameritrade to deliver rich Google Assistant experiences on their websites as well as deep-link to their Google Assistant integrations from anywhere on the web.

We are very excited about all these announcements - both across App Actions and native Assistant development. Whether you are exploring new ways to engage your users using voice via App Actions, or looking to build something new to engage users at home via Smart Displays, we hope you will leverage these new tools and features and share your feedback with us.

Improving shared AR experiences with Cloud Anchors in ARCore 1.20

Posted by Eric Lai, Product Manager, Augmented Reality

Augmented reality (AR) can help you explore the world around you in new, seemingly magical ways. Whether you want to venture through the Earth’s unique habitats, explore historic cultures or even just find the shortest path to your destination, there’s no shortage of ways that AR can help you interact with the world.

That’s why we’re constantly improving ARCore — so developers can build amazing AR experiences that help us reimagine what’s possible.

In 2018, we introduced the Cloud Anchors API in ARCore, which lets people across devices view and share the same AR content in real-world spaces. Since then, we’ve been working on new ways for developers to use Cloud Anchors to make AR content persist and more easily discoverable.

Create long-lasting AR experiences

Last year, we previewed persistent Cloud Anchors, which lets people return to shared AR experiences again and again. With ARCore 1.20, this feature is now widely available to Android, iOS, and Unity mobile developers.

Developers all over the world are already using this technology to help people learn, share and engage with the world around them in new ways.

MARK, which we highlighted last year, is a social platform that lets people leave AR messages in real-world locations for friends, family and their community to discover. MARK is now available globally and will be launching the MARK Hope Campaign in the US to help people raise funds for their favorite charities and have their donations matched for a limited time.

AR photo

MARK by People Sharing Streetart Together Limited

REWILD Our Planet is an AR nature series produced by Melbourne-based studio PHORIA. The experience is based on the Netflix original documentary series Our Planet. REWILD uses Ultra High Definition video alongside AR content to let you venture into Earth’s unique habitats and interact with endangered wildlife. It originally launched in museums, but can now be enjoyed on your smartphone in your living room. As episodes of the show are released, persistent Cloud Anchors allow you to return to the same spot in your own home to see how nature is changing.

AR image

REWILD Our Planet by PHORIA

Changdeok ARirang is an AR tour guide app that combines the power of SK Telecom’s 5G with persistent Cloud Anchors. Visitors at Changdeokgung Palace in South Korea are guided by the legendary Haechi to relevant locations where they can experience historical and cultural high fidelity AR content. Changdeok ARirang at Home was also launched so that this same experience can be accessed from the comfort of your couch.

AR image

Changdeok ARirang by SK Telecom

In Sweden, SJ Labs, the innovation arm of Swedish Railways, together with Bontouch, their tech innovation partner, uses persistent Cloud Anchors to help passengers find their way at Central Station in Stockholm, making it easier and faster for them to make their train departures.

AR image

SJ Labs by SJ – Swedish Railways

Coming soon, Lowe’s Persistent View will let you design your home in AR with the help of an expert. You’ll be able to add furniture and appliances to different areas of your home to see how they’d look, and return to the experience as many times as needed before making a purchase.

AR example

Lowe’s Persistent View powered by Streem

If you’re interested in building AR experiences that last over time, you can learn more about persistent Cloud Anchors in our docs.

Call for collaborators: test a new way to find AR content

As developers use Cloud Anchors to attach more AR experiences to the world, we also want to make it easier for people to discover them. That’s why we’re working on earth Cloud Anchors, a new feature that uses AR and global localization—the underlying technology that powers Live View features on Google Maps—to easily guide users to AR content. If you’re interested in early access to test this feature, you can apply here.

Some earth Cloud Anchors concepts

Introducing Learn, your key to unlocking Google’s educational content for developers

Posted by Amani Newton, Technical Writer

Any Codelabs fans in the house?

If you haven’t heard yet, we’re excited to announce the launch of developers.google.com/learn, a new one-stop destination for developers to gain the knowledge and skills needed to develop software with Google’s technology. Learn brings the learning content you already love from Google together in one easy-to-access place.

Google Developers learning page image

The home page of developers.google.com/learn

Previously, our educational content was separated by product area and platform. For example, you’d likely find Firebase Codelabs on firebase.google.com, and their video series on YouTube. We know you love these educational offerings, but they could be somewhat difficult to find unless you were already in the know.

To address this issue, we built Learn to act as a portal, linking all these amazing educational activities together. In addition, we came up with some handy new ways to organize the content, so you can easily find what you’re looking for the first time, every time.

Codelabs

For newbies: Codelabs walk you through the process of building a small application, or adding a new feature to an existing application. They cover a wide range of topics such as Android Wear, Google Compute Engine, Project Tango, and Google APIs on iOS.

If you’re already familiar with Codelabs, rest assured that not too much has changed. Codelabs still provide guided, hands-on coding experience for new and aspiring developers at no charge, and you can still access all of them through codelabs.developers.google.com.

What has changed is that now there’s a new way to experience Codelabs: through our Pathways.

Pathways

Pathways image

The home for Google Learning Pathways

Pathways are a new way to learn skills using all of the educational activities Google has developed for that skill. They organize selected videos, articles, blog posts, and Codelabs, together in one sequential learning experience so you can develop knowledge and skills at your own pace.

Let’s use Flutter as an example. Did you love The Boring Flutter Development Show, but your style of learning is a little more hands-on? Look no further than the Build apps with Flutter pathway, featuring explanatory videos from the Flutter team and step-by-step Codelabs designed to help you build your first Flutter app.

Flutter image

The Flutter pathway

All Pathways finish with an assessment, which you can pass to earn a badge.

Topics

Topics allow you to explore collections of related codelabs, pathways, news, and videos.

Are you a chatbot developer, or aspire to be one? You can find all the latest news and educational content regarding chatbots in one easy to find place.

Topics image

The home for news and more about Chatbots

Developer Profiles

Here’s where the fun begins! You can show off all the new stuff you’ve learned on your Google Developer Profile.

To use the social features, first, create your unique Developer Profile on google.dev.

Developer Profile

Create a Developer Profile on google.dev

Your first badge will be the Created Developer Profile badge.

first badge

Create a Developer Profile badge

Next, try one of the pathways we currently host. After completing the activities you’ll take a quiz, and if you pass, you’ll be awarded the badge for that pathway. You can share all of your earned badges on social media, and make your other developer friends jealous!