Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

ML Kit Pose Detection Makes Staying Active at Home Easier

Posted by Kenny Sulaimon, Product Manager, ML Kit; Chengji Yan and Areeba Abid, Software Engineers, ML Kit


Two months ago we introduced the standalone version of the ML Kit SDK, making it even easier to integrate on-device machine learning into mobile apps. Since then we’ve launched the Digital Ink Recognition API, and also introduced the ML Kit early access program. Our first two early access APIs were Pose Detection and Entity Extraction. We’ve received an overwhelming amount of interest in these new APIs and today, we are thrilled to officially add Pose Detection to the ML Kit lineup.


A New ML Kit API, Pose Detection


Examples of ML Kit Pose Detection

ML Kit Pose Detection is an on-device, cross-platform (Android and iOS), lightweight solution that tracks a subject's physical actions in real time. With this technology, building a one-of-a-kind experience for your users is easier than ever.

The API produces a full body 33 point skeletal match that includes facial landmarks (ears, eyes, mouth, and nose), along with hand and foot tracking. The API was also trained on a variety of complex athletic poses, such as yoga positions.

Skeleton image detailing all 33 landmark points
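To make this concrete, here is a minimal Kotlin sketch of creating a detector and reading one of those 33 points on Android. It assumes the ML Kit pose detection library and class names as documented at launch; treat it as a starting point rather than a definitive integration.

import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.PoseLandmark
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

// Create a detector for single images using the default ("Fast") model.
val detector = PoseDetection.getClient(
    PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.SINGLE_IMAGE_MODE)
        .build()
)

fun detectPose(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, 0 /* rotationDegrees */)
    detector.process(image)
        .addOnSuccessListener { pose ->
            // Landmarks are 2D points; depth is not reported.
            val nose = pose.getPoseLandmark(PoseLandmark.NOSE)
            nose?.let { Log.d("Pose", "Nose at (${it.position.x}, ${it.position.y})") }
        }
        .addOnFailureListener { e -> Log.e("Pose", "Detection failed", e) }
}

In a real app you would feed the detector camera frames rather than single bitmaps, but the flow is the same.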

Under The Hood

Diagram of the ML Kit Pose Detection Pipeline

The power of the ML Kit Pose Detection API is in its ease of use. The API builds on the cutting-edge BlazePose pipeline and allows developers to build great experiences on Android and iOS with little effort. We offer a full body model, support for both video and static image use cases, and have added multiple pre- and post-processing improvements to help developers get started with only a few lines of code.

The ML Kit Pose Detection API utilizes a two step process for detecting poses. First, the API combines an ultra-fast face detector with a prominent person detection algorithm, in order to detect when a person has entered the scene. The API is capable of detecting a single (highest confidence) person in the scene and requires the face of the user to be present in order to ensure optimal results.

Next, the API applies a full body, 33 landmark point skeleton to the detected person. These points are rendered in 2D space and do not account for depth. The API also contains a streaming mode option for further performance and latency optimization. When enabled, instead of running person detection on every frame, the API only runs this detector when the previous frame no longer detects a pose.

The ML Kit Pose Detection API also features two operating modes, “Fast” and “Accurate”. With “Fast” mode enabled, you can expect a frame rate of around 30+ FPS on a modern Android device, such as a Pixel 4, and 45+ FPS on a modern iOS device, such as an iPhone X. With “Accurate” mode enabled, you can expect more stable x, y coordinates on both types of devices, but a slower frame rate overall.
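Choosing between the two modes comes down to which options class you build the detector with. The following Kotlin sketch assumes the Android library's default and accurate options classes as documented:

import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.accurate.AccuratePoseDetectorOptions
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

// "Fast" model in streaming mode, suited to live camera frames.
val fastDetector = PoseDetection.getClient(
    PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
        .build()
)

// "Accurate" model: more stable x, y coordinates, lower frame rate.
val accurateDetector = PoseDetection.getClient(
    AccuratePoseDetectorOptions.Builder()
        .setDetectorMode(AccuratePoseDetectorOptions.STREAM_MODE)
        .build()
)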

Lastly, we’ve also added a per-point “InFrameLikelihood” score to help app developers ensure their users are in the right position and filter out extraneous points. This score is calculated during the landmark detection phase; a low likelihood score suggests that a landmark is outside the image frame.
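As a small illustration, here is how an app might filter landmarks on that score in Kotlin (the 0.8 threshold is an arbitrary value chosen for this example, not an API default):

import com.google.mlkit.vision.pose.Pose
import com.google.mlkit.vision.pose.PoseLandmark

// Keep only the landmarks the model believes are inside the image frame.
fun visibleLandmarks(pose: Pose): List<PoseLandmark> =
    pose.allPoseLandmarks.filter { it.inFrameLikelihood > 0.8f }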

Real World Applications


Examples of a pushup and squat counter using ML Kit Pose Detection

Keeping up with regular physical activity is one of the hardest things to do while at home. We often rely on gym buddies or physical trainers to help us with our workouts, but this has become increasingly difficult. Apps and technology can often help with this, but with existing solutions, many app developers are still struggling to understand and provide feedback on a user’s movement in real time. ML Kit Pose Detection aims to make this problem a whole lot easier.

The most common applications for pose detection are fitness and yoga trackers. It’s possible to use our API to track pushups, squats, and a variety of other physical activities in real time. These complex use cases can be achieved by using the output of the API, either with angle heuristics, by tracking the distance between joints, or with your own proprietary classifier model.

To get you jump-started with classifying poses, we are sharing additional tips on how to use angle heuristics to classify popular yoga poses. Check it out here.
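To illustrate the angle-heuristic approach, here is one way to compute the angle at a middle joint from three landmarks in Kotlin. This is a sketch of the general technique, not code from the linked guide; for a squat counter you might apply it to the hip, knee, and ankle landmarks and count a rep when the angle crosses a tuned threshold.

import com.google.mlkit.vision.pose.PoseLandmark
import kotlin.math.abs
import kotlin.math.atan2

// Angle (in degrees, 0..180) at `mid`, formed by first -> mid -> last.
fun jointAngle(first: PoseLandmark, mid: PoseLandmark, last: PoseLandmark): Double {
    val a = atan2(last.position.y - mid.position.y, last.position.x - mid.position.x)
    val b = atan2(first.position.y - mid.position.y, first.position.x - mid.position.x)
    var degrees = abs(Math.toDegrees((a - b).toDouble()))
    if (degrees > 180) degrees = 360 - degrees
    return degrees
}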

Learning to Dance Without Leaving Home

Learning a new skill is always tough, but learning to dance without the aid of a real time instructor is even tougher. One of our early access partners, Groovetime, has set out to solve this problem.

With the power of ML Kit Pose Detection, Groovetime allows users to learn their favorite dance moves from popular short-form dance videos, while giving users automated real time feedback on their technique. You can join their early access beta here.

Groovetime App using ML Kit Pose Detection

Staying Active Wherever You Are

Our Pose Detection API is also helping adidas Training, another one of our early access partners, build a virtual workout experience that will help you stay active no matter where you are. This one-of-a-kind innovation will help analyze and give feedback on your movements, using nothing more than your phone. Integration into the adidas Training app is still in the early phases of the development cycle, but stay tuned for more updates in the future.

How to get started

If you would like to start using the Pose Detection API in your mobile app, head over to the developer documentation or check out the sample apps for Android and iOS to see the API in action. For questions or feedback, please reach out to us through one of our community channels.

Join us for Google Assistant Developer Day on October 8

Posted by Baris Gultekin, Director, Product Management Google Assistant and
Payam Shodjai, Director, Product Management Google Assistant

More and more people turn to Google Assistant every day to help them get the most out of their phones and smart displays: from playing games to using their favorite apps by voice, there are more opportunities than ever for developers to create new and engaging experiences for Google Assistant.

We welcome you to join us virtually at our Google Assistant Developer Day on Thursday, October 8, to learn more about new tools and features we’re building for developers to bring Google Assistant to mobile apps and Smart Displays and help drive discoverability and engagement via voice. This will also be a great chance to chat live with Google leaders and engineers on the team to get your questions answered.

You’ll hear from our product experts and partnership leads on best practices for integrating with Google Assistant to help users more easily engage with their favorite apps by voice. Other sessions will include in-depth conversations around native development on Google Assistant, and much more.

We’ll also have guest speakers like Garrett Gaudini, Head of Product at Postmates; Laurens Rutten, Founder & CEO of CoolGames; and Corey Bozarth, VP of Product & Monetization at MyFitnessPal, among many others, join us on stage to share their stories about how voice has transformed the way people interact with their apps and services.

Whether you build for mobile or smart home, these new tools will help make your content and services available to people who want to use their voice to get things done.

Registration is FREE! Head on over to the event website to register and check out the schedule.

Helping the Haitian economy, one line of code at a time

Posted by Jennifer Kohl, Program Manager, Developer Community Programs


Eustache Luckens Yadley at a GDG Port-au-Prince meetup

Meet Eustache Luckens Yadley, or “Yadley” for short. As a web developer from Port-au-Prince, Yadley has spent his career building web applications that benefit the local Haitian economy. Whether it’s ecommerce platforms that bring local sellers to market or software tools that help local businesses operate more effectively, Yadley has always been there with a technical hand to lend.

However, Yadley has also spent his career watching Haiti’s unemployment numbers rise to among the highest in the Caribbean. As he describes it,


“Every day, several thousand young people have no job to get by.”


So with code in mind and mouse in hand, Yadley got right to work. His first step was to identify a need in the economy. He soon figured out that Haiti had a shortage of delivery methods for consumers, making home delivery purchases of any kind extremely unreliable. With this observation, Yadley also noticed that there was a surplus of workers willing to deliver the goods, but no infrastructure to align their needs with those of the market.


Yadley watching a demo at a GDG Port-au-Prince meetup

In this moment, Yadley did what many good developers would do: build an app. He created the framework for what is now called “Livrezonpam,” an application that allows companies to post where and when they need a particular product delivered and workers to find the corresponding delivery jobs closest to them.

With a brilliant solution in hand, Yadley’s last step was to find the right technical tools to build out the concept and make it a viable platform that users could benefit from.

It was at this crucial step when Yadley found the Port-au-Prince Google Developer Group. With GDG Port-au-Prince, Yadley was able to bring his young app right into the developer community, run different demos of his product to experienced users, and get feedback from a wide array of developers with an intimate knowledge of the Haitian tech scene. The takeaways from working in the community translated directly to his work. Yadley learned how to build with the Google Cloud Platform Essentials, which proved key in managing all the data his app now collects. He also learned how to get the Google Maps Platform API working for his app, creating a streamlined user experience that helped workers and companies in Haiti locate one another with precision and ease.


This wide array of community technical resources, from trainings, to mentors, to helpful friends, allowed Yadley to grow his knowledge of several Google technologies, which in turn allowed him to grow his app for the Haitian community.

Today, Yadley is still an active member of the GDG community, growing his skills and those of the many friends around him. And at the same time, he is still growing Livrezonpam on the Google Play Store to help local businesses reach their customers and bring more jobs directly to the people of Haiti.


Ready to start building with a Google Developer Group near you? Find the closest community to you, here.

Introducing Spotlight: Women Techmakers series on career and professional development in tech

Posted by Caitlin Morrissey, PgM for Women Techmakers

On July 9, Women Techmakers launched the first episode of “Spotlight”, a new global program focused on amplifying the stories and pathways of women in technology across the industry. The idea for this series came from members of our community who were looking for more help navigating their careers, especially during this time. We quickly realized that with our extensive network of professional women in all different parts of the tech industry, we could play a part in scaling this kind of guidance and advice to women all over the world.

The first video features Priyanka Vergadia, a Developer Advocate for Google Cloud, who shared advice on taking risks, career paths in tech, and dealing with failure. “I wanted to be on the show because I am a firm believer that hearing someone's story can inspire you to create your own beautiful journey.”

Image of Priyanka Vergadia with Spotlight host Caitlin Morrissey

Since then, we’ve been releasing weekly episodes featuring women from all over the world, on themes ranging from finding the job that’s right for you and what’s needed to be successful in that role, to overcoming imposter syndrome and leading with confidence. Each woman brings a unique perspective to the table -- some have their master’s in engineering, while others went through a coding bootcamp and got into tech as a second career. This diversity in experience and backgrounds brings a richness and variety to the advice shared across the episodes, providing valuable guidance a viewer can take away, regardless of where they are in their careers.

Image of Spotlight guests: Pujaa Rajan, Jessica Early-Cha and Jacquelle Horton

The response to the series has been overwhelmingly positive so far - “I am honestly blown away by the reception of the video,” said Priyanka, who received many heartfelt messages after her episode was published. And as Spotlight continues, we’re excited to tell more stories of more women in tech and build a library of career advice that anyone can use to help navigate the tech ecosystem.

You can check out all episodes of Spotlight here, and make sure to subscribe to the Women Techmakers channel so you don’t miss out on future interviews.

Building G Suite Add-ons with your favorite tech stack

Posted by Jon Harmer, Product Manager and Steven Bazyl, Developer Advocate for G Suite

Let’s talk about the basics of G Suite Add-ons. G Suite Add-ons simplify how users get things done in G Suite by bringing in functionality from other applications where you need them. They provide a persistent sidebar for quick access, and they are context-aware -- meaning they can react to what you’re doing in context. For example, a CRM add-on can automatically surface details about a sales opportunity in response to an email based on the recipients or even the contents of the message itself.

Until recently, building G Suite Add-ons meant using Apps Script. But choice is always a good thing, and in some cases you may want to use another language. So let’s talk about how to build Add-ons using additional runtimes.

First, additional runtimes don’t add any new capabilities to what you can build. What they do give you is more choice and flexibility in how you build Add-ons. We’ve heard feedback from developers that they would like the option to use the tools and ecosystems they’ve already learned and invested in. And while there have always been ways to bridge Apps Script and other backends that expose APIs over HTTP/S, it isn’t the cleanest of workarounds.

So let’s look at a side-by-side comparison of what it looks like to build an Add-on with alternate runtimes:

function homePage(event) {
  let card = CardService.newCardBuilder()
      .addSection(CardService.newCardSection()
          .addWidget(CardService.newTextParagraph()
              .setText("Hello world"))
      ).build();
  return [card];
}

Here’s the hello world equivalent of an Add-on in Apps Script. Since Apps Script is more akin to a serverless framework like Google Cloud Functions, the code is straightforward -- a function that takes an event and returns the UI to render.

// Hello world Node.js
const express = require('express');
const app = express();
app.use(express.json());

app.post('/home', (req, res) => {
  let card = {
    sections: [{
      widgets: [{
        textParagraph: {
          text: 'Hello world'
        }
      }]
    }]
  };
  res.json({
    action: {
      navigations: [{
        pushCard: card
      }]
    }
  });
});

// Start the HTTP server
app.listen(process.env.PORT || 8080);

This is the equivalent in Node.js using Express, a popular web server framework. It shows a little more of the underlying mechanics -- working with the HTTP request/response directly, starting the server, and so on.

The biggest difference is the card markup -- instead of using CardService, which under the covers builds a protobuf, we're using the JSON representation of the same thing.

function getCurrentMessage(event) {
  var accessToken = event.messageMetadata.accessToken;
  var messageId = event.messageMetadata.messageId;
  GmailApp.setCurrentMessageAccessToken(accessToken);
  return GmailApp.getMessageById(messageId);
}

Another area where things differ is accessing Google APIs. In Apps Script, the clients are available in the global context -- the APIs mostly 'just work'. Moving to Node requires a little more effort, but not much.

Apps Script is super easy here. In fact, normally we wouldn't bother with setting the token when using more permissive scopes as it's done for us by Apps Script. We're doing it here to take advantage of the per-message scope that the add-on framework provides.

const { google } = require('googleapis');
const { OAuth2Client } = require('google-auth-library');
const gmail = google.gmail({ version: 'v1' });

async function fetchMessage(event) {
  const accessToken = event.gmail.accessToken;
  const auth = new OAuth2Client();
  auth.setCredentials({ access_token: accessToken });

  const messageId = event.gmail.messageId;
  const res = await gmail.users.messages.get({
    id: messageId,
    userId: 'me',
    headers: { 'X-Goog-Gmail-Access-Token': accessToken },
    auth
  });
  return res.data;
}

The Node.js version is very similar -- a little extra code to import the libraries, but otherwise the same: extract the message ID and token from the request, set the credentials, then call the API to get the message contents.

Your Add-on, Your way

One of the biggest wins for alternate runtimes is the testability that comes with using your favorite IDE, language, and framework -- all of which help make developing Add-ons more approachable.

Both Apps Script and alternate runtimes for G Suite Add-ons have important places in building Add-ons. If you’re getting into building Add-ons or if you want to prototype more complex ones, Apps Script is a good choice. If you write and maintain systems as your full-time job, though, alternate runtimes allow you to use those tools to build your Add-on, letting you leverage work, code, and processes that you’re already using. With alternate runtimes for G Suite Add-ons, we want to make it possible for you to extend G Suite in a way that fits your needs, using whatever tools you're most comfortable with.

And don't just take our word for it -- hear from one of our early access partners. Shailesh Matariya, CTO at Gfacility, has this to say about alternate runtimes: "We're really happy to use alternate runtimes in G Suite Add-ons. The results have been great and it's much easier to maintain the code. Historically, it would take 4-5 seconds to load data in our Add-on, whereas with alternate runtimes it's closer to 1 second, and that time and efficiency really adds up. Not to mention performance, we're seeing about a 50% performance increase and thanks to this our users are able to manage their workflows with just a few clicks, without having to jump to another system and deal with the hassle of constant updates."

Next Steps

Read the developer documentation for Alternate Runtimes and sign up for the early access program.

ChromeOS.dev — A blueprint to build world-class apps and games for Chrome OS

Posted by Iein Valdez, Head of Chrome OS Developer Relations

This article originally appeared on ChromeOS.dev.

While people are spending more time at home than on the go, they’re relying increasingly on personal desktops and laptops to make everyday life easier. Whether they’re video-chatting with friends and family, discovering entertaining apps and games, multitasking at work, or pursuing a passion project, bigger screens and better performance have made all the difference.

This trend was clear from March through June 2020: Chromebook unit sales grew 127% year over year (YOY) while the rest of the U.S. notebook category increased by 40% YOY.1 Laptops have become crucial to people at home who want to use their favorite apps and games, like Star Trek™ Fleet Command and Reigns: Game of Thrones to enjoy action-packed adventure, Calm to manage stress, or Disney+ to keep the whole family entertained.

Device Sales YOY

To deliver app experiences that truly improve people’s lives, developers must be equipped with the right tools, resources, and best practices. That’s why we’re excited to introduce ChromeOS.dev — a dedicated resource for technical developers, designers, product managers, and business leaders.

ChromeOS.dev, available in English and Spanish (with other languages coming soon), features the latest news, product announcements, technical documentation, and code samples from popular apps. Whether you’re a web, Android, or Linux developer who’s just getting started or a certified expert, you’ll find all the information you need on ChromeOS.dev.

Hear from our experts at Google and Chrome OS, as well as a variety of developers, as they share practical tips, benefits, and the challenges of creating app experiences for today’s users. Plus, you can review the updated Chrome OS Layout and UX App Quality guidelines with helpful information on UI components, navigation, fonts, layouts, and everything that goes into creating world-class apps and games for Chrome OS.

Even better, as a fully open-source online destination, ChromeOS.dev is designed according to the principles and methods for creating highly capable and reliable Progressive Web Apps (PWAs), ensuring developers always have quick, easy access to the information they need -- even when they’re offline.

Check out a few of the newest updates and improvements below, and be sure to install the ChromeOS.dev PWA on your device to stay on top of the latest information.

New features for Chrome OS developers

Whether it’s developing Android, Linux, or web apps, every update on ChromeOS.dev is about making sure all developers can build better app experiences in a streamlined, easy-to-navigate environment.

Customizable Linux Terminal

The Linux (Beta) on Chrome OS Terminal now comes equipped with personalized features right out of the box, including:

  • Integrated tabs and shortcuts
    Multitask with ease by using windows and tabs to manage different tasks and switch between multiple projects. You can also use familiar shortcuts such as Ctrl + T, Ctrl + W, and Ctrl + Tab to manage your tabs, or use the settings page to control whether these keys should be used in your Terminal for apps like vim or emacs.
  • Themes
    Customize your Terminal by selecting a theme to switch up the background, frame, font, and cursor color.
  • Redesigned Terminal settings
    The settings tab has been reorganized to make it easier to customize all your Terminal options.

Developers can now start using these and other customizable features in the Terminal app.

Android Emulator support

Supported Chromebooks can now run a full version of the Android Emulator, which allows developers to test apps on any Android version and device without needing the actual hardware. Android app developers can simulate map locations and other sensor data to test how an app performs with various motions, orientations, and environmental conditions. With the Android Emulator support in Chrome OS, developers can optimize for different Android versions and devices — including tablets and foldable smartphones — right from their Chromebook.

Deploy apps directly to Chrome OS

Building and testing Android apps on a single machine is simpler than ever. Now, developers who are running Chrome OS M81 and higher can deploy and test apps directly on their Chromebooks — no need to use developer mode or to connect different devices physically via USB. Combined with Android Emulator support, Chrome OS is equipped to support full Android development.

Improved Project Wizard in Android Studio

An updated Primary/Detail Activity Template in Android Studio offers complete support to build experiences for larger screens, including Chromebooks, tablets, and foldables. This updated option provides multiple layouts for both phones and larger-screen devices as well as better keyboard/mouse scaffolding. This feature will be available in Android Studio 4.2 Canary 8.

Updated support from Android lint checks

We’ve improved the default checks in Android’s lint tool to help developers identify and correct common coding issues to improve their apps on larger screens, such as non-resizable and portrait-locked activities. This feature is currently available for testing in the Canary channel.

Unlock your app’s full potential with Chrome OS

From day one, our goal has been to help developers at every skill level create simple, powerful, and secure app experiences for all platforms. As our new reality creates a greater need for helpful and engaging apps on large-screen devices, we’re working hard to streamline the process by making Chrome OS more versatile, customizable, and intuitive.

Visit ChromeOS.dev and install it on your Chromebook to stay on top of the latest resources, product updates, thought-provoking insights, and inspiring success stories from Chrome OS developers worldwide.






Sources:
1 The NPD Group, Inc., U.S. Retail Tracking Service, Notebook Computers, based on unit sales, April–June 2020 and March–June 2020​.

Introducing Cast Connect: a better way to integrate Google Cast directly into your Android TV apps

Posted by Meher Vurimi, Product Manager

For more than seven years, Google Cast has made it easy for users to enjoy your content on the big screen with Chromecast or a Chromecast built-in TV. We’re always looking to improve the casting experience, which is why we’re excited to introduce Cast Connect. This new feature allows users to cast directly to your Android TV app while still allowing control from your sender app.

Why is this helpful?

With Google Cast, your app is the remote control, helping users find, play, pause, seek, stop, and otherwise control what they’re watching. It enables you to extend video or audio from your Android, iOS, or Chrome app to a TV or sound system. With Android TV, partners are able to build apps that let users experience your app's immersive content on their TV screen and control it with a remote.

With Cast Connect, we are combining the best of both worlds: first, by prioritizing playback via the Android TV app to deliver a richer and more immersive experience; and second, by allowing the user to still control their experience from your Android, iOS, or Chrome app -- and now also directly with their Android TV’s remote control. Cast Connect helps users easily engage with other content directly on the TV instead of only browsing for additional content on their mobile device.

Cast Connect User Journey on Stan
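To give a flavor of what the integration involves, here is a hedged Kotlin sketch of the two sides: the Android TV app initializes a Cast receiver context, and the sender app’s options provider declares Android TV receiver compatibility. The receiver application ID below is a hypothetical placeholder, and the Google Cast developer documentation is the source of truth for the full setup (manifest entries, media session handling, and so on).

import android.app.Application
import android.content.Context
import com.google.android.gms.cast.LaunchOptions
import com.google.android.gms.cast.framework.CastOptions
import com.google.android.gms.cast.framework.OptionsProvider
import com.google.android.gms.cast.framework.SessionProvider
import com.google.android.gms.cast.tv.CastReceiverContext

// Android TV app: initialize the receiver context so sender apps
// can launch and control this app.
class ReceiverApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        CastReceiverContext.initInstance(this)
    }
}

// Sender app: declare that this sender can launch the Android TV receiver.
class CastOptionsProvider : OptionsProvider {
    override fun getCastOptions(context: Context): CastOptions =
        CastOptions.Builder()
            .setReceiverApplicationId("YOUR_RECEIVER_APP_ID") // hypothetical placeholder
            .setLaunchOptions(
                LaunchOptions.Builder()
                    .setAndroidReceiverCompatible(true)
                    .build()
            )
            .build()

    override fun getAdditionalSessionProviders(context: Context): List<SessionProvider>? = null
}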

Availability

We’re working closely with a number of partners on bringing Cast Connect to their apps, and most recently, we’re excited to announce that CBS and our Australian SVOD partner, Stan, have launched Cast Connect. Starting today, the Cast Connect library is available on Android, iOS, and Chrome. To get started with adding Cast Connect to your existing framework, head over to the Google Cast Developers site. Along the way, the Cast SDK team and the developer community are available to help you and answer questions on Stack Overflow by using the google-cast-connect tag.

Happy Casting!

Digital Ink Recognition in ML Kit

Posted by Mircea Trăichioiu, Software Engineer, Handwriting Recognition

A month ago, we announced changes to ML Kit to make mobile development with machine learning even easier. Today we're announcing the addition of the Digital Ink Recognition API on both Android and iOS, allowing developers to create apps where stylus and touch act as first-class inputs.

Digital ink recognition: the latest addition to ML Kit’s APIs

Digital Ink Recognition is different from the existing Vision and Natural Language APIs in ML Kit, as it takes neither text nor images as input. Instead, it looks at the user's strokes on the screen and recognizes what they are writing or drawing. This is the same technology that powers handwriting recognition in Gboard - Google’s own keyboard app, which we described in detail in a 2019 blog post. It's also the same underlying technology used in the Quick, Draw! and AutoDraw experiments.

Handwriting input in Gboard

Turning doodles into art with AutoDraw

With the new Digital Ink Recognition API, developers can now use this technology in their apps as well, for everything from letting users input text and figures with a finger or stylus to transcribing handwritten notes to make them searchable -- all in near real time and entirely on-device.

Supports many languages and character sets

Digital Ink Recognition supports 300+ languages and 25+ writing systems, including all major Latin languages, as well as Chinese, Japanese, Korean, Arabic, Cyrillic, and more. Classifiers parse written text into a string of characters.

Recognizes shapes

Other classifiers can describe shapes, such as drawings and emojis, by the class to which they belong (circle, square, happy face, etc.). We currently support an AutoDraw sketch recognizer, an emoji recognizer, and a basic shape recognizer.

Works offline

The Digital Ink Recognition API runs on-device and does not require a network connection. However, you must download one or more models before you can use a recognizer. Models are downloaded on demand and are around 20MB in size. Refer to the model download documentation for more information.
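Putting those pieces together, a minimal Kotlin sketch of the flow might look like the following: pick a model by language tag, download it on demand, then recognize the user's strokes. It assumes the ML Kit digital ink APIs as documented at launch; see the docs for the exact names.

import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.common.model.RemoteModelManager
import com.google.mlkit.vision.digitalink.DigitalInkRecognition
import com.google.mlkit.vision.digitalink.DigitalInkRecognitionModel
import com.google.mlkit.vision.digitalink.DigitalInkRecognitionModelIdentifier
import com.google.mlkit.vision.digitalink.DigitalInkRecognizerOptions
import com.google.mlkit.vision.digitalink.Ink

fun downloadAndRecognize(ink: Ink) {
    // Pick the model for a language tag, e.g. US English.
    val identifier = DigitalInkRecognitionModelIdentifier.fromLanguageTag("en-US") ?: return
    val model = DigitalInkRecognitionModel.builder(identifier).build()

    // Download the ~20MB model on demand, then recognize the strokes.
    RemoteModelManager.getInstance()
        .download(model, DownloadConditions.Builder().build())
        .addOnSuccessListener {
            val recognizer = DigitalInkRecognition.getClient(
                DigitalInkRecognizerOptions.builder(model).build()
            )
            recognizer.recognize(ink)
                .addOnSuccessListener { result ->
                    // Candidates are ordered best-first.
                    println(result.candidates.firstOrNull()?.text)
                }
        }
}

The Ink itself is assembled from the user's touch events, typically one Ink.Stroke per stroke, with a point recorded for each motion event.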

Runs fast

The time to perform a recognition call depends on the exact device and the size of the input stroke sequence. On a typical mobile device recognizing a line of text takes about 100 ms.

How to get started

If you would like to start using Digital Ink Recognition in your mobile app, head over to the documentation or check out the sample apps for Android and iOS to see the API in action. For questions or feedback, please reach out to us through one of our community channels.

Automatic Deployment of Hugo Sites on Firebase Hosting and Drafts on Cloud Run

Posted by James Ward, Developer Advocate

Recently I completed the migration of my blog from Wordpress to Hugo, and I wanted to take advantage of it now being a static site by hosting it on a Content Delivery Network (CDN). With Hugo, the source content is plain files instead of rows in a database. In the case of my blog, those files are in git on GitHub. But when the source files change, the site needs to be regenerated and redeployed to the CDN. Also, sometimes it is nice to have drafts available for review. I set up a continuous delivery pipeline which deploys changes to my prod site on Firebase Hosting and drafts on Cloud Run, using Cloud Build. Read on for instructions on how to set all this up.

Step 1a) Setup A New Hugo Project

If you do not have an existing Hugo project, you can create a GitHub copy (i.e. fork) of my Hugo starter repo.

Step 1b) Setup Existing Hugo Project

If you have an existing Hugo project you'll need to add some files to it:

.firebaserc

{
  "projects": {
    "production": "hello-hugo"
  }
}

cloudbuild-draft.yaml

steps:
  - name: 'gcr.io/cloud-builders/git'
    entrypoint: '/bin/sh'
    args:
      - '-c'
      - |
        # Get the theme git submodule
        THEME_URL=$(git config -f .gitmodules --get-regexp '^submodule\..*\.url$' | awk '{ print $2 }')
        THEME_DIR=$(git config -f .gitmodules --get-regexp '^submodule\..*\.path$' | awk '{ print $2 }')
        rm -rf themes
        git clone $$THEME_URL $$THEME_DIR

  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: '/bin/sh'
    args:
      - '-c'
      - |
        docker build -t gcr.io/$PROJECT_ID/$REPO_NAME-$BRANCH_NAME:$COMMIT_SHA -f - . << EOF
        FROM klakegg/hugo:latest
        WORKDIR /workspace
        COPY . /workspace
        ENTRYPOINT hugo -D -p \$$PORT --bind \$$HUGO_BIND --renderToDisk --disableLiveReload --watch=false serve
        EOF
        docker push gcr.io/$PROJECT_ID/$REPO_NAME-$BRANCH_NAME:$COMMIT_SHA

  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - run
      - deploy
      - --image=gcr.io/$PROJECT_ID/$REPO_NAME-$BRANCH_NAME:$COMMIT_SHA
      - --platform=managed
      - --project=$PROJECT_ID
      - --region=us-central1
      - --memory=512Mi
      - --allow-unauthenticated
      - $REPO_NAME-$BRANCH_NAME

cloudbuild.yaml

steps:
  - name: 'gcr.io/cloud-builders/git'
    entrypoint: '/bin/sh'
    args:
      - '-c'
      - |
        # Get the theme git submodule
        THEME_URL=$(git config -f .gitmodules --get-regexp '^submodule\..*\.url$' | awk '{ print $2 }')
        THEME_DIR=$(git config -f .gitmodules --get-regexp '^submodule\..*\.path$' | awk '{ print $2 }')
        rm -rf themes
        git clone $$THEME_URL $$THEME_DIR

  - name: 'gcr.io/cloud-builders/curl'
    entrypoint: '/bin/sh'
    args:
      - '-c'
      - |
        curl -sL https://github.com/gohugoio/hugo/releases/download/v0.69.2/hugo_0.69.2_Linux-64bit.tar.gz | tar -zxv
        ./hugo

  - name: 'gcr.io/cloud-builders/wget'
    entrypoint: '/bin/sh'
    args:
      - '-c'
      - |
        # Get firebase CLI
        wget -O firebase https://firebase.tools/bin/linux/latest
        chmod +x firebase
        # Deploy site
        ./firebase deploy --project=$PROJECT_ID --only=hosting

firebase.json

{
  "hosting": {
    "public": "public"
  }
}


Step 2) Setup Cloud Build Triggers

In the Google Cloud Build console, connect to your newly forked repo and select it. Create the default push trigger, then edit it and set it to fire only on changes to the ^master$ branch.

Next, create a second trigger. Give it a name like drafts-trigger, specify the branch selector as .* (i.e. any branch), and set the build configuration file type to "Cloud Build configuration file" with a value of cloudbuild-draft.yaml.

Then set up permissions for the Cloud Build process to manage Cloud Run and Firebase Hosting: visit the IAM management page, locate the member with the name ending in @cloudbuild.gserviceaccount.com, select the "pencil" / edit button, and add a role for "Cloud Run Admin" and another for "Firebase Hosting Admin".

Your default "prod" trigger isn't ready to test yet, but you can test the drafts on Cloud Run by going back to the Cloud Build Triggers page and clicking the "Run Trigger" button on the "drafts-trigger" line. Check the build logs by finding the build in the Cloud Build History. Once the build completes, visit the Cloud Run console to find your newly created service, which hosts the drafts version of your new blog. Note that the service name includes the branch so that you can see drafts from different branches.

Step 3) Setup Firebase Hosting

To set up your production / CDN'd site, log in to the Firebase console and select your project:

Now you'll need your project id, which can be found in the URL on the Firebase Project Overview page. The URL for my project is:

console.firebase.google.com/project/jw-demo/overview

Which means my project id is: jw-demo

Now copy your project id, go into your GitHub fork, select the .firebaserc file, and click the "pencil" / edit button:

Replace the hello-hugo string with your project id and commit the changes. This commit will trigger two new builds, one for the production site and one for the drafts site on Cloud Run. You can check the status of those builds on the Cloud Build History page. Once the default trigger (the one for Firebase Hosting) finishes, check out your Hugo site running on Firebase Hosting by navigating to https://YOUR_PROJECT_ID.web.app/ (replacing YOUR_PROJECT_ID with the project id you used above).

Your prod and drafts sites are now automatically deploying on new commits!

Step 4) (Optional) Change Hugo Theme

There are many themes for Hugo and they are easy to change. Typically themes are pulled into Hugo sites using git submodules. To change the theme, edit your .gitmodules file and set the path and url. As an example, here is the content when using the mainroad theme:

[submodule "themes/mainroad"]
    path = themes/mainroad
    url = https://github.com/vimux/mainroad.git

You will also need to change the theme value in your config.toml file to match the directory name in the themes directory. For example:

theme = "mainroad"

Note: At the time of writing this, Cloud Build does not clone git submodules so the cloudbuild.yaml does the cloning instead.

Step 5) (Optional) Setup Local Editing

To setup local editing you will first need to clone your fork. You can do this with the GitHub desktop app. Or from the command line:

git clone --recurse-submodules https://github.com/USER/REPO.git

Once you have the files locally, install Hugo, and from inside the repo's directory, run:

hugo -D serve

This will serve the site, including drafts. You can check it out at: localhost:1313

Committing non-draft changes to master and pushing those changes to GitHub will kick off the build which will deploy them to your prod site. Committing drafts to any branch will kick off the build which will deploy them to a Cloud Run site.

Hopefully that all helps you with hosting your Hugo sites! Let me know if you run into any problems.