New ML Kit features easily bring Machine Learning to your apps

Posted by Brahim Elbouchikhi, Director of Product Management and Matej Pfajfar, Engineering Director

We launched ML Kit at I/O last year with the mission to simplify Machine Learning for everyone. We couldn’t be happier about the experiences that ML Kit has enabled thousands of developers to create. And more importantly, user engagement with features powered by ML Kit is growing by more than 60% per month. Below is a small sample of apps we have been working with.

But there is a lot more. At I/O this year, we are excited to introduce four new features.

The Object Detection and Tracking API lets you identify the most prominent object in an image and then track it in real time. You can pair this API with a cloud solution (e.g. Google Cloud’s Product Search API) to create a real-time visual search experience.

When you pass an image or video stream to the API, it will return the coordinates of the primary object as well as a coarse classification. The API then provides a handle for tracking this object's coordinates over time.
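Here is a rough sketch of what that flow can look like with the ML Kit client library on Android (Kotlin). The stream-mode configuration and the `frameBitmap` input are illustrative stand-ins for your own camera pipeline, and exact class names may differ slightly depending on your SDK version:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.objects.FirebaseVisionObjectDetectorOptions

fun detectAndTrack(frameBitmap: Bitmap) {
    // STREAM_MODE keeps tracking IDs stable across frames of a live camera feed;
    // enableClassification() adds a coarse category for each detected object.
    val options = FirebaseVisionObjectDetectorOptions.Builder()
        .setDetectorMode(FirebaseVisionObjectDetectorOptions.STREAM_MODE)
        .enableClassification()
        .build()
    val detector = FirebaseVision.getInstance().getOnDeviceObjectDetector(options)

    detector.processImage(FirebaseVisionImage.fromBitmap(frameBitmap))
        .addOnSuccessListener { detectedObjects ->
            for (obj in detectedObjects) {
                val box = obj.boundingBox                   // coordinates of the object
                val trackingId = obj.trackingId             // handle for tracking over time
                val category = obj.classificationCategory   // coarse classification
                // Draw the box, or look the object up in a product catalog here.
            }
        }
        .addOnFailureListener {
            // Handle detection errors.
        }
}
```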

A number of partners have built experiences that are powered by this API already. For example, Adidas built a visual search experience right into their app.

The On-device Translation API allows you to use the same offline models that support Google Translate to provide fast, dynamic translation of text in your app into 58 languages. This API operates entirely on-device, so the content of the translated text never leaves the device.

You can use this API to enable users to communicate with others who don't understand their language, or to translate user-generated content.
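As a minimal sketch, here is what the translation flow can look like on Android (Kotlin); the English-to-Spanish pair and the sample string are just examples:

```kotlin
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslateLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslatorOptions

fun translateExample() {
    // Configure an English-to-Spanish translator; both models run on-device.
    val options = FirebaseTranslatorOptions.Builder()
        .setSourceLanguage(FirebaseTranslateLanguage.EN)
        .setTargetLanguage(FirebaseTranslateLanguage.ES)
        .build()
    val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

    // The offline model for this language pair is downloaded once and then reused.
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate("Where is the train station?")
                .addOnSuccessListener { translatedText ->
                    // translatedText now holds the Spanish translation.
                }
        }
        .addOnFailureListener {
            // Handle model download or translation errors.
        }
}
```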

To the right, we demonstrate the use of ML Kit’s text recognition, language detection, and translation APIs in one experience.
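Here is a rough sketch of how those three APIs can be chained together on Android (Kotlin): recognize the text in a photo, identify its language, and then hand it to the on-device translator from the sketch above:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun recognizeIdentifyAndTranslate(photo: Bitmap) {
    // 1. Recognize the text in the image on-device.
    FirebaseVision.getInstance().onDeviceTextRecognizer
        .processImage(FirebaseVisionImage.fromBitmap(photo))
        .addOnSuccessListener { visionText ->
            val recognizedText = visionText.text

            // 2. Identify which language the recognized text is written in.
            FirebaseNaturalLanguage.getInstance().languageIdentification
                .identifyLanguage(recognizedText)
                .addOnSuccessListener { languageCode ->
                    if (languageCode != "und") {  // "und" means the language is undetermined
                        // 3. Feed recognizedText and languageCode into the
                        //    on-device translator shown in the previous sketch.
                    }
                }
        }
}
```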

We also collaborated with the Material Design team to produce a set of design patterns for integrating ML into your apps. We are open-sourcing implementations of these patterns and hope that they will further accelerate your adoption of ML Kit and AI more broadly.

Our design patterns for machine learning powered features will be available on the Material.io site.

With AutoML Vision Edge, you can easily create custom image classification models tailored to your needs. For example, you may want your app to be able to identify different types of food, or distinguish between species of animals. Whatever your need, just upload your training data to the Firebase console and you can use Google’s AutoML technology to build a custom TensorFlow Lite model for you to run locally on your users' devices. And if you find that collecting training datasets is hard, you can use our open source app, which makes the process simpler and more collaborative.
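For a sense of what this looks like in an app, here is a rough sketch of running a published AutoML Vision Edge model with ML Kit's image labeling on Android (Kotlin). The model name "food_classifier" is hypothetical, and the exact class names have shifted between SDK releases, so treat this as illustrative:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.common.modeldownload.FirebaseModelDownloadConditions
import com.google.firebase.ml.common.modeldownload.FirebaseModelManager
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.automl.FirebaseAutoMLRemoteModel
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.label.FirebaseVisionOnDeviceAutoMLImageLabelerOptions

fun labelWithCustomModel(photo: Bitmap) {
    // Reference the custom model published from the Firebase console
    // ("food_classifier" is a hypothetical name).
    val remoteModel = FirebaseAutoMLRemoteModel.Builder("food_classifier").build()

    // Download the custom TensorFlow Lite model so inference can run on-device.
    // In a real app you would wait for this task to complete before labeling.
    val conditions = FirebaseModelDownloadConditions.Builder().requireWifi().build()
    FirebaseModelManager.getInstance().download(remoteModel, conditions)

    val labelerOptions = FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder(remoteModel)
        .setConfidenceThreshold(0.5f)  // keep only reasonably confident labels
        .build()
    val labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(labelerOptions)

    labeler.processImage(FirebaseVisionImage.fromBitmap(photo))
        .addOnSuccessListener { labels ->
            for (label in labels) {
                // label.text is a class from your training data; label.confidence is its score.
            }
        }
}
```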

Wrapping up

We are excited by this first year and really hope that our progress will inspire you to get started with Machine Learning. Please head over to g.co/mlkit to learn more or visit Firebase to get started right away.