Android Developers Blog

11 Weeks of Android: Privacy and Security

Posted by:
Charmaine D’Silva, Product Lead, Android Privacy and Framework
Narayan Kamath, Engineering Lead, Android Privacy and Framework
Stephan Somogyi, Product Lead, Android Security
Sudhi Herle, Engineering Lead, Android Security

This blog post is part of a weekly series for #11WeeksOfAndroid. Each week, we’re diving into a key area of Android so you don’t miss anything. This week, we spotlighted Privacy and Security; here’s a look at what you should know.

Privacy and security are core to how we design Android, and with every new release we increase our investment in this space. Android 11 continues to make important strides in these areas, and this week we’ll be sharing a series of updates and resources about Android privacy and security. But first, let’s take a quick look at some of the most important changes we’ve made in Android 11 to protect user privacy and make the platform more secure.

As shared in the “All things privacy in Android 11” video, we’re giving users even more control over sensitive permissions. Throughout the development of this release, we have engaged deeply and frequently with our developer community to design these features in a balanced way - amplifying user privacy while minimizing developer impact. Let’s go over some of these features:

One-time permission: In Android 10, we introduced a granular location permission that allows users to limit access to location to when an app is in use (aka foreground only). When presented with the new runtime permission options, users choose foreground-only location more than 50% of the time. This demonstrated to us that users really wanted finer controls for permissions. So in Android 11, we’ve introduced one-time permissions that let users give an app access to the device microphone, camera, or location just that one time. As an app developer, there are no changes you need to make for your app to work with one-time permissions, and the app can request permissions again the next time it is used. Learn more about building privacy-friendly apps with these new changes in this video.
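
Because the existing runtime permission flow already covers this, no new API is needed. As a rough sketch (inside an AndroidX ComponentActivity; startCamera() and showRationale() are hypothetical helpers), a standard request looks like this:

// Standard runtime permission request via the AndroidX Activity Result API.
// If the user picks "Only this time" on Android 11, the grant is temporary
// and the app can simply ask again on a later use - no special handling.
val requestCamera = registerForActivityResult(
    ActivityResultContracts.RequestPermission()) { granted ->
    if (granted) startCamera() else showRationale()
}

// Later, e.g. when the user taps the capture button:
requestCamera.launch(Manifest.permission.CAMERA)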

Background location: In Android 10 we added a background location usage reminder so users can see how apps are using this sensitive data on a regular basis. Users who interacted with the reminder either downgraded or denied the location permission over 75% of the time. In addition, we have done extensive research and believe that there are very few legitimate use cases for apps to require access to location in the background.

In Android 11, background location will no longer be a permission that a user can grant via a runtime prompt; it will require a more deliberate action. If your app needs background location, the system will ensure that the app first asks for foreground location. The app can then broaden its access to background location through a separate permission request, which will cause the system to take the user to Settings in order to complete the permission grant.
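
A minimal sketch of this two-step flow, assuming the AndroidX Activity Result API (the permission names are real; the callback bodies are illustrative):

// Step 2: requesting background access on Android 11 sends the user to Settings.
val requestBackground = registerForActivityResult(
    ActivityResultContracts.RequestPermission()) { granted -> /* update UI */ }

// Step 1: foreground (while-in-use) location must be requested first.
val requestForeground = registerForActivityResult(
    ActivityResultContracts.RequestPermission()) { granted ->
    if (granted) {
        // Explain why background access is needed, then escalate only if the user opts in.
        requestBackground.launch(Manifest.permission.ACCESS_BACKGROUND_LOCATION)
    }
}

requestForeground.launch(Manifest.permission.ACCESS_FINE_LOCATION)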

In February, we announced that Google Play developers will need to get approval to access background location in their app to prevent misuse. We're giving developers more time to make changes and won't be enforcing the policy for existing apps until 2021. Check out this helpful video to find possible background location usage in your code.

Permissions auto-reset: Most users tend to download and install over 60 apps on their device but interact with only a third of these apps on a regular basis. If users haven’t used an app that targets Android 11 for an extended period of time, the system will “auto-reset” all of the granted runtime permissions associated with the app and notify the user. The app can request the permissions again the next time the app is used. If you have an app that has a legitimate need to retain permissions, you can prompt users to turn this feature OFF for your app in Settings.
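
If your app has such a need, a sketch of the check and the Settings hand-off might look like this (both the PackageManager method and the Intent action are part of the Android 11 SDK, called from an Activity):

// Check whether this app is exempted from permission auto-reset; if not,
// send the user to the Settings screen where they can turn the feature off.
if (!packageManager.isAutoRevokeWhitelisted) {
    startActivity(Intent(
        Intent.ACTION_AUTO_REVOKE_PERMISSIONS,
        Uri.fromParts("package", packageName, null)))
}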

Data access auditing APIs: Android encourages developers to limit their access to sensitive data, even if they have been granted permission to do so. In Android 11, developers will have access to new APIs that will give them more transparency into their app’s usage of private and protected data. The APIs will enable apps to track when the system records the app’s access to private user data.
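
A minimal sketch of the auditing hook (the API is AppOpsManager.OnOpNotedCallback, added in Android 11; the logging is illustrative):

// Register a callback that fires whenever the system notes this app
// accessing protected data (e.g. location, camera, microphone).
val appOps = getSystemService(AppOpsManager::class.java)
appOps.setOnOpNotedCallback(mainExecutor, object : AppOpsManager.OnOpNotedCallback() {
    override fun onNoted(op: SyncNotedAppOp) = report(op.op)       // two-way binder call
    override fun onSelfNoted(op: SyncNotedAppOp) = report(op.op)   // app noted its own access
    override fun onAsyncNoted(op: AsyncNotedAppOp) = report(op.op) // callback-driven access
})

fun report(opCode: String) = Log.i("DataAudit", "Private data access: $opCode")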

Scoped Storage: In Android 10, we introduced scoped storage, which provides a filtered view into external storage, giving access to app-specific files and media collections. This change protects user privacy by limiting broad access to shared storage in many ways, including changing the storage permission to only give read access to photos, videos and music, and improving app storage attribution. Since Android 10, we’ve incorporated developer feedback and made many improvements to help developers adopt scoped storage, including: an updated permission UI to enhance the user experience, direct file path access to media to improve compatibility with existing libraries, updated APIs for modifying media, a Manage External Storage permission to enable select use cases that need broad file access, and protected external app directories. In Android 11, scoped storage will be mandatory for all apps that target API level 30. Learn more in this video and check out the developer documentation for further details.
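
In practice, this means shared media is read through MediaStore rather than raw file paths. A brief sketch (thumbnail handling and error cases omitted):

// Query the shared image collection via MediaStore under scoped storage.
val projection = arrayOf(MediaStore.Images.Media._ID, MediaStore.Images.Media.DISPLAY_NAME)
contentResolver.query(
    MediaStore.Images.Media.EXTERNAL_CONTENT_URI, projection, null, null, null
)?.use { cursor ->
    val idCol = cursor.getColumnIndexOrThrow(MediaStore.Images.Media._ID)
    while (cursor.moveToNext()) {
        val uri = ContentUris.withAppendedId(
            MediaStore.Images.Media.EXTERNAL_CONTENT_URI, cursor.getLong(idCol))
        // Read the content via contentResolver.openInputStream(uri), etc.
    }
}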

Google Play system updates: Google Play system updates were introduced with Android 10 as part of Project Mainline. Their main benefit is to increase the modularity and granularity of platform subsystems within Android so we can update core OS components without needing a full OTA update from your phone manufacturer. Earlier this year, thanks to Project Mainline, we were able to quickly fix a critical vulnerability in the media decoding subsystem. Android 11 adds new modules, and maintains the security properties of existing ones. For example, Conscrypt, which provides cryptographic primitives, maintained its FIPS validation in Android 11 as well.

BiometricPrompt API: Developers can now use the BiometricPrompt API to specify the biometric authenticator strength required by their app to unlock or access sensitive parts of the app. We are planning to add this to the Jetpack Biometric library to allow for backward compatibility and will share further updates on this work as it progresses.
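
With the Android 11 framework API, requesting a Class 3 (“strong”) biometric looks roughly like this, called from an Activity (the title, button text, and authCallback are illustrative):

// Require a strong biometric before unlocking a sensitive screen.
val prompt = BiometricPrompt.Builder(this)
    .setTitle("Unlock your notes")
    .setAllowedAuthenticators(BiometricManager.Authenticators.BIOMETRIC_STRONG)
    .setNegativeButton("Cancel", mainExecutor) { _, _ -> /* dismissed */ }
    .build()
prompt.authenticate(CancellationSignal(), mainExecutor, authCallback)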

Identity Credential API: This will unlock new use cases such as mobile driver’s licenses, national IDs, and digital IDs. It’s being built by our security team to ensure this information is stored safely, using security hardware to secure and control access to the data, in a way that enhances user privacy as compared to traditional physical documents. We’re working with various government agencies and industry partners to make sure that Android 11 is ready for such digital-first identity experiences.
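
On devices that support it, the entry point is android.security.identity.IdentityCredentialStore. A very rough sketch (the credential name and provisioning details are illustrative, and the store may be unavailable on unsupported devices):

// Obtain the hardware-backed identity credential store, if available.
val store = IdentityCredentialStore.getInstance(context)
store?.let {
    // "org.iso.18013.5.1.mDL" is the ISO document type for a mobile driving licence.
    val credential = it.createCredential("my-mDL", "org.iso.18013.5.1.mDL")
    // Provision access-controlled data entries via credential.personalize(...).
}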

Thank you for your flexibility and feedback as we continue to build an increasingly private and secure platform. You can learn about more features on the Android 11 Beta developer site. You can also learn about general best practices related to privacy and security.

Please follow Android Developers on Twitter and YouTube to catch helpful content and materials in this area all this week.

Resources

You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!

Full spectrum of on-device machine learning tools on Android

Posted by Hoi Lam, Android Machine Learning



This blog post is part of a weekly series for #11WeeksOfAndroid. Each week we’re diving into a key area of Android so you don’t miss anything. Throughout this week, we covered various aspects of Android on-device machine learning (ML). Whatever your stage of development, from starting out to maintaining an established app; whatever your role in design, product, or engineering; and whatever your skill level, from beginner to expert, we have a wide range of ML tools for you.

Design - ML as a differentiator

“Focus on the user and all else will follow” is a Google mantra that becomes even more relevant in our machine learning age. Our Design Advocate, Di Dang, highlighted the importance of finding the unique intersection of user problems and ML strengths. Too often, teams are so keen on the idea of machine learning that they lose sight of their user needs.



Di outlined how the People + AI Guidebook can help you make ML product decisions, and used the example of the Read Along app to illustrate topics like precision and recall, which are unique to ML design and development. Check out her interview with the Read Along team, together with your own team, for more inspiration.

New ML Kit fully focused on on-device

When you decide that on-device machine learning is the solution, the easiest way to implement it is through turnkey SDKs like ML Kit. Sophisticated Google-trained models and processing pipelines are offered through an easy-to-use Kotlin / Java interface. ML Kit is designed and built for on-device ML: it works offline, offers enhanced privacy, unlocks high performance for real-time use cases, and it is free. We recently made ML Kit a standalone SDK, and it no longer requires a Firebase account. Just one line in your build.gradle file and you can start bringing ML functionality into your app.
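
As a flavor of how little code a turnkey API needs, here is a sketch of on-device barcode scanning with the standalone SDK (check the ML Kit release notes for the current artifact version; the input bitmap and logging are illustrative):

// build.gradle: implementation 'com.google.mlkit:barcode-scanning:<latest-version>'
val image = InputImage.fromBitmap(bitmap, /* rotationDegrees= */ 0)
BarcodeScanning.getClient().process(image)
    .addOnSuccessListener { barcodes ->
        barcodes.forEach { Log.d("MLKit", "Found barcode: ${it.rawValue}") }
    }
    .addOnFailureListener { e -> Log.e("MLKit", "Scan failed", e) }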



The team has also added new functionality such as Jetpack Lifecycle support and the option to use the face contour models via Google Play Services, saving as much as 20MB in app size. Another much-anticipated addition is support for swapping Google models with your own for both Image Labeling and Object Detection and Tracking. This provides one of the easiest ways to add TensorFlow Lite models to your applications without interacting with ByteArray!

Customize with TensorFlow Lite and Android tools

If the base models provided by ML Kit don’t quite fit the bill, what should developers do? The first port of call should be TensorFlow Hub, where ready-to-use TensorFlow Lite models from both Google and the wider community can be downloaded. From classifiers for 100,000 US supermarket products to tomato plant disease detectors, the choice is yours.



In addition to Firebase AutoML Vision Edge, you can also build your own model using TensorFlow Lite Model Maker (image classification / text classification) with just a few lines of Python. Once you have a TensorFlow Lite model from either TensorFlow Hub or the Model Maker, you can easily integrate it with your Android app using ML Kit Image Labeling or Object Detection and Tracking. If you prefer an open source solution, Android Studio 4.1 beta introduces ML Model Binding, which wraps the TensorFlow Lite model in an easy-to-use Kotlin / Java class. Adding a custom model to your Android app has never been easier. Check out this blog for more details.
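
For example, swapping a downloaded or Model Maker-built model into ML Kit Image Labeling takes only a few lines (the asset file name, threshold, and input bitmap here are illustrative):

// Wrap a bundled TensorFlow Lite model and hand it to ML Kit's image labeler.
val localModel = LocalModel.Builder()
    .setAssetFilePath("model.tflite")
    .build()
val options = CustomImageLabelerOptions.Builder(localModel)
    .setConfidenceThreshold(0.7f)
    .build()
ImageLabeling.getClient(options)
    .process(InputImage.fromBitmap(bitmap, 0))
    .addOnSuccessListener { labels -> labels.forEach { Log.d("MLKit", it.text) } }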

Time for on-device ML is now

From the examples of the Android Developer Challenge winners, it is obvious that on-device machine learning has come of age and ML functionalities once reserved for the cloud or supercomputers are now available on your Android phone. Take a step forward with us by trying out our codelabs of the day:

Also check out the ML Week learning pathway and take the quiz to get your very own ML badge.

Android on-device machine learning is a rapidly evolving platform. If you have any enhancement requests or feedback on how it could be improved, please let us know together with your use case (TensorFlow Lite / ML Kit). Time for on-device ML is now.

Resources

You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!

Read Along: Grow child literacy with on-device ML design insights

Posted by Di Dang, Design Advocate

From our machine learning-themed week together, we’ve delved into an ML Kit x CameraX codelab and learned how to train your own custom models and integrate them into your Android app. In addition to the technical considerations that go into using ML, it’s important that we design our ML-based apps in a way that enables our users to feel in control of the ML technology, and not the other way around. To help product creators understand some best practices for ML product decisions, the PAIR team published the People + AI Guidebook at Google I/O last year. Let’s take a look at some ML design considerations you can apply in your Android apps by learning from the example of Read Along.


Google recently launched Read Along, an Android app that uses on-device ML and voice UI to help children learn to read anytime, anywhere, using just their voice. According to the UN Division of Sustainable Development Goals, more than 50% of children worldwide are not achieving minimum proficiency in reading. First launched in India as “Bolo”, the “Read Along” app is now available globally. We recently went behind the scenes with the Read Along team in this episode of Centered, to learn how they made an ML- and voice-based app to improve child literacy.

Why Machine Learning and Voice UI?

Since using ML can be time- and cost-intensive, we need to find the intersection of ML strengths and user needs. To learn to read, children need time on task and one-on-one attention, which is challenging in areas that lack access to teachers or educational materials. “In many parts of the world, there are only so many schools that can be built, only so many teachers can be trained. So first and foremost, it's a scale problem,” said Nitin Kashyap, Read Along’s product manager. This creates a unique opportunity for the use of ML: to provide real-time reading feedback at scale, Read Along utilizes the Google Assistant’s text-to-speech and speech recognition capabilities. The Read Along team also built these capabilities to run on-device to preserve children’s privacy: the voice data is analyzed without being sent to any Google servers, enabling children to use Read Along offline as well.

False positives vs. false negatives

Since ML-based systems are inherently probabilistic, they can generate “wrong” predictions in the form of false positives and false negatives. As we create ML-based applications, we need to decide which behavior to optimize for. Within the Read Along experience, a false positive denotes that the child has misread a word, but the system fails to recognize this and does not provide corrective feedback. A false negative, on the other hand, is when a child reads a word correctly, but Read Along predicts the word was read incorrectly and prompts the child to try again. “We spent a little time to understand what really happens when the child gets a false positive and false negative, and what impact does it have on the psychology, and also on the reading experience,” said Eshita Priyadarshini, Read Along’s UX Research Lead. “When the child is reading, we don't really tell him, ‘Oh, you got that word wrong. Why don't you read it again?’” By unpacking the impact of false positives and false negatives on the user, the Read Along team decided to optimize for recall, thereby increasing the number of false positives, which results in a user experience that feels more encouraging for children.

To learn more about how the Read Along team made ML product decisions, check out the full Centered episode. For more guidance on how your cross-functional team (spanning UX, PM, and engineering) can come together to design ML-based applications, check out the People + AI Guidebook.

Android 11 Developer Preview on Android TV

Posted by Xiaodao Wu, Developer Advocate

With the rise in quality content that’s keeping us glued to the big screen, it’s no surprise watch time on the TV continues to grow. As users spend more time in their living rooms, they are also looking to get more from their smart TVs and streaming devices. To help developers meet these needs, we are always working to support the latest Android features on Android TV.

Today, we are releasing an Android 11 Developer Preview for Android TV with many privacy, performance, accessibility and connectivity features. More information can be found on the Android 11 Developer Preview web page.

The Android 11 Developer Preview on TV is for developers, not for consumer use. The system image is for ADT-3 developer devices only and is available by manual download and flash. All user data on the ADT-3 device will be wiped during flashing, and once the device has been flashed to Android 11, you will not be able to go back to the previous Android 10 build.

  1. Download the system image (link) and unzip the file.
  2. Plug in the ADT-3 developer kit for Android TV and enable Developer options.
  3. Run flash-all.sh in the unzipped folder to perform manual system image installation to the ADT-3 device.

The flash-all script uses fastboot and adb tools to upgrade the system. The latest version of fastboot is recommended; developers can find it in the Android SDK Platform-Tools package.

We encourage you to test your Android TV app on the Android 11 Developer Preview. If you have any feedback, please reach out to us. We’d love to hear from you.

Tune in to the Android Beyond Phones week of the #11WeeksOfAndroid on August 10th for even more developer resources from Android TV.

New tools for finding, training, and using custom machine learning models on Android

Posted by Hoi Lam, Android Machine Learning

Yesterday, we talked about turnkey machine learning (ML) solutions with ML Kit. But what if that doesn’t completely address your needs and you need to tweak it a little? Today, we will discuss how to find alternative models, and how to train and use custom ML models in your Android app.

Find alternative ML models

Crop disease models from the wider research community available on tfhub.dev

If the turnkey ML solutions don't suit your needs, TensorFlow Hub should be your first port of call. It is a repository of ML models from Google and the wider research community. The models on the site are ready for use in the cloud, in a web-browser or in an app on-device. For Android developers, the most exciting models are the TensorFlow Lite (TFLite) models that are optimized for mobile.

In addition to key vision models such as MobileNet and EfficientNet, the repository also boasts models powered by the latest research.

Many of these solutions were previously only available in the cloud, as the models are too large and too power-intensive to run on-device. Today, you can run them on Android, on-device, offline and live.

Train your own custom model

Besides the large repository of base models, developers can also train their own models. Developer-friendly tools are available for many common use cases. In addition to Firebase’s AutoML Vision Edge, the TensorFlow team launched TensorFlow Lite Model Maker earlier this year to give developers more choice over the base model and to support more use cases. TensorFlow Lite Model Maker currently supports two common ML tasks: image classification and text classification.

The TensorFlow Lite Model Maker can run on your own developer machine or in Google Colab online machine learning notebooks. Going forward, the team plans to improve the existing offerings and to add new use cases.

Using a custom model in your Android app

New TFLite Model import screen in Android Studio 4.1 beta

Once you have selected a model or trained your own, there are new easy-to-use tools to help you integrate it into your Android app without having to convert everything into ByteArrays. The first new tool is ML Model Binding with Android Studio 4.1. This lets developers import any TFLite model, read the input / output signature of the model, and use it with just a few lines of code that call the open source TensorFlow Lite Android Support Library.
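
As a sketch of what this feels like in practice: after importing a model file (say, mobilenet_v1.tflite), Android Studio generates a typed wrapper class named after the file. The class and method names below reflect that generated pattern and are illustrative, not a fixed API:

// Use the wrapper class that ML Model Binding generates from the .tflite file.
val model = MobilenetV1.newInstance(context)
val outputs = model.process(TensorImage.fromBitmap(bitmap))
val categories = outputs.probabilityAsCategoryList  // label + confidence pairs
model.close()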

Another way to implement a TensorFlow Lite model is via ML Kit. Starting in June, ML Kit no longer requires a Firebase project for on-device functionality. In addition, the image classification and object detection and tracking (ODT) APIs support custom models. The latter ODT offering is especially useful in use cases where you need to separate out objects from a busy scene.

So how should you choose between these three solutions? If you are trying to detect a product on a busy supermarket shelf, ML Kit object detection and tracking can help your user select a specific product for processing. The API then performs image classification on just the part of the image that contains the product, which results in better detection performance. On the other hand, if the scene or the object you are trying to detect takes up most of the input image, for example, a landmark such as Big Ben, using ML Model binding or the ML Kit image classification API might be more appropriate.
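
A sketch of that first scenario, pairing ML Kit ODT with a custom classifier (reusing a LocalModel and InputImage like those in the labeling sketch earlier on this page):

// Detect objects, then classify each detected region with the custom model.
val options = CustomObjectDetectorOptions.Builder(localModel)
    .setDetectorMode(CustomObjectDetectorOptions.SINGLE_IMAGE_MODE)
    .enableClassification()
    .setMaxPerObjectLabelCount(3)
    .build()
ObjectDetection.getClient(options).process(image)
    .addOnSuccessListener { objects ->
        objects.forEach { obj -> obj.labels.forEach { Log.d("MLKit", it.text) } }
    }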

TensorFlow Hub bird detection model with ML Kit Object Detection & Tracking API

Two examples of how these tools can fit together

Here are some resources to help you get started:

Customizing your model is easier than ever

Finding, building and using custom models on Android has never been easier. As both Android and TensorFlow teams increase the coverage of machine learning use cases, please let us know how we can improve these tools for your use cases by filing an enhancement request with TensorFlow Lite or ML Kit.

Tomorrow, we will take a step back and focus on how to appropriately use and design for a machine-learning-first Android app. The content will be appropriate for the entire development team, so bring your product manager and designers along. See you next time.

On-device machine learning solutions with ML Kit, now even easier to use

Posted by Christiaan Prins, Product Manager, ML Kit and Shiyu Hu, Tech Lead Manager, ML Kit

Two years ago at I/O 2018 we introduced ML Kit, making it easier for mobile developers to integrate machine learning into your apps. Today, more than 25,000 applications on Android and iOS make use of ML Kit’s features. Now, we are introducing some changes that will make it even easier to use ML Kit. In addition, we have a new feature and a set of improvements we’d like to discuss.

A new ML Kit SDK, fully focused on on-device ML

ML Kit API Overview

ML Kit's APIs are built to help you tackle common challenges in the Vision and Natural Language domains. We make it easy to recognize text, scan barcodes, track and classify objects in real-time, do translation of text, and more.

The original version of ML Kit was tightly integrated with Firebase, and we heard from many of you that you wanted more flexibility when implementing it in your apps. As a result, we are now making all the on-device APIs available in a new standalone ML Kit SDK that no longer requires a Firebase project. You can still use both ML Kit and Firebase to get the best of both products if you choose to.

With this change, ML Kit is now fully focused on on-device machine learning, giving you access to the unique benefits that on-device versus cloud ML offers:

  • It’s fast, unlocking real-time use cases - since processing happens on the device, there is no network latency. This means we can run inference on a stream of images / video, or multiple times a second on text strings.
  • It works offline - you can rely on our APIs even when the network is spotty or your app’s end-user is in an area without connectivity.
  • Privacy is retained - since all processing is performed locally, there is no need to send sensitive user data over the network to a server.

Naturally, you still get access to Google’s on-device models and processing pipelines, all accessible through easy-to-use APIs, and offered at no cost.

All ML Kit resources can now be found on our new website, where we have made it a lot easier to access sample apps, API reference docs, and the community channels that are there to help you if you have questions.

Object detection & tracking gif Text recognition + Language ID + Translate gif

What does this mean if I already use ML Kit today?

If you are using ML Kit for Firebase’s on-device APIs in your app today, we recommend migrating to the new standalone ML Kit SDK to benefit from new features and updates. For more information and step-by-step instructions to update your app, please follow our Migration guide. The cloud-based APIs, model deployment and AutoML Vision Edge remain available through Firebase Machine Learning.

Shrink your app footprint with Google Play Services

Apart from making ML Kit easier to use, developers also asked if we could ship ML Kit through Google Play Services, resulting in a smaller app footprint and a model that can be reused between apps. Apart from Barcode scanning and Text recognition, we have now added Face detection / contour (model size: 20MB) to the list of APIs that support this functionality.

// Face detection / Face contour model
// Delivered via Google Play Services outside your app's APK…
implementation 'com.google.android.gms:play-services-mlkit-face-detection:16.0.0'

// …or bundled with your app's APK
implementation 'com.google.mlkit:face-detection:16.0.0'

Jetpack Lifecycle / CameraX support

Android Jetpack Lifecycle support has been added to all APIs. Developers can use addObserver to automatically manage teardown of ML Kit APIs as the app goes through screen rotation or closure by the user / system. This makes CameraX integration easier. With this release, we are also recommending that developers adopt CameraX in their apps due to the ease of integration and image quality improvements (compared to Camera1) on a wide range of devices.

// ML Kit now supports Lifecycle
val recognizer = TextRecognizer.newInstance()
lifecycle.addObserver(recognizer)

// ...

// Just like CameraX
val camera = cameraProvider.bindToLifecycle( /* lifecycleOwner= */this,
    cameraSelector, previewUseCase, analysisUseCase)

For an overview of all recent changes, check out the release notes for the new SDK.

Codelab of the day - ML Kit x CameraX

To help you get started with the new ML Kit and its support for CameraX, we have created this codelab to Recognize, Identify Language and Translate text. If you have any questions regarding this codelab, please raise them on StackOverflow and tag them with [google-mlkit]. Our team will monitor this.

Early access program

Through our early access program, developers have an opportunity to partner with the ML Kit team and get access to upcoming features. Two new APIs are now available as part of this program:

  • Entity Extraction - Detect entities in text & make them actionable. We have support for phone numbers, addresses, payment numbers, tracking numbers, date/time and more.
  • Pose Detection - Low-latency pose detection supporting 33 skeletal points, including hands and feet tracking.

If you are interested, head over to our early access page for details.

Tomorrow - Support for custom models

ML Kit's turn-key solutions are built to help you take common challenges. However, if you needed to have a more tailored solution, one that required custom models, you typically needed to build an implementation from scratch. To help, we are now providing the option to swap out the default Google models with a custom TensorFlow Lite model. We’re starting with the Image Labeling and Object Detection and Tracking APIs, that now support custom image classification models.

Tomorrow, we will dive a bit deeper into how to find or train a TensorFlow Lite model and use it either with ML Kit, or with Android Studio’s new ML binding functionality.

New features to acquire and retain subscribers

Posted by Angela Ying, Product Manager, Google Play

Subscriptions continue to be one of the fastest-growing business models for apps in Google Play. As your subscription business evolves and becomes more sophisticated, our platform continues to evolve to better support your needs. Today we’re excited to tell you more about the new subscription capabilities we announced at the Android 11 Beta Launch, including promotional codes to help you acquire new subscribers, and new opportunities to remind users of your value and win back churned users. Many of these capabilities are built on top of the Play Billing Library version 3.

In addition to the new capabilities, we are also making improvements to our existing platform. Over the past few years, we have launched many features, such as account hold, restore, and pause, which have been highly effective in reducing your voluntary and involuntary churn. We want to ensure that everyone can take advantage of them, which is why we are planning on changing the default settings for these features from optional to either mandatory or on by default starting on November 1, 2020. Additional details on these features and implementation requirements can be found at the end of this post.

Here’s everything that is changing about the subscriptions platform:

More targeted promotions

Promotions and deals are an important way to grow your business to acquire new customers. That’s why over the last year, we have invested in new promotional code capabilities for subscriptions that you can use to send promotions to a more targeted set of users.

Last year at I/O 2019, we launched subscription one-time promotion codes, unique alphanumeric codes that can be distributed to individual users for redemption. Now, we have launched a new frictionless redemption flow which allows users to easily redeem the code, purchase the subscription, and install the app in the Play store in a few simple steps. This greatly simplifies the user experience by reducing the friction users go through to use your code. Since this subscription is started outside of your app, it is only available to developers who are using Billing Library 2.0 or higher.

In addition to one-time codes, we are excited to officially announce the launch of custom codes (also known as vanity codes), which can be redeemed by multiple users and can be used for marketing campaigns to drive acquisitions. For example, you can post custom codes in advertisements or in social promotions to creatively engage with potential new users. Users can redeem a custom code in your app by entering it in their payment methods when purchasing a subscription.

Remind users of the value of your subscription

Retaining your subscribers is crucial for the long-term health of your subscriptions business. The reason why users stay subscribed is because they perceive ongoing value from your subscription service. To help you communicate this value, we recently launched a module that will remind users of the benefits gained from a subscription when they go to cancel. To take advantage of this module, add a short list of up to 4 subscriber benefits in the Google Play Console.

Win back churned subscribers

If users do churn from their subscription, we want to make it easy for them to restart it whenever they want. To help you do this, we have launched the ability for users to resubscribe to recently expired subscriptions directly from the Google Play subscriptions center. You can enable your SKUs for resubscribe in the Google Play Console. Since this subscription is started outside of your app, it is only available to developers who are using Billing Library version 2 or higher.

Price decreases without opt-ins

Finally, we’ve heard your feedback that requiring users to opt in to subscription price decreases was too restrictive. We are happy to announce that subscription price decreases will no longer require users to take action to opt in to keep their subscription. Users will be notified of an upcoming price decrease and be able to see the upcoming change in the Google Play subscriptions center.

Updated platform retention settings

Over the past few years, our platform has made strides in helping you keep your subscribers, through features aimed at decreasing both voluntary churn and involuntary churn (churn due to payment failure). For example, account hold has helped developers achieve 8% lower involuntary churn and a 35% higher payment-decline recovery rate compared to developers without account hold. Although these features are effective, retention may not be something you are thinking about when starting out for the first time.

That’s why we are announcing updated defaults for several subscriptions features that have been up until now optional, which will take effect on November 1, 2020.

  • Account hold and restore will both be mandatory for all developers.
    • Account hold is a state the user enters after a renewal fails due to a payment issue. During account hold, the user loses access to the subscription while Google notifies the user and retries the payment method. Learn how to integrate account hold
    • Restore enables users to resume auto-renewals after they have cancelled the subscription but before the subscription expires. Learn how to integrate restore
  • Pause and resubscribe will be turned on by default for all developers with subscriptions enabled. You can opt-out of either feature in Google Play Console at any time, in case you are unable to implement the changes by November.
    • Pause enables users to pause their subscription for up to 3 months. At the end of the pause period, the subscription will auto-resume. Pause requires Account Hold to be enabled. You can disable the feature by selecting “Disabled” next to Pause in the subscription settings of the Play console. Learn how to integrate pause
    • Resubscribe enables users to resubscribe to a churned subscription within 12 months of subscription expiry. This feature is only available to apps that support Billing Library versions 2.0 and above. You can disable the feature by changing the setting to “Disabled” for each subscription product in the Play console. Learn how to integrate resubscribe

You may have to make modifications to your app or your server to handle these new features. Specifically, your app should:

  • Recognize when a user loses access to the subscription and when the user regains it later
    • If your app relies on the Billing Library and not the Google Play Developer API Purchases.subscriptions to maintain the latest state of your subscriber, then your app should automatically be able to handle this.
    • However, if you rely on the Google Play Developer API, which is common for developers whose subscription is accessible across multiple platforms such as web, it is important that you always have the latest status of the subscriber in your server.
    • To ensure you always have the latest subscriber status, we strongly recommend implementing Real-Time Developer Notifications. Learn more.
  • Gracefully handle out of app purchases (Billing Library 2.0+ only)
    • When a user opens the app after resubscribing, make sure you acknowledge the purchase and show an in-app message recognizing the new purchase, as in the sketch after this list. Check out our best practices for handling out of app purchases.
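
A sketch of that acknowledgement flow with Play Billing Library 3 (assuming an already-connected billingClient; the entitlement handling is illustrative):

// On app resume, pick up subscriptions purchased outside the app and acknowledge them.
billingClient.queryPurchases(BillingClient.SkuType.SUBS)
    .purchasesList?.forEach { purchase ->
        if (purchase.purchaseState == Purchase.PurchaseState.PURCHASED
            && !purchase.isAcknowledged) {
            val params = AcknowledgePurchaseParams.newBuilder()
                .setPurchaseToken(purchase.purchaseToken)
                .build()
            billingClient.acknowledgePurchase(params) { result ->
                // On BillingResponseCode.OK: grant entitlement and show an in-app message.
            }
        }
    }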

Although not every feature will require you to make engineering changes, we highly recommend that you test each feature before November 1. To make the transition easier, Google has enabled Account Hold, Pause, Restore, and Resubscribe for all license test accounts. Learn more about testing for subscriptions.

