Tag Archives: Android Developer

Bringing modern storage to Viber’s users

This blog post is a collaboration between Google and Viber, authored by Kseniia Shumelchyk from Google and Anton Novikov and Sergey Kozlov from Viber.

As a messaging app, Viber needs to store, process and share a significant amount of data. Viber aims to give its users an easy, fast, reliable and secure communication platform by providing an intuitive interface and handling files in a privacy-preserving way. We believe the modern scoped storage paradigm provides this foundation for app developers and users.

Scoped storage was introduced in Android 10 with further improvements in Android 11 to provide better protection to app and user data on a platform level. Due to Viber's complexity, the team opted to incrementally implement the changes that were required to comply with scoped storage.

In this article, we’ll share how Viber handled the migration to scoped storage, focusing on what they did to optimize working with media files and other data in the app.

Managing storage across Android versions

Android’s storage model has evolved to adapt to changing privacy considerations, leading to the changes in the storage system APIs. Let’s take a look at key platform changes that affected the legacy Viber implementation.

Media directories

Scoped storage changes the way that apps store and access files on a device's external storage. Viber needed to evaluate the differences between the existing app's storage model and the updated platform guidelines, followed by gradual application changes to work with files in scoped storage. Therefore, Viber used the requestLegacyExternalStorage flag to temporarily opt out of scoped storage on Android 10 until the app was fully compatible.

In order to adjust their app experience to scoped storage, Viber now contributes public media files to well-defined media collections using the MediaStore API. This way, the files are accessible in a device gallery, and can be read by other apps with the storage permission. Private media files are stored in the app-specific directory on external storage and are accessed via the internal ContentProvider.
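
As a rough illustration (not Viber's actual code), here is a minimal Kotlin sketch of contributing a photo to the shared Pictures collection through MediaStore on Android 10 and above; the display name, MIME type and relative path are placeholders:

import android.content.ContentResolver
import android.content.ContentValues
import android.os.Environment
import android.provider.MediaStore

// Sketch: insert a photo into the shared Pictures collection via MediaStore.
// The display name and relative path below are illustrative placeholders.
fun savePublicImage(resolver: ContentResolver, bytes: ByteArray) {
    val values = ContentValues().apply {
        put(MediaStore.Images.Media.DISPLAY_NAME, "viber_photo.jpg")
        put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg")
        put(MediaStore.Images.Media.RELATIVE_PATH, Environment.DIRECTORY_PICTURES)
    }
    val uri = resolver.insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values)
    uri?.let { resolver.openOutputStream(it)?.use { out -> out.write(bytes) } }
}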

Storage permissions

The other notable update is related to changes in the storage permissions model: Apps in scoped storage have unrestricted access to their app-specific directories on external storage and can contribute to well-defined media collections without requesting a runtime permission. This change will help Viber provide more granular control to their users:

“This addition supports our efforts to provide our users with the best security and privacy solutions we can provide supported by the Android OS; users will benefit from this added security later without needing to opt in. We also added a new ‘Save to gallery’ option allowing users to choose to make their photos readable by other apps or not. Because chats may contain private images or videos, it’s important to give users the ability to hide these files from the gallery. This change gives users additional control over the content included in their Viber messages,” said Anton Novikov and Sergey Kozlov from Viber.

Accessing files outside of app-specific directory

Previously, Viber created and consumed files in a custom top level directory and depended on file path access. With scoped storage, saving app files to a top level directory became an anti-pattern, so Viber has followed best practices to update their implementation to store media files from the chats only in locations that are accessible in scoped storage.

However, to reduce the complexity of migration, Viber decided to keep their own top level directory for Android 10 and below, storing only the media files that are not exposed to the device’s Gallery app, while for Android 11 and above this directory is used in read-only mode to provide backward compatibility.

Another use case that Viber has been refining is sharing files in the chats. The updated storage runtime permission gives read access only to the images, videos and audio files that are available through MediaProvider. Starting from Android 11, the only way for Viber to access non-media files created by other apps is by using the Storage Access Framework document picker, which they had already utilized in a different part of their app.
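
As a hedged illustration (not Viber's implementation), a minimal sketch of launching the Storage Access Framework document picker with ACTION_OPEN_DOCUMENT so the user can grant access to a non-media file; the request code is arbitrary:

import android.app.Activity
import android.content.Intent

// Sketch: let the user pick any openable document; the result arrives in
// onActivityResult as a content URI. REQUEST_CODE_PICK_FILE is arbitrary.
const val REQUEST_CODE_PICK_FILE = 42

fun Activity.pickDocument() {
    val intent = Intent(Intent.ACTION_OPEN_DOCUMENT).apply {
        addCategory(Intent.CATEGORY_OPENABLE)
        type = "*/*"
    }
    startActivityForResult(intent, REQUEST_CODE_PICK_FILE)
}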

App-specific files within external storage

In the scoped storage environment, app-specific directories on external storage are no longer accessible to other apps. This change has helped Viber leverage its use of external storage for storing private user files:

“We find the change to app-specific directories to be useful, because it will help to ensure that personal chats are protected and backed by platform security,” said Anton Novikov from Viber. Learn more about how to access app-specific files.

Single interface to access storage

Because Viber supports a large audience running Android 4.2 and above, the team introduced an abstraction layer that helps them manage storage access efficiently across all supported Android versions, with their use cases in mind.

Previously, Viber heavily used the File API to access files, including files in legacy storage locations. In addition, they stored absolute file paths for entries in the local database to keep the user's conversation history.

To standardize access to this conversation history and thus ensure that users don’t lose access to their files, Viber replaced absolute file paths with content URIs. In the new implementation, the app is accessing files only via content providers:

  • Internal FileProvider for Viber app-specific directories.
  • External file providers available in the Android framework, such as MediaStore or the Storage Access Framework, or those belonging to another app that shares files with Viber through Intent.ACTION_SEND.

With this consistent ContentProvider layer, ContentResolver gives the app a unified interface for accessing file content.
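
For illustration only, a minimal sketch of how a single ContentResolver call covers all of these sources, whether the URI comes from Viber's own FileProvider, MediaStore, or another app:

import android.content.Context
import android.net.Uri

// Sketch: any content URI - FileProvider, MediaStore or another app's
// provider - is read the same way through ContentResolver.
fun readBytes(context: Context, uri: Uri): ByteArray? =
    context.contentResolver.openInputStream(uri)?.use { it.readBytes() }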

This approach has also helped Viber optimize the network layer and define a universal Loader abstraction to upload/fetch and to read/store different types of media files like voice messages, chat images and stickers.

Summary

Android 11 further enhances scoped storage, which provides better protection of app and user data and makes the transition easier for developers. It’s amazing to see how many apps like Viber have been migrating to take advantage of scoped storage since Android 10.

We hope Viber’s story is useful and will inspire you to modernize your Android apps as well. Learn more about Android storage use cases and best practices.

11 Weeks of Android: Privacy and Security

Posted by:
Charmaine D’Silva, Product Lead, Android Privacy and Framework
Narayan Kamath, Engineering Lead, Android Privacy and Framework
Stephan Somogyi, Product Lead, Android Security
Sudhi Herle, Engineering Lead, Android Security

This blog post is part of a weekly series for #11WeeksOfAndroid. Each week, we’re diving into a key area of Android so you don’t miss anything. This week, we spotlighted Privacy and Security; here’s a look at what you should know.

Privacy and security is core to how we design Android, and with every new release we increase our investment in this space. Android 11 continues to make important strides in these areas, and this week we’ll be sharing a series of updates and resources about Android privacy and security. But first, let’s take a quick look at some of the most important changes we’ve made in Android 11 to protect user privacy and make the platform more secure.

As shared in the “All things privacy in Android 11” video, we’re giving users even more control over sensitive permissions. Throughout the development of this release, we have engaged deeply and frequently with our developer community to design these features in a balanced way - amplifying user privacy while minimizing developer impact. Let’s go over some of these features:

One-time permission: In Android 10, we introduced a granular location permission that allows users to limit access to location only when an app is in use (aka foreground only). When presented with the new runtime permissions options, users choose foreground only location more than 50% of the time. This demonstrated to us that users really wanted finer controls for permissions. So in Android 11, we’ve introduced one time permissions that let users give an app access to the device microphone, camera, or location, just that one time. As an app developer, there are no changes that you need to make to your app for it to work with one time permissions, and the app can request permissions again the next time the app is used. Learn more about building privacy-friendly apps with these new changes in this video.
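
As a hedged sketch (the activity, click handler and helper functions below are hypothetical), the usual runtime permission request with the Jetpack Activity result API keeps working unchanged; if the user chooses "Only this time", the grant simply expires and the app can ask again on next use:

import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class CameraActivity : AppCompatActivity() {
    // Standard runtime permission request; no one-time-permission specific code.
    private val requestCamera =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) openCamera() else showCameraRationale()
        }

    // Hypothetical click handler and helpers for this example.
    fun onTakePhotoClicked() = requestCamera.launch(Manifest.permission.CAMERA)
    private fun openCamera() { /* ... */ }
    private fun showCameraRationale() { /* ... */ }
}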

Background location: In Android 10 we added a background location usage reminder so users can see how apps are using this sensitive data on a regular basis. Users who interacted with the reminder either downgraded or denied the location permission over 75% of the time. In addition, we have done extensive research and believe that there are very few legitimate use cases for apps to require access to location in the background.

In Android 11, background location will no longer be a permission that a user can grant via a runtime prompt, and it will require a more deliberate action. If your app needs background location, the system will ensure that the app first asks for foreground location. The app can then broaden its access to background location through a separate permission request, which will cause the system to take the user to Settings in order to complete the permission grant.
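
A minimal sketch of this incremental flow (activity and handler names are hypothetical): ask for foreground location first, and only once it is granted request ACCESS_BACKGROUND_LOCATION separately, which Android 11 completes in Settings:

import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class GeofencingActivity : AppCompatActivity() {
    private val requestBackground =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            // On Android 11 this second request routes the user through Settings.
        }

    private val requestForeground =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) requestBackground.launch(Manifest.permission.ACCESS_BACKGROUND_LOCATION)
        }

    fun onEnableGeofencingClicked() =
        requestForeground.launch(Manifest.permission.ACCESS_FINE_LOCATION)
}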

In February, we announced that Google Play developers will need to get approval to access background location in their app to prevent misuse. We're giving developers more time to make changes and won't be enforcing the policy for existing apps until 2021. Check out this helpful video to find possible background location usage in your code.

Permissions auto-reset: Most users tend to download and install over 60 apps on their device but interact with only a third of these apps on a regular basis. If users haven’t used an app that targets Android 11 for an extended period of time, the system will “auto-reset” all of the granted runtime permissions associated with the app and notify the user. The app can request the permissions again the next time the app is used. If you have an app that has a legitimate need to retain permissions, you can prompt users to turn this feature OFF for your app in Settings.
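
A hedged sketch of that opt-out path, assuming Android 11's Intent.ACTION_AUTO_REVOKE_PERMISSIONS intent action; the function name is illustrative:

import android.content.Intent
import android.net.Uri
import androidx.appcompat.app.AppCompatActivity

// Sketch (Android 11+): send the user to the screen where they can exempt
// this app from permission auto-reset, if there is a legitimate need.
fun AppCompatActivity.openAutoResetSettings() {
    startActivity(
        Intent(
            Intent.ACTION_AUTO_REVOKE_PERMISSIONS,
            Uri.fromParts("package", packageName, null)
        )
    )
}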

Data access auditing APIs: Android encourages developers to limit their access to sensitive data, even if they have been granted permission to do so. In Android 11, developers will have access to new APIs that will give them more transparency into their app’s usage of private and protected data. The APIs will enable apps to track when the system records the app’s access to private user data.
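
A hedged sketch of what that auditing hook might look like with AppOpsManager.OnOpNotedCallback on Android 11; the log tag and messages are placeholders:

import android.app.AppOpsManager
import android.app.AsyncNotedAppOp
import android.app.SyncNotedAppOp
import android.util.Log
import androidx.appcompat.app.AppCompatActivity

// Sketch (Android 11+): log every time the system notes this app accessing
// protected data such as location or contacts.
fun AppCompatActivity.installDataAccessAuditing() {
    val callback = object : AppOpsManager.OnOpNotedCallback() {
        override fun onNoted(op: SyncNotedAppOp) = Log.d("DataAudit", "Sync access: ${op.op}")
        override fun onSelfNoted(op: SyncNotedAppOp) = Log.d("DataAudit", "Self access: ${op.op}")
        override fun onAsyncNoted(op: AsyncNotedAppOp) = Log.d("DataAudit", "Async access: ${op.op}")
    }
    getSystemService(AppOpsManager::class.java).setOnOpNotedCallback(mainExecutor, callback)
}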

Scoped Storage: In Android 10, we introduced scoped storage which provides a filtered view into external storage, giving access to app-specific files and media collections. This change protects user privacy by limiting broad access to shared storage in many ways including changing the storage permission to only give read access to photos, videos and music and improving app storage attribution. Since Android 10, we’ve incorporated developer feedback and made many improvements to help developers adopt scoped storage, including: updated permission UI to enhance user experience, direct file path access to media to improve compatibility with existing libraries, updated APIs for modifying media, Manage External Storage permission to enable select use cases that need broad files access, and protected external app directories. In Android 11, scoped storage will be mandatory for all apps that target API level 30. Learn more in this video and check out the developer documentation for further details.

Google Play system updates: Google Play system updates were introduced with Android 10 as part of Project Mainline. Their main benefit is to increase the modularity and granularity of platform subsystems within Android so we can update core OS components without needing a full OTA update from your phone manufacturer. Earlier this year, thanks to Project Mainline, we were able to quickly fix a critical vulnerability in the media decoding subsystem. Android 11 adds new modules, and maintains the security properties of existing ones. For example, Conscrypt, which provides cryptographic primitives, maintained its FIPS validation in Android 11 as well.

BiometricPrompt API: Developers can now use the BiometricPrompt API to specify the biometric authenticator strength required by their app to unlock or access sensitive parts of the app. We are planning to add this to the Jetpack Biometric library to allow for backward compatibility and will share further updates on this work as it progresses.
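
A hedged sketch with the framework BiometricPrompt on Android 11 (API 30), requiring a Class 3 ("strong") biometric before unlocking a sensitive screen; the strings and callback wiring are illustrative:

import android.hardware.biometrics.BiometricManager
import android.hardware.biometrics.BiometricPrompt
import android.os.CancellationSignal
import androidx.appcompat.app.AppCompatActivity

fun AppCompatActivity.showStrongBiometricPrompt(onSuccess: () -> Unit) {
    val prompt = BiometricPrompt.Builder(this)
        .setTitle("Unlock")
        // Only allow Class 3 ("strong") biometric authenticators.
        .setAllowedAuthenticators(BiometricManager.Authenticators.BIOMETRIC_STRONG)
        .setNegativeButton("Cancel", mainExecutor) { _, _ -> /* dismissed */ }
        .build()

    prompt.authenticate(CancellationSignal(), mainExecutor,
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                onSuccess()
            }
        })
}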

Identity Credential API: This will unlock new use cases such as mobile driver's licenses, National ID, and Digital ID. It’s being built by our security team to ensure this information is stored safely, using security hardware to secure and control access to the data, in a way that enhances user privacy as compared to traditional physical documents. We’re working with various government agencies and industry partners to make sure that Android 11 is ready for such digital-first identity experiences.

Thank you for your flexibility and feedback as we continue to build an increasingly private and secure platform. You can learn about more features on the Android 11 Beta developer site. You can also learn about general best practices related to privacy and security.

Please follow Android Developers on Twitter and YouTube to catch helpful content and materials in this area all this week.

Resources

You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!

Full spectrum of on-device machine learning tools on Android

Posted by Hoi Lam, Android Machine Learning



This blog post is part of a weekly series for #11WeeksOfAndroid. Each week we’re diving into a key area of Android so you don’t miss anything. Throughout this week, we covered various aspects of Android on-device machine learning (ML). Whatever stage of development you’re at, whether you’re just starting out or working on an established app; whatever role you play in design, product or engineering; and whatever your skill level, from beginner to expert, we have a wide range of ML tools for you.

Design - ML as a differentiator

“Focus on the user and all else will follow” is a Google mantra that becomes even more relevant in our machine learning age. Our Design Advocate, Di Dang, highlighted the importance of finding the unique intersection of user problems and ML strengths. Too often, teams are so keen on the idea of machine learning that they lose sight of their user needs.



Di outlined how the People + AI Guidebook can help you make ML product decisions and used the example of the Read Along app to illustrate topics like precision and recall, which are unique to ML design and development. Check out her interview with the Read Along team for more inspiration, and watch it together with your team.

New ML Kit fully focused on on-device

When you decide that on-device machine learning is the solution, the easiest way to implement it will be through turnkey SDKs like ML Kit. Sophisticated Google-trained models and processing pipelines are offered through an easy-to-use interface in Kotlin / Java. ML Kit is designed and built for on-device ML: it works offline, offers enhanced privacy, unlocks high performance for real-time use cases, and it is free. We recently made ML Kit a standalone SDK and it no longer requires a Firebase account. Just one line in your build.gradle file and you can start bringing ML functionality into your app.



The team has also added new functionality such as Jetpack Lifecycle support and the option to use the face contour models via Google Play Services, saving as much as 20MB in app size. Another much-anticipated addition is support for swapping the Google models with your own for both Image Labeling as well as Object Detection and Tracking. This provides one of the easiest ways to add TensorFlow Lite models to your applications without interacting with ByteArray!

Customise with TensorFlow Lite and Android tools

If the base model provided by ML Kit doesn’t quite fit the bill, what should developers do? The first port of call should be TensorFlow Hub, where ready-to-use TensorFlow Lite models from both Google and the wider community can be downloaded. From a classifier for 100,000 US supermarket products to tomato plant disease classifiers, the choice is yours.



In addition to Firebase AutoML Vision Edge, you can also build your own model using TensorFlow Model Maker (image classification / text classification) with just a few lines of Python. Once you have a TensorFlow Lite model from either TensorFlow Hub, or the Model Maker, you can easily integrate it with your Android app using ML Kit Image Labelling or Object Detection and Tracking. If you prefer an open source solution, Android Studio 4.1 beta introduces ML model binding that helps wrap around the TensorFlow Lite model with an easy to use Kotlin / Java wrapper. Adding a custom model to your Android app has never been easier. Check out this blog for more details.

Time for on-device ML is now

From the examples of the Android Developer Challenge winners, it is obvious that on-device machine learning has come of age and ML functionalities once reserved for the cloud or supercomputers are now available on your Android phone. Take a step forward with us by trying out our codelabs of the day:

Also check out the ML Week learning pathway and take the quiz to get your very own ML badge.

Android on-device machine learning is a rapidly evolving platform. If you have any enhancement requests or feedback on how it could be improved, please let us know, together with your use case (TensorFlow Lite / ML Kit). The time for on-device ML is now.

Resources

You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!

New tools for finding, training, and using custom machine learning models on Android

Posted by Hoi Lam, Android Machine Learning

Yesterday, we talked about turnkey machine learning (ML) solutions with ML Kit. But what if that doesn’t completely address your needs and you need to tweak it a little? Today, we will discuss how to find alternative models, and how to train and use custom ML models in your Android app.

Find alternative ML models

Crop disease models from the wider research community available on tfhub.dev

If the turnkey ML solutions don't suit your needs, TensorFlow Hub should be your first port of call. It is a repository of ML models from Google and the wider research community. The models on the site are ready for use in the cloud, in a web-browser or in an app on-device. For Android developers, the most exciting models are the TensorFlow Lite (TFLite) models that are optimized for mobile.

In addition to key vision models such as MobileNet and EfficientNet, the repository also boasts models powered by the latest research such as:

Many of these solutions were previously only available in the cloud, as the models are too large and too power intensive to run on-device. Today, you can run them on Android on-device, offline and live.

Train your own custom model

Besides the large repository of base models, developers can also train their own models. Developer-friendly tools are available for many common use cases. In addition to Firebase’s AutoML Vision Edge, the TensorFlow team launched TensorFlow Lite Model Maker earlier this year to give developers more choice over the base model and support for more use cases. TensorFlow Lite Model Maker currently supports two common ML tasks: image classification and text classification.

The TensorFlow Lite Model Maker can run on your own developer machine or in Google Colab online machine learning notebooks. Going forward, the team plans to improve the existing offerings and to add new use cases.

Using a custom model in your Android app

New TFLite Model import screen in Android Studio 4.1 beta

Once you have selected a model or trained your own, there are new easy-to-use tools to help you integrate it into your Android app without having to convert everything into ByteArrays. The first new tool is ML Model Binding with Android Studio 4.1. This lets developers import any TFLite model, read the input / output signature of the model, and use it with just a few lines of code that call the open source TensorFlow Lite Android Support Library.
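
For a rough idea of what the generated binding looks like (hedged: the class name MobilenetV1 is a hypothetical stand-in for whatever wrapper Android Studio derives from your .tflite file name):

import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.support.image.TensorImage

// Hypothetical sketch: "MobilenetV1" stands in for the wrapper class that
// ML Model Binding generates from an imported image classification model.
fun classify(context: Context, bitmap: Bitmap): String? {
    val model = MobilenetV1.newInstance(context)
    val outputs = model.process(TensorImage.fromBitmap(bitmap))
    val best = outputs.probabilityAsCategoryList.maxByOrNull { it.score }
    model.close()
    return best?.label
}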

Another way to implement a TensorFlow Lite model is via ML Kit. Starting in June, ML Kit no longer requires a Firebase project for on-device functionality. In addition, the image classification and object detection and tracking (ODT) APIs support custom models. The latter ODT offering is especially useful in use-cases where you need to separate out objects from a busy scene.
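
As a hedged sketch of the ML Kit route, swapping the default model for a bundled TFLite classifier via the custom image labeling API; the asset name and thresholds are placeholders:

import android.graphics.Bitmap
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

// Sketch: run image labeling with your own TFLite model bundled in assets.
fun labelWithCustomModel(bitmap: Bitmap) {
    val localModel = LocalModel.Builder()
        .setAssetFilePath("my_model.tflite") // placeholder asset name
        .build()
    val options = CustomImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.5f)
        .setMaxResultCount(3)
        .build()
    ImageLabeling.getClient(options)
        .process(InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0))
        .addOnSuccessListener { labels ->
            labels.forEach { println("${it.text}: ${it.confidence}") }
        }
}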

So how should you choose between these three solutions? If you are trying to detect a product on a busy supermarket shelf, ML Kit object detection and tracking can help your user select a specific product for processing. The API then performs image classification on just the part of the image that contains the product, which results in better detection performance. On the other hand, if the scene or the object you are trying to detect takes up most of the input image, for example, a landmark such as Big Ben, using ML Model binding or the ML Kit image classification API might be more appropriate.

TensorFlow Hub bird detection model with ML Kit Object Detection & Tracking API

Two examples of how these tools can fit together

Here are some resources to help you get started:

Customizing your model is easier than ever

Finding, building and using custom models on Android has never been easier. As both Android and TensorFlow teams increase the coverage of machine learning use cases, please let us know how we can improve these tools for your use cases by filing an enhancement request with TensorFlow Lite or ML Kit.

Tomorrow, we will take a step back and focus on how to appropriately use and design for a machine learning first Android app. The content will be appropriate for the entire development team, so bring your product manager and designers along. See you next time.

On-device machine learning solutions with ML Kit, now even easier to use

Posted by Christiaan Prins, Product Manager, ML Kit and Shiyu Hu, Tech Lead Manager, ML Kit

Two years ago at I/O 2018 we introduced ML Kit, making it easier for mobile developers to integrate machine learning into your apps. Today, more than 25,000 applications on Android and iOS make use of ML Kit’s features. Now, we are introducing some changes that will make it even easier to use ML Kit. In addition, we have a new feature and a set of improvements we’d like to discuss.

A new ML Kit SDK, fully focused on on-device ML

ML Kit API Overview

ML Kit's APIs are built to help you tackle common challenges in the Vision and Natural Language domains. We make it easy to recognize text, scan barcodes, track and classify objects in real-time, do translation of text, and more.
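
For a flavor of the API surface, a minimal hedged sketch of on-device barcode scanning with the standalone SDK; error handling is reduced to a print for brevity:

import android.graphics.Bitmap
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.common.InputImage

// Sketch: scan barcodes in a Bitmap with the default on-device model.
fun scanBarcodes(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    BarcodeScanning.getClient()
        .process(image)
        .addOnSuccessListener { barcodes ->
            barcodes.forEach { println("Found barcode: ${it.rawValue}") }
        }
        .addOnFailureListener { it.printStackTrace() }
}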

The original version of ML Kit was tightly integrated with Firebase, and we heard from many of you that you wanted more flexibility when implementing it in your apps. As a result, we are now making all the on-device APIs available in a new standalone ML Kit SDK that no longer requires a Firebase project. You can still use both ML Kit and Firebase to get the best of both products if you choose to.

With this change, ML Kit is now fully focused on on-device machine learning, giving you access to the unique benefits that on-device versus cloud ML offers:

  • It’s fast, unlocking real-time use cases - since processing happens on the device, there is no network latency. This means we can do inference on a stream of images / video, or multiple times a second on text strings.
  • Works offline - you can rely on our APIs even when the network is spotty or your app’s end-user is in an area without connectivity.
  • Privacy is retained: since all processing is performed locally, there is no need to send sensitive user data over the network to a server.

Naturally, you still get access to Google’s on-device models and processing pipelines, all accessible through easy-to-use APIs, and offered at no cost.

All ML Kit resources can now be found on our new website where we made it a lot easier to access sample apps, API reference docs and our community channels that are there to help you if you have questions.

What does this mean if I already use ML Kit today?

If you are using ML Kit for Firebase’s on-device APIs in your app today, we recommend you migrate to the new standalone ML Kit SDK to benefit from new features and updates. For more information and step-by-step instructions to update your app, please follow our Migration guide. The cloud-based APIs, model deployment and AutoML Vision Edge remain available through Firebase Machine Learning.

Shrink your app footprint with Google Play Services

Apart from making ML Kit easier to use, developers also asked if we could ship ML Kit through Google Play Services, resulting in a smaller app footprint and allowing the model to be reused between apps. Apart from Barcode scanning and Text recognition, we have now added Face detection / contour (model size: 20MB) to the list of APIs that support this functionality.

// Face detection / Face contour model
// Delivered via Google Play Services outside your app's APK…
implementation 'com.google.android.gms:play-services-mlkit-face-detection:16.0.0'

// …or bundled with your app's APK
implementation 'com.google.mlkit:face-detection:16.0.0'

Jetpack Lifecycle / CameraX support

Android Jetpack Lifecycle support has been added to all APIs. Developers can use addObserver to automatically manage teardown of ML Kit APIs as the app goes through screen rotation or closure by the user / system. This makes CameraX integration easier. With this release, we are also recommending that developers adopt CameraX in their apps due to the ease of integration and image quality improvements (compared to Camera1) on a wide range of devices.

// ML Kit now supports Lifecycle
val recognizer = TextRecognizer.newInstance()
lifecycle.addObserver(recognizer)

// ...

// Just like CameraX
val camera = cameraProvider.bindToLifecycle( /* lifecycleOwner= */this,
    cameraSelector, previewUseCase, analysisUseCase)
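
And a hedged sketch of the other half of that pairing - an ImageAnalysis.Analyzer that hands CameraX frames to an ML Kit client; depending on your CameraX version, reading imageProxy.image requires opting in to the ExperimentalGetImage annotation:

import androidx.camera.core.ExperimentalGetImage
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.common.InputImage

// Sketch: convert each CameraX frame to an ML Kit InputImage. Pass "input"
// to any ML Kit client (e.g. the recognizer above) and close the frame in
// that call's completion listener so the next frame can be delivered.
class MlKitAnalyzer : ImageAnalysis.Analyzer {
    @ExperimentalGetImage
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage == null) {
            imageProxy.close()
            return
        }
        val input = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        // recognizer.process(input).addOnCompleteListener { imageProxy.close() }
        imageProxy.close() // placeholder close for this sketch
    }
}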

For an overview of all recent changes, check out the release notes for the new SDK.

Codelab of the day - ML Kit x CameraX

To help you get started with the new ML Kit and its support for CameraX, we have created this codelab to Recognize, Identify Language and Translate text. If you have any questions regarding this codelab, please raise them on Stack Overflow and tag them with [google-mlkit]. Our team will monitor this.

Early access program

Through our early access program, developers have an opportunity to partner with the ML Kit team and get access to upcoming features. Two new APIs are now available as part of this program:

  • Entity Extraction - Detect entities in text & make them actionable. We have support for phone numbers, addresses, payment numbers, tracking numbers, date/time and more.
  • Pose Detection - Low-latency pose detection supporting 33 skeletal points, including hands and feet tracking.

If you are interested, head over to our early access page for details.

Tomorrow - Support for custom models

ML Kit's turnkey solutions are built to help you tackle common challenges. However, if you needed a more tailored solution, one that required custom models, you typically had to build an implementation from scratch. To help, we are now providing the option to swap out the default Google models with a custom TensorFlow Lite model. We’re starting with the Image Labeling and Object Detection and Tracking APIs, which now support custom image classification models.

Tomorrow, we will dive a bit deeper into how to find or train a TensorFlow Lite model and use it either with ML Kit, or with Android Studio’s new ML binding functionality.

Announcing the winners of the #AndroidDevChallenge, powered by on-device machine learning

Posted by Jacob Lehrbaum, Director of Developer Relations, Android

Developers like you have always played an important role in Android innovation. Over 10 years ago, when we first launched the Android SDK, we also announced the Android Developer Challenge to reward model apps and highlight new ways of solving user problems. As Android pushes the boundaries of machine learning, 5G, foldables, and more, developers continue to help shape these new frontiers. To celebrate this work, we revived the challenge in 2019, with a focus on “Helpful Innovation,” powered by on-device machine learning.

We received hundreds of creative projects, and at the end of last year picked 10 winners who each combined a strong idea with a thirst to bring it to life. Since then, we’ve been working with those winners to help turn their ideas into reality. And today, we’re announcing those winners: some are still at the beginning of their journey, but their apps are now ready for you to download and try out!

  • AgroDoc helps farmers diagnose plant disease and make treatment plans. [Navneet Krishna; Kochi, India]
  • AgriFarm helps farmers detect plant diseases and prevent major damage in fruits and vegetables such as tomatoes, corn and potatoes. [Balochistan, Pakistan]
  • Eskke streamlines mobile money management for people in the Congo, letting them transfer money, pay bills, buy subscriptions and essential airtime through SMS. [David Mumbere Kathoh; Goma, Democratic Republic of Congo]
  • Leepi helps students learn hand gestures and symbols for American Sign Language. [Prince Patel; Bengaluru, India]
  • MixPose is a live streaming platform that gives yoga teachers and fitness professionals the opportunity to teach, track alignment, and give feedback in real-time. [Peter Ma; San Francisco, California, USA]
  • Pathfinder could help people with visual impairments navigate complex situations by identifying and calculating the trajectories of objects moving in their path. [Colin Shelton; Addison, Texas, USA]
  • Snore & Cough helps you identify and analyze snoring and coughing, to help provide info to users seeking assistance from a medical professional. [Ethan Fan; Mountain View, California, USA]
  • Stila pairs with a wearable device, like the Fitbit wristband or a device running on Wear OS by Google to monitor and track the body’s stress levels. By monitoring stress levels over time, you have the chance to better understand and manage stress in your life. [Yingdin Wing; Munich, Germany]
  • Trashly makes recycling easier. Just point the on-device camera at an item, and through object detection, the app identifies and classifies plastic and paper cups, bags, bottles, etc. [Elvin Rakhmankulov; Chicago, Illinois, USA]
  • UnoDogs helps owners better support their pet’s wellness, providing customized information and fitness programs. [Chinmany Mishra; New Delhi, India]

Making on-device machine learning more accessible, with ML Kit and TensorFlow Lite

Increasingly, machine learning is becoming a more accessible tool to developers with limited to no background in the technology. In fact, for most of the winners of the Android Developer Challenge, this was their first foray into machine learning. That’s thanks in part to two key offerings from Google, which bring on-device machine learning into reach for millions of developers around the world.

The first is ML Kit. ML Kit brings Google’s on-device machine learning technologies to mobile app developers, so they can build customized and interactive experiences into their apps. This includes tools such as language translation, text recognition, object detection and more. Eskke, for instance, uses offline text recognition and barcode scanning from ML Kit so users can scan the QR code at a mobile money kiosk and quickly withdraw money. And MixPose uses ML Kit's forthcoming Pose detection API to detect each user’s yoga positions and movements, so teachers can provide feedback.

The other Google resource that many of the Android Dev Challenge winners used was TensorFlow Lite. This powerful machine learning framework can help run machine learning models on Android, iOS and IoT devices that would never normally be able to support them. Its set of tools can be used for all kinds of powerful neural network-related applications, from image detection to speech recognition, bringing the latest cutting-edge technology to the devices we carry around with us wherever we go. Trashly, for instance, uses a custom TensorFlow Lite model to report if an object is recyclable and how to recycle it.

Helpful innovation, such as the 10 winning apps in the Android Developer Challenge, has the potential to change the way we access, use, and interpret information, making it available when we need it, where we need it most. By working with these developers focused on helpful innovation, we hope to inspire the next wave of developers to unlock what’s possible with this new technology.

What’s next in Android Machine Learning week?

As we kick off the second week of #11WeeksOfAndroid, focused on Machine Learning, we will highlight new tools and resources available to Android developers. Here’s a taste of the rest of this week:

  • Tuesday - ML Kit. The turnkey ML SDK went through a major overhaul with its new on-device offering this month. Check out the substantial improvements in developer usability, CameraX support, and where the platform is going next.
  • Wednesday - Custom Models. When a prepackaged SDK doesn’t quite satisfy your needs, tools from Android Studio, TensorFlow Lite and ML Kit might just be the answer. Aside from the individual offerings, we will also highlight how they can be used together.
  • Thursday - ML design. Learn some best practices for making ML product decisions from the People + AI Guidebook. We will go behind the scenes of the Read Along app, an on-device ML app that helps grow universal literacy. Bring your whole team, because everyone, including UXers, engineers, and product managers, is invited!

On Tuesday and Wednesday, we will also have a “codelab of the day” so get your Android Studio 4.1 beta today, block off an hour in your schedule and take this ML journey with us!

*The apps presented here are the projects of the developers individually, and not Google.

Android Studio 4.0

Posted by Adarsh Fernando, Product Manager

During these uncertain times, we’re humbled by the many developers around the world who are finding ways to keep doing what they do best—create amazing apps for Android. Whether you’re working from your kitchen table on a laptop or from a home office, you need tools that keep up with you. Android Studio 4.0 is the result of our drive to bring you new and improved tools for coding smarter, building faster, and designing the apps your users depend on, and it’s now available on the stable channel.

Some highlights of Android Studio 4.0 include a new Motion Editor to help bring your apps to life, a Build Analyzer to investigate causes for slower build times, and Java 8 language APIs you can use regardless of your app’s minimum API level. Based on your feedback, we’ve also overhauled the CPU Profiler user interface to provide a more intuitive workflow and easier side-by-side analysis of thread activity. And the improved Layout Inspector now provides live data of your app’s UI, so you can easily debug exactly what’s being shown on the device.

As always, this release wouldn’t be possible without the early feedback from our Preview users. So read on or watch below for further highlights and new features you can find in this stable version. If you’re ready to jump in and see for yourself, head over to the official website to download Android Studio 4.0 now.



Design

Motion Editor

The MotionLayout API extends the rich capabilities of ConstraintLayout to help Android developers manage complex motion and widget animation in their apps. In Android Studio 4.0, using this API is made easier with the new Motion Editor—a powerful interface for creating, editing, and previewing MotionLayout animations. You no longer have to create and modify complex XML files; the Motion Editor generates them for you, with support for editing constraint sets, transitions, keyframes, and view attributes. If you do want to see the code the editor creates, it is one click away. And for developers already using ConstraintLayout, the IDE can easily convert those layouts to MotionLayout. Learn more
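If you also need to drive an editor-authored transition from code, the MotionLayout API exposes methods for that. Here is a rough Kotlin sketch; the helper class and its method names are hypothetical.

import androidx.constraintlayout.motion.widget.MotionLayout

// Hypothetical helper that drives a transition defined in a MotionScene.
class CardAnimator(private val motionLayout: MotionLayout) {

    // Animate from the current state to the transition's end ConstraintSet.
    fun expand() = motionLayout.transitionToEnd()

    // Animate back to the transition's start ConstraintSet.
    fun collapse() = motionLayout.transitionToStart()

    // Scrub the animation directly, e.g. to tie it to scroll or drag progress.
    fun seekTo(fraction: Float) {
        motionLayout.progress = fraction.coerceIn(0f, 1f)
    }
}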

Create, edit, and preview animations in the Motion Editor

Upgraded Layout Inspector

Have you ever wanted to investigate where the value of a particular attribute came from? Or see a live 3D representation of nested views to more easily inspect your view hierarchy? The new Layout Inspector makes debugging your UI much more intuitive by giving you access to data that stays updated with your running app and by providing insights into how resources are being resolved.

Debug your app’s UI in real-time with Live Layout Inspector

Use the live Layout Inspector by selecting View > Tool Windows > Layout Inspector from the main menu. If you are deploying to a device running API level 29 or higher, you have access to additional features, such as a dynamic layout hierarchy that updates as views change, detailed view attributes that also help you determine how resource values are resolved, and a live 3D model of your running app’s UI. Navigate, animate, and transition between views on your running app while always having the ability to debug your UI to pixel perfection. Learn more

Layout Validation

Compare your UI across multiple screens with Layout Validation

When you’re developing for multiple form-factors, screen sizes, and resolutions, you need to verify that changes you make to your UI look great on every screen you support. With the Layout Validation window, you can preview layouts on different screens and configurations simultaneously, so you can easily ensure your app looks great across a range of devices. To get started, click on the Layout Validation tab in the top-right corner of the IDE.

Develop & Profile

CPU Profiler UI Upgrades

The improved UI of the CPU Profiler

The CPU profiler is designed to provide a wealth of information about your app’s thread activity and trace recordings. So, when you provided us with feedback about how we could make the UI even more intuitive to navigate and the data easier to understand, we listened. In Android Studio 4.0, CPU recordings are now separated from the main profiler timeline and organized in groups to allow for easier analysis. You can move groups up and down, or drag and drop individual items within a group for additional customization.

Easier side-by-side analysis of thread activity

For easier side-by-side analysis, you can now view all thread activity in the Thread Activity timeline (including methods, functions, and events) and try new navigation shortcuts to easily move around the data—such as using W, A, S, and D keys for fine-grained zooming and panning. We’ve also redesigned the System Trace UI so Events are uniquely colored for better visual distinction, threads are sorted to surface the busier ones first, and you can now focus on seeing data for only the threads you select. Finally, we invested in the quality of the CPU profiler, and consequently we’ve seen a significant decrease in the user-reported error rates of recordings since Android Studio 3.6. There are even more improvements to try, so learn more.

Smart editor features when writing rules for code shrinking

Smart editor feature when writing rules for R8

R8 was introduced in Android Gradle plugin 3.4.0 to combine desugaring, shrinking, obfuscating, optimizing, and dexing all in one step—resulting in noticeable build performance improvements. When creating rules files for R8, Android Studio now provides smart editor features, such as syntax highlighting, completion, and error checking. The editor also integrates with your Android project to provide full symbol completion for all classes, methods, and fields, and includes quick navigation and refactoring.
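For context, this is roughly where those rules files plug into a module's build configuration. Below is a sketch in Gradle Kotlin DSL using the conventional proguard-rules.pro file name and the default rule set; your module may differ.

// Module-level build.gradle.kts (Kotlin DSL sketch).
android {
    buildTypes {
        getByName("release") {
            // Let R8 shrink, obfuscate, and optimize the release build.
            isMinifyEnabled = true
            // The rules files that the new smart editor features apply to.
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
}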

IntelliJ IDEA 2019.3 platform update

The core Android Studio IDE has been updated with improvements from the IntelliJ IDEA 2019.3 and 2019.3.3 releases, which largely focus on quality and performance across the IDE.

Kotlin Android live templates

Live templates are a convenient IntelliJ feature that lets you insert common constructs into your code by typing simple keywords. Android Studio now includes Android-specific live templates for your Kotlin code. For example, simply type toast and press the Tab key to quickly insert boilerplate code for a Toast. For a full list of available live templates, navigate to Editor > Live Templates in the Settings (or Preferences) dialog.
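For reference, the inserted boilerplate looks roughly like the following once you fill in the placeholders; the function name and message text here are just examples.

import android.content.Context
import android.widget.Toast

fun confirmSave(context: Context) {
    // Roughly what the "toast" live template expands to, with the message filled in.
    Toast.makeText(context, "Saved", Toast.LENGTH_SHORT).show()
}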

Clangd support for C++

For developers writing C++, we have switched to clangd as the primary language analysis engine for code navigation, completion, inspection, and showing code errors and warnings. We also now bundle clang-tidy with Android Studio. To configure Clangd or Clang-Tidy behavior, go to the IDE Settings (or Preferences) dialog, navigate to Languages & Frameworks > C/C++ > Clangd or Clang-Tidy, and configure the options.

Build

Android Gradle plugin 4.0.0 includes support for Android Studio’s Build Analyzer, Java 8 language APIs (regardless of your app’s minimum API level), and feature-on-feature dependencies between Dynamic Feature modules. For a full list of updates, read the Android Gradle plugin 4.0.0 release notes.

Build Analyzer

Address bottlenecks in your build performance with Build Analyzer

Android developers rely on a variety of Gradle plugins and custom build logic to tailor the build system for their app. However, outdated or misconfigured tasks can cause longer build times that lead to frustration and lost productivity. The Build Analyzer helps you understand and address bottlenecks in your build by highlighting the plugins and tasks that are most responsible for your overall build time and by suggesting steps to mitigate regressions. Learn more

Java 8 Language library desugaring in D8 and R8

Previous versions of the Android Gradle plugin supported a variety of Java 8 language features for all API levels, such as lambda expressions and method references, through a process called desugaring. In Android Studio 4.0, the desugaring engine has been extended to support Java language library APIs, regardless of your app’s minSdkVersion. This means that you can now use standard language APIs that were previously available only in recent Android releases (such as java.util.stream, java.util.function, and java.time). Learn more
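Enabling this for your own module means opting in to core library desugaring in your build configuration. A sketch in Gradle Kotlin DSL follows; the desugar_jdk_libs version shown is only illustrative.

// Module-level build.gradle.kts (Kotlin DSL sketch).
android {
    compileOptions {
        // Opt in to desugaring of Java language library APIs such as java.time.
        isCoreLibraryDesugaringEnabled = true
        sourceCompatibility = JavaVersion.VERSION_1_8
        targetCompatibility = JavaVersion.VERSION_1_8
    }
}

dependencies {
    // Version shown is illustrative; check the release notes for the current one.
    coreLibraryDesugaring("com.android.tools:desugar_jdk_libs:1.0.5")
}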

Feature-on-feature dependencies

When using Android Gradle plugin 4.0.0 and higher, you can now specify that a Dynamic Feature module depends on another feature module. Being able to define this relationship ensures that your app has the required modules to unlock additional functionality, resulting in fewer requests and easier modularization of your app. For example, a :video feature can depend on the :camera feature. If a user wants to unlock the ability to record videos, your app automatically downloads the required :camera module when it requests :video. Learn more
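In the build files, that relationship is just a project dependency from one feature module to another. Here is a Gradle Kotlin DSL sketch for the :video module; the base module name :app is an assumption.

// build.gradle.kts of the :video Dynamic Feature module (Kotlin DSL sketch).
plugins {
    id("com.android.dynamic-feature")
}

dependencies {
    // Every Dynamic Feature module depends on the base app module (name assumed here).
    implementation(project(":app"))
    // New in Android Gradle plugin 4.0.0: depend on another feature module directly.
    implementation(project(":camera"))
}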

New options to enable or disable build features

The Android Gradle plugin has built-in support for modern libraries, such as data binding and view binding, and build features, such as auto-generated BuildConfig classes. However, you might not need these libraries and features for every project. In version 4.0.0 of the plugin, you can now disable discrete build features, as shown below, which can help optimize build performance for larger projects. For the DSL and full list of features you can control, see the release notes.

android {
    // The default value for each feature is shown below.
    // You can change the value to override the default behavior.
    buildFeatures {
        // Determines whether to support View Binding.
        // Note that the viewBinding.enabled property is now deprecated.
        viewBinding = false
        // Determines whether to support Data Binding.
        // Note that the dataBinding.enabled property is now deprecated.
        dataBinding = false
        ...
    }
}

Android Gradle plugin DSL for enabling or disabling build features

Essential support for Kotlin DSL script files

Android Studio 4.0 now has built-in support for Kotlin DSL build script files (*.kts), which means that Kotlin build scripts offer a full suite of quick fixes and are supported by the Project Structure dialog. While we are excited about the potential of using Kotlin to configure your build, we will continue to refine the Android Gradle plugin’s DSL API throughout the next year, which may result in breaking API changes for Kotlin script users. Long term, these refinements will make for a more idiomatic, easier-to-use DSL.
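As a quick illustration of what is now fully supported, here is a minimal module-level build.gradle.kts; the plugin ids are the standard ones, while the application id and SDK versions are placeholder values.

// Minimal module-level build.gradle.kts (values are placeholders).
plugins {
    id("com.android.application")
    kotlin("android")
}

android {
    compileSdkVersion(29)
    defaultConfig {
        applicationId = "com.example.myapp"
        minSdkVersion(21)
        targetSdkVersion(29)
    }
}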

Dependencies metadata

When building your app using Android Gradle plugin 4.0.0 and higher, the plugin includes metadata that describes the library dependencies that are compiled into your app. When uploading your app, the Play Console inspects this metadata to provide alerts for known issues with SDKs and dependencies your app uses, and, in some cases, provide actionable feedback to resolve those issues.

The data is compressed, encrypted by a Google Play signing key, and stored in the signing block of your release app. If you’d rather not share this information, you can easily opt-out by including the following in your module’s build.gradle file:

android {
    dependenciesInfo {
        // Disables dependency metadata when building APKs.
        includeInApk = false
        // Disables dependency metadata when building Android App Bundles.
        includeInBundle = false
    }
}

Disable dependency metadata for your APKs, app bundle, or both

To recap, Android Studio 4.0 includes these new enhancements & features:

Design

  • Motion Editor: a simple interface for creating, editing, and previewing MotionLayout animations
  • Upgraded Layout Inspector: a real-time & more intuitive debugging experience
  • Layout Validation: compare your UI across multiple screen dimensions

Develop & Profile

  • CPU Profiler update: improvements to make the UI more intuitive to navigate and the data easier to understand
  • R8 rules update: smart editor features for your code shrinker rules, such as syntax highlighting, completion, and error checking
  • IntelliJ IDEA 2019.3 platform update with performance and quality improvements
  • Live Template update: Android-specific live templates for your Kotlin code
  • Clangd support: Clangd and Clang-Tidy turned on by default

Build

  • Build Analyzer: understand and address bottlenecks in your build
  • Java 8 language support update: APIs you can use regardless of your app’s minimum API level
  • Feature-on-feature dependencies: define dependencies between Dynamic Feature modules
  • buildFeatures DSL: enable or disable discrete build features, such as Data Binding
  • Kotlin DSL: essential support for Kotlin DSL script files

For a full list of changes, read the official release notes.

Getting Started

Download

Download Android Studio 4.0 from the download page. If you are using a previous release of Android Studio, you can simply update to the latest version of Android Studio.

As always, we appreciate any feedback on things you like, and issues or features you would like to see. If you find a bug or issue, please file an issue. Follow us, the Android Studio development team, on Twitter and on Medium.