Tag Archives: Android Developer

On-device machine learning solutions with ML Kit, now even easier to use

Posted by Christiaan Prins, Product Manager, ML Kit and Shiyu Hu, Tech Lead Manager, ML Kit

ML Kit logo

Two years ago at I/O 2018 we introduced ML Kit, making it easier for mobile developers to integrate machine learning into their apps. Today, more than 25,000 applications on Android and iOS make use of ML Kit’s features. Now, we are introducing some changes that will make it even easier to use ML Kit. In addition, we have a new feature and a set of improvements we’d like to discuss.

A new ML Kit SDK, fully focused on on-device ML

ML Kit API Overview

ML Kit's APIs are built to help you tackle common challenges in the Vision and Natural Language domains. We make it easy to recognize text, scan barcodes, track and classify objects in real time, translate text, and more.

The original version of ML Kit was tightly integrated with Firebase, and we heard from many of you that you wanted more flexibility when implementing it in your apps. As a result, we are now making all the on-device APIs available in a new standalone ML Kit SDK that no longer requires a Firebase project. You can still use both ML Kit and Firebase to get the best of both products if you choose to.

With this change, ML Kit is now fully focused on on-device machine learning, giving you access to the unique benefits that on-device versus cloud ML offers:

  • It’s fast, unlocking real-time use cases: since processing happens on the device, there is no network latency. This means you can run inference on a stream of images or video, or multiple times a second on text strings.
  • It works offline: you can rely on our APIs even when the network is spotty or your app’s end-user is in an area without connectivity.
  • Privacy is retained: since all processing is performed locally, there is no need to send sensitive user data over the network to a server.

Naturally, you still get access to Google’s on-device models and processing pipelines, all accessible through easy-to-use APIs, and offered at no cost.

All ML Kit resources can now be found on our new website, where we’ve made it a lot easier to access sample apps, API reference docs, and the community channels that are there to help you if you have questions.

Object detection & tracking gif Text recognition + Language ID + Translate gif

What does this mean if I already use ML Kit today?

If you are using ML Kit for Firebase’s on-device APIs in your app today, we recommend you migrate to the new standalone ML Kit SDK to benefit from new features and updates. For more information and step-by-step instructions to update your app, please follow our Migration guide. The cloud-based APIs, model deployment and AutoML Vision Edge remain available through Firebase Machine Learning.
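
At the Gradle level, the switch is mostly a dependency swap. A rough sketch (the artifact names and versions shown are illustrative; the Migration guide has the exact per-API mappings):

// Before: the Firebase-coupled SDK (example artifact and version)
implementation 'com.google.firebase:firebase-ml-vision:24.0.3'

// After: the standalone ML Kit artifact for each API you use,
// e.g. face detection (see the snippet further down)
implementation 'com.google.mlkit:face-detection:16.0.0'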

Shrink your app footprint with Google Play Services

Apart from making ML Kit easier to use, developers also asked us to ship ML Kit through Google Play Services, resulting in a smaller app footprint and allowing models to be reused between apps. In addition to Barcode scanning and Text recognition, we have now added Face detection / contour (model size: 20MB) to the list of APIs that support this functionality.

// Face detection / Face contour model
// Delivered via Google Play Services outside your app's APK…
implementation 'com.google.android.gms:play-services-mlkit-face-detection:16.0.0'

// …or bundled with your app's APK
implementation 'com.google.mlkit:face-detection:16.0.0'
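
Whichever artifact you pick, the calling code is the same. Here is a minimal Kotlin sketch of running the face contour detector on a bitmap (the bitmap, option values, and error handling are illustrative, not prescriptive):

// Configure the detector; contour detection uses the 20MB model mentioned above.
val options = FaceDetectorOptions.Builder()
    .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
    .build()
val detector = FaceDetection.getClient(options)

// Wrap a frame and run detection asynchronously.
val image = InputImage.fromBitmap(bitmap, /* rotationDegrees= */ 0)
detector.process(image)
    .addOnSuccessListener { faces ->
        for (face in faces) {
            // e.g. face.allContours for the detected contour points
        }
    }
    .addOnFailureListener { e -> Log.e("FaceDemo", "Detection failed", e) }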

Jetpack Lifecycle / CameraX support

Android Jetpack Lifecycle support has been added to all APIs. Developers can use addObserver to automatically manage teardown of ML Kit APIs as the app goes through screen rotation or closure by the user / system. This makes CameraX integration easier. With this release, we are also recommending that developers adopt CameraX in their apps due to the ease of integration and image quality improvements (compared to Camera1) on a wide range of devices.

// ML Kit now supports Lifecycle
val recognizer = TextRecognition.getClient()
lifecycle.addObserver(recognizer)

// ...

// Just like CameraX
val camera = cameraProvider.bindToLifecycle( /* lifecycleOwner= */this,
    cameraSelector, previewUseCase, analysisUseCase)
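
To close the loop, the analysis use case above can feed camera frames straight into the recognizer. A sketch, assuming analysisUseCase is a CameraX ImageAnalysis instance and executor is an Executor you provide:

// Feed each CameraX frame to ML Kit, closing the proxy once inference
// completes so the next frame can be delivered.
analysisUseCase.setAnalyzer(executor) { imageProxy ->
    val mediaImage = imageProxy.image // requires @ExperimentalGetImage
    if (mediaImage != null) {
        val input = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        recognizer.process(input)
            .addOnSuccessListener { text -> /* handle recognized text */ }
            .addOnCompleteListener { imageProxy.close() }
    } else {
        imageProxy.close()
    }
}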

For an overview of all recent changes, check out the release notes for the new SDK.

Codelab of the day - ML Kit x CameraX

To help you get started with the new ML Kit and its support for CameraX, we have created this codelab to Recognize, Identify Language and Translate text. If you have any questions regarding this codelab, please raise them on Stack Overflow with the [google-mlkit] tag. Our team will monitor it.

screenshot of app running

Early access program

Through our early access program, developers have an opportunity to partner with the ML Kit team and get access to upcoming features. Two new APIs are now available as part of this program:

  • Entity Extraction - Detect entities in text & make them actionable. We have support for phone numbers, addresses, payment numbers, tracking numbers, date/time and more.
  • Pose Detection - Low-latency pose detection supporting 33 skeletal points, including hands and feet tracking.

If you are interested, head over to our early access page for details.

pose detection on man jumping rope

Tomorrow - Support for custom models

ML Kit's turn-key solutions are built to help you tackle common challenges. However, if you need a more tailored solution, one that requires custom models, you typically had to build an implementation from scratch. To help, we are now providing the option to swap out the default Google models with a custom TensorFlow Lite model. We’re starting with the Image Labeling and Object Detection and Tracking APIs, which now support custom image classification models.
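
As a sketch of what this looks like for Image Labeling (the asset path and threshold are placeholders; consult the docs for the current option names):

// Point ML Kit at a TensorFlow Lite classifier bundled in assets/.
val localModel = LocalModel.Builder()
    .setAssetFilePath("custom_classifier.tflite") // placeholder path
    .build()

// Use it in place of the default Google model.
val customOptions = CustomImageLabelerOptions.Builder(localModel)
    .setConfidenceThreshold(0.7f)
    .build()
val labeler = ImageLabeling.getClient(customOptions)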

Tomorrow, we will dive a bit deeper into how to find or train a TensorFlow Lite model and use it either with ML Kit, or with Android Studio’s new ML binding functionality.

Announcing the winners of the #AndroidDevChallenge, powered by on-device machine learning

Posted by Jacob Lehrbaum, Director of Developer Relations, Android

Developers like you have always played an important role in Android innovation. Over 10 years ago, when we first launched the Android SDK, we also announced the Android Developer Challenge to reward model apps and highlight new ways of solving user problems. As Android pushes the boundaries of machine learning, 5G, foldables, and more, developers continue to help shape these new frontiers. To celebrate this work, we revived the challenge in 2019, with a focus on “Helpful Innovation,” powered by on-device machine learning.

We received hundreds of creative projects, and at the end of last year, picked 10 winners who each combined a strong idea and a thirst to bring it to life. Since then, we’ve been working with those winners to help turn their ideas into reality. And today, we’re announcing those 10 winners. Some are still at the beginning of their journey, but their apps are now ready for you to download and try out!

  • AgroDoc helps farmers diagnose plant disease and make treatment plans. [Navneet Krishna; Kochi, India]
  • AgriFarm helps farmers detect plant diseases and prevent major damage in fruits and vegetables such as tomatoes, corn and potatoes. [Balochistan, Pakistan]
  • Eskke streamlines mobile money management for people in the Congo, letting them transfer money, pay bills, buy subscriptions and essential airtime through SMS. [David Mumbere Kathoh; Goma, Democratic Republic of Congo]
  • Leepi helps students learn hand gestures and symbols for American Sign Language. [Prince Patel; Bengaluru, India]
  • MixPose is a live streaming platform that gives yoga teachers and fitness professionals the opportunity to teach, track alignment, and give feedback in real-time. [Peter Ma; San Francisco, California, USA]
  • Pathfinder could help people with visual impairments navigate complex situations by identifying and calculating the trajectories of objects moving in their path. [Colin Shelton; Addison, Texas, USA]
  • Snore & Cough helps you identify and analyze snoring and coughing, to help provide info to users seeking assistance from a medical professional. [Ethan Fan; Mountain View, California, USA]
  • Stila pairs with a wearable device, like the Fitbit wristband or a device running Wear OS by Google, to monitor and track the body’s stress levels. By monitoring stress levels over time, you have the chance to better understand and manage stress in your life. [Yingdin Wing; Munich, Germany]
  • Trashly makes recycling easier. Just point the on-device camera at an item, and through object detection, the app identifies and classifies plastic and paper cups, bags, bottles, etc. [Elvin Rakhmankulov; Chicago, Illinois, USA]
  • UnoDogs helps owners better support their pet’s wellness, providing customized information and fitness programs. [Chinmany Mishra; New Delhi, India]

Making on-device machine learning more accessible, with ML Kit and TensorFlow Lite

Increasingly, machine learning is becoming a more accessible tool to developers with limited to no background in the technology. In fact, for most of the winners of the Android Developer Challenge, this was their first foray into machine learning. That’s thanks in part to two key offerings from Google, which bring on-device machine learning into reach for millions of developers around the world.

The first is ML Kit. ML Kit brings Google’s on-device machine learning technologies to mobile app developers, so they can build customized and interactive experiences into their apps. This includes tools such as language translation, text recognition, object detection and more. Eskke, for instance, uses offline text recognition and barcode scanning from ML Kit so users can scan the QR code at a mobile money kiosk and quickly withdraw money. And MixPose uses ML Kit's forthcoming Pose detection API to detect each user’s yoga positions and movements, so teachers can provide feedback.

The other Google resource that many of the Android Dev Challenge winners used was TensorFlow Lite. This powerful machine learning framework can help run machine learning models on Android, iOS and IoT devices that would never normally be able to support them. Its set of tools can be used for all kinds of powerful neural network-related applications, from image detection to speech recognition, bringing the latest cutting-edge technology to the devices we carry around with us wherever we go. Trashly, for instance, uses a custom TensorFlow Lite model to report if an object is recyclable and how to recycle it.

Helpful innovation, such as the 10 winning apps in the Android Developer Challenge, has the potential to change the way we access, use, and interpret information, making it available when we need it, where we need it most. By working with these developers focused on helpful innovation, we hope to inspire the next wave of developers to unlock what’s possible with this new technology.

#11WeeksOfAndroid Week 2 Machine Learning with Android logo head

What’s next in Android Machine Learning week?

As we kick off the second week of #11WeeksOfAndroid, focused on Machine Learning, we will highlight new tools and resources available to Android developers. Here’s a taste of the rest of this week:

  • Tuesday - ML Kit, the turnkey ML SDK, went through a major overhaul with its new on-device offering this month. Check out the substantial improvements in developer usability, CameraX support, and where the platform is going next.
  • Wednesday - Custom Models. When a prepackaged SDK doesn’t quite satisfy your needs, tools from Android Studio, TensorFlow Lite and ML Kit might just be the answer. Aside from the individual offerings, we will also highlight how they can be used together.
  • Thursday - ML design. Learn some best practices for making ML product decisions from the People + AI Guidebook. We will go behind the scenes of the Read Along app, an on-device ML app that helps grow universal literacy. Bring your whole team, because everyone, including UXers, engineers, and product managers, is invited!

On Tuesday and Wednesday, we will also have a “codelab of the day” so get your Android Studio 4.1 beta today, block off an hour in your schedule and take this ML journey with us!

*The apps presented here are the projects of the developers individually, and not Google.

Android Studio 4.0

Posted by Adarsh Fernando, Product Manager

Android Studio logo

During these uncertain times, we’re humbled by the many developers around the world who are finding ways to keep doing what they do best—create amazing apps for Android. Whether you’re working from your kitchen table on a laptop or from a home office, you need tools that keep up with you. Android Studio 4.0 is the result of our drive to bring you new and improved tools for coding smarter, building faster, and designing the apps your users depend on, and it’s now available on the stable channel.

Some highlights of Android Studio 4.0 include a new Motion Editor to help bring your apps to life, a Build Analyzer to investigate causes for slower build times, and Java 8 language APIs you can use regardless of your app’s minimum API level. Based on your feedback, we’ve also overhauled the CPU Profiler user interface to provide a more intuitive workflow and easier side-by-side analysis of thread activity. And the improved Layout Inspector now provides live data of your app’s UI, so you can easily debug exactly what’s being shown on the device.

As always, this release wouldn’t be possible without the early feedback from our Preview users. So read on or watch below for further highlights and new features you can find in this stable version. If you’re ready to jump in and see for yourself, head over to the official website to download Android Studio 4.0 now.



Design

Motion Editor

The MotionLayout API extends the rich capabilities of ConstraintLayout to help Android developers manage complex motion and widget animation in their apps. In Android Studio 4.0, using this API is made easier with the new Motion Editor—a powerful interface for creating, editing, and previewing MotionLayout animations. You no longer have to create and modify complex XML files; the Motion Editor generates them for you, with support for editing constraint sets, transitions, keyframes, and view attributes. And if you do want to see the code the editor creates, it is one click away. Just as conveniently, for developers already using ConstraintLayout, the IDE can easily convert those layouts to MotionLayout. Learn more

Create, edit, and preview animations in the Motion Editor

Upgraded Layout Inspector

Have you ever wanted to investigate where a value for a particular attribute came from? Or see a live 3D representation of nested views to more easily inspect your view hierarchy? With the new Layout Inspector, debugging your UI is much more intuitive by giving you access to data that stays updated with your running app and providing insights on how resources are being resolved.

Debug your app’s UI in real-time with Live Layout Inspector

Use the live Layout Inspector by selecting View > Tool Windows > Layout Inspector from the main menu. If you are deploying to a device running API level 29 or higher, you have access to additional features, such as a dynamic layout hierarchy that updates as views change, detailed view attributes that also help you determine how resource values are resolved, and a live 3D model of your running app’s UI. Navigate, animate, and transition between views on your running app while always having the ability to debug your UI to pixel perfection. Learn more

Layout Validation

Compare your UI across multiple screens with Layout Validation

When you’re developing for multiple form-factors, screen sizes, and resolutions, you need to verify that changes you make to your UI look great on every screen you support. With the Layout Validation window, you can preview layouts on different screens and configurations simultaneously, so you can easily ensure your app looks great across a range of devices. To get started, click on the Layout Validation tab in the top-right corner of the IDE.

Develop & Profile

CPU Profiler UI Upgrades

The improved UI of the CPU Profiler

The CPU profiler is designed to provide a rich amount of information about your app’s thread activity and trace recordings. So, when you provided us feedback about how we can make the UI even more intuitive to navigate and the data easier to understand, we listened. In Android Studio 4.0, CPU recordings are now separated from the main profiler timeline and organized in groups to allow for easier analysis. You can move groups up and down, or drag-and-drop individual items within a group for additional customization.

Easier side-by-side analysis of thread activity

For easier side-by-side analysis, you can now view all thread activity in the Thread Activity timeline (including methods, functions, and events) and try new navigation shortcuts to easily move around the data—such as using W, A, S, and D keys for fine-grained zooming and panning. We’ve also redesigned the System Trace UI so Events are uniquely colored for better visual distinction, threads are sorted to surface the busier ones first, and you can now focus on seeing data for only the threads you select. Finally, we invested in the quality of the CPU profiler, and consequently we’ve seen a significant decrease in the user-reported error rates of recordings since Android Studio 3.6. There are even more improvements to try, so learn more.

Smart editor features when writing rules for code shrinking

Smart editor features when writing rules for R8

R8 was introduced in Android Gradle plugin 3.4.0 to combine desugaring, shrinking, obfuscating, optimizing, and dexing all in one step—resulting in noticeable build performance improvements. When creating rules files for R8, Android Studio now provides smart editor features, such as syntax highlighting, completion, and error checking. The editor also integrates with your Android project to provide full symbol completion for all classes, methods, and fields, and includes quick navigation and refactoring.

IntelliJ IDEA 2019.3 platform update

The core Android Studio IDE has been updated with improvements from IntelliJ IDEA 2019.3 and 2019.3.3 releases. These improvements largely focus on quality and performance improvements across the IDE.

Kotlin Android live templates

Live templates is a convenient IntelliJ feature that allows you to insert common constructs into your code by typing simple keywords. Android Studio now includes Android-specific live templates for your Kotlin code. For example, simply type toast and press the Tab key to quickly insert boilerplate code for a Toast. For a full list of available live templates, navigate to Editor > Live Templates in the Settings (or Preferences) dialog.
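
For example, typing toast and pressing Tab expands to boilerplate along these lines (the exact template text may vary):

// Expanded from the "toast" live template; fill in the message.
Toast.makeText(context, "", Toast.LENGTH_SHORT).show()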

Clangd support for C++

For developers writing C++, we have switched to clangd as the primary language analysis engine for code navigation, completion, inspection, and showing code errors and warnings. We also now bundle clang-tidy with Android Studio. To configure Clangd or Clang-Tidy behavior, go to the IDE Settings (or Preferences) dialog, navigate to Languages & Frameworks > C/C++ > Clangd or Clang-Tidy, and configure the options.

Build

Android Gradle plugin 4.0.0 includes support for Android Studio’s Build Analyzer, for using Java 8 language APIs (regardless of your app’s minimum API level), and for creating feature-on-feature dependencies between Dynamic Feature modules. For a full list of updates, read the Android Gradle plugin 4.0.0 release notes.

Build Analyzer

Address bottlenecks in your build performance with Build Analyzer

Android Developers rely on a variety of Gradle plugins and custom build logic to tailor the build system for their app. However, outdated or misconfigured tasks can cause longer build times that lead to frustration and lost productivity. The Build Analyzer helps you understand and address bottlenecks in your build by highlighting the plugins and tasks that are most responsible for your overall build time and by suggesting steps to mitigate regressions. Learn more

Java 8 Language library desugaring in D8 and R8

Previous versions of the Android Gradle plugin supported a variety of Java 8 language features for all API levels, such as lambda expressions and method references, through a process called desugaring. In Android Studio 4.0, the desugaring engine has been extended to support Java language APIs, regardless of your app’s minSdkVersion. This means that you can now use standard language APIs that were previously available only in recent Android releases (such as java.util.stream, java.util.function and java.time). Learn more
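
Enabling the new desugaring is a build-file change. A sketch based on the AGP 4.0 setup (the desugar_jdk_libs version shown is illustrative; check the release notes for the current one):

android {
    compileOptions {
        // Enable Java 8 language API desugaring in D8/R8.
        coreLibraryDesugaringEnabled true
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

dependencies {
    // Version shown is illustrative.
    coreLibraryDesugaring 'com.android.tools:desugar_jdk_libs:1.0.5'
}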

Feature-on-feature dependencies

When using Android Gradle plugin 4.0.0 and higher, you can now specify that a Dynamic Feature module depends on another feature module. Being able to define this relationship ensures that your app has the required modules to unlock additional functionality, resulting in fewer requests and easier modularization of your app. For example, a :video feature can depend on the :camera feature. If a user wants to unlock the ability to record videos, your app automatically downloads the required :camera module when it requests :video. Learn more
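
Declaring the relationship is a one-line dependency in the feature module. A sketch using the :video and :camera modules from the example above:

// In the :video dynamic feature module's build.gradle
dependencies {
    implementation project(':camera')
}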

New options to enable or disable build features

The Android Gradle plugin has built-in support for modern libraries, such as data binding and view binding, and build features, such as auto-generated BuildConfig classes. However, you might not need these libraries and features for every project. In version 4.0.0 of the plugin, you can now disable discrete build features, as shown below, which can help optimize build performance for larger projects. For the DSL and full list of features you can control, see the release notes.

android {
    // The default value for each feature is shown below.
    // You can change the value to override the default behavior.
    buildFeatures {
        // Determines whether to support View Binding.
        // Note that the viewBinding.enabled property is now deprecated.
        viewBinding = false
        // Determines whether to support Data Binding.
        // Note that the dataBinding.enabled property is now deprecated.
        dataBinding = false
        ...
    }
}

Android Gradle plugin DSL for enabling or disabling build features

Essential support for Kotlin DSL script files

Android Studio 4.0 now has built-in support for Kotlin DSL build script files (*.kts), which means that Kotlin build scripts offer a full suite of quick fixes and are supported by the Project Structure dialog. While we are excited about the potential for using Kotlin to configure your build, we will continue to refine the Android Gradle Plugin’s DSL API throughout the next year, which may result in breaking API changes for Kotlin script users. Long term, these fixes will make for a more idiomatic, easy-to-use DSL for Kotlin script users.

Dependencies metadata

When building your app using Android Gradle plugin 4.0.0 and higher, the plugin includes metadata that describes the library dependencies that are compiled into your app. When uploading your app, the Play Console inspects this metadata to provide alerts for known issues with SDKs and dependencies your app uses, and, in some cases, provide actionable feedback to resolve those issues.

The data is compressed, encrypted by a Google Play signing key, and stored in the signing block of your release app. If you’d rather not share this information, you can easily opt-out by including the following in your module’s build.gradle file:

android {
    dependenciesInfo {
        // Disables dependency metadata when building APKs.
        includeInApk = false
        // Disables dependency metadata when building Android App Bundles.
        includeInBundle = false
    }
}

Disable dependency metadata for your APKs, app bundle, or both

To recap, Android Studio 4.0 includes these new enhancements & features:

Design

  • Motion Editor: a simple interface for creating, editing, and previewing MotionLayout animations
  • Upgraded Layout Inspector: a real-time & more intuitive debugging experience
  • Layout Validation: compare your UI across multiple screen dimensions

Develop & Profile

  • CPU Profiler update: improvements to make the UI more intuitive to navigate and the data easier to understand
  • R8 rules update: smart editor features for your code shrinker rules, such as syntax highlighting, completion, and error checking
  • IntelliJ IDEA 2019.3 platform update with performance and quality improvements
  • Live Template update: Android-specific live templates for your Kotlin code
  • Clangd support: Clangd and Clang-Tidy turned on by default

Build

  • Build Analyzer: understand and address bottlenecks in your build
  • Java 8 language support update: APIs you can use regardless of your app’s minimum API level
  • Feature-on-feature dependencies: define dependencies between Dynamic Feature modules
  • buildFeatures DSL: enable or disable discrete build features, such as Data Binding
  • Kotlin DSL: essential support for Kotlin DSL script files

For a full list of changes, read the official release notes.

Getting Started

Download

Download Android Studio 4.0 from the download page. If you are using a previous release of Android Studio, you can simply update to the latest version of Android Studio.

As always, we appreciate any feedback on things you like, and issues or features you would like to see. If you find a bug or issue, please file an issue. Follow us, the Android Studio development team, on Twitter and on Medium.

#Android11: The Beta Launch Show – Here’s how to join in and watch next week!

Posted by The #Android11 team

In just under a week, we’ll kick off #Android11: The Beta Launch Show, your opportunity to find out what’s new in Android from the people who build Android. Join us on June 3, 11AM ET (8AM PT, 4PM BST, 8:30PM IST) as we unveil new features packed inside the next release, Android 11, as well as updates to help developers get the most out of modern Android development. You’ll be able to watch the show live on YouTube (don’t forget to set a reminder) or Twitter, and can sign up for updates here.

Get your #AskAndroid questions answered live

Got a burning question? We’ve got experts ready to answer your #AskAndroid questions, and we’ll be wrapping up the show with a live Q&A session. All you have to do is share your question on Twitter using #AskAndroid, and we’ll be selecting questions for Android engineering and product leads Dave Burke and Stephanie Cuthbertson to answer live on the air.

Check out the list of talks

Also on June 3, we’ll be sharing 12 talks on a range of topics from Jetpack to Android Studio and Google Play–talks that we had originally planned for Google I/O–to help you take advantage of the latest in Android development. We just posted the full list of talks on the event page.

Sketchnote with us

Sketchnote with us gif

We want to see your take on the show, so grab your best pens, markers, and paper, download the template, and get ready to show off your sketchnote skills during The Beta Launch Show. Don’t forget to share your work using the hashtag #Android11 for a chance to be featured.

We can’t wait to share the latest we’ve been working on with you in just under a week at #Android11: The Beta Launch Show!

Android Studio 3.6

Posted by Scott Swarthout, Product Manager

Android Studio logo

We are excited to announce the stable release of Android Studio 3.6 with a targeted set of features, primarily addressing quality in code editing and debugging use cases. This is our first release after the end of Project Marble, which was focused on making the fundamental features and flows of the Integrated Development Environment (IDE) rock-solid. We learned a lot from Project Marble, and in Android Studio 3.6 we introduced a small set of features, polished existing ones, and spent notable effort addressing bugs and improving underlying performance to ensure we meet the high quality bar we set in the past year.

Some highlights of Android Studio 3.6 include a new way to quickly design, develop, and preview app layouts using XML, with a new Split View in the design editors. Additionally, you no longer have to manually type in GPS coordinates to test location in your app, because we’ve now embedded Google Maps right into the Android Emulator extended control panel. Finally, we’ve made it easier to optimize your app and find bugs with automatic memory leak detection for Fragments and Activities. We hope all of these features help you be happier and more productive while developing on Android.

Thank you to those who gave your early feedback in preview releases. Your feedback helped us iterate and improve features in Android Studio 3.6. If you are ready for the next stable release, and want to use a new set of productivity features, Android Studio 3.6 is ready to download for you to get started.

Below is a full list of new features in Android Studio 3.6, organized by key developer flows.

Design

Split view in design editors

Design editors, such as the Layout Editor and Navigation Editor, now provide a Split view that enables you to see both the Design and Code views of your UI at the same time. Split view replaces and improves upon the earlier Preview window, and can be configured on a file-by-file basis to preserve context information like zoom factor and design view options, so you can choose the view that works best for each use case. To enable split view, click the Split icon in the top-right corner of the editor window. Learn more.

Split view for design editors

Color picker resource tab

In this release we wanted to make it easier to apply colors you have defined as color resources. In Android Studio 3.6, the color picker populates the color resources in your app for you to quickly choose and replace color resource values. The color picker is accessible in the design tools as well as in the XML editor.

Develop

View binding

View binding is a feature that allows you to more easily write code that interacts with views by providing compile-time safety when referencing views in your code. When enabled, view binding generates a binding class for each XML layout file present in that module. In most cases, view binding replaces findViewById. You can reference all views that have an ID with no risk of null pointer or class cast exceptions. These differences mean that incompatibilities between your layout and your code will result in your build failing at compile time rather than at runtime. To enable view binding in your project, include the following in each module’s build.gradle file:

android {
    viewBinding.enabled = true
}
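
Once enabled, each layout file gets a generated binding class. A minimal sketch, assuming a hypothetical layout result_profile.xml containing a TextView with the id name:

// result_profile.xml -> generated ResultProfileBinding (hypothetical layout)
private lateinit var binding: ResultProfileBinding

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    binding = ResultProfileBinding.inflate(layoutInflater)
    setContentView(binding.root)
    binding.name.text = "Ada" // compile-time safe, no findViewById
}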

For more information, check out this blog post by one of our developer experts.

Android NDK updates

The following Android NDK features in Android Studio, previously supported in Java, are now also supported in Kotlin:

  • Navigate from a JNI declaration to the corresponding implementation function in C/C++. View this mapping by hovering over the C or C++ item marker near the line number in the managed source code file.
  • Automatically create a stub implementation function for a JNI declaration. Define the JNI declaration first and then type “jni” or the method name in the C/C++ file to activate.

Learn more

IntelliJ Platform Update

Android Studio 3.6 includes the IntelliJ 2019.2 platform release. This IntelliJ release includes many improvements from a new services tool window to much improved startup times. Learn more

Add classes with Apply Changes

You can now add a class and then deploy that code change to your running app by clicking either Apply Code Changes or Apply Changes and Restart Activity.

To learn more about the difference between these two actions, see Apply Changes.

Build

Android Gradle Plugin (AGP) updates

Android Gradle plugin 3.6 and higher includes support for the Maven Publish Gradle plugin, which allows you to publish build artifacts to an Apache Maven repository. The Android Gradle plugin creates a component for each build variant artifact in your app or library module that you can use to customize a publication to a Maven repository. This change will make it easier to manage the release lifecycle for your various targets. Learn more
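
A sketch of publishing a library’s release variant with the maven-publish plugin (the coordinates are illustrative; variant components only exist after project evaluation, hence afterEvaluate):

// In the library module's build.gradle, with the 'maven-publish' plugin applied.
afterEvaluate {
    publishing {
        publications {
            release(MavenPublication) {
                // Use the component AGP creates for the release variant.
                from components.release
                groupId = 'com.example'   // illustrative coordinates
                artifactId = 'my-library'
                version = '1.0.0'
            }
        }
    }
}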

Additionally, the Android Gradle plugin has made significant performance improvements to annotation processing/KAPT for large projects. This is because AGP now generates R class bytecode directly, instead of .java files.

New packaging tool

The Android build team is continuously working on changes to improve build performance, and in this release we changed the default packaging tool to zipflinger for debug builds. Users should see an improvement in build speed, but you can also revert to using the old packaging tool by setting android.useNewApkCreator=false in your gradle.properties file.
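
The opt-out is a single flag in gradle.properties:

# Revert debug builds to the previous packaging tool.
android.useNewApkCreator=false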

Edit your gradle.properties file to disable the new packaging tool

Test

Android Emulator - Google Maps UI

Android Emulator 29.2.12 includes a new way for app developers to interface with the emulated device location. We embedded the Google Maps user interface in the extended controls menu to make it easier to specify locations and also to construct routes from pairs of locations. Individual points can be saved and re-sent to the device as the virtual location, while routes can be generated through typing in addresses or clicking two points. These routes can be replayed in real time as locations along the route are sent to the guest OS.

Android Emulator location UI with real-time location streaming

Multi-display support

Emulator 29.1.10 includes preliminary support for multiple virtual displays. As more devices are available that have multiple displays, it is important to test your app on a variety of multi-display configurations. Users can configure multiple displays through the settings menu (Extended Controls > Settings).

Multi-display support in Android Emulator

Configure secondary displays in the Android Emulator Extended Controls Panel

Resumable SDK downloads

When downloading Android SDK components and tools using the Android Studio SDK Manager, Android Studio now allows you to resume downloads that were interrupted (for example, due to a network issue) instead of restarting the download from the beginning. This enhancement is especially helpful for large downloads, such as the Android Emulator or system images, when internet connectivity is unreliable.

Pause and resume SDK downloads

In-place updates for imported APKs

Android Studio allows you to import externally-built APKs to debug and profile them. Previously, when changes to those APKs were made, you would have to manually import them again and reattach symbols and sources. Android Studio 3.6 now automatically detects changes made to the imported APK file and gives you an option to re-import it in-place.

Attach Kotlin sources to imported APKs

We added support for attaching Kotlin source files to imported APKs. To learn more, see Attach Kotlin/Java sources.

Attach Kotlin/Java sources to imported APKs

Optimize

Leak detection in Memory Profiler

Based on your feedback, we’ve added to the Memory Profiler the ability to detect Activity and Fragment instances that may have leaked. To get started, capture or import a heap dump file in the Memory Profiler, and check the Activity/Fragment Leaks checkbox to generate the results. For more information on how Android Studio detects leaks, please see our documentation.

Detect leaked Activities and Fragments in the Memory Profiler

Deobfuscate class and method bytecode in APK Analyzer

When using the APK Analyzer to inspect DEX files, you can now deobfuscate class and method bytecode. While in the DEX file viewer, load the ProGuard mappings file for the APK you’re analyzing. When loaded, you can right-click on the class or method you want to inspect and select Show Bytecode. Learn more

Deobfuscate class and method bytecode by selecting Show Bytecode in the APK Analyzer

To recap, Android Studio 3.6 includes these new enhancements & features:

Design

  • Split View in Design Editors
  • Color Picker Resource Tab

Develop

  • View binding
  • Android NDK support updates
  • IntelliJ Platform Update
  • Add classes with Apply Changes

Build

  • Android Gradle Plugin (AGP) Updates
  • New packaging tool

Test

  • Android Emulator Google Maps UI
  • Multi-display support
  • Resumable SDK downloads
  • In-place updates for imported APKs

Optimize

  • Leak detection in Memory Profiler
  • Deobfuscate class and method bytecode in APK Analyzer
  • Attach Kotlin sources to imported APKs

Getting Started

Download

Download Android Studio 3.6 from the download page. If you are using a previous release of Android Studio, you can simply update to the latest version of Android Studio. To use the mentioned Android Emulator features make sure you are running at least Android Emulator v29.2.12 downloaded via the Android Studio SDK Manager.

As mentioned above, we appreciate any feedback on things you like, and issues or features you would like to see. If you find a bug or issue, feel free to file an issue. Follow us, the Android Studio development team, on Twitter and on Medium.

#AndroidDevChallenge: today is the last day to apply!

Dev Challenge banner with Android logo

Today is the last day to apply for the Android Developer Challenge! And to spark your imagination, we wanted to take a look at one of the original Android Developer Challenge winners, from over 10 years ago. Meet Maurizio Leo:

Maurizio and his team have been working on Android for a while now. In fact, he was one of the winners of the original Android Developer Challenge, which launched with the start of Android over ten years ago. Their app, which won 3rd place worldwide at the time, has gone on to be downloaded over 30 million times!

If you’ve got a great idea that can help users get things done, we want to hear it! We’ll pick 10 concepts and provide expertise and guidance to those developers to help in their plans to bring their ideas to fruition, in part from this amazing set of experts we’ve assembled. And once the app is ready, we’ll help showcase it in front of the billions of users on Google Play, through a collection and more. You can read more about all of the prizes here.

There’s still time to submit your idea before the deadline today! Submitting your idea is as simple as creating a repository on GitHub, telling us what you’d build and how we can help (we’ve included all of the materials here), and then officially submitting your repository here. Ideas can range from an early concept to something that’s already complete; we can’t wait to hear what you come up with, and to work with you on bringing helpful innovation powered by machine learning to more and more users!

3 things to know about Jetpack from Android Dev Summit 2019

Posted by Jisha Abubaker, Product Manager

Last month’s #AndroidDevSummit was jam-packed with announcements and technical news...so much that we wouldn’t be surprised if you missed something. So all this month, we’ll be diving into key areas from throughout the summit so you don’t miss anything. We previously spotlighted Jetpack Compose, Kotlin and Android Studio, and today, we’re highlighting the rest of Android Jetpack, with the top three things you should know:

#1: A number of new & updated Jetpack libraries ready to use:

WorkManager 2.2 (Stable) has landed significant updates in its latest releases, with features like on-demand initialization, which improves app startup time when using WorkManager, and improved testing support. Hear more about the new features and best practices.
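
On-demand initialization boils down to implementing Configuration.Provider in your Application class (and removing the default initializer from your manifest); a minimal sketch:

// Lets WorkManager initialize lazily, on first use, instead of at app startup.
class MyApplication : Application(), Configuration.Provider {
    override fun getWorkManagerConfiguration(): Configuration =
        Configuration.Builder()
            .setMinimumLoggingLevel(Log.INFO)
            .build()
}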

Room 2.2 (Stable) is packed with features you asked for too: pre-packaged databases, improved relationship support, and now better support for Kotlin Flow as well. Check out the What’s new in Room session to catch up.
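
With the Flow support, a DAO query can be observed as a cold stream that re-emits whenever the underlying table changes. A sketch with a hypothetical User entity:

@Dao
interface UserDao {
    // Re-emits the result set whenever the user table changes.
    @Query("SELECT * FROM user")
    fun observeUsers(): Flow<List<User>>
}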

Benchmarking (Stable) helps you measure the performance of tasks in your app with confidence. Here’s a deep dive on how you can use the library to fight performance regressions in CI, like we do ourselves for Jetpack libraries and Compose.
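
Usage is a JUnit rule around the code you want to measure. A minimal sketch (doWork() is a stand-in for your own code):

@RunWith(AndroidJUnit4::class)
class MyBenchmark {
    @get:Rule
    val benchmarkRule = BenchmarkRule()

    @Test
    fun measureWork() {
        // The library handles warmup and reports timing statistics.
        benchmarkRule.measureRepeated { doWork() }
    }
}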

LiveData w/ support for Kotlin coroutines & Flow (RC): the Android developer community has shown strong interest in Kotlin coroutines and Flow for simplifying async patterns in apps. Learn how best to take advantage of the liveData builder in your app.
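
A quick sketch of the liveData builder, assuming a suspending repository call (repository and userId are illustrative):

// Runs the suspend block in a coroutine scoped to the LiveData's active state.
val user: LiveData<User> = liveData {
    emit(repository.loadUser(userId))
}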

View binding (Beta) is a type-safe solution bundled with Android Studio 3.6 Beta with minimal build-time impact: no more findViewById(), no more annotation processors. Check out What’s new in Studio for a demo!

#2: We’re busy baking more libraries

CameraX (Alpha) simplifies the camera development experience and lets you focus on your app instead, by addressing the differences between the many devices in the Android ecosystem; manufacturers like Samsung, Xiaomi, Oppo, Motorola, and LG are already unifying behind CameraX. Expected in Beta soon; learn what the Camera team has been up to since I/O 2019.

Security (Alpha) helps you simplify data-at-rest encryption for your app. Hear best practices for encryption on Android from the Security library team.
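
For instance, creating encrypted SharedPreferences takes just a few lines with the alpha API (which may change before stable; the file name is illustrative):

// Keys live in the Android Keystore; values are encrypted on disk.
val masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)
val prefs = EncryptedSharedPreferences.create(
    "secret_prefs",          // illustrative file name
    masterKeyAlias,
    context,
    EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
    EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
)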

#3: It’s time to migrate to androidx!

With all the new and updated Jetpack libraries and the upcoming release of Jetpack Compose, it is time to get your app updated and ready. Nick and Tiem share a great step-by-step plan and best practices from the community for migrating to the androidx namespace.

...and we also heard from you!

But Android Dev Summit isn’t just about what we’ve got to say; it’s also about you telling us what you’d like to see worked on to make your life easier. And this year, one thing we heard strongly from our community was the need for a simplified dependency injection developer experience for Jetpack libraries, and for improved Kotlin support across more Jetpack libraries. We’re on it!

You can find the entire playlist of Jetpack sessions at the Android Dev Summit sessions and videos here. We’ll continue to spotlight other areas later this month, so keep an eye out and follow AndroidDevelopers on Twitter. Thanks so much for letting us be a part of this experience with you!

Our panel of experts for the #AndroidDevChallenge (apply by Dec. 2)

Just a little over a week left to finish your submission for the Android Developer Challenge, due December 2! Technology is enabling us to create a whole new era of helpful innovation by helping people get things done more quickly and surfacing patterns that would be difficult to detect using traditional methods. Ultimately, this helpful innovation is enabling us to live better, more productive, and safer lives.

Earlier this week, we highlighted the type of helpful innovation ideas powered by machine learning which are the sort of examples we’re looking for, to help inspire you. Today, we wanted to share the names of the panel of experts we’ve assembled to help bring your projects to life as part of the Android Developer Challenge. These experts will be making the final decision on the 10 finalists of the Android Developer Challenge, and if you’re selected as one of those finalists, we plan to have you meet them when we bring you to Google HQ for a bootcamp next year:

  • Dave Burke is Vice President of Engineering at Google where he leads engineering for the Android platform. Android is the largest mobile platform and ecosystem in the world, with over 2 billion active devices spanning smartphones, tablets, wearables, auto, TV, and IoT. Dave joined Google UK in 2007, becoming an engineering site lead and later moving to California in 2011. Prior to Google, Dave co-founded and was CTO of an internet/telecoms voice startup and helped define related Web and Internet standards.
  • Stephanie Cuthbertson is Senior Director of Developer PM, DevRel and UX for Android. She previously worked on Google’s Search & Ads businesses, as well as a range of developer tools used by Google employees internally. Prior to Google, she was at AWS where she led the product management team for Storage, including Amazon S3. Before AWS, she spent 10 years working on Visual Studio and developer tools.
  • Brahim Elbouchikhi is a Director of Product Management on the Android team. On Android, Brahim is responsible for developer and consumer facing ML and Camera products including CameraX and ML Kit. Prior to Android, Brahim led Daydream’s software team. Brahim was also a founding PM of the Google Play store where he led monetization, search, and discovery.
  • Yossi Matias is Vice President, Engineering, at Google. He is leading efforts in Search (Google Autocomplete, Search Live Results, Google Trends), Conversational AI (Google Duplex, Call Screen, Live Caption, Live Relay, Recorder, Pronunciation), and other Research initiatives. Yossi is the founding Head of Google's R&D Center in Israel, and the founding executive lead of Google for Startups Campus Tel Aviv and of Launchpad. He is the lead of Crisis Response and co-lead of Google’s AI for Social Good. In addition to his experience as an executive and entrepreneur, Yossi has a rich record of scientific research, has published extensively, and holds dozens of patents. Yossi is a recipient of the Gödel Prize and is an ACM Fellow.
  • Sarah Sirajuddin is an engineering director working on TensorFlow at Google. She leads the teams working on on-device machine learning, TensorFlow Extended, and efforts around training models for the best accuracy and performance with Google’s cutting-edge infrastructure, including TensorFlow and tensor processing units (TPUs).

If you’ve got a great idea that can help users get things done, we want to hear it! We’ll pick 10 concepts and provide expertise and guidance to those developers to help in their plans to bring their ideas to fruition, in part from this amazing set of experts we’ve assembled. And once the app is ready, we’ll help showcase it in front of the billions of users on Google Play, through a collection and more. You can read more about all of the prizes here.

There’s still time to submit your idea before the December 2 deadline. Submitting your idea is as simple as creating a repository on GitHub, telling us what you’d build and how we can help (we’ve included all of the materials here), and then officially submitting your repository here. Ideas can range from an early concept to something that’s already complete; we can’t wait to hear what you come up with, and to work with you on bringing helpful innovation powered by machine learning to more and more users!