ML Kit expands into NLP with Language Identification and Smart Reply

Posted by Christiaan Prins and Max Gubin

Today we are announcing the release of two new features to ML Kit: Language Identification and Smart Reply.

You might notice that both of these features are different from our existing APIs that were all focused on image/video processing. Our goal with ML Kit is to offer powerful but simple-to-use APIs to leverage the power of ML, independent of the domain. As such, we are excited to expand ML Kit with solutions for Natural Language Processing (NLP)!

NLP is a category of ML that deals with analyzing and generating text, speech, and other kinds of natural language data. We're excited to start out with two APIs: one that helps you identify the language of text, and one that generates reply suggestions in chat applications. Both of these features work fully on-device and are available on the latest version of the ML Kit SDK, on iOS (9.0 and higher) and Android (4.1 and higher).

Generate reply suggestions based on previous messages

A new feature popping up in messaging apps is to provide the user with a selection of suggested responses, either as actions on a notification or inside the app itself. This can really help users respond quickly when they are busy, and it is also a handy way to initiate a longer message.

With the new Smart Reply API you can now quickly achieve the same in your own apps. The API provides suggestions based on the last 10 messages in a conversation, although it still works if only one previous message is available. It is a stateless API that fully runs on-device, so we don't keep message history in memory nor send it to a server.

textPlus app providing response suggestions using Smart Reply

We have worked closely with partners like textPlus to ensure Smart Reply is ready for prime time, and they have now implemented in-app response suggestions with the latest version of their app (screenshot above).

Adding Smart Reply to your own app is done with a simple function call (using Swift in this example):

let smartReply = NaturalLanguage.naturalLanguage().smartReply()
smartReply.suggestReplies(for: conversation) { result, error in
    guard error == nil, let result = result else {
        return
    }
    if (result.status == .success) {
        for suggestion in result.suggestions {
            print("Suggested reply: \(suggestion.text)")
        }
    }
}

After you initialize a Smart Reply instance, call suggestReplies with a list of recent messages. The callback provides the result which contains a list of suggestions.
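
For Android apps, the equivalent call goes through the Kotlin/Java SDK. Below is a rough Kotlin sketch assuming the Firebase ML Kit class names of this release (FirebaseNaturalLanguage, FirebaseTextMessage, SmartReplySuggestionResult); treat the exact names and signatures as assumptions and check the reference documentation before relying on them.

val smartReply = FirebaseNaturalLanguage.getInstance().smartReply

// Build the conversation from the most recent messages, oldest first.
// The message text, timestamps and user id are illustrative.
val conversation = listOf(
    FirebaseTextMessage.createForRemoteUser(
        "Are we still on for lunch?", System.currentTimeMillis(), "friend123"),
    FirebaseTextMessage.createForLocalUser(
        "Running ten minutes late, sorry!", System.currentTimeMillis())
)

smartReply.suggestReplies(conversation)
    .addOnSuccessListener { result ->
        if (result.status == SmartReplySuggestionResult.STATUS_SUCCESS) {
            for (suggestion in result.suggestions) {
                Log.d("SmartReply", "Suggested reply: ${suggestion.text}")
            }
        }
    }
    .addOnFailureListener { e -> Log.e("SmartReply", "Suggestion failed", e) }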

For details on how to use the Smart Reply API, check out the documentation.

Tell me more ...

Although as a developer you can just pick up this new API and easily integrate it in your app, it may be interesting to reveal a bit about how it works under the hood. At the core of Smart Reply is a machine-learned model that is executed using TensorFlow Lite and has a modern architecture based on SentencePiece text encoding[1] and Transformer[2].

However, as we realized when we started development of the API, the core suggestion model is not all that's needed to provide a solution that developers can use in their apps. For example, we added a model to detect sensitive topics, so that we avoid making suggestions in response to profanity or in cases of personal tragedy/hardship. Also, we included language identification, to ensure we do not provide suggestions for languages the core model is not trained on. The Smart Reply feature is launching with English support first.

Identify the language of a piece of text

The language of a given text string is a subtle but helpful piece of information. Many apps have functionality that depends on the language: think of features like spell checking, text translation or Smart Reply. Rather than asking a user to specify the language they use, you can use our new Language Identification API.

ML Kit recognizes text in 103 different languages and typically only requires a few words to make an accurate determination. It is fast as well, typically providing a response within 1 to 2 ms across iOS and Android phones.

Similar to the Smart Reply API, you can identify the language with a function call (using Swift in this example):

let languageId = NaturalLanguage.naturalLanguage().languageIdentification()
languageId.identifyLanguage(for: "¿Cómo estás?") { languageCode, error in
  guard error == nil, let languageCode = languageCode else {
    print("Failed to identify language with error: \(error!)")
    return
  }

  print("Identified Language: \(languageCode)")
}

The identifyLanguage function takes a piece of text, and its callback provides a BCP-47 language code. If no language can be confidently recognized, ML Kit returns a code of und for undetermined. The Language Identification API can also provide a list of possible languages and their confidence values.
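
On Android, the same check is a short Kotlin call. Here is a minimal sketch, again assuming the Firebase ML Kit class names of this release (FirebaseNaturalLanguage), with logging standing in for real handling:

val languageId = FirebaseNaturalLanguage.getInstance().languageIdentification

languageId.identifyLanguage("¿Cómo estás?")
    .addOnSuccessListener { languageCode ->
        if (languageCode == "und") {
            Log.d("LangId", "Could not confidently identify the language.")
        } else {
            Log.d("LangId", "Identified Language: $languageCode")
        }
    }
    .addOnFailureListener { e -> Log.e("LangId", "Identification failed", e) }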

For details on how to use the Language Identification API, check out the documentation.

Get started today

We're really excited to expand ML Kit to include Natural Language APIs. Give the two new NLP APIs a spin today and let us know what you think! You can always reach us in our Firebase Talk Google Group.

As ML Kit grows we look forward to adding more APIs and categories that enables you to provide smarter experiences for your users. With that, please keep an eye out for some exciting ML Kit announcements at Google I/O.

Android Q Beta 2 update

Posted by Dave Burke, VP of Engineering

A few weeks ago we introduced Android Q Beta, a first look at the next version of Android. Along with new privacy features for users, Android Q adds new capabilities for developers - like enhancements for foldables, new APIs for connectivity, new media codecs and camera capabilities, NNAPI extensions, Vulkan 1.1 graphics, and more.

Android's program of early, open previews is driven by our core philosophy of openness, and collaboration with our community. Your feedback since Beta 1 proves yet again the value of that openness - it's been loud, clear, and incredibly valuable. You've sent us thousands of bug reports, giving us insights and directional feedback, changing our plans in ways that make the platform better for users and developers. We're taking your feedback to heart, so please stay tuned. We're fortunate to have such a passionate community helping to guide Android Q toward the final product later this year.

Today we're releasing Android Q Beta 2 and an updated SDK for developers. It includes the latest bug fixes, optimizations, and API updates for Android Q, along with the April 2019 security patches. You'll also notice isolated storage becoming more prominent as we look for your wider testing and feedback to help us refine that feature.

We're still in early Beta with Android Q so expect rough edges! Before you install, check out the Known Issues. In particular, expect the usual transitional issues with apps that we typically see during early Betas as developers get their app updates ready. For example, you might see issues with apps that access photos, videos, media, or other files stored on your device, such as when browsing or sharing in social media apps.

You can get Beta 2 today by enrolling any Pixel device here. If you're already enrolled, watch for the Beta 2 update coming soon. Stay tuned for more at Google I/O in May.

What's new in Beta 2?

Privacy features for testing and feedback

As we shared at Beta 1, we're making significant privacy investments in Android Q in addition to the work we've done in previous releases. Our goals are improving transparency, giving users more control, and further securing personal data across platform and apps. We know that to reach those goals, we need to partner with you, our app developers. We realize that supporting these features is an investment for you too, so we'll do everything we can to minimize the impact on your apps.

For features like Scoped Storage, we're sharing our plans as early as possible to give you more time to test and give us your input. To generate broader feedback, we've also enabled Scoped Storage for new app installs in Beta 2, so you can more easily see what is affected.

With Scoped Storage, apps can use their private sandbox without permission, but they need new permissions to access shared collections for photos, videos and audio. Apps using files in shared collections -- for example, photo and video galleries and pickers, media browsing, and document storage -- may behave differently under Scoped Storage.

We recommend getting started with Scoped Storage soon -- the developer guide has details on how to handle key use-cases. For testing, make sure to enable Scoped Storage for your app using the adb command. If you discover that your app has a use-case that's not supported by Scoped Storage, please let us know by taking this short survey. We appreciate the great feedback you've given us already, it's a big help as we move forward with the development of this feature.
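
To make the distinction concrete, here is a rough Kotlin sketch of the two access paths described above: the app's own sandbox, which needs no permission, and a shared media collection queried through MediaStore, which is gated by the new permissions when Scoped Storage is enabled. It assumes a Context named context and omits the permission request itself.

// 1. App-specific sandbox: no permission required under Scoped Storage.
val notes = File(context.getExternalFilesDir(null), "notes.txt")
notes.writeText("Private to this app")

// 2. Shared collections: go through MediaStore (and the new media
//    permissions) rather than raw file paths.
val cursor = context.contentResolver.query(
    MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
    arrayOf(MediaStore.Images.Media.DISPLAY_NAME),
    null, null, null
)
cursor?.use {
    val nameCol = it.getColumnIndexOrThrow(MediaStore.Images.Media.DISPLAY_NAME)
    while (it.moveToNext()) {
        Log.d("ScopedStorage", "Shared image: ${it.getString(nameCol)}")
    }
}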

Bubbles: a new way to multitask

In Android Q we're adding platform support for bubbles, a new way for users to multitask and re-engage with your apps. Various apps have already built similar interactions from the ground up, and we're excited to bring the best from those into the platform, while helping to make interactions consistent, safeguard user privacy, reduce development time, and drive innovation.

Bubbles will let users multitask as they move between activities.

Bubbles help users prioritize information and take action deep within another app, while maintaining their current context. They also let users carry an app's functionality around with them as they move between activities on their device.

Bubbles are great for messaging because they let users keep important conversations within easy reach. They also provide a convenient view over ongoing tasks and updates, like phone calls or arrival times. They can provide quick access to portable UI like notes or translations, and can be visual reminders of tasks too.

We've built bubbles on top of Android's notification system to provide a familiar and easy-to-use API for developers. To send a bubble through a notification, you need to add a BubbleMetadata by calling setBubbleMetadata. Within the metadata you can provide the Activity to display as content within the bubble, along with an icon (disabled in Beta 2) and associated person.
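
As a rough Kotlin sketch of what that looks like, the snippet below builds a notification with BubbleMetadata. BubbleActivity, the channel id, the drawable, the notification id, and the notificationManager reference are hypothetical and assumed to be in scope, and the manifest setup for the bubble activity is omitted; see the documentation and the sample for the complete picture.

// Content the bubble will display when expanded (hypothetical Activity).
val target = Intent(context, BubbleActivity::class.java)
val bubbleIntent = PendingIntent.getActivity(context, 0, target, 0)

val bubbleMetadata = Notification.BubbleMetadata.Builder()
    .setIntent(bubbleIntent)
    .setIcon(Icon.createWithResource(context, R.drawable.ic_chat))
    .setDesiredHeight(600)
    .build()

val sender = Person.Builder().setName("Alex").build()

val notification = Notification.Builder(context, CHANNEL_ID)
    .setContentTitle("New message")
    .setSmallIcon(R.drawable.ic_chat)
    .addPerson(sender)
    .setBubbleMetadata(bubbleMetadata)
    .build()

notificationManager.notify(NOTIFICATION_ID, notification)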

We're just getting started with bubbles, but please give them a try and let us know what you think. You can find a sample implementation here.

Foldables emulator

As the ecosystem moves quickly toward foldable devices, new use-cases are opening up for your apps to take advantage of these new screens. With Beta 2, you can build for foldable devices through Android Q's enhanced platform support, combined with a new foldable device emulator, available as an Android Virtual Device (AVD) in Android Studio 3.5 in the canary release channel.

7.3" Foldable AVD switches between the folded and unfolded states

On the platform side, we've made a number of improvements in onResume and onPause to support multi-resume and notify your app when it has focus. We've also changed how the resizeableActivity manifest attribute works, to help you manage how your app is displayed on foldable and large screens. You can read more in the foldables developer guide.

To set up a runtime environment for your app, you can now configure a foldable emulator as a virtual device (AVD) in Android Studio. The foldable AVD is a reference device that lets you test with standard hardware configurations, behaviors, and states, as will be used by our device manufacturer partners. To ensure compatibility, the AVD meets CTS/GTS requirements and models CDD compliance. It supports runtime configuration change, multi-resume and the new resizeableActivity behaviors.

Use the canary release of Android Studio 3.5 to create a foldable virtual device supporting either of two hardware configurations: 7.3" (4.6" folded) and 8" (6.6" folded) with Beta 2. In each configuration, the emulator gives you on-screen controls to trigger fold/unfold, change orientation, and quick actions.

Android Studio - AVD Manager: Foldable Device Setup

Try your app on the foldable emulator today by downloading the canary release of Android Studio 3.5 and setting up a foldable AVD that uses the Android Q Beta 2 system image.

Improved sharesheet

Following on the initial Sharing Shortcuts APIs in Beta 1, you can now offer a preview of the content being shared by providing an EXTRA_TITLE extra in the Intent for the title, or by setting the Intent's ClipData for a thumbnail image. See the updated sample application for the implementation details.
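
For example, a sharing flow might attach the preview like this (a hedged Kotlin sketch; thumbnailUri is assumed to be a content:// URI served by your own ContentProvider):

val shareIntent = Intent(Intent.ACTION_SEND).apply {
    type = "text/plain"
    putExtra(Intent.EXTRA_TEXT, "Check out what I found!")
    // Title shown in the sharesheet preview.
    putExtra(Intent.EXTRA_TITLE, "A headline for the preview")
    // Thumbnail for the preview, passed via ClipData.
    clipData = ClipData.newUri(context.contentResolver, "thumbnail", thumbnailUri)
    addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
}
context.startActivity(Intent.createChooser(shareIntent, null))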

Directional, zoomable microphones

Android Q Beta 2 gives apps more control over audio capture through a new MicrophoneDirection API. You can use the API to specify a preferred direction of the microphone when taking an audio recording. For example, when the user is taking a "selfie" video, you can request the front-facing microphone for audio recording (if it exists) by calling setMicrophoneDirection(MIC_DIRECTION_FRONT).

Additionally, this API introduces a standardized way of controlling zoomable microphones, allowing your app to have control over the recording field dimension using setMicrophoneFieldDimension(float).

Compatibility through public APIs

In Android Q we're continuing our long-term effort to move apps toward only using public APIs. We introduced most of the new restrictions in Beta 1, and we're making a few minor updates to those lists in Beta 2 to minimize impact on apps. Our goal is to provide public alternative APIs for valid use-cases before restricting access, so if an interface that you currently use in Android 9 Pie is now restricted, you should request a new public API for that interface.

Get started with Android Q Beta

Today's update includes Beta 2 system images for all Pixel devices and the Android Emulator, as well as the updated SDK and tools for developers. These give you everything you need to get started testing your apps on the new platform and build with the latest APIs.

First, make your app compatible and give your users a seamless transition to Android Q, including your users currently participating in the Android Beta program. To get started, just install your current app from Google Play onto a device or emulator running Beta 2 and work through the user flows. The app should run and look great, and handle the Android Q behavior changes for all apps properly. If you find issues, we recommend fixing them in the current app, without changing your targeting level. See the migration guide for steps and a recommended timeline.

With important privacy features that are likely to affect your apps, we recommend getting started with testing right away. In particular, you'll want to test against scoped storage, new location permissions, restrictions on background Activity starts, and restrictions on device identifiers. See the privacy checklist as a starting point.

Next, update your app's targetSdkVersion to 'Q' as soon as possible. This lets you test your app with all of the privacy and security features in Android Q, as well as any other behavior changes for apps targeting Q.

Explore the new features and APIs

When you're ready, dive into Android Q and learn about the new features and APIs you can use in your apps. Here's a video highlighting many of the changes for developers in Beta 1 and Beta 2. Take a look at the API diff report for an overview of what's changed in Beta 2, and see the Android Q Beta API reference for details. Visit the Android Q Beta developer site for more resources, including release notes and how to report issues.

To build with Android Q, download the Android Q Beta SDK and tools into Android Studio 3.3 or higher, and follow these instructions to configure your environment. If you want the latest fixes for Android Q related changes, we recommend you use Android Studio 3.5 or higher.

How do I get Beta 2?

It's easy - you can enroll here to get Android Q Beta updates over-the-air, on any Pixel device (and this year we're supporting all three generations of Pixel -- Pixel 3, Pixel 2, and even the original Pixel!). If you're already enrolled, you'll receive the update to Beta 2 soon, no action is needed on your part. Downloadable system images are also available. If you don't have a Pixel device, you can use the Android Emulator -- just download the latest emulator system images via the SDK Manager in Android Studio.

As always, your input is critical, so please let us know what you think. You can use our hotlists for filing platform issues (including privacy and behavior changes), app compatibility issues, and third-party SDK issues. You've shared great feedback with us so far and we're working to integrate as much of it as possible in the next Beta release.

Improving app performance with ART optimizing profiles in the cloud

Posted by Calin Juravle, Software Engineer

In Android Pie we launched ART optimizing profiles in Play Cloud, a new optimization feature that greatly improves the application startup time after a new install or update. On average, we have observed that apps start 15% faster (cold startup) across a variety of devices. Some hero cases even show 30%+ faster startup times. One of the most important aspects is that users get this for free, without any effort from their side or from developers!

Source: Google internal data

ART optimizing profiles in Play Cloud

The feature builds on previous Profile Guided Optimization (PGO) work, which was introduced in Android 7.0 Nougat. PGO allows the Android Runtime to help improve an app's performance by building a profile of the app's most important hot code and focusing its optimization effort on it. This leads to big improvements while reducing the traditional memory and storage impact of a fully compiled app. However, it relies on the device to optimize apps based on these code profiles in idle maintenance mode, which means it could be a few days before a user sees the benefits - something we aimed to improve.

Source: Google internal data

ART optimizing profiles in Play Cloud leverages the power of Android Play to bring all PGO benefits at install/update time: most users can get great performance without waiting!

The idea relies on two key observations:

  1. Apps usually have many commonly used code paths (hot code) that are shared across a multitude of users and devices, e.g. classes used during startup or critical user paths. This can often be discovered by aggregating a few hundred data points.
  2. App developers often roll out their apps incrementally, starting with alpha/beta channels before expanding to a wider audience. Even if there isn't an alpha/beta set, there is often a ramp-up of users to a new version of an app.

This means we can use the initial rollout of an app to bootstrap the performance for the rest of users. ART analyzes what part of the application code is worth optimizing on the initial devices, and then uploads the data to Play Cloud, which will build a core-aggregated code profile (containing information relevant to all devices). Once there is enough information, the code profile gets published and installed alongside the app's APKs.

On a device, the code profile acts as a seed, enabling efficient profile-guided optimization at install time. These optimizations help improve cold startup time and steady state performance, all without an app developer needing to write a single line of code.

Step 1: Building the code profile

One of the main goals is to build a quality, stable code profile out of aggregated & anonymized data as fast as possible (to maximize the number of users that can benefit), while also making sure we have enough data to accurately optimize an app's performance. Sampling too much data takes up more bandwidth and time at installation. In addition, the longer we take to build the code profile, the fewer users get the benefits. Sampling too little data means the code profile won't have enough information about what to optimize in order to make a difference.

The outcome of the aggregation is what we call a core code profile, which only contains anonymous data about the code that is frequently seen across a random sample of sessions per device. We remove outliers to ensure we focus on the code that matters for most users.

Experiments show that the most commonly used code paths can be calculated very quickly, over a small amount of time. That means we are able to build a code profile fast enough that the majority of users will benefit from it.

*Data averaged from Google apps, Source: Google internal data

Step 2: Installing the code profile

In Android 9.0 Pie, we introduced a new type of installation artifact: dex metadata files. Similar to the APKs, the dex metadata files are regular archives that contain data about how the APK should be optimized - like the core code profiles that have been built in the cloud. A key difference is that the dex metadata are managed solely by the platform and the app stores, and are not directly visible to developers.

There is also built-in support for App Bundles / Google Play Dynamic Delivery: without any developer intervention, all the app's feature splits are optimized.

Step 3: Using the code profiles to optimize performance

To understand how these code profiles achieve better performance, we need to look at their structure. Code profiles contain information about:

  • Classes loaded during startup
  • Hot methods that the runtime deemed worthy of optimizations
  • The layout of the code (e.g. code that executes during startup or post-startup)

Using this information, we use a variety of optimization techniques, out of which the following three provide most of the benefits:

  • App Images: We use the start up classes to build a pre-populated heap where the classes are pre-initialized (called an app image). When the application starts, we map the image directly into memory so that all the startup classes are readily available.
    • The benefit here is that the app's execution saves cycles since it doesn't need to do the work again, leading to a faster startup time.
  • Code pre-compilation: We pre-compile all the hot code. When the apps execute, the most important parts of the code are already optimized and ready to be natively executed. The app no longer needs to wait for the JIT compiler to kick in.
    • The benefit is that the code is mapped as clean memory (compared to the JIT dirty memory) which improves the overall memory efficiency. The clean memory can be released by the kernel when under memory pressure while the dirty memory cannot, lessening the chances that the kernel will kill the app.
  • More efficient dex layout: We reorganize the dex bytecode based on method information the profile exposes. The dex bytecode layout will look like: [startup code, post startup code, the rest of non profiled code].
    • The benefit of doing this is a much higher efficiency of loading the dex byte code in memory: The memory pages have a better occupancy, and since everything is together, we need to load less and we can do less I/O.

Improvements & Observations

We rolled out profiles in the cloud to all apps on the Play Store at the end of last year.

  • More than 30,000 apps have shown improvement
  • On average the cold startup is 15% faster across a variety of devices
    • with many top apps getting 20%+ (e.g. YouTube) or even 30% (e.g. Google Search) on selected devices.
  • 90%+ of the app installs on Android Pie get profiles
  • Little increase in install time for the extra optimization
  • Available to all Pie devices.

A very interesting observation is that, on average, ART profiles about 20% of the application methods (even less if we count the actual size of the code). For some apps, the profile covers only 2% of the code while for some the number goes up to 60%.

Source: Google internal data

Why is this an important observation? It means that the runtime has not seen a lot of the application code, and is thus not investing in the code's optimization. While there are a lot of valid use-cases where the code will not be executed (e.g. error handling or backwards compatibility code), this may also be due to unused features or unnecessary code. The skewed distribution is a strong signal that the latter could play an important role in further optimizations (e.g. lowering APK size by removing unneeded dex bytecode).

Future Development

We're excited about the improvements that ART optimizing profiles has shown, and we'll be growing this concept more in the future. Building a profile of code per app opens opportunities for even more application improvements. Data can be used by developers to improve the app based on what's relevant and important for their end users. Using the information collected in profiles, code can be re-organized or trimmed for better efficiency. Developers can potentially use App Bundles to split their features based on their use and avoid shipping unnecessary code to their users. We've already seen great improvements in app startup time, and hope to see additional benefits coming from profiles to make developers' lives easier while providing better experiences for our users.

AOSP Application Updates

Posted by Raman Tenneti, AOSP Software Engineer and Ally Sillins, AOSP Program Manager

When we started the Android Open Source Project (AOSP) 10 years ago, we included some basic applications in the AOSP build for three main purposes:

  1. to provide a usable set of applications for someone building an Android device from our AOSP
  2. to serve as a demonstration for the nascent Android app developer community, showcasing how they should build some of these applications
  3. to provide, as part of the platform, functionality to other Android applications that interact with them through standard Android APIs like the common intents

However, as the Android ecosystem has matured over time, we've noticed a healthy growth of alternative applications - both open source and proprietary implementations - developed by the developer community. These alternative applications are not only capable of serving the first two purposes, but often showcase a richer set of features demonstrating the power of Android. Late last year, we began to clean up these applications in AOSP to focus more effectively on the last purpose — their role in providing functionality to other Android applications as part of the platform.

To date, the following 3 apps have been cleaned up: Music, Calendar, and Calculator. See below for details on these updates. Going forward, you can expect to see similar efforts with the other applications in the AOSP repository.

As always, we're excited to hear your feedback on the developer website or through our AOSP forum.

Music Application

AOSP's Music app can now play back music, one file at a time, and exposes itself as an intent handler for android.media.browse.MediaBrowserService. The app has controls to play and pause, and a slider for moving forward and backward. Features removed include: Music Icon, Artists, Albums, Songs, Playlists, Search, and Settings.
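
As a quick illustration of that platform role, another app can browse the Music app's media tree through the standard MediaBrowser client API. A hedged Kotlin sketch follows; the component name is purely illustrative, and in practice you would discover the service via PackageManager.

lateinit var browser: MediaBrowser

browser = MediaBrowser(
    context,
    // Illustrative component name, not the app's real one.
    ComponentName("com.android.music", "com.android.music.MusicBrowserService"),
    object : MediaBrowser.ConnectionCallback() {
        override fun onConnected() {
            // Browse the service's media tree starting at its root.
            browser.subscribe(browser.root, object : MediaBrowser.SubscriptionCallback() {
                override fun onChildrenLoaded(
                    parentId: String,
                    children: MutableList<MediaBrowser.MediaItem>
                ) {
                    children.forEach { Log.d("MusicClient", "Item: ${it.mediaId}") }
                }
            })
        }
    },
    null
)
browser.connect()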

Calendar Application

AOSP's Calendar app now exposes itself as an intent handler for calendar events. New events cannot be created, and existing events cannot be edited or deleted. The following features have been deleted: support for multiple accounts, reminders, and settings. In addition, some features remain that are not strictly needed for platform functionality: views for day, week, and month. This app may be further simplified in the future.

Calculator App

The Calculator application is a standalone app that does not function as part of the platform, and hence it has been removed from the AOSP build. However, the application will continue to exist as a separate open source project.

Changes to the Google Play Developer API

Posted by Vlad Radu, Product Manager and Nicholas Lativy, Software Engineer

The Google Play Developer API allows you to automate your in-app billing and app distribution workflows. At Google I/O '18, we introduced version 3 of the API, which allows you to transactionally start, manage, and halt staged releases on all tracks, through production, open testing, closed testing (including the new additional testing tracks), and internal testing.

Updating from versions 1 and 2 to the latest version 3

In addition to these new features, version 3 also supports all the functionality of previous versions, improving and simplifying how you manage workflows. Starting December 1, 2019, versions 1 and 2 of the Google Play Developer API will no longer be available, so you will need to update to version 3 ahead of this date.

Migrating to version 3

If you use the Google Play API client libraries (available for Java, Python, and other popular languages), we recommend upgrading to their latest versions, which already support version 3 of the API. In many cases, changing the version of the client library should be all that is necessary. However, you may also need to update specific code references to the version of the API in use - see examples in our samples repository.

Many third-party plugins are already using version 3 of the API. If you use a plugin that does not support version 3 you will need to contact the maintainer. You will start seeing warnings in the Google Play Console in mid-May if we detect that your app is still using version 1 and version 2 endpoints.

For version 1 users

If you currently use version 1 of the API, you may also need to link your API project to the Google Play Console before converting to version 3. Learn more about this process.

Going forward

We hope you benefit from the new features of the Google Play Developer API, and we look forward to your continued feedback to help us improve the publishing experience on Google Play.


The latest Android App Bundle updates including the additional languages API

Posted by Wojtek Kaliciński, Developer Advocate, Android

Last year, we launched Android App Bundles and Google Play's Dynamic Delivery to introduce modular development, reduce app size and streamline the release process. Since then, we've seen developers quickly adopt this new app model in over 60,000 production apps. We've been excited to see developers experience significant app size savings and reductions in the time needed to manage each release, and have documented these benefits in case studies with Duolingo and redBus.

Thank you to everyone who took the time to give us feedback on our initial launch. We're always open to new ideas, and today, we're happy to announce some new improvements based on your suggestions:

  • A new additional languages install API, which supports in-app language pickers
  • A streamlined publishing process for instant-enabled app bundles
  • A new enrollment option for app signing by Google Play
  • The ability to permanently uninstall dynamic feature modules that are included in your app's initial install


Additional languages API

When you adopt the Android App Bundle as the publishing format for your app, Google Play is able to optimize the installation by delivering only the language resources that match the device's system locales. If a user changes the system locale after the app is installed, Play automatically downloads the required resources.

Some developers choose to decouple the app's display language from the system locale by adding an in-app language switcher. With the latest release of the Play Core library (version 1.4.0), we're introducing a new additional languages API that makes it possible to build in-app language pickers while retaining the full benefits of smaller installs provided by using app bundles.

With the additional languages API, apps can now request the Play Store to install resources for a new language configuration on demand and immediately start using it.

Get a list of installed languages

The app can get a list of languages that are already installed using the SplitInstallManager#getInstalledLanguages() method.

val splitInstallManager = SplitInstallManagerFactory.create(context)
val langs: Set<String> = splitInstallManager.installedLanguages

Requesting additional languages

Requesting an additional language is similar to requesting an on demand module. You can do this by specifying a language in the request through SplitInstallRequest.Builder#addLanguage(java.util.Locale).

val installRequestBuilder = SplitInstallRequest.newBuilder()
installRequestBuilder.addLanguage(Locale.forLanguageTag("pl"))
splitInstallManager.startInstall(installRequestBuilder.build())

The app can also monitor install success with callbacks and monitor the download state with a listener, just like when requesting an on demand module.

Remember to handle the SplitInstallSessionStatus.REQUIRES_USER_CONFIRMATION state. Please note that there was an API change in a recent Play Core release, which means you should use the new SplitInstallManager#startConfirmationDialogForResult() together with Activity#onActivityResult(). The previous method of using SplitInstallSessionState#resolutionIntent() with startIntentSender() has been deprecated.
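
For reference, here is a small Kotlin sketch of wiring that up; splitInstallManager, activity, and REQUEST_CODE are assumed to be defined elsewhere in your code.

val listener = SplitInstallStateUpdatedListener { state ->
    when (state.status()) {
        SplitInstallSessionStatus.DOWNLOADING ->
            Log.d("Lang", "Downloaded ${state.bytesDownloaded()} of ${state.totalBytesToDownload()}")
        SplitInstallSessionStatus.REQUIRES_USER_CONFIRMATION -> {
            // Shows Play's confirmation dialog; the outcome arrives in onActivityResult().
            splitInstallManager.startConfirmationDialogForResult(state, activity, REQUEST_CODE)
        }
        SplitInstallSessionStatus.INSTALLED ->
            Log.d("Lang", "Language resources installed")
        else ->
            Log.d("Lang", "Status: ${state.status()}")
    }
}
splitInstallManager.registerListener(listener)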

Check out the updated Play Core Library documentation for more information on how to access the newly installed language resources in your activity.

We've also updated our dynamic features sample on GitHub with the additional languages API, including how to store the user's language preference and apply it to your activities at startup.

Please note that while the additional languages API is now available to all developers, on demand modules are in a closed beta for the time being. You can experiment with on demand modules in your internal, open, and closed test tracks, while we work with our partners to make sure this feature is ready for production apps.

Instant-enabled App Bundle

In Android Studio 3.3, we introduced a way to build app bundles that contain both the regular, installed version of your app as well as a Google Play Instant experience for modules marked with the dist:instant="true" attribute in their AndroidManifest.xml:

<manifest ... xmlns:dist="http://schemas.android.com/apk/distribution">
    <dist:module dist:instant="true" />
    ...
</manifest>

Even though you could use a single project to generate the installed and instant versions of your app, up until now, developers were still required to use product flavors in order to build two separate app bundles and upload both to Play.

We're happy to announce that we have now removed this restriction. It's now possible to upload a single, unified app bundle artifact, containing modules enabled for the instant experience. This functionality is now available for everyone.

After you build an instant-enabled app bundle, upload it to any track on the Play Console, and you'll be able to select it when creating a new instant app release. This also means that the installed and instant versions of your app no longer need different version codes, which will simplify the release workflow.

Opt in to app signing by Google Play

You need to enable app signing by Google Play to publish your app using an Android App Bundle and automatically benefit from Dynamic Delivery optimizations. It is also a more secure way to manage your signing key, which we recommend to everyone, even if you want to keep publishing regular APKs for now.

Based on your feedback, we've revamped the sign-up flow for new apps to make it easier to initialize the key you want to use for signing your app.

Now developers can explicitly choose to upload their existing key without needing to upload a self-signed artifact first. You can also choose to start with a key generated by Google Play, so that the key used to locally sign your app bundle can become your upload key.

Read more about the new flow.

Permanent uninstallation of install time modules

We have now added the ability to permanently uninstall dynamic feature modules that are included in your app's initial install.

This is a behavior change, which means you can now call the existing SplitInstallManager#deferredUninstall() API on modules that set onDemand="false". The module will be permanently uninstalled, even when the app is updated.

This opens up new possibilities for developers to further reduce the installed app size. For example, you can now uninstall a heavy sign-up module or any other onboarding content once the user completes it. If the user navigates to a section of your app that has been uninstalled, you can reinstall it using the standard on demand modules install API.
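
For example, once a user finishes onboarding you might schedule that module for removal. A minimal sketch, where "onboarding" is a hypothetical module name:

splitInstallManager.deferredUninstall(listOf("onboarding"))
    .addOnSuccessListener { Log.d("Modules", "Uninstall of 'onboarding' scheduled") }
    .addOnFailureListener { e -> Log.e("Modules", "Deferred uninstall failed", e) }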

We hope you enjoy these improvements and test them out in your apps. Continue to share your feedback as we work to make these features even more useful for you!


Google Mobile Developer Day at Game Developers Conference 2019

Posted by Kacey Fahey, Developer Marketing, Google Play & Android

We're excited to host the Google Mobile Developer Day at Game Developers Conference 2019. We are taking this opportunity to share best practices and our plans to help your games businesses, which are fuelling incredible growth in the global mobile games market. According to Newzoo, mobile games revenue is projected to account for nearly 60% of global games revenue by 2021. The drivers of this growth come in many forms, including more developers building great games, new game styles blurring the lines of traditional genres, and the explosion of gaming in emerging markets - most notably in India.

Image Source: GamesIndustry.biz

To support your growth, Google is focused on improving the game development experience on Android. We are investing in tools to give you better insights into what is happening on devices, as well as in people and teams to address your feedback about the development process, graphics, multiplayer experiences, and more.

We have some great updates and new tools to improve game discovery and monetization on Google Play, which we also shared today during our Mobile Developer Day:

Pre-registration now in general availability

Starting today, we are launching pre-registration for general availability. Set up a pre-registration campaign in the Google Play Console and start marketing your games to build awareness before launch. Users who pre-register receive a notification at launch, which helps increase day one installs.

Google Play Instant gaining adoption

We have seen strong adoption of Google Play Instant with 3x growth in the number of instant games and 5x growth in the number of instant sessions over the last six months. Instant experiences allow players to tap the 'Try Now' button on your store listing page and go straight to a demo experience in a matter of seconds, without installing. Now, they're even easier to build with Cocos and Unity plug-ins and an expanded implementation partner program. Discover the latest updates on Google Play Instant.

Android App Bundles momentum and new large download size threshold

Over 60K apps and games on Google Play are now using the Android App Bundle publishing format, which is supported in Android Studio, Unity, and Cocos Creator. The app bundle uses Google Play's Dynamic Delivery to deliver a smaller, optimized APK containing only the resources needed for a specific device.

To better support high quality game experiences and reflect improved devices, we've also increased the size limit for APKs generated from app bundles to 150MB and raised the threshold for large download user warnings on the Google Play Store to 150MB, from 100MB.

Improved tools in the Google Play Console

Store listing experiments let you A/B test changes to your store listing on actual Play Store visitors. We recently rolled out improvements, introducing two new metrics - first time installers and D1 retained users - to more accurately reflect the performance of your store listings. These two new metrics are now reported with hourly intervals and are available via email notifications, letting you see results faster and track performance better.

Country targeted store listings allow you to tailor your app's store listing to appeal to users in different countries. You can customize the app title, icon, descriptions and graphic assets, allowing you to better appeal to users in specific target markets. For example, you can now tailor your store listing with different versions of the English language for users in India versus the United States.

Rewarded ads give players the choice to watch an advertisement in exchange for in-app items. With rewarded ads in Google Play, you can now create and manage rewarded ads through the Google Play Console. No additional SDK integrations are required.

We hope you try some of these new tools and keep sharing ideas so we can make Android and Google Play a better place to grow your business. We are committed to continue improving the platform and building tools that better serve the gaming community.

Get started today by visiting two new resources: a hub for developers interested in creating games on Android, and games.withgoogle.com for developers looking to connect and scale their business across Google. Many of these updates and resources come from community suggestions, so sign up for our monthly newsletter to stay informed.


Introducing a new Google Play app and game icon specification


Posted by Steve Suppe, Product Manager, Google Play

As part of our focus and dedication to improving the Google Play Store experience for our users, we are introducing new design specifications for your app icons.

Left to right: original icon, new icon (example), original icon in legacy mode


As of early April, you will be able to upload new icons to the Google Play Console and confirm you are compliant with the new specification. Original icons are still accepted in the Google Play Store during this time. As of May 1st, developers will no longer be able to upload icons in the Play Console which do not meet the new specifications, although existing original icons in the Google Play Store during this period can remain unchanged.

By June 24, we require you to:

  1. Update your icon to the new specification.
  2. Upload your icon to Play Console.
  3. Confirm in Play Console that your icon meets the new specification.

We highly recommend that you update your icons and confirm they meet the new specification as soon as possible to ensure that you provide the highest quality experience for users.

What exactly is changing?

  • Icon assets will remain the same size (512 x 512), but transparent backgrounds will no longer be allowed.
  • Google Play on Android and Chrome OS will dynamically apply rounded corners and drop shadows to icons. The corner radius will be 20% of the icon size, to ensure consistency at different sizes.
  • There will be no changes to Google Play on other form factors (TV, Wear, Auto).
  • Note this does not affect your APK launcher icons for Android.
Timeline of changes

Early April: You can start uploading your new icons in Play Console and confirm they meet the new specification.
  • Original icons will continue to display correctly in Google Play.
  • New icons will display correctly in Google Play.

May 1st: Any new icons uploaded in Play Console must be confirmed as meeting the new specification.
  • Original icons will continue to display correctly in Google Play.
  • New icons will display correctly in Google Play.

June 24th: Original icons are converted to "legacy mode." You must confirm that any new icons uploaded in Play Console meet the new specification.
  • Original icons will be automatically converted to "legacy mode" icons.
  • New icons render correctly in the Google Play Store.

These updates will help us all provide a more unified and consistent look and feel for Google Play, allowing us to better showcase your apps and games and provide a higher quality user experience.

We will be keeping you up-to-date with these changes in the coming months - so look out for more updates. In the meantime, check out our new icon design specifications.


Giving users more control over their location data

Posted by Jen Chai, Product Manager

Location data can deliver amazing, rich mobile experiences for users on Android such as finding a restaurant nearby, tracking the distance of a run, and getting turn-by-turn directions as you drive. Location is also one of the most sensitive types of personal information for a user. We want to give users simple, easy-to-understand controls for what data they are providing to apps, and yesterday, we announced in Android Q that we are giving users more control over location permissions. We are delighted by the innovative location experiences you provide to users through your apps, and we want to make this transition as straightforward for you as possible. This post dives deeper into the location permission changes in Q, what it may mean for your app, and how to get started with any updates needed.

Previously, a user had a single control to allow or deny an app access to device location, which covered location usage by the app both while it was in use and while it wasn't. Starting in Android Q, users have a new option to give an app access to location only when the app is being used; in other words, when the app is in the foreground. This means users will have a choice of three options for providing location to an app:

  • "All the time" - this means an app can access location at any time
  • "While in use" - this means an app can access location only while the app is being used
  • "Deny" - this means an app cannot access location

Some apps or features within an app may only need location while the app is being used. For example, if a feature allows a user to search for a restaurant nearby, the app only needs to understand the user's location when the user opens the app to search for a restaurant.

However, some apps may need location even when the app is not in use. For example, an app might automatically track the mileage you drive for tax filing, without requiring you to interact with it.

The new location control allows users to decide when device location data is provided to an app and prevents an app from getting location data that it may not need. Users will see this new option in the same permissions dialog that is presented today when an app requests access to location. This permission can also be changed at any time for any app from Settings > Location > App permission.

Here's how to get started

We know these updates may impact your apps. We respect our developer community, and our goal is to approach any change like this very carefully. We want to support you as much as we can by (1) releasing developer-impacting features in the first Q Beta to give you as much time as possible to make any updates needed in your apps and (2) providing detailed information in follow-up posts like this one as well as in the developer guides and privacy checklist. Please let us know if there are ways we can make the guides more helpful!

If your app has a feature requiring "all the time" permission, you'll need to add the new ACCESS_BACKGROUND_LOCATION permission to your manifest file when you target Android Q. If your app targets Android 9 (API level 28) or lower, the ACCESS_BACKGROUND_LOCATION permission will be automatically added for you by the system if you request either ACCESS_FINE_LOCATION or ACCESS_COARSE_LOCATION. A user can decide to provide or remove these location permissions at any time through Settings. To maintain a good user experience, design your app to gracefully handle when your app doesn't have background location permission or when it doesn't have any access to location.

Users will also be more likely to grant the location permission if they clearly understand why your app needs it. Consider asking for the location permission from users in context, when the user is turning on or interacting with a feature that requires it, such as when they are searching for something nearby. In addition, only ask for the level of access required for that feature. In other words, don't ask for "all the time" permission if the feature only requires "while in use" permission.
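
Putting those recommendations together, a feature that only needs foreground location can check and request just that level when the user taps it. A minimal Kotlin sketch using the AndroidX compat helpers, where NEARBY_SEARCH_REQUEST is an arbitrary request code you define:

fun requestForegroundLocation(activity: Activity) {
    val granted = ContextCompat.checkSelfPermission(
        activity, Manifest.permission.ACCESS_FINE_LOCATION
    ) == PackageManager.PERMISSION_GRANTED

    if (!granted) {
        // "While in use" access is enough for a nearby-restaurant search,
        // so ACCESS_BACKGROUND_LOCATION is deliberately not requested here.
        ActivityCompat.requestPermissions(
            activity,
            arrayOf(Manifest.permission.ACCESS_FINE_LOCATION),
            NEARBY_SEARCH_REQUEST
        )
    }
}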

To learn more, read the developer guide on how to handle the new location controls.

Android Jetpack Navigation Stable Release

Posted by Ian Lake, Software Engineering Lead & Jisha Abubaker, Product Manager

Cohesive tooling and guidance for implementing predictable in-app navigation

Today we're happy to announce the stable release of the Android Jetpack Navigation component.

The Jetpack Navigation component's suite of libraries, tooling and guidance provides a robust, complete navigation framework, freeing you from the challenges of implementing navigation yourself and giving you certainty that all edge cases are handled correctly.

With the Jetpack Navigation component you can:

  • Handle basic user actions like Up & Back buttons so that they work consistently across devices and screens.
  • Allow users to land on any part of your app via deep links and build consistent and predictable navigation within your app.
  • Improve type safety of arguments passed from one screen to another, decreasing the chances of runtime crashes as users navigate in your app.
  • Add navigation experiences like navigation drawers and bottom navigation consistent with the Material Design guidelines.
  • Visualize and manipulate your navigation flows easily with the Navigation Editor in Android Studio 3.3

The Jetpack Navigation component adheres to the Principles of Navigation, providing consistent and predictable navigation no matter how simple or complex your app may be.

Simplify navigation code with Jetpack Navigation Libraries

The Jetpack Navigation component provides a framework for in-app navigation that makes it possible to abstract away the implementation details, keeping your app code free of navigation boilerplate.

To get started with the Jetpack Navigation component in your project, add the Navigation artifacts available on Google's Maven repository in Java or Kotlin to your app's build.gradle file:

dependencies {
    def nav_version = "2.0.0"

    // Java
    implementation "androidx.navigation:navigation-fragment:$nav_version"
    implementation "androidx.navigation:navigation-ui:$nav_version"

    // Kotlin KTX
    implementation "androidx.navigation:navigation-fragment-ktx:$nav_version"
    implementation "androidx.navigation:navigation-ui-ktx:$nav_version"
}

Note: If you have not yet migrated to androidx.*, the Jetpack Navigation stable component libraries are also available as android.arch.* artifacts in version 1.0.0.

navigation-runtime : This core library powers the navigation graph, which provides the structure of your in-app navigation: the screens or destinations that make up your app and the actions that link them. You can control how you navigate to destinations with a simple navigate() call. These destinations may be fragments, activities or custom destinations.

navigation-fragment: This library builds upon navigation-runtime and provides out-of-the-box support for fragments as destinations. With this library, fragment transactions are now handled for you automatically.

navigation-ui: This library allows you to easily add navigation drawers, menus and bottom navigation to your app consistent with the Material Design guidelines.

Each of these libraries provides an Android KTX artifact with the -ktx suffix that builds upon the Java API, taking advantage of Kotlin-specific language features.
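
In practice, moving to another destination from a fragment is a single call on the NavController. A minimal sketch; the action id comes from your own navigation graph, so the one below is hypothetical.

// Inside a Fragment hosted by a NavHostFragment; navigation-fragment-ktx
// provides the findNavController() extension.
findNavController().navigate(R.id.action_home_to_details)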

Tools to help you build predictable navigation workflows

Available in Android Studio 3.3 and above, the Navigation Editor lets you visually create your navigation graph, allowing you to manage user journeys within your app.

With integration into the manifest merger tool, Android Studio can automatically generate the intent filters necessary to enable deep linking to a specific screen in your app. With this feature, you can associate URLs with any screen of your app by simply setting an attribute on the navigation destination.

Navigation often requires passing data from one screen to another. For example, your list screen may pass an item ID to a details screen. Many of the runtime exceptions during navigation have been attributed to a lack of type safety guarantees as you pass arguments. These exceptions are hard to replicate and debug. Learn how you can provide compile time type safety with the Safe Args Gradle Plugin.
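
With the plugin applied, arguments become part of a generated Directions class, so a mistyped or missing argument fails at compile time instead of at runtime. A sketch, where ListFragmentDirections and actionListToDetails() are hypothetical names generated from a hypothetical graph whose details destination declares an itemId argument:

// Navigate with a typed, compile-time-checked argument instead of a raw Bundle.
val action = ListFragmentDirections.actionListToDetails(42L /* itemId */)
findNavController().navigate(action)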

Guidance to get it right on the first try

Check out our brand new set of developer guides that encompass best practices to help you implement navigation correctly.

What developers say

Here's what Emery Coxe, Android Lead @ HomeAway, has to say about the Jetpack Navigation component :

"The Navigation library is well-designed and fully configurable, allowing us to integrate the library according to our specific needs.

With the Navigation Library, we refactored our legacy navigation drawer to support a dynamic, runtime-based configuration using custom views. It allowed us to add / remove new screens to the top-level experience of our app without creating any interdependencies between discreetly packaged modules.

We were also able to get rid of all anti-patterns in our app around top-level navigation, removing explicit casts and hardcoded assumptions to instead rely directly on Navigation. This library is a fundamental component of modern Android development, and we intend to adopt it more broadly across our app moving forward."

Get started

Check out the migration guide and the developer guide to learn how you can get started using the Jetpack Navigation component in your app. We also offer a hands-on codelab and a sample app.

Also check out Google's Digital Wellbeing to see another real-world example of in-app navigation using the Android Jetpack Navigation component.

Feedback

Please continue to tell us about your experience with the Navigation component. If you have specific feedback on features or if you run into any issues, please file a bug via one of the following links: