Android Developers Blog

An Open Handset Alliance Project

Excelliance Tech: moving to new Android dynamic resource loading APIs for long-term compatibility

This blog post is a collaboration between Google and Excelliance Tech, authored by Zhuo Chen with support from Zhihai Wang and Gao Huang from Excelliance Tech.

Excelliance Tech improved the stability and compatibility of their LeBian SDK by moving away from non-SDK APIs, toward stable, official APIs. Their collaboration with the Android team during the process also led to a new public API for resource loading that all developers can use - the ResourcesLoader API in Android 11.

Helping game developers keep users engaged

Games are becoming increasingly complex, and a loading progress bar is not only a countdown to a new adventure, but also a bridge that connects players and developers.

Players want the game to load right away, so "loading" has its own priorities: resources that will be used in the first few minutes need to be packed into the APK, while the rest of the content can be downloaded in the background after the game starts.

Developers are always creating new content for their games, so "change" is the only constant: different campaigns bring different launch screens and themes, keeping the game experience fresh for players.

Excelliance Tech’s LeBian (乐变) game assets streaming service helps game developers meet players’ needs by loading fresh resources dynamically while the game is loading or being played.

Meteor, Butterfly And a Sword (流星群侠传) by NetEase Games, Duoduo Auto Chess (多多自走棋) by Dragonest Game, Langrisser (梦幻模拟战) by ZlongGames, and Junior Three Kingdom 2 (少年三国志 2) by Yoozoo Games - these games are created by different developers and each has its own look and feel, but they have one thing in common: they all use the LeBian game streaming service to load resources.

The resource loading technology is so useful that Excelliance Tech is even using it in the LeBian SDK itself, bringing a better experience for developers. Dynamic resource loading makes the SDK much easier to use. By dynamically updating its internal resources when needed, the library doesn’t require developers to update the SDK for new resources.

Before Android 11 introduced the ResourcesLoader API, Excelliance Tech had to build their dynamic resource loading capability the hard way, using non-SDK interfaces.

Building the initial product

When Excelliance was first building their product, Android did not offer public APIs for the dynamic resource loading use-case. The team did what they could, but ended up using non-SDK interfaces to add the external resources. While this met the technical need initially, the implementation was fragile - it depended on non-SDK interfaces, which don’t have the same compatibility guarantees as official SDK APIs and can change without notice.

As a result, Excelliance found that compatibility issues would surface unexpectedly as new versions of Android were released, requiring additional testing and development to ensure the stability of the product. Over many iterations, it took the Excelliance team six engineer-months and a lot of code to stabilize their solution, while knowing that it might break again in the next Android release. With Android tightening restrictions on non-SDK interfaces to achieve better stability and app compatibility, relying on those interfaces was no longer an option.

Working toward a sustainable solution

As the Android team increased its focus on moving apps to public APIs, Excelliance saw an opportunity to migrate to a stronger foundation. They reached out to the Android team to give their feedback and highlighted their use case and need for public SDK APIs.

Over time, their collaboration led to the development of the ResourcesLoader public API that’s available for the first time in Android 11. Excelliance Tech has already moved to the new ResourcesLoader API and they’ve seen better productivity and product quality as a result. Excelliance believes the ResourcesLoader API provides advantages including the following:

  • Easy to use. The development team migrated the solution to the new API in 2 days, testing included.
  • No performance loss. In some cases, the loading speed even increased because ResourcesLoader can load uncompressed resources much faster.
  • Easy to develop. Before using the ResourcesLoader API, the team had to assign a senior engineer to 1) understand how AssetManager works, 2) find private APIs and work out how they behave on different Android versions, and 3) learn the zip file structure. Now a junior engineer who can read the API documentation is all it takes.
  • Much less code. Before the ResourcesLoader API, the solution took more than 1,000 lines of code; now it takes fewer than 50, with the essential code down to just a few lines.
  • Forward compatibility. By using official public APIs that will continue to be supported by the Android team, the solution will have much better compatibility on future Android platforms.
The code below runs in a Context such as an Activity and requires API level 30 (Android 11) or above:

// Directory that holds the dynamically delivered resources.
String sdkroot = getApplicationInfo().dataDir + "/lebian";
ResourcesLoader rl = new ResourcesLoader();
// Load resources from that directory; the second argument is an optional AssetsProvider.
rl.addProvider(ResourcesProvider.loadFromDirectory(sdkroot, null));
// Attach the loader so subsequent resource lookups can resolve the new files.
Resources res = getResources();
res.addLoaders(rl);
final AssetManager assetManager = res.getAssets();

After moving to the new ResourcesLoader API, the essential code takes just a few lines, down from hundreds of lines spread across a number of source files.

Improving performance

Excelliance Tech did a comparison test, loading 16,028 files (uncompressed 1.47GB, compressed 1.36GB) in 4 ways:

  1. Load resources directly from APK
  2. Load resources using non-SDK interfaces
  3. Load APK using ResourcesLoader
  4. Load resources directly from a directory using ResourcesLoader

Resources are compressed in options 1, 2, and 3, and the average loading time for each is around 19 seconds. Option 4 loads uncompressed resources directly from a directory using ResourcesLoader, and the average loading time is about 3 seconds - roughly a 6x performance improvement!

Summarizing the overall impact of ResourcesLoader, Huang Gao, CEO & Product Lead at Excelliance Tech, said “The new ResourcesLoader API dramatically reduces development and maintenance costs and allows us to focus more on product and business innovation."

Co-creating the future

The Excelliance Tech team.

"On the Android platform, we've created some valuable products and services, which makes us want to invest more to create innovative products", Excelliance Tech stated, "We hope to have more opportunities to participate in the building of the Android ecosystem and contribute our efforts to make a better Android both for consumers and developers."

Excelliance Tech made an investment for the long-term compatibility of the LeBian SDK. Moving to the ResourcesLoader API has already yielded stability and performance benefits, reduced the complexity of their code, and reduced risks of future compatibility issues as Android rolls out new versions of the platform. The ResourcesLoader API is part of Android 11’s public APIs, benefitting the entire Android developer community.

11 Weeks of Android: Testing app compatibility in Android 11

Posted by Diana Wong, Product Manager, Android

This blog post is part of a weekly series for #11WeeksOfAndroid. Each week we’re diving into a key area of Android so you don’t miss anything. This week, we’re spotlighting Android 11 compatibility; here’s a look at what you should know.

Android 11 compatibility week

This week in 11 Weeks of Android, we’re highlighting Android 11 Compatibility, a theme that’s important for all developers. For Android, the term app compatibility means that your app runs properly on a specific version of Android, typically the latest version.

We’re sharing resources to help you with compatibility testing here, and you can follow Android Developers on Twitter and YouTube to catch helpful content and materials in this area all this week!

Making app compatibility easier in Android 11

With each release, we’re working to reduce the work you’ll need to do to get your apps ready. In Android 11, we’ve added new processes, developer tools, and release milestones to minimize the impact of platform updates and make it easier for apps to stay compatible.

  • Minimizing the impact of behavior changes - We’re making a conscious effort to minimize platform changes that could affect apps by making them opt-in, wherever possible, until you set targetSdkVersion to Android 11 in your app. If you are distributing through Google Play, you’ll have more than a year to opt in to these changes.
  • Easier testing and debugging - To help you test for compatibility, we’ve made many of the breaking changes toggleable - meaning that you can force-enable or disable the changes individually from Developer options or adb. With this change, there’s no longer a need to change targetSdkVersion or recompile your app for basic testing. We've also made it easier to use Android Studio to run automated tests.
  • Restrictions on non-SDK interfaces - As part of our ongoing effort to gradually move developers away from non-SDK APIs, we’ve updated the lists of restricted non-SDK interfaces, and as always your feedback and requests for public API equivalents are welcome.
  • Dynamic resource loader - As part of their migration away from non-SDK interfaces, developers asked us for a public API to load resources and assets dynamically at runtime. We’ve now added a Resource Loader framework in Android 11, and thank you to the developers who gave us this input!

You’ll hear more about these topics throughout the week. To get things started, read on to learn more about how we're making it easier to test and debug your app in Android 11.

Testing on Android 11

Testing your app for a new Android release can be a challenging task, especially when your app might be affected by multiple platform changes. Many questions can come up:

  • How do you determine which areas of your app might be affected?
  • Should you test by changing the targetSdkVersion, and what’s the easiest way to do so?
  • Once you begin testing, how do you isolate the issues that are causing problems?
  • As you develop and test against the latest version of Android, how do you verify that your app continues to provide a seamless experience across other devices with different form factors and lower API levels?

We've gotten a lot of great feedback from our developer community about these and other questions. In Android 11, we've added new tools to the platform and new features to Android Studio that can make the testing process a little easier for you.

New tools for testing platform changes

Like previous releases, Android 11 includes some changes in the Android platform that may impact your apps. Although these changes are critical to improving the platform, we try to minimize their immediate impact by putting as many changes as possible behind the platform's latest targetSdkVersion. In Android 11, we've also added many of these platform changes to a new compatibility framework.

What is the compatibility framework?

When a change is part of the compatibility framework, you can access new developer tools that help you test and debug your app against that change.

For instance, changes that are part of the compatibility framework are toggleable, so you can force-enable or disable the changes individually, either from a device's developer options or using ADB (Android Debug Bridge). The Android platform automatically adapts its internal API logic, so you don't need to change your targetSdkVersion or recompile your app to perform basic testing. In addition, you can isolate individual changes from each other to reduce the time it takes to discover and debug issues in your app.

Choosing a change to test against

Before you start toggling changes on or off, you should read through the lists of behavior changes to determine which changes might affect your app. Changes that are part of the compatibility framework have a corresponding Change ID and Change Name listed before the description of the change.

Generally, we recommend that you start testing with behavior changes that affect all apps because those changes can potentially affect your app regardless of targetSdkVersion. However, let's take a look at a change that is gated by targetSdkVersion so that you can see how to test against these changes without recompiling your app with a different target SDK.

Take a look at the change to background location access. This change affects apps that request all-the-time access to background location. If your app is affected by this change, it might be a great candidate to start testing with. The change's name is BACKGROUND_RATIONALE_CHANGE_ID and the change's ID is 147316723. You'll use this information to enable this change before you test your app against it.

Isolate the change

After deciding which change you want to test against, you can toggle that change on or off using the developer options. To get to the developer options, open your device's Settings app and navigate to System > Advanced > Developer options > App Compatibility Changes.

Toggleable platform changes in the developer options, with the background location access change enabled

In this case, BACKGROUND_RATIONALE_CHANGE_ID is the only change enabled, to minimize the scope of possible causes for any issues that your app might encounter.

You can also use logcat or ADB to identify which changes are enabled and use ADB to toggle changes on or off. Note that you can only toggle changes when using a debuggable app.
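
For example, here is a sketch of the ADB workflow for the background location change discussed above (the package name is hypothetical, and the commands assume a debuggable build):

# Force-enable the change for a debuggable app, by change ID or by change name
adb shell am compat enable 147316723 com.example.myapp
adb shell am compat enable BACKGROUND_RATIONALE_CHANGE_ID com.example.myapp

# Disable the change again, or reset it to its default state
adb shell am compat disable 147316723 com.example.myapp
adb shell am compat reset 147316723 com.example.myapp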

Test and debug your app

After enabling a change, you can test and debug your app using your typical testing workflows. If you encounter issues, check your logs to help determine the cause of the problem. If it's not clear whether the problem is caused by the platform change that's enabled, try disabling that change and then retest that area of your app.

Learn more

Check out this week's video on testing platform changes in Android 11 to see another example and read the documentation on developer.android.com.

New tools in Android Studio for testing app compatibility

In addition to manual testing on the new platform, we’ve also made it easier to use Android Studio to run automated tests on the latest OS.

Starting in Android Studio 4.2, instrumentation tests can now be run across multiple physical or virtual devices in parallel. When running tests, there’s now the option to select Multiple Devices in the target device dropdown menu.

This feature is designed to help you catch issues as early as possible in your development cycle and allow you to compare differences between different builds of Android. The results of these tests can be investigated using a new Test Matrix under View > Tool Windows > Run.

The new Test Matrix lets you filter your test results by status, device, and API level.

Learn more

Check out this week's video on testing app compatibility with Android Studio and read the documentation on developer.android.com.

Next steps

We encourage you to try out these new tools and send us feedback about how they're working for you. We hope these tools help make it easier for you to test your app for Android 11.

Also, the Android engineering team will host a Reddit AMA on r/androiddev on Thursday, July 9 at 12:00 PM PST, to answer your technical questions about Android 11. See this post for details and to submit your questions.

Bringing modern storage to Viber’s users

This blog post is a collaboration between Google and Viber, authored by Kseniia Shumelchyk from Google and Anton Novikov and Sergey Kozlov from Viber.

As a messaging app, Viber needs to store, process and share a significant amount of data. Viber aims to give its users an easy, fast, reliable and secure communication platform by providing an intuitive interface and operating with files in a privacy-preserving way. We believe the modern scoped storage paradigm provides this foundation for app developers and users.

Scoped storage was introduced in Android 10 with further improvements in Android 11 to provide better protection to app and user data on a platform level. Due to Viber's complexity, the team opted to incrementally implement the changes that were required to comply with scoped storage.

In this article, we’ll share how Viber handled the migration to scoped storage, focusing on what they did to optimize working with media files and other data in the app.

Managing storage across Android versions

Android’s storage model has evolved to adapt to changing privacy considerations, leading to the changes in the storage system APIs. Let’s take a look at key platform changes that affected the legacy Viber implementation.

Media directories

Scoped storage changes the way that apps store and access files on a device's external storage. Viber needed to evaluate the differences between their existing storage model and the updated platform guidelines, then gradually change the app to work with files in scoped storage. In the meantime, Viber used the requestLegacyExternalStorage flag to temporarily opt out of scoped storage on Android 10 until the app was fully compatible.
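
The temporary opt-out is a single manifest attribute; a minimal sketch (note that the flag is ignored once an app targets Android 11 and runs on Android 11 or later):

<!-- Temporarily opt out of scoped storage while migrating. -->
<application android:requestLegacyExternalStorage="true">
    ...
</application>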

In order to adjust their app experience to scoped storage, Viber now contributes public media files to well-defined media collections using the MediaStore API. This way, the files are accessible in a device gallery, and can be read by other apps with the storage permission. Private media files are stored in the app-specific directory on external storage and are accessed via the internal ContentProvider.
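
As a rough sketch of the public-media path (the display name and MIME type here are hypothetical), contributing an image through MediaStore looks like this:

// Insert a new image into the shared Pictures collection so it appears in the gallery.
ContentValues values = new ContentValues();
values.put(MediaStore.Images.Media.DISPLAY_NAME, "chat_photo.jpg");
values.put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg");
ContentResolver resolver = context.getContentResolver();
Uri uri = resolver.insert(
        MediaStore.Images.Media.getContentUri(MediaStore.VOLUME_EXTERNAL_PRIMARY), values);
try (OutputStream out = resolver.openOutputStream(uri)) {
    // Write the image bytes to 'out'; no storage permission is needed for this.
}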

Storage permissions

The other notable update is related to changes in the storage permissions model: Apps in scoped storage have unrestricted access to their app-specific directories on external storage and can contribute to well-defined media collections without requesting a runtime permission. This change will help Viber provide more granular control to their users:

“This addition supports our efforts to provide our users with the best security and privacy solutions we can provide, supported by the Android OS; users will benefit from this added security later without needing to opt in. We also added a new ‘Save to gallery’ option allowing users to choose whether to make their photos readable by other apps. Because chats may contain private images or videos, it’s important to give users the ability to hide these files from the gallery. This change gives users additional control over the content included in their Viber messages,” said Anton Novikov and Sergey Kozlov from Viber.

Accessing files outside of app-specific directory

Previously, Viber created and consumed files in a custom top level directory and depended on file path access. With scoped storage, saving app files to a top level directory became an anti-pattern, so Viber has followed best practices to update their implementation to store media files from the chats only in locations that are accessible in scoped storage.

However, to reduce the complexity of migration, Viber decided to keep their own top level directory for Android 10 and below, storing only the media files that are not exposed to the device’s Gallery app, while for Android 11 and above this directory is used in read-only mode to provide backward compatibility.

Another use case that Viber has been refining is sharing files in the chats. The updated storage runtime permission gives read access only to the images, videos and audio files that are available through MediaProvider. Starting from Android 11, the only way for Viber to access non-media files created by other apps is by using the Storage Access Framework document picker, which they had already utilized in a different part of their app.

App-specific files within external storage

In the scoped storage environment, app-specific directories on external storage are private and inaccessible to other apps. This change has helped Viber expand its use of external storage for storing private user files:

“We find the change to app-specific directories useful, because it will help ensure that personal chats are protected and backed by platform security,” said Anton Novikov from Viber. Learn more about how to access app-specific files.

Single interface to access storage

Because Viber targets a large audience running on Android 4.2 and above, they introduced an abstraction layer that aids them in managing storage access efficiently across all supported Android versions and with their use cases in mind.

Previously, Viber relied heavily on the File API to access files, including files in legacy storage locations, and stored absolute file paths for entries in the local database to keep the user's conversation history.

To standardize access to this conversation history and thus ensure that users don’t lose access to their files, Viber replaced absolute file paths with content URIs. In the new implementation, the app is accessing files only via content providers:

  • Internal FileProvider for Viber app-specific directories.
  • External file providers available in the Android framework, such as MediaStore or the Storage Access Framework, or those belonging to another app that shares files with Viber through Intent.ACTION_SEND.

By using a consistent ContentProvider layer, the ContentResolver gives the app a unified interface to access the file content.
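
A sketch of that unified read path (the URI string here is illustrative; it may point at Viber's internal FileProvider, MediaStore, or a document shared by another app):

// A content:// URI is stored in the database instead of an absolute file path.
Uri uri = Uri.parse(uriStringFromDatabase);
try (InputStream in = context.getContentResolver().openInputStream(uri)) {
    // Read the media bytes; the same code works regardless of the backing provider.
}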

This approach has also helped Viber optimize the network layer and define a universal Loader abstraction to upload/fetch and to read/store different types of media files like voice messages, chat images and stickers.

Summary

Android 11 further enhances scoped storage, which provides better protection of app and user data and makes the transition easier for developers. It’s amazing to see many apps like Viber migrating to take advantage of scoped storage since Android 10.

We hope Viber’s story is useful and will inspire you to modernize your Android apps as well. Learn more about Android storage use cases and best practices.

System hardening in Android 11

Posted by Platform Hardening Team

In Android 11 we continue to increase the security of the Android platform. We have moved to safer default settings, migrated to a hardened memory allocator, and expanded the use of compiler mitigations that defend against classes of vulnerabilities and frustrate exploitation techniques.

Initializing memory

We’ve enabled forms of automatic memory initialization in both Android 11’s userspace and the Linux kernel. Uninitialized memory bugs occur in C/C++ when memory is used without having first been initialized to a known safe value. These types of bugs can be confusing, and even the term “uninitialized” is misleading. Uninitialized may seem to imply that a variable has a random value; in reality it isn’t random. It has whatever value was previously placed there, and that value may be predictable or even attacker controlled. Unfortunately, this behavior can result in serious vulnerabilities, such as information disclosure bugs like ASLR bypasses, or control flow hijacking via a stack or heap spray. Another possible side effect of using uninitialized values is that advanced compiler optimizations may transform the code unpredictably, as this is considered undefined behavior by the relevant C standards.

In practice, uses of uninitialized memory are difficult to detect. Such errors may sit in the codebase unnoticed for years if the memory happens to be initialized with some "safe" value most of the time. When uninitialized memory results in a bug, it is often challenging to identify the source of the error, particularly if it is rarely triggered.

Eliminating an entire class of such bugs is a lot more effective than hunting them down individually. Automatic stack variable initialization relies on a feature in the Clang compiler that allows initializing local variables with either zeros or a pattern.

Initializing to zero provides safer defaults for strings, pointers, indexes, and sizes. The downsides of zero init are less-safe defaults for return values and exposing fewer bugs where the underlying code relies on zero initialization. Pattern initialization tends to expose more bugs and is generally safer for return values, but less safe for strings, pointers, indexes, and sizes.
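
Under the hood, these modes correspond to Clang's automatic variable initialization flags (a sketch; the exact flag spellings are a compiler detail and may change over time):

clang -ftrivial-auto-var-init=pattern ...   # pattern initialization
clang -ftrivial-auto-var-init=zero ...      # zero initialization (at the time, Clang also
                                            # required an extra acknowledgement flag)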

Initializing Userspace:

Automatic stack variable initialization is enabled throughout the entire Android userspace. During the development of Android 11, we initially selected pattern in order to uncover bugs relying on zero init and then moved to zero-init after a few months for increased safety. Platform OS developers can build with `AUTO_PATTERN_INITIALIZE=true m` if they want help uncovering bugs relying on zero init.

Initializing the Kernel:

Automatic stack and heap initialization were recently merged in the upstream Linux kernel. We have made these features available on earlier versions of Android’s kernel, including 4.14, 4.19, and 5.4. These features enforce initialization of local variables and heap allocations with known values that cannot be controlled by attackers and are useless when leaked. Both features come with a performance overhead, but they also prevent undefined behavior, improving both stability and security.

For kernel stack initialization we adopted the CONFIG_INIT_STACK_ALL from upstream Linux. It currently relies on Clang pattern initialization for stack variables, although this is subject to change in the future.

Heap initialization is controlled by two boot-time flags, init_on_alloc and init_on_free, with the former wiping freshly allocated heap objects with zeroes (think s/kmalloc/kzalloc in the whole kernel) and the latter doing the same before the objects are freed (this helps to reduce the lifetime of security-sensitive data). init_on_alloc is a lot more cache-friendly and has a smaller performance impact (within 2%), so it has been chosen to protect Android kernels.
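
Both flags are ordinary kernel command-line parameters:

init_on_alloc=1   # zero heap objects at allocation time (the option chosen for Android)
init_on_free=1    # additionally wipe objects when they are freed (higher overhead)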

Scudo is now Android's default native allocator

In Android 11, Scudo replaces jemalloc as the default native allocator for Android. Scudo is a hardened memory allocator designed to help detect and mitigate memory corruption bugs in the heap, such as use-after-frees, double-frees, and heap buffer overflows.

Scudo does not fully prevent exploitation, but it does add a number of sanity checks that are effective at strengthening the heap against some memory corruption bugs.

It also proactively organizes the heap in a way that makes exploitation of memory corruption more difficult, by reducing the predictability of the allocation patterns, and separating allocations by sizes.

In our internal testing, Scudo has already proven its worth by surfacing security and stability bugs that were previously undetected.

Finding Heap Memory Safety Bugs in the Wild (GWP-ASan)

Android 11 introduces GWP-ASan, an in-production heap memory safety bug detection tool that's integrated directly into the native allocator Scudo. GWP-ASan probabilistically detects and provides actionable reports for heap memory safety bugs when they occur, works on 32-bit and 64-bit processes, and is enabled by default for system processes and system apps.

GWP-ASan is also available for developer applications via a one line opt-in in an app's AndroidManifest.xml, with no complicated build support or recompilation of prebuilt libraries necessary.
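
That opt-in is the gwpAsanMode attribute on the application element of the manifest:

<application android:gwpAsanMode="always">
    ...
</application>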

Software Tag-Based KASAN

Continuing the work on adopting the Arm Memory Tagging Extension (MTE) in Android, Android 11 includes support for kernel HWASAN, also known as Software Tag-Based KASAN. Userspace HWASAN has been supported since Android 10.

KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to find out-of-bound and use-after-free bugs in the Linux kernel. Its Software Tag-Based mode is a software implementation of the memory tagging concept for the kernel. Software Tag-Based KASAN is available in 4.14, 4.19 and 5.4 Android kernels, and can be enabled with the CONFIG_KASAN_SW_TAGS kernel configuration option. Currently Tag-Based KASAN only supports tagging of slab memory; support for other types of memory (such as stack and globals) will be added in the future.
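
Enabling it is a kernel build-time configuration, roughly:

CONFIG_KASAN=y
CONFIG_KASAN_SW_TAGS=y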

Compared to Generic KASAN, Tag-Based KASAN has significantly lower memory requirements (see this kernel commit for details), which makes it usable on dogfood testing devices. Another use case for Software Tag-Based KASAN is checking existing kernel code for compatibility with memory tagging. As Tag-Based KASAN is based on concepts similar to the future in-kernel MTE support, making sure that kernel code works with Tag-Based KASAN will ease in-kernel MTE integration in the future.

Expanding existing compiler mitigations

We’ve continued to expand the compiler mitigations that have been rolled out in prior releases as well. This includes adding both integer and bounds sanitizers to some core libraries that were lacking them. For example, the libminikin fonts library and the libui rendering library are now bounds sanitized. We’ve hardened the NFC stack by implementing both integer overflow sanitizer and bounds sanitizer in those components.

In addition to hard mitigations like sanitizers, we also continue to expand our use of CFI (Control Flow Integrity) as an exploit mitigation. CFI has been enabled in Android’s networking daemon, DNS resolver, and more of our core JavaScript libraries like libv8 and the PacProcessor.

The effectiveness of our software codec sandbox

Prior to the release of Android 10, we announced a new constrained sandbox for software codecs, and we’re really pleased with the results. Thus far, Android 10 is the first Android release since the infamous Stagefright vulnerabilities in Android 5.0 with zero critical-severity vulnerabilities reported in the media frameworks.

Thank you to Jeff Vander Stoep, Alexander Potapenko, Stephen Hines, Andrey Konovalov, Mitch Phillips, Ivan Lozano, Kostya Kortchinsky, Christopher Ferris, Cindy Zhou, Evgenii Stepanov, Kevin Deus, Peter Collingbourne, Elliott Hughes, Kees Cook and Ken Chen for their contributions to this post.

11 Weeks of Android: Privacy and Security

Posted by:
Charmaine D’Silva, Product Lead, Android Privacy and Framework
Narayan Kamath, Engineering Lead, Android Privacy and Framework
Stephan Somogyi, Product Lead, Android Security
Sudhi Herle, Engineering Lead, Android Security

This blog post is part of a weekly series for #11WeeksOfAndroid. Each week we’re diving into a key area of Android so you don’t miss anything. This week, we spotlighted Privacy and Security; here’s a look at what you should know.

mobile security illustration

Privacy and security is core to how we design Android, and with every new release we increase our investment in this space. Android 11 continues to make important strides in these areas, and this week we’ll be sharing a series of updates and resources about Android privacy and security. But first, let’s take a quick look at some of the most important changes we’ve made in Android 11 to protect user privacy and make the platform more secure.

As shared in the “All things privacy in Android 11” video, we’re giving users even more control over sensitive permissions. Throughout the development of this release, we have engaged deeply and frequently with our developer community to design these features in a balanced way - amplifying user privacy while minimizing developer impact. Let’s go over some of these features:

One-time permission: In Android 10, we introduced a granular location permission that allows users to limit access to location only when an app is in use (aka foreground only). When presented with the new runtime permission options, users choose foreground-only location more than 50% of the time. This demonstrated to us that users really wanted finer controls for permissions. So in Android 11, we’ve introduced one-time permissions that let users give an app access to the device microphone, camera, or location, just that one time. As an app developer, there are no changes that you need to make to your app for it to work with one-time permissions, and the app can request the permission again the next time the app is used. Learn more about building privacy-friendly apps with these new changes in this video.

Background location: In Android 10 we added a background location usage reminder so users can see how apps are using this sensitive data on a regular basis. Users who interacted with the reminder either downgraded or denied the location permission over 75% of the time. In addition, we have done extensive research and believe that there are very few legitimate use cases for apps to require access to location in the background.

In Android 11, background location will no longer be a permission that a user can grant via a runtime prompt; it will require a more deliberate action. If your app needs background location, the system will ensure that the app first asks for foreground location. The app can then broaden its access to background location through a separate permission request, which will cause the system to take the user to Settings in order to complete the permission grant.
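
In code, the request stays incremental; a minimal sketch (the request codes are arbitrary app-defined constants):

// Step 1: request foreground location first.
requestPermissions(
        new String[] {Manifest.permission.ACCESS_FINE_LOCATION}, REQUEST_FOREGROUND);

// Step 2: only after the foreground grant, request background access. On
// Android 11 the system takes the user to Settings to complete this grant.
requestPermissions(
        new String[] {Manifest.permission.ACCESS_BACKGROUND_LOCATION}, REQUEST_BACKGROUND);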

In February, we announced that Google Play developers will need to get approval to access background location in their app to prevent misuse. We're giving developers more time to make changes and won't be enforcing the policy for existing apps until 2021. Check out this helpful video to find possible background location usage in your code.

Permissions auto-reset: Most users tend to download and install over 60 apps on their device but interact with only a third of these apps on a regular basis. If users haven’t used an app that targets Android 11 for an extended period of time, the system will “auto-reset” all of the granted runtime permissions associated with the app and notify the user. The app can request the permissions again the next time the app is used. If you have an app that has a legitimate need to retain permissions, you can prompt users to turn this feature OFF for your app in Settings.

Data access auditing APIs: Android encourages developers to limit their access to sensitive data, even if they have been granted permission to do so. In Android 11, developers will have access to new APIs that will give them more transparency into their app’s usage of private and protected data. The APIs will enable apps to track when the system records the app’s access to private user data.
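
A minimal sketch of the auditing callback (API level 30 and above; the log tag is arbitrary):

AppOpsManager appOps = getSystemService(AppOpsManager.class);
appOps.setOnOpNotedCallback(getMainExecutor(), new AppOpsManager.OnOpNotedCallback() {
    @Override
    public void onNoted(SyncNotedAppOp op) {
        // Private data was accessed synchronously during an API call.
        Log.d("DataAccess", "Sync access: " + op.getOp());
    }
    @Override
    public void onSelfNoted(SyncNotedAppOp op) {
        // The app noted an access to data on its own behalf.
        Log.d("DataAccess", "Self access: " + op.getOp());
    }
    @Override
    public void onAsyncNoted(AsyncNotedAppOp op) {
        // Private data was accessed asynchronously, e.g. in a callback.
        Log.d("DataAccess", "Async access: " + op.getOp());
    }
});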

Scoped Storage: In Android 10, we introduced scoped storage which provides a filtered view into external storage, giving access to app-specific files and media collections. This change protects user privacy by limiting broad access to shared storage in many ways including changing the storage permission to only give read access to photos, videos and music and improving app storage attribution. Since Android 10, we’ve incorporated developer feedback and made many improvements to help developers adopt scoped storage, including: updated permission UI to enhance user experience, direct file path access to media to improve compatibility with existing libraries, updated APIs for modifying media, Manage External Storage permission to enable select use cases that need broad files access, and protected external app directories. In Android 11, scoped storage will be mandatory for all apps that target API level 30. Learn more in this video and check out the developer documentation for further details.

Google Play system updates: Google Play system updates were introduced with Android 10 as part of Project Mainline. Their main benefit is to increase the modularity and granularity of platform subsystems within Android so we can update core OS components without needing a full OTA update from your phone manufacturer. Earlier this year, thanks to Project Mainline, we were able to quickly fix a critical vulnerability in the media decoding subsystem. Android 11 adds new modules, and maintains the security properties of existing ones. For example, Conscrypt, which provides cryptographic primitives, maintained its FIPS validation in Android 11 as well.

BiometricPrompt API: Developers can now use the BiometricPrompt API to specify the biometric authenticator strength required by their app to unlock or access sensitive parts of the app. We are planning to add this to the Jetpack Biometric library to allow for backward compatibility and will share further updates on this work as it progresses.
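
A minimal sketch using the Android 11 framework API (the strings are placeholders):

BiometricPrompt prompt = new BiometricPrompt.Builder(context)
        .setTitle("Verify it's you")
        // Require a Class 3 (strong) biometric authenticator.
        .setAllowedAuthenticators(BiometricManager.Authenticators.BIOMETRIC_STRONG)
        .setNegativeButton("Cancel", context.getMainExecutor(), (dialog, which) -> { })
        .build();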

Identity Credential API: This will unlock new use cases such as mobile driver’s licenses, national IDs, and digital IDs. It’s being built by our security team to ensure this information is stored safely, using security hardware to secure and control access to the data, in a way that enhances user privacy compared to traditional physical documents. We’re working with various government agencies and industry partners to make sure that Android 11 is ready for such digital-first identity experiences.

Thank you for your flexibility and feedback as we continue to build an increasingly private and secure platform. You can learn about more features on the Android 11 Beta developer site. You can also learn about general best practices related to privacy and security.

Please follow Android Developers on Twitter and YouTube to catch helpful content and materials in this area all this week.

Resources

You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!

Full spectrum of on-device machine learning tools on Android

Posted by Hoi Lam, Android Machine Learning



This blog post is part of a weekly series for #11WeeksOfAndroid. Each week we’re diving into a key area of Android so you don’t miss anything. Throughout this week, we covered various aspects of Android on-device machine learning (ML). Whatever stage of development you are at, whether you are starting out or have an established app; whatever role you play in design, product, and engineering; and whatever your skill level, from beginner to expert, we have a wide range of ML tools for you.

Design - ML as a differentiator

“Focus on the user and all else will follow” is a Google mantra that becomes even more relevant in our machine learning age. Our Design Advocate, Di Dang, highlighted the importance of finding the unique intersection of user problems and ML strengths. Too often, teams are so keen on the idea of machine learning that they lose sight of their user needs.



Di outlined how the People + AI Guidebook can help you make ML product decisions, and used the example of the Read Along app to illustrate topics like precision and recall, which are unique to ML design and development. Check out her interview with the Read Along team, together with your own team, for more inspiration.

New ML Kit fully focused on on-device

When you decide that on-device machine learning is the solution, the easiest way to implement it will be through turnkey SDKs like ML Kit. Sophisticated Google-trained models and processing pipelines are offered through an easy-to-use interface in Kotlin / Java. ML Kit is designed and built for on-device ML: it works offline, offers enhanced privacy, unlocks high performance for real-time use cases, and it is free. We recently made ML Kit a standalone SDK and it no longer requires a Firebase account. Just one line in your build.gradle file and you can start bringing ML functionality into your app.
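
For example, an on-device API such as image labeling is pulled in with a single dependency (the coordinates and version here are illustrative; check the ML Kit documentation for the current ones):

dependencies {
    implementation 'com.google.mlkit:image-labeling:17.0.0'
}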



The team has also added new functionalities such as Jetpack lifecycle support and the option to use the face contour models via Google Play Services saving as much as 20MB in app size. Another much anticipated addition is the support for swapping Google models with your own for both Image Labeling as well as Object Detection and Tracking. This provides one of the easiest ways to add TensorFlow Lite models to your applications without interacting with ByteArray!

Customise with TensorFlow Lite and Android tools

If the base model provided by ML Kit doesn’t quite fit the bill, what should developers do? The first port of call should be TensorFlow Hub, where ready-to-use TensorFlow Lite models from both Google and the wider community can be downloaded. From 100,000 US supermarket products to tomato plant disease classifiers, the choice is yours.



In addition to Firebase AutoML Vision Edge, you can also build your own model using TensorFlow Lite Model Maker (image classification / text classification) with just a few lines of Python. Once you have a TensorFlow Lite model from either TensorFlow Hub or the Model Maker, you can easily integrate it with your Android app using ML Kit Image Labeling or Object Detection and Tracking. If you prefer an open source solution, Android Studio 4.1 beta introduces ML Model Binding, which wraps the TensorFlow Lite model in an easy-to-use Kotlin / Java class. Adding a custom model to your Android app has never been easier. Check out this blog for more details.

Time for on-device ML is now

From the examples of the Android Developer Challenge winners, it is obvious that on-device machine learning has come of age: ML functionality once reserved for the cloud or supercomputers is now available on your Android phone. Take a step forward with us by trying out our codelabs of the day.

Also check out the ML Week learning pathway and take the quiz to get your very own ML badge.

Android on-device machine learning is a rapidly evolving platform; if you have any enhancement requests or feedback on how it could be improved, please let us know, together with your use case (TensorFlow Lite / ML Kit). The time for on-device ML is now.

Resources

You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!

Read Along: Grow child literacy with on-device ML design insights

Posted by Di Dang, Design Advocate

From our machine learning-themed week together, we’ve delved into an ML Kit x CameraX codelab and learned how to train custom models and integrate them into your Android app. In addition to the technical considerations that go into using ML, it’s important that we design our ML-based apps in a way that enables our users to feel in control of the ML technology, and not the other way around. To help product creators understand some best practices for ML product decisions, the PAIR team published the People + AI Guidebook at Google I/O last year. Let’s take a look at some ML design considerations you can apply in your Android apps by learning from the example of Read Along.




Google recently launched Read Along, an Android app that uses on-device ML and voice UI to help children learn to read anytime, anywhere, using just their voice. According to the UN Division of Sustainable Development Goals, more than 50% of children worldwide are not achieving minimum proficiency in reading. First launched in India as “Bolo”, the “Read Along” app is now available globally. We recently went behind the scenes with the Read Along team in this episode of Centered, to learn how they made an ML- and voice-based app to improve child literacy.

Why Machine Learning and Voice UI?

Since using ML can be time- and cost-intensive, we need to find the intersection of ML strengths and user needs. To learn to read, children need time on task and one-on-one attention, which is challenging in areas that lack access to teachers or educational materials. “In many parts of the world, there are only so many schools that can be built, only so many teachers can be trained. So first and foremost, it's a scale problem,” said Nitin Kashyap, Read Along’s product manager. This creates a unique opportunity for the use of ML: to provide real-time reading feedback at scale, Read Along utilizes the Google Assistant’s text-to-speech and speech recognition capabilities. The Read Along team also built these capabilities to run on the edge to preserve children’s privacy. The voice data is analyzed on-device without being sent to any Google servers, enabling children to use Read Along offline as well.

Child using app demo

False positives vs. false negatives

Since ML-based systems are inherently probabilistic, they can generate “wrong” predictions in the form of false positives and false negatives. As we create ML-based applications, we need to decide which behavior to optimize for. Within the Read Along experience, a false positive denotes that the child has misread a word, though the system fails to recognize this and does not provide corrective feedback. On the other hand, an example of a false negative is when a child reads a word correctly, but Read Along predicts the word was read incorrectly, and thus prompts the child to try again. “We spent a little time to understand what really happens when the child gets a false positive and a false negative, and what impact it has on the psychology, and also on the reading experience,” said Eshita Priyadarshini, Read Along’s UX Research Lead. “When the child is reading, we don't really tell him, ‘Oh, you got that word wrong. Why don't you read it again?’” By unpacking the impact of false positives and false negatives on the user, the Read Along team decided to optimize for recall, thereby increasing the number of false positives, which results in a user experience that feels more encouraging for children.

Screenshot of People + AI Guidebook

To learn more about how the Read Along team made ML product decisions, check out the full Centered episode. For more guidance on how your cross-functional team (spanning UX, PM, and engineering) can come together to design ML-based applications, check out the People + AI Guidebook.

Android 11 Developer Preview on Android TV

Posted by Xiaodao Wu, Developer Advocate

With the rise in quality content that’s keeping us glued to the big screen, it’s no surprise watch time on the TV continues to grow. As users spend more time in their living rooms, they are also looking to get more from their smart TVs and streaming devices. To help developers meet these needs, we are always working to support the latest Android features on Android TV.

Today, we are releasing an Android 11 Developer Preview for Android TV with many privacy, performance, accessibility and connectivity features. More information can be found on the Android 11 Developer Preview web page.

The Android 11 Developer Preview on TV is for developers only (not for consumer use). The system image is for ADT-3 developer devices and is available by manual download and flash. All user data on the ADT-3 device will be wiped after flashing, and once the device has been flashed to Android 11, you will not be able to go back to the previous Android 10 build. To install the preview:

  1. Download the system image (link) and unzip the file.
  2. Plug in the ADT-3 developer kit for Android TV and enable Developer options.
  3. Run flash-all.sh in the unzipped folder to perform manual system image installation to the ADT-3 device.

The flash-all script uses fastboot and adb tools to upgrade the system. The latest version of fastboot is recommended; developers can find it in the Android SDK Platform-Tools package.

We encourage you to test your Android TV app on the Android 11 Developer Preview. If you have any feedback, please reach out to us. We’d love to hear from you.

Tune in to the Android Beyond Phones week of the #11WeeksOfAndroid on August 10th for even more developer resources from Android TV.

New tools for finding, training, and using custom machine learning models on Android

Posted by Hoi Lam, Android Machine Learning

Yesterday, we talked about turnkey machine learning (ML) solutions with ML Kit. But what if that doesn’t completely address your needs and you need to tweak it a little? Today, we will discuss how to find alternative models, and how to train and use custom ML models in your Android app.

Find alternative ML models

Crop disease models from the wider research community available on tfhub.dev

If the turnkey ML solutions don't suit your needs, TensorFlow Hub should be your first port of call. It is a repository of ML models from Google and the wider research community. The models on the site are ready for use in the cloud, in a web browser, or in an app on-device. For Android developers, the most exciting models are the TensorFlow Lite (TFLite) models that are optimized for mobile.

In addition to key vision models such as MobileNet and EfficientNet, the repository also boasts models powered by the latest research.

Many of these solutions were previously only available in the cloud, as the models are too large and too power intensive to run on-device. Today, you can run them on Android on-device, offline and live.

Train your own custom model

Besides the large repository of base models, developers can also train their own models. Developer-friendly tools are available for many common use cases. In addition to Firebase’s AutoML Vision Edge, the TensorFlow team launched TensorFlow Lite Model Maker earlier this year to give developers more choice over the base model and to support more use cases. TensorFlow Lite Model Maker currently supports two common ML tasks: image classification and text classification.

The TensorFlow Lite Model Maker can run on your own developer machine or in Google Colab online machine learning notebooks. Going forward, the team plans to improve the existing offerings and to add new use cases.

Using custom models in your Android app

New TFLite Model import screen in Android Studio 4.1 beta

Once you have selected a model or trained your model, there are new easy-to-use tools to help you integrate it into your Android app without having to convert everything into ByteArrays. The first new tool is ML Model Binding with Android Studio 4.1. This lets developers import any TFLite model, read the input / output signature of the model, and use it with just a few lines of code that call the open source TensorFlow Lite Android Support Library.
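
The binding generates a typed wrapper class named after the model file; a hypothetical sketch for a model imported as my_model.tflite:

// "MyModel" is the hypothetical class Android Studio generates from my_model.tflite.
MyModel model = MyModel.newInstance(context);
MyModel.Outputs outputs = model.process(TensorImage.fromBitmap(bitmap));
// Read the typed output tensors from 'outputs', then release the model.
model.close();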

Another way to implement a TensorFlow Lite model is via ML Kit. Starting in June, ML Kit no longer requires a Firebase project for on-device functionality. In addition, the image classification and object detection and tracking (ODT) APIs support custom models. The latter ODT offering is especially useful in use-cases where you need to separate out objects from a busy scene.
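
As a sketch of the custom-model path with ML Kit image labeling (the model file name is hypothetical, and this assumes the image-labeling-custom artifact):

// Wrap a TFLite model bundled in the app's assets/ folder.
LocalModel localModel = new LocalModel.Builder()
        .setAssetFilePath("custom_model.tflite")
        .build();
CustomImageLabelerOptions options = new CustomImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.7f)
        .build();
ImageLabeler labeler = ImageLabeling.getClient(options);
labeler.process(InputImage.fromBitmap(bitmap, 0))
        .addOnSuccessListener(labels -> { /* use the predicted labels */ });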

So how should you choose between these three solutions? If you are trying to detect a product on a busy supermarket shelf, ML Kit object detection and tracking can help your user select a specific product for processing. The API then performs image classification on just the part of the image that contains the product, which results in better detection performance. On the other hand, if the scene or the object you are trying to detect takes up most of the input image, for example, a landmark such as Big Ben, using ML Model binding or the ML Kit image classification API might be more appropriate.

TensorFlow Hub bird detection model with ML Kit Object Detection & Tracking API

Two examples of how these tools can fit together

The ML Kit and TensorFlow Lite sites have plenty of resources to help you get started.

Customizing your model is easier than ever

Finding, building and using custom models on Android has never been easier. As both Android and TensorFlow teams increase the coverage of machine learning use cases, please let us know how we can improve these tools for your use cases by filing an enhancement request with TensorFlow Lite or ML Kit.

Tomorrow, we will take a step back and focus on how to appropriately use and design for a machine-learning-first Android app. The content will be appropriate for the entire development team, so bring your product manager and designers along. See you next time.