Gesture Navigation: A Backstory

Posted by Allen Huang and Rohan Shah, Product Managers on Android UI

One of the biggest changes in Android Q is the introduction of gesture navigation. To recap: with the new system navigation mode, users can navigate back (left/right edge swipe), go to the home screen (swipe up from the bottom), and trigger the device assistant (swipe in from the bottom corners) with gestures rather than buttons.

By moving to a gesture model for system navigation, we can provide more of the screen to apps to enable a more immersive experience.

We wanted to give folks an inside look at how we’ve approached this challenge, the rationale, and some of the trade-offs as well. There is some nerding out on design around gestures ahead, but hopefully it provides some insight into our process and how we balance the developer and OEM ecosystem in service of users. If you’re looking for more detail on how to handle these changes as an app developer, check out Chris’s “Going edge-to-edge” article series.

Why gestures?

One of the amazing things about Android is the opportunity for app developers and Android partners to try new, innovative approaches on the phone.

In the last 3 years, we’ve seen gesture navigation patterns proliferate on handheld devices (though gestures have been around since as early as 2009!).

This trend was led by innovative Android partners and Android apps trying some very cool ideas (for example: Fluid NG, XDA).

When we started researching this more, we homed in on the user benefits:

  1. Gestures can be a faster, more natural and ergonomic way to navigate your phone
  2. Gestures are more intentional than software buttons that you might trigger just by grabbing your phone
  3. Gestures enable a more immersive experience for apps by minimizing how much the system draws over app content, i.e. HOME/BACK buttons and the bar they sit on - especially as hardware trends towards bigger screens and smaller bezels

It wasn’t all roses though - we also saw issues with many of the gesture modes:

  1. Gestures don’t work for every user
  2. Gestures are harder to learn and can take some adjustment
  3. Gestures can interfere with an app’s navigation pattern

But most of all, we realized that there was a larger issue of fragmentation when different Android phones had different gestures, especially for Android developers.

Over the last year, we worked with partners like Samsung, Xiaomi, HMD Global, OPPO, OnePlus, LG, Motorola, and many others to standardize gesture navigation going forward. To ensure a consistent user and developer experience, the Android Q gestures will be the default gesture navigation for new Q+ devices.

Understanding that these gestures don’t work for every user, especially those with more limited dexterity and mobility, three-button navigation will continue to be an option on every Android device.

So why these gestures?

We started with research to understand how users held their phones, what typical reach looked like, and what parts of the phone users used the most. From there, we built many prototypes that we tested across axes like desirability, speed-of-use, ergonomics, and more. And we put our ultimate design through a range of studies - how quickly users learned the system, how quickly users got used to the system, how users felt about it.

A unique element of Android navigation since the very beginning is the Back button. Many users appreciate it and find that it makes Android easier to navigate and learn (despite many debates about what the “correct” behavior is) -- and it's used a lot! In fact, 50% more than even Home. So one of our design goals was to make the back gesture ergonomic, dependable, and intuitive -- and we prioritized this goal above less frequent navigation such as drawers and recents.

Looking at the reachability charts below, we designed our two core gestures (Back and Home) to coincide with the most reachable/comfortable areas and movement for thumbs.

Phone screen heatmaps showing where users can comfortably do gestures, holding the phone in only one hand

As mentioned, we built prototypes of many different gesture models, comparing user ratings and timed user tasks for what ultimately became the Q model against several other navigation paradigms. Here are a few graphs showing the results of our testing:

Comparison of user ratings for ergonomics and one-handed use across different navigation modes (higher is better)


Comparison of average time required to complete Home/Back tasks across various navigation modes (lower is better)


Comparison of average time required to complete Overview/Recents-based tasks across various navigation modes (lower is better)


Users, on average, performed tasks involving Home and Back more quickly than most other models - even faster than they did with buttons. The model did, however, come at the cost of being able to quickly access Overview/Recent apps, which users go to less than half as often as the Home screen.

From a more qualitative perspective, users viewed the Q model as more one-handed and reachable, although buttons were still viewed as more ergonomic for more users.

App Drawers and other App Swipes

Although we arrived at the side swipe as the Back gesture that best balanced many trade-offs, it is important to note that there were hard decisions, particularly around how that gesture impacts apps.

For example, we found that ~3-7% of users (depending on the Google app) swipe to open the app's navigation drawer - the rest tap the hamburger menu to invoke the drawer. The drawer swipe gesture is now overloaded with Back, so some users will need to adapt to using the hamburger menu. This was a tough choice, but given how heavily Back is used, we optimized for what worked best there.

Because it's never a goal to change behavior out from under users, we tried several ways to let users distinguish the drawer gesture from the Back gesture. However, all of these paths led to users pulling in the drawer when they were trying to go Back, and to less confidence that Back would work.

Beyond drawers, gestures are a big change for people, and adapting took 1-3 days on average - in particular, users struggled with patterns like swiping right or left on a carousel and triggering Back.

In qualitative studies, we found that after an initial break-in period of 1-3 days, users became fluent and could consistently distinguish between these two gestures. The majority of users did not want to switch back to 3 button nav (even though that remains an option).

Additional research showed that there is a clear adjustment phase for users to get used to a new system navigation (across many different navigation schemes). In our Q model, we found that usage of Back goes down for the first 1-3 days. After that period, the average number of Back presses per day ends up the same as with 3-button and our Android P navigation.

So What Does This Mean for Developers?

With gestural navigation, we are aiming to move forward and standardize the user experience on Android. The model we landed on is the optimal one for most users, but it also means that some system gestures conflict with existing app gestures, so some developers will need to adjust how users interact with their apps. We take our responsibility to Android developers seriously and want to help you through this process.

There are three key steps to support gesture navigation:

  1. Go edge-to-edge to enable your app to draw across the entire screen
  2. Handle any visual overlaps with the system user interface (i.e. navigation bar)
  3. Resolve any gesture conflicts with the system gestures

We’ve just published the first article in our “Going edge-to-edge” series on Medium, detailing those steps in turn. The final article in the series will cover some of the common scenarios we’ve seen, and how you can best support them in your apps.

Thank you all for the feedback -- all of your comments and interactions have helped us improve the gesture navigation experience in Android Q and, more broadly, help make Android better each day.

Final Beta update, official Android Q coming soon!

Posted by Dave Burke, VP of Engineering

We’re just a few weeks away from the official release of Android Q! As we put the final polish on the new platform, today we’re rolling out Beta 6, the last Beta update. Now is the time to make sure your apps are ready, before we bring the official release to consumers. Take this opportunity to finish up your testing and publish your app updates soon to give users a smooth transition to Android Q.

You can get Beta 6 today on Pixel devices by enrolling here. If you're already enrolled and received Beta 5, you'll automatically get Beta 6 soon. Partners participating in the Android Q Beta program will also be updating their devices over the coming weeks -- visit their sites to learn more. To get started with Android Q, visit developer.android.com/preview.

Watch for more information on the official Android Q release coming soon!

What’s in Beta 6?

Today’s Beta 6 update includes the latest Android Q system images for Pixel and Android Emulator, the final API 29 SDK, and updated build tools for Android Studio. Beta 6 includes all of the features, system behaviors, and developer APIs that you’ll find in the final platform, so it gives you everything you need to get your apps ready. For users, Beta 6 includes many new fixes and optimizations -- take a look at the release notes for details.

We've made further refinements to Gesture Navigation in Beta 6 based on user feedback. First, to ensure reliable and consistent operation, there's a 200dp vertical app exclusion limit for the Back gesture. Second, we've added a sensitivity preference setting for the Back gesture. Watch for more details coming soon in our blog post series on optimizing for gesture navigation.
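
If your app still needs to own an edge swipe (a drawer handle, for example), the system gesture exclusion API is the mechanism this limit applies to. Here is a minimal Kotlin sketch, with a helper name of our own choosing, keeping in mind that Android Q honors at most 200dp of excluded height:

import android.graphics.Rect
import android.view.View

// Minimal sketch: exclude a view's bounds from the system Back gesture area.
// Re-register on layout changes so the rect tracks the view's current size.
fun excludeFromBackGesture(view: View) {
    view.addOnLayoutChangeListener { v, left, top, right, bottom, _, _, _, _ ->
        // The rect is in the view's local coordinates.
        v.systemGestureExclusionRects = listOf(Rect(0, 0, right - left, bottom - top))
    }
}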

Get your apps ready for Android Q!

With the consumer release coming soon, we’re asking all Android developers to update your current apps for compatibility as soon as possible.

Here’s how to do it: install your current app from Google Play onto a device or emulator running Beta 6, test your app's user flows, fix any compatibility issues you find, and publish the update to your users.

We realize that supporting these changes is an investment for you too, so thanks to all of you who have prioritized the work to get your apps ready for Android Q!

Enhance your app with Android Q features and APIs

Next, when you're ready, dive into Android Q and learn about the new features and APIs that you can use. Here are some of the top features to get started with.

We recommend these for every app:

  • Dark Theme: Ensure a consistent experience for users who enable system-wide dark theme by adding a Dark Theme or enabling Force Dark.
  • Support gestural navigation in your app by going edge-to-edge and making sure your custom gestures are complementary to the system navigation gestures.
  • Optimize for foldables: Deliver seamless, edge-to-edge experiences on today’s innovative devices by optimizing for foldables.

We recommend these if relevant for your app:

  • More interactive notifications: If your notifications include messages, enable suggested replies and actions in notifications to engage users and let them take action instantly.
  • Better biometrics: If you use biometric auth, move to BiometricPrompt, the preferred way to support fingerprint auth on modern devices (see the sketch after this list).
  • Enriched recording: To support captioning or gameplay recording, enable audio playback capture -- it’s a great way to reach more users and make your app more accessible.
  • Better codecs: For media apps, try AV1 for video streaming and HDR10+ for high dynamic range video. For speech and music streaming, you can use Opus encoding, and for musicians, a native MIDI API is available.
  • Better networking APIs: If your app manages IoT devices over Wi-Fi, try the new network connection APIs for functions like configuring, downloading, or printing.
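
As referenced in the biometrics item above, here is a minimal sketch of a BiometricPrompt flow, assuming the AndroidX biometric library and a FragmentActivity host; the strings and callback body are illustrative:

import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import androidx.fragment.app.FragmentActivity

// Minimal sketch: authenticate the user with BiometricPrompt.
fun showBiometricPrompt(activity: FragmentActivity) {
    val prompt = BiometricPrompt(
        activity,
        ContextCompat.getMainExecutor(activity), // callbacks on the main thread
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                // Unlock the biometric-gated feature here.
            }
        }
    )
    val promptInfo = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Sign in")
        .setNegativeButtonText("Cancel")
        .build()
    prompt.authenticate(promptInfo)
}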

These are just a few of the many new features and APIs in Android Q -- to see them all, visit the Android Q Beta site for developers.

Publish your app updates to Google Play

As soon as you're ready, publish your APK updates to Google Play that are compiled against, or optionally targeting, API 29. To make sure that your updated app runs well on Android Q as well as older versions, try using Google Play testing tracks. With tracks you can safely get early feedback from a small group of users and then do a staged rollout to production.

How do I get Beta 6?

It’s easy! Just enroll any supported Pixel device here to get the update over-the-air. If you're already enrolled, you'll receive the update soon and no action is needed on your part. Downloadable system images are also available here. Partners who are participating in the Android Q Beta program will be updating their devices over the coming weeks. See android.com/beta for details.

To get started developing, download the official API 29 SDK and tools into the stable release of Android Studio 3.4, or for the latest Android Q support update to Android Studio 3.5 Beta. Then follow these instructions to configure your environment, and see the release notes for known issues.

Please continue to share your feedback and requests in our issue tracker. You can use our hotlists for filing platform issues (including privacy and behavior changes), app compatibility issues, and third-party SDK issues.

A big thank you to our developer community for your participation in our recent Reddit AMA on r/androiddev! It’s always great to hear what’s important to you and we hope we were able to help!

What’s new for text in Android Q

Posted by Florina Muntenescu, Android Developer Advocate

Displaying text is an important task in most apps, so in Android Q we're continuing to introduce new features to support your needs and improve performance. We disabled hyphenation by default, enabled creating a typeface using multiple fonts or font families, exposed the list of fonts installed on the device, and improved some of the most-used text styling APIs.

Hyphenation is off by default in Android Q and AppCompat v1.1.0

Our performance tests showed that when hyphenation is enabled, up to 70% of the time spent on measuring text is on hyphenation.

Pie chart of CPU time spent making a StaticLayout: hyphenation takes up to 70% of the time spent measuring text

Given that hyphenation often isn’t needed for all TextViews in an app, and because of the impact on performance, we decided to turn hyphenation off by default in Android Q and AppCompat v1.1.0. If you want to use hyphenation, you need to manually turn it on in your app by setting the hyphenation frequency to normal. You can set this in multiple ways:

As a TextAppearance attribute in styles.xml:

<style name="MyTextAppearance" parent="TextAppearance.AppCompat">
    <item name="android:hyphenationFrequency">normal</item>
</style>

As a TextView attribute:

<TextView android:hyphenationFrequency="normal" />

Directly in code:

textView.hyphenationFrequency = Layout.HYPHENATION_FREQUENCY_NORMAL

Find out more about how hyphenation works from this talk at Android Dev Summit 2018.

Use multiple custom fonts in the same TextView

Consider a button which mixes a custom font (Lato in this example) with an icon font:

Button with icon and Latin fonts

The Button class accepts only a single Typeface to be set on its text. Before Android Q, you could create a Typeface from only a single font family. Android Q enables creating a typeface from multiple font families with a new API, Typeface.CustomFallbackBuilder, which allows adding up to 64 font families per typeface.

Our icon font example can be implemented like this:

button.typeface = Typeface.CustomFallbackBuilder(
    // add the Latin font
    FontFamily.Builder(
        Font.Builder(assets, "lato.ttf").build()
    ).build()
).addCustomFallback(
    // add the icon font
    FontFamily.Builder(
        Font.Builder(assets, "icon_font.ttf").build()
    ).build()
).build()

When creating the font family, make sure you don't put fonts that belong to different families into the same FontFamily object, and don't put fonts of the same style into the same family. For example, putting Lato, Kosugi, and Material into the same font family creates an invalid configuration, as does putting two bold fonts into the same font family.

To define the general font family (serif, sans-serif, or monospace) to be used when text is rendered using system fonts, use the setSystemFallback() method to set the system fallback font:

Typeface.CustomFallbackBuilder(
    FontFamily.Builder(
       ...
    ).build()
).setSystemFallback("sans-serif")
.build()

Text styling API updates

Android Q brings several updates to different text styling APIs:

Improved support for variable fonts

TextAppearance now supports the fontVariationSettings attribute:

<style name="MyTextAppearance" parent="TextAppearance.AppCompat">
    <item name="android:fontVariationSettings">...</item>
</style>

The fontVariationSettings attribute can be set directly on the TextView in Android Q and in AppCompatTextView:

<TextView
    ...
    app:fontVariationSettings="..."
/>

Improved spans APIs

TextAppearanceSpan now supports typeface, shadow settings, fontFeatureSettings and fontVariationSettings.

LineBackgroundSpan and LineHeightSpan interfaces now have standard implementations: LineBackgroundSpan.Standard and LineHeightSpan.Standard.
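
As a rough illustration (our own example, not from the original post), here is how the new standard implementations might be applied in Kotlin; the yellow color and 120px line height are arbitrary values:

import android.graphics.Color
import android.text.SpannableString
import android.text.Spanned
import android.text.style.LineBackgroundSpan
import android.text.style.LineHeightSpan
import android.widget.TextView

// Minimal sketch (API 29+): apply the new standard span implementations.
fun applyStandardSpans(textView: TextView) {
    val text = SpannableString("Spans in Android Q").apply {
        // Paint each line's background with a solid color.
        setSpan(LineBackgroundSpan.Standard(Color.YELLOW), 0, length, Spanned.SPAN_INCLUSIVE_INCLUSIVE)
        // Force an explicit line height, in pixels.
        setSpan(LineHeightSpan.Standard(120), 0, length, Spanned.SPAN_INCLUSIVE_INCLUSIVE)
    }
    textView.text = text
}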

Access system fonts

With more than 100 languages supported by Android, and with different fonts supporting different character sets, knowing which system font can render a given character is not trivial. Apps doing their own text rendering such as games, document viewers, or browsers need this information. In Android Q, you can retrieve the supported system font for a string with the FontMatcher NDK API.

System fonts that can render this text

Let’s consider the search string above. The FontMatcher API returns the font object and the length of the text run that the font can render. A simplified pseudocode example looks like this:

// font = NotoSansCJK-Regular.ttc
// length = 2
auto[font, length] = AFontMatcher_match("たすく a.k.a. のな");

// font = Roboto-Regular.ttf
// length = 8
auto[font, length] = AFontMatcher_match(" a.k.a. のな");

// font = NotoSansCJK-Regular.ttc
// length = 2
auto[font, length] = AFontMatcher_match("のな");

The FontMatcher API never returns nullptr:

  • If no font supports the given string, a font for Tofu (□), the missing glyph symbol, is returned.
  • If no exact style is supported, a font with the closest, most similar style is returned.

If you want to get all available system fonts, you can do this with a new font enumeration API. In Java, you can use SystemFonts.getAvailableFonts, or in the NDK, you can use ASystemFontIterator. The results of the font enumeration are changed only by a system update, so you should cache them.
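
For illustration, a minimal Kotlin sketch of the Java-side enumeration; the lazy property is just one way to cache the results:

import android.graphics.fonts.Font
import android.graphics.fonts.SystemFonts
import android.util.Log

// Minimal sketch (API 29+): enumerate and cache the installed system fonts.
// The set only changes with a system update, so caching it is safe.
val cachedSystemFonts: Set<Font> by lazy { SystemFonts.getAvailableFonts() }

fun logSystemFonts() {
    for (font in cachedSystemFonts) {
        Log.d("Fonts", "file=${font.file} weight=${font.style.weight}")
    }
}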

Font updates

New Myanmar font

Android added a new Myanmar font to Android Q that is Unicode-compliant and capable of rendering both Unicode and non-Unicode Burmese (commonly known as Zawgyi) right out of the box. This means that starting in Android Q, Android makes it easier for users to switch to Unicode: a user can now use a Unicode font to read both Unicode and non-Unicode text for the first time. Android also added new requirements to the Android ecosystem CDD that take a stronger stance on requiring Unicode, including a new subtag, "Qaag", which OEMs should use as a locale designating non-Unicode Burmese. All of these changes should make developers’ lives easier in the long term, as reduced ecosystem fragmentation makes it easier to develop for our 50M users in Myanmar.

New emojis

New emojis in Android Q

Say Hello to your new emoji friends! The latest update includes a number of disability-focused emojis, 59 gender-inclusive designs, multi-racial couples, as well as a few cute animals and household objects. See the latest and greatest in Gboard on your Android Q device of choice.

Text plays an important role in the vast majority of apps, so we’re continuing to invest in improving text API features and performance. Learn more about the new APIs in Android Q, along with best practices when working with text, in our Google I/O 2019 talk.

Capturing Audio in Android Q

Posted by Don Turner, Developer Advocate for Android Media

In Android Q there's a new API which allows applications to capture the audio of other applications. It's called the AudioPlaybackCapture API and it enables some important use cases for easier content sharing and accessibility.

Some examples include:

  • Live captioning - allowing the audio content of the currently playing app to be captioned or translated in real time. In fact, the Live Caption feature shown at I/O this year is a client of this API. Live captioning allows your users to engage with audible content even when it's impossible or inconvenient to do so, such as listening in a public place without headphones.
  • Game recording and streaming - In-game sounds can be recorded and streamed to live audiences, helping to increase the social reach of game content.

There may be some situations where a developer wishes to disallow the capture of their app's audio. This article explains how audio capture works for users and how developers can disallow their app's audio from being captured if they need to.

What does the user see?

In order to capture the audio of other apps, the user must grant the RECORD_AUDIO permission to the app doing the capturing.

RECORD_AUDIO permission dialog

Additionally, before a capture session can be started, the capturing app must call MediaProjectionManager.createScreenCaptureIntent(). This displays the following dialog to the user:

Screen capture intent dialog

The user must tap "Start now" in order for a capturing session to be started. This will allow both video and audio to be captured.

During a capture session the cast icon is shown in red in the status bar.

Can my app's audio be captured?

Whether your app's audio can be captured by default depends on your target API. Here's a table which summarizes the default behaviour:

Target API | Third-party apps can capture your app's audio by default? | System apps and components can capture your app's audio by default?
28 and below | No - the app must explicitly opt in | Yes, for audio with usage type MEDIA, GAME, or UNKNOWN
29 | Yes, for audio with usage type MEDIA, GAME, or UNKNOWN | Yes, for audio with usage type MEDIA, GAME, or UNKNOWN


Disallowing capture by third party apps

There may be situations where an app wishes to disallow its audio from being captured by other apps. This could be because the audio contains:

  • sensitive information - such as private voice recordings.
  • copyrighted material - such as copyrighted music or audio from movies and TV shows.

An app's audio capturing policy can be set either for all audio or for each individual audio player.


Disallowing capture of all audio by third party apps

To disallow capture of all audio by third party apps, you can do either of the following:

Add the following attribute to the <application> element of your AndroidManifest.xml:

<application
    ...
    android:allowAudioPlaybackCapture="false"/>

Or disable capture programmatically by running the following code prior to playing audio:

audioManager.setAllowedCapturePolicy(AudioAttributes.ALLOW_CAPTURE_BY_SYSTEM)
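
For context, a minimal Kotlin sketch of the programmatic path, assuming a Context (the helper name is ours):

import android.content.Context
import android.media.AudioAttributes
import android.media.AudioManager

// Minimal sketch: set the app-wide capture policy before any audio is played.
// ALLOW_CAPTURE_BY_SYSTEM still permits system components such as Live Caption
// to capture; only third party capture is disallowed.
fun disallowThirdPartyCapture(context: Context) {
    val audioManager = context.getSystemService(AudioManager::class.java)
    audioManager.setAllowedCapturePolicy(AudioAttributes.ALLOW_CAPTURE_BY_SYSTEM)
}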


Disallowing capture on a per player basis by third party apps

To disallow capture for an individual player you can set the capture policy for the audio player when it is built by calling:

AudioAttributes.Builder.setAllowedCapturePolicy(ALLOW_CAPTURE_BY_SYSTEM)

This approach may be useful if your app plays content with differing licenses. For example, both copyrighted and royalty-free content.
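
For example, a sketch of how this might look when building a MediaPlayer for licensed content; the URL is a placeholder:

import android.media.AudioAttributes
import android.media.MediaPlayer

// Minimal sketch: a per-player capture policy set via AudioAttributes.
val attributes = AudioAttributes.Builder()
    .setUsage(AudioAttributes.USAGE_MEDIA)
    .setAllowedCapturePolicy(AudioAttributes.ALLOW_CAPTURE_BY_SYSTEM)
    .build()

val player = MediaPlayer().apply {
    setAudioAttributes(attributes)
    setDataSource("https://example.com/licensed-track.mp3")
    prepare()
    start()
}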


Disallowing capture by system apps and components

By default, system apps and components are allowed to capture an app's audio if its usage is MEDIA, GAME, or UNKNOWN, as this enables important accessibility use cases, such as live captioning.

In rare cases where a developer wishes to disallow audio capture for system apps as well they can do so in a similar way to the approach for third party apps. Note that this will also disallow capture by third party apps.


Disallowing capture of all audio

This can only be done programmatically by running the following code before any audio is played:

audioManager.setAllowedCapturePolicy(AudioAttributes.ALLOW_CAPTURE_BY_NONE)


Disallowing capture on a per player basis

To disallow capture for an individual player you can set the capture policy for the audio player when it is built:

AudioAttributes.Builder.setAllowedCapturePolicy(ALLOW_CAPTURE_BY_NONE)


What next?

If your app targets API 28 or below and you would like to enable audio capture, add android:allowAudioPlaybackCapture="true" to your app's AndroidManifest.xml.

If you would like to disallow some or all of your audio from being captured then update your app according to the instructions above.

For more information check out the Audio Playback Capture API documentation.

Fresher OS with Projects Treble and Mainline

Posted by Anwar Ghuloum, Engineering Director and Maya Ben Ari, Product Manager, Android

With each new OS release, we are making efforts to deliver the latest OS improvements to more Android devices.

Thanks to Project Treble and our continuous collaboration with silicon manufacturers and OEM partners, we have improved the overall quality of the ecosystem and accelerated Android 9 Pie OS adoption by 2.5x compared to Android Oreo. Moreover, Android security updates continue to reach more users, with an 84% increase in devices receiving security updates in Q4, when compared to a year before.

This year, we have increased our overall beta program reach to 15 devices, in addition to Pixel, Pixel 2 and Pixel 3/3a running Android Q beta: Huawei Mate 20 Pro, LGE G8, Sony Xperia XZ3, OPPO Reno, Vivo X27, Vivo NEX S, Vivo NEX A, OnePlus 6T, Xiaomi Mi Mix 3 5G, Xiaomi Mi 9, Realme 3 Pro, Asus Zenfone 5z, Nokia 8.1, Tecno Spark 3 Pro, and Essential PH-1.

But our work hasn’t stopped there. We are continuing to invest in efforts to make Android updates available across the ecosystem.

Safer and more secure devices with Project Mainline

Project Mainline builds on our investment in Treble to simplify and expedite how we deliver updates to the Android ecosystem. Project Mainline enables us to update core OS components in a way that's similar to the way we update apps: through Google Play. With this approach, we can deliver selected AOSP components faster, and for a longer period of time -- without needing a full OTA update from your phone manufacturer. Mainline components remain open source. We are closely collaborating with our partners on code contribution and testing; for example, for the initial set of Mainline components, our partners contributed many changes and worked with us to ensure they ran well on their devices.

Diagram: Project Mainline updates components in the Android OS framework via Google Play. The updated framework components sit above the Treble interface and hardware-specific implementation, and below the apps layer.

As a result, we can accelerate the delivery of security fixes, privacy enhancements, and consistency improvements across the ecosystem.

Project Mainline benefits: security (accelerated pushes, no OEM dependency for critical security fixes), privacy (better protection for users' data, increased privacy standards), and consistency (device stability and compatibility, developer consistency).

Security: With Project Mainline, we can deliver fixes for critical security bugs faster. For example, media components accounted for nearly 40% of recently patched vulnerabilities; by modularizing them, and by allowing us to update Conscrypt, the Java Security Provider, Project Mainline will make your device safer.

Privacy: Privacy has been a major focus for us, and we are putting a lot of effort into better protecting users’ data and increasing privacy standards. With Project Mainline, we have the ability to make improvements to our permissions systems to safeguard user data.

Consistency: Project Mainline helps us quickly address issues affecting device stability, compatibility, and developer consistency. We are standardizing time-zone data across devices. Also, we are delivering a new OpenGL driver implementation, ANGLE, designed to help decrease device-specific issues encountered by game developers.

Our initial set of components supported on devices launching on Android Q:

  • Security: Media Codecs, Media Framework Components, DNS Resolver, Conscrypt
  • Privacy: Documents UI, Permission Controller, ExtServices
  • Consistency: Timezone data, ANGLE (developers opt-in), Module Metadata, Networking components, Captive Portal Login, Network Permission Configuration

How does this work?

Mainline components are delivered as either APK or APEX files. APEX is a new file format we developed, similar to APK but with the fundamental difference that APEX is loaded much earlier in the booting process. As a result, important security and performance improvements that previously needed to be part of full OS updates can be downloaded and installed as easily as an app update. To ensure updates are delivered safely, we also built new failsafe mechanisms and enhanced test processes. We are also closely collaborating with our partners to ensure devices are thoroughly tested.

APEX file format: at the top level, an APEX file is a zip archive in which files are stored uncompressed. It contains four files: apex_manifest.json, AndroidManifest.xml, apex_payload.img, and apex_pubkey.

Project Mainline enables us to keep the OS on devices fresher, improve consistency, and bring the latest AOSP code to users faster. Users will get these critical fixes and enhancements without having to take a full operating system update. We look forward to extending the program with our OEM partners through our joint work on mainline AOSP.

Android Q Scoped Storage: Best Practices and Updates

Posted by Jeff Sharkey, Software Engineer, and Seb Grubb, Product Manager

Application Sandboxing is a core part of Android’s design, isolating apps from each other. In Android Q, taking the same fundamental principle from Application Sandboxing, we introduced Scoped Storage.

Since the Beta 1 release, you’ve given us a lot of valuable feedback on these changes -- thank you for helping shape Android! Because of your feedback, we've evolved the feature during the course of Android Q Beta. In this post, we'll share options for declaring your app’s support for Scoped Storage on Android Q devices, and best practices for questions we've heard from the community.

Updates to help you adopt Scoped Storage

We expect that Scoped Storage should have minimal impact on apps following current storage best practices. However, we also heard from you that Scoped Storage can be a significant change for some apps and that you could use more time to assess the impact. Being developers ourselves, we understand you may need some additional time to ensure your app’s compatibility with this change. We want to help.

In the upcoming Beta 3 release, apps that target Android 9 Pie (API level 28) or lower will see no change, by default, to how storage works from previous Android versions. As you update your existing app to work with Scoped Storage, you’ll be able to use a new manifest attribute to enable the new behavior for your app on Android Q devices, even if your app is targeting API level 28 or lower.

The implementation details of these changes will be available with the Beta 3 release, but we wanted to share this update with you early, so you can better prepare your app for Android Q devices. Scoped Storage will be required in next year’s major platform release for all apps, independent of target SDK level, so we recommend you add support to your app well in advance. Please continue letting us know your feedback and how we can better align Scoped Storage with your app’s use cases. You can give us input through this survey, or file bugs and feature requests here.

Best practices for common feedback areas

Your feedback is incredibly valuable and has helped us shape these design decisions. We also want to take a moment to share some best practices for common questions we’ve heard:

  • Storing shared media files. For apps that handle files that users expect to be shareable with other apps (such as photos) and retained after the app is uninstalled, use the MediaStore API (see the first sketch after this list). There are specific collections for common media files: Audio, Video, and Images. For other file types, you can store them in the new Downloads collection. To access files from the Downloads collection, apps must use the system picker.
  • Storing app-internal files. If your app is designed to handle files not meant to be shared with other apps, store them in your package-specific directories. This helps keep files organized and limit file clutter as the OS will manage cleanup when the app is uninstalled. Calls to Context.getExternalFilesDir() will continue to work.
  • Working with permissions and file ownership. For MediaStore, no permissions are necessary for apps that only access their own files. Your app will need to request permission to access media contributed by other apps. However, if your app is uninstalled and then reinstalled later, you’ll need to request permission from the user in order to be able to access media your app previously contributed.
  • Working with native code or libraries. The recommended pattern is to begin your media file discovery in your Java-based or Kotlin-based code, then pass the file's associated file descriptor into your native code.
  • Working with many files efficiently. If you need to perform bulk file operations in a single transaction, consider using ContentProvider.applyBatch(). Learn more about ContentProvider batch processing here.
  • Integrating with the system file picker.
    • Documents apps, such as a word processor, can use the ACTION_OPEN_DOCUMENT or ACTION_GET_CONTENT action to open a system file picker. You can learn more about the differences here.
    • File management apps typically work with collections of files in a directory hierarchy. Use ACTION_OPEN_DOCUMENT_TREE to let the user pick a directory subtree (see the second sketch below). The app can further manipulate the files available in the returned directory. Through this support, users can access files from any installed DocumentsProvider instance, which can be backed by any cloud-based or local storage solution.
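
As referenced in the first item above, a minimal Kotlin sketch of contributing an image to the shared MediaStore collection; saveImage is a hypothetical helper, and the JPEG bytes are assumed to come from elsewhere in the app:

import android.content.ContentValues
import android.content.Context
import android.provider.MediaStore

// Minimal sketch: insert an image into the shared Images collection.
fun saveImage(context: Context, jpegBytes: ByteArray, displayName: String) {
    val values = ContentValues().apply {
        put(MediaStore.Images.Media.DISPLAY_NAME, displayName)
        put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg")
    }
    val uri = context.contentResolver.insert(
        MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values
    ) ?: return
    context.contentResolver.openOutputStream(uri)?.use { stream ->
        stream.write(jpegBytes)
    }
}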
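
And a second sketch for the directory-picking flow; REQUEST_TREE is a hypothetical request code, and the returned tree URI arrives in onActivityResult:

import android.app.Activity
import android.content.Intent

const val REQUEST_TREE = 42

// Minimal sketch: let the user pick a directory subtree via the system picker.
fun pickDirectory(activity: Activity) {
    activity.startActivityForResult(Intent(Intent.ACTION_OPEN_DOCUMENT_TREE), REQUEST_TREE)
}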

We’ve also provided a detailed Scoped Storage developer guide with additional information.

What’s ahead

It’s been amazing to see the community engagement on Android Q Beta so far. As we finalize the release in the next several months, please continue testing and keep the feedback coming. Join us at Google I/O 2019 for more details on Scoped Storage and other Android Q features. We’re giving a “What’s new on Shared Storage” talk on May 8, and you’ll be able to find the livestream and recorded video on the Google I/O site.

Improving the update process with your feedback

Posted by Sameer Samat, VP of Product Management, Android & Google Play

Thank you for all the feedback about updates we’ve been making to Android APIs and Play policies. We’ve heard your requests for improvement as well as some frustration. We want to explain how and why we’re making these changes, and how we are using your feedback to improve the way we roll out these updates and communicate with the developer community.

From the outset, we’ve sought to craft Android as a completely open source operating system. We’ve also worked hard to ensure backwards compatibility and API consistency, out of respect and a desire to make the platform as easy to use as possible. This developer-centric approach and openness have been cornerstones of Android’s philosophy from the beginning. These are not changing.

But as the platform grows and evolves, each decision we make comes with trade-offs. Every day, billions of people around the world use the apps you’ve built to do incredible things like connect with loved ones, manage finances or communicate with doctors. Users want more control and transparency over how their personal information is being used by applications, and expect Android, as the platform, to do more to provide that control and transparency. This responsibility to users is something we have always taken seriously, and that’s why we are taking a comprehensive look at how our platform and policies reflect that commitment.

Taking a closer look at permissions

Earlier this year, we introduced Android Q Beta with dozens of features and improvements that provide users with more transparency and control, further securing their personal data. Along with the system-level changes introduced in Q, we’re also reviewing and refining our Play Developer policies to further enhance user privacy. For years, we’ve required developers to disclose the collection and use of personal data so users can understand how their information is being used, and to only use the permissions that are really needed to deliver the features and services of the app. As part of Project Strobe, which we announced last October, we are rolling out specific guidance for each of the Android runtime permissions, and we are holding apps developed by Google to the same standard.

We started with changes to SMS and Call Log permissions late last year. To better protect sensitive user data available through these permissions, we restricted access to select use cases, such as when an app has been chosen by the user to be their default text message app. We understood that some app features using this data would no longer be allowed -- including features that many users found valuable -- and worked with you on alternatives where possible. As a result, today, the number of apps with access to this sensitive information has decreased by more than 98%. The vast majority of these were able to switch to an alternative or eliminate minor functionality.

Learning from developer feedback

While these changes are critical to help strengthen privacy protections for our users, we’re sensitive that evolving the platform can lead to substantial work for developers. We have a responsibility to make sure you have the details and resources you need to understand and implement changes, and we know there is room for improvement there. For example, when we began enforcing these new SMS and Call Log policies, many of you expressed frustration about the decision making process. There were a number of common themes that we wanted to share:

  • Permission declaration form. Some of you felt that the use case descriptions in our permissions declaration form were unclear and hard to complete correctly.
  • Timeliness in review and appeals process. For some of you, it took too long to get answers on whether apps met policy requirements. Others felt that the process for appealing a decision was too long and cumbersome.
  • Getting information from a ‘real human’ at Google. Some of you came away with the impression that our decisions were automated, without human involvement. And others felt that it was hard to reach a person who could help provide details about our policy decisions and about new use cases proposed by developers.

In response, we are improving and clarifying the process, including:

  • More detailed communication. We are revising the emails we send for policy rejections and appeals to explain in more detail why a decision was made, how you can modify your app to comply, and how to appeal.
  • Evaluations and appeals. We will include appeal instructions in all enforcement emails and the appeal form with details can also be found in our Help Center. We will also be reviewing and improving our appeals process.
  • Growing the team. Humans, not bots, already review every sensitive decision but we are improving our communication so responses are more personalized -- and we are expanding our team to help accelerate the appeals process.

Evaluating developer accounts

We have also heard concerns from some developers whose accounts have been blocked from distributing apps through Google Play. While the vast majority of developers on Android are well-meaning, some accounts are suspended for serious, repeated violation of policies that protect our shared users. Bad-faith developers often try to get around this by opening new accounts or using other developers’ existing accounts to publish unsafe apps. While we strive for openness wherever possible, in order to prevent bad-faith developers from gaming our systems and putting our users at risk in the process, we can’t always share the reasons we’ve concluded that one account is related to another.

While 99%+ of these suspension decisions are correct, we are also very sensitive to how impactful it can be if your account has been disabled in error. You can immediately appeal any enforcement, and each appeal is carefully reviewed by a person on our team. During the appeals process, we will reinstate your account if we discover that an error has been made.

Separately, we will soon be taking more time (days, not weeks) to review apps by developers that don’t yet have a track record with us. This will allow us to do more thorough checks before approving apps to go live in the store and will help us make even fewer inaccurate decisions on developer accounts.

Thank you for your ongoing partnership and for continuing to make Android an incredibly helpful platform for billions of people around the world.
