Category Archives: Android Developers Blog


How a single Android developer improved Lyft’s Drivers app startup time by 21% in one month

Posted by Mauricio Vergara, Product Marketing Manager, with contributions by Thousand Ant.


Lyft is singularly committed to app excellence. As a rideshare company — providing a vital, time-sensitive service to tens of millions of riders and hundreds of thousands of drivers — they have to be. At that scale, every slowdown, frozen frame, or crash of their app can waste thousands of users’ time. Even a minor hiccup can mean a flood of people riding with (or driving for) the competition. Luckily, Lyft’s development team keeps a close eye on their app’s performance. That’s how they first noticed a slowdown in the startup time of their drivers’ Android app.

They needed to get to the bottom of the problem quickly — figure out what it would take to resolve and then justify such an investment to their leadership. That meant answering a number of tough questions. Where was the bottleneck? How was it affecting user experience? How great a priority should it be for their team at that moment? Luckily, they had a powerful tool at their disposal that could help them find answers. With the help of Android vitals, a Google Play tool for improving app stability and performance on Android devices, they located the problem, made a case for prioritizing it to their leadership, and dedicated the right amount of resources to solving it. Here’s how they did it.


New priorities

The first thing Lyft’s development team needed to do was figure out whether this was a pressing enough problem to convince their leadership to dedicate resources to it. Like any proposal to improve app quality, speeding up Lyft Driver’s start-up time had to be weighed against other competing demands on developer bandwidth: introducing new product features, making architectural improvements, and improving data science. Generally, one of the challenges in convincing leadership to invest in app quality is that it can be difficult to correlate performance improvements with business metrics.

They turned to Android vitals to get an exact picture of what was at stake. Vitals gives developers access to data about the performance of their app, including app-not-responding errors, battery drainage, rendering, and app startup time. The current and historical performance of each metric is tracked on real devices and can be compared to the performance of other apps in the category. With the help of this powerful tool, the development team discovered that the Lyft Driver app startup time was 15–20% slower than 10 other apps in their category — a pressing issue.

Next, the team needed to establish the right scope for the project, one that would be commensurate with the slowdown’s impact on business goals and user experience. The data from Android vitals made the case clear, especially because it provided a direct comparison to competitors in the rideshare space. The development team estimated that a single developer working on the problem for one month would be enough to make a measurable improvement to app startup time.

Drawing on this wealth of data, and appealing to Lyft’s commitment to app excellence, the team made the case to their leadership. Demonstrating a clear opportunity to improve customer experience, a reasonably scoped and achievable goal, and clear-cut competitive intelligence, they got the go-ahead.


How They Did It

Lyft uses “time to interact” (also known as “time to full display”) as its primary startup metric. To understand the factors that affect it, the Lyft team profiled each of the app’s launch stages, looking for the bottleneck. The Lyft Driver app starts up in four stages:

  1. “Process” starts the application process.
  2. “Activity” kicks off the UI rendering.
  3. “Bootstrap” sends network requests for the data necessary to render the home screen.
  4. “Display” opens the driver’s interface.

Rigorous profiling revealed that the slowdown occurred in the third, bootstrapping, phase. With the bottleneck identified, the team took several steps to resolve it.
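The post doesn’t show Lyft’s instrumentation, but a minimal sketch of reporting “time to full display” on Android uses Activity.reportFullyDrawn(), called once the home screen’s data has actually loaded. The activity, layout, and data-loading helper below are hypothetical placeholders:

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class DriverHomeActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_home) // hypothetical layout

        // Bootstrap step: fetch the data the home screen needs, then tell
        // the system the UI is genuinely usable. The timestamp appears in
        // logcat and feeds the "time to full display" startup metric that
        // Android vitals reports.
        loadHomeScreenData {
            reportFullyDrawn()
        }
    }

    // Placeholder for the app's own data-loading logic.
    private fun loadHomeScreenData(onLoaded: () -> Unit) = onLoaded()
}
```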



First, they reduced unneeded network calls on the critical launch path. After decomposing their backend services, they could safely remove some network calls from the launch path entirely. Where possible, they also executed network calls asynchronously: if data was required for the application to function but was not needed during launch, those calls were made non-blocking so the launch could proceed without them. Blocking network calls that remained were moved safely to the background. Finally, they cached data between sessions.
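A sketch of what the reshaped bootstrap might look like with Kotlin coroutines. All names here are hypothetical, not Lyft’s actual code: only the call whose data the home screen truly needs blocks the launch, non-critical work is launched fire-and-forget, and a cache from the previous session short-circuits the network when possible.

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.launch

class HomeScreenData

interface DriverApi {
    suspend fun fetchHomeData(): HomeScreenData
    suspend fun refreshDriverProfile()
}

interface HomeDataCache {
    fun load(): HomeScreenData?
    fun store(data: HomeScreenData)
}

class Bootstrapper(
    private val scope: CoroutineScope, // application-scoped
    private val cache: HomeDataCache,  // persisted between sessions
    private val api: DriverApi
) {
    suspend fun bootstrap(): HomeScreenData {
        // Non-critical refresh runs concurrently; launch does not wait for it.
        scope.launch { api.refreshDriverProfile() }

        // Serve last session's cached data immediately if present, refreshing
        // it in the background; otherwise block on a single fetch.
        return cache.load()?.also {
            scope.launch { cache.store(api.fetchHomeData()) }
        } ?: api.fetchHomeData().also { cache.store(it) }
    }
}
```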



These may sound like relatively small changes, but they resulted in a dramatic 21% reduction in app startup time. This led to a 5% increase in driver sessions in Lyft Driver. With the results in hand, the team had enough buy-in from leadership to create a dedicated mobile performance workstream and add an engineer to the effort as they continued to make improvements. The success of the initiative caught on across the organization, with several managers reaching out to explore how they could make further investments in app quality.


Learnings

The success of these efforts contains several broader lessons, applicable to any organization.

As an app grows and the team grows with it, app excellence becomes more important than ever. Developers are often the first to recognize performance issues as they work closely on an app, but can find it difficult to raise awareness across an entire organization. Android vitals offers a powerful tool to do this. It provides a straightforward way to back up developer observations with data, making it easier to square performance metrics with business cases.

When starting your own app excellence initiative, it pays to aim for small wins first and build from there. Carefully pick actionable projects that deliver significant results for an appropriate resource investment.

It’s also important to communicate early and often to involve the rest of the organization in the development team’s quality efforts. These constant updates about goals, plans, and results will help you keep your whole team on board.


Further Resources

Android vitals is just one of the many tools in the Android ecosystem designed to help understand and improve app startup time and overall performance. Another complementary tool, Jetpack Macrobenchmark, can help provide intelligence during development and testing on a variety of metrics. In contrast to Android vitals, which provides data from real users’ devices, Macrobenchmark allows you to benchmark and test specific areas of your code locally, including app startup time.
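A minimal Macrobenchmark startup test looks roughly like the following sketch, using the public MacrobenchmarkRule API; the package name is a hypothetical placeholder for your own app:

```kotlin
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun coldStartup() = benchmarkRule.measureRepeated(
        packageName = "com.example.myapp", // hypothetical package name
        metrics = listOf(StartupTimingMetric()),
        iterations = 5,
        startupMode = StartupMode.COLD
    ) {
        // Launch the default activity from a cold state and wait for
        // its first frame; timing is captured by StartupTimingMetric.
        pressHome()
        startActivityAndWait()
    }
}
```

Macrobenchmark tests run in a separate module on a physical device or emulator, so they can be part of regular regression testing for startup time.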

The Jetpack App startup library provides a straightforward, performant way to initialize components at application startup. Developers can use this library to streamline startup sequences and explicitly set the order of initialization. Meanwhile, Reach and devices can help you understand your user and issue distribution to make better decisions about which specs to build for, where to launch, and what to test. The data from the tool allows your team to prioritize quality efforts and determine where improvements will have the greatest impact for the most users. Perfetto is another invaluable asset: an open-source system tracing tool which you can use to instrument your code and diagnose startup problems. In concert, these tools can help you keep your app running smoothly, your users happy, and your whole organization supportive of your quality efforts.
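With the App Startup library, each component implements the Initializer interface; the library runs them in dependency order at launch. The analytics client and logger initializer below are hypothetical examples, not real SDKs:

```kotlin
import android.content.Context
import androidx.startup.Initializer

// Hypothetical SDK being initialized.
class AnalyticsClient private constructor() {
    companion object {
        fun init(context: Context): AnalyticsClient = AnalyticsClient()
    }
}

// The library calls create() exactly once at app startup,
// after every initializer listed in dependencies().
class AnalyticsInitializer : Initializer<AnalyticsClient> {
    override fun create(context: Context): AnalyticsClient =
        AnalyticsClient.init(context)

    // Run after LoggerInitializer (hypothetical) so logging is ready first.
    override fun dependencies(): List<Class<out Initializer<*>>> =
        listOf(LoggerInitializer::class.java)
}
```

Each initializer is also declared in the manifest under the library’s InitializationProvider, which is how the order of initialization is made explicit.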

If you’re interested in getting your own team on board for the pursuit of App Excellence (or in joining Lyft), check out our condensed case study for product owners and executives linked here.

The exciting aspects of Android Camera

Posted by Marwa Mabrouk, Android Camera Platform Product Manager


Android Camera is an exciting space. Camera is one of the top reasons consumers purchase a phone, and Android Camera empowers developers today through different tools. Camera 2 is the framework API that has been included in Android since Android 5.0 Lollipop, and CameraX is a Jetpack support library that runs on top of Camera 2 and is available to all Android developers. These solutions complement each other in addressing the needs of the Android Camera ecosystem.

For developers who are starting with Android Camera, refreshing their app to the latest, or migrating their app from Camera 1, CameraX is the best tool to get started! CameraX offers key benefits that empower developers, and address the complexities of the ecosystem.

  1. Development speed was the main driver behind CameraX’s design. The SDK doesn’t just let developers get up and running much faster; it also builds in development best practices and photography know-how to get the most out of the camera.
  2. Android-enabled devices come in large numbers with many variations. CameraX aims to be consistent across many Android devices and has taken that complexity upon itself, to offer developers an SDK that works consistently across 150+ phone models, with backward-compatibility to Android 5.0 (API level 21). CameraX is tested daily by Google on each of those devices in our labs, to ensure complexity is not surfaced to developers, while keeping the quality high.
  3. Fast library releases are a flexibility that CameraX gains as a Jetpack support library. CameraX releases can happen on a shorter regular cadence, or ad hoc, to address feedback and provide new capabilities. We plan to expand on this in another blog post.

For developers who are building highly specialized camera functionality, need low-level control of the capture flow, and can take device variations into consideration, Camera 2 is the right choice.

Camera 2 is the common API that enables the camera hardware on every Android device and is deployed on all the billions of Android devices around the world in the market today. As a framework API, Camera 2 enables developers to utilize their deep knowledge of photography and device implementations. To ensure the quality of Camera 2, device manufacturers show compliance by testing their devices. Device variations do surface in the API based on the device manufacturer's choices, allowing custom features to take advantage of those variations on specific devices as they see fit.

To understand this more, let’s use an example. We’ll compare camera capture capabilities. Camera 2 offers special control of the individual capture pipeline for each of the cameras on the phone at the same time, in addition to very fine grained manual settings. CameraX enables capturing high-resolution, high-quality photos and provides auto-white-balance, auto-exposure, and auto-focus functionality, in addition to simple manual camera controls.
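To make the contrast concrete, here is a minimal CameraX sketch: bind preview and capture use cases to a lifecycle and let the library pick sensible defaults (auto-exposure, auto-white-balance, auto-focus) across devices. The function and its parameters are illustrative, not a complete app:

```kotlin
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageCapture
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat

fun startCamera(activity: AppCompatActivity, previewView: PreviewView) {
    val providerFuture = ProcessCameraProvider.getInstance(activity)
    providerFuture.addListener({
        val provider = providerFuture.get()

        // Preview use case renders frames into the supplied PreviewView.
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(previewView.surfaceProvider)
        }

        // Capture use case favors image quality over latency.
        val imageCapture = ImageCapture.Builder()
            .setCaptureMode(ImageCapture.CAPTURE_MODE_MAXIMIZE_QUALITY)
            .build()

        // Rebind both use cases to the activity's lifecycle; CameraX
        // handles opening, closing, and per-device quirks.
        provider.unbindAll()
        provider.bindToLifecycle(
            activity, CameraSelector.DEFAULT_BACK_CAMERA, preview, imageCapture)
    }, ContextCompat.getMainExecutor(activity))
}
```

Achieving the same behavior with Camera 2 would mean managing the camera device, capture session, and request parameters directly, which is exactly the extra control that specialized apps need.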

Considering application examples: Samsung uses the Camera Framework API to help its advanced pro-grade camera system capture studio-quality photos in various lightings and settings on Samsung Galaxy devices. While the API is common, Samsung has enabled variations that are unique to each device's capabilities and takes advantage of them in the camera app on each device. The Camera Framework API enables Samsung to reach into low-level camera capabilities and tailor the native app for each device.

In another example, Microsoft decided to integrate CameraX across all productivity apps where Microsoft Lens is used (e.g., Office, Outlook, OneDrive) to ensure high-quality images in all these applications. By switching to CameraX, the Microsoft Lens team was able not only to improve its developer experience thanks to the simpler API, but also to improve performance, increase developer productivity, and reduce time to market. You can learn more about this here.


This is a very exciting time for Android Camera, with many new features on both APIs:

  • CameraX has launched several features recently, the most significant being Video Capture, which became available to developers in beta on January 26th.
  • With the Android 12 launch, Camera 2 has a number of new features available.

As we move forward, we plan to share with you more details about the exciting features that we have planned for Android Camera. We look forward to engaging with you and hearing your feedback, through the CameraX mailing list: [email protected] and the AOSP issue tracker.

Thank you for your continued interest in Android Camera, and we look forward to building amazing camera experiences for users in collaboration with you!


Upgrading Android Attestation: Remote Provisioning

Posted by Max Bires, Software Engineer


Attestation as a feature has been mandated since Android 8.0. As releases have come and gone, it has increasingly become more and more central to trust for a variety of features and services such as SafetyNet, Identity Credential, Digital Car Key, and a variety of third party libraries. In light of this, it is time we revisited our attestation infrastructure to tighten up the security of our trust chain and increase the recoverability of device trust in the event of known vulnerabilities.

Starting in Android 12.0, we will be providing an option to replace in-factory private key provisioning with a combination of in-factory public key extraction and over-the-air certificate provisioning with short-lived certificates. This scheme will be mandated in Android 13.0. We call this new scheme Remote Key Provisioning.


Who Does This Impact?

OEMs/ODMs

Device manufacturers will no longer be provisioning attestation private keys directly to devices in the factory, removing the burden of having to manage secrets in the factory for attestation.

Relying Parties, Potentially

As described further below, the format, algorithms, and length of the certificate chain in an attestation will be changing. If a relying party's certificate validation code is written to fit the legacy certificate chain structure very strictly, that code will need to be updated.


Why Change?

The two primary motivating factors for changing the way we provision attestation certificates to devices are to allow devices to be recovered post-compromise and to tighten up the attestation supply chain. In today’s attestation scheme, if a device model is found to be compromised in a way that affects the trust signal of an attestation, or if a key is leaked through some mechanism, the key must be revoked. Due to the increasing number of services that rely on the attestation key signal, this can have a large impact on the consumer whose device is affected.

This change allows us to stop provisioning to devices that are on known-compromised software, and remove the potential for unintentional key leakage. This will go a long way in reducing the potential for service disruption to the user.


How Does This Work?

A unique, static keypair is generated by each device, and the public portion of this keypair is extracted by the OEM in their factory. These public keys are then uploaded to Google servers, where they serve as the basis of trust for provisioning later. The private key never leaves the secure environment in which it is generated.

When a device is unboxed and connected to the internet, it will generate a certificate signing request for keys it has generated, signing it with the private key that corresponds to the public key collected in the factory. Backend servers will verify the authenticity of the request and then sign the public keys, returning the certificate chains. Keystore will then store these certificate chains, assigning them to apps whenever an attestation is requested.

This flow will happen regularly upon expiration of the certificates or exhaustion of the current key supply. The scheme is privacy preserving in that each application receives a different attestation key, and the keys themselves are rotated regularly. Additionally, Google backend servers are segmented such that the server which verifies the device’s public key does not see the attached attestation keys. This means it is not possible for Google to correlate attestation keys back to a particular device that requested them.
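App developers don’t interact with this provisioning flow directly; they keep requesting attested keys from Keystore as before. A minimal sketch of requesting a hardware-attested signing key follows; the key alias and challenge value are hypothetical, and in practice the challenge should be a nonce from your server:

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator
import java.security.KeyStore

val spec = KeyGenParameterSpec.Builder(
        "my_attested_key", // hypothetical alias
        KeyProperties.PURPOSE_SIGN or KeyProperties.PURPOSE_VERIFY)
    .setDigests(KeyProperties.DIGEST_SHA256)
    // Including a challenge requests an attestation certificate chain.
    .setAttestationChallenge("server-nonce".toByteArray())
    .build()

val generator = KeyPairGenerator.getInstance(
    KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
generator.initialize(spec)
generator.generateKeyPair()

// The attestation chain for the new key. Under remote provisioning its
// length and algorithms can vary, so relying parties should validate it
// without assuming a fixed structure.
val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
val chain = keyStore.getCertificateChain("my_attested_key")
```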


What is Changing from a Technical Standpoint?

End users won’t notice any changes. Developers that leverage attestation will want to watch out for the following changes:

  • Certificate Chain Structure
    • Due to the nature of our new online provisioning infrastructure, the chain length is longer than it was previously, and is subject to change.
  • Root of Trust
    • The root of trust will eventually be updated from the current RSA key to an ECDSA key.
  • RSA Attestation Deprecation
    • All keys generated and attested by KeyMint will be signed with an ECDSA key and corresponding certificate chain. Previously, asymmetric keys were signed by their corresponding algorithm.
  • Short-Lived Certificates and Attestation Keys
    • Certificates provisioned to devices will generally be valid for up to two months before they expire and are rotated.

Using performance class to optimize your user experience

Posted by Don Turner, Developer Relations Engineer, and Francois Goldfain, Director of Android Media Framework


Today we're launching the Jetpack Core Performance library in alpha. This library enables you to easily understand what a device is capable of, and tailor your user experience accordingly. It does this by allowing you to obtain the device’s performance class on devices running Android 11 (API level 30) and above.

A performance class is a ranking that reflects both a device's level of performance and its overall capabilities. As such, it largely reflects the device’s hardware specifications, but also how it performs in certain real-world scenarios, verified by the Android Compatibility Test Suite.

The performance class requirements currently focus on media use cases. For example, a Media Performance Class 12 device is guaranteed to:

  • have 6+ GB of RAM
  • have a 12+ megapixel rear-facing camera supporting video capture at 4k@30fps,
  • be able to initialize a video codec in <50ms even when under load
  • and many other requirements.

A device that meets these requirements can optimally handle many popular media use cases including the typical video pipelines in social media apps for capturing, encoding, and sharing.

As an app developer, this means you can reliably group devices with the same level of performance and tailor your app’s behavior to those different groups. This enables you to deliver an optimal experience to users with both more and less capable devices. Performance class requirements will expand with each major Android release, making it possible to easily target different levels of experience to the performance class range you find appropriate. For example, you might wish to tailor “more premium” and “more functional” experiences to certain performance classes.


How to use performance class

To add performance class to your app, include the following dependency in your build.gradle:

implementation 'androidx.core:core-performance:1.0.0-alpha02'

Then use it to tailor your user experience. For example, to encode higher resolution video depending on Media Performance Class:

import android.content.Context
import android.os.Build
import androidx.core.performance.DevicePerformance

class OptimalVideoSettings(context: Context) {

    private val devicePerf: DevicePerformance = DevicePerformance.create(context)

    val encodeHeight by lazy {
        when (devicePerf.mediaPerformanceClass) {
            Build.VERSION_CODES.S -> 1080 // On performance class 12 use 1080p
            Build.VERSION_CODES.R -> 720  // On performance class 11 use 720p
            else -> 480                   // Otherwise fall back to 480p
        }
    }

    val encodeFps by lazy {
        when (devicePerf.mediaPerformanceClass) {
            Build.VERSION_CODES.S -> 60 // On performance class 12 use 60 fps
            Build.VERSION_CODES.R -> 30 // On performance class 11 use 30 fps
            else -> 30                  // Otherwise default to 30 fps
        }
    }
}

When to use performance class

The Android device ecosystem is very diverse. The same application code can lead to very different behaviors depending on the device’s capabilities. For example, encoding a 4K video might take a few seconds on one device but a few minutes on another. User expectations also vary greatly based on the device they purchase. To provide an optimized user experience, it is common to group devices based on some criteria, such as RAM size or year released, then tailor your app's features for each group.

The problem with using an arbitrary value such as RAM size for grouping is that it provides no guarantees of a device's performance. There will also always be outliers that perform better or worse than expected within that group. Grouping on performance class solves this problem since it provides these guarantees, backed by real-world tests.

Manually testing devices that belong to different performance classes is one option to assess and identify the changes needed to balance functionalities and usability. However, the recommended approach to validate changes in the app experience is to run A/B tests and analyze their impact on app metrics. You can do this with the support of an experimentation platform such as Firebase. Providing the device’s performance class to the experimentation platform gives an additional performance dimension to the test results. This lets you identify the right optimizations for each class of device.


Snap Inc.

Snap has been using device clustering and A/B testing to fine tune their experience for Snapchatters. By leveraging performance class, Snapchat confidently identifies device capability in a scalable way and delivers an optimal experience. For example, the visual quality of shared videos is increased by using higher resolution and bitrate on Media Performance Class 12 devices than by default. As more devices are upgraded to meet Media Performance Class, Snapchat will run additional A/B tests and deploy features better optimized for the device capabilities.


Device support

The performance class requirements are developed in collaboration with leading developers and device manufacturers, who recognize the need for a simple, reliable, class-based system to allow app optimizations at scale.

In particular, Oppo, OnePlus, realme, Vivo, and Xiaomi have been the first to optimize their flagship devices to ensure that they meet the Media Performance Class 12 requirements. As a result, Build.VERSION.MEDIA_PERFORMANCE_CLASS returns Build.VERSION_CODES.S (the Android 12 API level) on the following devices:


Why a Jetpack library?

The Jetpack Core Performance library was introduced to extend performance class to devices not yet running Android 12 or not advertising their eligible performance class at the time they passed the Android Compatibility Test Suite.

The library, which supports devices running Android 11 and above, reports the performance class of many such devices based on test results collected during device certification or through additional testing done by the Google team. We're adding new devices regularly, so make sure you’re using the latest version of the Core Performance library to get maximum device coverage.

Reporting performance class to Firebase

When using Firebase as an experimentation platform for A/B tests, it's easy to send the device performance class to Firebase Analytics using a user property. Filtering the A/B test reports by performance class can indicate which experimental values led to the best metrics for each group of devices.

Here's an example of an A/B test which varies the encoding height of a video, and reports the performance class using a user property.

import android.app.Application
import androidx.core.performance.DevicePerformance
import com.google.firebase.analytics.FirebaseAnalytics
import com.google.firebase.analytics.ktx.analytics
import com.google.firebase.ktx.Firebase
import com.google.firebase.remoteconfig.FirebaseRemoteConfig
import com.google.firebase.remoteconfig.ktx.remoteConfig

class MyApplication : Application() {

    private lateinit var devicePerf: DevicePerformance
    private lateinit var firebaseAnalytics: FirebaseAnalytics
    private lateinit var remoteConfig: FirebaseRemoteConfig

    override fun onCreate() {
        super.onCreate()
        devicePerf = DevicePerformance.create(this)
        firebaseAnalytics = Firebase.analytics
        remoteConfig = Firebase.remoteConfig
        // setUserProperty expects a String value, so convert the Int
        // performance class before reporting it.
        firebaseAnalytics.setUserProperty(
            "androidx.core.performance.DevicePerformance.mediaPerformanceClass",
            devicePerf.mediaPerformanceClass.toString())
    }

    // The encode height under test is supplied by Firebase Remote Config.
    fun getVideoEncodeHeight(): Long = remoteConfig.getLong("encode_height")
}

Next steps

We'd love for you to try out the Core Performance library in your app. If you have any issues or feature requests please file them here.

Also, we'd be interested to hear any feedback you have on the performance class requirements. Are there specific performance criteria or hardware requirements that are important for your app's use cases? If so, please let us know using the Android issue tracker.

Exploring User Choice Billing With First Innovation Partner Spotify

Posted by Sameer Samat, Vice President, Product Management

Mobile apps have transformed our lives. They help us stay informed and entertained, keep us connected to each other and have created new opportunities for billions of people around the world. We’re humbled by the role Google Play has played over the last 10 years in this global transformation.

We wouldn’t be here if it weren’t for the close partnership with our valued developers and using their feedback to keep evolving. For example, based on partner feedback and in response to competition, our pricing model has evolved to help all developers on our platform succeed and today 99% of developers qualify for a service fee of 15% or less.

Recently, a discussion has emerged around billing choice within app stores. We welcome this conversation and today we want to share an exciting pilot program we are working on in partnership with Play developers.


User Choice Billing

When users choose Google Play, it’s because they count on us to deliver a safe experience, and that includes in-app payment systems that protect users’ data and financial information. That’s why we built Google Play’s billing system to the highest standards for privacy and safety so users can be confident their sensitive payment data won’t be at risk when they make in-app purchases.

We think that users should continue to have the choice to use Play’s billing system when they install an app from Google Play. We also think it’s critical that alternative billing systems meet similarly high safety standards in protecting users’ personal data and sensitive financial information.

Building on our recent launch allowing an additional billing system alongside Play’s billing for users in South Korea and in line with our principles, we are announcing we will be exploring user choice billing in other select countries.

This pilot will allow a small number of participating developers to offer an additional billing option next to Google Play’s billing system and is designed to help us explore ways to offer this choice to users, while maintaining our ability to invest in the ecosystem. This is a significant milestone and the first on any major app store — whether on mobile, desktop, or game consoles.


Partnering with Spotify

We’ll be partnering with developers to explore different implementations of user choice billing, starting with Spotify. As one of the world’s largest subscription developers with a global footprint and integrations across a wide range of device form factors, they’re a natural first partner. Together, we’ll work to innovate in how consumers make in-app purchases, deliver engaging experiences across multiple devices, and bring more consumers to the Android platform.

Spotify will be introducing Google Play’s billing system alongside their current billing system, and their perspective as our first partner will be invaluable. This pilot will help us to increase our understanding of whether and how user choice billing works for users in different countries and for developers of different sizes and categories.

Alex Norström, Spotify’s Chief Freemium Business Officer, commented: “Spotify is on a years-long journey to ensure app developers have the freedom to innovate and compete on a level playing field. We’re excited to be partnering with Google to explore this approach to payment choice and opportunities for developers, users and the entire internet ecosystem. We hope the work we’ll do together blazes a path that will benefit the rest of the industry.”

We understand this process will take time and require close collaboration with our developer community, but we’re thrilled about this first step and plan to share more in the coming months.

Android 13 Developer Preview 2

Posted by Dave Burke, VP of Engineering


Last month, we released the first developer preview of Android 13, built around our core themes of privacy and security, developer productivity, as well as tablets and large screen support. Today we’re sharing Android 13 Developer Preview 2 with more new features and changes for you to try in your apps. Your input helps us make Android a better platform for developers and users, so let us know what you think!

Today’s release also comes on the heels of the 12L feature drop moving to the Android Open Source Project (AOSP) last week, helping you better take advantage of the over 250 million large screen Android devices. And to dive into Android 13, tablets, as well as our developer productivity investments in Jetpack Compose, check out the latest episode of #TheAndroidShow.


12L feature drop, now in AOSP

Before jumping into Developer Preview 2, let’s take a look at the other news from last week: we’ve officially released the 12L feature drop to AOSP and it’s rolling out to all supported Pixel devices over the next few weeks. 12L makes Android 12 even better on tablets, and includes updates like a new taskbar that lets users instantly drag and drop apps into split-screen mode, new large-screen layouts in the notification shade and lockscreen, and improved compatibility modes for apps. You can read more here.

Starting later this year, 12L will be available in planned updates on tablets and foldables from Samsung, Lenovo, and Microsoft, so now is the time to make sure your apps are ready. We highly recommend testing your apps in split-screen mode with windows of various sizes, trying it in different orientations, and checking the new compatibility mode changes if they apply. You can read more about 12L for developers here.

And the best part: the large screen features in 12L are foundational in Android 13, so you can develop and test on Android 13 knowing that you’re also covering your bases for tablets running Android 12L. We see large screens as a key surface for the future of Android, and we’re continuing to invest to give you the tools you need to build great experiences for tablets, Chromebooks, and foldables. You can learn more about how to get started optimizing for large screens, and make sure to check out our large screens developer resources.

Let’s dive into what’s new in today’s Developer Preview 2 of Android 13.


Privacy and user trust

People want an OS and apps that they can trust with their most personal and sensitive information and the resources on their devices. Privacy and user trust are core to Android’s product principles, and in Android 13 we’re continuing to focus on building a responsible and high quality platform for all by providing a safer environment on the device and more controls to the user. Here’s what’s new in Developer Preview 2.

Notification permission - To help users focus on the notifications that are most important to them, Android 13 introduces a new runtime permission for sending notifications from an app: POST_NOTIFICATIONS. Apps targeting Android 13 will now need to request the notification permission from the user before posting notifications. For apps targeting Android 12 or lower, the system will handle the upgrade flow on your behalf; this flow will continue to be fine-tuned. To provide more context and control for your users, we encourage you to target Android 13 as early as possible and request the notification permission in your app. More here.
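As a hedged sketch of what this looks like in an app targeting Android 13 (the extension function and request code here are illustrative, not from this post), you declare the permission in the manifest and request it at runtime with the standard AndroidX helpers:

```kotlin
// AndroidManifest.xml must also declare:
// <uses-permission android:name="android.permission.POST_NOTIFICATIONS" />

import android.Manifest
import android.content.pm.PackageManager
import android.os.Build
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

fun AppCompatActivity.ensureNotificationPermission(requestCode: Int) {
    // POST_NOTIFICATIONS only exists as a runtime permission on Android 13 (API 33).
    if (Build.VERSION.SDK_INT >= 33 &&
        ContextCompat.checkSelfPermission(this, Manifest.permission.POST_NOTIFICATIONS)
            != PackageManager.PERMISSION_GRANTED
    ) {
        // Shows the system dialog; the result arrives in onRequestPermissionsResult().
        ActivityCompat.requestPermissions(
            this, arrayOf(Manifest.permission.POST_NOTIFICATIONS), requestCode
        )
    }
}
```

A good place to call this is just before the first notification your app posts, so the prompt appears in context rather than at cold start.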

Notification permission dialog in Android 13.


Developer downgradable permissions - Some apps may no longer require certain permissions which were previously granted by the user to enable a specific feature, or retain a sensitive permission from an older Android version. In Android 13, we’re providing a new API to let your app protect user privacy by downgrading previously granted runtime permissions.
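A minimal sketch of the new API follows; `revokeSelfPermissionOnKill` is the Android 13 entry point, and the location permission here is just an example of a permission an app might no longer need:

```kotlin
import android.Manifest
import android.content.Context
import android.os.Build

// Ask the system to remove a runtime permission the app no longer needs.
// The revocation takes effect the next time the app's process is killed.
fun dropUnusedLocationPermission(context: Context) {
    if (Build.VERSION.SDK_INT >= 33) {
        context.revokeSelfPermissionOnKill(Manifest.permission.ACCESS_FINE_LOCATION)
    }
}
```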

Safer exporting of context-registered receivers - In Android 12 we required developers to declare the exportability of manifest-declared Intent receivers. In Android 13 we’re asking you to do the same for context-registered receivers as well, by adding either the RECEIVER_EXPORTED or RECEIVER_NOT_EXPORTED flag when registering receivers for non-system sources. This will help ensure that receivers aren’t available for other applications to send broadcasts to unless desired. While not required in Android 13, we recommend declaring exportability as a step toward securing your app.
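In practice, the flag is passed when registering the receiver. A sketch, where the broadcast action is a hypothetical placeholder:

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.IntentFilter
import android.os.Build

fun registerPrivateReceiver(context: Context, receiver: BroadcastReceiver) {
    // Hypothetical app-internal broadcast action.
    val filter = IntentFilter("com.example.ACTION_DATA_REFRESHED")
    if (Build.VERSION.SDK_INT >= 33) {
        // Only this app (and the system) can deliver broadcasts to this receiver.
        context.registerReceiver(receiver, filter, Context.RECEIVER_NOT_EXPORTED)
    } else {
        context.registerReceiver(receiver, filter)
    }
}
```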


Developer productivity

In Android 13 we’re working to give you more tools to help you deliver a polished experience and better performance for users. Here are some of the updates in today’s release.

Improved Japanese text wrapping - TextViews can now wrap text by Bunsetsu (the smallest unit of words that sounds natural) or phrases, instead of by character, for more polished and readable Japanese applications. You can take advantage of this wrapping by using android:lineBreakWordStyle="phrase" with TextViews.


Japanese text wrapping with phrase style enabled (bottom) and without (top).
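In a layout, the attribute is set directly on the TextView; a minimal fragment (the string resource name is a placeholder):

```xml
<TextView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:text="@string/japanese_body_text"
    android:lineBreakWordStyle="phrase" />
```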

Improved line heights for non-Latin scripts - Android 13 improves the display of non-Latin scripts (such as Tamil, Burmese, Telugu, and Tibetan) by using a line height that’s adapted for each language. The new line heights prevent clipping and improve the positioning of characters. Your app can take advantage of these improvements just by targeting Android 13. Make sure to test your apps when using the new line spacing, since changes may affect your UI in non-Latin languages.


Improved line height for non-Latin scripts in apps targeting Android 13 (bottom).

Text Conversion APIs - People who speak languages like Japanese and Chinese use phonetic lettering input methods, which often slow down searching and features like auto-completion. In Android 13, apps can call the new text conversion API so users can find what they're looking for more quickly and easily. Previously, for example, searching required a Japanese user to (1) input Hiragana as the phonetic pronunciation of their search term (e.g., a place or an app name), (2) use the keyboard to convert the Hiragana characters to Kanji, (3) search again using the Kanji characters, and (4) get their search results. With the new text conversion API, Japanese users can type in Hiragana and immediately see Kanji search results live, skipping steps 2 and 3.

Color vector fonts - Android 13 adds rendering support for COLR version 1 (spec, intro video) fonts and updates the system emoji to the COLRv1 format. COLRv1 is a new, highly compact font format that renders quickly and crisply at any size. For most apps this will just work: the system handles everything. You can opt in to COLRv1 for your app starting in Developer Preview 2. If your app implements its own text rendering and uses the system's fonts, we recommend opting in and testing emoji rendering. Learn more about COLRv1 in the Chrome announcement.


COLRv1 vector emoji (left) and bitmap emoji.

Bluetooth LE Audio - Low Energy (LE) Audio is the next-generation wireless audio standard, built to replace Bluetooth Classic and enable new use cases and connection topologies. It will allow users to share and broadcast their audio to friends and family, or subscribe to public broadcasts for information, entertainment, or accessibility. It’s designed to ensure that users can receive high fidelity audio without sacrificing battery life, and to let them seamlessly switch between use cases that were not possible with Bluetooth Classic. Android 13 adds built-in support for LE Audio, so developers should get the new capabilities for free on compatible devices.

MIDI 2.0 - Android 13 adds support for the new MIDI 2.0 standard, including the ability to connect MIDI 2.0 hardware through USB. This updated standard offers features such as increased resolution for controllers, better support for non-Western intonation, and more expressive performance using per-note controllers.


App compatibility

With each platform release, we’re working to make updates faster and smoother by prioritizing app compatibility as we roll out new platform versions. In Android 13 we’ve made app-facing changes opt-in to give you more time, and we’ve updated our tools and processes to help you get ready sooner.

With Developer Preview 2, we’re well into the release and continuing to improve overall stability, so now is the time to try the new features and changes and give us your feedback. We’re especially looking for input on our APIs, as well as details on how the platform changes affect your apps. Please visit the feedback page to share your thoughts with us or report issues.


It’s also a good time to start your compatibility testing and identify any work you’ll need to do. We recommend doing the work early, so you can release a compatible update by Android 13 Beta 1. There’s no need to change your app’s targetSdkVersion at this time, but we do recommend using the behavior change toggles in Developer Options to get a preliminary idea of how your app might be affected by opt-in changes in Android 13.

As we reach Platform Stability in June 2022, all of the app-facing system behaviors, SDK/NDK APIs, and non-SDK lists will be finalized. At that point, you can wind up your final compatibility testing and release a fully compatible version of your app, SDK, or library. More on the timeline for developers is here.

App compatibility toggles in Developer Options.



Get started with Android 13

The Developer Preview has everything you need to try the Android 13 features, test your apps, and give us feedback. You can get started today by flashing a device system image to a Pixel 6 Pro, Pixel 6, Pixel 5a 5G, Pixel 5, Pixel 4a (5G), Pixel 4a, Pixel 4 XL, or Pixel 4 device. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio Dolphin. For even broader testing, GSI images are available. If you’ve already installed a preview build to your Pixel device, you’ll automatically get this update and all later previews and Betas over the air. More details on how to get Android 13 are here.

For complete information, visit the Android 13 developer site.

Helping Users Discover Quality Apps on Large Screens

Posted by The Android Team


Large screens are growing in reach, now with over 250 million active Android tablets, foldables, and ChromeOS devices. As demand continues to accelerate, we’re seeing users doing more than ever on large screens, from socializing and playing games, to multitasking and getting things done. To help people get the most from their devices, we're making big changes in Google Play to enable users to discover and engage with high quality apps and games.

Changes in Play

We’ll be introducing three main updates to the store: ranking and promotability changes, alerts for low quality apps, and device-specific ratings and reviews.

Ranking and Promotability Changes

We recently published our large screen app quality guidelines in addition to our core app quality guidelines to provide guidance on creating great user experiences on large screens. These guidelines encompass a holistic set of features, from basic compatibility requirements such as portrait and landscape support, to more differentiated requirements like keyboard and stylus capabilities. In the coming months, we’ll be updating our featuring and ranking logic in Play on large screen devices to prioritize high-quality apps and games based on these app quality guidelines. This will affect how apps are surfaced in search results and recommendations on the homepage, with the goal of helping users find the apps that are best optimized for their device. We will also be deepening our investment in editorial content across Play to highlight apps that have been optimized for large screens.

Alerts for Users Installing Low Quality Apps

For apps that don’t meet basic compatibility requirements, we’ll be updating current alerts to users on large screens to help set expectations for how apps will look and function post-install. This will help notify users about apps that may not work well on their large screen devices. We are working to provide additional communications on this change, so stay tuned for further updates later this year.

Device-Specific Ratings and Reviews

Lastly, as we previously announced, users will soon be able to see ratings and reviews split by device type (e.g. tablets and foldables, Chrome OS, Wear, or Auto) to help them make better decisions about the apps that are right for them. Where applicable, the default rating shown in Play will be that of the device type the user is using, to provide a better sense of the app experience on their device. To preview your ratings and reviews by device, you can view your device-type breakdown in Play Console today.

Play Console screen

Analyze your ratings and reviews breakdown by device type to plan large screen optimizations


Tools for Getting Started on Large Screen Optimizations

Developers optimizing for large screens are already seeing positive impact to user engagement and retention. To help you get started, here are some tips and resources for optimizing your app for large screens:

  • Use our large screens quality guidelines as a checklist to help you benchmark your apps’ compatibility level and plan for any enhancements
  • Reference our developer documentation on large screens development, including our resources for building responsive layouts, and how to best support different screen sizes.
  • Track device type changes in core metrics as well as user and issue distributions across device types via our recently released Device Type breakdowns in the Reach and Devices section of Play Console. You can use Reach and Devices not only for existing apps or games, but also to plan your next title - by choosing a relevant peer group and analyzing user and issue distributions.

    Use the Device Type filter to select one or multiple device types to analyze in Reach and Devices



    See device type breakdowns of your user and issue distributions to optimize your current title
    or plan your next title


The features in Play will roll out gradually over the coming months, so we encourage you to get a head start in planning for large screen app quality enhancements ahead of these changes. Along the way, we will continue collecting feedback to understand how we can best support large screen optimizations that improve consumer experiences and empower developers to build better apps.

Access Android vitals data through the new Play Developer Reporting API

Posted by Lauren Mytton, Product Manager, Google Play


Quality is foundational to your game or app’s success on Google Play, and Android vitals in Google Play Console is a great way to track how your app is performing. In fact, over 80% of the top one thousand developers check Android vitals at least once a month to monitor and troubleshoot their technical quality, and many visit daily.

While the Android vitals overview in Play Console lets you check your app or game’s quality at a glance, many developers have told us that they want to work with their vitals data outside Play Console, too. Some of your use cases include:

  • Building internal dashboards
  • Joining with other datasets for deeper analysis, and
  • Automating troubleshooting and releases

Starting today, these use cases are possible with the new Play Developer Reporting API.

The Play Developer Reporting API allows developers to work with app-level data from their developer accounts outside Play Console. In this initial launch, you get access to the four core Android vitals stability and battery metrics: crash rate, ANR rate, excessive wake-up rate, and stuck background wake-lock rate, along with crash and ANR issues and stack traces. You can also view anomalies, breakdowns (including new country filters in Vitals), and three years of metric history.



Set up access to the new Play Developer Reporting API from the API Access page in Play Console.

Getting started with the API

To enable the API, you must be an owner of your developer account in Play Console. Then you can set up access in minutes from the API Access page in Play Console. Our documentation covers everything you need to know to get started.

Using the API

You can find sample requests in the API documentation, along with a list of available endpoints (for both alpha and beta releases).

Best practices

Once you have enabled the API, you may wish to send some requests manually to get a sense of the API resources and operation before implementing more complex solutions. This can also help you establish query times, which will vary depending on the amount of data being processed. Queries over long time ranges, across many dimensions, and/or against very large apps will take longer to execute.

Most of our metric sets are refreshed once a day. To avoid wasting resources and request quota, we recommend you use the provided methods to check for data freshness and verify that new data is available before issuing a query.
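As an illustrative sketch of that pattern (the package name is a placeholder, an OAuth 2.0 access token with the Play developer reporting scope is assumed in `$ACCESS_TOKEN`, and the exact request fields should be checked against the API reference), a freshness check followed by a metrics query might look like:

```shell
# 1. Fetch the crash-rate metric set's metadata, which includes
#    freshness info describing when the data was last refreshed.
curl -s \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://playdeveloperreporting.googleapis.com/v1beta1/apps/com.example.app/crashRateMetricSet"

# 2. Only if new data is available, issue the actual query.
curl -s -X POST \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "timelineSpec": {
          "aggregationPeriod": "DAILY",
          "startTime": {"year": 2022, "month": 3, "day": 1},
          "endTime": {"year": 2022, "month": 3, "day": 8}
        },
        "metrics": ["crashRate"]
      }' \
  "https://playdeveloperreporting.googleapis.com/v1beta1/apps/com.example.app/crashRateMetricSet:query"
```

Checking freshness first avoids burning request quota on queries that would return data you already have.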

Thank you to all the developers who requested this feature. We hope it helps you continue to improve your apps and games. To learn more about Android vitals and the Play Developer Reporting API, view our session from the Google for Games Developer Summit.


Things to know from the 2022 Google for Games Developer Summit

Posted by Greg Hartrell, Product Director, Games on Play/Android


Over the years, we’ve seen that apps and games are not just experiences: they’re businesses, led by talented people like yourselves. So it's our goal to continue supporting your businesses to reach even greater potential. At our recent Google for Games Developer Summit, we shared how teams across Google have been continuing to build the next generation of services, tools and features to help you create and monetize high quality experiences, more programs tailored to your needs, and more educational resources with best practices.

We want to help you throughout the game development lifecycle, by making it easier to develop high quality games and deliver these great experiences to growing audiences and devices.


Easier to bring your game to more screens
To enable games on new screens and devices, we want to help you meet players where they are, giving them the convenience of playing games wherever they choose.

  • Gameplay across tablets, foldables, and Chromebooks is on the rise and offers the opportunity to be more engaging and immersive than ever before. In 2021, Android usage on ChromeOS grew 50% versus the previous year, led by games.
  • Google Play Games for PC Beta rolled out in January to South Korea, Taiwan, and Hong Kong. This standalone Windows PC application, built by Google, allows users to play a high quality catalog of Google Play games seamlessly across their mobile phone, tablet, Chromebook, or (now) their Windows PC. Learn more and start to optimize your game for more screens today.
  • Play as you download beta program was announced last year, and we will soon open it up to all Android 12 users. PAYD allows users to get into gameplay in seconds while game assets are downloaded in the background, and it requires minimal changes to your underlying implementation. Sign up for the beta.

Easier to develop high quality games

We’re committed to supporting you in building high quality Android games by continuing to focus on tools and SDKs that simplify development and provide insights about your game, while also partnering with game engines, including homegrown native C/C++ engines. Last year, we released the Android Game Development Kit (AGDK), a set of tools and libraries to help make Android Game Development more efficient, and have made several updates based on developer feedback.

  • Android Game Development Extension allows game developers to build directly for Android from within Visual Studio. To make debugging easier across Java and C, AGDE will now include cross compatibility between Android Studio and Visual Studio so you can open and edit your AGDE projects in Android Studio’s debugger.
  • The new Memory Advice API (Beta) library added to AGDK helps developers understand their memory consumption by combining the best signals for a given device to determine how close the device is to a low memory kill.
  • We’ve fully launched the Android GPU Inspector Frame Profiler to help you understand when your game is bottlenecked on the GPU vs. CPU, and achieve better frame rates and battery life.

More tools to help you succeed on Google Play

The Play Console is an invaluable resource to help in your game lifecycle, with tools and insights to assist before and after launch.

  • We continue to invest in programs to help developers of all sizes grow their businesses with Google Play. For our largest developers, we launched the Google Play Partner Program for Games, offering additional growth tools and premier services, tailored for the unique needs of developers at this scale.
  • Reach and devices helps you make foundational decisions about what devices to build for, where to launch and what to test, both pre-launch and post-launch. It already shows your install and issue distributions across a range of device attributes. Today, we’re launching Google Play revenue and revenue growth distributions for your game and its peers, so you can build revenue-based business cases for troubleshooting or device targeting, if that suits your business model better than using installs.
  • We recently launched Strategic guidance in Console, which provides an intuitive way to help you evaluate how well your game is monetizing, and see opportunities to grow revenue. You can think of Reach & devices as helping you to understand revenue opportunities from a technical perspective; strategic guidance does the same from a business perspective, so you can use them together to provide a holistic picture of your IAP revenue drivers.
  • Android vitals is your destination to monitor and improve your game’s stability on Google Play. For those of you who have games with global presence, we’ve just launched country breakdowns and filters for Vitals metrics, so it’s easier for you to prioritize and troubleshoot stability issues. In addition, today we’re launching the Developer Reporting API which gives you programmatic access to your core Android vitals metrics and issue data, including crash and ANR rates, clusters, and stack traces.

Learn more about everything we shared at the Google for Games Developer Summit, and visit g.co/android/games for additional resources and documentation. We remain committed to supporting the developer ecosystem and greatly appreciate your continued feedback and investment in creating high quality game experiences for players around the world.