Tag Archives: Featured

Prepare your apps for Google Play’s 16 KB page size compatibility requirement

Posted by Dan Brown – Product Manager, Google Play

Google Play empowers you to manage and distribute your innovative and trusted apps and games to billions of users around the world across the entire breadth of Android devices. Historically, all Android devices have managed memory in 4 KB pages.

As device manufacturers equip devices with more RAM to optimize performance, many will adopt larger page sizes like 16 KB. Android 15 introduces support for the increased page size, ensuring your app can run on these evolving devices and benefit from the associated performance gains.

Starting November 1st, 2025, all new apps and updates to existing apps submitted to Google Play and targeting Android 15+ devices must support 16 KB page sizes.

This is a key technical requirement to ensure your users can benefit from the performance enhancements on newer devices and prepares your apps for the platform's future direction of improved performance on newer hardware. Without recompiling to support 16 KB pages, your app might not function correctly on these devices when they become more widely available in future Android releases.

We’ve seen that 16 KB can help with:

    • Faster app launches: See improvements ranging from 3% to 30% for various apps.
    • Improved battery usage: Experience an average gain of 4.5%.
    • Quicker camera starts: Launch the camera 4.5% to 6.6% faster.
    • Speedier system boot-ups: Boot Android devices approximately 8% faster.

We recommend checking your apps early, especially for dependencies that might not yet be 16 KB compatible. Many popular frameworks and SDKs, like React Native and Flutter, already offer compatible versions. For game developers, several leading game engines, such as Unity, support 16 KB, with support for Unreal Engine coming soon.

Reaching 16 KB compatibility

A substantial number of apps are already compatible, so your app may already work seamlessly with this requirement. For most of those that need to make adjustments, we expect the changes to be minimal.

    • Apps with no native code should be compatible without any changes at all.
    • Apps using libraries or SDKs that contain native code may need to update these to a compatible version.
    • Apps with native code may need to recompile with a more recent toolchain and check for any code with incompatible low-level memory management (a quick runtime check is sketched below).
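
A minimal sketch of that runtime check, using the android.system.Os API (available since API level 21): it logs the page size the device actually uses, which helps you spot native code that assumes 4 KB.

import android.system.Os
import android.system.OsConstants
import android.util.Log

// Query the memory page size the device actually uses. On a 16 KB device
// this returns 16384; native code that hardcodes 4096 will misbehave there.
fun logPageSize() {
  val pageSize = Os.sysconf(OsConstants._SC_PAGESIZE)
  Log.d("PageSize", "Device memory page size: $pageSize bytes")
}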

Our December blog post, Get your apps ready for 16 KB page size devices, provides a more detailed technical explanation and guidance on how to prepare your apps.

Check your app's compatibility now

It's easy to see if your app bundle already supports 16 KB memory page sizes. Visit the app bundle explorer page in Play Console to check your app's build compliance and get guidance on where your app may need updating.

App bundle explorer in Play Console

Beyond the app bundle explorer, make sure to also test your app in a 16 KB environment. This will help you ensure users don’t experience any issues and that your app delivers its best performance.

For more information, check out the full documentation.

Thank you for your continued support in bringing delightful, fast, and high-performance experiences to users across the breadth of devices Play supports. We look forward to seeing the enhanced experiences you'll deliver with 16 KB support.

Building delightful Android camera and media experiences

Posted by Donovan McMurray, Mayuri Khinvasara Khabya, Mozart Louis, and Nevin Mital – Developer Relations Engineers

Hello Android Developers!

We are the Android Developer Relations Camera & Media team, and we’re excited to bring you something a little different today. Over the past several months, we’ve been hard at work writing sample code and building demos that showcase how to take advantage of all the great potential Android offers for building delightful user experiences.

Some of these efforts are available for you to explore now, and some you’ll see later throughout the year, but for this blog post we thought we’d share some of the learnings we gathered while going through this exercise.

Grab your favorite Android plush or rubber duck, and read on to see what we’ve been up to!

Future-proof your app with Jetpack

Nevin Mital

One of our focuses for the past several years has been improving the developer tools available for video editing on Android. This led to the creation of the Jetpack Media3 Transformer APIs, which offer solutions for both single-asset and multi-asset video editing preview and export. Today, I’d like to focus on the Composition demo app, a sample app that showcases some of the multi-asset editing experiences that Transformer enables.

I started by adding a custom video compositor to demonstrate how you can arrange input video sequences into different layouts for your final composition, such as a 2x2 grid or a picture-in-picture overlay. You can customize this by implementing a VideoCompositorSettings and overriding the getOverlaySettings method. This object can then be set when building your Composition with setVideoCompositorSettings.

Here is an example for the 2x2 grid layout:

object : VideoCompositorSettings {
  ...

  override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
    return when (inputId) {
      0 -> { // First sequence is placed in the top left
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(-0.5f, 0.5f) // Top-left section of background
          .build()
      }

      1 -> { // Second sequence is placed in the top right
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(0.5f, 0.5f) // Top-right section of background
          .build()
      }

      2 -> { // Third sequence is placed in the bottom left
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(-0.5f, -0.5f) // Bottom-left section of background
          .build()
      }

      3 -> { // Fourth sequence is placed in the bottom right
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(0.5f, -0.5f) // Bottom-right section of background
          .build()
      }

      else -> {
        StaticOverlaySettings.Builder().build()
      }
    }
  }
}

Since getOverlaySettings also provides a presentation time, we can even animate the layout, such as in this picture-in-picture example:

moving image of picture in picture on a mobile device
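
To sketch how that might look (illustrative only, not the demo app's actual code; the one-second duration and interpolation values are invented), you can derive the scale and position from presentationTimeUs:

override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
  return if (inputId == 1) { // Second sequence becomes the picture-in-picture
    // Progress ramps from 0 to 1 over the first second of playback
    val progress = (presentationTimeUs / 1_000_000f).coerceIn(0f, 1f)
    val scale = 1f - 0.65f * progress // Shrink from full frame to 35%
    StaticOverlaySettings.Builder()
      .setScale(scale, scale)
      // Drift from the center toward the bottom-right corner
      .setBackgroundFrameAnchor(0.5f * progress, -0.5f * progress)
      .build()
  } else {
    StaticOverlaySettings.Builder().build()
  }
}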

Next, I spent some time migrating the Composition demo app to use Jetpack Compose. With complicated editing flows, it can help to take advantage of as much screen space as is available, so I decided to use the supporting pane adaptive layout. This way, the user can fine-tune their video creation on the preview screen, and export options appear alongside it only when the display is large enough. Below, you can see how the UI dynamically adapts to the screen size on a foldable device, when switching from the outer screen to the inner screen and vice versa.

moving image of supporting pane adaptive layout
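
The demo uses the supporting pane adaptive layout; as a simplified sketch of the same idea (PreviewPane and ExportOptionsPane are hypothetical composables standing in for the app's real panes), you can branch on the window size class:

import androidx.compose.foundation.layout.Row
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.material3.windowsizeclass.WindowSizeClass
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier

@Composable
fun EditorScreen(windowSizeClass: WindowSizeClass) {
  if (windowSizeClass.widthSizeClass == WindowWidthSizeClass.Expanded) {
    // Larger display: show export options alongside the preview
    Row(Modifier.fillMaxSize()) {
      PreviewPane(Modifier.weight(2f))
      ExportOptionsPane(Modifier.weight(1f)) // Supporting pane
    }
  } else {
    // Compact display: preview only; export options live on a separate screen
    PreviewPane(Modifier.fillMaxSize())
  }
}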

What’s great is that by using Jetpack Media3 and Jetpack Compose, these features also carry over seamlessly to other devices and form factors, such as the new Android XR platform. Right out-of-the-box, I was able to run the demo app in Home Space with the 2D UI I already had. And with some small updates, I was even able to adapt the UI specifically for XR with features such as multiple panels, and to take further advantage of the extra space, an Orbiter with playback controls for the editing preview.

moving image of sequential composition preview in Android XR

Orbiter(
  position = OrbiterEdge.Bottom,
  offset = EdgeOffset.inner(offset = MaterialTheme.spacing.standard),
  alignment = Alignment.CenterHorizontally,
  shape = SpatialRoundedCornerShape(CornerSize(28.dp))
) {
  Row(horizontalArrangement = Arrangement.spacedBy(MaterialTheme.spacing.mini)) {
    // Playback control for rewinding by 10 seconds
    FilledTonalIconButton({ viewModel.seekBack(10_000L) }) {
      Icon(
        painter = painterResource(id = R.drawable.rewind_10),
        contentDescription = "Rewind by 10 seconds"
      )
    }
    // Playback control for play/pause
    FilledTonalIconButton({ viewModel.togglePlay() }) {
      Icon(
        painter = painterResource(id = R.drawable.rounded_play_pause_24),
        contentDescription =
            if (viewModel.compositionPlayer.isPlaying) {
                "Pause preview playback"
            } else {
                "Resume preview playback"
            }
      )
    }
    // Playback control for forwarding by 10 seconds
    FilledTonalIconButton({ viewModel.seekForward(10_000L) }) {
      Icon(
        painter = painterResource(id = R.drawable.forward_10),
        contentDescription = "Forward by 10 seconds"
      )
    }
  }
}

Jetpack libraries unlock premium functionality incrementally

Donovan McMurray

Not only do our Jetpack libraries have you covered by working consistently across existing and future devices, but they also open the doors to advanced functionality and custom behaviors to support all types of app experiences. In a nutshell, our Jetpack libraries aim to make the common case very accessible and easy, with hooks for adding more custom features later.

We’ve worked with many developers who have switched to a Jetpack library, built the basics, added their critical custom features, and actually saved development time compared to their estimates. Let’s take a look at CameraX and how this incremental development can supercharge your process.

// Set up CameraX app with preview and image capture.
// Note: setting the resolution selector is optional, and if not set,
// then a default 4:3 ratio will be used.
val aspectRatioStrategy = AspectRatioStrategy(
  AspectRatio.RATIO_16_9, AspectRatioStrategy.FALLBACK_RULE_NONE)
val resolutionSelector = ResolutionSelector.Builder()
  .setAspectRatioStrategy(aspectRatioStrategy)
  .build()

private val previewUseCase = Preview.Builder()
  .setResolutionSelector(resolutionSelector)
  .build()
private val imageCaptureUseCase = ImageCapture.Builder()
  .setResolutionSelector(resolutionSelector)
  .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
  .build()

val useCaseGroupBuilder = UseCaseGroup.Builder()
  .addUseCase(previewUseCase)
  .addUseCase(imageCaptureUseCase)

cameraProvider.unbindAll()

camera = cameraProvider.bindToLifecycle(
  this,  // lifecycleOwner
  CameraSelector.DEFAULT_BACK_CAMERA,
  useCaseGroupBuilder.build(),
)

After setting up the basic structure for CameraX, you can set up a simple UI with a camera preview and a shutter button. You can use the CameraX Viewfinder composable which displays a Preview stream from a CameraX SurfaceRequest.

// Create preview
Box(
  Modifier
    .background(Color.Black)
    .fillMaxSize(),
  contentAlignment = Alignment.Center,
) {
  surfaceRequest?.let { request ->
    CameraXViewfinder(
      modifier = Modifier.fillMaxSize(),
      implementationMode = ImplementationMode.EXTERNAL,
      surfaceRequest = request,
    )
  }
  Button(
    onClick = onPhotoCapture,
    shape = CircleShape,
    colors = ButtonDefaults.buttonColors(containerColor = Color.White),
    modifier = Modifier
      .height(75.dp)
      .width(75.dp),
  )
}

fun onPhotoCapture() {
  // Not shown: defining the ImageCapture.OutputFileOptions for
  // your saved images
  imageCaptureUseCase.takePicture(
    outputOptions,
    ContextCompat.getMainExecutor(context),
    object : ImageCapture.OnImageSavedCallback {
      override fun onError(exc: ImageCaptureException) {
        val msg = "Photo capture failed."
        Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
      }

      override fun onImageSaved(output: ImageCapture.OutputFileResults) {
        val savedUri = output.savedUri
        if (savedUri != null) {
          // Do something with the savedUri if needed
        } else {
          val msg = "Photo capture failed."
          Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
        }
      }
    },
  )
}

You’re already on track for a solid camera experience, but what if you wanted to add some extra features for your users? Adding filters and effects is easy with CameraX’s Media3 effect integration, which is one of the new features introduced in CameraX 1.4.0.

Here’s how simple it is to add a black and white filter from Media3’s built-in effects.

val media3Effect = Media3Effect(
  application,
  PREVIEW or IMAGE_CAPTURE,
  ContextCompat.getMainExecutor(application),
  {},
)
media3Effect.setEffects(listOf(RgbFilter.createGrayscaleFilter()))
useCaseGroupBuilder.addEffect(media3Effect)

The Media3Effect object takes a Context, a bitwise representation of the use case constants for targeted UseCases, an Executor, and an error listener. Then you set the list of effects you want to apply. Finally, you add the effect to the useCaseGroupBuilder we defined earlier.

Side-by-side comparison of our camera app preview
(Left) Our camera app with no filter applied.
(Right) Our camera app after the createGrayscaleFilter was added.

There are many other built-in effects you can add, too! See the Media3 Effect documentation for more options, like brightness, color lookup tables (LUTs), contrast, blur, and many other effects.

To take your effects to yet another level, it’s also possible to define your own effects by implementing the GlEffect interface, which acts as a factory of GlShaderPrograms. You can implement a BaseGlShaderProgram’s drawFrame() method to implement a custom effect of your own. A minimal implementation should tell your graphics library to use its shader program, bind the shader program's vertex attributes and uniforms, and issue a drawing command.
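
Here is a minimal skeleton of that shape, assuming Media3's GlEffect and BaseGlShaderProgram contracts; the actual GL work is left as comments because it depends on your shader program, and CustomTintEffect is a hypothetical name.

import android.content.Context
import androidx.media3.common.util.Size
import androidx.media3.effect.BaseGlShaderProgram
import androidx.media3.effect.GlEffect
import androidx.media3.effect.GlShaderProgram

class CustomTintEffect : GlEffect {
  override fun toGlShaderProgram(context: Context, useHdr: Boolean): GlShaderProgram =
    CustomTintShaderProgram(useHdr)
}

private class CustomTintShaderProgram(useHdr: Boolean) : BaseGlShaderProgram(
  /* useHighPrecisionColorProcessing= */ useHdr,
  /* texturePoolCapacity= */ 1,
) {
  override fun configure(inputWidth: Int, inputHeight: Int): Size =
    Size(inputWidth, inputHeight) // Output frame size matches the input

  override fun drawFrame(inputTexId: Int, presentationTimeUs: Long) {
    // 1. Tell your graphics library to use its shader program
    // 2. Bind the program's vertex attributes and uniforms, including the
    //    input texture (inputTexId)
    // 3. Issue the drawing command, e.g. GLES20.glDrawArrays(...)
  }
}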

Jetpack libraries meet you where you are and your app’s needs. Whether that be a simple, fast-to-implement, and reliable implementation, or custom functionality that helps the critical user journeys in your app stand out from the rest, Jetpack has you covered!

Jetpack offers a foundation for innovative AI Features

Mayuri Khinvasara Khabya

Just as Donovan demonstrated with CameraX for capture, Jetpack Media3 provides a reliable, customizable, and feature-rich solution for playback with ExoPlayer. The AI Samples app builds on this foundation to delight users with helpful and enriching AI-driven additions.

In today's rapidly evolving digital landscape, users expect more from their media applications. Simply playing videos is no longer enough. Developers are constantly seeking ways to enhance user experiences and provide deeper engagement. Leveraging the power of Artificial Intelligence (AI), particularly when built upon robust media frameworks like Media3, offers exciting opportunities. Let’s take a look at some of the ways we can transform the way users interact with video content:

    • Empowering Video Understanding: The core idea is to use AI, specifically multimodal models like the Gemini Flash and Pro models, to analyze video content and extract meaningful information. This goes beyond simply playing a video; it's about understanding what's in the video and making that information readily accessible to the user.
    • Actionable Insights: The goal is to transform raw video into summaries, insights, and interactive experiences. This allows users to quickly grasp the content of a video and find specific information they need or learn something new!
    • Accessibility and Engagement: AI helps make videos more accessible by providing features like summaries, translations, and descriptions. It also aims to increase user engagement through interactive features.

A Glimpse into AI-Powered Video Journeys

The following example demonstrates potential video journeys enhanced by artificial intelligence. This sample integrates several components, such as ExoPlayer and Transformer from Media3; the Firebase SDK (leveraging Vertex AI on Android); and Jetpack Compose, ViewModel, and StateFlow. The code will be available soon on GitHub.

moving images of examples of AI-powered video journeys
(Left) Video summarization  
 (Right) Thumbnail timestamps and HDR frame extraction

There are two experiences in particular that I’d like to highlight:

    • HDR Thumbnails: AI can help identify key moments in the video that could make for good thumbnails. With those timestamps, you can use the new ExperimentalFrameExtractor API from Media3 to extract HDR thumbnails from videos, providing richer visual previews.
    • Text-to-Speech: AI can be used to convert textual information derived from the video into spoken audio, enhancing accessibility. On Android, you can also choose to play audio in different languages and dialects, enhancing personalization for a wider audience. (A minimal sketch follows this list.)
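
As a small illustration of the speech half, here is a sketch using the platform TextToSpeech API; summaryText stands in for text your model produced, and the locale choice is just an example.

import android.content.Context
import android.speech.tts.TextToSpeech
import java.util.Locale

lateinit var tts: TextToSpeech

fun speakSummary(context: Context, summaryText: String) {
  tts = TextToSpeech(context) { status ->
    if (status == TextToSpeech.SUCCESS) {
      // Match the user's preferred locale; many languages and dialects work
      tts.setLanguage(Locale.forLanguageTag("en-GB"))
      tts.speak(summaryText, TextToSpeech.QUEUE_FLUSH, null, "summary")
    }
  }
}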

Using the right AI solution

Currently, only cloud models support video inputs, so we went ahead with a cloud-based solution. Integrating Firebase in our sample empowers the app to:

    • Generate real-time, concise video summaries automatically.
    • Produce comprehensive content metadata, including chapter markers and relevant hashtags.
    • Facilitate seamless multilingual content translation.

So how do you actually interact with a video and work with Gemini to process it? First, send your video as an input parameter to your prompt:

val promptData =
  "Summarize this video in the form of top 3-4 takeaways only. Write in the form of bullet points. Don't assume if you don't know"

val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
_outputText.value = OutputTextState.Loading

viewModelScope.launch(Dispatchers.IO) {
    try {
        val requestContent = content {
            fileData(videoSource.toString(), "video/mp4")
            text(promptData)
        }
        val outputStringBuilder = StringBuilder()

        generativeModel.generateContentStream(requestContent).collect { response ->
            outputStringBuilder.append(response.text)
            _outputText.value = OutputTextState.Success(outputStringBuilder.toString())
        }

    } catch (error: Exception) {
        _outputText.value = error.localizedMessage?.let { OutputTextState.Error(it) }
    }
}

Notice there are two key components here:

    • FileData: This component integrates a video into the query.
    • Prompt: This tells the model what specific assistance you need in relation to the provided video.

Of course, you can fine-tune your prompt to your requirements and get responses accordingly.

In conclusion, by harnessing the capabilities of Jetpack Media3 and integrating AI solutions like Gemini through Firebase, you can significantly elevate video experiences on Android. This combination enables advanced features like video summaries, enriched metadata, and seamless multilingual translations, ultimately enhancing accessibility and engagement for users. As these technologies continue to evolve, the potential for creating even more dynamic and intelligent video applications is vast.

Go above-and-beyond with specialized APIs

Mozart Louis

Android 16 introduces the new audio PCM Offload mode, which can reduce the power consumption of audio playback in your app, leading to longer playback time and increased user engagement. Eliminating power anxiety greatly enhances the user experience.

Oboe is Android’s premier audio API for creating high-performance, low-latency audio apps. Android 16 and the Android NDK add a new feature called Native PCM Offload playback.

Offload playback helps save battery life when playing audio. It works by sending a large chunk of audio to a special part of the device's hardware (a DSP). This allows the CPU of the device to go into a low-power state while the DSP handles playing the sound. This works with uncompressed audio (like PCM) and compressed audio (like MP3 or AAC), where the DSP also takes care of decoding.

This can result in significant power saving while playing back audio and is perfect for applications that play audio in the background or while the screen is off (think audiobooks, podcasts, music etc).

We created the sample app PowerPlay to demonstrate how to implement these features using the latest NDK version, C++ and Jetpack Compose.

Here are the most important parts!

The first order of business is to ensure the device supports audio offload for the file attributes you need. In the example below, we check whether the device supports audio offload of a stereo, float PCM stream with a sample rate of 48,000 Hz.

val format = AudioFormat.Builder()
  .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)
  .setSampleRate(48000)
  .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
  .build()

val attributes = AudioAttributes.Builder()
  .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
  .setUsage(AudioAttributes.USAGE_MEDIA)
  .build()

// isOffloadedPlaybackSupported is available from API 29 (Android 10)
val isOffloadSupported =
  if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
    AudioManager.isOffloadedPlaybackSupported(format, attributes)
  } else {
    false
  }

if (isOffloadSupported) {
  // Forwards to the app's native Oboe player, which sets the
  // POWER_SAVING_OFFLOADED performance mode on its stream
  player.initializeAudio(PerformanceMode::POWER_SAVING_OFFLOADED)
}

Once we know the device supports audio offload, we can confidently set the Oboe audio streams’ performance mode to the new performance mode option, PerformanceMode::POWER_SAVING_OFFLOADED.

// Create an audio stream
AudioStreamBuilder builder;
builder.setChannelCount(mChannelCount);
builder.setDataCallback(mDataCallback);
builder.setFormat(AudioFormat::Float);
builder.setSampleRate(48000);

builder.setErrorCallback(mErrorCallback);
builder.setPresentationCallback(mPresentationCallback);
builder.setPerformanceMode(PerformanceMode::POWER_SAVING_OFFLOADED);
builder.setFramesPerDataCallback(128);
builder.setSharingMode(SharingMode::Exclusive);
builder.setSampleRateConversionQuality(SampleRateConversionQuality::Medium);
Result result = builder.openStream(mAudioStream);

Now when audio is played back, it will be offloaded to the DSP, saving power during playback.

There is more to this feature that will be covered in a future blog post, fully detailing all of the new available APIs that will help you optimize your audio playback experience!

What’s next

Of course, we were only able to share the tip of the iceberg with you here, so to dive deeper into the samples, check out the following links:

Hopefully these examples have inspired you to explore what new and fascinating experiences you can build on Android. Tune in to our session at Google I/O in a couple weeks to learn even more about use-cases supported by solutions like Jetpack CameraX and Jetpack Media3!

The Fourth Beta of Android 16

Posted by Matthew McCullough – VP of Product Management, Android Developer

Today we're bringing you Android 16 beta 4, the last scheduled update in our Android 16 beta program. Make sure your app or game is ready. It's also the last chance to give us feedback before Android 16 is released.

Android 16 Beta 4

This is our second platform stability release; the developer APIs and all app-facing behaviors are final. Apps targeting Android 16 can be made available in Google Play. Beta 4 includes our latest fixes and optimizations, giving you everything you need to complete your testing. Head over to our Android 16 summary page for a list of the features and behavior changes we've been covering in this series of blog posts, or read on for some of the top changes of which you should be aware.

Android 16 Release timeline showing Platform Stability milestone in April

Now available on more devices

The Android 16 Beta is now available on handset, tablet, and foldable form factors from partners including Honor, iQOO, Lenovo, OnePlus, OPPO, Realme, vivo, and Xiaomi. With more Android 16 partners and device types, many more users can run your app on the Android 16 Beta.

Android 16 Beta Release Partners: Google Pixel, iQOO, Lenovo, OnePlus, Sharp, Oppo, RealMe, vivo, Xiaomi, and Honor

Get your apps, libraries, tools, and game engines ready!

If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your developers know if updates to your SDK are needed to fully support Android 16.

Testing involves installing your production app or a test app making use of your library or engine using Google Play or other means onto a device or emulator running Android 16 Beta 4. Work through all your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are several changes to focus on that apply, even if you aren't yet targeting Android 16:

Other changes that will be impactful once your app targets Android 16:

Get your app ready for the future:

    • Local network protection: Consider testing your app with the upcoming Local Network Protection feature. It will give users more control over which apps can access devices on their local network in a future Android major release.

Remember to thoroughly exercise libraries and SDKs that your app is using during your compatibility testing. You may need to update to current SDK versions or reach out to the developer for help if you encounter any issues.

Once you’ve published the Android 16-compatible version of your app, you can start the process to update your app's targetSdkVersion. Review the behavior changes that apply when your app targets Android 16 and use the compatibility framework to help quickly detect issues.

Two Android API releases in 2025

This Beta is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. This Q2 major release will be the only release in 2025 that includes behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; like our non-SDK quarterly releases, it will not include any intentional app-breaking behavior changes.

Android 16 2025 SDK release timeline

We'll continue to have quarterly Android releases. The Q1 and Q3 updates provide incremental updates to ensure continuous quality. We’re putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.

There’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.

Get started with Android 16

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 3 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 4.

While the API and behaviors are final and we are very close to release, we'd still like you to report issues on the feedback page. The earlier we get your feedback, the better chance we'll be able to address it in this or a future release.

For the best development experience with Android 16, we recommend that you use the latest Canary build of Android Studio Narwhal. Once you’re set up, here are some of the things you should do:

    • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.

We’ll update the beta system images and SDK regularly throughout the Android 16 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas.

For complete information on Android 16 please visit the Android 16 developer site.

From dashboards to deeper data: Improve app quality and performance with new Play Console insights

Posted by Dan Brown, Dina Gandal and Hadar Yanos – Product Managers, Google Play

At Google Play, we partner with developers like you to help your app or game business reach its full potential, providing powerful tools and insights every step of the way. In Google Play Console, you’ll find the features needed to test, publish, improve, and grow your apps — and today, we're excited to share several enhancements to give you even more actionable insights, starting with a redesigned app dashboard tailored to your key workflows, and new metrics designed to help you improve your app quality.

Focus on the metrics that matter with the redesigned app dashboard

The first thing you’ll notice is the redesigned app dashboard, which puts the most essential insights front and center. We know that when you visit Play Console, you usually have a goal in mind — whether that’s checking on your release status or tracking installs. That’s why you’ll now see your most important metrics grouped into four core developer objectives:

    • Test and release
    • Monitor and improve
    • Grow users, and
    • Monetize with Play

Each objective highlights the three metrics most important to that goal, giving you a quick grasp of how your app is doing at a glance, as well as how those metrics have changed over time. For example, you can now easily compare your latest production release against your app’s overall performance, helping you to quickly identify any issues. In the screenshot below, the latest production release has a crash rate of 0.24%, a large improvement over the 28-day average crash rate shown under “Monitor and Improve.”

screen recording of the redesigned app dashboard in Google Play Console
The redesigned app dashboard in Play Console helps you see your most important metrics at a glance.

At the top of the page, you’ll see the status of your latest release changes prominently displayed so you know when it’s been reviewed and approved. If you’re using managed publishing, you can also see when things are ready to publish. And based on your feedback, engagement and monetization metrics now show a comparison to your previous year’s data so you can make quick comparisons.

The new app dashboard also keeps you updated on the latest news from Play, including recent blog posts, new features relevant to your app, and even special invitations to early access programs.

In addition to what’s automatically displayed on the dashboard, we know many of you track other vital metrics for your role or business. That's why we've added the “Monitor KPI trends” section at the bottom of your app dashboard. Simply scroll down and personalize your view by selecting the trends you need to monitor. This customized experience allows each user in your developer account to focus on their most important insights.

Later this year, we’ll introduce new overview pages for each of the four core developer objectives. These pages will help you quickly understand your performance, showcase tools and features within each domain, and list recommended actions to optimize performance, engagement, and revenue across all your apps.

Get actionable notifications when and where you need them

If you spend a lot of time in Play Console, you may have already noticed the new notification center. Accessible from every page, the notification center helps you to stay up to date with your account and apps, and helps you to identify any issues that may need urgent attention.

To help you quickly understand and act on important information, we now group notifications about the same issue across multiple apps. Additionally, notifications that are no longer relevant will automatically expire, ensuring you only see what needs your attention. Plus, notifications will be displayed on the new app dashboard within the relevant objectives.

Improve app quality and performance with new Play Console metrics

One of Play’s top goals is to provide the insights you need to build high-quality apps that deliver exceptional user experiences. We’re continuing to expand these insights, helping you prevent issues like crashes or ANRs, optimize your app’s performance, and reduce resource consumption on users’ devices.

Users expect a polished experience across their devices, and we’ve learned from you it can be difficult to make your app layouts work seamlessly across phones and large screens. To help with this, we’ve introduced pre-review checks for incorrect edge-to-edge rendering, while another new check helps you detect and prevent large screen layout issues caused by letterboxing and restricted layouts, along with resources on how to fix them.

We’re also making it easier to find and triage the most important quality issues in your app. The release dashboard in Play Console now displays prioritized quality issues from your latest release, alongside the existing dashboard features for monitoring post-launch, like crashes and ANRs. This addition provides a centralized view of user-impacting issues, along with clear instructions to help you resolve critical issues and improve your users’ experiences.

The quality panel in the redesigned app dashboard in Google Play Console
The quality panel at the top of the release dashboard gives you a prioritized view of issues that affect users on your latest release and provides instructions on how to fix them.

A new "low memory kill" (LMK) metric is available in Android vitals and the Reporting API. Low memory issues cause your app to terminate without any logging, and can be notoriously difficult to detect. We are making these issues visible with device-specific insights into memory constraints to help you identify and fix these problems. This will improve app stability and user engagement, which is especially crucial for games where LMKs can disrupt real-time gameplay.

The low memory kill metric in Android vitals in Google Play Console
The low memory kill metric in Android vitals gives you device-specific insights into low memory terminations, helping you improve app stability and user engagement.

We're also collaborating closely with leading OEMs like Samsung, leveraging their real-world insights to define consistent benchmarks for optimal technical quality across Android devices. Excessive wakelocks are a leading cause of battery drain, a top frustration for users. Today, we're launching the first of these new metrics in beta: excessive wake locks in Android vitals. Take a look at our wakelock documentation and provide feedback on the metric definition. Your input is essential as we refine this metric towards general availability, and will inform our strategy for making this information available to users on the Play Store so they can make informed decisions when choosing apps.

Together, these updates provide you with even more visibility into your app's performance and quality, enabling you to build more stable, efficient, and user-friendly apps across the Android ecosystem. We'll continue to add more metrics and insights over time. To stay informed about all the latest Play Console enhancements and easily find updates relevant to your workflow, explore our new What’s new in Play Console page, where you can filter features by the four developer objectives.

Prioritize media privacy with Android Photo Picker and build user trust

Posted by Tatiana van Maaren – Global T&S Partnerships Lead, Privacy & Security, and Roxanna Aliabadi Walker – Product Manager

At Google Play, we're dedicated to building user trust, especially when it comes to sensitive permissions and your data. We understand that managing files and media permissions can be confusing, and users often worry about which files apps can access. Since these files often contain sensitive information like family photos or financial documents, it's crucial that users feel in control. That’s why we're working to provide clearer choices, so users can confidently grant permissions without sacrificing app functionality or their privacy.

Below is a set of best practices to consider for improving user trust around broad file access, ultimately leading to a more successful and sustainable app ecosystem.

Prioritize user privacy with data minimization

Building user trust starts with requesting only the permissions essential for your app's core functions. We understand that photos and videos are sensitive data, and broad access increases security risks. That's why Google Play now restricts READ_MEDIA_IMAGES and READ_MEDIA_VIDEO permissions, allowing developers to request them only when absolutely necessary, typically for apps like photo/video managers and galleries.

Leverage privacy-friendly solutions

Instead of requesting broad storage access, we encourage developers to use the Android Photo Picker, introduced in Android 13. This tool offers a privacy-centric way for users to select specific media files without granting access to their entire library. Android photo picker provides an intuitive interface, including access to cloud-backed photos and videos, and allows for customization to fit your app's needs. In addition, this system picker is backported to Android 4.4, ensuring a consistent experience for all users. By eliminating runtime permissions, Android photo picker simplifies the user experience and builds trust through transparency.
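
For example, here is a minimal sketch of launching the photo picker via the Activity Result APIs (ActivityResultContracts.PickVisualMedia, available in recent androidx.activity releases):

import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts

// In an Activity or Fragment: register the picker contract
val pickMedia = registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri ->
  if (uri != null) {
    // Use the returned content URI; no storage permission is required
  }
}

// Launch for images only (VideoOnly and ImageAndVideo are also available)
pickMedia.launch(PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly))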

Build trust through transparent data practices

We understand that some developers have historically used custom photo pickers for tailored user experiences. However, regardless of whether you use a custom or system picker, transparency with users is crucial. Users want to know why your app needs access to their photos and videos.

Developers should strive to provide clear and concise explanations within their apps, ideally at the point where the permission is requested. Consider the following best practices when designing your permission request flows:

    • When requesting media access, provide clear explanations within your app. Specifically, tell users which media your app needs (e.g., all photos, profile pictures, sharing videos) and explain the functionality that relies on it (e.g., 'To choose a profile picture,' 'To share videos with friends').
    • Clearly outline how user data will be used and protected in your privacy policies. Explain whether data is stored locally, transmitted to a server, or shared with third parties. Reassure users that their data will be handled responsibly and securely.

Learn how Snap has embraced the Android System Picker to prioritize user privacy and streamline their media selection experience. Here's what they have to say about their implementation:

A grid of photos in the photo library is shown on a smartphone screen, including a waterfall and two people smiling and posing for the camera. The Google Photos interface is at the top, with the Photos tab selected, and one photo from the grid is selected for use

“One of our goals is to provide a seamless and intuitive communication experience while ensuring Snapchatters have control over their content. The new flow of the Android Photo Picker is the perfect balance of providing user control of the content they want to share while ensuring fast communication with friends on Snapchat.”
Marc Brown, Product Manager

Get started

Start building a more trustworthy app experience. Explore the Android Photo Picker and implement privacy-first data practices today.


Acknowledgement

Special thanks to: May Smith – Product Manager, and Anita Issagholyan – Senior Policy Specialist

Strengthening Our App Ecosystem: Enhanced Tools for Secure & Efficient Development

Posted by Suzanne Frey – VP, Product, Trust & Growth for Android & Play

Knowing that you’re building on a safe, secure ecosystem is essential for any app developer. We continuously invest in protecting Android and Google Play, so millions of users around the world can trust the apps they download and you can build thriving businesses. And we’re dedicated to continually improving our developer tools to make world-class security even easier to implement.

Together, we’ve made Google Play one of the safest and most secure platforms for developers and users. Our partnership over the past few years includes helping you:

Today, we’re excited to share more about how we're making it easier than ever for developers to build safe apps, while also continuing to strengthen our ecosystem's protection in 2025 and beyond.

Making it easier for you to build safer apps from the start

Google Play’s policies are a critical component of ensuring a safe experience for our shared users. Play Console pre-review checks are a great way to resolve certain policy and compatibility issues before you submit your app for review. We recently added the ability to check privacy policy links and login credential requirements, and we’re launching even more pre-review checks this year to help you avoid common policy pitfalls.

To help you avoid policy complications before you submit apps for review, we’ve been notifying you earlier about certain policies relevant to your apps – starting right as you code in Android Studio. We currently notify developers through Android Studio about a few key policy areas, but this year we’ll expand to a much wider range of policies.

Providing more policy support

Acting on your feedback, we’ve improved our policy experience to give you clearer updates, more time for substantial changes, more flexible requirements while still maintaining safety standards, and more helpful information with live Q&A's. Soon, we’ll be trying a new way of communicating with you in Play Console so you get information when you need it most. This year, we’re investing in even more ways to get your feedback, help you understand our policies, navigate our Policy Center, and help to fix issues before app submission through new features in Console and Android Studio.

We’re also expanding our popular Google Play Developer Help Community, which saw 2.7 million visits last year from developers looking to find answers to policy questions, share knowledge, and connect with fellow developers. This year, we’re planning to expand the community to include more languages, such as Indonesian, Japanese, Korean, and Portuguese.

Protecting your business and users from scams and attacks

The Play Integrity API is an essential tool to help protect your business from abuse such as fraud, bots, cheating, and data theft. Developers are already using the APIs to make over 500M daily checks for potentially fraudulent or risky behavior. In fact, apps that use Play Integrity features to detect suspicious activity are seeing an 80% drop in unauthorized usage on average compared to other apps.

Developers are using Play Integrity API's new app access risk detection to make over 500M daily checks for potentially fraudulent or risky behavior, and apps that use the Play Integrity API are seeing 80% lower usage from unverified, untrusted sources on average.
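
As an illustration, here is a hedged sketch of a classic integrity token request with the Play Integrity client library; the nonce must come from your server, and decrypting and verifying the token server-side is not shown.

import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

val integrityManager = IntegrityManagerFactory.create(applicationContext)

integrityManager
  .requestIntegrityToken(
    IntegrityTokenRequest.builder()
      .setNonce(nonce) // Base64 web-safe nonce generated by your server
      .build()
  )
  .addOnSuccessListener { response ->
    // Send response.token() to your server for decryption and verification
  }
  .addOnFailureListener { e ->
    // Handle errors (e.g., network issues or Play services unavailable)
  }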

This year, we’ll continue to enhance the Play Integrity API with stronger protection for even more users. We recently improved the technology that powers the API on all devices running Android 13 (API level 33) and above, making it faster, more reliable, and more private for users. We also launched enhanced security signals to help you decide how much you trust the environment your app is running in, which we’ll automatically roll out to all developers who use the API in May. You can opt in now to start using the improved verdicts today.

We’ll be adding new features later this year to help you deal with emerging threats, such as the ability to re-identify abusive and risky devices in a way that also preserves user privacy. We’re also building more tools to help you guide users to fix issues, like if they need a security update or they’re using a tampered version of your app.

Providing additional validation for your app

For apps in select categories, we offer badges that provide an extra layer of validation and connect users with safe, high-quality, and useful experiences. Building on the work of last year’s “Government” badge, which helps users identify official government apps, this year we introduced a “Verified” badge to help users discover VPN apps that take extra steps to demonstrate their commitment to security. We’ll continue to expand on this and add badges to more app categories in the future.

Partnering to keep kids safe

Whether your app is specifically designed for kids or simply attracts their attention, there is an added responsibility to ensure a safe and trusted experience. We want to partner with you to keep kids and teens safe online, protect their privacy, and empower families. In addition to Google Play’s Teacher Approved program, Families policies, and tools like the Restrict Declared Minors setting within the Google Play Console, we’re building tools like the Credential Manager API, now in beta for Digital IDs.

Strengthening the Android ecosystem

In addition to helping developers build stronger, safer apps on Google Play, we remain committed to protecting the broader Android ecosystem. Last year, our investments in stronger privacy policies, AI-powered threat detection and other security measures prevented 2.36 million policy-violating apps from being published on Google Play. By contrast, our most recent analysis found over 50 times more Android malware from Internet-sideloaded sources (like browsers and messaging apps) than on Google Play. This year we’re working on ways to make it even harder for malicious actors to hide or trick users into harmful installs, which will not only protect your business from fraud but also help users download your apps with confidence.

Our most recent analysis found over 50 times more Android malware from Internet-sideloaded sources than on Google Play

Meanwhile, Google Play Protect is always evolving to combat new threats and protect users from harmful apps that can lead to scams and fraud. As this is a core part of user safety, we’re doing more to keep users from being socially engineered by scammers into turning this protection off. First, Google Play Protect live threat detection is expanding its protection to target malicious applications that try to impersonate financial apps. And our enhanced financial fraud protection pilot has continued to expand after a successful launch in select countries where we saw malware-based financial fraud coming from Internet-sideloaded sources. We are planning to expand the pilot throughout this year to additional countries where we have seen higher levels of malware-based financial fraud.

We’re even working with other leaders across the industry to protect all users, no matter what device they use or where they download their apps. As a founding member of the App Defense Alliance, we’re working to establish and promote industry-wide security standards for mobile and web applications, as well as cloud configurations. Recently, the ADA launched Application Security Assessments (ASA) v1.0, which provides clear guidance to developers on protecting sensitive data and defending against cyber attacks to strengthen user trust.

What's next

Please keep the feedback coming! We appreciate knowing what can make our developers’ experiences more efficient while ensuring we maintain the highest standards in app safety. Thank you for your continued partnership in making Android and Google Play a safe, thriving platform for everyone.

#WeArePlay | How Memory Lane Games helps people with dementia

Posted by Robbie McLachlan – Developer Marketing

In our latest #WeArePlay film, which celebrates the people behind apps and games, we meet Bruce - a co-founder of Memory Lane Games. His company turns cherished memories into simple, engaging quizzes for people with different types of dementia. Discover how Memory Lane Games blends nostalgia and technology to spark conversations and emotional connections.


What inspired the idea behind Memory Lane Games?

The idea for Memory Lane Games came about one day at the pub when Peter was telling me how his mum, even with vascular dementia, lights up when she looks at old family photos. It got me thinking about my own mum, who treasures old photos just as much. The idea hit us – why not turn those memories into games? We wanted to help people reconnect with their past and create moments where conversations could flow naturally.

Memory Lane Games co-founders, Peter and Bruce from Isle of Man

Can you tell us of a memorable moment in the journey when you realized how powerful the game was?

We knew we were onto something meaningful when a caregiver in a memory cafe told us about a man who was pretty much non-verbal but would enjoy playing. He started humming along to one of our music trivia games, then suddenly said, "Roy Orbison is a way better singer than Elvis, but Elvis had a better manager." The caregiver was in tears—it was the first complete sentence he’d spoken in months. Moments like these remind us why we’re doing this—it’s not just about games; it’s about unlocking moments of connection and joy that dementia often takes away.

A user plays Memory Lane Games from their phone

One of the key features is having errorless fun with the games, why was that so important?

We strive for frustration-free design. With our games, there are no wrong answers—just gentle prompts to trigger memories and spark conversations about topics they are interested in. It’s not about winning or losing; it’s about rekindling connections and creating moments of happiness without any pressure or frustration. Dementia can make day-to-day tasks challenging, and the last thing anyone needs is a game that highlights what they might not remember or get right. Caregivers also like being able to redirect attention back to something familiar and fun when behaviour gets more challenging.

How has Google Play helped your journey?

What’s been amazing is how Google Play has connected us with an incredibly active and engaged global community without any major marketing efforts on our part.

For instance, we got our first big traction in places like the Philippines and India—places we hadn’t specifically targeted. Yet here we are, with thousands of downloads in more than 100 countries. That reach wouldn’t have been possible without Google Play.

A group of senior citizen gather around a table to play a round of Memory Lane Games from a shared mobile device

What is next for Memory Lane Games?

We’re really excited about how we can use AI to take Memory Lane Games to the next level. Our goal is to use generative AI, like Google’s Gemini, to create more personalized and localized game content. For example, instead of just focusing on general memories, we want to tailor the game to a specific village the player came from, or a TV show they used to watch, or even local landmarks from their family’s hometown. AI will help us offer games that are deeply personal. Plus, with the power of AI, we can create games in multiple languages, tapping into new regions like Japan, Nigeria or Mexico.

Discover other inspiring app and game founders featured in #WeArePlay.




The Third Beta of Android 16

Posted by Matthew McCullough – VP of Product Management, Android Developer

Android 16 has officially reached Platform Stability today with Beta 3! That means the API surface is locked, the app-facing behaviors are final, and you can push your Android 16-targeted apps to the Play Store right now. Read on for coverage of new security and accessibility features in Beta 3.

Android delivers enhancements and new features year-round, and your feedback on the Android beta program plays a key role in helping Android continuously improve. The Android 16 developer site has more information about the beta, including how to get it onto devices and the release timeline. We’re looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that benefits everyone.

New in Android 16 Beta 3

At this late stage in the development cycle, there are only a few new things in the Android 16 Beta 3 release for you to consider when developing your apps.

Android 16 timeline showing we are on time with Beta releases ending in March

Broadcast audio support

Pixel 9 devices on Android 16 Beta now support Auracast broadcast audio with compatible LE Audio hearing aids, part of Android's work to enhance audio accessibility. Built on the LE Audio standard, Auracast enables compatible hearing aids and earbuds to receive direct audio streams from public venues like airports, concerts, and classrooms. Our Keyword post has more on this technology.

Outline text for maximum text contrast

Users with low vision often have reduced contrast sensitivity, making it challenging to distinguish objects from their backgrounds. To help these users, Android 16 Beta 3 introduces outline text, which replaces high-contrast text and draws a larger contrasting area around text to greatly improve legibility.

Android 16 also contains new AccessibilityManager APIs that let your app check whether this mode is enabled, or register a listener for changes. This is primarily for UI toolkits like Compose to offer a similar visual experience. If you maintain a UI toolkit library, or your app performs custom text rendering that bypasses the android.text.Layout class, you can use this to know when outline text is enabled.

Text with enhanced contrast before and after Android 16's new outline text accessibility feature

Test your app with Local Network Protection

Android 16 Beta 3 adds the ability to test the Local Network Protection (LNP) feature which is planned for a future Android major release. It gives users more control over which apps can access devices on their local network.

What's Changing?

Currently, any app with the INTERNET permission can communicate with devices on the user's local network. LNP will eventually require apps to request a specific permission to access the local network.

Beta 3: Opt-In and Test

In Beta 3, LNP is an opt-in feature. This is your chance to test your app and identify any parts that rely on local network access. Use this adb command to enable LNP restrictions for your app:

adb shell am compat enable RESTRICT_LOCAL_NETWORK <your_package_name>

After rebooting your device, your app's local network access is restricted. Test features that might interact with local devices (e.g., device discovery, media casting, connecting to IoT devices). Expect to see socket errors like EPERM or ECONNABORTED if your app tries to access the local network without the necessary permission. See the developer guide for more information, including how to re-enable local network access.

This is a significant change, and we're committed to working with you to ensure a smooth transition. By testing and providing feedback now, you can help us build a more private and secure Android ecosystem.

Get your apps, libraries, tools, and game engines ready!

If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your developers know if updates are needed to fully support Android 16.

To test, install your production app, or a test app that uses your library or engine, onto a device or emulator running Android 16 Beta 3, via Google Play or other means. Work through all of your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are several changes to focus on that apply even if you don't yet target Android 16:

    • JobScheduler: JobScheduler quotas are enforced more strictly in Android 16; enforcement will occur if a job executes while the app is on top, when a foreground service is running, or in the active standby bucket. setImportantWhileForeground is now a no-op. The new stop reason STOP_REASON_TIMEOUT_ABANDONED occurs when we detect that the app can no longer stop the job.
    • Broadcasts: Ordered broadcasts using priorities only work within the same process. Use other IPC if you need cross-process ordering.
    • ART: If you use reflection, JNI, or any other means to access Android internals, your app might break. This is never a best practice. Test thoroughly.
    • 16 KB page size: If your app isn't ready for 16 KB page sizes, you can use the new compatibility mode flag, but we recommend migrating to 16 KB support for best performance.

Other changes will become impactful once your app targets Android 16; review the behavior changes documentation for the details.

Remember to thoroughly exercise libraries and SDKs that your app is using during your compatibility testing. You may need to update to current SDK versions or reach out to the developer for help if you encounter any issues.

Once you’ve published the Android 16-compatible version of your app, you can start the process to update your app's targetSdkVersion. Review the behavior changes that apply when your app targets Android 16 and use the compatibility framework to help quickly detect issues.

Two Android API releases in 2025

This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. This Q2 major release will be the only release in 2025 that includes behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; like our non-SDK quarterly releases, it will not include any intentional app-breaking behavior changes.

Android API release timeline 2025

We'll continue to have quarterly Android releases. The Q1 and Q3 updates provide incremental updates to ensure continuous quality. We’re putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.

There’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.

Get started with Android 16

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 2 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 3.

While the API and behaviors are final, we're still looking for your feedback so please report issues on the feedback page. The earlier we get your feedback, the better chance we'll be able to address it in this or a future release.

For the best development experience with Android 16, we recommend that you use the latest feature drop of Android Studio (Meerkat). Once you’re set up, here are some of the things you should do:

    • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.

We’ll update the beta system images and SDK regularly throughout the Android 16 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas.

For complete information on Android 16, please visit the Android 16 developer site.

The Second Beta of Android 16

Posted by Matthew McCullough – VP of Product Management, Android Developer

Today we're releasing the second beta of Android 16, continuing our work to build a platform that enables creative expression. You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air.

This build adds new support for professional camera experiences and graphical effects, extends our performance framework, and continues the evolution of features related to privacy, security, and background tasks. We’re looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that works for everyone.

Media and camera updates

Android 16 enhances support for professional camera users, allowing for hybrid auto exposure along with precise color temperature and tint adjustments. It's easier than ever to capture motion photos with new Intent actions, and we're continuing to improve UltraHDR images, with support for HEIC encoding and new parameters from the ISO 21496-1 draft standard.

Hybrid auto-exposure

Android 16 adds new hybrid auto-exposure modes to Camera2, allowing you to manually control specific aspects of exposure while letting the auto-exposure (AE) algorithm handle the rest. You can control ISO + AE, and exposure time + AE, providing greater flexibility compared to the current approach where you either have full manual control or rely entirely on auto-exposure.

fun setISOPriority() {
    // ...

    val availablePriorityModes = mStaticInfo.characteristics.get(
        CameraCharacteristics.CONTROL_AE_AVAILABLE_PRIORITY_MODES
    )
    // ...

    // Turn on AE mode to set priority mode
    reqBuilder[CaptureRequest.CONTROL_AE_MODE] = CameraMetadata.CONTROL_AE_MODE_ON
    reqBuilder[CaptureRequest.CONTROL_AE_PRIORITY_MODE] = CameraMetadata.CONTROL_AE_PRIORITY_MODE_SENSOR_SENSITIVITY_PRIORITY
    reqBuilder[CaptureRequest.SENSOR_SENSITIVITY] = TEST_SENSITIVITY_VALUE
    val request: CaptureRequest = reqBuilder.build()

    // ...
}

Precise color temperature and tint adjustments

Android 16 adds camera support for fine color temperature and tint adjustments to better support professional video recording applications. White balance is currently controlled through CONTROL_AWB_MODE, which is limited to a preset list of options, such as Incandescent, Cloudy, and Twilight. The new COLOR_CORRECTION_MODE_CCT enables the use of COLOR_CORRECTION_COLOR_TEMPERATURE and COLOR_CORRECTION_COLOR_TINT for precise adjustments of white balance based on the correlated color temperature.

fun setCCT() {
    // ... (Your existing code before this point) ...

    val colorTemperatureRange: Range<Int> =
        mStaticInfo.characteristics[CameraCharacteristics.COLOR_CORRECTION_COLOR_TEMPERATURE_RANGE]

    // Set to manual mode to enable CCT mode
    reqBuilder[CaptureRequest.CONTROL_AWB_MODE] = CameraMetadata.CONTROL_AWB_MODE_OFF
    reqBuilder[CaptureRequest.COLOR_CORRECTION_MODE] = CameraMetadata.COLOR_CORRECTION_MODE_CCT
    reqBuilder[CaptureRequest.COLOR_CORRECTION_COLOR_TEMPERATURE] = 5000
    reqBuilder[CaptureRequest.COLOR_CORRECTION_COLOR_TINT] = 30

    val request: CaptureRequest = reqBuilder.build()

    // ... (Your existing code after this point) ...
}
Five photos of the back of a Google Pixel phone demonstrate different color temperatures and tints. The original photo is in the top left, followed by Tint -50, Tint +50, Temp 3000, and Temp 7000.

Motion photo capture intent actions

Android 16 adds the standard Intent actions ACTION_MOTION_PHOTO_CAPTURE and ACTION_MOTION_PHOTO_CAPTURE_SECURE, which request that the camera application capture a motion photo and return it.

Moving image of a diverse group of friends playing a game of horseshoe

You must either pass an EXTRA_OUTPUT extra to control where the image is written, or set a Uri through Intent.setClipData(). If you don't set a ClipData, the EXTRA_OUTPUT Uri will be copied there for you when you call Context.startActivity().
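
A minimal launch sketch follows; note that the assumption that the action constant lives on MediaStore (alongside ACTION_IMAGE_CAPTURE) should be verified against the final reference.

import android.content.Intent
import android.net.Uri
import android.provider.MediaStore
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class CaptureActivity : AppCompatActivity() {
    // On success, the motion photo is written to the Uri you supplied.
    private val motionPhotoLauncher = registerForActivityResult(
        ActivityResultContracts.StartActivityForResult()
    ) { result -> /* handle RESULT_OK / RESULT_CANCELED */ }

    fun captureMotionPhoto(outputUri: Uri) {
        val intent = Intent(MediaStore.ACTION_MOTION_PHOTO_CAPTURE) // assumed home class
            .putExtra(MediaStore.EXTRA_OUTPUT, outputUri)
        // If you don't set ClipData yourself, startActivity copies EXTRA_OUTPUT into it.
        motionPhotoLauncher.launch(intent)
    }
}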

UltraHDR image enhancements

A split-screen image compares Standard Dynamic Range (SDR) and High Dynamic Range (HDR) image quality side-by-side using a singular image of a detailed landscape. The HDR side is more vivid and vibrant.

Android 16 continues our work to deliver dazzling image quality with UltraHDR images. It adds support for UltraHDR images in the HEIC file format. These images will get ImageFormat type HEIC_ULTRAHDR and will contain an embedded gainmap similar to the existing UltraHDR JPEG format. We're working on AVIF support for UltraHDR as well, so stay tuned.

In addition, Android 16 implements additional parameters in UltraHDR from the ISO 21496-1 draft standard, including the ability to get and set the colorspace that gainmap math should be applied in, as well as support for HDR encoded base images with SDR gainmaps.

Custom graphical effects with AGSL

Android 16 adds RuntimeColorFilter and RuntimeXfermode, allowing you to author complex effects like Threshold, Sepia, and Hue Saturation and apply them to draw calls. Since Android 13, you've been able to use AGSL to create custom RuntimeShaders that extend Shaders. The new API mirrors this, adding an AGSL-powered RuntimeColorFilter that extends ColorFilters, and a Xfermode effect that allows you to implement AGSL-based custom compositing and blending between source and destination pixels.

private val thresholdEffectString = """
    uniform half threshold;
    half4 main(half4 c) {
        half luminosity = dot(c.rgb, half3(0.2126, 0.7152, 0.0722));
        half bw = step(threshold, luminosity);
        return bw.xxx1 * c.a;
    }"""

fun setCustomColorFilter(paint: Paint) {
    val filter = RuntimeColorFilter(thresholdEffectString)
    // Set the threshold uniform declared in the AGSL effect above
    filter.setFloatUniform("threshold", 0.5f)
    paint.colorFilter = filter
}

Behavior changes

With every Android release, we seek to make the platform more efficient, privacy conscious, internationalization friendly, and robust, balancing the needs of apps against hardware support, system performance, user privacy, and battery life. This can result in behavior changes that impact compatibility.

Edge to edge opt-out going away

Android 15 enforced edge-to-edge for apps targeting Android 15 (SDK 35), but your app could opt out by setting R.attr#windowOptOutEdgeToEdgeEnforcement to true. Once your app targets Android 16 (Baklava), R.attr#windowOptOutEdgeToEdgeEnforcement is deprecated and disabled, and your app cannot opt out of going edge-to-edge. To be compatible with Android 16 Beta 2, ensure your app supports edge-to-edge and remove any use of R.attr#windowOptOutEdgeToEdgeEnforcement. To support edge-to-edge, see the Compose and Views guidance. Please let us know about concerns in our tracker on the feedback page.

Health and fitness permissions

For apps targeting Android 16 or higher, BODY_SENSORS permissions are transitioning to the granular permissions under android.permissions.health also used by Health Connect. Any API, data type, or foreground service type previously requiring BODY_SENSORS or BODY_SENSORS_BACKGROUND will now require the corresponding android.permissions.health permission instead, so if your app uses these APIs, update it to request the respective granular permissions.

These permissions are the same as those that guard access to reading data from Health Connect, the Android datastore for health, fitness, and wellness data.

Abandoned empty jobs stop reason

An abandoned job occurs when the JobParameters object associated with the job has been garbage collected, but jobFinished has not been called to signal job completion. This indicates that the job may be running and being rescheduled without the application's awareness.

Applications in Android 16 that rely on JobScheduler without maintaining a strong reference to the JobParameters object will now be granted the new job stop reason STOP_REASON_TIMEOUT_ABANDONED on timeout, instead of STOP_REASON_TIMEOUT.

If there are frequent occurrences of the new abandoned stop reason, the system will take mitigation steps to reduce job frequency. Please use the new stop reason to detect and reduce abandoned jobs.
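
As a sketch, a service that holds a strong reference to its JobParameters and always reports completion avoids the abandoned classification (doWork is a hypothetical stand-in for your job logic):

import android.app.job.JobParameters
import android.app.job.JobService

class SyncJobService : JobService() {
    // Keep a strong reference for as long as the job runs.
    private var runningParams: JobParameters? = null

    override fun onStartJob(params: JobParameters): Boolean {
        runningParams = params
        Thread {
            doWork() // hypothetical job logic
            jobFinished(params, /* wantsReschedule = */ false) // always signal completion
            runningParams = null
        }.start()
        return true // work continues off the main thread
    }

    override fun onStopJob(params: JobParameters): Boolean {
        runningParams = null
        return true // ask the system to reschedule
    }

    private fun doWork() { /* ... */ }
}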

Note: If you're using WorkManager, you're not expected to be impacted by this change — one nice side effect of using Android Jetpack to schedule your work.

Intent redirect changes

Android 16 introduces default security hardening against Intent redirection attacks regardless of your app's targetSDK version. The removeLaunchSecurityProtection API allows you to opt-out of this protection if your testing reveals issues.

Note: Opting out of security protections should be done with caution and only when absolutely necessary, as it can increase the risk of security vulnerabilities.

// Opt a nested Intent out of launch security protection before starting it.
val iSublevel = intent.getParcelableExtra("sub_intent", Intent::class.java)
iSublevel?.let {
    it.removeLaunchSecurityProtection()
    startActivity(it)
}

Elegant font APIs deprecated and disabled

Apps targeting Android 15 (API level 35) have the elegantTextHeight TextView attribute set to true by default, replacing the compact font with one that is much more readable. You could override this by setting the elegantTextHeight attribute to false.

Android 16 deprecates the elegantTextHeight attribute, and the attribute will be ignored once your app targets Android 16. The “UI fonts” controlled by these APIs are being discontinued, so you should adapt any layouts to ensure consistent and future-proof text rendering in Arabic, Lao, Myanmar, Tamil, Gujarati, Kannada, Malayalam, Odia, Telugu, or Thai.

Example of default elegantTextHeight behavior for apps targeting Android 14 (API level 34) and lower

Example of default elegantTextHeight behavior for apps targeting Android 15 (API level 35) and higher

16 KB page size compatibility mode

Android 15 introduced support for 16 KB memory pages to optimize performance of the platform. Android 16 adds a compatibility mode, allowing some apps built for 4 KB memory pages to run on a device configured for 16 KB memory pages.

If Android detects that your app has 4 KB aligned memory pages, it will automatically use compatibility mode and display a notification dialog to the user. Setting the android:pageSizeCompat property in AndroidManifest.xml to enable the backwards-compatibility mode will prevent the dialog from being displayed when your app launches. For best performance, reliability, and stability, your app should still be 16 KB aligned. Read our recent blog post about updating your apps to support 16 KB memory pages for more details.

Screenshot of PageSizeCompatTestApp in Android 16

Measurement system customization

Users can now customize their measurement system in regional preferences within Settings. The user preference is included as part of the locale code, so you can register a BroadcastReceiver on ACTION_LOCALE_CHANGED to handle locale configuration changes when regional preferences change.
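
A minimal sketch of listening for the change (refreshMeasurementUi is a hypothetical callback):

import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter

// Re-render measurement-dependent UI whenever the locale, which now carries
// the measurement-system preference, changes.
fun registerMeasurementListener(context: Context, refreshMeasurementUi: () -> Unit) {
    val receiver = object : BroadcastReceiver() {
        override fun onReceive(context: Context, intent: Intent) {
            if (intent.action == Intent.ACTION_LOCALE_CHANGED) refreshMeasurementUi()
        }
    }
    context.registerReceiver(receiver, IntentFilter(Intent.ACTION_LOCALE_CHANGED))
}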

Using formatters can help match the local experience. For example, "0.5 in" in English (United States) is "12,7 mm" for a user who has set their phone to English (Denmark) or who uses their phone in English (United States) with the metric system as the measurement system preference.

To find these settings in Android 16 Beta 2, open the Settings app and navigate to System > Languages & region.

Content handling for live wallpapers

In Android 16, the live wallpaper framework is gaining a new content API to address the challenges of dynamic, user-driven wallpapers. Currently, live wallpapers incorporating user-provided content require complex, service-specific implementations. Android 16 introduces WallpaperDescription and WallpaperInstance. WallpaperDescription allows you to identify distinct instances of a live wallpaper from the same service. For example, a wallpaper that has instances on both the home screen and on the lock screen may have unique content in both places. The wallpaper picker and WallpaperManager use this metadata to better present wallpapers to users, streamlining the process for you to create diverse and personalized live wallpaper experiences.

Headroom APIs in ADPF

The SystemHealthManager introduces the getCpuHeadroom and getGpuHeadroom APIs, designed to provide games and resource-intensive apps with estimates of available CPU and GPU resources. These methods offer a way for you to gauge how your app or game can best improve system health, particularly when used in conjunction with other Android Dynamic Performance Framework (ADPF) APIs that detect thermal throttling. By using CpuHeadroomParams and GpuHeadroomParams on supported devices, you will be able to customize the time window used to compute the headroom and select between average or minimum resource availability. This can help you reduce your CPU or GPU resource usage accordingly, leading to better user experiences and improved battery life.
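
As a heavily hedged sketch of the call shape: beyond SystemHealthManager, getCpuHeadroom, and getGpuHeadroom, every name and type below is an assumption, so consult the final ADPF reference before relying on it.

import android.content.Context
import android.os.health.SystemHealthManager

fun logHeadroom(context: Context) {
    val shm = context.getSystemService(SystemHealthManager::class.java)
    // Assumed: null params fall back to defaults, and the result is a
    // percentage of currently unused capacity.
    val cpuHeadroom = shm.getCpuHeadroom(null)
    val gpuHeadroom = shm.getGpuHeadroom(null)
    if (cpuHeadroom < 10f) {
        // Hypothetical response: reduce simulation or render workload.
    }
}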

Key sharing API

Android 16 adds APIs that support sharing access to Android Keystore keys with other apps. The new KeyStoreManager class supports granting and revoking access to keys by app uid, and includes an API for apps to access shared keys.

Standardized picture and audio quality framework for TVs

The new MediaQuality package in Android 16 exposes a set of standardized APIs for access to audio and picture profiles and hardware-related settings. This allows streaming apps to query profiles and apply them to media dynamically:

    • Movies mastered with a wider dynamic range require greater color accuracy to see subtle details in shadows and adjust to ambient light, so a profile that prefers color accuracy over brightness may be appropriate.
    • Live sporting events are often mastered with a narrow dynamic range, but are often watched in daylight, so a profile that gives preference to brightness over color accuracy can give better results.
    • Fully interactive content needs minimal processing to reduce latency and benefits from higher frame rates, which is why many TVs ship with a game profile.

The API allows apps to switch between profiles, letting users enjoy the benefits of tuning supported TVs to best suit their content.

Accessibility

Android 16 adds additional APIs to enhance UI semantics, helping improve consistency for users who rely on accessibility services, such as TalkBack.

Duration added to TtsSpan

Android 16 extends TtsSpan with a TYPE_DURATION, consisting of ARG_HOURS, ARG_MINUTES, and ARG_SECONDS. This allows you to directly annotate time duration, ensuring accurate and consistent text-to-speech output with services like TalkBack.
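
For example, a sketch annotating a displayed duration so TalkBack reads it as one hour, thirty minutes, and five seconds:

import android.text.Spannable
import android.text.SpannableString
import android.text.Spanned
import android.text.style.TtsSpan

// Annotate "1:30:05" using the new TYPE_DURATION span and its ARG_* arguments.
fun buildDurationText(): Spannable {
    val span = TtsSpan.Builder<TtsSpan.Builder<*>>(TtsSpan.TYPE_DURATION)
        .setIntArgument(TtsSpan.ARG_HOURS, 1)
        .setIntArgument(TtsSpan.ARG_MINUTES, 30)
        .setIntArgument(TtsSpan.ARG_SECONDS, 5)
        .build()
    return SpannableString("1:30:05").apply {
        setSpan(span, 0, length, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE)
    }
}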

Support elements with multiple labels

Android currently allows UI elements to derive their accessibility label from another element, and now offers the ability to associate multiple labels with one element, a common scenario in web content. By introducing a list-based API within AccessibilityNodeInfo, Android can directly support these multi-label relationships. As part of this change, we've deprecated AccessibilityNodeInfo's setLabeledBy and getLabeledBy in favor of addLabeledBy, removeLabeledBy, and getLabeledByList.
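
A sketch of the new list-based API from a custom view; addLabeledBy's parameter type is an assumption here (mirroring the existing setLabeledBy(View)), and the label views are hypothetical.

import android.content.Context
import android.view.View
import android.view.accessibility.AccessibilityNodeInfo

class LabeledFieldView(context: Context) : View(context) {
    lateinit var titleLabel: View // hypothetical label views
    lateinit var unitsLabel: View

    override fun onInitializeAccessibilityNodeInfo(info: AccessibilityNodeInfo) {
        super.onInitializeAccessibilityNodeInfo(info)
        info.addLabeledBy(titleLabel) // assumed to mirror setLabeledBy(View)
        info.addLabeledBy(unitsLabel)
    }
}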

Improved support for expandable elements

Android 16 adds accessibility APIs that allow you to convey the expanded or collapsed state of interactive elements, such as menus and expandable lists. By setting the expanded state using setExpandedState and dispatching TYPE_WINDOW_CONTENT_CHANGED AccessibilityEvents with a CONTENT_CHANGE_TYPE_EXPANDED content change type, you can ensure that screen readers like TalkBack announce state changes, providing a more intuitive and inclusive user experience.
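
As a sketch, dispatching the announcement for a custom disclosure widget (you would also report the new state via setExpandedState when populating the node):

import android.view.View
import android.view.accessibility.AccessibilityEvent

// Tell accessibility services that this view just expanded or collapsed.
fun notifyExpansionChanged(view: View) {
    val event = AccessibilityEvent(AccessibilityEvent.TYPE_WINDOW_CONTENT_CHANGED).apply {
        contentChangeTypes = AccessibilityEvent.CONTENT_CHANGE_TYPE_EXPANDED
    }
    view.sendAccessibilityEventUnchecked(event)
}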

Indeterminate ProgressBars

Android 16 adds RANGE_TYPE_INDETERMINATE, giving a way for you to expose RangeInfo for both determinate and indeterminate ProgressBar widgets, allowing services like TalkBack to more consistently provide feedback for progress indicators.

Tri-state CheckBox

The new AccessibilityNodeInfo getChecked and setChecked(int) methods in Android 16 now support a "partially checked" state in addition to "checked" and "unchecked." This replaces the deprecated boolean isChecked and setChecked(boolean).

Two Android API releases in 2025

This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. The Q2 major release will be the only release in 2025 to include behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; like our non-SDK quarterly releases, it will not include any intentional app-impacting behavior changes.

2025 SDK release timeline showing a features only update in Q1 and Q3, a major SDK release with behavior changes, APIs, and features in Q2, and a minor SDK release with APIs and features in Q4

We'll continue to have quarterly Android releases. The Q1 and Q3 updates provide incremental updates to ensure continuous quality. We’re putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.

There’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.

How to get ready

In addition to performing compatibility testing on this next major release, make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes as they become available for early testing.

App compatibility

The Android 16 production timeline shows the release stages, highlighting 'Beta Releases' and 'Platform Stability' in blue and green, respectively, from December to the final release.

The Android 16 Preview program runs from November 2024 until the final public release in Q2 of 2025. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight critical APIs as they are ready to test in the preview program in blogs and on the Android 16 developer website.

We’re targeting March of 2025 for our Platform Stability milestone. At this milestone, we’ll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. From that time you’ll have several months before the final release to complete your testing. Learn more by checking the release timeline details.

Get started with Android 16

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 1 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 2.

We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.

For the best development experience with Android 16, we recommend that you use the latest preview of Android Studio (Meerkat). Once you’re set up, here are some of the things you should do:

    • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
    • Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or emulator running Android 16 and extensively test it.

We’ll update the beta system images and SDK regularly throughout the Android 16 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas.

For complete information, visit the Android 16 developer site.

TrustedTime API: Introducing a reliable approach to time keeping for your apps

Posted by Kanyinsola Fapohunda – Software Engineer, and Geoffrey Boullanger – Technical Lead

Accurate time is crucial for a wide variety of app functionalities, from scheduling and event management to transaction logging and security protocols. However, a user can change the device’s time, so a more accurate source of time than the device’s local system time may be required. That's why we're introducing the TrustedTime API that leverages Google's infrastructure to deliver a trustworthy timestamp, independent of the device's potentially manipulated local time settings.

How does TrustedTime work?

The new API leverages Google's secure infrastructure to provide a trusted time source to your app. TrustedTime periodically syncs its clock to Google's servers, which have access to a highly accurate time source, so that you do not need to make a server request every time you want to know the current network time. Additionally, we've integrated a unique model that calculates the device's clock drift. This will inform you when the time may be inaccurate between network synchronizations.

Why is an accurate source of time important?

Many apps rely on the device's clock for various features. However, users can change their device's time settings, either intentionally or unintentionally, changing the time that your app sees. This can lead to problems such as:

    • Data Inconsistency: Apps relying on chronological event ordering are vulnerable to data corruption if users manipulate device time. TrustedTime mitigates this risk by providing a trustworthy time source.
    • Security Gaps: Time-based security measures, like one-time passwords or timed access controls require an unaltered time source to be effective.
    • Unreliable Scheduling: Apps that depend on accurate scheduling, like calendar or reminder apps, can malfunction if the device clock (i.e. Unix timestamp) is incorrect.
    • Inaccurate Time: The device's internal clock can drift due to various factors, such as temperature, doze mode, battery level, etc. This can lead to problems in applications that require more precision. The TrustedTime API also provides the estimated error with the timestamps, so that you can ensure your app's time-sensitive operations are performed correctly.
    • Lack of Consistency Between Devices: Inconsistent time across devices can cause problems in multi-device scenarios, such as gaming or collaborative applications. The TrustedTime API helps ensure that all devices have a consistent view of time, improving the user experience.
    • Unnecessary Power and Data Consumption: TrustedTime is designed to be more efficient than calling an NTP server every time an app needs the current time. It avoids the overhead of repeated network requests by periodically syncing its clock with time servers. This synced time is then used as a reference point, and the TrustedTime API calculates the current time based on the device's internal clock. This approach reduces network usage and improves performance for apps that need frequent time checks.

TrustedTime Use Cases

The TrustedTime API opens up a range of possibilities for enhancing the reliability and security of your apps, with use cases in areas such as:

    • Financial Applications: Ensure the accuracy of transaction timestamps even when the device is offline, preventing fraud and disputes.
    • Gaming: Implement fair play by preventing users from manipulating the game clock to gain an unfair advantage.
    • Limited-Time Offers: Guarantee that promotions and offers expire at the correct time, regardless of the user's device settings.
    • E-commerce: Accurately track order processing and delivery times.
    • Content Licensing: Enforce time-based restrictions on digital content, like rentals or subscriptions.
    • IoT Devices: Synchronize clocks across multiple devices for consistent data logging and control.
    • Productivity apps: Accurately record the time of any changes made to cloud documents while offline.

Getting started with the TrustedTime API

The TrustedTime API is built on top of Google Play services, making integration seamless for most Android developers.

The simplest way to integrate is to initialize the TrustedTimeClient early in your app lifecycle, such as in the onCreate() method of your Application class. The following example uses dependency injection with Hilt to make the time client available to components throughout the app.

[Optional] Set up dependency injection

// TrustedTimeClientAccessor.kt
import com.google.android.gms.tasks.Task
import com.google.android.gms.time.TrustedTimeClient

interface TrustedTimeClientAccessor {
  fun createClient(): Task<TrustedTimeClient>
}

// TrustedTimeModule.kt
import android.content.Context
import com.google.android.gms.tasks.Task
import com.google.android.gms.time.TrustedTime
import com.google.android.gms.time.TrustedTimeClient
import dagger.Module
import dagger.Provides
import dagger.hilt.InstallIn
import dagger.hilt.android.qualifiers.ApplicationContext
import dagger.hilt.components.SingletonComponent

@Module
@InstallIn(SingletonComponent::class)
class TrustedTimeModule {
  @Provides
  fun provideTrustedTimeClientAccessor(
    @ApplicationContext context: Context
  ): TrustedTimeClientAccessor {
    return object : TrustedTimeClientAccessor {
      override fun createClient(): Task<TrustedTimeClient> {
        return TrustedTime.createClient(context)
      }
    }
  }
}

Initialize early in your app's lifecycle

// TrustedTimeDemoApplication.kt
import android.app.Application
import com.google.android.gms.time.TrustedTimeClient
import dagger.hilt.android.HiltAndroidApp
import javax.inject.Inject

@HiltAndroidApp
class TrustedTimeDemoApplication : Application() {

  @Inject
  lateinit var trustedTimeClientAccessor: TrustedTimeClientAccessor

  var trustedTimeClient: TrustedTimeClient? = null
    private set

  override fun onCreate() {
    super.onCreate()
    trustedTimeClientAccessor.createClient().addOnCompleteListener { task ->
      if (task.isSuccessful) {
        // Stash the client
        trustedTimeClient = task.result
      } else {
        // Handle error, maybe retry later
        val exception = task.exception
      }
    }
    // To use Kotlin Coroutine, you can use the await() method, 
    // see https://developers.google.com/android/guides/tasks#kotlin_coroutine for more info.
  }
}

NOTE: If you don't use dependency injection in your app, you can simply call
`TrustedTime.createClient(context)` instead of using a TrustedTimeClientAccessor.

Use TrustedTimeClient anywhere in your app

// Retrieve the TrustedTimeClient from your application class
val myApp = applicationContext as TrustedTimeDemoApplication

// In this example, System.currentTimeMillis() is used as a fallback if the
// client is null (i.e. client creation task failed) or when there is no time
// signal available. You may not want to do this if using the system clock is
// not suitable for your use case.
val currentTimeMillis =
    myApp.trustedTimeClient?.computeCurrentUnixEpochMillis()
        ?: System.currentTimeMillis()
// trustedTimeClient.computeCurrentInstant() can be used if Instant is
// preferred to long for Unix epoch times and you are able to use the APIs.

Use in short-lived components like Activity

@AndroidEntryPoint
class MainActivity : AppCompatActivity() {
  @Inject
  lateinit var trustedTimeAccessor: TrustedTimeClientAccessor

  private var trustedTimeClient: TrustedTimeClient? = null

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    ...
    trustedTimeAccessor.createClient().addOnCompleteListener { task ->
      if (task.isSuccessful) {
        // Stash the client
        trustedTimeClient = task.result
      } else {
        // Handle error, maybe retry later or use another time source.
        val exception = task.exception
      }
    }
  }

  private fun getCurrentTimeInMillis(): Long? {
    return trustedTimeClient?.computeCurrentUnixEpochMillis()
  }
}

TrustedTime API availability and limitations

The TrustedTime API is available on all devices running Google Play services on Android 5 (Lollipop) and above. You need to add the dependency com.google.android.gms:play-services-time:16.0.1 (or above) to access the new API. No additional permission is required to use this API. However, TrustedTime needs an internet connection after the device starts up to provide timestamps. If the device hasn't connected to the internet since booting, the TrustedTime APIs won't return timestamps.
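
As a sketch, the module-level Gradle declaration (Kotlin DSL):

dependencies {
    // Play services TrustedTime artifact named above
    implementation("com.google.android.gms:play-services-time:16.0.1")
}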

It’s important to note that the device's internal clock can drift due to factors like temperature, doze mode, and battery level. TrustedTime doesn't prevent this drift, but its APIs provide an error estimate for each timestamp. Use this estimate to determine if the timestamp's accuracy meets your application's requirements. While TrustedTime makes it more difficult for users to manipulate the time accessed by your app, it does not guarantee complete safety. Advanced techniques can still be used to tamper with the device’s time.

Next steps

To learn more about the TrustedTime API, check out the following resources: