
Announcing Jetpack Navigation 3

Posted by Don Turner - Developer Relations Engineer

Navigating between screens in your app should be simple, shouldn't it? However, building a robust, scalable, and delightful navigation experience can be a challenge. For years, the Jetpack Navigation library has been a key tool for developers, but as the Android UI landscape has evolved, particularly with the rise of Jetpack Compose, we recognized the need for a new approach.

Today, we're excited to introduce Jetpack Navigation 3, a new navigation library built from the ground up specifically for Compose. For brevity, we'll just call it Nav3 from now on. This library embraces the declarative programming model and Compose state as fundamental building blocks.

Why a new navigation library?

The original Jetpack Navigation library (sometimes referred to as Nav2 as it's on major version 2) was initially announced back in 2018, before AndroidX and before Compose. While it served its original goals well, we heard from you that it had several limitations when working with modern Compose patterns.

One key limitation was that the back stack state could only be observed indirectly. This meant there could be two sources of truth, potentially leading to an inconsistent application state. Also, Nav2's NavHost was designed to display only a single destination – the topmost one on the back stack – filling the available space. This made it difficult to implement adaptive layouts that display multiple panes of content simultaneously, such as a list-detail layout on large screens.

illustration of single pane and two-pane layouts showing list and detail features
Figure 1. Changing from single pane to multi-pane layouts can create navigational challenges

Founding principles

Nav3 is built upon principles designed to provide greater flexibility and developer control:

    • You own the back stack: You, the developer, not the library, own and control the back stack. It's a simple list which is backed by Compose state. Specifically, Nav3 expects your back stack to be SnapshotStateList<T> where T can be any type you choose. You can navigate by adding or removing items (Ts), and state changes are observed and reflected by Nav3's UI.
    • Get out of your way: We heard that you don't like a navigation library to be a black box with inaccessible internal components and state. Nav3 is designed to be open and extensible, providing you with building blocks and helpful defaults. If you want custom navigation behavior you can drop down to lower layers and create your own components and customizations.
    • Pick your building blocks: Instead of embedding all behavior within the library, Nav3 offers smaller components that you can combine to create more complex functionality. We've also provided a "recipes book" that shows how to combine components to solve common navigation challenges.

illustration of the Nav3 display observing changes to the developer-owned back stack
Figure 2. The Nav3 display observes changes to the developer-owned back stack.

Key features

    • Adaptive layouts: A flexible layout API (named Scenes) allows you to render multiple destinations in the same layout (for example, a list-detail layout on large screen devices). This makes it easy to switch between single and multi-pane layouts.
    • Modularity: The API design allows navigation code to be split across multiple modules. This improves build times and allows clear separation of responsibilities between feature modules.

moving image demonstrating custom animations and predictive back features on a mobile device
Figure 3. Custom animations and predictive back are easy to implement, and easy to override for individual destinations.

Basic code example

To give you an idea of how Nav3 works, here's a short code sample.

// Define the routes in your app and any arguments.
data object Home
data class Product(val id: String)

// Create a back stack, specifying the route the app should start with.
val backStack = remember { mutableStateListOf<Any>(Home) }

// A NavDisplay displays your back stack. Whenever the back stack changes, the display updates.
NavDisplay(
    backStack = backStack,

    // Specify what should happen when the user goes back
    onBack = { backStack.removeLastOrNull() },

    // An entry provider converts a route into a NavEntry which contains the content for that route.
    entryProvider = { route ->
        when (route) {
            is Home -> NavEntry(route) {
                Column {
                    Text("Welcome to Nav3")
                    Button(onClick = {
                        // To navigate to a new route, just add that route to the back stack
                        backStack.add(Product("123"))
                    }) {
                        Text("Click to navigate")
                    }
                }
            }
            is Product -> NavEntry(route) {
                Text("Product ${route.id}")
            }
            else -> NavEntry(Unit) { Text("Unknown route: $route") }
        }
    }
)

Get started and provide feedback

To get started, check out the developer documentation, plus the recipes repository which provides examples for:

    • common navigation UI, such as a navigation rail or bar
    • conditional navigation, such as a login flow
    • custom layouts using Scenes

We plan to provide code recipes, documentation and blogs for more complex use cases in the future.

Nav3 is currently in alpha, which means that the API is liable to change based on feedback. If you have any issues, or would like to provide feedback, please file an issue.

Nav3 offers a flexible and powerful foundation for building modern navigation in your Compose applications. We're really excited to see what you build with it.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

Androidify: Building powerful AI-driven experiences with Jetpack Compose, Gemini and CameraX

Posted by Rebecca Franks – Developer Relations Engineer

The Android bot is a beloved mascot for Android users and developers, and previous versions of the bot builder have been very popular, so we decided that this year we’d rebuild the bot maker from the ground up, using the latest technology backed by Gemini. Today we are releasing a new open source app, Androidify, for learning how to build powerful AI-driven experiences on Android using the latest technologies such as Jetpack Compose, Gemini through Firebase, CameraX, and Navigation 3.

a moving image of various droid bots dancing individually

Androidify app demo

Here’s an example of the app running on a device, showing how it converts a photo into an Android bot that represents my likeness:

moving image showing the conversion of an image of a woman in a pink dress holding an umbrella into a 3D image of a droid bot wearing a pink dress holding an umbrella

Under the hood

The app combines a variety of different Google technologies, such as:

    • Gemini API - through Firebase AI Logic SDK, for accessing the underlying Imagen and Gemini models.
    • Jetpack Compose - for building the UI with delightful animations and making the app adapt to different screen sizes.
    • Navigation 3 - the latest navigation library for building up Navigation graphs with Compose.
    • CameraX Compose and Media3 Compose - for building up a custom camera with custom UI controls (rear camera support, zoom support, tap-to-focus) and playing the promotional video.

This sample app is currently using a standard Imagen model, but we've been working on a fine-tuned model that's trained specifically on all of the pieces that make the Android bot cute and fun; we'll share that version later this year. In the meantime, don't be surprised if the sample app puts out some interesting looking examples!

How does the Androidify app work?

The app leverages our best practices for Architecture, Testing, and UI to showcase a real-world, modern AI application on device.

Flow chart describing Androidify app flow
Androidify app flow chart detailing how the app works with AI

AI in Androidify with Gemini and ML Kit

The Androidify app uses the Gemini models in a multitude of ways to enrich the app experience, all powered by the Firebase AI Logic SDK. The app uses Gemini 2.5 Flash and Imagen 3 under the hood:

    • Image validation: We ensure that the captured image contains sufficient information, such as a clearly focused person, and assess it for safety. This feature uses the multimodal capabilities of the Gemini API, by giving it a prompt and image at the same time:

// Send the text prompt and the captured image to Gemini in a single request.
val response = generativeModel.generateContent(
   content {
       text(prompt)
       image(image)
   },
)

    • Text prompt validation: If the user opts for text input instead of image, we use Gemini 2.5 Flash to ensure the text contains a sufficiently descriptive prompt to generate a bot.

    • Image captioning: Once we’re sure the image has enough information, we use Gemini 2.5 Flash to perform image captioning. We ask Gemini to be as descriptive as possible, focusing on the clothing and its colors.

    • “Help me write” feature: Similar to an “I’m feeling lucky” type feature, “Help me write” uses Gemini 2.5 Flash to create a random description of the clothing and hairstyle of a bot.

    • Image generation from the generated prompt: As the final step, we pass the generated prompt and the selected skin tone of the bot to Imagen, which generates the image.

The app also uses ML Kit pose detection to detect a person in the viewfinder, enabling the capture button when a person is detected and adding fun indicators around the content to show that detection has occurred. A rough sketch of this gating follows below.
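As a sketch of how that capture-button gating might work (this is an assumption based on the standard ML Kit pose detection API, not the app's exact code; onPersonDetected is a hypothetical callback):

import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

// STREAM_MODE is designed for live viewfinder frames.
val detector = PoseDetection.getClient(
    PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
        .build()
)

fun analyzeFrame(image: InputImage, onPersonDetected: (Boolean) -> Unit) {
    detector.process(image)
        .addOnSuccessListener { pose ->
            // Treat any detected landmarks as "a person is in frame".
            onPersonDetected(pose.allPoseLandmarks.isNotEmpty())
        }
        .addOnFailureListener { onPersonDetected(false) }
}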

Explore more detailed information about AI usage in Androidify.

Jetpack Compose

The user interface of Androidify is built using Jetpack Compose, the modern UI toolkit that simplifies and accelerates UI development on Android.

Delightful details with the UI

The app uses Material 3 Expressive, the latest alpha release that makes your apps more premium, desirable, and engaging. It provides delightful bits of UI out of the box, like new shapes, components, and MotionScheme variables to use wherever a motion spec is needed.

MaterialShapes are used in various locations. These are a preset list of shapes that allow for easy morphing between each other—for example, the cute cookie shape for the camera capture button:


Androidify app UI showing camera button
Camera button with a MaterialShapes.Cookie9Sided shape
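As a minimal sketch of applying a shape like this (assuming the Material 3 Expressive alpha APIs, where MaterialShapes entries are polygons convertible to a Compose Shape via toShape(); onCapture is a hypothetical callback):

IconButton(
    onClick = onCapture, // hypothetical capture callback
    modifier = Modifier
        .size(96.dp)
        .clip(MaterialShapes.Cookie9Sided.toShape())
        .background(MaterialTheme.colorScheme.primary)
) { /* camera icon */ }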

Beyond using the standard Material components, Androidify also features custom composables and delightful transitions tailored to the specific needs of the app:

    • There are plenty of shared element transitions across the app—for example, a morphing shape shared element transition is performed between the “take a photo” button and the camera surface.

      moving example of expressive button shapes in slow motion

    • Custom enter transitions for the ResultsScreen with the usage of marquee modifiers.

      animated marquee example

    • Fun color splash animation as a transition between screens.

      moving image of a blue color splash transition between Androidify demo screens

    • Animating gradient buttons for the AI-powered actions.

      animated gradient button for AI powered actions example

To learn more about the unique details of the UI, read Androidify: Building delightful UIs with Compose.

Adapting to different devices

Androidify is designed to look great and function seamlessly across candy bar phones, foldables, and tablets. The general goal of developing adaptive apps is to avoid reimplementing the same app multiple times on each form factor by extracting out reusable composables, and leveraging APIs like WindowSizeClass to determine what kind of layout to display.

a collage of different adaptive layouts for the Androidify app across small and large screens
Various adaptive layouts in the app

For Androidify, we only needed to leverage the width window size class. Combining this with different layout mechanisms, we were able to reuse or extend the composables to cater to the multitude of different device sizes and capabilities.

    • Responsive layouts: The CreationScreen demonstrates adaptive design. It uses helper functions like isAtLeastMedium() to detect window size categories and adjust its layout accordingly. On larger windows, the image/prompt area and color picker might sit side-by-side in a Row, while on smaller windows, the color picker is accessed via a ModalBottomSheet. This pattern, called "supporting pane", highlights the supporting relationship between the main content and the color picker.

    • Foldable support: The app actively checks for foldable device features. The camera screen uses WindowInfoTracker to get FoldingFeature information to adapt to different features by optimizing the layout for tabletop posture.

    • Rear display: Support for devices with multiple displays is included via the RearCameraUseCase, allowing for the device camera preview to be shown on the external screen when the device is unfolded (so the main content is usually displayed on the internal screen).

Using window size classes, coupled with creating a custom @LargeScreensPreview annotation, helps achieve unique and useful UIs across the spectrum of device sizes and window sizes.
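As a sketch of what those helpers might look like (both are app-level conveniences, so their exact shape here is an assumption; the preview device specs are illustrative):

import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.compose.runtime.Composable
import androidx.compose.ui.tooling.preview.Preview
import androidx.window.core.layout.WindowWidthSizeClass

// True when the current window is at least medium width (600dp+).
@Composable
fun isAtLeastMedium(): Boolean =
    currentWindowAdaptiveInfo().windowSizeClass.windowWidthSizeClass != WindowWidthSizeClass.COMPACT

// Multipreview annotation that renders a composable at common large-screen sizes.
@Preview(device = "spec:width=1280dp,height=800dp,dpi=240") // tablet
@Preview(device = "spec:width=673dp,height=841dp")           // unfolded foldable
annotation class LargeScreensPreview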

CameraX and Media3 Compose

To allow users to base their bots on photos, Androidify integrates CameraX, the Jetpack library that makes camera app development easier.

The app uses a custom CameraLayout composable that supports the layout of the typical composables that a camera preview screen would include—for example, zoom buttons, a capture button, and a flip camera button. This layout adapts to different device sizes and more advanced use cases, like tabletop mode and the rear-camera display. For the actual rendering of the camera preview, it uses the new CameraXViewfinder that is part of the camerax-compose artifact.

CameraLayout in Compose
CameraLayout composable that takes care of different device configurations, such as table top mode

The app also integrates with Media3 APIs to load an instructional video for showing how to get the best bot from a prompt or image. Using the new media3-ui-compose artifact, we can easily add a VideoPlayer into the app:

@Composable
private fun VideoPlayer(modifier: Modifier = Modifier) {
    val context = LocalContext.current
    var player by remember { mutableStateOf<Player?>(null) }
    LifecycleStartEffect(Unit) {
        player = ExoPlayer.Builder(context).build().apply {
            setMediaItem(MediaItem.fromUri(Constants.PROMO_VIDEO))
            repeatMode = Player.REPEAT_MODE_ONE
            prepare()
        }
        onStopOrDispose {
            player?.release()
            player = null
        }
    }
    Box(
        modifier
            .background(MaterialTheme.colorScheme.surfaceContainerLowest),
    ) {
        player?.let { currentPlayer ->
            PlayerSurface(currentPlayer, surfaceType = SURFACE_TYPE_TEXTURE_VIEW)
        }
    }
}

Using the new onLayoutRectChanged modifier, we also listen for whether the composable is completely visible or not, and play or pause the video based on this information:

var videoFullyOnScreen by remember { mutableStateOf(false) }     

LaunchedEffect(videoFullyOnScreen) {
     if (videoFullyOnScreen) currentPlayer.play() else currentPlayer.pause()
} 

// Attach this to the player composable to detect whether the video is fully visible;
// it mutates videoFullyOnScreen, which in turn toggles the player state.
Modifier.onVisibilityChanged(
                containerWidth = LocalView.current.width,
                containerHeight = LocalView.current.height,
) { fullyVisible -> videoFullyOnScreen = fullyVisible }

// A simple version of visibility changed detection
fun Modifier.onVisibilityChanged(
    containerWidth: Int,
    containerHeight: Int,
    onChanged: (visible: Boolean) -> Unit,
) = this then Modifier.onLayoutRectChanged(100, 0) { layoutBounds ->
    onChanged(
        layoutBounds.boundsInRoot.top > 0 &&
            layoutBounds.boundsInRoot.bottom < containerHeight &&
            layoutBounds.boundsInRoot.left > 0 &&
            layoutBounds.boundsInRoot.right < containerWidth,
    )
}

Additionally, using rememberPlayPauseButtonState, we add on a layer on top of the player to offer a play/pause button on the video itself:

val playPauseButtonState = rememberPlayPauseButtonState(currentPlayer)

OutlinedIconButton(
    onClick = playPauseButtonState::onClick,
    enabled = playPauseButtonState.isEnabled,
) {
    val icon =
        if (playPauseButtonState.showPlay) R.drawable.play else R.drawable.pause
    val contentDescription =
        if (playPauseButtonState.showPlay) R.string.play else R.string.pause
    Icon(
        painterResource(icon),
        stringResource(contentDescription),
    )
}

Check out the code for more details on how CameraX and Media3 were used in Androidify.

Navigation 3

Screen transitions are handled using the new Jetpack Navigation 3 library androidx.navigation3. The MainNavigation composable defines the different destinations (Home, Camera, Creation, About) and displays the content associated with each destination using NavDisplay. You get full control over your back stack, and navigating to and from destinations is as simple as adding and removing items from a list.

@Composable
fun MainNavigation() {
   val backStack = rememberMutableStateListOf<NavigationRoute>(Home)
   NavDisplay(
       backStack = backStack,
       onBack = { backStack.removeLastOrNull() },
       entryProvider = entryProvider {
           entry<Home> { entry ->
               HomeScreen(
                   onAboutClicked = {
                       backStack.add(About)
                   },
               )
           }
           entry<Camera> {
               CameraPreviewScreen(
                   onImageCaptured = { uri ->
                       backStack.add(Create(uri.toString()))
                   },
               )
           }
           // etc
       },
   )
}

Notably, Navigation 3 exposes a new composition local, LocalNavAnimatedContentScope, to easily integrate your shared element transitions without needing to keep track of the scope yourself. By default, Navigation 3 also integrates with predictive back, providing delightful back experiences when navigating between screens, as seen in this prior shared element transition:

moving image of a shared element transition with predictive back in Androidify
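As a hedged sketch of how a destination might consume that composition local (this assumes Compose's experimental shared transition APIs with a SharedTransitionLayout wrapping the NavDisplay; the key and drawable are illustrative, not the app's exact code):

@OptIn(ExperimentalSharedTransitionApi::class)
@Composable
fun SharedTransitionScope.BotImage(modifier: Modifier = Modifier) {
    Image(
        painter = painterResource(R.drawable.bot), // hypothetical resource
        contentDescription = null,
        modifier = modifier.sharedElement(
            rememberSharedContentState(key = "bot-image"),
            // Nav3 supplies the AnimatedContentScope for the current entry.
            animatedVisibilityScope = LocalNavAnimatedContentScope.current,
        ),
    )
}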

Learn more about Jetpack Navigation 3, currently in alpha.

Learn more

By combining the declarative power of Jetpack Compose, the camera capabilities of CameraX, the intelligent features of Gemini, and thoughtful adaptive design, Androidify is a personalized avatar creation experience that feels right at home on any Android device. You can find the full code sample at github.com/android/androidify where you can see the app in action and be inspired to build your own AI-powered app experiences.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


Google I/O 2025: Build adaptive Android apps that shine across form factors

Posted by Fahd Imtiaz – Product Manager, Android Developer

If your app isn’t built to adapt, you’re missing out on the opportunity to reach a giant swath of users across 500 million devices! At Google I/O this year, we are exploring how adaptive development isn’t just a good idea, but essential to building apps that shine across the expanding Android device ecosystem. This is your guide to meeting users wherever they are, with experiences that are perfectly tailored to their needs.

The advantage of building adaptive

In today's multi-device world, users expect their favorite applications to work flawlessly and intuitively, whether they're on a smartphone, tablet, or Chromebook. This expectation for seamless experiences isn't just about convenience; it's an important factor for user engagement and retention.

For example, users of entertainment apps (including Prime Video, Netflix, and Hulu) who are on both phone and tablet spend almost 200% more time in-app (nearly 3x engagement) than phone-only users in the US*.

Peacock, NBCUniversal’s streaming service, has seen a trend of users moving between mobile and large screens, and building adaptively enables a single build to work across the different form factors.

“This allows Peacock to have more time to innovate faster and deliver more value to its customers.”
– Diego Valente, Head of Mobile, Peacock and Global Streaming

Adaptive Android development offers the strategic solution, enabling apps to perform effectively across an expanding array of devices and contexts through intelligent design choices that emphasize code reuse and scalability. With Android's continuous growth into new form factors and upcoming enhancements such as desktop windowing and connected displays in Android 16, an app's ability to seamlessly adapt to different screen sizes is becoming increasingly crucial for retaining users and staying competitive.

Beyond direct user benefits, designing adaptively also translates to increased visibility. The Google Play Store actively helps promote developers whose apps excel on different form factors. If your application delivers a great experience on tablets or is excellent on ChromeOS, users on those devices will have an easier time discovering your app. This creates a win-win situation: better quality apps for users and a broader audience for you.

examples of form factors across small phones, tablets, laptops, and auto

Latest in adaptive Android development from Google I/O

To help you more effectively build compelling adaptive experiences, we shared several key updates at I/O this year.

Build for the expanding Android device ecosystem

Your mobile apps can now reach users beyond phones on over 500 million active devices, including foldables, tablets, Chromebooks, and even compatible cars, with minimal changes. Android 16 introduces significant advancements in desktop windowing for a true desktop-like experience on large screens and when devices are connected to external displays. And, Android XR is opening a new dimension, allowing your existing mobile apps to be available in immersive virtual environments.

The mindset shift to Adaptive

With the expanding Android device ecosystem, adaptive app development is a fundamental strategy. It's about how the same mobile app runs well across phones, foldables, tablets, Chromebooks, connected displays, XR, and cars, laying a strong foundation for future devices and differentiating for specific form factors. You don't need to rebuild your app for each form factor; but rather make small, iterative changes, as needed, when needed. Embracing this adaptive mindset today isn't just about keeping pace; it's about leading the charge in delivering exceptional user experiences across the entire Android ecosystem.

examples of form factors including vr headset

Leverage powerful tools and libraries to build adaptive apps:

    • Compose Adaptive Layouts library: This library makes adaptive development easier by allowing your app code to fit into canonical layout patterns like list-detail and supporting pane, that automatically reflow as your app is resized, flipped or folded. In the 1.1 release, we introduced pane expansion, allowing users to resize panes. The Socialite demo app showcased how one codebase using this library can adapt across six form factors. New adaptation strategies like "Levitate" (elevating a pane, e.g., into a dialog or bottom sheet) and "Reflow" (reorganizing panes on the same level) were also announced in 1.2 (alpha). For XR, component overrides can automatically spatialize UI elements.

    • Jetpack Navigation 3 (Alpha): This new navigation library simplifies defining user journeys across screens with less boilerplate code, especially for multi-pane layouts in Compose. It helps handle scenarios where list and detail panes might be separate destinations on smaller screens but shown together on larger ones. Check out the new Jetpack Navigation library in alpha.

    • Jetpack Compose input enhancements: Compose's layered architecture, strong input support, and single location for layout logic simplify creating adaptive UIs. Upcoming in Compose 1.9 are right-click context menus and enhanced trackpad/mouse functionality.

    • Window Size Classes: Use window size classes for top-level layout decisions. androidx.window 1.5 introduces two new width size classes – "large" (1200dp to 1600dp) and "extra-large" (1600dp and larger) – providing more granular breakpoints for large screens. This helps in deciding when to expand navigation rails or show three panes of content (see the sketch after this list). Support for these new breakpoints was also announced in the Compose adaptive layouts library 1.2 alpha, along with design guidance.

    • Compose previews: Get quick feedback by visualizing your layouts across a wide variety of screen sizes and aspect ratios. You can also specify different devices by name to preview your UI on their respective sizes and with their inset values.

    • Testing adaptive layouts: Validating your adaptive layouts is crucial and Android Studio offers various tools for testing – including previews for different sizes and aspect ratios, a resizable emulator to test across different screen sizes with a single AVD, screenshot tests, and instrumental behavior tests. And with Journeys with Gemini in Android Studio, you can define tests using natural language for even more robust testing across different window sizes.
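As a sketch of a top-level layout decision using those breakpoints (this assumes the androidx.window 1.5 breakpoint-check API; the dp values mirror the size classes above, and the pane counts are illustrative):

import androidx.window.core.layout.WindowSizeClass

// Decide how many content panes to show from the current window width.
fun paneCount(sizeClass: WindowSizeClass): Int = when {
    sizeClass.isWidthAtLeastBreakpoint(1600) -> 3 // extra-large
    sizeClass.isWidthAtLeastBreakpoint(1200) -> 3 // large: three panes or an expanded rail
    sizeClass.isWidthAtLeastBreakpoint(840) -> 2  // expanded: two panes, e.g. list-detail
    else -> 1                                     // compact/medium: single pane
}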

Ensuring app availability across devices

Avoid unnecessarily declaring required features (like specific cameras or GPS) in your manifest, as this can prevent your app from appearing in the Play Store on devices that lack those specific hardware components but could otherwise run your app perfectly.

Handling different input methods

Remember to handle various input methods like touch, keyboard, and mouse, especially with Chromebook detachables and connected displays.

Prepare for orientation and resizability API changes in Android 16

Beginning in Android 16, for apps targeting SDK 36, manifest and runtime restrictions on orientation, resizability, and aspect ratio will be ignored on displays that are at least 600dp in both dimensions. To meet user expectations, your apps will need layouts that work for both portrait and landscape windows, and support resizing at runtime. There's a temporary opt-out manifest flag at both the application and activity level to delay these changes until targetSdk 37, and these changes currently do not apply to apps categorized as "Games". Learn more about these API changes.

Adaptive considerations for games

Games need to be adaptive too and Unity 6 will add enhanced support for configuration handling, including APIs for screenshots, aspect ratio, and density. Success stories like Asphalt Legends Unite show significant user retention increases on foldables after implementing adaptive features.


Start building adaptive today

Now is the time to elevate your Android apps, making them intuitively responsive across form factors. With the latest tools and updates we’re introducing, you have the power to build experiences that seamlessly flow across all devices, from foldables to cars and beyond. Implementing these strategies will allow you to expand your reach and delight users across the Android ecosystem.

Get inspired by the “Adaptive Android development makes your app shine across devices” talk, and explore all the resources you’ll need to start your journey at developer.android.com/adaptive-apps!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


*Source: internal Google data

What’s new in Watch Faces

Posted by Garan Jenkin – Developer Relations Engineer

Wear OS has a thriving watch face ecosystem featuring a variety of designs that also aims to minimize battery impact. Developers have embraced the simplicity of creating watch faces using Watch Face Format – in the last year, the number of published watch faces using Watch Face Format has grown by over 180%*.

Today, we’re continuing our investment and announcing version 4 of the Watch Face Format, available as part of Wear OS 6. These updates allow developers to express even greater levels of creativity through the new features we’ve added. And we’re supporting marketplaces, which gives flexibility and control to developers and more choice for users.

In this blog post we'll cover the key new features; check out the documentation for more details on changes introduced in recent versions.

Supporting marketplaces with Watch Face Push

We’re also announcing a completely new API, the Watch Face Push API, aimed at developers who want to create their own watch face marketplaces.

Watch Face Push, available on devices running Wear OS 6 and above, works exclusively with watch faces built using the Watch Face Format.

We’ve partnered with well-known watch face developers – including Facer, TIMEFLIK, WatchMaker, Pujie, and Recreative – in designing this new API. We’re excited that all of these developers will be bringing their unique watch face experiences to Wear OS 6 using Watch Face Push.

Three mobile devices representing watch face marketplace apps for watches running Wear OS 6
From left to right, Facer, Recreative and TIMEFLIK watch faces have been developing marketplace apps to work with watches running Wear OS 6.

Watch faces managed and deployed using Watch Face Push are all written using Watch Face Format. Developers publish these watch faces in the same way as publishing through Google Play, though there are some additional checks the developer must make which are described in the Watch Face Push guidance.

A flow diagram demonstrating the flow of information from Cloud-based storage to the user's phone where the app is installed, then transferred to be installed on a wearable device using the Wear OS App via the Watch Face Push API

The Watch Face Push API covers only the watch part of this typical marketplace system diagram - as the app developer, you have control and responsibility for the phone app and cloud components, as well as for building the Wear OS app using Watch Face Push. You’re also in control of the phone-watch communications, for which we recommend using the Data Layer APIs.

Adding Watch Face Push to your project

To start using Watch Face Push on Wear OS 6, include the following dependency in your Wear OS app:

// Ensure latest version is used by checking the repository
implementation("androidx.wear.watchface:watchface-push:1.3.0-alpha07")

Declare the necessary permission in your AndroidManifest.xml:

<uses-permission android:name="com.google.wear.permission.PUSH_WATCH_FACES" />

Obtain a Watch Face Push client:

val manager = WatchFacePushManagerFactory.createWatchFacePushManager(context)

You’re now ready to start using the Watch Face Push API, for example to list the watch faces you have already installed, or add a new watch face:

// List existing watch faces, installed by this app
val listResponse = manager.listWatchFaces()

// Add a watch face
manager.addWatchFace(watchFaceFileDescriptor, validationToken)

Understanding Watch Face Push

While the basics of the Watch Face Push API are easy to understand and access through the WatchFacePushManager interface, it’s important to consider several other factors when working with the API in practice to build an effective marketplace app.

To learn more about using Watch Face Push, see the guidance and reference documentation.

Updates to Watch Face Format

Photos

Available from Watch Face Format v4

The new Photos element allows the watch face to contain user-selectable photos. The element supports both individual photos and a gallery of photos. For a gallery of photos, developers can choose whether the photos advance automatically or when the user taps the watch face.

a wearable device and small screen mobile device side by side demonstrating how a user may configure photos for the watch face through the Companion app on the mobile device
Configuring photos through the watch Companion app

The user is able to select the photos of their choice through the companion app, making this a great way to include true personalization in your watch face. To use this feature, first add the necessary configuration:

<UserConfigurations>
  <PhotosConfiguration id="myPhoto" configType="SINGLE"/>
</UserConfigurations>

Then use the Photos element within any PartImage, in the same way as you would for an Image element:

<PartImage ...>
  <Photos source="[CONFIGURATION.myPhoto]"
          defaultImageResource="placeholder_photo"/>
</PartImage>

For details on how to support multiple photos, and how to configure the different change behaviors, refer to the Photos section of the guidance and reference, as well as the GitHub samples.

Transitions

Available from Watch Face Format v4

Watch Face Format now supports transitions when exiting and entering ambient mode.

moving image demonstrating an overshoot effect adjusting the time on a watch face to reveal the seconds digit
State transition animation: Example using an overshoot effect in revealing the seconds digits

This is achieved through the existing Variant tag. For example, the hours and minutes in the above watch face are animated as follows:

<DigitalClock ...>
  <Variant mode="AMBIENT" target="x" value="100" interpolation="OVERSHOOT" />

   <!-- Rest of "hh:mm" clock definition here -->
</DigitalClock>

By default, the animation takes the full extent of allowed time for the transition. The new interpolation attribute controls the animation effect - in this case the use of OVERSHOOT adds a playful experience.

The seconds are implemented in a separate DigitalClock element, which shows the use of the new duration attribute:

<DigitalClock ...>
  <Variant mode="AMBIENT" target="alpha" value="0" duration="0.5"/>
   <!-- Rest of "ss" clock definition here -->
</DigitalClock>

The duration attribute takes a value between 0.0 and 1.0, with 1.0 representing the full extent of the allowed time. In this example, by using a value of 0.5, the seconds animation is quicker - taking half the allowed time, in comparison to the hours and minutes, which take the entire transition period.

For more details on using transitions, see the guidance documentation, as well as the reference documentation for Variant.

Color Transforms

Available from Watch Face Format v4

We’ve extended the usefulness of the Transform element by allowing color to be transformed on the majority of elements where it is an attribute, and also allowing tintColor to be transformed on Group and Part* elements such as PartDraw and PartText.

The main exceptions to this addition are the clock elements, DigitalClock and AnalogClock, and also ComplicationSlot, which do not currently support Transform.

In addition to extending the list of transformable attributes to include colors, we’ve also added a handful of useful functions for manipulating color, including extractColorFromColors and extractColorFromWeightedColors.

To see these in action, let’s consider an example.

The Weather data source provides the current UV index through [WEATHER.UV_INDEX]. When representing the UV index, these values are typically also assigned a color:

chart showing the UV index value ranges and the color conventionally assigned to each range

We want to represent this information as an Arc, not only showing the value, but also using the appropriate color. We can achieve this as follows:

<Arc centerX="0" centerY="0" height="420" width="420"
  startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
  <Transform target="endAngle"
    value="165 - 40 * (clamp([WEATHER.UV_INDEX], 0.0, 11.0) / 11.0)" />
  <Stroke thickness="20" color="#ffffff" cap="ROUND">
    <Transform target="color"
      value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
  </Stroke>
</Arc>

Let’s break this down:

    • The first Transform restricts the UV index to the range 0.0 to 11.0 and adjusts the sweep of the Arc according to that value.
    • The second Transform uses the new extractColorFromWeightedColors function.
        • The first argument is our list of colors
        • The second argument is a list of weights - you can see from the chart above that green covers 3 values, whereas orange only covers 2, so we use weights to represent this.
        • The third argument is whether or not to interpolate the color values. In this case we want to stick strictly to the color convention for UV index, so this is false.
        • Finally in the fourth argument we coerce the UV value into the range 0.0 to 1.0, which is used as an index into our weighted colors.

The result looks like this:

side by side quadrants of watch face examples showing using the new color functions in applying color transforms to a Stroke in an Arc
Using the new color functions in applying color transforms to a Stroke in an Arc.

As well as being able to provide raw colors and weights to these functions, they can also be used with values from complications, such as HR, temperature or steps goal. For example, to use the color range specified in a goal complication:

<Transform target="color"
    value="extractColorFromColors(
        [COMPLICATION.GOAL_PROGRESS_COLORS],
        [COMPLICATION.GOAL_PROGRESS_COLOR_INTERPOLATE],
        [COMPLICATION.GOAL_PROGRESS_VALUE] /    
            [COMPLICATION.GOAL_PROGRESS_TARGET_VALUE]
)"/>

Introducing the Reference element

Available from Watch Face Format v4

The new Reference element allows you to refer to any transformable attribute from one part of your watch face scene in other parts of the scene tree.

In our UV index example above, we’d also like the text labels to use the same color scheme.

We could perform the same color transform calculation as on our Arc, using [WEATHER.UV_INDEX], but this is duplicative work which could lead to inconsistencies, for example if we change the exact color hues in one place but not the other.

Returning to the Arc definition, let’s create a Reference to the color:

<Arc centerX="0" centerY="0" height="420" width="420"
  startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
  <Transform target="endAngle"
    value="165 - 40 * (clamp([WEATHER.UV_INDEX], 0.0, 11.0) / 11.0)" />
  <Stroke thickness="20" color="#ffffff" cap="ROUND">
    <Reference source="color" name="uv_color" defaultValue="#ffffff" />
    <Transform target="color"
      value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
  </Stroke>
</Arc>

The color of the Arc is calculated from the relatively complex extractColorFromWeightedColors function. To avoid repeating this elsewhere in our watch face, we have added a Reference element, which takes as its source the Stroke color.

Let’s now look at how we can consume this value in a PartText elsewhere in the watch face. We gave the Reference the name uv_color, so we can simply refer to this in any expression:

<PartText x="0" y="225" width="450" height="225">
  <TextCircular centerX="225" centerY="0" width="420" height="420"
    startAngle="120" endAngle="90"
    align="START" direction="COUNTER_CLOCKWISE">
    <Font family="SYNC_TO_DEVICE" size="24">
      <Transform target="color" value="[REFERENCE.uv_color]" />
      <Template>%d<Parameter expression="[WEATHER.UV_INDEX]" /></Template>
    </Font>
  </TextCircular>
</PartText>
<!-- Similar PartText here for the "UV:" label -->

As a result, the color of the Arc and the UV numeric value are now coordinated:

side by side quadrants of watch face examples showing Coordinating colors across elements using the Reference element
Coordinating colors across elements using the Reference element

For more details on how to use the Reference element, refer to the Reference guidance.

Text autosizing

Available from Watch Face Format v3

Sometimes the exact length of the text to be shown on the watch face can vary, and as a developer you want the displayed text to be both legible and complete.

Auto-sizing text can help solve this problem, and can be enabled through the isAutoSize attribute introduced to the Text element:

<Text align="CENTER" isAutoSize="true">

Having set this attribute, text will then automatically fit the available space, starting at the maximum size specified in your Font element, and with a minimum size of 12.

As an example, step count could range from tens or hundreds through to many thousands, and the new isAutoSize attribute enables best use of the available space for every possible value:

side by side examples of text sizing adjustments on watch face using isAutosize
Making the best use of the available text space through isAutoSize

For more details on isAutoSize, see the Text reference.

Android Studio support

For developers working in Android Studio, we’ve added support to make working with Watch Face Format easier, including:

    • Run configuration support
    • Auto-complete and resource reference
    • Lint checking

This is available from Android Studio version 2025.1.1 Canary 10.

Learn More

To learn more about building watch faces, please take a look at the Watch Face Format guidance and reference documentation.

We’ve also recently launched a codelab for Watch Face Format and have updated samples on GitHub to showcase new features. The issue tracker is available for providing feedback.

We're excited to see the watch face experiences that you create and share!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


* Google Play data for the period 2024-03-24 to 2025-03-23

The Android Show: I/O Edition – what Android devs need to know!

Posted by Matthew McCullough – Vice President, Product Management, Android Developer

We just dropped an I/O Edition of The Android Show, where we unpacked exciting new experiences coming to the Android ecosystem: a fresh and dynamic look and feel, smarts across your devices, and enhanced safety and security features. Join Sameer Samat, President of Android Ecosystem, and the Android team to learn about exciting new developments in the episode below, and read about all of the updates for users.

Tune into Google I/O next week – including the Developer Keynote as well as the full Android track of sessions – where we’re covering these topics in more detail and how you can get started.


Start building with Material 3 Expressive

The world of UX design is constantly evolving, and you deserve the tools to create truly engaging and impactful experiences. That’s why Material Design’s latest evolution, Material 3 Expressive, provides new ways to make your product more engaging, easy to use, and desirable. Learn more and try out Material 3 Expressive: an expansion pack designed to enhance your app’s appeal by harnessing emotional UX. It comes with new components, a motion-physics system, type styles, colors, shapes, and more.

Material 3 Expressive will be coming to Android 16 later this year; check out the Google I/O talk next week where we’ll dive into this in more detail.

A fluid design built for your watch's round display

Wear OS 6, arriving later this year, brings Material 3 Expressive design to Google’s smartwatch platform. The new design language puts the round watch display at the heart of the experience, and is embraced in every component and motion of the system, from buttons to notifications. You'll be able to try the new visual design and upgrade existing app experiences to a new level. Next week, tune in to the What’s New in Android session to learn more.

Plus some goodies in Android 16...

We also unpacked some of the latest features coming to users in Android 16, which we’ve been previewing with you for the last few months. If you haven’t already, you can try out the latest Beta of Android 16.

A few new features coming in Android 16 that developers should pay attention to include Live updates, professional media and camera features, desktop windowing for tablets, major accessibility enhancements, and much more.

Watch the What’s New in Android session and the Live updates talk to learn more.

Tune in next week to Google I/O

This was just a preview of some Android-related news, so remember to tune in next week to Google I/O, where we’ll be diving into a range of Android developer topics in a lot more detail. You can check out What’s New in Android and the full Android track of sessions to start planning your time.

We can’t wait to see you next week, whether you’re joining in person or virtually from anywhere around the world!

#WeArePlay: How My Lovely Planet is making environmental preservation fun through games

Posted by Robbie McLachlan – Developer Marketing

In our latest #WeArePlay film, which celebrates the people behind apps and games on Google Play, we meet Clément, the founder of Imagine Games. His game, My Lovely Planet, turns casual mobile gaming into tangible environmental action, planting real trees and supporting reforestation projects worldwide. Discover the inspiration behind My Lovely Planet and the impact it’s had so far.


What inspired you to combine gaming with positive environmental impact?

I’ve always loved gaming and believed in technology’s potential to tackle environmental challenges. But it was my time working with an NGO in Madagascar, where I witnessed firsthand the devastating effects of environmental changes, that truly sparked my mission. Combining gaming and sustainability just made sense. Billions of people play games, so why not harness that entertainment to create real-world impact? So far, the results speak for themselves: we've built an engaged global community committed to protecting the environment.

Imagine Games team, Clément, from France

How do players in My Lovely Planet make real-world differences through the game?

With My Lovely Planet, planting a tree in the game means planting a real tree in the world. Our community has already planted over 360,000 trees through partnerships with NGOs like Graines de Vie in Madagascar, Kenya, and France. We've also supported ocean-cleaning, bee-protection, and drone reforestation projects.

Balancing fun with impact was key. Players wouldn’t stay just for the mission, so we focused on creating a genuinely fun match-3 style game. Once gameplay was strong, we made real-world actions like tree planting core rewards in the game, helping players feel naturally connected to their impact. Our goal is to keep growing this model to protect biodiversity and fight climate change.

Can you tell us about your drone-led reforestation project in France?

Our latest initiative involves using drones to reforest areas severely impacted by insect infestations and other environmental issues. We're dropping over one million specially-coated seeds by drone, which is a completely new and efficient way of reforesting large areas. It’s exciting because if this pilot succeeds, it could be replicated worldwide, significantly boosting global reforestation efforts.

a drone in mid air dropping seeds in a forested area

How has Google Play helped your journey?

Google Play has been crucial for My Lovely Planet – it's our main distribution channel, with about 70% of our players coming through the platform. It makes it incredibly easy and convenient for anyone to download and start playing immediately, which is essential for engaging a global community. Plus, from a developer's standpoint, the flexibility, responsiveness, and powerful testing tools Google Play provides have made launching and scaling our game faster and smoother, allowing us to focus even more on our environmental impact.

a close up of a user playing the My Lovely Planet game on their mobile device while sitting in the front seat of a vehicle

What is next for My Lovely Planet?

Right now, we're focused on expanding the game experience by adding more engaging levels, and introducing exciting new features like integrating our eco-friendly cryptocurrency, My Lovely Coin, into gameplay. Following the success of our first drone-led reforestation project in France, our next step is tracking its impact and expanding this approach to other regions. Ultimately, we aim to build the world's largest gaming community dedicated to protecting the environment, empowering millions to make a difference while enjoying the game.


Discover other inspiring app and game founders featured in #WeArePlay.

Prepare your apps for Google Play’s 16 KB page size compatibility requirement

Posted by Dan Brown – Product Manager, Google Play

Google Play empowers you to manage and distribute your innovative and trusted apps and games to billions of users around the world, across the entire breadth of Android devices. Historically, all Android devices have managed memory in 4 KB pages.

As device manufacturers equip devices with more RAM to optimize performance, many will adopt larger page sizes like 16 KB. Android 15 introduces support for the increased page size, ensuring your app can run on these evolving devices and benefit from the associated performance gains.

Starting November 1st, 2025, all new apps and updates to existing apps submitted to Google Play and targeting Android 15+ devices must support 16 KB page sizes.

This is a key technical requirement to ensure your users can benefit from the performance enhancements on newer devices and prepares your apps for the platform's future direction of improved performance on newer hardware. Without recompiling to support 16 KB pages, your app might not function correctly on these devices when they become more widely available in future Android releases.

We’ve seen that 16 KB can help with:

    • Faster app launches: See improvements ranging from 3% to 30% for various apps.
    • Improved battery usage: Experience an average gain of 4.5%.
    • Quicker camera starts: Launch the camera 4.5% to 6.6% faster.
    • Speedier system boot-ups: Boot Android devices approximately 8% faster.

We recommend checking your apps early, especially for dependencies that might not yet be 16 KB compatible. Many popular SDK providers, like React Native and Flutter, already offer compatible versions. For game developers, several leading game engines, such as Unity, support 16 KB, with support for Unreal Engine coming soon.

Reaching 16 KB compatibility

A substantial number of apps are already compatible, so your app may already work seamlessly with this requirement. For most of those that need to make adjustments, we expect the changes to be minimal.

    • Apps with no native code should be compatible without any changes at all.
    • Apps using libraries or SDKs that contain native code may need to update these to a compatible version.
    • Apps with native code may need to recompile with a more recent toolchain and check for any code with incompatible low-level memory management (one possible build tweak is sketched below).
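As a sketch of one such toolchain adjustment (Gradle Kotlin DSL, assuming NDK r27+ and a CMake-based native build; verify the flag against the 16 KB guidance for your toolchain version):

android {
    defaultConfig {
        externalNativeBuild {
            cmake {
                // Ask the NDK's CMake toolchain to produce native libraries that
                // are compatible with both 4 KB and 16 KB page sizes.
                arguments += "-DANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES=ON"
            }
        }
    }
}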

Our December blog post, Get your apps ready for 16 KB page size devices, provides a more detailed technical explanation and guidance on how to prepare your apps.

Check your app's compatibility now

It's easy to see if your app bundle already supports 16 KB memory page sizes. Visit the app bundle explorer page in Play Console to check your app's build compliance and get guidance on where your app may need updating.

App bundle explorer in Play Console

Beyond the app bundle explorer, make sure to also test your app in a 16 KB environment. This will help you ensure users don’t experience any issues and that your app delivers its best performance.

For more information, check out the full documentation.

Thank you for your continued support in bringing delightful, fast, and high-performance experiences to users across the breadth of devices Play supports. We look forward to seeing the enhanced experiences you'll deliver with 16 KB support.

Building delightful Android camera and media experiences

Posted by Donovan McMurray, Mayuri Khinvasara Khabya, Mozart Louis, and Nevin Mital – Developer Relations Engineers

Hello Android Developers!

We are the Android Developer Relations Camera & Media team, and we’re excited to bring you something a little different today. Over the past several months, we’ve been hard at work writing sample code and building demos that showcase how to take advantage of all the great potential Android offers for building delightful user experiences.

Some of these efforts are available for you to explore now, and some you’ll see later throughout the year, but for this blog post we thought we’d share some of the learnings we gathered while going through this exercise.

Grab your favorite Android plush or rubber duck, and read on to see what we’ve been up to!

Future-proof your app with Jetpack

Nevin Mital

One of our focuses for the past several years has been improving the developer tools available for video editing on Android. This led to the creation of the Jetpack Media3 Transformer APIs, which offer solutions for both single-asset and multi-asset video editing preview and export. Today, I’d like to focus on the Composition demo app, a sample app that showcases some of the multi-asset editing experiences that Transformer enables.

I started by adding a custom video compositor to demonstrate how you can arrange input video sequences into different layouts for your final composition, such as a 2x2 grid or a picture-in-picture overlay. You can customize this by implementing a VideoCompositorSettings and overriding the getOverlaySettings method. This object can then be set when building your Composition with setVideoCompositorSettings.

Here is an example for the 2x2 grid layout:

object : VideoCompositorSettings {
  ...

  override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
    return when (inputId) {
      0 -> { // First sequence is placed in the top left
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(-0.5f, 0.5f) // Top-left section of background
          .build()
      }

      1 -> { // Second sequence is placed in the top right
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(0.5f, 0.5f) // Top-right section of background
          .build()
      }

      2 -> { // Third sequence is placed in the bottom left
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(-0.5f, -0.5f) // Bottom-left section of background
          .build()
      }

      3 -> { // Fourth sequence is placed in the bottom right
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(0.5f, -0.5f) // Bottom-right section of background
          .build()
      }

      else -> {
        StaticOverlaySettings.Builder().build()
      }
    }
  }
}

Since getOverlaySettings also provides a presentation time, we can even animate the layout, such as in this picture-in-picture example:

moving image of picture in picture on a mobile device
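Since getOverlaySettings receives the presentation time, here is a sketch of one way such an animation could be written (the two-second slide and anchor values are illustrative, not the demo app's exact values):

object : VideoCompositorSettings {
  ...

  override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
    return if (inputId == 1) { // Second sequence is the picture-in-picture overlay
      // Progress from 0 to 1 over the first two seconds, then hold.
      val progress = (presentationTimeUs / 2_000_000f).coerceIn(0f, 1f)
      StaticOverlaySettings.Builder()
        .setScale(0.35f, 0.35f)
        // Slide the overlay along the bottom edge as playback progresses.
        .setBackgroundFrameAnchor(-0.6f + 1.2f * progress, -0.6f)
        .build()
    } else {
      StaticOverlaySettings.Builder().build() // Background fills the frame
    }
  }
}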

Next, I spent some time migrating the Composition demo app to use Jetpack Compose. With complicated editing flows, it can help to take advantage of as much screen space as is available, so I decided to use the supporting pane adaptive layout. This way, the user can fine-tune their video creation on the preview screen, and export options are only shown at the same time on a larger display. Below, you can see how the UI dynamically adapts to the screen size on a foldable device, when switching from the outer screen to the inner screen and vice versa.


moving image of supporting pane adaptive layout

What’s great is that by using Jetpack Media3 and Jetpack Compose, these features also carry over seamlessly to other devices and form factors, such as the new Android XR platform. Right out-of-the-box, I was able to run the demo app in Home Space with the 2D UI I already had. And with some small updates, I was even able to adapt the UI specifically for XR with features such as multiple panels, and to take further advantage of the extra space, an Orbiter with playback controls for the editing preview.

moving image of sequential composition preview in Android XR

Orbiter(
  position = OrbiterEdge.Bottom,
  offset = EdgeOffset.inner(offset = MaterialTheme.spacing.standard),
  alignment = Alignment.CenterHorizontally,
  shape = SpatialRoundedCornerShape(CornerSize(28.dp))
) {
  Row(horizontalArrangement = Arrangement.spacedBy(MaterialTheme.spacing.mini)) {
    // Playback control for rewinding by 10 seconds
    FilledTonalIconButton({ viewModel.seekBack(10_000L) }) {
      Icon(
        painter = painterResource(id = R.drawable.rewind_10),
        contentDescription = "Rewind by 10 seconds"
      )
    }
    // Playback control for play/pause
    FilledTonalIconButton({ viewModel.togglePlay() }) {
      Icon(
        painter = painterResource(id = R.drawable.rounded_play_pause_24),
        contentDescription =
          if (viewModel.compositionPlayer.isPlaying) {
            "Pause preview playback"
          } else {
            "Resume preview playback"
          }
      )
    }
    // Playback control for forwarding by 10 seconds
    FilledTonalIconButton({ viewModel.seekForward(10_000L) }) {
      Icon(
        painter = painterResource(id = R.drawable.forward_10),
        contentDescription = "Forward by 10 seconds"
      )
    }
  }
}

Jetpack libraries unlock premium functionality incrementally

Donovan McMurray

Not only do our Jetpack libraries have you covered by working consistently across existing and future devices, but they also open the doors to advanced functionality and custom behaviors to support all types of app experiences. In a nutshell, our Jetpack libraries aim to make the common case very accessible and easy, and they provide hooks for adding more custom features later.

We’ve worked with many apps that have switched to a Jetpack library, built the basics, added their critical custom features, and actually saved developer time over their estimates. Let’s take a look at CameraX and how this incremental development can supercharge your process.

// Set up CameraX app with preview and image capture.
// Note: setting the resolution selector is optional, and if not set,
// then a default 4:3 ratio will be used.
val aspectRatioStrategy = AspectRatioStrategy(
  AspectRatio.RATIO_16_9, AspectRatioStrategy.FALLBACK_RULE_NONE)
val resolutionSelector = ResolutionSelector.Builder()
  .setAspectRatioStrategy(aspectRatioStrategy)
  .build()

private val previewUseCase = Preview.Builder()
  .setResolutionSelector(resolutionSelector)
  .build()
private val imageCaptureUseCase = ImageCapture.Builder()
  .setResolutionSelector(resolutionSelector)
  .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
  .build()

val useCaseGroupBuilder = UseCaseGroup.Builder()
  .addUseCase(previewUseCase)
  .addUseCase(imageCaptureUseCase)

cameraProvider.unbindAll()

camera = cameraProvider.bindToLifecycle(
  this,  // lifecycleOwner
  CameraSelector.DEFAULT_BACK_CAMERA,
  useCaseGroupBuilder.build(),
)

After setting up the basic structure for CameraX, you can build a simple UI with a camera preview and a shutter button. You can use the CameraX Viewfinder composable, which displays a Preview stream from a CameraX SurfaceRequest.

// Create preview
Box(
  Modifier
    .background(Color.Black)
    .fillMaxSize(),
  contentAlignment = Alignment.Center,
) {
  surfaceRequest?.let { request ->
    CameraXViewfinder(
      modifier = Modifier.fillMaxSize(),
      implementationMode = ImplementationMode.EXTERNAL,
      surfaceRequest = request,
    )
  }
  Button(
    onClick = onPhotoCapture,
    shape = CircleShape,
    colors = ButtonDefaults.buttonColors(containerColor = Color.White),
    modifier = Modifier
      .height(75.dp)
      .width(75.dp),
  ) {
    // Empty content: the circular white button itself is the shutter control.
  }
}

fun onPhotoCapture() {
  // Not shown: defining the ImageCapture.OutputFileOptions for
  // your saved images
  imageCaptureUseCase.takePicture(
    outputOptions,
    ContextCompat.getMainExecutor(context),
    object : ImageCapture.OnImageSavedCallback {
      override fun onError(exc: ImageCaptureException) {
        val msg = "Photo capture failed."
        Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
      }

      override fun onImageSaved(output: ImageCapture.OutputFileResults) {
        val savedUri = output.savedUri
        if (savedUri != null) {
          // Do something with the savedUri if needed
        } else {
          val msg = "Photo capture failed."
          Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
        }
      }
    },
  )
}

You’re already on track for a solid camera experience, but what if you want to add some extra features for your users? Adding filters and effects is easy with CameraX’s Media3 effect integration, which is one of the new features introduced in CameraX 1.4.0.

Here’s how simple it is to add a black and white filter from Media3’s built-in effects.

val media3Effect = Media3Effect(
  application,
  PREVIEW or IMAGE_CAPTURE,
  ContextCompat.getMainExecutor(application),
  {},
)
media3Effect.setEffects(listOf(RgbFilter.createGrayscaleFilter()))
useCaseGroupBuilder.addEffect(media3Effect)

The Media3Effect constructor takes a Context, a bitwise combination of the use case constants for the targeted UseCases, an Executor, and an error listener. Then you set the list of effects you want to apply. Finally, you add the effect to the useCaseGroupBuilder we defined earlier.

side-by-side images of the camera app preview
(Left) Our camera app with no filter applied. 
 (Right) Our camera app after the createGrayscaleFilter was added.

There are many other built-in effects you can add, too! See the Media3 Effect documentation for more options, like brightness, color lookup tables (LUTs), contrast, blur, and many other effects.
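
As a quick illustration, chaining several built-in effects is just a matter of passing a longer list to setEffects (a hedged sketch; the parameter values here are arbitrary):

// Sketch: apply several built-in Media3 effects at once.
media3Effect.setEffects(
  listOf(
    Brightness(0.1f),  // Slightly brighten the image
    Contrast(0.2f),    // Boost the contrast
    RgbFilter.createGrayscaleFilter()
  )
)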

To take your effects to yet another level, it’s also possible to define your own effects by implementing the GlEffect interface, which acts as a factory of GlShaderPrograms. You can extend BaseGlShaderProgram and override its drawFrame() method to create a custom effect of your own. A minimal implementation should tell your graphics library to use its shader program, bind the shader program's vertex attributes and uniforms, and issue a drawing command.
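
Here is a minimal sketch of that structure, with illustrative class names; the GLSL sources (VERTEX_SHADER and FRAGMENT_SHADER) and buffer setup are omitted, so treat this as a skeleton rather than a complete effect:

class TintEffect : GlEffect {
  override fun toGlShaderProgram(context: Context, useHdr: Boolean): GlShaderProgram =
    TintShaderProgram()
}

class TintShaderProgram : BaseGlShaderProgram(
  /* useHighPrecisionColorComponents= */ false,
  /* texturePoolCapacity= */ 1
) {
  private val glProgram = GlProgram(VERTEX_SHADER, FRAGMENT_SHADER)

  override fun configure(inputWidth: Int, inputHeight: Int): Size =
    Size(inputWidth, inputHeight) // Output frames keep the input size

  override fun drawFrame(inputTexId: Int, presentationTimeUs: Long) {
    glProgram.use()                       // Tell the graphics library to use our shader program
    glProgram.setSamplerTexIdUniform("uTexSampler", inputTexId, /* texUnitIndex= */ 0)
    glProgram.bindAttributesAndUniforms() // Bind the vertex attributes and uniforms
    // Issue the drawing command for a full-frame quad
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, /* first= */ 0, /* count= */ 4)
  }
}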

Jetpack libraries meet you where you are and your app’s needs. Whether that be a simple, fast-to-implement, and reliable implementation, or custom functionality that helps the critical user journeys in your app stand out from the rest, Jetpack has you covered!

Jetpack offers a foundation for innovative AI Features

Mayuri Khinvasara Khabya

Just as Donovan demonstrated with CameraX for capture, Jetpack Media3 provides a reliable, customizable, and feature-rich solution for playback with ExoPlayer. The AI Samples app builds on this foundation to delight users with helpful and enriching AI-driven additions.

In today's rapidly evolving digital landscape, users expect more from their media applications. Simply playing videos is no longer enough. Developers are constantly seeking ways to enhance user experiences and provide deeper engagement. Leveraging the power of Artificial Intelligence (AI), particularly when built upon robust media frameworks like Media3, offers exciting opportunities. Let’s take a look at some of the ways we can transform the way users interact with video content:

    • Empowering Video Understanding: The core idea is to use AI, specifically multimodal models like the Gemini Flash and Pro models, to analyze video content and extract meaningful information. This goes beyond simply playing a video; it's about understanding what's in the video and making that information readily accessible to the user.
    • Actionable Insights: The goal is to transform raw video into summaries, insights, and interactive experiences. This allows users to quickly grasp the content of a video and find specific information they need or learn something new!
    • Accessibility and Engagement: AI helps make videos more accessible by providing features like summaries, translations, and descriptions. It also aims to increase user engagement through interactive features.

A Glimpse into AI-Powered Video Journeys

The following example demonstrates potential video journeys enhanced by artificial intelligence. This sample integrates several components, such as ExoPlayer and Transformer from Media3; the Firebase SDK (leveraging Vertex AI on Android); and Jetpack Compose, ViewModel, and StateFlow. The code will be available soon on GitHub.

moving images of examples of AI-powered video journeys
(Left) Video summarization  
 (Right) Thumbnail timestamps and HDR frame extraction

There are two experiences in particular that I’d like to highlight:

    • HDR Thumbnails: AI can help identify key moments in the video that could make for good thumbnails. With those timestamps, you can use the new ExperimentalFrameExtractor API from Media3 to extract HDR thumbnails from videos, providing richer visual previews (see the sketch after this list).
    • Text-to-Speech: AI can be used to convert textual information derived from the video into spoken audio, enhancing accessibility. On Android you can also choose to play audio in different languages and dialects thus enhancing personalization for a wider audience.
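
As a rough sketch of the extraction step (ExperimentalFrameExtractor is, as the name says, experimental and may change; timestampMs and thumbnailState are hypothetical placeholders):

// Sketch: extract an HDR frame at an AI-suggested timestamp.
val frameExtractor = ExperimentalFrameExtractor(
  context,
  ExperimentalFrameExtractor.Configuration.Builder()
    .setExtractHdrFrames(true) // Request HDR bitmaps where supported
    .build()
)
frameExtractor.setMediaItem(mediaItem, /* effects= */ listOf())

val frameFuture = frameExtractor.getFrame(timestampMs)
frameFuture.addListener(
  {
    // A Frame carries the extracted bitmap and its exact presentation time.
    val frame = frameFuture.get()
    thumbnailState.value = frame.bitmap
  },
  ContextCompat.getMainExecutor(context)
)
// Call frameExtractor.release() once you are done extracting frames.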

Using the right AI solution

Currently, only cloud models support video inputs, so we went ahead with a cloud-based solution. Integrating Firebase in our sample empowers the app to:

    • Generate real-time, concise video summaries automatically.
    • Produce comprehensive content metadata, including chapter markers and relevant hashtags.
    • Facilitate seamless multilingual content translation.

So how do you actually interact with a video and work with Gemini to process it? First, send your video as an input parameter to your prompt:

val prompt =
    "Summarize this video in the form of top 3-4 takeaways only. Write in the form of bullet points. Don't assume if you don't know"

val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
_outputText.value = OutputTextState.Loading

viewModelScope.launch(Dispatchers.IO) {
    try {
        val requestContent = content {
            fileData(videoSource.toString(), "video/mp4")
            text(prompt)
        }
        val outputStringBuilder = StringBuilder()

        generativeModel.generateContentStream(requestContent).collect { response ->
            outputStringBuilder.append(response.text)
            _outputText.value = OutputTextState.Success(outputStringBuilder.toString())
        }
    } catch (error: Exception) {
        _outputText.value = OutputTextState.Error(error.localizedMessage ?: "Unknown error")
    }
}

Notice there are two key components here:

    • FileData: This component integrates a video into the query.
    • Prompt: This tells the model what specific assistance you need in relation to the provided video.

Of course, you can fine-tune your prompt to fit your requirements and get responses accordingly.

In conclusion, by harnessing the capabilities of Jetpack Media3 and integrating AI solutions like Gemini through Firebase, you can significantly elevate video experiences on Android. This combination enables advanced features like video summaries, enriched metadata, and seamless multilingual translations, ultimately enhancing accessibility and engagement for users. As these technologies continue to evolve, the potential for creating even more dynamic and intelligent video applications is vast.

Go above-and-beyond with specialized APIs

Mozart Louis

Android 16 introduces the new audio PCM Offload mode, which can reduce the power consumption of audio playback in your app, leading to longer playback time and increased user engagement. Eliminating this power anxiety greatly enhances the user experience.

Oboe is Android’s premier audio API for building high-performance, low-latency audio apps. A new feature called Native PCM Offload playback is being added to the Android NDK and Android 16.

Offload playback helps save battery life when playing audio. It works by sending a large chunk of audio to a special part of the device's hardware (a DSP). This allows the CPU of the device to go into a low-power state while the DSP handles playing the sound. This works with uncompressed audio (like PCM) and compressed audio (like MP3 or AAC), where the DSP also takes care of decoding.

This can result in significant power savings while playing back audio and is perfect for applications that play audio in the background or while the screen is off (think audiobooks, podcasts, music, etc.).

We created the sample app PowerPlay to demonstrate how to implement these features using the latest NDK version, C++ and Jetpack Compose.

Here are the most important parts!

The first order of business is to ensure the device supports audio offload for the stream attributes you need. In the example below, we check whether the device supports audio offload of a stereo, float PCM stream with a sample rate of 48,000 Hz.

val format = AudioFormat.Builder()
    .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)
    .setSampleRate(48000)
    .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
    .build()

val attributes = AudioAttributes.Builder()
    .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
    .setUsage(AudioAttributes.USAGE_MEDIA)
    .build()

val isOffloadSupported =
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
        AudioManager.isOffloadedPlaybackSupported(format, attributes)
    } else {
        false
    }

if (isOffloadSupported) {
    player.initializeAudio(PerformanceMode::POWER_SAVING_OFFLOADED)
}

Once we know the device supports audio offload, we can confidently set the Oboe audio stream’s performance mode to the new option, PerformanceMode::POWER_SAVING_OFFLOADED.

// Create an audio stream
AudioStreamBuilder builder;
builder.setChannelCount(mChannelCount);
builder.setDataCallback(mDataCallback);
builder.setFormat(AudioFormat::Float);
builder.setSampleRate(48000);

builder.setErrorCallback(mErrorCallback);
builder.setPresentationCallback(mPresentationCallback);
builder.setPerformanceMode(PerformanceMode::POWER_SAVING_OFFLOADED);
builder.setFramesPerDataCallback(128);
builder.setSharingMode(SharingMode::Exclusive);
builder.setSampleRateConversionQuality(SampleRateConversionQuality::Medium);
Result result = builder.openStream(mAudioStream);

Now, when audio is played back, it will be offloaded to the DSP, helping save power during playback.

There is more to this feature, which will be covered in a future blog post fully detailing all of the newly available APIs that will help you optimize your audio playback experience!

What’s next

Of course, we were only able to share the tip of the iceberg with you here, so to dive deeper into the samples, check out the following links:

Hopefully these examples have inspired you to explore what new and fascinating experiences you can build on Android. Tune in to our session at Google I/O in a couple of weeks to learn even more about use cases supported by solutions like Jetpack CameraX and Jetpack Media3!

Zoho Achieves 6x Faster Logins with Passkey and Credential Manager Integration

Posted by Niharika Arora – Senior Developer Relations Engineer, Joseph Lewis – Staff Technical Writer, and Kumareshwaran Sreedharan – Product Manager, Zoho.

As an Android developer, you're constantly looking for ways to enhance security, improve user experience, and streamline development. Zoho, a comprehensive cloud-based software suite focused on security and seamless experiences, achieved significant improvements by adopting passkeys in their OneAuth Android app.

Since integrating passkeys in 2024, Zoho achieved login speeds up to 6x faster than previous methods and a 31% month-over-month (MoM) growth in passkey adoption.

This case study examines Zoho's adoption of passkeys and Android's Credential Manager API to address authentication difficulties. It details the technical implementation process and highlights the impactful results.

Overcoming authentication challenges

Zoho utilizes a combination of authentication methods to protect user accounts. These include Zoho OneAuth, their own multi-factor authentication (MFA) solution, which supports both password-based and passwordless authentication using push notifications, QR codes, and time-based one-time passwords (TOTP). Zoho also supports federated logins, allowing authentication through Security Assertion Markup Language (SAML) and other third-party identity providers.

Challenges

Zoho, like many organizations, aimed to improve authentication security and user experience while reducing operational burdens. The primary challenges that led to the adoption of passkeys included:

    • Security vulnerabilities: Traditional password-based methods left users susceptible to phishing attacks and password breaches.
    • User friction: Password fatigue led to forgotten passwords, frustration, and increased reliance on cumbersome recovery processes.
    • Operational inefficiencies: Handling password resets and MFA issues generated significant support overhead.
    • Scalability concerns: A growing user base demanded a more secure and efficient authentication solution.

Why the shift to passkeys?

Passkeys were implemented in Zoho's apps to address authentication challenges by offering a passwordless approach that significantly improves security and user experience. This solution leverages phishing-resistant authentication, cloud-synchronized credentials for effortless cross-device access, and biometrics (such as a fingerprint or facial recognition), PIN, or pattern for secure logins, thereby reducing the vulnerabilities and inconveniences associated with traditional passwords.

By adopting passkeys with Credential Manager, Zoho cut login times by up to 6x, slashed password-related support costs, and saw strong user adoption – doubling passkey sign-ins in 4 months with 31% MoM growth. Zoho users now enjoy faster, easier logins and phishing-resistant security.

Quote card reads 'Cloud Lion now enjoys logins that are 30% faster and more secure using passkeys – allowing us to use our thumb instead of a password. With passkeys, we can also protect our critical business data against phishing and brute force attacks.' – Fabrice Venegas, Founder, Cloud Lion (a Zoho integration partner)

Implementation with Credential Manager on Android

So, how did Zoho achieve these results? They used Android's Credential Manager API, the recommended Jetpack library for implementing authentication on Android.

Credential Manager provides a unified API that simplifies handling of the various authentication methods. Instead of juggling different APIs for passwords, passkeys, and federated logins (like Sign in with Google), you use a single interface.
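
As a minimal sketch of what that single interface looks like (not Zoho's production code: serverRequestJson, verifyAssertionOnServer, and signInWithPassword are hypothetical, and getCredential is a suspend function that must be called from a coroutine):

val credentialManager = CredentialManager.create(context)

// One request covers passwords and passkeys alike.
val request = GetCredentialRequest(
    credentialOptions = listOf(
        GetPasswordOption(),
        GetPublicKeyCredentialOption(requestJson = serverRequestJson) // WebAuthn JSON from your server
    )
)

val result = credentialManager.getCredential(activityContext, request)
when (val credential = result.credential) {
    is PublicKeyCredential -> verifyAssertionOnServer(credential.authenticationResponseJson)
    is PasswordCredential -> signInWithPassword(credential.id, credential.password)
    else -> { /* Unexpected credential type */ }
}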

Implementing passkeys at Zoho required both client-side and server-side adjustments. Here's a detailed breakdown of the passkey creation, sign-in, and server-side implementation process.

Passkey creation

Passkey creation in OneAuth on a small screen mobile device

To create a passkey, the app first retrieves configuration details, including a unique challenge, from Zoho's server. This configuration, formatted as a requestJson string, is used by the app to build a CreatePublicKeyCredentialRequest. The app then calls the credentialManager.createCredential method, which prompts the user to authenticate using their device screen lock (biometrics, fingerprint, PIN, etc.).

Upon successful user confirmation, the app receives the new passkey credential data, sends it back to Zoho's server for verification, and the server then stores the passkey information linked to the user's account. Failures or user cancellations during the process are caught and handled by the app.
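
A hedged sketch of that creation flow (requestJson comes from the server; registerPasskeyWithServer is hypothetical, and handleFailure mirrors the error handling shown later in this post):

suspend fun createPasskey(activityContext: Context, requestJson: String) {
    val credentialManager = CredentialManager.create(activityContext)
    try {
        val result = credentialManager.createCredential(
            activityContext,
            CreatePublicKeyCredentialRequest(requestJson = requestJson)
        ) as CreatePublicKeyCredentialResponse
        // Send the registration response to the server for verification and storage.
        registerPasskeyWithServer(result.registrationResponseJson)
    } catch (e: CreateCredentialException) {
        // Covers user cancellation and other failures; see handleFailure below.
        handleFailure(e)
    }
}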

Sign-in

The Zoho Android app initiates the passkey sign-in process by requesting sign-in options, including a unique challenge, from Zoho's backend server. The app then uses this data to construct a GetCredentialRequest, indicating it will authenticate with a passkey. It then invokes the Android CredentialManager.getCredential() API with this request. This action triggers a standardized Android system interface, prompting the user to choose their Zoho account (if multiple passkeys exist) and authenticate using their device's configured screen lock (fingerprint, face scan, or PIN). After successful authentication, Credential Manager returns a signed assertion (proof of login) to the Zoho app. The app forwards this assertion to Zoho's server, which verifies the signature against the user's stored public key and validates the challenge, completing the secure sign-in process.

Server-side implementation

Zoho's transition to supporting passkeys benefited from their backend systems already being FIDO WebAuthn compliant, which streamlined the server-side implementation process. However, specific modifications were still necessary to fully integrate passkey functionality.

The most significant challenge involved adapting the credential storage system. Zoho's existing authentication methods, which primarily used passwords and FIDO security keys for multi-factor authentication, required different storage approaches than passkeys, which are based on cryptographic public keys. To address this, Zoho implemented a new database schema specifically designed to securely store passkey public keys and related data according to WebAuthn protocols. This new system was built alongside a lookup mechanism to validate and retrieve credentials based on user and device information, ensuring backward compatibility with older authentication methods.

Another server-side adjustment involved implementing the ability to handle requests from Android devices. Passkey requests originating from Android apps use a unique origin format (android:apk-key-hash:example) that is distinct from standard web origins that use a URI-based format (https://example.com/app). The server logic needed to be updated to correctly parse this format, extract the SHA-256 fingerprint hash of the app's signing certificate, and validate it against a pre-registered list. This verification step ensures that authentication requests genuinely originate from Zoho's Android app and protects against phishing attacks.

This code snippet demonstrates how the server checks for the Android-specific origin format and validates the certificate hash:

val origin: String = clientData.getString("origin")

if (origin.startsWith("android:apk-key-hash:")) {
    // Expected format: android:apk-key-hash:<base64-encoded SHA-256 of the signing cert>
    val originSplit: List<String> = origin.split(":")
    if (originSplit.size >= 3) {
        val androidOriginHashDecoded: ByteArray = Base64.getDecoder().decode(originSplit[2])
        if (!androidOriginHashDecoded.contentEquals(oneAuthSha256FingerPrint)) {
            throw IAMException(IAMErrorCode.WEBAUTH003)
        }
    } else {
        // Handle the case where the origin string is malformed
    }
}

Error handling

Zoho implemented robust error handling mechanisms to manage both user-facing and developer-facing errors. A common error, CreateCredentialCancellationException, appeared when users manually canceled their passkey setup. Zoho tracked the frequency of this error to assess potential UX improvements. Based on Android's UX recommendations, Zoho took steps to better educate their users about passkeys, ensure users were aware of passkey availability, and promote passkey adoption during subsequent sign-in attempts.

This code example demonstrates Zoho's approach for how they handled their most common passkey creation errors:

private fun handleFailure(e: CreateCredentialException) {
    val msg = when (e) {
        is CreateCredentialCancellationException -> {
            Analytics.addAnalyticsEvent("PASSKEY_SETUP_CANCELLED", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "The operation was canceled by the user."
        }
        is CreateCredentialInterruptedException -> {
            Analytics.addAnalyticsEvent("PASSKEY_SETUP_INTERRUPTED", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "Passkey setup was interrupted. Please try again."
        }
        is CreateCredentialProviderConfigurationException -> {
            Analytics.addAnalyticsEvent("PASSKEY_PROVIDER_MISCONFIGURED", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "Credential provider misconfigured. Contact support."
        }
        is CreateCredentialUnknownException -> {
            Analytics.addAnalyticsEvent("PASSKEY_SETUP_UNKNOWN_ERROR", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "An unknown error occurred during Passkey setup."
        }
        is CreatePublicKeyCredentialDomException -> {
            Analytics.addAnalyticsEvent("PASSKEY_WEB_AUTHN_ERROR", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "Passkey creation failed: ${e.domError}"
        }
        else -> {
            Analytics.addAnalyticsEvent("PASSKEY_SETUP_FAILED", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "An unexpected error occurred. Please try again."
        }
    }
    // Surface msg to the user, for example in a snackbar or dialog.
}

Testing passkeys in intranet environments

Zoho faced an initial challenge in testing passkeys within a closed intranet environment. The Google Password Manager verification process for passkeys requires public domain access to validate the relying party (RP) domain. However, Zoho's internal testing environment lacked this public Internet access, causing the verification process to fail and hindering successful passkey authentication testing. To overcome this, Zoho created a publicly accessible test environment, which included hosting a temporary server with an asset link file and domain validation.

This example from the assetlinks.json file used in Zoho's public test environment demonstrates how to associate the relying party domain with the specified Android app for passkey validation.

[
    {
        "relation": [
            "delegate_permission/common.handle_all_urls",
            "delegate_permission/common.get_login_creds"
        ],
        "target": {
            "namespace": "android_app",
            "package_name": "com.zoho.accounts.oneauth",
            "sha256_cert_fingerprints": [
                "SHA_HEX_VALUE" 
            ]
        }
    }
]

Integrate with an existing FIDO server

Android's passkey system utilizes the modern FIDO2 WebAuthn standard. This standard requires requests in a specific JSON format, which helps maintain consistency between native applications and web platforms. To enable Android passkey support, Zoho made minor compatibility and structural changes so that the server correctly generates and processes requests that adhere to the required FIDO2 JSON structure.

This server update involved several specific technical adjustments:

      1. Encoding conversion: The server converts the Base64 URL encoding (commonly used in WebAuthn for fields like credential IDs) to standard Base64 encoding before it stores the relevant data. The snippet below shows how a rawId might be encoded to standard Base64:

// Convert rawId bytes to a standard Base64 encoded string for storage
val base64RawId: String = Base64.getEncoder().encodeToString(rawId.toByteArray())

      2. Transport list format: To ensure consistent data processing, the server logic handles lists of transport mechanisms (such as USB, NFC, and Bluetooth, which specify how the authenticator communicated) as JSON arrays.
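
For example (a hedged sketch; credentialRecord is a hypothetical JSONObject representing the stored credential):

// Sketch: normalize authenticator transports into a JSON array before storage.
val transports = JSONArray(listOf("internal", "hybrid", "usb"))
credentialRecord.put("transports", transports)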

      3. Client data alignment: The Zoho team adjusted how the server encodes and decodes the clientDataJson field. This ensures the data structure aligns precisely with the expectations of Zoho’s existing internal APIs. The example below illustrates part of the conversion logic applied to client data before the server processes it:

private fun convertForServer(clientDataBase64: String): String {
    val clientDataBytes = BaseEncoding.base64().decode(clientDataBase64)
    val clientDataJson = JSONObject(String(clientDataBytes, StandardCharsets.UTF_8))
    val clientJson = JSONObject()
    // The challenge is a technical identifier/token, not localizable text.
    val challengeFromJson = clientDataJson.getString("challenge")
    clientJson.put(
        "challenge",
        BaseEncoding.base64Url().encode(challengeFromJson.toByteArray(StandardCharsets.UTF_8))
    )
    clientJson.put("origin", clientDataJson.getString("origin"))
    clientJson.put("type", clientDataJson.getString("type"))
    clientJson.put("androidPackageName", clientDataJson.getString("androidPackageName"))
    return BaseEncoding.base64().encode(clientJson.toString().toByteArray())
}

User guidance and authentication preferences

A central part of Zoho's passkey strategy involved encouraging user adoption while providing flexibility to align with different organizational requirements. This was achieved through careful UI design and policy controls.

Zoho recognized that organizations have varying security needs. To accommodate this, Zoho implemented:

    • Admin enforcement: Through the Zoho Directory admin panel, administrators can designate passkeys as the mandatory, default authentication method for their entire organization. When this policy is enabled, employees are required to set up a passkey upon their next login and use it going forward.
    • User choice: If an organization does not enforce a specific policy, individual users maintain control. They can choose their preferred authentication method during login, selecting from passkeys or other configured options via their authentication settings.

To make adopting passkeys appealing and straightforward for end-users, Zoho implemented:

    • Easy setup: Zoho integrated passkey setup directly into the Zoho OneAuth mobile app (available for both Android and iOS). Users can conveniently configure their passkeys within the app at any time, smoothing the transition.
    • Consistent access: Passkey support was implemented across key user touchpoints, ensuring users can register and authenticate using passkeys via:
        • The Zoho OneAuth mobile app (Android & iOS);

This method ensured that the process of setting up and using passkeys was accessible and integrated into the platforms they already use, regardless of whether it was mandated by an admin or chosen by the user. You can learn more about how to create smooth user flows for passkey authentication by exploring our comprehensive passkeys user experience guide.

Impact on developer velocity and integration efficiency

Credential Manager, as a unified API, also helped improve developer productivity compared to older sign-in flows. It reduced the complexity of handling multiple authentication methods and APIs separately, leading to faster integration, from months to weeks, and fewer implementation errors. This collectively streamlined the sign-in process and improved overall reliability.

By implementing passkeys with Credential Manager, Zoho achieved significant, measurable improvements across the board:

    • Dramatic speed improvements
        • 2x faster login compared to traditional password authentication.
        • 4x faster login compared to username or mobile number with email or SMS OTP authentication.
        • 6x faster login compared to username, password, and SMS or authenticator OTP authentication.
    • Reduced support costs
        • Reduced password-related support requests, especially for forgotten passwords.
        • Lower costs associated with SMS-based 2FA, as existing users can onboard directly with passkeys.
    • Strong user adoption & enhanced security:
        • Passkey sign-ins doubled in just 4 months, showing high user acceptance.
        • Users migrating to passkeys are fully protected from common phishing and password breach threats.
        • With 31% MoM adoption growth, more users are benefiting daily from enhanced security against vulnerabilities like phishing and SIM swaps.

Recommendations and best practices

To successfully implement passkeys on Android, developers should consider the following best practices:

    • Leverage Android's Credential Manager API:
        • Credential Manager simplifies credential retrieval, reducing developer effort and ensuring a unified authentication experience.
        • Handles passwords, passkeys, and federated login flows in a single interface.
    • Ensure data encoding consistency while migrating from other FIDO authentication solutions:
        • Make sure you handle consistent formatting for all inputs and outputs, for example when migrating from FIDO security keys.
    • Optimize error handling and logging:
        • Implement robust error handling for a seamless user experience.
        • Provide localized error messages and use detailed logs to debug and resolve unexpected failures.
    • Educate users on passkey recovery options:
        • Prevent lockout scenarios by proactively guiding users on recovery options.
    • Monitor adoption metrics and user feedback:
        • Track user engagement, passkey adoption rates, and login success rates to keep optimizing user experience.
        • Conduct A/B testing on different authentication flows to improve conversion and retention.

Passkeys, combined with the Android Credential Manager API, offer a powerful, unified authentication solution that enhances security while simplifying user experience. Passkeys significantly reduce phishing risks, credential theft, and unauthorized access. We encourage developers to try out the experience in their app and bring the most secure authentication to their users.

Get started with passkeys and Credential Manager

Get hands on with passkeys and Credential Manager on Android using our public sample code.

If you have any questions or issues, you can share with us through the Android Credentials issues tracker.

Android Studio Meerkat Feature Drop is stable

Posted by Adarsh Fernando, Group Product Manager

Today, we're excited to announce the stable release of Android Studio Meerkat Feature Drop (2024.3.2)!

This release brings a host of new features and improvements designed to boost your productivity and enhance your development workflow. With numerous enhancements, this latest release helps you build high-quality Android apps faster and more efficiently: streamlined Jetpack Compose previews, new Gemini capabilities, better Kotlin Multiplatform (KMP) integration, improved device management, and more.

Read on to learn about the key updates in Android Studio Meerkat Feature Drop, and download the latest stable version today to explore them yourself!

Developer Productivity Enhancements

Analyze Crash Reports with Gemini in Android Studio

Debugging production crashes can require you to spend significant time switching contexts between your crash reporting tool, such as Firebase Crashlytics and Android Vitals, and investigating root causes in the IDE. Now, when viewing reports in App Quality Insights (AQI), click the Insights tab. Gemini provides a summary of the crash, generates insights, and links to useful documentation. If you also provide Gemini with access to local code context, it can provide more accurate results, relevant next steps, and code suggestions. This helps you reduce the time spent diagnosing and resolving issues.

moving image of Gemini in the App Quality Insights tool window in Android Studio
Gemini helps you investigate, understand, and resolve crashes in your app much more quickly in the App Quality Insights tool window.

Generate Unit Test Scenarios with Gemini

Writing effective unit tests is crucial but can be time-consuming. Gemini now helps kickstart this process by generating relevant test scenarios. Right-click on a class in your editor and select Gemini > Generate Unit Test Scenarios. Gemini analyzes the code and suggests test cases with descriptive names, outlining what to test. While you still implement the specific test logic, this significantly speeds up the initial setup and ensures better test coverage by suggesting scenarios you might have missed.

moving image of generating unit test scenarios in Android Studio
Gemini helps you generate unit test scenarios for your app.

Gemini Prompt Library

No more retyping your most frequently used prompts for Gemini! The new Prompt Library lets you save prompts directly within Android Studio (Settings > Gemini > Prompt Library). Whether it's a specific code generation pattern, a refactoring instruction, or a debugging query you use often, save it once from the chat (right-click > Save prompt) and re-apply it instantly from the editor (right-click > Gemini > Prompt Library). Prompts that you save can also be shared and standardized across your team.

moving image of prompt library in Android Studio
The prompt library saves your frequently used Gemini prompts to make them easier to use.

You have the option to store prompts at the IDE level or the Project level:

    • IDE level prompts are private and can be used across multiple projects.
    • Project level prompts can be shared across teams working on the same project (if the .idea folder is added to VCS).

Compose and UI Development

Themed Icon Support Preview

Ensure your app's branding looks great with Android’s themed icons. Android Studio now lets you preview how your existing launcher icon adapts to the monochromatic theming algorithm directly within the IDE. This quick visual check helps you identify potential contrast issues or undesirable shapes early in the workflow, even before you provide a dedicated monochromatic drawable. This allows for faster iteration on your app's visual identity.

moving image of themed icon support in preview in Android Studio
Themed icon support in Preview helps you visually check how your existing launcher icon adapts to monochromatic theming.

Compose Preview Enhancements

Iterating on your Compose UI is now faster and better organized:

    • Enhanced Zoom: Navigate complex layouts more easily with smoother, more responsive zooming in your Compose previews.
    • Collapsible Groups: Tidy up your preview surface by collapsing groups of related composables under their @Preview annotation names, letting you focus on specific parts of the UI without clutter.
    • Grid Mode by Default: Grid mode is now the default for a clear overview. Gallery mode (for flipping through individual previews) is available via right-click, while List view has been removed to streamline the experience.
moving image of Compose previews in Android Studio
Compose previews render more smoothly and make it easier to hide previews you’re not focused on.

Build and Deploy

KMP Shared Module Integration

Android Studio now streamlines adding shared logic to your Android app with the new Kotlin Multiplatform Shared Module template. This provides a dedicated starting point within your Android project, making it easier to structure and build shared business logic for both Android and iOS directly from Android Studio.

Kotlin Multiplatform template in Android Studio
The new Kotlin Multiplatform module template makes it easier to add shared business logic to your existing app.

Updated UX for Adding Devices

Spend less time configuring test devices. The new Device Manager UX for adding virtual and remote devices makes it much easier to configure the devices you want from the Device Manager. To get started, click the ‘+’ action at the top of the window and select one of these options:

    • Create Virtual Device: New filters, recommendations, and creation flow guide you towards creating AVDs that are best suited for your intended purpose and your machine's performance.
    • Add Remote Devices: With Android Device Streaming, powered by Firebase, you can connect and debug your app with a variety of real physical devices. With a new catalog view and filters, it's now easier to locate and start using the device you need in just a few clicks.
moving image of configuring virtual devices in Android Studio
It’s now easier to configure virtual devices that are optimized for your workstation.

Google Play Deprecated SDK Warnings

Stay more informed about SDKs you publish with your app. Android Studio now displays warnings from the Google Play SDK Index when an SDK used in your app has been deprecated by its author. These warnings include information about suggested alternative SDKs, helping you proactively manage dependencies and avoid potential issues related to outdated or insecure libraries.

Google Play Deprecated SDK warnings in Android Studio
Play deprecated SDK warnings help you avoid potential issues related to outdated or insecure libraries.

Updated Build Menu and Actions

We've refined the Build menu for a more intuitive experience:

    • New 'Build run-configuration-name' Action: Builds the currently selected run configuration (e.g., :app or a specific test). This is now the default action for the toolbar button and Control/Command+F9.
    • Reordered Actions: The new build action is prioritized at the top, followed by Compile and Assemble actions.
    • Clearer Naming: "Rebuild Project" is now "Clean and Assemble Project with Tests". "Make Project" is renamed to "Assemble Project", and a new "Assemble Project with Tests" action is available.
Build menu in Android Studio
The Build menu includes behavior and naming changes to simplify and streamline the experience.

Standardized Config Directories

Switching between Stable, Beta, and Canary versions of Android Studio is now smoother. Configuration directories are standardized, removing the "Preview" suffix for non-stable builds. We've also added the micro version (e.g., AndroidStudio2024.3.2) to the path, allowing different feature drops to run side-by-side without conflicts. This simplifies managing your IDE settings, especially if you work with multiple Android Studio installations.

IntelliJ platform update

Android Studio Meerkat Feature Drop (2024.3.2) includes the IntelliJ 2024.3 platform release, which has many new features such as a feature-complete K2 mode, more reliable Java** and Kotlin code inspections, grammar checks during indexing, debugger improvements, speed and quality-of-life improvements to the Terminal, and more.

For more information, read the full IntelliJ 2024.3 release notes.

Summary

Android Studio Meerkat Feature Drop (2024.3.2) delivers these key features and enhancements:

    • Developer Productivity:
        • Analyze Crash Reports with Gemini
        • Generate Unit Test Scenarios with Gemini
        • Gemini Prompt Library
    • Compose and UI:
        • Themed Icon Preview
        • Compose Preview Enhancements (Zoom, Collapsible Groups, View Modes)
    • Build and Deploy:
        • KMP Shared Module Template
        • Updated UX for Adding Devices
        • Google Play SDK Insights: Deprecated SDK Warnings
        • Updated Build Menu & Actions
        • Standardized Config Directories
    • IntelliJ Platform Update
        • Feature-complete K2 mode
        • Improved Kotlin and Java** inspection reliability
        • Debugger improvements
        • Speed and quality of life improvements in Terminal

Getting Started

Ready to elevate your Android development? Download Android Studio Meerkat Feature Drop and start using these powerful new features today!

As always, your feedback is crucial. Check known issues, report bugs, suggest improvements, and connect with the community on LinkedIn, Medium, YouTube, or X. Let's continue building amazing Android apps together!


**Java is a trademark or registered trademark of Oracle and/or its affiliates.