On-device GenAI APIs as part of ML Kit help you easily build with Gemini Nano

Posted by Caren Chang - Developer Relations Engineer, Chengji Yan - Software Engineer, Taj Darra - Product Manager

We are excited to announce a set of on-device GenAI APIs, as part of ML Kit, to help you integrate Gemini Nano in your Android apps.

To start, we are releasing 4 new APIs:

    • Summarization: to summarize articles and conversations
    • Proofreading: to polish short text
    • Rewriting: to reword text in different styles
    • Image Description: to provide short descriptions of images

Key benefits of GenAI APIs

GenAI APIs are high-level APIs that allow for easy integration, similar to existing ML Kit APIs. This means you can expect quality results out of the box without extra effort for prompt engineering or fine-tuning for specific use cases.

GenAI APIs run on-device and thus provide the following benefits:

    • Input, inference, and output data is processed locally
    • Functionality remains the same without reliable internet connection
    • No additional cost incurred for each API call

To prevent misuse, we also added safety protections at multiple layers, including base model training, safety-aware LoRA fine-tuning, input and output classifiers, and safety evaluations.

How GenAI APIs are built

There are 4 main components that make up each of the GenAI APIs.

  1. Gemini Nano is the base model, serving as the foundation shared by all APIs.
  2. Small API-specific LoRA adapter models are trained and deployed on top of the base model to further improve the quality for each API.
  3. Optimized inference parameters (e.g. prompt, temperature, topK, batch size) are tuned for each API to guide the model in returning the best results.
  4. An evaluation pipeline ensures quality across various datasets and attributes. This pipeline consists of LLM raters, statistical metrics, and human raters.

Together, these components make up the high-level GenAI APIs that simplify the effort needed to integrate Gemini Nano in your Android app.

Evaluating quality of GenAI APIs

For each API, we formulate a benchmark score based on the evaluation pipeline mentioned above. This score is based on attributes specific to a task. For example, when evaluating the summarization task, one of the attributes we look at is “grounding” (i.e., the factual consistency of the generated summary with the source content).

To provide out-of-box quality for GenAI APIs, we applied feature-specific fine-tuning on top of the Gemini Nano base model. This resulted in an increase in the benchmark score of each API, as shown below:

Use case (in English)    Gemini Nano base model    ML Kit GenAI API
Summarization            77.2                      92.1
Proofreading             84.3                      90.2
Rewriting                79.5                      84.1
Image Description        86.9                      92.3

In addition, this is a quick reference of how the APIs perform on a Pixel 9 Pro:

                    Prefix speed (input processing rate)                   Decode speed (output generation rate)
Text-to-text        510 tokens/second                                      11 tokens/second
Image-to-text       510 tokens/second + 0.8 seconds for image encoding     11 tokens/second
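
As a rough back-of-the-envelope illustration of what these figures mean for a single request, the sketch below estimates end-to-end latency from the rates above. This is not from the original post; real latency varies by device, model version, and input.

// Rough latency estimate based on the Pixel 9 Pro throughput figures above.
// Illustrative only — actual performance varies by device and request.
fun estimateLatencySeconds(
    inputTokens: Int,
    outputTokens: Int,
    includesImage: Boolean = false,
    prefixTokensPerSecond: Double = 510.0,
    decodeTokensPerSecond: Double = 11.0,
    imageEncodingSeconds: Double = 0.8,
): Double {
    val prefixTime = inputTokens / prefixTokensPerSecond
    val decodeTime = outputTokens / decodeTokensPerSecond
    val imageTime = if (includesImage) imageEncodingSeconds else 0.0
    return prefixTime + decodeTime + imageTime
}

fun main() {
    // A ~3,000-token article summarized into a ~50-token bullet:
    // 3000 / 510 ≈ 5.9 s of input processing + 50 / 11 ≈ 4.5 s of generation ≈ 10.4 s total.
    println(estimateLatencySeconds(inputTokens = 3000, outputTokens = 50))
}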

Sample usage

This is an example of implementing the GenAI Summarization API to get a one-bullet summary of an article:

val articleToSummarize = "We are excited to announce a set of on-device generative AI APIs..."

// Define task with desired input and output format
val summarizerOptions = SummarizerOptions.builder(context)
    .setInputType(InputType.ARTICLE)
    .setOutputType(OutputType.ONE_BULLET)
    .setLanguage(Language.ENGLISH)
    .build()
val summarizer = Summarization.getClient(summarizerOptions)

suspend fun prepareAndStartSummarization(context: Context) {
    // Check feature availability. Status will be one of the following: 
    // UNAVAILABLE, DOWNLOADABLE, DOWNLOADING, AVAILABLE
    val featureStatus = summarizer.checkFeatureStatus().await()

    if (featureStatus == FeatureStatus.DOWNLOADABLE) {
        // Download feature if necessary.
        // If downloadFeature is not called, the first inference request will 
        // also trigger the feature to be downloaded if it's not already
        // downloaded.
        summarizer.downloadFeature(object : DownloadCallback {
            override fun onDownloadStarted(bytesToDownload: Long) { }

            override fun onDownloadFailed(e: GenAiException) { }

            override fun onDownloadProgress(totalBytesDownloaded: Long) {}

            override fun onDownloadCompleted() {
                startSummarizationRequest(articleToSummarize, summarizer)
            }
        })    
    } else if (featureStatus == FeatureStatus.DOWNLOADING) {
        // Inference request will automatically run once feature is      
        // downloaded.
        // If Gemini Nano is already downloaded on the device, the   
        // feature-specific LoRA adapter model will be downloaded very  
        // quickly. However, if Gemini Nano is not already downloaded, 
        // the download process may take longer.
        startSummarizationRequest(articleToSummarize, summarizer)
    } else if (featureStatus == FeatureStatus.AVAILABLE) {
        startSummarizationRequest(articleToSummarize, summarizer)
    } 
}

fun startSummarizationRequest(text: String, summarizer: Summarizer) {
    // Create task request  
    val summarizationRequest = SummarizationRequest.builder(text).build()

    // Start summarization request with streaming response
    summarizer.runInference(summarizationRequest) { newText -> 
        // Show new text in UI
    }

    // You can also get a non-streaming response from the request
    // val summarizationResult = summarizer.runInference(summarizationRequest)
    // val summary = summarizationResult.get().summary
}

// Be sure to release the resource when no longer needed
// For example, on viewModel.onCleared() or activity.onDestroy()
summarizer.close()
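
As the comments above suggest, a common place to call close() is when the screen's state holder goes away. A minimal sketch of that pattern (assuming the Summarizer from the sample above and a standard androidx ViewModel) could look like this:

// Sketch: scope the summarizer to a ViewModel so close() runs exactly once,
// when the state holder is cleared.
class SummarizationViewModel(
    private val summarizer: Summarizer,
) : ViewModel() {

    override fun onCleared() {
        // Release the on-device resources held by the GenAI client.
        summarizer.close()
        super.onCleared()
    }
}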

For more examples of implementing the GenAI APIs, check out the official documentation and the samples on GitHub.

Use cases

Here is some guidance on how to best use the current GenAI APIs:

For Summarization, consider:

    • Conversation messages or transcripts that involve 2 or more users
    • Articles or documents less than 4000 tokens (or about 3000 English words). Using the first few paragraphs for summarization is usually good enough to capture the most important information.

For Proofreading and Rewriting APIs, consider utilizing them during the content creation process for short content below 256 tokens to help with tasks such as:

    • Refining messages in a particular tone, such as more formal or more casual
    • Polishing personal notes for easier consumption later
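
As a rough sketch, calling the Proofreading API is expected to mirror the Summarization flow shown earlier. The class names below (ProofreaderOptions, Proofreading, ProofreadingRequest) and the available options are assumptions based on that pattern, not confirmed API, so check the official GenAI API reference before relying on them:

// Hypothetical sketch modeled on the Summarization sample above; the class,
// builder, and option names here are assumptions, not confirmed API.
val proofreaderOptions = ProofreaderOptions.builder(context)
    .setLanguage(Language.ENGLISH) // assumed to mirror SummarizerOptions
    .build()
val proofreader = Proofreading.getClient(proofreaderOptions)

fun startProofreadingRequest(text: String) {
    // The same feature availability and download flow as the summarizer applies here.
    val request = ProofreadingRequest.builder(text).build()

    // Streaming response, mirroring the summarizer example
    proofreader.runInference(request) { newText ->
        // Show the polished text in the UI
    }
}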

For the Image Description API, consider it for:

    • Generating titles of images
    • Generating metadata for image search
    • Utilizing descriptions of images in use cases where the images themselves cannot be displayed, such as within a list of chat messages
    • Generating alternative text to help visually impaired users better understand content as a whole

GenAI API in production

Envision is an app that verbalizes the visual world to help people who are blind or have low vision lead more independent lives. A common use case in the app is for users to take a picture to have a document read out loud. Utilizing the GenAI Summarization API, Envision is now able to get a concise summary of a captured document. This significantly enhances the user experience by allowing them to quickly grasp the main points of documents and determine if a more detailed reading is desired, saving them time and effort.

side by side images of a mobile device showing a document on a table on the left, and the results of the scanned document on the right showing details providing the what, when, and where as written in the document

Supported devices

GenAI APIs are available on Android devices using optimized MediaTek Dimensity, Qualcomm Snapdragon, and Google Tensor platforms through AICore. For a comprehensive list of devices that support GenAI APIs, refer to our official documentation.

Learn more

Start implementing GenAI APIs in your Android apps today with guidance from our official documentation and samples on GitHub: AI Catalog GenAI API Samples with Compose, ML Kit GenAI APIs Quickstart.

Android Design at Google I/O 2025

Posted by Ivy Knight – Senior Design Advocate

Here’s your guide to the essential Android Design sessions, resources, and announcements for I/O ‘25:

Check out the latest Android updates

The Android Show: I/O Edition

The Android Show had a special I/O edition this year with some exciting announcements like Material 3 Expressive!

Learn more about the new Live Update Notification templates in the Android Notifications & Live Updates session for an in-depth look at what they are, when to use them, and why. You can also get the Live Update design template in the Android UI Kit, read more in the updated Notification guidance, and get hands-on with the Jetsnack Live Updates and Widget case study.

Make your apps more expressive

Get a jump on the future of Google’s UX design: Material 3 Expressive. Learn how to use new emotional design patterns to boost engagement, usability, and desire for your product in the Build Next-Level UX with Material 3 Expressive session and check out the expressive update on Material.io.

Stay up to date with Android Accessibility Updates, highlighting accessibility features launching with Android 16: enhanced dark themes, options for those with motion sickness, a new way to increase text contrast, and more.

Catch the Mastering text input in Compose session to learn how engaging, robust text experiences are built with Jetpack Compose. It covers Autofill integration, dynamic text resizing, and custom input transformations, and it’s a great session for seeing what’s possible when designing text inputs.

Thinking across form factors

These design resources and sessions can help you design across more Android form factors or update your existing experiences.

Preview Gemini in the car, imagining seamless navigation and personalized entertainment, in the New In-Car App Experiences session. Then explore the new Car UI Design Kit to bring your app to Android Car platforms and speed up your process with the latest Android form factor kit.

The Engaging with users on Google TV with excellent TV apps session discusses new ways the Google TV experience is making it easier for users to find and engage with content, including improvements to out-of-box solutions and updates to Android TV OS.

Want a peek at how to bring immersive content, like 3D models, to Android XR? Check out the Building differentiated apps for Android XR with 3D Content session.

Plus, Wear OS is releasing an updated design kit on the @AndroidDesign Figma and a learning pathway.

Tip top apps

We’ve also released the following new Android design guidance to help you design the best Android experiences:

In-app Settings

Read up on the latest suggested patterns to build out your app’s settings.

Help and Feedback

Along with settings, learn about adding help and feedback to your app.

Widget Configuration

Does your app need setup? New guidance to help you add configuration to your app’s widgets.

Edge-to-edge design

Allow your apps to take full advantage of the entire screen with the latest guidance on designing for edge-to-edge.

Check out figma.com/@androiddesign for even more new and updated resources.

Visit the I/O 2025 website, build your schedule, and engage with the community. If you are at the Shoreline, come say hello to us in the Android tent at our booths.

We can't wait to see what you create with these new tools and insights. Happy I/O!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


Androidify: Building delightful UIs with Compose

Posted by Rebecca Franks - Developer Relations Engineer

Androidify is a new sample app we built using the latest best practices for mobile apps. Previously, we covered all the different features of the app, from Gemini integration and CameraX functionality to adaptive layouts. In this post, we dive into the Jetpack Compose usage throughout the app, building upon our base knowledge of Compose to add delightful and expressive touches along the way!

Material 3 Expressive

Material 3 Expressive is an expansion of the Material 3 design system. It’s a set of new features, updated components, and design tactics for creating emotionally impactful UX.


It’s been released as part of the alpha version of the Material 3 artifact (androidx.compose.material3:material3:1.4.0-alpha10) and contains a wide range of new components you can use within your apps to build more personalized and delightful experiences. Learn more about Material 3 Expressive's component and theme updates for more engaging and user-friendly products.

Material Expressive Component updates

In addition to the new component updates, Material 3 Expressive introduces a new motion physics system that's encompassed in the Material theme.

In Androidify, we’ve utilized Material 3 Expressive in a few different ways across the app. For example, we’ve explicitly opted in to the new MaterialExpressiveTheme and chosen MotionScheme.expressive() (this is the default when using expressive) to add a bit of playfulness to the app:

@Composable
fun AndroidifyTheme(
    content: @Composable () -> Unit,
) {
    val colorScheme = LightColorScheme

    MaterialExpressiveTheme(
        colorScheme = colorScheme,
        typography = Typography,
        shapes = shapes,
        motionScheme = MotionScheme.expressive(),
        content = {
            SharedTransitionLayout {
                CompositionLocalProvider(LocalSharedTransitionScope provides this) {
                    content()
                }
            }
        },
    )
}

Some of the new componentry is used throughout the app, including the HorizontalFloatingToolbar for the Prompt type selection:

moving example of expressive button shapes in slow motion

The app also uses MaterialShapes in various locations, which are a preset list of shapes that allow for easy morphing between each other. For example, check out the cute cookie shape for the camera capture button:

Camera button with a MaterialShapes.Cookie9Sided shape

Animations

Wherever possible, the app leverages the Material 3 Expressive MotionScheme to obtain a themed motion token, creating a consistent motion feeling throughout the app. For example, the scale animation on the camera button press is powered by defaultSpatialSpec(), a specification used for animations that move something across the screen (such as x/y position, rotation, or scale animations):

val interactionSource = remember { MutableInteractionSource() }
val animationSpec = MaterialTheme.motionScheme.defaultSpatialSpec<Float>()
Spacer(
   modifier
       .indication(interactionSource, ScaleIndicationNodeFactory(animationSpec))
       .clip(MaterialShapes.Cookie9Sided.toShape())
       .size(size)
       .drawWithCache {
           //.. etc
       },
)

Camera button scale interaction

Shared element animations

The app uses shared element transitions between different screen states. Last year, we showcased how you can create shared elements in Jetpack Compose, and we’ve extended this in the Androidify sample to create a fun example. It combines the new Material 3 Expressive MaterialShapes, and performs a transition with a morphing shape animation:

moving example of expressive button shapes in slow motion

To do this, we created a custom Modifier that takes in the target and resting shapes for the sharedBounds transition:

@Composable
fun Modifier.sharedBoundsRevealWithShapeMorph(
    sharedContentState: SharedTransitionScope.SharedContentState,
    sharedTransitionScope: SharedTransitionScope = LocalSharedTransitionScope.current,
    animatedVisibilityScope: AnimatedVisibilityScope = LocalNavAnimatedContentScope.current,
    boundsTransform: BoundsTransform = MaterialTheme.motionScheme.sharedElementTransitionSpec,
    resizeMode: SharedTransitionScope.ResizeMode = SharedTransitionScope.ResizeMode.RemeasureToBounds,
    restingShape: RoundedPolygon = RoundedPolygon.rectangle().normalized(),
    targetShape: RoundedPolygon = RoundedPolygon.circle().normalized(),
)

Then, we apply a custom OverlayClip to provide the morphing shape, by tying into the AnimatedVisibilityScope provided by the LocalNavAnimatedContentScope:

val animatedProgress =
    animatedVisibilityScope.transition.animateFloat(targetValueByState = targetValueByState)

val morph = remember {
    Morph(restingShape, targetShape)
}
val morphClip = MorphOverlayClip(morph, { animatedProgress.value })

return this@sharedBoundsRevealWithShapeMorph
    .sharedBounds(
        sharedContentState = sharedContentState,
        animatedVisibilityScope = animatedVisibilityScope,
        boundsTransform = boundsTransform,
        resizeMode = resizeMode,
        clipInOverlayDuringTransition = morphClip,
        renderInOverlayDuringTransition = renderInOverlayDuringTransition,
    )

View the full code snippet for this Modifier on GitHub.

Autosize text

With the latest release of Jetpack Compose 1.8, we added the ability to create text composables that automatically adjust the font size to fit the container’s available size with the new autoSize parameter:

BasicText(
    text,
    style = MaterialTheme.typography.titleLarge,
    autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
)

This is used front and center for the “Customize your own Android Bot” text:

“Customize your own Android Bot” text with inline GIF

This text composable is interesting because it needed to have the fun dancing Android bot in the middle of the text. To do this, we use InlineContent, which allows us to append a composable in the middle of the text composable itself:

@Composable
private fun DancingBotHeadlineText(modifier: Modifier = Modifier) {
   Box(modifier = modifier) {
       val animatedBot = "animatedBot"
       val text = buildAnnotatedString {
           append(stringResource(R.string.customize))
           // Attach "animatedBot" annotation on the placeholder
           appendInlineContent(animatedBot)
           append(stringResource(R.string.android_bot))
       }
       var placeHolderSize by remember {
           mutableStateOf(220.sp)
       }
       val inlineContent = mapOf(
           Pair(
               animatedBot,
               InlineTextContent(
                   Placeholder(
                       width = placeHolderSize,
                       height = placeHolderSize,
                       placeholderVerticalAlign = PlaceholderVerticalAlign.TextCenter,
                   ),
               ) {
                   DancingBot(
                       modifier = Modifier
                           .padding(top = 32.dp)
                           .fillMaxSize(),
                   )
               },
           ),
       )
       BasicText(
           text,
           modifier = Modifier
               .align(Alignment.Center)
               .padding(bottom = 64.dp, start = 16.dp, end = 16.dp),
           style = MaterialTheme.typography.titleLarge,
           autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
           maxLines = 6,
           onTextLayout = { result ->
               placeHolderSize = result.layoutInput.style.fontSize * 3.5f
           },
           inlineContent = inlineContent,
       )
   }
}

Composable visibility with onLayoutRectChanged

With Compose 1.8, a new modifier, Modifier.onLayoutRectChanged, was added. It is a more performant alternative to onGloballyPositioned and includes features such as debouncing and throttling so it can be used efficiently inside lazy layouts.
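
For example, inside a lazy list you could space out how often the callback fires. This is a minimal sketch; the throttleMillis and debounceMillis parameter names are assumptions about the Compose 1.8 API, so verify them against the reference before using them:

// Sketch: limit how often bounds callbacks fire for an item in a lazy list.
// throttleMillis / debounceMillis are assumed parameter names — check the
// onLayoutRectChanged reference for the exact signature.
Box(
    modifier = Modifier.onLayoutRectChanged(
        throttleMillis = 100,  // emit at most every 100 ms while bounds keep changing
        debounceMillis = 200,  // then emit once more, 200 ms after changes settle
    ) { bounds ->
        // React to the item's latest bounds here
    },
)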

In Androidify, we’ve used this modifier for the color splash animation. It determines the position where the transition should start from, as we attach it to the “Let’s Go” button:

var buttonBounds by remember {
   mutableStateOf<RelativeLayoutBounds?>(null)
}
var showColorSplash by remember {
   mutableStateOf(false)
}
Box(modifier = Modifier.fillMaxSize()) {
   PrimaryButton(
       buttonText = "Let's Go",
       modifier = Modifier
           .align(Alignment.BottomCenter)
           .onLayoutRectChanged(
               callback = { bounds ->
                   buttonBounds = bounds
               },
           ),
       onClick = {
           showColorSplash = true
       },
   )
}

We use these bounds as an indication of where to start the color splash animation from.
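
As a minimal sketch of that idea (not the Androidify implementation), the captured bounds can be reduced to a center point for a circular reveal to expand from. The boundsInWindow() accessor on RelativeLayoutBounds is an assumption here, so confirm the exact API in your Compose version:

// Sketch: turn the captured RelativeLayoutBounds into a center point that the
// color splash can expand from. boundsInWindow() is an assumed accessor.
fun splashOrigin(bounds: RelativeLayoutBounds): Offset {
    val rect = bounds.boundsInWindow()
    return Offset(
        x = (rect.left + rect.right) / 2f,
        y = (rect.top + rect.bottom) / 2f,
    )
}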

moving image of a blue color splash transition between Androidify demo screens

Learn more delightful details

From fun marquee animations on the results screen, to animated gradient buttons for the AI-powered actions, to the path drawing animation for the loading screen, this app has many delightful touches for you to experience and learn from.

animated marquee example

animated gradient button for AI powered actions example

animated loading screen example

Check out the full codebase at github.com/android/androidify and learn more about the latest in Compose, from Material 3 Expressive to the new modifiers and auto-sizing text, and of course a couple of delightful interactions!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

What’s new in Watch Faces

Posted by Garan Jenkin – Developer Relations Engineer

Wear OS has a thriving watch face ecosystem featuring a variety of designs that also aims to minimize battery impact. Developers have embraced the simplicity of creating watch faces using Watch Face Format – in the last year, the number of published watch faces using Watch Face Format has grown by over 180%*.

Today, we’re continuing our investment and announcing version 4 of the Watch Face Format, available as part of Wear OS 6. These updates allow developers to express even greater levels of creativity through the new features we’ve added. And we’re supporting marketplaces, which gives flexibility and control to developers and more choice for users.

In this blog post we’ll cover the key new features; check out the documentation for more details on the changes introduced in recent versions.

Supporting marketplaces with Watch Face Push

We’re also announcing a completely new API, the Watch Face Push API, aimed at developers who want to create their own watch face marketplaces.

Watch Face Push, available on devices running Wear OS 6 and above, works exclusively with watch faces that use the Watch Face Format.

We’ve partnered with well-known watch face developers – including Facer, TIMEFLIK, WatchMaker, Pujie, and Recreative – in designing this new API. We’re excited that all of these developers will be bringing their unique watch face experiences to Wear OS 6 using Watch Face Push.

Three mobile devices representing watch face marketplace apps for watches running Wear OS 6
From left to right, Facer, Recreative and TIMEFLIK watch faces have been developing marketplace apps to work with watches running Wear OS 6.

Watch faces managed and deployed using Watch Face Push are all written using Watch Face Format. Developers publish these watch faces in the same way as publishing through Google Play, though there are some additional checks the developer must make which are described in the Watch Face Push guidance.

A flow diagram demonstrating the flow of information from Cloud-based storage to the user's phone where the app is installed, then transferred to be installed on a wearable device using the Wear OS App via the Watch Face Push API

The Watch Face Push API covers only the watch part of this typical marketplace system diagram - as the app developer, you have control and responsibility for the phone app and cloud components, as well as for building the Wear OS app using Watch Face Push. You’re also in control of the phone-watch communications, for which we recommend using the Data Layer APIs.

Adding Watch Face Push to your project

To start using Watch Face Push on Wear OS 6, include the following dependency in your Wear OS app:

// Ensure latest version is used by checking the repository
implementation("androidx.wear.watchface:watchface-push:1.3.0-alpha07")

Declare the necessary permission in your AndroidManifest.xml:

<uses-permission android:name="com.google.wear.permission.PUSH_WATCH_FACES" />

Obtain a Watch Face Push client:

val manager = WatchFacePushManagerFactory.createWatchFacePushManager(context)

You’re now ready to start using the Watch Face Push API, for example to list the watch faces you have already installed, or add a new watch face:

// List existing watch faces, installed by this app
val listResponse = manager.listWatchFaces()

// Add a watch face
manager.addWatchFace(watchFaceFileDescriptor, validationToken)
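
Putting those calls together, a typical marketplace flow might check for a free slot before installing and then activate the new watch face. The remainingSlotCount field, the return value of addWatchFace(), and setWatchFaceAsActive() are assumptions in this sketch — consult the Watch Face Push reference for the actual names and shapes:

// Sketch of a marketplace install flow. Everything beyond listWatchFaces() and
// addWatchFace() (remainingSlotCount, the returned details, setWatchFaceAsActive)
// is an assumed API shape — verify against the reference documentation.
suspend fun installAndActivate(
    manager: WatchFacePushManager,
    watchFaceFileDescriptor: ParcelFileDescriptor,
    validationToken: String,
) {
    val listResponse = manager.listWatchFaces()

    // Only install if this app still has a free watch face slot (assumed field).
    if (listResponse.remainingSlotCount > 0) {
        val details = manager.addWatchFace(watchFaceFileDescriptor, validationToken)

        // Optionally make the newly added watch face the active one (assumed method).
        manager.setWatchFaceAsActive(details.slotId)
    }
}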

Understanding Watch Face Push

While the basics of the Watch Face Push API are easy to understand and access through the WatchFacePushManager interface, it’s important to consider several other factors when working with the API in practice to build an effective marketplace app.

To learn more about using Watch Face Push, see the guidance and reference documentation.

Updates to Watch Face Format

Photos

Available from Watch Face Format v4

The new Photos element allows the watch face to contain user-selectable photos. The element supports both individual photos and a gallery of photos. For a gallery of photos, developers can choose whether the photos advance automatically or when the user taps the watch face.

a wearable device and small screen mobile device side by side demonstrating how a user may configure photos for the watch face through the Companion app on the mobile device
Configuring photos through the watch Companion app

The user is able to select the photos of their choice through the companion app, making this a great way to include true personalization in your watch face. To use this feature, first add the necessary configuration:

<UserConfigurations>
  <PhotosConfiguration id="myPhoto" configType="SINGLE"/>
</UserConfigurations>

Then use the Photos element within any PartImage, in the same way as you would for an Image element:

<PartImage ...>
  <Photos source="[CONFIGURATION.myPhoto]"
          defaultImageResource="placeholder_photo"/>
</PartImage>

For details on how to support multiple photos, and how to configure the different change behaviors, refer to the Photos section of the guidance and reference, as well as the GitHub samples.

Transitions

Available from Watch Face Format v4

Watch Face Format now supports transitions when exiting and entering ambient mode.

moving image demonstrating an overshoot effect adjusting the time on a watch face to reveal the seconds digit
State transition animation: Example using an overshoot effect in revealing the seconds digits

This is achieved through the existing Variant tag. For example, the hours and minutes in the above watch face are animated as follows:

<DigitalClock ...>
  <Variant mode="AMBIENT" target="x" value="100" interpolation="OVERSHOOT" />

   <!-- Rest of "hh:mm" clock definition here -->
</DigitalClock>

By default, the animation takes the full extent of allowed time for the transition. The new interpolation attribute controls the animation effect - in this case the use of OVERSHOOT adds a playful experience.

The seconds are implemented in a separate DigitalClock element, which shows the use of the new duration attribute:

<DigitalClock ...>
  <Variant mode="AMBIENT" target="alpha" value="0" duration="0.5"/>
   <!-- Rest of "ss" clock definition here -->
</DigitalClock>

The duration attribute takes a value between 0.0 and 1.0, with 1.0 representing the full extent of the allowed time. In this example, by using a value of 0.5, the seconds animation is quicker - taking half the allowed time, in comparison to the hours and minutes, which take the entire transition period.

For more details on using transitions, see the guidance documentation, as well as the reference documentation for Variant.

Color Transforms

Available from Watch Face Format v4

We’ve extended the usefulness of the Transform element by allowing color to be transformed on the majority of elements where it is an attribute, and also allowing tintColor to be transformed on Group and Part* elements such as PartDraw and PartText.

The main exceptions to this addition are the clock elements, DigitalClock and AnalogClock, and also ComplicationSlot, which do not currently support Transform.

In addition to extending the list of transformable attributes to include colors, we’ve also added a handful of useful functions for manipulating color.

To see these in action, let’s consider an example.

The Weather data source provides the current UV index through [WEATHER.UV_INDEX]. When representing the UV index, these values are typically also assigned a color:

Chart showing the conventional color assigned to each UV index value

We want to represent this information as an Arc, not only showing the value, but also using the appropriate color. We can achieve this as follows:

<Arc centerX="0" centerY="0" height="420" width="420"
  startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
  <Transform target="endAngle"
    value="165 - 40 * (clamp([WEATHER.UV_INDEX], 0.0, 11.0) / 11.0)" />
  <Stroke thickness="20" color="#ffffff" cap="ROUND">
    <Transform target="color"
      value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
  </Stroke>
</Arc>

Let’s break this down:

    • The first Transform restricts the UV index to the range 0.0 to 11.0 and adjusts the sweep of the Arc according to that value.
    • The second Transform uses the new extractColorFromWeightedColors function.
        • The first argument is our list of colors
        • The second argument is a list of weights - you can see from the chart above that green covers 3 values, whereas orange only covers 2, so we use weights to represent this.
        • The third argument is whether or not to interpolate the color values. In this case we want to stick strictly to the color convention for UV index, so this is false.
        • Finally in the fourth argument we coerce the UV value into the range 0.0 to 1.0, which is used as an index into our weighted colors.
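
To make the weighted lookup concrete, here is a small Kotlin sketch of the equivalent logic with interpolation disabled. It is purely illustrative — the real function is evaluated by the Watch Face Format runtime, not in Kotlin:

// Illustrative Kotlin equivalent of extractColorFromWeightedColors with
// interpolation turned off: pick the color whose weight band contains the fraction.
fun extractColorFromWeightedColors(
    colors: List<String>,
    weights: List<Double>,
    fraction: Double, // already coerced to 0.0..1.0
): String {
    val total = weights.sum()
    var cumulative = 0.0
    colors.zip(weights).forEach { (color, weight) ->
        cumulative += weight / total
        if (fraction <= cumulative) return color
    }
    return colors.last()
}

fun main() {
    val colors = listOf("#97d700", "#FCE300", "#ff8200", "#f65058", "#9461c9")
    val weights = listOf(3.0, 3.0, 2.0, 3.0, 1.0)
    // UV index 7: (7 + 0.5) / 12 = 0.625, which falls in the orange (#ff8200) band.
    println(extractColorFromWeightedColors(colors, weights, (7 + 0.5) / 12.0))
}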

The result looks like this:

side by side quadrants of watch face examples showing using the new color functions in applying color transforms to a Stroke in an Arc
Using the new color functions in applying color transforms to a Stroke in an Arc.

As well as being able to provide raw colors and weights to these functions, they can also be used with values from complications, such as HR, temperature or steps goal. For example, to use the color range specified in a goal complication:

<Transform target="color"
    value="extractColorFromColors(
        [COMPLICATION.GOAL_PROGRESS_COLORS],
        [COMPLICATION.GOAL_PROGRESS_COLOR_INTERPOLATE],
        [COMPLICATION.GOAL_PROGRESS_VALUE] /    
            [COMPLICATION.GOAL_PROGRESS_TARGET_VALUE]
)"/>

Introducing the Reference element

Available from Watch Face Format v4

The new Reference element allows you to refer to any transformable attribute from one part of your watch face scene in other parts of the scene tree.

In our UV index example above, we’d also like the text labels to use the same color scheme.

We could perform the same color transform calculation as on our Arc, using [WEATHER.UV_INDEX], but this is duplicative work which could lead to inconsistencies, for example if we change the exact color hues in one place but not the other.

Returning to the Arc definition, let’s create a Reference to the color:

<Arc centerX="0" centerY="0" height="420" width="420"
  startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
  <Transform target="endAngle"
    value="165 - 40 * (clamp([WEATHER.UV_INDEX], 0.0, 11.0) / 11.0)" />
  <Stroke thickness="20" color="#ffffff" cap="ROUND">
    <Reference source="color" name="uv_color" defaultValue="#ffffff" />
    <Transform target="color"
      value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
  </Stroke>
</Arc>

The color of the Arc is calculated from the relatively complex extractColorFromWeightedColors function. To avoid repeating this elsewhere in our watch face, we have added a Reference element, which takes as its source the Stroke color.

Let’s now look at how we can consume this value in a PartText elsewhere in the watch face. We gave the Reference the name uv_color, so we can simply refer to this in any expression:

<PartText x="0" y="225" width="450" height="225">
  <TextCircular centerX="225" centerY="0" width="420" height="420"
    startAngle="120" endAngle="90"
    align="START" direction="COUNTER_CLOCKWISE">
    <Font family="SYNC_TO_DEVICE" size="24">
      <Transform target="color" value="[REFERENCE.uv_color]" />
      <Template>%d<Parameter expression="[WEATHER.UV_INDEX]" /></Template>
    </Font>
  </TextCircular>
</PartText>
<!-- Similar PartText here for the "UV:" label -->

As a result, the color of the Arc and the UV numeric value are now coordinated:

side by side quadrants of watch face examples showing Coordinating colors across elements using the Reference element
Coordinating colors across elements using the Reference element

For more details on how to use the Reference element, refer to the Reference guidance.

Text autosizing

Available from Watch Face Format v3

Sometimes the exact length of the text to be shown on the watch face can vary, and as a developer you want to balance being able to display text that is both legible, but also complete.

Auto-sizing text can help solve this problem, and can be enabled through the isAutoSize attribute introduced to the Text element:

<Text align="CENTER" isAutoSize="true">

Having set this attribute, text will then automatically fit the available space, starting at the maximum size specified in your Font element, and with a minimum size of 12.

As an example, step count could range from tens or hundreds through to many thousands, and the new isAutoSize attribute enables best use of the available space for every possible value:

side by side examples of text sizing adjustments on watch face using isAutosize
Making the best use of the available text space through isAutoSize

For more details on isAutoSize, see the Text reference.

Android Studio support

For developers working in Android Studio, we’ve added support to make working with Watch Face Format easier, including:

    • Run configuration support
    • Auto-complete and resource reference
    • Lint checking

This is available from Android Studio version 2025.1.1 Canary 10.

Learn More

To learn more about building watch faces, please take a look at the Watch Face Format guidance and reference documentation.

We’ve also recently launched a codelab for Watch Face Format and have updated samples on GitHub to showcase new features. The issue tracker is available for providing feedback.

We're excited to see the watch face experiences that you create and share!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


* Google Play data for the period 2024-03-24 to 2025-03-23

In-App Ratings and Reviews for TV

Posted by Paul Lammertsma – Developer Relations Engineer

Ratings and reviews are essential for developers, offering quantitative and qualitative feedback on user experiences. In 2022, we enhanced the granularity of this feedback by segmenting these insights by countries and form factors.

Now, we're extending the In-App Ratings and Reviews API to TV to allow developers to prompt users for ratings and reviews directly from Google TV.

Ratings and reviews on Google TV

Ratings and reviews entry point for the JetStream sample app on TV

Users can now see rating averages, browse reviews, and leave their own review directly from an app's store listing on Google TV.

Ratings and written reviews input screen on TV

Users can interact with in-app ratings and reviews on their TVs by doing the following:

    • Select ratings using the remote control D-pad.
    • Provide optional written reviews using Gboard’s on-screen voice input, or by easily typing from their phone.
    • Send mobile notifications to themselves to complete their TV app review directly on their phone.

User instructions for submitting TV app ratings and reviews on mobile

Additionally, users can leave reviews for other form factors directly from their phone by simply selecting the device chip when submitting an app rating or writing a review.

We've already seen a considerable lift in app ratings on TV since bringing these changes to Google TV, and now, we're making it possible for developers to trigger a ratings prompt as well.

Before we look at the integration, let's first carefully consider the best time to request a review prompt. First, identify optimal moments within your app to request user feedback, ensuring prompts appear only when the UI is idle to prevent interruption of ongoing content.

In-App Review API

Integrating the Google Play In-App Review API is the same as on mobile and it's only a couple of method calls:

val manager = ReviewManagerFactory.create(context)
manager.requestReviewFlow().addOnCompleteListener { task ->
    if (task.isSuccessful) {
        // We got the ReviewInfo object
        val reviewInfo = task.result
        manager.launchReviewFlow(activity, reviewInfo)
    } else {
        // There was some problem, log or handle the error code
        @ReviewErrorCode val reviewErrorCode =
            (task.getException() as ReviewException).errorCode
    }
}

First, invoke requestReviewFlow() to obtain a ReviewInfo object which is used to launch the review flow. You must include an addOnCompleteListener() not just to obtain the ReviewInfo object, but also to monitor for any problems triggering this flow, such as the unavailability of Google Play on the device. Note that ReviewInfo does not offer any insights on whether or not a prompt appeared or which action the user took if a prompt did appear.

The challenge is to identify when to trigger launchReviewFlow(). Track user actions—identifying successful journeys and points where users encounter issues—so you can be confident they had a delightful experience in your app.

When calling launchReviewFlow(), you may optionally also include an addOnCompleteListener() to ensure your app resumes its flow once the returned task is completed.
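
A minimal sketch of that continuation looks like this:

// launchReviewFlow() completes regardless of whether the user actually left a
// review, so resume your app's flow from the completion listener.
manager.launchReviewFlow(activity, reviewInfo).addOnCompleteListener {
    // Continue with the next step in your user journey, e.g. resume playback.
}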

Note that due to throttling of how often users are presented with this prompt, there are no guarantees that the ratings dialog will appear when requesting to start this flow. For best practices, check this guide on when to request an in-app review.

Get started with In-App Reviews on Google TV

You can get a head start today by following these steps:

    1. Identify successful journeys for users, like finishing a movie or TV show season.
    2. Identify poor experiences that should be avoided, like buffering or playback errors.
    3. Integrate the Google Play In-App Review API to trigger review requests at optimal moments within the user journey.
    4. Test your integration by following the testing guide.
    5. Publish your app and continuously monitor your ratings by device type in the Play Console.

We're confident this integration enables you to elevate your Google TV app ratings and empowers your users to share valuable feedback.

Play Console Ratings graphic

Resources

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

New in-car app experiences

Posted by Ben Sagmoe - Developer Relations Engineer

The in-car experience continues to evolve rapidly, and Google remains committed to pushing the boundaries of what's possible. At Google I/O 2025, we're excited to unveil the latest advancements for drivers, car manufacturers, and developers, furthering our goal of a safe, seamless, and helpful connected driving experience.

Today's car cabins are increasingly digital, offering developers exciting new opportunities with larger displays and more powerful computing. Android Auto is now supported in nearly all new cars sold, with almost 250 million compatible vehicles on the road.

We're also seeing significant growth in cars powered by Android Automotive OS with Google built-in. Over 50 models are currently available, with more launching this year. This growth is fueled by a thriving app ecosystem, including over 300 apps already available on the Play Store. These include apps optimized for a safe and seamless experience while driving as well as entertainment apps for while you're parked and waiting in your car—many of which are adaptive mobile apps that have been seamlessly brought to cars through the Car Ready Mobile Apps Program.

A vibrant developer community is essential to delivering these innovative in-car experiences utilizing the different screens within the car cabin. This past year, we've focused on key areas to help empower developers to build more differentiated experiences in cars across both platforms, as we embark on the Gemini era in cars!

Gemini for Cars

Exciting news for in-car experiences: Gemini, Google's advanced AI, is coming to vehicles! This unlocks a new era of safe and helpful interactions on the go.

Gemini enables natural voice conversations and seamless multitasking, empowering drivers to get more done simply by speaking naturally. Imagine effortlessly finding charging stations or navigating to a location pulled directly from an email, all with just your voice.

You can learn how to leverage Gemini's potential to create engaging in-car experiences in your app.

Navigation apps can integrate with Gemini using three core intent formats, allowing you to start navigation, display relevant search results, and execute custom actions, such as enabling users to report incidents like traffic congestion using their voice.

Gemini for cars will be rolling out in the coming months. Get ready to build the next generation of in-car AI experiences!

New developer programs and tools

table of app categories showing availability in android Auto and cars with Google built-in, including media, navigation, point-of-interest, internet of things, weather, video, browsers, games, and communication such as messaging and voip

Last year, we introduced car app quality tiers to inspire developers to create high quality in-car experiences. By developing your app in compliance with the Car ready tier, you can bring video, gaming, or browser apps to run while parked in cars with Google built-in with almost no additional effort. Learn more about Car Ready Mobile Apps.

Your app can further shine in cars within the Car optimized and Car differentiated tiers to unlock experiences while the car is in motion, and also when transitioning between parked and driving modes, while utilizing the different screens within the modern car cabin. Check the car app quality guidelines for details.

To start with, across both Android Auto and for cars with Google built-in, we've made some exciting improvements for Car App Library:

    • The Weather app category has graduated from beta: any developer can now publish weather apps to production tracks on both Android Auto and cars with Google Built-in. Before you publish your app, check that it meets the quality guidelines for weather apps.


    • Two new templates, the SectionedItemTemplate and MediaPlaybackTemplate, are now available in the Car App Library 1.8 alpha release for use on Android Auto. These templates are a great fit for building templated media apps, allowing for increased customization in layout and browsing structure.

      example of sectioneditemtemplate on the left and mediaplaybacktemplate on the right

On Android Auto, many new app categories and capabilities are now in beta:

    • We are adding support for Building media apps with the Car App Library, enabling media app developers to build both richer and more complete experiences that users are used to on their phones. During beta, developers can build and publish media apps built using the Car App Library to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta. 

    • The communications category is in beta. We've simplified calling integration for calling apps by utilizing the CallsManager Jetpack API. Together with the templates provided by the Car App Library, this enables communications apps to build features like full message history, upcoming meetings list, rich in-call views, and more. During beta, developers can build and publish communications apps to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.

    • Games are now supported in Android Auto, while parked, on phones running Android 15 and above. You can already find some popular titles like Angry Birds 2, Farm Heroes Saga, Candy Crush Soda Saga and Beach Buggy Racing 2. The Games category is in Beta and developers can publish games to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.

Finally, we have further simplified the building, testing, and distribution experience for developers building apps for Android Automotive OS cars with Google built-in.

The road ahead

You can look forward to more updates later this year, including:

    • Video apps will be supported on Android Auto, starting with phones running Android 16 on select compatible cars. If your app is already adaptive, enabling your app experience while parked only requires minimal steps to distribute to cars.

    • For Android Automotive OS cars running Android 14+ with Google built-in, we are working with car manufacturers to add additional app compatibility, to enable thousands of adaptive mobile apps in the next phase of the Car Ready Mobile Apps Program.

    • Updated design documentation that visualizes car app quality guidelines and integration paths to simplify designing your app for cars.

    • Google Play Services for cars with Google built-in are expanding to bring them on par with mobile, including:
        • Passkeys and Credential Manager APIs for a more seamless user sign-in experience.
        • Quick Share, which will enable easy cross-device sharing from phone to car.

    • Pre-launch reports for Android Automotive OS are coming soon to the Play Console, helping you ensure app quality before distributing your app to cars.

Be sure to keep up to date through goo.gle/cars-whats-new on these features and more as we continuously invest in the future of Android in the car. Stay tuned for more resources to help you build innovative and engaging experiences for drivers and passengers.

Ready to publish your car app? Check our guidance for distributing to cars.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

Announcing Jetpack Navigation 3

Posted by Don Turner - Developer Relations Engineer

Navigating between screens in your app should be simple, shouldn't it? However, building a robust, scalable, and delightful navigation experience can be a challenge. For years, the Jetpack Navigation library has been a key tool for developers, but as the Android UI landscape has evolved, particularly with the rise of Jetpack Compose, we recognized the need for a new approach.

Today, we're excited to introduce Jetpack Navigation 3, a new navigation library built from the ground up specifically for Compose. For brevity, we'll just call it Nav3 from now on. This library embraces the declarative programming model and Compose state as fundamental building blocks.

Why a new navigation library?

The original Jetpack Navigation library (sometimes referred to as Nav2 as it's on major version 2) was initially announced back in 2018, before AndroidX and before Compose. While it served its original goals well, we heard from you that it had several limitations when working with modern Compose patterns.

One key limitation was that the back stack state could only be observed indirectly. This meant there could be two sources of truth, potentially leading to an inconsistent application state. Also, Nav2's NavHost was designed to display only a single destination – the topmost one on the back stack – filling the available space. This made it difficult to implement adaptive layouts that display multiple panes of content simultaneously, such as a list-detail layout on large screens.

illustration of single pane and two-pane layouts showing list and detail features
Figure 1. Changing from single pane to multi-pane layouts can create navigational challenges

Founding principles

Nav3 is built upon principles designed to provide greater flexibility and developer control:

    • You own the back stack: You, the developer, not the library, own and control the back stack. It's a simple list which is backed by Compose state. Specifically, Nav3 expects your back stack to be SnapshotStateList<T> where T can be any type you choose. You can navigate by adding or removing items (Ts), and state changes are observed and reflected by Nav3's UI.
    • Get out of your way: We heard that you don't like a navigation library to be a black box with inaccessible internal components and state. Nav3 is designed to be open and extensible, providing you with building blocks and helpful defaults. If you want custom navigation behavior you can drop down to lower layers and create your own components and customizations.
    • Pick your building blocks: Instead of embedding all behavior within the library, Nav3 offers smaller components that you can combine to create more complex functionality. We've also provided a "recipes book" that shows how to combine components to solve common navigation challenges.

illustration of the Nav3 display observing changes to the developer-owned back stack
Figure 2. The Nav3 display observes changes to the developer-owned back stack.

Key features

    • Adaptive layouts: A flexible layout API (named Scenes) allows you to render multiple destinations in the same layout (for example, a list-detail layout on large screen devices). This makes it easy to switch between single and multi-pane layouts.
    • Modularity: The API design allows navigation code to be split across multiple modules. This improves build times and allows clear separation of responsibilities between feature modules.

moving image demonstrating custom animations and predictive back features on a mobile device
Figure 3. Custom animations and predictive back are easy to implement, and easy to override for individual destinations.

Basic code example

To give you an idea of how Nav3 works, here's a short code sample.

// Define the routes in your app and any arguments.
data object Home
data class Product(val id: String)

// Create a back stack, specifying the route the app should start with.
val backStack = remember { mutableStateListOf<Any>(Home) }

// A NavDisplay displays your back stack. Whenever the back stack changes, the display updates.
NavDisplay(
    backStack = backStack,

    // Specify what should happen when the user goes back
    onBack = { backStack.removeLastOrNull() },

    // An entry provider converts a route into a NavEntry which contains the content for that route.
    entryProvider = { route ->
        when (route) {
            is Home -> NavEntry(route) {
                Column {
                    Text("Welcome to Nav3")
                    Button(onClick = {
                        // To navigate to a new route, just add that route to the back stack
                        backStack.add(Product("123"))
                    }) {
                        Text("Click to navigate")
                    }
                }
            }
            is Product -> NavEntry(route) {
                Text("Product ${route.id}")
            }
            else -> NavEntry(Unit) { Text("Unknown route: $route") }
        }
    }
)

Get started and provide feedback

To get started, check out the developer documentation, plus the recipes repository which provides examples for:

    • common navigation UI, such as a navigation rail or bar
    • conditional navigation, such as a login flow (see the sketch below for the general idea)
    • custom layouts using Scenes
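
As a taste of the conditional navigation recipe's general idea, the sketch below reuses the building blocks from the sample above and simply swaps which back stack is shown based on login state. It is an illustration of the concept, not the code from the recipes repository:

// Sketch: choose between two developer-owned back stacks depending on login state.
data object Login

@Composable
fun AppNavigation(isLoggedIn: Boolean) {
    val loginBackStack = remember { mutableStateListOf<Any>(Login) }
    val mainBackStack = remember { mutableStateListOf<Any>(Home) }
    val backStack = if (isLoggedIn) mainBackStack else loginBackStack

    NavDisplay(
        backStack = backStack,
        onBack = { backStack.removeLastOrNull() },
        entryProvider = { route ->
            when (route) {
                is Login -> NavEntry(route) { Text("Please log in") }
                is Home -> NavEntry(route) { Text("Welcome back") }
                else -> NavEntry(Unit) { Text("Unknown route: $route") }
            }
        }
    )
}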

We plan to provide code recipes, documentation and blogs for more complex use cases in future.

Nav3 is currently in alpha, which means that the API is liable to change based on feedback. If you have any issues, or would like to provide feedback, please file an issue.

Nav3 offers a flexible and powerful foundation for building modern navigation in your Compose applications. We're really excited to see what you build with it.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

Updates to the Android XR SDK: Introducing Developer Preview 2

Posted by Matthew McCullough – VP of Product Management, Android Developer

Since launching the Android XR SDK Developer Preview alongside Samsung, Qualcomm, and Unity last year, we’ve been blown away by all of the excitement we’ve been hearing from the broader Android community. Whether it's through coding live-streams or local Google Developer Group talks, it's been an outstanding experience participating in the community to build the future of XR together, and we're just getting started.

Today we’re excited to share an update to the Android XR SDK: Developer Preview 2, packed with new features and improvements to help you develop helpful and delightful immersive experiences with familiar Android APIs, tools and open standards created for XR.

At Google I/O, we have two technical sessions related to Android XR. The first, Building differentiated apps for Android XR with 3D content, covers many features present in Jetpack SceneCore and ARCore for Jetpack XR. The second, The future is now, with Compose and AI on Android XR, covers creating XR-differentiated UI and our vision for the intersection of XR with cutting-edge AI capabilities.

Android XR sessions at Google I/O 2025
Building differentiated apps for Android XR with 3D content and The future is now, with Compose and AI on Android XR

What’s new in Developer Preview 2

Since the release of Developer Preview 1, we’ve been focused on making the APIs easier to use and adding new immersive Android XR features. Your feedback has helped us shape the development of the tools, SDKs, and the platform itself.

With the Jetpack XR SDK, you can now play back 180° and 360° videos, which can be stereoscopic by encoding with the MV-HEVC specification or by encoding view-frames adjacently. The MV-HEVC standard is optimized and designed for stereoscopic video, allowing your app to efficiently play back immersive videos at great quality. Apps built with Jetpack Compose for XR can use the SpatialExternalSurface composable to render media, including stereoscopic videos.

Using Jetpack Compose for XR, you can now also define layouts that adapt to different XR display configurations. For example, use a SubspaceModifier to specify the size of a Subspace as a percentage of the device’s recommended viewing size, so a panel effortlessly fills the space it's positioned in.

Material Design for XR now supports more component overrides for TopAppBar, AlertDialog, and ListDetailPaneScaffold, helping your large-screen enabled apps that use Material Design effortlessly adapt to the new world of XR.

An app adapts to XR using Material Design for XR with the new component overrides
An app adapts to XR using Material Design for XR with the new component overrides

In ARCore for Jetpack XR, you can now track hands after requesting the appropriate permissions. Hands are a collection of 26 posed hand joints that can be used to detect hand gestures and bring a whole new level of interaction to your Android XR apps:

moving image demonstrates how hands bring a natural input method to your Android XR experience.
Hands bring a natural input method to your Android XR experience.

For more guidance on developing apps for Android XR, check out our Android XR Fundamentals codelab, the updates to our Hello Android XR sample project, and a new version of JetStream with Android XR support.

The Android XR Emulator has also received updates to stability, support for AMD GPUs, and is now fully integrated within the Android Studio UI.

the Android XR Emulator in Android Studio
The Android XR Emulator is now integrated in Android Studio

Developers using Unity have already successfully created and ported existing games and apps to Android XR. Today, you can upgrade to the Pre-Release version 2 of the Unity OpenXR: Android XR package! This update adds many performance improvements such as support for Dynamic Refresh Rate, which optimizes your app’s performance and power consumption. Shaders made with Shader Graph now support SpaceWarp, making it easier to use SpaceWarp to reduce compute load on the device. Hand meshes are now exposed with occlusion, which enables realistic hand visualization.

Check out Unity’s improved Mixed Reality template for Android XR, which now includes support for occlusion and persistent anchors.

We recently launched Android XR Samples for Unity, which demonstrate capabilities on the Android XR platform such as hand tracking, plane tracking, face tracking, and passthrough.

moving image of Google’s open-source Unity samples demonstrating platform features and showing how they’re implemented
Google’s open-source Unity samples demonstrate platform features and show how they’re implemented

Firebase AI Logic for Unity is now in public preview! This makes it easy for you to integrate gen AI into your apps, enabling the creation of AI-powered experiences with Gemini and Android XR. Firebase AI Logic fully supports Gemini's capabilities, including multimodal input and output, and bi-directional streaming for immersive conversational interfaces. Built with production readiness in mind, Firebase AI Logic is integrated with core Firebase services like App Check, Remote Config, and Cloud Storage for enhanced security, configurability, and data management. Learn more about this on the Firebase blog or go straight to the Gemini API using Vertex AI in Firebase SDK documentation to get started.

Continuing to build the future together

Our commitment to open standards continues with the glTF Interactivity specification, developed in collaboration with the Khronos Group, which will be supported in glTF models rendered by Jetpack XR later this year. Models using the glTF Interactivity specification are self-contained interactive assets that can have many pre-programmed behaviors, like rotating objects on a button press or changing the color of a material over time.

Android XR will be available first on Samsung’s Project Moohan, launching later this year. Soon after, our partners at XREAL will release the next Android XR device. Codenamed Project Aura, it’s a portable and tethered device that gives users access to their favorite Android apps, including those that have been built for XR. It will launch as a developer edition, specifically for you to begin creating and experimenting. The best news? With the familiar tools you use to build Android apps today, you can build for these devices too.

product image of XREAL’s Project Aura against a nebulous black background
XREAL’s Project Aura

The Google Play Store is also getting ready for Android XR. It will list supported 2D Android apps on the Android XR Play Store when it launches later this year. If you are working on an Android XR differentiated app, you can get it ready for the big launch and be one of the first differentiated apps on the Android XR Play Store.

And we know many of you are excited for the future of Android XR on glasses. We are shaping the developer experience now and will share more details on how you can participate later this year.

To get started creating and developing for Android XR, check out developer.android.com/develop/xr where you will find all of the tools, libraries, and resources you need to work with the Android XR SDK. In particular, try out our samples and codelabs.

We welcome your feedback, suggestions, and ideas as you’re helping shape Android XR. Your passion, expertise, and bold ideas are vital as we continue to develop Android XR together. We look forward to seeing your XR-differentiated apps when Android XR devices launch later this year!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


Google I/O 2025: What’s new in Android development tools

Posted by Mayank Jain – Product Manager, Android Studio

Android Studio continues to advance Android development by empowering developers to build better app experiences, faster. Our focus has been on improving AI-driven functionality with Gemini, streamlining UI creation and testing, and helping you future-proof apps for the evolving Android ecosystem. These innovations accelerate development cycles, improve app quality, and help you stay ahead in the fast-paced world of mobile development.

You can check out the What’s new in Android Developer Tools session at Google I/O 2025 to see some of the new features in action or better yet, try them out yourself by downloading Android Studio Narwhal Feature Drop (2025.2.1) in the preview release channel. Here’s a look at our latest developments:

Get the latest Gemini 2.5 Pro model in Android Studio

The power of artificial intelligence through Gemini is now deeply integrated into Android Studio, helping you at all stages of Android app development. Now with access to Gemini 2.5 Pro, we're continuing to look for new ways to use AI to supercharge Android development — and help you build better app experiences, faster.

Journeys for Android Studio

We’re also introducing agentic AI with Gemini in Android Studio. Testing your app is now much easier when you create journeys: just describe the actions and assertions in natural language for the user journeys you want to test, and Gemini performs the tests for you. Creating journeys lets you test your app’s critical user journeys across various devices without writing extensive code. You can then run these tests on local physical or virtual Android devices and validate that the test worked as intended by reviewing detailed results directly within the IDE. Although the feature is experimental, the goal is to increase the speed at which you can ship high-quality code, while significantly reducing the amount of time you spend manually testing, validating, or reproducing issues.

moving image of Gemini testing an app in Android Studio
Journeys for Android Studio uses Gemini to test your app.


Suggested fixes for crashes with Gemini

The App Quality Insights panel has a great new feature. Crash insights now analyzes your app's source code referenced from the crash and not only offers a comprehensive analysis and explanation of the crash; in some cases it even offers a source fix. With just a few clicks, you can review the changes, accept the code suggestions, and push the changes to your source control. Now you can determine the root cause of a crash and fix it much faster!

screenshot of crash analysis with Gemini in Android Studio
Crash analysis with Gemini

AI features in Studio Labs (stable releases only)

We’ve heard feedback that developers want to access AI features in stable channels as soon as possible. Starting with the Narwhal stable release, you can discover and try out the latest AI experimental features through the Studio Labs menu in the Settings menu. You can get a first look at AI experiments, share your feedback, and help us bring them into the IDE you use every day. Go to the Studio Labs tab in Settings and enable the features you would like to start using. These AI features are automatically enabled in canary releases and no action is required.

screenshot of AI features in Studio Labs
AI features in Studio Labs

    • Compose preview generation with Gemini

    • Gemini can automatically generate Jetpack Compose preview code saving you time and effort. You can access this feature by right-clicking within a composable and navigating to Gemini > Generate Compose Preview or Generate Compose Preview for this file, or by clicking the link in an empty preview panel. The generated preview code is presented in a diff view that enables you to quickly accept, edit, or reject the suggestions, providing a faster way to visualize your composables.

      moving image of compose preview generation with gemini in Android Studio
      Compose Preview generation with Gemini

    • Transform UI with Gemini

    • You can now transform UI code within the Compose Preview environment using natural language directly in the preview. To use it, right click in the Compose Preview and select "Transform UI With Gemini". Then enter your natural language requests, such as "Center align these buttons," to guide Gemini in adjusting your layout or styling, or select specific UI elements in the preview for better context. Gemini will then edit your Compose UI code in place, which you can review and approve, speeding up the UI development workflow.

      side by side screenshots showing transforming UI with Gemini in Android Studio
      Transform UI with Gemini

    • Image attachment in Gemini

    • You can now attach image files and provide additional information along with your prompt. For example: you can attach UI mock-ups or screenshots to tell Gemini context about your app’s layout. Consequently, Gemini can generate Compose code based on a provided image, or explain the composables and data flow of a UI screenshot.

screenshot of image attachment and preview generation via Gemini in Android Studio
      Image attachment and preview generation via Gemini in Android Studio

    • @File context in Gemini

    • You can now attach your project files as context in chat interactions with Gemini in Android Studio. This lets you quickly reference files in your prompts for Gemini. In the Gemini chat input, type @ to bring up a file completion menu and select files to attach. You can also click the Context drop-down to see which files were automatically attached by Gemini. This gives you more control over the context sent to Gemini.

      screenshot of @File context in Gemini in Android Studio
      @File context in Gemini

Rules in Prompt Library

Rules in Gemini let you define preferred coding styles or output formats within the Prompt Library. You can also mention your preferred tech stack and languages. When you set these preferences once, they are automatically applied to all subsequent prompts sent to Gemini. Rules help the AI understand project standards and preferences for more accurate and tailored code assistance. For example, you can create a rule such as “Always give me concise responses in Kotlin.”

prompt library in Android Studio
Prompt Library Improvements

Gemini in Android Studio for businesses

Gemini in Android Studio for businesses is now available. It provides all the benefits of Gemini in Android Studio, plus enterprise-grade privacy and security features backed by Google Cloud — giving your team the confidence they need to deploy AI at scale while keeping their data protected.

Developers and admins can unlock these features and benefits by subscribing to Gemini Code Assist Standard or Enterprise editions. Discover the full list of Gemini in Android Studio for businesses features available for your organization.

Improved tools for creating great user experiences

Elevate your Compose UI development with the latest Android Studio enhancements.

Compose preview improvements

Compose preview interaction is now more efficient with the latest navigation improvements. Click on the preview name to jump to the preview definition or click the individual component to jump to the function where it’s defined. Hover states provide immediate visual feedback as you mouse over a preview frame. Improved keyboard arrow navigation eases movement through multiple previews, enabling faster UI iteration and refinement. Additionally, the Compose preview picker is now also available in the stable release.

moving image of compose preview navigation improvements in Android Studio
Compose preview navigation improvements

Compose preview picker in Android Studio
Compose preview picker

Resizable Previews

While in Compose Preview’s focus mode in Android Studio, you can now resize the preview window by dragging its edges. This gives you instant visual feedback on how your UI adapts to different screen sizes, ensuring responsiveness and visual consistency. This rapid iteration helps create UIs that look great on any Android device.

Resizable Preview

Embedded Android XR Emulator

The Android XR Emulator now launches by default in the embedded state. You can now deploy your application, navigate the 3D space and use the Layout Inspector directly inside Android Studio, streamlining your development flow.

Embedded XR emulator in Android Studio
Embedded XR Emulator

Improved tools for future-proofing and testing your Android apps

We’ve enhanced some of your favorite features so that you can test more confidently, future-proof your apps, and ensure app compatibility across a wide range of devices and Android versions.

Streamlined testing with Backup and Restore support

Android Studio offers built-in Backup and Restore support by letting you trigger app backups on connected devices directly from the Running Devices window. You can also configure your Run/Debug settings to automatically restore from a previous backup when launching your app. This simplifies the process of validating your app's Backup and Restore implementation and speeds up development by reducing manual setup for testing.

Streamlined testing with backup and restore support in Android Studio
Streamlined testing with Backup and Restore support

Android’s transition to 16 KB Page Size

The underlying architecture of Android is evolving, and a key step forward is the transition to 16 KB page sizes. This fundamental change requires all Android apps with native code or dependencies to be recompiled for compatibility. To help you navigate this transition smoothly, Android Studio now offers proactive warnings when building APKs or Android App Bundles that are incompatible with 16 KB devices. Using the APK Analyzer, you can also find out which libraries are incompatible with 16 KB devices. To test your apps in this new environment, a dedicated 16 KB emulator target is also available in Android Studio alongside existing 4 KB images.

Android’s transition to 16 KB page size in Android Studio
Android’s transition to 16 KB page size
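If your app includes native code, one common step is aligning your libraries for 16 KB pages. Here is a sketch of the Gradle configuration, assuming a CMake-based build with NDK r27 or newer (older NDKs can pass the linker flag -Wl,-z,max-page-size=16384 instead):

// app/build.gradle.kts — sketch for a CMake-based native build.
android {
    defaultConfig {
        externalNativeBuild {
            cmake {
                // Produces 16 KB-aligned ELF segments so native libraries load on
                // devices that use 16 KB page sizes.
                arguments += listOf("-DANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES=ON")
            }
        }
    }
}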

Backup and Sync your Studio settings

When you sign in with your Google account or a JetBrains account in Android Studio, you can now sync your customizations and preferences across all installs and restore preferences automatically on remote Android Studio instances. Simply select “Enable Backup and Sync” while you’re logging in to Android Studio, or from the Settings > Backup and Sync page, and follow the prompts.

Backup and sync settings in Android Studio
Backup and Sync your Studio settings

Increasing developer productivity with Android’s Kotlin Multiplatform improvements

Kotlin Multiplatform (KMP) enables teams to reach new audiences across Android and iOS with less development time. Usage has been growing in the developer community, with apps such as Google Docs now using it in production. We’ve released new Android Studio KMP project templates, updated Jetpack libraries and new codelabs (Get Started with KMP and Migrate Existing Apps to Room KMP) to help developers who are looking to get started with KMP.
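If you're new to KMP, the core idea is writing shared business logic once in common Kotlin and keeping platform-specific pieces behind expect/actual declarations. A minimal illustration, with the standard source-set layout indicated in comments:

// commonMain: shared code compiled for both Android and iOS.
expect fun platformName(): String

fun greeting(): String = "Hello from ${platformName()}!"

// androidMain: Android-specific implementation.
actual fun platformName(): String = "Android ${android.os.Build.VERSION.SDK_INT}"

// iosMain: iOS-specific implementation (Kotlin/Native UIKit interop).
actual fun platformName(): String =
    platform.UIKit.UIDevice.currentDevice.systemName()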

Experimental features and features coming soon to Android Studio

Android Studio Cloud (experimental)

Android Studio Cloud is now available as an experimental public preview, accessible through Firebase Studio. This service streams a Linux virtual machine running Android Studio directly to your web browser, enabling Android application development from anywhere with an internet connection. Get started quickly with dedicated workspaces featuring pre-downloaded Android SDK components. Explore sample projects or seamlessly access your existing Android app projects from GitHub without a local installation. Please note that Android Studio Cloud is currently in an experimental phase. Features and capabilities are subject to significant change, and users may encounter known limitations.

Android Studio Cloud

Version Upgrade Agent (coming soon)

The Version Upgrade Agent, as part of Gemini in Android Studio, is designed to save you time and effort by automating your dependency upgrades. It intelligently analyzes your Android project, parses the release notes for included libraries, and proposes updates directly from your libs.versions.toml file or the refactoring menu (right-click > Refactor > Update dependencies). The agent automatically updates dependencies to the latest compatible version, builds the project, fixes any errors, and repeats until all errors are fixed. Once the dependencies are upgraded, the agent generates a report showing the changes it made, as well as a high level summary highlighting the changes included in the updated libraries.

Version Upgrade Agent in Android Studio
Version Upgrade Agent

Agent Mode (coming soon)

Agent Mode is a new autonomous AI feature using Gemini, designed to handle complex, multi-stage development tasks that go beyond typical AI assistant capabilities, invoking multiple tools to accomplish tasks on your behalf.

You can describe a complex goal, like integrating a new API, and the agent will formulate an execution plan that spans across files in your project — adding necessary dependencies, editing files, and iteratively fixing bugs. This feature aims to empower all developers to tackle intricate challenges and accelerate the building and prototyping process. You can access it via the Gemini chat window in Android Studio.

Agent Mode in Android Studio
Agent Mode

Play Policy Insights beta in Android Studio (coming soon)

Android Studio now includes richer insights and guidance on Google Play policies that might impact your app. This information, available as lint checks, helps you build safer apps from the start, preventing issues that could disrupt your launch process and cost more time and resources to fix later on. These lint checks present an overview of the policy, dos and don'ts, and links to Play policy pages where you can find more information.

Play Policy Insights beta in Android Studio
Play Policy Insights beta in Android Studio

IntelliJ Platform Update (2025.1)

Here are some important IDE improvements in the IntelliJ IDEA 2025.1 platform release:

    • Kotlin K2 mode: Android Studio now supports Kotlin K2 mode in Android-specific features requiring language support such as Live Edit, Compose Preview and many more

    • Improved dependency resolution in Kotlin build scripts: Makes your Kotlin build scripts for Android projects more stable and predictable

    • Hints about code alterations by Kotlin compiler plugins: Gives you clearer insights into how plugins used in Android development modify your Kotlin code

    • Automatic download of library sources for Gradle projects: Simplifies debugging and understanding your Android project dependencies by providing immediate access to their source code

    • Support for Gradle Daemon toolchains: Helps prevent potential JVM errors during your Android project builds and ensures smoother synchronization

    • Automatic plugin updates: Keeps your Android development tools within IntelliJ IDEA up-to-date effortlessly

To Summarize

Android Studio Narwhal Feature Drop (2025.2.1) is now available in the Android Studio canary channel with some amazing features to help your Android development.

AI-powered development tools for Android

    • Journeys for Android Studio: Validate app flows easily using tests and assertions in natural language
    • Suggested fixes for crashes with Gemini: Determine the root cause of a crash and fix it much faster with Gemini
    • AI features in Studio Labs
        • Compose preview generation with Gemini: Generate Compose previews with Gemini's code suggestions
        • Transform UI with Gemini: Transform UI in Compose Preview with natural language, speeding development
        • Image attachment in Gemini: Attach images to Gemini for context-aware code generation
        • @File context in Gemini: Reference project files in Gemini chats for quick AI prompts
    • Rules in Prompt Library: Define preferred coding styles or output formats within the Prompt Library

Improved tools for creating great user experiences

    • Compose preview improvements: Navigate the Compose Preview using clickable names and components
    • Resizable preview: Instantly see how your Compose UI adapts to different screen sizes
    • Embedded XR Emulator: XR Emulator now launches by default in the embedded state

Improved tools for future-proofing and testing your Android apps

    • Streamlined testing with Backup and Restore support: Effortless app testing, trigger backups, auto-restore for faster validation
    • Android’s transition to 16 KB Page Size: Prepare for Android's 16KB page size with Studio's early warnings and testing
    • Backup and Sync your Studio settings: Sync Android Studio settings across devices and restore automatically for convenience
    • Increasing developer productivity with Android’s Kotlin Multiplatform improvements: Simplified cross-platform Android and iOS development with new tools

Experimental features and features coming soon to Android Studio

    • Android Studio Cloud (experimental): Develop Android apps from any browser with just an internet connection
    • Version Upgrade Agent (coming soon): Automated dependency updates save time and effort, ensuring projects stay current
    • Agent Mode (coming soon): Empowering developers to tackle multistage complex tasks that go beyond typical AI assistant capabilities
    • Play Policy Insights beta in Android Studio (coming soon): Insights and guidance on Google Play policies that might impact your app

How to get started

Ready to try the exciting new features in Android Studio?

You can download the canary version of Android Studio Narwhal Feature Drop (2025.1.2) today to incorporate these new features into your workflow or try the latest AI features using Studio Labs in the stable version of Android Studio Meerkat. You can also install them side by side by following these instructions.

As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


Google I/O 2025: Build adaptive Android apps that shine across form factors

Posted by Fahd Imtiaz – Product Manager, Android Developer

If your app isn’t built to adapt, you’re missing out on the opportunity to reach a giant swath of users across 500 million devices! At Google I/O this year, we are exploring how adaptive development isn’t just a good idea, but essential to building apps that shine across the expanding Android device ecosystem. This is your guide to meeting users wherever they are, with experiences that are perfectly tailored to their needs.

The advantage of building adaptive

In today's multi-device world, users expect their favorite applications to work flawlessly and intuitively, whether they're on a smartphone, tablet, or Chromebook. This expectation for seamless experiences isn't just about convenience; it's an important factor for user engagement and retention.

For example, users of entertainment apps (including Prime Video, Netflix, and Hulu) who use both phone and tablet spend almost 200% more time in-app (nearly 3x the engagement) than phone-only users in the US*.

Peacock, NBCUniversal’s streaming service, has seen a trend of users moving between mobile and large screens, and building adaptively enables a single build to work across different form factors.

“This allows Peacock to have more time to innovate faster and deliver more value to its customers.”
– Diego Valente, Head of Mobile, Peacock and Global Streaming

Adaptive Android development offers the strategic solution, enabling apps to perform effectively across an expanding array of devices and contexts through intelligent design choices that emphasize code reuse and scalability. With Android's continuous growth into new form factors and upcoming enhancements such as desktop windowing and connected displays in Android 16, an app's ability to seamlessly adapt to different screen sizes is becoming increasingly crucial for retaining users and staying competitive.

Beyond direct user benefits, designing adaptively also translates to increased visibility. The Google Play Store actively helps promote developers whose apps excel on different form factors. If your application delivers a great experience on tablets or is excellent on ChromeOS, users on those devices will have an easier time discovering your app. This creates a win-win situation: better quality apps for users and a broader audience for you.

examples of form factors across small phones, tablets, laptops, and auto

Latest in adaptive Android development from Google I/O

To help you more effectively build compelling adaptive experiences, we shared several key updates at I/O this year.

Build for the expanding Android device ecosystem

Your mobile apps can now reach users beyond phones on over 500 million active devices, including foldables, tablets, Chromebooks, and even compatible cars, with minimal changes. Android 16 introduces significant advancements in desktop windowing for a true desktop-like experience on large screens and when devices are connected to external displays. And, Android XR is opening a new dimension, allowing your existing mobile apps to be available in immersive virtual environments.

The mindset shift to Adaptive

With the expanding Android device ecosystem, adaptive app development is a fundamental strategy. It's about how the same mobile app runs well across phones, foldables, tablets, Chromebooks, connected displays, XR, and cars, laying a strong foundation for future devices and differentiating for specific form factors. You don't need to rebuild your app for each form factor; but rather make small, iterative changes, as needed, when needed. Embracing this adaptive mindset today isn't just about keeping pace; it's about leading the charge in delivering exceptional user experiences across the entire Android ecosystem.

examples of form factors including vr headset

Leverage powerful tools and libraries to build adaptive apps:

    • Compose Adaptive Layouts library: This library makes adaptive development easier by allowing your app code to fit into canonical layout patterns such as list-detail and supporting pane, which automatically reflow as your app is resized, flipped, or folded. In the 1.1 release, we introduced pane expansion, allowing users to resize panes. The Socialite demo app showcased how one codebase using this library can adapt across six form factors. New adaptation strategies like "Levitate" (elevating a pane, e.g., into a dialog or bottom sheet) and "Reflow" (reorganizing panes on the same level) were also announced in 1.2 (alpha). For XR, component overrides can automatically spatialize UI elements.

    • Jetpack Navigation 3 (Alpha): This new navigation library simplifies defining user journeys across screens with less boilerplate code, especially for multi-pane layouts in Compose. It helps handle scenarios where list and detail panes might be separate destinations on smaller screens but shown together on larger ones. Check out the new Jetpack Navigation library in alpha.

    • Jetpack Compose input enhancements: Compose's layered architecture, strong input support, and single location for layout logic simplify creating adaptive UIs. Upcoming in Compose 1.9 are right-click context menus and enhanced trackpad/mouse functionality.

    • Window Size Classes: Use window size classes for top-level layout decisions (see the sketch after this list). androidx.window 1.5 introduces two new width size classes – "large" (1200dp to 1600dp) and "extra-large" (1600dp and larger) – providing more granular breakpoints for large screens. This helps in deciding when to expand navigation rails or show three panes of content. Support for these new breakpoints was also announced in the Compose adaptive layouts library 1.2 alpha, along with design guidance.

    • Compose previews: Get quick feedback by visualizing your layouts across a wide variety of screen sizes and aspect ratios. You can also specify different devices by name to preview your UI on their respective sizes and with their inset values.

    • Testing adaptive layouts: Validating your adaptive layouts is crucial and Android Studio offers various tools for testing – including previews for different sizes and aspect ratios, a resizable emulator to test across different screen sizes with a single AVD, screenshot tests, and instrumental behavior tests. And with Journeys with Gemini in Android Studio, you can define tests using natural language for even more robust testing across different window sizes.
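As a minimal sketch of the window-size-class driven layout switch mentioned above (using the existing compact/medium/expanded breakpoints; TwoPaneLayout and SinglePaneLayout are hypothetical placeholders for your own composables):

import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.compose.runtime.Composable
import androidx.window.core.layout.WindowWidthSizeClass

@Composable
fun HomeScreen() {
    // Window size class is the recommended signal for top-level layout decisions.
    val widthClass = currentWindowAdaptiveInfo().windowSizeClass.windowWidthSizeClass
    when (widthClass) {
        WindowWidthSizeClass.EXPANDED -> TwoPaneLayout()  // e.g. list and detail side by side
        else -> SinglePaneLayout()                        // compact and medium widths
    }
}

// Hypothetical placeholders standing in for your own adaptive layouts.
@Composable fun TwoPaneLayout() { /* ... */ }
@Composable fun SinglePaneLayout() { /* ... */ }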

Ensuring app availability across devices

Avoid unnecessarily declaring required features (like specific cameras or GPS) in your manifest, as this can prevent your app from appearing in the Play Store on devices that lack those specific hardware components but could otherwise run your app perfectly.

Handling different input methods

Remember to handle various input methods like touch, keyboard, and mouse, especially with Chromebook detachables and connected displays.
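For example, a hardware keyboard shortcut can be handled directly in Compose alongside touch input. A minimal sketch (the onSearch callback is a hypothetical app action):

import androidx.compose.foundation.focusable
import androidx.compose.foundation.layout.Box
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.input.key.Key
import androidx.compose.ui.input.key.KeyEventType
import androidx.compose.ui.input.key.key
import androidx.compose.ui.input.key.onKeyEvent
import androidx.compose.ui.input.key.type

@Composable
fun SearchShortcutHandler(onSearch: () -> Unit, content: @Composable () -> Unit) {
    Box(
        modifier = Modifier
            .onKeyEvent { event ->
                // Trigger search when "/" is pressed on a hardware keyboard,
                // a common convention on large screens and desktops.
                if (event.type == KeyEventType.KeyDown && event.key == Key.Slash) {
                    onSearch()
                    true // consume the event
                } else {
                    false
                }
            }
            .focusable() // the node must be focusable to receive key events
    ) {
        content()
    }
}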

Prepare for orientation and resizability API changes in Android 16

Beginning in Android 16, for apps targeting SDK 36, manifest and runtime restrictions on orientation, resizability, and aspect ratio will be ignored on displays that are at least 600dp in both dimensions. To meet user expectations, your apps will need layouts that work for both portrait and landscape windows, and support resizing at runtime. There's a temporary opt-out manifest flag at both the application and activity level to delay these changes until targetSdk 37, and these changes currently do not apply to apps categorized as "Games". Learn more about these API changes.

Adaptive considerations for games

Games need to be adaptive too, and Unity 6 will add enhanced support for configuration handling, including APIs for screenshots, aspect ratio, and density. Success stories like Asphalt Legends Unite show significant user retention increases on foldables after implementing adaptive features.

examples of form factors including vr headset

Start building adaptive today

Now is the time to elevate your Android apps, making them intuitively responsive across form factors. With the latest tools and updates we’re introducing, you have the power to build experiences that seamlessly flow across all devices, from foldables to cars and beyond. Implementing these strategies will allow you to expand your reach and delight users across the Android ecosystem.

Get inspired by the “Adaptive Android development makes your app shine across devices” talk, and explore all the resources you’ll need to start your journey at developer.android.com/adaptive-apps!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


*Source: internal Google data