
Engage users on Google TV with excellent TV apps

Posted by Shobana Radhakrishnan - Senior Director of Engineering, Google TV, and Paul Lammertsma - Developer Relations Engineer, Android

Over the past year, Google TV and Android TV achieved over 270 million monthly active devices, establishing one of the largest smart TV OS footprints. Building on this momentum, we are excited to share new platform features and developer tools designed to help you increase app engagement with our expanding user base.

Google TV with Gemini capabilities

Earlier this year, we announced that we’ll bring Gemini capabilities to Google TV, so users can speak more naturally and conversationally to find what to watch and get answers to complex questions.

A user pulls up Gemini on a TV asking for kid-friendly movie recommendations similar to Jurassic Park. Gemini responds with several movie recommendations

After each movie or show search, our new voice assistant will suggest relevant content from your apps, significantly increasing the discoverability of your content.

A user pulls up Gemini on a TV asking for help explaining the solar system to a first grader. Gemini responds with YouTube videos to help explain the solar system

Plus, users can easily ask questions about topics they're curious about and receive insightful answers with supporting videos.

We’re so excited to bring this helpful and delightful experience to users this fall.

Video Discovery API

Today, we’ve also opened partner enrollment for our Video Discovery API.

Video Discovery optimizes Resumption, Entitlements, and Recommendations across all Google TV form factors to enhance the end-user experience and boost app engagement.

    • Resumption: Partners can now easily display a user's paused video within the 'Continue Watching' row from the home screen. This row is a prime location that drives 60% of all user interactions on Google TV.
    • Entitlements: Video Discovery streamlines entitlement management, which matches app content to user eligibility. Users appreciate this because they can enjoy personalized recommendations without needing to manually update all their subscription details. This allows partners to connect with users across multiple discovery points on Google TV.
    • Recommendations: Video Discovery even highlights personalized content recommendations based on content that users watched inside apps.

Partners can begin incorporating the Video Discovery API today, starting with resumption and entitlement integrations. Check out g.co/tv/vda to learn more.

Jetpack Compose for TV

Compose for TV 1.0 expands on the core and Material Compose libraries

Last year, we launched Compose for TV 1.0 beta, which lets you build beautiful, adaptive UIs across Android, including Android TV OS.

Now, Compose for TV 1.0 is stable, and expands on the core and Material Compose libraries. We’ve even seen how the latest release of Compose significantly improves app startup within our internal benchmarking mobile sample, with roughly a 20% improvement compared with the March 2024 release. Because Compose for TV builds upon these libraries, apps built with Compose for TV should also see better app startup times.

New to building with Compose, and not sure where to start? Our updated Jetcaster audio streaming app sample demonstrates how to use Compose across form factors. It includes a dedicated module for playing podcasts on TV by combining separate view models with shared business logic.

Focus Management Codelab

We understand that focus management can be challenging at times. That’s why we’ve published a codelab that reviews how to set initial focus, prepare for unexpected focus traversal, and efficiently restore focus.
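For instance, here's a minimal sketch of setting initial focus, assuming a simple screen where a primary button should be focused as soon as it appears:

@Composable
fun PlayScreen(onPlay: () -> Unit) {
    // FocusRequester lets us programmatically direct focus to this button
    val focusRequester = remember { FocusRequester() }

    Button(
        onClick = onPlay,
        modifier = Modifier.focusRequester(focusRequester),
    ) {
        Text("Play")
    }

    // Request initial focus once the screen enters composition
    LaunchedEffect(Unit) {
        focusRequester.requestFocus()
    }
}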

Memory Optimization Guide

We’ve released a comprehensive guide on memory optimization, including memory targets for low-RAM devices. Combined with Android Studio's powerful memory profiler, this helps you understand when your app exceeds those limits and why.

In-App Ratings and Reviews

Ratings and reviews entry point for JetStream sample app on TV

App ratings and reviews are essential for developers, offering quantitative and qualitative feedback on user experiences. Now, we're extending the In-App Ratings and Reviews API to TV so developers can prompt users for ratings and reviews directly from Google TV. Check out our recent blog post detailing how to easily integrate the In-App Ratings and Reviews API.
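As a refresher, the API surface is the same In-App Review flow you may know from mobile. A minimal sketch, assuming activity is the Activity currently on screen, might look like this:

val manager = ReviewManagerFactory.create(context)
manager.requestReviewFlow().addOnCompleteListener { task ->
    if (task.isSuccessful) {
        val reviewInfo = task.result
        // Launches the rating/review dialog; the API decides whether to show it
        manager.launchReviewFlow(activity, reviewInfo)
    }
}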

Android 16 for TV


We're excited to announce the upcoming release of Android 16 for TV. Developers can begin using the latest beta today. With Android 16, TV developers can access several great features:

    • Platform support for the Eclipsa Audio codec enables creators to use the IAMF spatial audio format. For ExoPlayer support that includes previous platform versions, see ExoPlayer's IAMF decoder module.
    • There are various improvements to media playback speed, consistency and efficiency, as well as HDMI-CEC reliability and performance optimizations for 64-bit kernels.
    • Additional APIs and user experiences from Android 16 are also available. We invite you to explore the complete list from the Android 16 for TV release notes.

What's next

We're incredibly excited to see how these announcements will optimize your development journey, and look forward to seeing the fantastic apps you'll launch on the platform!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

How Androidify leverages Gemini, Firebase and ML Kit

Posted by Thomas Ezan – Developer Relations Engineer, Rebecca Franks – Developer Relations Engineer, and Avneet Singh – Product Manager

We’re bringing back Androidify later this year, this time powered by Google AI, so you can customize your very own Android bot and share your creativity with the world. Today, we’re releasing a new open source demo app for Androidify as a great example of how Google is using its Gemini AI models to enhance app experiences.

In this post, we'll dive into how the Androidify app uses Gemini models and Imagen via the Firebase AI Logic SDK, and we'll provide some insights learned along the way to help you incorporate Gemini and AI into your own projects. Read more about the Androidify demo app.

App flow

The overall app functions as follows, with various parts of it using Gemini and Firebase along the way:

flow chart demonstrating Androidify app flow

Gemini and image validation

To get started with Androidify, take a photo or choose an image on your device. The app needs to make sure that the image you upload is suitable for creating an avatar.

Gemini 2.5 Flash via Firebase helps with this by verifying that the image contains a person and that the person is in focus, and by assessing image safety, including whether the image contains abusive content.

val jsonSchema = Schema.obj(
    properties = mapOf("success" to Schema.boolean(), "error" to Schema.string()),
    optionalProperties = listOf("error"),
)

val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(
        modelName = "gemini-2.5-flash-preview-04-17",
        generationConfig = generationConfig {
            responseMimeType = "application/json"
            responseSchema = jsonSchema
        },
        safetySettings = listOf(
            SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.HATE_SPEECH, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.DANGEROUS_CONTENT, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.CIVIC_INTEGRITY, HarmBlockThreshold.LOW_AND_ABOVE),
        ),
    )

val response = generativeModel.generateContent(
    content {
        text("You are to analyze the provided image and determine if it is acceptable and appropriate based on specific criteria.... (more details see the full sample)")
        image(image)
    },
)

// response.text is nullable, so assert non-null before parsing
val jsonResponse = Json.parseToJsonElement(response.text!!)
val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true
val error = jsonResponse.jsonObject["error"]?.jsonPrimitive?.content

In the snippet above, we’re leveraging the structured output capabilities of the model by defining the schema of the response. We’re passing a Schema object via the responseSchema param in the generationConfig.

We want to validate that the image has enough information to generate a nice Android avatar, so we ask the model to return a JSON object with success = true/false and an optional error message explaining why the image doesn't have enough information.

Structured output is a powerful feature enabling a smoother integration of LLMs into your app by controlling the format of their output, similar to an API response.

Image captioning with Gemini Flash

Once it's established that the image contains sufficient information to generate an Android avatar, it is captioned using Gemini 2.5 Flash with structured output.

val jsonSchema = Schema.obj(
    properties = mapOf(
        "success" to Schema.boolean(),
        "user_description" to Schema.string(),
    ),
    optionalProperties = listOf("user_description"),
)
val generativeModel = createGenerativeTextModel(jsonSchema)

val prompt = "You are to create a VERY detailed description of the main person in the given image. This description will be translated into a prompt for a generative image model..."

val response = generativeModel.generateContent(
    content {
        text(prompt)
        image(image)
    },
)

val jsonResponse = Json.parseToJsonElement(response.text!!)
val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true
val userDescription = jsonResponse.jsonObject["user_description"]?.jsonPrimitive?.content

The other option in the app is to start with a text prompt. You can enter details about your accessories, hairstyle, and clothing, and let Imagen be a bit more creative.

Android generation via Imagen

We’ll use this detailed description of your image to enrich the prompt used for image generation. We’ll add extra details about what we would like to generate and include the bot color selection as part of this too, along with the skin tone selected by the user.

val imagenPrompt = "A 3D rendered cartoonish Android mascot in a photorealistic style, the pose is relaxed and straightforward, facing directly forward [...] The bot looks as follows $userDescription [...]"

We then call the Imagen model to create the bot. Using this new prompt, we create a model and call generateImages:

// We supply our own fine-tuned model here, but you can use "imagen-3.0-generate-002"
val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI()).imagenModel(
    "imagen-3.0-generate-002",
    safetySettings = ImagenSafetySettings(
        ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
        personFilterLevel = ImagenPersonFilterLevel.ALLOW_ALL,
    ),
)

val response = generativeModel.generateImages(imagenPrompt)

val image = response.images.first().asBitmap()

And that’s it! The Imagen model generates a bitmap that we can display on the user’s screen.

Fine-tuning the Imagen model

The Imagen 3 model was fine-tuned using Low-Rank Adaptation (LoRA). LoRA is a fine-tuning technique designed to reduce the computational burden of training large models. Instead of updating the entire model, LoRA adds smaller, trainable "adapters" that adjust the model's behavior. We ran a fine-tuning pipeline on the generally available Imagen 3 model using Android bot assets in different color combinations, plus additional assets for enhanced cuteness and fun. We generated text captions for the training images, and the image-text pairs were used to fine-tune the model effectively.
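To sketch the core idea mathematically (this describes LoRA in general, not the exact Imagen training setup): for a pretrained weight matrix $W \in \mathbb{R}^{d \times k}$, LoRA freezes $W$ and learns only a low-rank update

$$W' = W + BA, \qquad B \in \mathbb{R}^{d \times r}, \quad A \in \mathbb{R}^{r \times k}, \quad r \ll \min(d, k),$$

so the number of trainable parameters drops from $dk$ to $r(d + k)$.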

The current sample app uses a standard Imagen model, so the results may look a bit different from the visuals in this post. However, the app using the fine-tuned model and a custom version of the Firebase AI Logic SDK was demoed at Google I/O. This app will be released later this year, and we are also planning to add support for fine-tuned models to the Firebase AI Logic SDK later in the year.

moving image of Androidify app demo turning a selfie image of a bearded man wearing a black tshirt and sunglasses, with a blue back pack into a green 3D bearded droid wearing a black tshirt and sunglasses with a blue backpack
The original image... and Androidifi-ed image

ML Kit

The app also uses the ML Kit Pose Detection SDK to detect a person in the camera view, which triggers the capture button and adds visual indicators.

To do this, we add the SDK to the app, and use PoseDetection.getClient(). Then, using the poseDetector, we look at the detectedLandmarks that are in the streaming image coming from the Camera, and we set the _uiState.detectedPose to true if a nose and shoulders are visible:

private suspend fun runPoseDetection() {
    PoseDetection.getClient(
        PoseDetectorOptions.Builder()
            .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
            .build(),
    ).use { poseDetector ->
        // Since image analysis is processed by ML Kit asynchronously in its own thread pool,
        // we can run this directly from the calling coroutine scope instead of pushing this
        // work to a background dispatcher.
        cameraImageAnalysisUseCase.analyze { imageProxy ->
            imageProxy.image?.let { image ->
                val poseDetected = poseDetector.detectPersonInFrame(image, imageProxy.imageInfo)
                _uiState.update { it.copy(detectedPose = poseDetected) }
            }
        }
    }
}

private suspend fun PoseDetector.detectPersonInFrame(
    image: Image,
    imageInfo: ImageInfo,
): Boolean {
    val results = process(InputImage.fromMediaImage(image, imageInfo.rotationDegrees)).await()
    val landmarkResults = results.allPoseLandmarks
    val detectedLandmarks = mutableListOf<Int>()
    for (landmark in landmarkResults) {
        if (landmark.inFrameLikelihood > 0.7) {
            detectedLandmarks.add(landmark.landmarkType)
        }
    }

    return detectedLandmarks.containsAll(
        listOf(PoseLandmark.NOSE, PoseLandmark.LEFT_SHOULDER, PoseLandmark.RIGHT_SHOULDER),
    )
}
moving image showing the camera shutter button activating when an orange droid figurine is held in the camera frame
The camera shutter button is activated when a person (or a bot!) enters the frame.

Get started with AI on Android

The Androidify app makes extensive use of Gemini 2.5 Flash to validate the image and generate a detailed description that is then used to generate the final image. It also leverages the specifically fine-tuned Imagen 3 model to generate images of Android bots. Gemini and Imagen models are easily integrated into the app via the Firebase AI Logic SDK. In addition, the ML Kit Pose Detection SDK controls the capture button, enabling it only when a person is present in front of the camera.

To get started with AI on Android, go to the Gemini and Imagen documentation for Android.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

What’s New in Jetpack Compose

Posted by Nick Butcher – Product Manager

At Google I/O 2025, we announced a host of features, performance, stability, libraries, and tools updates for Jetpack Compose, our recommended Android UI toolkit. With Compose you can build excellent apps that work across devices. Compose has matured a lot since it was first announced (at Google I/O 2019!), and we're now seeing 60% of the top 1,000 apps in the Play Store, such as MAX and Google Drive, use and love it.

New Features

Since I/O last year, we've shipped Compose Bill of Materials (BOM) version 2025.05.01, which adds new features such as:

    • Autofill support that lets users automatically insert previously entered personal information into text fields.
    • Auto-sizing text to smoothly adapt text size to a parent container size.
    • Visibility tracking for when you need high-performance information on a composable's position in its root container, screen, or window.
    • Animate bounds modifier for beautiful automatic animations of a Composable's position and size within a LookaheadScope.
    • Accessibility checks in tests that let you build a more accessible app UI through automated a11y testing.

LookaheadScope {
    Box(
        Modifier
            .animateBounds(this@LookaheadScope)
            .width(if (inRow) 100.dp else 150.dp)
            .background(..)
            .border(..)
    )
}
moving image of animate bounds modifier in action

For more details on these features, read What’s new in the Jetpack Compose April ’25 release and check out the Compose talks from Google I/O.

If you’re looking to try out new Compose functionality, the alpha BOM offers new features that we're working on, including:

    • Pausable Composition (see below)
    • Updates to LazyLayout prefetch
    • Context Menus
    • New modifiers: onFirstVisible, onVisibilityChanged, contentType
    • New Lint checks for frequently changing values and elements that should be remembered in composition

Please try out the alpha features and provide feedback to help shape the future of Compose.

Material Expressive

At Google I/O, we unveiled Material Expressive, Material Design’s latest evolution that helps you make your products even more engaging and easier to use. It's a comprehensive addition of new components, styles, motion and customization options that help you to build beautiful rich UIs. The Material3 library in the latest alpha BOM contains many of the new expressive components for you to try out.

moving image of material expressive design example

Learn more to start building with Material Expressive.

Adaptive layouts library

Developing adaptive apps across form factors including phones, foldables, tablets, desktop, cars and Android XR is now easier with the latest enhancements to the Compose adaptive layouts library. The stable 1.1 release adds support for predictive back gestures for smoother transitions and pane expansion for more flexible two pane layouts on larger screens. Furthermore, the 1.2 (alpha) release adds more flexibility for how panes are displayed, adding strategies for reflowing and levitating.

moving image of compose adaptive layouts updates in the Google Play app
Compose Adaptive Layouts Updates in the Google Play app

Learn more about building adaptive Android apps with Compose.

Performance

With each release of Jetpack Compose, we continue to prioritize performance improvements. The latest stable release includes significant rewrites and improvements to multiple sub-systems, including semantics, focus, and text optimizations. Best of all, these are available to you simply by upgrading your Compose dependency; no code changes are required.

bar chart of internal benchmarks for performance run on a Pixel 3a device from January to May 2023 measured by jank rate
Internal benchmark, run on a Pixel 3a

We continue to work on further performance improvements, notable changes in the latest alpha BOM include:

    • Pausable Composition allows compositions to be paused, and their work split up over several frames.
    • Background text prefetch enables text layout caches to be pre-warmed on a background thread, enabling faster text layout.
    • LazyLayout prefetch improvements enabling lazy layouts to be smarter about how much content to prefetch, taking advantage of pausable composition.

Together these improvements eliminate nearly all jank in an internal benchmark.

Stability

We've heard from you that upgrading your Compose dependency can be challenging, encountering bugs or behaviour changes that prevent you from staying on the latest version. We've invested significantly in improving the stability of Compose, working closely with the many Google app teams building with Compose to detect and prevent issues before they even make it to a release.

Google apps develop against and release with snapshot builds of Compose; as such, Compose is tested against the hundreds of thousands of Google app tests and any Compose issues are immediately actioned by our team. We have recently invested in increasing the cadence of updating these snapshots and now update them daily from Compose tip-of-tree, which means we’re receiving feedback faster, and are able to resolve issues long before they reach a public release of the library.

Jetpack Compose also relies on @Experimental annotations to mark APIs that are subject to change. We heard your feedback that some APIs have remained experimental for a long time, reducing your confidence in the stability of Compose. We have invested in stabilizing experimental APIs to provide you a more solid API surface, and reduced the number of experimental APIs by 32% in the last year.

We have also heard that it can be hard to debug Compose crashes when your own code does not appear in the stack trace. In the latest alpha BOM, we have added a new opt-in feature to provide more diagnostic information. Note that this does not currently work with minified builds and comes at a performance cost, so we recommend only using this feature in debug builds.

class App : Application() {
    override fun onCreate() {
        super.onCreate()
        // Enable only for the debug flavor to avoid the performance impact in release
        Composer.setDiagnosticStackTraceEnabled(BuildConfig.DEBUG)
    }
}

Libraries

We know that to build great apps, you need Compose integration in the libraries that interact with your app's UI.

A core library that powers any Compose app is Navigation. You told us that you often encountered limitations when managing state hoisting and directly manipulating the back stack with the current Compose Navigation solution. We went back to the drawing board and completely reimagined how a navigation library should integrate with the Compose mental model. We're excited to introduce Navigation 3, a new artifact designed to empower you with greater control and simplify complex navigation flows.
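As a rough sketch of the new mental model (the API names here come from the early Navigation 3 alpha and may change, so treat this as illustrative rather than definitive): the back stack is plain, observable state that you own directly, and NavDisplay renders whatever is on top. Home and Detail below are hypothetical destinations.

// Keys are your own types
data object Home
data class Detail(val id: String)

@Composable
fun App() {
    // You own the back stack as ordinary snapshot state
    val backStack = remember { mutableStateListOf<Any>(Home) }
    NavDisplay(
        backStack = backStack,
        onBack = { backStack.removeLastOrNull() },
        entryProvider = entryProvider {
            entry<Home> { HomeScreen(onItemClick = { id -> backStack.add(Detail(id)) }) }
            entry<Detail> { key -> DetailScreen(key.id) }
        },
    )
}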

We're also investing in Compose support for CameraX and Media3, making it easier to integrate camera capture and video playback into your UI with Compose idiomatic components.

@Composable
private fun VideoPlayer(
    player: Player?, // from media3
    modifier: Modifier = Modifier
) {
    Box(modifier) {
        PlayerSurface(player) // from media3-ui-compose
        player?.let {
            // custom play-pause button UI
            val playPauseButtonState = rememberPlayPauseButtonState(it) // from media3-ui-compose
            MyPlayPauseButton(playPauseButtonState, Modifier.align(BottomEnd).padding(16.dp))
        }
    }
}
To learn more, see the media3 Compose documentation and the CameraX samples.

Tools

We continue to improve the Android Studio tools for creating Compose UIs. The latest Narwhal canary includes:

    • Resizable Previews instantly show you how your Compose UI adapts to different window sizes.
    • Preview navigation improvements using clickable names and components.
    • Studio Labs 🧪: Compose preview generation with Gemini, which quickly generates a preview.
    • Studio Labs 🧪: Transform UI with Gemini, which changes your UI with natural language, directly from the preview.
    • Studio Labs 🧪: Image attachment in Gemini, which generates Compose code from images.

For more information read What's new in Android development tools.

moving image of resizable preview in Jetpack Compose
Resizable Preview

New Compose Lint checks

The Compose alpha BOM introduces two new annotations and associated lint checks to help you write correct and performant Compose code. The @FrequentlyChangingValue annotation and FrequentlyChangedStateReadInComposition lint check warn in situations where function calls or property reads in composition might cause frequent recompositions, for example when reading scroll position values or animating values. The @RememberInComposition annotation and RememberInCompositionDetector lint check warn in situations where constructors, functions, and property getters are called directly inside composition (e.g. the TextFieldState constructor) without being remembered.
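For example, here's a minimal sketch (not taken from any shipped sample) of the pattern the first check is designed to catch, along with one way to address it:

@Composable
fun ScrollAwareHeader(listState: LazyListState) {
    // Reading the scroll offset directly in composition recomposes on every
    // scrolled pixel; this is what FrequentlyChangedStateReadInComposition flags:
    // Text("Offset: ${listState.firstVisibleItemScrollOffset}")

    // One fix: derive only the coarse information the UI actually needs
    val isScrolled by remember {
        derivedStateOf { listState.firstVisibleItemIndex > 0 }
    }
    Text(if (isScrolled) "Scrolled" else "At top")
}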

Happy Composing

We continue to invest in providing the features, performance, stability, libraries and tools that you need to build excellent apps. We value your input so please share feedback on our latest updates or what you'd like to see next.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


On-device GenAI APIs as part of ML Kit help you easily build with Gemini Nano

Posted by Caren Chang - Developer Relations Engineer, Chengji Yan - Software Engineer, Taj Darra - Product Manager

We are excited to announce a set of on-device GenAI APIs, as part of ML Kit, to help you integrate Gemini Nano in your Android apps.

To start, we are releasing 4 new APIs:

    • Summarization: to summarize articles and conversations
    • Proofreading: to polish short text
    • Rewriting: to reword text in different styles
    • Image Description: to provide short descriptions for images

Key benefits of GenAI APIs

GenAI APIs are high-level APIs that allow for easy integration, similar to existing ML Kit APIs. This means you can expect quality results out of the box without extra effort for prompt engineering or fine-tuning for specific use cases.

GenAI APIs run on-device and thus provide the following benefits:

    • Input, inference, and output data is processed locally
    • Functionality remains the same without a reliable internet connection
    • No additional cost incurred for each API call

To prevent misuse, we also added safety protection in various layers, including base model training, safety-aware LoRA fine-tuning, input and output classifiers and safety evaluations.

How GenAI APIs are built

There are 4 main components that make up each of the GenAI APIs.

  1. Gemini Nano is the base model and the foundation shared by all APIs.
  2. Small API-specific LoRA adapter models are trained and deployed on top of the base model to further improve the quality of each API.
  3. Optimized inference parameters (e.g. prompt, temperature, topK, batch size) are tuned for each API to guide the model in returning the best results.
  4. An evaluation pipeline ensures quality across various datasets and attributes. This pipeline consists of LLM raters, statistical metrics, and human raters.

Together, these components make up the high-level GenAI APIs that simplify the effort needed to integrate Gemini Nano in your Android app.

Evaluating quality of GenAI APIs

For each API, we formulate a benchmark score based on the evaluation pipeline mentioned above. This score is based on attributes specific to a task. For example, when evaluating the summarization task, one of the attributes we look at is “grounding” (i.e., the factual consistency of the generated summary with the source content).

To provide out-of-the-box quality for GenAI APIs, we applied feature-specific fine-tuning on top of the Gemini Nano base model. This resulted in an increase in the benchmark score of each API, as shown below:

Use case in English       Gemini Nano Base Model    ML Kit GenAI API
Summarization             77.2                      92.1
Proofreading              84.3                      90.2
Rewriting                 79.5                      84.1
Image Description         86.9                      92.3

In addition, this is a quick reference of how the APIs perform on a Pixel 9 Pro:

                 Prefix Speed (input processing rate)                  Decode Speed (output generation rate)
Text-to-text     510 tokens/second                                     11 tokens/second
Image-to-text    510 tokens/second + 0.8 seconds for image encoding    11 tokens/second

Sample usage

This is an example of implementing the GenAI Summarization API to get a one-bullet summary of an article:

val articleToSummarize = "We are excited to announce a set of on-device generative AI APIs..."

// Define task with desired input and output format
val summarizerOptions = SummarizerOptions.builder(context)
    .setInputType(InputType.ARTICLE)
    .setOutputType(OutputType.ONE_BULLET)
    .setLanguage(Language.ENGLISH)
    .build()
val summarizer = Summarization.getClient(summarizerOptions)

suspend fun prepareAndStartSummarization(context: Context) {
    // Check feature availability. Status will be one of the following: 
    // UNAVAILABLE, DOWNLOADABLE, DOWNLOADING, AVAILABLE
    val featureStatus = summarizer.checkFeatureStatus().await()

    if (featureStatus == FeatureStatus.DOWNLOADABLE) {
        // Download feature if necessary.
        // If downloadFeature is not called, the first inference request will 
        // also trigger the feature to be downloaded if it's not already
        // downloaded.
        summarizer.downloadFeature(object : DownloadCallback {
            override fun onDownloadStarted(bytesToDownload: Long) { }

            override fun onDownloadFailed(e: GenAiException) { }

            override fun onDownloadProgress(totalBytesDownloaded: Long) {}

            override fun onDownloadCompleted() {
                startSummarizationRequest(articleToSummarize, summarizer)
            }
        })    
    } else if (featureStatus == FeatureStatus.DOWNLOADING) {
        // Inference request will automatically run once feature is      
        // downloaded.
        // If Gemini Nano is already downloaded on the device, the   
        // feature-specific LoRA adapter model will be downloaded very  
        // quickly. However, if Gemini Nano is not already downloaded, 
        // the download process may take longer.
        startSummarizationRequest(articleToSummarize, summarizer)
    } else if (featureStatus == FeatureStatus.AVAILABLE) {
        startSummarizationRequest(articleToSummarize, summarizer)
    } 
}

fun startSummarizationRequest(text: String, summarizer: Summarizer) {
    // Create task request  
    val summarizationRequest = SummarizationRequest.builder(text).build()

    // Start summarization request with streaming response
    summarizer.runInference(summarizationRequest) { newText -> 
        // Show new text in UI
    }

    // You can also get a non-streaming response from the request
    // val summarizationResult = summarizer.runInference(summarizationRequest)
    // val summary = summarizationResult.get().summary
}

// Be sure to release the resource when no longer needed
// For example, on viewModel.onCleared() or activity.onDestroy()
summarizer.close()

For more examples of implementing the GenAI APIs, check out the official documentation and samples on GitHub.

Use cases

Here is some guidance on how to best use the current GenAI APIs:

For Summarization, consider:

    • Conversation messages or transcripts that involve 2 or more users
    • Articles or documents less than 4000 tokens (or about 3000 English words). Using the first few paragraphs for summarization is usually good enough to capture the most important information.

For Proofreading and Rewriting APIs, consider utilizing them during the content creation process for short content below 256 tokens to help with tasks such as:

    • Refining messages in a particular tone, such as more formal or more casual
    • Polishing personal notes for easier consumption later

For the Image Description API (see the sketch after this list), consider it for:

    • Generating titles of images
    • Generating metadata for image search
    • Utilizing descriptions of images in use cases where the images themselves cannot be displayed, such as within a list of chat messages
    • Generating alternative text to help visually impaired users better understand content as a whole
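By analogy with the Summarization sample shown earlier in this post, a hypothetical Image Description call could look like the following; the class and method names are modeled on the other GenAI APIs, so check the official documentation for the exact API surface:

// Hypothetical sketch modeled on the other GenAI APIs; names may differ
val imageDescriberOptions = ImageDescriberOptions.builder(context).build()
val imageDescriber = ImageDescription.getClient(imageDescriberOptions)

suspend fun describeImage(bitmap: Bitmap) {
    // As with Summarization, check feature status and download it if needed
    // before running inference
    val request = ImageDescriptionRequest.builder(bitmap).build()
    imageDescriber.runInference(request) { newText ->
        // Streaming response: for example, build up alt text as chunks arrive
    }
}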

GenAI API in production

Envision is an app that verbalizes the visual world to help people who are blind or have low vision lead more independent lives. A common use case in the app is for users to take a picture to have a document read out loud. Utilizing the GenAI Summarization API, Envision is now able to get a concise summary of a captured document. This significantly enhances the user experience by allowing them to quickly grasp the main points of documents and determine if a more detailed reading is desired, saving them time and effort.

side by side images of a mobile device showing a document on a table on the left, and the results of the scanned document on the right showing details providing the what, when, and where as written in the document

Supported devices

GenAI APIs are available on Android devices using optimized MediaTek Dimensity, Qualcomm Snapdragon, and Google Tensor platforms through AICore. For a comprehensive list of devices that support GenAI APIs, refer to our official documentation.

Learn more

Start implementing GenAI APIs in your Android apps today with guidance from our official documentation and samples on GitHub: AI Catalog GenAI API Samples with Compose, ML Kit GenAI APIs Quickstart.

The Android Show: I/O Edition – what Android devs need to know!

Posted by Matthew McCullough – Vice President, Product Management, Android Developer

We just dropped an I/O Edition of The Android Show, where we unpacked exciting new experiences coming to the Android ecosystem: a fresh and dynamic look and feel, smarts across your devices, and enhanced safety and security features. Join Sameer Samat, President of Android Ecosystem, and the Android team to learn about exciting new developments in the episode below, and read about all of the updates for users.

Tune into Google I/O next week – including the Developer Keynote as well as the full Android track of sessions – where we’re covering these topics in more detail and how you can get started.


Start building with Material 3 Expressive

The world of UX design is constantly evolving, and you deserve the tools to create truly engaging and impactful experiences. That’s why Material Design’s latest evolution, Material 3 Expressive, provides new ways to make your product more engaging, easy to use, and desirable. Learn more, and try out Material 3 Expressive: an expansion pack designed to enhance your app’s appeal by harnessing emotional UX. It comes with new components, a motion-physics system, type styles, colors, shapes, and more.

Material 3 Expressive will be coming to Android 16 later this year; check out the Google I/O talk next week where we’ll dive into this in more detail.

A fluid design built for your watch's round display

Wear OS 6, arriving later this year, brings Material 3 Expressive design to Google’s smartwatch platform. The new design language puts the round watch display at the heart of the experience and is embraced in every component and motion of the system, from buttons to notifications. You'll be able to try the new visual design and upgrade existing app experiences to a new level. Next week, tune in to the What’s New in Android session to learn more.

Plus some goodies in Android 16...

We also unpacked some of the latest features coming to users in Android 16, which we’ve been previewing with you for the last few months. If you haven’t already, you can try out the latest Beta of Android 16.

A few new features that Android 16 adds which developers should pay attention to are Live updates, professional media and camera features, desktop windowing for tablets, major accessibility enhancements, and much more.

Watch the What’s New in Android session and the Live updates talk to learn more.

Tune in next week to Google I/O

This was just a preview of some Android-related news, so remember to tune in next week to Google I/O, where we’ll be diving into a range of Android developer topics in a lot more detail. You can check out What’s New in Android and the full Android track of sessions to start planning your time.

We can’t wait to see you next week, whether you’re joining in person or virtually from anywhere around the world!

#WeArePlay: How My Lovely Planet is making environmental preservation fun through games

Posted by Robbie McLachlan – Developer Marketing

In our latest #WeArePlay film, which celebrates the people behind apps and games on Google Play, we meet Clément, the founder of Imagine Games. His game, My Lovely Planet, turns casual mobile gaming into tangible environmental action, planting real trees and supporting reforestation projects worldwide. Discover the inspiration behind My Lovely Planet and the impact it’s had so far.


What inspired you to combine gaming with positive environmental impact?

I’ve always loved gaming and believed in technology’s potential to tackle environmental challenges. But it was my time working with an NGO in Madagascar, where I witnessed firsthand the devastating effects of environmental change, that truly sparked my mission. Combining gaming and sustainability just made sense. Billions of people play games, so why not harness that entertainment to create real-world impact? So far, the results speak for themselves: we've built an engaged global community committed to protecting the environment.

Imagine Games team, Clément, from France

How do players in My Lovely Planet make real-world differences through the game?

With My Lovely Planet, planting a tree in the game means planting a real tree in the world. Our community has already planted over 360,000 trees through partnerships with NGOs like Graines de Vie in Madagascar, Kenya, and France. We've also supported ocean-cleaning, bee-protection, and drone reforestation projects.

Balancing fun with impact was key. Players wouldn’t stay just for the mission, so we focused on creating a genuinely fun match-3 style game. Once gameplay was strong, we made real-world actions like tree planting core rewards in the game, helping players feel naturally connected to their impact. Our goal is to keep growing this model to protect biodiversity and fight climate change.

Can you tell us about your drone-led reforestation project in France?

Our latest initiative involves using drones to reforest areas severely impacted by insect infestations and other environmental issues. We're dropping over one million specially-coated seeds by drone, which is a completely new and efficient way of reforesting large areas. It’s exciting because if this pilot succeeds, it could be replicated worldwide, significantly boosting global reforestation efforts.

a drone in mid air dropping seeds in a forested area

How has Google Play helped your journey?

Google Play has been crucial for My Lovely Planet – it's our main distribution channel, with about 70% of our players coming through the platform. It makes it incredibly easy and convenient for anyone to download and start playing immediately, which is essential for engaging a global community. Plus, from a developer's standpoint, the flexibility, responsiveness, and powerful testing tools Google Play provides have made launching and scaling our game faster and smoother, allowing us to focus even more on our environmental impact.

a close up of a user playing the My Lovely Planet game on their mobile device while sitting in the front seat of a vehicle

What is next for My Lovely Planet?

Right now, we're focused on expanding the game experience by adding more engaging levels, and introducing exciting new features like integrating our eco-friendly cryptocurrency, My Lovely Coin, into gameplay. Following the success of our first drone-led reforestation project in France, our next step is tracking its impact and expanding this approach to other regions. Ultimately, we aim to build the world's largest gaming community dedicated to protecting the environment, empowering millions to make a difference while enjoying the game.


Discover other inspiring app and game founders featured in #WeArePlay.

Prepare your apps for Google Play’s 16 KB page size compatibility requirement

Posted by Dan Brown – Product Manager, Google Play

Google Play empowers you to manage and distribute your innovative and trusted apps and games to billions of users around the world across the entire breadth of Android devices. Historically, all Android devices have managed memory in 4 KB pages.

As device manufacturers equip devices with more RAM to optimize performance, many will adopt larger page sizes like 16 KB. Android 15 introduces support for the increased page size, ensuring your app can run on these evolving devices and benefit from the associated performance gains.

Starting November 1st, 2025, all new apps and updates to existing apps submitted to Google Play and targeting Android 15+ devices must support 16 KB page sizes.

This is a key technical requirement to ensure your users can benefit from the performance enhancements on newer devices and prepares your apps for the platform's future direction of improved performance on newer hardware. Without recompiling to support 16 KB pages, your app might not function correctly on these devices when they become more widely available in future Android releases.

We’ve seen that 16 KB can help with:

    • Faster app launches: See improvements ranging from 3% to 30% for various apps.
    • Improved battery usage: Experience an average gain of 4.5%.
    • Quicker camera starts: Launch the camera 4.5% to 6.6% faster.
    • Speedier system boot-ups: Boot Android devices approximately 8% faster.

We recommend checking your apps early especially for dependencies that might not yet be 16 KB compatible. Many popular SDK providers, like React Native and Flutter, already offer compatible versions. For game developers, several leading game engines, such as Unity, support 16 KB, with support for Unreal Engine coming soon.

Reaching 16 KB compatibility

A substantial number of apps are already compatible, so your app may already work seamlessly with this requirement. For most of those that need to make adjustments, we expect the changes to be minimal.

    • Apps with no native code should be compatible without any changes at all.
    • Apps using libraries or SDKs that contain native code may need to update these to a compatible version.
    • Apps with native code may need to recompile with a more recent toolchain and check for any code with incompatible low-level memory management (see the sketch after this list).
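As one hedged example for apps with native code built with CMake: NDK r28 and later align for 16 KB page sizes by default, while on NDK r27 you can opt in with a toolchain flag, which might look like this in app/build.gradle.kts:

android {
    defaultConfig {
        externalNativeBuild {
            cmake {
                // Ask the NDK toolchain to emit 16 KB-aligned load segments
                // (this is the default behavior from NDK r28 onwards)
                arguments += "-DANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES=ON"
            }
        }
    }
}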

Our December blog post, Get your apps ready for 16 KB page size devices, provides a more detailed technical explanation and guidance on how to prepare your apps.

Check your app's compatibility now

It's easy to see if your app bundle already supports 16 KB memory page sizes. Visit the app bundle explorer page in Play Console to check your app's build compliance and get guidance on where your app may need updating.

App bundle explorer in Play Console

Beyond the app bundle explorer, make sure to also test your app in a 16 KB environment. This will help you ensure users don’t experience any issues and that your app delivers its best performance.

For more information, check out the full documentation.

Thank you for your continued support in bringing delightful, fast, and high-performance experiences to users across the breadth of devices Play supports. We look forward to seeing the enhanced experiences you'll deliver with 16 KB support.

Building delightful Android camera and media experiences

Posted by Donovan McMurray, Mayuri Khinvasara Khabya, Mozart Louis, and Nevin Mital – Developer Relations Engineers

Hello Android Developers!

We are the Android Developer Relations Camera & Media team, and we’re excited to bring you something a little different today. Over the past several months, we’ve been hard at work writing sample code and building demos that showcase how to take advantage of all the great potential Android offers for building delightful user experiences.

Some of these efforts are available for you to explore now, and some you’ll see later throughout the year, but for this blog post we thought we’d share some of the learnings we gathered while going through this exercise.

Grab your favorite Android plush or rubber duck, and read on to see what we’ve been up to!

Future-proof your app with Jetpack

Nevin Mital

One of our focuses for the past several years has been improving the developer tools available for video editing on Android. This led to the creation of the Jetpack Media3 Transformer APIs, which offer solutions for both single-asset and multi-asset video editing preview and export. Today, I’d like to focus on the Composition demo app, a sample app that showcases some of the multi-asset editing experiences that Transformer enables.

I started by adding a custom video compositor to demonstrate how you can arrange input video sequences into different layouts for your final composition, such as a 2x2 grid or a picture-in-picture overlay. You can customize this by implementing a VideoCompositorSettings and overriding the getOverlaySettings method. This object can then be set when building your Composition with setVideoCompositorSettings.

Here is an example for the 2x2 grid layout:

object : VideoCompositorSettings {
  ...

  override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
    return when (inputId) {
      0 -> { // First sequence is placed in the top left
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(-0.5f, 0.5f) // Top-left section of background
          .build()
      }

      1 -> { // Second sequence is placed in the top right
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(0.5f, 0.5f) // Top-right section of background
          .build()
      }

      2 -> { // Third sequence is placed in the bottom left
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(-0.5f, -0.5f) // Bottom-left section of background
          .build()
      }

      3 -> { // Fourth sequence is placed in the bottom right
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(0.5f, -0.5f) // Bottom-right section of background
          .build()
      }

      else -> {
        StaticOverlaySettings.Builder().build()
      }
    }
  }
}

Since getOverlaySettings also provides a presentation time, we can even animate the layout, such as in this picture-in-picture example:

moving image of picture in picture on a mobile device

Next, I spent some time migrating the Composition demo app to use Jetpack Compose. With complicated editing flows, it can help to take advantage of as much screen space as is available, so I decided to use the supporting pane adaptive layout. This way, the user can fine-tune their video creation on the preview screen, and export options are only shown at the same time on a larger display. Below, you can see how the UI dynamically adapts to the screen size on a foldable device, when switching from the outer screen to the inner screen and vice versa.

moving image of supporting pane adaptive layout

What’s great is that by using Jetpack Media3 and Jetpack Compose, these features also carry over seamlessly to other devices and form factors, such as the new Android XR platform. Right out-of-the-box, I was able to run the demo app in Home Space with the 2D UI I already had. And with some small updates, I was even able to adapt the UI specifically for XR with features such as multiple panels, and to take further advantage of the extra space, an Orbiter with playback controls for the editing preview.

moving image of sequential composition preview in Android XR

Orbiter(
  position = OrbiterEdge.Bottom,
  offset = EdgeOffset.inner(offset = MaterialTheme.spacing.standard),
  alignment = Alignment.CenterHorizontally,
  shape = SpatialRoundedCornerShape(CornerSize(28.dp))
) {
  Row(horizontalArrangement = Arrangement.spacedBy(MaterialTheme.spacing.mini)) {
    // Playback control for rewinding by 10 seconds
    FilledTonalIconButton({ viewModel.seekBack(10_000L) }) {
      Icon(
        painter = painterResource(id = R.drawable.rewind_10),
        contentDescription = "Rewind by 10 seconds"
      )
    }
    // Playback control for play/pause
    FilledTonalIconButton({ viewModel.togglePlay() }) {
      Icon(
        painter = painterResource(id = R.drawable.rounded_play_pause_24),
        contentDescription =
            if (viewModel.compositionPlayer.isPlaying) {
                "Pause preview playback"
            } else {
                "Resume preview playback"
            }
      )
    }
    // Playback control for forwarding by 10 seconds
    FilledTonalIconButton({ viewModel.seekForward(10_000L) }) {
      Icon(
        painter = painterResource(id = R.drawable.forward_10),
        contentDescription = "Forward by 10 seconds"
      )
    }
  }
}

Jetpack libraries unlock premium functionality incrementally

Donovan McMurray

Not only do our Jetpack libraries have you covered by working consistently across existing and future devices, but they also open the doors to advanced functionality and custom behaviors to support all types of app experiences. In a nutshell, our Jetpack libraries aim to make the common case very accessible and easy, and they have hooks for adding more custom features later.

We’ve worked with many app teams that have switched to a Jetpack library, built the basics, added their critical custom features, and actually saved developer time compared with their estimates. Let’s take a look at CameraX and how this incremental development can supercharge your process.

// Set up CameraX app with preview and image capture.
// Note: setting the resolution selector is optional, and if not set,
// then a default 4:3 ratio will be used.
val aspectRatioStrategy = AspectRatioStrategy(
  AspectRatio.RATIO_16_9, AspectRatioStrategy.FALLBACK_RULE_NONE)
val resolutionSelector = ResolutionSelector.Builder()
  .setAspectRatioStrategy(aspectRatioStrategy)
  .build()

private val previewUseCase = Preview.Builder()
  .setResolutionSelector(resolutionSelector)
  .build()
private val imageCaptureUseCase = ImageCapture.Builder()
  .setResolutionSelector(resolutionSelector)
  .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
  .build()

val useCaseGroupBuilder = UseCaseGroup.Builder()
  .addUseCase(previewUseCase)
  .addUseCase(imageCaptureUseCase)

cameraProvider.unbindAll()

camera = cameraProvider.bindToLifecycle(
  this,  // lifecycleOwner
  CameraSelector.DEFAULT_BACK_CAMERA,
  useCaseGroupBuilder.build(),
)

After setting up the basic structure for CameraX, you can set up a simple UI with a camera preview and a shutter button. You can use the CameraX Viewfinder composable which displays a Preview stream from a CameraX SurfaceRequest.

// Create preview
Box(
  Modifier
    .background(Color.Black)
    .fillMaxSize(),
  contentAlignment = Alignment.Center,
) {
  surfaceRequest?.let { request ->
    CameraXViewfinder(
      modifier = Modifier.fillMaxSize(),
      implementationMode = ImplementationMode.EXTERNAL,
      surfaceRequest = request,
    )
  }
  // Shutter button: a plain white circle with no inner content
  Button(
    onClick = onPhotoCapture,
    shape = CircleShape,
    colors = ButtonDefaults.buttonColors(containerColor = Color.White),
    modifier = Modifier
      .height(75.dp)
      .width(75.dp),
  ) {}
}

fun onPhotoCapture() {
  // Not shown: defining the ImageCapture.OutputFileOptions for
  // your saved images
  imageCaptureUseCase.takePicture(
    outputOptions,
    ContextCompat.getMainExecutor(context),
    object : ImageCapture.OnImageSavedCallback {
      override fun onError(exc: ImageCaptureException) {
        val msg = "Photo capture failed."
        Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
      }

      override fun onImageSaved(output: ImageCapture.OutputFileResults) {
        val savedUri = output.savedUri
        if (savedUri != null) {
          // Do something with the savedUri if needed
        } else {
          val msg = "Photo capture failed."
          Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
        }
      }
    },
  )
}

You’re already on track for a solid camera experience, but what if you wanted to add some extra features for your users? Adding filters and effects is easy with CameraX’s Media3 effect integration, which is one of the new features introduced in CameraX 1.4.0.

Here’s how simple it is to add a black and white filter from Media3’s built-in effects.

val media3Effect = Media3Effect(
  application,
  PREVIEW or IMAGE_CAPTURE,
  ContextCompat.getMainExecutor(application),
  {},
)
media3Effect.setEffects(listOf(RgbFilter.createGrayscaleFilter()))
useCaseGroupBuilder.addEffect(media3Effect)

The Media3Effect object takes a Context, a bitwise representation of the use case constants for targeted UseCases, an Executor, and an error listener. Then you set the list of effects you want to apply. Finally, you add the effect to the useCaseGroupBuilder we defined earlier.

(Left) Our camera app with no filter applied. 
 (Right) Our camera app after the createGrayscaleFilter was added.

There are many other built-in effects you can add, too! See the Media3 Effect documentation for more options, like brightness, color lookup tables (LUTs), contrast, blur, and many other effects.

To take your effects to yet another level, it’s also possible to define your own effects by implementing the GlEffect interface, which acts as a factory of GlShaderPrograms. You can implement a BaseGlShaderProgram’s drawFrame() method to implement a custom effect of your own. A minimal implementation should tell your graphics library to use its shader program, bind the shader program's vertex attributes and uniforms, and issue a drawing command.
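Here's a skeletal sketch of what that shape looks like; the GL calls themselves are elided, and Size here is assumed to be Media3's androidx.media3.common.util.Size:

// A minimal custom effect: GlEffect acts as a factory for the shader program
class TintEffect : GlEffect {
    override fun toGlShaderProgram(context: Context, useHdr: Boolean): GlShaderProgram =
        TintShaderProgram(useHdr)
}

private class TintShaderProgram(useHdr: Boolean) :
    BaseGlShaderProgram(useHdr, /* texturePoolCapacity= */ 1) {

    override fun configure(inputWidth: Int, inputHeight: Int): Size =
        Size(inputWidth, inputHeight) // Output frames at the input size

    override fun drawFrame(inputTexId: Int, presentationTimeUs: Long) {
        // 1. Tell your graphics library to use your shader program
        // 2. Bind the program's vertex attributes and uniforms (e.g. inputTexId)
        // 3. Issue a drawing command, such as glDrawArrays
    }
}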

Jetpack libraries meet you where you are and your app’s needs. Whether that be a simple, fast-to-implement, and reliable implementation, or custom functionality that helps the critical user journeys in your app stand out from the rest, Jetpack has you covered!

Jetpack offers a foundation for innovative AI Features

Mayuri Khinvasara Khabya

Just as Donovan demonstrated with CameraX for capture, Jetpack Media3 provides a reliable, customizable, and feature-rich solution for playback with ExoPlayer. The AI Samples app builds on this foundation to delight users with helpful and enriching AI-driven additions.

In today's rapidly evolving digital landscape, users expect more from their media applications. Simply playing videos is no longer enough. Developers are constantly seeking ways to enhance user experiences and provide deeper engagement. Leveraging the power of Artificial Intelligence (AI), particularly when built upon robust media frameworks like Media3, offers exciting opportunities. Let’s take a look at some of the ways we can transform the way users interact with video content:

    • Empowering Video Understanding: The core idea is to use AI, specifically multimodal models like the Gemini Flash and Pro models, to analyze video content and extract meaningful information. This goes beyond simply playing a video; it's about understanding what's in the video and making that information readily accessible to the user.
    • Actionable Insights: The goal is to transform raw video into summaries, insights, and interactive experiences. This allows users to quickly grasp the content of a video and find specific information they need or learn something new!
    • Accessibility and Engagement: AI helps make videos more accessible by providing features like summaries, translations, and descriptions. It also aims to increase user engagement through interactive features.

A Glimpse into AI-Powered Video Journeys

The following example demonstrates potential video journeys enhanced by artificial intelligence. This sample integrates several components, such as ExoPlayer and Transformer from Media3; the Firebase SDK (leveraging Vertex AI on Android); and Jetpack Compose, ViewModel, and StateFlow. The code will be available soon on GitHub.

moving images of examples of AI-powered video journeys
(Left) Video summarization. (Right) Thumbnail timestamps and HDR frame extraction.

There are two experiences in particular that I’d like to highlight:

    • HDR Thumbnails: AI can help identify key moments in the video that could make for good thumbnails. With those timestamps, you can use the new ExperimentalFrameExtractor API from Media3 to extract HDR thumbnails from videos, providing richer visual previews (see the sketch after this list).
    • Text-to-Speech: AI can be used to convert textual information derived from the video into spoken audio, enhancing accessibility. On Android you can also choose to play audio in different languages and dialects thus enhancing personalization for a wider audience.
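
For the HDR thumbnail flow, here’s a hedged sketch using Media3’s ExperimentalFrameExtractor (an unstable, experimental API that may change between releases); the function name and the timestampsMs input, which would come from the model’s key-moment suggestions, are illustrative:

import android.content.Context
import androidx.media3.common.MediaItem
import androidx.media3.transformer.ExperimentalFrameExtractor
import kotlinx.coroutines.guava.await

suspend fun extractHdrThumbnails(
    context: Context,
    videoUri: String,
    timestampsMs: List<Long>, // key moments suggested by the model
): List<ExperimentalFrameExtractor.Frame> {
    val extractor = ExperimentalFrameExtractor(
        context,
        ExperimentalFrameExtractor.Configuration.Builder()
            .setExtractHdrFrames(true) // request HDR bitmaps where supported
            .build(),
    )
    extractor.setMediaItem(MediaItem.fromUri(videoUri), /* effects= */ emptyList())
    return try {
        // Each Frame pairs the extracted bitmap with its exact presentation time.
        timestampsMs.map { extractor.getFrame(it).await() }
    } finally {
        extractor.release()
    }
}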

Using the right AI solution

Currently, only cloud models support video inputs, so we went ahead with a cloud-based solution. Integrating Firebase in our sample empowers the app to:

    • Generate real-time, concise video summaries automatically.
    • Produce comprehensive content metadata, including chapter markers and relevant hashtags.
    • Facilitate seamless multilingual content translation.

So how do you actually interact with a video and work with Gemini to process it? First, send your video as an input parameter to your prompt:

val prompt =
    "Summarize this video in the form of top 3-4 takeaways only. Write in the form of bullet points. Don't assume if you don't know."

val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
_outputText.value = OutputTextState.Loading

viewModelScope.launch(Dispatchers.IO) {
    try {
        val requestContent = content {
            fileData(videoSource.toString(), "video/mp4")
            text(prompt)
        }
        val outputStringBuilder = StringBuilder()

        // Stream the response, updating the UI state as each chunk arrives.
        generativeModel.generateContentStream(requestContent).collect { response ->
            outputStringBuilder.append(response.text)
            _outputText.value = OutputTextState.Success(outputStringBuilder.toString())
        }
    } catch (error: Exception) {
        _outputText.value = OutputTextState.Error(error.localizedMessage ?: "Unknown error")
    }
}
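
The snippet assumes a small UI state holder exposed from the ViewModel. A minimal sketch of the types it references (their exact definitions aren't part of the excerpt, so treat these as assumptions) might look like:

import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow

sealed interface OutputTextState {
    data object Loading : OutputTextState
    data class Success(val text: String) : OutputTextState
    data class Error(val message: String) : OutputTextState
}

private val _outputText = MutableStateFlow<OutputTextState>(OutputTextState.Loading)
val outputText: StateFlow<OutputTextState> = _outputText.asStateFlow()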

Notice there are two key components here:

    • FileData: This component attaches the video to the request.
    • Prompt: This describes the specific assistance you want from the model for the provided video.

Of course, you can fine-tune your prompt to your requirements and shape the responses accordingly.

In conclusion, by harnessing the capabilities of Jetpack Media3 and integrating AI solutions like Gemini through Firebase, you can significantly elevate video experiences on Android. This combination enables advanced features like video summaries, enriched metadata, and seamless multilingual translations, ultimately enhancing accessibility and engagement for users. As these technologies continue to evolve, the potential for creating even more dynamic and intelligent video applications is vast.

Go above-and-beyond with specialized APIs

Mozart Louis

Android 16 introduces the new audio PCM offload mode, which can reduce the power consumption of audio playback in your app, leading to longer playback times and increased user engagement. Eliminating users' power anxiety greatly enhances the user experience.

Oboe is Android’s premier audio API for creating high-performance, low-latency audio apps. Android 16 and the Android NDK add a new feature to Oboe called Native PCM Offload playback.

Offload playback helps save battery life when playing audio. It works by sending a large chunk of audio to a special part of the device's hardware (a DSP). This allows the CPU of the device to go into a low-power state while the DSP handles playing the sound. This works with uncompressed audio (like PCM) and compressed audio (like MP3 or AAC), where the DSP also takes care of decoding.

This can result in significant power savings during audio playback and is perfect for applications that play audio in the background or while the screen is off (think audiobooks, podcasts, music, etc.).

We created the sample app PowerPlay to demonstrate how to implement these features using the latest NDK version, C++ and Jetpack Compose.

Here are the most important parts!

The first order of business is to ensure the device supports audio offload for the stream attributes you need. In the example below, we check whether the device supports audio offload for a stereo, float PCM stream with a sample rate of 48,000 Hz.

val format = AudioFormat.Builder()
    .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)
    .setSampleRate(48000)
    .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
    .build()

val attributes = AudioAttributes.Builder()
    .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
    .setUsage(AudioAttributes.USAGE_MEDIA)
    .build()

// isOffloadedPlaybackSupported() is available on API level 29 and above.
val isOffloadSupported =
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
        AudioManager.isOffloadedPlaybackSupported(format, attributes)
    } else {
        false
    }

if (isOffloadSupported) {
    // Calls into native code that opens the Oboe stream (see below).
    player.initializeAudio(PerformanceMode.POWER_SAVING_OFFLOADED)
}

Once we know the device supports audio offload, we can confidently set the Oboe audio streams’ performance mode to the new performance mode option, PerformanceMode::POWER_SAVING_OFFLOADED.

// Create an audio stream
AudioStreamBuilder builder;
builder.setChannelCount(mChannelCount);
builder.setDataCallback(mDataCallback);
builder.setFormat(AudioFormat::Float);
builder.setSampleRate(48000);

builder.setErrorCallback(mErrorCallback);
builder.setPresentationCallback(mPresentationCallback);
builder.setPerformanceMode(PerformanceMode::POWER_SAVING_OFFLOADED);
builder.setFramesPerDataCallback(128);
builder.setSharingMode(SharingMode::Exclusive);
builder.setSampleRateConversionQuality(SampleRateConversionQuality::Medium);
Result result = builder.openStream(mAudioStream);

Now, when audio is played back, it is offloaded to the DSP, helping save power during playback.

There is more to this feature that will be covered in a future blog post, fully detailing out all of the new available APIs that will help you optimize your audio playback experience!

What’s next

Of course, we were only able to share the tip of the iceberg with you here, so to dive deeper into the samples, check out the following links:

Hopefully these examples have inspired you to explore what new and fascinating experiences you can build on Android. Tune in to our session at Google I/O in a couple weeks to learn even more about use-cases supported by solutions like Jetpack CameraX and Jetpack Media3!

Zoho Achieves 6x Faster Logins with Passkey and Credential Manager Integration

Posted by Niharika Arora – Senior Developer Relations Engineer, Joseph Lewis – Staff Technical Writer, and Kumareshwaran Sreedharan – Product Manager, Zoho.

As an Android developer, you're constantly looking for ways to enhance security, improve user experience, and streamline development. Zoho, a comprehensive cloud-based software suite focused on security and seamless experiences, achieved significant improvements by adopting passkeys in their OneAuth Android app.

Since integrating passkeys in 2024, Zoho achieved login speeds up to 6x faster than previous methods and a 31% month-over-month (MoM) growth in passkey adoption.

This case study examines Zoho's adoption of passkeys and Android's Credential Manager API to address authentication difficulties. It details the technical implementation process and highlights the impactful results.

Overcoming authentication challenges

Zoho utilizes a combination of authentication methods to protect user accounts. These include Zoho OneAuth, its own multi-factor authentication (MFA) solution, which supports both password-based and passwordless authentication using push notifications, QR codes, and time-based one-time passwords (TOTP). Zoho also supports federated logins, allowing authentication through Security Assertion Markup Language (SAML) and other third-party identity providers.

Challenges

Zoho, like many organizations, aimed to improve authentication security and user experience while reducing operational burdens. The primary challenges that led to the adoption of passkeys included:

    • Security vulnerabilities: Traditional password-based methods left users susceptible to phishing attacks and password breaches.
    • User friction: Password fatigue led to forgotten passwords, frustration, and increased reliance on cumbersome recovery processes.
    • Operational inefficiencies: Handling password resets and MFA issues generated significant support overhead.
    • Scalability concerns: A growing user base demanded a more secure and efficient authentication solution.

Why the shift to passkeys?

Passkeys were implemented in Zoho's apps to address authentication challenges by offering a passwordless approach that significantly improves security and user experience. This solution leverages phishing-resistant authentication, cloud-synchronized credentials for effortless cross-device access, and biometrics (such as a fingerprint or facial recognition), PIN, or pattern for secure logins, thereby reducing the vulnerabilities and inconveniences associated with traditional passwords.

By adopting passkeys with Credential Manager, Zoho cut login times by up to 6x, slashed password-related support costs, and saw strong user adoption – doubling passkey sign-ins in 4 months with 31% MoM growth. Zoho users now enjoy faster, easier logins and phishing-resistant security.

Quote card reads 'Cloud Lion now enjoys logins that are 30% faster and more secure using passkeys – allowing us to use our thumb instead of a password. With passkeys, we can also protect our critical business data against phishing and brute force attacks.' – Fabrice Venegas, Founder, Cloud Lion (a Zoho integration partner)

Implementation with Credential Manager on Android

So, how did Zoho achieve these results? They used Android's Credential Manager API, the recommended Jetpack library for implementing authentication on Android.

Credential Manager provides a unified API that simplifies handling of the various authentication methods. Instead of juggling different APIs for passwords, passkeys, and federated logins (like Sign in with Google), you use a single interface.

Implementing passkeys at Zoho required both client-side and server-side adjustments. Here's a detailed breakdown of the passkey creation, sign-in, and server-side implementation process.

Passkey creation

Passkey creation in OneAuth on a small screen mobile device

To create a passkey, the app first retrieves configuration details, including a unique challenge, from Zoho's server. This data, formatted as a requestJson string, is used by the app to build a CreatePublicKeyCredentialRequest. The app then calls the credentialManager.createCredential method, which prompts the user to authenticate using their device screen lock (biometrics such as a fingerprint or facial recognition, PIN, or pattern).

Upon successful user confirmation, the app receives the new passkey credential data, sends it back to Zoho's server for verification, and the server then stores the passkey information linked to the user's account. Failures or user cancellations during the process are caught and handled by the app.
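
In code, this flow maps closely onto the Credential Manager API. The following is a minimal sketch, not Zoho's actual implementation; requestJson is the configuration fetched from the server, and the helper name is illustrative:

import android.content.Context
import androidx.credentials.CreatePublicKeyCredentialRequest
import androidx.credentials.CreatePublicKeyCredentialResponse
import androidx.credentials.CredentialManager
import androidx.credentials.exceptions.CreateCredentialCancellationException

suspend fun createPasskey(context: Context, requestJson: String): String? {
    // Use an activity-based context so the system can show the creation UI.
    val credentialManager = CredentialManager.create(context)
    val request = CreatePublicKeyCredentialRequest(requestJson = requestJson)
    return try {
        val response = credentialManager.createCredential(context, request)
                as CreatePublicKeyCredentialResponse
        // Send this registration response to the server for verification and storage.
        response.registrationResponseJson
    } catch (e: CreateCredentialCancellationException) {
        null // The user dismissed the prompt; consider tracking this in analytics.
    }
}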

Sign-in

The Zoho Android app initiates the passkey sign-in process by requesting sign-in options, including a unique challenge, from Zoho's backend server. The app then uses this data to construct a GetCredentialRequest, indicating it will authenticate with a passkey. It then invokes the Android CredentialManager.getCredential() API with this request. This action triggers a standardized Android system interface, prompting the user to choose their Zoho account (if multiple passkeys exist) and authenticate using their device's configured screen lock (fingerprint, face scan, or PIN). After successful authentication, Credential Manager returns a signed assertion (proof of login) to the Zoho app. The app forwards this assertion to Zoho's server, which verifies the signature against the user's stored public key and validates the challenge, completing the secure sign-in process.
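
A corresponding sketch of the client-side sign-in call, again with illustrative names and the server-provided requestJson containing the challenge:

import android.content.Context
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetPublicKeyCredentialOption
import androidx.credentials.PublicKeyCredential

suspend fun signInWithPasskey(context: Context, requestJson: String): String {
    val credentialManager = CredentialManager.create(context)
    val request = GetCredentialRequest(
        credentialOptions = listOf(GetPublicKeyCredentialOption(requestJson = requestJson))
    )
    // Triggers the system account/passkey selector and screen lock prompt.
    val result = credentialManager.getCredential(context, request)
    val credential = result.credential as PublicKeyCredential
    // Forward this signed assertion to the server for verification.
    return credential.authenticationResponseJson
}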

Server-side implementation

Zoho's transition to supporting passkeys benefited from their backend systems already being FIDO WebAuthn compliant, which streamlined the server-side implementation process. However, specific modifications were still necessary to fully integrate passkey functionality.

The most significant challenge involved adapting the credential storage system. Zoho's existing authentication methods, which primarily used passwords and FIDO security keys for multi-factor authentication, required different storage approaches than passkeys, which are based on cryptographic public keys. To address this, Zoho implemented a new database schema specifically designed to securely store passkey public keys and related data according to WebAuthn protocols. This new system was built alongside a lookup mechanism to validate and retrieve credentials based on user and device information, ensuring backward compatibility with older authentication methods.

Another server-side adjustment involved implementing the ability to handle requests from Android devices. Passkey requests originating from Android apps use a unique origin format (android:apk-key-hash:example) that is distinct from standard web origins that use a URI-based format (https://example.com/app). The server logic needed to be updated to correctly parse this format, extract the SHA-256 fingerprint hash of the app's signing certificate, and validate it against a pre-registered list. This verification step ensures that authentication requests genuinely originate from Zoho's Android app and protects against phishing attacks.

This code snippet demonstrates how the server checks for the Android-specific origin format and validates the certificate hash:

val origin: String = clientData.getString("origin")

if (origin.startsWith("android:apk-key-hash:")) {
    // "android:apk-key-hash:<hash>" splits into three parts, with the
    // base64url-encoded SHA-256 fingerprint hash in the last position.
    val originSplit: List<String> = origin.split(":")
    if (originSplit.size >= 3) {
        val androidOriginHashDecoded: ByteArray =
            Base64.getUrlDecoder().decode(originSplit[2])

        if (!androidOriginHashDecoded.contentEquals(oneAuthSha256FingerPrint)) {
            throw IAMException(IAMErrorCode.WEBAUTH003)
        }
    } else {
        // Optional: Handle the case where the origin string is malformed
    }
}
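
For reference, the value compared against above (oneAuthSha256FingerPrint) is the SHA-256 digest of the app's signing certificate; the same digest, base64url-encoded, is what appears in the android:apk-key-hash origin. A small illustrative helper, with hypothetical function names:

import java.security.MessageDigest
import java.util.Base64

// Illustrative: derive the expected hash from the DER-encoded signing certificate.
fun expectedApkKeyHash(signingCertDer: ByteArray): ByteArray =
    MessageDigest.getInstance("SHA-256").digest(signingCertDer)

fun apkKeyHashOrigin(signingCertDer: ByteArray): String =
    "android:apk-key-hash:" + Base64.getUrlEncoder().withoutPadding()
        .encodeToString(expectedApkKeyHash(signingCertDer))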

Error handling

Zoho implemented robust error handling mechanisms to manage both user-facing and developer-facing errors. A common error, CreateCredentialCancellationException, appeared when users manually canceled their passkey setup. Zoho tracked the frequency of this error to assess potential UX improvements. Based on Android's UX recommendations, Zoho took steps to better educate their users about passkeys, ensure users were aware of passkey availability, and promote passkey adoption during subsequent sign-in attempts.

This code example demonstrates how Zoho handled their most common passkey creation errors:

private fun handleFailure(e: CreateCredentialException) {
    val msg = when (e) {
        is CreateCredentialCancellationException -> {
            Analytics.addAnalyticsEvent("PASSKEY_SETUP_CANCELLED", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "The operation was canceled by the user."
        }
        is CreateCredentialInterruptedException -> {
            Analytics.addAnalyticsEvent("PASSKEY_SETUP_INTERRUPTED", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "Passkey setup was interrupted. Please try again."
        }
        is CreateCredentialProviderConfigurationException -> {
            Analytics.addAnalyticsEvent("PASSKEY_PROVIDER_MISCONFIGURED", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "Credential provider misconfigured. Contact support."
        }
        is CreateCredentialUnknownException -> {
            Analytics.addAnalyticsEvent("PASSKEY_SETUP_UNKNOWN_ERROR", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "An unknown error occurred during passkey setup."
        }
        is CreatePublicKeyCredentialDomException -> {
            Analytics.addAnalyticsEvent("PASSKEY_WEB_AUTHN_ERROR", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "Passkey creation failed: ${e.domError}"
        }
        else -> {
            Analytics.addAnalyticsEvent("PASSKEY_SETUP_FAILED", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "An unexpected error occurred. Please try again."
        }
    }
}

Testing passkeys in intranet environments

Zoho faced an initial challenge in testing passkeys within a closed intranet environment. The Google Password Manager verification process for passkeys requires public domain access to validate the relying party (RP) domain. However, Zoho's internal testing environment lacked this public Internet access, causing the verification process to fail and hindering successful passkey authentication testing. To overcome this, Zoho created a publicly accessible test environment, which included hosting a temporary server with an asset link file and domain validation.

This example from the assetlinks.json file used in Zoho's public test environment demonstrates how to associate the relying party domain with the specified Android app for passkey validation.

[
    {
        "relation": [
            "delegate_permission/common.handle_all_urls",
            "delegate_permission/common.get_login_creds"
        ],
        "target": {
            "namespace": "android_app",
            "package_name": "com.zoho.accounts.oneauth",
            "sha256_cert_fingerprints": [
                "SHA_HEX_VALUE" 
            ]
        }
    }
]

Integrate with an existing FIDO server

Android's passkey system utilizes the modern FIDO2 WebAuthn standard. This standard requires requests in a specific JSON format, which helps maintain consistency between native applications and web platforms. To enable Android passkey support, Zoho made minor compatibility and structural changes to correctly generate and process requests that adhere to the required FIDO2 JSON structure.

This server update involved several specific technical adjustments:

      1. Encoding conversion: The server converts the Base64 URL encoding (commonly used in WebAuthn for fields like credential IDs) to standard Base64 encoding before it stores the relevant data. The snippet below shows how a rawId might be encoded to standard Base64:

// Decode the Base64 URL encoded rawId, then re-encode it as standard Base64 for storage
val rawIdBytes: ByteArray = Base64.getUrlDecoder().decode(rawId)
val base64RawId: String = Base64.getEncoder().encodeToString(rawIdBytes)

      2. Transport list format: To ensure consistent data processing, the server logic handles lists of transport mechanisms (such as USB, NFC, and Bluetooth, which specify how the authenticator communicated) as JSON arrays.
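
A trivial sketch of that normalization (names are illustrative):

import org.json.JSONArray

// Store the authenticator's transports as a JSON array, e.g. ["usb","nfc","ble"].
fun transportsAsJson(transports: List<String>): JSONArray = JSONArray(transports)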

      3. Client data alignment: The Zoho team adjusted how the server encodes and decodes the clientDataJson field. This ensures the data structure aligns precisely with the expectations of Zoho’s existing internal APIs. The example below illustrates part of the conversion logic applied to client data before the server processes it:

private fun convertForServer(clientDataBase64: String): String {
    val clientDataBytes = BaseEncoding.base64().decode(clientDataBase64)
    val clientDataJson = JSONObject(String(clientDataBytes, StandardCharsets.UTF_8))
    val clientJson = JSONObject()

    // Re-encode the challenge as Base64 URL, as the internal APIs expect.
    val challengeFromJson = clientDataJson.getString("challenge")
    clientJson.put(
        "challenge",
        BaseEncoding.base64Url().encode(challengeFromJson.toByteArray(StandardCharsets.UTF_8))
    )

    clientJson.put("origin", clientDataJson.getString("origin"))
    clientJson.put("type", clientDataJson.getString("type"))
    clientJson.put("androidPackageName", clientDataJson.getString("androidPackageName"))
    return BaseEncoding.base64().encode(clientJson.toString().toByteArray())
}

User guidance and authentication preferences

A central part of Zoho's passkey strategy involved encouraging user adoption while providing flexibility to align with different organizational requirements. This was achieved through careful UI design and policy controls.

Zoho recognized that organizations have varying security needs. To accommodate this, Zoho implemented:

    • Admin enforcement: Through the Zoho Directory admin panel, administrators can designate passkeys as the mandatory, default authentication method for their entire organization. When this policy is enabled, employees are required to set up a passkey upon their next login and use it going forward.
    • User choice: If an organization does not enforce a specific policy, individual users maintain control. They can choose their preferred authentication method during login, selecting from passkeys or other configured options via their authentication settings.

To make adopting passkeys appealing and straightforward for end-users, Zoho implemented:

    • Easy setup: Zoho integrated passkey setup directly into the Zoho OneAuth mobile app (available for both Android and iOS). Users can conveniently configure their passkeys within the app at any time, smoothing the transition.
    • Consistent access: Passkey support was implemented across key user touchpoints, ensuring users can register and authenticate using passkeys via:
        • The Zoho OneAuth mobile app (Android & iOS);

This method ensured that the process of setting up and using passkeys was accessible and integrated into the platforms they already use, regardless of whether it was mandated by an admin or chosen by the user. You can learn more about how to create smooth user flows for passkey authentication by exploring our comprehensive passkeys user experience guide.

Impact on developer velocity and integration efficiency

Credential Manager, as a unified API, also helped improve developer productivity compared to older sign-in flows. It reduced the complexity of handling multiple authentication methods and APIs separately, leading to faster integration, from months to weeks, and fewer implementation errors. This collectively streamlined the sign-in process and improved overall reliability.

By implementing passkeys with Credential Manager, Zoho achieved significant, measurable improvements across the board:

    • Dramatic speed improvements
        • 2x faster login compared to traditional password authentication.
        • 4x faster login compared to username or mobile number with email or SMS OTP authentication.
        • 6x faster login compared to username, password, and SMS or authenticator OTP authentication.
    • Reduced support costs
        • Reduced password-related support requests, especially for forgotten passwords.
        • Lower costs associated with SMS-based 2FA, as existing users can onboard directly with passkeys.
    • Strong user adoption & enhanced security:
        • Passkey sign-ins doubled in just 4 months, showing high user acceptance.
        • Users migrating to passkeys are fully protected from common phishing and password breach threats.
        • With 31% MoM adoption growth, more users are benefiting daily from enhanced security against vulnerabilities like phishing and SIM swaps.

Recommendations and best practices

To successfully implement passkeys on Android, developers should consider the following best practices:

    • Leverage Android's Credential Manager API:
        • Credential Manager simplifies credential retrieval, reducing developer effort and ensuring a unified authentication experience.
        • Handles passwords, passkeys, and federated login flows in a single interface.
    • Ensure data encoding consistency while migrating from other FIDO authentication solutions:
        • Make sure you handle consistent formatting for all inputs/outputs while migrating from other FIDO authentication solutions such as FIDO security keys.
    • Optimize error handling and logging:
        • Implement robust error handling for a seamless user experience.
        • Provide localized error messages and use detailed logs to debug and resolve unexpected failures.
    • Educate users on passkey recovery options:
        • Prevent lockout scenarios by proactively guiding users on recovery options.
    • Monitor adoption metrics and user feedback:
        • Track user engagement, passkey adoption rates, and login success rates to keep optimizing user experience.
        • Conduct A/B testing on different authentication flows to improve conversion and retention.

Passkeys, combined with the Android Credential Manager API, offer a powerful, unified authentication solution that enhances security while simplifying user experience. Passkeys significantly reduce phishing risks, credential theft, and unauthorized access. We encourage developers to try out the experience in their app and bring the most secure authentication to their users.

Get started with passkeys and Credential Manager

Get hands on with passkeys and Credential Manager on Android using our public sample code.

If you have any questions or issues, you can share with us through the Android Credentials issues tracker.

Android Studio Meerkat Feature Drop is stable

Posted by Adarsh Fernando, Group Product Manager

Today, we're excited to announce the stable release of Android Studio Meerkat Feature Drop (2024.3.2)!

This release brings a host of new features and improvements designed to boost your productivity and enhance your development workflow. With numerous enhancements, this latest release helps you build high-quality Android apps faster and more efficiently: streamlined Jetpack Compose previews, new Gemini capabilities, better Kotlin Multiplatform (KMP) integration, improved device management, and more.

Read on to learn about the key updates in Android Studio Meerkat Feature Drop, and download the latest stable version today to explore them yourself!

Developer Productivity Enhancements

Analyze Crash Reports with Gemini in Android Studio

Debugging production crashes can require you to spend significant time switching contexts between your crash reporting tool, such as Firebase Crashlytics or Android Vitals, and investigating root causes in the IDE. Now, when viewing reports in App Quality Insights (AQI), click the Insights tab. Gemini provides a summary of the crash, generates insights, and links to useful documentation. If you also provide Gemini with access to local code context, it can provide more accurate results, relevant next steps, and code suggestions. This helps you reduce the time spent diagnosing and resolving issues.

moving image of Gemini in the App Quality Insights tool window in Android Studio
Gemini helps you investigate, understand, and resolve crashes in your app much more quickly in the App Quality Insights tool window.

Generate Unit Test Scenarios with Gemini

Writing effective unit tests is crucial but can be time-consuming. Gemini now helps kickstart this process by generating relevant test scenarios. Right-click on a class in your editor and select Gemini > Generate Unit Test Scenarios. Gemini analyzes the code and suggests test cases with descriptive names, outlining what to test. While you still implement the specific test logic, this significantly speeds up the initial setup and ensures better test coverage by suggesting scenarios you might have missed.

moving image of generating unit test scenarios in Android Studio
Gemini helps you generate unit test scenarios for your app.

Gemini Prompt Library

No more retyping your most frequently used prompts for Gemini! The new Prompt Library lets you save prompts directly within Android Studio (Settings > Gemini > Prompt Library). Whether it's a specific code generation pattern, a refactoring instruction, or a debugging query you use often, save it once from the chat (right-click > Save prompt) and re-apply it instantly from the editor (right-click > Gemini > Prompt Library). Prompts that you save can also be shared and standardized across your team.

moving image of prompt library in Android Studio
The prompt library saves your frequently used Gemini prompts to make them easier to use.

You have the option to store prompts at the IDE level or the project level:

    • IDE level prompts are private and can be used across multiple projects.
    • Project level prompts can be shared across teams working on the same project (if the .idea folder is added to version control).

Compose and UI Development

Themed Icon Support Preview

Ensure your app's branding looks great with Android’s themed icons. Android Studio now lets you preview how your existing launcher icon adapts to the monochromatic theming algorithm directly within the IDE. This quick visual check helps you identify potential contrast issues or undesirable shapes early in the workflow, even before you provide a dedicated monochromatic drawable. This allows for faster iteration on your app's visual identity.

moving image of themed icon support in preview in Android Studio
Themed icon support in Preview helps you visually check how your existing launcher icon adapts to monochromatic theming.

Compose Preview Enhancements

Iterating on your Compose UI is now faster and better organized:

    • Enhanced Zoom: Navigate complex layouts more easily with smoother, more responsive zooming in your Compose previews.
    • Collapsible Groups: Tidy up your preview surface by collapsing groups of related composables under their @Preview annotation names, letting you focus on specific parts of the UI without clutter.
    • Grid Mode by Default: Grid mode is now the default for a clear overview. Gallery mode (for flipping through individual previews) is available via right-click, while List view has been removed to streamline the experience.
moving image of Compose previews in Android Studio
Compose previews render more smoothly and make it easier to hide previews you’re not focused on.

Build and Deploy

KMP Shared Module Integration

Android Studio now streamlines adding shared logic to your Android app with the new Kotlin Multiplatform Shared Module template. This provides a dedicated starting point within your Android project, making it easier to structure and build shared business logic for both Android and iOS directly from Android Studio.

Kotlin Multiplatform template in Android Studio
The new Kotlin Multiplatform module template makes it easier to add shared business logic to your existing app.

Updated UX for Adding Devices

Spend less time configuring test devices. The new Device Manager UX for adding virtual and remote devices makes it much easier to configure the devices you want from the Device Manager. To get started, click the ‘+’ action at the top of the window and select one of these options:

    • Create Virtual Device: New filters, recommendations, and creation flow guide you towards creating AVDs that are best suited for your intended purpose and your machine's performance.
    • Add Remote Devices: With Android Device Streaming, powered by Firebase, you can connect and debug your app with a variety of real physical devices. With a new catalog view and filters, it's now easier to locate and start using the device you need in just a few clicks.
moving image of configuring virtual devices in Android Studio
It’s now easier to configure virtual devices that are optimized for your workstation.

Google Play Deprecated SDK Warnings

Stay more informed about SDKs you publish with your app. Android Studio now displays warnings from the Google Play SDK Index when an SDK used in your app has been deprecated by its author. These warnings include information about suggested alternative SDKs, helping you proactively manage dependencies and avoid potential issues related to outdated or insecure libraries.

Google Play Deprecated SDK warnings in Android Studio
Play deprecated SDK warnings help you avoid potential issues related to outdated or insecure libraries.

Updated Build Menu and Actions

We've refined the Build menu for a more intuitive experience:

    • New 'Build run-configuration-name' Action: Builds the currently selected run configuration (e.g., :app or a specific test). This is now the default action for the toolbar button and Control/Command+F9.
    • Reordered Actions: The new build action is prioritized at the top, followed by Compile and Assemble actions.
    • Clearer Naming: "Rebuild Project" is now "Clean and Assemble Project with Tests". "Make Project" is renamed to "Assemble Project", and a new "Assemble Project with Tests" action is available.
Build menu in Android Studio
The Build menu includes behavior and naming changes to simplify and streamline the experience.

Standardized Config Directories

Switching between Stable, Beta, and Canary versions of Android Studio is now smoother. Configuration directories are standardized, removing the "Preview" suffix for non-stable builds. We've also added the micro version (e.g., AndroidStudio2024.3.2) to the path, allowing different feature drops to run side-by-side without conflicts. This simplifies managing your IDE settings, especially if you work with multiple Android Studio installations.

IntelliJ platform update

Android Studio Meerkat Feature Drop (2024.3.2) includes the IntelliJ 2024.3 platform release, which has many new features such as a feature complete K2 mode, more reliable Java** and Kotlin code inspections, grammar checks during indexing, debugger improvements, speed and quality of life improvements to Terminal, and more.

For more information, read the full IntelliJ 2024.3 release notes.

Summary

Android Studio Meerkat Feature Drop (2024.3.2) delivers these key features and enhancements:

    • Developer Productivity:
        • Analyze Crash Reports with Gemini
        • Generate Unit Test Scenarios with Gemini
        • Gemini Prompt Library
    • Compose and UI:
        • Themed Icon Preview
        • Compose Preview Enhancements (Zoom, Collapsible Groups, View Modes)
    • Build and Deploy:
        • KMP Shared Module Template
        • Updated UX for Adding Devices
        • Google Play SDK Insights: Deprecated SDK Warnings
        • Updated Build Menu & Actions
        • Standardized Config Directories
    • IntelliJ Platform Update
        • Feature complete K2 mode
        • Improved Kotlin and Java** inspection reliability
        • Debugger improvements
        • Speed and quality of life improvements in Terminal

Getting Started

Ready to elevate your Android development? Download Android Studio Meerkat Feature Drop and start using these powerful new features today!

As always, your feedback is crucial. Check known issues, report bugs, suggest improvements, and connect with the community on LinkedIn, Medium, YouTube, or X. Let's continue building amazing Android apps together!


**Java is a trademark or registered trademark of Oracle and/or its affiliates.