Posted by Matthew McCullough – VP of Product Management, Android Developer
Since launching the Android XR SDK Developer Preview alongside Samsung, Qualcomm, and Unity last year, we’ve been blown away by all of the excitement we’ve been hearing from the broader Android community. Whether it's through coding live-streams or local Google Developer Group talks, it's been an outstanding experience participating in the community to build the future of XR together, and we're just getting started.
Today we’re excited to share an update to the Android XR SDK: Developer Preview 2, packed with new features and improvements to help you develop helpful and delightful immersive experiences with familiar Android APIs, tools and open standards created for XR.
Since the release of Developer Preview 1, we’ve been focused on making the APIs easier to use and adding new immersive Android XR features. Your feedback has helped us shape the development of the tools, SDKs, and the platform itself.
With the Jetpack XR SDK, you can now play back 180° and 360° videos, which can be stereoscopic by encoding with the MV-HEVC specification or by encoding view-frames adjacently. The MV-HEVC standard is optimized and designed for stereoscopic video, allowing your app to efficiently play back immersive videos at great quality. Apps built with Jetpack Compose for XR can use the SpatialExternalSurface composable to render media, including stereoscopic videos.
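Here's a minimal sketch of hooking a video player up to a stereoscopic surface with Jetpack Compose for XR. The SpatialExternalSurface parameters and scope callbacks shown are assumptions based on the Developer Preview and may differ in the release you're using, and exoPlayer stands in for however your app creates its player:

// Renders a side-by-side stereoscopic video onto a spatial surface (sketch).
SpatialExternalSurface(
    modifier = SubspaceModifier.width(1280.dp).height(720.dp),
    stereoMode = StereoMode.SideBySide, // or TopBottom / Mono
) {
    // The scope exposes the Android Surface backing this composable.
    onSurfaceCreated { surface -> exoPlayer.setVideoSurface(surface) }
    onSurfaceDestroyed { exoPlayer.setVideoSurface(null) }
}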
Using Jetpack Compose for XR, you can now also define layouts that adapt to different XR display configurations. For example, use a SubspaceModifier to specify the size of a Subspace as a percentage of the device’s recommended viewing size, so a panel effortlessly fills the space it's positioned in.
Material Design for XR now supports more component overrides for TopAppBar, AlertDialog, and ListDetailPaneScaffold, helping your large-screen enabled apps that use Material Design effortlessly adapt to the new world of XR.
An app adapts to XR using Material Design for XR with the new component overrides
In ARCore for Jetpack XR, you can now track hands after requesting the appropriate permissions. Hands are a collection of 26 posed hand joints that can be used to detect hand gestures and bring a whole new level of interaction to your Android XR apps:
Hands bring a natural input method to your Android XR experience.
The Android XR Emulator has also received updates to stability, support for AMD GPUs, and is now fully integrated within the Android Studio UI.
The Android XR Emulator is now integrated in Android Studio
Developers using Unity have already successfully created and ported existing games and apps to Android XR. Today, you can upgrade to the Pre-Release version 2 of the Unity OpenXR: Android XR package! This update adds many performance improvements such as support for Dynamic Refresh Rate, which optimizes your app’s performance and power consumption. Shaders made with Shader Graph now support SpaceWarp, making it easier to use SpaceWarp to reduce compute load on the device. Hand meshes are now exposed with occlusion, which enables realistic hand visualization.
Check out Unity’s improved Mixed Reality template for Android XR, which now includes support for occlusion and persistent anchors.
We recently launched Android XR Samples for Unity, which demonstrate capabilities on the Android XR platform such as hand tracking, plane tracking, face tracking, and passthrough.
Google’s open-source Unity samples demonstrate platform features and show how they’re implemented
The Firebase AI Logic for Unity is now in public preview! This makes it easy for you to integrate gen AI into your apps, enabling the creation of AI-powered experiences with Gemini and Android XR. The Firebase AI Logic fully supports Gemini's capabilities, including multimodal input and output, and bi-directional streaming for immersive conversational interfaces. Built with production readiness in mind, Firebase AI Logic is integrated with core Firebase services like App Check, Remote Config, and Cloud Storage for enhanced security, configurability, and data management. Learn more about this on the Firebase blog or go straight to the Gemini API using Vertex AI in Firebase SDK documentation to get started.
Continuing to build the future together
Our commitment to open standards continues with the glTF Interactivity specification, developed in collaboration with the Khronos Group, which will be supported in glTF models rendered by Jetpack XR later this year. Models using the glTF Interactivity specification are self-contained interactive assets that can have many pre-programmed behaviors, like rotating objects on a button press or changing the color of a material over time.
Android XR will be available first on Samsung’s Project Moohan, launching later this year. Soon after, our partners at XREAL will release the next Android XR device. Codenamed Project Aura, it’s a portable and tethered device that gives users access to their favorite Android apps, including those that have been built for XR. It will launch as a developer edition, specifically for you to begin creating and experimenting. The best news? With the familiar tools you use to build Android apps today, you can build for these devices too.
XREAL’s Project Aura
The Google Play Store is also getting ready for Android XR. It will list supported 2D Android apps on the Android XR Play Store when it launches later this year. If you are working on an Android XR differentiated app, you can get it ready for the big launch and be one of the first differentiated apps on the Android XR Play Store:
New! Make your XR app stand out from others on Play Store with preview assets such as stereoscopic 180° or 360° videos, as well as screenshots, app description, and non-spatial video.
And we know many of you are excited for the future of Android XR on glasses. We are shaping the developer experience now and will share more details on how you can participate later this year.
To get started creating and developing for Android XR, check out developer.android.com/develop/xr where you will find all of the tools, libraries, and resources you need to work with the Android XR SDK. In particular, try out our samples and codelabs.
We welcome your feedback, suggestions, and ideas as you’re helping shape Android XR. Your passion, expertise, and bold ideas are vital as we continue to develop Android XR together. We look forward to seeing your XR-differentiated apps when Android XR devices launch later this year!
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
Posted by Mayank Jain – Product Manager, Android Studio
Android Studio continues to advance Android development by empowering developers to build better app experiences, faster. Our focus has been on improving AI-driven functionality with Gemini, streamlining UI creation and testing, and helping you future-proof apps for the evolving Android ecosystem. These innovations accelerate development cycles, improve app quality, and help you stay ahead in the fast-paced world of mobile development.
Get the latest Gemini 2.5 Pro model in Android Studio
The power of artificial intelligence through Gemini is now deeply integrated into Android Studio, helping you at all stages of Android app development. Now with access to Gemini 2.5 Pro, we're continuing to look for new ways to use AI to supercharge Android development — and help you build better app experiences, faster.
Journeys for Android Studio
We’re also introducing agentic AI with Gemini in Android Studio. Testing your app is now much easier when you create journeys: just describe the actions and assertions in natural language for the user journeys you want to test, and Gemini performs the tests for you. Creating journeys lets you test your app’s critical user journeys across various devices without writing extensive code. You can then run these tests on local physical or virtual Android devices and validate that the test worked as intended by reviewing detailed results directly within the IDE. Although the feature is experimental, the goal is to increase the speed at which you can ship high-quality code, while significantly reducing the amount of time you spend manually testing, validating, or reproducing issues.
Journeys for Android Studio uses Gemini to test your app.
Suggested fixes for crashes with Gemini
The App Quality Insights panel has a great new feature. Crash insights now analyzes your app's source code referenced from the crash and not only offers a comprehensive analysis and explanation of the crash, but in some cases even offers a source fix! With just a few clicks, you can review the changes, accept the code suggestions, and push them to your source control. Now you can determine the root cause of a crash and fix it much faster!
Crash analysis with Gemini
AI features in Studio Labs (stable releases only)
We’ve heard feedback that developers want to access AI features in stable channels as soon as possible. You can now discover and try out the latest experimental AI features through the Studio Labs menu in Settings, starting with the Narwhal stable release. You can get a first look at AI experiments, share your feedback, and help us bring them into the IDE you use every day. Go to the Studio Labs tab in Settings and enable the features you would like to start using. These AI features are automatically enabled in canary releases, and no action is required there.
AI features in Studio Labs
Compose preview generation with Gemini
Gemini can automatically generate Jetpack Compose preview code, saving you time and effort. You can access this feature by right-clicking within a composable and navigating to Gemini > Generate Compose Preview or Generate Compose Preview for this file, or by clicking the link in an empty preview panel. The generated preview code is presented in a diff view that enables you to quickly accept, edit, or reject the suggestions, providing a faster way to visualize your composables.
Compose Preview generation with Gemini
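To give a sense of the output, a generated preview typically looks like an ordinary @Preview function wrapping your composable; the names Greeting and MyAppTheme below are placeholders for your own code:

@Preview(showBackground = true)
@Composable
fun GreetingPreview() {
    MyAppTheme {
        Greeting(name = "Android")
    }
}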
Transform UI with Gemini
You can now transform UI code within the Compose Preview environment using natural language directly in the preview. To use it, right-click in the Compose Preview and select "Transform UI With Gemini". Then enter your natural language requests, such as "Center align these buttons," to guide Gemini in adjusting your layout or styling, or select specific UI elements in the preview for better context. Gemini will then edit your Compose UI code in place, which you can review and approve, speeding up the UI development workflow.
Transform UI with Gemini
Image attachment in Gemini
You can now attach image files and provide additional information along with your prompt. For example, you can attach UI mock-ups or screenshots to give Gemini context about your app’s layout. Consequently, Gemini can generate Compose code based on a provided image, or explain the composables and data flow of a UI screenshot.
Image attachment and preview generation via Gemini in Android Studio
@File context in Gemini
You can now attach your project files as context in chat interactions with Gemini in Android Studio. This lets you quickly reference files in your prompts for Gemini. In the Gemini chat input, type @ to bring up a file completion menu and select files to attach. You can also click the Context drop-down to see which files were automatically attached by Gemini. This gives you more control over the context sent to Gemini.
@File context in Gemini
Rules in Prompt Library
Rules in Gemini let you define preferred coding styles or output formats within the Prompt Library. You can also mention your preferred tech stack and languages. When you set these preferences once, they are automatically applied to all subsequent prompts sent to Gemini. Rules help the AI understand project standards and preferences for more accurate and tailored code assistance. For example, you can create a rule such as “Always give me concise responses in Kotlin.”
Prompt Library Improvements
Gemini in Android Studio for businesses
Gemini in Android Studio for businesses is now available. It provides all the benefits of Gemini in Android Studio, plus enterprise-grade privacy and security features backed by Google Cloud — giving your team the confidence they need to deploy AI at scale while keeping their data protected.
Improved tools for creating great user experiences
Elevate your Compose UI development with the latest Android Studio enhancements.
Compose preview improvements
Compose preview interaction is now more efficient with the latest navigation improvements. Click on the preview name to jump to the preview definition or click the individual component to jump to the function where it’s defined. Hover states provide immediate visual feedback as you mouse over a preview frame. Improved keyboard arrow navigation eases movement through multiple previews, enabling faster UI iteration and refinement. Additionally, the Compose preview picker is now also available in the stable release.
Compose preview navigation improvements
Compose preview picker
Resizable Previews
While in Compose Preview’s focus mode in Android Studio, you can now resize the preview window by dragging its edges. This gives you instant visual feedback on how your UI adapts to different screen sizes, ensuring responsiveness and visual consistency. This rapid iteration helps create UIs that look great on any Android device.
Resizable Preview
Embedded Android XR Emulator
The Android XR Emulator now launches by default in the embedded state. You can now deploy your application, navigate the 3D space and use the Layout Inspector directly inside Android Studio, streamlining your development flow.
Embedded XR Emulator
Improved tools for future-proofing and testing your Android apps
We’ve enhanced some of your favorite features so that you can test more confidently, future-proof your apps, and ensure app compatibility across a wide range of devices and Android versions.
Streamlined testing with Backup and Restore support
Android Studio offers built-in Backup and Restore support by letting you trigger app backups on connected devices directly from the Running Devices window. You can also configure your Run/Debug settings to automatically restore from a previous backup when launching your app. This simplifies the process of validating your app's Backup and Restore implementation and speeds up development by reducing manual setup for testing.
Streamlined testing with Backup and Restore support
Android’s transition to 16 KB Page Size
The underlying architecture of Android is evolving, and a key step forward is the transition to 16 KB page sizes. This fundamental change requires all Android apps with native code or dependencies to be recompiled for compatibility. To help you navigate this transition smoothly, Android Studio now offers proactive warnings when building APKs or Android App Bundles that are incompatible with 16 KB devices. Using the APK Analyzer, you can also find out which libraries are incompatible with 16 KB devices. To test your apps in this new environment, a dedicated 16 KB emulator target is also available in Android Studio alongside existing 4 KB images.
Android’s transition to 16 KB page size
Backup and Sync your Studio settings
When you sign in with your Google account or a JetBrains account in Android Studio, you can now sync your customizations and preferences across all installs and restore preferences automatically on remote Android Studio instances. Simply select “Enable Backup and Sync” while you’re logging in to Android Studio, or from the Settings > Backup and Sync page, and follow the prompts.
Backup and Sync your Studio settings
Increasing developer productivity with Android’s Kotlin Multiplatform improvements
Kotlin Multiplatform (KMP) enables teams to reach new audiences across Android and iOS with less development time. Usage has been growing in the developer community, with apps such as Google Docs now using it in production. We’ve released new Android Studio KMP project templates, updated Jetpack libraries and new codelabs (Get Started with KMP and Migrate Existing Apps to Room KMP) to help developers who are looking to get started with KMP.
Experimental features and features coming soon to Android Studio
Android Studio Cloud (experimental)
Android Studio Cloud is now available as an experimental public preview, accessible through Firebase Studio. This service streams a Linux virtual machine running Android Studio directly to your web browser, enabling Android application development from anywhere with an internet connection. Get started quickly with dedicated workspaces featuring pre-downloaded Android SDK components. Explore sample projects or seamlessly access your existing Android app projects from GitHub without a local installation. Please note that Android Studio Cloud is currently in an experimental phase. Features and capabilities are subject to significant change, and users may encounter known limitations.
Version Upgrade Agent (coming soon)
The Version Upgrade Agent, as part of Gemini in Android Studio, is designed to save you time and effort by automating your dependency upgrades. It intelligently analyzes your Android project, parses the release notes for included libraries, and proposes updates directly from your libs.versions.toml file or the refactoring menu (right-click > Refactor > Update dependencies). The agent automatically updates dependencies to the latest compatible version, builds the project, fixes any errors, and repeats until all errors are fixed. Once the dependencies are upgraded, the agent generates a report showing the changes it made, as well as a high level summary highlighting the changes included in the updated libraries.
Version Upgrade Agent
Agent Mode (coming soon)
Agent Mode is a new autonomous AI feature using Gemini, designed to handle complex, multi-stage development tasks that go beyond typical AI assistant capabilities, invoking multiple tools to accomplish tasks on your behalf.
You can describe a complex goal, like integrating a new API, and the agent will formulate an execution plan that spans across files in your project — adding necessary dependencies, editing files, and iteratively fixing bugs. This feature aims to empower all developers to tackle intricate challenges and accelerate the building and prototyping process. You can access it via the Gemini chat window in Android Studio.
Agent Mode
Play Policy Insights beta in Android Studio (coming soon)
Android Studio now includes richer insights and guidance on Google Play policies that might impact your app. This information, available as lint checks, helps you build safer apps from the start, preventing issues that could disrupt your launch process and cost more time and resources to fix later on. These lint checks will present an overview of the policy, dos and don’ts, and links to Play policy pages where you can find more information about the policy.
Play Policy Insights beta in Android Studio
IntelliJ Platform Update (2025.1)
Here are some important IDE improvements in the IntelliJ IDEA 2025.1 platform release:
Kotlin K2 mode: Android Studio now supports Kotlin K2 mode in Android-specific features requiring language support such as Live Edit, Compose Preview and many more
Improved dependency resolution in Kotlin build scripts: Makes your Kotlin build scripts for Android projects more stable and predictable
Hints about code alterations by Kotlin compiler plugins: Gives you clearer insights into how plugins used in Android development modify your Kotlin code
Automatic download of library sources for Gradle projects: Simplifies debugging and understanding your Android project dependencies by providing immediate access to their source code
Support for Gradle Daemon toolchains: Helps prevent potential JVM errors during your Android project builds and ensures smoother synchronization
Automatic plugin updates: Keeps your Android development tools within IntelliJ IDEA up-to-date effortlessly
To Summarize
Android Studio Narwhal Feature Drop (2025.2.1) is now available in the Android Studio canary channel with some amazing features to help your Android development
AI-powered development tools for Android
Journeys for Android Studio: Validate app flows easily using tests and assertions in natural language
Suggested fixes for crashes with Gemini: Determine the root cause of a crash and fix it much faster with Gemini
AI features in Studio Labs
Compose preview generation with Gemini: Generate Compose previews with Gemini's code suggestions
Transform UI with Gemini: Transform UI in Compose Preview with natural language, speeding development
Image attachment in Gemini: Attach images to Gemini for context-aware code generation
@File context in Gemini: Reference project files in Gemini chats for quick AI prompts
Rules in Prompt Library: Define preferred coding styles or output formats within the Prompt Library
Improved tools for creating great user experiences
Compose preview improvements: Navigate the Compose Preview using clickable names and components
Resizable preview: Instantly see how your Compose UI adapts to different screen sizes
Embedded XR Emulator: XR Emulator now launches by default in the embedded state
Improved tools for future-proofing and testing your Android apps
Streamlined testing with Backup and Restore support: Effortless app testing, trigger backups, auto-restore for faster validation
Android’s transition to 16 KB Page Size: Prepare for Android's 16 KB page size with Studio's early warnings and testing
Backup and Sync your Studio settings: Sync Android Studio settings across devices and restore automatically for convenience
Increasing developer productivity with Android’s Kotlin Multiplatform improvements: simplified cross-platform Android and iOS development with new tools
Experimental features and features coming soon to Android Studio
Android Studio Cloud (experimental): Develop Android apps from any browser with just an internet connection
Version Upgrade Agent (coming soon): Automated dependency updates save time and effort, ensuring projects stay current
Agent Mode (coming soon): Empowering developers to tackle multistage complex tasks that go beyond typical AI assistant capabilities
Play Policy Insights beta in Android Studio (coming soon): Insights and guidance on Google Play policies that might impact your app
How to get started
Ready to try the exciting new features in Android Studio?
Posted by Fahd Imtiaz – Product Manager, Android Developer
If your app isn’t built to adapt, you’re missing out on the opportunity to reach a giant swath of users across 500 million devices! At Google I/O this year, we are exploring how adaptive development isn’t just a good idea, but essential to building apps that shine across the expanding Android device ecosystem. This is your guide to meeting users wherever they are, with experiences that are perfectly tailored to their needs.
The advantage of building adaptive
In today's multi-device world, users expect their favorite applications to work flawlessly and intuitively, whether they're on a smartphone, tablet, or Chromebook. This expectation for seamless experiences isn't just about convenience; it's an important factor for user engagement and retention.
For example, users of entertainment apps (including Prime Video, Netflix, and Hulu) who use both phone and tablet spend almost 200% more time in-app (nearly 3x the engagement) than phone-only users in the US*.
Peacock, NBCUniversal’s streaming service, has seen a trend of users moving between mobile and large screens, and building adaptively enables a single build to work across the different form factors.
“This allows Peacock to have more time to innovate faster and deliver more value to its customers.”
– Diego Valente, Head of Mobile, Peacock and Global Streaming
Adaptive Android development offers the strategic solution, enabling apps to perform effectively across an expanding array of devices and contexts through intelligent design choices that emphasize code reuse and scalability. With Android's continuous growth into new form factors and upcoming enhancements such as desktop windowing and connected displays in Android 16, an app's ability to seamlessly adapt to different screen sizes is becoming increasingly crucial for retaining users and staying competitive.
Beyond direct user benefits, designing adaptively also translates to increased visibility. The Google Play Store actively helps promote developers whose apps excel on different form factors. If your application delivers a great experience on tablets or is excellent on ChromeOS, users on those devices will have an easier time discovering your app. This creates a win-win situation: better quality apps for users and a broader audience for you.
Latest in adaptive Android development from Google I/O
To help you more effectively build compelling adaptive experiences, we shared several key updates at I/O this year.
Build for the expanding Android device ecosystem
Your mobile apps can now reach users beyond phones on over 500 million active devices, including foldables, tablets, Chromebooks, and even compatible cars, with minimal changes. Android 16 introduces significant advancements in desktop windowing for a true desktop-like experience on large screens and when devices are connected to external displays. And, Android XR is opening a new dimension, allowing your existing mobile apps to be available in immersive virtual environments.
The mindset shift to Adaptive
With the expanding Android device ecosystem, adaptive app development is a fundamental strategy. It's about how the same mobile app runs well across phones, foldables, tablets, Chromebooks, connected displays, XR, and cars, laying a strong foundation for future devices and differentiating for specific form factors. You don't need to rebuild your app for each form factor; but rather make small, iterative changes, as needed, when needed. Embracing this adaptive mindset today isn't just about keeping pace; it's about leading the charge in delivering exceptional user experiences across the entire Android ecosystem.
Leverage powerful tools and libraries to build adaptive apps:
Compose Adaptive Layouts library: This library makes adaptive development easier by allowing your app code to fit into canonical layout patterns such as list-detail and supporting pane, which automatically reflow as your app is resized, flipped, or folded. In the 1.1 release, we introduced pane expansion, allowing users to resize panes. The Socialite demo app showcased how one codebase using this library can adapt across six form factors. New adaptation strategies like "Levitate" (elevating a pane, e.g., into a dialog or bottom sheet) and "Reflow" (reorganizing panes on the same level) were also announced in 1.2 (alpha). For XR, component overrides can automatically spatialize UI elements.
Jetpack Navigation 3 (Alpha): This new navigation library simplifies defining user journeys across screens with less boilerplate code, especially for multi-pane layouts in Compose. It helps handle scenarios where list and detail panes might be separate destinations on smaller screens but shown together on larger ones. Check out the new Jetpack Navigation library in alpha.
Jetpack Compose input enhancements: Compose's layered architecture, strong input support, and single location for layout logic simplify creating adaptive UIs. Upcoming in Compose 1.9 are right-click context menus and enhanced trackpad/mouse functionality.
Window Size Classes: Use window size classes for top-level layout decisions. androidx.window 1.5 introduces two new width size classes – "large" (1200dp to 1600dp) and "extra-large" (1600dp and larger) – providing more granular breakpoints for large screens (see the sketch after this list). This helps in deciding when to expand navigation rails or show three panes of content. Support for these new breakpoints was also announced in the Compose adaptive layouts library 1.2 alpha, along with design guidance.
Compose previews: Get quick feedback by visualizing your layouts across a wide variety of screen sizes and aspect ratios. You can also specify different devices by name to preview your UI on their respective sizes and with their inset values.
Testing adaptive layouts: Validating your adaptive layouts is crucial and Android Studio offers various tools for testing – including previews for different sizes and aspect ratios, a resizable emulator to test across different screen sizes with a single AVD, screenshot tests, and instrumental behavior tests. And with Journeys with Gemini in Android Studio, you can define tests using natural language for even more robust testing across different window sizes.
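As referenced above, here is a minimal sketch of a top-level layout decision based on window size classes. It assumes currentWindowAdaptiveInfo() from the Compose adaptive libraries and the minWidthDp property from androidx.window 1.5; the three layout composables are placeholders for your own UI:

@Composable
fun HomeScreen() {
    val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
    when {
        // Extra-large windows (1600dp and wider): show three panes of content.
        sizeClass.minWidthDp >= 1600 -> ThreePaneLayout()
        // Large windows (1200dp to 1600dp): expanded navigation rail plus two panes.
        sizeClass.minWidthDp >= 1200 -> TwoPaneLayoutWithRail()
        // Compact, medium, and expanded windows keep the existing layouts.
        else -> SinglePaneLayout()
    }
}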
Ensuring app availability across devices
Avoid unnecessarily declaring required features (like specific cameras or GPS) in your manifest, as this can prevent your app from appearing in the Play Store on devices that lack those specific hardware components but could otherwise run your app perfectly.
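One common pattern is to mark such hardware as optional in your manifest (android:required="false" on the <uses-feature> entry) and gate the functionality at runtime instead. A small sketch, using the camera as an example:

// Returns true only when the device actually has a camera, so the app can
// hide or disable capture features instead of being filtered out of Play.
fun hasCamera(context: Context): Boolean =
    context.packageManager.hasSystemFeature(PackageManager.FEATURE_CAMERA_ANY)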
Handling different input methods
Remember to handle various input methods like touch, keyboard, and mouse, especially with Chromebook detachables and connected displays.
Prepare for orientation and resizability API changes in Android 16
Beginning in Android 16, for apps targeting SDK 36, manifest and runtime restrictions on orientation, resizability, and aspect ratio will be ignored on displays that are at least 600dp in both dimensions. To meet user expectations, your apps will need layouts that work for both portrait and landscape windows, and support resizing at runtime. There's a temporary opt-out manifest flag at both the application and activity level to delay these changes until targetSdk 37, and these changes currently do not apply to apps categorized as "Games". Learn more about these API changes.
Adaptive considerations for games
Games need to be adaptive too, and Unity 6 will add enhanced support for configuration handling, including APIs for screenshots, aspect ratio, and density. Success stories like Asphalt Legends Unite show significant user retention increases on foldables after implementing adaptive features.
Start building adaptive today
Now is the time to elevate your Android apps, making them intuitively responsive across form factors. With the latest tools and updates we’re introducing, you have the power to build experiences that seamlessly flow across all devices, from foldables to cars and beyond. Implementing these strategies will allow you to expand your reach and delight users across the Android ecosystem.
Get inspired by the “Adaptive Android development makes your app shine across devices” talk, and explore all the resources you’ll need to start your journey at developer.android.com/adaptive-apps!
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
Posted by Paul Feng, VP of Product Management, Google Play
At Google Play, we're dedicated to helping people discover experiences they'll love, while empowering developers like you to bring your ideas to life and build successful businesses.
At this year’s Google I/O, we unveiled the latest ways we’re empowering your success with new tools that provide robust testing and actionable insights. We also showcased how we’re continuing to build a content-rich Play Store that fosters repeat engagement alongside new subscription capabilities that streamline checkout and reduce churn.
Check out all the exciting developments from I/O below and learn how they'll help you grow your business on Google Play.
Helping you succeed every step of the way
Last month, we introduced our latest Play Console updates focused on improving quality and performance. A redesigned app dashboard centered around four developer objectives (Test and release, Monitor and improve, Grow users, Monetize) and new Android vitals metrics offer quick insights and actionable suggestions to proactively improve the user experience.
Get more actionable insights with new Play Console overview pages
Building on these updates, we've launched dedicated overview pages for two developer objectives: Test and release and Monitor and improve. These new pages bring together more objective-related metrics, relevant features, and a "Take action" section with contextual, dynamic advice. Overview pages for Grow and Monetize will be coming soon.
Halt fully-rolled out releases when needed
Historically, a release at 100% live meant there was no turning back, leaving users stuck with a flawed version until a new update rolled out. Soon, you'll be able to halt fully-live releases through Play Console and the Publishing API to stop the distribution of problematic versions to new users.
You'll soon be able to halt fully live releases directly from Play Console and the Publishing API, stopping the distribution of problematic versions to new users.
Optimize your store listings with better management tools and metrics
We launched two tools to enhance your store listings. The asset library makes it easy to upload, edit, and view your visual assets. Upload them from Google Drive, organize with tags, and crop for repurposing. And with new open metrics, you gain deeper insights into listing performance so you can better understand how they attract, engage, and re-engage users.
Stay ahead of threats with the Play Integrity API
We're committed to robust security and preventing abuse so you can thrive on Play’s trusted platform. The Play Integrity API continuously evolves to combat emerging threats, with these recent enhancements:
Stronger abuse detection for all developers that leverages the latest Android hardware-security with no developer effort required.
Device security update checks to safeguard your app’s sensitive actions like transfers or data access.
Public beta for device recall which enables you to detect if a device is being reused for abuse or repeated actions, even after a device reset. You can express interest in this beta.
Unlocking more discovery and engagement for your apps and their content
Last year, we shared our vision for a content-rich Google Play that has already delivered strong results. Year-over-year, Apps Home has seen over a 25% increase in average monthly visitors with apps seeing a 10% growth in acquisitions and double-digit growth in app spend for those monetizing on Google Play. Building on that vision, we're introducing even more updates to elevate your discovery and engagement, both on and off the store.
For example, curated spaces, launched last year, celebrate seasonal interests like football (soccer) in Brazil and cricket in India, and evergreen interests like comics in Japan. By adding daily content—match highlights, promotions, and editorial articles directly on the Apps Home—these spaces foster discovery and engagement. Curated spaces are a hit with over 920,000 highly engaged users in Japan returning to the comics space monthly. Building on this momentum, we are expanding to more locations and categories this year.
Our curated spaces add daily content to foster repeat discovery and engagement.
We're launching new topic browse pages that feature timely, relevant, and visually engaging content. Users can find them throughout the Store, including Apps Home, store listing pages, and search. These pages debut this month in the US with Media & Entertainment, showcasing over 100,000 shows, movies, and select sports. More localized topic pages will roll out globally later this year.
New topic browse pages for media and entertainment are rolling out this month in the US.
We’re expanding Where to Watch to more markets, including the UK, Korea, Indonesia, and Mexico, to help users find and deep-link directly into their subscribed apps for movies and TV. Since launching in the US in November 2024, we've seen promising results: People who view app content through Where to Watch return to Play more frequently and increase their content search activity by 30%.
We're also enhancing how your content is displayed on the Play Store. Starting this July, all app developers can add a hero content carousel and a YouTube playlist carousel to their store listings. These formats will help showcase your best content and drive greater user engagement and discovery.
For apps best experienced through sound, we're launching audio samples on the Apps Home. A simple tap offers users a brief escape into your audio content. In early testing, audio samples made users 3x more likely to install or open an app! This feature is now available for all Health & Wellness app developers with users in the US, with more categories and markets coming soon. You can express your interest in promoting audio content.
We're enhancing how your content is displayed on the Play Store,
offering new ways to showcase your app and drive user engagement.
Helping you take advantage of deeper engagement on Play, on and off the Store
Last year, we introduced Engage SDK, a unified solution to deliver personalized content and guide users to relevant in-app experiences. Integrating it unlocks surfaces like Collections, our immersive full-screen experience bringing content directly to the user's home screen.
We're rolling out updates to expand your content’s reach even further:
Engage SDK content is coming to the Play Store this summer, in addition to existing spaces like Collections and Entertainment Space on select Android tablets.
New content categories are now supported, starting today with Travel.
Collections are rolling out globally to Google Play markets starting today, including Brazil, India, Indonesia, Japan, and Mexico.
Users can find timely itineraries and recent searches, all in one convenient place.
Maximizing your revenue with subscriptions enhancements
With over a quarter-billion subscriptions, Google Play is one of the world's largest subscriptions platforms. We're committed to helping you turn engaged users into revenue growth by continually enhancing our tools to meet evolving customer needs.
To streamline your purchase flow, we’re introducing multi-product checkout for subscriptions. This lets you sell subscription add-ons alongside base subscriptions, all under a single, aligned payment schedule. Users get a simplified experience with one price and one transaction, while you gain more control over how subscribers upgrade, downgrade, or manage their add-ons.
You can now sell base subscriptions and add-ons together
in a single, streamlined transaction.
To help you retain more of your subscribers, we’re now showcasing subscription benefits in more places across Play – including the Subscriptions Center, in reminder emails, and during purchase and cancellation flows. This increased visibility has already reduced voluntary churn by 2%. Be sure to enter your subscription benefits in Play Console so you can leverage this powerful new capability.
To help reduce voluntary churn, we’re showcasing your subscription benefits across Play.
Reducing involuntary churn is a key factor in optimizing your revenue. When payment methods unexpectedly decline, users might unintentionally cancel. Now, instead of immediate cancellation, you can choose a grace period (up to 30 days) or an account hold (up to 60 days). Developers who increased the decline recovery period – from 30 to 60 days – saw an average 10% reduction in involuntary churn for renewals.
On top of this, we're expanding our commitment to get more buyers ready for purchases throughout their entire journey. This includes prompting users to set up payment methods and verification right at device setup. After setup, we've integrated prompts into highly visible areas like the Play and Google account menus. And as always, we’re continuously enabling payments in more markets and expanding payment options. Plus, our AI models now help optimize in-app transactions by suggesting the right payment method at the right time, and we're bringing buyers back with effective cart abandonment reminders.
Grow your business on Google Play
Our latest updates reinforce our commitment to fostering a thriving Google Play ecosystem. From enhanced discovery and robust tools to new monetization avenues, we're empowering you to innovate and grow. We're excited for the future we're building together and encourage you to use these new capabilities to create even more impactful experiences. Thank you for being an essential part of the Google Play community.
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
This year, we’re excited to introduce Wear OS 6: the most power-efficient and expressive version of Wear OS yet.
Wear OS 6 introduces the new design system we call Material 3 Expressive. It features a major refresh with visual and motion components designed to give users an experience with more personalization. The new design offers a great level of expression to meet user demand for experiences that are modern, relevant, and distinct. Material 3 Expressive is coming to Wear OS, Android, and all your favorite Google apps on these devices later this year.
The good news is that you don’t need to compromise battery for beauty: thanks to Wear OS platform optimizations, watches updating from Wear OS 5 to Wear OS 6 can see up to 10% improvement in battery life.1
Wear OS 6 developer preview
Today we’re releasing the Developer Preview of Wear OS 6, the next version of Google’s smartwatch platform, based on Android 16.
Wear OS 6 brings a number of developer-facing changes, such as refining the always-on display experience. Check out what’s changed and try the new Wear OS 6 emulator to test your app for compatibility with the new platform version.
Material 3 Expressive on Wear OS
Some examples of Material 3 Expressive on Wear OS experiences
Material 3 Expressive for the watch is fully optimized for the round display. We recommend developers embrace the new design system in their apps and tiles. To help you adopt Material 3 Expressive in your app, we have begun releasing new design guidance for Wear OS, along with corresponding Figma design kits.
As a developer, you can access Material 3 Expressive on Wear OS using two new Jetpack libraries: Wear Compose Material 3 and Wear ProtoLayout Material 3.
These two libraries provide implementations of the component catalog that adhere to the Material 3 Expressive design language.
Make it personal with richer color schemes using themes
Dynamic color theme updates colors of apps and Tiles
The Wear Compose Material 3 and Wear Protolayout Material 3 libraries provide updated and extended color schemes, typography, and shapes to bring both depth and variety to your designs. Additionally, your tiles now align with the system font by default (on Wear OS 6+ devices), offering a more cohesive experience on the watch.
Both libraries introduce dynamic color theming, which automatically generates a color theme for your app or tile to match the colors of the watch face of Pixel watches.
Make it more glanceable with new tile components
Tiles now support a new framework and a set of components that embrace the watch's circular form factor. These components make tiles more consistent and glanceable, so users can more easily take swift action on the information included in them.
We’ve introduced a 3-slot tile layout to improve visual consistency in the Tiles carousel. This layout includes a title slot, a main content slot, and a bottom slot, designed to work across a range of different screen sizes:
Some examples of Tiles with the 3-slot tile layout.
Highlight user actions and key information with components optimized for round screen
The new Wear OS Material 3 components automatically adapt to larger screen sizes, building on the Large Display support added as part of Wear OS 5. Additionally, components such as Buttons and Lists support shape morphing on apps.
The following sections highlight some of the most exciting changes to these components.
Embrace the round screen with the Edge Hugging Button
We introduced a new EdgeButton for apps and tiles with an iconic design pattern that maximizes the space within the circular form factor, hugs the edge of the screen, and comes in 4 standard sizes.
Screenshot representing an EdgeButton in a scrollable screen.
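As a rough illustration, placing an EdgeButton at the bottom of a screen might look like the following; the buttonSize parameter and EdgeButtonSize values are taken from the Wear Compose Material 3 beta and may evolve, and submitSelection() is a placeholder:

EdgeButton(
    onClick = { submitSelection() },
    buttonSize = EdgeButtonSize.Medium, // one of the four standard sizes
) {
    Text("Confirm")
}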
Fluid navigation through lists using new indicators
The new TransformingLazyColumn from the Foundation library makes expressive motion easy, with content that fluidly traces the edges of the display. Developers can customize the collapsing behavior of the list when scrolling to the top, bottom, and sides of the screen. For example, components like Cards can scale down as they are closer to the top of the screen.
TransformingLazyColumn allows content to collapse and change in size when approaching the edge of the screens
Material 3 Expressive also includes a ScrollIndicator that features a new visual and motion design to make it easier for users to visualize their progress through a list. The ScrollIndicator is displayed by default when you use a TransformingLazyColumn and ScreenScaffold.
ScrollIndicator
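Putting these together, a scrolling screen might look roughly like the sketch below. It assumes ScreenScaffold passes content padding to its content lambda and that rememberTransformingLazyColumnState() is available, both from the current Wear Compose betas:

val listState = rememberTransformingLazyColumnState()

// ScreenScaffold wires up the ScrollIndicator for this list automatically.
ScreenScaffold(scrollState = listState) { contentPadding ->
    TransformingLazyColumn(
        state = listState,
        contentPadding = contentPadding,
    ) {
        items(count = 20) { index ->
            // Items can scale and morph as they approach the screen edges.
            Text("Item $index")
        }
    }
}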
Lastly, you can now use segments with the new ProgressIndicator, which is available as a full-screen component for apps and as a small-size component for both apps and tiles.
Example of a full-screen ProgressIndicator
To learn more about the new features and see the full list of updates, see the release notes of the latest beta release of the Wear Compose and Wear Protolayout libraries. Check out the migration guidance for apps and tiles on how to upgrade your existing apps, or try one of our codelabs if you want to start developing using Material 3 Expressive design.
Watch Faces
With Wear OS 6 we are launching updates for watch face developers:
New options for customizing the appearance of your watch face using version 4 of Watch Face Format, such as animated state transitions from ambient to interactive and photo watch faces.
Look for more information about the general availability of Wear OS 6 later this year.
Library updates
ProtoLayout
Since our last major release, we've improved capabilities and the developer experience of the Tiles and ProtoLayout libraries to address feedback we received from developers. Some of these enhancements include:
Developers can now write more idiomatic Kotlin, with APIs refined to better align with Jetpack Compose, including type-safe builders and an improved modifier syntax.
The example below shows how to display a layout with text on a Tile using the new enhancements:
// returns a LayoutElement for use in onTileRequest()
materialScope(context, requestParams.deviceConfiguration) {
    primaryLayout(
        mainSlot = {
            text(
                text = "Hello, World!".layoutString,
                typography = BODY_LARGE,
            )
        }
    )
}
The CredentialManager API is now available on Wear OS, starting with Google Pixel Watch devices running Wear OS 5.1. It introduces passkeys to Wear OS with a platform-standard authentication UI that is consistent with the experience on mobile.
The Credential Manager Jetpack library provides developers with a unified API that simplifies and centralizes their authentication implementation. Developers with an existing implementation on another form factor can use the same CredentialManager code, and most of the same supporting code to fulfill their Wear OS authentication workflow.
Credential Manager provides integration points for passkeys, passwords, and Sign in With Google, while also allowing you to keep your other authentication solutions as backups.
Users will benefit from a consistent, platform-standard authentication UI, the introduction of passkeys and other passwordless authentication methods, and the ability to authenticate without their phone nearby.
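For example, a Wear OS sign-in flow can reuse the same Jetpack Credential Manager calls as on mobile. A minimal sketch, where requestJson is the passkey challenge from your server and the validate* functions are placeholders for your own backend calls:

suspend fun signIn(activity: Activity, requestJson: String) {
    val credentialManager = CredentialManager.create(activity)

    // Ask for a passkey first, with a saved password as a fallback.
    val request = GetCredentialRequest(
        credentialOptions = listOf(
            GetPublicKeyCredentialOption(requestJson = requestJson),
            GetPasswordOption(),
        )
    )

    val result = credentialManager.getCredential(activity, request)
    when (val credential = result.credential) {
        is PublicKeyCredential -> validatePasskeyOnServer(credential.authenticationResponseJson)
        is PasswordCredential -> validatePasswordOnServer(credential.id, credential.password)
    }
}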
Devices that run Wear OS 5.1 or later support enhanced media controls. Users who listen to media content on phones and watches can now benefit from the following new media control features on their watch:
They can fast-forward and rewind while listening to podcasts.
They can access the playlist and controls such as shuffle, like, and repeat through a new menu.
Developers with an existing implementation of action buttons and playlists can benefit from this feature without additional effort. Check out how users will get more controls from your media app on a Google Pixel Watch device.
Start building for Wear OS 6 now
With these updates, there’s never been a better time to develop an app on Wear OS. These technical resources are a great place to learn more how to get started:
Earlier this year, we expanded our smartwatch offerings with Galaxy Watch for Kids, a unique, phone-free experience designed specifically for children. This launch gives families a new way to stay connected, allowing children to explore Wear OS independently with a dedicated smartwatch. Consult our developer guidance to create a Wear OS app for kids.
We’re looking forward to seeing the experiences that you build on Wear OS!
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
Posted by Rebecca Franks – Developer Relations Engineer
The Android bot is a beloved mascot for Android users and developers, and previous versions of the bot builder have been very popular, so we decided that this year we’d rebuild it from the ground up using the latest technology, backed by Gemini. Today we are releasing a new open source app, Androidify, for learning how to build powerful AI-driven experiences on Android using the latest technologies such as Jetpack Compose, Gemini through Firebase, CameraX, and Navigation 3.
Androidify app demo
Here’s an example of the app running on the device, showcasing converting a photo to an Android bot that represents my likeness:
Under the hood
The app combines a variety of different Google technologies, such as:
Gemini API - through Firebase AI Logic SDK, for accessing the underlying Imagen and Gemini models.
Jetpack Compose - for building the UI with delightful animations and making the app adapt to different screen sizes.
Navigation 3 - the latest navigation library for building up Navigation graphs with Compose.
CameraX Compose and Media3 Compose - for building up a custom camera with custom UI controls (rear camera support, zoom support, tap-to-focus) and playing the promotional video.
This sample app is currently using a standard Imagen model, but we've been working on a fine-tuned model that's trained specifically on all of the pieces that make the Android bot cute and fun; we'll share that version later this year. In the meantime, don't be surprised if the sample app puts out some interesting looking examples!
How does the Androidify app work?
The app leverages our best practices for Architecture, Testing, and UI to showcase a real world, modern AI application on device.
Androidify app flow chart detailing how the app works with AI
AI in Androidify with Gemini and ML Kit
The Androidify app uses the Gemini models in a multitude of ways to enrich the app experience, all powered by the Firebase AI Logic SDK. The app uses Gemini 2.5 Flash and Imagen 3 under the hood:
Image validation: We ensure that the captured image contains sufficient information, such as a clearly focused person, and assess it for safety. This feature uses the multimodal capabilities of the Gemini API by giving it a prompt and an image at the same time:
val response = generativeModel.generateContent(
    content {
        text(prompt)
        image(image)
    },
)
Text prompt validation: If the user opts for text input instead of image, we use Gemini 2.5 Flash to ensure the text contains a sufficiently descriptive prompt to generate a bot.
Image captioning: Once we’re sure the image has enough information, we use Gemini 2.5 Flash to perform image captioning. We ask Gemini to be as descriptive as possible, focusing on the clothing and its colors.
“Help me write” feature: Similar to an “I’m feeling lucky” type feature, “Help me write” uses Gemini 2.5 Flash to create a random description of the clothing and hairstyle of a bot.
Image generation from the generated prompt: As the final step, Imagen generates the image from the generated prompt and the selected skin tone of the bot.
The app also uses the ML Kit pose detection to detect a person in the viewfinder and enable the capture button when a person is detected, as well as adding fun indicators around the content to indicate detection.
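A rough sketch of that gating logic with ML Kit's pose detector; enableCaptureButton is a placeholder for the app's own UI state:

// Stream mode keeps latency low for live camera frames.
val poseDetector = PoseDetection.getClient(
    PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
        .build()
)

fun analyzeFrame(image: InputImage, enableCaptureButton: (Boolean) -> Unit) {
    poseDetector.process(image)
        .addOnSuccessListener { pose ->
            // Only enable capture when body landmarks are actually detected.
            enableCaptureButton(pose.allPoseLandmarks.isNotEmpty())
        }
        .addOnFailureListener { enableCaptureButton(false) }
}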
The user interface of Androidify is built using Jetpack Compose, the modern UI toolkit that simplifies and accelerates UI development on Android.
Delightful details with the UI
The app uses Material 3 Expressive, the latest alpha release that makes your apps more premium, desirable, and engaging. It provides delightful bits of UI out-of-the-box, like new shapes, componentry, and using the MotionScheme variables wherever a motion spec is needed.
MaterialShapes are used in various locations. These are a preset list of shapes that allow for easy morphing between each other—for example, the cute cookie shape for the camera capture button:
Camera button with a MaterialShapes.Cookie9Sided shape
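For instance, clipping a capture button to that shape might look like this; it assumes the MaterialShapes catalog and toShape() helper from the Material 3 expressive alpha, and takePhoto() and the drawable are placeholders:

IconButton(
    onClick = { takePhoto() },
    modifier = Modifier
        .size(72.dp)
        .clip(MaterialShapes.Cookie9Sided.toShape()), // cookie-shaped capture button
) {
    Icon(painterResource(R.drawable.ic_camera), contentDescription = "Take photo")
}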
Beyond using the standard Material components, Androidify also features custom composables and delightful transitions tailored to the specific needs of the app:
There are plenty of shared element transitions across the app—for example, a morphing shape shared element transition is performed between the “take a photo” button and the camera surface.
Custom enter transitions for the ResultsScreen with the usage of marquee modifiers.
Fun color splash animation as a transition between screens.
Animating gradient buttons for the AI-powered actions.
Androidify is designed to look great and function seamlessly across candy bar phones, foldables, and tablets. The general goal of developing adaptive apps is to avoid reimplementing the same app multiple times on each form factor by extracting out reusable composables, and leveraging APIs like WindowSizeClass to determine what kind of layout to display.
Various adaptive layouts in the app
For Androidify, we only needed to leverage the width window size class. Combining this with different layout mechanisms, we were able to reuse or extend the composables to cater to the multitude of different device sizes and capabilities.
Responsive layouts: The CreationScreen demonstrates adaptive design. It uses helper functions like isAtLeastMedium() to detect window size categories and adjust its layout accordingly. On larger windows, the image/prompt area and color picker might sit side-by-side in a Row, while on smaller windows, the color picker is accessed via a ModalBottomSheet. This pattern, called “supporting pane”, highlights the supporting dependencies between the main content and the color picker.
Foldable support: The app actively checks for foldable device features. The camera screen uses WindowInfoTracker to get FoldingFeature information to adapt to different features by optimizing the layout for tabletop posture.
Rear display: Support for devices with multiple displays is included via the RearCameraUseCase, allowing for the device camera preview to be shown on the external screen when the device is unfolded (so the main content is usually displayed on the internal screen).
Using window size classes, coupled with creating a custom @LargeScreensPreview annotation, helps achieve unique and useful UIs across the spectrum of device sizes and window sizes.
CameraX and Media3 Compose
To allow users to base their bots on photos, Androidify integrates CameraX, the Jetpack library that makes camera app development easier.
The app uses a custom CameraLayout composable that supports the layout of the typical composables that a camera preview screen would include— for example, zoom buttons, a capture button, and a flip camera button. This layout adapts to different device sizes and more advanced use cases, like the tabletop mode and rear-camera display. For the actual rendering of the camera preview, it uses the new CameraXViewfinder that is part of the camerax-compose artifact.
CameraLayout composable that takes care of different device configurations, such as table top mode
The app also integrates with Media3 APIs to load an instructional video for showing how to get the best bot from a prompt or image. Using the new media3-ui-compose artifact, we can easily add a VideoPlayer into the app:
@Composable
private fun VideoPlayer(modifier: Modifier = Modifier) {
    val context = LocalContext.current
    var player by remember { mutableStateOf<Player?>(null) }
    LifecycleStartEffect(Unit) {
        player = ExoPlayer.Builder(context).build().apply {
            setMediaItem(MediaItem.fromUri(Constants.PROMO_VIDEO))
            repeatMode = Player.REPEAT_MODE_ONE
            prepare()
        }
        onStopOrDispose {
            player?.release()
            player = null
        }
    }
    Box(
        modifier
            .background(MaterialTheme.colorScheme.surfaceContainerLowest),
    ) {
        player?.let { currentPlayer ->
            PlayerSurface(currentPlayer, surfaceType = SURFACE_TYPE_TEXTURE_VIEW)
        }
    }
}
Using the new onLayoutRectChanged modifier, we also listen for whether the composable is completely visible, playing or pausing the video based on this information:
var videoFullyOnScreen by remember { mutableStateOf(false) }

LaunchedEffect(videoFullyOnScreen) {
    if (videoFullyOnScreen) currentPlayer.play() else currentPlayer.pause()
}

// We add this onto the player composable to determine if the video composable
// is visible, mutating the videoFullyOnScreen variable, which then toggles the
// player state.
Modifier.onVisibilityChanged(
    containerWidth = LocalView.current.width,
    containerHeight = LocalView.current.height,
) { fullyVisible -> videoFullyOnScreen = fullyVisible }

// A simple version of visibility-changed detection.
fun Modifier.onVisibilityChanged(
    containerWidth: Int,
    containerHeight: Int,
    onChanged: (visible: Boolean) -> Unit,
) = this then Modifier.onLayoutRectChanged(100, 0) { layoutBounds ->
    onChanged(
        layoutBounds.boundsInRoot.top > 0 &&
            layoutBounds.boundsInRoot.bottom < containerHeight &&
            layoutBounds.boundsInRoot.left > 0 &&
            layoutBounds.boundsInRoot.right < containerWidth,
    )
}
Additionally, using rememberPlayPauseButtonState, we add a layer on top of the player to offer a play/pause button on the video itself:
val playPauseButtonState = rememberPlayPauseButtonState(currentPlayer)

OutlinedIconButton(
    onClick = playPauseButtonState::onClick,
    enabled = playPauseButtonState.isEnabled,
) {
    val icon =
        if (playPauseButtonState.showPlay) R.drawable.play else R.drawable.pause
    val contentDescription =
        if (playPauseButtonState.showPlay) R.string.play else R.string.pause
    Icon(
        painterResource(icon),
        stringResource(contentDescription),
    )
}
Screen transitions are handled using the new Jetpack Navigation 3 library androidx.navigation3. The MainNavigation composable defines the different destinations (Home, Camera, Creation, About) and displays the content associated with each destination using NavDisplay. You get full control over your back stack, and navigating to and from destinations is as simple as adding and removing items from a list.
Notably, Navigation 3 exposes a new composition local, LocalNavAnimatedContentScope, to easily integrate your shared element transitions without needing to keep track of the scope yourself. By default, Navigation 3 also integrates with predictive back, providing delightful back experiences when navigating between screens, as seen in this prior shared element transition:
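As a rough sketch of that pattern (the destination objects and screen composables here are illustrative, not Androidify's actual ones, and the API shape follows the Navigation 3 alpha), the back stack is a snapshot-state list that NavDisplay renders:
import androidx.compose.runtime.Composable
import androidx.compose.runtime.mutableStateListOf
import androidx.compose.runtime.remember
import androidx.navigation3.runtime.entry
import androidx.navigation3.runtime.entryProvider
import androidx.navigation3.ui.NavDisplay

data object Home
data object Camera

// Hypothetical screens standing in for the app's destinations.
@Composable
fun HomeScreen(onTakePhoto: () -> Unit) { /* ... */ }

@Composable
fun CameraScreen() { /* ... */ }

@Composable
fun MainNavigation() {
    // Navigating is just adding to or removing from this list.
    val backStack = remember { mutableStateListOf<Any>(Home) }
    NavDisplay(
        backStack = backStack,
        onBack = { backStack.removeLastOrNull() },
        entryProvider = entryProvider {
            entry<Home> { HomeScreen(onTakePhoto = { backStack.add(Camera) }) }
            entry<Camera> { CameraScreen() }
        },
    )
}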
By combining the declarative power of Jetpack Compose, the camera capabilities of CameraX, the intelligent features of Gemini, and thoughtful adaptive design, Androidify is a personalized avatar creation experience that feels right at home on any Android device. You can find the full code sample at github.com/android/androidify where you can see the app in action and be inspired to build your own AI-powered app experiences.
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
Posted by Shobana Radhakrishnan - Senior Director of Engineering, Google TV, and Paul Lammertsma - Developer Relations Engineer, Android
Over the past year, Google TV and Android TV achieved over 270 million monthly active devices, establishing one of the largest smart TV OS footprints. Building on this momentum, we are excited to share new platform features and developer tools designed to help you increase app engagement with our expanding user base.
Google TV with Gemini capabilities
Earlier this year, we announced that we’ll bring Gemini capabilities to Google TV, so users can speak more naturally and conversationally to find what to watch and get answers to complex questions.
After each movie or show search, our new voice assistant will suggest relevant content from your apps, significantly increasing the discoverability of your content.
Plus, users can easily ask questions about topics they're curious about and receive insightful answers with supporting videos.
We’re so excited to bring this helpful and delightful experience to users this fall.
Video Discovery API
Today, we’ve also opened partner enrollment for our Video Discovery API.
Video Discovery optimizes Resumption, Entitlements, and Recommendations across all Google TV form factors to enhance the end-user experience and boost app engagement.
Resumption: Partners can now easily display a user's paused video within the 'Continue Watching' row from the home screen. This row is a prime location that drives 60% of all user interactions on Google TV.
Entitlements: Video Discovery streamlines entitlement management, which matches app content to user eligibility. Users appreciate this because they can enjoy personalized recommendations without needing to manually update all their subscription details. This allows partners to connect with users across multiple discovery points on Google TV.
Recommendations: Video Discovery even highlights personalized content recommendations based on content that users watched inside apps.
Partners can begin incorporating the Video Discovery API today, starting with resumption and entitlement integrations. Check out g.co/tv/vda to learn more.
Jetpack Compose for TV
Last year, we launched Compose for TV 1.0 beta, which lets you build beautiful, adaptive UIs across Android, including Android TV OS.
Now, Compose for TV 1.0 is stable, and expands on the core and Material Compose libraries. We've also seen the latest release of Compose significantly improve app startup in our internal benchmarking mobile sample, with roughly a 20% improvement compared with the March 2024 release. Because Compose for TV builds upon these libraries, apps built with Compose for TV should also see better app startup times.
New to building with Compose, and not sure where to start? Our updated Jetcaster audio streaming app sample demonstrates how to use Compose across form factors. It includes a dedicated module for playing podcasts on TV by combining separate view models with shared business logic.
Focus Management Codelab
We understand that focus management can be challenging at times. That’s why we’ve published a codelab that reviews how to set initial focus, prepare for unexpected focus traversal, and efficiently restore focus.
Memory Optimization Guide
We’ve released a comprehensive guide on memory optimization, including memory targets for low-RAM devices. Combined with Android Studio's powerful memory profiler, this helps you understand when your app exceeds those limits and why.
In-App Ratings and Reviews
Moreover, app ratings and reviews are essential for developers, offering quantitative and qualitative feedback on user experiences. Now, we're extending the In-App Ratings and Reviews API to TV to allow developers to prompt users for ratings and reviews directly from Google TV. Check out our recent blog post detailing how to easily integrate the In-App Ratings and Reviews API.
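If you haven't integrated it before, the flow is brief. Here is a minimal sketch using the Play In-App Review API (error handling omitted; the Play Store decides whether the dialog actually appears):
import android.app.Activity
import com.google.android.play.core.review.ReviewManagerFactory

// Requests review flow info, then launches the rating dialog when available.
fun promptForReview(activity: Activity) {
    val manager = ReviewManagerFactory.create(activity)
    manager.requestReviewFlow().addOnCompleteListener { request ->
        if (request.isSuccessful) {
            manager.launchReviewFlow(activity, request.result)
        }
    }
}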
Android 16 for TV
We're excited to announce the upcoming release of Android 16 for TV. Developers can begin using the latest beta today. With Android 16, TV developers can access several great features:
The MediaQualityManager allows developers to take control over selecting picture profiles.
Platform support for the Eclipsa Audio codec enables creators to use the IAMF spatial audio format. For ExoPlayer support that includes previous platform versions, see ExoPlayer's IAMF decoder module.
There are various improvements to media playback speed, consistency and efficiency, as well as HDMI-CEC reliability and performance optimizations for 64-bit kernels.
Additional APIs and user experiences from Android 16 are also available. We invite you to explore the complete list from the Android 16 for TV release notes.
What's next
We're incredibly excited to see how these announcements will optimize your development journey, and look forward to seeing the fantastic apps you'll launch on the platform!
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
Posted by Thomas Ezan – Developer Relations Engineer, Rebecca Franks – Developer Relations Engineer, and Avneet Singh – Product Manager
We’re bringing back Androidify later this year, this time powered by Google AI, so you can customize your very own Android bot and share your creativity with the world. Today, we’re releasing a new open source demo app for Androidify as a great example of how Google is using its Gemini AI models to enhance app experiences.
In this post, we'll dive into how the Androidify app uses Gemini models and Imagen via the Firebase AI Logic SDK, and we'll provide some insights learned along the way to help you incorporate Gemini and AI into your own projects. Read more about the Androidify demo app.
App flow
The overall app functions as follows, with various parts of it using Gemini and Firebase along the way:
Gemini and image validation
To get started with Androidify, take a photo or choose an image on your device. The app needs to make sure that the image you upload is suitable for creating an avatar.
Gemini 2.5 Flash via Firebase helps with this by verifying that the image contains a person, that the person is in focus, and assessing image safety, including whether the image contains abusive content.
val jsonSchema = Schema.obj(
    properties = mapOf("success" to Schema.boolean(), "error" to Schema.string()),
    optionalProperties = listOf("error"),
)

val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(
        modelName = "gemini-2.5-flash-preview-04-17",
        generationConfig = generationConfig {
            responseMimeType = "application/json"
            responseSchema = jsonSchema
        },
        safetySettings = listOf(
            SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.HATE_SPEECH, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.DANGEROUS_CONTENT, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.CIVIC_INTEGRITY, HarmBlockThreshold.LOW_AND_ABOVE),
        ),
    )

val response = generativeModel.generateContent(
    content {
        text("You are to analyze the provided image and determine if it is acceptable and appropriate based on specific criteria.... (more details see the full sample)")
        image(image)
    },
)

val jsonResponse = Json.parseToJsonElement(response.text!!)
val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true
val error = jsonResponse.jsonObject["error"]?.jsonPrimitive?.content
In the snippet above, we’re leveraging the structured output capabilities of the model by defining the schema of the response: we pass a Schema object via the responseSchema param in the generationConfig.
We want to validate that the image has enough information to generate a nice Android avatar, so we ask the model to return a JSON object with success = true/false and an optional error message explaining why the image doesn't have enough information.
Structured output is a powerful feature enabling smoother integration of LLMs into your app by controlling the format of their output, similar to an API response.
Image captioning with Gemini Flash
Once it's established that the image contains sufficient information to generate an Android avatar, it is captioned using Gemini 2.5 Flash with structured output.
val jsonSchema = Schema.obj(
    properties = mapOf(
        "success" to Schema.boolean(),
        "user_description" to Schema.string(),
    ),
    optionalProperties = listOf("user_description"),
)

val generativeModel = createGenerativeTextModel(jsonSchema)

val prompt = "You are to create a VERY detailed description of the main person in the given image. This description will be translated into a prompt for a generative image model..."

val response = generativeModel.generateContent(
    content {
        text(prompt)
        image(image)
    },
)

val jsonResponse = Json.parseToJsonElement(response.text!!)
val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true
val userDescription = jsonResponse.jsonObject["user_description"]?.jsonPrimitive?.content
The other option in the app is to start with a text prompt. You can enter in details about your accessories, hairstyle, and clothing, and let Imagen be a bit more creative.
Android generation via Imagen
We’ll use this detailed description of your image to enrich the prompt used for image generation. We’ll add extra details around what we would like to generate and include the bot color selection as part of this too, including the skin tone selected by the user.
val imagenPrompt = "A 3D rendered cartoonish Android mascot in a photorealistic style, the pose is relaxed and straightforward, facing directly forward [...] The bot looks as follows $userDescription [...]"
We then call the Imagen model to create the bot. Using this new prompt, we create a model and call generateImages:
// We supply our own fine-tuned model here, but you can use "imagen-3.0-generate-002".
val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI()).imagenModel(
    "imagen-3.0-generate-002",
    safetySettings =
        ImagenSafetySettings(
            ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
            personFilterLevel = ImagenPersonFilterLevel.ALLOW_ALL,
        ),
)

val response = generativeModel.generateImages(imagenPrompt)

val image = response.images.first().asBitmap()
And that’s it! The Imagen model generates a bitmap that we can display on the user’s screen.
Finetuning the Imagen model
The Imagen 3 model was fine-tuned using Low-Rank Adaptation (LoRA). LoRA is a fine-tuning technique designed to reduce the computational burden of training large models: instead of updating the entire model, LoRA adds smaller, trainable "adapters" that make small adjustments to the model's behavior. We ran a fine-tuning pipeline on the generally available Imagen 3 model with Android bot assets of different color combinations, plus additional assets for enhanced cuteness and fun. We generated text captions for the training images, and the image-text pairs were used to fine-tune the model effectively.
The current sample app uses a standard Imagen model, so the results may look a bit different from the visuals in this post. However, the app using the fine-tuned model and a custom version of the Firebase AI Logic SDK was demoed at Google I/O. That app will be released later this year, and we are also planning to add support for fine-tuned models to the Firebase AI Logic SDK later in the year.
The original image... and Androidifi-ed image
ML Kit
The app also uses the ML Kit Pose Detection SDK to detect a person in the camera view, which triggers the capture button and adds visual indicators.
To do this, we add the SDK to the app and use PoseDetection.getClient(). Then, using the poseDetector, we look at the detectedLandmarks in the streaming image coming from the camera, and set _uiState.detectedPose to true if a nose and both shoulders are visible:
private suspend fun runPoseDetection() {
    PoseDetection.getClient(
        PoseDetectorOptions.Builder()
            .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
            .build(),
    ).use { poseDetector ->
        // Since image analysis is processed by ML Kit asynchronously in its own
        // thread pool, we can run this directly from the calling coroutine scope
        // instead of pushing this work to a background dispatcher.
        cameraImageAnalysisUseCase.analyze { imageProxy ->
            imageProxy.image?.let { image ->
                val poseDetected = poseDetector.detectPersonInFrame(image, imageProxy.imageInfo)
                _uiState.update { it.copy(detectedPose = poseDetected) }
            }
        }
    }
}

private suspend fun PoseDetector.detectPersonInFrame(
    image: Image,
    imageInfo: ImageInfo,
): Boolean {
    val results = process(InputImage.fromMediaImage(image, imageInfo.rotationDegrees)).await()
    val landmarkResults = results.allPoseLandmarks
    val detectedLandmarks = mutableListOf<Int>()
    for (landmark in landmarkResults) {
        if (landmark.inFrameLikelihood > 0.7) {
            detectedLandmarks.add(landmark.landmarkType)
        }
    }
    return detectedLandmarks.containsAll(
        listOf(PoseLandmark.NOSE, PoseLandmark.LEFT_SHOULDER, PoseLandmark.RIGHT_SHOULDER),
    )
}
The camera shutter button is activated when a person (or a bot!) enters the frame.
Get started with AI on Android
The Androidify app makes extensive use of Gemini 2.5 Flash to validate the image and generate a detailed description that is then used to generate the image. It also leverages a specifically fine-tuned Imagen 3 model to generate images of Android bots. Gemini and Imagen models are easily integrated into the app via the Firebase AI Logic SDK. In addition, the ML Kit Pose Detection SDK controls the capture button, enabling it only when a person is present in front of the camera.
To get started with AI on Android, go to the Gemini and Imagen documentation for Android.
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
At Google I/O 2025, we announced a host of feature, performance, stability, library, and tooling updates for Jetpack Compose, our recommended Android UI toolkit. With Compose you can build excellent apps that work across devices. Compose has matured a lot since it was first announced (at Google I/O 2019!), and we're now seeing 60% of the top 1,000 apps in the Play Store, such as MAX and Google Drive, use and love it.
New Features
Since I/O last year, the Compose Bill of Materials (BOM) version 2025.05.01 has added new features such as:
Autofill support that lets users automatically insert previously entered personal information into text fields.
Auto-sizing text to smoothly adapt text size to a parent container size (see the sketch after this list).
Visibility tracking for when you need high-performance information on a composable's position in its root container, screen, or window.
Animate bounds modifier for beautiful automatic animations of a Composable's position and size within a LookaheadScope.
Accessibility checks in tests that let you build a more accessible app UI through automated a11y testing.
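To make one of these concrete, here is a minimal sketch of auto-sizing text, assuming the autoSize parameter on BasicText and TextAutoSize.StepBased from the Compose 1.8 text APIs; the size bounds are arbitrary illustration values:
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.text.BasicText
import androidx.compose.foundation.text.TextAutoSize
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.sp

// The text scales between 12.sp and 48.sp to fit its container's width.
@Composable
fun AutoSizeTitle(title: String) {
    BasicText(
        text = title,
        maxLines = 1,
        autoSize = TextAutoSize.StepBased(minFontSize = 12.sp, maxFontSize = 48.sp),
        modifier = Modifier.fillMaxWidth(),
    )
}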
If you’re looking to try out new Compose functionality, the alpha BOM offers new features that we're working on including:
Pausable Composition (see below)
Updates to LazyLayout prefetch
Context Menus
New modifiers: onFirstVisible, onVisibilityChanged, contentType
New Lint checks for frequently changing values and elements that should be remembered in composition
Please try out the alpha features and provide feedback to help shape the future of Compose.
Material Expressive
At Google I/O, we unveiled Material Expressive, Material Design’s latest evolution that helps you make your products even more engaging and easier to use. It's a comprehensive addition of new components, styles, motion and customization options that help you to build beautiful rich UIs. The Material3 library in the latest alpha BOM contains many of the new expressive components for you to try out.
Adaptive layouts
Developing adaptive apps across form factors including phones, foldables, tablets, desktops, cars, and Android XR is now easier with the latest enhancements to the Compose adaptive layouts library. The stable 1.1 release adds support for predictive back gestures for smoother transitions and pane expansion for more flexible two-pane layouts on larger screens. Furthermore, the 1.2 (alpha) release adds more flexibility for how panes are displayed, adding strategies for reflowing and levitating.
Compose Adaptive Layouts Updates in the Google Play app
Performance
With each release of Jetpack Compose, we continue to prioritize performance improvements. The latest stable release includes significant rewrites and improvements to multiple sub-systems, including semantics, focus, and text optimizations. Best of all, these are available to you simply by upgrading your Compose dependency; no code changes are required.
Internal benchmark, run on a Pixel 3a
We continue to work on further performance improvements, notable changes in the latest alpha BOM include:
Pausable Composition allows compositions to be paused, and their work split up over several frames.
Background text prefetch enables text layout caches to be pre-warmed on a background thread, enabling faster text layout.
LazyLayout prefetch improvements enable lazy layouts to be smarter about how much content to prefetch, taking advantage of pausable composition.
Together these improvements eliminate nearly all jank in an internal benchmark.
Stability
We've heard from you that upgrading your Compose dependency can be challenging, encountering bugs or behaviour changes that prevent you from staying on the latest version. We've invested significantly in improving the stability of Compose, working closely with the many Google app teams building with Compose to detect and prevent issues before they even make it to a release.
Google apps develop against and release with snapshot builds of Compose; as such, Compose is tested against the hundreds of thousands of Google app tests and any Compose issues are immediately actioned by our team. We have recently invested in increasing the cadence of updating these snapshots and now update them daily from Compose tip-of-tree, which means we’re receiving feedback faster, and are able to resolve issues long before they reach a public release of the library.
Jetpack Compose also relies on @Experimental annotations to mark APIs that are subject to change. We heard your feedback that some APIs have remained experimental for a long time, reducing your confidence in the stability of Compose. We have invested in stabilizing experimental APIs to provide you a more solid API surface, and reduced the number of experimental APIs by 32% in the last year.
We have also heard that it can be hard to debug Compose crashes when your own code does not appear in the stack trace. In the latest alpha BOM, we have added a new opt-in feature to provide more diagnostic information. Note that this does not currently work with minified builds and comes at a performance cost, so we recommend only using this feature in debug builds.
class App : Application() {
    override fun onCreate() {
        super.onCreate()
        // Enable only for the debug flavor to avoid the performance impact in release.
        Composer.setDiagnosticStackTraceEnabled(BuildConfig.DEBUG)
    }
}
Libraries
We know that to build great apps, you need Compose integration in the libraries that interact with your app's UI.
A core library that powers any Compose app is Navigation. You told us that you often encountered limitations when managing state hoisting and directly manipulating the back stack with the current Compose Navigation solution. We went back to the drawing-board and completely reimagined how a navigation library should integrate with the Compose mental model. We're excited to introduce Navigation 3, a new artifact designed to empower you with greater control and simplify complex navigation flows.
We're also investing in Compose support for CameraX and Media3, making it easier to integrate camera capture and video playback into your UI with Compose idiomatic components.
The Compose alpha BOM introduces two new annotations and associated lint checks to help you write correct and performant Compose code. The @FrequentlyChangingValue annotation and FrequentlyChangedStateReadInComposition lint check warn in situations where function calls or property reads in composition might cause frequent recompositions, for example when reading scroll position values or animating values. The @RememberInComposition annotation and RememberInCompositionDetector lint check warn in situations where constructors, functions, and property getters are called directly inside composition (e.g. the TextFieldState constructor) without being remembered.
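To illustrate the kinds of patterns these checks flag (a hedged sketch; the composable itself is hypothetical):
import androidx.compose.foundation.ScrollState
import androidx.compose.foundation.text.input.TextFieldState
import androidx.compose.runtime.Composable

@Composable
fun Example(scrollState: ScrollState) {
    // Would be flagged by FrequentlyChangedStateReadInComposition:
    // scrollState.value changes on every scroll frame, so reading it here
    // recomposes this composable very frequently.
    val offset = scrollState.value

    // Would be flagged by RememberInCompositionDetector: TextFieldState is
    // constructed directly in composition; wrap it in remember { } (or hoist
    // it) so it survives recomposition.
    val textFieldState = TextFieldState()
}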
Happy Composing
We continue to invest in providing the features, performance, stability, libraries and tools that you need to build excellent apps. We value your input so please share feedback on our latest updates or what you'd like to see next.
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
Posted by Caren Chang - Developer Relations Engineer, Chengji Yan - Software Engineer, Taj Darra - Product Manager
We are excited to announce a set of on-device GenAI APIs, as part of ML Kit, to help you integrate Gemini Nano in your Android apps.
To start, we are releasing 4 new APIs:
Summarization: to summarize articles and conversations
Proofreading: to polish short text
Rewriting: to reword text in different styles
Image Description: to provide short descriptions for images
Key benefits of GenAI APIs
GenAI APIs are high level APIs that allow for easy integration, similar to existing ML Kit APIs. This means you can expect quality results out of the box without extra effort for prompt engineering or fine tuning for specific use cases.
GenAI APIs run on-device and thus provide the following benefits:
Input, inference, and output data is processed locally
Functionality remains the same without a reliable internet connection
No additional cost incurred for each API call
To prevent misuse, we also added safety protections in various layers, including base model training, safety-aware LoRA fine-tuning, input and output classifiers, and safety evaluations.
How GenAI APIs are built
There are 4 main components that make up each of the GenAI APIs.
Gemini Nano is the base model, serving as the foundation shared by all APIs.
Small API-specific LoRA adapter models are trained and deployed on top of the base model to further improve the quality for each API.
Optimized inference parameters (e.g. prompt, temperature, topK, batch size) are tuned for each API to guide the model in returning the best results.
An evaluation pipeline ensures quality across various datasets and attributes. This pipeline consists of LLM raters, statistical metrics, and human raters.
Together, these components make up the high-level GenAI APIs that simplify the effort needed to integrate Gemini Nano in your Android app.
Evaluating quality of GenAI APIs
For each API, we formulate a benchmark score based on the evaluation pipeline mentioned above. This score is based on attributes specific to a task. For example, when evaluating the summarization task, one of the attributes we look at is “grounding” (i.e., the factual consistency of the generated summary with the source content).
To provide out-of-the-box quality for GenAI APIs, we applied feature-specific fine-tuning on top of the Gemini Nano base model. This resulted in an increase in the benchmark score of each API, as shown below:
Use case in English | Gemini Nano Base Model | ML Kit GenAI API
Summarization | 77.2 | 92.1
Proofreading | 84.3 | 90.2
Rewriting | 79.5 | 84.1
Image Description | 86.9 | 92.3
In addition, this is a quick reference of how the APIs perform on a Pixel 9 Pro:
Task | Prefix Speed (input processing rate) | Decode Speed (output generation rate)
Text-to-text | 510 tokens/second | 11 tokens/second
Image-to-text | 510 tokens/second + 0.8 seconds for image encoding | 11 tokens/second
Sample usage
This is an example of implementing the GenAI Summarization API to get a one-bullet summary of an article:
val articleToSummarize = "We are excited to announce a set of on-device generative AI APIs..."

// Define the task with the desired input and output format.
val summarizerOptions = SummarizerOptions.builder(context)
    .setInputType(InputType.ARTICLE)
    .setOutputType(OutputType.ONE_BULLET)
    .setLanguage(Language.ENGLISH)
    .build()
val summarizer = Summarization.getClient(summarizerOptions)

suspend fun prepareAndStartSummarization(context: Context) {
    // Check feature availability. Status will be one of the following:
    // UNAVAILABLE, DOWNLOADABLE, DOWNLOADING, AVAILABLE
    val featureStatus = summarizer.checkFeatureStatus().await()

    if (featureStatus == FeatureStatus.DOWNLOADABLE) {
        // Download the feature if necessary.
        // If downloadFeature is not called, the first inference request will
        // also trigger the feature to be downloaded if it's not already
        // downloaded.
        summarizer.downloadFeature(object : DownloadCallback {
            override fun onDownloadStarted(bytesToDownload: Long) { }

            override fun onDownloadFailed(e: GenAiException) { }

            override fun onDownloadProgress(totalBytesDownloaded: Long) { }

            override fun onDownloadCompleted() {
                startSummarizationRequest(articleToSummarize, summarizer)
            }
        })
    } else if (featureStatus == FeatureStatus.DOWNLOADING) {
        // The inference request will automatically run once the feature is
        // downloaded.
        // If Gemini Nano is already downloaded on the device, the
        // feature-specific LoRA adapter model will be downloaded very
        // quickly. However, if Gemini Nano is not already downloaded,
        // the download process may take longer.
        startSummarizationRequest(articleToSummarize, summarizer)
    } else if (featureStatus == FeatureStatus.AVAILABLE) {
        startSummarizationRequest(articleToSummarize, summarizer)
    }
}

fun startSummarizationRequest(text: String, summarizer: Summarizer) {
    // Create the task request.
    val summarizationRequest = SummarizationRequest.builder(text).build()

    // Start the summarization request with a streaming response.
    summarizer.runInference(summarizationRequest) { newText ->
        // Show new text in UI
    }

    // You can also get a non-streaming response from the request:
    // val summarizationResult = summarizer.runInference(summarizationRequest)
    // val summary = summarizationResult.get().summary
}

// Be sure to release the resource when it's no longer needed, for example
// on viewModel.onCleared() or activity.onDestroy().
summarizer.close()
For more examples of implementing the GenAI APIs, check out the official documentation and samples on GitHub.
Here is some guidance on how to best use the current GenAI APIs:
For Summarization, consider:
Conversation messages or transcripts that involve 2 or more users
Articles or documents less than 4000 tokens (or about 3000 English words). Using the first few paragraphs for summarization is usually good enough to capture the most important information.
For Proofreading and Rewriting APIs, consider utilizing them during the content creation process for short content below 256 tokens to help with tasks such as:
Refining messages in a particular tone, such as more formal or more casual
Polishing personal notes for easier consumption later
For the Image Description API (a usage sketch follows this list), consider it for:
Generating titles of images
Generating metadata for image search
Utilizing descriptions of images in use cases where the images themselves cannot be displayed, such as within a list of chat messages
Generating alternative text to help visually impaired users better understand content as a whole
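As a rough sketch of an Image Description integration (hedged: the client and request names mirror the Summarization example above and the ML Kit GenAI documentation pattern, but check the official docs for exact signatures; the feature-status checks shown earlier are omitted here for brevity):
import android.content.Context
import android.graphics.Bitmap
import com.google.mlkit.genai.imagedescription.ImageDescriberOptions
import com.google.mlkit.genai.imagedescription.ImageDescription
import com.google.mlkit.genai.imagedescription.ImageDescriptionRequest

// Creates a client and streams a short description for a bitmap, e.g. to use
// as alt text or image-search metadata.
fun describeImage(context: Context, bitmap: Bitmap, onText: (String) -> Unit) {
    val imageDescriber = ImageDescription.getClient(
        ImageDescriberOptions.builder(context).build(),
    )
    val request = ImageDescriptionRequest.builder(bitmap).build()
    imageDescriber.runInference(request) { newText -> onText(newText) }
}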
GenAI API in production
Envision is an app that verbalizes the visual world to help people who are blind or have low vision lead more independent lives. A common use case in the app is for users to take a picture to have a document read out loud. Utilizing the GenAI Summarization API, Envision is now able to get a concise summary of a captured document. This significantly enhances the user experience by allowing them to quickly grasp the main points of documents and determine if a more detailed reading is desired, saving them time and effort.
Supported devices
GenAI APIs are available on Android devices using optimized MediaTek Dimensity, Qualcomm Snapdragon, and Google Tensor platforms through AICore. For a comprehensive list of devices that support GenAI APIs, refer to our official documentation.