Posted by Kateryna Semenova – Sr. Developer Relations Engineer
AI is reshaping how users interact with their favorite apps, opening new avenues for developers to create intelligent experiences. At Google I/O, we showcased how Android is making it easier than ever for you to build smart, personalized and creative apps. And we’re committed to providing you with the tools needed to innovate across the full development stack in this evolving landscape.
This year, we focused on making AI accessible across the spectrum, from on-device processing to cloud-powered capabilities. Here are the top 3 announcements you need to know for building with AI on Android from Google I/O ‘25:
#1 Leverage the efficiency of Gemini Nano for on-device AI experiences
For on-device AI, we announced a new set of ML Kit GenAI APIs powered by Gemini Nano, our most efficient and compact model, designed and optimized for running directly on mobile devices. These APIs provide high-level, easy-to-integrate solutions for common tasks, including text summarization, proofreading, rewriting content in different styles, and generating image descriptions. Building on-device offers significant benefits, such as local data processing and offline availability, at no additional cost for inference. To start integrating these solutions, explore the ML Kit GenAI documentation, the sample on GitHub, and the "Gemini Nano on Android: Building with on-device GenAI" talk.
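To give a feel for the shape of these APIs, here is a minimal summarization sketch in Kotlin. It follows the pattern shown in the ML Kit GenAI documentation, but treat the exact builder, enum, and method names as assumptions to verify against the current docs; displayPartialResult() is a hypothetical hook for your UI:

val options = SummarizerOptions.builder(context)
    .setInputType(InputType.ARTICLE)
    .setOutputType(OutputType.THREE_BULLETS)
    .setLanguage(Language.ENGLISH)
    .build()
val summarizer = Summarization.getClient(options)

suspend fun summarize(articleText: String) {
    // Gemini Nano may need to be downloaded first, so check availability before inference.
    // await() comes from the kotlinx-coroutines integration for tasks/futures.
    if (summarizer.checkFeatureStatus().await() == FeatureStatus.AVAILABLE) {
        val request = SummarizationRequest.builder(articleText).build()
        // The streaming callback delivers partial results as they are generated.
        summarizer.runInference(request) { newText -> displayPartialResult(newText) }
    }
}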
#2 Seamlessly integrate on-device ML/AI with your own custom models
The Google AI Edge platform enables building and deploying a wide range of pretrained and custom models on edge devices. It supports various frameworks like TensorFlow, PyTorch, Keras, and JAX, allowing for more customization in apps. The platform now also offers improved support for on-device hardware accelerators and a new AI Edge Portal service for broad coverage of on-device benchmarking and evals. If you are looking for GenAI language models on devices where Gemini Nano is not available, you can use other open models via the MediaPipe LLM Inference API.
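As a rough illustration, running an open model with the MediaPipe LLM Inference API looks like the sketch below; the model path and token limit are placeholder values, and you should confirm the options surface against the MediaPipe Tasks documentation:

// Point the task at a model file you have downloaded to the device.
val options = LlmInference.LlmInferenceOptions.builder()
    .setModelPath("/data/local/tmp/llm/model.task") // placeholder path
    .setMaxTokens(512)
    .build()

val llmInference = LlmInference.createFromOptions(context, options)

// Synchronous generation; an async streaming variant is also available.
val answer = llmInference.generateResponse("Explain on-device inference in one sentence.")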
Serving your own custom models on-device can pose challenges related to handling large model downloads and updates, impacting the user experience. To improve this, we’ve launched Play for On-Device AI in beta. This service is designed to help developers manage custom model downloads efficiently, ensuring the right model size and speed are delivered to each Android device precisely when needed.
#3 Power your Android apps with Gemini Flash, Pro and Imagen using Firebase AI Logic
For more advanced generative AI use cases, such as complex reasoning tasks, analyzing large amounts of data, processing audio or video, or generating images, you can use larger models from the Gemini Flash and Gemini Pro families, as well as Imagen, running in the cloud. These models are well suited for scenarios requiring advanced capabilities or multimodal inputs and outputs. And since the AI inference runs in the cloud, any Android device with an internet connection is supported. They are easy to integrate into your Android app using Firebase AI Logic, which provides a simplified, secure way to access these capabilities without managing your own backend. Its SDK also includes support for conversational AI experiences using the Gemini Live API, and for generating custom contextual visual assets with Imagen. To learn more, check out our sample on GitHub and watch the "Enhance your Android app with Gemini Pro and Flash, and Imagen" session.
Figure 1: Firebase AI Logic integration architecture
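As a minimal sketch of the integration, the Kotlin below uses the Firebase AI Logic SDK with the Gemini Developer API backend; the model name is illustrative, so check the Firebase documentation for currently supported models:

val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel("gemini-2.5-flash") // illustrative model name

suspend fun describe(bitmap: Bitmap): String? {
    // Multimodal input: an image plus a text instruction in a single request.
    val response = model.generateContent(
        content {
            image(bitmap)
            text("Describe this picture in one sentence.")
        }
    )
    return response.text
}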
Get inspired and start building with AI on Android today
We released a new open source app, Androidify, to help developers build AI-driven Android experiences using Gemini APIs, ML Kit, Jetpack Compose, CameraX, Navigation 3, and adaptive design. Users can create a personalized Android bot with Gemini and Imagen via the Firebase AI Logic SDK. Additionally, it incorporates ML Kit pose detection to detect a person in the camera viewfinder. The full code sample is available on GitHub for exploration and inspiration. Discover additional AI examples in our Android AI Sample Catalog.
The original image and the Androidify-ed image
Choosing the right Gemini model depends on understanding your specific needs and the model's capabilities, including modality, complexity, context window, offline capability, cost, and device reach. To explore these considerations further and see all our announcements in action, check out the AI on Android at I/O ‘25 playlist on YouTube and check out our documentation.
We are excited to see what you will build with the power of Gemini!
Posted by François Deschênes – Product Manager, Wear OS
Today, we are announcing important changes to Wear OS watch face development that will affect how developers publish and update watch faces on Google Play. As part of our ongoing effort to enhance Wear OS app quality, we are moving towards supporting only the Watch Face Format and removing support for AndroidX / Wearable Support Library (WSL) watch faces.
We introduced Watch Face Format at Google I/O in 2023 to make it easier to create watch faces that are customizable and power-efficient. The Watch Face Format is a declarative XML format, so there is no executable code involved in creating a watch face, and there is no code embedded in the watch face APK.
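For a sense of what that looks like, here is a simplified digital watch face; the elements and attributes are abbreviated from the Watch Face Format reference, so consult it for the exact schema:

<WatchFace width="450" height="450">
  <Scene backgroundColor="#ff000000">
    <DigitalClock x="0" y="175" width="450" height="100">
      <TimeText format="hh:mm" align="CENTER">
        <Font family="SYNC_TO_DEVICE" size="96" color="#ffffffff" />
      </TimeText>
    </DigitalClock>
  </Scene>
</WatchFace>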
What's changing?
Developers will need to migrate published watch faces to the Watch Face Format by January 14, 2026. Developers using Watch Face Studio to build watch faces will need to resubmit their watch faces to the Play Store using Watch Face Studio version 1.8.7 or above - see below for more details.
When are these changes coming?
Starting January 27, 2025 (already in effect):
No new AndroidX or Wearable Support Library (WSL) watch faces (legacy watch faces) can be published on the Play Store. Developers can still publish updates to existing watch faces.
Starting January 14, 2026:
Availability: Users will not be able to install legacy watch faces on any Wear OS devices from the Play Store. Legacy watch faces already installed on a Wear OS device will continue to work.
Updates: Developers will not be able to publish updates for legacy watch faces to the Play Store.
Monetization: The following won’t be possible for legacy watch faces: one-off watch face purchases, in-app purchases, and subscriptions. Existing purchases and subscriptions will continue to work, but they will not renew, including auto-renewals.
What should developers do next?
To prepare for these changes and to continue publishing watch faces to the Play Store, developers using AndroidX or WSL to build watch faces must migrate their watch faces to the Watch Face Format and resubmit to the Play Store by January 14, 2026.
Developers using Watch Face Studio to build watch faces will need to resubmit their watch faces to the Play Store using Watch Face Studio version 1.8.7 or above:
Be sure to republish for all Play tracks, including all testing tracks as well as production.
Remove any bundles from these tracks that were created using Watch Face Studio versions prior to 1.8.7.
Benefits of the Watch Face Format
Watch Face Format was developed to support developers in creating watch faces. This format provides numerous advantages to both developers and end users:
Simplified development: Streamlined workflows and visual design tools make building watch faces easier.
Enhanced performance: Optimized for battery efficiency and smooth interactions.
Increased security: Robust security features protect user data and privacy.
Forward-compatible: Access to the latest features and capabilities of Wear OS.
Resources to help with migration
To get started migrating your watch faces to the Watch Face Format, check out the Watch Face Format guides in our developer documentation.
We encourage developers to begin the migration process as soon as possible to ensure a seamless transition and continued availability of your watch faces on Google Play.
We understand that this change requires effort. If you have further questions, please refer to the Wear OS community announcement. Please report any issues using the issue tracker.
With new form factors emerging continually, the Android ecosystem is more dynamic than ever.
From phones and foldables to tablets, Chromebooks, TVs, cars, Wear and XR, Android users expect their apps to run seamlessly across an increasingly diverse range of form factors. Yet, many Android apps fall short of these expectations as they are built with UI constraints such as being locked to a single orientation or restricted in resizability.
With this in mind, Android 16 introduced API changes for apps targeting SDK level 36 to ignore orientation and resizability restrictions starting with large screen devices, shifting toward a unified model where adaptive apps are the norm. This is the moment to move ahead. Adaptive apps aren’t just the future of Android, they’re the expectation for your app to stand out across Android form factors.
Why you should prioritize adaptive now
Source: internal Google data
Prioritizing optimizations to make your app adaptive isn't just about keeping up with the orientation and resizability API changes in Android 16 for apps targeting SDK 36. Adaptive apps unlock tangible benefits across user experience, development efficiency, and market reach.
Mobile apps can now reach users on over 500 million active large screen devices: they run on foldables, tablets, Chromebooks, and even compatible cars, with minimal changes. Android 16 will introduce significant advancements in desktop windowing for a true desktop-like experience on large screens, including connected displays. And Android XR opens a new dimension, allowing your existing apps to be available in immersive environments. The user expectation is clear: a consistent, high-quality experience that intelligently adapts to any screen – be it a foldable, a tablet with a keyboard, or a movable, resizable window on a Chromebook.
“The new baseline” with orientation and resizability API changes in Android 16: We believe mobile apps are undergoing a shift toward UI that adapts responsively to any screen size, just like websites. Android 16 will ignore app-defined restrictions like fixed orientation (portrait-only) and non-resizable windows, beginning with large screens (smallest width of the device >= 600dp), including tablets and the inner displays of foldables. For most apps, the work is largely about letting layouts stretch to any screen size; if your app isn't adaptive, it could deliver a broken user experience on these screens. This moves adaptive design from a nice-to-have to a foundational requirement.
Increase user reach and app discoverability in Play: Adaptive apps are better positioned to be ranked higher in Play, and featured in editorial articles across form factors, reaching a wider audience across Play search and homepages. Additionally, Google Play Store surfaces ratings and reviews across all form factors. If your app is not optimized, a potential user's first impression might be tainted by a 1-star review complaining about a stretched UI on a device they don't even own yet. Users are also more likely to engage with apps that provide a great experience across their devices.
Increased engagement on large screens: Users on large screen devices often have different interaction patterns. On large screens, users may engage for longer sessions, perform more complex tasks, and consume more content.
Usage for 6 major media streaming apps in the US was up to 3x higher for users on both tablet and phone, as compared to phone-only users.
More accessible app experiences: According to the World Bank, 15% of the world’s population has some type of disability. People with disabilities depend on apps and services that support accessibility to communicate, learn, and work. Matching the user’s preferred orientation improves the accessibility of applications, helping to create an inclusive experience for all.
Today, most apps are building for smartphones only
“...looking at the number of users, the ROI does not justify the investment”.
That's a frequent pushback from product managers and decision-makers, and if you're just looking at top-line analytics comparing the number of tablet sessions to smartphone sessions, it might seem like a closed case.
While top-line analytics might show lower session numbers on tablets compared to smartphones, concluding that large screens aren't worth the effort based solely on current volume can be a trap, causing you to miss out on valuable engagement and future opportunities.
Let's take a deeper look into why:
1. The user experience ‘chicken and egg’ loop: Is it possible that the low usage is a symptom rather than the root cause? Users are quick to abandon apps that feel clunky or broken. If your app on large screens is a stretched-out phone interface, it likely provides a negative user experience. The lack of users might reflect the lack of a good experience, not necessarily a lack of potential users.
2. Beyond user volume, look at user engagement: Don't just count users, analyze their worth. Users interact with apps on large screens differently. The large screen often leads to longer sessions and more immersive experiences. As mentioned above, usage data shows that engagement time increases significantly for users who interact with apps on both their phone and tablet, as compared to phone only users.
3. Market evolution: The Android device ecosystem is continuing to evolve. With the rise of foldables, upcoming connected displays support in Android 16, and form factors like XR and Android Auto, adaptive design is now more critical than ever. Building for a specific screen size creates technical debt, and may slow your development velocity and compromise the product quality in the long run.
Okay, I am convinced. Where do I start?
For organizations ready to move forward, Android offers many resources and developer tools to optimize apps to be adaptive. See below for how to get started:
1. Check how your app looks on large screens today: Begin by looking at your app’s current state on tablets, foldables (in different postures), Chromebooks, and environments like desktop windowing. Confirm whether your app is available on these devices, or whether you are unintentionally leaving out these users by requiring unnecessary features within your app.
2. Address common UI issues: Assess what feels awkward in your app UI today. We have a lot of guidance available on how you can easily translate your mobile app to other screens.
a. Check the Large screens design gallery for inspiration and to understand how your app UI can evolve across devices, using proven solutions to common UI challenges.
b. Start with quick wins. For example, prevent buttons from stretching to the full screen width, or switch to a vertical navigation bar on large screens to improve ergonomics.
c. Identify patterns where canonical layouts (e.g. list-detail) could solve any UI awkwardness you identified. Could a list-detail view improve your app's navigation? Would a supporting pane on the side make better use of the extra space than a bottom sheet?
3. Optimize your app incrementally, screen by screen: It may be helpful to prioritize how you approach optimization because not everything needs to be perfectly adaptive on day one. Incrementally improve your app based on what matters most – it's not all or nothing.
a. Start with the foundations. Check out the large screen app quality guidelines, which tier and prioritize the fixes that are most critical to users. Remove orientation restrictions to support portrait and landscape, ensure resizability (for when users are in split screen), and prevent major stretching of buttons, text fields, and images. These foundational fixes are critical, especially with API changes in Android 16 that will make these aspects even more important.
b. Implement adaptive layout optimizations with a focus on core user journeys or screens first.
i. Identify screens where optimizations (for example a two-pane layout) offer the biggest UX win
ii. And then proceed to screens or parts of the app that are not as often used on large screens
c. Support input methods beyond touch, including keyboard, mouse, trackpad, and stylus input. With new form factors and connected displays support, this sets users up to interact with your UI seamlessly.
d. Add differentiating hero user experiences like support for tabletop mode or dual-screen mode on foldables. This can happen on a per-use-case basis - for example, tabletop mode is great for watching videos, and dual screen mode is great for video calls.
While there's an upfront investment in adopting adaptive principles (using tools like Jetpack Compose and window size classes), the long-term payoff can be significant: you design and build features once and let them adapt across screen sizes, which outweighs the cost of creating multiple bespoke layouts. Check out the adaptive apps developer guidance for more.
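As a minimal illustration of that "build once" idea, a composable can branch on the window size class rather than on device type. This sketch assumes the Compose Material 3 adaptive and androidx.window libraries; ListDetailLayout() and ListOnlyLayout() are hypothetical stand-ins for your own UI:

@Composable
fun HomeScreen() {
    // The window size class reflects the current window, not the physical device.
    val widthClass = currentWindowAdaptiveInfo().windowSizeClass.windowWidthSizeClass
    if (widthClass == WindowWidthSizeClass.EXPANDED) {
        ListDetailLayout()   // two panes side by side
    } else {
        ListOnlyLayout()     // single pane on compact and medium widths
    }
}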
Unlock your app's potential with adaptive app design
The message for my fellow product managers, decision-makers, and businesses is clear: adaptive design will uplevel your app for high-quality Android experiences in 2025 and beyond. An adaptive, responsive UI is the scalable way to support the many devices in Android without developing on a per-form factor basis. If you ignore the diverse device ecosystem of foldables, tablets, Chromebooks, and emerging form factors like XR and cars, your business is accepting hidden costs from negative user reviews, lower discovery in Play, increased technical debt, and missed opportunities for increased user engagement and user acquisition.
Maximize your apps' impact and unlock new user experiences. Learn more about building adaptive apps today.
Google I/O 2025 brought exciting advancements to Android, equipping you with essential knowledge and powerful tools you need to build outstanding, user-friendly applications that stand out.
Whether you missed some of the key #GoogleIO25 updates, just saw the release of Android 16, or are ready to dive into building excellent adaptive apps, our playlist is for you. Learn how to craft engaging experiences with Live Updates in Android 16, capture video effortlessly with CameraX, process it efficiently using Media3's editing tools, and engage users across diverse platforms like XR, Android for Cars, Android TV, and Desktop.
Here are three key announcements directly influencing how you can craft deeply engaging experiences and truly connect with your users:
#1: Build adaptively to unlock 500 million devices
In today's diverse device ecosystem, users expect their favorite applications to function seamlessly across various form factors, including phones, tablets, Chromebooks, automobiles, and emerging XR glasses and headsets. Our recommended approach for developing applications that excel on each of these surfaces is to create a single, adaptive application. This strategy avoids the need to rebuild the application for every screen size, shape, or input method, ensuring a consistent and high-quality user experience across all devices.
The talk emphasizes that you don't need to rebuild apps for each form factor. Instead, small, iterative changes can unlock an app's potential.
Here are some resources we encourage you to use in your apps:
New feature support in Jetpack Compose Adaptive Libraries
We’re continuing to make it as easy as possible to build adaptively with the Jetpack Compose Adaptive Libraries, with new features in 1.1 like pane expansion and predictive back. By using canonical layout patterns such as list-detail or supporting pane and integrating them into your app code, your application will automatically adjust and reflow when resized.
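Here is a minimal list-detail sketch using the adaptive navigation library. Exact signatures vary between the 1.0 and 1.1 releases (navigateTo is a suspend function in recent versions), so treat this as a starting point rather than a drop-in:

@Composable
fun ProductBrowser(products: List<String>) {
    val scope = rememberCoroutineScope()
    val navigator = rememberListDetailPaneScaffoldNavigator<String>()
    var selected by remember { mutableStateOf<String?>(null) }

    NavigableListDetailPaneScaffold(
        navigator = navigator,
        listPane = {
            AnimatedPane {
                Column {
                    products.forEach { product ->
                        TextButton(onClick = {
                            selected = product
                            // On compact windows this navigates forward; on expanded
                            // windows both panes remain visible side by side.
                            scope.launch {
                                navigator.navigateTo(ListDetailPaneScaffoldRole.Detail)
                            }
                        }) { Text(product) }
                    }
                }
            }
        },
        detailPane = {
            AnimatedPane { Text(selected ?: "Choose a product") }
        }
    )
}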
Navigation 3
The alpha release of the Navigation 3 library now supports displaying multiple panes. This eliminates the need to alter your navigation destination setup for separate list and detail views. Instead, you can adjust the setup to concurrently render multiple destinations when sufficient screen space is available.
Updates to Window Manager Library
androidx.window 1.5 introduces two new window size classes for expanded widths, facilitating better layout adaptation for large tablets and desktops. A width of 1600dp or more is now categorized as "extra large," while widths between 1200dp and 1600dp are classified as "large." These subdivisions offer more granularity for developers to optimize their applications for a wider range of window sizes.
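In practice, the new bounds can drive how many panes you show. The constant names below are those documented for androidx.window 1.5; verify them against the release notes:

@Composable
fun preferredPaneCount(): Int {
    val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
    return when {
        // "Extra large": width >= 1600dp, e.g. desktops and external displays.
        sizeClass.isWidthAtLeastBreakpoint(WindowSizeClass.WIDTH_DP_EXTRA_LARGE_LOWER_BOUND) -> 3
        // "Large": width >= 1200dp, e.g. big tablets in landscape.
        sizeClass.isWidthAtLeastBreakpoint(WindowSizeClass.WIDTH_DP_LARGE_LOWER_BOUND) -> 2
        else -> 1
    }
}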
Support all orientations and be resizable
Important changes are coming in Android 16, affecting orientation, aspect ratio, and resizability: apps targeting SDK 36 will need to support all orientations and be resizable.
Extend to Android XR
We are making it easier for you to build for XR with the Android XR SDK in developer preview 2, which features new Material XR components, a fully integrated emulator within Android Studio, and spatial video support for your Play Store listings.
Upgrade your Wear OS apps to Material 3 Design
Wear OS 6 features Material 3 Expressive, a new UI design with personalized visuals and motion for user creativity, coming to Wear, Android, and Google apps later this year. You can upgrade your app and tiles to Material 3 Expressive by utilizing new Jetpack libraries: Wear Compose Material 3, which provides components for apps, and Wear ProtoLayout Material 3, which provides components and layouts for tiles.
You should build a single, adaptive mobile app that brings the best experiences to all Android surfaces. By building adaptive apps, you meet users where they are today and in the future, enhancing user engagement and app discoverability. This approach represents a strategic business decision that optimizes an app’s long-term success.
#2: Enhance your app’s performance optimization
Get ready to take your app's performance to the next level! Google I/O 2025 brought an inside look at cutting-edge tools and techniques to boost user satisfaction, enhance technical performance metrics, and drive those all-important key performance indicators. Imagine an end-to-end workflow that streamlines performance optimization.
Redesigned UiAutomator API
To make benchmarking reliable and reproducible, there's the brand new UiAutomator API. Write robust test code and run it on your local devices or in Firebase Test Lab, ensuring consistent results every time.
Macrobenchmarks
Once your tests are in place, it's time to measure and understand. Macrobenchmarks give you the hard data, while App Startup Insights provide actionable recommendations for improvement. Plus, you can get a quick snapshot of your app's health with the App Performance Score via DAC. These tools combined give you a comprehensive view of your app's performance and where to focus your efforts.
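If you haven't written one before, a cold-startup macrobenchmark is only a few lines. This sketch assumes a separate macrobenchmark module; the package name is hypothetical:

@RunWith(AndroidJUnit4::class)
class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun coldStartup() = benchmarkRule.measureRepeated(
        packageName = "com.example.myapp", // hypothetical target app
        metrics = listOf(StartupTimingMetric()),
        iterations = 10,
        startupMode = StartupMode.COLD
    ) {
        // Each iteration launches the app from the home screen and waits for the first frame.
        pressHome()
        startActivityAndWait()
    }
}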
R8: more than code shrinking and obfuscation
You might know R8 as a code shrinking tool, but it's capable of so much more! The talk dives into R8's capabilities using the "Androidify" sample app. You'll see how to apply R8, troubleshoot any issues (like crashes!), and configure it for optimal performance. It also shows how library developers can include "consumer keep rules" so that their important code is not stripped when used in an application.
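For library authors, consumer keep rules ship inside the AAR and are applied automatically when a consuming app runs R8. A minimal Gradle (Kotlin DSL) sketch, with a hypothetical class name in the comment:

// build.gradle.kts of the library module
android {
    defaultConfig {
        // Rules in consumer-rules.pro travel inside the AAR; the file might contain, say:
        //   -keep class com.example.mylib.ReflectiveEntryPoint { *; }
        consumerProguardFiles("consumer-rules.pro")
    }
}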
#3: Build Richer Image and Video Experiences
In today's digital landscape, users increasingly expect seamless content creation capabilities within their apps. To meet this demand, developers require robust tools for building excellent camera and media experiences.
Media3Effects in CameraX Preview
The session at Google I/O delves into practical strategies for capturing high-quality video using CameraX while simultaneously applying Media3 effects to the preview.
Google Low-Light Boost
Google Low Light Boost in Google Play services enables real-time dynamic camera brightness adjustment in low light, even without device support for Low Light Boost AE Mode.
Posted by Sa-ryong Kang and Miguel Montemayor - Developer Relations Engineers
Peacock is NBCUniversal’s streaming service app available in the US, offering culture-defining entertainment including live sports, exclusive original content, TV shows, and blockbuster movies. The app continues to evolve, becoming more than just a platform to watch content, but a hub of entertainment.
Today’s users are consuming entertainment on an increasingly wider array of device sizes and types, and in particular are moving towards mobile devices. Peacock has adopted Jetpack Compose to help with its journey in adapting to more screens and meeting users where they are.
Disclaimer: Peacock is available in the US only. This video will only be viewable to US viewers.
Adapting to more flexible form factors
The Peacock development team is focused on bringing the best experience to users, no matter what device they’re using or when they want to consume content. With an emerging trend from app users to watch more on mobile devices and large screens like foldables, the Peacock app needs to be able to adapt to different screen sizes. As more devices are introduced, the team needed to explore new solutions that make the most out of each unique display permutation.
The goal was to have the Peacock app adapt to these new displays while continually offering high-quality entertainment without interruptions, like the stream reloading or visual errors. Thinking ahead, the team also wanted to build a solution ready for Android XR, as the entertainment landscape shifts toward more immersive experiences.
Building a future-proof experience with Jetpack Compose
In order to build a scalable solution that would help the Peacock app continue to evolve, the app was migrated to Jetpack Compose, Android’s toolkit for building scalable UI. One of the essential tools they used was the WindowSizeClass API, which helps developers create and test UI layouts for different size ranges. This API then allows the app to seamlessly switch between pre-set layouts as it reaches established viewport breakpoints for different window sizes.
The API was used in conjunction with Kotlin Coroutines and Flows to keep the UI state responsive as the window size changed. To test their work and fine tune edge case devices, Peacock used the Android Studio emulator to simulate a wide range of Android-based devices.
Jetpack Compose allowed the team to build adaptively, so now the Peacock app responds to a wide variety of screens while offering a seamless experience to Android users. “The app feels more native, more fluid, and more intuitive across all form factors,” said Diego Valente, Head of Mobile, Peacock and Global Streaming. “That means users can start watching on a smaller screen and continue instantly on a larger one when they unfold the device—no reloads, no friction. It just works.”
Preparing for immersive entertainment experiences
In building adaptive apps on Android, John Jelley, Senior Vice President, Product & UX, Peacock and Global Streaming, says Peacock has also laid the groundwork to quickly adapt to the Android XR platform: “Android XR builds on the same large screen principles, our investment here naturally extends to those emerging experiences with less developmental work.”
The team is excited about the prospect of features unlocked by Android XR, like Multiview for sports and TV, which enables users to watch multiple games or camera angles at once. By tailoring spatial windows to the user’s environment, the app could offer new ways for users to interact with contextual metadata like sports stats or actor information—all without ever interrupting their experience.
Posted by Matthew McCullough – VP of Product Management, Android Developer
Today at Google I/O, we announced the many ways we’re helping you build excellent, adaptive experiences, and helping you stay more productive through updates to our tooling that put AI at your fingertips and throughout your development lifecycle. Here’s a recap of 16 of our favorite announcements for Android developers; you can also see what was announced last week in The Android Show: I/O Edition. And stay tuned over the next two days as we dive into all of the topics in more detail!
Building AI into your Apps
1: Building intelligent apps with Generative AI
Generative AI enhances apps' experiences by making them intelligent, personalized, and agentic. This year, we announced new ML Kit GenAI APIs using Gemini Nano for common on-device tasks like summarization, proofreading, rewriting, and image description. We also provided capabilities for developers to harness more powerful models such as Gemini Pro, Gemini Flash, and Imagen via Firebase AI Logic for more complex use cases like image generation and processing extensive data across modalities, including bringing AI to life in Android XR. And we released a new AI sample app, Androidify, that showcases how these APIs can transform your selfies into unique Android robots! To start building intelligent experiences with these new capabilities, explore the developer documentation and sample apps, and watch the overview session to choose the right solution for your app.
New experiences across devices
2: One app, every screen: think adaptive and unlock 500 million screens
Mobile Android apps form the foundation across phones, foldables, tablets, and ChromeOS, and this year we’re helping you bring them to cars and XR and expanding usage with desktop windowing and connected displays. This expansion means tapping into an ecosystem of 500 million devices – a significant opportunity to engage more users when you think adaptive, building a single mobile app that works across form factors. Resources, including Compose Layouts library and Jetpack Navigation updates, help make building these dynamic experiences easier than before. You can see how Peacock, NBCUniversal’s streaming service (available in the US), is building adaptively to meet users where they are.
Disclaimer: Peacock is available in the US only. This video will only be viewable to US viewers.
3: Material 3 Expressive: design for intuition and emotion
The new Material 3 Expressive update provides tools to enhance your product's appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for users. Check out the I/O talk to learn more about expressive design and how it inspires emotion, clearly guides users toward their goals, and offers a flexible and personalized experience.
4: Smarter widgets, engaging live updates
Measure the return on investment of your widgets (available soon) and easily create personalized widget previews with Glance 1.2. Promoted Live Updates notify users of important ongoing notifications and come with a new Progress Style standardized template.
5: Enhanced Camera & Media: low light boost and battery savings
This year's I/O introduces several camera and media enhancements. These include a software low light boost for improved photography in dim lighting and native PCM offload, allowing the DSP to handle more audio playback processing, thus conserving user battery. Explore our detailed sessions on built-in effects within CameraX and Media3 for further information.
6: Build next-gen app experiences for Cars
We're launching expanded opportunities for developers to build in-car experiences, including new Gemini integrations, support for more app categories like Games and Video, and enhanced capabilities for media and communication apps via the Car App Library and new APIs. Alongside updated car app quality tiers and simplified distribution, we'll soon be providing improved testing tools like Android Automotive OS on Pixel Tablet and Firebase Test Lab access to help you bring your innovative apps to cars. Learn more from our technical session and blog post on new in-car app experiences.
7: Build for Android XR's expanding ecosystem with Developer Preview 2 of the SDK
8: Express yourself on Wear OS: meet Material Expressive on Wear OS 6
This year we are launching Wear OS 6: the most powerful and expressive version of Wear OS. Wear OS 6 features Material 3 Expressive, a new UI design with personalized visuals and motion for user creativity, coming to Wear, Android, and Google apps later this year. Developers gain access to Material 3 Expressive on Wear OS by utilizing new Jetpack libraries: Wear Compose Material 3, which provides components for apps, and Wear ProtoLayout Material 3, which provides components and layouts for tiles. Get started with the Material 3 libraries and other updates on Wear.
Some examples of Material 3 Expressive on Wear OS experiences
9: Engage users on Google TV with excellent TV apps
You can leverage more resources within Compose's core and Material libraries with the stable release of Compose for TV, empowering you to build excellent adaptive UIs across your apps. We're also thrilled to share exciting platform updates and developer tools designed to boost app engagement, including bringing Gemini capabilities to TV in the fall, opening enrollment for our Video Discovery API, and more.
Developer productivity
10: Build beautiful apps faster with Jetpack Compose
Compose is our big bet for UI development. The latest stable BOM release provides the features, performance, stability, and libraries that you need to build beautiful adaptive apps faster, so you can focus on what makes your app valuable to users.
Compose Adaptive Layouts Updates in the Google Play app
11: Kotlin Multiplatform: new Shared Template lets you build across platforms, easily
12: Gemini in Android Studio: AI Agents to help you work
Gemini in Android Studio is the AI-powered coding companion that makes Android developers more productive at every stage of the dev lifecycle. In March, we introduced Image to Code to bridge the gap between UX teams and software engineers by intelligently converting design mockups into working Compose UI code. And today, we previewed new agentic AI experiences, Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier to build and test code. You can read more about these updates in What’s new in Android development tools.
Get ready for exciting updates from Play designed to boost your discovery, engagement and revenue! Learn how we’re continuing to become a content-rich destination with enhanced personalization and fresh ways to showcase your apps and content. Plus, explore powerful new subscription features designed to streamline checkout and reduce churn. Read I/O 2025: What's new in Google Play to learn more.
15: Start migrating to Play Games Services v2 today
Play Games Services (PGS) connects over 2 billion gamer profiles on Play, powering cross-device gameplay, personalized gaming content and rewards for your players throughout the gaming journey. We are moving PGS v1 features to v2 with more advanced features and an easier integration path. Learn more about the migration timeline and new features.
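Migration largely amounts to initializing the v2 SDK, which signs users in automatically, and then checking authentication state before using Play Games features. A minimal sketch based on the documented v2 entry points:

class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        // v2 handles sign-in automatically; initialize once at startup.
        PlayGamesSdk.initialize(this)
    }
}

fun checkSignIn(activity: Activity) {
    PlayGames.getGamesSignInClient(activity)
        .isAuthenticated()
        .addOnCompleteListener { task ->
            val signedIn = task.isSuccessful && task.result.isAuthenticated
            // Gate achievements, leaderboards, and saved games on signedIn.
        }
}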
16: And of course, Android 16
We unpacked some of the latest features coming to users in Android 16, which we’ve been previewing with you for the last few months. If you haven’t already, make sure to test your apps with the latest Beta of Android 16. Android 16 includes Live Updates, professional media and camera features, desktop windowing and connected displays, major accessibility enhancements and much more.
Check out all of the Android and Play content at Google I/O
This was just a preview of some of the cool updates for Android developers at Google I/O, but stay tuned to Google I/O over the next two days as we dive into a range of Android developer topics in more detail. You can check out the What’s New in Android and the full Android track of sessions, and whether you’re joining in person or around the world, we can’t wait to engage with you!
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
Posted by Ben Trengrove - Developer Relations Engineer, Matt Dyor - Product Manager
Google I/O and KotlinConf 2025 bring a series of announcements on Android’s Kotlin and Kotlin Multiplatform efforts. Here’s what to watch out for:
Announcements from Google I/O 2025
Jetpack libraries
Our focus for Jetpack libraries and KMP is on sharing business logic across Android and iOS, but we have begun experimenting with web/WASM support.
We are adding KMP support to Jetpack libraries. Last year we started with Room, DataStore, and Collection, which are now available in stable releases; more recently, we added ViewModel, SavedState, and Paging. The levels of support that our Jetpack libraries guarantee for each platform are categorized into three tiers, with the top tier covering Android, iOS, and the JVM.
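If you are new to KMP, the core idea is compact: business logic lives in commonMain, and only platform touchpoints use expect/actual declarations. A minimal sketch (the three sections below live in separate source sets):

// commonMain: shared business logic, compiled for Android, iOS, and the JVM
expect fun platformName(): String

class Greeter {
    fun greet(): String = "Hello from shared Kotlin on ${platformName()}!"
}

// androidMain: Android-specific implementation
actual fun platformName(): String = "Android ${android.os.Build.VERSION.RELEASE}"

// iosMain: iOS-specific implementation via Kotlin/Native interop
actual fun platformName(): String = platform.UIKit.UIDevice.currentDevice.systemName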
Tool improvements
We're developing new tools to help easily start using KMP in your app. With the KMP new module template in Android Studio Meerkat, you can add a new module to an existing app and share code to iOS and other supported KMP platforms.
In addition to KMP enhancements, Android Studio now supports Kotlin K2 mode for Android-specific features that require language support, such as Live Edit, Compose Preview, and many more.
How Google is using KMP
Last year, Google Workspace began experimenting with KMP, and this is now running in production in the Google Docs app on iOS. The app’s runtime performance is on par with or better than before.
It’s been helpful to have an app at this scale test KMP out, because we’re able to identify and fix issues in ways that benefit the whole KMP developer community.
For example, we've upgraded the Kotlin Native compiler to LLVM 16 and contributed a more efficient garbage collector and string implementation. We're also bringing the static analysis power of Android Lint to Kotlin targets and ensuring a unified Gradle DSL for both AGP and KGP to improve the plugin management experience.
Kotlin Symbol Processing (KSP2) is stable to better support new Kotlin language features and deliver better performance. It is easier to integrate with build systems, is thread-safe, and has better support for debugging annotation processors. In contrast to KSP1, KSP2 has much better compatibility across different Kotlin versions. The rewritten command line interface also becomes significantly easier to use as it is now a standalone program instead of a compiler plugin.
KotlinConf 2025
Google team members are presenting a number of talks at KotlinConf spanning multiple topics:
Talks
Deploying KMP at Google Workspace by Jason Parachoniak, Troels Lund, and Johan Bay from the Workspace team discusses the challenges and solutions, including bugs and performance optimizations, encountered when launching Kotlin Multiplatform at Google Workspace, offering comparisons to Objective-C and a Q&A. (Technical Session)
The Life and Death of a Kotlin/Native Object by Troels Lund offers a high-level explanation of the Kotlin/Native runtime's inner workings concerning object instantiation, memory management, and disposal. (Technical Session)
APIs: How Hard Can They Be? presented by Aurimas Liutikas and Alan Viverette from the Jetpack team delves into the lifecycle of API design, review processes, and evolution within AndroidX libraries, particularly considering KMP and related tools. (Technical Session)
Project Sparkles: How Compose for Desktop is changing Android Studio and IntelliJ with Chris Sinco and Sebastiano Poggi from the Android Studio team introduces the initiative ('Project Sparkles') aiming to modernize Android Studio and IntelliJ UIs using Compose for Desktop, covering goals, examples, and collaborations. (Technical Session)
JSpecify: Java Nullness Annotations and Kotlin presented by David Baker explains the significance and workings of JSpecify's standard Java nullness annotations for enhancing Kotlin's interoperability with Java libraries. (Lightning Session)
Lessons learned decoupling Architecture Components from platform specific code features Jeremy Woods and Marcello Galhardo from the Jetpack team sharing insights from the Android team on decoupling core components like SavedState and System Back from platform specifics to create common APIs. (Technical Session)
KotlinConf’s Closing Panel, a regular staple of the conference, returns, featuring Jeffrey van Gogh as Google’s representative on the panel. (Panel)
Live Workshops
If you are at KotlinConf in person, we will have guided live workshops based on our new codelabs.
The codelab Migrating Room to Room KMP, led by Matt Dyor, Dustin Lam, and Tomáš Mlynarič, demonstrates the process of migrating an existing Room database implementation to Room KMP within a shared module.
We love engaging with the Kotlin community. If you are attending KotlinConf, we hope you get a chance to check out our booth, with opportunities to chat with our engineers, get your questions answered, and learn more about how you can leverage Kotlin and KMP.
Learn more about Kotlin Multiplatform
To learn more about KMP and start sharing your business logic across platforms, check out our documentation and the sample.
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
Posted by Paul Lammertsma – Developer Relations Engineer
Ratings and reviews are essential for developers, offering quantitative and qualitative feedback on user experiences. In 2022, we enhanced the granularity of this feedback by segmenting these insights by countries and form factors.
Now, we're extending the In-App Ratings and Reviews API to TV to allow developers to prompt users for ratings and reviews directly from Google TV.
Ratings and reviews on Google TV
Users can now see rating averages, browse reviews, and leave their own review directly from an app's store listing on Google TV.
Users can interact with in-app ratings and reviews on their TVs by doing the following:
Select ratings using the remote control D-pad.
Provide optional written reviews using Gboard’s on-screen voice input, or by easily typing from their phone.
Send mobile notifications to themselves to complete their TV app review directly on their phone.
Additionally, users can leave reviews for other form factors directly from their phone by simply selecting the device chip when submitting an app rating or writing a review.
We've already seen a considerable lift in app ratings on TV since bringing these changes to Google TV, and now, we're making it possible for developers to trigger a ratings prompt as well.
Before we look at the integration, let's first carefully consider the best time to request a review prompt. First, identify optimal moments within your app to request user feedback, ensuring prompts appear only when the UI is idle to prevent interruption of ongoing content.
val manager = ReviewManagerFactory.create(context)
manager.requestReviewFlow().addOnCompleteListener { task ->
    if (task.isSuccessful) {
        // We got the ReviewInfo object
        val reviewInfo = task.result
        manager.launchReviewFlow(activity, reviewInfo)
    } else {
        // There was some problem, log or handle the error code
        @ReviewErrorCode val reviewErrorCode =
            (task.getException() as ReviewException).errorCode
    }
}
First, invoke requestReviewFlow() to obtain a ReviewInfo object which is used to launch the review flow. You must include an addOnCompleteListener() not just to obtain the ReviewInfo object, but also to monitor for any problems triggering this flow, such as the unavailability of Google Play on the device. Note that ReviewInfo does not offer any insights on whether or not a prompt appeared or which action the user took if a prompt did appear.
The challenge is to identify when to trigger launchReviewFlow(). Track user actions—identifying successful journeys and points where users encounter issues—so you can be confident they had a delightful experience in your app.
When calling launchReviewFlow(), you may optionally include an addOnCompleteListener() to ensure your app resumes once the returned task is completed.
Note that due to throttling of how often users are presented with this prompt, there are no guarantees that the ratings dialog will appear when requesting to start this flow. For best practices, check this guide on when to request an in-app review.
Get started with In-App Reviews on Google TV
You can get a head start today by following these steps:
Identify successful journeys for users, like finishing a movie or TV show season.
Identify poor experiences that should be avoided, like buffering or playback errors.
Posted by Ben Sagmoe - Developer Relations Engineer
The in-car experience continues to evolve rapidly, and Google remains committed to pushing the boundaries of what's possible. At Google I/O 2025, we're excited to unveil the latest advancements for drivers, car manufacturers, and developers, furthering our goal of a safe, seamless, and helpful connected driving experience.
Today's car cabins are increasingly digital, offering developers exciting new opportunities with larger displays and more powerful computing. Android Auto is now supported in nearly all new cars sold, with almost 250 million compatible vehicles on the road.
We're also seeing significant growth in cars powered by Android Automotive OS with Google built-in. Over 50 models are currently available, with more launching this year. This growth is fueled by a thriving app ecosystem, including over 300 apps already available on the Play Store. These include apps optimized for a safe and seamless experience while driving as well as entertainment apps for while you're parked and waiting in your car—many of which are adaptive mobile apps that have been seamlessly brought to cars through the Car Ready Mobile Apps Program.
A vibrant developer community is essential to delivering these innovative in-car experiences utilizing the different screens within the car cabin. This past year, we've focused on key areas to help empower developers to build more differentiated experiences in cars across both platforms, as we embark on the Gemini era in cars!
Gemini for Cars
Exciting news for in-car experiences: Gemini, Google's advanced AI, is coming to vehicles! This unlocks a new era of safe and helpful interactions on the go.
Gemini enables natural voice conversations and seamless multitasking, empowering drivers to get more done simply by speaking naturally. Imagine effortlessly finding charging stations or navigating to a location pulled directly from an email, all with just your voice.
Navigation apps can integrate with Gemini using three core intent formats, allowing you to start navigation, display relevant search results, and execute custom actions, such as enabling users to report incidents like traffic congestion using their voice.
Gemini for cars will be rolling out in the coming months. Get ready to build the next generation of in-car AI experiences!
New developer programs and tools
Last year, we introduced car app quality tiers to inspire developers to create high quality in-car experiences. By developing your app in compliance with the Car ready tier, you can bring video, gaming, or browser apps to run while parked in cars with Google built-in with almost no additional effort. Learn more about Car Ready Mobile Apps.
Your app can further shine in cars within the Car optimized and Car differentiated tiers to unlock experiences while the car is in motion, and also when transitioning between parked and driving modes, while utilizing the different screens within the modern car cabin. Check the car app quality guidelines for details.
To start with, across both Android Auto and for cars with Google built-in, we've made some exciting improvements for Car App Library:
The Weather app category has graduated from beta: any developer can now publish weather apps to production tracks on both Android Auto and cars with Google built-in. Before you publish your app, check that it meets the quality guidelines for weather apps.
Two new templates, the SectionedItemTemplate and MediaPlaybackTemplate, are now available in the Car App Library 1.8 alpha release for use on Android Auto. These templates are a great fit for building templated media apps, allowing for increased customization in layout and browsing structure; see the sketch below for how Car App Library screens and templates fit together.
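The new templates follow the same Screen/Template pattern the Car App Library has always used. Since the 1.8 surface is still in alpha, this sketch uses the long-stable PaneTemplate to show the shape:

class HelloScreen(carContext: CarContext) : Screen(carContext) {
    override fun onGetTemplate(): Template {
        val row = Row.Builder()
            .setTitle("Hello from the car screen")
            .build()
        val pane = Pane.Builder().addRow(row).build()
        // The host renders the template appropriately for the car's display.
        return PaneTemplate.Builder(pane)
            .setHeaderAction(Action.APP_ICON)
            .build()
    }
}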
On Android Auto, many new app categories and capabilities are now in beta:
We are adding support for building media apps with the Car App Library, enabling media app developers to build the richer and more complete experiences that users are used to on their phones. During beta, developers can build and publish media apps built using the Car App Library to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.
The communications category is in beta. We've simplified calling integration for calling apps by utilizing the CallsManager Jetpack API. Together with the templates provided by the Car App Library, this enables communications apps to build features like full message history, upcoming meetings list, rich in-call views, and more. During beta, developers can build and publish communications apps to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.
Games are now supported in Android Auto, while parked, on phones running Android 15 and above. You can already find some popular titles like Angry Birds 2, Farm Heroes Saga, Candy Crush Soda Saga and Beach Buggy Racing 2. The Games category is in Beta and developers can publish games to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.
Finally, we have further simplified the building, testing, and distribution experience for developers building apps for Android Automotive OS cars with Google built-in:
Distribution through Google Play is more flexible than ever. It’s now possible for apps in the parked categories to distribute to cars with Google built-in using the same APK or App Bundle as for phones, including through the mobile release track. Learn more about how to distribute to cars.
Android Automotive OS on Pixel Tablet is now generally available, giving you a physical device option for testing Android Automotive OS apps without buying or renting a car. Additionally, the most recent system images include support for acting as an Android Auto receiver, meaning you can use the same device to test both your app’s experience on Android Auto and Android Automotive OS. Apply for access to these images.
The road ahead
You can look forward to more updates later this year, including:
Video apps will be supported on Android Auto, starting with phones running Android 16 on select compatible cars. If your app is already adaptive, enabling your app experience while parked only requires minimal steps to distribute to cars.
For Android Automotive OS cars running Android 14+ with Google built-in, we are working with car manufacturers to add additional app compatibility, to enable thousands of adaptive mobile apps in the next phase of the Car Ready Mobile Apps Program.
Updated design documentation that visualizes car app quality guidelines and integration paths to simplify designing your app for cars.
Google Play Services for cars with Google built-in are expanding to bring them on-par with mobile, including:
a. Passkeys and Credential Manager APIs for a more seamless user sign-in experience.
b. Quick Share, which will enable easy cross-device sharing from phone to car.
Pre-launch reports for Android Automotive OS are coming soon to the Play Console, helping you ensure app quality before distributing your app to cars.
Be sure to keep up to date on these features and more through goo.gle/cars-whats-new as we continuously invest in the future of Android in the car. Stay tuned for more resources to help you build innovative and engaging experiences for drivers and passengers.
Posted by Don Turner - Developer Relations Engineer
Navigating between screens in your app should be simple, shouldn't it? However, building a robust, scalable, and delightful navigation experience can be a challenge. For years, the Jetpack Navigation library has been a key tool for developers, but as the Android UI landscape has evolved, particularly with the rise of Jetpack Compose, we recognized the need for a new approach.
Today, we're excited to introduce Jetpack Navigation 3, a new navigation library built from the ground up specifically for Compose. For brevity, we'll just call it Nav3 from now on. This library embraces the declarative programming model and Compose state as fundamental building blocks.
Why a new navigation library?
The original Jetpack Navigation library (sometimes referred to as Nav2 as it's on major version 2) was initially announced back in 2018, before AndroidX and before Compose. While it served its original goals well, we heard from you that it had several limitations when working with modern Compose patterns.
One key limitation was that the back stack state could only be observed indirectly. This meant there could be two sources of truth, potentially leading to an inconsistent application state. Also, Nav2's NavHost was designed to display only a single destination – the topmost one on the back stack – filling the available space. This made it difficult to implement adaptive layouts that display multiple panes of content simultaneously, such as a list-detail layout on large screens.
Figure 1. Changing from single pane to multi-pane layouts can create navigational challenges
Founding principles
Nav3 is built upon principles designed to provide greater flexibility and developer control:
You own the back stack: You, the developer, not the library, own and control the back stack. It's a simple list which is backed by Compose state. Specifically, Nav3 expects your back stack to be SnapshotStateList<T> where T can be any type you choose. You can navigate by adding or removing items (Ts), and state changes are observed and reflected by Nav3's UI.
Get out of your way: We heard that you don't like a navigation library to be a black box with inaccessible internal components and state. Nav3 is designed to be open and extensible, providing you with building blocks and helpful defaults. If you want custom navigation behavior you can drop down to lower layers and create your own components and customizations.
Pick your building blocks: Instead of embedding all behavior within the library, Nav3 offers smaller components that you can combine to create more complex functionality. We've also provided a "recipes book" that shows how to combine components to solve common navigation challenges.
Figure 2. The Nav3 display observes changes to the developer-owned back stack.
Key features
Animations: Built-in transition animations are provided for changes in destination, including for predictive back. It also has a flexible API for custom animation behavior, allowing animations to be overridden at both the app and the individual screen level.
Adaptive layouts: A flexible layout API (named Scenes) allows you to render multiple destinations in the same layout (for example, a list-detail layout on large screen devices). This makes it easy to switch between single and multi-pane layouts.
State scoping: Enables state to be scoped to destinations on the back stack, including optional ViewModel support via a dedicated Jetpack lifecycle library.
Modularity: The API design allows navigation code to be split across multiple modules. This improves build times and allows clear separation of responsibilities between feature modules.
Figure 3. Custom animations and predictive back are easy to implement, and easy to override for individual destinations.
Basic code example
To give you an idea of how Nav3 works, here's a short code sample.
// Define the routes in your app and any arguments.
data object Home
data class Product(val id: String)

// Create a back stack, specifying the route the app should start with.
val backStack = remember { mutableStateListOf<Any>(Home) }

// A NavDisplay displays your back stack. Whenever the back stack changes, the display updates.
NavDisplay(
    backStack = backStack,

    // Specify what should happen when the user goes back
    onBack = { backStack.removeLastOrNull() },

    // An entry provider converts a route into a NavEntry which contains the content for that route.
    entryProvider = { route ->
        when (route) {
            is Home -> NavEntry(route) {
                Column {
                    Text("Welcome to Nav3")
                    Button(onClick = {
                        // To navigate to a new route, just add that route to the back stack
                        backStack.add(Product("123"))
                    }) {
                        Text("Click to navigate")
                    }
                }
            }
            is Product -> NavEntry(route) {
                Text("Product ${route.id}")
            }
            else -> NavEntry(Unit) { Text("Unknown route: $route") }
        }
    }
)
The recipes cover common use cases, including:
common navigation UI, such as a navigation rail or bar
conditional navigation, such as a login flow
custom layouts using Scenes
We plan to provide code recipes, documentation, and blogs for more complex use cases in the future.
Nav3 is currently in alpha, which means that the API is liable to change based on feedback. If you have any issues, or would like to provide feedback, please file an issue.
Nav3 offers a flexible and powerful foundation for building modern navigation in your Compose applications. We're really excited to see what you build with it.
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.