Android Developers Blog

Developer preview: Enhanced Android desktop experiences with connected displays

Posted by Francesco Romano – Developer Relations Engineer on Android, and Fahd Imtiaz – Product Manager, Android Developer
Today, Android is launching a few updates across the platform! This includes the start of Android 16's rollout, with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, and updates for Android users across Google apps and more, plus the June Pixel Drop. We're also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.

Android has continued to evolve to enable users to be more productive on large screens.

Today, we’re excited to share that connected displays support on compatible Android devices is now in developer preview with the Android 16 QPR1 Beta 2 release. As shown at Google I/O 2025, connected displays enable users to attach an external display to their Android device and transform a small screen device into a powerful tool with a large screen. This evolution gives users the ability to move apps beyond a single screen to unlock Android’s full productivity potential on external displays.

The connected display update builds on our desktop windowing experience, a capability we previewed last year. Desktop windowing is set to launch later this year for users on compatible tablets running Android 16. Desktop windowing enables users to run multiple apps simultaneously and resize windows for optimal multitasking. This new windowing capability works seamlessly with split screen and other multitasking features users already love on Android and doesn't require switching to a special mode.

Google and Samsung have collaborated to bring a more seamless and powerful desktop windowing experience to large screen devices and phones with connected displays in Android 16 across the Android ecosystem. These advancements will enhance Samsung DeX, and also extend to other Android devices.

For developers, connected displays and desktop windowing present new opportunities for building more engaging and more productive app experiences that seamlessly adapt across form factors. You can try out these features today on your connected display with the Android 16 QPR1 Beta 2 on select Pixel devices.

What’s new in connected displays support?

When a supported Android phone or foldable is connected to an external display through a DisplayPort connection, a new desktop session starts on the connected display. The phone and the external display operate independently, and apps are specific to the display on which they’re running.

The experience on the connected display is similar to the experience on a desktop, including a task bar that shows running apps and lets users pin apps for quick access. Users are able to run multiple apps side by side simultaneously in freely resizable windows on the connected display.

Phone connected to an external display, with a desktop session on the display while the phone maintains its own state.

When a desktop windowing enabled device (like a tablet) is connected to an external display, the desktop session is extended across both displays, unlocking an even more expansive workspace. The two displays then function as one continuous system, allowing app windows, content, and the cursor to move freely between the displays.

Tablet connected to an external display, extending the desktop session across both displays.

A cornerstone of this effort is the evolution of desktop windowing, which is stable in Android 16 and is packed with improvements and new capabilities.

Desktop windowing stable release

We've made substantial improvements in the stability and performance of desktop windowing in Android 16. This means users will encounter a smoother, more reliable experience when managing app windows on connected displays. Beyond general stability improvements, we're introducing several new features:

    • Flexible window tiling: Multitasking gets a boost with more intuitive window tiling options. Users can more easily arrange multiple app windows side by side or in various configurations, making it simpler to work across different applications simultaneously on a large screen.
    • Multiple desktops: Users can set up multiple desktop sessions to match their distinct productivity requirements and switch between the desktops using keyboard shortcuts, trackpad gestures, and Overview.
    • Enhanced app compatibility treatments: New compatibility treatments ensure that even legacy apps behave more predictably and look better on external displays by default. This reduces the burden on developers while providing a better out-of-the-box experience for users.
    • Multi-instance management: Users can manage multiple instances of supporting applications (for example, Chrome or Keep) through the app header button or taskbar context menu. This allows for quick switching between different instances of the same app.
    • Desktop persistence: Android can now better maintain window sizes, positions, and states across different desktops. This means users can set up their preferred workspace and have it restored across sessions, offering a more consistent and efficient workflow.

Best practices for optimal app experiences on connected displays

With the introduction of connected display support in Android, it's important to ensure your apps take full advantage of the new display capabilities. To help you build apps that shine in this enhanced environment, here are some key development practices to follow:

Build apps optimized for desktop

    • Design for any window size: With phones now connecting to external displays, your mobile app can run in a window of almost any size and aspect ratio. This means the app window can be as big as the screen of the connected display but also flex to fit a smaller window. In desktop windowing, the minimum window size is 386 x 352 dp, which is smaller than most phones. This fundamentally changes how you need to think about UI. With orientation and resizability changes in Android 16, it becomes even more critical for you to update your apps to support resizability and portrait and landscape orientations for an optimal experience with desktop windowing and connected displays. Make sure your app supports any window size by following the best practices on adaptive development.
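To make that concrete, here is a minimal Compose sketch that selects a layout from the window size class rather than the device type. It assumes the androidx.compose.material3.adaptive library; the three layout composables are illustrative placeholders:

```kotlin
import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.compose.runtime.Composable
import androidx.window.core.layout.WindowWidthSizeClass

// Pick a layout from the current window size class, not from the device type.
@Composable
fun AppRoot() {
    when (currentWindowAdaptiveInfo().windowSizeClass.windowWidthSizeClass) {
        WindowWidthSizeClass.COMPACT -> CompactLayout()  // phones, small windows
        WindowWidthSizeClass.MEDIUM -> MediumLayout()    // small tablets, unfolded foldables
        else -> ExpandedLayout()                         // tablets, desktop windows
    }
}

@Composable fun CompactLayout() { /* single-pane UI */ }
@Composable fun MediumLayout() { /* ... */ }
@Composable fun ExpandedLayout() { /* multi-pane UI */ }
```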

Handle dynamic display changes

    • Don't assume a constant Display object: The Display object associated with your app's context can change when an app window is moved to an external display or if the display configuration changes. Your app should gracefully handle configuration change events and query display metrics dynamically rather than caching them.
    • Account for density configuration changes: External displays can have vastly different pixel densities than the primary device screen. Ensure your layouts and resources adapt correctly to these changes to maintain UI clarity and usability. Use density-independent pixels (dp) for layouts, provide density-specific resources, and ensure your UI scales appropriately.
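As a rough sketch of both points, query window metrics on demand and react to configuration changes instead of caching values at startup. This assumes the activity declares the relevant android:configChanges entries so it handles the change itself rather than being recreated, and relayout is an illustrative helper:

```kotlin
import android.content.res.Configuration
import androidx.appcompat.app.AppCompatActivity

class CanvasActivity : AppCompatActivity() {

    // Query window metrics on demand rather than caching them at startup.
    private fun currentWidthDp(): Float {
        val bounds = windowManager.currentWindowMetrics.bounds
        return bounds.width() / resources.displayMetrics.density
    }

    override fun onConfigurationChanged(newConfig: Configuration) {
        super.onConfigurationChanged(newConfig)
        // Fires when the window moves to a display with a different size or density.
        relayout(currentWidthDp())
    }

    private fun relayout(widthDp: Float) { /* rebuild the layout for the new width */ }
}
```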

Go beyond just the screen

    • Correctly support external peripherals: When users connect to an external monitor, they often create a more desktop-like environment. This frequently involves using external keyboards, mice, trackpads, webcams, microphones, and speakers. If your app uses camera or microphone input, the app should be able to detect and utilize peripherals connected through the external display or a docking station.
    • Handle keyboard actions: Desktop users rely heavily on keyboard shortcuts for efficiency. Implement standard shortcuts (for example, Ctrl+C, Ctrl+V, Ctrl+Z) and consider app-specific shortcuts that make sense in a windowed environment. Make sure your app supports keyboard navigation.
    • Support mouse interactions: Beyond simple clicks, ensure your app responds correctly to mouse hover events (for example, for tooltips or visual feedback), right-clicks (for contextual menus), and precise scrolling. Consider implementing custom pointers to indicate different actions.
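For instance, here is a minimal sketch of a shortcut handler and hover feedback in one Activity; copySelection and pasteClipboard are hypothetical app functions:

```kotlin
import android.view.KeyEvent
import android.view.MotionEvent
import android.view.View
import androidx.appcompat.app.AppCompatActivity

class EditorActivity : AppCompatActivity() {

    // Handle common Ctrl-based shortcuts at the activity level.
    override fun onKeyShortcut(keyCode: Int, event: KeyEvent): Boolean = when {
        event.isCtrlPressed && keyCode == KeyEvent.KEYCODE_C -> { copySelection(); true }
        event.isCtrlPressed && keyCode == KeyEvent.KEYCODE_V -> { pasteClipboard(); true }
        else -> super.onKeyShortcut(keyCode, event)
    }

    // Give visual feedback when a mouse pointer hovers over a view.
    fun attachHoverFeedback(view: View) = view.setOnHoverListener { v, e ->
        when (e.action) {
            MotionEvent.ACTION_HOVER_ENTER -> v.alpha = 0.8f
            MotionEvent.ACTION_HOVER_EXIT -> v.alpha = 1.0f
        }
        false // don't consume the event, so tooltips still work
    }

    private fun copySelection() { /* ... */ }
    private fun pasteClipboard() { /* ... */ }
}
```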

Getting started

Explore the connected displays and enhanced desktop windowing features in the latest Android Beta. Get Android 16 QPR1 Beta 2 on a supported Pixel device (Pixel 8 and Pixel 9 series) to start testing your app today. Then enable desktop experience features in the developer settings.

Support for connected displays in the Android Emulator is coming soon, so stay tuned for updates!

Dive into the updated documentation on multi-display support and window management to learn more about implementing these best practices.

Feedback

Your feedback is crucial as we continue to refine these experiences. Please share your thoughts and report any issues through our official feedback channels.

We're committed to making Android a versatile platform that adapts to the many ways users want to interact with their apps and devices. The improvements to connected display support are another step in that direction, and we can't wait to see the amazing experiences you'll build!

A product manager’s guide to adapting Android apps across devices

Posted by Fahd Imtiaz, Product Manager, Android Developer Experience

With new form factors emerging continually, the Android ecosystem is more dynamic than ever.

From phones and foldables to tablets, Chromebooks, TVs, cars, Wear and XR, Android users expect their apps to run seamlessly across an increasingly diverse range of form factors. Yet, many Android apps fall short of these expectations as they are built with UI constraints such as being locked to a single orientation or restricted in resizability.

With this in mind, Android 16 introduced API changes for apps targeting SDK level 36 to ignore orientation and resizability restrictions starting with large screen devices, shifting toward a unified model where adaptive apps are the norm. This is the moment to move ahead. Adaptive apps aren’t just the future of Android, they’re the expectation for your app to stand out across Android form factors.

Why you should prioritize adaptive now

500 million+ active devices, including foldables, tablets, Chromebooks, and mobile-app capable cars (source: internal Google data)

Prioritizing optimizations to make your app adaptive isn't just about keeping up with the orientation and resizability API changes in Android 16 for apps targeting SDK 36. Adaptive apps unlock tangible benefits across user experience, development efficiency, and market reach.

    • Mobile apps can now reach users on over 500 million active large screen devices: Mobile apps run on foldables, tablets, Chromebooks, and even compatible cars, with minimal changes. Android 16 will introduce significant advancements in desktop windowing for a true desktop-like experience on large screens, including connected displays. And Android XR opens a new dimension, allowing your existing apps to be available in immersive environments. The user expectation is clear: a consistent, high-quality experience that intelligently adapts to any screen – be it a foldable, a tablet with a keyboard, or a movable, resizable window on a Chromebook.

    • “The new baseline” with orientation and resizability API changes in Android 16: We believe mobile apps are undergoing a shift toward UI that adapts responsively to any screen size, just like websites. Android 16 will ignore app-defined restrictions like fixed orientation (portrait-only) and non-resizable windows, beginning with large screens (smallest device width >= 600dp), including tablets and the inner displays of foldables. For most apps, this is key to stretching to any screen size, but if your app isn't adaptive, it could deliver a broken user experience on these screens. This moves adaptive design from a nice-to-have to a foundational requirement.
Side-by-side comparison: a non-adaptive app UI ("Goodbye 'mobile-only' apps") and an adaptive app UI ("Hello adaptive apps")
    • Increase user reach and app discoverability in Play: Adaptive apps are better positioned to be ranked higher in Play, and featured in editorial articles across form factors, reaching a wider audience across Play search and homepages. Additionally, Google Play Store surfaces ratings and reviews across all form factors. If your app is not optimized, a potential user's first impression might be tainted by a 1-star review complaining about a stretched UI on a device they don't even own yet. Users are also more likely to engage with apps that provide a great experience across their devices.
    • Increased engagement on large screens: Users on large screen devices often have different interaction patterns. On large screens, users may engage for longer sessions, perform more complex tasks, and consume more content.
    • Concepts saw a 70% increase in user engagement on large screens after optimizing.
    • Usage across six major US media streaming apps was up to 3x higher for users on both tablet and phone than for phone-only users.

    • More accessible app experiences: According to the World Bank, 15% of the world’s population has some type of disability. People with disabilities depend on apps and services that support accessibility to communicate, learn, and work. Matching the user’s preferred orientation improves the accessibility of applications, helping to create an inclusive experience for all.

Today, most apps are building for smartphones only

Android form factors: a tablet, a desktop monitor, a laptop, a large-screen mobile device, a hand-held device, and an in-car app screen

“...looking at the number of users, the ROI does not justify the investment”.

That's a frequent pushback from product managers and decision-makers, and if you're just looking at top-line analytics comparing the number of tablet sessions to smartphone sessions, it might seem like a closed case.

While top-line analytics might show lower session numbers on tablets compared to smartphones, concluding that large screens aren't worth the effort based solely on current volume can be a trap, causing you to miss out on valuable engagement and future opportunities.

Let's take a deeper look into why:

      1. The user experience ‘chicken and egg’ loop: Is it possible that the low usage is a symptom rather than the root cause? Users are quick to abandon apps that feel clunky or broken. If your app on large screens is a stretched-out phone interface, it likely delivers a negative user experience. The lack of users may reflect the lack of a good experience, not necessarily a lack of potential users.

      2. Beyond user volume, look at user engagement: Don't just count users, analyze their worth. Users interact with apps on large screens differently. The large screen often leads to longer sessions and more immersive experiences. As mentioned above, usage data shows that engagement time increases significantly for users who interact with apps on both their phone and tablet, as compared to phone only users.

      3. Market evolution: The Android device ecosystem is continuing to evolve. With the rise of foldables, upcoming connected displays support in Android 16, and form factors like XR and Android Auto, adaptive design is now more critical than ever. Building for a specific screen size creates technical debt, and may slow your development velocity and compromise the product quality in the long run.

Okay, I am convinced. Where do I start?

A three-step workflow outlines how to optimize your Android app to be adaptive

For organizations ready to move forward, Android offers many resources and developer tools to optimize apps to be adaptive. See below for how to get started:

      1. Check how your app looks on large screens today: Begin by looking at your app’s current state on tablets, foldables (in different postures), Chromebooks, and environments like desktop windowing. Confirm that your app is available on these devices, and check whether you are unintentionally excluding users by requiring unnecessary device features.

      2. Address common UI issues: Assess what feels awkward in your app UI today. We have a lot of guidance available on how you can easily translate your mobile app to other screens.

          a. Check the Large screens design gallery for inspiration and to understand how your app UI can evolve across devices using proven solutions to common UI challenges.

          b. Start with quick wins. For example, prevent buttons from stretching to the full screen width, or switch to a vertical navigation bar on large screens to improve ergonomics (see the sketch after this list).

          c. Identify patterns where canonical layouts (e.g. list-detail) could solve any UI awkwardness you identified. Could a list-detail view improve your app's navigation? Would a supporting pane on the side make better use of the extra space than a bottom sheet?

      3. Optimize your app incrementally, screen by screen: It may be helpful to prioritize how you approach optimization because not everything needs to be perfectly adaptive on day one. Incrementally improve your app based on what matters most – it's not all or nothing.

          a. Start with the foundations. Check out the large screen app quality guidelines, which tier and prioritize the fixes that are most critical to users. Remove orientation restrictions to support portrait and landscape, ensure support for resizability (for when users are in split screen), and prevent major stretching of buttons, text fields, and images. These foundational fixes are critical, especially with API changes in Android 16 that will make these aspects even more important.

          b. Implement adaptive layout optimizations with a focus on core user journeys or screens first.

              i. Identify screens where optimizations (for example a two-pane layout) offer the biggest UX win

              ii. Then proceed to screens or parts of the app that are used less often on large screens

          c. Support input methods beyond touch, including keyboard, mouse, trackpad, and stylus input. With new form factors and connected displays support, this sets users up to interact with your UI seamlessly.

          d. Add differentiating hero user experiences like support for tabletop mode or dual-screen mode on foldables. This can happen on a per-use-case basis - for example, tabletop mode is great for watching videos, and dual screen mode is great for video calls.
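Here is the quick win from step 2b as a minimal Compose sketch; the 400.dp cap is an illustrative value, not an official breakpoint:

```kotlin
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.layout.widthIn
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun ContinueButton(onClick: () -> Unit) {
    Button(
        onClick = onClick,
        modifier = Modifier
            .widthIn(max = 400.dp) // cap the width on large windows
            .fillMaxWidth()        // still fill the window when it is narrow
    ) {
        Text("Continue")
    }
}
```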

While there's an upfront investment in adopting adaptive principles (using tools like Jetpack Compose and window size classes), the long-term payoff may be significant. By designing and building features once, and letting them adapt across screen sizes, the benefits outweigh the cost of creating multiple bespoke layouts. Check out the adaptive apps developer guidance for more.

Unlock your app's potential with adaptive app design

The message for my fellow product managers, decision-makers, and businesses is clear: adaptive design will uplevel your app for high-quality Android experiences in 2025 and beyond. An adaptive, responsive UI is the scalable way to support the many devices in Android without developing on a per-form factor basis. If you ignore the diverse device ecosystem of foldables, tablets, Chromebooks, and emerging form factors like XR and cars, your business is accepting hidden costs from negative user reviews, lower discovery in Play, increased technical debt, and missed opportunities for increased user engagement and user acquisition.

Maximize your apps' impact and unlock new user experiences. Learn more about building adaptive apps today.

Top 3 updates for building excellent, adaptive apps at Google I/O ‘25

Posted by Mozart Louis – Developer Relations Engineer

Google I/O 2025 brought exciting advancements to Android, equipping you with essential knowledge and powerful tools you need to build outstanding, user-friendly applications that stand out.

If you missed any of the key #GoogleIO25 updates, just saw the release of Android 16, or are ready to dive into building excellent adaptive apps, our playlist is for you. Learn how to craft engaging experiences with Live Updates in Android 16, capture video effortlessly with CameraX, process it efficiently using Media3's editing tools, and engage users across diverse platforms like XR, Android for Cars, Android TV, and Desktop.

Check out the Google I/O playlist for all the session details.

Here are three key announcements directly influencing how you can craft deeply engaging experiences and truly connect with your users:

#1: Build adaptively to unlock 500 million devices

In today's diverse device ecosystem, users expect their favorite applications to function seamlessly across various form factors, including phones, tablets, Chromebooks, automobiles, and emerging XR glasses and headsets. Our recommended approach for developing applications that excel on each of these surfaces is to create a single, adaptive application. This strategy avoids the need to rebuild the application for every screen size, shape, or input method, ensuring a consistent and high-quality user experience across all devices.

The talk emphasizes that you don't need to rebuild apps for each form factor. Instead, small, iterative changes can unlock an app's potential.

Here are some resources we encourage you to use in your apps:

New feature support in Jetpack Compose Adaptive Libraries

    • We’re continuing to make it as easy as possible to build adaptively with the Jetpack Compose Adaptive Libraries, with new features in 1.1 like pane expansion and predictive back. By using canonical layout patterns such as list-detail or supporting pane and integrating them into your app code, your application will automatically adjust and reflow when resized.

Navigation 3

    • The alpha release of the Navigation 3 library now supports displaying multiple panes. This eliminates the need to alter your navigation destination setup for separate list and detail views. Instead, you can adjust the setup to concurrently render multiple destinations when sufficient screen space is available.

Updates to Window Manager Library

    • androidx.window 1.5 introduces two new window size classes for expanded widths, facilitating better layout adaptation for large tablets and desktops. A width of 1600dp or more is now categorized as "extra large," while widths between 1200dp and 1600dp are classified as "large." These subdivisions offer more granularity for developers to optimize their applications for a wider range of window sizes.
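Putting the buckets together, the mapping looks roughly like the sketch below; the 600dp and 840dp values are the existing compact/medium/expanded boundaries from the window size class guidance:

```kotlin
// Map a window width in dp to the size-class buckets described above.
fun widthBucket(widthDp: Float): String = when {
    widthDp >= 1600f -> "extra-large" // new in androidx.window 1.5
    widthDp >= 1200f -> "large"       // new in androidx.window 1.5
    widthDp >= 840f  -> "expanded"
    widthDp >= 600f  -> "medium"
    else             -> "compact"
}
```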

Support all orientations and be resizable

Extend to Android XR

Upgrade your Wear OS apps to Material 3 Design

You should build a single, adaptive mobile app that brings the best experiences to all Android surfaces. By building adaptive apps, you meet users where they are today and in the future, enhancing user engagement and app discoverability. This approach represents a strategic business decision that optimizes an app’s long-term success.

#2: Enhance your app’s performance optimization

Get ready to take your app's performance to the next level! Google I/O 2025 brought an inside look at cutting-edge tools and techniques to boost user satisfaction, enhance technical performance metrics, and drive those all-important key performance indicators. Imagine an end-to-end workflow that streamlines performance optimization.

Redesigned UiAutomator API

    • To make benchmarking reliable and reproducible, there's the brand new UiAutomator API. Write robust test code and run it on your local devices or in Firebase Test Lab, ensuring consistent results every time.

Macrobenchmarks

    • Once your tests are in place, it's time to measure and understand. Macrobenchmarks give you the hard data, while App Startup Insights provide actionable recommendations for improvement. Plus, you can get a quick snapshot of your app's health with the App Performance Score via DAC. These tools combined give you a comprehensive view of your app's performance and where to focus your efforts.

R8: more than code shrinking and obfuscation

    • You might know R8 as a code shrinking tool, but it's capable of so much more! The talk dives into R8's capabilities using the "Androidify" sample app. You'll see how to apply R8, troubleshoot issues (like crashes!), and configure it for optimal performance. The talk also shows how library developers can include consumer keep rules so that their important code is preserved when used in an application.

#3: Build Richer Image and Video Experiences

In today's digital landscape, users increasingly expect seamless content creation capabilities within their apps. To meet this demand, developers require robust tools for building excellent camera and media experiences.

Media3Effects in CameraX Preview

    • At Google I/O, the session delves into practical strategies for capturing high-quality video using CameraX while simultaneously leveraging Media3 effects on the preview.

Google Low-Light Boost

    • Google Low Light Boost in Google Play services enables real-time dynamic camera brightness adjustment in low light, even without device support for Low Light Boost AE Mode.

New Camera & Media Samples!

Learn more about how CameraX & Media3 can accelerate your development of camera and media related features.

Learn how to build adaptive apps

Want to learn more about building excellent, adaptive apps? Watch this playlist to learn more about all the session details.

Android 16 is here

Posted by Matthew McCullough – VP of Product Management, Android Developer

Today we're releasing Android 16 and making it available on most supported Pixel devices. Look for new devices running Android 16 in the coming months.

This also marks the availability of the source code at the Android Open Source Project (AOSP). You can examine the source code for a deeper understanding of how Android works, and our focus on compatibility means that you can leverage your app development skills in Android Studio with Jetpack Compose to create applications that thrive across the entire ecosystem.

Major and minor SDK releases

With Android 16, we've added the concept of a minor SDK release to allow us to iterate our APIs more quickly, reflecting the rapid pace of the innovation Android is bringing to apps and devices.

Android 16 2025 SDK release timeline

We plan to have another release in Q4 of 2025, which will also include new developer APIs. Today's major release will be the only release in 2025 to include planned app-impacting behavior changes; in addition to new developer APIs, the Q4 minor release will pick up feature updates, optimizations, and bug fixes.

We'll continue to have quarterly Android releases. The Q3 update in between the API releases provides much of the new visual polish associated with Material 3 Expressive, and you can get the Q3 beta today on your supported Pixel device.

Camera and media APIs to empower creators

Android 16 enhances support for professional camera users, allowing for night mode scene detection, hybrid auto exposure, and precise color temperature adjustments. It's easier than ever to capture motion photos with new Intent actions, and we're continuing to improve UltraHDR images, with support for HEIC encoding and new parameters from the ISO 21496-1 draft standard. Support for the Advanced Professional Video (APV) codec improves Android's place in professional recording and post-production workflows, with perceptually lossless video quality that survives multiple decodings/re-encodings without severe visual quality degradation.

Also, Android's photo picker can now be embedded in your view hierarchy, and users will appreciate the ability to search cloud media.

More consistent, beautiful apps

Android 16 introduces changes to improve the consistency and visual appearance of apps, laying the foundation for the upcoming Material 3 Expressive changes. Apps targeting Android 16 can no longer opt out of going edge-to-edge, and the system ignores the elegantTextHeight attribute to ensure proper spacing in Arabic, Lao, Myanmar, Tamil, Gujarati, Kannada, Malayalam, Odia, Telugu, and Thai.

Adaptive Android apps

With Android apps now running on a variety of devices and more windowing modes on large screens, developers should build Android apps that adapt to any screen and window size, regardless of device orientation. For apps targeting Android 16 (API level 36), Android 16 includes changes to how the system manages orientation, resizability, and aspect ratio restrictions. On displays with smallest width >= 600dp, the restrictions no longer apply and apps will fill the entire display window. You should check your apps to ensure your existing UIs scale seamlessly, working well across portrait and landscape aspect ratios. We're providing frameworks, tools, and libraries to help.

Side-by-side comparison: a non-adaptive app UI ("Goodbye 'mobile-only' apps") and an adaptive app UI ("Hello adaptive apps")

You can test these overrides without changing your app's target SDK by using the app compatibility framework and enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag. Read more about changes to orientation and resizability APIs in Android 16.

Predictive back by default and more

Apps targeting Android 16 will have system animations for back-to-home, cross-task, and cross-activity by default. In addition, Android 16 extends predictive back navigation to three-button navigation, meaning that users long-pressing the back button will see a glimpse of the previous screen before navigating back.

To make it easier to get the back-to-home animation, Android 16 adds support for OnBackInvokedCallback with the new PRIORITY_SYSTEM_NAVIGATION_OBSERVER priority. Android 16 additionally adds finishAndRemoveTaskCallback and moveTaskToBackCallback for custom back stack behavior with predictive back.
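A minimal sketch of registering an observer with the new priority (the callback body is illustrative):

```kotlin
import android.window.OnBackInvokedCallback
import android.window.OnBackInvokedDispatcher

// Observe system back navigation without intercepting it (Android 16+).
fun observeBackNavigation(dispatcher: OnBackInvokedDispatcher) {
    val observer = OnBackInvokedCallback {
        // Runs when back is invoked; update analytics or UI state here.
    }
    dispatcher.registerOnBackInvokedCallback(
        OnBackInvokedDispatcher.PRIORITY_SYSTEM_NAVIGATION_OBSERVER,
        observer
    )
}
```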

Consistent progress notifications

Android 16 introduces Notification.ProgressStyle, which lets you create progress-centric notifications that can denote states and milestones in a user journey using points and segments. Key use cases include rideshare, delivery, and navigation. It's the basis for Live Updates, which will be fully realized in an upcoming Android 16 update.
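Based on the class named here, a segmented progress notification might be built roughly like the following sketch; the channel id, icon, and segment values are illustrative assumptions, so check the ProgressStyle reference for the exact API:

```kotlin
import android.app.Notification
import android.content.Context
import android.graphics.Color

fun buildDeliveryNotification(context: Context): Notification {
    // Two segments marking milestones in the journey; lengths and colors are illustrative.
    val style = Notification.ProgressStyle()
        .setProgressSegments(
            listOf(
                Notification.ProgressStyle.Segment(50).setColor(Color.BLUE),
                Notification.ProgressStyle.Segment(50).setColor(Color.LTGRAY)
            )
        )
        .setProgress(25) // current position along the segments

    return Notification.Builder(context, "delivery_channel")
        .setSmallIcon(android.R.drawable.stat_sys_download)
        .setContentTitle("Order on the way")
        .setStyle(style)
        .build()
}
```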

Progress notifications on the home screen (left) and the updated progress notification in the notification shade (right)

Custom AGSL graphical effects

Android 16 adds RuntimeColorFilter and RuntimeXfermode, allowing you to author complex effects like Threshold, Sepia, and Hue Saturation in AGSL and apply them to draw calls.
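As a hedged sketch, assuming RuntimeColorFilter takes AGSL source the way RuntimeShader does, a threshold effect could look like this:

```kotlin
import android.graphics.Paint
import android.graphics.RuntimeColorFilter

// AGSL for a threshold effect: bright pixels become white, dark pixels become black.
private const val THRESHOLD_AGSL = """
    half4 main(half4 color) {
        half luma = dot(color.rgb, half3(0.2126, 0.7152, 0.0722));
        return luma > 0.5 ? half4(1.0, 1.0, 1.0, color.a) : half4(0.0, 0.0, 0.0, color.a);
    }
"""

fun thresholdPaint(): Paint = Paint().apply {
    // Apply the filter to subsequent draw calls made with this Paint.
    colorFilter = RuntimeColorFilter(THRESHOLD_AGSL)
}
```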

Help to create better performing, more efficient apps and games

From APIs that help you understand app performance to platform changes designed to increase efficiency, Android 16 is focused on making sure your apps perform well. Android 16 introduces system-triggered profiling to ProfilingManager and, for better efficiency, ensures that at most one missed execution of scheduleAtFixedRate is immediately executed when the app returns to a valid lifecycle. It introduces hasArrSupport and getSuggestedFrameRate(int) to make it easier for your apps to take advantage of adaptive display refresh rates, and adds the getCpuHeadroom and getGpuHeadroom APIs, along with CpuHeadroomParams and GpuHeadroomParams in SystemHealthManager, to give games and resource-intensive apps estimates of available CPU and GPU resources on supported devices.

JobScheduler updates

JobScheduler.getPendingJobReasons in Android 16 returns multiple reasons why a job is pending, due to both explicit constraints you set and implicit constraints set by the system. The new JobScheduler.getPendingJobReasonsHistory returns the list of the most recent pending job reason changes, allowing you to better tune the way your app works in the background.
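For example, a minimal sketch of logging those reasons for a known job id:

```kotlin
import android.app.job.JobScheduler
import android.content.Context
import android.util.Log

fun logPendingReasons(context: Context, jobId: Int) {
    val scheduler = context.getSystemService(JobScheduler::class.java)
    // Android 16+: returns every reason the job is still pending, not just one.
    val reasons = scheduler.getPendingJobReasons(jobId)
    Log.d("Jobs", "Job $jobId pending because: ${reasons.joinToString()}")
}
```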

Android 16 is making adjustments to regular and expedited job runtime quotas based on which app standby bucket the app is in, whether the job starts while the app is in the top state, and whether the job executes while the app is running a foreground service.

To detect (and then reduce) abandoned jobs, apps should use the new STOP_REASON_TIMEOUT_ABANDONED job stop reason that the system assigns for abandoned jobs, instead of STOP_REASON_TIMEOUT.
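In a JobService, that check might look like this sketch:

```kotlin
import android.app.job.JobParameters
import android.app.job.JobService

class SyncJobService : JobService() {
    override fun onStartJob(params: JobParameters): Boolean = true // work continues asynchronously

    override fun onStopJob(params: JobParameters): Boolean {
        // Android 16+ flags jobs whose JobInfo was dropped without calling jobFinished().
        return if (params.stopReason == JobParameters.STOP_REASON_TIMEOUT_ABANDONED) {
            false // don't reschedule; fix the abandoned job instead
        } else {
            true // reschedule for ordinary stop reasons
        }
    }
}
```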

16KB page sizes

Android 15 introduced support for 16KB page sizes to improve the performance of app launches, system boot-ups, and camera starts, while reducing battery usage. Android 16 adds a 16 KB page size compatibility mode, which, combined with new Google Play technical requirements, brings Android closer to having devices shipping with this important change. You can validate if your app needs updating using the 16KB page size checks & APK Analyzer in the latest version of Android Studio.

ART internal changes

Android 16 includes the latest updates to the Android Runtime (ART) that improve performance and provide support for additional language features. These improvements are also available to over a billion devices running Android 12 (API level 31) and higher through Google Play system updates. Apps and libraries that rely on internal, non-SDK ART structures may not continue to work correctly with these changes.

Privacy and security

Android 16 continues our mission to improve security and ensure user privacy. It includes improved security against Intent redirection attacks, makes MediaStore.getVersion unique to each app, adds an API that allows apps to share Android Keystore keys, incorporates the latest version of the Privacy Sandbox on Android, introduces a new behavior during the companion device pairing flow to protect the user's location privacy, and allows a user to easily select from and limit access to app-owned shared media in the photo picker.

Local network permission testing

Android 16 allows your app to test the upcoming local network permission feature, which will require your app to be granted NEARBY_WIFI_DEVICES permission. This change will be enforced in a future Android major release.

An Android built for everyone

Android 16 adds features such as Auracast broadcast audio with compatible LE Audio hearing aids, along with accessibility changes:

    • extending TtsSpan with TYPE_DURATION
    • a new list-based API within AccessibilityNodeInfo
    • improved support for expandable elements using setExpandedState
    • RANGE_TYPE_INDETERMINATE for indeterminate ProgressBar widgets
    • AccessibilityNodeInfo getChecked and setChecked(int) methods that support a "partially checked" state
    • setSupplementalDescription, so you can provide text for a ViewGroup without overriding information from its children
    • setFieldRequired, so apps can tell an accessibility service that input to a form field is required

Outline text for maximum text contrast

Android 16 introduces outline text, which replaces high-contrast text and draws a larger contrasting area around text to greatly improve legibility, along with new AccessibilityManager APIs that let your apps check, or register a listener to detect, whether this mode is enabled.

Text with enhanced contrast before and after Android 16's new outline text accessibility feature

Get your apps, libraries, tools, and game engines ready!

If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your developers know if updates to your SDK are needed to fully support Android 16.

To test, install your production app, or a test app that uses your library or engine, onto a device or emulator running Android 16 via Google Play or other means. Work through all of your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are several changes to focus on that apply even if you aren't yet targeting Android 16:

Other changes that will be impactful once your app targets Android 16:

Get your app ready for the future:

    • Local network protection: Consider testing your app with the upcoming Local Network Protection feature. It will give users more control over which apps can access devices on their local network in a future Android major release.

Remember to thoroughly exercise libraries and SDKs that your app is using during your compatibility testing. You may need to update to current SDK versions or reach out to the developer for help if you encounter any issues.

Once you’ve published the Android 16-compatible version of your app, you can start the process to update your app's targetSdkVersion. Review the behavior changes that apply when your app targets Android 16 and use the compatibility framework to help quickly detect issues.

Get started with Android 16

Your Pixel device should get Android 16 shortly if you haven't already been on the Android Beta. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 4.1 and have not yet taken an Android 16 QPR1 beta, you can opt out of the program and you will then be offered the release version of Android 16 over the air.

For the best development experience with Android 16, we recommend that you use the latest Canary build of Android Studio Narwhal. Once you’re set up, here are some of the things you should do:

Thank you again to everyone who participated in our Android developer preview and beta program. We're looking forward to seeing how your apps take advantage of the updates in Android 16, and have plans to bring you updates in a fast-paced release cadence going forward.

For complete information on Android 16, please visit the Android 16 developer site.

Announcing Kotlin Multiplatform Shared Module Template

Posted by Ben Trengrove - Developer Relations Engineer, Matt Dyor - Product Manager

To empower Android developers, we’re excited to announce Android Studio’s new Kotlin Multiplatform (KMP) Shared Module Template. This template was specifically designed to allow developers to use a single codebase and apply business logic across platforms. More specifically, developers will be able to add shared modules to existing Android apps and share the business logic across their Android and iOS applications.

This makes it easier for Android developers to craft, maintain, and most importantly, own the business logic. The KMP Shared Module Template is available within Android Studio when you create a new module within a project.

Shared Module Templates are found under the New Module tab

A single code base for business logic

Most developers have grown accustomed to maintaining different code bases, platform to platform. In the past, whenever there’s an update to the business logic, it must be carefully updated in each codebase. But with the KMP Shared Module Template:

    • Developers can write once and publish the business logic to wherever they need it.
    • Engineering teams can do more faster.
    • User experiences are more consistent across the entire audience, regardless of platform or form factor.
    • Releases are better coordinated and launched with fewer errors.
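For a flavor of what writing business logic once looks like, here is a tiny illustrative sketch of a shared module's commonMain source set, with an expect/actual pair supplying the one platform-specific piece:

```kotlin
// commonMain: business logic shared by the Android and iOS apps.
expect fun platformName(): String

class GreetingUseCase {
    fun greet(): String = "Hello from ${platformName()}!"
}

// androidMain: the Android-specific implementation of the expected declaration.
actual fun platformName(): String = "Android"
```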

Customers and developer teams who adopt KMP Shared Module Templates should expect to achieve greater ROI from mobile teams who can turn their attention towards delighting their users more and worrying about inconsistent code less.

KMP enthusiasm

The Android developer community remains very excited about KMP, especially after Google I/O 2024 where Google announced official support for shared logic across Android and iOS. We have seen continued momentum and enthusiasm from the community. For example, there are now over 1,500 KMP libraries listed on JetBrains' klibs.io.

Our customers are excited because KMP has made Android developers more productive. Consistently, Android developers have said that they want solutions that allow them to share code more easily and they want tools which boost productivity. This is why we recommend KMP; KMP simultaneously delivers a great experience for Android users while boosting ROI for the app makers. The KMP Shared Module Template is the latest step towards a developer ecosystem where user experience is consistent and applications are updated seamlessly.

Large scale KMP adoptions

This KMP Shared Module Template is new, but KMP more broadly is a maturing technology with several large-scale migrations underway. In fact, KMP has matured enough to support mission-critical applications at Google. Google Docs, for example, is now running KMP in production on iOS with runtime performance on par with or better than before. Beyond Google, Stone’s 130 mobile developers are sharing over 50% of their code, allowing existing mobile teams to ship features approximately 40% faster to both Android and iOS.

KMP was designed for Android development

As always, we've designed the Shared Module Template with the needs of Android developer teams in mind. Making the KMP Shared Module Template part of the native Android Studio experience allows developers to efficiently add a shared module to an existing Android application and immediately start building shared business logic that leverages several KMP-ready Jetpack libraries including Room, SQLite, and DataStore to name just a few.

Come check it out at KotlinConf

Releasing Android Studio’s KMP Shared Module Template marks a significant step toward empowering Android development teams to innovate faster, to efficiently manage business logic, and to build high-quality applications with greater confidence. It means that Android developers can be responsible for the code that drives the business logic for every app across Android and iOS. We’re excited to bring Shared Module Template to KotlinConf in Copenhagen, May 21 - 23.

KotlinConf 2025, Copenhagen, Denmark: workshops May 21, conference May 22-23

Get started with KMP Shared Module Template

To get started, you'll need the latest edition of Android Studio. In your Android project, the Shared Module Template is available when you create a new module: click “File” > “New” > “New Module”, then select “Kotlin Multiplatform Shared Module”, and you are ready to add a KMP shared module to your Android app.

We appreciate any feedback on things you like or features you would like to see. If you find a bug, please report the issue. Remember to also follow us on X, LinkedIn, Blog, or YouTube for more Android development updates!

Peacock built adaptively on Android to deliver great experiences across screens

Posted by Sa-ryong Kang and Miguel Montemayor - Developer Relations Engineers

Peacock is NBCUniversal’s streaming service app available in the US, offering culture-defining entertainment including live sports, exclusive original content, TV shows, and blockbuster movies. The app continues to evolve, becoming more than just a platform to watch content, but a hub of entertainment.

Today’s users consume entertainment on an increasingly wide array of device sizes and types, and in particular are moving toward mobile devices. Peacock has adopted Jetpack Compose to help with its journey in adapting to more screens and meeting users where they are.

Disclaimer: Peacock is available in the US only. This video will only be viewable to US viewers.

Adapting to more flexible form factors

The Peacock development team is focused on bringing the best experience to users, no matter what device they’re using or when they want to consume content. With users trending toward watching more on mobile devices and large screens like foldables, the Peacock app needs to adapt to different screen sizes. As more devices are introduced, the team needed to explore new solutions that make the most of each unique display permutation.

The goal was to have the Peacock app adapt to these new displays while continually offering high-quality entertainment without interruptions, like the stream reloading or visual errors. Thinking ahead, the team also wanted to build a solution ready for Android XR, as the entertainment landscape shifts toward more immersive experiences.

Diego Valente, Head of Mobile, Peacock & Global Streaming: "Thinking adaptively isn't just about supporting tablets or large screens - it's about future-proofing your app. Investing in adaptability helps you meet users' expectations of seamless experiences across all their devices and sets you up for what's next."

Building a future-proof experience with Jetpack Compose

In order to build a scalable solution that would help the Peacock app continue to evolve, the app was migrated to Jetpack Compose, Android’s toolkit for building scalable UI. One of the essential tools they used was the WindowSizeClass API, which helps developers create and test UI layouts for different size ranges. This API then allows the app to seamlessly switch between pre-set layouts as it reaches established viewport breakpoints for different window sizes.

The API was used in conjunction with Kotlin Coroutines and Flows to keep the UI state responsive as the window size changed. To test their work and fine tune edge case devices, Peacock used the Android Studio emulator to simulate a wide range of Android-based devices.
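A minimal sketch of that pattern, collecting window layout changes as a Flow so UI state can react (using androidx.window's WindowInfoTracker; the log call stands in for app-specific state updates):

```kotlin
import android.os.Bundle
import android.util.Log
import androidx.activity.ComponentActivity
import androidx.lifecycle.Lifecycle
import androidx.lifecycle.lifecycleScope
import androidx.lifecycle.repeatOnLifecycle
import androidx.window.layout.WindowInfoTracker
import kotlinx.coroutines.launch

class PlayerActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        lifecycleScope.launch {
            repeatOnLifecycle(Lifecycle.State.STARTED) {
                // Emits whenever the window layout changes, e.g. a fold posture or a resize.
                WindowInfoTracker.getOrCreate(this@PlayerActivity)
                    .windowLayoutInfo(this@PlayerActivity)
                    .collect { layoutInfo ->
                        // Drive UI state from here instead of caching sizes up front.
                        Log.d("Player", "Window layout changed: $layoutInfo")
                    }
            }
        }
    }
}
```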

Jetpack Compose allowed the team to build adaptively, so now the Peacock app responds to a wide variety of screens while offering a seamless experience to Android users. “The app feels more native, more fluid, and more intuitive across all form factors,” said Diego Valente, Head of Mobile, Peacock and Global Streaming. “That means users can start watching on a smaller screen and continue instantly on a larger one when they unfold the device—no reloads, no friction. It just works.”

Preparing for immersive entertainment experiences

In building adaptive apps on Android, John Jelley, Senior Vice President, Product & UX, Peacock and Global Streaming, says Peacock has also laid the groundwork to quickly adapt to the Android XR platform: “Android XR builds on the same large screen principles, our investment here naturally extends to those emerging experiences with less developmental work.”

The team is excited about the prospect of features unlocked by Android XR, like Multiview for sports and TV, which enables users to watch multiple games or camera angles at once. By tailoring spatial windows to the user’s environment, the app could offer new ways for users to interact with contextual metadata like sports stats or actor information—all without ever interrupting their experience.

Build adaptive apps

Learn how to unlock your app's full potential on phones, tablets, foldables, and beyond.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


16 things to know for Android developers at Google I/O 2025

Posted by Matthew McCullough – VP of Product Management, Android Developer

Today at Google I/O, we announced the many ways we’re helping you build excellent, adaptive experiences, and helping you stay more productive through updates to our tooling that put AI at your fingertips and throughout your development lifecycle. Here’s a recap of 16 of our favorite announcements for Android developers; you can also see what was announced last week in The Android Show: I/O Edition. And stay tuned over the next two days as we dive into all of the topics in more detail!

Building AI into your Apps

1: Building intelligent apps with Generative AI

Generative AI enhances apps' experiences by making them intelligent, personalized, and agentic. This year, we announced new ML Kit GenAI APIs using Gemini Nano for common on-device tasks like summarization, proofreading, rewriting, and image description. We also provided capabilities for developers to harness more powerful models such as Gemini Pro, Gemini Flash, and Imagen via Firebase AI Logic for more complex use cases like image generation and processing extensive data across modalities, including bringing AI to life in Android XR. And a new AI sample app, Androidify, showcases how these APIs can transform your selfies into unique Android robots! To start building intelligent experiences with these new capabilities, explore the developer documentation and sample apps, and watch the overview session to choose the right solution for your app.

New experiences across devices

2: One app, every screen: think adaptive and unlock 500 million screens

Mobile Android apps form the foundation across phones, foldables, tablets, and ChromeOS, and this year we’re helping you bring them to cars and XR and expanding usage with desktop windowing and connected displays. This expansion means tapping into an ecosystem of 500 million devices – a significant opportunity to engage more users when you think adaptive, building a single mobile app that works across form factors. Resources, including the Compose Layouts library and Jetpack Navigation updates, help make building these dynamic experiences easier than before. You can see how Peacock, NBCUniversal’s streaming service (available in the US), is building adaptively to meet users where they are.

Disclaimer: Peacock is available in the US only. This video will only be viewable to US viewers.

3: Material 3 Expressive: design for intuition and emotion

The new Material 3 Expressive update provides tools to enhance your product's appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for users. Check out the I/O talk to learn more about expressive design and how it inspires emotion, clearly guides users toward their goals, and offers a flexible and personalized experience.

Material 3 Expressive demo

4: Smarter widgets, engaging live updates

Measure the return on investment of your widgets (available soon) and easily create personalized widget previews with Glance 1.2. Promoted Live Updates notify users of important ongoing notifications and come with a new Progress Style standardized template.


5: Enhanced Camera & Media: low light boost and battery savings

This year's I/O introduces several camera and media enhancements. These include a software low light boost for improved photography in dim lighting and native PCM offload, allowing the DSP to handle more audio playback processing, thus conserving user battery. Explore our detailed sessions on built-in effects within CameraX and Media3 for further information.

6: Build next-gen app experiences for Cars

We're launching expanded opportunities for developers to build in-car experiences, including new Gemini integrations, support for more app categories like Games and Video, and enhanced capabilities for media and communication apps via the Car App Library and new APIs. Alongside updated car app quality tiers and simplified distribution, we'll soon be providing improved testing tools like Android Automotive OS on Pixel Tablet and Firebase Test Lab access to help you bring your innovative apps to cars. Learn more from our technical session and blog post on new in-car app experiences.

7: Build for Android XR's expanding ecosystem with Developer Preview 2 of the SDK

We announced Android XR in December, and today at Google I/O we shared a number of updates coming to the platform, including Developer Preview 2 of the Android XR SDK, plus an expanding ecosystem of devices: in addition to the first Android XR headset, Samsung’s Project Moohan, you’ll also see more devices, including a new portable Android XR device from our partners at XREAL. There’s lots more to cover for Android XR: watch the Compose and AI on Android XR session and the Building differentiated apps for Android XR with 3D content session, and learn more about building for Android XR.

XREAL’s Project Aura

8: Express yourself on Wear OS: meet Material Expressive on Wear OS 6

This year we are launching Wear OS 6: the most powerful and expressive version of Wear OS. Wear OS 6 features Material 3 Expressive, a new UI design with personalized visuals and motion for user creativity, coming to Wear, Android, and Google apps later this year. Developers gain access to Material 3 Expressive on Wear OS through new Jetpack libraries: Wear Compose Material 3, which provides components for apps, and Wear ProtoLayout Material 3, which provides components and layouts for tiles. Get started with the Material 3 libraries and other updates on Wear.

Some examples of Material 3 Expressive on Wear OS experiences

9: Engage users on Google TV with excellent TV apps

You can leverage more resources within Compose's core and Material libraries with the stable release of Compose for TV, empowering you to build excellent adaptive UIs across your apps. We're also thrilled to share exciting platform updates and developer tools designed to boost app engagement, including bringing Gemini capabilities to TV in the fall, opening enrollment for our Video Discovery API, and more.

Developer productivity

10: Build beautiful apps faster with Jetpack Compose

Compose is our big bet for UI development. The latest stable BOM release provides the features, performance, stability, and libraries that you need to build beautiful adaptive apps faster, so you can focus on what makes your app valuable to users.

Compose adaptive layouts updates in the Google Play app

11: Kotlin Multiplatform: new Shared Template lets you build across platforms, easily

Kotlin Multiplatform (KMP) enables teams to reach new audiences across Android and iOS with less development time. We’ve released a new Android Studio KMP shared module template, updated Jetpack libraries and new codelabs (Getting started with Kotlin Multiplatform and Migrating your Room database to KMP) to help developers who are looking to get started with KMP. Shared module templates make it easier for developers to craft, maintain, and own the business logic. Read more on what's new in Android's Kotlin Multiplatform.

12: Gemini in Android Studio: AI Agents to help you work

Gemini in Android Studio is the AI-powered coding companion that makes Android developers more productive at every stage of the dev lifecycle. In March, we introduced Image to Code to bridge the gap between UX teams and software engineers by intelligently converting design mockups into working Compose UI code. And today, we previewed new agentic AI experiences, Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier to build and test code. You can read more about these updates in What’s new in Android development tools.

13: Android Studio: smarter with Gemini

In this latest release, we're empowering devs with AI-driven tools like Gemini in Android Studio, streamlining UI creation, making testing easier, and ensuring apps are future-proofed in our ever-evolving Android ecosystem. These innovations accelerate development cycles, improve app quality, and help you stay ahead in a dynamic mobile landscape. To take advantage, upgrade to the latest Studio release. You can read more about these innovations in What’s new in Android development tools.

moving image of Gemini in Android Studio Agentic Experiences including Journeys and Version Upgrade

And the latest on driving business growth

14: What’s new in Google Play

Get ready for exciting updates from Play designed to boost your discovery, engagement and revenue! Learn how we’re continuing to become a content-rich destination with enhanced personalization and fresh ways to showcase your apps and content. Plus, explore powerful new subscription features designed to streamline checkout and reduce churn. Read I/O 2025: What's new in Google Play to learn more.

a moving image of three mobile devices displaying how content is displayed on the Play Store

15: Start migrating to Play Games Services v2 today

Play Games Services (PGS) connects over 2 billion gamer profiles on Play, powering cross-device gameplay, personalized gaming content and rewards for your players throughout the gaming journey. We are moving PGS v1 features to v2 with more advanced features and an easier integration path. Learn more about the migration timeline and new features.

16: And of course, Android 16

We unpacked some of the latest features coming to users in Android 16, which we’ve been previewing with you for the last few months. If you haven’t already, make sure to test your apps with the latest Beta of Android 16. Android 16 includes Live Updates, professional media and camera features, desktop windowing and connected displays, major accessibility enhancements and much more.

Check out all of the Android and Play content at Google I/O

This was just a preview of some of the cool updates for Android developers at Google I/O, but stay tuned to Google I/O over the next two days as we dive into a range of Android developer topics in more detail. You can check out the What’s New in Android and the full Android track of sessions, and whether you’re joining in person or around the world, we can’t wait to engage with you!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


Android’s Kotlin Multiplatform announcements at Google I/O and KotlinConf 25

Posted by Ben Trengrove - Developer Relations Engineer, Matt Dyor - Product Manager

Google I/O and KotlinConf 2025 bring a series of announcements on Android’s Kotlin and Kotlin Multiplatform efforts. Here’s what to watch out for:

Announcements from Google I/O 2025

Jetpack libraries

Our focus for Jetpack libraries and KMP is on sharing business logic across Android and iOS, but we have begun experimenting with web/WASM support.

We are adding KMP support to Jetpack libraries. Last year we started with Room, DataStore and Collection, which are now available in a stable release, and recently we have added ViewModel, SavedState and Paging. The levels of support that our Jetpack libraries guarantee for each platform have been categorized into three tiers, with the top tier being for Android, iOS and JVM.
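As a sketch of what this unlocks, a Room database can now be declared once in common code and shared across Android and iOS; this minimal example omits the platform-specific database builder and constructor wiring that KMP targets also need:

// shared/src/commonMain/kotlin — a minimal shared Room definition
import androidx.room.Dao
import androidx.room.Database
import androidx.room.Entity
import androidx.room.Insert
import androidx.room.PrimaryKey
import androidx.room.Query
import androidx.room.RoomDatabase

@Entity
data class Note(@PrimaryKey(autoGenerate = true) val id: Long = 0, val text: String)

@Dao
interface NoteDao {
   @Insert
   suspend fun insert(note: Note)

   @Query("SELECT * FROM Note")
   suspend fun getAll(): List<Note>
}

@Database(entities = [Note::class], version = 1)
abstract class AppDatabase : RoomDatabase() {
   abstract fun noteDao(): NoteDao
}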

Tool improvements

We're developing new tools to help easily start using KMP in your app. With the KMP new module template in Android Studio Meerkat, you can add a new module to an existing app and share code to iOS and other supported KMP platforms.
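The generated module is a regular KMP module; here’s a minimal sketch (names hypothetical) of the pattern it sets you up for, with business logic in common code and expect/actual declarations for the rare platform-specific pieces:

// shared/src/commonMain/kotlin — business logic written once
class GreetingRepository {
   fun greeting(): String = "Hello from ${platformName()}!"
}

// Declared in common code, implemented once per platform
expect fun platformName(): String

// shared/src/androidMain/kotlin
actual fun platformName(): String = "Android"

// shared/src/iosMain/kotlin
actual fun platformName(): String = "iOS"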

In addition to KMP enhancements, Android Studio now supports Kotlin K2 mode for Android specific features requiring language support such as Live Edit, Compose Preview and many more.

How Google is using KMP

Last year, Google Workspace began experimenting with KMP, and this is now running in production in the Google Docs app on iOS. The app’s runtime performance is on par with or better than before¹.

It’s been helpful to have an app at this scale test KMP out, because we’re able to identify and fix issues that benefit the whole KMP developer community.

For example, we've upgraded the Kotlin Native compiler to LLVM 16 and contributed a more efficient garbage collector and string implementation. We're also bringing the static analysis power of Android Lint to Kotlin targets and ensuring a unified Gradle DSL for both AGP and KGP to improve the plugin management experience.

New guidance

We're providing comprehensive guidance in the form of two new codelabs: Getting started with Kotlin Multiplatform and Migrating your Room database to KMP, to help you get from standalone Android and iOS apps to shared business logic.

Kotlin Improvements

Kotlin Symbol Processing (KSP2) is now stable, bringing better support for new Kotlin language features and better performance. It is easier to integrate with build systems, is thread-safe, and has better support for debugging annotation processors. In contrast to KSP1, KSP2 has much better compatibility across different Kotlin versions. The rewritten command line interface is also significantly easier to use, as it is now a standalone program instead of a compiler plugin.
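The processor API itself is unchanged, so existing processors benefit without rewrites. For orientation, here’s a minimal sketch of a KSP processor (the @AutoLog annotation and names are hypothetical); the provider is registered through a META-INF services file as usual:

import com.google.devtools.ksp.processing.Resolver
import com.google.devtools.ksp.processing.SymbolProcessor
import com.google.devtools.ksp.processing.SymbolProcessorEnvironment
import com.google.devtools.ksp.processing.SymbolProcessorProvider
import com.google.devtools.ksp.symbol.KSAnnotated
import com.google.devtools.ksp.symbol.KSClassDeclaration

// Logs every class annotated with a hypothetical @AutoLog annotation
class AutoLogProcessor(private val env: SymbolProcessorEnvironment) : SymbolProcessor {
   override fun process(resolver: Resolver): List<KSAnnotated> {
       resolver.getSymbolsWithAnnotation("com.example.AutoLog")
           .filterIsInstance<KSClassDeclaration>()
           .forEach { env.logger.info("Found ${it.qualifiedName?.asString()}") }
       return emptyList() // nothing deferred to a later round
   }
}

class AutoLogProcessorProvider : SymbolProcessorProvider {
   override fun create(environment: SymbolProcessorEnvironment): SymbolProcessor =
       AutoLogProcessor(environment)
}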

KotlinConf 2025

Google team members are presenting a number of talks at KotlinConf spanning multiple topics:

Talks

    • Deploying KMP at Google Workspace by Jason Parachoniak, Troels Lund, and Johan Bay from the Workspace team discusses the challenges and solutions, including bugs and performance optimizations, encountered when launching Kotlin Multiplatform at Google Workspace, offering comparisons to Objective-C and a Q&A. (Technical Session)

    • The Life and Death of a Kotlin/Native Object by Troels Lund offers a high-level explanation of the Kotlin/Native runtime's inner workings concerning object instantiation, memory management, and disposal. (Technical Session)

    • APIs: How Hard Can They Be? presented by Aurimas Liutikas and Alan Viverette from the Jetpack team delves into the lifecycle of API design, review processes, and evolution within AndroidX libraries, particularly considering KMP and related tools. (Technical Session)

    • Project Sparkles: How Compose for Desktop is changing Android Studio and IntelliJ with Chris Sinco and Sebastiano Poggi from the Android Studio team introduces the initiative ('Project Sparkles') aiming to modernize Android Studio and IntelliJ UIs using Compose for Desktop, covering goals, examples, and collaborations. (Technical Session)

    • JSpecify: Java Nullness Annotations and Kotlin presented by David Baker explains the significance and workings of JSpecify's standard Java nullness annotations for enhancing Kotlin's interoperability with Java libraries. (Lightning Session)

    • Lessons learned decoupling Architecture Components from platform specific code features Jeremy Woods and Marcello Galhardo from the Jetpack team sharing insights from the Android team on decoupling core components like SavedState and System Back from platform specifics to create common APIs. (Technical Session)

    • KotlinConf’s Closing Panel, a regular staple of the conference, returns, featuring Jeffrey van Gogh as Google’s representative on the panel. (Panel)

Live Workshops

If you are at KotlinConf in person, we will have guided live workshops based on our new codelabs from above, including:

    • The codelab Migrating Room to Room KMP, led by Matt Dyor, Dustin Lam, and Tomáš Mlynarič, which demonstrates the process of migrating an existing Room database implementation to Room KMP within a shared module.

We love engaging with the Kotlin community. If you are attending KotlinConf, we hope you get a chance to check out our booth, with opportunities to chat with our engineers, get your questions answered, and learn more about how you can leverage Kotlin and KMP.

Learn more about Kotlin Multiplatform

To learn more about KMP and start sharing your business logic across platforms, check out our documentation and the sample.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


¹ Google Internal Data, March 2025

Androidify: Building delightful UIs with Compose

Posted by Rebecca Franks - Developer Relations Engineer

Androidify is a new sample app we built using the latest best practices for mobile apps. Previously, we covered all the different features of the app, from Gemini integration and CameraX functionality to adaptive layouts. In this post, we dive into the Jetpack Compose usage throughout the app, building upon our base knowledge of Compose to add delightful and expressive touches along the way!

Material 3 Expressive

Material 3 Expressive is an expansion of the Material 3 design system. It’s a set of new features, updated components, and design tactics for creating emotionally impactful UX.


It’s been released as part of the alpha version of the Material 3 artifact (androidx.compose.material3:material3:1.4.0-alpha10) and contains a wide range of new components you can use within your apps to build more personalized and delightful experiences. Learn more about Material 3 Expressive's component and theme updates for more engaging and user-friendly products.
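In a Kotlin Gradle build, that’s a single dependency on the alpha artifact called out above:

// build.gradle.kts — the Material 3 alpha artifact mentioned above
dependencies {
   implementation("androidx.compose.material3:material3:1.4.0-alpha10")
}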

Material Expressive Component updates

In addition to the new component updates, Material 3 Expressive introduces a new motion physics system that's encompassed in the Material theme.

In Androidify, we’ve utilized Material 3 Expressive in a few different ways across the app. For example, we’ve explicitly opted in to the new MaterialExpressiveTheme and chosen MotionScheme.expressive() (this is the default when using expressive) to add a bit of playfulness to the app:

@Composable
fun AndroidifyTheme(
   content: @Composable () -> Unit,
) {
   val colorScheme = LightColorScheme


   MaterialExpressiveTheme(
       colorScheme = colorScheme,
       typography = Typography,
       shapes = shapes,
       motionScheme = MotionScheme.expressive(),
       content = {
           SharedTransitionLayout {
               CompositionLocalProvider(LocalSharedTransitionScope provides this) {
                   content()
               }
           }
       },
   )
}

Some of the new componentry is used throughout the app, including the HorizontalFloatingToolbar for the Prompt type selection:

moving example of the HorizontalFloatingToolbar used for Prompt type selection

The app also uses MaterialShapes in various locations, which are a preset list of shapes that allow for easy morphing between each other. For example, check out the cute cookie shape for the camera capture button:

Camera button with a MaterialShapes.Cookie9Sided shape
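Echoing the capture-button code shown below, here’s a minimal sketch of clipping any composable to one of the preset shapes (exact import locations may shift while the API is in alpha):

import androidx.compose.foundation.background
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.size
import androidx.compose.material3.MaterialShapes
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.toShape
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.clip
import androidx.compose.ui.unit.dp

@Composable
fun CookieBadge() {
   Box(
       Modifier
           .size(72.dp)
           // Convert the RoundedPolygon preset into a Compose Shape for clipping
           .clip(MaterialShapes.Cookie9Sided.toShape())
           .background(MaterialTheme.colorScheme.primary),
   )
}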

Animations

Wherever possible, the app leverages the Material 3 Expressive MotionScheme to obtain a themed motion token, creating a consistent motion feel throughout the app. For example, the scale animation on the camera button press is powered by defaultSpatialSpec(), a specification used for animations that move something across the screen (such as x/y position, rotation, or scale):

val interactionSource = remember { MutableInteractionSource() }
val animationSpec = MaterialTheme.motionScheme.defaultSpatialSpec<Float>()
Spacer(
   modifier
       .indication(interactionSource, ScaleIndicationNodeFactory(animationSpec))
       .clip(MaterialShapes.Cookie9Sided.toShape())
       .size(size)
       .drawWithCache {
           //.. etc
       },
)

Camera button scale interaction

Shared element animations

The app uses shared element transitions between different screen states. Last year, we showcased how you can create shared elements in Jetpack Compose, and we’ve extended this in the Androidify sample to create a fun example. It combines the new Material 3 Expressive MaterialShapes, and performs a transition with a morphing shape animation:

moving example of expressive button shapes in slow motion

To do this, we created a custom Modifier that takes in the target and resting shapes for the sharedBounds transition:

@Composable
fun Modifier.sharedBoundsRevealWithShapeMorph(
   sharedContentState: SharedTransitionScope.SharedContentState,
   sharedTransitionScope: SharedTransitionScope = LocalSharedTransitionScope.current,
   animatedVisibilityScope: AnimatedVisibilityScope = LocalNavAnimatedContentScope.current,
   boundsTransform: BoundsTransform = MaterialTheme.motionScheme.sharedElementTransitionSpec,
   resizeMode: SharedTransitionScope.ResizeMode = SharedTransitionScope.ResizeMode.RemeasureToBounds,
   restingShape: RoundedPolygon = RoundedPolygon.rectangle().normalized(),
   targetShape: RoundedPolygon = RoundedPolygon.circle().normalized(),
): Modifier

Then, we apply a custom OverlayClip to provide the morphing shape, by tying into the AnimatedVisibilityScope provided by the LocalNavAnimatedContentScope:

val animatedProgress =
   animatedVisibilityScope.transition.animateFloat(targetValueByState = targetValueByState)


val morph = remember {
   Morph(restingShape, targetShape)
}
val morphClip = MorphOverlayClip(morph, { animatedProgress.value })


return this@sharedBoundsRevealWithShapeMorph
   .sharedBounds(
       sharedContentState = sharedContentState,
       animatedVisibilityScope = animatedVisibilityScope,
       boundsTransform = boundsTransform,
       resizeMode = resizeMode,
       clipInOverlayDuringTransition = morphClip,
       renderInOverlayDuringTransition = renderInOverlayDuringTransition,
   )

View the full code snippet for this Modifier on GitHub.

Autosize text

With the latest release of Jetpack Compose 1.8, we added the ability to create text composables that automatically adjust the font size to fit the container’s available size with the new autoSize parameter:

BasicText(
   text,
   style = MaterialTheme.typography.titleLarge,
   autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
)

This is used front and center for the “Customize your own Android Bot” text:

Text reads Customize your own Android Bot with an inline moving image
“Customize your own Android Bot” text with inline GIF

This text composable is interesting because it needed to have the fun dancing Android bot in the middle of the text. To do this, we use InlineContent, which allows us to append a composable in the middle of the text composable itself:

@Composable
private fun DancingBotHeadlineText(modifier: Modifier = Modifier) {
   Box(modifier = modifier) {
       val animatedBot = "animatedBot"
       val text = buildAnnotatedString {
           append(stringResource(R.string.customize))
           // Attach "animatedBot" annotation on the placeholder
           appendInlineContent(animatedBot)
           append(stringResource(R.string.android_bot))
       }
       var placeHolderSize by remember {
           mutableStateOf(220.sp)
       }
       val inlineContent = mapOf(
           Pair(
               animatedBot,
               InlineTextContent(
                   Placeholder(
                       width = placeHolderSize,
                       height = placeHolderSize,
                       placeholderVerticalAlign = PlaceholderVerticalAlign.TextCenter,
                   ),
               ) {
                   DancingBot(
                       modifier = Modifier
                           .padding(top = 32.dp)
                           .fillMaxSize(),
                   )
               },
           ),
       )
       BasicText(
           text,
           modifier = Modifier
               .align(Alignment.Center)
               .padding(bottom = 64.dp, start = 16.dp, end = 16.dp),
           style = MaterialTheme.typography.titleLarge,
           autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
           maxLines = 6,
           onTextLayout = { result ->
               placeHolderSize = result.layoutInput.style.fontSize * 3.5f
           },
           inlineContent = inlineContent,
       )
   }
}

Composable visibility with onLayoutRectChanged

With Compose 1.8, a new modifier, Modifier.onLayoutRectChanged, was added. This modifier is a more performant version of onGloballyPositioned, and includes features such as debouncing and throttling to make it performant inside lazy layouts.

In Androidify, we’ve used this modifier for the color splash animation. It determines the position where the transition should start from, as we attach it to the “Let’s Go” button:

var buttonBounds by remember {
   mutableStateOf<RelativeLayoutBounds?>(null)
}
var showColorSplash by remember {
   mutableStateOf(false)
}
Box(modifier = Modifier.fillMaxSize()) {
   PrimaryButton(
       buttonText = "Let's Go",
       modifier = Modifier
           .align(Alignment.BottomCenter)
           .onLayoutRectChanged(
               callback = { bounds ->
                   buttonBounds = bounds
               },
           ),
       onClick = {
           showColorSplash = true
       },
   )
}

We use these bounds as an indication of where to start the color splash animation from.

moving image of a blue color splash transition between Androidify demo screens

Learn more delightful details

From fun marquee animations on the results screen, to animated gradient buttons for the AI-powered actions, to the path drawing animation for the loading screen, this app has many delightful touches for you to experience and learn from.

animated marquee example

animated gradient button for AI powered actions example

animated loading screen example

Check out the full codebase at github.com/android/androidify and learn more about the latest in Compose from using Material 3 Expressive, the new modifiers, auto-sizing text and of course a couple of delightful interactions!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

What’s new in Watch Faces

Posted by Garan Jenkin – Developer Relations Engineer

Wear OS has a thriving watch face ecosystem featuring a variety of designs that also aims to minimize battery impact. Developers have embraced the simplicity of creating watch faces using Watch Face Format – in the last year, the number of published watch faces using Watch Face Format has grown by over 180%*.

Today, we’re continuing our investment and announcing version 4 of the Watch Face Format, available as part of Wear OS 6. These updates allow developers to express even greater levels of creativity through the new features we’ve added. And we’re supporting marketplaces, which gives flexibility and control to developers and more choice for users.

In this blog post we’ll cover the key new features; check out the documentation for more details of the changes introduced in recent versions.

Supporting marketplaces with Watch Face Push

We’re also announcing a completely new API, the Watch Face Push API, aimed at developers who want to create their own watch face marketplaces.

Watch Face Push, available on devices running Wear OS 6 and above, works exclusively with watch faces built using the Watch Face Format.

We’ve partnered with well-known watch face developers – including Facer, TIMEFLIK, WatchMaker, Pujie, and Recreative – in designing this new API. We’re excited that all of these developers will be bringing their unique watch face experiences to Wear OS 6 using Watch Face Push.

Three mobile devices representing watch face marketplace apps for watches running Wear OS 6
From left to right, Facer, Recreative and TIMEFLIK watch faces have been developing marketplace apps to work with watches running Wear OS 6.

Watch faces managed and deployed using Watch Face Push are all written using Watch Face Format. Developers publish these watch faces in the same way as publishing through Google Play, though there are some additional checks the developer must make which are described in the Watch Face Push guidance.

A flow diagram demonstrating the flow of information from Cloud-based storage to the user's phone where the app is installed, then transferred to be installed on a wearable device using the Wear OS App via the Watch Face Push API

The Watch Face Push API covers only the watch part of this typical marketplace system diagram - as the app developer, you have control and responsibility for the phone app and cloud components, as well as for building the Wear OS app using Watch Face Push. You’re also in control of the phone-watch communications, for which we recommend using the Data Layer APIs.
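For example, here’s a minimal sketch of the phone app using the Data Layer MessageClient to tell its Wear OS counterpart that a new watch face is ready to fetch; the message path and payload are hypothetical, and the Wear app would respond by downloading the package and calling addWatchFace():

import android.content.Context
import com.google.android.gms.wearable.Wearable

// Hypothetical message path the Wear OS app listens for
private const val NEW_WATCH_FACE_PATH = "/marketplace/watchface-available"

fun notifyWatchOfNewWatchFace(context: Context, watchFaceId: String) {
   val messageClient = Wearable.getMessageClient(context)
   Wearable.getNodeClient(context).connectedNodes
       .addOnSuccessListener { nodes ->
           nodes.forEach { node ->
               messageClient.sendMessage(node.id, NEW_WATCH_FACE_PATH, watchFaceId.toByteArray())
           }
       }
}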

Adding Watch Face Push to your project

To start using Watch Face Push on Wear OS 6, include the following dependency in your Wear OS app:

// Ensure latest version is used by checking the repository
implementation("androidx.wear.watchface:watchface-push:1.3.0-alpha07")

Declare the necessary permission in your AndroidManifest.xml:

<uses-permission android:name="com.google.wear.permission.PUSH_WATCH_FACES" />

Obtain a Watch Face Push client:

val manager = WatchFacePushManagerFactory.createWatchFacePushManager(context)

You’re now ready to start using the Watch Face Push API, for example to list the watch faces you have already installed, or add a new watch face:

// List existing watch faces, installed by this app
val listResponse = manager.listWatchFaces()

// Add a watch face
manager.addWatchFace(watchFaceFileDescriptor, validationToken)

Understanding Watch Face Push

While the basics of the Watch Face Push API are easy to understand and access through the WatchFacePushManager interface, there are several other factors to consider when working with the API in practice to build an effective marketplace app.

To learn more about using Watch Face Push, see the guidance and reference documentation.

Updates to Watch Face Format

Photos

Available from Watch Face Format v4

The new Photos element allows the watch face to contain user-selectable photos. The element supports both individual photos and a gallery of photos. For a gallery of photos, developers can choose whether the photos advance automatically or when the user taps the watch face.

a wearable device and small screen mobile device side by side demonstrating how a user may configure photos for the watch face through the Companion app on the mobile device
Configuring photos through the watch Companion app

The user is able to select the photos of their choice through the companion app, making this a great way to include true personalization in your watch face. To use this feature, first add the necessary configuration:

<UserConfigurations>
  <PhotosConfiguration id="myPhoto" configType="SINGLE"/>
</UserConfigurations>

Then use the Photos element within any PartImage, in the same way as you would for an Image element:

<PartImage ...>
  <Photos source="[CONFIGURATION.myPhoto]"
          defaultImageResource="placeholder_photo"/>
</PartImage>

For details on how to support multiple photos, and how to configure the different change behaviors, refer to the Photos section of the guidance and reference, as well as the GitHub samples.

Transitions

Available from Watch Face Format v4

Watch Face Format now supports transitions when exiting and entering ambient mode.

moving image demonstrating an overshoot effect adjusting the time on a watch face to reveal the seconds digit
State transition animation: Example using an overshoot effect in revealing the seconds digits

This is achieved through the existing Variant tag. For example, the hours and minutes in the above watch face are animated as follows:

<DigitalClock ...>
  <Variant mode="AMBIENT" target="x" value="100" interpolation="OVERSHOOT" />

   <!-- Rest of "hh:mm" clock definition here -->
</DigitalClock>

By default, the animation takes the full extent of allowed time for the transition. The new interpolation attribute controls the animation effect - in this case the use of OVERSHOOT adds a playful experience.

The seconds are implemented in a separate DigitalClock element, which shows the use of the new duration attribute:

<DigitalClock ...>
  <Variant mode="AMBIENT" target="alpha" value="0" duration="0.5"/>
   <!-- Rest of "ss" clock definition here -->
</DigitalClock>

The duration attribute takes a value between 0.0 and 1.0, with 1.0 representing the full extent of the allowed time. In this example, by using a value of 0.5, the seconds animation is quicker - taking half the allowed time, in comparison to the hours and minutes, which take the entire transition period.

For more details on using transitions, see the guidance documentation, as well as the reference documentation for Variant.

Color Transforms

Available from Watch Face Format v4

We’ve extended the usefulness of the Transform element by allowing color to be transformed on the majority of elements where it is an attribute, and also allowing tintColor to be transformed on Group and Part* elements such as PartDraw and PartText.

The main exceptions to this addition are the clock elements, DigitalClock and AnalogClock, and also ComplicationSlot, which do not currently support Transform.

In addition to extending the list of transformable attributes to include colors, we’ve also added a handful of useful functions for manipulating color:

To see these in action, let’s consider an example.

The Weather data source provides the current UV index through [WEATHER.UV_INDEX]. When representing the UV index, these values are typically also assigned a color:

chart mapping UV index values to their standard colors

We want to represent this information as an Arc, not only showing the value, but also using the appropriate color. We can achieve this as follows:

<Arc centerX="0" centerY="0" height="420" width="420"
  startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
  <Transform target="endAngle"
    value="165 - 40 * (clamp(11, 0.0, 11.0) / 11.0)" />
  <Stroke thickness="20" color="#ffffff" cap="ROUND">
    <Transform target="color"
      value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
  </Stroke>
</Arc>

Let’s break this down:

    • The first Transform restricts the UV index to the range 0.0 to 11.0 and adjusts the sweep of the Arc according to that value.
    • The second Transform uses the new extractColorFromWeightedColors function.
        • The first argument is our list of colors
        • The second argument is a list of weights - you can see from the chart above that green covers 3 values, whereas orange only covers 2, so we use weights to represent this.
        • The third argument is whether or not to interpolate the color values. In this case we want to stick strictly to the color convention for UV index, so this is false.
        • Finally, in the fourth argument, we coerce the UV value into the range 0.0 to 1.0, which is used as an index into our weighted colors.

The result looks like this:

side by side quadrants of watch face examples showing using the new color functions in applying color transforms to a Stroke in an Arc
Using the new color functions in applying color transforms to a Stroke in an Arc.
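To make the weighted selection concrete, here’s a rough Kotlin model of the non-interpolated behavior described above; it’s for intuition only, not the actual Watch Face Format implementation:

// Intuition only: each color occupies a band sized by its weight,
// and the 0.0–1.0 fraction selects the band it falls into.
fun pickWeightedColor(colors: List<String>, weights: List<Float>, fraction: Float): String {
   require(colors.size == weights.size) { "one weight per color" }
   val total = weights.sum()
   var upperEdge = 0f
   for (i in colors.indices) {
       upperEdge += weights[i] / total
       if (fraction <= upperEdge) return colors[i]
   }
   return colors.last()
}

fun main() {
   val colors = listOf("#97d700", "#FCE300", "#ff8200", "#f65058", "#9461c9")
   val weights = listOf(3f, 3f, 2f, 3f, 1f)
   // Mirrors the snippet's expression: clamp(uv + 0.5, 0.0, 12.0) / 12.0
   val uvIndex = 7f
   val fraction = (uvIndex + 0.5f).coerceIn(0f, 12f) / 12f
   println(pickWeightedColor(colors, weights, fraction)) // #ff8200 (orange) for UV 7
}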

As well as being able to provide raw colors and weights to these functions, you can also use values from complications, such as heart rate, temperature, or steps goal. For example, to use the color range specified in a goal complication:

<Transform target="color"
    value="extractColorFromColors(
        [COMPLICATION.GOAL_PROGRESS_COLORS],
        [COMPLICATION.GOAL_PROGRESS_COLOR_INTERPOLATE],
        [COMPLICATION.GOAL_PROGRESS_VALUE] /    
            [COMPLICATION.GOAL_PROGRESS_TARGET_VALUE]
)"/>

Introducing the Reference element

Available from Watch Face Format v4

The new Reference element allows you to refer to any transformable attribute from one part of your watch face scene in other parts of the scene tree.

In our UV index example above, we’d also like the text labels to use the same color scheme.

We could perform the same color transform calculation as on our Arc, using [WEATHER.UV_INDEX], but this is duplicative work which could lead to inconsistencies, for example if we change the exact color hues in one place but not the other.

Returning to the Arc definition, let’s create a Reference to the color:

<Arc centerX="0" centerY="0" height="420" width="420"
  startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
  <Transform target="endAngle"
    value="165 - 40 * (clamp(11, 0.0, 11.0) / 11.0)" />
  <Stroke thickness="20" color="#ffffff" cap="ROUND">
    <Reference source="color" name="uv_color" defaultValue="#ffffff" />
    <Transform target="color"
      value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
  </Stroke>
</Arc>

The color of the Arc is calculated from the relatively complex extractColorFromWeightedColors function. To avoid repeating this elsewhere in our watch face, we have added a Reference element, which takes as its source the Stroke color.

Let’s now look at how we can consume this value in a PartText elsewhere in the watch face. We gave the Reference the name uv_color, so we can simply refer to this in any expression:

<PartText x="0" y="225" width="450" height="225">
  <TextCircular centerX="225" centerY="0" width="420" height="420"
    startAngle="120" endAngle="90"
    align="START" direction="COUNTER_CLOCKWISE">
    <Font family="SYNC_TO_DEVICE" size="24">
      <Transform target="color" value="[REFERENCE.uv_color]" />
      <Template>%d<Parameter expression="[WEATHER.UV_INDEX]" /></Template>
    </Font>
  </TextCircular>
</PartText>
<!-- Similar PartText here for the "UV:" label -->

As a result, the color of the Arc and the UV numeric value are now coordinated:

side by side quadrants of watch face examples showing Coordinating colors across elements using the Reference element
Coordinating colors across elements using the Reference element

For more details on how to use the Reference element, refer to the Reference guidance.

Text autosizing

Available from Watch Face Format v3

Sometimes the exact length of the text to be shown on the watch face can vary, and as a developer you want to balance displaying text that is both legible and complete.

Auto-sizing text can help solve this problem, and can be enabled through the isAutoSize attribute introduced to the Text element:

<Text align="CENTER" isAutoSize="true">

Having set this attribute, text will then automatically fit the available space, starting at the maximum size specified in your Font element, and with a minimum size of 12.

As an example, step count could range from tens or hundreds through to many thousands, and the new isAutoSize attribute enables best use of the available space for every possible value:

side by side examples of text sizing adjustments on watch face using isAutosize
Making the best use of the available text space through isAutoSize

For more details on isAutoSize, see the Text reference.

Android Studio support

For developers working in Android Studio, we’ve added support to make working with Watch Face Format easier, including:

    • Run configuration support
    • Auto-complete and resource reference
    • Lint checking

This is available from Android Studio version 2025.1.1 Canary 10.

Learn More

To learn more about building watch faces, please take a look at the Watch Face Format guidance and reference documentation.

We’ve also recently launched a codelab for Watch Face Format and have updated samples on GitHub to showcase new features. The issue tracker is available for providing feedback.

We're excited to see the watch face experiences that you create and share!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


* Google Play data for the period 2024-03-24 to 2025-03-23