Category Archives: Android Developers Blog


Home APIs: Enabling all developers to build for the home

Posted by Matt Van Der Staay – Engineering Director, Google Home


This blog was originally posted on Google for Developers.

As the saying goes, “home is where the heart is.” It’s where we spend the most time; it’s the space where you can be comfortable, truly relax, connect, and make memories. Our homes have gotten more helpful with connected products, such as a smart door lock or Nest thermostat. Despite this momentum, it’s still too hard to develop for the home.

We are changing all of that. Building on the foundation of Matter, we've re-envisioned Google Home as a platform for developers - all developers, not just those that build smart home devices. Google Home is the destination to create innovative experiences for the home.

Today, we’re announcing the Home APIs and Home runtime. With the Home APIs, app developers can access over 600M devices, Google’s hubs and Matter infrastructure, and an automation engine powered by Google intelligence - all available on both Android and iOS. Here are five things to know:

1. Any developer can now build an experience that works with Google Home.

The home offers a unique opportunity for developers to create seamless and deeper relationships with users, but developing for the smart home is harder than it needs to be. Building for the smart home means integrating with many device makers, operating hubs and Matter fabrics, and running automation engines driven by intelligent signals.

Whether you build an app specifically for smart home devices or an app that has nothing to do with the smart home - like a fitness app or delivery app - the Home APIs let you create delightful and differentiated experiences for your customers on both Android and iOS.

2. Access 600 million connected devices from your app

The new Device and Structure APIs let you access over 600M devices with a single integration. Control and manage the devices already connected to Google Home, such as Matter light bulbs or the Nest Learning Thermostat, whether at home or on the go. You can build a complex app to manage any aspect of a smart home, or simply integrate with a smart device to solve pain points - like turning on the lights automatically before the food delivery driver arrives.
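As a rough sketch of what an integration could look like, the snippet below turns on a light in a structure the user has shared. Every type and method name here is an illustrative assumption rather than the published Home APIs surface; see the Google Home Developer Center for the actual API.

// Hypothetical pseudocode: HomeClient, Structure, and OnOff below are
// illustrative assumptions, not the published Home APIs surface.
suspend fun turnOnPorchLight(home: HomeClient) {
    // Work only within a structure the user has explicitly granted access to.
    val structure = home.structures().first()
    val porchLight = structure.devices().first { it.name == "Porch light" }

    // Flip the device's on/off trait.
    porchLight.trait(OnOff)?.on()
}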

The Home APIs have been designed with privacy and security in mind, leveraging industry-standard best practices. Users are always in control and need to explicitly grant access to their structure and smart home devices before an app can access them. And they can easily revoke access at any time from the Google Home app. To ensure quality experiences, developers who adopt the Home APIs must pass certification before launching their app.

The Device and Structure APIs provide all of the foundational building blocks to create a smart home experience.

The new Commissioning API lets you set up Matter devices in your app, in the Home app, or directly with Fast Pair on Android, without the need to create a new Matter fabric, saving you time and resources.

The Commissioning API provides the complete customer experience for setting up a Matter device.

3. Automate with Google’s unique intelligence about the home

As people add more devices to their home, it becomes challenging to make them all work in unison. Over the past year, we have added new signals and allowed those with advanced skills to script their home using generative AI. With the new Automation API, you can create and manage home automations in your app, using Google Home’s new automation engine and intelligent signals.

Automations can be triggered by device signals from the home such as occupancy events from motion sensors, mode changes from appliances, or media events from a smart TV. For example, Yale is using the Automation API to turn on the foyer lights when the front door is unlocked at night. Automations can also use Google’s intelligence signals like home and away, which fuse together signals from devices across the home to create more accurate presence detection.

The Automations API provides all of the tools for creating and managing automations.

4. Expanding hubs for Google Home to the TV

A hub for Google Home is a device that enables remote access and local control of Matter devices over Wi-Fi and Thread. The Home APIs use the network of hubs for Google Home to control Matter devices whether the user is at home or away.

Later this year, we’re upgrading our hubs and introducing the Home runtime, so other devices - including Chromecast with Google TV, select panel TVs with Google TV running Android 14 or higher, and eligible LG TVs - will also become hubs for Google Home.

Home APIs make controlling lights and switches locally over a hub feel snappy. We are adopting these APIs in the Google Home app, and our early tests show device control operating up to three times faster than before. Developers using the Home APIs can see faster and more responsive local control in their apps as well.

5. Delightful new experiences from a diverse set of apps

We are working with a broad range of brands across lighting, security, automotive, energy, and entertainment to build seamless smart home experiences that help get more usefulness from the smart home.

Partners from every major smart home category are building on the Home APIs.

Here’s how some of our first partners are using the Home APIs:

ADT’s new Trusted Neighbor will revolutionize the universal practice of “giving a trusted neighbor a key to your home,” enabling users to easily grant secure and temporary access to their homes for neighbors, friends or helpers.

ADT Trusted Neighbor Program

LG will enable millions of TVs to be hubs for Google Home, allowing seamless control of devices from any app built using Home APIs. You will also be able to use the ThinQ mobile app or the Home Hub on the LG TV to control devices.

Home APIs on LG TVs for Google Home

Eve Systems will bring their experience to Android for the first time and build helpful automations like lowering the blinds when the temperature drops at night.

Eve Systems using Home APIs

Google Pixel is bridging the digital and physical worlds so that bedtime mode can not only dim your screen, but can also automatically dim your bedroom lights, lower the shades and lock the front door.

Google Pixel using Home APIs

And this is just the beginning. With the Home APIs, a workout app could keep you cool while you are burning calories by turning on the fan before you begin working out. Or a vacation rental app could make sure that the lights are on and the temperature is just right when a guest arrives. With the Home APIs, now anyone can bridge digital experiences and physical devices.


Sign Up to Build with the Home APIs

Do you have a great idea or feature that you'd like to build into your app with the Home APIs? Tell us about it and join the waitlist for access to the Home APIs or Home runtime. We will expand access on a rolling basis and the first apps built on the Home APIs will come to the Play Store and App Store starting this fall. Learn more about what’s included in the Home APIs from our I/O session on the Google Home Developer Center.

Android for Cars: Bringing more apps to cars

Posted by Vivek Radhakrishnan – Technical Program Manager, and Seung Nam – Product Manager

With technology in cars becoming more capable, the opportunity to deliver safe and seamless connected experiences for drivers and passengers is greater than ever. Google remains committed to the automotive industry and is seeing momentum across Android Auto and cars powered by Android Automotive OS with Google built-in. We’re excited to share updates across our in-car experiences and introduce new programs and resources to make it easier for you to bring your apps to cars. Learn more below and in the Android for Cars Technical Session.

Momentum and updates

With over 200 million cars on the road compatible with Android Auto, and nearly 40 car models like the Nissan Rogue, Renault R5, Acura ZDX, and Ford Explorer offering Google built-in, the time to bring your apps to cars is now.

Over the last year, the ecosystem of apps available across these experiences has grown – thanks to you. New entertainment apps like Max, Peacock and Angry Birds are coming to select cars with Google built-in. On Android Auto, the Uber Driver app is now available, allowing drivers to accept rides and deliveries, and get turn-by-turn directions on a bigger screen.

Angry Birds is coming to select cars with Google built-in, including Volvo EX90 (pictured).

We’re also pleased to share that Google Cast is coming to cars with Android Automotive OS, starting with Rivian with more to follow. This allows you to easily cast video content from your phone or tablet directly to the car while parked. If you don’t already offer casting in your app, this is a simple way for your content to reach new audiences in the car.

Coming soon - you can stream content from apps on your phone, like Pluto TV, to Rivian cars via Google Cast.

New car app quality tiers

There are unique considerations when developing apps and experiences for cars including safety, numerous screen sizes, and more. Our priority is developing resources and tools that take these considerations into account and minimize the work needed for you to bring your apps to cars.

We’re introducing new quality tiers, inspired by those that exist for large screens, to streamline the process of bringing existing apps to cars by highlighting what makes for a great user experience in cars. Here are the tiers and what they encompass:

    • Tier 1: Car differentiated
      This tier represents the best of what’s possible in cars. Apps in this tier are specifically built to work across the variety of hardware in cars and can adapt their experience across driving and parked modes. They provide the best user experience designed for the different screens in the car like the center console, instrument cluster and additional screens - like panoramic displays that we see in many premium vehicles.
    • Tier 2: Car optimized
      Most apps available in cars today fall into this tier and provide a great experience on the car’s center stack display. These apps will have some car-specific engineering to include capabilities that can be used across driving or parked modes, depending on the app’s category.
    • Tier 3: Car ready
      Apps in this tier are large screen compatible and are enabled while the car is parked, with potentially no additional work. While these apps may not have car-specific features, users can experience the app just as they would on any large screen Android device.

To learn more about the quality tiers, see Android app quality for cars.

Car ready mobile apps program

Let’s dive deep into Tier 3 apps. In collaboration with car manufacturers, we’re introducing the Car ready mobile apps program to accelerate bringing mobile apps to cars with no additional work for developers.

As part of this program, Google will proactively review mobile apps that are already adaptive and large screen compatible to ensure safety and compatibility in cars. If the app qualifies, we will automatically opt it in for distribution on cars with Google built-in and make it available in Android Auto, without the need for new development or a new release to be created. This program will start with parked app categories like video, gaming and browsers with plans to expand to other app categories in the future.

The program will roll out in the coming months, but if you already offer a large screen compatible adaptive app and it falls into one of these categories, you can request a review to participate sooner. As this program rolls out, availability of your app will depend on platform compatibility.

To learn more about building qualified mobile apps, check out the technical session titled “Building Adaptive Android Apps”. You can find guidance on what to look out for at developer.google.com.

Apps optimized for large screens, like AMC+, may be able to come to cars with little to no development work.

New tools and emulators

To create high quality experiences in cars, we are also introducing some new tools that can help you along the way.

    • First, we have a new emulator for distant and panoramic displays so developers can visualize and test for the growing sizes and number of screens in the car and make sure apps can adapt to the variety of displays for the best experience.
    • We also have a new tool that addresses the wide range of screen shapes and user interfaces (UI) present in cars. Many new car displays have unique curves, insets and angles that impact the UI, so we have an emulator that lets you change the emulator screen to match OEM screen designs. This will help ensure the apps work well on real cars without needing to set up specific OEM emulators or bringing in real cars for testing.
    • Lastly, we’re introducing an Android Automotive OS system image for Pixel Tablet. This will let you physically interact with your app as you would on a car screen. We are opening this up for early access partners for the purpose of development and testing today, and you can request to participate here.

To learn more about how to use these tools, check out the “Build and test a parked app for Android Automotive OS” codelab that will be published tomorrow.

More app categories for cars

As you consider bringing your app to cars, we put together a table to help you understand what app categories are currently open and accepting app submissions across both Android Auto and cars with Google built-in. We will continue to expand the type of apps that can be enabled in cars, so if your app isn’t in one of these categories, stay tuned for future opportunities!

Android for Cars category status

Start developing apps for cars today

To learn how to bring your apps to cars, check out the documentation on the Android for Cars developer site and the Android for Cars Technical Session. With all the opportunities across car screens, there has never been a better time to bring your apps and experiences to cars. Thanks for all the contributions to the Android ecosystem. See you on the road!

Scaling Across Screens with Jetpack Compose @ Google I/O ‘24

Posted by Maru Ahues Bouza, Product Management Director, Android Developer


The promise of Jetpack Compose has always been that a modern toolkit designed to build native UI can help you build better apps faster and easier. As more and more of you - 40% of the top 1k apps, in fact - use (and love) Compose, we’ve been working to extend the benefits you’re seeing on mobile to help you build across form factors as well. At Google I/O 2024, we announced a lot of new updates for Compose that help you build across form factors, including Compose APIs to support adaptive layouts, and new updates for Compose for TV and Wear OS. From foldables to wearables to TVs, Compose is delivering features built to make Android development faster and easier. Apps like yours are already using Compose to support more screens with less code.

When thinking about layouts - think adaptive

Yesterday, we announced a new set of Compose APIs for building adaptive layouts, using Material guidance. These APIs, now in Beta, provide new layouts and components that adapt as users expect when switching between small and large window sizes.

The libraries provide 3 new scaffolds that adapt to the different window sizes that users can place apps in on different types of devices, from phones to foldables to tablets and more.

3 new libraries that adapt to different window sizes

NavigationSuiteScaffold

NavigationSuiteScaffold makes it easier to build navigation UI by automatically complying with Material guidelines to provide your users with an optimal experience based on their window size.

Material guidelines recommend using a navigation bar at the bottom of compact-width windows, such as most phones, and a navigation rail on the side of medium-width and expanded-width windows. It used to be up to each app individually to handle swapping between these components; now NavigationSuiteScaffold does this for you by switching between the components when the window size changes, as shown in the sketch below.

Navigation bar
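To make this concrete, here is a minimal sketch of NavigationSuiteScaffold with three destinations; it assumes the material3 adaptive navigation suite library is on the classpath and reuses one icon for brevity.

import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.Home
import androidx.compose.material3.Icon
import androidx.compose.material3.Text
import androidx.compose.material3.adaptive.navigationsuite.NavigationSuiteScaffold
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

enum class Destination { HOME, FEED, SETTINGS }

@Composable
fun MyApp() {
    var current by remember { mutableStateOf(Destination.HOME) }
    NavigationSuiteScaffold(
        navigationSuiteItems = {
            // One entry per destination; the scaffold renders these as a
            // bottom bar, rail, or drawer depending on the window size.
            Destination.entries.forEach { destination ->
                item(
                    selected = destination == current,
                    onClick = { current = destination },
                    icon = { Icon(Icons.Filled.Home, contentDescription = destination.name) },
                    label = { Text(destination.name) }
                )
            }
        }
    ) {
        // Content for the selected destination.
        Text("Selected: ${current.name}")
    }
}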

ListDetailPaneScaffold & SupportingPaneScaffold

The new library also has ListDetailPaneScaffold and SupportingPaneScaffold, which help you implement canonical layouts that we recommend in many cases - list-detail and supporting pane.

On a phone, you usually organize your app flow through screens. For example, clicking on an item on your list screen brings you to the detail screen.

Detail screen

When adapting to different window sizes, it helps to think of your app in terms of panes rather than screens. For a compact window size class, such as a phone, you might only display one pane. For an expanded window size class, you might show two or more panes at the same time. ListDetailPaneScaffold and SupportingPaneScaffold help you build apps that easily switch between one- and two-pane layouts; see the sketch below.

Different screen layouts
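Here is a minimal list-detail sketch using the navigator that ships with the library. This reflects the Beta API surface announced at I/O, so exact signatures may differ in later releases.

import androidx.compose.foundation.clickable
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material3.Text
import androidx.compose.material3.adaptive.ExperimentalMaterial3AdaptiveApi
import androidx.compose.material3.adaptive.layout.AnimatedPane
import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffold
import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffoldRole
import androidx.compose.material3.adaptive.navigation.rememberListDetailPaneScaffoldNavigator
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier

@OptIn(ExperimentalMaterial3AdaptiveApi::class)
@Composable
fun ContactsScreen(contacts: List<String>) {
    val navigator = rememberListDetailPaneScaffoldNavigator<String>()
    ListDetailPaneScaffold(
        directive = navigator.scaffoldDirective,
        value = navigator.scaffoldValue,
        listPane = {
            AnimatedPane(modifier = Modifier) {
                LazyColumn {
                    items(contacts) { contact ->
                        Text(
                            text = contact,
                            // On compact windows this swaps to the detail pane;
                            // on expanded windows both panes stay visible.
                            modifier = Modifier.clickable {
                                navigator.navigateTo(ListDetailPaneScaffoldRole.Detail, contact)
                            }
                        )
                    }
                }
            }
        },
        detailPane = {
            AnimatedPane(modifier = Modifier) {
                Text(navigator.currentDestination?.content ?: "Pick a contact")
            }
        }
    )
}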

You can learn more about all three of these APIs and how to get started with them in the “Building UI with the Material 3 adaptive library” and “Building adaptive Android apps” technical sessions.

“Integrating SupportingPaneScaffold was effortless and quick. It enabled us to seamlessly organize primary and secondary content on To-Dos. Depending on the window size class, the supporting pane adjusts the UI without any additional custom logic. Delighting our users regardless of what device they use is a key priority for SAP Mobile Start.”
- Software Engineer on SAP Mobile Start

Compose for Wear OS

In the past year, adoption of Compose for Wear OS has grown 200%, showcasing the ease with which Compose allows developers to build for the watch form factor.

Recently we’ve seen top apps such as WhatsApp, Gmail and Google Calendar built entirely using Compose for Wear OS, and it’s the recommended way for building user interfaces for Wear OS apps.

At this year’s Google I/O, Compose for Wear OS is graduating visual improvements and fixes from beta to stable.

In the past year, we’ve added features such as SwipeToReveal, to give users additional means for completing actions, an expandableItem, to enhance the use of the smaller screen and show additional information where needed, and a range of WearPreview supporting annotations, for ensuring your app works optimally across the range of device sizes and font scales.

Compose for Wear OS previews usage in Android Studio

You can get started with Compose for Wear OS by taking the codelab, and you can learn more about all the latest updates for Wear OS via the technical session.

Compose for Android TV

At Google I/O ‘24, we announced that Compose for TV 1.0.0 is now available in beta. Compose for TV is our recommended approach for building delightful UIs for Android TV OS. It brings all of the benefits of Jetpack Compose to your TV apps, making building beautiful and functional experiences in your app much faster and easier.

The latest updates to Compose for TV include better performance, input support, and a whole range of improved components that look great out of the box. New in this release, we’ve added lists, navigation, chips, and settings screens. We’ve also updated the developer tools in Android Studio to include a new project wizard to get a running start with Compose for TV.

The new TV Material Catalog app lets you explore components in Compose for TV with different themes and layouts, and our updated JetStream sample shows how it all fits together.

TV Material Catalog app in action

You can get started with Compose for TV by checking out the dedicated blog, the technical session or taking a look at the integration guides.

Jetpack Glance

Jetpack Glance 1.1.0 is now available in RC, bringing a new unit test library, Error UIs, and new components.

We have also released new Canonical Widget Layouts on GitHub, which are built on top of the Glance components, to allow you to get started faster with a set of layouts that align with best practices.

The first set of layouts is delivered as code samples and a matching Figma design kit on Android UI Kit, with more layouts coming later this year.

Lastly, we have new design guidance published on the UI design hub—check it out!

A sample of Compose across screens: Jetcaster


We have updated Jetcaster—one of our Compose samples—to adapt across phone, foldable and tablet screens, and added support for TV, Wear and homescreen widgets with Glance. Jetcaster showcases how Compose helps you to build across a range of devices using a shared architecture in a single project.

See how you can extract elements such as your data layer and design system to promote reuse and consistency while delivering an experience tailored to different form factors. You can dive directly into the code on GitHub.

Get started with Compose across screens

With these updates to Compose to help you build for tablets, foldables, wearables and TVs, it is a great time to get started! The technical sessions from Google I/O are a great place to learn more about all the latest updates.

Learn more about how SoundCloud supported more screens using 45% less code with Jetpack Compose!

"Our mobile Compose skills transferred directly to Compose for other form factors, The concepts and most APIs are the same across form factors” - Vitus Ortner, Android engineer at SoundCloud

What’s new in Wear OS – I/O ’24

Posted by Kseniia Shumelchyk, Android Developer Relations Engineer, and Garan Jenkin, Android Developer Relations Engineer

Wear OS has seen incredible growth and advancements over the past year. With watch launches from Pixel, Samsung and more, Wear OS grew its user base by 40% in 2023 and has users in over 160 countries and regions. And Wear OS has expanded to more brands including OnePlus, OPPO and Xiaomi. This growth has been accompanied by heavy investments in performance and power optimization.

In this blog post, we’ll be highlighting some of the key updates we announced at Google I/O this year, so let’s dive in and explore the latest advancements in Wear OS and how you can make the most of the platform.

Wear OS 5 Developer Preview

We’re excited to be releasing the Developer Preview of Wear OS 5, the next version of Google’s smartwatch platform arriving later this year, based on Android 14. Central to our release of Wear OS 5 is continuing to enhance battery life.

Wear OS 5 brings performance improvements over Wear OS 4. Tracking your workout is now more efficient; for example, running a marathon consumes up to 20% less power on Wear OS 5 than on Wear OS 4.

Wear OS 5 brings battery improvements over Wear OS 4 for longer workout tracking

To help you develop power-efficient apps on Wear OS, we’ve released a new guide to conserve power and battery. Be sure to take a look!

Wear OS 5 is based on Android 14, which brings with it a number of developer-facing changes. Check out what’s changed and try the new Wear OS 5 emulator to test your app for compatibility with the new platform version.

Changes in Watch Faces development

Last year we introduced the Watch Face Format as part of Wear OS 4, and we’ve had a fantastic response, with 30% of watch faces in Google Play already using the format. It’s been great to see what you’ve all been able to create so far using the Watch Face Format!

Sample watch faces created with Watch Face Format

We’re excited to bring you the next iteration of the Watch Face Format with Wear OS 5.

Additionally, we’re announcing some changes to existing watch face development using Jetpack Watch Face APIs. Starting from Wear OS 5, we are introducing restrictions to complications for watch faces built with AndroidX or the Wearable Support Library that will apply to some data sources, as well as Google Play publishing limitations to new watch faces built with these libraries.

Check out the Watch Faces blog post for full details on the new features in Watch Face Format and changes to watch faces development options.

Tooling and library updates

Jetpack Compose for Wear OS

Adoption of Compose on Wear OS has grown 200% in the past year, highlighting the ease with which Compose allows developers to build for the watch form factor. Recently we’ve seen top apps such as WhatsApp, Gmail and Google Calendar built entirely using Compose for Wear OS, and it’s the recommended way for building user interfaces for Wear OS apps.

With the 1.3 release of Jetpack Compose for Wear OS, we’ve graduated a number of visual improvements and fixes from beta to stable.

In the past year, we’ve added features such as SwipeToReveal, to give users additional means for completing actions, an expandable item, to enhance the use of the smaller screen and show additional information where needed, and a range of WearPreview supporting annotations, for ensuring your app works optimally across the range of device sizes and font scales.

Compose for Wear OS previews usage in Android Studio

And at Google I/O 2024, we announced a lot of new updates with Jetpack Compose that help you build across form factors, including Wear OS. Read more in this blog, and check out how SoundCloud supported more screens using 45% less code with Jetpack Compose.

Tiles and ProtoLayout

Wear OS tiles give users fast, predictable access to the information and actions they rely on most. Version 1.4 of the Jetpack Tiles library, currently in alpha, introduces preview support for Android Studio to help you quickly iterate on your Tile development while also helping you create optimal-looking tiles on a range of display sizes.

Previews can be seen starting in Android Studio Koala Feature Drop (Canary), with the following dependencies:

    • androidx.wear.tiles:tiles-tooling-preview:1.4.0-alpha02+
    • androidx.wear.tiles:tiles-tooling:1.4.0-alpha02+
    • androidx.wear:wear-tooling-preview:1.0.0+
import android.content.Context
import androidx.wear.tiles.tooling.preview.Preview
import androidx.wear.tiles.tooling.preview.TilePreviewData
import androidx.wear.tiles.tooling.preview.TilePreviewHelper
import androidx.wear.tooling.preview.devices.WearDevices

@Preview(device = WearDevices.SMALL_ROUND)
fun smallPreview(context: Context) = TilePreviewData(
    onTileRequest = { request ->
        // Wrap the app's own tile layout in a single-entry timeline for previewing.
        TilePreviewHelper.singleTimelineEntryTileBuilder(
            buildMyTileLayout()
        ).build()
    }
)
Tiles previews usage in Android Studio

We’ve also introduced better means for your app to determine whether your tiles are in use, through the getActiveTilesAsync() method.
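For instance, a sketch along these lines could drive analytics or feature decisions. The (Context, Executor) parameters assumed below reflect a typical static entry point on TileService; check the Tiles reference for the exact signature.

import android.content.Context
import android.util.Log
import androidx.wear.tiles.TileService
import java.util.concurrent.Executor

// Sketch: log how many of this app's tiles are currently in the user's
// tile carousel, via the getActiveTilesAsync method described above.
fun logActiveTiles(context: Context, executor: Executor) {
    val future = TileService.getActiveTilesAsync(context, executor)
    future.addListener({
        val activeTiles = future.get()
        Log.d("Tiles", "Active tiles for this app: ${activeTiles.size}")
    }, executor)
}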

Within ProtoLayout’s stable version 1.1, as used by Tiles, we’ve introduced a number of changes, such as the following:

    • Gradient support in ArcLine.
    • Date-time formatting supports different time zones for dynamic data types.
    • Better text autosizing and ellipsizing options, and consistent font padding behavior.
    • Expandable spacers.
    • Improved accessibility for Clickable elements.

And from 1.2.0-alpha02, we’ve made it easier for your layouts to adjust appropriately for different display sizes by adding the setResponsiveContentInsetEnabled() method to PrimaryLayout, as well as updating it for EdgeContentLayout. To use this setter, update your code as follows:

PrimaryLayout.Builder(deviceParameters)
    .setResponsiveContentInsetEnabled(true)
    .setContent(
        // ...
    )
    .build()

Easier testing for fitness apps

Android Studio Koala Feature Drop (Canary) brings a new sensor panel to make it easier to test use of Health Services in your Wear OS app. The panel allows you to configure capabilities of the device, set values of specific data types and simulate events such as auto-pause and resume of exercises.

Sensor panel usage with Wear OS emulator in Android Studio

Check out this blog to learn more about tooling updates.

Larger Displays

With the momentum surrounding Wear OS, we’re seeing a wider variety of round screen sizes and resolutions, which provides more choices for the user.

We are releasing new guidelines on how to build responsive UIs for different watch display sizes, as well as updates to existing libraries to introduce adaptive layouts and components.

Check out the ComposeStarter sample for Wear OS on GitHub to see how to take advantage of these updates in your app. Furthermore, we’ve updated the sample to provide examples of using tools to evaluate your layouts, including:

    • Previews - demonstrating use of WearPreviewDevices to visualize your layouts on a full range of device sizes and font scaling settings.
    • Screenshot testing - helping you detect issues and regressions in your layouts on different sized devices, with different font scales and locales, representative of real-world devices.

Start building for Wear OS now

There has never been a better time to start building for Wear OS! Be sure to check out the Building for the future of Wear OS technical session to learn more about all the latest updates for Wear OS!

To get started:

We’re looking forward to seeing the experiences that you build on Wear OS!

Level up your apps with the latest features from Android Health

Posted by Breana Tate - Developer Relations Engineer, Android Health

Android Health’s mission is to enable billions of Android users to be healthier through access, storage, and control of their health, fitness, and safety data. To further this mission, we offer two primary APIs for developers, Health Connect and Health Services on Wear OS, which are both used by a growing number of apps on Android and Wear OS.

AI capabilities unlock amazing and unique use cases, but to be ready to deliver the most value to your users at the right time, you need a strong foundation of data. Our updates this year focus on helping you build up this data foundation, with support for more data types, new ways to access data, and additional methods of getting timely data updates when you need them.

Changes to the Google Fit APIs

We recently shared that Google Fit developer services will be transitioning to become a core part of the Android Health platform. As part of this, the Google Fit APIs, including the REST API, will remain available until June 30, 2025.

Health Connect is the recommended solution for storing and sharing health and fitness data on Android phones. Beginning with Android 14, it’s available by default in Settings. On pre-Android 14 devices, it’s available for download from the Play Store. Health Connect lets your app connect with hundreds of apps using a single API integration. To date, over 500 apps have integrated with Health Connect and have unlocked deeper insights for their users. Check out the featured list to see some of the apps that have integrated.
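To show the shape of the single-API integration, here is a minimal sketch that reads the past day of step counts with the Health Connect Jetpack client; it assumes the user has already granted the steps read permission.

import android.content.Context
import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.records.StepsRecord
import androidx.health.connect.client.request.ReadRecordsRequest
import androidx.health.connect.client.time.TimeRangeFilter
import java.time.Instant
import java.time.temporal.ChronoUnit

suspend fun readRecentSteps(context: Context): Long {
    val client = HealthConnectClient.getOrCreate(context)
    val response = client.readRecords(
        ReadRecordsRequest(
            recordType = StepsRecord::class,
            timeRangeFilter = TimeRangeFilter.between(
                Instant.now().minus(1, ChronoUnit.DAYS),
                Instant.now()
            )
        )
    )
    // Sum the step counts across all returned records.
    return response.records.sumOf { it.count }
}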

We’re excited to continue supporting the Google Fit Android Recording API functionality through the Recording API on mobile, which allows developers to record steps, and soon distance and calories, in a power-efficient manner. In contrast to the Google Fit Android Recording API, the Recording API on mobile does not store data in the cloud by default, and does not require Google Sign-In. The API is designed to make migrating from the Fit Recording API effortless. Keep an eye on d.android.com/health-and-fitness for upcoming documentation.

Upcoming capabilities from Health Connect

Health Connect will soon add support for background reads and history reads.

Background reads will enable developers to read data from Health Connect while their app is in the background, meaning that you can keep data up-to-date without relying on the user to open your app. This is a departure from current behavior, where apps can only read from Health Connect while the app is in the foreground or running a foreground service.

History reads will give users the option to grant apps access to all historical data in Health Connect, not just the past 30 days.

With both background reads and history reads, users are in control. Both capabilities require developers to declare the respective permissions, and users must approve the permission requests before developers can make use of the data protected by those permissions. Even after granting approval, users have the option of revoking access at any time from within Health Connect settings.

Both features will be released later this year, so stay tuned to learn how to add support to your apps!

Updates to Health Services on Wear OS

Health Services on Wear OS is a set of APIs that makes it simple to create power-efficient health and fitness experiences on Wear OS.

In Wear OS 5, we’re introducing 2 new features:

    • New data types for running
    • Support for debounced goals

New Data Types for Running

Starting with Wear OS 5, Health Services will support new data types for running. These data types can help provide additional insights on running form and economy.

The full list of new advanced running metrics is:

    • Ground Contact Time
    • Stride Length
    • Vertical Oscillation
    • Vertical Ratio

As with all data types supported by Health Services on Wear OS, be sure to check exercise capabilities so that your app only uses metrics that are supported on the devices running your app, creating a smoother experience for users. This is especially important for Wear OS, as there is a strong ecosystem of devices for consumers to choose from, and they don’t always support the same metrics.

// Checking if the device supports the RUNNING exercise and confirming the
// data types that are supported.
suspend fun getExerciseCapabilities(): ExerciseTypeCapabilities? {
    val capabilities = exerciseClient.getCapabilitiesAsync().await()
    return if (ExerciseType.RUNNING in capabilities.supportedExerciseTypes) {
        capabilities.getExerciseTypeCapabilities(ExerciseType.RUNNING)
    } else {
        null
    }
}

. . .

// Checking whether the data types that we want to use are supported by
// the RUNNING exercise on this device.
val dataTypes = setOf(
    DataType.HEART_RATE_BPM_STATS,
    DataType.CALORIES_TOTAL,
    DataType.DISTANCE_TOTAL,
    DataType.GROUND_CONTACT_TIME,
    DataType.VERTICAL_OSCILLATION
).intersect(capabilities.supportedDataTypes)
Checking exercise capabilities with Health Services on Wear OS

To make this easy, we’ve introduced a sensor panel, available starting in Android Studio Koala Feature Drop, which is currently in Canary. You can use the panel to test your app across a variety of device capabilities, experimenting with situations where metrics like heart rate or distance aren’t available.

The Health Services sensor panel

Support for debounced goals

Second, Health Services on Wear OS will soon support debounced goals for instantaneous metrics. These include metrics like heart rate, distance, and speed, for which users want to maintain a specified threshold or range throughout an exercise.

Debounced goals prevent the same event from being emitted multiple times—every time the condition is true—over a short time period. Instead, events are emitted only if the threshold has been continuously exceeded for a (configurable) number of seconds. You can also prevent events from being emitted immediately after goal registration.

This support comes from two new ways to better time goal alerts for instantaneous metrics: duration at threshold and initial delay.

    • Duration at threshold is the amount of uninterrupted time the user needs to cross the specified threshold before Health Services sends an alert event.
    • Initial delay is the amount of time that must pass, since goal registration, before your app is notified.

Together, these features reduce the number of false positives and repeated alerts surfaced to users if your app lets users set fitness goals or targets.

    • Duration at threshold: the amount of uninterrupted time the user needs to cross the specified threshold before Health Services sends an alert event. Its purpose is to prevent false positives. The counter starts as soon as the user crosses the specified threshold.
    • Initial delay: the amount of time that must pass, since goal registration, before your app is notified. Its purpose is to prevent repeatedly notifying the user. The counter starts as soon as the monitoring request is set.

The differences between Duration at Threshold and Initial Delay

A common use case for debounced goals involves heart rate zones. Heart rate continuously fluctuates throughout an exercise, especially during cardio-intensive activities. Without support for debouncing, an app might get many alerts in a short period of time, such as each time the user’s heart rate dips above or below the target range.

By introducing an initial delay, you can inform Health Services to send a goal alert only after a specified time period has passed–think of this like an adjustment period. And by introducing a duration at threshold, you can take this customization further, by specifying the amount of time that must pass in (or out) of the specified threshold for the goal to be activated. In practice, this would be like waiting for the user to be out of their target heart rate range for 15 seconds before your app lets them know to increase or decrease their intensity.
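As a rough illustration, configuring such a goal could look like the sketch below. These builder and parameter names are assumptions made for illustration, not the shipped Health Services API, which arrives with Wear OS 5.

// Hypothetical sketch only: names below are illustrative assumptions,
// not the final Health Services API surface.
val highHeartRateGoal = debouncedGoal(
    dataType = DataType.HEART_RATE_BPM,            // an instantaneous metric
    threshold = 165.0,                             // alert above 165 bpm
    durationAtThreshold = Duration.ofSeconds(15),  // must stay above for 15s
    initialDelay = Duration.ofSeconds(60)          // no alerts in the first minute
)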

Check out the technical session, “Building Adaptable Experiences with Android Health” to see this in action!

Your app’s training partner

The Health & Fitness Developer Center is your one-stop-shop for building health & fitness apps on Android! Visit the site for documentation, design inspiration, case studies, and more to learn how to build apps on mobile and Wear OS.

We’re excited to see the Health and Fitness experiences you continue to build on Android!

Latest updates for watch faces on Wear OS

Posted by Anna Bernbaum – Product Manager, and Garan Jenkin – Developer Relations Engineer

At last year’s Google I/O, we launched the Watch Face Format for Wear OS. This year, as part of our continued partnership with Samsung, we are excited to share some new features that you can use to create exciting new watch face designs! These features are now supported in XML definitions, and later in the year, you’ll also see an update to Watch Face Studio to take advantage of them.

The Watch Face Format is the recommended way to create watch faces for Wear OS. The format makes it easier to create customizable and more power-efficient watch faces for devices that run Wear OS 4 or higher. The Watch Face Format is a declarative XML format, so there is no executable code involved in creating a watch face, and there is no code embedded in your watch face APK.

Additionally, in our move toward the Watch Face Format for watch face creation, we have also made some changes to watch face development.

New features in the Watch Face Format

Flavors

Flavors represent preset configurations for your watch face, available in the companion app:

Watch gallery

They allow the watch face developer to configure useful and attractive combinations of the watch face’s configuration options, and allow the user to visualize and select from these with ease.

We’ve now brought flavors to the Watch Face Format. For a full guide on adding them to your watch face, see the flavors reference.

Complications

We’re adding support for both “goal progress” and “weighted elements” complication types to the Watch Face Format:

Goal progress and weighted elements complications

    • Goal progress is perfect for data where the user has a target, but that target can be exceeded. A good example is step count.
    • Weighted elements can represent discrete subsets of data, showing their relative sizes, where you might otherwise use something like a pie chart.

Both of these complication types can be accessed through the [COMPLICATION.*] expression object. For full details, see the complication guidance.

Weather

Knowing at-a-glance what the weather will be like for the next hour, day, and beyond can make all the difference to a user’s plans! Unsurprisingly, having weather data as a data source in the Watch Face Format has been a common request, and we’re delighted to be able to introduce it in this latest version. You’ll now be able to make watch faces like this:

A watch face displaying weather information

Weather Basics

Weather in the Watch Face Format is accessed via the [WEATHER.*] expression object. You can use it in Condition and text Template statements, and anywhere expressions are supported.

For example, to show the current weather condition, use this template and expression:

<Template>Current weather conditions: %s
    <Parameter expression="[WEATHER.CONDITION_NAME]"/>
</Template>

The weather provider in the Watch Face Format supports a range of different metric types for the current day, including the following:

    • Current conditions
    • Temperature - current, minimum (low), and maximum (high)
    • UV index
    • Chance of rain

For the full range of data types and conditions, see the weather guide.

Forecasts

In addition to the current weather, you can access forecast data, both by hour and by day. For example, to access the forecast maximum temperature for tomorrow, use a template and set of expressions similar to the following:

<Template>Tomorrow max temp: %d°%s
    <Parameter expression="[WEATHER.DAYS.1.TEMPERATURE_HIGH]" />
    <Parameter expression="[WEATHER.TEMPERATURE_UNIT] == 1 ? &quot;C&quot; : &quot;F&quot;" />
</Template>

When using weather in the Watch Face Format, there are some further details to be aware of, such as checking for forecast availability or loading errors. For all of this and more, take a look at the weather guide.

Changes to Watch Face development

As we gather momentum behind the Watch Face Format, we’re announcing some changes to existing watch face development options.

We announced recently that only some complications will be available on Wear OS 5, for watch faces built with AndroidX or the Wearable Support Library. This restriction does not apply to watch faces that use the Watch Face Format.

Additionally, starting in early 2025 (specific date to be announced in Q4 2024), all new watch faces published on Google Play must use the Watch Face Format. Existing watch faces that use other libraries, such as AndroidX or the Wearable Support Library, can continue to receive updates without transitioning to the new format.

New resources

To make it easier to create watch faces using the Watch Face Format, we’ve published some more resources on GitHub.

You now have full access to the XSD specification, to help you build your own watch face generating tools.

We’ve also provided validators to check your XML for correctness and memory usage. These are the same checks run by Google Play, so you can run them even before you submit your watch face for publishing.

Learn more

Get started with the latest version of the Watch Face Format.

Be sure to check out Building for the future of Wear OS technical session and What’s new in Wear OS at I/O 2024 blog post to learn more about all the latest updates for Wear OS!

Code snippets license:

Copyright 2023 Google LLC.
SPDX-License-Identifier: Apache-2.0

Get the big picture with Large Screens at Google I/O 2024

Posted by Fahd Imtiaz, Product Manager, Android Developer

With Android reaching more devices, from phones to foldables to Chromebooks, building apps that seamlessly adapt to different screen sizes and types has never been more crucial. At this year’s Google I/O, we covered building adaptable apps, increasing user productivity with key inputs like keyboard and stylus, and scaling games across surfaces.

Building adaptive apps

Throughout Google I/O 2024, we’ve talked a lot about how to build adaptive apps. With this shift, some of you may be asking “what makes an app truly adaptive?”

Adaptive apps take advantage of the full screen size they are on - whether that is a phone, a tablet, or a foldable. These apps adjust their layout based on conditions that drive how the layout should adapt, such as changes to the size of the window, device posture, or font size.

Adaptive apps dynamically adjust their layouts by swapping components, showing, or hiding content based on the available window size, compared to simply stretching UI elements. With ever evolving form factors and screen sizes, adaptability for your app to any window size unlocks the seamless experiences users demand today.

Now that you know what they are, how do you get started building adaptive apps? We strongly recommend using WindowSizeClasses as the opinionated breakpoints for your UI and we’re bringing you a variety of new Compose APIs that make it easier to implement common adaptive layouts.
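For example, you can read the current window size class and branch your layout on it. The sketch below uses names from the material3-adaptive and androidx.window libraries, with placeholder composables standing in for your real layouts.

import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.compose.runtime.Composable
import androidx.window.core.layout.WindowWidthSizeClass

@Composable
fun AdaptiveContent() {
    // The window size class buckets the current window into the
    // compact / medium / expanded breakpoints.
    val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
    when (sizeClass.windowWidthSizeClass) {
        WindowWidthSizeClass.EXPANDED -> TwoPaneLayout()
        else -> OnePaneLayout() // compact and medium widths
    }
}

// Placeholders standing in for your real layouts.
@Composable fun OnePaneLayout() { /* single-pane UI */ }
@Composable fun TwoPaneLayout() { /* two-pane UI */ }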

Available now in beta, the new Compose adaptive layout libraries help you to make your UI look good across window sizes. From navigation UI to list/detail and supporting pane style layouts, we’re providing composables to make building an adaptive app easier than ever.

Check out these technical sessions to learn more:

Or get started by checking out the new documentation!

Increase user productivity on tablet and foldables

Tablets and foldables are great for consuming content but even better when it comes to creating content. That’s why leveraging input devices like stylus for more productive experiences is especially important for your users.

Improving your app’s stylus experience

Stylus users on Android can remain more productive with new support for handwriting in text fields. You no longer need to put down your stylus when you need to input some text into a text field. Stylus handwriting and gestures will work automatically if you are using standard text components including BasicTextField in Compose (1.7), EditText in Views and text input elements in WebView.

For stylus users, low latency is key to having a responsive inking experience. Reducing latency minimizes the amount of delay between when you move your stylus and when the ink appears on the screen, giving users a more authentic pen-to-paper experience. Based on developer feedback, we’ve introduced new APIs to make low latency easier for apps that use Canvas for rendering.

A great example of these libraries in practice is Infinite Painter, where the team reduced their inking latency by 5x.

Enhancing productivity with keyboard and mouse support

Next to the stylus, another essential input device is the physical keyboard that really shines when users need to do a lot of text input like long emails, documents or blog posts. As a developer, making it easy for users to navigate your app with keyboard navigation can set your app apart.

Users should be able to navigate to all elements in your app with just their keyboard. It’s also important that frequently used keyboard shortcuts are supported in your app. To help educate your users about shortcuts, consider making your keyboard shortcuts discoverable to users by adding entries to the system KeyboardHelper.
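One standard framework hook for populating that helper is Activity’s onProvideKeyboardShortcuts callback, which the system invokes when the user opens the shortcuts overlay (Meta+/). A minimal sketch, with the group and key codes as illustrative choices:

import android.app.Activity
import android.view.KeyEvent
import android.view.KeyboardShortcutGroup
import android.view.KeyboardShortcutInfo
import android.view.Menu

class EditorActivity : Activity() {
    // Invoked when the user opens the keyboard shortcuts helper (Meta+/).
    override fun onProvideKeyboardShortcuts(
        data: MutableList<KeyboardShortcutGroup>,
        menu: Menu?,
        deviceId: Int
    ) {
        data.add(
            KeyboardShortcutGroup(
                "Editing",
                listOf(
                    KeyboardShortcutInfo("Find", KeyEvent.KEYCODE_F, KeyEvent.META_CTRL_ON),
                    KeyboardShortcutInfo("Save", KeyEvent.KEYCODE_S, KeyEvent.META_CTRL_ON)
                )
            )
        )
    }
}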

To improve the experience for keyboard, mouse, trackpad, and stylus users, we also recommend implementing hover states and keyboard focus. All interactive components should have a hover state and should show a visual cue to indicate which component has the keyboard focus.

You can learn more about all these improvements and more via the technical session, and our updated documentation and codelab.

Enhance productivity with pane expansion

Although larger window sizes allow showing multiple panes of content at once, users often want to focus on one specific pane at a time. By following the new guidance for pane expansion, users have the choice to see both panes at once, or resize them as they desire.

Google Calendar has added pane expansion to their supporting pane layout on expanded width window sizes, allowing users to resize to see more details of an event, or more information in their schedule.


This feature will be supported by activity embedding in Android 15, and is also planned to be supported by the material3-adaptive library.

Building great games across surfaces

Gamers appreciate premium, immersive experiences and with Android tablets, foldables, desktops, and Chromebooks your game can reach more players than ever. Creating meaningful experiences on each of these devices is key to ensuring your game stands out.

Diablo Immortal saw significantly increased engagement across all aspects of the game by users who play on multiple devices.

During this year’s Google I/O, we are highlighting the best practices for rendering, managing assets, and windowing in resizable contexts to build quality experiences and impress your players across form factors.

With the wide variety of devices and hardware configurations, provide configurable graphics options for your players. And, for the best experience right out of the box, define default graphics options for different devices. Additionally, consider trade-offs like storage size, performance, and compatibility across platforms when deciding which texture compression formats to use.

Large screen devices support different window sizes with configurations like multi-window mode, orientation changes, and fold/unfold. By default, Android provides a compatibility mode - but, for the most seamless experience, declare and handle configuration events. Display cutouts, hinges, and even system UI can also occlude your game window, so support edge-to-edge windowing and ensure no key game content is occluded. If you really want to take your game to the next level, consider using the Jetpack WindowManager library to support dynamic layouts on foldable devices.

To provide smoother gameplay and reduce input latency, consider using frame pacing. Also consider enabling wide color gamut support so that vivid colors are rendered properly, while also maximizing contrast and brightness on large screen HDR displays to improve realism and immersion for your players.

With the changing mobile landscape and the transition from OpenGL to Vulkan, handle swapchain recreation after window configuration changes. If you rely on any device-specific hardware features, like host-visible device-local memory, have fallback implementations for other platforms.

Learn more about the rendering best practices and how to level up your game across surfaces by tuning into this technical session and be sure to check out our multiplatform optimization guide.

Get started building adaptable apps from phones to tablets and foldables

You can get started building adaptable apps that look great across tablets, foldables, Chromebooks and more by checking out the “Building adaptive Android apps” technical session or heading to the large screens gallery for content tailored to your specific app type - from productivity apps to games… and more!

The Second Beta of Android 15

Posted by Dave Burke, VP of Engineering

Android 15 logo

Today we're releasing the second beta of Android 15, which continues our work to build a platform that helps improve your productivity, minimize battery impact, maximize smooth app performance, give users a premium device experience, protect user privacy and security, and make your app accessible to as many people as possible — all in a vibrant and diverse ecosystem of devices, silicon partners, and carriers.

Android delivers enhancements and new features year-round, and your feedback on the Android beta program plays a key role in helping Android continuously improve. The Android 15 developer site has lots more information about the beta, including downloads for Pixel, and the release timeline. We’re looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that works for everyone.

Now available on more devices

Diagram showing Android 15 compatible partners

The Android 15 beta is now available on handset, tablet, and foldable form factors from partners including Honor, iQOO, Lenovo, Nothing, OnePlus, OPPO, Realme, Sharp, Tecno, vivo, and Xiaomi, so there are so many more devices for you to test your app on, and so many more users that can run your app on the Android 15 beta.

Making Android more efficient

We are continuing to optimize the platform to improve the quality, speed and battery life of Android devices.

Foreground services changes

Foreground services keep apps running in an active state so they can do something critical and user-visible, often at the expense of battery life. In Android 15 Beta 2, the dataSync and mediaProcessing foreground service types now have a ~6 hour timeout, after which the system calls Android 15's new Service.onTimeout(int, int) method. At this point, the service is no longer considered a foreground service. If the service does not call Service.stopSelf() in response to the timeout, it will get stopped with a failure.
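Handling the timeout is straightforward; here is a minimal sketch of a dataSync service that cleans up and stops itself:

import android.app.Service
import android.content.Intent
import android.os.IBinder

class SyncService : Service() {
    // Called by Android 15 when a dataSync or mediaProcessing foreground
    // service reaches its timeout. Stop promptly, or the system stops the
    // service with a failure.
    override fun onTimeout(startId: Int, fgsType: Int) {
        // Persist any in-flight work quickly here.
        stopSelf()
    }

    override fun onBind(intent: Intent?): IBinder? = null
}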

Beta 2 also adds new requirements for starting foreground services while the app is running in the background: if your foreground service relies on the SYSTEM_ALERT_WINDOW permission exemption for background starts, you are now required to have a visible overlay when targeting Android 15.

For battery-efficient best practices, debugging network and power usage, and detail on how we're improving battery efficiency of background work in Android 15 and recent versions of Android, see the "Improving battery efficiency of background work on Android" I/O talk.

Upcoming required support for 16 KB page sizes

Android 15 adds support for devices that use larger page sizes, with support for 16 KB pages in addition to the standard 4 KB pages. If your app uses any NDK libraries, either directly or indirectly through an SDK, then you will likely need to simply rebuild your app for it to work on these 16 KB page size devices.

Devices with larger page sizes can have improved performance for memory-intensive workloads. While our testing may not be representative of all devices in the ecosystem, here are some of the performance gains we identified in our initial testing of devices configured with 16 KB page sizes:

    • Lower app launch times while the system is under memory pressure: 3.16% lower on average, with more significant improvements (up to 30%) for some apps that we tested
    • Reduced power draw during app launch: 4.56% reduction on average
    • Faster camera launch: 4.48% faster hot starts on average, and 6.60% faster cold starts on average
    • Improved system boot time: improved by 1.5% (approximately 0.8 seconds) on average

As device manufacturers continue to build devices with larger amounts of physical memory (RAM), many of these devices will adopt 16 KB (and eventually greater) page sizes to optimize the device's performance. Adding support for 16 KB page size devices enables your app to run on these devices and helps your app benefit from the associated performance improvements. We plan to make 16 KB page compatibility required for app uploads to Play Store next year.

To help you add support for your app, we've provided guidance on how to check if your app is impacted, how to rebuild your app (if applicable), and how to test your app in a 16 KB environment using emulators (including Android 15 system images for the Android Emulator).
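One quick way to see which page size your code is actually running with (for example, inside the 16 KB emulator images) is to query it at runtime:

import android.system.Os
import android.system.OsConstants
import android.util.Log

fun logPageSize() {
    // Reports 4096 on today's devices and 16384 on 16 KB page size devices.
    val pageSize = Os.sysconf(OsConstants._SC_PAGESIZE)
    Log.d("PageSize", "Runtime page size: $pageSize bytes")
}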

Modernizing Android's GPU access

Vulkan logo

Android hardware has evolved quite a bit from the early days where the core OS would run on a single CPU and GPUs were accessed using APIs based on fixed-function pipelines. The Vulkan graphics API has been available in the NDK since Android 7.0 (API level 24) with a lower-level abstraction that better reflects modern GPU hardware, scales better to support multiple CPU cores, and offers reduced CPU driver overhead — leading to improved app and game performance. Vulkan is supported by all modern game engines.

Vulkan is Android’s preferred interface to the GPU. Therefore, Android 15 includes ANGLE as an optional layer for running OpenGL ES on top of Vulkan. Moving to ANGLE will standardize the Android OpenGL implementation for improved compatibility, and, in some cases, improved performance. You can test out your OpenGL ES app stability and performance with ANGLE using the "Developer options → Experimental: Enable ANGLE" setting in Android 15.

The Android ANGLE on Vulkan roadmap

As part of streamlining our GPU stack, we will be shipping ANGLE as the GL system driver on more new devices going forward, with the future expectation that OpenGL ES will only be available through ANGLE. That said, we plan to continue support for OpenGL ES on all devices.

Recommended next steps: Use the developer options to select the ANGLE driver for OpenGL ES and test. For new projects, we strongly encourage using Vulkan for C/C++.

Modern graphics

Android 15 continues our modernization of Android's Canvas graphics system with new functionality:

    • Matrix44 provides a 4x4 matrix for transforming coordinates; use it when you want to manipulate the canvas in 3D.
    • clipShader intersects the current clip with the specified shader, while clipOutShader sets the clip to the difference of the current clip and the shader, each treating the shader as an alpha mask. This enables efficient drawing of complex shapes, as in the sketch below.
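As a rough illustration of how these might combine, here's a sketch of a custom view's onDraw() on API level 35; the gradient mask and rotation values are arbitrary examples:

class MaskedView(context: Context) : View(context) {
    private val paint = Paint().apply { color = Color.BLUE }

    override fun onDraw(canvas: Canvas) {
        // Intersect the clip with a radial gradient used as an alpha mask.
        canvas.clipShader(
            RadialGradient(
                width / 2f, height / 2f, width / 2f,
                Color.BLACK, Color.TRANSPARENT, Shader.TileMode.CLAMP
            )
        )

        // Apply a 4x4 transform: rotate 30 degrees around the Y axis.
        val matrix = Matrix44()
        matrix.rotate(30f, 0f, 1f, 0f)
        canvas.concat(matrix)

        // Content drawn here passes through the mask and the 3D transform.
        canvas.drawRect(0f, 0f, width.toFloat(), height.toFloat(), paint)
    }
}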

More efficient AV1 software decoding

dav1d, the popular AV1 software decoder from VideoLAN, is now available for Android devices that don't support AV1 decoding in hardware. It is up to 3x more performant than the legacy AV1 software decoder, enabling HD AV1 playback for more users, including on some low- and mid-tier devices.

For now, your app needs to opt in to using dav1d by invoking it by name ("c2.android.av1-dav1d.decoder"); it will be made the default AV1 software decoder in a subsequent update. This support is standardized and backported to Android 11 devices that receive Google Play system updates.
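As a sketch of the opt-in, you can request the decoder by name through MediaCodec and fall back to the default AV1 software decoder where dav1d isn't available:

val codec = try {
    // Request dav1d explicitly by its codec name.
    MediaCodec.createByCodecName("c2.android.av1-dav1d.decoder")
} catch (e: Exception) {
    // dav1d isn't present on this build; use the default AV1 decoder.
    MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AV1)
}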

For more on the latest features and developer solutions for Android media and camera, see the "Building modern Android media and camera experiences" I/O talk.

A more private, secure Android

We're always looking to give users more transparency and control over their data while enhancing the core security features of the platform. See the "Safeguarding user security on Android" I/O talk for more of what we're doing to improve user safeguards and protect your app against new threats.

Private space

Private space allows users to create a separate space on their device where they can keep sensitive apps away from prying eyes, under an additional layer of authentication. Private space uses a separate user profile. When the user locks private space, the profile is paused, meaning the apps are no longer active. The user can choose to use the device lock or a separate lock factor for private space.

Private space apps show up in a separate container in the launcher, and are hidden from the recents view, notifications, settings, and other apps while private space is locked. User-generated and downloaded content (media, files) and accounts are separated between the private space and the main space. The system sharesheet and the photo picker can be used to give apps access to content across spaces when private space is unlocked.

There is a known issue with private space in Beta 2 that affects home screen apps; you can find out more in the Beta 2 release notes. We'll have an update in the coming days, so you may wish to wait until then to test your app with private space to make sure it works as expected.

Selected photos access improvement

It is now possible for apps to highlight only the most recently selected photos and videos when partial access to media permissions is granted. This can improve the user experience for apps that frequently request access to photos and videos. This can be achieved by enabling the QUERY_ARG_LATEST_SELECTION_ONLY argument when querying MediaStore through ContentResolver.

val externalContentUri = MediaStore.Files.getContentUri("external")

val mediaColumns = arrayOf(
   FileColumns._ID,
   FileColumns.DISPLAY_NAME,
   FileColumns.MIME_TYPE,
)

val queryArgs = bundleOf(
   // Return only items from the last selection (selected photos access)
   QUERY_ARG_LATEST_SELECTION_ONLY to true,
   // Sort returned items chronologically based on when they were added to the device's storage
   QUERY_ARG_SQL_SORT_ORDER to "${FileColumns.DATE_ADDED} DESC",
   QUERY_ARG_SQL_SELECTION to "${FileColumns.MEDIA_TYPE} = ? OR ${FileColumns.MEDIA_TYPE} = ?",
   QUERY_ARG_SQL_SELECTION_ARGS to arrayOf(
       FileColumns.MEDIA_TYPE_IMAGE.toString(),
       FileColumns.MEDIA_TYPE_VIDEO.toString()
   )
)

val cursor = contentResolver.query(externalContentUri, mediaColumns, queryArgs, null)

Permission checks on content URIs

Android 15 introduces a new set of APIs that perform permission checks on content URIs.

Secured background activity launches

Android 15 protects users from malicious apps and gives them more control over their devices by adding changes that prevent background apps from bringing other apps to the foreground, elevating their privileges, and abusing user interaction. Background activity launches have been restricted since Android 10.

Malicious apps within the same task can launch another app's activity, then overlay themselves on top, creating the illusion of being that app. This "task hijacking" attack bypasses current background launch restrictions because it all occurs within the same visible task. To mitigate this risk, we've added a flag that blocks apps that don't match the top UID on the stack from launching activities. To opt in for all of your app's activities, update the allowCrossUidActivitySwitchFromBelow attribute in your app's AndroidManifest.xml file:

<application android:allowCrossUidActivitySwitchFromBelow="false" >

Once your app has opted into the new protection, specific activities designed to be shared can be opted out using this API within the Activity:

@Override
public void onCreate(Bundle bundle) {
    super.onCreate(bundle);
    setAllowCrossUidActivitySwitchFromBelow(true);
    ...
}

Learn more about restrictions on starting activities from the background.

Safer Intents

Android 15 introduces new security measures to make intents safer and more robust. These changes are aimed at preventing potential vulnerabilities and misuse of intents that can be exploited by malicious apps. There are two main improvements to the security of intents in Android 15:

    • Match target intent-filters: Intents that target specific components must accurately match the target's intent-filter specifications. If you send an intent to launch another app's activity, the target intent component needs to align with the receiving activity's declared intent-filters.
    • Intents must have actions: Intents without an action will no longer match any intent-filters. This means that intents used to start activities or services must have a clearly defined action.
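As an illustration of both rules, here's a sketch of an intent that targets a specific activity while carrying an action its intent-filter declares; the package, class, and action names are hypothetical:

// An action-less Intent().setClassName(...) would no longer match any
// intent-filter, so declare an action the target activity's filter lists.
val intent = Intent("com.example.notes.action.OPEN_NOTE").apply {
    setClassName("com.example.notes", "com.example.notes.NoteActivity")
}
startActivity(intent)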

Important: These improvements will be part of Strict Mode. If you would like to try them out, enable the unsafe intent launch check in your app's onCreate():

@Override
public void onCreate() {
    super.onCreate();
    StrictMode.setVmPolicy(new StrictMode.VmPolicy.Builder()
        .detectUnsafeIntentLaunch()
        .build());
    ...
}
Increased minimum target SDK version from 23 to 24

Android 15 increases the minimum targetSdkVersion required to install apps from 23 to 24, building on the previous minimum target SDK change in Android 14. Outdated apps often lack the latest security protections, making devices and data vulnerable. Requiring apps to meet modern API levels helps ensure better security and privacy.

If you try to install an app that targets a lower API level than 24, you'll see an error raised in Logcat: INSTALL_FAILED_DEPRECATED_SDK_VERSION: App package must target at least SDK version 24, but found 7.

A premium device experience

Android 15 includes features that help your apps improve the experience of using an Android device, including smoother transitions, a more helpful UI, updates for large-screen devices, and more beautiful options for designers.

Improved large screen multitasking

GIF showing example of large screen multitasking

Android 15 beta 2 gives users better ways to multitask on large screen devices. For example, users can pin the taskbar on screen to quickly switch between apps or save their favorite split-screen app combinations for quick access. This means that making sure your app is adaptive is more important than ever. Google I/O has sessions on Building adaptive Android apps and Building UI with the Material 3 adaptive library that can help, and our documentation has more to help you Design for large screens.

Window Insets

In addition to edge-to-edge enforcement, when targeting SDK 35+ on Android 15, Configuration.screenWidthDp and screenHeightDp now include the depth of the system bars. While these values may still be used for resource selection (e.g. res/layout-h500dp), using them for layout calculations is discouraged.
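If you currently derive layout sizes from Configuration, one alternative is to compute the usable window size from WindowMetrics instead; a minimal sketch inside an Activity (which inset types to subtract depends on your UI):

// Compute window size minus system bars instead of relying on
// Configuration.screenWidthDp / screenHeightDp for layout math.
val metrics = windowManager.currentWindowMetrics
val insets = metrics.windowInsets
    .getInsetsIgnoringVisibility(WindowInsets.Type.systemBars())
val usableWidthPx = metrics.bounds.width() - insets.left - insets.right
val usableHeightPx = metrics.bounds.height() - insets.top - insets.bottom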

Picture-in-Picture

Android 15 introduces changes to Picture-in-Picture (PiP) that ensure an even smoother transition when entering PiP mode. This benefits apps that overlay UI elements on top of the main UI that goes into PiP. Currently, onPictureInPictureModeChanged is used to define logic that toggles the visibility of the overlaid UI elements, but this callback is triggered only once the PiP enter or exit animation has completed. Starting in Android 15, we are introducing a new state in the PictureInPictureUiState class: the onPictureInPictureUiStateChanged callback is invoked with isTransitioningToPip() returning true as soon as the PiP enter animation starts, so the app can hide the overlaid UI elements earlier.

override fun onPictureInPictureUiStateChanged(pipState: PictureInPictureUiState) {
    if (pipState.isTransitioningToPip()) {
        // Hide UI elements
    }
}

override fun onPictureInPictureModeChanged(isInPictureInPictureMode: Boolean) {
    if (isInPictureInPictureMode) {
        // Unhide UI elements
    }
}

This quick visibility toggle of UI elements that are irrelevant in a PiP window ensures a smoother, flicker-free PiP enter animation.

Richer Widget Previews with Generated Previews

Example of widget previews with generated previews

Make your widget stand out by showing a personalized preview. Apps targeting Android 15 can provide RemoteViews to the widget picker, so they can update the content in the picker to be more representative of what the user will see. Apps can use the AppWidgetManager setWidgetPreview, getWidgetPreview, and removeWidgetPreview methods to update the appearance of their widgets with up-to-date, personalized information.
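A minimal sketch of publishing a generated preview for the home screen category; the receiver class and layout name are hypothetical:

val appWidgetManager = AppWidgetManager.getInstance(context)
// Build a RemoteViews that reflects what the user will actually see.
val preview = RemoteViews(context.packageName, R.layout.my_widget_preview)
appWidgetManager.setWidgetPreview(
    ComponentName(context, MyWidgetReceiver::class.java),
    AppWidgetProviderInfo.WIDGET_CATEGORY_HOME_SCREEN,
    preview
)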

Predictive Back

Predictive back provides a smoother, more intuitive navigation experience while using gesture navigation, leveraging built-in animations to inform users where their actions will take them to reduce unexpected outcomes. In Android 15, predictive back will no longer be behind a developer option, so system animations such as back-to-home, cross-task, and cross-activity will appear for apps that have properly migrated.

Set VibrationEffect for notification channels

Android 15 beta 2 now supports setting rich vibrations for incoming notifications by channel using NotificationChannel.setVibrationEffect, so your users can distinguish between different types of notifications without having to look at their device.
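As a sketch, a messaging channel might attach a custom waveform; the channel id and timings below are arbitrary examples:

val channel = NotificationChannel(
    "incoming_messages", // hypothetical channel id
    "Messages",
    NotificationManager.IMPORTANCE_HIGH
)
// Pause 0 ms, vibrate 80 ms, pause 60 ms, vibrate 120 ms; -1 = no repeat.
channel.setVibrationEffect(
    VibrationEffect.createWaveform(longArrayOf(0, 80, 60, 120), -1)
)
context.getSystemService(NotificationManager::class.java)
    .createNotificationChannel(channel)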

New data types for Health Connect

Health Connect, the centralized way for users to control and manage access to their fitness data, is adding support for additional data types to support even more health and fitness use cases. This release has two new data types: skin temperature and training plans.

Skin temperature tracking allows users to store and share more accurate temperature data from a wearable or other tracking device.

Training plans are structured workout plans that help a user achieve their fitness goals.

"Choose how you're addressed" system preference

Initially available only in French, and expanding soon to additional gendered languages, a new grammatical gender preference lets users customize how they are addressed by the Android system. The setting can be found in the system language settings: Settings → System → Languages & Input → System languages → Choose how you’re addressed.

French grammatical gender preference

An example of where this preference changes the string being shown

Modern internationalization via ICU 74

Android 15 Beta 2 includes API-related updates from ICU 74. ICU 74 contains updates from Unicode 15.1, including new characters, emoji, security mechanisms and corresponding APIs and implementations, as well as updates to CLDR 44 locale data with new locales and various additions and corrections.

CJK Variable Font

Starting from Android 15, the font file for Chinese, Japanese, and Korean languages, NotoSansCJK, is now a variable font. Variable fonts open up new possibilities for creative typography in CJK languages. Designers can explore a broader range of styles and create visually striking layouts that were previously difficult or impossible to achieve.

Examples of the variable font for Chinese, Japanese, and Korean languages with NotoSansCJK

New Japanese Hentaigana Font

In Android 15, a new font file for old Japanese Hiragana (known as Hentaigana) is bundled by default. The unique shapes of Hentaigana characters can add a distinctive flair to artwork or design while also helping to preserve accurate transmission and understanding of ancient Japanese documents.

Example of the new font file for Hentaigana characters

Avoiding clipped text

Some cursive fonts, and characters in languages with complex shaping, may draw letters in the previous or next character’s area, and such letters may be clipped at the starting or ending position. Starting in Android 15, TextView can allocate additional width for such letters and add extra padding to the left so that they are not clipped.

Because this changes how TextView determines its width, TextView allocates more width by default if the app targets Android 15 or later. You can enable or disable this behavior by calling the setUseBoundsForWidth API on TextView. Because adding left padding may misalign existing layouts, that padding is not added by default, even for apps targeting Android 15 or later.

To add extra padding to prevent clipping, call setShiftDrawingOffsetForStartOverhang.

Example of clipped text in English
<TextView
    android:fontFamily="cursive"
    android:text="java" />

Example of nonclipped text in English
<TextView
    android:fontFamily="cursive"
    android:text="java"
    android:useBoundsForWidth="true"
    android:shiftDrawingOffsetForStartOverhang="true" />

Example of clipped text in Thai
<TextView
    android:text="คอมพิวเตอร์" />

Example of nonclipped text in Thai
<TextView
    android:text="คอมพิวเตอร์"
    android:useBoundsForWidth="true"
    android:shiftDrawingOffsetForStartOverhang="true" />

App compatibility

If you haven't yet tested your app for compatibility with Android 15, now is the time to do it, with many more devices entering the program. In the weeks ahead, you can expect more users to try your app on Android 15 and raise issues they find.

To test for compatibility, install your published app on a device or emulator running Android 15 beta and work through all of your app's flows. Review the behavior changes to focus your testing. After you've resolved any issues, publish an update as soon as possible.

To give you more time to plan for app compatibility work, we’re letting you know our Platform Stability milestone well in advance.

Timeline for Platform Stability milestone rollout

At this milestone, we’ll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. We’re expecting to reach Platform Stability in June 2024, and from that time you’ll have several months before the official release to do your final testing. The release timeline details are here.

Get started with Android 15

Today's beta release has everything you need to try the Android 15 features, test your apps, and give us feedback. Now that we’re in the beta phase, you can check here to get information about enrolling your device; enrolling a supported Pixel device will get you this and future Android Beta updates over the air. If you don’t have a supported device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you're already in the Android 14 QPR beta program on a supported device, you'll automatically get updated to Android 15 Beta 2.

For the best development experience with Android 15, we recommend that you use the latest version of Android Studio Koala. Once you’re set up, here are some of the things you should do:

    • Try the new features and APIs - your feedback is critical during the early part of the developer preview and beta program. Report issues in our tracker on the feedback page.
    • Test your current app for compatibility - learn whether your app is affected by changes in Android 15; install your app onto a device or emulator running Android 15 and extensively test it.

We’ll update the beta system images and SDK regularly throughout the Android 15 release cycle. Read more here.

For complete information, visit the Android 15 developer site.


Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates.

OpenGL is a registered trademark and the OpenGL ES logo is a trademark of Hewlett Packard Enterprise used by permission by Khronos.

Vulkan and the Vulkan logo are registered trademarks of the Khronos Group Inc.

VideoLAN cone Copyright (c) 1996-2010 VideoLAN. This logo or a modified version may be used or modified by anyone to refer to the VideoLAN project or any product developed by the VideoLAN team, but does not indicate endorsement by the project.

All trademarks, logos and brand names are the property of their respective owners.

15 Things to know for Android developers at Google I/O

Posted by Matthew McCullough, Vice President, Product Management, Android Developer  

AI is unlocking experiences that were not even possible a few years ago, and we’ve been hard at work reimagining Android with AI at the core, to help enable you to build a whole new class of apps. At this year’s Google I/O, we’re covering how new tools like Gemini can power building the next generation of apps on Android. Plus, we showcased a range of updates to our tools and services grounded in productivity, helping you make it faster and easier to build excellent experiences across form factors. Let’s dive in!

Powering the next generation of Apps with AI

#1: AI in your tools, with Gemini in Android Studio

Gemini in Android Studio (formerly Studio Bot) is your coding companion for Android development, and thanks to your feedback since its preview at last year’s Google I/O, we’ve evolved our models, expanded to over 200 countries and territories, and brought it into the Gemini family of products. Earlier today, we previewed a number of new features coming soon, like Code suggestions, App Quality Insights that leverage Gemini, and a preview of the multi-modal inputs that are coming using Gemini 1.5 Pro. You can read more about the updates here, and make sure to check out What’s new in Android development tools.

#2: Building with Generative AI

Android provides the solution you need to build Generative AI apps. You can use our most capable models over the Cloud with the Gemini API in Google AI or Vertex AI for Firebase directly in your Android apps. For on-device, Gemini Nano is our most efficient model. We’re working closely with a few early adopters such as Patreon, Grammarly, and Adobe to ensure we’re creating the best APIs that unlock the most innovative experiences. For example, Adobe is experimenting with Gemini Nano to enhance the on-device experience of Acrobat AI Assistant, a tool that allows their users to summarize and interact with documents. Be sure to check out the Build your own generative AI powered Android app, Android on-device gen AI under the hood, and the What’s New in Android sessions to learn more!

Moving image of Gemini Nano operating in Adobe

Excellent apps, across devices

#3: Think adaptive: apps on phones, foldables, tablets and more

Build and design apps that adapt beyond the phone, with the new Compose adaptive layout libraries built with Material guidance in beta. Add rich stylus and keyboard support to increase user productivity. Check out three of our key Android adaptive sessions at Google I/O: Designing adaptive apps, Building adaptive Android apps, and Increase user productivity with large screens and accessories.

#4: Enhance homescreens with Widgets and Jetpack Glance

Jetpack Glance 1.1 is now available as a release candidate and lets you build high quality widgets using your Compose skills. Check out our new canonical layouts, design guidance, and Figma updates to the Android UI kit. To learn more, check out our Improve the user experience of your Android app workshop and Build Android widgets with Jetpack Glance technical session.

#5-9: Come back here tomorrow and Thursday!

We’ll continue to share more updates for Android Developers throughout Google I/O, so check back here tomorrow!

Developer Productivity

#10: Use Kotlin Multiplatform for sharing business logic

Kotlin Multiplatform (KMP) enables sharing Kotlin code across different platforms and several of our Jetpack libraries, like DataStore and Room, have already been migrated to take advantage of KMP. We use Kotlin Multiplatform within Google and recommend using KMP for sharing business logic between platforms. Learn more about it here.
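For a flavor of how sharing works, KMP's expect/actual mechanism lets common code call into platform-specific implementations; a minimal sketch with illustrative names:

// commonMain: shared business logic with a platform hook
expect fun platformName(): String

class Greeting {
    fun greet(): String = "Hello from ${platformName()}!"
}

// androidMain: the Android-specific actual
actual fun platformName(): String = "Android"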

#11: Compose: Shared Elements, performance improvements and more

The upcoming Compose June ‘24 release is packed with the features you’ve been asking for! Shared element transitions, lazy list item reordering animations, strong skipping mode, performance improvements, a new lazy flow layout and more. Read more about it in our blog.

#12: Android Studio: the latest preview, with Gemini and more

Android Studio Koala 🐨 Feature Drop (2024.1.2), available today in the canary channel, builds on top of IntelliJ 2024.1 and adds new innovative features unlocked by Gemini, such as insights for crashes in App Quality Insights, code transformations, and a Gemini API starter template to get you started quickly with Gemini. Additionally, new features such as USB speed detection, a shortcut UI to control device settings, a new way to sign in to Google services, an updated and speedier UI for profilers with a new task-centric approach, and a deep integration with the Google Play SDK Index are intended to make the development process extremely productive. Read more here.

And the latest from the world of Mobile

#13: Grow your business with the latest Google Play updates

Discover new ways to attract and engage users with enhanced custom store listings. Optimize revenue with expanded payment options. Reinforce trust through secure, high-quality experiences made easier with our latest SDK Console improvements. Learn about these updates and more, including our new vertical approach, in our blog.

#14: Simplify app compliance with Checks

Streamline your app's privacy compliance with Checks, Google's AI-powered compliance solution! Checks empowers developers to swiftly identify, address, and resolve privacy issues, enabling you to launch apps faster and with confidence. Harness the power of automation with Checks' intelligent reports, saving you valuable time and resources. Get started now at checks.google.com.

#15: And of course, Android 15

…but for that, you’ll have to stay tuned tomorrow, when we’ve got a bit more up our sleeve!

Google I/O 2024: What’s new in Android Development Tools

Posted by Mayank Jain – Product Manager, Android Studio

At Google I/O 2024, we announced an exciting new set of features and tools aimed at making Android development faster and easier. We also shared updates to Android Studio that will help you leverage AI and make it easier for you to build high quality apps for Android across the Android ecosystem.

You can check out the What’s new in Android Developer Tools session at Google I/O 2024 to see some of the new features in action or better yet, try them out yourself by downloading Android Studio Koala 🐨 Feature Drop in the preview release channel. Here’s a look at our announcements:

Leverage Gemini in Android Studio

Since launching AI features in Android Studio last year, we continue to evolve our underlying models, integrate your feedback, and expand availability to more countries and territories so that you can leverage AI in your workflow and become a more productive Android app developer. Using the built-in AI privacy controls, you can opt in to using the latest AI feature improvements that are tailored for your Android app project.

Code suggestions with Gemini in Android Studio

You can now provide custom prompts for Gemini in Android Studio to generate code suggestions. After you enable Gemini from the View > Tool Windows > Gemini tool window, right-click in the code editor and select Gemini > Transform selected code from the context menu to see the prompt field. You can then prompt Gemini to generate a code suggestion that either adds new code or transforms selected code. You can ask Gemini to simplify complex code by rewriting it, perform very specific code transformations such as “make this code idiomatic,” or generate new functions you describe. Android Studio then shows you Gemini’s code suggestion as a code diff, so that you can review and accept only the suggestions you want.

Code suggestions with Gemini in Android Studio

Gemini for recommendations on crash reports

App Quality Insights seamlessly incorporates both Firebase Crashlytics and Android Vitals data into Android Studio so you can access the most important app stability related information without having to switch tools.

You can now use Gemini in Android Studio to analyze your crash reports, generate insights which are shown in the Gemini tool window, provide a crash summary, and when possible recommend next steps, including sample code and links to relevant documentation.

You can generate all of this information directly from the App Quality Insights tool window in Android Studio after you enable Gemini from View > Tool Windows > Gemini.

Gemini for recommendations on crash reports

Integrate Gemini API into your app with a starter template

Start prototyping with Gemini models in your apps with our new starter app template provided in Android Studio. In this app template, you can issue prompts directly to the Gemini API, add image sources as input, and display the responses on the screen. Additionally, use Google AI Studio to craft custom prompts for your app.
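To give a flavor of what a prompt call looks like, here's a hedged sketch using the Google AI client SDK for Android; the model name and API key handling are illustrative assumptions, not the template's exact code:

import com.google.ai.client.generativeai.GenerativeModel

suspend fun askGemini(prompt: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash",     // placeholder model name
        apiKey = BuildConfig.GEMINI_API_KEY // hypothetical build config field
    )
    return model.generateContent(prompt).text
}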

When you are ready to scale your AI features to production with Google Cloud infrastructure, you can also access the powerful capabilities of Gemini models through Vertex AI. This is Google’s fully-managed development platform designed for building and deploying generative AI. Whether you simply need world class inference capabilities, or want to build end-to-end AI workflows with Vertex, the Gemini API is a great solution.

Integrate Gemini API into your app with a starter template

Gemini 1.5 Pro coming to Android Studio

We previously announced that Gemini in Android Studio uses the Gemini 1.0 Pro model to help you by answering Android development questions, generating code, finding resources, or explaining best practices. In this preview stage of Gemini in Android Studio, we are offering Gemini 1.0 Pro at no cost for all users for now. Gemini 1.0 Pro is a versatile model, making it ideal to scale. However, we acknowledge that its quality of responses may be limited in some cases. Based on your feedback, we are committed to improving the quality for Android development, and excited to add more features using Gemini to make your developer experience even more productive.

Along this journey, the Gemini 1.5 Pro model will be coming to Android Studio later this year. Equipped with a Large Context Window, this model notably leads to higher quality responses, and unlocks use cases like multimodal input that you might have seen in the Google I/O 2024 sessions. Stay tuned for more updates on how you can access more capable models in Android Studio.

Productivity enhancements

Release Monitoring with Firebase

Today we announced the general availability of the Firebase Release Monitoring Dashboard. The Firebase Release Monitoring Dashboard is a single dashboard powered by Firebase Crashlytics to monitor the most recent production releases of your Android app. It updates in real time to give you a high-level view of the most important release metrics, like crash-free sessions, comparisons, and benchmarking based on your previous releases.

Android Device Streaming

Android Device Streaming, powered by Firebase, lets you securely connect to remote physical Android devices hosted in Google's data centers. It is a convenient way to test your app against physical units of some of the latest Android devices, including the Google Pixel 8 and 8 Pro, Pixel Fold, and more.

Starting today, Android Device Streaming now includes the following devices, in addition to the portfolio of 20+ device models already available:

    • Samsung Galaxy Fold5
    • Samsung Galaxy S23 Ultra
    • Google Pixel 8a

Additionally, if you’re new to Firebase, Android Studio automatically creates and sets up a no-cost Firebase project for you when you sign in to Koala Feature Drop to use Device Streaming. So, you can get to streaming the device you need much faster. Learn more about Android Device Streaming quotas, including promotional quota for the Firebase Blaze plan projects available for a limited time.

Connect to the latest physical Android devices in moments with Android Device Streaming, powered by Firebase

USB cable speed detection

Did you know that USB cable bandwidth varies from 480 Mbps (USB-2) up to 40,000 Mbps (USB-4)? Android Studio Koala Feature Drop now makes it trivial to differentiate low-performing USB cables from high-performing ones.

When you connect an Android device, Android Studio automatically detects the device and USB cable bandwidth and warns you if there’s a mismatch in USB bandwidth.

Note: USB cable speed detection requires an updated ADB found in Android SDK Platform Tools v34+, and is currently available for macOS and Linux.

USB cable speed detection.
Learn more about USB speeds here

A new way to sign in with Google in Android Studio

It’s now easier to sign in to multiple Google services with one authentication step. Whether you want to use Gemini in Android Studio, Firebase for Android Device Streaming, Google Play for Android Vitals reports, or all these useful services, the new sign in flow makes it easier to get up and running. If you’re new to Firebase and want to use Android Device Streaming, Android Studio automatically creates a project for you, so you can quickly start streaming a real physical Firebase device. With granular permissions scoping, you will always be in control of which services have access to your account. To get started, just click the profile avatar and sign in with your developer account.

A new way to sign in with Google in Android Studio

Device UI setting shortcut

Using the device UI setting shortcut, you can now effortlessly configure your devices with desired settings such as dark theme, font size, display size, app language, and more, all directly through the Running Devices window. You can now test and debug your UI seamlessly for any of the scenarios required by your use case.

Device UI settings shortcuts

Faster and improved Profiler with a task-centric approach

The internals of the Android Studio Profiler have been dramatically improved. Popular profiling tasks like capturing a system trace with profileable apps now start up to 60% faster.*

We’ve redesigned the profiler to make it easier to start the task you’re interested in, whether it’s profiling your app’s CPU, memory, or power usage. For example, initiating a system trace task to profile and improve your app’s startup time is integrated right in the UI as you open the profiler.

Faster and improved Profiler with a task-centric approach 
*Based on internal data, as tested in April 2024

Google Play SDK Index integration

Android Studio is integrated with the Google Play SDK Index to inform you when there are known policy or version issues with SDKs used by your app. This enables you to update those dependencies and avoid issues that could prevent you from publishing new versions of your app.

In the Android Studio Koala Feature Drop release, the integration has been expanded to also include warnings from the Google Play SDK Console. This gives you a complete view of any potential version or policy issues in your dependencies before submitting your app to the Google Play Console.

Notes from SDK authors are now also displayed directly in Android Studio to save you time.

A warning from the SDK Index with the corresponding SDK author note

Preview tiles for Wear OS apps

Android Studio now has preview support for Tiles. You can now iterate much more quickly when creating tiles, and quickly see what a Tile looks like on different configurations without needing to run it on a device.

Tiles previews usage for Wear OS apps

Generate synthetic sensor data for testing on Wear OS apps

To help simulate real-life scenarios, you can now generate synthetic (fake) data for a Wear OS emulator for health-related sensors such as heart rate, speed, steps, and more. You are now able to set up and perform testing for a multi-sport training session in minutes, end-to-end in Android Studio, without ever leaving your desk.

Generate synthetic sensor data for testing on Wear OS apps

Compose Glance widget previews

Android Studio Koala Feature Drop makes it easy to preview your Jetpack Compose Glance widgets (1.1.0-rc01) directly within the IDE. Catch potential UI issues and fine-tune your widget's appearance early in the development process. Learn more about how to get started.

Previews for Compose Glance widgets

Live Edit for Compose enabled by default

Live Edit for Compose can accelerate your Compose development experience by automatically deploying code changes to the running application on an emulator or physical device. Live Edit can help you see the effect of updates to UX elements—for example new composables, modifier updates, and animations—on the overall app experience. As you become more familiar with Live Edit you will find many creative ways it can help improve your development experience and productivity.

In Android Studio Koala Feature Drop, Live Edit is enabled by default in manual mode and has increased stability and more robust change detection, including support for import statements.

Compose Preview Screenshot Testing with Now in Android app

Compose preview screenshot testing plugin (alpha)

Host-side screenshot testing is an easy and powerful way to test UIs and prevent regressions. Today, the first alpha version of the Compose Preview Screenshot Testing plugin is available as a separate plugin, to be used together with AGP 8.5.0-beta01 or higher. Add your Compose Previews to the src/main/screenshotTest folder and run the task to generate a diff report after UI updates. The generated HTML test report lets you visually detect any changes to your app’s UI.

This alpha version of the plugin is designed for rapid iteration and feedback. We plan to merge it back into AGP in the future, but for now, this separate plugin lets us experiment and improve the feature quickly. Learn more about how to get started.
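For orientation, setup looks roughly like the following in a module's build file; the plugin id, version, experimental flag, and task names below reflect our reading of the alpha documentation and may change:

// build.gradle.kts (module): a sketch, not the definitive setup
plugins {
    id("com.android.compose.screenshot") version "0.0.1-alpha01"
}

// gradle.properties: opt in to the experimental feature
// android.experimental.enableScreenshotTest=true

// Record golden images, then validate after UI updates:
//   ./gradlew updateDebugScreenshotTest
//   ./gradlew validateDebugScreenshotTest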

IntelliJ Platform Update (2024.1)

Android Studio Koala Feature Drop includes the IntelliJ 2024.1 platform release, which comes with some very useful IDE improvements:

    • An overhauled terminal featuring both visual and functional enhancements to streamline command-line tasks. Learn more in this blog post.
    • A new feature called sticky lines in the editor simplifies working with large files and exploring new codebases. This feature keeps key structural elements, like the beginnings of classes or methods, pinned to the top of the editor as you scroll and provides an option to promptly navigate through the code by clicking on a pinned line.
    • Basic IDE functionalities like code highlighting and completion now work for Java and Kotlin during project indexing, which should enhance your startup experience.
    • You can now scale the IDE down to 90%, 80%, or 70%, giving you the flexibility to adjust the size of IDE elements both upward and downward.

Read the detailed IntelliJ release notes here.

To summarize

Android Studio Koala Feature Drop (2024.1.2) is now available in the Android Studio canary channel with

    • Gemini in Android Studio
        • Code suggestions with Gemini in Android Studio
        • Gemini for recommendations on crash reports
        • Gemini API starter app template to help integrate Gemini into your app (also available in Koala 2024.1.1)

    • Productivity enhancements
        • Release Monitoring with Firebase
        • Android Device Streaming
        • USB cable speed detection
        • A new way to sign in with Google in Android Studio
        • Device UI setting shortcut
        • Faster and improved Profiler with a task-centric approach
        • Google Play SDK Index integration
        • Preview tiles for Wear OS apps
        • Generate synthetic sensor data for testing on Wear OS apps
        • Compose Glance widget previews
        • Live Edit for Compose enabled by default
        • Compose preview screenshot testing plugin (alpha) - to be installed additionally

    • IntelliJ Platform Update (2024.1): also available in Koala 2024.1.1
        • An overhauled terminal
        • Sticky lines in the editor simplify working with large files
        • Code highlighting and completion now work during project indexing
        • Flexible IDE size adjustments

And last, a quick reminder that going forward, initial Android Studio releases will have the .1 Android Studio major version and introduce the updated IntelliJ platform version, while subsequent Feature Drops will increase the Android Studio major version to .2 and focus on introducing Android-specific features that help you be more productive in Android app development.

How to get started

Ready to try the exciting new features in Android Studio?

You can download the canary version Android Studio Koala 🐨 Feature Drop (2024.1.2) today to incorporate these new features into your workflow or try the stable version Android Studio Jellyfish 🪼. You can also install them side by side by following these instructions.

As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!