Android Developers Blog

#WeArePlay | How Zülal is using AI to help people with low vision

Posted by Leticia Lago – Developer Marketing

Born in Istanbul, Türkiye, with limited sight, Zülal has been a power user of visual assistive technologies since the age of 4. When she lost her sight completely at 10 years old, she found herself reliant on technology to help her see and experience the world around her.

Today, Zülal is the founder of FYE, her solution to the issues she found with other visual assistive technologies. The app empowers people with low vision to be inspired by the world around them. Employing a team of 4, she heads up technological development and user experience for the app.

Zülal shared her story in our latest film for #WeArePlay, which celebrates people around the world building apps and games. She shared her journey from uploading pictures of her parents to a computer to get descriptions of them as a child, to developing her own visual assistive app. Find out what’s next for Zülal and how she is using AI to help people like herself.

Tell us more about the inspiration behind FYE.

Today, there are around 330 million people with moderate to severe visual impairment. Visual assistive technology is life-changing for these people, giving them back a sense of independence and a connection to the world around them. I’m a poet and composer, and in order to create I needed this tech so that I could see and describe the world around me. Before I developed FYE, the visual assistive technology I relied on was falling short. I wanted to take back control. I didn’t want to sit back, wait and see what technology could do for me - I wanted to harness its power. So I did.

Why was it important for you to build FYE?

I never wanted to be limited by having low vision. I’ve always thought, how can I make this better? How can I make my life better? I want to do everything, because I can. I really believe that there’s nothing I can’t do. There’s nothing WE can’t do. Having a founder like me lead the way in visual assistive technology illustrates just that. We’re taking back control of how we experience the world around us.

What’s different about FYE?

With our app, I believe our audience can really see the world again. It uses a combination of AI and human input to describe the world around them to our users. It incorporates an AI model trained on a dataset of over 15 million data points, so it really encompasses all the varying factors that make up the world of everyday visual experiences. The aim was to have descriptions as vivid as if I was describing my surroundings myself. It’s the small details that make a big difference.

What’s next for your app?

We already have personalized AI outputs so the user can create different AI assistants to suit different situations. You can use it to work across the internet as you’re browsing or shopping. I use it a lot for cooking - where the AI can adapt and learn to suit any situation. We are also collaborating with places where people with low vision might struggle, like the metro and the airport. We’ve built in AI outputs in collaboration with these spaces so that anyone using our app will be able to navigate those spaces with confidence. I’m currently working on evolving From Your Eyes as an organization, reimagining the app as one element of the organization under the new name FYE. Next, we’re exploring integrations with smart glasses and watches to bring our app to wearables.

Discover more #WeArePlay stories and share your favorites.




Android Device Streaming, powered by Firebase, is now in Beta

Posted by Adarsh Fernando, Senior Product Manager, Android Developer Tools

Validating your app on a range of Android screens is an important step in developing a high quality Android app. However, getting access to the device you need, when you need it, can be challenging and time consuming. From trying to reproduce a device-specific behavior on a Samsung device to testing your adaptive app layouts on the Google Pixel Fold, having the right device at the right time is critical.

To address this app developer use case, we created Android Device Streaming, powered by Firebase. With just a few clicks, you and your team can access real physical devices, such as the latest Pixel and Samsung devices, and use them in the IDE in many of the ways you would use a physical device sitting on your desk.

Animation of using Device Streaming in Android Studio
Android Device Streaming, powered by Firebase, available in Android Studio Jellyfish

Today, Android Device Streaming is in beta and is available to all Android developers using Android Studio Jellyfish or later. We’ve also added new devices to the catalog and introduced flexible pricing that provides low-cost access to the latest Android devices.

Read below to learn what changes are in this release, as well as common questions around uses, security, and pricing. However, if you want to get started right away and try Android Device Streaming at no cost, see our getting started guide.

What can you do with Android Device Streaming?

If you’ve ever used Device Mirroring, you know that Android Studio lets you see the screen of your local physical device within the IDE window. Without having to physically reach out to your device, you’re able to change the device orientation, change the posture of foldables, simulate pressing physical buttons, interact with your app, and more. Android Device Streaming leverages these same capabilities, allowing you to connect and interact with remote physical devices provided by Firebase.

Screen capture of using the debugger with Android Device Streaming
Using the Debugger with Android Device Streaming

When you use Android Studio to request a device from Android Device Streaming, the IDE establishes a secure ADB over SSL connection to the device. The connection also lets you use familiar tools in Android Studio that communicate with the device, such as the Debugger, Profiler, Device Explorer, Logcat, Compose Live Edit, and more. These tools let you more accurately validate, test, and debug the behavior of your app on real OEM hardware.

What devices would my team have access to?

Android Device Streaming gives you and your team access to a number of devices running Android versions 8.1 through 14. You have access to the latest flagship devices from top device manufacturers, such as Google Pixel and Samsung. You can expand testing your app across more form factors with access to the latest foldables and tablets, such as the Samsung Tab S8 Ultra.

Screen capture of browsing the list of devices and selecting the one you want to use in Android Studio
Browse and select devices you want to use from Android Studio

And we’re frequently adding new devices to our existing catalog of 20+ device models, such as the following recent additions:

    • Samsung Galaxy Z Fold5
    • Samsung Galaxy S23 Ultra
    • Google Pixel 8a

Without having to purchase expensive devices, each team member can access Firebase’s catalog of devices in just a few clicks, for as long as they need—giving your team confidence that your app looks great across a variety of popular devices.


Google OEM partner logos - Samsung, Google Pixel, Oppo, and Xiaomi

As we mentioned at Google I/O ‘24, we’re partnering with top Original Equipment Manufacturers (OEMs), such as Samsung, Google Pixel, Oppo, and Xiaomi, to expand device selection and availability even further in the months to come. This helps the catalog of devices grow and stay ahead of ecosystem trends, so that you can validate that your apps work great on the latest devices before they reach the majority of your users.

Is Android Device Streaming secure?

Android Device Streaming, powered by Firebase, takes the security and privacy of your device sessions very seriously. Firebase devices are hosted in secure global data centers and Android Studio uses an SSL connection to connect to the device.

A device on which you’ve installed and tested your app is never shared with another user or Google service without first being completely erased and factory reset. When you’re done using a device, you can erase and factory reset it yourself by clicking “Return and Erase Device”. The same applies if the session expires and the device is returned automatically.

Screen capture of Return and Erase Device function in Android Device Streaming
When your session ends, the device is fully erased and factory reset.

How much does Android Device Streaming cost?

Depending on your Firebase project’s pricing plan, Android Device Streaming is available with the following pricing:

    • Starting June 1, 2024, for a promotional period:
        • Spark plan (no cost): 120 no-cost minutes per project, per month
        • Blaze plan: 120 no-cost minutes per project, per month, then 15 cents for each additional minute
    • On or around February 2025, the promotional period will end and billing will be based on the following quota limits:
        • Spark plan (no cost): 30 no-cost minutes per project, per month
        • Blaze plan: 30 no-cost minutes per project, per month, then 15 cents for each additional minute
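
For example, during the promotional period, a Blaze plan project that uses 150 minutes in a month would be billed only for the 30 minutes beyond its no-cost quota: 30 × $0.15 = $4.50.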

With no monthly or yearly contracts, Android Device Streaming’s per-minute billing provides unparalleled flexibility for you and your team. And importantly, you don’t pay for the time required to set up the device before you connect, or to erase it after you end your session. This allows you and your team to save time and costs compared to purchasing and managing your own device lab.

To learn more, see Usage levels, quotas, and pricing.

What’s next

We’re really excited for you and your team to try Android Device Streaming, powered by Firebase. We think it’s an easy and cost-effective way for you to access the devices you need, when you need them, and right from your IDE, so that you can ensure the best quality and functionality of your app for your users.

The best part is, you can try out this new service in just a few clicks and at no cost. And our economical per-minute pricing provides increased flexibility for your team to go beyond the monthly quota, so that you only pay for the time you’re actively connected to a device—no subscriptions or long-term commitments required.

You can expect the service to add more devices from top OEM partners to the catalog, to ensure that device selection remains up-to-date and becomes increasingly diverse. Try Android Device Streaming today and share your experience with the Android developer community on LinkedIn, Medium, YouTube, or X.

Top 3 Updates for Building Excellent Apps at Google I/O ‘24

Posted by Tram Bui, Developer Programs Engineer, Developer Relations

Google I/O 2024 was filled with the latest Android updates, equipping you with the knowledge and tools you need to build exceptional apps that delight users and stand out from the crowd.

Here are our top three announcements for building excellent apps from Google I/O 2024:

#1: Enhancing User Experience with Android 15

Android 15 introduces a suite of enhancements aimed at elevating the user experience:

    • Edge-to-Edge Display: Take advantage of the default edge-to-edge experience offered by Android 15. Design interfaces that seamlessly extend to the edges of the screen, optimizing screen real estate and creating an immersive visual experience for users (see the sketch after this list).
    • Predictive Back: Predictive back can enhance navigation fluidity and intuitiveness. The system animations are no longer behind a Developer Option, which means users will be able to see helpful preview animations. Predictive back support is available for both Compose and Views.
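
To illustrate the first item, here’s a minimal sketch of opting in to edge-to-edge explicitly, assuming the enableEdgeToEdge() helper from androidx.activity (Android 15 applies edge-to-edge by default, but calling it keeps behavior consistent on earlier versions):

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.activity.enableEdgeToEdge

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        // Draw behind the system bars; insets are then handled in the UI,
        // for example with Modifier.safeDrawingPadding() in Compose.
        enableEdgeToEdge()
        super.onCreate(savedInstanceState)
        setContent {
            // App content extends to the screen edges.
        }
    }
}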

#2: Stylus Support on Large Screens

Android's enhanced stylus support brings exciting capabilities:

    • Stylus Handwriting: Android now supports handwriting input in text fields for both Views and Compose. Users can seamlessly input text using their stylus without having to switch input methods, which can offer a more natural and intuitive writing experience.
    • Reduced Stylus Latency: To enhance the responsiveness of stylus interactions, Android introduces two new APIs designed to lower stylus latency. Android developers have seen great success with our low latency libraries, with Infinite Painter achieving a 5x reduction in latency, from 60-90 ms down to 8-16 ms.

#3: Wear OS 5: Watch Face Format, Power Conservation, and Performance

In the realm of Wear OS, we are focused on power conservation and performance enhancements:

    • Enhanced Watch Face Format: We've introduced improvements to the Watch Face Format, making it easier for developers to customize and optimize watch faces. These enhancements can enable the creation of more responsive, visually appealing watch faces that delight users.
    • Power Conservation: Wear OS 5 prioritizes power efficiency and battery conservation. Now available in developer preview along with a new emulator, you can leverage these improvements to create Wear OS apps that deliver exceptional battery life without compromising functionality.

There you have it - the top updates from Google I/O 2024 to help you build excellent apps. Excited to explore more? Check out the full playlist for deeper insights into these announcements and other exciting updates unveiled at Google I/O.

A Developer’s Roadmap to Predictive Back (Views)

Posted by Ash Nohe and Tram Bui – Developer Relations Engineers

Before you read on, note that this topic is scoped to Views. Predictive Back with Compose is easier to implement and is not covered in this blog post. To learn how to implement Predictive Back with Compose, see the Add predictive back animations codelab and the I/O workshop Improve the user experience of your Android app.

This blog post aims to shed light on the various dependencies and requirements for supporting predictive back animations in your Views-based app.

First, view the Predictive Back Requirements table to understand if a particular animation requires a manifest flag, a compileSDK version, additional libraries or hidden developer options to function.

Then, start your quest. Here are your milestones:

  1. Upgrade Kotlin milestone
  2. Back-to-home animation milestone
  3. Migrate all activities milestone
  4. Fragment milestone
  5. Material Components (Views) milestone
  6. [Optional] AndroidX transitions milestone

Milestones

Upgrade Kotlin milestone

The first milestone is to upgrade to Kotlin 1.8.0 or higher, which is required for other Predictive Back dependencies.

Upgrade to Kotlin 1.8.0 or higher

Back-to-home animation milestone

The back-to-home animation is the keystone predictive back animation.

To get this animation, add android:enableOnBackInvokedCallback="true" in your AndroidManifest.xml for your root activity if you are a multi-activity app (see per-activity opt-in) or at the application level if you are a single-activity app. After this, you’ll see both the back-to-home animation and a cross-task animation where applicable, which are visible to users in Android 15+ and behind a developer option in Android 13 and 14.

If you are intercepting back events in your root activity (e.g. MainActivity), you can continue to do so but you’ll need to use supported APIs and you won’t get the back-to-home animation. For this reason, we generally recommend you only intercept back events for UI logic; for example, to show a dialog asking the user to save before they quit.
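
As a concrete sketch of that recommendation, here is a minimal example using the supported AndroidX OnBackPressedCallback API (the dialog helper is hypothetical):

import android.os.Bundle
import androidx.activity.OnBackPressedCallback
import androidx.appcompat.app.AppCompatActivity

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // Disabled by default so the system handles back (and the
        // back-to-home animation still plays).
        val callback = object : OnBackPressedCallback(false) {
            override fun handleOnBackPressed() {
                // UI logic only: ask the user to save before quitting.
                showSaveBeforeQuitDialog() // hypothetical helper
            }
        }
        onBackPressedDispatcher.addCallback(this, callback)
        // Flip callback.isEnabled = true only while there are unsaved changes.
    }

    private fun showSaveBeforeQuitDialog() {
        // Show a save/discard dialog here.
    }
}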

See the Add support for the predictive back gesture guide for more details.


Migrate all activities milestone

If you are a multi-activity app, you’ll need to opt in and handle back events within those activities as well to get a system-controlled cross-activity animation. Learn more about per-activity opt-in, available for devices running Android 14+. The cross-activity animation is visible to users in Android 15+ and behind a developer option in Android 13 and 14.

Custom cross activity animations are also available with overrideActivityTransition.


Fragment milestone

Next, you’ll want to focus on your fragment animations and transitions. This requires updating to AndroidX fragment 1.7.0 and transition 1.5.0 or later and using Animator or AndroidX Transitions. Assuming these requirements are met, your existing fragment animations and transitions will animate in step with the back gesture. You can also use material motion with fragments, as sketched below. Most material motions support predictive back as of 1.12.0-alpha02 or higher, including MaterialFadeThrough, MaterialSharedAxis and MaterialFade.
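
For example, a minimal sketch of a fragment using material motion (assuming com.google.android.material 1.12.0-alpha02+ alongside the AndroidX versions above):

import android.os.Bundle
import androidx.fragment.app.Fragment
import com.google.android.material.transition.MaterialSharedAxis

class DetailFragment : Fragment() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // With fragment 1.7.0+ and transition 1.5.0+, these transitions
        // animate in step with the predictive back gesture.
        enterTransition = MaterialSharedAxis(MaterialSharedAxis.X, /* forward= */ true)
        returnTransition = MaterialSharedAxis(MaterialSharedAxis.X, /* forward= */ false)
    }
}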

Don’t strive to make your fragment transitions look like the system’s cross-activity transition. We recommend this full screen surface transition instead.

Learn more about Fragments and Predictive Back.


Material Components milestone

Finally, you’ll want to take advantage of the Material Component View animations available for Predictive Back. Learn more about available components.


After this, you’ve completed your quest to support Predictive Back animations in your Views-based app.

[Optional] AndroidX Transitions milestone

If you’re up for more, you might also ensure your AndroidX transitions are supported with Predictive Back. Read more about AndroidX Transitions and the Predictive Back Progress APIs.



Google @ KotlinConf 2024: A Look Inside Multiplatform Development with KMP and more

Posted by Murat Yener – Developer Relations Engineer

Following our recent Google I/O announcement recommending Kotlin Multiplatform (KMP) for sharing business logic across mobile, web, server, and desktop platforms, and our move to use KMP in Google Workspace, KotlinConf 2024 was the next moment to share the highlights and connect with the Kotlin community.

Kotlin Multiplatform, developed by JetBrains, allows developers to build cross-platform apps by compiling Kotlin code into platform-native binaries while leveraging the full capabilities of a modern, memory-managed language. This approach has been a long-term investment for the Google Workspace team, enabling them to share the business logic between different platforms.

The Android team has been working to support KMP and recently released an alpha version of Room with KMP support. As of today, Annotations, Collections and DataStore are already stable with KMP support. We've also commonified the Lifecycle, ViewModel and Paging libraries to allow integrations with non-Android platforms.

Keynotes and Technical Sessions

The conference kicked off with a keynote in which Google’s Jeffrey van Gogh gave an overview of Google’s contributions to the Kotlin ecosystem, delving into how Google leverages Kotlin Multiplatform (KMP) to streamline development across its own product portfolio. Jeffrey highlighted the benefits of code sharing and efficiency that KMP brings to Google's projects, aligning with our recent recommendations for Android app development.

Our technical sessions at KotlinConf 2024 spanned a range of topics:

  • A Tale of Two Languages by John Pampuch offered an engaging comparison of Java and Kotlin's evolution, highlighting their symbiotic relationship and mutual influence.
  • The Android Jetpack team, represented by Elif Bilgin, Yigit Boyar, and Daniel Santiago Rivera, unveiled Enabling Kotlin Multiplatform Success: The Android Jetpack Journey. They provided insights into the current state of KMP in Jetpack, shared updates on KMP-enabled Jetpack libraries, and explored the migration process of a well-established Jetpack library to KMP.
  • Going Fast with Kotlin by Andrei Shikov shared valuable insights gained from optimizing Compose for Android. Andrei highlighted interesting performance nuances in Kotlin and the guardrails the Compose team established to ensure optimal performance.
  • Kotlin Multiplatform in Google Workspace by Jason Parachoniak discussed Google Workspace's ongoing migration from a Java-oriented multiplatform foundation to Kotlin Multiplatform, aligning with Google's broader adoption of KMP. Jason shared lessons learned and the current state of this ambitious transition.
  • Write Your Own Kotlin Lint Checks! by Tor Norbye, Android Studio Engineering Director, empowered developers to extend Android Lint, a static analysis tool used by millions, by creating their own checks. Despite the name, it's not actually Android specific - it's also used to analyze server Kotlin and Java code inside of Google!

Community Engagement at KotlinConf

We are always looking into ways to be actively engaged with the Kotlin community. If you attended KotlinConf, we hope you got a chance to check out our booth, with opportunities to chat with our engineers, get your questions answered, and learn more about how you can leverage Kotlin and KMP.

Learn more about KMP

In addition, you can view updated docs and a new mobile sample on KMP. These resources should have what you need to start learning KMP and if you have any feedback or come across any issues, please share them through this link.

Looking Ahead

We are excited about the future of Kotlin and are planning to add KMP support to more AndroidX libraries. We are looking forward to seeing how you will adopt and build the next generation of apps using KMP.

Thanks to KotlinConf organizers, speakers, attendees, and the entire Kotlin community for making this event happen and bringing Kotlin enthusiasts together.

Home APIs: Enabling all developers to build for the home

Posted by Matt Van Der Staay – Engineering Director, Google Home


This blog was originally posted on Google for Developers.

As the saying goes, “home is where the heart is.” It’s where we spend the most time; it’s your space to be comfortable, where you can truly relax, connect and make memories. Our homes have gotten more helpful with connected products, such as a smart door lock or Nest thermostat. Despite this momentum, it's still too hard to develop for the home.

We are changing all of that. Building on the foundation of Matter, we've re-envisioned Google Home as a platform for developers - all developers, not just those who build smart home devices. Google Home is the destination to create innovative experiences for the home.

Today, we’re announcing the Home APIs and Home runtime. With the Home APIs, app developers can access over 600M devices, Google’s hubs and Matter infrastructure, and an automation engine powered by Google intelligence - all available on both Android and iOS. Here are five things to know:

1. Any developer can now build an experience that works with Google Home.

The home offers a unique opportunity for developers to create seamless and deeper relationships with users, but developing for the smart home is harder than it needs to be. Building for the smart home means integrating with many device makers, operating hubs and Matter fabrics, and running automation engines driven by intelligent signals.

Whether you build an app specifically for smart home devices or build apps that have nothing to do with the smart home - like a fitness app or delivery app - the Home APIs let you offer your customers delightful, differentiated experiences on both Android and iOS.

2. Access 600 million connected devices from your app

The new Device and Structure APIs let you access over 600M devices with a single integration. Control and manage the devices already connected to Google Home, such as Matter light bulbs or the Nest Learning Thermostat, whether at home, or on the go. You can build a complex app to manage any aspect of a smart home, or simply integrate with a smart device to solve pain points - like turning on the lights automatically before the food delivery driver arrives.

The Home APIs have been designed with privacy and security in mind, leveraging industry standard best practices. Users are always in control and need to explicitly grant access to their structure and smart home devices before an app can access them. And they can easily revoke access at any time from the Google Home app. To ensure quality experiences, developers who adopt the Home APIs must pass certification before launching their app.

The Device and Structure APIs
The Device and Structure APIs provide all of the foundational building blocks to create a smart home experience.

The new Commissioning API lets you set up Matter devices in your app, in the Home app, or directly with Fast Pair on Android, without needing to create a new Matter fabric, saving you time and resources.

The Commissioning API
The Commissioning API provides the full customer experience for setting up a Matter device.

3. Automate with Google’s unique intelligence about the home

As people add more devices to their home, it becomes challenging to make them all work in unison. Over the past year, we have added new signals and allowed those with advanced skills to script their home using generative AI. With the new Automation API, you can create and manage home automations in your app, using Google Home’s new automation engine and intelligent signals.

Automations can be triggered by device signals from the home such as occupancy events from motion sensors, mode changes from appliances, or media events from a smart TV. For example, Yale is using the Automation API to turn on the foyer lights when the front door is unlocked at night. Automations can also use Google’s intelligence signals like home and away, which fuses together signals from devices across the home to create a more accurate presence detection.

The Automations API
The Automations API provides all of the tools for creating and managing automations.

4. Expanding hubs for Google Home to the TV

A hub for Google Home is a device that enables remote access and local control of a user’s Matter devices over Wi-Fi and Thread. The Home APIs use the network of hubs for Google Home to control Matter devices whether the user is in the home or away.

Later this year, we’re upgrading our hubs and introducing the Home runtime, so other devices - including Chromecast with Google TV, select panel TVs with Google TV running Android 14 or higher, and eligible LG TVs - will also become hubs for Google Home.

Home APIs make controlling lights and switches locally over a hub feel snappy. We are adopting these APIs in the Google Home app, and our early tests show device control operating up to three times faster than before. Developers using the Home APIs can see faster and more responsive local control in their apps as well.

5. Delightful new experiences from a diverse set of apps

We are working with a broad range of brands across lighting, security, automotive, energy, and entertainment to build seamless smart home experiences that help get more usefulness from the smart home.

Partners from every major smart home category are building on the Home APIs.

Here’s how some of our first partners are using the Home APIs:

ADT’s new Trusted Neighbor will revolutionize the universal practice of “giving a trusted neighbor a key to your home,” enabling users to easily grant secure and temporary access to their homes for neighbors, friends or helpers.

ADT Trusted Neighbor Program

LG will enable millions of TVs to be hubs for Google Home, allowing seamless control of devices from any app built using Home APIs. You will also be able to use the ThinQ mobile app or the Home Hub on the LG TV to control devices.

Home APIs on LG TVs for Google Home

Eve Systems will bring their experience to Android for the first time and build helpful automations like lowering the blinds when the temperature drops at night.

Eve Systems using Home APIs

Google Pixel is bridging the digital and physical worlds so that bedtime mode can not only dim your screen, but can also automatically dim your bedroom lights, lower the shades and lock the front door.

Google Pixel using Home APIs

And this is just the beginning. With the Home APIs, a workout app could keep you cool while you are burning calories by turning on the fan before you begin working out. Or a vacation rental app could make sure that the lights are on and the temperature is just right when a guest arrives. With the Home APIs, now anyone can bridge digital experiences and physical devices.


Sign Up to Build with the Home APIs

Do you have a great idea or feature that you'd like to build into your app with the Home APIs? Tell us about it and join the waitlist for access to the Home APIs or Home runtime. We will expand access on a rolling basis and the first apps built on the Home APIs will come to the Play Store and App Store starting this fall. Learn more about what’s included in the Home APIs from our I/O session on the Google Home Developer Center.

Android for Cars: Bringing more apps to cars

Posted by Vivek Radhakrishnan – Technical Program Manager, and Seung Nam – Product Manager

With technology in cars becoming more capable, the opportunity to deliver safe and seamless connected experiences for drivers and passengers is greater than ever. Google remains committed to the automotive industry and is seeing momentum across Android Auto and cars powered by Android Automotive OS with Google built-in. We’re excited to share updates across our in-car experiences and introduce new programs and resources to make it easier for you to bring your apps to cars. Learn more below and in the Android for Cars Technical Session.

Momentum and updates

With over 200 million cars on the road compatible with Android Auto, and nearly 40 car models like the Nissan Rogue, Renault R5, Acura ZDX, and Ford Explorer offering Google built-in, the time to bring your apps to cars is now.

Over the last year, the ecosystem of apps available across these experiences has grown – thanks to you. New entertainment apps like Max, Peacock and Angry Birds are coming to select cars with Google built-in. On Android Auto, the Uber Driver app is now available, allowing drivers to accept rides and deliveries, and get turn-by-turn directions on a bigger screen.

Image showing Angry Birds on a Volvo EX90 car display
Angry Birds is coming to select cars with Google built-in, including Volvo EX90 (pictured).

We’re also pleased to share that Google Cast is coming to cars with Android Automotive OS, starting with Rivian, with more to follow. This allows you to easily cast video content from your phone or tablet directly to the car while parked. If you don’t already offer casting in your app, this is a simple way for your content to reach new audiences in the car.

Coming soon - you can stream content from apps on your phone, like Pluto TV, to Rivian cars via Google Cast.

New car app quality tiers

There are unique considerations when developing apps and experiences for cars including safety, numerous screen sizes, and more. Our priority is developing resources and tools that take these considerations into account and minimize the work needed for you to bring your apps to cars.

We’re introducing new quality tiers, inspired by those that exist for large screens, to streamline the process of bringing existing apps to cars by highlighting what makes for a great user experience in cars. Here are the tiers and what they encompass:

    • Tier 1: Car differentiated
      This tier represents the best of what’s possible in cars. Apps in this tier are specifically built to work across the variety of hardware in cars and can adapt their experience across driving and parked modes. They provide the best user experience designed for the different screens in the car like the center console, instrument cluster and additional screens - like panoramic displays that we see in many premium vehicles.
    • Tier 2: Car optimized
      Most apps available in cars today fall into this tier and provide a great experience on the car’s center stack display. These apps will have some car-specific engineering to include capabilities that can be used across driving or parked modes, depending on the app’s category.
    • Tier 3: Car ready
      Apps in this tier are large screen compatible and are enabled while the car is parked, with potentially no additional work. While these apps may not have car-specific features, users can experience the app just as they would on any large screen Android device.

To learn more about the quality tiers, see Android app quality for cars.

Car ready mobile apps program

Let’s dive deep into Tier 3 apps. In collaboration with car manufacturers, we’re introducing the Car ready mobile apps program to accelerate bringing mobile apps to cars with no additional work for developers.

As part of this program, Google will proactively review mobile apps that are already adaptive and large screen compatible to ensure safety and compatibility in cars. If the app qualifies, we will automatically opt it in for distribution on cars with Google built-in and make it available in Android Auto, without the need for new development or a new release to be created. This program will start with parked app categories like video, gaming and browsers with plans to expand to other app categories in the future.

The program will roll out in the coming months, but if you already offer a large screen compatible adaptive app and it falls into one of these categories, you can request a review to participate sooner. As this program rolls out, availability of your app will depend on platform compatibility.

To learn more about building qualified mobile apps, check out the technical session titled “Building Adaptive Android Apps”. You can find guidance on what to look out for at developer.google.com.

Animation showing AMC+ app on a phone, tablet and car display.
Apps optimized for large screens, like AMC+, may be able to come to cars with little to no development work.

New tools and emulators

To create high quality experiences in cars, we are also introducing some new tools that can help you along the way.

    • First, we have a new emulator for distant and panoramic displays so developers can visualize and test for the growing sizes and number of screens in the car and make sure apps can adapt to the variety of displays for the best experience.
    • We also have a new tool that addresses the wide range of screen shapes and user interfaces (UI) present in cars. Many new car displays have unique curves, insets and angles that impact the UI, so we have an emulator that lets you change the emulator screen to match OEM screen designs. This will help ensure the apps work well on real cars without needing to set up specific OEM emulators or bringing in real cars for testing.
    • Lastly, we’re introducing an Android Automotive OS system image for Pixel Tablet. This will let you physically interact with your app as you would on a car screen. We are opening this up for early access partners for the purpose of development and testing today, and you can request to participate here.

To learn more about how to use these tools, check out the “Build and test a parked app for Android Automotive OS” codelab that will be published tomorrow.

More app categories for cars

As you consider bringing your app to cars, we put together a table to help you understand what app categories are currently open and accepting app submissions across both Android Auto and cars with Google built-in. We will continue to expand the type of apps that can be enabled in cars, so if your app isn’t in one of these categories, stay tuned for future opportunities!

Android for Cars Category Status

Start developing apps for cars today

To learn how to bring your apps to cars, check out the documentation on the Android for Cars developer site and the Android for Cars Technical Session. With all the opportunities across car screens, there has never been a better time to bring your apps and experiences to cars. Thanks for all the contributions to the Android ecosystem. See you on the road!

Scaling Across Screens with Jetpack Compose @ Google I/O ‘24

Posted by Maru Ahues Bouza, Product Management Director, Android Developer

Scaling Across Screens with Jetpack Compose

The promise of Jetpack Compose has always been that a modern toolkit designed to build native UI can help you build better apps faster and easier. As more and more of you - 40% of the top 1k apps, in fact - use (and love) Compose, we’ve been working to extend the benefits you’re seeing on mobile to help you build across form factors as well. At Google I/O 2024, we announced a lot of new updates for Compose that help you build across form factors, including Compose APIs to support adaptive layouts, and new updates for Compose for TV and Wear OS. From foldables to wearables to TVs, Compose is delivering features built to make Android development faster and easier. Apps like yours are already using Compose to support more screens with less code.

When thinking about layouts - think adaptive

Yesterday, we announced a new set of Compose APIs for building adaptive layouts, using Material guidance. These APIs, now in Beta, provide new layouts and components that adapt as users expect when switching between small and large window sizes.

The libraries provide 3 new scaffolds that adapt to the different window sizes that users can place apps in on different types of devices, from phones to foldables to tablets and more.

3 new libraries that adapt to different window sizes

NavigationSuiteScaffold

NavigationSuiteScaffold helps make it easier to build navigation UI by automatically complying with Material guidelines to provide your users with an optimal experience based on their window size.

Material guidelines recommend using a navigation bar at the bottom of compact width windows, such as most phones, and a navigation rail on the side of medium width and expanded width windows. It used to be up to each app individually to handle swapping between these components; now NavigationSuiteScaffold does this for you by switching between the components when the window size changes.

Navigation bar
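
A minimal sketch of what this looks like in code, assuming the Material 3 adaptive navigation suite library in Beta (the destination names here are hypothetical):

import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.Star
import androidx.compose.material3.Icon
import androidx.compose.material3.Text
import androidx.compose.material3.adaptive.navigationsuite.NavigationSuiteScaffold
import androidx.compose.runtime.*

@Composable
fun AppScaffold() {
    var selectedIndex by remember { mutableIntStateOf(0) }
    val destinations = listOf("Home", "Library", "Settings") // hypothetical destinations

    NavigationSuiteScaffold(
        navigationSuiteItems = {
            destinations.forEachIndexed { index, label ->
                item(
                    selected = selectedIndex == index,
                    onClick = { selectedIndex = index },
                    icon = { Icon(Icons.Filled.Star, contentDescription = label) },
                    label = { Text(label) }
                )
            }
        }
    ) {
        // Main content; the scaffold shows a bottom bar or a side rail
        // depending on the window size class.
    }
}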

ListDetailPaneScaffold & SupportingPaneScaffold

The new library also has ListDetailPaneScaffold and SupportingPaneScaffold, which help you implement canonical layouts that we recommend in many cases - list-detail and supporting pane.

On a phone, you usually organize your app flow through screens. For example, clicking on an item on your list screen brings you to the detail screen.

Detail screen

When adapting to different window sizes, it helps to think of your app in terms of panes rather than screens. For a compact window size class, such as a phone, you might only display one pane. For an expanded window size class, you might show two, or more panes at the same time. ListDetailPaneScaffold and SupportingPaneScaffold help you build apps that easily switch between one and two pane layouts.

Different screen layouts
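
As an illustrative sketch of the list-detail pattern, assuming the Material 3 adaptive layout and navigation libraries in Beta (package locations reflect the Beta; the content shown is placeholder):

import androidx.compose.material3.Text
import androidx.compose.material3.adaptive.layout.AnimatedPane
import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffold
import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffoldRole
import androidx.compose.material3.adaptive.navigation.rememberListDetailPaneScaffoldNavigator
import androidx.compose.runtime.Composable

@Composable
fun ItemsScreen() {
    // The navigator tracks whether one or two panes fit the current window.
    val navigator = rememberListDetailPaneScaffoldNavigator<Int>()

    ListDetailPaneScaffold(
        directive = navigator.scaffoldDirective,
        value = navigator.scaffoldValue,
        listPane = {
            AnimatedPane {
                // Placeholder list; selecting an item would navigate to the
                // detail pane, e.g.:
                // navigator.navigateTo(ListDetailPaneScaffoldRole.Detail, itemId)
                Text("Item list")
            }
        },
        detailPane = {
            AnimatedPane {
                Text("Detail for item ${navigator.currentDestination?.content}")
            }
        }
    )
}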

You can learn more about all three of these APIs and how to get started with them in the “Building UI with the Material 3 adaptive library” and “Building adaptive Android apps” technical sessions.

“Integrating SupportingPaneScaffold was effortless and quick. It enabled us to seamlessly organize primary and secondary content on To-Dos. Depending on the window size class, the supporting pane adjusts the UI without any additional custom logic. Delighting our users regardless of what device they use is a key priority for SAP Mobile Start.”
- Software Engineer on SAP Mobile Start

Compose for Wear OS

In the past year, adoption of Compose for Wear OS has grown 200%, showcasing the ease with which Compose allows developers to build for the watch form factor.

Recently we’ve seen top apps such as WhatsApp, Gmail and Google Calendar built entirely using Compose for Wear OS, and it’s the recommended way for building user interfaces for Wear OS apps.

At this year’s Google I/O, Compose for Wear OS is graduating visual improvements and fixes from beta to stable.

In the past year, we’ve added features such as SwipeToReveal, to give users additional means for completing actions, an expandableItem, to enhance the use of the smaller screen and show additional information where needed, and a range of WearPreview supporting annotations, for ensuring your app works optimally across the range of device sizes and font scales.

Compose for Wear OS previews usage in Android Studio

You can get started with Compose for Wear OS by taking the codelab and learn more about all the latest updates for Wear OS via the technical session.

Compose for Android TV

At Google I/O ‘24, we announced that Compose for TV 1.0.0 is now available in beta. Compose for TV is our recommended approach for building delightful UIs for Android TV OS. It brings all of the benefits of Jetpack Compose to your TV apps, making building beautiful and functional experiences in your app much faster and easier.

The latest updates to Compose for TV include better performance, input support, and a whole range of improved components that look great out of the box. New in this release, we’ve added lists, navigation, chips, and settings screens. We’ve also updated the developer tools in Android Studio to include a new project wizard to get a running start with Compose for TV.

The new TV Material Catalog app lets you explore components in Compose for TV with different themes and layouts, and our updated JetStream sample shows how it all fits together.

TV Material Catalog app in action

You can get started with Compose for TV by checking out the dedicated blog, the technical session or taking a look at the integration guides.

Jetpack Glance

Jetpack Glance 1.1.0 is now available in RC, bringing a new unit test library, Error UIs, and new components.

We have also released new Canonical Widget Layouts on GitHub, which are built on top of the Glance components, to allow you to get started faster with a set of layouts that align with best practices.

The first set of layouts are delivered as code samples and a matching Figma design kit on Android UI Kit, with more layouts coming later this year.

Lastly, we have new design guidance published on the UI design hub - check it out!

A sample of Compose across screens: Jetcaster

We have updated Jetcaster - one of our Compose samples - to adapt across phone, foldable and tablet screens, and added support for TV, Wear OS and homescreen widgets with Glance. Jetcaster showcases how Compose helps you to build across a range of devices using a shared architecture in a single project.

See how you can extract elements such as your data layer, and design system, to promote reuse and consistency while delivering an experience tailored to different form factors. You can dive directly into the code on GitHub.

Get started with Compose across screens

With these updates to Compose to help you build for tablets, foldables, wearables and TVs, it is a great time to get started! The technical sessions mentioned above are a great place to learn more about all the latest updates.

Learn more about how SoundCloud supported more screens using 45% less code with Jetpack Compose!

"Our mobile Compose skills transferred directly to Compose for other form factors. The concepts and most APIs are the same across form factors." - Vitus Ortner, Android engineer at SoundCloud

What’s new in Wear OS – I/O ’24

Posted by Kseniia Shumelchyk, Android Developer Relations Engineer, and Garan Jenkin, Android Developer Relations Engineer

Wear OS has seen incredible growth and advancements over the past year. With watch launches from Pixel, Samsung and more, Wear OS grew its user base by 40% in 2023 and has users in over 160 countries and regions. And Wear OS has expanded to more brands including OnePlus, OPPO and Xiaomi. This growth has been accompanied by heavy investments in performance and power optimization.

In this blog post, we’ll be highlighting some of the key updates we announced at Google I/O this year, so let’s dive in and explore the latest advancements in Wear OS and how you can make the most of the platform.

Wear OS 5 Developer Preview

We’re excited to be releasing the Developer Preview of Wear OS 5, the next version of Google’s smartwatch platform arriving later this year, based on Android 14. Central to our release of Wear OS 5 is continuing to enhance battery life.

Wear OS 5 brings performance improvements over Wear OS 4. Tracking your workout is now more efficient; for example, running a marathon consumes up to 20% less power on Wear OS 5 than on Wear OS 4.

Wear OS 5 brings battery improvements over Wear OS 4 for longer workout tracking

To help you develop power-efficient apps on Wear OS, we’ve released a new guide to conserve power and battery. Be sure to take a look!

Wear OS 5 is based on Android 14, which brings with it a number of developer-facing changes. Check out what’s changed and try the new Wear OS 5 emulator to test your app for compatibility with the new platform version.

Changes in Watch Faces development

Last year we introduced the Watch Face Format as part of Wear OS 4, and we’ve had a fantastic response, with 30% of watch faces in Google Play already using the format. It’s been great to see what you’ve all been able to create so far using the Watch Face Format!

Sample watch faces created with Watch Face Format

We’re excited to bring you the next iteration of the Watch Face Format with Wear OS 5.

Additionally, we’re announcing some changes to existing watch face development using the Jetpack Watch Face APIs. Starting from Wear OS 5, we are introducing restrictions to complications for watch faces built with AndroidX or the Wearable Support Library, which will apply to some data sources, as well as Google Play publishing limitations for new watch faces built with these libraries.

Check out the Watch Faces blog post for full details on the new features in Watch Face Format and changes to watch faces development options.

Tooling and library updates

Jetpack Compose for Wear OS

Adoption of Compose on Wear OS has grown 200% in the past year, highlighting the ease with which Compose allows developers to build for the watch form factor. Recently we’ve seen top apps such as WhatsApp, Gmail and Google Calendar built entirely using Compose for Wear OS, and it’s the recommended way for building user interfaces for Wear OS apps.

With the 1.3 release of Jetpack Compose for Wear OS, we’ve graduated a number of visual improvements and fixes from beta to stable.

In the past year, we’ve added features such as SwipeToReveal, to give users additional means for completing actions, an expandable item, to enhance the use of the smaller screen and show additional information where needed, and a range of WearPreview supporting annotations, for ensuring your app works optimally across the range of device sizes and font scales.

Compose for Wear OS previews usage in Android Studio

And at Google I/O 2024, we announced a lot of new updates with Jetpack Compose that help you build across form factors, including Wear OS. Read more in this blog, and check out how SoundCloud supported more screens using 45% less code with Jetpack Compose.

Tiles and ProtoLayout

Wear OS tiles give users fast, predictable access to the information and actions they rely on most. Version 1.4 of the Jetpack Tiles library, currently in alpha, introduces preview support for Android Studio to help you quickly iterate on your Tile development while also helping you create optimal-looking tiles on a range of display sizes.

Previews can be seen starting in Android Studio Koala Feature Drop (Canary), with the following dependencies:

    • androidx.wear.tiles:tiles-tooling-preview:1.4.0-alpha02+
    • androidx.wear.tiles:tiles-tooling:1.4.0-alpha02+
    • androidx.wear:wear-tooling-preview:1.0.0+

@Preview(device = WearDevices.SMALL_ROUND)
fun smallPreview(context: Context) = TilePreviewData(
    onTileRequest = { request ->
        TilePreviewHelper.singleTimelineEntryTileBuilder(
            buildMyTileLayout()
        ).build()
    }
)
Tiles previews usage in Android Studio

We’ve also introduced better means for your app to determine whether your tiles are in use, through the getActiveTilesAsync() method.
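
A small sketch of how that check might look; the static entry point and executor plumbing reflect our reading of the 1.4 alpha API, so treat the exact signature as an assumption:

import android.content.Context
import android.util.Log
import androidx.wear.tiles.TileService
import java.util.concurrent.Executors

fun logActiveTiles(context: Context) {
    val executor = Executors.newSingleThreadExecutor()
    // Asks the system which of this app's tiles users currently have
    // in their tile carousel.
    val future = TileService.getActiveTilesAsync(context, executor)
    future.addListener({
        Log.d("Tiles", "Active tiles: ${future.get()}")
    }, executor)
}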

Within ProtoLayout’s stable version 1.1, as used by Tiles, we’ve introduced a number of changes, such as the following:

    • Gradient support in ArcLine.
    • Date-time formatting supports different time zones for dynamic data types.
    • Better text autosizing and ellipsizing options, and consistent font padding behavior.
    • Expandable spacers.
    • Improved accessibility for Clickable elements.

And from 1.2.0-alpha02, we’ve made it easier for your layouts to adjust appropriately for different display sizes by adding the setResponsiveContentInsetEnabled() method to PrimaryLayout, as well as updating it for EdgeContentLayout. To use this setter, update your code as follows:

PrimaryLayout.Builder(deviceParameters)
    .setResponsiveContentInsetEnabled(true)
    .setContent(
        // ...
    )
    .build()

Easier testing for fitness apps

Android Studio Koala Feature Drop (Canary) brings a new sensor panel to make it easier to test use of Health Services in your Wear OS app. The panel allows you to configure capabilities of the device, set values of specific data types and simulate events such as auto-pause and resume of exercises.

Sensor panel usage with Wear OS emulator in Android Studio

Check out this blog to learn more about tooling updates.

Larger Displays

With the momentum surrounding Wear OS, we’re seeing a wider variety of round screen sizes and resolutions, which provides more choices for the user.

We are releasing new guidelines on how to build responsive UIs for different watch display sizes, as well as updates to existing libraries to introduce adaptive layouts and components.

Check out the ComposeStarter sample for Wear OS on GitHub to see how to take advantage of these updates in your app. Furthermore, we’ve updated the sample to provide examples of using tools to evaluate your layouts, including:

    • Previews - demonstrating use of WearPreviewDevices to visualize your layouts on a full range of device sizes and font scaling settings (see the sketch after this list).
    • Screenshot testing - helping you detect issues and regressions in your layouts on different sized devices, with different font scales and locales, representative of real-world devices.
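
As an illustration, here is a minimal preview sketch, assuming the multipreview annotations from androidx.wear.compose:compose-ui-tooling:

import androidx.compose.runtime.Composable
import androidx.wear.compose.material.Text
import androidx.wear.compose.ui.tooling.preview.WearPreviewDevices
import androidx.wear.compose.ui.tooling.preview.WearPreviewFontScales

// Renders the composable once per Wear OS device size and font scale
// directly in Android Studio's preview pane.
@WearPreviewDevices
@WearPreviewFontScales
@Composable
fun GreetingPreview() {
    Text("Hello, Wear OS!")
}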

Start building for Wear OS now

There has never been a better time to start building for Wear OS! Be sure to check out the Building for the future of Wear OS technical session to learn more about all the latest updates!

To get started, explore the resources linked throughout this post.

We’re looking forward to seeing the experiences that you build on Wear OS!

Level up your apps with the latest features from Android Health

Posted by Breana Tate - Developer Relations Engineer, Android Health

Android Health’s mission is to enable billions of Android users to be healthier through access, storage, and control of their health, fitness, and safety data. To further this mission, we offer two primary APIs for developers, Health Connect and Health Services on Wear OS, which are both used by a growing number of apps on Android and Wear OS.

AI capabilities unlock amazing and unique use cases, but to be ready to deliver the most value to your users at the right time, you need a strong foundation of data. Our updates this year focus on helping you build up this data foundation, with support for more data types, new ways to access data, and additional methods of getting timely data updates when you need them.

Changes to the Google Fit APIs

We recently shared that Google Fit developer services will be transitioning to become a core part of the Android Health platform. As part of this, the Google Fit APIs, including the REST API, will remain available until June 30, 2025.

Health Connect is the recommended solution for storing and sharing health and fitness data on Android phones. Beginning with Android 14, it’s available by default in Settings. On pre-Android 14 devices, it’s available for download from the Play Store. Health Connect lets your app connect with hundreds of apps using a single API integration. To date, over 500 apps have integrated with Health Connect and have unlocked deeper insights for their users. Check out the featured list to see some of the apps that have integrated.

We’re excited to continue supporting the Google Fit Android Recording API functionality through the Recording API on mobile, which allows developers to record steps, and soon distance and calories, in a power-efficient manner. In contrast to the Google Fit Android Recording API, the Recording API on mobile does not store data in the cloud by default, and does not require Google Sign-In. The API is designed to make migrating from the Fit Recording API effortless. Keep an eye on d.android.com/health-and-fitness for upcoming documentation.

Upcoming capabilities from Health Connect

Health Connect will soon add support for background reads and history reads.

Background reads will enable developers to read data from Health Connect while their app is in the background, meaning that you can keep data up-to-date without relying on the user to open your app. This is a departure from current behavior, where apps can only read from Health Connect while the app is in the foreground or running a foreground service.

History reads will give users the option to grant apps access to all historical data in Health Connect, not just the past 30 days.

With both background reads and history reads, users are in control. Both capabilities require developers to declare the respective permissions, and users must approve the permission requests before developers can make use of the data protected by those permissions. Even after granting approval, users have the option of revoking access at any time from within Health Connect settings.
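
For context, reads themselves use the standard Health Connect API regardless of where they run; background and history reads layer new permissions on top, with details to come at release. A minimal sketch of a read today, assuming androidx.health.connect:connect-client:

import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.records.StepsRecord
import androidx.health.connect.client.request.ReadRecordsRequest
import androidx.health.connect.client.time.TimeRangeFilter
import java.time.Duration
import java.time.Instant

suspend fun readRecentSteps(client: HealthConnectClient): Long {
    val now = Instant.now()
    val response = client.readRecords(
        ReadRecordsRequest(
            recordType = StepsRecord::class,
            // Apps can currently read up to 30 days of data; history reads
            // will let users grant access further back.
            timeRangeFilter = TimeRangeFilter.between(now.minus(Duration.ofDays(1)), now)
        )
    )
    // Sum the step counts across all returned records.
    return response.records.sumOf { it.count }
}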

Both features will be released later this year, so stay tuned to learn how to add support to your apps!

Updates to Health Services on Wear OS

Health Services on Wear OS is a set of APIs that makes it simple to create power-efficient health and fitness experiences on Wear OS.

In Wear OS 5, we’re introducing two new features:

    • New data types for running
    • Support for debounced goals

New Data Types for Running

Starting with Wear OS 5, Health Services will support new data types for running. These data types can help provide additional insights on running form and economy.

The full list of new advanced running metrics is:

    • Ground Contact Time
    • Stride Length
    • Vertical Oscillation
    • Vertical Ratio

As with all data types supported by Health Services on Wear OS, be sure to check exercise capabilities so that your app only uses metrics that are supported on the devices running your app, creating a smoother experience for users. This is especially important for Wear OS, as there is a strong ecosystem of devices for consumers to choose from, and they don’t always support the same metrics.

// Checking if the device supports the RUNNING exercise and confirming the
// data types that are supported.
suspend fun getExerciseCapabilities(): ExerciseTypeCapabilities? {
    val capabilities = exerciseClient.getCapabilitiesAsync().await()
    return if (ExerciseType.RUNNING in capabilities.supportedExerciseTypes) {
        capabilities.getExerciseTypeCapabilities(ExerciseType.RUNNING)
    } else {
        null
    }
}

// ...

// Checking whether the data types that we want to use are supported by
// the RUNNING exercise on this device.
val dataTypes = setOf(
    DataType.HEART_RATE_BPM_STATS,
    DataType.CALORIES_TOTAL,
    DataType.DISTANCE_TOTAL,
    DataType.GROUND_CONTACT_TIME,
    DataType.VERTICAL_OSCILLATION
).intersect(capabilities.supportedDataTypes)
Checking exercise capabilities with Health Services on Wear OS

To make this easy, we’ve introduced a sensor panel, available starting in Android Studio Koala Feature Drop, which is currently in Canary. You can use the panel to test your app across a variety of device capabilities, experimenting with situations where metrics like heart rate or distance aren’t available.

The Health Services sensor panel

Support for debounced goals

Second, Health Services on Wear OS will soon support debounced goals for instantaneous metrics. These include metrics like heart rate, distance, and speed, for which users want to maintain a specified threshold or range throughout an exercise.

Debounced goals prevent the same event from being emitted multiple times—every time the condition is true—over a short time period. Instead, events are emitted only if the threshold has been continuously exceeded for a (configurable) number of seconds. You can also prevent events from being emitted immediately after goal registration.

This support comes from two new parameters for timing goal alerts for instantaneous metrics, duration at threshold and initial delay:

    • Duration at threshold is the amount of uninterrupted time the user needs to cross the specified threshold before Health Services sends an alert event.
    • Initial delay is the amount of time that must pass, since goal registration, before your app is notified.

Together, these features reduce the number of false positives and repeated alerts surfaced to users if your app lets users set fitness goals or targets.

Duration at Threshold
    • Definition: The amount of uninterrupted time the user needs to cross the specified threshold before Health Services will send an alert event.
    • Purpose: Prevent false positives.
    • Counter starts: As soon as the user crosses the specified threshold.

Initial Delay
    • Definition: The amount of time that must pass, since goal registration, before your app is notified.
    • Purpose: Prevent repeatedly notifying the user.
    • Counter starts: As soon as the monitoring request is set.

The differences between Duration at Threshold and Initial Delay

A common use case for debounced goals involves heart rate zones. Heart rate continuously fluctuates throughout an exercise, especially during cardio-intensive activities. Without support for debouncing, an app might get many alerts in a short period of time, such as each time the user’s heart rate dips above or below the target range.

By introducing an initial delay, you can inform Health Services to send a goal alert only after a specified time period has passed - think of this like an adjustment period. And by introducing a duration at threshold, you can take this customization further, by specifying the amount of time that must pass in (or out) of the specified threshold for the goal to be activated. In practice, this would be like waiting for the user to be out of their target heart rate range for 15 seconds before your app lets them know to increase or decrease their intensity.
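
To make the two knobs concrete, here is a purely illustrative sketch; the type and parameter names below are hypothetical placeholders, not the final Health Services API:

// Hypothetical illustration only - not the shipped Health Services API.
data class DebouncedGoalConfig(
    val thresholdBpm: Double,             // heart rate boundary to watch
    val durationAtThresholdSeconds: Int,  // continuous time past the boundary before alerting
    val initialDelaySeconds: Int          // grace period after goal registration
)

// Alert only after the user has been above 165 bpm for 15 straight seconds,
// and never during the first 60 seconds of the workout.
val heartRateZoneGoal = DebouncedGoalConfig(
    thresholdBpm = 165.0,
    durationAtThresholdSeconds = 15,
    initialDelaySeconds = 60
)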

Check out the technical session, “Building Adaptable Experiences with Android Health” to see this in action!

Your app’s training partner

The Health & Fitness Developer Center is your one-stop-shop for building health & fitness apps on Android! Visit the site for documentation, design inspiration, case studies, and more to learn how to build apps on mobile and Wear OS.

We’re excited to see the Health and Fitness experiences you continue to build on Android!