Tag Archives: Wear OS

Prepare your app for the new Samsung Galaxy foldables and watches!

Posted by Maru Ahues Bouza – Product Management Director, Android Developer

Yesterday’s Galaxy Unpacked event from Samsung debuted the latest in foldables, wearables, and more! The event introduced the Galaxy Z Fold6 and Z Flip6, along with the Galaxy Watch7 and Watch Ultra, and it has never been easier to build apps that look great across all these screen sizes and types. To help you get your apps ready for the latest Android devices, we’re sharing how you can prepare your app for Wear OS 5 and how to build adaptive apps that scale across mobile, tablets, foldables, and more!

Get your app ready for Wear OS 5

Samsung’s new Galaxy Watch lineup, including the Watch Ultra and Watch7, will be the first smartwatches powered by Wear OS 5, the latest version of the Wear OS platform. As Wear OS 5 is based on Android 14, this new platform version brings with it a number of developer-facing changes. To ensure your app is ready for the next generation of devices, start by testing your app on the Wear OS 5 Emulator!

Galaxy Watch Ultra (left) and Galaxy Watch7 (right)

Wear OS 5 brings the next iteration of the Watch Face Format, providing more features to create expressive, efficient and individual watch faces for your users. New watches launched with Wear OS 5 will only support third-party watch faces built with Watch Face Format, prioritizing the user experience. For more information on watch face compatibility, see this Help Center article.

As we gather momentum behind the Watch Face Format, we’re changing requirements for publishing watch faces on Google Play. Check out the watch face page for the latest guidance.

Build adaptive to scale across screen sizes and types

The latest in large screens and foldables are here, with the new Galaxy Z Fold6 and Z Flip6, so there is even more reason to ensure your app looks great across whatever screen size or folded state your users are engaging with. The best way to do that is to make your app adaptive - meaning your users get an optimal experience on all their devices. By building an adaptive app, you scale across mobile, tablets, foldables, desktop and more.

Galaxy Z Fold6

A great place to start when building adaptive apps is with the new Compose adaptive layout libraries. These libraries are designed to help you make your UI look good across window sizes. From navigation UI to list/detail and supporting pane layouts, we’re providing composables to make building an adaptive app easier than ever.

Additionally, window size classes are the best way to scale your UI, with opinionated breakpoints that help you design, develop, and test responsive/adaptive layouts across various window sizes. Window size classes enable you to change your app layout as the display space available to your app changes, for example, when a device folds or unfolds, the device orientation changes, or the app window is resized in multi‑window mode.
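For example, here’s a minimal sketch of branching on the width size class using the material3 window-size-class library (CompactLayout and ExpandedLayout are hypothetical stand-ins for your own composables):

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass

class MainActivity : ComponentActivity() {
    @OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            // Recomputed when the device folds/unfolds, rotates, or the app
            // window is resized in multi-window mode.
            val sizeClass = calculateWindowSizeClass(this@MainActivity)
            when (sizeClass.widthSizeClass) {
                WindowWidthSizeClass.Compact -> CompactLayout()   // hypothetical
                else -> ExpandedLayout()                          // hypothetical
            }
        }
    }
}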

Discover everything you need to know about building adaptive apps with the adaptive apps documentation; it will be continually updated with the latest and greatest tools and APIs to enable you to scale across screens!

Get started with Adaptive Apps and Wear OS

With these new devices, from the smallest to the largest, there are opportunities to build apps that excite your users on all their favorite Android screens. Apps like SoundCloud, Peloton, and more are already building experiences that scale across their users’ favorite screens!

Get building for Wear OS today by checking out the Wear OS developer site and visiting the Wear OS gallery for inspiration. And scale your app across even more screens by building adaptive with the latest from Compose!

Top 3 Updates with Compose across Form Factors at Google I/O ’24

Posted by Chris Arriola – Developer Relations Engineer

Google I/O 2024 was filled with lots of updates and announcements around helping you be more productive as a developer. Here are the top 3 announcements around Jetpack Compose and Form Factors from Google I/O 2024:

#1 New updates in Jetpack Compose

The June 2024 release of Jetpack Compose is packed with new features and improvements such as shared element transitions, lazy list item animations, and performance improvements across the board.

With shared element transitions, you can create delightful continuity between screens in your app. This feature works together with Navigation Compose and predictive back so that transitions can happen as users navigate your app. Another highly requested feature, lazy list item animations, is also now supported, giving lazy lists the ability to animate inserts, deletions, and reordering of items.
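Here’s a rough sketch of the lazy list side of this, assuming the June ’24 Compose release where Modifier.animateItem() is available in LazyItemScope; stable keys are what let Compose track which item moved where:

import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier

@Composable
fun AnimatedNameList(names: List<String>) {
    LazyColumn {
        // Stable keys let Compose detect inserts, deletions, and reorders;
        // animateItem() then animates each item to its new position.
        items(names, key = { it }) { name ->
            Text(name, modifier = Modifier.animateItem())
        }
    }
}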

Jetpack Compose also continues to improve runtime performance with every release. Our benchmarks show a 17% faster time to first pixel in our Jetsnack Compose sample. Additionally, strong skipping mode graduated from experimental to production-ready status, further improving the performance of Compose apps. Simply update your app to take advantage of these benefits.
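At the time of writing, opting in on the Kotlin 2.0 Compose compiler Gradle plugin looked roughly like this (a sketch; the option name has moved between plugin versions and strong skipping later became the default, so check your version’s release notes):

// build.gradle.kts (module)
composeCompiler {
    // Opt in to strong skipping mode (later versions enable it by default).
    enableStrongSkippingMode.set(true)
}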

Read What’s new in Jetpack Compose at I/O ‘24 for more information.


#2 Scaling across screens with new Compose APIs and Tools

During Google I/O, we announced new tools and APIs to make it easier to build across screens with Compose. The new Material 3 adaptive library introduces new APIs that allow you to implement common adaptive scenarios such as list-detail and supporting pane. These APIs allow your app to display one or two panes depending on the size available to your app.

Watch Building UI with the Material 3 adaptive library and Building adaptive Android apps to learn more. If you prefer to read, you can check out About adaptive layouts in our documentation.

We also announced that Compose for TV 1.0.0 is now available in beta. The latest updates to Compose for TV include better performance, input support, and a whole range of improved components that look great out of the box. New in this release, we’ve added lists, navigation, chips, and settings screens. We’ve also added a new TV Material Catalog app and updated the developer tools in Android Studio to include a new project wizard to get a running start with Compose for TV.

Finally, Compose for Wear OS has added features such as SwipeToReveal, an expandableItem, and a range of WearPreview supporting annotations. During Google I/O 2024, Compose for Wear OS graduated visual improvements and fixes from beta to stable. Learn more about all the updates to Wear OS by checking out the technical session.

Check out case studies from SoundCloud and Adidas to see how they’re leveraging Compose in their apps, and read more about all the updates for Compose across screens here!


#3 Glance 1.1

Jetpack Glance is Android’s modern recommended framework for building widgets. The latest version, Glance 1.1, is now stable. Glance is built on top of Jetpack Compose, allowing you to use the same declarative syntax that you’re used to when building widgets.
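The programming model looks like this; a minimal sketch in which HelloWidget and HelloWidgetReceiver are our own example names:

import android.content.Context
import androidx.glance.GlanceId
import androidx.glance.appwidget.GlanceAppWidget
import androidx.glance.appwidget.GlanceAppWidgetReceiver
import androidx.glance.appwidget.provideContent
import androidx.glance.text.Text

// Declarative widget UI, written with Compose-style syntax.
class HelloWidget : GlanceAppWidget() {
    override suspend fun provideGlance(context: Context, id: GlanceId) {
        provideContent {
            Text("Hello from Glance")
        }
    }
}

// The receiver is registered in the manifest like any other AppWidgetProvider.
class HelloWidgetReceiver : GlanceAppWidgetReceiver() {
    override val glanceAppWidget: GlanceAppWidget = HelloWidget()
}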

This release brings a new unit test library, error UIs, and new components. Additionally, we’ve released new Canonical Widget Layouts on GitHub to help you get started faster with a set of layouts that align with best practices, and we’ve published new design guidance on the UI design hub. Check it out!

To learn more about using Glance, check out Build beautiful Android widgets with Jetpack Glance. Or if you want something more hands-on, check out the codelab Create a widget with Glance.


You can learn more about the latest updates to Compose and Form Factors by checking out the Compose Across Screens and the What’s new in Jetpack Compose at I/O ‘24 blog posts or watching the spotlight playlist!

Scaling Across Screens with Jetpack Compose @ Google I/O ‘24

Posted by Maru Ahues Bouza, Product Management Director, Android Developer


The promise of Jetpack Compose has always been that a modern toolkit designed to build native UI can help you build better apps faster and easier. As more and more of you - 40% of the top 1k apps, in fact - use (and love) Compose, we’ve been working to extend the benefits you’re seeing on mobile to help you build across form factors as well. At Google I/O 2024, we announced a lot of new updates for Compose that help you build across form factors, including Compose APIs to support adaptive layouts, and new updates for Compose for TV and Wear OS. From foldables to wearables to TVs, Compose is delivering features built to make Android development faster and easier. Apps like yours are already using Compose to support more screens with less code.

When thinking about layouts - think adaptive

Yesterday, we announced a new set of Compose APIs for building adaptive layouts, using Material guidance. These APIs, now in Beta, provide new layouts and components that adapt as users expect when switching between small and large window sizes.

The libraries provide 3 new scaffolds that adapt to the different window sizes that users can place apps in on different types of devices, from phones to foldables to tablets and more.

Three new scaffolds that adapt to different window sizes

NavigationSuiteScaffold

NavigationSuiteScaffold makes it easier to build navigation UI by automatically complying with Material guidelines to give your users an optimal experience based on their window size.

Material guidelines recommend using a navigation bar at the bottom of compact width windows, such as most phones, and a navigation rail on the side of medium and expanded width windows. It used to be up to each app individually to handle swapping between these components; now NavigationSuiteScaffold does this for you by switching between them when the window size changes.

Navigation bar
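In code, it’s a single composable that wraps your content; here’s a minimal sketch (HomeScreen and SettingsScreen are hypothetical):

import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.Home
import androidx.compose.material.icons.filled.Settings
import androidx.compose.material3.Icon
import androidx.compose.material3.Text
import androidx.compose.material3.adaptive.navigationsuite.NavigationSuiteScaffold
import androidx.compose.runtime.*

@Composable
fun App() {
    var selected by remember { mutableStateOf(0) }
    NavigationSuiteScaffold(
        navigationSuiteItems = {
            item(
                selected = selected == 0,
                onClick = { selected = 0 },
                icon = { Icon(Icons.Default.Home, contentDescription = "Home") },
                label = { Text("Home") }
            )
            item(
                selected = selected == 1,
                onClick = { selected = 1 },
                icon = { Icon(Icons.Default.Settings, contentDescription = "Settings") },
                label = { Text("Settings") }
            )
        }
    ) {
        // The scaffold places a bottom bar or a side rail around this content
        // depending on the window size class.
        if (selected == 0) HomeScreen() else SettingsScreen() // hypothetical screens
    }
}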

ListDetailPaneScaffold & SupportingPaneScaffold

The new library also has ListDetailPaneScaffold and SupportingPaneScaffold, which help you implement canonical layouts that we recommend in many cases - list-detail and supporting pane.

On a phone, you usually organize your app flow through screens. For example, clicking on an item on your list screen brings you to the detail screen.

Detail screen

When adapting to different window sizes, it helps to think of your app in terms of panes rather than screens. For a compact window size class, such as a phone, you might display only one pane. For an expanded window size class, you might show two or more panes at the same time. ListDetailPaneScaffold and SupportingPaneScaffold help you build apps that easily switch between one- and two-pane layouts.

Different screen layouts
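Here’s a sketch of the list-detail case as of the current beta (MailList and MailDetail are hypothetical panes; the navigator decides whether a navigation replaces the list or lands in a second pane):

import androidx.compose.material3.adaptive.ExperimentalMaterial3AdaptiveApi
import androidx.compose.material3.adaptive.layout.AnimatedPane
import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffold
import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffoldRole
import androidx.compose.material3.adaptive.navigation.rememberListDetailPaneScaffoldNavigator
import androidx.compose.runtime.Composable

@OptIn(ExperimentalMaterial3AdaptiveApi::class)
@Composable
fun MailScreen() {
    val navigator = rememberListDetailPaneScaffoldNavigator<Int>()
    ListDetailPaneScaffold(
        directive = navigator.scaffoldDirective,
        value = navigator.scaffoldValue,
        listPane = {
            AnimatedPane {
                // On a compact window, navigating replaces the list with the
                // detail pane; on an expanded window both show side by side.
                MailList(onItemClick = { id ->
                    navigator.navigateTo(ListDetailPaneScaffoldRole.Detail, id)
                })
            }
        },
        detailPane = {
            AnimatedPane {
                MailDetail(id = navigator.currentDestination?.content)
            }
        }
    )
}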

You can learn more about all three of these APIs and how to get started with them in the “Building UI with the Material 3 adaptive library” and “Building adaptive Android apps” technical sessions.

“Integrating SupportingPaneScaffold was effortless and quick. It enabled us to seamlessly organize primary and secondary content on To-Dos. Depending on the window size class, the supporting pane adjusts the UI without any additional custom logic. Delighting our users regardless of what device they use is a key priority for SAP Mobile Start.”
- Software Engineer on SAP Mobile Start

Compose for Wear OS

In the past year, adoption of Compose for Wear OS has grown 200%, showcasing the ease with which Compose allows developers to build for the watch form factor.

Recently we’ve seen top apps such as WhatsApp, Gmail and Google Calendar built entirely using Compose for Wear OS, and it’s the recommended way for building user interfaces for Wear OS apps.

At this year’s Google I/O, Compose for Wear OS is graduating visual improvements and fixes from beta to stable.

In the past year, we’ve added features such as SwipeToReveal, to give users additional means for completing actions; an expandableItem, to enhance the use of the smaller screen and show additional information where needed; and a range of WearPreview supporting annotations, for ensuring your app works optimally across the range of device sizes and font scales.

Compose for Wear OS previews usage in Android Studio
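For instance, a sketch of the preview annotations in use (Greeting is a hypothetical composable under test):

import androidx.compose.runtime.Composable
import androidx.wear.compose.ui.tooling.preview.WearPreviewDevices
import androidx.wear.compose.ui.tooling.preview.WearPreviewFontScales

// Renders once per Wear device size and once per font scale in the preview pane.
@WearPreviewDevices
@WearPreviewFontScales
@Composable
fun GreetingPreview() {
    Greeting(name = "preview") // hypothetical composable under test
}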

You can get started with Compose for Wear OS by taking the codelab, and learn more about all the latest updates for Wear OS via the technical session.

Compose for Android TV

At Google I/O ‘24, we announced that Compose for TV 1.0.0 is now available in beta. Compose for TV is our recommended approach for building delightful UIs for Android TV OS. It brings all of the benefits of Jetpack Compose to your TV apps, making building beautiful and functional experiences in your app much faster and easier.

The latest updates to Compose for TV include better performance, input support, and a whole range of improved components that look great out of the box. New in this release, we’ve added lists, navigation, chips, and settings screens. We’ve also updated the developer tools in Android Studio to include a new project wizard to get a running start with Compose for TV.

The new TV Material Catalog app lets you explore components in Compose for TV with different themes and layouts, and our updated JetStream sample shows how it all fits together.

TV Material Catalog app in action

You can get started with Compose for TV by checking out the dedicated blog, the technical session or taking a look at the integration guides.

Jetpack Glance

Jetpack Glance 1.1.0 is now available in RC, bringing a new unit test library, Error UIs, and new components.

We have also released new Canonical Widget Layouts on GitHub, which are built on top of the Glance components, to allow you to get started faster with a set of layouts that align with best practices.

The first set of layouts are delivered as code samples and a matching Figma design kit in the Android UI Kit, with more layouts coming later this year.

Lastly, we have new design guidance published on the UI design hub—check it out!

A sample of Compose across screens: Jetcaster


We have updated Jetcaster—one of our Compose samples—to adapt across phone, foldable and tablet screens, and added support for TV, Wear and homescreen widgets with Glance. Jetcaster showcases how Compose helps you to build across a range of devices using a shared architecture in a single project.

See how you can extract elements such as your data layer and design system to promote reuse and consistency while delivering an experience tailored to different form factors. You can dive directly into the code on GitHub.

Get started with Compose across screens

With these updates to Compose to help you build for tablets, foldables, wearables and TVs, it is a great time to get started! The technical sessions linked throughout this post are a great place to learn more about all the latest updates.

Learn more about how SoundCloud supported more screens using 45% less code with Jetpack Compose!

"Our mobile Compose skills transferred directly to Compose for other form factors, The concepts and most APIs are the same across form factors” - Vitus Ortner, Android engineer at SoundCloud

What’s new in Wear OS – I/O ’24

Posted by Kseniia Shumelchyk, Android Developer Relations Engineer, and Garan Jenkin, Android Developer Relations Engineer

Wear OS has seen incredible growth and advancements over the past year. With watch launches from Pixel, Samsung and more, Wear OS grew its user base by 40% in 2023 and has users in over 160 countries and regions. And Wear OS has expanded to more brands including OnePlus, OPPO and Xiaomi. This growth has been accompanied by heavy investments in performance and power optimization.

In this blog post, we’ll be highlighting some of the key updates we announced at Google I/O this year, so let’s dive in and explore the latest advancements in Wear OS and how you can make the most of the platform.

Wear OS 5 Developer Preview

We’re excited to release the Developer Preview of Wear OS 5, the next version of Google’s smartwatch platform, arriving later this year and based on Android 14. Central to Wear OS 5 is our continued focus on enhancing battery life.

Wear OS 5 brings performance improvements over Wear OS 4. Tracking your workout is now more efficient; for example, running a marathon consumes up to 20% less power on Wear OS 5 than on Wear OS 4.

Wear OS 5 brings battery improvements over Wear OS 4 for longer workout tracking

To help you develop power-efficient apps on Wear OS, we’ve released a new guide to conserve power and battery. Be sure to take a look!

Wear OS 5 is based on Android 14, which brings with it a number of developer-facing changes. Check out what’s changed and try the new Wear OS 5 emulator to test your app for compatibility with the new platform version.

Changes in Watch Faces development

Last year we introduced the Watch Face Format as part of Wear OS 4, and we’ve had a fantastic response, with 30% of watch faces in Google Play already using the format. It’s been great to see what you’ve all been able to create so far using the Watch Face Format!

Sample watch faces created with Watch Face Format

We’re excited to bring you the next iteration of the Watch Face Format with Wear OS 5.

Additionally, we’re announcing some changes to existing watch face development using the Jetpack Watch Face APIs. Starting from Wear OS 5, we are introducing restrictions on complications for watch faces built with AndroidX or the Wearable Support Library, applying to some data sources, as well as Google Play publishing limitations for new watch faces built with these libraries.

Check out the Watch Faces blog post for full details on the new features in Watch Face Format and changes to watch faces development options.

Tooling and library updates

Jetpack Compose for Wear OS

Adoption of Compose on Wear OS has grown 200% in the past year, highlighting the ease with which Compose allows developers to build for the watch form factor. Recently we’ve seen top apps such as WhatsApp, Gmail and Google Calendar built entirely using Compose for Wear OS, and it’s the recommended way for building user interfaces for Wear OS apps.

With the 1.3 release of Jetpack Compose for Wear OS, we’ve graduated a number of visual improvements and fixes from beta to stable.

In the past year, we’ve added features such as SwipeToReveal, to give users additional means for completing actions; an expandableItem, to enhance the use of the smaller screen and show additional information where needed; and a range of WearPreview supporting annotations, for ensuring your app works optimally across the range of device sizes and font scales.

Compose for Wear OS previews usage in Android Studio

And at Google I/O 2024, we announced a lot of new updates with Jetpack Compose that help you build across form factors, including Wear OS. Read more in this blog, and check out how SoundCloud supported more screens using 45% less code with Jetpack Compose.

Tiles and ProtoLayout

Wear OS tiles give users fast, predictable access to the information and actions they rely on most. Version 1.4 of the Jetpack Tiles library, currently in alpha, introduces preview support for Android Studio to help you quickly iterate on your Tile development while also helping you create optimal-looking tiles on a range of display sizes.

Previews can be seen starting in Android Studio Koala Feature Drop (Canary), with the following dependencies:

    • androidx.wear.tiles:tiles-tooling-preview:1.4.0-alpha02+
    • androidx.wear.tiles:tiles-tooling:1.4.0-alpha02+
    • androidx.wear:wear-tooling-preview:1.0.0+

import android.content.Context
import androidx.wear.tiles.tooling.preview.Preview
import androidx.wear.tiles.tooling.preview.TilePreviewData
import androidx.wear.tiles.tooling.preview.TilePreviewHelper
import androidx.wear.tooling.preview.devices.WearDevices

// Renders the tile layout in the Android Studio preview pane on a small, round device.
@Preview(device = WearDevices.SMALL_ROUND)
fun smallPreview(context: Context) = TilePreviewData(
    onTileRequest = { request ->
        TilePreviewHelper.singleTimelineEntryTileBuilder(
            buildMyTileLayout() // your own LayoutElement builder
        ).build()
    }
)
Tiles previews usage in Android Studio

We’ve also introduced better means for your app to determine whether your tiles are in use, through the getActiveTilesAsync() method.
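As a sketch, assuming the static ListenableFuture-based method from the 1.4 alpha:

import android.content.Context
import androidx.wear.tiles.TileService
import java.util.concurrent.Executors

fun logActiveTiles(context: Context) {
    val executor = Executors.newSingleThreadExecutor()
    // Resolves to the list of this app's tiles currently in the user's carousel.
    val future = TileService.getActiveTilesAsync(context, executor)
    future.addListener({
        future.get().forEach { tile ->
            println("Active tile: $tile")
        }
    }, executor)
}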

Within ProtoLayout’s stable version 1.1, as used by Tiles, we’ve introduced a number of changes, such as the following:

    • Gradient support in ArcLine.
    • Date-time formatting supports different time zones for dynamic data types.
    • Better text autosizing and ellipsizing options, and consistent font padding behavior.
    • Expandable spacers.
    • Improved accessibility for Clickable elements.

And from 1.2.0-alpha02, we’ve made it easier for your layouts to adjust appropriately for different display sizes by adding the setResponsiveContentInsetEnabled() method to PrimaryLayout, as well as updating it for EdgeContentLayout. To use this setter, update your code as follows:

PrimaryLayout.Builder(deviceParameters)
    .setResponsiveContentInsetEnabled(true)
    .setContent(
        // ...
    )
    .build()

Easier testing for fitness apps

Android Studio Koala Feature Drop (Canary) brings a new sensor panel to make it easier to test your use of Health Services in your Wear OS app. The panel allows you to configure the capabilities of the device, set values of specific data types, and simulate events such as auto-pause and resume of exercises.

Sensor panel usage with Wear OS emulator in Android Studio

Check out this blog to learn more about tooling updates.

Larger Displays

With the momentum surrounding Wear OS, we’re seeing a wider variety of round screen sizes and resolutions, which provides more choices for the user.

We are releasing new guidelines on how to build responsive UIs for different watch display sizes, as well as updates to existing libraries to introduce adaptive layouts and components.

Check out the ComposeStarter sample for Wear OS on GitHub to see how to take advantage of these updates in your app. Furthermore, we’ve updated the sample to provide examples of using tools to evaluate your layouts, including:

    • Previews - demonstrating use of WearPreviewDevices to visualize your layouts on a full range of device sizes and font scaling settings.
    • Screenshot testing - helping you detect issues and regressions in your layouts on different sized devices, with different font scales and locales, representative of real-world devices.

Start building for Wear OS now

There has never been a better time to start building for Wear OS! Be sure to check out the Building for the future of Wear OS technical session to learn more about all the latest updates for Wear OS!

To get started, visit the Wear OS developer site and try the Compose for Wear OS codelab.

We’re looking forward to seeing the experiences that you build on Wear OS!

Latest updates for watch faces on Wear OS

Posted by Anna Bernbaum – Product Manager, and Garan Jenkin – Developer Relations Engineer

At last year’s Google I/O, we launched the Watch Face Format for Wear OS. This year, as part of our continued partnership with Samsung, we are excited to share some new features that you can use to create exciting new watch face designs! These features are now supported in XML definitions, and later in the year, you’ll also see an update to Watch Face Studio to take advantage of them.

The Watch Face Format is the recommended way to create watch faces for Wear OS. The format makes it easier to create customizable and more power-efficient watch faces for devices that run Wear OS 4 or higher. The Watch Face Format is a declarative XML format, so there is no executable code involved in creating a watch face, and there is no code embedded in your watch face APK.

Additionally, in our move toward the Watch Face Format for watch face creation, we have also made some changes to watch face development.

New features in the Watch Face Format

Flavors

Flavors represent preset configurations for your watch face, available in the companion app:

Watch gallery

They allow the watch face developer to configure useful and attractive combinations of the watch face’s configuration options, and allow the user to visualize and select from these with ease.

We’ve now brought flavors to the Watch Face Format. For a full guide on adding them to your watch face, see the flavors reference.

Complications

We’re adding support for both “goal progress” and “weighted elements” complication types to the Watch Face Format:

Goal progress (left) and weighted elements (right) complication types

    • Goal progress is perfect for data where the user has a target, but that target can be exceeded. A good example is step count.
    • Weighted elements can represent discrete subsets of data, showing their relative sizes, where you might otherwise use something like a pie chart.

Both of these complication types can be accessed through the [COMPLICATION.*] expression object. For full details, see the complication guidance.

Weather

Knowing at a glance what the weather will be like for the next hour, day, and beyond can make all the difference to a user’s plans! Unsurprisingly, having weather data as a data source in the Watch Face Format has been a common request, and we’re delighted to be able to introduce it in this latest version. You’ll now be able to make watch faces like this:

A watch face displaying weather data

Weather Basics

Weather in the Watch Face Format is accessed via the [WEATHER.*] expression object. You can use it in Condition and text Template statements, and anywhere else expressions are supported.

For example, to show the current weather condition, use this template and expression:

<Template>Current weather conditions: %s
    <Parameter expression="[WEATHER.CONDITION_NAME]"/>
</Template>

The weather provider in the Watch Face Format supports a range of different metric types for the current day, including the following:

    • Current conditions
    • Temperature - current, minimum (low), and maximum (high)
    • UV index
    • Chance of rain

For the full range of data types and conditions, see the weather guide.

Forecasts

In addition to the current weather, you can access forecast data, both by hour and by day. For example, to access the forecast maximum temperature for tomorrow, use a template and set of expressions similar to the following:

<Template>Tomorrow max temp: %d°%s
    <Parameter expression="[WEATHER.DAYS.1.TEMPERATURE_HIGH]" />
    <Parameter expression="[WEATHER.TEMPERATURE_UNIT] == 1 ? &quot;C&quot; : &quot;F&quot;" />
</Template>

When using weather in the Watch Face Format, there are some further details to be aware of, such as checking for forecast availability or loading errors. For all of this and more, take a look at the weather guide.

Changes to Watch Face development

As we gather momentum behind the Watch Face Format, we’re announcing some changes to existing watch face development options.

We announced recently that only some complications will be available on Wear OS 5, for watch faces built with AndroidX or the Wearable Support Library. This restriction does not apply to watch faces that use the Watch Face Format.

Additionally, starting in early 2025 (specific date to be announced in Q4 2024), all new watch faces published on Google Play must use the Watch Face Format. Existing watch faces that use other libraries, such as AndroidX or the Wearable Support Library, can continue to receive updates without transitioning to the new format.

New resources

To make it easier to create watch faces using the Watch Face Format, we’ve published some more resources on GitHub.

You now have full access to the XSD specification, to help you build your own watch face generating tools.

We’ve also provided validators to check your XML for correctness and memory usage. These are the same checks that Google Play runs, so you can run them even before you submit your watch face for publishing.

Learn more

Get started with the latest version of the Watch Face Format.

Be sure to check out the Building for the future of Wear OS technical session and the What’s new in Wear OS at I/O 2024 blog post to learn more about all the latest updates for Wear OS!

Code snippets license:

Copyright 2023 Google LLC.
SPDX-License-Identifier: Apache-2.0

15 Things to know for Android developers at Google I/O

Posted by Matthew McCullough, Vice President, Product Management, Android Developer  

AI is unlocking experiences that were not even possible a few years ago, and we’ve been hard at work reimagining Android with AI at the core to help you build a whole new class of apps. At this year’s Google I/O, we’re covering how new tools like Gemini can power the next generation of apps on Android. Plus, we showcased a range of updates to our tools and services grounded in productivity, making it faster and easier for you to build excellent experiences across form factors. Let’s dive in!

Powering the next generation of Apps with AI

#1: AI in your tools, with Gemini in Android Studio

Gemini in Android Studio (formerly Studio Bot) is your coding companion for Android development, and thanks to your feedback since its preview at last year’s Google I/O, we’ve evolved our models, expanded to over 200 countries and territories, and brought it into the Gemini family of products. Earlier today, we previewed a number of new features coming soon, like code suggestions, App Quality Insights that leverage Gemini, and a preview of multi-modal input using Gemini 1.5 Pro. You can read more about the updates here, and make sure to check out What’s new in Android development tools.

#2: Building with Generative AI

Android provides the solution you need to build Generative AI apps. You can use our most capable models over the Cloud with the Gemini API in Google AI or Vertex AI for Firebase directly in your Android apps. For on-device, Gemini Nano is our most efficient model. We’re working closely with a few early adopters such as Patreon, Grammarly, and Adobe to ensure we’re creating the best APIs that unlock the most innovative experiences. For example, Adobe is experimenting with Gemini Nano to enhance the on-device experience of Acrobat AI Assistant, a tool that allows their users to summarize and interact with documents. Be sure to check out the Build your own generative AI powered Android app, Android on-device gen AI under the hood, and the What’s New in Android sessions to learn more!

Moving image of Gemini Nano operating in Adobe

Excellent apps, across devices

#3: Think adaptive: apps on phones, foldables, tablets and more

Build and design apps that adapt beyond the phone, with the new Compose adaptive layout libraries built with Material guidance in beta. Add rich stylus and keyboard support to increase user productivity. Check out three of our key Android adaptive sessions at Google I/O: Designing adaptive apps, Building adaptive Android apps, and Increase user productivity with large screens and accessories.


#4: Enhance homescreens with Widgets and Jetpack Glance

Jetpack Glance 1.1 is now available as a release candidate and lets you build high quality widgets using your Compose skills. Check out our new canonical layouts, design guidance, and Figma updates to the Android UI kit. To learn more, check out our Improve the user experience of your Android app workshop and Build Android widgets with Jetpack Glance technical session.

#5-9: come back here tomorrow and Thursday!

We’ll continue to share more updates for Android Developers throughout Google I/O, so check back here tomorrow!

Developer Productivity

#10: Use Kotlin Multiplatform for sharing business logic

Kotlin Multiplatform (KMP) enables sharing Kotlin code across different platforms and several of our Jetpack libraries, like DataStore and Room, have already been migrated to take advantage of KMP. We use Kotlin Multiplatform within Google and recommend using KMP for sharing business logic between platforms. Learn more about it here.

#11: Compose: Shared Elements, performance improvements and more

The upcoming Compose June ‘24 release is packed with the features you’ve been asking for! Shared element transitions, lazy list item reordering animations, strong skipping mode, performance improvements, a new lazy flow layout and more. Read more about it in our blog.

#12: Android Studio: the latest preview, with Gemini and more

Android Studio Koala 🐨 Feature Drop (2024.1.2), available today in the canary channel, builds on top of IntelliJ 2024.1 and adds innovative features unlocked by Gemini, such as insights for crashes in App Quality Insights, code transformations, and a Gemini API starter template to get you started quickly with Gemini. Additionally, new features such as USB speed detection, a shortcut UI to control device settings, a new way to sign in to Google services, an updated and speedier UI for profilers with a new task-centric approach, and a deep integration with the Google Play SDK Index are intended to make the development process extremely productive. Read more here.

And the latest from the world of Mobile

#13: Grow your business with the latest Google Play updates

Discover new ways to attract and engage users with enhanced custom store listings. Optimize revenue with expanded payment options. Reinforce trust through secure, high-quality experiences made easier with our latest SDK Console improvements. Learn about these updates and more, including our new vertical approach, in our blog.

#14: Simplify app compliance with Checks

Streamline your app's privacy compliance with Checks, Google's AI-powered compliance solution! Checks empowers developers to swiftly identify, address, and resolve privacy issues, enabling you to launch apps faster and with confidence. Harness the power of automation with Checks' intelligent reports, saving you valuable time and resources. Get started now at checks.google.com.

#15: And of course, Android 15

…but for that, you’ll have to stay tuned tomorrow, when we’ve got a bit more up our sleeve!

Google I/O 2024: What’s new in Android Development Tools

Posted by Mayank Jain – Product Manager, Android Studio

At Google I/O 2024, we announced an exciting new set of features and tools aimed at making Android development faster and easier. We also shared updates to Android Studio that will help you leverage AI and make it easier for you to build high quality apps for Android across the Android ecosystem.

You can check out the What’s new in Android Developer Tools session at Google I/O 2024 to see some of the new features in action, or better yet, try them out yourself by downloading Android Studio Koala 🐨 Feature Drop in the preview release channel. Here’s a look at our announcements:

Leverage Gemini in Android Studio

Since launching AI features in Android Studio last year, we continue to evolve our underlying models, integrate your feedback, and expand availability to more countries and territories so that you can leverage AI in your workflow and become a more productive Android app developer. Using the built-in AI privacy controls, you can opt in to using the latest AI feature improvements that are tailored for your Android app project.

Code suggestions with Gemini in Android Studio

You can now provide custom prompts for Gemini in Android Studio to generate code suggestions. After you enable Gemini from the View > Tool Windows > Gemini tool window, right-click in the code editor and select Gemini > Transform selected code from the context menu to see the prompt field. You can then prompt Gemini to generate a code suggestion that either adds new code or transforms selected code. You can ask Gemini to simplify complex code by rewriting it, perform very specific code transformations such as “make this code idiomatic,” or generate new functions you describe. Android Studio then shows you Gemini’s code suggestion as a code diff, so that you can review and accept only the suggestions you want.

Code suggestions with Gemini in Android Studio

Gemini for recommendations on crash reports

App Quality Insights in Android Studio seamlessly incorporates both Firebase Crashlytics and Android Vitals data into Android Studio so you can access the most important app stability related information, without having to switch tools.

You can now use Gemini in Android Studio to analyze your crash reports, generate insights which are shown in the Gemini tool window, provide a crash summary, and when possible recommend next steps, including sample code and links to relevant documentation.

You can generate all of this information directly from the App Quality Insights tool window in Android Studio after you enable Gemini from View > Tool Windows > Gemini.

Gemini for recommendations on crash reports

Integrate Gemini API into your app with a starter template

Start prototyping with Gemini models in your apps with our new starter app template provided in Android Studio. In this app template, you can issue prompts directly to the Gemini API, add image sources as input, and display the responses on the screen. Additionally, use Google AI Studio to craft custom prompts for your app.

When you are ready to scale your AI features to production with Google Cloud infrastructure, you can also access the powerful capabilities of Gemini models through Vertex AI. This is Google’s fully-managed development platform designed for building and deploying generative AI. Whether you simply need world class inference capabilities, or want to build end-to-end AI workflows with Vertex, the Gemini API is a great solution.

Integrate Gemini API into your app with a starter template

Gemini 1.5 Pro coming to Android Studio

We previously announced that Gemini in Android Studio uses the Gemini 1.0 Pro model to help you by answering Android development questions, generating code, finding resources, or explaining best practices. In this preview stage of Gemini in Android Studio, we are offering Gemini 1.0 Pro at no cost for all users for now. Gemini 1.0 Pro is a versatile model, making it ideal to scale. However, we acknowledge that its quality of responses may be limited in some cases. Based on your feedback, we are committed to improving the quality for Android development, and excited to add more features using Gemini to make your developer experience even more productive.

Along this journey, the Gemini 1.5 Pro model will be coming to Android Studio later this year. Equipped with a Large Context Window, this model notably leads to higher quality responses, and unlocks use cases like multimodal input that you might have seen in the Google I/O 2024 sessions. Stay tuned for more updates on how you can access more capable models in Android Studio.

Productivity enhancements

Release Monitoring with Firebase

Today we announced the general availability of the Firebase Release Monitoring Dashboard. The Firebase Release Monitoring Dashboard is a single dashboard powered by Firebase Crashlytics to monitor the most recent production releases of your Android app. It updates in real time to give you a high-level view of the most important release metrics, like crash-free sessions, comparisons, and benchmarking based on your previous releases.

Android Device Streaming

Android Device Streaming, powered by Firebase, lets you securely connect to remote physical Android devices hosted in Google's data centers. It is a convenient way to test your app against physical units of some of the latest Android devices, including the Google Pixel 8 and 8 Pro, Pixel Fold, and more.

Starting today, Android Device Streaming now includes the following devices, in addition to the portfolio of 20+ device models already available:

    • Samsung Galaxy Fold5
    • Samsung Galaxy S23 Ultra
    • Google Pixel 8a

Additionally, if you’re new to Firebase, Android Studio automatically creates and sets up a no-cost Firebase project for you when you sign in to Koala Feature Drop to use Device Streaming. So, you can get to streaming the device you need much faster. Learn more about Android Device Streaming quotas, including promotional quota for the Firebase Blaze plan projects available for a limited time.

Connect to the latest physical Android devices in moments with Android Device Streaming, powered by Firebase

USB cable speed detection

Did you know that USB cable bandwidth varies from 480 Mbps (USB 2) up to 40,000 Mbps (USB 4)? Android Studio Koala Feature Drop now makes it trivial to differentiate low-performing USB cables from high-performing ones.

When you connect an Android device, Android Studio automatically detects the device and USB cable bandwidth and warns you if there’s a mismatch in USB bandwidth.

Note: USB cable speed detection requires an updated ADB found in Android SDK Platform Tools v34+, and is currently available for macOS and Linux.

USB cable speed detection.
Learn more about USB speeds here

A new way to sign in with Google in Android Studio

It’s now easier to sign in to multiple Google services with one authentication step. Whether you want to use Gemini in Android Studio, Firebase for Android Device Streaming, Google Play for Android Vitals reports, or all of these useful services, the new sign-in flow makes it easier to get up and running. If you’re new to Firebase and want to use Android Device Streaming, Android Studio automatically creates a project for you, so you can quickly start streaming a real physical device. With granular permissions scoping, you will always be in control of which services have access to your account. To get started, just click the profile avatar and sign in with your developer account.

A new way to sign in with Google in Android Studio

Device UI setting shortcut

Using the device UI setting shortcut, you can now effortlessly configure your devices to desired settings related to dark theme, font size, display size, app language, and more, all directly through the Running Devices window. You can now test and debug your UI seamlessly for any of the possible scenarios required by your use case.

Device UI settings shortcuts

Faster and improved Profiler with a task-centric approach

The internals of the Android Studio Profiler have been dramatically improved. Popular profiling tasks like capturing a system trace with profileable apps now start up to 60% faster.*

We’ve redesigned the profiler to make it easier to start the task you’re interested in, whether it’s profiling your app’s CPU, memory, or power usage. For example, initiating a system trace task to profile and improve your app’s startup time is integrated right in the UI as you open the profiler.

Faster and improved Profiler with a task-centric approach 
*Based on internal data, as tested in April 2024

Google Play SDK Index integration

Android Studio is integrated with the Google Play SDK Index to inform you when there are known policy or version issues with the SDKs used by your app. This enables you to update those dependencies and avoid issues that could prevent you from publishing new versions of your app.

In the Android Studio Koala Feature Drop release, the integration has been expanded to also include warnings from the Google Play SDK Console. This gives you a complete view of any potential version or policy issues in your dependencies before submitting your app to the Google Play Console.

Notes from SDK authors are now also displayed directly in Android Studio to save you time.

A warning from the SDK Index with the corresponding SDK author note

Preview tiles for Wear OS apps

Android Studio now has preview support for Tiles. You can now iterate much more quickly when creating tiles, enabling you to quickly see what a Tile looks like on different configurations without needing to run it on a device.

Tiles previews usage for Wear OS apps

Generate synthetic sensor data for testing on Wear OS apps

To help simulate real life scenarios you can now generate synthetic (fake) data for a Wear OS emulator for health related sensors such as heart rate, speed, steps, and more. You are now able to set up and perform testing for a multi-sport training session in minutes, end-to-end in Android Studio, without ever leaving your desk.

Generate synthetic sensor data for testing on Wear OS apps

Compose Glance widget previews

Android Studio Koala Feature Drop makes it easy to preview your Jetpack Compose Glance widgets (1.1.0-rc01) directly within the IDE. Catch potential UI issues and fine-tune your widget's appearance early in the development process. Learn more about how to get started.

Previews for Compose Glance widgets

Live Edit for Compose enabled by default

Live Edit for Compose can accelerate your Compose development experience by automatically deploying code changes to the running application on an emulator or physical device. Live Edit can help you see the effect of updates to UX elements—for example new composables, modifier updates, and animations—on the overall app experience. As you become more familiar with Live Edit you will find many creative ways it can help improve your development experience and productivity.

In Android Studio Koala Feature Drop, Live Edit is enabled by default in manual mode and has increased stability and more robust change detection, including support for import statements.

Compose Preview Screenshot Testing with Now in Android app

Compose preview screenshot testing plugin (alpha)

Host-side screenshot testing is an easy and powerful way to test UIs and prevent regressions. Today, the first alpha version of the Compose Preview Screenshot Testing plugin is available as a separate plugin, to be used together with AGP 8.5.0-beta01 or higher. Add your Compose Previews to the src/main/screenshotTest folder and run the task to generate a diff report after UI updates. The generated HTML test report lets you visually detect any changes to your app’s UI.

This alpha version of the plugin is designed for rapid iteration and feedback. We plan to merge it back into AGP in the future, but for now, this separate plugin lets us experiment and improve the feature quickly. Learn more about how to get started.
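Wiring it up currently looks roughly like this (a sketch; the plugin id, Gradle property, and task names are from the first alpha and may change, so check the plugin’s release notes):

// build.gradle.kts (module)
plugins {
    id("com.android.compose.screenshot") version "0.0.1-alpha01"
}

android {
    // Opt-in flag required while the feature is experimental.
    experimentalProperties["android.experimental.enableScreenshotTest"] = true
}

// Record references, then diff after UI changes:
//   ./gradlew updateDebugScreenshotTest
//   ./gradlew validateDebugScreenshotTest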

IntelliJ Platform Update (2024.1)

Android Studio Koala Feature Drop includes the IntelliJ 2024.1 platform release, which comes with some very useful IDE improvements:

    • An overhauled terminal featuring both visual and functional enhancements to streamline command-line tasks. Learn more in this blog post.
    • A new feature called sticky lines in the editor simplifies working with large files and exploring new codebases. This feature keeps key structural elements, like the beginnings of classes or methods, pinned to the top of the editor as you scroll and provides an option to promptly navigate through the code by clicking on a pinned line.
    • Basic IDE functionalities like code highlighting and completion now work for Java and Kotlin during project indexing, which should enhance your startup experience.
    • You can now scale the IDE down to 90%, 80%, or 70%, giving you the flexibility to adjust the size of IDE elements both upward and downward.

Read the detailed IntelliJ release notes here.

To summarize

Android Studio Koala Feature Drop (2024.1.2) is now available in the Android Studio canary channel with

    • Gemini in Android Studio
        • Code suggestions with Gemini in Android Studio
        • Gemini for recommendations on crash reports
        • Gemini API starter app template to help integrate Gemini into your app (also available in Koala 2024.1.1)

    • Productivity enhancements
        • Release Monitoring with Firebase
        • Android Device Streaming
        • USB cable speed detection
        • A new way to sign in with Google in Android Studio
        • Device UI setting shortcut
        • Faster and improved Profiler with a task-centric approach
        • Google Play SDK Index integration
        • Preview tiles for Wear OS apps
        • Generate synthetic sensor data for testing on Wear OS apps
        • Compose Glance widget previews
        • Live Edit for Compose enabled by default
        • Compose preview screenshot testing plugin (alpha) - to be installed additionally

    • IntelliJ Platform Update (2024.1): also available in Koala 2024.1.1
        • An overhauled terminal
        • Sticky lines in the editor to simplify working with large files
        • Code highlighting and completion now work during project indexing
        • Flexible IDE size adjustments

And last, a quick reminder that going forward, the initial Android Studio release of each version will have the .1 major version and introduce the updated IntelliJ platform version, while the subsequent Feature Drop will increase the Android Studio major version to .2 and focus on introducing Android-specific features that help you be more productive for Android app development.

How to get started

Ready to try the exciting new features in Android Studio?

You can download the canary version Android Studio Koala 🐨 Feature Drop (2024.1.2) today to incorporate these new features into your workflow or try the stable version Android Studio Jellyfish 🪼. You can also install them side by side by following these instructions.

As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let’s build the future of Android apps together!

Introducing a new Text-To-Speech engine on Wear OS

Posted by Ouiam Koubaa – Product Manager and Yingzhe Li – Software Engineer

Today, we’re excited to announce the release of a new Text-To-Speech (TTS) engine that is performant and reliable. Text-to-speech turns text into natural-sounding speech across more than 50 languages, powered by Google’s machine learning (ML) technology. The new text-to-speech engine on Wear OS uses smaller, more efficient prosody ML models to bring faster synthesis to Wear OS devices.

Use cases for Wear OS’s text-to-speech include accessibility services, coaching cues for exercise apps, navigation cues, and reading incoming alerts aloud through the watch speaker or Bluetooth-connected headphones. The engine is meant for brief interactions, so it shouldn’t be used for reading aloud a long article or a long summary of a podcast.

How to use Wear OS’s TTS

Text-to-speech has long been supported on Android. Wear OS’s new TTS has been tuned to be performant and reliable on low-memory devices. All the Android APIs are still the same, so developers use the same process to integrate it into a Wear OS app, for example, TextToSpeech#speak can be used to speak specific text. This is available on devices that run Wear OS 4 or higher.
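For example, speaking a short coaching cue might look like this (a sketch; tts is an already-initialized TextToSpeech instance, as set up in the snippet below):

import android.speech.tts.TextToSpeech

fun speakCue(tts: TextToSpeech) {
    // QUEUE_FLUSH interrupts any ongoing speech; use QUEUE_ADD to append instead.
    tts.speak(
        /* text= */ "Halfway there, keep it up!",
        /* queueMode= */ TextToSpeech.QUEUE_FLUSH,
        /* params= */ null,
        /* utteranceId= */ "coaching_cue"
    )
}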

When the user interacts with the Wear OS TTS for the first time following a device boot, the synthesis engine is ready in about 10 seconds. For special cases where developers want the watch to speak immediately after opening an app or launching an experience, the following code can be used to pre-warm the TTS engine before any synthesis requests come in.

private fun initTtsEngine() {
    // Callback invoked when the TextToSpeech connection is set up.
    // Assumes the enclosing class declares `private lateinit var tts: TextToSpeech`
    // and a TAG constant for logging.
    val callback = TextToSpeech.OnInitListener { status ->
        if (status == TextToSpeech.SUCCESS) {
            Log.i(TAG, "tts Client Initialized successfully")

            // Get default TTS locale
            val defaultVoice = tts.voice
            if (defaultVoice == null) {
                Log.w(TAG, "defaultVoice == null")
                return@OnInitListener
            }

            // Set TTS engine to use default locale
            tts.language = defaultVoice.locale

            try {
                // Create a temporary file to synthesize sample text
                val tempFile =
                    File.createTempFile("tmpsynthesize", null, applicationContext.cacheDir)

                // Synthesize sample text to our file
                tts.synthesizeToFile(
                    /* text= */ "1 2 3", // Some sample text
                    /* params= */ null, // No params necessary for a sample request
                    /* file= */ tempFile,
                    /* utteranceId= */ "sampletext"
                )

                // And clean up the file
                tempFile.deleteOnExit()
            } catch (e: Exception) {
                Log.e(TAG, "Unhandled exception: ", e)
            }
        }
    }

    tts = TextToSpeech(applicationContext, callback)
}

When you are done using TTS, release the engine by calling tts.shutdown() in your activity’s onDestroy() method. You should do the same when the app that uses TTS is closed.
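For instance, assuming the tts property from the snippet above:

override fun onDestroy() {
    // Release the synthesis engine and its resources.
    tts.shutdown()
    super.onDestroy()
}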

Languages and Locales

By default, Wear OS TTS includes 7 pre-loaded languages in the system image: English, Spanish, French, Italian, German, Japanese, and Mandarin Chinese. OEMs may choose to preload a different set of languages. You can check what languages are available by using TextToSpeech#getAvailableLanguages(). During watch setup, if the user selects a system language that is not a pre-loaded voice file, the watch automatically downloads the corresponding voice file the first time the user connects to Wi-Fi while charging their watch.

There are limited cases where the speech output may differ from the user’s system language. For example, in a scenario where a safety app uses TTS to call emergency responders, developers might want to synthesize speech in the language of the locale the user is in, not the language the watch is set to. To synthesize text in a different language from the system settings, use TextToSpeech#setLanguage(java.util.Locale).
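Here’s a sketch combining both calls; the Spanish locale is purely illustrative:

import android.speech.tts.TextToSpeech
import java.util.Locale

fun useSpanishIfAvailable(tts: TextToSpeech) {
    // TextToSpeech#getAvailableLanguages() surfaces as a property in Kotlin.
    val spanish = Locale.forLanguageTag("es-ES")
    if (spanish in tts.availableLanguages) {
        tts.setLanguage(spanish) // returns a LANG_* status code worth checking
    }
}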

Conclusion

Your Wear OS apps now have the power to talk, either directly from the watch’s speakers or through Bluetooth connected headphones. Learn more about using TTS.

We look forward to seeing how you use the new Text-To-Speech engine to create more helpful and engaging experiences for your users on Wear OS!


Copyright 2023 Google LLC.
SPDX-License-Identifier: Apache-2.0

Wear OS hybrid interface: Boosting power and performance

Posted by Kseniia Shumelchyk, Android Developer Relations Engineer

In collaboration with our hardware partners, we’ve continued to prioritize the Wear OS by Google user experience. As such, we’ve made fundamental design changes to the platform and substantially expanded the capabilities of the Wear OS hybrid interface that improve two key areas: power and performance.

With the OnePlus Watch 2, powered by the latest version of Wear OS (Wear OS 4), the dual-chipset architecture works with our hybrid interface to get both chips to work better in tandem. This enables even more use cases to benefit from dramatically extended battery life of up to 100 hours of regular use, with all functionality accessible in Smart Mode.

Together, we’ve created a premium smartwatch experience that doesn’t compromise the advanced feature set or battery life. In this post, we’ll share how you can benefit from these changes when building experiences for Wear OS.

On the edge of innovation: redesigned smartwatch architecture

Wear OS smartwatches have a dual-chipset architecture that pairs a powerful application processor (AP), capable of handling complex operations en masse, with a seamlessly coupled, ultra low-power co-processor microcontroller unit (MCU).

The Wear OS hybrid interface enables intelligent switching between the MCU or the AP, allowing the AP to be suspended when not needed to preserve battery life. It helps, for instance, achieve more power-efficient experiences, like sensor data processing on the MCU while the AP is asleep. At the same time, the hybrid interface provides a seamless transition between these states, keeping a rich and premium user experience without jarring transitions between power modes.


Connectivity and notification experience

To enhance connectivity-reliant interactions like notifications and phone calls, OnePlus utilized platform capabilities with the notification API in the hybrid interface, enabling the MCU to process regular notification experiences and reduce the need to activate the AP.

For example, bridged notifications will be delivered to the watch without waking up the high-performance AP. Users can read and dismiss these notifications while the watch is still powered by the MCU. The MCU can also handle wearable-specific actions in notifications, such as quick replies or remote actions.

What this means for development

You can leverage existing Wear OS APIs to get these optimizations without any added effort – no code changes required!

Notifications

The notification hybrid interface enables seamless transitions between power modes to work with the Wear OS notification stack. You get the best notification performance by using the Notification API.
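Put differently, a completely standard NotificationCompat notification, including a wearable quick-reply action, is all the watch needs. Here’s a sketch (CHANNEL_ID and replyIntent are assumed to exist in your app):

import android.app.PendingIntent
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import androidx.core.app.RemoteInput

fun postMessageNotification(context: Context, replyIntent: PendingIntent) {
    val remoteInput = RemoteInput.Builder("key_reply").setLabel("Reply").build()
    val replyAction = NotificationCompat.Action.Builder(
        android.R.drawable.ic_menu_send, "Reply", replyIntent
    ).addRemoteInput(remoteInput).build()

    val notification = NotificationCompat.Builder(context, CHANNEL_ID) // assumed channel
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setContentTitle("New message")
        .setContentText("See you at 7?")
        .addAction(replyAction) // reply can be handled while the AP stays asleep
        .build()

    // Requires the POST_NOTIFICATIONS permission on Android 13+.
    NotificationManagerCompat.from(context).notify(1, notification)
}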

Health & Fitness experiences

The Wear OS hybrid interface also elevates the fitness experience with more precise workout tracking, automatic sports recognition and smarter health data monitoring. All of these can be offered to users without compromising battery life.

Starting with Wear OS 3, developers use Health Services on Wear OS to gain access to sensor data. The health hybrid interface works under the hood to enable power optimizations by batching sensor data on the MCU and periodically updating developer apps through the Health Services API on the AP.
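For reference, registering for heart rate data through Health Services looks roughly like this (a sketch against the androidx.health.services MeasureClient; the power optimizations happen beneath this API):

import android.content.Context
import androidx.health.services.client.HealthServices
import androidx.health.services.client.MeasureCallback
import androidx.health.services.client.data.Availability
import androidx.health.services.client.data.DataPointContainer
import androidx.health.services.client.data.DataType
import androidx.health.services.client.data.DeltaDataType

fun startHeartRate(context: Context) {
    val measureClient = HealthServices.getClient(context).measureClient
    val callback = object : MeasureCallback {
        override fun onAvailabilityChanged(
            dataType: DeltaDataType<*, *>,
            availability: Availability
        ) { /* e.g. the sensor became (un)available */ }

        override fun onDataReceived(data: DataPointContainer) {
            // Samples arrive in batches; the MCU collects them while the AP sleeps.
            data.getData(DataType.HEART_RATE_BPM).forEach { sample ->
                println("Heart rate: ${sample.value} bpm")
            }
        }
    }
    measureClient.registerMeasureCallback(DataType.HEART_RATE_BPM, callback)
}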

Watch Faces

With Wear OS 4, we launched the Watch Face Format, a declarative XML format to create customizable and power-efficient watch faces.

The platform has created capabilities to implement Watch Face Format rendering on the MCU, so using the new format helps future-proof certain watch faces to take advantage of emerging optimizations in future devices for better battery usage.

Check out the watch face format documentation and design guidelines for Wear OS watch faces.

Expand your reach with Wear OS

With the additions to the Wear OS smartwatch ecosystem and expanded device capabilities, it's an ideal time to build experiences for smartwatches that can reach more users and benefit your business.

To begin developing apps for Wear OS, try our Compose for Wear OS codelab, and check out the documentation and samples.

Read more about developer updates in Wear OS 4, and how you can get your apps ready for the latest Wear OS watches.

We can’t wait to see what experiences you’ll build!