Introducing Jetpack Media3

Posted by Don Turner, Developer Relations Engineer


Today, we're launching the first alpha of Jetpack Media3. It's a collection of support libraries for media playback, including ExoPlayer. This article will explain why we created Media3, what it contains, and how it can simplify your app architecture.


Why another media API?

We have several existing media APIs: Jetpack Media (also known as MediaCompat), Jetpack Media2, and ExoPlayer. These libraries were developed with different goals, and have several areas of overlapping functionality.

For example, ExoPlayer and Media2 both contain UI components, and MediaCompat and Media2 contain classes for handling media sessions.

It can be challenging to decide which library to use for a given use case, and objects from different libraries are often not compatible, requiring adapters or connecting code. Media3 removes these challenges by providing a single set of libraries which work well together.

To create Media3 we:

  • Identified the common areas of functionality in our existing media libraries, including UI, playback and media session handling.
  • Refined and merged the best parts.
  • Created a common Player interface for all "player-like" objects (more on this later).

What's in the box

Media3 contains many libraries. The ones most relevant for simple media playback are shown below.


  • media3-exoplayer: objects for playing video and audio, provided by ExoPlayer. Useful class for playback: ExoPlayer, for simple playback use cases.
  • media3-ui: views for displaying media playback controls, content, and metadata. Useful class for playback: StyledPlayerView, which displays audio and video content from a Player.
  • media3-session: objects for creating and interacting with a media session. Useful classes for playback: MediaSession, for advertising what you're playing, and MediaLibraryService, for advertising your content library.



A common Player

Our existing media APIs have a lot of objects which accept playback commands, like "play," "pause," and "skip". Identifying these "player-like" objects and ensuring that they implement a common Player interface was one of the biggest undertakings in the development of Media3.

We've updated, enhanced, and streamlined the Player interface from ExoPlayer to act as the common Player interface for Media3.

Classes such as MediaController and MediaSession that previously contained references to other "player-like" objects have been updated to reference the new Player interface.

This is useful when communicating with UI components. Both ExoPlayer and MediaController now implement Player, so either one of them can be used to communicate with StyledPlayerView or other UI components.
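
For example, attaching either object to the view is identical (a sketch: player_view is a hypothetical layout id, and exoPlayer and mediaController are instances you've already built):

val playerView = findViewById<StyledPlayerView>(R.id.player_view)

// A local player...
playerView.player = exoPlayer

// ...or a controller for a remote session works just as well.
playerView.player = mediaController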

[Diagram: MediaController and ExoPlayer implement the Player interface and can be used to communicate with UI components, like StyledPlayerView]


Simplified architecture

Using this Player interface avoids the need for connecting components, allowing for less code and a simpler app architecture.

In particular, this makes working with media sessions easier. Instead of using the MediaSessionConnector extension, or writing your own "player to media session" connector, you can create a MediaSession using a Player, like this:

player = ExoPlayer.Builder(context).build()
session = MediaSession.Builder(context, player).build()

Now your media session will automatically reflect the state of your player, and any commands sent to your media session will be automatically forwarded to your player. All that in just two lines of code!

Providing a content library

If your app needs to expose its content library to other apps, like Android Auto, use MediaLibraryService, rather than a MediaBrowserService from MediaCompat.

You'll then create a MediaLibrarySession and implement a MediaLibrarySessionCallback whose methods will be called by the browsing app to obtain your content tree.
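
To illustrate, here's a minimal sketch of such a service, assuming the current alpha API surface (names such as MediaLibrarySession.MediaLibrarySessionCallback may change in later releases):

class PlaybackService : MediaLibraryService() {

    private lateinit var player: ExoPlayer
    private lateinit var session: MediaLibrarySession

    override fun onCreate() {
        super.onCreate()
        player = ExoPlayer.Builder(this).build()
        // The callback's methods (onGetLibraryRoot, onGetChildren, ...) are
        // called by browsing apps to obtain your content tree.
        session = MediaLibrarySession.Builder(this, player, LibrarySessionCallback())
            .build()
    }

    // Browsing apps such as Android Auto connect through this method.
    override fun onGetSession(controllerInfo: MediaSession.ControllerInfo) = session

    private inner class LibrarySessionCallback :
        MediaLibrarySession.MediaLibrarySessionCallback {
        // Override onGetLibraryRoot, onGetChildren, etc. to serve your tree.
    }
}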

[Diagram: MediaLibraryService can be used to expose a content library]


Easier updates

One of the key benefits of using Jetpack libraries is API stability. If you use symbols that are part of the stable API, you generally don't need to update your code to use a new release of that library within the same major version.

In Media3, some of the most commonly used objects are marked as stable, including the Player API and media session classes.

Most of ExoPlayer's API surface is marked as unstable.

[Diagram: stable and unstable areas of the Media3 API]


To use an unstable method or class you'll need to add the OptIn annotation before using it.

@androidx.annotation.OptIn(UnstableApi::class)
private fun initializeExoPlayer() {
  // ...
}

If your project uses a lot of unstable methods it may be more convenient to add this suppression to your project-wide lint.xml.

<issue id="UnsafeOptInUsageError">
  <ignore
      regexp='\(markerClass = androidx\.media3\.UnstableApi\.class\)'/>
</issue>

Just because part of an API is marked as unstable doesn't mean that the API is unreliable or that you shouldn't use it - it's just a way of informing you that it might change in the future.


Getting started

Media3 is released today in alpha, and we'd love for you to try it out.

One of the best ways to do this is to check out the demo app, which shows how to play video and audio, and integrate with a media session.

You can add the Media3 dependencies to your app by adding the following artifacts to your build.gradle:

implementation 'androidx.media3:media3-ui:1.0.0-alpha01'
implementation 'androidx.media3:media3-exoplayer:1.0.0-alpha01'
implementation 'androidx.media3:media3-session:1.0.0-alpha01'

If you have feedback or run into problems, please file an issue. We'd really love to hear from you.

For more information check out the “What's next for AndroidX Media and ExoPlayer” talk from Android Dev Summit 2021 and the Media3 release notes.

Watch out for Wear OS at Android Dev Summit 2021

Posted by Jeremy Walker, Developer Relations Engineer


This year’s Android Dev Summit had many exciting announcements for Android developers, including some major updates for the Wear OS platform. At Google I/O, we announced the launch of the new Wear OS. Since then, Wear OS Powered by Samsung has launched on the Galaxy Watch4 series. Many developers such as Strava, Spotify, and Calm have already created helpful experiences for the latest version of Wear OS, and we’re looking forward to seeing what new experiences developers will help bring to the watch. To learn more and create better apps for the wrist, read more about the updates to our APIs, design tools, and the Play store.


Compose for Wear OS

The Jetpack Compose library simplifies and accelerates UI development, and we’re bringing Compose support to Wear OS. You can design your app with familiar UI components, adapted for the watch. These components include Material You, so you can create beautiful apps with less code.

Compose for Wear OS is now in developer preview. To learn more and get started, try it out and share your feedback here, or join the #compose-wear channel on the JetBrains Slack and let us know what you think there! Make sure you do so before we finalize the APIs during beta!


Watch Face Studio


Watch faces are one of the most visible ways that users can express themselves on their smartwatches. Creating a watch face is a great way to showcase your brand for users on Wear OS. We’ve partnered with Samsung to provide better tools for watch face creation and make it easier to design watch faces for the Wear OS ecosystem.

Watch Face Studio is a design tool created by Samsung that allows you to produce and distribute your own watch faces without any coding. It includes intuitive graphics tools to allow you to easily design watch faces. You can create watch faces for your personal use, or upload them in Google Play Console to share with your users on Wear OS devices that support API level 28 and above.


Library updates

We recently released a number of Android Jetpack Wear OS libraries to help you follow best practices, reduce boilerplate, and create performant, glanceable experiences for your users.

Tiles are now enabled for most devices in the market, providing predictable, glanceable access to information and quick actions. The API is now in beta; check it out!

For developers who want more fine-grained control of their watch faces (outside of Watch Face Studio), we've launched the new Jetpack Watch Face APIs beta, built from the ground up in Kotlin.

The new API offers a number of new features:

  • Watch face styling which persists across both the watch and phone (no need for your own database).
  • Support for a WYSIWYG watch face configuration UI on the phone.
  • Smaller, separate libraries (only include what you need).
  • Battery improvements by encouraging good battery usage patterns out of the box; for example, reducing the interactive frame rate when battery is low.
  • New Screenshot APIs so users can see their watch face changes in real time.
  • And many more...

This is a great time to start moving from the older Watch Face Support Library to this new version.


Play Store updates

We’re making it easier for people to discover your Wear OS apps in the Google Play Store. Earlier this year, we enabled searching for watch faces and made it easier for people to find your apps in the Wear category. We also launched the capability for people to download apps onto their watches directly from the mobile Play Store. You can read more about these changes here.

We’ve also released updated Wear OS quality guidelines to help you meet your users’ expectations, as well as new screenshot guidelines to help your users have a better understanding of what your app will look like. To help people better understand how your app would work on their device in their location, we will be launching form factor and location specific ratings in 2022.

To learn more about developing for Wear OS, check out the developer website.

What’s New in Scalable Automated Testing

Posted by Arif Sukoco, Android Studio Engineering Manager (@GoogArif) & Jolanda Verhoef, Developer Relations Engineer (@Lojanda)


We know it can be challenging to run Android instrumented tests at scale, especially when you have a big test suite that you want to run against a variety of Android device profiles.

At I/O 2021 we first introduced the Unified Test Platform, or UTP. UTP allows us to build testing features for Android instrumented tests, such as running instrumented tests from Android Studio through Gradle, and Gradle Managed Devices (GMD). GMD allows you to define a set of virtual devices in build.gradle and lets Gradle manage them, spinning them up before each instrumented test run and tearing them down afterwards. In Android Gradle Plugin 7.2.0, we are introducing more features on top of GMD to help scale tests across multiple Android virtual devices in parallel.


Sharding

The first feature we are introducing is sharding on top of GMD. Sharding is a common technique used in test runners where the test runner splits up the tests into multiple groups, or shards, and runs them in parallel. With the ability to spin up multiple emulator instances in GMD, sharding is an obvious next step to make GMD a more scalable solution for large test suites.

When you enable sharding for GMD and specify the desired number of shards, GMD will automatically spin up that number of managed devices for you. For example, the sample below configures a Gradle Managed Device called pixel2 in your build.gradle:


android {
  testOptions {
    devices {
      pixel2 (com.android.build.api.dsl.ManagedVirtualDevice) {
        device = "Pixel 2"
        apiLevel = 30
        systemImageSource = "google"
        abi = "x86"
      }
    }
  }
}

Let’s say you have 4 instrumented tests in your test suite. You can pass an experimental property to Gradle to specify how many shards you want to divide your tests in. The following command splits the test run into two shards:

class com.example.myapplicationExampleInstrumentedTests

 ./gradlew -Pandroid.experimental.androidTest.numManagedDeviceShards=2 pixel2DebugAndroidTest

Invoking Gradle this way will tell GMD to spin up 2 instances of pixel2, and split the running of your 4 instrumented tests between those 2 emulated devices. In the Gradle output, you will see "Starting 2 tests on pixel2_0" and "Starting 2 tests on pixel2_1".

As seen in this example, sharding through GMD spins up multiple identical virtual devices. If you apply sharding and have more than one device defined in build.gradle, GMD will spin up multiple instances of each virtual device.

The HTML report of your test run will be generated in app/build/reports/androidTests/managedDevice/pixel2. This report will contain the combined test results from all the shards.

You can also load the test results from each shard to Android Studio by selecting Run > Import Tests From File from the menu and loading the protobuf output files app/build/outputs/androidTest-results/managedDevice/pixel2/shard_1/test-result.pb and app/build/outputs/androidTest-results/managedDevice/pixel2/shard_2/test-result.pb.

It’s worth remembering that when sharding your tests, there is always a tradeoff between the extra resources and time required to spin up additional emulator instances, and the savings in test running time. As such, it is more useful when you have larger test suites to run.

Also, please note that GMD doesn't yet support running tests for test-only modules, and there are known flakiness issues when running on cloud-hosted CI servers.


Slimmer Emulator System Images

When running multiple emulator instances at the same time, your server's limited computing resources could become an issue.

One of the ways to improve this is by slimming down the Android emulator system image to create a new type of device that's optimized for running automated tests. The Automated Test Device (ATD) system image is designed to consume less CPU and memory by removing components that normally don't affect your app's instrumented tests, such as the SystemUI, the Settings app, and bundled apps like Gmail and Google Maps. Please read the release notes for more information about the ATD system image.

The ATD system images have hardware rendering disabled by default. This helps with another common source of slow-running test suites. Often, when running instrumented tests on an emulator, access to the host's GPU for graphics hardware acceleration is not available. In this case, the emulator will choose to use software graphics acceleration, which is much more CPU intensive. Nearly all functionality still works as expected with hardware rendering off, with the notable exception of screenshots. If you need to take screenshots in your test, we recommend taking a look at the new AndroidX Test Screenshot APIs, which will dynamically enable hardware rendering in order to take a screenshot. Please take a look at the examples for how to use these APIs.

To use ATD, first make sure you have downloaded the latest version of the Android emulator from the Canary channel (version 30.9.2 or newer). To download this emulator, go to Appearance & Behavior > System Settings > Updates and set the IDE updates dropdown to “Canary Channel”.

Next, you need to specify an ATD system image in your GMD configuration:


android {
  testOptions {
    devices {
      pixel2 (com.android.build.api.dsl.ManagedVirtualDevice) {
        device = "Pixel 2"
        apiLevel = 30
        systemImageSource = "aosp-atd" // Or "google-atd" if you need
                                       // access to Google APIs
        abi = "x86" // Or "arm64-v8a" if you are on an Apple M1 machine
      }
    }
  }
}

You can now run tests from the Gradle command line just like you would with GMD as before, including with sharding enabled. The only thing you need to add for now is to let Gradle know you are referring to a system image in the Canary channel.


./gradlew -Pandroid.sdk.channel=3 \
  -Pandroid.experimental.androidTest.numManagedDeviceShards=2 \
  pixel2DebugAndroidTest

Test running time improvement using ATD might vary, depending on your machine configuration. In our tests comparing ATD and non-ATD system images, on a Linux machine with an Intel Xeon CPU and 64GB of RAM we saw 33% shorter test running time when using ATD, while on a 2020 MacBook Pro with an Intel i9 processor and 32GB of RAM we saw a 55% improvement.

We’re really excited about these new features, and we hope they can allow you to better scale out your instrumented tests. Please try them out and let us know what you think! Follow us -- the Android Studio development team ‐ on Twitter and on Medium.

Here’s how to watch the 2021 Android Dev Summit!

Posted by The Android Team

We’re less than 24 hours away from kicking off the 2021 Android Dev Summit, broadcasting live online on October 27 & 28. The summit kicks off on October 27 at 10AM PDT with a 50-minute technical keynote, The Android Show. You can tune in at developer.android.com/dev-summit, or watch on YouTube.

After the show, we’ll be posting 30+ technical sessions to the site as well as YouTube for you to watch at your own pace, from Material You in Jetpack Compose to Kotlin Flows in practice.

Two days of live, technical Android content

Over the two day event, we have a number of ways for you to tune in and hear your favorite Android development topics discussed live from the team who built Android. Got questions about Modern Android Development, Large Screens, or Material You? Ask them on Twitter now using #AskAndroid to get them answered live on the air. We’ll also host live Android Code-Alongs. Tune in to watch Android experts as they code, tackle programming challenges, and answer your questions live across Jetpack Compose and Compose for Wear OS.


For the full agenda with timings, check out the Android Dev Summit page. And of course, don’t forget: if you run into the bugs of chaos before then, let them know that together with Team Jetpack, we’re coming for them at Android Dev Summit…

Evolving our business model to address developer needs

Posted by Sameer Samat, Vice President, Product Management

When we started Android and Google Play more than a decade ago, we made a bet that a free and open mobile ecosystem could compete with the proprietary walled gardens that dominated the industry. It wasn’t yet clear what kinds of businesses would move to mobile or what apps would be successful. To keep things simple, we went with an easy-to-understand business model: The vast majority of developers could distribute their apps on Google Play for free (currently 97% do so at no charge). For the developers who offered a paid app or sold in-app digital goods (currently just 3% of developers), the flat service fee was 30%. This model helped apps to become one of the fastest-growing software segments. And instead of charging licensing fees for our OS, our service fee allowed us to continually invest in Android and Play while making them available for free to device makers all over the world.

The creativity and innovation from developers around the world spurred amazing new app experiences we could have never imagined when we first introduced Android. As the ecosystem evolved, a wider range of business models emerged to support these different types of apps. We've made important changes along the way, including moving beyond a “one size fits all” service fee model to ensure all types of businesses can be successful. Instead of a single service fee, we now have multiple programs designed to support and encourage our diverse app ecosystem.

The result is that 99% of developers qualify for a service fee of 15% or less. And after learning from and listening to developers across many industries and regions, including developers like Anghami, AWA, Bumble, Calm, Duolingo, KADOKAWA, KKBOX, Picsart, and Smule, we're announcing additional changes to further support our ecosystem of partners and help them build sustainable businesses, and ensure Play continues to lead in the mobile app ecosystem.

Decreasing service fees on subscriptions to 15%

Digital subscriptions have become one of the fastest growing models for developers but we know that subscription businesses face specific challenges in customer acquisition and retention. We’ve worked with our partners in dating, fitness, education and other sectors to understand the nuances of their businesses. Our current service fee drops from 30% to 15% after 12 months of a recurring subscription. But we’ve heard that customer churn makes it challenging for subscription businesses to benefit from that reduced rate. So, we’re simplifying things to ensure they can.

To help support the specific needs of developers offering subscriptions, starting on January 1, 2022, we're decreasing the service fee for all subscriptions on Google Play from 30% to 15%, starting from day one.

For developers offering subscriptions, this means that first-year subscription fees will be cut in half. We’ve already gotten positive feedback from our developer partners on this change:

“Our partnership with Google has been a powerful one for our business, helping us to scale and ultimately playing a key role in advancing our mission to empower women globally. The pricing change they’ve announced will allow us to better invest in our products and further empower users to confidently connect online.”
– Whitney Wolfe Herd, Founder and CEO, Bumble Inc.
"Just as every person learns in different ways, every developer is different as well. We're excited to see Google continuing to collaborate with the ecosystem to find models that work for both the developer and platform. This reduction in subscription fees will help Duolingo accelerate our mission of universally available language learning."
– Luis von Ahn, Co-Founder and CEO, Duolingo

Going further with cross platform experiences

While apps remain incredibly important for mobile phones, great services must now also span TVs, cars, watches, tablets and more. And we recognize that developers need to invest in building for those platforms now more than ever.

Earlier this year we launched the Play Media Experience program to encourage video, audio and book developers alike to help grow the Android platform by building amazing cross-device experiences. This helped developers invest in these multi-screen experiences with a service fee as low as 15%.

Today, we’re also making changes to the service fee in the Media Experience program, to better accommodate differences in these categories. Ebooks and on-demand music streaming services, where content costs account for the majority of sales, will now be eligible for a service fee as low as 10%. The new rates recognize industry economics of media content verticals and make Google Play work better for developers and the communities of artists, musicians and authors they represent. You can go here for more information.

We will continue to engage with developers to understand their challenges and opportunities, and how we can best support them in building sustainable businesses. It's a theme that will be front and center at Android Dev Summit on October 27-28, where you'll hear more about our latest tools, application programming interfaces (APIs), and technologies designed to help developers be more productive and create better apps.

If you’re looking for more information about Google Play and its service fees, we've answered some common questions here.

Android Devs assemble: help Team Jetpack fight the bugs of chaos at #AndroidDevSummit + agenda now live!

Posted by The Android Team


Excited for Android Dev Summit on October 27-28? Us too! But, before we get there, we need your help. Team Jetpack is in a brutal fight against the bugs of chaos… they are outnumbered and they need you to join their forces, defeat the bugs, and help Android restore order to the universe. Will you answer the call?



Create your own Team Jetpack superhero, with a custom look and feel, and add your own mix of Android coding power boosts to unlock magical superpowers. Once you’re done, you’ll get a digital trading card for your superhero to share on Twitter, and you’ll be all set to join us at #AndroidDevSummit and help restore order to the universe. Go to goo.gle/ads21 to make yours!



#AndroidDevSummit agenda + sessions announced!

We just posted the livestream agenda, released the full technical talk details, and added additional speakers to the lineup for Android Dev Summit. Take a look and start planning your days. Android Dev Summit kicks off with a 50-minute technical keynote, The Android Show. After the show, we’ll be posting 30+ technical sessions for you to watch at your own pace, from Material You in Jetpack Compose to Kotlin Flows in practice.


Over the two day event, we have a number of ways for you to tune in and hear your favorite Android development topics discussed live from the team who built Android. Got questions about Modern Android Development, Large Screens, or Material You? Ask them on Twitter now using #AskAndroid to get them answered live on the air. We’ll also host live Android Code-Alongs. Tune in to watch Android experts as they code, tackle programming challenges, and answer your questions live across Jetpack Compose and Compose for Wear OS.

We can’t wait to connect with you in just over a week! For the full agenda with timings, check out the Android Dev Summit page. And of course, don’t forget: if you run into the bugs of chaos before then, let them know that together with Team Jetpack, we’re coming for them at Android Dev Summit…

Launching Data safety in Play Console: Elevating Privacy and Security for your users

Posted by Krish Vitaldevara, Director, Product Management


We know that a big part of feeling safe online is having control over your data. That’s why every day we’re committed to empowering users with advanced security and privacy controls and increased agency with respect to data practices. With the new Data safety section, developers will now have a transparent way to show users if and how they collect, share, and protect user data, before users install an app.

Starting today, we’re rolling out the Data safety form in Google Play Console. We’ve also listened to your feedback, so to provide developers with additional guidance, we’re sharing helpful information in our Help Center, developer guide, Play Academy course, and more. Following our common protocols, we'll begin gradual rollout today and expect to expand access to everyone within a couple of weeks.


How to submit your app information in Play Console

Starting today, you can go to App content in your Play Console and look for a new section called “Data safety.” We recommend that you review the guidance and submit your form early so you can get review feedback and make changes before rejected forms prevent you from publishing new app updates. Developers have told us that early feedback would help them fill out the form correctly before users see the Data safety section in February 2022. The enforcement on apps without approved forms starts April 2022.

We understand that completing the form may require a meaningful amount of work, so we built the product and timeline based on developer feedback to make this process as streamlined as possible. Also, developers have asked for a way to more easily import information when they have multiple apps. Therefore, we’ve added an option for developers to import a pre-populated file.



What your users will see in your app's store listing starting February

[Image: the Data safety section in an app's store listing. Developers can showcase key privacy and security practices at a glance.]

Users will first see the Data safety summary in your store listing. Your app profile will show what data an app collects or shares and highlight safety details, such as whether:

  • The app has security practices, like data encryption in transit
  • The app has committed to follow our Families policy
  • The app has been independently reviewed for conformance with a global security standard
[Image: developers can share what their app collects and why, so users can download with confidence.]
[GIF: location settings, showing how developers can explain how the data is used.]

Users can tap the summary to see more details like:

  • What type of data is collected and shared, such as location, contacts, personal information (e.g., name, email address), financial information, and more
  • How the data is used, such as for app functionality, personalization, and more
  • Whether data collection is optional or required in order to use an app

Users have shared that seeing this information helps them understand how apps may handle their data and makes them feel more confident about certain apps.


What to expect

[Timeline: May '21 pre-announcement; July '21 policy available; October '21 developers can start declaring information in Google Play Console; February '22 users can start seeing the section on Google Play; April '22 deadline for developers to declare information. Timeline dates subject to change.]


You can submit your Data safety form in the Play Console now for early review feedback. You are not required to submit an app update in order to submit your safety profile.

In February 2022, we will launch this feature in the Play store. If your information is approved, your store listing will automatically update with your data safety information. If your information has not been submitted or has been rejected, your users will see “No information available.”


By April 2022, all your apps must have their Data safety section approved. While we want as many apps as possible to be ready for the February 2022 consumer experience, we know that some developers will need more time to assess their apps and coordinate with multiple teams.

By April, all apps must also provide a privacy policy. Previously, only apps that collected personal and sensitive user data needed to share one. Without an approved Data safety section or a privacy policy, your new app submissions or app updates may be rejected. Non-compliant apps may face additional enforcement actions in the future.

Thank you for your continued partnership in building this feature alongside us and in making Google Play a safe and trustworthy platform for everyone.

Announcing the Android Basics in Kotlin Course

Posted by Murat Yener, Developer Advocate


We are always looking for ways to make learning Android development accessible for all. In 2020, we announced the launch of Android Basics in Kotlin, a free self-paced programming course. Since then, over 100,000 beginners have completed their first milestone in the course.

Android Basics in Kotlin teaches people with no programming experience how to build simple Android apps. Along the way, students learn the fundamentals of programming and the basics of the Kotlin programming language. Today, we’re excited to share that the final unit has been released, and the full Android Basics in Kotlin course is now available.

This course is organized into units, where each unit is made up of a series of pathways. At the end of each pathway, there is a quiz to assess what you’ve learned so far. If you complete the quiz, you earn a badge that can be saved to your Google Developer Profile.

The course is free for anyone to take. Basic computer literacy and basic math skills are recommended prerequisites, along with access to a computer that can run Android Studio. If you’ve never built an app before but want to learn how, check out the Android Basics in Kotlin course.

Compose for Wear OS now in Developer Preview!

Posted by Jeremy Walker, Developer Relations Engineer


At this year’s Google I/O, we announced we are bringing the best of Jetpack Compose to Wear OS. Well, today, Compose for Wear OS is in Developer Preview after a number of successful alpha releases.

Compose simplifies and accelerates UI development, and the same is true of Compose for Wear OS, with built-in support for Material You to help you create beautiful apps with less code.

In addition, what you’ve learned building mobile apps with Jetpack Compose translates directly to the Wear OS version. Just like mobile, you’re welcome to start testing it out right away, and we want to incorporate your feedback into the early iterations of the libraries before the beta release.

This article will review the main composables we've built and point you towards resources to get started using them.

Let's get started!


Dependencies

Most of the Wear-related changes you make will be at the top architectural layers.

[Diagram: the Compose layers Material, Foundation, UI, and Runtime, with the top two circled in red]

That means many of the dependencies you already use with Jetpack Compose don't change when targeting Wear OS. For example, the UI, Runtime, Compiler, and Animation dependencies will remain the same.

However, you will need to use the proper Wear OS Material, Navigation, and Foundation libraries which are different from the libraries you have used before in your mobile app.

Below is a comparison to help clarify the differences:

  • androidx.wear.compose:compose-material instead of androidx.compose.material:material (1)
  • androidx.wear.compose:compose-navigation instead of androidx.navigation:navigation-compose
  • androidx.wear.compose:compose-foundation in addition to androidx.compose.foundation:foundation

(1) Developers can continue to use other material related libraries, like material ripple and material icons extended, with the Wear Compose Material library.


While it's technically possible to use the mobile dependencies on Wear OS, we always recommend using the wear-specific versions for the best experience.

Note: We will be adding more wear composables with future releases. If you feel any are missing, please let us know.


Here's an example build.gradle file:

// Example project in app/build.gradle file
dependencies {
    // Standard Compose dependencies...

    // Wear specific Compose Dependencies
    // Developer Preview starts with Alpha 07, with new releases coming soon.
    def wear_version = "1.0.0-alpha07"
    implementation "androidx.wear.compose:compose-material:$wear_version"
    implementation "androidx.wear.compose:compose-foundation:$wear_version"

    // For navigation within your app...
    implementation "androidx.wear.compose:compose-navigation:$wear_version"

    // Other dependencies...
}

After you've added the right Wear Material, Foundation, and Navigation dependencies, you are ready to get started.


Composables

Let's explore some composables you can start using today.

As a general rule, many of the Wear composables that are equivalent to the mobile versions can use the same code. The code for styling color, typography, and shapes with MaterialTheme is identical to mobile as well.

For example, to create a Wear OS button your code looks like this:

Button(
    modifier = Modifier.size(ButtonDefaults.LargeButtonSize),
    onClick = { /*...*/ },
    enabled = enabledState
) {
    Icon(
        painter = painterResource(id = R.drawable.ic_airplane),
        contentDescription = "phone",
        modifier = Modifier
            .size(24.dp)
            .wrapContentSize(align = Alignment.Center),
    )
}

The code above is very similar to the mobile side, but the library creates a Wear OS optimized version of the button, that is, a button circular in shape and sized by ButtonDefaults to follow Wear OS Material Guidelines.

[Image: the resulting button, a blue circle with an airplane icon in the middle]

The library includes Wear OS versions of many familiar composables, as well as many new composables that improve the Wear experience.

For example, we offer a Wear-optimized composable for lists, ScalingLazyColumn, which extends LazyColumn and adds scaling and transparency changes to better support round surfaces. You can see in the app below that the content shrinks and fades at the top and bottom of the screen to help readability:

[GIF: a watch app scrolling through a calendar; content shrinks and fades at the top and bottom of the screen]

If you look at the code, you can see it's the same as LazyColumn, just with a different name.

val scalingLazyListState: ScalingLazyListState = 
    rememberScalingLazyListState()

ScalingLazyColumn(
    modifier = Modifier.fillMaxSize(),
    verticalArrangement = Arrangement.spacedBy(6.dp),
    state = scalingLazyListState,
) {
    items(messageList.size) { index ->
        Card(/*...*/) { /*...*/ }
    }

    item {
        Card(/*...*/) { /*...*/ }
    }
}

Swipe to Dismiss

Wear has its own version of Box, SwipeToDismissBox, which adds support for the swipe-to-dismiss gesture (similar to the back button/gesture on mobile) out of the box.

Here's a simple example of the code:

// Requires state (different from Box).
val state = rememberSwipeToDismissBoxState()

SwipeToDismissBox(
    modifier = Modifier.fillMaxSize(),
    state = state
) { swipeBackgroundScreen ->

    // Can render a different composable in the background during swipe.
    if (swipeBackgroundScreen) {
        /* ... */
        Text(text = "Swiping Back Content")
    } else {
        /* ... */
        Text(text = "Main Content")
    }
}

Here's a more complex example of the behavior:

[GIF: swipe-to-dismiss behavior on a calendar agenda screen]

Navigation

Finally, we also offer a navigation composable, SwipeDismissableNavHost, which works just like NavHost on mobile but also supports the swipe-to-dismiss gesture out of the box (it actually uses SwipeToDismissBox under the hood).

Here's an example, sketched with the developer preview API (the "list" and "detail" routes are illustrative):
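
@Composable
fun NavApp() {
    val navController = rememberSwipeDismissableNavController()

    SwipeDismissableNavHost(
        navController = navController,
        startDestination = "list"
    ) {
        composable("list") {
            // Tapping the button navigates forward...
            Button(onClick = { navController.navigate("detail") }) {
                Text("Open detail")
            }
        }
        composable("detail") {
            // ...and swiping from left to right dismisses this screen and
            // pops the back stack automatically.
            Text("Detail")
        }
    }
}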

[GIF: an alarm app navigating between screens with swipe-to-dismiss]

Scaffold

Scaffold provides a layout structure to help you arrange screens in common patterns, just like mobile, but instead of an App Bar, FAB, or Drawer, it supports Wear specific layouts with top-level components like Time, Vignette, and the scroll/position indicator.

The code is very similar to what you would write on mobile.
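
For illustration, here's a sketch using the developer preview API (the timeText, vignette, and positionIndicator slot names may change before a stable release):

@Composable
fun WearApp() {
    val listState = rememberScalingLazyListState()

    Scaffold(
        timeText = { TimeText() }, // curved time display at the top of the screen
        vignette = { Vignette(vignettePosition = VignettePosition.TopAndBottom) },
        positionIndicator = { PositionIndicator(scalingLazyListState = listState) }
    ) {
        ScalingLazyColumn(state = listState) {
            items(10) { index ->
                Text("Item $index")
            }
        }
    }
}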


Get Started

We're excited to bring Jetpack Compose to Wear OS and make watch development faster and easier. To dive right in and create an app, check out our quick start guide. To see working examples (both simple and complex), have a look at our sample repo.

The Developer Preview is your opportunity to influence the APIs, so please share your feedback here or join the Slack #compose-wear channel and let us know there!

Android 12 is live in AOSP!

Posted by Dave Burke, VP of Engineering

Android 12 logo

Today we’re pushing the source to the Android Open Source Project (AOSP) and officially releasing the latest version of Android. Keep an eye out for Android 12 coming to a device near you starting with Pixel in the next few weeks and Samsung Galaxy, OnePlus, Oppo, Realme, Tecno, Vivo, and Xiaomi devices later this year.

As always, thank you for your feedback during Android 12 Beta! More than 225,000 of you tested our early releases on Pixel and devices from our partners, and you sent us nearly 50,000 issue reports to help improve the quality of the release. We also appreciate the many articles, discussions, surveys, and in-person meetings where you voiced your thoughts, as well as the work you’ve done to make your apps compatible in time for today’s release. Your support and contributions are what make Android such a great platform for everyone.

We’ll also be talking about Android 12 in more detail at this year’s Android Dev Summit, coming up on October 27-28. We’ve just released more information on the event, including a snapshot of the technical Android sessions; read on for more details later in the post.

What’s in Android 12 for developers?

Here’s a look at some of what’s new in Android 12 for developers. Make sure to check out the Android 12 developer site for details on all of the new features.

A new UI for Android

Material You - Android 12 introduces a new design language called Material You, helping you to build more personalized, beautiful apps. To bring all of the latest Material Design 3 updates into your apps, try an alpha version of Material Design Components and watch for support for Jetpack Compose coming soon.


Redesigned widgets - We refreshed app widgets to make them more useful, beautiful, and discoverable. Try them with new interactive controls, responsive layouts for any device, and dynamic colors to create a personalized but consistent look. More here.

Notification UI updates - We also refreshed notification designs to make them more modern and useful. Android 12 also decorates custom notifications with standard affordances to make them consistent with all other notifications. More here.

Stretch overscroll - To make scrolling your app’s content more smooth, Android 12 adds a new “stretch” overscroll effect to all scrolling containers. It’s a natural scroll-stop indicator that’s common across the system and apps. More here.

App launch splash screens - Android 12 also introduces splash screens for all apps. Apps can customize the splash screen in a number of ways to meet their unique branding needs. More here.

Performance

Faster, more efficient system performance - We reduced the CPU time used by core system services by 22% and the use of big cores by 15%. We’ve also improved app startup times and optimized I/O for faster app loading, and for database queries we’ve improved CursorWindow by as much as 49x for large windows.

Optimized foreground services - To provide a better experience for users, Android 12 prevents apps from starting foreground services while in the background. Apps can use a new expedited job in JobScheduler instead. More here.
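
For example, a sketch of scheduling an expedited job (MyJobService and JOB_ID are hypothetical names):

// Instead of starting a foreground service from the background, schedule an
// expedited job, which can run promptly even while the app is in the background.
val jobInfo = JobInfo.Builder(JOB_ID, ComponentName(context, MyJobService::class.java))
    .setExpedited(true)
    .build()
context.getSystemService(JobScheduler::class.java).schedule(jobInfo)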

More responsive notifications - Android 12’s restriction on notification trampolines helps reduce latency for apps started from a notification. For example, the Google Photos app now launches 34% faster after moving away from notification trampolines. More here.

Performance class - Performance Class is a set of device capabilities that together support demanding use-cases and higher quality content on Android 12 devices. Apps can check for a device’s performance class at runtime and take full advantage of the device’s performance. More here.
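
For example, a minimal runtime check could look like this:

// MEDIA_PERFORMANCE_CLASS is 0 on devices that don't declare a performance class.
if (Build.VERSION.MEDIA_PERFORMANCE_CLASS >= Build.VERSION_CODES.S) {
    // The device meets the Android 12 performance class bar:
    // enable more demanding features, such as higher-resolution video.
}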

Faster machine learning - Android 12 helps you make the most of ML accelerators and always get the best possible performance through the Neural Networks API. ML accelerator drivers are also now updatable outside of platform releases, through Google Play services, so you can take advantage of the latest drivers on any compatible device.

Privacy


Privacy Dashboard - A new dashboard in Settings gives users better visibility over when your app accesses microphone, camera, and location data. More here.

Approximate location - Users have even more control over their location data, and they can grant your app access to approximate location even if it requests precise location. More here.

Microphone and camera indicators - Indicators in the status bar let users know when your app is using the device camera or microphone. More here.

Microphone and camera toggles - On supported devices, new toggles in Quick Settings make it easy for users to instantly disable app access to the microphone and camera. More here.

Nearby device permissions - Your app can use new permissions to scan for and pair with nearby devices without needing location permission. More here.

Better user experience tools

Rich content insertion - A new unified API lets you receive rich content in your UI from any source: clipboard, keyboard, or drag-and-drop. For back-compatibility, we’ve added the unified API to AndroidX. More here.

Support for rounded screen corners - Many modern devices use screens with rounded corners. To deliver a great UX on these devices, you can use new APIs to query for corner details and then manage your UI elements as needed. More here.
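
For example, a sketch of the corner query on API 31 and above:

// Ask for the radius and center of the top-left corner, if the screen has one.
val corner = view.rootWindowInsets?.getRoundedCorner(RoundedCorner.POSITION_TOP_LEFT)
val radiusPx = corner?.radius ?: 0 // 0 on square-cornered screens
// Use radiusPx to inset or reposition UI elements near that corner.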


AVIF image support - Android 12 adds platform support for AV1 Image File Format (AVIF). AVIF takes advantage of the intra-frame encoded content from video compression to dramatically improve image quality for the same file size when compared to older image formats, such as JPEG.

Compatible media transcoding - For video, HEVC format offers significant improvements in quality and compression and we recommend that all apps support it. For apps that can’t, the compatible media transcoding feature lets your app request files in AVC and have the system handle the transcoding. More here.

Easier blurs, color filters and other effects - New APIs make it easier to apply common graphics effects to your Views and rendering hierarchies. You can use RenderEffect to apply blurs, color filters, and more to RenderNodes or Views. You can also create a frosted glass effect for your window background using the new Window.setBackgroundBlurRadius() API, or use blurBehindRadius to blur all of the content behind a window.
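
For example, blurring a single View takes a couple of lines on API 31 and above (myImageView stands in for any View):

// Blur the view's contents with a 16px radius in each direction.
val blur = RenderEffect.createBlurEffect(16f, 16f, Shader.TileMode.CLAMP)
myImageView.setRenderEffect(blur)

// Pass null later to remove the effect again.
myImageView.setRenderEffect(null)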

Enhanced haptic experiences - Android 12 expands the tools you can use to create informative haptic feedback for UI events, immersive and delightful effects for gaming, and attentional haptics for productivity. More here.

New camera effects and sensor capabilities - New vendor extensions let your apps take advantage of the custom camera effects built by device manufacturers—bokeh, HDR, night mode, and others. You can also use new APIs to take full advantage of ultra high-resolution camera sensors that use Quad / Nona Bayer patterns. More here.

Better debugging for native crashes - Android 12 gives you more actionable diagnostic information to make debugging NDK-related crashes easier. Apps can now access detailed crash dump files called tombstones through the App Exit Reasons API.
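
A sketch of reading those tombstones (the exit-reasons API itself dates back to Android 11; tombstone traces are attached for native crashes on Android 12):

val am = context.getSystemService(ActivityManager::class.java)
// A null package name means the caller's own package; fetch up to 5 recent exits.
for (exitInfo in am.getHistoricalProcessExitReasons(null, 0, 5)) {
    if (exitInfo.reason == ApplicationExitInfo.REASON_CRASH_NATIVE) {
        // When present, the tombstone is attached as a trace input stream.
        exitInfo.traceInputStream?.use { stream ->
            // Parse the tombstone from the stream.
        }
    }
}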

Android 12 for Games - With Game Mode APIs, you can react to the players' performance profile selection for your game - like better battery life for a long commute, or performance mode to get peak frame rates. Play as you download will allow game assets to be fetched in the background during install, getting your players into gameplay faster.

Get your apps ready for Android 12

Now with today’s public release of Android 12, we’re asking all Android developers to finish your compatibility testing and publish your updates as soon as possible, to give your users a smooth transition to Android 12.

To test your app for compatibility, just install it on a device running Android 12 and work through the app flows looking for any functional or UI issues. Review the Android 12 behavior changes for all apps to focus on areas where your app could be affected. Here are some of the top changes to test:

  • Privacy dashboard — Use this new dashboard in Settings to check your app’s accesses to microphone, location, and other sensitive data, and consider providing details to users on the reasons. More here.
  • Microphone & camera indicators — Android 12 shows an indicator in the status bar when an app is using the camera or microphone. Make sure this doesn’t affect your app’s UI. More here.
  • Microphone & camera toggles — Try using the new toggles in Quick Settings to disable microphone and camera access for apps and ensure that your app handles the change properly. More here.
  • Clipboard read notification — Watch for toast notifications when your app reads data from the clipboard unexpectedly. Remove unintended accesses. More here.
  • Stretch overscroll — Try your scrolling content with the new “stretch” overscroll effect and ensure that it displays as expected. More here.
  • App splash screens — Launch your app from various flows to test the new splash screen animation. If necessary, you can customize it. More here.
  • Keygen changes — Several deprecated BouncyCastle cryptographic algorithms are removed in favor of Conscrypt versions. If your app uses a 512-bit key with AES, you’ll need to use one of the standard sizes supported by Conscrypt. More here.
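
For the keygen change in the last item, a minimal sketch of generating an AES key with a supported size:

// Conscrypt accepts the standard AES key sizes: 128, 192, or 256 bits.
val keyGenerator = KeyGenerator.getInstance("AES")
keyGenerator.init(256) // a 512-bit key would now be rejected
val secretKey = keyGenerator.generateKey()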

Remember to test the libraries and SDKs in your app for compatibility. If you find any SDK issues, try updating to the latest version of the SDK or reaching out to the developer for help.

Once you’ve published the compatible version of your current app, you can start the process to update your app's targetSdkVersion. Review the behavior changes for Android 12 apps and use the compatibility framework to help detect issues quickly.

Tune in to Android Dev Summit to learn about Android 12 and more!

The #AndroidDevSummit is back! Join us October 27-28 to hear about the latest updates in Android development, including Android 12. This year’s theme is excellent apps, across devices; tune in later this month to learn more about the development tools, APIs and technology to help you be more productive and create better apps that run across billions of devices, including tablets, foldables, wearables, and more.

We’ve just released more information on the event, including a snapshot of the 30+ technical Android sessions; you can take a look at some of those sessions here, and start planning which talks you want to check out. Over the coming weeks, we’ll be asking you to share your top #AskAndroid questions, to be answered live by the team during the event.

The show kicks off at 10 AM PT on October 27 with The Android Show, a 50-minute technical keynote where you’ll hear all the latest news and updates for Android developers. You can learn more and sign up for updates here.