Tag Archives: Android

Build kids app experiences for Wear OS

Posted by John Zoeller – Developer Relations Engineer, and Caroline Vander Wilt – Group Product Manager

New Wear OS features enable ‘standalone’ watches for kids, unlocking new possibilities for Wear OS app developers

In collaboration with Samsung, Wear OS is introducing Galaxy Watch for Kids, a new experience that lets kids explore while staying connected with their families from their smartwatch, no phone necessary. This launch unlocks new opportunities for Wear OS developers to reach younger audiences.

Galaxy Watch for Kids is rolling out to Galaxy Watch7 LTE models, with features including:

    • No phone ownership required: This experience enables the watch and its associated apps to operate on a fully standalone basis using LTE and, when available, Wi-Fi connectivity. This includes calling, texting, games, and more.
    • Selection of kid-friendly apps: From gaming to health, kids can browse and request installs of Teacher Approved apps and watch faces on Google Play. In addition to approving and blocking apps, parents can also monitor app usage from Google Family Link.
    • Stay in touch with parent-managed contacts: Parents can ensure safer communications by limiting text and calling to approved contacts.
    • Location sharing: Offers peace of mind with location sharing and geofencing notifications when kids leave or arrive at designated areas.
    • School time: Limits watch functionality during scheduled hours of the day, so kids can focus while in school or studying.

Building kids experiences with standalone functionality enables you to reach both standalone and tethered watches for kids. Apps like MathTango have already created great Wear OS experiences for kids. Check out the video below to learn how they built a rich and engaging Wear OS app.

Our new kids-focused design and content principles and developer guidance are also available today. Check out some of the highlights in the next section.

New principles and guidelines for development

We've created new design principles and guidelines to help developers take advantage of this opportunity to build and improve apps and watch faces for kids.

Design principle: Active and fun

Build engaging healthy experiences for children by including activity-based features.

A great example of this is the Odd Squad Time Unit app from PBS KIDS that encourages children to get up and be physically active. By using the on-device sensors and power-efficient platform APIs, the app is able to provide a fun experience all day while maintaining the watch's battery life from wake-up to bedtime.

A circular timer display with a hexagonal background 'JUMP!' and '5 SECONDS REMAIN'. A gold hand points to the number 5. A colorful segmented ring surrounds the center of the timer.

Note that while experiences should be catered to kids, they must also follow the Wear OS quality requirements related to the visual experience of your app, especially when crafting touch targets and font sizes.

Content principle: Thoughtfully crafted

Consider adjusting your content to make it not only appropriate, but also consumable and intuitive for younger kids (including those as young as 6). This includes both audio and visual app components.

Tinkercast’s Two Whats?! And a Wow! app uses age-appropriate vocabulary and fun characters to aid in their teaching. It’s a great example of how a developer should account for reading comprehension.

A smartwatch face displays a cartoon bird with a speech bubble that says 'SWIPE TO VIEW YOUR OPTIONS!'. Yellow arrows point left and right with a large letter 'A' between them.

Development guidelines

New Wear OS kids apps must adhere to the Wear OS app quality guidelines, the guidelines for standalone apps, and the new Kids development guide.


Minimize impact on device battery

Minimize events that affect battery life over the course of one session. Kids use watches that provide important safety features for their parents or guardians, which depend on the device having enough battery life. Below are best practices for reducing battery impact.

      DO design for offline use cases so that kids can play without incurring network-related battery costs

      DO minimize tasks that require an internet or GPS connection

      🚫 DO NOT use direct sensor tracking, as this will significantly reduce battery life

      🚫 DO NOT include long-running animations (see the sketch after this list)
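For instance, the last point can often be addressed by replacing an infinite animation with a short, one-shot one. Here is a minimal Compose for Wear OS sketch of that idea; the reward-badge scenario, composable name, and 300 ms duration are illustrative choices of ours, not values from the guidelines.

import androidx.compose.animation.core.animateFloatAsState
import androidx.compose.animation.core.tween
import androidx.compose.foundation.layout.size
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.scale
import androidx.compose.ui.unit.dp
import androidx.wear.compose.material.Button
import androidx.wear.compose.material.Text

@Composable
fun RewardBadge() {
  var unlocked by remember { mutableStateOf(false) }
  // One-shot animation: runs for 300 ms when `unlocked` changes, then stops.
  // Prefer this over rememberInfiniteTransition, which keeps recomposing
  // (and drawing) for as long as the UI is visible.
  val scale by animateFloatAsState(
    targetValue = if (unlocked) 1f else 0f,
    animationSpec = tween(durationMillis = 300),
    label = "badgeScale"
  )
  Button(onClick = { unlocked = true }, modifier = Modifier.size(48.dp)) {
    Text("★", modifier = Modifier.scale(scale))
  }
}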


Choose a development environment

To develop kid-friendly apps and games, you can use Compose for Wear OS, our recommended approach for building UI for Wear OS, as well as Unity for Android.

We recommend Unity for developing games on Wear OS if you’re familiar and comfortable with its workflows and capabilities. However, for games with only a few animations, Compose Animation should be sufficient and is better supported within the Android environment.

Be sure to consider that some Wear OS quality requirements may require custom Unity implementations, such as support for Rotary Input.
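For reference, rotary support in Compose typically comes down to converting rotary events into scroll offsets. Below is a minimal sketch of that pattern for a simple vertically scrolling layout; the composable name and wiring are illustrative, not a prescribed implementation.

import androidx.compose.foundation.focusable
import androidx.compose.foundation.gestures.scrollBy
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.rememberScrollState
import androidx.compose.foundation.verticalScroll
import androidx.compose.runtime.Composable
import androidx.compose.runtime.LaunchedEffect
import androidx.compose.runtime.remember
import androidx.compose.runtime.rememberCoroutineScope
import androidx.compose.ui.Modifier
import androidx.compose.ui.focus.FocusRequester
import androidx.compose.ui.focus.focusRequester
import androidx.compose.ui.input.rotary.onRotaryScrollEvent
import kotlinx.coroutines.launch

@Composable
fun RotaryScrollableColumn(content: @Composable () -> Unit) {
  val scrollState = rememberScrollState()
  val focusRequester = remember { FocusRequester() }
  val scope = rememberCoroutineScope()
  Column(
    modifier = Modifier
      // Translate crown/bezel rotation into vertical scrolling.
      .onRotaryScrollEvent { event ->
        scope.launch { scrollState.scrollBy(event.verticalScrollPixels) }
        true // consume the event
      }
      .focusRequester(focusRequester)
      .focusable()
      .verticalScroll(scrollState)
  ) {
    content()
  }
  // Rotary events are only delivered to the focused node.
  LaunchedEffect(Unit) { focusRequester.requestFocus() }
}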

Originator’s MathTango showcases the flexibility and richness of developing with Unity:

A purple cartoon moose-like character with large antlers is displayed on a round smartwatch face. The name 'ISAAC' is shown below the character, along with the label 'NEW!'. A green arrow is visible in the top left corner of the screen.

Creating Watch Faces

Developing watch faces for kids requires the use of Watch Face Format. Watch faces should adhere to our content and design principles mentioned above, as well as our quality standards, including our ambient mode requirement.

The following examples demonstrate our Content Principle: Appealing. The content is relevant, engaging, and fun for kids, sparking their interest and imagination.

The Crayola Pets Watch Face comes with a great variety of customization options, and demonstrates an informative and pleasant watch face:

A circular watch face shows a cartoon character, the time (3:30), the date (Feb 10), and a battery indicator (89%).

The Marvel Watch Faces (Captain America shown) provide a fun and useful step tracking feature:

A round smartwatch face displays a cartoon Captain America, his shield, and the time (12:30). A step counter shows 650 steps. The Marvel logo is visible.

Kids experience publishing requirements

Developers looking to get started on a new kids experience will need to keep a few things in mind when publishing on the Play Store.

Expand your reach with Wear OS

Get ready to reach a new generation of Wear OS users! We've created all-new guidelines to help you build engaging experiences for kids.

With the Wear for Kids experience, developers can reach an entirely new audience of users and be part of the next generation of learning and enrichment on Wear OS.

Check out all of the new experiences on the Play Store!

Apps adopt Transformer to support more reliable and performant media editing use cases

Posted by Caren Chang – Developer Relations Engineer

The Jetpack Media3 library enables Android developers to build high-quality media apps. As part of the Media3 library, the Transformer module aims to provide easy-to-use, reliable, and performant APIs for transcoding and editing media.

For example, apps can use Transformer to apply editing operations such as trimming a long media file, or applying effects to video tracks. Transformer can also be used to convert media files from one format to another, such as adjusting the resolution or encoding of the media file.
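As a concrete example, a trim-and-transcode export with Transformer might look like the following sketch; the input URI, output path, and listener bodies are placeholders.

import android.content.Context
import android.net.Uri
import androidx.media3.common.MediaItem
import androidx.media3.common.MimeTypes
import androidx.media3.transformer.Composition
import androidx.media3.transformer.ExportException
import androidx.media3.transformer.ExportResult
import androidx.media3.transformer.Transformer

fun trimAndTranscode(context: Context, inputUri: Uri, outputPath: String) {
  // Trim to the first 10 seconds of the source file.
  val mediaItem = MediaItem.Builder()
    .setUri(inputUri)
    .setClippingConfiguration(
      MediaItem.ClippingConfiguration.Builder()
        .setStartPositionMs(0)
        .setEndPositionMs(10_000)
        .build()
    )
    .build()
  val transformer = Transformer.Builder(context)
    // Re-encode the video track as H.265/HEVC during export.
    .setVideoMimeType(MimeTypes.VIDEO_H265)
    .addListener(object : Transformer.Listener {
      override fun onCompleted(composition: Composition, exportResult: ExportResult) {
        // Export succeeded; outputPath now contains the result.
      }
      override fun onError(
        composition: Composition,
        exportResult: ExportResult,
        exportException: ExportException
      ) {
        // Handle the failure.
      }
    })
    .build()
  transformer.start(mediaItem, outputPath)
}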

Developing Transformer APIs

As part of the process to introduce new APIs, our engineering team works closely with Google apps such as Google Photos to test and experiment with the new APIs. Experimental flags are first introduced to enable performance improvements. Once the results are successful and conclusive, these experimental features are then built into the default API implementations or promoted to public APIs for all apps to use. This approach allows Transformer APIs to be tested on a wide variety of devices.

Transformer Adoption in apps

Apps that have been using Transformer in production observed in-app performance improvements, less code to maintain, and better developer experience. Let’s take a closer look at how Transformer has helped apps for their media-editing use cases.

One of users’ favorite features in Google Photos is memory sharing, where snippets of your life story, curated and presented as Google Photos memories, can now be shared as videos to social media and chat apps. However, the process of combining media items to create a video on device is resource intensive and subject to significant latency, especially on low-end devices. To reduce this latency and enable the feature on a wider range of devices, Photos adopted Transformer in their media creation pipeline. Along with other improvements made, the team found that Transformer played a part in reducing the median user latency for creating memory videos by 41% on high-end devices and 27% on mid-range devices.

The Photos app also enables users to perform media edits such as trimming or rotating a video. By adopting Transformer APIs for rotating videos, median save latency was reduced by 79% for applicable videos. The app also adopted Transformer’s API for optimizing video trimming, and observed video save latency decrease by 64%.

1 Second Everyday is a personal video journal that helps you create captivating montages and timelapses. One of the app’s main user journeys is sequentially combining short videos to create a meaningful movie. After adopting Transformer for this use case, the app observed that video encoding performance was up to 5x faster, allowing them to explore enabling 4k and HDR support. The Transformer adoption also helped decrease relevant code by 30%, making it easier for the developers to maintain the code base.

BandLab is the next-generation music creation platform used by millions around the world to make and share their music. The app originally used MediaCodecs for their video creation use cases, but found that the low-level implementation resulted in native crashes that were difficult to debug. After researching more on Transformer, the team made the decision to migrate from MediaCodecs to Transformer. Overall, the migration took the team only 12 working days, and it resulted in a simpler codebase and a more maintainable pipeline for their media creation use cases. In addition, the previously observed native crashes stopped occurring entirely.

What’s next for Transformer?

We’re excited to see Transformer’s adoption in the developer community, and will continue adding new features to support more media-editing use cases for the Android ecosystem including:

    • Better support for previewing media edits
    • Improving the performance and developer experience for video frame extraction
    • Easier integration with AI effects
    • and much more

Keep an eye on what we’re working on in the Media3 GitHub, and file feature requests to help shape the future of Transformer!

Android Studio Ladybug Feature Drop is Stable!

Posted by Steven Jenkins – Product Manager, Android Studio

Today, we are thrilled to announce the stable release of Android Studio Ladybug 🐞 Feature Drop (2024.2.2)!

Accelerate your productivity with Gemini in Android Studio, Animation Preview support for Wear Tiles, App Links Assistant and much more. All of these new features are designed to help you build high-quality Android apps faster.

Read on to learn more about all the updates, quality improvements, and new features across your key workflows in Android Studio Ladybug Feature Drop, and download the latest stable version today to try them out!

Android Studio Ladybug Feature Drop

Gemini in Android Studio

Gemini Code Transforms

Gemini Code Transforms can help you modify, optimize, or add code to your app with AI assistance. Simply right-click in your code editor and select "Gemini > Generate code" or highlight code and select "Gemini > Transform selected code." You can also use the keyboard shortcut Ctrl+\ (⌘+\ on macOS) to bring up the Gemini prompt. Describe the changes you want to make to your code, and Gemini will suggest a code diff, allowing you to easily review and accept only the suggestions you want.

With Gemini Code Transforms, you can simplify complex code, perform specific code transformations, or even generate new functions. You can also refine the suggested code to iterate on the code suggestions with Gemini. It's an AI coding assistant right in your editor, helping you write better code more efficiently.

Android Studio displays a code editor window open to Gemini Code Transform
Gemini Code Transform

Rename

Gemini in Android Studio enhances your workflow with intelligent assistance for common tasks. When renaming a single variable, class, or method from the code editor, the "Refactor > Rename" action uses Gemini to suggest contextually appropriate names, making it smoother and more efficient to refactor names as you’re coding in the editor.

A code editor window open to Gemini renaming a variable in Android Studio
Rename

Rethink

For larger renaming refactors, Gemini can "Rethink variable names" across your whole file. This feature analyzes your code and suggests more intuitive and descriptive names for variables and methods, improving readability and maintainability.

A code editor window open to Gemini analyzing code and suggesting more descriptive names for variables in Android Studio
Rethink

Commit Message

Gemini now assists with commit messages. When committing changes to version control, it analyzes your code modifications and suggests a detailed commit message.

A code editor window open to Gemini analyzing code and suggesting a detailed commit message in Android Studio
Commit Message

Generate Documentation

Gemini in Android Studio makes documenting your code easier than ever. To generate clear and concise documentation, select a code snippet, right-click in the editor and choose "Gemini > Document Function" (or "Document Class" or "Document Property", depending on the context). Gemini will generate a draft that you can then refine and perfect before accepting the changes. This streamlined process helps you create informative documentation quickly and efficiently.

A code editor window open to Gemini adding documentation to a code snippet in Android Studio
Generate Documentation

Debug

Animation Preview support for Wear OS Tiles

Animation Preview support for Wear OS Tiles helps you visualize and debug tile animations with ease. It provides a real-time view of your animations, allowing you to preview them, control playback with options like play, pause, and speed adjustment, and inspect key properties such as initial/end states and animation curves. You can even dynamically modify animation code and instantly observe the results within the inspector, streamlining the debugging and refinement process.

A code editor window open to animation preview support in Android Studio
Animation Preview support for Wear OS Tiles

Wear Health Services

The Wear Health Services feature in Android Studio simplifies the process of testing health and fitness apps by enabling Wear Health Services within the emulator. You can now easily customize various parameters for a given exercise such as heart rate, distance, and speed without needing a physical device or performing the activity itself. This streamlines the development and testing workflow, allowing for faster iteration and more efficient debugging of health-related features.

A code editor window open to Wear Health Services in Android Studio Emulator
Wear Health Services

Optimize

App Links Assistant

App Links Assistant simplifies the process of implementing app links by generating valid JSON syntax that resolves broken deep links for your app. You can review the JSON file and then upload it to your website, resolving issues quickly. This eliminates the manual creation of the JSON file, saving you time and effort. The tool also allows you to compare existing JSON files with newly generated ones to easily identify any discrepancies.
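For context, the file in question is the standard Digital Asset Links declaration served from /.well-known/assetlinks.json on your website. A typical entry looks roughly like the sketch below; the package name and certificate fingerprint are placeholders.

[
  {
    "relation": ["delegate_permission/common.handle_all_urls"],
    "target": {
      "namespace": "android_app",
      "package_name": "com.example.myapp",
      "sha256_cert_fingerprints": ["AA:BB:CC:...:FF"]
    }
  }
]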

A code editor window open to App Links Assistant in Android Studio
App Links Assistant

Google Play SDK Insights Integration

Android Studio now provides enhanced lint warnings for public SDKs from the Google Play SDK Index and the Google Play SDK Console, helping you identify and address potential issues. These warnings alert you if an SDK is outdated, violates Google Play policies, or has known security vulnerabilities. Furthermore, Android Studio provides helpful quick fixes and recommended version ranges whenever possible, making it easier to update your dependencies and keeping your app more secure and compliant.

Android Studio displays a code editor window open to a Gradle build file. The IDE warns that an outdated Firebase authentication library is being used, preventing release to Google Play Console.
Google Play SDK Insights Integration

Quality improvements

Beyond new features, we also continued to improve the overall quality and stability of Android Studio. In fact, the Android Studio team addressed over 770 bugs during the Ladybug Feature Drop development cycle.

IntelliJ platform update

Android Studio Ladybug Feature Drop (2024.2.2) includes the IntelliJ 2024.2 platform release, which has many new features such as more intuitive full line code completion suggestions, a preview in the Search Everywhere dialog, and improved log management for the Java** and Kotlin programming languages.

See the full IntelliJ 2024.2 release notes.

Summary

To recap, Android Studio Ladybug Feature Drop includes the following enhancements and features:

Gemini in Android Studio

    • Gemini Code Transforms
    • Rename
    • Rethink
    • Commit Message
    • Generate Documentation

Debug

    • Animation Preview support for Wear OS Tiles
    • Wear Health Services

Optimize

    • App Links Assistant
    • Google Play SDK Insights Integration

Quality Improvements

    • 770+ bugs addressed

IntelliJ Platform Update

    • More intuitive full line code completion suggestions
    • Preview in the Search Everywhere dialog
    • Improved log management for Java and Kotlin programming languages

Getting Started

Ready for next-level Android development? Download Android Studio Ladybug Feature Drop and unlock these cutting-edge features today. As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!


**Java is a trademark or registered trademark of Oracle and/or its affiliates.

Performance Class helps Google Maps deliver premium experiences

Posted by Nevin Mital - Developer Relations Engineer, Android Media

The Android ecosystem features a diverse range of devices, and it can be difficult to build experiences that take advantage of new or premium hardware features while still working well for users on all devices. With Android 12, we introduced the Media Performance Class (MPC) standard to help developers better understand a device’s capabilities and identify high-performing devices. For a refresher on what MPC is, please see our last blog post, Using performance class to optimize your user experience, or check out the Performance Class documentation.

Earlier this year, we published the first stable release of the Jetpack Core Performance library as the recommended solution for more reliably obtaining a device’s MPC level. In particular, this library introduces the PlayServicesDevicePerformance class, an API that queries Google Play Services to get the most up-to-date MPC level for the current device and build. I’ll get into the technical details further down, but let’s start by taking a look at how Google Maps was able to tailor a feature launch to best fit each device with MPC.

Performance Class unblocks premium experience launch for Google Maps

Google Maps recently took advantage of the expanded device coverage enabled by the Play Services module to unblock a feature launch. The team wanted to update their UI by increasing the transparency of some layers, which meant rendering more of the map, but they had to stop the rollout due to latency increases on many devices, especially toward the low end. To resolve this, the Maps team started by slicing an existing key metric, “seconds to UI item visibility”, by MPC level, which revealed that while all devices showed a small increase in this latency, devices without an MPC level had the largest increase.

A bar graph displays A/B test results for Seconds to UI item visibility, comparing control results with those using increased transparency across different Media Performance Class Levels.  A green horizontal line and text indicate the updated experience shipped to devices qualifying for MPC. A vertical green dotted line separates results for devices without a specific MPC level, which kept the previous UI.

With these results in hand, Google Maps started their rollout again, but this time only launching the feature on devices that report an MPC level. As devices continue to get updated and meet the bar for MPC, the updated Google Maps UI will be available to them as well.

The new Play Services module

MPC level requirements are defined in the Android Compatibility Definition Document (CDD), then devices and Android builds are validated against these requirements by the Android Compatibility Test Suite (CTS). The Play Services module of the Jetpack Core Performance library leverages these test results to continually update a device’s reported MPC level without any additional effort on your end. This also means that you’ll immediately have access to the MPC level for new device launches without needing to acquire and test each device yourself, since it already passed CTS. If the MPC level is not available from Google Play Services, the library will fall back to the MPC level declared by the OEM as a build constant.

A flowchart depicts the process of determining Performance Class levels for Android devices, involving manufacturers, CTS tests, a Grader, the Play Services module, and the CDD.

As of writing, more than 190M in-market devices covering over 500 models across 40+ brands report an MPC level. This coverage will continue to grow over time as older devices on Android 11 and up are updated to newer builds.

Using the Core Performance library

To use Jetpack Core Performance, start by adding a dependency for the relevant modules in your Gradle configuration, and create an instance of DevicePerformance. Initializing a DevicePerformance should only happen once in your app, as early as possible, for example, in the onCreate() lifecycle event of your Application. In this example, we’ll use the Google Play services implementation of DevicePerformance.

// Implementation of Jetpack Core library.
implementation("androidx.core:core-ktx:1.12.0")
// Enable APIs to query for device-reported performance class.
implementation("androidx.core:core-performance:1.0.0")
// Enable APIs to query Google Play Services for performance class.
implementation("androidx.core:core-performance-play-services:1.0.0")

import android.app.Application
import androidx.core.performance.DevicePerformance
import androidx.core.performance.play.services.PlayServicesDevicePerformance

class MyApplication : Application() {
  lateinit var devicePerformance: DevicePerformance

  override fun onCreate() {
    // Use a class derived from the DevicePerformance interface
    devicePerformance = PlayServicesDevicePerformance(applicationContext)
  }
}

Then, later in your app when you want to retrieve the device’s MPC level, you can call getMediaPerformanceClass():

import android.app.Activity
import android.os.Build
import android.os.Bundle
import androidx.core.performance.DevicePerformance

class MyActivity : Activity() {
  private lateinit var devicePerformance: DevicePerformance
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // Note: Good app architecture is to use a dependency framework. See
    // https://developer.android.com/training/dependency-injection for more
    // information.
    devicePerformance = (application as MyApplication).devicePerformance
  }

  override fun onResume() {
    super.onResume()
    when {
      devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> {
        // MPC level 34 and later.
        // Provide the most premium experience for the highest performing devices.
      }
      devicePerformance.mediaPerformanceClass == Build.VERSION_CODES.TIRAMISU -> {
        // MPC level 33.
        // Provide a high quality experience.
      }
      else -> {
        // MPC level 31, 30, or undefined.
        // Remove extras to keep experience functional.
      }
    }
  }
}

Strategies for using Performance Class

MPC is intended to identify high-end devices, so you can expect to see MPC levels for the top devices from each year, which are the devices you’re likely to want to support for the longest time. For example, the Pixel 9 Pro was released with Android 14 and reports an MPC level of 34, the latest level definition at the time of its launch.

You should use MPC as a complement to any existing device clustering solutions you already use, such as querying a device’s static specs or manually blocklisting problematic devices. An area where MPC can be particularly helpful is new device launches: because a new device has already passed CTS, its MPC level is available right from the start, so you can gauge its capabilities without needing to acquire the hardware or manually test each device yourself.

A great first step to get involved is to include MPC levels in your telemetry. This can help you identify patterns in error reports or generally get a better sense of the devices your user base uses if you segment key metrics by MPC level. From there, you might consider using MPC as a dimension in your experimentation pipeline, for example by setting up A/B testing groups based on MPC level, or by starting a feature rollout with the highest MPC level and working your way down. As discussed previously, this is the approach that Google Maps took.

You could further use MPC to tune a user-facing feature, for example by adjusting the number of concurrent video playbacks your app attempts based on the MPC level’s concurrent codec guarantees. However, make sure to still query a device’s runtime capabilities when using this approach, as they may differ depending on the environment and state the device is in.
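As a sketch of that idea, you might derive a starting point from the MPC level and then cap it with what the codec reports at runtime. The per-level stream counts below are illustrative choices of ours, not guarantees taken from the CDD.

import android.media.MediaCodecList
import android.os.Build
import androidx.core.performance.DevicePerformance

fun maxConcurrentPlaybacks(devicePerformance: DevicePerformance): Int {
  // Starting point derived from the device's MPC level (illustrative values).
  val fromMpc = when {
    devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> 4
    devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.TIRAMISU -> 3
    else -> 1
  }
  // Runtime capability check: how many concurrent instances does the first
  // AVC decoder on this device report it can support right now?
  val avcDecoder = MediaCodecList(MediaCodecList.REGULAR_CODECS).codecInfos
    .firstOrNull { !it.isEncoder && it.supportedTypes.contains("video/avc") }
  val fromRuntime = avcDecoder
    ?.getCapabilitiesForType("video/avc")
    ?.maxSupportedInstances
    ?: 1
  return minOf(fromMpc, fromRuntime)
}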

Get in touch!

If MPC sounds like it could be useful for your app, please give it a try! You can get started by taking a look at our sample code or documentation. We welcome you to share any questions or feedback you have in this short form.


This blog post is a part of Camera and Media Spotlight Week. We're providing resources – blog posts, videos, sample code, and more – all designed to help you uplevel the media experiences in your app.

To learn more about what Spotlight Week has to offer and how it can benefit you, be sure to read our overview blog post.

Spotlight Week: Android Camera and Media

Posted by Caren Chang- Android Developer Relations Engineer

Android offers Camera and Media APIs to help you build apps that can capture, edit, share, and play media. To help you make Android camera and media experiences even more delightful for your users, this week we are kicking off Camera and Media Spotlight Week.

This Spotlight Week will provide resources—blog posts, videos, sample code, and more—all designed to help you uplevel the media experiences in your app. Check out highlights from the latest releases in Camera and Media APIs, including better Jetpack Compose support in CameraX, motion photo support in Media3 Transformer, simpler ExoPlayer setup, and much more! We’ll also bring in developers from the community to talk about their experiences building Android camera and media apps.


Here’s what we’re covering during Camera and Media Spotlight week:

What’s new in camera and media

Tuesday, January 7

Check out what’s new in the latest CameraX and Media3 releases, including how to get started with building Camera apps with Compose.

Creating delightful and premium experiences

Wednesday, January 8

Building delightful and premium experiences for your users is what can help your app really stand out. Learn about different ways to achieve this, such as utilizing the Media Performance Class or enabling HDR video capture in your app. Learn from developers, such as how Google Drive enabled Ultra HDR images in their Android app, and how Instagram improved the in-app image capture experience by implementing Night Mode.

Adaptive for camera and media, for large screens and now XR!

Thursday, January 9

Thinking adaptive is important, so your app works just as well on phones as it does on large screens, like foldables, tablets, ChromeOS, cars, and the new Android XR platform! On Thursday, we’ll be diving into the media experience on large screen devices, and how you can build in a smooth tabletop mode for your camera applications. Prepare your apps for XR devices by considering Spatial Audio and Video.

Media creation

Friday, January 10

Capturing, editing, and processing media content are fundamental features of the Android ecosystem. Learn about how Media3’s Transformer module can help your app’s media processing use cases, and see case studies of apps that are using Transformer in production. Listen in to how the 1 Second Everyday Android app approaches media use cases, and check out a new API that allows apps to capture concurrent camera streams. Learn from Android Google Developer Tom Colvin how he experimented with building an AI-powered camera app.


These are just some of the things to think about when building camera and media experiences in your app. Keep checking this blog post for updates; we’ll be adding links and more throughout the week.

Media3 1.5.0 — what’s new?

Posted by Kristina Simakova – Engineering Manager

This article is cross-published on Medium

Media3 1.5.0 is now available!

Transformer now supports motion photos and faster image encoding. We’ve also simplified the setup for DefaultPreloadManager and ExoPlayer, making it easier to use. But that’s not all! We’ve included a new IAMF decoder, a Kotlin listener extension, and easier Player customization through delegation.

To learn more about all new APIs and bug fixes, check out the full release notes.

Transformer improvements

Motion photo support

Transformer now supports exporting motion photos. The motion photo’s image is exported if the corresponding MediaItem’s image duration is set (see MediaItem.Builder().setImageDurationMs()). Otherwise, the motion photo’s video is exported. Note that the EditedMediaItem’s duration should not be set in either case, as it will automatically be set to the corresponding MediaItem’s image duration.
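A minimal sketch of selecting the image export path; the input Uri is a placeholder.

import androidx.media3.common.MediaItem
import androidx.media3.transformer.EditedMediaItem

// Setting an image duration selects the motion photo's *image* for export.
val motionPhotoItem = MediaItem.Builder()
    .setUri(motionPhotoUri) // placeholder for your motion photo's Uri
    .setImageDurationMs(2_000)
    .build()
// Leave the duration unset here; it is derived from the MediaItem above.
val editedItem = EditedMediaItem.Builder(motionPhotoItem).build()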

Faster image encoding

This release accelerates image-to-video encoding, thanks to optimizations in DefaultVideoFrameProcessor.queueInputBitmap(). DefaultVideoFrameProcessor now treats the Bitmap given to queueInputBitmap() as immutable. The GL pipeline will resample and color-convert the input Bitmap only once. As a result, Transformer operations that take large (e.g. 12 megapixels) images as input execute faster.

AudioEncoderSettings

Similar to VideoEncoderSettings, Transformer now supports AudioEncoderSettings which can be used to set the desired encoding profile and bitrate.
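Here is a minimal sketch, assuming the settings are wired up through DefaultEncoderFactory in the same way as VideoEncoderSettings; the bitrate is an illustrative value.

import androidx.media3.transformer.AudioEncoderSettings
import androidx.media3.transformer.DefaultEncoderFactory
import androidx.media3.transformer.Transformer

val audioEncoderSettings = AudioEncoderSettings.Builder()
    .setBitrate(131_072) // target roughly 128 kbps for the audio track
    .build()
val transformer = Transformer.Builder(context)
    .setEncoderFactory(
        DefaultEncoderFactory.Builder(context)
            .setRequestedAudioEncoderSettings(audioEncoderSettings)
            .build()
    )
    .build()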

Edit list support

Transformer now shifts the first video frame to start from 0. This fixes A/V sync issues in some files where an edit list is present.

Unsupported track type logging

This release includes improved logging for unsupported track types, providing more detailed information for troubleshooting and debugging.

Media3 muxer

In a previous release, we added a new muxer library which can be used to create MP4 container files. The media3 muxer offers support for a wide range of audio and video codecs, enabling seamless handling of diverse media formats. This new library also brings advanced features including:

    • B-frame support
    • Fragmented MP4 output
    • Edit list support

The muxer library can be included as a gradle dependency:

implementation ("androidx.media3:media3-muxer:1.5.0")

Media3 muxer with Transformer

To use the media3 muxer with Transformer, set an InAppMuxer.Factory (which internally wraps media3 muxer) as the muxer factory when creating a Transformer:

val transformer = Transformer.Builder(context)
    .setMuxerFactory(InAppMuxer.Factory.Builder().build())
    .build()

Simpler setup for DefaultPreloadManager and ExoPlayer

With Media3 1.5.0, we added DefaultPreloadManager.Builder, which makes it much easier to build the preload components and the player. Previously we asked you to instantiate several required components (RenderersFactory, TrackSelectorFactory, LoadControl, BandwidthMeter and preload / playback Looper) first, and to be careful to correctly share those components when injecting them into the DefaultPreloadManager constructor and the ExoPlayer.Builder. With the new DefaultPreloadManager.Builder this becomes a lot simpler:

    • Build DefaultPreloadManager and ExoPlayer instances with all default components.
val preloadManagerBuilder = DefaultPreloadManager.Builder()
val preloadManager = preloadManagerBuilder.build()
val player = preloadManagerBuilder.buildExoPlayer()

    • Build DefaultPreloadManager and ExoPlayer instances with custom shared components.
val preloadManagerBuilder = DefaultPreloadManager.Builder().setRenderersFactory(customRenderersFactory)
// The resulting preloadManager uses customRenderersFactory
val preloadManager = preloadManagerBuilder.build()
// The resulting player uses customRenderersFactory
val player = preloadManagerBuilder.buildExoPlayer()

    • Build DefaultPreloadManager and ExoPlayer instances while setting custom playback-only configurations on the ExoPlayer instances.
val preloadManagerBuilder = DefaultPreloadManager.Builder()
val preloadManager = preloadManagerBuilder.build()
// Tune the playback-only configurations
val playerBuilder = ExoPlayer.Builder(context).setFooEnabled()
// The resulting player will have playback feature "Foo" enabled
val player = preloadManagerBuilder.buildExoPlayer(playerBuilder)

Preloading the next playlist item

We’ve added the ability to preload the next item in the playlist of ExoPlayer. By default, playlist preloading is disabled but can be enabled by setting the duration that should be preloaded into memory:

player.preloadConfiguration =
    PreloadConfiguration(/* targetPreloadDurationUs= */ 5_000_000L)

With the PreloadConfiguration above, the player tries to preload five seconds of media for the next item in the playlist. Preloading is only started when no media is being loaded that is required for the ongoing playback. This way preloading doesn’t compete for bandwidth with the primary playback.

When enabled, preloading can help minimize join latency when a user skips to the next item before the playback buffer reaches the next item. The first period of the next window is prepared and video, audio and text samples are preloaded into its sample queues. The preloaded period is later queued into the player with preloaded samples immediately available and ready to be fed to the codec for rendering.

Once opted in, playlist preloading can be turned off again by setting PreloadConfiguration.DEFAULT:

player.preloadConfiguration = PreloadConfiguration.DEFAULT

New IAMF decoder and Kotlin listener extension

The 1.5.0 release includes a new media3-decoder-iamf module, which allows playback of IAMF immersive audio tracks in MP4 files. Apps wanting to try this out will need to build the libiamf decoder locally. See the media3 README for full instructions.

implementation ("androidx.media3:media3-decoder-iamf:1.5.0")

This release also includes a new media3-common-ktx module, a home for Kotlin-specific functionality. The first version of this module contains a suspend function that lets the caller listen to Player.Listener.onEvents. This is a building block that’s used by the upcoming media3-ui-compose module (launching with media3 1.6.0) to power a Jetpack Compose playback UI.

implementation ("androidx.media3:media3-common-ktx:1.5.0")

Easier Player customization via delegation

Media3 has provided a ForwardingPlayer implementation since version 1.0.0, and we have previously suggested that apps should use it when they want to customize the way certain Player operations work, by using the decorator pattern. One very common use case is to allow or disallow certain player commands (in order to show/hide certain buttons in a UI). Unfortunately, doing this correctly with ForwardingPlayer is surprisingly hard and error-prone, because you have to consistently override multiple methods and handle the listener as well. The example code demonstrating how fiddly this is would be too long for this blog, so we’ve put it in a gist instead.

In order to make these sorts of customizations easier, 1.5.0 includes a new ForwardingSimpleBasePlayer, which builds on the consistency guarantees provided by SimpleBasePlayer to make it easier to create consistent Player implementations following the decorator pattern. The same command-modifying Player is now much simpler to implement:

class PlayerWithoutSeekToNext(player: Player) : ForwardingSimpleBasePlayer(player) {
  override fun getState(): State {
    val state = super.getState()
    return state
      .buildUpon()
      .setAvailableCommands(
        state.availableCommands.buildUpon().remove(COMMAND_SEEK_TO_NEXT).build()
      )
      .build()
  }

  // We don't need to override handleSeek, because it is guaranteed not to be called for
  // COMMAND_SEEK_TO_NEXT since we've marked that command unavailable.
}
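Using the wrapper is then just a matter of decorating the underlying player before handing it to your UI (assuming an existing ExoPlayer instance, exoPlayer):

val playerForUi: Player = PlayerWithoutSeekToNext(exoPlayer)
// UI components connected to playerForUi will see COMMAND_SEEK_TO_NEXT as
// unavailable and can hide or disable their "next" button accordingly.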

MediaSession: Command button for media items

Command buttons for media items allow a session app to declare commands supported by certain media items, which can then be conveniently displayed and executed by a MediaController or MediaBrowser:

Screenshot: Command buttons for media items in the Media Center of Android Automotive OS.

You'll find the detailed documentation on developer.android.com.

This is the Media3 equivalent of the legacy “custom browse actions” API, with which Media3 is fully interoperable. Unlike the legacy API, command buttons for media items do not require a MediaLibraryService but are a feature of the Media3 MediaSession instead. Hence they are available for MediaController and MediaBrowser in the same way.


If you encounter any issues, have feature requests, or want to share feedback, please let us know using the Media3 issue tracker on GitHub. We look forward to hearing from you!


This blog post is a part of Camera and Media Spotlight Week. We're providing resources – blog posts, videos, sample code, and more – all designed to help you uplevel the media experiences in your app.

To learn more about what Spotlight Week has to offer and how it can benefit you, be sure to read our overview blog post.