
Apps adopt Transformer to support more reliable and performant media editing use cases

Posted by Caren Chang – Developer Relations Engineer

The Jetpack Media3 library enables Android developers to build high-quality media apps. As part of the Media3 library, the Transformer module aims to provide easy-to-use, reliable, and performant APIs for transcoding and editing media.

For example, apps can use Transformer to apply editing operations such as trimming a long media file or applying effects to video tracks. Transformer can also be used to convert media files from one format to another, such as adjusting the resolution or encoding of the media file.
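
To make this concrete, here’s a minimal sketch of a trim-and-export flow with Transformer. The input URI and output file are placeholder assumptions, and completion is reported asynchronously via a Transformer.Listener (not shown here).

// Minimal sketch: keep only the first 10 seconds of a video and export it to
// a new MP4 file. The input URI and output path are placeholders.
val trimmedItem = MediaItem.Builder()
    .setUri("content://media/external/video/media/1") // placeholder input
    .setClippingConfiguration(
        MediaItem.ClippingConfiguration.Builder()
            .setStartPositionMs(0)
            .setEndPositionMs(10_000)
            .build()
    )
    .build()

val transformer = Transformer.Builder(context).build()
transformer.start(trimmedItem, File(context.cacheDir, "trimmed.mp4").absolutePath)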

Developing Transformer APIs

As part of the process to introduce new APIs, our engineering team works closely with Google apps such as Google Photos to test and experiment with the new APIs. Experimental flags are first introduced to enable performance improvements. Once the results are successful and conclusive, these experimental features are then built into the default API implementations or promoted to public APIs for all apps to use. This approach allows Transformer APIs to be tested on a wide variety of devices.

Transformer Adoption in apps

Apps that have been using Transformer in production have observed in-app performance improvements, less code to maintain, and a better developer experience. Let’s take a closer look at how Transformer has helped apps with their media-editing use cases.

One of users’ favorite features in Google Photos is memory sharing, where snippets of your life story that are curated and presented as Google Photos memories can now be shared as videos to social media and chat apps. However, the process of combining media items to create a video on device is resource intensive and subject to significant latency, especially on low-end devices. To reduce this latency and enable the feature on a wider range of devices, Photos adopted Transformer in their media creation pipeline. Along with other improvements made, the team found that Transformer played a part in reducing the median user latency for creating memory videos by 41% on high-end devices and 27% on mid-range devices.

The Photos app also enables users to perform media edits such as trimming or rotating a video. By adopting Transformer APIs for rotating videos, median save latency was reduced by 79% for applicable videos. The app also adopted Transformer’s API for optimizing video trimming, and observed video save latency decrease by 64%.

1 Second Everyday is a personal video journal that helps you create captivating montages and timelapses. One of the app’s main user journeys is sequentially combining short videos to create a meaningful movie. After adopting Transformer for this use case, the app observed that video encoding performance was up to 5x faster, allowing them to explore enabling 4k and HDR support. The Transformer adoption also helped decrease relevant code by 30%, making it easier for the developers to maintain the code base.

BandLab is the next-generation music creation platform used by millions around the world to make and share their music. The app originally used MediaCodec directly for its video creation use cases, but found that the low-level implementation resulted in native crashes that were difficult to debug. After researching Transformer further, the team decided to migrate from MediaCodec to Transformer. The migration took only 12 working days, and it resulted in a simpler codebase and a more maintainable pipeline for their media creation use cases. In addition, the native crashes the app had previously observed stopped occurring entirely.

What’s next for Transformer?

We’re excited to see Transformer’s adoption in the developer community, and will continue adding new features to support more media-editing use cases for the Android ecosystem including:

    • Better support for previewing media edits
    • Improving the performance and developer experience for video frame extraction
    • Easier integration with AI effects
    • and much more

Keep an eye on what we’re working on in the Media3 GitHub repository, and file feature requests to help shape the future of Transformer!

Performance Class helps Google Maps deliver premium experiences

Posted by Nevin Mital - Developer Relations Engineer, Android Media

The Android ecosystem features a diverse range of devices, and it can be difficult to build experiences that take advantage of new or premium hardware features while still working well for users on all devices. With Android 12, we introduced the Media Performance Class (MPC) standard to help developers better understand a device’s capabilities and identify high-performing devices. For a refresher on what MPC is, please see our last blog post, Using performance class to optimize your user experience, or check out the Performance Class documentation.

Earlier this year, we published the first stable release of the Jetpack Core Performance library as the recommended solution for more reliably obtaining a device’s MPC level. In particular, this library introduces the PlayServicesDevicePerformance class, an API that queries Google Play Services to get the most up-to-date MPC level for the current device and build. I’ll get into the technical details further down, but let’s start by taking a look at how Google Maps was able to tailor a feature launch to best fit each device with MPC.

Performance Class unblocks premium experience launch for Google Maps

Google Maps recently took advantage of the expanded device coverage enabled by the Play Services module to unblock a feature launch. Google Maps wanted to update their UI by increasing the transparency of some layers, which meant they would need to render more of the map, and they found they had to stop the rollout due to latency increases on many devices, especially toward the low end. To resolve this, the Maps team started by slicing an existing key metric, “seconds to UI item visibility”, by MPC level, which revealed that while all devices saw a small increase in this latency, devices without an MPC level had the largest increase.

Chart: A/B test results for “seconds to UI item visibility”, comparing the control with the increased-transparency UI across Media Performance Class levels. The updated experience shipped only to devices that qualify for an MPC level; devices without an MPC level kept the previous UI.

With these results in hand, Google Maps started their rollout again, but this time only launching the feature on devices that report an MPC level. As devices continue to get updated and meet the bar for MPC, the updated Google Maps UI will be available to them as well.

The new Play Services module

MPC level requirements are defined in the Android Compatibility Definition Document (CDD), then devices and Android builds are validated against these requirements by the Android Compatibility Test Suite (CTS). The Play Services module of the Jetpack Core Performance library leverages these test results to continually update a device’s reported MPC level without any additional effort on your end. This also means that you’ll immediately have access to the MPC level for new device launches without needing to acquire and test each device yourself, since it already passed CTS. If the MPC level is not available from Google Play Services, the library will fall back to the MPC level declared by the OEM as a build constant.

Diagram: How a device’s Performance Class level is determined, involving manufacturers, CTS tests, a Grader, the Play Services module, and the CDD.

As of writing, more than 190M in-market devices covering over 500 models across 40+ brands report an MPC level. This coverage will continue to grow as older devices running Android 11 and up update to newer builds.

Using the Core Performance library

To use Jetpack Core Performance, start by adding a dependency for the relevant modules in your Gradle configuration, and create an instance of DevicePerformance. Initializing a DevicePerformance should only happen once in your app, as early as possible - for example, in the onCreate() lifecycle event of your Application. In this example, we’ll use the Google Play services implementation of DevicePerformance.

// Implementation of Jetpack Core library.
implementation("androidx.core:core-ktx:1.12.0")
// Enable APIs to query for device-reported performance class.
implementation("androidx.core:core-performance:1.0.0")
// Enable APIs to query Google Play Services for performance class.
implementation("androidx.core:core-performance-play-services:1.0.0")

import androidx.core.performance.play.services.PlayServicesDevicePerformance

class MyApplication : Application() {
  lateinit var devicePerformance: DevicePerformance

  override fun onCreate() {
    super.onCreate()
    // Use a class derived from the DevicePerformance interface
    devicePerformance = PlayServicesDevicePerformance(applicationContext)
  }
}

Then, later in your app when you want to retrieve the device’s MPC level, you can call getMediaPerformanceClass():

class MyActivity : Activity() {
  private lateinit var devicePerformance: DevicePerformance
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // Note: Good app architecture is to use a dependency framework. See
    // https://developer.android.com/training/dependency-injection for more
    // information.
    devicePerformance = (application as MyApplication).devicePerformance
  }

  override fun onResume() {
    super.onResume()
    when {
      devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> {
        // MPC level 34 and later.
        // Provide the most premium experience for the highest performing devices.
      }
      devicePerformance.mediaPerformanceClass == Build.VERSION_CODES.TIRAMISU -> {
        // MPC level 33.
        // Provide a high quality experience.
      }
      else -> {
        // MPC level 31, 30, or undefined.
        // Remove extras to keep experience functional.
      }
    }
  }
}

Strategies for using Performance Class

MPC is intended to identify high-end devices, so you can expect to see MPC levels for the top devices from each year, which are the devices you’re likely to want to support for the longest time. For example, the Pixel 9 Pro launched with Android 14 and reports an MPC level of 34, the latest level definition at the time it launched.

You should use MPC as a complement to any existing device clustering solutions you already use, such as querying a device’s static specs or manually blocklisting problematic devices. MPC can be a particularly helpful tool for new device launches: new devices should report an MPC level at launch, so you can use MPC to gauge their capabilities right from the start, without needing to acquire the hardware yourself or manually test each device.

A great first step to get involved is to include MPC levels in your telemetry. This can help you identify patterns in error reports or generally get a better sense of the devices your user base uses if you segment key metrics by MPC level. From there, you might consider using MPC as a dimension in your experimentation pipeline, for example by setting up A/B testing groups based on MPC level, or by starting a feature rollout with the highest MPC level and working your way down. As discussed previously, this is the approach that Google Maps took.

You could further use MPC to tune a user-facing feature, for example by adjusting the number of concurrent video playbacks your app attempts based on the MPC level’s concurrent codec guarantees. However, make sure to still query a device’s runtime capabilities when using this approach, as they may differ depending on the environment and state the device is in.
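
For example, a mapping like the following could be used. The player counts here are purely illustrative assumptions, and maxConcurrentPlayers() is a hypothetical helper rather than a library API.

// Hypothetical helper: choose how many simultaneous players to create based
// on the reported MPC level. The counts are illustrative assumptions only;
// still verify against the device's runtime codec capabilities.
fun maxConcurrentPlayers(mediaPerformanceClass: Int): Int = when {
    mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> 4 // MPC 34+
    mediaPerformanceClass >= Build.VERSION_CODES.S -> 2                // MPC 31 to 33
    else -> 1                                                          // no MPC level
}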

Get in touch!

If MPC sounds like it could be useful for your app, please give it a try! You can get started by taking a look at our sample code or documentation. We welcome you to share any questions or feedback you have in this short form.


This blog post is a part of Camera and Media Spotlight Week. We're providing resources – blog posts, videos, sample code, and more – all designed to help you uplevel the media experiences in your app.

To learn more about what Spotlight Week has to offer and how it can benefit you, be sure to read our overview blog post.

Spotlight Week: Android Camera and Media

Posted by Caren Chang- Android Developer Relations Engineer

Android offers Camera and Media APIs to help you build apps that can capture, edit, share, and play media. To help you make Android camera and media experiences even more delightful for your users, this week we’re kicking off Camera and Media Spotlight Week.

This Spotlight Week will provide resources—blog posts, videos, sample code, and more—all designed to help you uplevel the media experiences in your app. Check out highlights from the latest releases in Camera and Media APIs, including better Jetpack Compose support in CameraX, motion photo support in Media3 Transformer, simpler ExoPlayer setup, and much more! We’ll also bring in developers from the community to talk about their experiences building Android camera and media apps.


Here’s what we’re covering during Camera and Media Spotlight week:

What’s new in camera and media

Tuesday, January 7

Check out what’s new in the latest CameraX and Media3 releases, including how to get started with building Camera apps with Compose.

Creating delightful and premium experiences

Wednesday, January 8

Building delightful and premium experiences for your users is what can help your app really stand out. Learn about different ways to achieve this, such as utilizing the Media Performance Class or enabling HDR video capture in your app. Learn from other developers: see how Google Drive enabled Ultra HDR images in its Android app, and how Instagram improved its in-app image capture experience by implementing Night Mode.

Adaptive for camera and media, for large screens and now XR!

Thursday, January 9

Thinking adaptive is important so your app works just as well on phones as it does on large screens, like foldables, tablets, ChromeOS devices, cars, and the new Android XR platform! On Thursday, we’ll be diving into the media experience on large screen devices and how you can build a smooth tabletop mode into your camera applications. We’ll also look at preparing your apps for XR devices by considering Spatial Audio and Video.

Media creation

Friday, January 10

Capturing, editing, and processing media content are fundamental features of the Android ecosystem. Learn how Media3’s Transformer module can help with your app’s media processing use cases, and see case studies of apps that are using Transformer in production. Listen in to how the 1 Second Everyday Android app approaches media use cases, check out a new API that allows apps to capture concurrent camera streams, and learn how Android developer Tom Colvin experimented with building an AI-powered camera app.


These are just some of the things to think about when building camera and media experiences in your app. Keep checking this blog post for updates; we’ll be adding links and more throughout the week.

Media3 1.5.0 — what’s new?

Posted by Kristina Simakova – Engineering Manager

This article is cross-published on Medium

Media3 1.5.0 is now available!

Transformer now supports motion photos and faster image encoding. We’ve also simplified the setup for DefaultPreloadManager and ExoPlayer, making it easier to use. But that’s not all! We’ve included a new IAMF decoder, a Kotlin listener extension, and easier Player optimization through delegation.

To learn more about all new APIs and bug fixes, check out the full release notes.

Transformer improvements

Motion photo support

Transformer now supports exporting motion photos. The motion photo’s image is exported if the corresponding MediaItem’s image duration is set (see MediaItem.Builder().setImageDurationMs()); otherwise, the motion photo’s video is exported. Note that the EditedMediaItem’s duration should not be set in either case, as it will automatically be set to the corresponding MediaItem’s image duration.
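
As a minimal sketch, exporting the image of a motion photo could look like this, where motionPhotoUri and the output path are placeholder assumptions:

// Export the motion photo's still image by setting an image duration on the
// MediaItem; omitting setImageDurationMs() would export its video instead.
// The EditedMediaItem's duration is intentionally left unset.
val motionPhotoItem = MediaItem.Builder()
    .setUri(motionPhotoUri)     // placeholder: URI of the motion photo
    .setImageDurationMs(3_000)  // render the image as a 3-second video track
    .build()

val editedItem = EditedMediaItem.Builder(motionPhotoItem).build()

Transformer.Builder(context)
    .build()
    .start(editedItem, File(context.cacheDir, "motion_photo.mp4").absolutePath)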

Faster image encoding

This release accelerates image-to-video encoding, thanks to optimizations in DefaultVideoFrameProcessor.queueInputBitmap(). DefaultVideoFrameProcessor now treats the Bitmap given to queueInputBitmap() as immutable. The GL pipeline will resample and color-convert the input Bitmap only once. As a result, Transformer operations that take large (e.g. 12 megapixels) images as input execute faster.

AudioEncoderSettings

Similar to VideoEncoderSettings, Transformer now supports AudioEncoderSettings which can be used to set the desired encoding profile and bitrate.
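
Here’s a sketch of how this could be wired up, assuming AudioEncoderSettings is applied through DefaultEncoderFactory in the same way as VideoEncoderSettings; treat the exact builder method names as assumptions and check the release notes for your version.

// Request a specific audio bitrate for the export; the value is illustrative.
val audioEncoderSettings = AudioEncoderSettings.Builder()
    .setBitrate(128_000)
    .build()

val encoderFactory = DefaultEncoderFactory.Builder(context)
    .setRequestedAudioEncoderSettings(audioEncoderSettings)
    .build()

val transformer = Transformer.Builder(context)
    .setEncoderFactory(encoderFactory)
    .build()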

Edit list support

Transformer now shifts the first video frame to start from 0. This fixes A/V sync issues in some files where an edit list is present.

Unsupported track type logging

This release includes improved logging for unsupported track types, providing more detailed information for troubleshooting and debugging.

Media3 muxer

In one of the previous releases we added a new muxer library which can be used to create MP4 container files. The media3 muxer offers support for a wide range of audio and video codecs, enabling seamless handling of diverse media formats. This new library also brings advanced features including:

    • B-frame support
    • Fragmented MP4 output
    • Edit list support

The muxer library can be included as a gradle dependency:

implementation ("androidx.media3:media3-muxer:1.5.0")

Media3 muxer with Transformer

To use the media3 muxer with Transformer, set an InAppMuxer.Factory (which internally wraps media3 muxer) as the muxer factory when creating a Transformer:

val transformer = Transformer.Builder(context)
    .setMuxerFactory(InAppMuxer.Factory.Builder().build())
    .build()

Simpler setup for DefaultPreloadManager and ExoPlayer

With Media3 1.5.0, we added DefaultPreloadManager.Builder, which makes it much easier to build the preload components and the player. Previously, you had to instantiate several required components (RenderersFactory, TrackSelectorFactory, LoadControl, BandwidthMeter and the preload/playback Looper) first, and be careful to share those components correctly when injecting them into the DefaultPreloadManager constructor and the ExoPlayer.Builder. With the new DefaultPreloadManager.Builder this becomes a lot simpler:

    • Build DefaultPreloadManager and ExoPlayer instances with all default components.
val preloadManagerBuilder = DefaultPreloadManager.Builder()
val preloadManager = preloadManagerBuilder.build()
val player = preloadManagerBuilder.buildExoPlayer()

    • Build DefaultPreloadManager and ExoPlayer instances that share custom components.
val preloadManagerBuilder = DefaultPreloadManager.Builder().setRenderersFactory(customRenderersFactory)
// The resulting preloadManager uses customRenderersFactory
val preloadManager = preloadManagerBuilder.build()
// The resulting player uses customRenderersFactory
val player = preloadManagerBuilder.buildExoPlayer()

    • Build DefaultPreloadManager and ExoPlayer instances while setting custom playback-only configurations on the ExoPlayer.
val preloadManagerBuilder = DefaultPreloadManager.Builder()
val preloadManager = preloadManagerBuilder.build()
// Tune the playback-only configurations
val playerBuilder = ExoPlayer.Builder().setFooEnabled()
// The resulting player will have playback feature "Foo" enabled
val player = preloadManagerBuilder.buildExoPlayer(playerBuilder)

Preloading the next playlist item

We’ve added the ability to preload the next item in the playlist of ExoPlayer. By default, playlist preloading is disabled but can be enabled by setting the duration which should be preloaded to memory:

player.preloadConfiguration =
    PreloadConfiguration(/* targetPreloadDurationUs= */ 5_000_000L)

With the PreloadConfiguration above, the player tries to preload five seconds of media for the next item in the playlist. Preloading is only started when no media is being loaded that is required for the ongoing playback. This way preloading doesn’t compete for bandwidth with the primary playback.

When enabled, preloading can help minimize join latency when a user skips to the next item before the playback buffer reaches the next item. The first period of the next window is prepared and video, audio and text samples are preloaded into its sample queues. The preloaded period is later queued into the player with preloaded samples immediately available and ready to be fed to the codec for rendering.

Once opted in, playlist preloading can be turned off again by using PreloadConfiguration.DEFAULT:

player.preloadConfiguration = PreloadConfiguration.DEFAULT

New IAMF decoder and Kotlin listener extension

The 1.5.0 release includes a new media3-decoder-iamf module, which allows playback of IAMF immersive audio tracks in MP4 files. Apps wanting to try this out will need to build the libiamf decoder locally. See the media3 README for full instructions.

implementation ("androidx.media3:media3-decoder-iamf:1.5.0")

This release also includes a new media3-common-ktx module, a home for Kotlin-specific functionality. The first version of this module contains a suspend function that lets the caller listen to Player.Listener.onEvents. This is a building block that’s used by the upcoming media3-ui-compose module (launching with media3 1.6.0) to power a Jetpack Compose playback UI.

implementation ("androidx.media3:media3-common-ktx:1.5.0")

Easier Player customization via delegation

Media3 has provided a ForwardingPlayer implementation since version 1.0.0, and we have previously suggested that apps should use it when they want to customize the way certain Player operations work, by using the decorator pattern. One very common use case is to allow or disallow certain player commands (in order to show/hide certain buttons in a UI). Unfortunately, doing this correctly with ForwardingPlayer is surprisingly hard and error-prone, because you have to consistently override multiple methods and handle the listener as well. The example code demonstrating how fiddly this is would be too long for this blog post, so we’ve put it in a gist instead.

In order to make these sorts of customizations easier, 1.5.0 includes a new ForwardingSimpleBasePlayer, which builds on the consistency guarantees provided by SimpleBasePlayer to make it easier to create consistent Player implementations following the decorator pattern. The same command-modifying Player is now much simpler to implement:

class PlayerWithoutSeekToNext(player: Player) : ForwardingSimpleBasePlayer(player) {
  override fun getState(): State {
    val state = super.getState()
    return state
      .buildUpon()
      .setAvailableCommands(
        state.availableCommands.buildUpon().remove(COMMAND_SEEK_TO_NEXT).build()
      )
      .build()
  }

  // We don't need to override handleSeek, because it is guaranteed not to be called for
  // COMMAND_SEEK_TO_NEXT since we've marked that command unavailable.
}

MediaSession: Command button for media items

Command buttons for media items allow a session app to declare commands supported by certain media items that then can be conveniently displayed and executed by a MediaController or MediaBrowser:

Screenshot: Command buttons for media items in the Media Center of Android Automotive OS.

You'll find the detailed documentation on developer.android.com.

This is the Media3 equivalent of the legacy “custom browse actions” API, with which Media3 is fully interoperable. Unlike the legacy API, command buttons for media items do not require a MediaLibraryService but are a feature of the Media3 MediaSession instead. Hence they are available for MediaController and MediaBrowser in the same way.


If you encounter any issues, have feature requests, or want to share feedback, please let us know using the Media3 issue tracker on GitHub. We look forward to hearing from you!



CameraX update makes dual concurrent camera even easier

Posted by Donovan McMurray – Developer Relations Engineer

CameraX, Android's Jetpack camera library, is getting an exciting update to its Dual Concurrent Camera feature, making it even easier to integrate this feature into your app. This feature allows you to stream from 2 different cameras at the same time. The original version of Dual Concurrent Camera was released in CameraX 1.3.0, and it was already a huge leap in making this feature easier to implement.

Starting with 1.5.0-alpha01, CameraX will now handle the composition of the 2 camera streams as well. This update is purely additive: it doesn’t remove any prior functionality, nor is it a breaking change to your existing Dual Concurrent Camera code. To tell CameraX to handle the composition, use the new SingleCameraConfig constructor, which has an additional parameter for a CompositionSettings object. Since you’ll be creating 2 SingleCameraConfigs, be consistent about which constructor you use.

Nothing has changed in the way you check for concurrent camera support from the prior version of this feature. As a reminder, here is what that code looks like.

// Set up primary and secondary camera selectors if supported on device.
var primaryCameraSelector: CameraSelector? = null
var secondaryCameraSelector: CameraSelector? = null

for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
    primaryCameraSelector = cameraInfos.first {
        it.lensFacing == CameraSelector.LENS_FACING_FRONT
    }.cameraSelector
    secondaryCameraSelector = cameraInfos.first {
        it.lensFacing == CameraSelector.LENS_FACING_BACK
    }.cameraSelector

    if (primaryCameraSelector == null || secondaryCameraSelector == null) {
        // If either a primary or secondary selector wasn't found, reset both
        // to move on to the next list of CameraInfos.
        primaryCameraSelector = null
        secondaryCameraSelector = null
    } else {
        // If both primary and secondary camera selectors were found, we can
        // conclude the search.
        break
    }
}

if (primaryCameraSelector == null || secondaryCameraSelector == null) {
    // Front and back concurrent camera not available. Handle accordingly.
}

Here’s the updated code snippet showing how to implement picture-in-picture, with the front camera stream scaled down to fit into the lower right corner. In this example, CameraX handles the composition of the camera streams.

// If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
// and compose them in a picture-in-picture layout.
val primary = SingleCameraConfig(
    cameraSelectorPrimary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.0f, 0.0f)
        .setScale(1.0f, 1.0f)
        .build(),
    lifecycleOwner)
val secondary = SingleCameraConfig(
    cameraSelectorSecondary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(2 / 3f - 0.1f, -2 / 3f + 0.1f)
        .setScale(1 / 3f, 1 / 3f)
        .build(),
    lifecycleOwner)

// Bind to lifecycle
val concurrentCamera =
    cameraProvider.bindToLifecycle(listOf(primary, secondary))

You are not constrained to a picture-in-picture layout. For instance, you could define a side-by-side layout by setting the offsets and scaling factors accordingly. You want to keep both dimensions scaled by the same amount to avoid a stretched preview. Here’s how that might look.

// If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
// and compose them in a side-by-side layout.
val primary = SingleCameraConfig(
    cameraSelectorPrimary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.0f, 0.25f)
        .setScale(0.5f, 0.5f)
        .build(),
    lifecycleOwner)
val secondary = SingleCameraConfig(
    cameraSelectorSecondary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.5f, 0.25f)
        .setScale(0.5f, 0.5f)
        .build(),
    lifecycleOwner)

// Bind to lifecycle
val concurrentCamera =
    cameraProvider.bindToLifecycle(listOf(primary, secondary))

We’re excited to offer this improvement to an already developer-friendly feature. Truly the CameraX way! CompositionSettings in Dual Concurrent Camera is currently in alpha, so if you have feature requests to improve upon it before the API is locked in, please give us feedback in the CameraX Discussion Group. And check out the full CameraX 1.5.0-alpha01 release notes to see what else is new in CameraX.

CameraX 1.3 is now in Beta

Posted by Donovan McMurray, Camera Developer Relations Engineer

CameraX, the Android Jetpack camera library which helps you create a best-in-class experience that works consistently across Android versions and devices, is becoming even more helpful with its 1.3 release. CameraX is already used in a growing number of Android apps, encompassing a wide range of use cases from straightforward and performant camera interactions to advanced image processing and beyond.

CameraX 1.3 opens up even more advanced capabilities. With the dual concurrent camera feature, apps can operate two cameras at the same time. Additionally, 1.3 makes it simple to delight users with new HDR video capabilities. You can also now add graphics library transformations (for example, with OpenGL or Vulkan) to the Preview, ImageCapture, and VideoCapture UseCases to apply filters and effects. There are also many other video improvements.

CameraX version 1.3 is officially in Beta as of today, so let’s get right into the details!

Dual concurrent camera

CameraX makes complex camera functionality easy to use, and the new dual concurrent camera feature is no exception. CameraX handles the low-level details like ensuring the concurrent camera streams are opened and closed in the correct order. In CameraX, binding dual concurrent cameras is not that different from binding a single camera.

First, check which cameras support a concurrent connection with getAvailableConcurrentCameraInfos(). A common scenario is to select a front-facing and a back-facing camera.

var primaryCameraSelector: CameraSelector? = null
var secondaryCameraSelector: CameraSelector? = null

for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
    primaryCameraSelector = cameraInfos.first {
        it.lensFacing == CameraSelector.LENS_FACING_FRONT
    }.cameraSelector
    secondaryCameraSelector = cameraInfos.first {
        it.lensFacing == CameraSelector.LENS_FACING_BACK
    }.cameraSelector

    if (primaryCameraSelector == null || secondaryCameraSelector == null) {
        // If either a primary or secondary selector wasn't found, reset both
        // to move on to the next list of CameraInfos.
        primaryCameraSelector = null
        secondaryCameraSelector = null
    } else {
        // If both primary and secondary camera selectors were found, we can
        // conclude the search.
        break
    }
}

if (primaryCameraSelector == null || secondaryCameraSelector == null) {
    // Front and back concurrent camera not available. Handle accordingly.
}

Then, create a SingleCameraConfig for each camera, passing in each camera selector from before, along with your UseCaseGroup and LifecycleOwner. Then call bindToLifecycle() on your CameraProvider with both SingleCameraConfigs in a list.


val primary = ConcurrentCamera.SingleCameraConfig(
    primaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)

val secondary = ConcurrentCamera.SingleCameraConfig(
    secondaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)

val concurrentCamera = cameraProvider.bindToLifecycle(
    listOf(primary, secondary)
)

For compatibility reasons, dual concurrent camera supports each camera being bound to 2 or fewer UseCases with a maximum resolution of 720p or 1440p, depending on the device.

HDR video

CameraX 1.3 also adds support for 10-bit video streaming along with HDR profiles, giving you the ability to capture video with greater detail, color and contrast than previously available. You can use the VideoCapture.Builder.setDynamicRange() method to set a number of configurations. There are several pre-configured values:

  • HLG_10_BIT - A 10-bit high-dynamic range with HLG encoding. This is the recommended HDR encoding to use because every device that supports HDR capture will support HLG10. See the Check for HDR support guide for details.
  • HDR10_10_BIT - A 10-bit high-dynamic range with HDR10 encoding.
  • HDR10_PLUS_10_BIT - A 10-bit high-dynamic range with HDR10+ encoding.
  • DOLBY_VISION_10_BIT - A 10-bit high-dynamic range with Dolby Vision encoding.
  • DOLBY_VISION_8_BIT - An 8-bit high-dynamic range with Dolby Vision encoding.

First, loop through the available CameraInfos to find the first one that supports HDR. You can add additional camera selection criteria here.

var supportedHdrEncoding: DynamicRange? = null

val hdrCameraInfo = cameraProvider.availableCameraInfos
    .firstOrNull { cameraInfo ->
        val videoCapabilities = Recorder.getVideoCapabilities(cameraInfo)
        val supportedDynamicRanges = videoCapabilities.getSupportedDynamicRanges()
        supportedHdrEncoding = supportedDynamicRanges.firstOrNull {
            it != DynamicRange.SDR // Ensure an HDR encoding is found
        }
        return@firstOrNull supportedHdrEncoding != null
    }

val cameraSelector = hdrCameraInfo?.cameraSelector ?: CameraSelector.DEFAULT_BACK_CAMERA

Then, set up a Recorder and a VideoCapture UseCase. If you found a supportedHdrEncoding earlier, also call setDynamicRange() to turn on HDR in your camera app.


// Create a Recorder with Quality.HIGHEST, which will select the highest
// resolution compatible with the chosen DynamicRange.
val recorder = Recorder.Builder()
    .setQualitySelector(QualitySelector.from(Quality.HIGHEST))
    .build()

val videoCaptureBuilder = VideoCapture.Builder(recorder)
if (supportedHdrEncoding != null) {
    videoCaptureBuilder.setDynamicRange(supportedHdrEncoding!!)
}
val videoCapture = videoCaptureBuilder.build()

Effects

While CameraX makes many camera tasks easy, it also provides hooks to accomplish advanced or custom functionality. The new effects methods enable custom graphics library transformations to be applied to frames for Preview, ImageCapture, and VideoCapture.

You can define a CameraEffect to inject code into the CameraX pipeline and apply visual effects, such as a custom portrait effect. When creating your own CameraEffect via the constructor, you must specify which use cases to target (from PREVIEW, VIDEO_CAPTURE, and IMAGE_CAPTURE). You must also specify a SurfaceProcessor to implement a GPU effect for the underlying Surface. It's recommended to use a graphics API such as OpenGL or Vulkan to access the Surface. This process will block the Executor associated with the ImageCapture. An internal I/O thread is used by default, or you can set one with ImageCapture.Builder.setIoExecutor(). Note: it’s the implementation’s responsibility to be performant. For a 30 fps input, each frame should be processed in under 30 ms to avoid frame drops.

There is an alternative CameraEffect constructor for processing still images, since higher latency is more acceptable when processing a single image. For this constructor, you pass in an ImageProcessor, implementing the process method to return an image as detailed in the ImageProcessor.Request.getInputImage() method.

Once you’ve defined one or more CameraEffects, you can add them to your CameraX setup. If you’re using a CameraProvider, you should call UseCaseGroup.Builder.addEffect() for each CameraEffect, then build the UseCaseGroup, and pass it in to bindToLifecycle(). If you’re using a CameraController, you should pass all of your CameraEffects into setEffects().
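
For the CameraProvider path, that wiring could look like the following sketch, where preview, videoCapture, cameraSelector, lifecycleOwner, and myCameraEffect are assumed to be objects you’ve already created (myCameraEffect being your own CameraEffect implementation).

// Attach a previously defined CameraEffect to its target use cases and bind
// everything through a UseCaseGroup.
val useCaseGroup = UseCaseGroup.Builder()
    .addUseCase(preview)
    .addUseCase(videoCapture)
    .addEffect(myCameraEffect) // assumption: your CameraEffect implementation
    .build()

cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, useCaseGroup)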

Additional video features

CameraX 1.3 has many additional highly-requested video features that we’re excited to add support for.

With VideoCapture.Builder.setMirrorMode(), you can control when video recordings are reflected horizontally. You can set MIRROR_MODE_OFF (the default), MIRROR_MODE_ON, and MIRROR_MODE_ON_FRONT_ONLY (useful for matching the mirror state of the Preview, which is mirrored on front-facing cameras). Note: in an app that only uses the front-facing camera, MIRROR_MODE_ON and MIRROR_MODE_ON_FRONT_ONLY are equivalent.
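
As a quick sketch, the mirror mode is set when building the VideoCapture use case; recorder here is assumed to be an existing Recorder instance.

// Mirror recordings only when the front-facing camera is used, matching the
// mirrored Preview that users see.
val videoCapture = VideoCapture.Builder(recorder)
    .setMirrorMode(MirrorMode.MIRROR_MODE_ON_FRONT_ONLY)
    .build()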

The PendingRecording.asPersistentRecording() method prevents a video from being stopped by lifecycle events or by the explicit unbinding of the VideoCapture use case that the recording’s Recorder is attached to. This is useful if you want to bind to a different camera and continue the video recording with that camera. When this option is enabled, you must explicitly call Recording.stop() or Recording.close() to end the recording.

For videos that are set to record audio via PendingRecording.withAudioEnabled(), you can now call Recording.mute() while the recording is in progress. Pass in a boolean to specify whether to mute or unmute the audio, and CameraX will insert silence during the muted portions to ensure the audio stays aligned with the video.

AudioStats now has a getAudioAmplitude() method, which is perfect for showing a visual indicator to users that audio is being recorded. While a video recording is in progress, each VideoRecordEvent can be used to access RecordingStats, which in turn contains the AudioStats object.
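
Here’s a hedged sketch of reading the amplitude from the event stream during a recording; outputOptions, executor, and updateLevelMeter() stand in for your own setup and are assumptions rather than CameraX APIs.

// Start a recording with audio enabled and surface the current audio
// amplitude to the UI on each status event.
val recording = videoCapture.output
    .prepareRecording(context, outputOptions) // assumption: e.g. a MediaStoreOutputOptions
    .withAudioEnabled()                       // requires the RECORD_AUDIO permission
    .start(executor) { event ->
        if (event is VideoRecordEvent.Status) {
            val amplitude = event.recordingStats.audioStats.audioAmplitude
            updateLevelMeter(amplitude)       // hypothetical UI update
        }
    }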

Next steps

Check the full release notes for CameraX 1.3 for more details on the features described here and more! If you’re ready to try out CameraX 1.3, update your project’s CameraX dependency to 1.3.0-beta01 (or the latest version at the time you’re reading this).

If you would like to provide feedback on any of these features or CameraX in general, please create a CameraX issue. As always, you can also reach out on our CameraX Discussion Group.

CameraX 1.2 is now in Beta

Posted by Donovan McMurray, CameraX Developer Relations Engineer

As part of Android Jetpack, the CameraX library makes complex camera functionality available in an easy-to-use API, helping you create a best-in-class experience that works consistently across Android versions and devices. As of today, CameraX version 1.2 is officially in Beta. Update from version 1.1 to take advantage of the latest game-changing features: our new ML Kit integration, which can reduce your boilerplate code when using ML Kit in a CameraX app, and Zero-Shutter Lag, which enables faster action shots than were previously possible.

These two advanced features are simple to implement with CameraX 1.2, so let’s take a look at each of them in depth.

ML Kit Integration

Google’s ML Kit provides several on-device vision APIs for detecting faces, barcodes, text, objects, and more. We’re making it easier to integrate these APIs with CameraX. Version 1.2 introduces MlKitAnalyzer, an implementation of ImageAnalysis.Analyzer that handles much of the ML Kit setup for you.


You can use MlKitAnalyzer with both cameraController and cameraProvider workflows. If you use the cameraController.setImageAnalysisAnalyzer() method, then CameraX can also handle the coordinate transformation between the ML Kit output and your PreviewView.

Here’s a code snippet using setImageAnalysisAnalyzer() to set a BarcodeScanner on a cameraController to detect QR codes. CameraX automatically handles the coordinate transformations when you pass COORDINATE_SYSTEM_VIEW_REFERENCED into the MlKitAnalyzer. (Use COORDINATE_SYSTEM_ORIGINAL to prevent CameraX from applying any coordinate transformations.)

val options = BarcodeScannerOptions.Builder()
    .setBarcodeFormats(Barcode.FORMAT_QR_CODE)
    .build()
val barcodeScanner = BarcodeScanning.getClient(options)

cameraController.setImageAnalysisAnalyzer(
    executor,
    MlKitAnalyzer(
        listOf(barcodeScanner),
        COORDINATE_SYSTEM_VIEW_REFERENCED,
        executor
    ) { result ->
        // The value of result.getValue(barcodeScanner)
        // can be used directly for drawing UI overlay.
    }
)



Zero-Shutter Lag

Have you ever lined up the perfect photo, but when you click the shutter button the lag causes you to miss the best moment? CameraX 1.2 offers a solution to this problem by introducing Zero-Shutter Lag.

Prior to CameraX 1.2, you could optimize for quality (CAPTURE_MODE_MAXIMIZE_QUALITY) or efficiency (CAPTURE_MODE_MINIMIZE_LATENCY) when calling ImageCapture.Builder.setCaptureMode(). CameraX 1.2 adds a new value (CAPTURE_MODE_ZERO_SHOT_LAG) that reduces latency even further than CAPTURE_MODE_MINIMIZE_LATENCY. Note: for devices that cannot support Zero-Shutter Lag, CameraX will fall back to CAPTURE_MODE_MINIMIZE_LATENCY.
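
Opting in is a one-line change when building the ImageCapture use case. This sketch uses the capture-mode constant as it’s named in this post; double-check the exact name against the CameraX release you’re using.

// Request Zero-Shutter Lag; on devices that can't support it, CameraX falls
// back to CAPTURE_MODE_MINIMIZE_LATENCY.
val imageCapture = ImageCapture.Builder()
    .setCaptureMode(ImageCapture.CAPTURE_MODE_ZERO_SHOT_LAG)
    .build()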

We accomplish this by using a circular buffer of photos. On image capture, we go back in time in the circular buffer to get the frame closest to the actual press of the shutter button. No DeLorean needed. Great Scott!

Here’s an example of how this works in a CameraX app with Preview and ImageCapture use cases:


  1. Just like any other app with a Preview use case, CameraX sends images from the camera to the UI for the user to see.
  2. With Zero-Shutter Lag, CameraX also sends images to a circular buffer which holds multiple recent images.
  3. When the user presses the shutter button, there is inevitably some lag in sending the current camera image to your app. For this reason, Zero-Shutter Lag goes to the circular buffer to fetch an image.
  4. CameraX finds the photo in the circular buffer closest to the actual time when the user pressed the shutter button, and returns that photo to your app.

There are a few limitations to keep in mind with Zero-Shutter Lag. First, please be mindful that this is still an experimental feature. Second, since keeping a circular buffer of images is computationally intensive, you cannot use CAPTURE_MODE_ZERO_SHOT_LAG while using VideoCapture or extensions. Third, the circular buffer will increase the memory footprint of your app.


Next steps


Check our full release notes for CameraX 1.2 for more details on the features described here and more! If you’re ready to try out CameraX 1.2, update your project’s CameraX dependency to 1.2.0-beta01 (or the latest version at the time you’re reading this).

If you would like to provide feedback on any of these features or CameraX in general, please create a CameraX issue. As always, you can also reach out on our CameraX Discussion Group.