CameraX 1.3 is now in Beta

Posted by Donovan McMurray, Camera Developer Relations Engineer

CameraX, the Android Jetpack camera library which helps you create a best-in-class experience that works consistently across Android versions and devices, is becoming even more helpful with its 1.3 release. CameraX is already used in a growing number of Android apps, encompassing a wide range of use cases from straightforward and performant camera interactions to advanced image processing and beyond.

CameraX 1.3 opens up even more advanced capabilities. With the dual concurrent camera feature, apps can operate two cameras at the same time. Additionally, 1.3 makes it simple to delight users with new HDR video capabilities. You can also now add graphics library transformations (for example, with OpenGL or Vulkan) to the Preview, ImageCapture, and VideoCapture UseCases to apply filters and effects. There are also many other video improvements.

CameraX version 1.3 is officially in Beta as of today, so let’s get right into the details!

Dual concurrent camera

CameraX makes complex camera functionality easy to use, and the new dual concurrent camera feature is no exception. CameraX handles the low-level details like ensuring the concurrent camera streams are opened and closed in the correct order. In CameraX, binding dual concurrent cameras is not that different from binding a single camera.

First, check which cameras support a concurrent connection with getAvailableConcurrentCameraInfos(). A common scenario is to select a front-facing and a back-facing camera.

var primaryCameraSelector: CameraSelector? = null
var secondaryCameraSelector: CameraSelector? = null

for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
    primaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_FRONT
    }?.cameraSelector
    secondaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_BACK
    }?.cameraSelector

    if (primaryCameraSelector == null || secondaryCameraSelector == null) {
        // If either a primary or secondary selector wasn't found, reset both
        // to move on to the next list of CameraInfos.
        primaryCameraSelector = null
        secondaryCameraSelector = null
    } else {
        // If both primary and secondary camera selectors were found, we can
        // conclude the search.
        break
    }
}

if (primaryCameraSelector == null || secondaryCameraSelector == null) {
    // Front and back concurrent camera not available. Handle accordingly.
}

Next, create a SingleCameraConfig for each camera, passing in each camera selector from before, along with your UseCaseGroup and LifecycleOwner. Then call bindToLifecycle() on your CameraProvider with both SingleCameraConfigs in a list.


val primary = ConcurrentCamera.SingleCameraConfig(
    primaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)
val secondary = ConcurrentCamera.SingleCameraConfig(
    secondaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)
val concurrentCamera = cameraProvider.bindToLifecycle(
    listOf(primary, secondary)
)

For compatibility reasons, dual concurrent camera supports each camera being bound to 2 or fewer UseCases with a maximum resolution of 720p or 1440p, depending on the device.
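
For reference, here is a minimal sketch of the useCaseGroup referenced above, staying within that limit. It assumes a PreviewView named previewView in your layout:

val preview = Preview.Builder().build().apply {
    // previewView is assumed to be a PreviewView in your layout.
    setSurfaceProvider(previewView.surfaceProvider)
}
val useCaseGroup = UseCaseGroup.Builder()
    .addUseCase(preview)
    .build()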

HDR video

CameraX 1.3 also adds support for 10-bit video streaming along with HDR profiles, giving you the ability to capture video with greater detail, color and contrast than previously available. You can use the VideoCapture.Builder.setDynamicRange() method to set a number of configurations. There are several pre-configured values:

  • HLG_10_BIT - A 10-bit high-dynamic range with HLG encoding. This is the recommended HDR encoding to use because every device that supports HDR capture will support HLG10. See the Check for HDR support guide for details.
  • HDR10_10_BIT - A 10-bit high-dynamic range with HDR10 encoding.
  • HDR10_PLUS_10_BIT - A 10-bit high-dynamic range with HDR10+ encoding.
  • DOLBY_VISION_10_BIT - A 10-bit high-dynamic range with Dolby Vision encoding.
  • DOLBY_VISION_8_BIT - An 8-bit high-dynamic range with Dolby Vision encoding.

First, loop through the available CameraInfos to find the first one that supports HDR. You can add additional camera selection criteria here.

var supportedHdrEncoding: DynamicRange? = null

val hdrCameraInfo = cameraProvider.availableCameraInfos
    .firstOrNull { cameraInfo ->
        val videoCapabilities = Recorder.getVideoCapabilities(cameraInfo)
        val supportedDynamicRanges = videoCapabilities.supportedDynamicRanges
        supportedHdrEncoding = supportedDynamicRanges.firstOrNull {
            it != DynamicRange.SDR // Ensure an HDR encoding is found
        }
        supportedHdrEncoding != null
    }

val cameraSelector = hdrCameraInfo?.cameraSelector
    ?: CameraSelector.DEFAULT_BACK_CAMERA

Then, set up a Recorder and a VideoCapture UseCase. If you found a supportedHdrEncoding earlier, also call setDynamicRange() to turn on HDR in your camera app.


// Create a Recorder with Quality.HIGHEST, which will select the highest
// resolution compatible with the chosen DynamicRange.
val recorder = Recorder.Builder()
    .setQualitySelector(QualitySelector.from(Quality.HIGHEST))
    .build()

val videoCaptureBuilder = VideoCapture.Builder(recorder)
if (supportedHdrEncoding != null) {
    videoCaptureBuilder.setDynamicRange(supportedHdrEncoding!!)
}
val videoCapture = videoCaptureBuilder.build()
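
From here, binding and recording work like any other CameraX video setup. As a rough sketch, assuming a lifecycleOwner and context from your Activity along with the cameraSelector found above:

// Bind the HDR-capable VideoCapture to the selected camera.
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, videoCapture)

val outputOptions = MediaStoreOutputOptions
    .Builder(context.contentResolver, MediaStore.Video.Media.EXTERNAL_CONTENT_URI)
    .build()
val recording = videoCapture.output
    .prepareRecording(context, outputOptions)
    .start(ContextCompat.getMainExecutor(context)) { event ->
        // Handle VideoRecordEvents (Start, Status, Finalize) here.
    }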

Effects

While CameraX makes many camera tasks easy, it also provides hooks to accomplish advanced or custom functionality. The new effects methods enable custom graphics library transformations to be applied to frames for Preview, ImageCapture, and VideoCapture.

You can define a CameraEffect to inject code into the CameraX pipeline and apply visual effects, such as a custom portrait effect. When creating your own CameraEffect via the constructor, you must specify which use cases to target (from PREVIEW, VIDEO_CAPTURE, and IMAGE_CAPTURE). You must also specify a SurfaceProcessor to implement a GPU effect for the underlying Surface. It's recommended to use a graphics API such as OpenGL or Vulkan to access the Surface. This processing will block the Executor associated with the ImageCapture. An internal I/O thread is used by default, or you can set one with ImageCapture.Builder.setIoExecutor(). Note: it's the implementation's responsibility to be performant. For 30fps input, each frame should be processed in under 30 ms to avoid frame drops.
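
To make the shape of this API concrete, here is a rough skeleton, not a working effect: the actual OpenGL rendering is elided, and PortraitEffect is a name made up for this example.

class PortraitEffect(executor: Executor) : CameraEffect(
    // Target Preview and VideoCapture; add IMAGE_CAPTURE to cover stills too.
    CameraEffect.PREVIEW or CameraEffect.VIDEO_CAPTURE,
    executor,
    object : SurfaceProcessor {
        override fun onInputSurface(request: SurfaceRequest) {
            // Create a Surface (e.g., backed by an OpenGL texture) and hand it
            // to the camera via request.provideSurface(...).
        }
        override fun onOutputSurface(surfaceOutput: SurfaceOutput) {
            // Obtain the output Surface from surfaceOutput and render each
            // processed frame into it.
        }
    },
    { throwable ->
        // Handle surface-processing errors.
    }
)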

There is an alternative CameraEffect constructor for processing still images, since higher latency is more acceptable when processing a single image. For this constructor, you pass in an ImageProcessor, implementing its process method to return a processed version of the input image retrieved from ImageProcessor.Request.getInputImage().

Once you’ve defined one or more CameraEffects, you can add them to your CameraX setup. If you’re using a CameraProvider, you should call UseCaseGroup.Builder.addEffect() for each CameraEffect, then build the UseCaseGroup, and pass it in to bindToLifecycle(). If you’re using a CameraController, you should pass all of your CameraEffects into setEffects().
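
A sketch of both paths, assuming the hypothetical PortraitEffect from above and existing preview and videoCapture use cases:

val portraitEffect = PortraitEffect(executor)

// CameraProvider path
val useCaseGroup = UseCaseGroup.Builder()
    .addUseCase(preview)
    .addUseCase(videoCapture)
    .addEffect(portraitEffect)
    .build()
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, useCaseGroup)

// CameraController path
cameraController.setEffects(setOf(portraitEffect))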

Additional video features

CameraX 1.3 also includes many additional highly requested video features that we’re excited to support.

With VideoCapture.Builder.setMirrorMode(), you can control when video recordings are reflected horizontally. You can set MIRROR_MODE_OFF (the default), MIRROR_MODE_ON, and MIRROR_MODE_ON_FRONT_ONLY (useful for matching the mirror state of the Preview, which is mirrored on front-facing cameras). Note: in an app that only uses the front-facing camera, MIRROR_MODE_ON and MIRROR_MODE_ON_FRONT_ONLY are equivalent.
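
For example, a minimal sketch, assuming a recorder built as in the HDR section above:

// Mirror recordings only when the front-facing camera is in use,
// matching the mirrored Preview.
val videoCapture = VideoCapture.Builder(recorder)
    .setMirrorMode(MirrorMode.MIRROR_MODE_ON_FRONT_ONLY)
    .build()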

The PendingRecording.asPersistentRecording() method prevents a video from being stopped by lifecycle events or by the explicit unbinding of the VideoCapture use case that the recording's Recorder is attached to. This is useful if you want to bind to a different camera and continue the video recording with that camera. When this option is enabled, you must explicitly call Recording.stop() or Recording.close() to end the recording.
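
As a sketch (this API is experimental and requires an opt-in; context and outputOptions are assumed from your recording setup, and the annotation name reflects recent releases):

@OptIn(ExperimentalPersistentRecording::class)
fun startPersistentRecording(): Recording =
    videoCapture.output
        .prepareRecording(context, outputOptions)
        .asPersistentRecording()
        .start(ContextCompat.getMainExecutor(context)) { /* VideoRecordEvents */ }
        // The recording now survives rebinding; call stop() or close() to end it.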

For videos that are set to record audio via PendingRecording.withAudioEnabled(), you can now call Recording.mute() while the recording is in progress. Pass in a boolean to specify whether to mute or unmute the audio, and CameraX will insert silence during the muted portions to ensure the audio stays aligned with the video.
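
For instance, a minimal sketch of toggling mute on an in-progress recording:

val recording = videoCapture.output
    .prepareRecording(context, outputOptions)
    .withAudioEnabled()
    .start(ContextCompat.getMainExecutor(context)) { /* VideoRecordEvents */ }

// Later, while recording: true mutes, false unmutes. CameraX inserts
// silence for the muted span so audio and video stay aligned.
recording.mute(true)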

AudioStats now has a getAudioAmplitude() method, which is perfect for showing a visual indicator to users that audio is being recorded. While a video recording is in progress, each VideoRecordEvent can be used to access RecordingStats, which in turn contains the AudioStats object.
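
As a sketch, you can read the amplitude inside the event listener; updateAudioLevelIndicator is a hypothetical UI function:

videoCapture.output
    .prepareRecording(context, outputOptions)
    .withAudioEnabled()
    .start(ContextCompat.getMainExecutor(context)) { event ->
        // Every VideoRecordEvent exposes RecordingStats, which holds AudioStats.
        val amplitude = event.recordingStats.audioStats.audioAmplitude
        updateAudioLevelIndicator(amplitude)
    }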

Next steps

Check the full release notes for CameraX 1.3 for more details on the features described here and more! If you’re ready to try out CameraX 1.3, update your project’s CameraX dependency to 1.3.0-beta01 (or the latest version at the time you’re reading this).

If you would like to provide feedback on any of these features or CameraX in general, please create a CameraX issue. As always, you can also reach out on our CameraX Discussion Group.

CameraX 1.2 is now in Beta

Posted by Donovan McMurray, CameraX Developer Relations Engineer

As part of Android Jetpack, the CameraX library makes complex camera functionality available in an easy-to-use API, helping you create a best-in-class experience that works consistently across Android versions and devices. As of today, CameraX version 1.2 is officially in Beta. Update from version 1.1 to take advantage of the latest game-changing features: our new ML Kit integration, which can reduce your boilerplate code when using ML Kit in a CameraX app, and Zero-Shutter Lag, which enables faster action shots than were previously possible.

These two advanced features are simple to implement with CameraX 1.2, so let’s take a look at each of them in depth.

ML Kit Integration

Google’s ML Kit provides several on-device vision APIs for detecting faces, barcodes, text, objects, and more. We’re making it easier to integrate these APIs with CameraX. Version 1.2 introduces MlKitAnalyzer, an implementation of ImageAnalysis.Analyzer that handles much of the ML Kit setup for you.


You can use MlKitAnalyzer with both cameraController and cameraProvider workflows. If you use the cameraController.setImageAnalysisAnalyzer() method, then CameraX can also handle the coordinate transformation between the ML Kit output and your PreviewView.

Here’s a code snippet using setImageAnalysisAnalyzer() to set a BarcodeScanner on a cameraController to detect QR codes. CameraX automatically handles the coordinate transformations when you pass COORDINATE_SYSTEM_VIEW_REFERENCED into the MlKitAnalyzer. (Use COORDINATE_SYSTEM_ORIGINAL to prevent CameraX from applying any coordinate transformations.)

val options = BarcodeScannerOptions.Builder()
    .setBarcodeFormats(Barcode.FORMAT_QR_CODE)
    .build()
val barcodeScanner = BarcodeScanning.getClient(options)

cameraController.setImageAnalysisAnalyzer(
    executor,
    MlKitAnalyzer(
        listOf(barcodeScanner),
        COORDINATE_SYSTEM_VIEW_REFERENCED,
        executor
    ) { result ->
        // The value of result.getResult(barcodeScanner)
        // can be used directly for drawing UI overlay.
    }
)



Zero-Shutter Lag

Have you ever lined up the perfect photo, but when you click the shutter button the lag causes you to miss the best moment? CameraX 1.2 offers a solution to this problem by introducing Zero-Shutter Lag.

Prior to CameraX 1.2, you could optimize for quality (CAPTURE_MODE_MAXIMIZE_QUALITY) or efficiency (CAPTURE_MODE_MINIMIZE_LATENCY) when calling ImageCapture.Builder.setCaptureMode(). CameraX 1.2 adds a new value, CAPTURE_MODE_ZERO_SHUTTER_LAG, which reduces latency even further than CAPTURE_MODE_MINIMIZE_LATENCY. Note: on devices that cannot support Zero-Shutter Lag, CameraX will fall back to CAPTURE_MODE_MINIMIZE_LATENCY.
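
Enabling it is a one-line change when building your ImageCapture use case:

val imageCapture = ImageCapture.Builder()
    .setCaptureMode(ImageCapture.CAPTURE_MODE_ZERO_SHUTTER_LAG)
    .build()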

We accomplish this by using a circular buffer of photos. On image capture, we go back in time in the circular buffer to get the frame closest to the actual press of the shutter button. No DeLorean needed. Great Scott!

Here’s an example of how this works in a CameraX app with Preview and ImageCapture use cases:


  1. Just like any other app with a Preview use case, CameraX sends images from the camera to the UI for the user to see.
  2. With Zero-Shutter Lag, CameraX also sends images to a circular buffer which holds multiple recent images.
  3. When the user presses the shutter button, there is inevitably some lag in sending the current camera image to your app. For this reason, Zero-Shutter Lag goes to the circular buffer to fetch an image.
  4. CameraX finds the photo in the circular buffer closest to the actual time when the user pressed the shutter button, and returns that photo to your app.

There are a few limitations to keep in mind with Zero-Shutter Lag. First, please be mindful that this is still an experimental feature. Second, since keeping a circular buffer of images is computationally intensive, you cannot use CAPTURE_MODE_ZERO_SHUTTER_LAG while using VideoCapture or extensions. Third, the circular buffer will increase the memory footprint of your app.


Next steps


Check our full release notes for CameraX 1.2 for more details on the features described here and more! If you’re ready to try out CameraX 1.2, update your project’s CameraX dependency to 1.2.0-beta01 (or the latest version at the time you’re reading this).

If you would like to provide feedback on any of these features or CameraX in general, please create a CameraX issue. As always, you can also reach out on our CameraX Discussion Group.