Tag Archives: CameraX

CameraX 1.3 is now in Beta

Posted by Donovan McMurray, Camera Developer Relations Engineer

CameraX, the Android Jetpack camera library which helps you create a best-in-class experience that works consistently across Android versions and devices, is becoming even more helpful with its 1.3 release. CameraX is already used in a growing number of Android apps, encompassing a wide range of use cases from straightforward and performant camera interactions to advanced image processing and beyond.

CameraX 1.3 opens up even more advanced capabilities. With the dual concurrent camera feature, apps can operate two cameras at the same time. Additionally, 1.3 makes it simple to delight users with new HDR video capabilities. You can also now add graphics library transformations (for example, with OpenGL or Vulkan) to the Preview, ImageCapture, and VideoCapture UseCases to apply filters and effects. There are also many other video improvements.

CameraX version 1.3 is officially in Beta as of today, so let’s get right into the details!

Dual concurrent camera

CameraX makes complex camera functionality easy to use, and the new dual concurrent camera feature is no exception. CameraX handles the low-level details like ensuring the concurrent camera streams are opened and closed in the correct order. In CameraX, binding dual concurrent cameras is not that different from binding a single camera.

First, check which cameras support a concurrent connection with getAvailableConcurrentCameraInfos(). A common scenario is to select a front-facing and a back-facing camera.

var primaryCameraSelector: CameraSelector? = null
var secondaryCameraSelector: CameraSelector? = null

for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
    primaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_FRONT
    }?.cameraSelector
    secondaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_BACK
    }?.cameraSelector

    if (primaryCameraSelector == null || secondaryCameraSelector == null) {
        // If either a primary or secondary selector wasn't found, reset both
        // to move on to the next list of CameraInfos.
        primaryCameraSelector = null
        secondaryCameraSelector = null
    } else {
        // Both primary and secondary camera selectors were found, so the
        // search can stop.
        break
    }
}

if (primaryCameraSelector == null || secondaryCameraSelector == null) {
    // A front and back concurrent camera pair is not available.
    // Handle accordingly.
}

Next, create a SingleCameraConfig for each camera, passing in each camera selector from before, along with your UseCaseGroup and LifecycleOwner. Then call bindToLifecycle() on your CameraProvider with both SingleCameraConfigs in a list.


val primary = ConcurrentCamera.SingleCameraConfig(
    primaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)
val secondary = ConcurrentCamera.SingleCameraConfig(
    secondaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)

val concurrentCamera = cameraProvider.bindToLifecycle(
    listOf(primary, secondary)
)

For compatibility reasons, dual concurrent camera supports each camera being bound to 2 or fewer UseCases with a maximum resolution of 720p or 1440p, depending on the device.

HDR video

CameraX 1.3 also adds support for 10-bit video streaming along with HDR profiles, giving you the ability to capture video with greater detail, color, and contrast than previously available. You can use the VideoCapture.Builder.setDynamicRange() method to configure the dynamic range. There are several pre-configured values:

  • HLG_10_BIT - A 10-bit high-dynamic range with HLG encoding. This is the recommended HDR encoding to use because every device that supports HDR capture will support HLG10. See the Check for HDR support guide for details.
  • HDR10_10_BIT - A 10-bit high-dynamic range with HDR10 encoding.
  • HDR10_PLUS_10_BIT - A 10-bit high-dynamic range with HDR10+ encoding.
  • DOLBY_VISION_10_BIT - A 10-bit high-dynamic range with Dolby Vision encoding.
  • DOLBY_VISION_8_BIT - An 8-bit high-dynamic range with Dolby Vision encoding.

First, loop through the available CameraInfos to find the first one that supports HDR. You can add additional camera selection criteria here.

var supportedHdrEncoding: DynamicRange? = null

val hdrCameraInfo = cameraProvider.availableCameraInfos
    .firstOrNull { cameraInfo ->
        val videoCapabilities = Recorder.getVideoCapabilities(cameraInfo)
        supportedHdrEncoding = videoCapabilities.supportedDynamicRanges
            .firstOrNull {
                it != DynamicRange.SDR // Ensure an HDR encoding is found
            }
        supportedHdrEncoding != null
    }

val cameraSelector = hdrCameraInfo?.cameraSelector
    ?: CameraSelector.DEFAULT_BACK_CAMERA

Then, set up a Recorder and a VideoCapture UseCase. If you found a supportedHdrEncoding earlier, also call setDynamicRange() to turn on HDR in your camera app.


// Create a Recorder with Quality.HIGHEST, which will select the highest
// resolution compatible with the chosen DynamicRange.
val recorder = Recorder.Builder()
    .setQualitySelector(QualitySelector.from(Quality.HIGHEST))
    .build()

val videoCaptureBuilder = VideoCapture.Builder(recorder)
if (supportedHdrEncoding != null) {
    videoCaptureBuilder.setDynamicRange(supportedHdrEncoding!!)
}
val videoCapture = videoCaptureBuilder.build()

Effects

While CameraX makes many camera tasks easy, it also provides hooks to accomplish advanced or custom functionality. The new effects methods enable custom graphics library transformations to be applied to frames for Preview, ImageCapture, and VideoCapture.

You can define a CameraEffect to inject code into the CameraX pipeline and apply visual effects, such as a custom portrait effect. When creating your own CameraEffect via the constructor, you must specify which use cases to target (from PREVIEW, VIDEO_CAPTURE, and IMAGE_CAPTURE). You must also specify a SurfaceProcessor to implement a GPU effect for the underlying Surface. It's recommended to use a graphics API such as OpenGL or Vulkan to access the Surface. This process will block the Executor associated with the ImageCapture. An internal I/O thread is used by default, or you can set one with ImageCapture.Builder.setIoExecutor(). Note: it's the implementation's responsibility to be performant. For 30-fps input, each frame should be processed in under 30 ms to avoid frame drops.
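
To make the API shape concrete, here's a minimal sketch of a CameraEffect subclass targeting Preview and VideoCapture. The class name and the glSurfaceProcessor (an OpenGL-based SurfaceProcessor implementation) are hypothetical, not part of the library:

// Sketch: a CameraEffect applied to both Preview and VideoCapture.
// `glExecutor` and `glSurfaceProcessor` are assumed to be supplied by
// your own OpenGL rendering code.
class PortraitEffect(
    glExecutor: Executor,
    glSurfaceProcessor: SurfaceProcessor
) : CameraEffect(
    CameraEffect.PREVIEW or CameraEffect.VIDEO_CAPTURE, // targeted use cases
    glExecutor,          // executor the SurfaceProcessor runs on
    glSurfaceProcessor,  // applies the GPU effect to each frame
    { t -> Log.e("PortraitEffect", "Effect pipeline error", t) } // error listener
)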

There is an alternative CameraEffect constructor for processing still images, since higher latency is more acceptable when processing a single image. For this constructor, you pass in an ImageProcessor and implement its process method, which receives the input image via ImageProcessor.Request.getInputImage() and returns the processed result.

Once you’ve defined one or more CameraEffects, you can add them to your CameraX setup. If you’re using a CameraProvider, call UseCaseGroup.Builder.addEffect() for each CameraEffect, build the UseCaseGroup, and pass it in to bindToLifecycle(), as shown in the sketch below. If you’re using a CameraController, pass all of your CameraEffects into setEffects().
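
A minimal sketch of the CameraProvider path (portraitEffect, preview, and videoCapture are assumed to be created elsewhere):

// Attach the effect to the use cases it targets, then bind as usual.
val useCaseGroup = UseCaseGroup.Builder()
    .addUseCase(preview)
    .addUseCase(videoCapture)
    .addEffect(portraitEffect)
    .build()

cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, useCaseGroup)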

Additional video features

CameraX 1.3 has many additional highly-requested video features that we’re excited to add support for.

With VideoCapture.Builder.setMirrorMode(), you can control when video recordings are reflected horizontally. You can set MIRROR_MODE_OFF (the default), MIRROR_MODE_ON, and MIRROR_MODE_ON_FRONT_ONLY (useful for matching the mirror state of the Preview, which is mirrored on front-facing cameras). Note: in an app that only uses the front-facing camera, MIRROR_MODE_ON and MIRROR_MODE_ON_FRONT_ONLY are equivalent.
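
For example, a short sketch (assuming a recorder built as in the HDR section above):

// Mirror front-camera recordings to match what the user sees in the Preview.
val videoCapture = VideoCapture.Builder(recorder)
    .setMirrorMode(MirrorMode.MIRROR_MODE_ON_FRONT_ONLY)
    .build()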

The PendingRecording.asPersistentRecording() method prevents a video from being stopped by lifecycle events or by the explicit unbinding of the VideoCapture use case that the recording's Recorder is attached to. This is useful if you want to bind to a different camera and continue the video recording with that camera. When this option is enabled, you must explicitly call Recording.stop() or Recording.close() to end the recording.
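
A minimal sketch, assuming outputOptions is a MediaStoreOutputOptions you've already built:

// Start a recording that survives rebinding the VideoCapture use case.
val recording = videoCapture.output
    .prepareRecording(context, outputOptions)
    .asPersistentRecording()
    .start(ContextCompat.getMainExecutor(context)) { event ->
        // Handle VideoRecordEvents here.
    }

// Later, after rebinding to another camera, end it explicitly:
recording.stop()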

For videos that are set to record audio via PendingRecording.withAudioEnabled(), you can now call Recording.mute() while the recording is in progress. Pass in a boolean to specify whether to mute or unmute the audio, and CameraX will insert silence during the muted portions to ensure the audio stays aligned with the video.
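
A short sketch, assuming recording is an in-progress Recording started with audio enabled:

recording.mute(true)  // Insert silence from this point on.
// ...
recording.mute(false) // Resume capturing microphone audio.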

AudioStats now has a getAudioAmplitude() method, which is perfect for showing a visual indicator to users that audio is being recorded. While a video recording is in progress, each VideoRecordEvent can be used to access RecordingStats, which in turn contains the AudioStats object.
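
A sketch of wiring this into a recording's event listener (updateMicLevelIndicator is a hypothetical UI helper):

val recording = videoCapture.output
    .prepareRecording(context, outputOptions)
    .withAudioEnabled()
    .start(ContextCompat.getMainExecutor(context)) { event ->
        // Every VideoRecordEvent carries RecordingStats, including AudioStats.
        val amplitude = event.recordingStats.audioStats.audioAmplitude
        updateMicLevelIndicator(amplitude)
    }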

Next steps

Check the full release notes for CameraX 1.3 for more details on the features described here and more! If you’re ready to try out CameraX 1.3, update your project’s CameraX dependency to 1.3.0-beta01 (or the latest version at the time you’re reading this).

If you would like to provide feedback on any of these features or CameraX in general, please create a CameraX issue. As always, you can also reach out on our CameraX Discussion Group.

Introducing Camera Viewfinder

Posted by Francesco Romano, Developer Relations Engineer, Android

Over the years, Android devices have evolved to include a variety of sizes, shapes, and displays, among other features. Since the beginning, however, taking pictures with your phone has been one of the most important use cases. Today, camera capabilities are still one of the top reasons consumers purchase a phone.

As a developer, you want to leverage camera capabilities in your app, so you decide to adopt the Android Camera Framework. The first use case you want to implement is the Preview use case, which shows the output of the camera sensor on the screen.

So you go ahead and create a CaptureSession using a surface as big as the screen size. As long as the screen has the same aspect ratio as the camera sensor output and the device stays in its natural portrait orientation, everything should be fine.

But what happens when you resize the window, unfold your device, or change the display or orientation? Well, in most cases, the preview may appear stretched, upside down, or incorrectly rotated. And if you are in a multi-window environment, your app may even crash.

Why does this happen? Because of the implicit assumptions you made while creating the CaptureSession.

Historically, your app could live in the same window for its whole life cycle, but with the availability of new form factors such as foldable devices, and new display modes such as multi-window and multi-display, you can't assume this will be true anymore.

In particular, let's examine some of the most important considerations, and common pitfalls to avoid, when developing an app targeting various form factors:

  • Don't assume your app will live in a portrait-shaped window. Requesting a fixed orientation is still supported in Android 13, but now device manufacturers may have the option of overriding an app request for a preferred orientation.
  • Don't assume any fixed dimension or aspect ratio for your app. Even if you set resizableActivity = "false", your app could still be used in multi-window mode on large screens (>=600dp).
  • Don't assume a fixed relationship between the orientation of the screen and the camera. The Android Compatibility Definition Document specifies that a camera image sensor "MUST be oriented so that the long dimension of the camera aligns with the screen's long dimension." Starting with API level 32, camera clients that query the orientation on foldable devices can receive a value that dynamically changes depending on the device/fold state.
  • Don't assume the size of the inset can't change. The new taskbar is reported to applications as an inset, and when used with gesture navigation, the taskbar can be hidden and shown dynamically.
  • Don't assume your app has exclusive access to the camera. While your app is in a multi-window state, other apps can obtain access to shared resources like camera and microphone.

While CameraX already handles most of the cases above, implementing a preview that works in different scenarios with Camera2 APIs can be complex, as we describe in the Support resizable surfaces in your camera app Codelab.

Wouldn’t it be great to have a simple component that takes care of those details and lets you focus on your specific app logic?

Say no more…

Introducing CameraViewfinder

CameraViewfinder is a new artifact from the Jetpack library that allows you to quickly implement camera previews with minimal effort. It internally uses either a TextureView or SurfaceView to display the camera feed, and applies the required transformations on them to correctly display the viewfinder. This involves correcting their aspect ratio, scale, and rotation. It is fully compatible with your existing Camera2 codebase and continuously tested on several devices.

Let’s see how to use it.

First, add the dependency in your app-level build.gradle file:

implementation "androidx.camera:camera-viewfinder:1.3.0-alpha01"

Sync your project. Now you should be able to directly use the CameraViewfinder as any other View. For example, you can add it to your layout file:

<androidx.camera.viewfinder.CameraViewfinder
  android:id="@+id/view_finder"
  app:scaleType="fitCenter"
  app:implementationMode="performance"
  android:layout_width="match_parent"
  android:layout_height="match_parent"/>

As you can see, CameraViewfinder exposes the same controls available on PreviewView, so you can choose among the different implementation modes and scale types.

Now that the component is part of the layout, you can still create a CameraCaptureSession, but instead of providing a TextureView or SurfaceView as target surfaces, use the result of requestSurfaceAsync().

fun startCamera() {
    val previewResolution = Size(width, height)
    val viewfinderSurfaceRequest =
        ViewfinderSurfaceRequest(previewResolution, characteristics)
    val surfaceListenableFuture =
        cameraViewfinder.requestSurfaceAsync(viewfinderSurfaceRequest)

    Futures.addCallback(surfaceListenableFuture, object : FutureCallback<Surface> {
        override fun onSuccess(surface: Surface) {
            // Create a CaptureSession using this surface as usual.
        }

        override fun onFailure(t: Throwable) {
            // Something went wrong.
        }
    }, ContextCompat.getMainExecutor(context))
}


Bonus: optimized layouts for foldable devices

CameraViewfinder is ready to use across resizable surfaces, configuration changes, rotations, and multi-window modes, and it has been tested on many foldable devices.

But if you want to implement optimized layouts for foldable and dual-screen devices, you can combine CameraViewfinder with the Jetpack WindowManager library to provide unique experiences for your users.

For example, you can choose to avoid showing full screen preview if there is a hinge in the middle of the screen, or if the device is in “book” or “tabletop” mode. In those scenarios, you can have the viewfinder in one portion of the screen and the controls on the other side, or you can use part of the screen to show the last pictures taken. Imagination is the limit!
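
As a starting point, here's a minimal sketch using Jetpack WindowManager to detect a tabletop-like posture (the activity name and the UI reaction are hypothetical):

lifecycleScope.launch {
    WindowInfoTracker.getOrCreate(this@CameraActivity)
        .windowLayoutInfo(this@CameraActivity)
        .collect { layoutInfo ->
            // Look for a fold/hinge reported by the device.
            val fold = layoutInfo.displayFeatures
                .filterIsInstance<FoldingFeature>()
                .firstOrNull()
            val isTabletop = fold?.state == FoldingFeature.State.HALF_OPENED &&
                fold?.orientation == FoldingFeature.Orientation.HORIZONTAL
            // e.g., show the viewfinder above the fold and controls below.
        }
}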

The sample app is already optimized for foldable devices and you can find the code to handle posture changes here. Have a look!

CameraX 1.2 is now in Beta

Posted by Donovan McMurray, CameraX Developer Relations Engineer

As part of Android Jetpack, the CameraX library makes complex camera functionality available in an easy-to-use API, helping you create a best-in-class experience that works consistently across Android versions and devices. As of today, CameraX version 1.2 is officially in Beta. Update from version 1.1 to take advantage of the latest game-changing features: our new ML Kit integration, which can reduce your boilerplate code when using ML Kit in a CameraX app, and Zero-Shutter Lag, which enables faster action shots than were previously possible.

These two advanced features are simple to implement with CameraX 1.2, so let’s take a look at each of them in depth.

ML Kit Integration

Google’s ML Kit provides several on-device vision APIs for detecting faces, barcodes, text, objects, and more. We’re making it easier to integrate these APIs with CameraX. Version 1.2 introduces MlKitAnalyzer, an implementation of ImageAnalysis.Analyzer that handles much of the ML Kit setup for you.


You can use MlKitAnalyzer with both the cameraController and cameraProvider workflows. If you use the cameraController.setImageAnalysisAnalyzer() method, then CameraX can also handle the coordinate transformation between the ML Kit output and your PreviewView.

Here’s a code snippet using setImageAnalysisAnalyzer() to set a BarcodeScanner on a cameraController to detect QR codes. CameraX automatically handles the coordinate transformations when you pass COORDINATE_SYSTEM_VIEW_REFERENCED into the MlKitAnalyzer. (Use COORDINATE_SYSTEM_ORIGINAL to prevent CameraX from applying any coordinate transformations.)

val options = BarcodeScannerOptions.Builder()
    .setBarcodeFormats(Barcode.FORMAT_QR_CODE)
    .build()
val barcodeScanner = BarcodeScanning.getClient(options)

cameraController.setImageAnalysisAnalyzer(
    executor,
    MlKitAnalyzer(
        listOf(barcodeScanner),
        COORDINATE_SYSTEM_VIEW_REFERENCED,
        executor
    ) { result ->
        // The value of result.getValue(barcodeScanner)
        // can be used directly for drawing a UI overlay.
    }
)



Zero-Shutter Lag

Have you ever lined up the perfect photo, but when you click the shutter button the lag causes you to miss the best moment? CameraX 1.2 offers a solution to this problem by introducing Zero-Shutter Lag.

Prior to CameraX 1.2, you could optimize for quality (CAPTURE_MODE_MAXIMIZE_QUALITY) or efficiency (CAPTURE_MODE_MINIMIZE_LATENCY) when calling ImageCapture.Builder.setCaptureMode(). CameraX 1.2 adds a new value (CAPTURE_MODE_ZERO_SHUTTER_LAG) that reduces latency even further than CAPTURE_MODE_MINIMIZE_LATENCY. Note: on devices that cannot support Zero-Shutter Lag, CameraX will fall back to CAPTURE_MODE_MINIMIZE_LATENCY.
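
Opting in is a one-line change when building your ImageCapture use case; a minimal sketch:

// Request Zero-Shutter Lag; CameraX falls back to minimize-latency
// on devices that can't support it.
val imageCapture = ImageCapture.Builder()
    .setCaptureMode(ImageCapture.CAPTURE_MODE_ZERO_SHUTTER_LAG)
    .build()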

We accomplish this by using a circular buffer of photos. On image capture, we go back in time in the circular buffer to get the frame closest to the actual press of the shutter button. No DeLorean needed. Great Scott!

Here’s an example of how this works in a CameraX app with Preview and ImageCapture use cases:


  1. Just like any other app with a Preview use case, CameraX sends images from the camera to the UI for the user to see.
  2. With Zero-Shutter Lag, CameraX also sends images to a circular buffer which holds multiple recent images.
  3. When the user presses the shutter button, there is inevitably some lag in sending the current camera image to your app. For this reason, Zero-Shutter Lag goes to the circular buffer to fetch an image.
  4. CameraX finds the photo in the circular buffer closest to the actual time when the user pressed the shutter button, and returns that photo to your app.
There are a few limitations to keep in mind with Zero-Shutter Lag. First, please be mindful that this is still an experimental feature. Second, since keeping a circular buffer of images is computationally intensive, you cannot use CAPTURE_MODE_ZERO_SHUTTER_LAG while using VideoCapture or extensions. Third, the circular buffer will increase the memory footprint of your app.


Next steps


Check our full release notes for CameraX 1.2 for more details on the features described here and more! If you’re ready to try out CameraX 1.2, update your project’s CameraX dependency to 1.2.0-beta01 (or the latest version at the time you’re reading this).

If you would like to provide feedback on any of these features or CameraX in general, please create a CameraX issue. As always, you can also reach out on our CameraX Discussion Group.

The exciting aspects of Android Camera

Posted by Marwa Mabrouk, Android Camera Platform Product Manager


Android Camera is an exciting space. Camera is one of the top reasons consumers purchase a phone. Android Camera empowers developers today through different tools. Camera 2 is the framework API that has been included in Android since Android 5.0 Lollipop, and CameraX is a Jetpack support library that runs on top of Camera 2 and is available to all Android developers. These solutions are meant to complement each other in addressing the needs of the Android Camera ecosystem.

For developers who are starting with Android Camera, refreshing their app to the latest, or migrating their app from Camera 1, CameraX is the best tool to get started! CameraX offers key benefits that empower developers, and address the complexities of the ecosystem.

  1. Development speed was the main driver behind CameraX's design. The SDK doesn't just let developers get up and running much faster; it also builds in development best practices and photography know-how to get the most out of the camera.
  2. Android-enabled devices come in large numbers with many variations. CameraX aims to be consistent across many Android devices and has taken that complexity upon itself, offering developers an SDK that works consistently across 150+ phone models, with backward compatibility to Android 5.0 (API level 21). CameraX is tested daily by Google on each of those devices in our labs to ensure that complexity is not surfaced to developers, while keeping quality high.
  3. Fast library releases are a flexibility that CameraX gains as a Jetpack support library. CameraX releases can happen on a shorter regular cadence, or ad hoc, to address feedback and provide new capabilities. We plan to expand on this more in another blog post.

Camera 2 should be used by developers who are building highly specialized camera functionality that needs low-level control of the capture flow and must take device variations into consideration.

Camera 2 is the common API that enables the camera hardware on every Android device and is deployed on all the billions of Android devices around the world in the market today. As a framework API, Camera 2 enables developers to utilize their deep knowledge of photography and device implementations. To ensure the quality of Camera 2, device manufacturers show compliance by testing their devices. Device variations do surface in the API based on the device manufacturer's choices, allowing custom features to take advantage of those variations on specific devices as they see fit.

To understand this more, let’s use an example. We’ll compare camera capture capabilities. Camera 2 offers special control of the individual capture pipeline for each of the cameras on the phone at the same time, in addition to very fine grained manual settings. CameraX enables capturing high-resolution, high-quality photos and provides auto-white-balance, auto-exposure, and auto-focus functionality, in addition to simple manual camera controls.

Considering application examples: Samsung uses the Camera Framework API to help its advanced pro-grade camera system capture studio-quality photos in various lighting conditions and settings on Samsung Galaxy devices. While the API is common, Samsung has enabled variations that are unique to each device's capabilities and takes advantage of them in the camera app on each device. The Camera Framework API enables Samsung to reach into the low-level camera capabilities and tailor the native app for each device.

In another example, Microsoft decided to integrate CameraX across all productivity apps where Microsoft Lens is used (e.g., Office, Outlook, OneDrive) to ensure high-quality images are used in all these applications. By switching to CameraX, the Microsoft Lens team was able not only to improve its developer experience thanks to the simpler API, but also to improve performance, increase developer productivity, and reduce time to market. You can learn more about this here.


This is a very exciting time for Android Camera, with many new features on both APIs:

  • CameraX has launched several features recently, the most significant being Video Capture, which became available to developers in beta on January 26th.
  • With Android 12 launch, Camera 2 has a number of features now available.

As we move forward, we plan to share with you more details about the exciting features that we have planned for Android Camera. We look forward to engaging with you and hearing your feedback, through the CameraX mailing list: [email protected] and the AOSP issue tracker.

Thank you for your continued interest in Android Camera, and we look forward to building amazing camera experiences for users in collaboration with you!


What’s New with Android Jetpack

Posted by Karen Ng, Group Product Manager and Jisha Abubaker, Product Manager, Android

Last year, we launched Android Jetpack, a collection of software components designed to accelerate Android development and make writing high-quality apps easier. Jetpack was built with you in mind -- to take the hardest, most common developer problems on Android and make your lives easier.

Jetpack has seen incredible adoption and momentum. Today, 80% of the top 1,000 apps in the Play store are using Jetpack. We’ve also heard feedback from so many of you across our early access developer programs and user studies, as well as Reddit, Stack Overflow, and Slack, that has helped shape these APIs. Very humbly, thank you.

What’s New in Jetpack

Today, we are excited to share with you 11 Jetpack libraries that can be used in development now and an early-development, open-source project called Jetpack Compose to simplify UI development.

Now in Alpha

CameraX

We've heard from many of you that developing camera apps or integrating camera functionality within your existing apps is hard. With the new CameraX library, we want to enable you to create great camera-driven experiences in your application without worrying about the underlying device behavior. The API is backward compatible to Android 5.0 (API level 21) or higher, ensuring that the same code works on most devices in the market. While it leverages the capabilities of camera2, it uses a simpler, lifecycle-aware, use case-based approach that eliminates a significant amount of boilerplate code compared to camera2. Finally, it enables you to access the same functionality as the native camera app on supported devices: these optional Extensions enable features like Portrait, Night, HDR, and Beauty.
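
As a rough sketch of the use case-based approach in this early alpha (the exact API surface changed across pre-release versions, so treat this as illustrative):

// Bind a lifecycle-aware Preview use case and render it to a TextureView.
val previewConfig = PreviewConfig.Builder().build()
val preview = Preview(previewConfig)
preview.setOnPreviewOutputUpdateListener { previewOutput ->
    textureView.surfaceTexture = previewOutput.surfaceTexture
}
CameraX.bindToLifecycle(this /* LifecycleOwner */, preview)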

LiveData and Lifecycles w/ coroutines

We heard you loud and clear and agree that LiveData must support your common one-shot asynchronous operations. With Lifecycle & LiveData KTX, you can do so with Kotlin coroutines that are lifecycle-aware. Kotlin coroutines have been well received by the developer community for how they simplify the way concurrency is handled within Android apps. We want to simplify it even further and enable you to use them safely by offering coroutine scopes tied to lifecycles, coroutine dispatchers that are lifecycle-aware, and support for simple asynchronous chains with the new liveData builder.
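
For example, a minimal sketch of the liveData builder (repository.loadUser() stands in for any suspending one-shot call):

val user: LiveData<User> = liveData {
    // Runs in a lifecycle-aware coroutine; emits when the data is ready.
    val data = repository.loadUser()
    emit(data)
}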

Benchmark

The Benchmark library provides you a quick way to benchmark your app code, whether it is written in Kotlin, the Java programming language or native code. We use this library to continuously benchmark Jetpack libraries we release to ensure we do not introduce any latency into your code. You can now do the same right within your development environment in Android Studio, easily measuring database queries, view inflation, or a RecyclerView scroll. The library takes care of what is needed to provide reliable and consistent results like handling warm-up periods, removing outliers, and locking CPU clocks.
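
A minimal sketch of a benchmark test (scrollListByOneItem is a hypothetical helper for the code under test):

@get:Rule
val benchmarkRule = BenchmarkRule()

@Test
fun scrollItem() = benchmarkRule.measureRepeated {
    // Only the code inside this block is measured; warm-up, outlier
    // removal, and clock locking are handled by the library.
    scrollListByOneItem()
}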

Security

To maximize security of an application’s data at-rest, the new Security library implements security best practices for you. It provides strong security that balances encryption with performance for consumer apps like banking and chat. It also provides a maximum level of security for apps that require a hardware-backed keystore with user presence and simplifies many operations including key generation and validation.
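
For instance, a minimal sketch of creating encrypted SharedPreferences with this library:

// A master key kept in the Android Keystore encrypts the preference file.
val masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)

val prefs = EncryptedSharedPreferences.create(
    "secret_shared_prefs",  // file name
    masterKeyAlias,
    context,
    EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
    EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
)
prefs.edit().putString("token", "secret").apply()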

ViewModel with SavedState

ViewModel provides an easy way to save your UI data in the event of a configuration change, but it does not save your app state in the event of process death, so many of you have been relying on SavedInstanceState alongside ViewModel. With the ViewModel with SavedState module, you can eliminate boilerplate code and gain the benefits of both ViewModel and SavedState with simple APIs to save and retrieve data right from your ViewModel.
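
A minimal sketch (the SearchViewModel class and its "query" key are illustrative):

class SearchViewModel(private val state: SavedStateHandle) : ViewModel() {
    // Survives both configuration changes and process death.
    var query: String?
        get() = state.get<String>("query")
        set(value) { state.set("query", value) }
}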

ViewPager2

ViewPager2, the next generation of ViewPager, is now based on RecyclerView and supports vertical scrolling and RTL (Right-to-Left) layouts. It also provides a much easier way to listen for page data changes with registerOnPageChangeCallback.
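
For example, a short sketch of the new callback:

viewPager2.registerOnPageChangeCallback(
    object : ViewPager2.OnPageChangeCallback() {
        override fun onPageSelected(position: Int) {
            // React to the newly selected page.
        }
    }
)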

Now in Beta

ConstraintLayout 2.0

ConstraintLayout 2.0 brings new optimizations and new ways of customizing layouts with the addition of helper classes. As part of ConstraintLayout 2.0, MotionLayout provides an easy way to manage motion and widget animation in your applications. You can easily describe transitions between layouts and animation of properties. MotionLayout is fully declarative in XML, allowing you to describe even complex transitions without requiring any code.

Biometrics Prompt

Users are accustomed to biometric credentials on their phones, but if your app requires a biometric login, it is important to make sure that users are provided a consistent and safe way to enter their credentials. The Biometrics library provides a simple system prompt giving the user a trustworthy experience.

Enterprise

With the Jetpack Enterprise library, your managed enterprise apps can send feedback back to Enterprise Mobility Management providers in the form of keyed app states, while taking advantage of backwards compatibility with managed configurations.

Android for Cars

With the Android for Cars libraries, you can provide your users a driver-optimized version of your app that will be automatically installed onto the vehicle’s infotainment system in vehicles equipped with the Android Automotive OS. It also allows your apps to work with the Android Auto app, providing the driver-optimized version anytime on their device.

Now in Stable

And in case you missed it, we announced stable releases of Jetpack WorkManager (background processing) and Jetpack Navigation (in-app navigation) just a few months ago.

Jetpack Compose

Today, we open-sourced an early preview of Jetpack Compose, a new unbundled toolkit designed to simplify UI development by combining a reactive programming model with the conciseness and ease-of-use of Kotlin. We have always done our best work when we did it with you - our developer community. That’s why we decided to develop Jetpack Compose in the open, starting today.

In that vein, we took a step back and chatted with many of you. We heard strong feedback from developers that they like the modern, reactive APIs that Flutter, React Native, Litho, and Vue.js represent. We also heard that developers love Kotlin, with over 53% of professional Android developers using it and with 20% higher language satisfaction ratings than the Java programming language. Kotlin has become the fastest-growing language in terms of number of contributors on GitHub.

So, we decided to invest in the reactive approach to declarative programming and create an easier way to build UIs with Kotlin.

We are building Compose with a few core principles:

  • Build with the benefits that Kotlin brings -- concise, safe, and fully interoperable with the Java programming language. Designed to drastically reduce the amount of boilerplate code you have to write, so you can focus on your app code, and help avoid entire classes of errors.
  • Fully declarative for defining UI components, including drawing and creating custom layouts. Simply describe your UI as a set of composable functions, and the framework handles UI optimizations and updates to the view hierarchy under the hood.
  • Provide reusable building blocks that let you build custom widgets easier, and without starting from scratch.
  • Compatible with existing views so you can mix and match and adopt at your own pace with direct access to all of the Android and Jetpack APIs.
  • Material Design out of the box and animations from the start, so it’s easy to create beautiful apps that are full of motion.
  • Accelerate development with tools like live preview and apply changes.

A Compose application is made up of composable functions that transform application data into a UI hierarchy. A function is all you need to create a new UI component. To create a composable function, just add the @Composable annotation to the function. Under the hood, Compose uses a custom Kotlin compiler plug-in, so when the underlying data changes, the composable functions can be re-invoked to generate an updated UI hierarchy. The simple example below prints a string to the screen.
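
The example itself didn't survive formatting here; a minimal sketch of what it looked like (the Text composable comes from the early preview's UI package, whose names shifted during pre-release development):

// A composable function that emits a greeting string to the screen.
@Composable
fun Greeting(name: String) {
    Text("Hello $name!")
}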

We know that adopting any new framework is a big change for existing projects and codebases, which is why we’ve designed Compose like all of Jetpack -- with individual components that you can adopt at your own pace and are compatible with existing views.

If you want to learn more about Jetpack Compose or download its source to try it for yourself, check out http://d.android.com/jetpackcompose

We'd love to hear from you as we iterate on this exciting future together. Send us feedback by posting comments below, and please file any bugs you run into on AOSP or directly through the feedback buttons in the Android Studio Jetpack Compose build in AOSP. Since this is an early preview, we do not recommend trying this on any production projects.

Happy Jetpacking!