
Top 3 updates for building excellent, adaptive apps at Google I/O ‘25

Posted by Mozart Louis – Developer Relations Engineer
Today, Android is launching a few updates across the platform! This includes the start of Android 16's rollout, with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, and updates for Android users across Google apps and more, plus the June Pixel Drop. We're also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.

Google I/O 2025 brought exciting advancements to Android, equipping you with essential knowledge and powerful tools you need to build outstanding, user-friendly applications that stand out.

If you missed any of the key #GoogleIO25 updates, just saw the release of Android 16, or are ready to dive into building excellent adaptive apps, our playlist is for you. Learn how to craft engaging experiences with Live Updates in Android 16, capture video effortlessly with CameraX, process it efficiently using Media3's editing tools, and engage users across diverse platforms like XR, Android for Cars, Android TV, and Desktop.

Check out the Google I/O playlist for all the session details.

Here are three key announcements directly influencing how you can craft deeply engaging experiences and truly connect with your users:

#1: Build adaptively to unlock 500 million devices

In today's diverse device ecosystem, users expect their favorite applications to function seamlessly across various form factors, including phones, tablets, Chromebooks, automobiles, and emerging XR glasses and headsets. Our recommended approach for developing applications that excel on each of these surfaces is to create a single, adaptive application. This strategy avoids the need to rebuild the application for every screen size, shape, or input method, ensuring a consistent and high-quality user experience across all devices.

The talk emphasizes that you don't need to rebuild apps for each form factor. Instead, small, iterative changes can unlock an app's potential.

Here are some resources we encourage you to use in your apps:

New feature support in Jetpack Compose Adaptive Libraries

    • We’re continuing to make it as easy as possible to build adaptively with the Jetpack Compose Adaptive Libraries, with new features in 1.1 like pane expansion and predictive back. By utilizing canonical layout patterns such as List Detail or Supporting Pane layouts and integrating your app code, your application will automatically adjust and reflow when resized.

Navigation 3

    • The alpha release of the Navigation 3 library now supports displaying multiple panes. This eliminates the need to alter your navigation destination setup for separate list and detail views. Instead, you can adjust the setup to concurrently render multiple destinations when sufficient screen space is available.

Updates to Window Manager Library

    • AndroidX.window 1.5 introduces two new window size classes for expanded widths, facilitating better layout adaptation for large tablets and desktops. A width of 1600dp or more is now categorized as "extra large," while widths between 1200dp and 1600dp are classified as "large." These subdivisions offer more granularity for developers to optimize their applications for a wider range of window sizes.
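To make the new breakpoints concrete, here's an illustrative sketch of bucketing a window by its width in dp. The 1200dp and 1600dp thresholds come from this release; windowWidthDp stands in for however your app measures the current window, and is not a specific library API:

val layoutBucket = when {
    windowWidthDp >= 1600 -> "extra-large" // new in androidx.window 1.5
    windowWidthDp >= 1200 -> "large"       // new in androidx.window 1.5
    windowWidthDp >= 840 -> "expanded"
    windowWidthDp >= 600 -> "medium"
    else -> "compact"
}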

Support all orientations and be resizable

Extend to Android XR

Upgrade your Wear OS apps to Material 3 Design

You should build a single, adaptive mobile app that brings the best experiences to all Android surfaces. By building adaptive apps, you meet users where they are today and in the future, enhancing user engagement and app discoverability. This approach represents a strategic business decision that optimizes an app’s long-term success.

#2: Enhance your app’s performance optimization

Get ready to take your app's performance to the next level! Google I/O 2025 brought an inside look at cutting-edge tools and techniques to boost user satisfaction, enhance technical performance metrics, and drive those all-important key performance indicators. Imagine an end-to-end workflow that streamlines performance optimization.

Redesigned UiAutomator API

    • To make benchmarking reliable and reproducible, there's the brand new UiAutomator API. Write robust test code and run it on your local devices or in Firebase Test Lab, ensuring consistent results every time.

Macrobenchmarks

    • Once your tests are in place, it's time to measure and understand. Macrobenchmarks give you the hard data, while App Startup Insights provide actionable recommendations for improvement. Plus, you can get a quick snapshot of your app's health with the App Performance Score via DAC. These tools combined give you a comprehensive view of your app's performance and where to focus your efforts.
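If you're new to Macrobenchmark, a minimal cold-startup benchmark looks roughly like this sketch (the package name is hypothetical; swap in your own app):

@get:Rule
val benchmarkRule = MacrobenchmarkRule()

@Test
fun coldStartup() = benchmarkRule.measureRepeated(
    packageName = "com.example.myapp", // hypothetical package name
    metrics = listOf(StartupTimingMetric()),
    iterations = 5,
    startupMode = StartupMode.COLD,
) {
    // Each iteration launches the app fresh from the launcher
    pressHome()
    startActivityAndWait()
}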

R8: More than code shrinking and obfuscation

    • You might know R8 as a code shrinking tool, but it's capable of so much more! The talk dives into R8's capabilities using the "Androidify" sample app. You'll see how to apply R8, troubleshoot any issues (like crashes!), and configure it for optimal performance. It also shows how library developers can include consumer keep rules so that their important code is not touched when used in an application.

#3: Build Richer Image and Video Experiences

In today's digital landscape, users increasingly expect seamless content creation capabilities within their apps. To meet this demand, developers require robust tools for building excellent camera and media experiences.

Media3Effects in CameraX Preview

    • At Google I/O, we delved into practical strategies for capturing high-quality video using CameraX while simultaneously applying Media3 effects to the preview.

Google Low-Light Boost

    • Google Low Light Boost in Google Play services enables real-time dynamic camera brightness adjustment in low light, even without device support for Low Light Boost AE Mode.

New Camera & Media Samples!

Learn more about how CameraX & Media3 can accelerate your development of camera and media related features.

Learn how to build adaptive apps

Want to learn more about building excellent, adaptive apps? Watch this playlist to learn more about all the session details.

Androidify: Building powerful AI-driven experiences with Jetpack Compose, Gemini and CameraX

Posted by Rebecca Franks – Developer Relations Engineer

The Android bot is a beloved mascot for Android users and developers, and previous versions of the bot builder have been very popular, so this year we decided to rebuild the bot maker from the ground up using the latest technology, backed by Gemini. Today we are releasing a new open source app, Androidify, for learning how to build powerful AI-driven experiences on Android using the latest technologies such as Jetpack Compose, Gemini through Firebase, CameraX, and Navigation 3.

a moving image of various droid bots dancing individually

Androidify app demo

Here’s an example of the app running on the device, showcasing converting a photo to an Android bot that represents my likeness:

moving image showing the conversion of an image of a woman in a pink dress holding an umbrella into a 3D image of a droid bot wearing a pink dress holding an umbrella

Under the hood

The app combines a variety of different Google technologies, such as:

    • Gemini API - through Firebase AI Logic SDK, for accessing the underlying Imagen and Gemini models.
    • Jetpack Compose - for building the UI with delightful animations and making the app adapt to different screen sizes.
    • Navigation 3 - the latest navigation library for building up Navigation graphs with Compose.
    • CameraX Compose and Media3 Compose - for building up a custom camera with custom UI controls (rear camera support, zoom support, tap-to-focus) and playing the promotional video.

This sample app is currently using a standard Imagen model, but we've been working on a fine-tuned model that's trained specifically on all of the pieces that make the Android bot cute and fun; we'll share that version later this year. In the meantime, don't be surprised if the sample app puts out some interesting looking examples!

How does the Androidify app work?

The app leverages our best practices for Architecture, Testing, and UI to showcase a real world, modern AI application on device.

Flow chart describing Androidify app flow
Androidify app flow chart detailing how the app works with AI

AI in Androidify with Gemini and ML Kit

The Androidify app uses the Gemini models in a multitude of ways to enrich the app experience, all powered by the Firebase AI Logic SDK. The app uses Gemini 2.5 Flash and Imagen 3 under the hood:

    • Image validation: We ensure that the captured image contains sufficient information, such as a clearly focused person, and assess it for safety. This feature uses the multi-modal capabilities of the Gemini API, by giving it a prompt and image at the same time:

val response = generativeModel.generateContent(
   content {
       text(prompt)
       image(image)
   },
)

    • Text prompt validation: If the user opts for text input instead of image, we use Gemini 2.5 Flash to ensure the text contains a sufficiently descriptive prompt to generate a bot.

    • Image captioning: Once we’re sure the image has enough information, we use Gemini 2.5 Flash to perform image captioning. We ask Gemini to be as descriptive as possible, focusing on the clothing and its colors.

    • “Help me write” feature: Similar to an “I’m feeling lucky” type feature, “Help me write” uses Gemini 2.5 Flash to create a random description of the clothing and hairstyle of a bot.

    • Image generation from the generated prompt: As the final step, Imagen generates the image from the final prompt and the selected skin tone of the bot.

The app also uses ML Kit pose detection to detect a person in the viewfinder, enabling the capture button when a person is detected and adding fun indicators around the content to signal detection.
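As a rough sketch of that idea (not the app's exact code), ML Kit's pose detector can gate the capture button on whether any pose landmarks are found in the current frame:

val poseDetector = PoseDetection.getClient(
    PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
        .build(),
)

fun analyzeFrame(image: InputImage, onPersonDetected: (Boolean) -> Unit) {
    poseDetector.process(image)
        .addOnSuccessListener { pose ->
            // Treat any detected landmarks as "a person is in the viewfinder"
            onPersonDetected(pose.allPoseLandmarks.isNotEmpty())
        }
        .addOnFailureListener { onPersonDetected(false) }
}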

Explore more detailed information about AI usage in Androidify.

Jetpack Compose

The user interface of Androidify is built using Jetpack Compose, the modern UI toolkit that simplifies and accelerates UI development on Android.

Delightful details with the UI

The app uses Material 3 Expressive, the latest alpha release that makes your apps more premium, desirable, and engaging. It provides delightful bits of UI out-of-the-box, like new shapes, componentry, and using the MotionScheme variables wherever a motion spec is needed.

MaterialShapes are used in various locations. These are a preset list of shapes that allow for easy morphing between each other—for example, the cute cookie shape for the camera capture button:


Androidify app UI showing camera button
Camera button with a MaterialShapes.Cookie9Sided shape

Beyond using the standard Material components, Androidify also features custom composables and delightful transitions tailored to the specific needs of the app:

    • There are plenty of shared element transitions across the app—for example, a morphing shape shared element transition is performed between the “take a photo” button and the camera surface.

      moving example of expressive button shapes in slow motion

    • Custom enter transitions for the ResultsScreen with the usage of marquee modifiers.

      animated marquee example

    • Fun color splash animation as a transition between screens.

      moving image of a blue color splash transition between Androidify demo screens

    • Animating gradient buttons for the AI-powered actions.

      animated gradient button for AI powered actions example

To learn more about the unique details of the UI, read Androidify: Building delightful UIs with Compose.

Adapting to different devices

Androidify is designed to look great and function seamlessly across candy bar phones, foldables, and tablets. The general goal of developing adaptive apps is to avoid reimplementing the same app multiple times on each form factor by extracting out reusable composables, and leveraging APIs like WindowSizeClass to determine what kind of layout to display.

a collage of different adaptive layouts for the Androidify app across small and large screens
Various adaptive layouts in the app

For Androidify, we only needed to leverage the width window size class. Combining this with different layout mechanisms, we were able to reuse or extend the composables to cater to the multitude of different device sizes and capabilities.

    • Responsive layouts: The CreationScreen demonstrates adaptive design. It uses helper functions like isAtLeastMedium() to detect window size categories and adjust its layout accordingly. On larger windows, the image/prompt area and color picker might sit side-by-side in a Row, while on smaller windows, the color picker is accessed via a ModalBottomSheet. This pattern, called “supporting pane”, reflects the supporting relationship between the main content and the color picker.

    • Foldable support: The app actively checks for foldable device features. The camera screen uses WindowInfoTracker to get FoldingFeature information to adapt to different features by optimizing the layout for tabletop posture.

    • Rear display: Support for devices with multiple displays is included via the RearCameraUseCase, allowing for the device camera preview to be shown on the external screen when the device is unfolded (so the main content is usually displayed on the internal screen).

Using window size classes, coupled with creating a custom @LargeScreensPreview annotation, helps achieve unique and useful UIs across the spectrum of device sizes and window sizes.
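The isAtLeastMedium() helper mentioned above could be sketched roughly like this, assuming the material3-adaptive currentWindowAdaptiveInfo() API (the actual Androidify implementation may differ):

@Composable
fun isAtLeastMedium(): Boolean {
    // Medium and larger width window size classes get the side-by-side layout
    val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
    return sizeClass.windowWidthSizeClass != WindowWidthSizeClass.COMPACT
}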

CameraX and Media3 Compose

To allow users to base their bots on photos, Androidify integrates CameraX, the Jetpack library that makes camera app development easier.

The app uses a custom CameraLayout composable that supports the layout of the typical composables that a camera preview screen would include— for example, zoom buttons, a capture button, and a flip camera button. This layout adapts to different device sizes and more advanced use cases, like the tabletop mode and rear-camera display. For the actual rendering of the camera preview, it uses the new CameraXViewfinder that is part of the camerax-compose artifact.

CameraLayout in Compose
CameraLayout composable that takes care of different device configurations, such as table top mode


The app also integrates with Media3 APIs to load an instructional video for showing how to get the best bot from a prompt or image. Using the new media3-ui-compose artifact, we can easily add a VideoPlayer into the app:

@Composable
private fun VideoPlayer(modifier: Modifier = Modifier) {
    val context = LocalContext.current
    var player by remember { mutableStateOf<Player?>(null) }
    LifecycleStartEffect(Unit) {
        player = ExoPlayer.Builder(context).build().apply {
            setMediaItem(MediaItem.fromUri(Constants.PROMO_VIDEO))
            repeatMode = Player.REPEAT_MODE_ONE
            prepare()
        }
        onStopOrDispose {
            player?.release()
            player = null
        }
    }
    Box(
        modifier
            .background(MaterialTheme.colorScheme.surfaceContainerLowest),
    ) {
        player?.let { currentPlayer ->
            PlayerSurface(currentPlayer, surfaceType = SURFACE_TYPE_TEXTURE_VIEW)
        }
    }
}

Using the new onLayoutRectChanged modifier, we also listen for whether the composable is completely visible or not, and play or pause the video based on this information:

var videoFullyOnScreen by remember { mutableStateOf(false) }     

LaunchedEffect(videoFullyOnScreen) {
     if (videoFullyOnScreen) currentPlayer.play() else currentPlayer.pause()
} 

// We add this onto the player composable to determine if the video composable is visible, and mutate the videoFullyOnScreen variable, which then toggles the player state.
Modifier.onVisibilityChanged(
                containerWidth = LocalView.current.width,
                containerHeight = LocalView.current.height,
) { fullyVisible -> videoFullyOnScreen = fullyVisible }

// A simple version of visibility changed detection
fun Modifier.onVisibilityChanged(
    containerWidth: Int,
    containerHeight: Int,
    onChanged: (visible: Boolean) -> Unit,
) = this then Modifier.onLayoutRectChanged(100, 0) { layoutBounds ->
    onChanged(
        layoutBounds.boundsInRoot.top > 0 &&
            layoutBounds.boundsInRoot.bottom < containerHeight &&
            layoutBounds.boundsInRoot.left > 0 &&
            layoutBounds.boundsInRoot.right < containerWidth,
    )
}

Additionally, using rememberPlayPauseButtonState, we add on a layer on top of the player to offer a play/pause button on the video itself:

val playPauseButtonState = rememberPlayPauseButtonState(currentPlayer)
            OutlinedIconButton(
                onClick = playPauseButtonState::onClick,
                enabled = playPauseButtonState.isEnabled,
            ) {
                val icon =
                    if (playPauseButtonState.showPlay) R.drawable.play else R.drawable.pause
                val contentDescription =
                    if (playPauseButtonState.showPlay) R.string.play else R.string.pause
                Icon(
                    painterResource(icon),
                    stringResource(contentDescription),
                )
            }

Check out the code for more details on how CameraX and Media3 were used in Androidify.

Navigation 3

Screen transitions are handled using the new Jetpack Navigation 3 library androidx.navigation3. The MainNavigation composable defines the different destinations (Home, Camera, Creation, About) and displays the content associated with each destination using NavDisplay. You get full control over your back stack, and navigating to and from destinations is as simple as adding and removing items from a list.

@Composable
fun MainNavigation() {
   val backStack = rememberMutableStateListOf<NavigationRoute>(Home)
   NavDisplay(
       backStack = backStack,
       onBack = { backStack.removeLastOrNull() },
       entryProvider = entryProvider {
           entry<Home> { entry ->
               HomeScreen(
                   onAboutClicked = {
                       backStack.add(About)
                   },
               )
           }
           entry<Camera> {
               CameraPreviewScreen(
                   onImageCaptured = { uri ->
                       backStack.add(Create(uri.toString()))
                   },
               )
           }
           // etc
       },
   )
}

Notably, Navigation 3 exposes a new composition local, LocalNavAnimatedContentScope, to easily integrate your shared element transitions without needing to keep track of the scope yourself. By default, Navigation 3 also integrates with predictive back, providing delightful back experiences when navigating between screens, as seen in this prior shared element transition:


Learn more about Jetpack Navigation 3, currently in alpha.

Learn more

By combining the declarative power of Jetpack Compose, the camera capabilities of CameraX, the intelligent features of Gemini, and thoughtful adaptive design, Androidify is a personalized avatar creation experience that feels right at home on any Android device. You can find the full code sample at github.com/android/androidify where you can see the app in action and be inspired to build your own AI-powered app experiences.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


What’s New in Jetpack Compose

Posted by Nick Butcher – Product Manager

At Google I/O 2025, we announced a host of features, performance, stability, libraries, and tools updates for Jetpack Compose, our recommended Android UI toolkit. With Compose you can build excellent apps that work across devices. Compose has matured a lot since it was first announced (at Google I/O 2019!), and we're now seeing 60% of the top 1,000 apps in the Play Store, such as MAX and Google Drive, use and love it.

New Features

Since I/O last year, the Compose Bill of Materials (BOM) version 2025.05.01 adds new features such as:

    • Autofill support that lets users automatically insert previously entered personal information into text fields.
    • Auto-sizing text to smoothly adapt text size to a parent container size.
    • Visibility tracking for when you need high-performance information on a composable's position in its root container, screen, or window.
    • Animate bounds modifier for beautiful automatic animations of a Composable's position and size within a LookaheadScope.
    • Accessibility checks in tests that let you build a more accessible app UI through automated a11y testing.

LookaheadScope {
    Box(
        Modifier
            .animateBounds(this@LookaheadScope)
            .width(if (inRow) 100.dp else 150.dp)
            .background(..)
            .border(..)
    )
}
moving image of animate bounds modifier in action

For more details on these features, read What’s new in the Jetpack Compose April ’25 release and check out the talks from Google I/O.

If you’re looking to try out new Compose functionality, the alpha BOM offers new features that we're working on including:

    • Pausable Composition (see below)
    • Updates to LazyLayout prefetch
    • Context Menus
    • New modifiers: onFirstVisible, onVisibilityChanged, contentType
    • New Lint checks for frequently changing values and elements that should be remembered in composition

Please try out the alpha features and provide feedback to help shape the future of Compose.

Material Expressive

At Google I/O, we unveiled Material Expressive, Material Design’s latest evolution that helps you make your products even more engaging and easier to use. It's a comprehensive addition of new components, styles, motion and customization options that help you to build beautiful rich UIs. The Material3 library in the latest alpha BOM contains many of the new expressive components for you to try out.

moving image of material expressive design example

Learn more to start building with Material Expressive.

Adaptive layouts library

Developing adaptive apps across form factors including phones, foldables, tablets, desktop, cars and Android XR is now easier with the latest enhancements to the Compose adaptive layouts library. The stable 1.1 release adds support for predictive back gestures for smoother transitions and pane expansion for more flexible two pane layouts on larger screens. Furthermore, the 1.2 (alpha) release adds more flexibility for how panes are displayed, adding strategies for reflowing and levitating.

moving image of compose adaptive layouts updates in the Google Play app
Compose Adaptive Layouts Updates in the Google Play app

Learn more about building adaptive android apps with Compose.

Performance

With each release of Jetpack Compose, we continue to prioritize performance improvements. The latest stable release includes significant rewrites and improvements to multiple sub-systems including semantics, focus and text optimizations. Best of all these are available to you simply by upgrading your Compose dependency; no code changes required.
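Picking up these improvements is just a dependency bump. As a minimal sketch in Gradle Kotlin DSL, using the BOM version mentioned above:

dependencies {
    // Upgrading the BOM picks up the performance work; no app code changes needed
    implementation(platform("androidx.compose:compose-bom:2025.05.01"))
    implementation("androidx.compose.material3:material3")
}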

bar chart of internal benchmarks for performance run on a Pixel 3a device from January to May 2023 measured by jank rate
Internal benchmark, run on a Pixel 3a

We continue to work on further performance improvements, notable changes in the latest alpha BOM include:

    • Pausable Composition allows compositions to be paused, and their work split up over several frames.
    • Background text prefetch enables text layout caches to be pre-warmed on a background thread, enabling faster text layout.
    • LazyLayout prefetch improvements enabling lazy layouts to be smarter about how much content to prefetch, taking advantage of pausable composition.

Together these improvements eliminate nearly all jank in an internal benchmark.

Stability

We've heard from you that upgrading your Compose dependency can be challenging, encountering bugs or behaviour changes that prevent you from staying on the latest version. We've invested significantly in improving the stability of Compose, working closely with the many Google app teams building with Compose to detect and prevent issues before they even make it to a release.

Google apps develop against and release with snapshot builds of Compose; as such, Compose is tested against the hundreds of thousands of Google app tests and any Compose issues are immediately actioned by our team. We have recently invested in increasing the cadence of updating these snapshots and now update them daily from Compose tip-of-tree, which means we’re receiving feedback faster, and are able to resolve issues long before they reach a public release of the library.

Jetpack Compose also relies on @Experimental annotations to mark APIs that are subject to change. We heard your feedback that some APIs have remained experimental for a long time, reducing your confidence in the stability of Compose. We have invested in stabilizing experimental APIs to provide you a more solid API surface, and reduced the number of experimental APIs by 32% in the last year.

We have also heard that it can be hard to debug Compose crashes when your own code does not appear in the stack trace. In the latest alpha BOM, we have added a new opt-in feature to provide more diagnostic information. Note that this does not currently work with minified builds and comes at a performance cost, so we recommend only using this feature in debug builds.

class App : Application() {
    override fun onCreate() {
        super.onCreate()
        // Enable only for debug flavor to avoid perf impact in release
        Composer.setDiagnosticStackTraceEnabled(BuildConfig.DEBUG)
    }
}

Libraries

We know that to build great apps, you need Compose integration in the libraries that interact with your app's UI.

A core library that powers any Compose app is Navigation. You told us that you often encountered limitations when managing state hoisting and directly manipulating the back stack with the current Compose Navigation solution. We went back to the drawing board and completely reimagined how a navigation library should integrate with the Compose mental model. We're excited to introduce Navigation 3, a new artifact designed to empower you with greater control and simplify complex navigation flows.

We're also investing in Compose support for CameraX and Media3, making it easier to integrate camera capture and video playback into your UI with Compose idiomatic components.

@Composable
private fun VideoPlayer(
    player: Player?, // from media3
    modifier: Modifier = Modifier
) {
    Box(modifier) {
        PlayerSurface(player) // from media3-ui-compose
        player?.let {
            // custom play-pause button UI
            val playPauseButtonState = rememberPlayPauseButtonState(it) // from media3-ui-compose
            MyPlayPauseButton(playPauseButtonState, Modifier.align(BottomEnd).padding(16.dp))
        }
    }
}

To learn more, see the media3 Compose documentation and the CameraX samples.

Tools

We continue to improve the Android Studio tools for creating Compose UIs. The latest Narwhal canary includes:

    • Resizable Previews instantly show you how your Compose UI adapts to different window sizes
    • Preview navigation improvements using clickable names and components
    • Studio Labs 🧪: Compose preview generation with Gemini to quickly generate a preview
    • Studio Labs 🧪: Transform UI with Gemini to change your UI with natural language, directly from the preview
    • Studio Labs 🧪: Image attachment in Gemini to generate Compose code from images

For more information read What's new in Android development tools.

moving image of resizable preview in Jetpack Compose
Resizable Preview

New Compose Lint checks

The Compose alpha BOM introduces two new annotations and associated lint checks to help you to write correct and performant Compose code. The @FrequentlyChangingValue annotation and FrequentlyChangedStateReadInComposition lint check warns in situations where function calls or property reads in composition might cause frequent recompositions. For example, frequent recompositions might happen when reading scroll position values or animating values. The @RememberInComposition annotation and RememberInCompositionDetector lint check warns in situations where constructors, functions, and property getters are called directly inside composition (e.g. the TextFieldState constructor) without being remembered.
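As a hypothetical sketch of what the first check catches (not code from a real app): reading a scroll position directly in composition recomposes on every scroll frame, while a lambda-based modifier overload defers the read to the layout phase.

val scrollState = rememberScrollState()

// Likely flagged by FrequentlyChangedStateReadInComposition: the value is
// read during composition and changes on every scroll frame.
Box(Modifier.offset(y = scrollState.value.dp)) { /* content */ }

// Preferred: the lambda overload defers the read to the layout phase.
Box(Modifier.offset { IntOffset(0, scrollState.value) }) { /* content */ }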

Happy Composing

We continue to invest in providing the features, performance, stability, libraries and tools that you need to build excellent apps. We value your input so please share feedback on our latest updates or what you'd like to see next.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


Androidify: Building delightful UIs with Compose

Posted by Rebecca Franks - Developer Relations Engineer

Androidify is a new sample app we built using the latest best practices for mobile apps. Previously, we covered all the different features of the app, from Gemini integration and CameraX functionality to adaptive layouts. In this post, we dive into the Jetpack Compose usage throughout the app, building upon our base knowledge of Compose to add delightful and expressive touches along the way!

Material 3 Expressive

Material 3 Expressive is an expansion of the Material 3 design system. It’s a set of new features, updated components, and design tactics for creating emotionally impactful UX.


It’s been released as part of the alpha version of the Material 3 artifact (androidx.compose.material3:material3:1.4.0-alpha10) and contains a wide range of new components you can use within your apps to build more personalized and delightful experiences. Learn more about Material 3 Expressive's component and theme updates for more engaging and user-friendly products.

Material Expressive Component updates

In addition to the new component updates, Material 3 Expressive introduces a new motion physics system that's encompassed in the Material theme.

In Androidify, we’ve utilized Material 3 Expressive in a few different ways across the app. For example, we’ve explicitly opted in to the new MaterialExpressiveTheme and chosen MotionScheme.expressive() (this is the default when using expressive) to add a bit of playfulness to the app:

@Composable
fun AndroidifyTheme(
   content: @Composable () -> Unit,
) {
   val colorScheme = LightColorScheme


   MaterialExpressiveTheme(
       colorScheme = colorScheme,
       typography = Typography,
       shapes = shapes,
       motionScheme = MotionScheme.expressive(),
       content = {
           SharedTransitionLayout {
               CompositionLocalProvider(LocalSharedTransitionScope provides this) {
                   content()
               }
           }
       },
   )
}

Some of the new componentry is used throughout the app, including the HorizontalFloatingToolbar for the Prompt type selection:

moving example of expressive button shapes in slow motion

The app also uses MaterialShapes in various locations, which are a preset list of shapes that allow for easy morphing between each other. For example, check out the cute cookie shape for the camera capture button:

Camera button with a MaterialShapes.Cookie9Sided shape

Animations

Wherever possible, the app leverages the Material 3 Expressive MotionScheme to obtain a themed motion token, creating a consistent motion feeling throughout the app. For example, the scale animation on the camera button press is powered by defaultSpatialSpec(), a specification used for animations that move something across a screen (such as x/y position, rotation, or scale animations):

val interactionSource = remember { MutableInteractionSource() }
val animationSpec = MaterialTheme.motionScheme.defaultSpatialSpec<Float>()
Spacer(
   modifier
       .indication(interactionSource, ScaleIndicationNodeFactory(animationSpec))
       .clip(MaterialShapes.Cookie9Sided.toShape())
       .size(size)
       .drawWithCache {
           //.. etc
       },
)

Camera button scale interaction

Shared element animations

The app uses shared element transitions between different screen states. Last year, we showcased how you can create shared elements in Jetpack Compose, and we’ve extended this in the Androidify sample to create a fun example. It combines the new Material 3 Expressive MaterialShapes, and performs a transition with a morphing shape animation:

moving example of expressive button shapes in slow motion

To do this, we created a custom Modifier that takes in the target and resting shapes for the sharedBounds transition:

@Composable
fun Modifier.sharedBoundsRevealWithShapeMorph(
    sharedContentState: SharedTransitionScope.SharedContentState,
    sharedTransitionScope: SharedTransitionScope = LocalSharedTransitionScope.current,
    animatedVisibilityScope: AnimatedVisibilityScope = LocalNavAnimatedContentScope.current,
    boundsTransform: BoundsTransform = MaterialTheme.motionScheme.sharedElementTransitionSpec,
    resizeMode: SharedTransitionScope.ResizeMode = SharedTransitionScope.ResizeMode.RemeasureToBounds,
    restingShape: RoundedPolygon = RoundedPolygon.rectangle().normalized(),
    targetShape: RoundedPolygon = RoundedPolygon.circle().normalized(),
): Modifier

Then, we apply a custom OverlayClip to provide the morphing shape, by tying into the AnimatedVisibilityScope provided by the LocalNavAnimatedContentScope:

val animatedProgress =
    animatedVisibilityScope.transition.animateFloat(targetValueByState = targetValueByState)

val morph = remember {
    Morph(restingShape, targetShape)
}
val morphClip = MorphOverlayClip(morph, { animatedProgress.value })

return this@sharedBoundsRevealWithShapeMorph
    .sharedBounds(
        sharedContentState = sharedContentState,
        animatedVisibilityScope = animatedVisibilityScope,
        boundsTransform = boundsTransform,
        resizeMode = resizeMode,
        clipInOverlayDuringTransition = morphClip,
        renderInOverlayDuringTransition = renderInOverlayDuringTransition,
    )

View the full code snippet for this Modifier on GitHub.

Autosize text

With the latest release of Jetpack Compose 1.8, we added the ability to create text composables that automatically adjust the font size to fit the container’s available size with the new autoSize parameter:

BasicText(
    text,
    style = MaterialTheme.typography.titleLarge,
    autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
)

This is used front and center for the “Customize your own Android Bot” text:

Text reads Customize your own Android Bot with an inline moving image
“Customize your own Android Bot” text with inline GIF

This text composable is interesting because it needed to have the fun dancing Android bot in the middle of the text. To do this, we use the inlineContent parameter with InlineTextContent, which allows us to append a composable in the middle of the text composable itself:

@Composable
private fun DancingBotHeadlineText(modifier: Modifier = Modifier) {
   Box(modifier = modifier) {
       val animatedBot = "animatedBot"
       val text = buildAnnotatedString {
           append(stringResource(R.string.customize))
           // Attach "animatedBot" annotation on the placeholder
           appendInlineContent(animatedBot)
           append(stringResource(R.string.android_bot))
       }
       var placeHolderSize by remember {
           mutableStateOf(220.sp)
       }
       val inlineContent = mapOf(
           Pair(
               animatedBot,
               InlineTextContent(
                   Placeholder(
                       width = placeHolderSize,
                       height = placeHolderSize,
                       placeholderVerticalAlign = PlaceholderVerticalAlign.TextCenter,
                   ),
               ) {
                   DancingBot(
                       modifier = Modifier
                           .padding(top = 32.dp)
                           .fillMaxSize(),
                   )
               },
           ),
       )
       BasicText(
           text,
           modifier = Modifier
               .align(Alignment.Center)
               .padding(bottom = 64.dp, start = 16.dp, end = 16.dp),
           style = MaterialTheme.typography.titleLarge,
           autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
           maxLines = 6,
           onTextLayout = { result ->
               placeHolderSize = result.layoutInput.style.fontSize * 3.5f
           },
           inlineContent = inlineContent,
       )
   }
}

Composable visibility with onLayoutRectChanged

With Compose 1.8, a new modifier, Modifier.onLayoutRectChanged, was added. This modifier is a more performant version of onGloballyPositioned, and includes features such as debouncing and throttling to make it performant inside lazy layouts.

In Androidify, we’ve used this modifier for the color splash animation. It determines the position where the transition should start from, as we attach it to the “Let’s Go” button:

var buttonBounds by remember {
   mutableStateOf<RelativeLayoutBounds?>(null)
}
var showColorSplash by remember {
   mutableStateOf(false)
}
Box(modifier = Modifier.fillMaxSize()) {
   PrimaryButton(
       buttonText = "Let's Go",
       modifier = Modifier
           .align(Alignment.BottomCenter)
           .onLayoutRectChanged(
               callback = { bounds ->
                   buttonBounds = bounds
               },
           ),
       onClick = {
           showColorSplash = true
       },
   )
}

We use these bounds as an indication of where to start the color splash animation from.

moving image of a blue color splash transition between Androidify demo screens

Learn more delightful details

From fun marquee animations on the results screen, to animated gradient buttons for the AI-powered actions, to the path drawing animation for the loading screen, this app has many delightful touches for you to experience and learn from.

animated marquee example

animated gradient button for AI powered actions example

animated loading screen example

Check out the full codebase at github.com/android/androidify and learn more about the latest in Compose from using Material 3 Expressive, the new modifiers, auto-sizing text and of course a couple of delightful interactions!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

What’s new in CameraX 1.4.0 and a sneak peek of Jetpack Compose support

Posted by Scott Nien – Software Engineer (scottnien@)

Get ready to level up your Android camera apps! CameraX 1.4.0 just dropped with a load of awesome new features and improvements. We're talking expanded HDR capabilities, preview stabilization, a versatile effect framework, and a whole lot of cool stuff to explore. We'll also show how to seamlessly integrate CameraX with Jetpack Compose! Let's dive in and see how these enhancements can take your camera app to the next level.

HDR preview and Ultra HDR

A split-screen image compares Standard Dynamic Range (SDR) and High Dynamic Range (HDR) image quality side-by-side using a singular image of a detailed landscape. The HDR side is more vivid and vibrant.

High Dynamic Range (HDR) is a game-changer for photography, capturing a wider range of light and detail to create stunningly realistic images. With CameraX 1.3.0, we brought you HDR video recording capabilities, and now in 1.4.0, we're taking it even further! Get ready for HDR Preview and Ultra HDR. These exciting additions empower you to deliver an even richer visual experience to your users.

HDR Preview

This new feature allows you to enable HDR on Preview without needing to bind a VideoCapture use case. This is especially useful for apps that use a single preview stream for both showing preview on display and video recording with an OpenGL pipeline.

To fully enable HDR, you need to ensure your OpenGL pipeline is capable of processing the specific dynamic range format and then check the camera's capability.

See the following code snippet as an example of enabling HLG10, the baseline HDR standard that device makers must support on cameras with 10-bit output.

// Declare your OpenGL pipeline supported dynamic range format. 
val openGLPipelineSupportedDynamicRange = setOf(
     DynamicRange.SDR, 
     DynamicRange.HLG_10_BIT
)
// Check camera dynamic range capabilities. 
val isHlg10Supported =  
     cameraProvider.getCameraInfo(cameraSelector)
           .querySupportedDynamicRanges(openGLPipelineSupportedDynamicRange)
           .contains(DynamicRange.HLG_10_BIT)

val preview = Preview.Builder().apply {
    if (isHlg10Supported) {
        setDynamicRange(DynamicRange.HLG_10_BIT)
    }
}.build()

Ultra HDR

Introducing Ultra HDR, a new format in Android 14 that lets users capture stunningly realistic photos with incredible dynamic range. And the best part? CameraX 1.4.0 makes it incredibly easy to add Ultra HDR capture to your app with just a few lines of code:

val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)
val isUltraHdrSupported = 
      ImageCapture.getImageCaptureCapabilities(cameraInfo)
                  .supportedOutputFormats
                  .contains(ImageCapture.OUTPUT_FORMAT_JPEG_ULTRA_HDR)

val imageCapture = ImageCapture.Builder().apply {
    if (isUltraHdrSupported) {
        setOutputFormat(ImageCapture.OUTPUT_FORMAT_JPEG_ULTRA_HDR)
    }
}.build()

Jetpack Compose support

While this post focuses on 1.4.0, we're excited to announce the Jetpack Compose support in CameraX 1.5.0 alpha. We’re adding support for a Composable Viewfinder built on top of AndroidExternalSurface and AndroidEmbeddedExternalSurface. The CameraXViewfinder Composable hooks up a display surface to a CameraX Preview use case, handling the complexities of rotation, scaling and Surface lifecycle so you don’t need to.

// in build.gradle 
implementation ("androidx.camera:camera-compose:1.5.0-alpha03")


class PreviewViewModel : ViewModel() {
    private val _surfaceRequests = MutableStateFlow<SurfaceRequest?>(null)

    val surfaceRequests: StateFlow<SurfaceRequest?>
        get() = _surfaceRequests.asStateFlow()

    private fun produceSurfaceRequests(previewUseCase: Preview) {
        // Always publish new SurfaceRequests from Preview
        previewUseCase.setSurfaceProvider { newSurfaceRequest ->
            _surfaceRequests.value = newSurfaceRequest
        }
    }

    // ...
}

@Composable
fun MyCameraViewfinder(
    viewModel: PreviewViewModel,
    modifier: Modifier = Modifier
) {
    val currentSurfaceRequest: SurfaceRequest? by
        viewModel.surfaceRequests.collectAsState()

    currentSurfaceRequest?.let { surfaceRequest ->
        CameraXViewfinder(
            surfaceRequest = surfaceRequest,
            implementationMode = ImplementationMode.EXTERNAL, // Or EMBEDDED
            modifier = modifier        
        )
    }
}

Kotlin-friendly APIs

CameraX is getting even more Kotlin-friendly! In 1.4.0, we've introduced two new suspend functions to streamline camera initialization and image capture.

// CameraX initialization
val cameraProvider = ProcessCameraProvider.awaitInstance(context)

// Image capture: suspends until the capture completes
val imageProxy = imageCapture.takePicture()
// Process the imageProxy, then close it
imageProxy.close()

Preview Stabilization and Mirror mode

Preview Stabilization

Preview stabilization mode was added in Android 13 to enable the stabilization on all non-RAW streams, including previews and MediaCodec input surfaces. Compared to the previous video stabilization mode, which may have inconsistent FoV (Field of View) between the preview and recorded video, this new preview stabilization mode ensures consistency and thus provides a better user experience. For apps that record the preview directly for video recording, this mode is also the only way to enable stabilization.

Follow the code below to enable preview stabilization. Please note that once preview stabilization is turned on, it is not only applied to the Preview but also to the VideoCapture if it is bound as well.

val isPreviewStabilizationSupported =  
    Preview.getPreviewCapabilities(cameraProvider.getCameraInfo(cameraSelector))
        .isStabilizationSupported
val preview = Preview.Builder().apply {
    if (isPreviewStabilizationSupported) {
      setPreviewStabilizationEnabled(true)
    }
}.build()

MirrorMode

While CameraX 1.3.0 introduced mirror mode for VideoCapture, we've now brought this handy feature to Preview in 1.4.0. This is especially useful for devices with outer displays, allowing you to create a more natural selfie experience when using the rear camera.

To enable mirror mode, simply call the Preview.Builder.setMirrorMode API. This feature is supported on Android 13 and above.
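A minimal sketch of enabling it, assuming the MirrorMode constants from androidx.camera.core:

val preview = Preview.Builder()
    .setMirrorMode(MirrorMode.MIRROR_MODE_ON)
    .build()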

Real-time Effect

CameraX 1.3.0 introduced the CameraEffect framework, giving you the power to customize your camera output with OpenGL. Now, in 1.4.0, we're taking it a step further. In addition to applying your own custom effects, you can now leverage a set of pre-built effects provided by CameraX and Media3, making it easier than ever to enhance your app's camera features.

Overlay Effect

The new camera-effects artifact aims to provide ready-to-use effect implementations, starting with the OverlayEffect. This effect lets you draw overlays on top of camera frames using the familiar Canvas API.

The following sample code shows how to detect the QR code and draw the shape of the QR code once it is detected.

By default, drawing is performed in surface frame coordinates. But what if you need to use camera sensor coordinates? No problem! OverlayEffect provides the Frame#getSensorToBufferTransform function, allowing you to apply the necessary transformation matrix to your overlayCanvas.

In this example, we use CameraX's MLKit Vision APIs (MlKitAnalyzer) and specify COORDINATE_SYSTEM_SENSOR to obtain QR code corner points in sensor coordinates. This ensures accurate overlay placement regardless of device orientation or screen aspect ratio.

// in build.gradle 
implementation ("androidx.camera:camera-effects:1.4.1}")      
implementation ("androidx.camera:camera-mlkit-vision:1.4.1")

var qrcodePoints: Array<Point>? = null
val qrcodeBoxEffect = OverlayEffect(
    PREVIEW, // applied on the preview only
    0, // queueDepth
    Handler(Looper.getMainLooper()),
    {},
)

fun initCamera() {
    // Stroke paint for the QR code outline (the app defines its own styling)
    val paint = Paint().apply {
        style = Paint.Style.STROKE
        color = Color.GREEN
        strokeWidth = 5f
    }
    qrcodeBoxEffect.setOnDrawListener { frame ->
        frame.overlayCanvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR)
        qrcodePoints?.let {
            // Using sensor coordinates to draw.
            frame.overlayCanvas.setMatrix(frame.sensorToBufferTransform)
            val path = android.graphics.Path().apply {
                it.forEachIndexed { index, point ->
                    if (index == 0) {
                        moveTo(point.x.toFloat(), point.y.toFloat())
                    } else {
                        lineTo(point.x.toFloat(), point.y.toFloat())
                    }
                 }
                 lineTo(it[0].x.toFloat(), it[0].y.toFloat())
            }
            frame.overlayCanvas.drawPath(path, paint)
        }
        true
    }

    val imageAnalysis = ImageAnalysis.Builder()
        .build()
        .apply {
            setAnalyzer(executor,
                MlKitAnalyzer(
                    listOf(barcodeScanner!!),
                    COORDINATE_SYSTEM_SENSOR,
                    executor
                ) { result ->
                    val barcodes = result.getValue(barcodeScanner!!)
                    qrcodePoints = 
                        barcodes?.takeIf { it.size > 0}?.get(0)?.cornerPoints
                }
            )
        }

    val useCaseGroup = UseCaseGroup.Builder()
          .addUseCase(preview)
          .addUseCase(imageAnalysis)
          .addEffect(qrcodeBoxEffect)
          .build()

    cameraProvider.bindToLifecycle(
        lifecycleOwner, cameraSelector, useCaseGroup)
}

Media3 Effect

Want to add stunning camera effects to your CameraX app? Now you can tap into the power of Media3's rich effects framework! This exciting integration allows you to apply Media3 effects to your CameraX output, including Preview, VideoCapture, and ImageCapture.

This means you can easily enhance your app with a wide range of professional-grade effects, from blurs and color filters to transitions and more. To get started, simply use the new androidx.camera.media3:media3-effect artifact.

Here's a quick example of how to apply a grayscale effect to your camera output:

// in build.gradle 
implementation ("androidx.camera.media3:media3-effect:1.0.0-alpha01")
implementation ("androidx.media3:media3-effect:1.5.0")

import androidx.camera.media3.effect.Media3Effect

val media3Effect = Media3Effect(
    requireContext(),
    PREVIEW or VIDEO_CAPTURE or IMAGE_CAPTURE,
    mainThreadExecutor(),
    {},
)

// use grayscale effect
media3Effect.setEffects(listOf(RgbFilter.createGrayscaleFilter()))
cameraController.setEffects(setOf(media3Effect)) // or use the UseCaseGroup API

Here is what the effect looks like:

A black and white view from inside a coffee shop looking out at a city street.  The bottom of the photo shows the edge of a table with a laptop and two buttons labeled 'BACK' and 'RECORD'

Screen Flash

Taking selfies in low light just got easier with CameraX 1.4.0! This release introduces a powerful new feature: screen flash. Instead of relying on a traditional LED flash which most selfie cameras don’t have, screen flash cleverly utilizes your phone's display. By momentarily turning the screen bright white, it provides a burst of illumination that helps capture clear and vibrant selfies even in challenging lighting conditions.

Integrating screen flash into your CameraX app is flexible and straightforward. You have two main options:

      1. Implement the ScreenFlash interface: This gives you full control over the screen flash behavior. You can customize the color, intensity, duration, and any other aspect of the flash. This is ideal if you need a highly tailored solution.

      2. Use the built-in implementation: For a quick and easy solution, leverage the pre-built screen flash functionality in ScreenFlashView or PreviewView. This implementation handles all the heavy lifting for you.

If you're already using PreviewView in your app, enabling screen flash is incredibly simple. Just enable it directly on the PreviewView instance. If you need more control or aren't using PreviewView, you can use ScreenFlashView directly.

Here's a code example demonstrating how to enable screen flash:

// case 1: PreviewView + CameraX core API
previewView.setScreenFlashWindow(activity.window)
imageCapture.setScreenFlash(previewView.screenFlash)
imageCapture.setFlashMode(ImageCapture.FLASH_MODE_SCREEN)

// case 2: PreviewView + CameraController
previewView.setScreenFlashWindow(activity.window)
cameraController.setImageCaptureFlashMode(ImageCapture.FLASH_MODE_SCREEN)

// case 3: use ScreenFlashView
screenFlashView.setScreenFlashWindow(activity.window)
imageCapture.setScreenFlash(screenFlashView.screenFlash)
imageCapture.setFlashMode(ImageCapture.FLASH_MODE_SCREEN)

Camera Extensions new features

Camera Extensions APIs aim to help apps access the cutting-edge capabilities previously available only on built-in camera apps. And the ecosystem is growing rapidly! In 2024, we've seen major players like Pixel, Samsung, Xiaomi, Oppo, OnePlus, Vivo, and Honor all embrace Camera Extensions, particularly for Night Mode and Bokeh Mode. CameraX 1.4.0 takes this even further by adding support for brand-new Android 15 Camera Extensions features, including:

    • Postview: Provides a preview of the captured image almost instantly before the long-exposure shots are completed
    • Capture Process Progress: Displays a progress indicator so users know how long capturing and processing will take, improving the experience for features like Night Mode
    • Extensions Strength: Allows users to fine-tune the intensity of the applied effect

Below is an example of the improved UX that uses postview and capture process progress features on Samsung S24 Ultra.

moving image capturing process progress features on Samsung S24 Ultra

Interested to know how this can be implemented? See the sample code below:

val extensionsCameraSelector =  
    extensionsManager
        .getExtensionEnabledCameraSelector(DEFAULT_BACK_CAMERA, extensionMode)
val isPostviewSupported = ImageCapture.getImageCaptureCapabilities(                   
    cameraProvider.getCameraInfo(extensionsCameraSelector)
).isPostviewSupported
val imageCapture = ImageCapture.Builder().apply {
    setPostviewEnabled(isPostviewSupported)
}.build()

imageCapture.takePicture(outputFileOptions, executor,
    object : OnImageSavedCallback {
        override fun onImageSaved(outputFileResults: OutputFileResults) {
            // final image saved
        }
        override fun onPostviewBitmapAvailable(bitmap: Bitmap) {
            // postview bitmap is available
        }
        override fun onCaptureProcessProgressed(progress: Int) {
            // capture process progress update
        }
        override fun onError(exception: ImageCaptureException) {
            // handle the capture error
        }
    })

Important: If your app ran into the CameraX Extensions issue on Pixel 9 series devices, please use CameraX 1.4.1 instead. This release fixes a critical issue that prevented Night Mode from working correctly with takePicture.

What's Next

We hope you enjoy this new release. Our mission is to make camera development a joy, removing the friction and pain points so you can focus on innovation. With CameraX, you can easily harness the power of Android's camera capabilities and build truly amazing app experiences.

Have questions or want to connect with the CameraX team? Join the CameraX developer discussion group or file a bug report.

We can’t wait to see what you create!

CameraX 1.3 is now in Beta

Posted by Donovan McMurray, Camera Developer Relations Engineer

CameraX, the Android Jetpack camera library which helps you create a best-in-class experience that works consistently across Android versions and devices, is becoming even more helpful with its 1.3 release. CameraX is already used in a growing number of Android apps, encompassing a wide range of use cases from straightforward and performant camera interactions to advanced image processing and beyond.

CameraX 1.3 opens up even more advanced capabilities. With the dual concurrent camera feature, apps can operate two cameras at the same time. Additionally, 1.3 makes it simple to delight users with new HDR video capabilities. You can also now add graphics library transformations (for example, with OpenGL or Vulkan) to the Preview, ImageCapture, and VideoCapture UseCases to apply filters and effects. There are also many other video improvements.

CameraX version 1.3 is officially in Beta as of today, so let’s get right into the details!

Dual concurrent camera

CameraX makes complex camera functionality easy to use, and the new dual concurrent camera feature is no exception. CameraX handles the low-level details like ensuring the concurrent camera streams are opened and closed in the correct order. In CameraX, binding dual concurrent cameras is not that different from binding a single camera.

First, check which cameras support a concurrent connection with getAvailableConcurrentCameraInfos(). A common scenario is to select a front-facing and a back-facing camera.

var primaryCameraSelector: CameraSelector? = null
var secondaryCameraSelector: CameraSelector? = null

for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
    primaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_FRONT
    }?.cameraSelector
    secondaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_BACK
    }?.cameraSelector

    if (primaryCameraSelector == null || secondaryCameraSelector == null) {
        // If either a primary or secondary selector wasn't found, reset both
        // to move on to the next list of CameraInfos.
        primaryCameraSelector = null
        secondaryCameraSelector = null
    } else {
        // If both primary and secondary camera selectors were found, we can
        // conclude the search.
        break
    }
}

if (primaryCameraSelector == null || secondaryCameraSelector == null) {
    // Front and back concurrent camera not available. Handle accordingly.
}

Then, create a SingleCameraConfig for each camera, passing in each camera selector from before, along with your UseCaseGroup and LifecycleOwner. Then call bindToLifecycle() on your CameraProvider with both SingleCameraConfigs in a list.


val primary = ConcurrentCamera.SingleCameraConfig(
    primaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)
val secondary = ConcurrentCamera.SingleCameraConfig(
    secondaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)

val concurrentCamera = cameraProvider.bindToLifecycle(
    listOf(primary, secondary)
)

For compatibility reasons, dual concurrent camera supports each camera being bound to 2 or fewer UseCases with a maximum resolution of 720p or 1440p, depending on the device.

HDR video

CameraX 1.3 also adds support for 10-bit video streaming along with HDR profiles, giving you the ability to capture video with greater detail, color and contrast than previously available. You can use the VideoCapture.Builder.setDynamicRange() method to set a number of configurations. There are several pre-configured values:

  • HLG_10_BIT - A 10-bit high-dynamic range with HLG encoding. This is the recommended HDR encoding to use because every device that supports HDR capture will support HLG10. See the Check for HDR support guide for details.
  • HDR10_10_BIT - A 10-bit high-dynamic range with HDR10 encoding.
  • HDR10_PLUS_10_BIT - A 10-bit high-dynamic range with HDR10+ encoding.
  • DOLBY_VISION_10_BIT - A 10-bit high-dynamic range with Dolby Vision encoding.
  • DOLBY_VISION_8_BIT - An 8-bit high-dynamic range with Dolby Vision encoding.

First, loop through the available CameraInfos to find the first one that supports HDR. You can add additional camera selection criteria here.

var supportedHdrEncoding: DynamicRange? = null

val hdrCameraInfo = cameraProvider.availableCameraInfos
    .firstOrNull { cameraInfo ->
        val videoCapabilities = Recorder.getVideoCapabilities(cameraInfo)
        val supportedDynamicRanges =
            videoCapabilities.getSupportedDynamicRanges()
        supportedHdrEncoding = supportedDynamicRanges.firstOrNull {
            it != DynamicRange.SDR // Ensure an HDR encoding is found
        }
        supportedHdrEncoding != null
    }

val cameraSelector = hdrCameraInfo?.cameraSelector
    ?: CameraSelector.DEFAULT_BACK_CAMERA

Then, set up a Recorder and a VideoCapture UseCase. If you found a supportedHdrEncoding earlier, also call setDynamicRange() to turn on HDR in your camera app.


// Create a Recorder with Quality.HIGHEST, which will select the highest
// resolution compatible with the chosen DynamicRange.
val recorder = Recorder.Builder()
    .setQualitySelector(QualitySelector.from(Quality.HIGHEST))
    .build()

val videoCaptureBuilder = VideoCapture.Builder(recorder)
if (supportedHdrEncoding != null) {
    videoCaptureBuilder.setDynamicRange(supportedHdrEncoding!!)
}
val videoCapture = videoCaptureBuilder.build()

Effects

While CameraX makes many camera tasks easy, it also provides hooks to accomplish advanced or custom functionality. The new effects methods enable custom graphics library transformations to be applied to frames for Preview, ImageCapture, and VideoCapture.

You can define a CameraEffect to inject code into the CameraX pipeline and apply visual effects, such as a custom portrait effect. When creating your own CameraEffect via the constructor, you must specify which use cases to target (from PREVIEW, VIDEO_CAPTURE, and IMAGE_CAPTURE). You must also specify a SurfaceProcessor to implement a GPU effect for the underlying Surface. It's recommended to use a graphics API such as OpenGL or Vulkan to access the Surface. This processing will block the Executor associated with the ImageCapture. An internal I/O thread is used by default, or you can set one with ImageCapture.Builder.setIoExecutor(). Note: it's the implementation's responsibility to be performant. For 30fps input, each frame should be processed in under 30 ms to avoid frame drops.

There is an alternative CameraEffect constructor for processing still images, since higher latency is more acceptable when processing a single image. For this constructor, you pass in an ImageProcessor, implementing the process method to return an image as detailed in the ImageProcessor.Request.getInputImage() method.

Once you’ve defined one or more CameraEffects, you can add them to your CameraX setup. If you’re using a CameraProvider, call UseCaseGroup.Builder.addEffect() for each CameraEffect, then build the UseCaseGroup and pass it to bindToLifecycle(). If you’re using a CameraController, pass all of your CameraEffects into setEffects().
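
For example, here's a minimal sketch of the CameraProvider path, assuming grayscaleEffect is a hypothetical CameraEffect implementation of your own and that preview, videoCapture, cameraSelector, and lifecycleOwner already exist:

// A sketch of wiring an effect in via UseCaseGroup; `grayscaleEffect`
// stands in for your own CameraEffect subclass.
val useCaseGroup = UseCaseGroup.Builder()
    .addUseCase(preview)
    .addUseCase(videoCapture)
    .addEffect(grayscaleEffect)
    .build()

cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, useCaseGroup)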

Additional video features

CameraX 1.3 has many additional highly-requested video features that we’re excited to add support for.

With VideoCapture.Builder.setMirrorMode(), you can control when video recordings are reflected horizontally. You can set MIRROR_MODE_OFF (the default), MIRROR_MODE_ON, and MIRROR_MODE_ON_FRONT_ONLY (useful for matching the mirror state of the Preview, which is mirrored on front-facing cameras). Note: in an app that only uses the front-facing camera, MIRROR_MODE_ON and MIRROR_MODE_ON_FRONT_ONLY are equivalent.
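
For instance, here's a short sketch of mirroring only front-camera recordings, assuming an existing Recorder named recorder:

// Mirror recordings only when the front-facing camera is in use.
val videoCapture = VideoCapture.Builder(recorder)
    .setMirrorMode(MirrorMode.MIRROR_MODE_ON_FRONT_ONLY)
    .build()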

The PendingRecording.asPersistentRecording() method prevents a recording from being stopped by lifecycle events or by the explicit unbinding of the VideoCapture use case that the recording's Recorder is attached to. This is useful if you want to bind to a different camera and continue the recording with that camera. When this option is enabled, you must explicitly call Recording.stop() or Recording.close() to end the recording.
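
Here's a sketch of starting such a recording, assuming recorder, outputOptions, context, and executor already exist (note that asPersistentRecording() is an experimental API, so the call site must opt in):

// Start a recording that survives rebinding to another camera.
val recording = recorder
    .prepareRecording(context, outputOptions)
    .asPersistentRecording()
    .start(executor) { event ->
        // Handle VideoRecordEvents here.
    }

// Later, once you're truly done (even after switching cameras):
recording.stop()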

For videos that are set to record audio via PendingRecording.withAudioEnabled(), you can now call Recording.mute() while the recording is in progress. Pass in a boolean to specify whether to mute or unmute the audio, and CameraX will insert silence during the muted portions to ensure the audio stays aligned with the video.
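
A sketch of a mute toggle, assuming recording is the active Recording returned by start():

// Toggle audio on an in-progress recording; CameraX inserts silence
// while muted so audio and video stay aligned.
var muted = false

fun onMuteButtonClicked() {
    muted = !muted
    recording.mute(muted)
}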

AudioStats now has a getAudioAmplitude() method, which is perfect for showing a visual indicator to users that audio is being recorded. While a video recording is in progress, each VideoRecordEvent can be used to access RecordingStats, which in turn contains the AudioStats object.
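
For example, here's a sketch of driving a level indicator from the event stream, assuming a pendingRecording and executor are already set up:

val recording = pendingRecording.start(executor) { event ->
    // Each VideoRecordEvent carries RecordingStats, which contains
    // AudioStats; use the amplitude to animate a level indicator.
    val amplitude = event.recordingStats.audioStats.audioAmplitude
    levelMeter.setLevel(amplitude) // `levelMeter` is your own UI widget
}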

Next steps

Check the full release notes for CameraX 1.3 for more details on the features described here and more! If you’re ready to try out CameraX 1.3, update your project’s CameraX dependency to 1.3.0-beta01 (or the latest version at the time you’re reading this).

If you would like to provide feedback on any of these features or CameraX in general, please create a CameraX issue. As always, you can also reach out on our CameraX Discussion Group.

Introducing Camera Viewfinder

Posted by Francesco Romano, Developer Relations Engineer, Android

Over the years, Android devices have evolved to include a variety of sizes, shapes, and displays, among other features. Since the beginning, however, taking pictures with your phone has been one of the most important use cases. Today, camera capabilities are still one of the top reasons consumers purchase a phone.

As a developer, you want to leverage camera capabilities in your app, so you decide to adopt the Android Camera Framework. The first use case you want to implement is the Preview use case, which shows the output of the camera sensor on the screen.

So you go ahead and create a CaptureSession using a surface as big as the screen size. As long as the screen has the same aspect ratio as the camera sensor output and the device stays in its natural portrait orientation, everything should be fine.

But what happens when you resize the window, unfold your device, or change the display or orientation? Well, in most cases, the preview may appear stretched, upside down, or incorrectly rotated. And if you are in a multi-window environment, your app may even crash.

Why does this happen? Because of the implicit assumptions you made while creating the CaptureSession.

Historically, your app could live in the same window for its whole life cycle, but with the availability of new form factors such as foldable devices, and new display modes such as multi-window and multi-display, you can't assume this will be true anymore.

Let's examine some of the most important considerations, and common pitfalls to avoid, when developing an app targeting various form factors:

  • Don't assume your app will live in a portrait-shaped window. Requesting a fixed orientation is still supported in Android 13, but now device manufacturers may have the option of overriding an app request for a preferred orientation.
  • Don't assume any fixed dimension or aspect ratio for your app. Even if you set resizableActivity = "false", your app could still be used in multi-window mode on large screens (>=600dp).
  • Don't assume a fixed relationship between the orientation of the screen and the camera. The Android Compatibility Definition Document specifies that a camera image sensor "MUST be oriented so that the long dimension of the camera aligns with the screen's long dimension." Starting with API level 32, camera clients that query the orientation on foldable devices can receive a value that dynamically changes depending on the device/fold state.
  • Don't assume the size of the inset can't change. The new taskbar is reported to applications as an inset, and when used with gesture navigation, the taskbar can be hidden and shown dynamically.
  • Don't assume your app has exclusive access to the camera. While your app is in a multi-window state, other apps can obtain access to shared resources like camera and microphone.

While CameraX already handles most of the cases above, implementing a preview that works in different scenarios with Camera2 APIs can be complex, as we describe in the Support resizable surfaces in your camera app Codelab.

Wouldn’t it be great to have a simple component that takes care of those details and lets you focus on your specific app logic?

Say no more…

Introducing CameraViewfinder

CameraViewfinder is a new artifact from the Jetpack library that allows you to quickly implement camera previews with minimal effort. It internally uses either a TextureView or SurfaceView to display the camera feed, and applies the required transformations on them to correctly display the viewfinder. This involves correcting their aspect ratio, scale, and rotation. It is fully compatible with your existing Camera2 codebase and continuously tested on several devices.

Let’s see how to use it.

First, add the dependency in your app-level build.gradle file:

implementation "androidx.camera:camera-viewfinder:1.3.0-alpha01"

Sync your project. Now you should be able to directly use the CameraViewfinder as any other View. For example, you can add it to your layout file:

<androidx.camera.viewfinder.CameraViewfinder
  android:id="@+id/view_finder"
  app:scaleType="fitCenter"
  app:implementationMode="performance"
  android:layout_width="match_parent"
  android:layout_height="match_parent"/>

As you can see, CameraViewfinder exposes the same controls available on PreviewView, so you can choose between different implementation modes and scale types.

Now that the component is part of the layout, you can still create a CameraCaptureSession, but instead of providing a TextureView or SurfaceView as target surfaces, use the result of requestSurfaceAsync().

fun startCamera() {
    val previewResolution = Size(width, height)
    val viewfinderSurfaceRequest =
        ViewfinderSurfaceRequest(previewResolution, characteristics)
    val surfaceListenableFuture =
        cameraViewfinder.requestSurfaceAsync(viewfinderSurfaceRequest)

    Futures.addCallback(surfaceListenableFuture, object : FutureCallback<Surface> {
        override fun onSuccess(surface: Surface) {
            // Create a CaptureSession using this surface as usual.
        }

        override fun onFailure(t: Throwable) {
            // Something went wrong with the surface request.
        }
    }, ContextCompat.getMainExecutor(context))
}


Bonus: optimized layouts for foldable devices

CameraViewfinder is ready to use across resizable surfaces, configuration changes, rotations, and multi-window modes, and it has been tested on many foldable devices.

But if you want to implement optimized layouts for foldable and dual-screen devices, you can combine CameraViewfinder with the Jetpack WindowManager library to provide unique experiences for your users.

For example, you can choose to avoid showing full screen preview if there is a hinge in the middle of the screen, or if the device is in “book” or “tabletop” mode. In those scenarios, you can have the viewfinder in one portion of the screen and the controls on the other side, or you can use part of the screen to show the last pictures taken. Imagination is the limit!
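
As a sketch of what posture detection can look like inside an Activity (CameraActivity here is hypothetical), assuming the androidx.window library is on the classpath:

lifecycleScope.launch {
    WindowInfoTracker.getOrCreate(this@CameraActivity)
        .windowLayoutInfo(this@CameraActivity)
        .collect { layoutInfo ->
            val fold = layoutInfo.displayFeatures
                .filterIsInstance<FoldingFeature>()
                .firstOrNull()
            val isTabletop =
                fold?.state == FoldingFeature.State.HALF_OPENED &&
                    fold.orientation == FoldingFeature.Orientation.HORIZONTAL
            // When isTabletop is true, constrain the viewfinder to the half
            // above the fold and move the controls below it.
        }
}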

The sample app is already optimized for foldable devices and you can find the code to handle posture changes here. Have a look!

CameraX 1.2 is now in Beta

Posted by Donovan McMurray, CameraX Developer Relations Engineer

As part of Android Jetpack, the CameraX library makes complex camera functionality available in an easy-to-use API, helping you create a best-in-class experience that works consistently across Android versions and devices. As of today, CameraX version 1.2 is officially in Beta. Update from version 1.1 to take advantage of the latest game-changing features: our new ML Kit integration, which can reduce your boilerplate code when using ML Kit in a CameraX app, and Zero-Shutter Lag, which enables faster action shots than were previously possible.

These two advanced features are simple to implement with CameraX 1.2, so let’s take a look at each of them in depth.

ML Kit Integration

Google’s ML Kit provides several on-device vision APIs for detecting faces, barcodes, text, objects, and more. We’re making it easier to integrate these APIs with CameraX. Version 1.2 introduces MlKitAnalyzer, an implementation of ImageAnalysis.Analyzer that handles much of the ML Kit setup for you.


You can use MlKitAnalyzer with both cameraController and cameraProvider workflows. If you use the cameraController.setImageAnalysisAnalyzer() method, then CameraX can also handle the coordinates transformation between the ML Kit output and your PreviewView.

Here’s a code snippet using setImageAnalysisAnalyzer() to set a BarcodeScanner on a cameraController to detect QR codes. CameraX automatically handles the coordinate transformations when you pass COORDINATE_SYSTEM_VIEW_REFERENCED into the MlKitAnalyzer. (Use COORDINATE_SYSTEM_ORIGINAL to prevent CameraX from applying any coordinate transformations.)

val options = BarcodeScannerOptions.Builder()
    .setBarcodeFormats(Barcode.FORMAT_QR_CODE)
    .build()
val barcodeScanner = BarcodeScanning.getClient(options)

cameraController.setImageAnalysisAnalyzer(
    executor,
    MlKitAnalyzer(
        listOf(barcodeScanner),
        COORDINATE_SYSTEM_VIEW_REFERENCED,
        executor
    ) { result ->
        // The value of result.getResult(barcodeScanner)
        // can be used directly for drawing a UI overlay.
    }
)



Zero-Shutter Lag

Have you ever lined up the perfect photo, but when you click the shutter button the lag causes you to miss the best moment? CameraX 1.2 offers a solution to this problem by introducing Zero-Shutter Lag.

Prior to CameraX 1.2, you could optimize for quality (CAPTURE_MODE_MAXIMIZE_QUALITY) or efficiency (CAPTURE_MODE_MINIMIZE_LATENCY) when calling ImageCapture.Builder.setCaptureMode(). CameraX 1.2 adds a new value (CAPTURE_MODE_ZERO_SHUTTER_LAG) that reduces latency even further than CAPTURE_MODE_MINIMIZE_LATENCY. Note: for devices that cannot support Zero-Shutter Lag, CameraX will fall back to CAPTURE_MODE_MINIMIZE_LATENCY.
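
Opting in is a one-line change, sketched here:

// Request Zero-Shutter Lag; on devices that can't support it, CameraX
// falls back to CAPTURE_MODE_MINIMIZE_LATENCY automatically.
val imageCapture = ImageCapture.Builder()
    .setCaptureMode(ImageCapture.CAPTURE_MODE_ZERO_SHUTTER_LAG)
    .build()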

We accomplish this by using a circular buffer of photos. On image capture, we go back in time in the circular buffer to get the frame closest to the actual press of the shutter button. No DeLorean needed. Great Scott!

Here’s an example of how this works in a CameraX app with Preview and ImageCapture use cases:


  1. Just like any other app with a Preview use case, CameraX sends images from the camera to the UI for the user to see.
  2. With Zero-Shutter Lag, CameraX also sends images to a circular buffer which holds multiple recent images.
  3. When the user presses the shutter button, there is inevitably some lag in sending the current camera image to your app. For this reason, Zero-Shutter Lag goes to the circular buffer to fetch an image.
  4. CameraX finds the photo in the circular buffer closest to the actual time when the user pressed the shutter button, and returns that photo to your app.

There are a few limitations to keep in mind with Zero-Shutter Lag. First, please be mindful that this is still an experimental feature. Second, since keeping a circular buffer of images is computationally intensive, you cannot use CAPTURE_MODE_ZERO_SHUTTER_LAG while using VideoCapture or extensions. Third, the circular buffer will increase the memory footprint of your app.


Next steps


Check our full release notes for CameraX 1.2 for more details on the features described here and more! If you’re ready to try out CameraX 1.2, update your project’s CameraX dependency to 1.2.0-beta01 (or the latest version at the time you’re reading this).

If you would like to provide feedback on any of these features or CameraX in general, please create a CameraX issue. As always, you can also reach out on our CameraX Discussion Group.

The exciting aspects of Android Camera

Posted by Marwa Mabrouk, Android Camera Platform Product Manager

Android Camera is an exciting space. Camera is one of the top reasons consumers purchase a phone. Android Camera empowers developers today through different tools. Camera 2 is the framework API that has been included in Android since Android 5.0 Lollipop, and CameraX is a Jetpack support library that runs on top of Camera 2 and is available to all Android developers. These solutions are meant to complement each other in addressing the needs of the Android Camera ecosystem.

For developers who are starting with Android Camera, refreshing their app to the latest, or migrating their app from Camera 1, CameraX is the best tool to get started! CameraX offers key benefits that empower developers, and address the complexities of the ecosystem.

  1. Development speed was the main driver behind CameraX's design. The SDK doesn't just let developers get up and running much faster; it also builds in development best practices and photography know-how to get the most out of the camera.
  2. Android-enabled devices come in large numbers with many variations. CameraX aims to be consistent across many Android devices and has taken that complexity upon itself, to offer developers an SDK that works consistently across 150+ phone models, with backward-compatibility to Android 5.0 (API level 21). CameraX is tested daily by Google on each of those devices in our labs, to ensure complexity is not surfaced to developers, while keeping the quality high.
  3. Fast library releases are a flexibility that CameraX gains as a Jetpack support library. CameraX releases can happen on a short, regular cadence, or ad hoc, to address feedback and provide new capabilities. We plan to expand on this more in another blog post.

For developers who are building highly specialized camera functionality that requires low-level control of the capture flow, and who are prepared to account for device variations, Camera 2 should be used.

Camera 2 is the common API that enables the camera hardware on every Android device and is deployed on all the billions of Android devices around the world in the market today. As a framework API, Camera 2 enables developers to utilize their deep knowledge of photography and device implementations. To ensure the quality of Camera 2, device manufacturers show compliance by testing their devices. Device variations do surface in the API based on the device manufacturer's choices, allowing custom features to take advantage of those variations on specific devices as they see fit.

To understand this more, let's use an example comparing camera capture capabilities. Camera 2 offers specialized control of the individual capture pipeline for each of the cameras on the phone simultaneously, in addition to very fine-grained manual settings. CameraX enables capturing high-resolution, high-quality photos and provides auto-white-balance, auto-exposure, and auto-focus functionality, in addition to simple manual camera controls.

Consider some application examples. Samsung uses the Camera Framework API to help its advanced pro-grade camera system capture studio-quality photos in various lighting conditions and settings on Samsung Galaxy devices. While the API is common, Samsung has enabled variations that are unique to each device's capabilities, and takes advantage of that in the camera app on each device. The Camera Framework API enables Samsung to reach into the low-level camera capabilities and tailor the native app for each device.

As another example, Microsoft decided to integrate CameraX across all productivity apps where Microsoft Lens is used (e.g., Office, Outlook, OneDrive) to ensure high-quality images are used in all of these applications. By switching to CameraX, the Microsoft Lens team was able not only to improve its developer experience thanks to the simpler API, but also to improve performance, increase developer productivity, and reduce time to market. You can learn more about this here.


This is a very exciting time for Android Camera, with many new features on both APIs:

  • CameraX has launched several features recently, the most significant being Video Capture, which became available to developers in beta on January 26th.
  • With the launch of Android 12, Camera 2 has a number of new features available.

As we move forward, we plan to share more details about the exciting features we have in store for Android Camera. We look forward to engaging with you and hearing your feedback through the CameraX mailing list: [email protected] and the AOSP issue tracker.

Thank you for your continued interest in Android Camera, and we look forward to building amazing camera experiences for users in collaboration with you!
