Tag Archives: Jetpack

Apps adopt Transformer to support more reliable and performant media editing use cases

Posted by Caren Chang – Developer Relations Engineer

The Jetpack Media3 library enables Android developers to build high-quality media apps. As part of the Media3 library, the Transformer module aims to provide easy-to-use, reliable, and performant APIs for transcoding and editing media.

For example, apps can use Transformer to apply editing operations such as trimming a long media file, or applying effects to video tracks. Transformer can also be used to convert media files from one format to another, such as adjusting the resolution or encoding of the media file.
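
To make that concrete, here is a minimal sketch of a trim-and-transcode export with Transformer. It assumes a Context, an input video URI, and an output file path are already in scope (videoUri, context, and outputPath below are placeholders); consult the Media3 Transformer documentation for the setup details your use case needs.

// A minimal sketch: trim the first 10 seconds of a video and re-encode it as H.265.
val inputMediaItem = MediaItem.Builder()
    .setUri(videoUri) // placeholder: your input video
    .setClippingConfiguration(
        MediaItem.ClippingConfiguration.Builder()
            .setStartPositionMs(0)
            .setEndPositionMs(10_000)
            .build())
    .build()

val transformer = Transformer.Builder(context)
    .setVideoMimeType(MimeTypes.VIDEO_H265)
    .addListener(object : Transformer.Listener {
        override fun onCompleted(composition: Composition, exportResult: ExportResult) {
            // The exported file is ready at outputPath.
        }
        override fun onError(
            composition: Composition,
            exportResult: ExportResult,
            exportException: ExportException
        ) {
            // Surface the failure to the user or retry.
        }
    })
    .build()

transformer.start(EditedMediaItem.Builder(inputMediaItem).build(), outputPath)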

Developing Transformer APIs

As part of the process of introducing new APIs, our engineering team works closely with Google apps such as Google Photos to test and experiment with the new APIs. Experimental flags are first introduced to enable performance improvements. Once the results are conclusive, these experimental features are built into the default API implementations or promoted to public APIs for all apps to use. This approach allows Transformer APIs to be tested on a wide variety of devices.

Transformer Adoption in apps

Apps that have been using Transformer in production have observed in-app performance improvements, less code to maintain, and a better developer experience. Let’s take a closer look at how Transformer has helped apps with their media-editing use cases.

One of users’ favorite features in Google Photos is memory sharing, where snippets of your life story that are curated and presented as Google Photos memories can now be shared as videos to social media and chat apps. However, the process of combining media items to create a video on device is resource intensive and subject to significant latency, especially on low-end devices. To reduce this latency and enable the feature on a wider range of devices, Photos adopted Transformer in their media creation pipeline. Along with other improvements made, the team found that Transformer played a part in reducing the median user latency for creating memory videos by 41% on high-end devices and 27% on mid-range devices.

The Photos app also enables users to perform media edits such as trimming or rotating a video. By adopting Transformer APIs for rotating videos, median save latency was reduced by 79% for applicable videos. The app also adopted Transformer’s API for optimizing video trimming, and observed video save latency decrease by 64%.

1 Second Everyday is a personal video journal that helps you create captivating montages and timelapses. One of the app’s main user journeys is sequentially combining short videos to create a meaningful movie. After adopting Transformer for this use case, the app observed that video encoding performance was up to 5x faster, allowing them to explore enabling 4k and HDR support. The Transformer adoption also helped decrease relevant code by 30%, making it easier for the developers to maintain the code base.

BandLab is the next-generation music creation platform used by millions around the world to make and share their music. The app originally used MediaCodec directly for its video creation use cases, but found that the low-level implementation resulted in native crashes that were difficult to debug. After researching Transformer further, the team decided to migrate from MediaCodec to Transformer. The migration took the team only 12 working days, and it resulted in a simpler codebase and a more maintainable pipeline for their media creation use cases. In addition, the app observed that the previously seen native crashes no longer occurred.

What’s next for Transformer?

We’re excited to see Transformer’s adoption in the developer community, and we will continue adding new features to support more media-editing use cases for the Android ecosystem, including:

    • Better support for previewing media edits
    • Improving the performance and developer experience for video frame extraction
    • Easier integration with AI effects
    • and much more

Keep an eye on what we’re working on in the Media3 GitHub, and file feature requests to help shape the future of Transformer!

CameraX update makes dual concurrent camera even easier

Posted by Donovan McMurray – Developer Relations Engineer

CameraX, Android's Jetpack camera library, is getting an exciting update to its Dual Concurrent Camera feature, making it even easier to integrate this feature into your app. This feature allows you to stream from 2 different cameras at the same time. The original version of Dual Concurrent Camera was released in CameraX 1.3.0, and it was already a huge leap in making this feature easier to implement.

Starting with 1.5.0-alpha01, CameraX will now handle the composition of the 2 camera streams as well. This update is purely additive: it doesn’t remove any prior functionality, nor is it a breaking change to your existing Dual Concurrent Camera code. To tell CameraX to handle the composition, use the new SingleCameraConfig constructor, which has a new parameter for a CompositionSettings object. Since you’ll be creating 2 SingleCameraConfigs, be consistent about which constructor you use.

Nothing has changed in the way you check for concurrent camera support from the prior version of this feature. As a reminder, here is what that code looks like.

// Set up primary and secondary camera selectors if supported on device.
var primaryCameraSelector: CameraSelector? = null
var secondaryCameraSelector: CameraSelector? = null

for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
    primaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_FRONT
    }?.cameraSelector
    secondaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_BACK
    }?.cameraSelector

    if (primaryCameraSelector == null || secondaryCameraSelector == null) {
        // If either a primary or secondary selector wasn't found, reset both
        // to move on to the next list of CameraInfos.
        primaryCameraSelector = null
        secondaryCameraSelector = null
    } else {
        // If both primary and secondary camera selectors were found, we can
        // conclude the search.
        break
    }
}

if (primaryCameraSelector == null || secondaryCameraSelector == null) {
    // Front and back concurrent camera not available. Handle accordingly.
}

Here’s the updated code snippet showing how to implement picture-in-picture, with the front camera stream scaled down to fit into the lower right corner. In this example, CameraX handles the composition of the camera streams.

// If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
// and compose them in a picture-in-picture layout.
val primary = SingleCameraConfig(
    cameraSelectorPrimary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.0f, 0.0f)
        .setScale(1.0f, 1.0f)
        .build(),
    lifecycleOwner)
val secondary = SingleCameraConfig(
    cameraSelectorSecondary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(2 / 3f - 0.1f, -2 / 3f + 0.1f)
        .setScale(1 / 3f, 1 / 3f)
        .build(),
    lifecycleOwner)

// Bind to lifecycle
val concurrentCamera =
    cameraProvider.bindToLifecycle(listOf(primary, secondary))

You are not constrained to a picture-in-picture layout. For instance, you could define a side-by-side layout by setting the offsets and scaling factors accordingly. You want to keep both dimensions scaled by the same amount to avoid a stretched preview. Here’s how that might look.

// If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
// and compose them in a side-by-side layout.
val primary = SingleCameraConfig(
    cameraSelectorPrimary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.0f, 0.25f)
        .setScale(0.5f, 0.5f)
        .build(),
    lifecycleOwner)
val secondary = SingleCameraConfig(
    cameraSelectorSecondary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.5f, 0.25f)
        .setScale(0.5f, 0.5f)
        .build(),
    lifecycleOwner)

// Bind to lifecycle
val concurrentCamera =
    cameraProvider.bindToLifecycle(listOf(primary, secondary))

We’re excited to offer this improvement to an already developer-friendly feature. Truly the CameraX way! CompositionSettings in Dual Concurrent Camera is currently in alpha, so if you have feature requests to improve upon it before the API is locked in, please give us feedback in the CameraX Discussion Group. And check out the full CameraX 1.5.0-alpha01 release notes to see what else is new in CameraX.

Introducing Ink API, a new Jetpack library for stylus apps

Posted by Chris Assigbe – Developer Relations Engineer and Tom Buckley – Product Manager

With stylus input, Android apps on phones, foldables, tablets, and Chromebooks become even more powerful tools for productivity and creativity. While there's already a lot to think about when designing for large screens (see our full guidance and inspiration gallery), styluses are especially impactful, transforming these devices into a digital notebook or sketchbook. Users expect stylus experiences to feel as fluid and natural as writing on paper, which is why Android previously added APIs to reduce inking latency to as low as 4ms, making it virtually imperceptible. However, latency is just one aspect of an inking experience: developers currently need to generate stroke shapes from stylus input, render those strokes quickly, and efficiently run geometric queries over strokes for tools like selection and eraser. These capabilities can require significant investment in geometry and graphics just to get started.

Today, we're excited to share Ink API, an alpha Jetpack library that makes it easy to create, render, and manipulate beautiful ink strokes, enabling developers to build amazing features on top of these APIs. Ink API builds upon the Android framework's foundation of low latency and prediction, providing you with a powerful and intuitive toolkit for integrating rich inking features into your apps.

moving image of a stylus writing with Ink API on a Samsung Tab S8, 4ms showing end-to-end latency
Writing with Ink API on a Samsung Tab S8, 4ms end-to-end latency

What is Ink API?

Ink API is a comprehensive stylus input library that empowers you to quickly create innovative and expressive inking experiences. It offers a modular architecture rather than a one-size-fits-all canvas, so you can tailor Ink API to your app's stack and needs. The modules encompass key functionalities like:

    • Strokes module: Represents the ink input and its visual representation.
    • Geometry module: Supports manipulating and analyzing strokes, facilitating features like erasing, and selecting strokes.
    • Brush module: Provides a declarative way to define the visual style of strokes, including color, size, and the type of tool to draw with.
    • Rendering module: Efficiently displays ink strokes on the screen, allowing them to be combined with Jetpack Compose or Android Views.
    • Live Authoring module: Handles real-time inking input to create smooth strokes with the lowest latency a device can provide.
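
As a rough illustration of how the authoring and brush modules fit together, the sketch below feeds stylus MotionEvents into an InProgressStrokesView with a stock brush. It is based on the alpha API surface, so treat the exact class and method names (InProgressStrokesView, StockBrushes, startStroke, addToStroke, finishStroke) as assumptions to verify against the release you depend on.

// Assumed alpha API shape; verify names against your version of Ink API.
val brush = Brush.createWithColorIntArgb(
    family = StockBrushes.pressurePenLatest,
    colorIntArgb = Color.BLACK,
    size = 5f,
    epsilon = 0.1f
)
var strokeId: InProgressStrokeId? = null

inProgressStrokesView.setOnTouchListener { view, event ->
    when (event.actionMasked) {
        MotionEvent.ACTION_DOWN -> {
            // Start a low-latency stroke for the primary pointer.
            strokeId = inProgressStrokesView.startStroke(event, event.getPointerId(0), brush)
            true
        }
        MotionEvent.ACTION_MOVE -> {
            strokeId?.let { inProgressStrokesView.addToStroke(event, event.getPointerId(0), it) }
            true
        }
        MotionEvent.ACTION_UP -> {
            strokeId?.let { inProgressStrokesView.finishStroke(event, event.getPointerId(0), it) }
            view.performClick()
            true
        }
        else -> false
    }
}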

Ink API is compatible with devices running Android 5.0 (API level 21) or later, and offers benefits on all of these devices. It can also take advantage of latency improvements in Android 10 (API 29) and improved rendering effects and performance in Android 14 (API 34).

Why choose Ink API?

Ink API provides an out-of-the-box implementation for basic inking tasks so you can create a unique drawing experience for your own app. Ink API offers several advantages over a fully custom implementation:

    • Ease of Use: Ink API abstracts away the complexities of graphics and geometry, allowing you to focus on your app's unique inking features.
    • Performance: Built-in low latency support and optimized rendering ensure a smooth and responsive inking experience.
    • Flexibility: The modular design allows you to pick and choose the components you need, tailoring the library to your specific requirements.

Ink API has already been adopted across many Google apps because of these advantages, including for markup in Docs and Circle-to-Search; and the underlying technology also powers markup in Photos, Drive, Meet, Keep, and Classroom. For Circle to Search, the Ink API modular design empowered the team to utilize only the components they needed. They leveraged the live authoring and brush capabilities of Ink API to render a beautiful stroke as users circle (to search). The team also built custom geometry tools tailored to their ML models. That’s modularity at its finest.


“Ink API was our first choice for Circle-to-Search (CtS). Utilizing their extensive documentation, integrating the Ink API was a breeze, allowing us to reach our first working prototype w/in just one week. Ink's custom brush texture and animation support allowed us to quickly iterate on the stroke design.” 

- Jordan Komoda, Software Engineer, Google

We have also designed Ink API with our Android app partners' feedback in mind to make sure it fits with their existing app architectures and requirements.

With Ink API, building a natural and fluid inking experience on Android is simpler than ever. Ink API lets you focus on what differentiates your experience rather than on the details of paths, meshes, and shaders. Whether you are exploring inking for note-taking, photo or document markup, interactive learning, or something completely different, we hope you’ll give Ink API a try!

Get started with Ink API

Ready to dive into the well of Ink API? Check out the official developer guide and explore the API reference to start building your next-generation inking app. We're eager to see the innovative experiences you create!

Note: This alpha release is just the beginning for Ink API. We're committed to continuously improving the library, adding new features and functionalities based on your feedback. Stay tuned for updates and join us in shaping the future of inking on Android!

15 Things to know for Android developers at Google I/O

Posted by Matthew McCullough, Vice President, Product Management, Android Developer  

AI is unlocking experiences that were not even possible a few years ago, and we’ve been hard at work reimagining Android with AI at the core, to help you build a whole new class of apps. At this year’s Google I/O, we’re covering how new tools like Gemini can power the next generation of apps on Android. Plus, we showcased a range of updates to our tools and services grounded in productivity, making it faster and easier for you to build excellent experiences across form factors. Let’s dive in!

Powering the next generation of Apps with AI

#1: AI in your tools, with Gemini in Android Studio

Gemini in Android Studio (formerly Studio Bot) is your coding companion for Android development, and thanks to your feedback since its preview at last year’s Google I/O, we’ve evolved our models, expanded to over 200 countries and territories, and brought it into the Gemini family of products. Earlier today, we previewed a number of new features coming soon, like code suggestions, Gemini-powered App Quality Insights, and multimodal input using Gemini 1.5 Pro. You can read more about the updates here, and make sure to check out What’s new in Android development tools.

#2: Building with Generative AI

Android provides the solution you need to build Generative AI apps. You can use our most capable models over the Cloud with the Gemini API in Google AI or Vertex AI for Firebase directly in your Android apps. For on-device, Gemini Nano is our most efficient model. We’re working closely with a few early adopters such as Patreon, Grammarly, and Adobe to ensure we’re creating the best APIs that unlock the most innovative experiences. For example, Adobe is experimenting with Gemini Nano to enhance the on-device experience of Acrobat AI Assistant, a tool that allows their users to summarize and interact with documents. Be sure to check out the Build your own generative AI powered Android app, Android on-device gen AI under the hood, and the What’s New in Android sessions to learn more!

Moving image of Gemini Nano operating in Adobe

Excellent apps, across devices

#3: Think adaptive: apps on phones, foldables, tablets and more

Build and design apps that adapt beyond the phone, with the new Compose adaptive layout libraries built with Material guidance in beta. Add rich stylus and keyboard support to increase user productivity. Check out three of our key Android adaptive sessions at Google I/O: Designing adaptive apps, Building adaptive Android apps, and Increase user productivity with large screens and accessories.


#4: Enhance homescreens with Widgets and Jetpack Glance

Jetpack Glance 1.1 is now available as a release candidate and lets you build high-quality widgets using your Compose skills. Check out our new canonical layouts, design guidance, and Figma updates to the Android UI kit. To learn more, check out our Improve the user experience of your Android app workshop and the Build Android widgets with Jetpack Glance technical session.

#5-9: Come back here tomorrow and Thursday!

We’ll continue to share more updates for Android Developers throughout Google I/O, so check back here tomorrow!

Developer Productivity

#10: Use Kotlin Multiplatform for sharing business logic

Kotlin Multiplatform (KMP) enables sharing Kotlin code across different platforms and several of our Jetpack libraries, like DataStore and Room, have already been migrated to take advantage of KMP. We use Kotlin Multiplatform within Google and recommend using KMP for sharing business logic between platforms. Learn more about it here.
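
For example, a shared KMP module can keep pure business logic in commonMain and use expect/actual declarations only for the thin platform-specific pieces. A minimal sketch (the function and class names are illustrative):

// commonMain: shared business logic, written once.
expect fun currentTimeMillis(): Long

class LastSyncFormatter {
    fun describe(lastSyncMillis: Long): String {
        val minutesAgo = (currentTimeMillis() - lastSyncMillis) / 60_000
        return "Last synced $minutesAgo minutes ago"
    }
}

// androidMain: Android-specific implementation of the expected declaration.
actual fun currentTimeMillis(): Long = System.currentTimeMillis()

// iosMain: iOS implementation backed by a platform API.
actual fun currentTimeMillis(): Long =
    (platform.Foundation.NSDate().timeIntervalSince1970 * 1000).toLong()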

#11: Compose: Shared Elements, performance improvements and more

The upcoming Compose June ‘24 release is packed with the features you’ve been asking for! Shared element transitions, lazy list item reordering animations, strong skipping mode, performance improvements, a new lazy flow layout and more. Read more about it in our blog.

#12: Android Studio: the latest preview, with Gemini and more

Android Studio Koala 🐨 Feature Drop (2024.1.2), available today in the canary channel, builds on top of IntelliJ 2024.1 and adds new innovative features unlocked by Gemini, such as insights for crashes in App Quality Insights, code transformations, and a Gemini API starter template to get you started with Gemini quickly. Additionally, new features such as USB speed detection, a shortcut UI to control device settings, a new way to sign into Google services, an updated and speedier task-centric UI for the profilers, and a deep integration with the Google Play SDK Index are intended to make the development process extremely productive. Read more here.

And the latest from the world of Mobile

#13: Grow your business with the latest Google Play updates

Discover new ways to attract and engage users with enhanced custom store listings. Optimize revenue with expanded payment options. Reinforce trust through secure, high-quality experiences made easier with our latest SDK Console improvements. Learn about these updates and more, including our new vertical approach, in our blog.

#14: Simplify app compliance with Checks

Streamline your app's privacy compliance with Checks, Google's AI-powered compliance solution! Checks empowers developers to swiftly identify, address, and resolve privacy issues, enabling you to launch apps faster and with confidence. Harness the power of automation with Checks' intelligent reports, saving you valuable time and resources. Get started now at checks.google.com.

#15: And of course, Android 15

…but for that, you’ll have to stay tuned tomorrow, when we’ve got a bit more up our sleeve!

AndroidX moving to minSdkVersion 19

Posted by Aurimas Liutikas, Software Engineer on AndroidX

AndroidX libraries are moving to a default minimum supported Android API level of 19 (previously 14) starting with releases in October 2023. According to Play Store check-in data, nearly all Android users have devices on API 19 or newer, so it’s no longer necessary to support legacy versions. This change helps AndroidX libraries maximize the potential number of users for app developers and aligns with Google Play Services and the Android NDK.

If you are currently supporting a lower minSdkVersion, we recommend increasing that value to 19 and cleaning up any code that exists only to support earlier versions. If you are unable to do so for business reasons, you should stay on the previous versions of AndroidX.
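
If you are able to raise it, the change itself is a one-line update in your module’s build script (shown here with the Gradle Kotlin DSL; the Groovy property is the same):

// build.gradle.kts (module level)
android {
    defaultConfig {
        // Align with the new AndroidX baseline.
        minSdk = 19
    }
}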

What’s new in the Jetpack Compose August ’23 release

Posted by Ben Trengrove, Android Developer Relations Engineer

Today, as part of the Compose August ‘23 Bill of Materials, we’re releasing version 1.5 of Jetpack Compose, Android's modern, native UI toolkit that is used by apps such as Play Store, Dropbox, and Airbnb. This release largely focuses on performance improvements, as major parts of our modifier refactor we began in the October ‘22 release are now merged.

Performance

When we first released Compose 1.0 in 2021, we were focused on getting the API surface right to provide a solid foundation to build on. We wanted a powerful and expressive API that was easy to use and stable so that developers could confidently use it in production. As we continue to improve the API, performance is our top priority, and in the August ‘23 release, we have landed many performance improvements.

Modifier performance

Modifiers see large performance improvements in this release, up to an 80% improvement in composition time. The best part is that, thanks to our work getting the API surface right in the first release, most apps will see these benefits just by upgrading to the August ‘23 release.

We have a suite of benchmarks that are used to monitor for regressions and to inform our investments in improving performance. After the initial 1.0 release of Compose, we began focusing on where we could make improvements. The benchmarks showed that we were spending more time than anticipated materializing modifiers. Modifiers make up the vast majority of a composition tree and, as such, were the largest contributor to initial composition time in Compose. Refactoring modifiers to a more efficient design began under the hood in the October ‘22 release.

The October ‘22 release included new APIs and performance improvements in our lowest level module, Compose UI. Modifiers build on top of each other so we started migrating our low level modifiers in Compose Foundation in the next release, March ‘23. This included graphicsLayer, low level focus modifiers, padding, and offset. These low level modifiers are used by other highly utilized modifiers such as Clickable, and are also utilized by many framework Composables such as Text. Migrating modifiers in the March ‘23 release brought performance improvements to those components, but the real gains would come when we could migrate the higher level modifiers and composables themselves to the new modifier system.

In the August ‘23 release, we have begun migrating the Clickable modifier to the new modifier system, bringing substantial improvements to composition time, in some cases up to 80%. This is especially relevant in lazy lists that contain clickable elements such as buttons. Modifier.indication, used by Clickable, is still in the process of being migrated, so we anticipate further gains to come in future releases.

As part of this work, we identified a use case for composed modifiers that wasn’t covered in the original refactor and added a new API to create Modifier.Node elements that consume CompositionLocal instances.

We are now working on documentation to guide you through migrating your own modifiers to the new Modifier.Node API. To get started right away, you can reference the samples in our repository.
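
Until that documentation lands, here is a minimal sketch of the shape of a Modifier.Node-based modifier: a node class that holds state, a ModifierNodeElement that creates and updates it, and a small extension function as the public entry point. The behavior (logging on attach) is purely illustrative; nodes that need a CompositionLocal can additionally implement CompositionLocalConsumerModifierNode and read values with currentValueOf.

// Node: holds state and behavior, and is reused across recompositions.
private class LogOnAttachNode(var tag: String) : Modifier.Node() {
    override fun onAttach() {
        println("Modifier attached: $tag")
    }
}

// Element: creates the node once and updates it when inputs change.
private data class LogOnAttachElement(val tag: String) : ModifierNodeElement<LogOnAttachNode>() {
    override fun create() = LogOnAttachNode(tag)
    override fun update(node: LogOnAttachNode) {
        node.tag = tag
    }
}

// Public API: the modifier factory your composables call.
fun Modifier.logOnAttach(tag: String): Modifier = this then LogOnAttachElement(tag)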

Learn more about the rationale behind the changes in the Compose Modifiers deep dive talk from Android Dev Summit ‘22.

Memory

This release includes a number of improvements in memory usage. We have taken a hard look at allocations happening across different Compose APIs and have reduced the total allocations in a number of areas, especially in the graphics stack and vector resource loading. This not only reduces the memory footprint of Compose, but also directly improves performance, as we spend less time allocating memory and reduce garbage collection.

In addition, we fixed a memory leak when using ComposeView, which will benefit all apps but especially those that use multi-activity architecture or large amounts of View/Compose interop.

Text

BasicText has moved to a new rendering system backed by the modifier work, which has brought an average gain of 22% in initial composition time and up to a 70% gain in one benchmark of complex layouts involving text.

A number of Text APIs have also been stabilized.

Improvements and fixes for core features

We have also shipped new features and improvements in our core APIs as well as stabilizing some APIs:

  • LazyStaggeredGrid is now stable.
  • Added asComposePaint API to replace toComposePaint as the returned object wraps the original android.graphics.Paint.
  • Added IntermediateMeasurePolicy to support lookahead in SubcomposeLayout.
  • Added onInterceptKeyBeforeSoftKeyboard modifier to intercept key events before the soft keyboard.

Get started!

We’re grateful for all of the bug reports and feature requests submitted to our issue tracker — they help us to improve Compose and build the APIs you need. Continue providing your feedback, and help us make Compose better!

Wondering what’s next? Check out our roadmap to see the features we’re currently thinking about and working on. We can’t wait to see what you build next!

Happy composing!

Compose for Wear OS and Tiles 1.2 libraries are now stable: check out new features!

Posted by Anna Bernbaum, Product Manager and Kseniia Shumelchyk, Android Developer Relations Engineer

We’re excited to announce that version 1.2 of the Compose for Wear OS and Wear Tiles libraries has reached the stable milestone. This makes it easier than ever to use these modern APIs to build beautiful and engaging apps for Wear OS.

We continue to evolve Android Jetpack libraries for Wear OS with new features and improvements to streamline development, including support for the latest Wear OS 4 release.

Many developers are already leveraging the powerful tools and intuitive APIs to create exceptional experiences for Wear OS. Partners like Peloton and Deezer were able to quickly build a watch experience and are seeing the impact on their feature adoption and user engagement.

"The Wear OS app was our first usage of Compose in production, we really enjoyed how much more productive it made us.” 

– Stefan Haacker, a senior Android engineer at Peloton.

Compose for Wear OS and Wear Tiles complement one another. Use Wear Tiles to define the experience in your app’s tiles, and use Compose for Wear OS to build UIs across the more detailed screens in your app. Both sets of APIs offer material components and layouts that ensure your app experience on Wear OS is coherent and follows our best practices.

Now, let’s look into key features of version 1.2 of Jetpack libraries for Wear OS.

Compose for Wear OS 1.2 release

Compose for Wear OS version 1.2 contains new components and brings improvements to tooling, as well as to the usability and accessibility of existing components:

Expandable Items

The new expandableItem, expandableItems and expandableButton components provide a simple way to fold and unfold content on demand. Use these components to hide detailed information on long pages or expanded sections by default. This design pattern allows users to focus on essential content and choose when to view the more detailed information.

This pattern enables apps to include high-density content while preserving the key principles of wearables – compactness and glanceability.


Moving images of expanding list and expanding text using the new component
Example of expanding list and expanding text using the new component

The component can be used for expanding lists within ScalingLazyColumn, where expandableButton collapses after the content in expandableItems is revealed in one smooth motion. Another use case is expanding the content of a single item, such as Text, that would otherwise contain too many lines to show all at once when the screen first loads.
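
A rough sketch of the expanding-list case, assuming the expandable APIs from androidx.wear.compose:compose-foundation 1.2 (rememberExpandableState, expandableItems, and expandableButton); double-check the exact signatures against the release you use, and note that WorkoutChip and workouts are placeholders:

val expandableState = rememberExpandableState()

ScalingLazyColumn {
    item { ListHeader { Text("Today's workouts") } }

    // Hidden until the state is expanded.
    expandableItems(expandableState, count = workouts.size) { index ->
        WorkoutChip(workouts[index]) // placeholder composable
    }

    // Shown while collapsed; goes away once the extra items are revealed.
    expandableButton(expandableState) {
        CompactChip(
            onClick = { expandableState.expanded = true },
            label = { Text("Show more") }
        )
    }
}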

Swipe to Reveal

A new experimental API has been added to support the SwipeToReveal pattern, as a way to add up to 2 secondary actions when the composable is swiped to the left. It also provides support for users to undo the secondary actions that they take. This component is intended for use cases where the existing ‘long press’ pattern is not ideal.


Moving images showing SwipeToReveal implementation with two actions (left) and single action with undo (right)
SwipeToReveal implementation with two actions (left) and single action with undo (right)

Note that this feature is distinct from swipe-to-dismiss, which is used to navigate back to the previous screen.

Compose Previews for Wear OS

In version 1.2 we’ve added device configurations to the set of Compose Preview annotations that you use when evaluating how a design looks and behaves on a variety of devices.

We added a number of custom Wear Preview annotations for different watch shapes and sizes: WearPreviewSmallRound, WearPreviewLargeRound, WearPreviewSquare. We’ve also added the WearPreviewDevices, WearPreviewFontScales annotations to check your app against multiple device configurations and types at once. Use these new annotations to instantly verify how your app’s layout behaves on a variety of Wear OS devices.

Image showing WearPreviewDevices and WearPreviewFontScales annotations used for Horologist VolumeScreen preview
WearPreviewDevices and WearPreviewFontScales annotations used for Horologist VolumeScreen preview

Wear Compose tooling is available within a separate dependency androidx.wear.compose.ui.tooling.preview that you’ll need to include in addition to general Compose dependencies.
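
With that dependency added, the annotations are applied like any other Compose preview; for example (GreetingScreen stands in for your own composable):

// Annotations come from the androidx.wear.compose.ui.tooling.preview package.
@WearPreviewDevices
@WearPreviewFontScales
@Composable
fun GreetingScreenPreview() {
    GreetingScreen(name = "Wear OS")
}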

UX and accessibility improvements

The 1.2 release also introduced numerous improvements for user experience and accessibility:

  • Reduce-motion setting is now supported. When the setting is switched on, it disables scaling and fading animations in ScalingLazyColumn, and turns off the shimmering effect and wipe-off motion on placeholders.
  • HierarchicalFocusCoordinator - a new experimental composable that enables marking sub-trees of the composition as focus enabled or focus disabled. Use this to control which element receives rotary scroll events, such as multiple ScalingLazyColumns in a HorizontalPager.
  • PickerGroup - a new composable designed to combine multiple pickers together. It handles focus between the pickers using the HierarchicalFocusCoordinator API and enables auto-centering of Picker items. It’s already integrated in prebuilt Date and Time pickers from Horologist: check out some examples.
  • Picker has a new userScrollEnabled parameter, which determines if picker should be scrollable and disables scrolling when not focused.
  • The shimmer and wipe-off animations for placeholder now apply the wipe-off effect immediately when the content is ready.
  • Stepper has an additional parameter, enableRangeSemantics, that allows customization of semantics, such as disabling default range semantics when required.

Other changes

ScalingLazyColumn and associated classes have migrated from the material package to the foundation.lazy package, in preparation for a new Material3 library. You can use this migration script to update your code seamlessly.

The Horologist library enhances the implementation of snap behavior to a ScalingLazyColumn, TimePicker and DatePicker when the user interacts with a rotary crown. The rotaryWithFling modifier was deprecated in favor of rotaryWithScroll which includes fling behavior by default. Check out rotaryWithScroll and rotaryWithSnap reference documentation for details.


Moving image of Snap and fling behavior for scrolling list
Snap and fling behavior for scrolling list

Tiles 1.2 release

Tiles are designed to give users fast, predictable access to the information and actions they rely on most. Version 1.2 of the Jetpack Tiles Library introduces support for platform data bindings and animations so you can provide even more responsive experiences to your users.

Moving image of Tiles carousel on Wear Os
Tiles carousel on Wear OS

Platform data bindings

Version 1.2 introduces support for dynamic expressions that link elements of your tile to platform data sources. If your tile uses platform data sources such as heart rate, step count, or time, your tile can be updated up to once per second.

Moving image of a tile using data binding
Examples of a tile using data binding

Animations

The new version of Tiles also adds support for animations. You can use tween animations to create smooth transitions when part of your layout changes, and use transition animations to animate elements as they appear on or disappear from the tile.

Moving images of animated tiles
Examples of animated tiles

Partial tile updates

We have also enabled partial tile updates, meaning that we will only update the part of your tile that has changed, not the entire layout. This allows you to update part of your tile while an animation is playing in another part, without disrupting that animation.

Learn more

Get hands-on experience by trying our codelab to create your first Tile, as well as the Compose for Wear OS codelab.

We’ve already updated our samples and Horologist libraries to work with the latest version of Jetpack libraries for Wear OS. Also make sure to check out the documentation for Tiles and Compose for Wear OS to learn more about best practices when building apps for wearables.

Provide feedback

We continue to evolve our APIs with the features you’ve been asking for. Please do continue providing us feedback on the issue tracker, and join the Kotlin Slack #compose-wear channel to connect with the Google team and developer community.

Start building for Wear OS now

Discover even more by taking a look at our developer site and reading the latest Wear OS announcements from Google I/O!

Introducing Jetpack Emoji Picker: A New Way to Add Emojis to Your Android App

Posted by Lin Guo, Software Engineer

The use of emojis in communication has become increasingly popular in recent years. These small icons can be used to express a wide range of emotions and can add a personal touch to messages. However, adding emojis to your Android app can be a bit of a challenge. That's where the Emoji picker library comes in. You can simply add a few lines of code to your app, and you'll be able to start using emojis right away. It's the easiest way to get started with emojis, and it will make your app more fun and expressive.

Moving image of using EmojiPicker on Google Pixel 6 Pro
Figure 1. Emoji Picker

Some useful features provided by the library

Up-to-date emojis without tofu (☐)

Every year, new emoji versions are published, and we will regularly update the library to provide these new emojis. Higher-end phones will be able to render these newer emojis without any problem. For lower-end phones, newer emoji may be displayed as a small square box called tofu (☐). The library guarantees to detect and remove them. This ensures the library is compatible across multiple Android versions/devices.

Smooth UI

The library has several optimizations that attempt to reduce startup latency and speed up scrolling experience, such as caching renderable emojis, drawing emojis asynchronously and RecyclerView optimizations.

Personalized inclusive experience

User selections are persistent in the library. Emojis that are newly chosen will be shown in the top row, making it simpler for users to find and share them. The library also offers a variety of emojis that represent different people and cultures in the variant panels. If the user chooses an emoji from one of the variant panels (Figure 2), the choice is retained and set as the default in the main panel.

Image showing diversity of characters to choose from in EmojiPicker
Figure 2. Emoji variants

Integrate emoji picker into your app in 3 steps

Step 1: Import the library in build.gradle 
dependencies {
    implementation "androidx.emoji2:emojipicker:$version"
}

Step 2: Inflate the EmojiPickerView

Optionally set emojiGridColumns and emojiGridRows based on the desired size of each emoji cell

An example that uses EmojiPickerView in XML
<androidx.emoji2.emojipicker.EmojiPickerView
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:emojiGridColumns="9" />

A very simple emoji picker should now be presented in your app! For the next step, we assume you would like to do something with the picked emoji.


Step 3: Provide listener to the picked emoji
// a listener example
emojiPickerView.setOnEmojiPickedListener {
    findViewById<EditText>(R.id.edit_text).append(it.emoji)
}

Now you have a basic functioning emoji picker. To customize it further (e.g., override some styles or provide a different behavior to the recent emoji row), please refer to our API reference and sample app.

Feel free to file a bug report or feature request to help us improve the library!

Jetpack WindowManager 1.1 is stable!

Posted by Francesco Romano, Developer Relations Engineer on Android

It’s been more than a year since the release of the Jetpack WindowManager 1.0 stable version, and many things have happened in the foldables and large screen space. Many new devices have entered the market, and many new use cases have been unlocked!

Jetpack WindowManager is one of the most important libraries for optimizing your Android app for different form factors. And this release is a major milestone that includes a number of new features and improvements.

Let’s recap all the use cases covered by the Jetpack WindowManager library.

Get window metrics (and size classes!)

Historically, developers relied on the device display size to decide the layout of their apps, but with the availability of different form factors (such as foldables) and display modes (such as multi-window and multi-display), information about the size of the app window, rather than the device display, has become essential.

The Jetpack WindowManager WindowMetricsCalculator interface provides the source of truth to measure how much screen space is currently available for your app.

Built on top of that, the window size classes are a set of opinionated viewport breakpoints that help you design, develop, and test responsive and adaptive application layouts. The breakpoints have been chosen specifically to balance layout simplicity with the flexibility to optimize your app for unique cases.

With Jetpack Compose, use window size classes by importing them from the androidx.compose.material3 library, which uses WindowMetricsCalculator internally.
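
In a Compose app, that typically looks like the following sketch (the API is still marked experimental in material3, and CompactLayout, MediumLayout, and ExpandedLayout are placeholders for your own composables):

class MainActivity : ComponentActivity() {
    @OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            // Recomputed when the window is resized, folded, or put into multi-window.
            val windowSizeClass = calculateWindowSizeClass(this)
            when (windowSizeClass.widthSizeClass) {
                WindowWidthSizeClass.Compact -> CompactLayout()
                WindowWidthSizeClass.Medium -> MediumLayout()
                else -> ExpandedLayout()
            }
        }
    }
}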

For View-based apps, you can use the following code snippet to compute the window size classes:

private fun computeWindowSizeClasses() {
    val metrics = WindowMetricsCalculator.getOrCreate()
        .computeCurrentWindowMetrics(this)

    val widthDp = metrics.bounds.width() / resources.displayMetrics.density
    val widthWindowSizeClass = when {
        widthDp < 600f -> WindowSizeClass.COMPACT
        widthDp < 840f -> WindowSizeClass.MEDIUM
        else -> WindowSizeClass.EXPANDED
    }

    val heightDp = metrics.bounds.height() / resources.displayMetrics.density
    val heightWindowSizeClass = when {
        heightDp < 480f -> WindowSizeClass.COMPACT
        heightDp < 900f -> WindowSizeClass.MEDIUM
        else -> WindowSizeClass.EXPANDED
    }
}

To learn more, see our Support different screen sizes developer guide.

Make your app fold aware

Jetpack WindowManager also provides all the APIs you need to optimize the layout for foldable devices.

In particular, use WindowInfoTracker to query FoldingFeature information, such as:

  • state: The folded state of the device, FLAT or HALF_OPENED
  • orientation: The orientation of the fold or device hinge, HORIZONTAL or VERTICAL
  • occlusion type: Whether the fold or hinge conceals part of the display, NONE or FULL
  • is separating: Whether the fold or hinge creates two logical display areas, true or false
  • bounds: The bounding rectangle of the feature within the application window (inherited from DisplayFeature)

You can access this data through a Flow:

override fun onCreate(savedInstanceState: Bundle?) {
    ...
    lifecycleScope.launch(Dispatchers.Main) {
        lifecycle.repeatOnLifecycle(Lifecycle.State.STARTED) {
            WindowInfoTracker.getOrCreate(this@MainActivity)
                .windowLayoutInfo(this@MainActivity)
                .collect { layoutInfo ->
                    // New posture information
                    val foldingFeature = layoutInfo.displayFeatures
                    // use the folding feature to update the layout
                }
        }
    }
}

Once you collect the FoldingFeature info, you can use the data to create an optimized layout for the current device state, for example, by implementing tabletop mode! You can see a tabletop mode example in MediaPlayerActivity.kt.
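
For example, a common tabletop-mode check looks for a half-opened device with a horizontal fold, then splits the UI around the hinge bounds:

val foldingFeature = layoutInfo.displayFeatures
    .filterIsInstance<FoldingFeature>()
    .firstOrNull()

val isTableTop = foldingFeature != null &&
    foldingFeature.state == FoldingFeature.State.HALF_OPENED &&
    foldingFeature.orientation == FoldingFeature.Orientation.HORIZONTAL

if (isTableTop) {
    // Put the main content above the fold and the controls below it,
    // using foldingFeature.bounds to position the divider.
} else {
    // Use the regular single-pane layout.
}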

A great place to start learning about foldables is our codelab: Support foldable and dual-screen devices with Jetpack WindowManager.

Show two Activities side by side

Last, but not least, you can use the latest stable Jetpack WindowManager API: activity embedding.

Available since Android 12L, activity embedding enables developers with legacy multi-activity architectures to display multiple activities from the same application, or even from multiple applications, side by side on large screens.

It’s a great way to implement list-detail layouts with minimal or no code changes.

Note: Modern Android Development (MAD) recommends using a single-activity architecture based on Jetpack APIs, including Jetpack Compose. If your app uses fragments, check out SlidingPaneLayout. Activity embedding is designed for multiple-activity, legacy apps that can't be easily updated to MAD.

It is also the biggest change in the library, as the activity embedding APIs are now stable in 1.1!

Not only that, but the API is now richer in features, as it enables you to:

  • Modify the behavior of the split screen (split ratio, rules, finishing behavior)
  • Define placeholders
  • Check (and change) the split state at runtime
  • Implement horizontal splits
  • Start a modal in full window
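
As a hedged sketch of the runtime configuration (the activity class names are placeholders, and you can equally declare the same rules in XML and load them with RuleController):

val filter = SplitPairFilter(
    ComponentName(this, ListMenuActivity::class.java),   // placeholder
    ComponentName(this, DetailActivity::class.java),     // placeholder
    /* secondaryActivityIntentAction = */ null
)

val splitPairRule = SplitPairRule.Builder(setOf(filter))
    .setDefaultSplitAttributes(
        SplitAttributes.Builder()
            .setSplitType(SplitAttributes.SplitType.ratio(0.33f))
            .setLayoutDirection(SplitAttributes.LayoutDirection.LEFT_TO_RIGHT)
            .build()
    )
    .setFinishPrimaryWithSecondary(SplitRule.FinishBehavior.NEVER)
    .setFinishSecondaryWithPrimary(SplitRule.FinishBehavior.ALWAYS)
    .build()

RuleController.getInstance(this).addRule(splitPairRule)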

Interested in exploring activity embedding? We’ve got you covered with a dedicated codelab: Build a list-detail layout with activity embedding.

Many apps are already using activity embedding in production, for example, WhatsApp:

Image of WhatsApp on a large screen device showing activity embedding

And eBay!

Image of Ebay on a large screen device showing activity embedding

Implementing list-detail layouts with multiple activities is not the only use case of activity embedding!

Starting from Android 13 (API level 33), apps can embed activities from other apps.

Cross‑application activity embedding enables visual integration of activities from multiple Android applications. The system displays an activity of the host app and an embedded activity from another app on screen side by side or top and bottom, just as in single-app activity embedding.

Host apps implement cross-app activity embedding the same way they implement single-app activity embedding, but the embedded app must opt-in for security reasons.

You can learn more about cross-application embedding in the Activity embedding developer guide.

Conclusion

Jetpack WindowManager is one of the most important libraries you should learn if you want to optimize your app’s user experience for different form factors.

WindowManager is also adding new, interesting features with every release, so keep an eye out for what’s coming in version 1.2.

See the Jetpack WindowManager documentation and sample app to get started with WindowManager today!

CameraX 1.3 is now in Beta

Posted by Donovan McMurray, Camera Developer Relations Engineer

CameraX, the Android Jetpack camera library which helps you create a best-in-class experience that works consistently across Android versions and devices, is becoming even more helpful with its 1.3 release. CameraX is already used in a growing number of Android apps, encompassing a wide range of use cases from straightforward and performant camera interactions to advanced image processing and beyond.

CameraX 1.3 opens up even more advanced capabilities. With the dual concurrent camera feature, apps can operate two cameras at the same time. Additionally, 1.3 makes it simple to delight users with new HDR video capabilities. You can also now add graphics library transformations (for example, with OpenGL or Vulkan) to the Preview, ImageCapture, and VideoCapture UseCases to apply filters and effects. There are also many other video improvements.

CameraX version 1.3 is officially in Beta as of today, so let’s get right into the details!

Dual concurrent camera

CameraX makes complex camera functionality easy to use, and the new dual concurrent camera feature is no exception. CameraX handles the low-level details like ensuring the concurrent camera streams are opened and closed in the correct order. In CameraX, binding dual concurrent cameras is not that different from binding a single camera.

First, check which cameras support a concurrent connection with getAvailableConcurrentCameraInfos(). A common scenario is to select a front-facing and a back-facing camera.

var primaryCameraSelector: CameraSelector? = null
var secondaryCameraSelector: CameraSelector? = null

for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
    primaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_FRONT
    }?.cameraSelector
    secondaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_BACK
    }?.cameraSelector

    if (primaryCameraSelector == null || secondaryCameraSelector == null) {
        // If either a primary or secondary selector wasn't found, reset both
        // to move on to the next list of CameraInfos.
        primaryCameraSelector = null
        secondaryCameraSelector = null
    } else {
        // If both primary and secondary camera selectors were found, we can
        // conclude the search.
        break
    }
}

if (primaryCameraSelector == null || secondaryCameraSelector == null) {
    // Front and back concurrent camera not available. Handle accordingly.
}

Then, create a SingleCameraConfig for each camera, passing in each camera selector from before, along with your UseCaseGroup and LifecycleOwner. Finally, call bindToLifecycle() on your CameraProvider with both SingleCameraConfigs in a list.


val primary = ConcurrentCamera.SingleCameraConfig(
    primaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)

val secondary = ConcurrentCamera.SingleCameraConfig(
    secondaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)

val concurrentCamera = cameraProvider.bindToLifecycle(
    listOf(primary, secondary)
)

For compatibility reasons, dual concurrent camera supports each camera being bound to 2 or fewer UseCases with a maximum resolution of 720p or 1440p, depending on the device.

HDR video

CameraX 1.3 also adds support for 10-bit video streaming along with HDR profiles, giving you the ability to capture video with greater detail, color and contrast than previously available. You can use the VideoCapture.Builder.setDynamicRange() method to set a number of configurations. There are several pre-configured values:

  • HLG_10_BIT - A 10-bit high-dynamic range with HLG encoding. This is the recommended HDR encoding to use because every device that supports HDR capture will support HLG10. See the Check for HDR support guide for details.
  • HDR10_10_BIT - A 10-bit high-dynamic range with HDR10 encoding.
  • HDR10_PLUS_10_BIT - A 10-bit high-dynamic range with HDR10+ encoding.
  • DOLBY_VISION_10_BIT - A 10-bit high-dynamic range with Dolby Vision encoding.
  • DOLBY_VISION_8_BIT - An 8-bit high-dynamic range with Dolby Vision encoding.

First, loop through the available CameraInfos to find the first one that supports HDR. You can add additional camera selection criteria here.

var supportedHdrEncoding: DynamicRange? = null

val hdrCameraInfo = cameraProvider.availableCameraInfos
    .firstOrNull { cameraInfo ->
        val videoCapabilities = Recorder.getVideoCapabilities(cameraInfo)
        val supportedDynamicRanges =
            videoCapabilities.getSupportedDynamicRanges()
        supportedHdrEncoding = supportedDynamicRanges.firstOrNull {
            it != DynamicRange.SDR // Ensure an HDR encoding is found
        }
        supportedHdrEncoding != null
    }

val cameraSelector = hdrCameraInfo?.cameraSelector
    ?: CameraSelector.DEFAULT_BACK_CAMERA

Then, set up a Recorder and a VideoCapture UseCase. If you found a supportedHdrEncoding earlier, also call setDynamicRange() to turn on HDR in your camera app.


// Create a Recorder with Quality.HIGHEST, which will select the highest
// resolution compatible with the chosen DynamicRange.
val recorder = Recorder.Builder()
    .setQualitySelector(QualitySelector.from(Quality.HIGHEST))
    .build()

val videoCaptureBuilder = VideoCapture.Builder(recorder)
if (supportedHdrEncoding != null) {
    videoCaptureBuilder.setDynamicRange(supportedHdrEncoding!!)
}
val videoCapture = videoCaptureBuilder.build()

Effects

While CameraX makes many camera tasks easy, it also provides hooks to accomplish advanced or custom functionality. The new effects methods enable custom graphics library transformations to be applied to frames for Preview, ImageCapture, and VideoCapture.

You can define a CameraEffect to inject code into the CameraX pipeline and apply visual effects, such as a custom portrait effect. When creating your own CameraEffect via the constructor, you must specify which use cases to target (from PREVIEW, VIDEO_CAPTURE, and IMAGE_CAPTURE). You must also specify a SurfaceProcessor to implement a GPU effect for the underlying Surface. It's recommended to use a graphics API such as OpenGL or Vulkan to access the Surface. This process will block the Executor associated with the ImageCapture. An internal I/O thread is used by default, or you can set one with ImageCapture.Builder.setIoExecutor(). Note: It’s the implementation’s responsibility to be performant. For a 30fps input, each frame should be processed under 30 ms to avoid frame drops.

There is an alternative CameraEffect constructor for processing still images, since higher latency is more acceptable when processing a single image. For this constructor, you pass in an ImageProcessor, implementing the process method to return an image as detailed in the ImageProcessor.Request.getInputImage() method.

Once you’ve defined one or more CameraEffects, you can add them to your CameraX setup. If you’re using a CameraProvider, you should call UseCaseGroup.Builder.addEffect() for each CameraEffect, then build the UseCaseGroup, and pass it in to bindToLifecycle(). If you’re using a CameraController, you should pass all of your CameraEffects into setEffects().
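
With a CameraProvider, the wiring looks roughly like this, where portraitEffect stands in for a CameraEffect implementation you provide:

val useCaseGroup = UseCaseGroup.Builder()
    .addUseCase(preview)
    .addUseCase(videoCapture)
    .addEffect(portraitEffect) // your CameraEffect subclass
    .build()

cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, useCaseGroup)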

Additional video features

CameraX 1.3 has many additional highly-requested video features that we’re excited to add support for.

With VideoCapture.Builder.setMirrorMode(), you can control when video recordings are reflected horizontally. You can set MIRROR_MODE_OFF (the default), MIRROR_MODE_ON, and MIRROR_MODE_ON_FRONT_ONLY (useful for matching the mirror state of the Preview, which is mirrored on front-facing cameras). Note: in an app that only uses the front-facing camera, MIRROR_MODE_ON and MIRROR_MODE_ON_FRONT_ONLY are equivalent.
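
For example, to match the mirrored Preview on front-facing cameras (a short sketch; recorder is the Recorder you already configured):

val videoCapture = VideoCapture.Builder(recorder)
    .setMirrorMode(MirrorMode.MIRROR_MODE_ON_FRONT_ONLY)
    .build()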

The PendingRecording.asPersistentRecording() method prevents a video from being stopped by lifecycle events or by the explicit unbinding of the VideoCapture use case that the recording's Recorder is attached to. This is useful if you want to bind to a different camera and continue the video recording with that camera. When this option is enabled, you must explicitly call Recording.stop() or Recording.close() to end the recording.

For videos that are set to record audio via PendingRecording.withAudioEnabled(), you can now call Recording.mute() while the recording is in progress. Pass in a boolean to specify whether to mute or unmute the audio, and CameraX will insert silence during the muted portions to ensure the audio stays aligned with the video.

AudioStats now has a getAudioAmplitude() method, which is perfect for showing a visual indicator to users that audio is being recorded. While a video recording is in progress, each VideoRecordEvent can be used to access RecordingStats, which in turn contains the AudioStats object.
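
A short sketch of both capabilities, assuming recording is an in-progress Recording started with audio enabled and audioLevelIndicator is your own UI element:

// Toggle audio mid-recording; CameraX inserts silence for the muted portions
// so the audio track stays aligned with the video.
recording.mute(true)   // mute
recording.mute(false)  // unmute

// Inside your VideoRecordEvent listener, drive a simple level indicator.
fun onVideoRecordEvent(event: VideoRecordEvent) {
    val amplitude = event.recordingStats.audioStats.audioAmplitude
    audioLevelIndicator.setLevel(amplitude) // placeholder UI call
}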

Next steps

Check the full release notes for CameraX 1.3 for more details on the features described here and more! If you’re ready to try out CameraX 1.3, update your project’s CameraX dependency to 1.3.0-beta01 (or the latest version at the time you’re reading this).

If you would like to provide feedback on any of these features or CameraX in general, please create a CameraX issue. As always, you can also reach out on our CameraX Discussion Group.