
CameraX update makes dual concurrent camera even easier

Posted by Donovan McMurray – Developer Relations Engineer

CameraX, Android's Jetpack camera library, is getting an exciting update to its Dual Concurrent Camera feature, which lets you stream from 2 different cameras at the same time, making it even easier to integrate into your app. The original version of Dual Concurrent Camera was released in CameraX 1.3.0, and it was already a huge leap in making this feature easier to implement.

Starting with 1.5.0-alpha01, CameraX will now handle the composition of the 2 camera streams as well. This update adds functionality; it doesn’t remove any prior behavior, nor is it a breaking change to your existing Dual Concurrent Camera code. To tell CameraX to handle the composition, simply use the new SingleCameraConfig constructor, which takes a CompositionSettings object as an additional parameter. Since you’ll be creating 2 SingleCameraConfigs, use the same constructor for both.

Nothing has changed in the way you check for concurrent camera support from the prior version of this feature. As a reminder, here is what that code looks like.

// Set up primary and secondary camera selectors if supported on device.
var primaryCameraSelector: CameraSelector? = null
var secondaryCameraSelector: CameraSelector? = null

for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
    // Use firstOrNull so a missing lens yields null instead of throwing,
    // letting the null check below move on to the next list of CameraInfos.
    primaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_FRONT
    }?.cameraSelector
    secondaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_BACK
    }?.cameraSelector

    if (primaryCameraSelector == null || secondaryCameraSelector == null) {
        // If either a primary or secondary selector wasn't found, reset both
        // to move on to the next list of CameraInfos.
        primaryCameraSelector = null
        secondaryCameraSelector = null
    } else {
        // If both primary and secondary camera selectors were found, we can
        // conclude the search.
        break
    }
}

if (primaryCameraSelector == null || secondaryCameraSelector == null) {
    // Front and back concurrent camera not available. Handle accordingly.
}

Here’s the updated code snippet showing how to implement picture-in-picture, with the front camera stream scaled down to fit into the lower right corner. In this example, CameraX handles the composition of the camera streams.

// If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
// and compose them in a picture-in-picture layout.
val primary = SingleCameraConfig(
    cameraSelectorPrimary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.0f, 0.0f)
        .setScale(1.0f, 1.0f)
        .build(),
    lifecycleOwner
)
val secondary = SingleCameraConfig(
    cameraSelectorSecondary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(2 / 3f - 0.1f, -2 / 3f + 0.1f)
        .setScale(1 / 3f, 1 / 3f)
        .build(),
    lifecycleOwner
)

// Bind to lifecycle
val concurrentCamera = cameraProvider.bindToLifecycle(listOf(primary, secondary))

You are not constrained to a picture-in-picture layout. For instance, you could define a side-by-side layout by setting the offsets and scaling factors accordingly. You want to keep both dimensions scaled by the same amount to avoid a stretched preview. Here’s how that might look.

// If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
// and compose them in a side-by-side layout.
val primary = SingleCameraConfig(
    cameraSelectorPrimary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.0f, 0.25f)
        .setScale(0.5f, 0.5f)
        .build(),
    lifecycleOwner
)
val secondary = SingleCameraConfig(
    cameraSelectorSecondary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.5f, 0.25f)
        .setScale(0.5f, 0.5f)
        .build(),
    lifecycleOwner
)

// Bind to lifecycle
val concurrentCamera = cameraProvider.bindToLifecycle(listOf(primary, secondary))

We’re excited to offer this improvement to an already developer-friendly feature. Truly the CameraX way! CompositionSettings in Dual Concurrent Camera is currently in alpha, so if you have feature requests to improve upon it before the API is locked in, please give us feedback in the CameraX Discussion Group. And check out the full CameraX 1.5.0-alpha01 release notes to see what else is new in CameraX.

Creating a responsive dashboard layout for JetLagged with Jetpack Compose

Posted by Rebecca Franks - Developer Relations Engineer

This blog post is part of our series: Adaptive Spotlight Week where we provide resources—blog posts, videos, sample code, and more—all designed to help you adapt your apps to phones, foldables, tablets, ChromeOS and even cars. You can read more in the overview of the Adaptive Spotlight Week, which will be updated throughout the week.


We’ve heard the news: creating adaptive layouts in Jetpack Compose is easier than ever. As a declarative UI toolkit, Jetpack Compose is well suited for designing and implementing layouts that adjust themselves to render content differently across a variety of sizes. By using logic coupled with Window Size Classes, Flow layouts, movableContentOf and LookaheadScope, we can build fluid, responsive layouts in Jetpack Compose.

Following the release of the JetLagged sample at Google I/O 2023, we decided to add more examples to it. Specifically, we wanted to demonstrate how Compose can be used to create a beautiful dashboard-like layout. This article shows how we’ve achieved this.

Moving image demonstrating responsive design in Jetlagged where items animate positions automatically
Responsive design in Jetlagged where items animate positions automatically

Use FlowRow and FlowColumn to build layouts that respond to different screen sizes

Flow layouts (FlowRow and FlowColumn) make it much easier to implement responsive, reflowing layouts that respond to screen sizes and automatically flow content to a new line when the available space in a row or column is full.

In the JetLagged example, we use a FlowRow with maxItemsInEachRow set to 3. This maximizes the space available for the dashboard and places each individual card in a row or column where the space is used wisely. On mobile devices we mostly have 1 card per row; only when the items are smaller are two visible per row.

Some cards use Modifiers that don’t specify an exact size, allowing them to grow to fill the available width up to a maximum, for instance Modifier.widthIn(max = 400.dp); others set a fixed size, like Modifier.width(200.dp).

FlowRow(
    modifier = Modifier.fillMaxSize(),
    horizontalArrangement = Arrangement.Center,
    verticalArrangement = Arrangement.Center,
    maxItemsInEachRow = 3
) {
    Box(modifier = Modifier.widthIn(max = 400.dp))
    Box(modifier = Modifier.width(200.dp))
    Box(modifier = Modifier.size(200.dp))
    // etc 
}

We could also leverage the weight modifier to divide up the remaining area of a row or column, as shown in the sketch below; check out the documentation on item weights for more information.
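For illustration, here’s a minimal, hypothetical sketch (the boxes and sizes are placeholders, not JetLagged code) of one fixed-width item followed by two items that split the row’s leftover width 1:2 via weights:

Row(modifier = Modifier.fillMaxWidth()) {
    // Fixed-width item, measured before the weighted items.
    Box(modifier = Modifier.width(100.dp).height(100.dp))
    // The remaining width is divided 1:2 between the weighted items.
    Box(modifier = Modifier.weight(1f).height(100.dp))
    Box(modifier = Modifier.weight(2f).height(100.dp))
}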


Use WindowSizeClasses to differentiate between devices

WindowSizeClasses are useful for building up breakpoints in our UI for when elements should display differently. In JetLagged, we use the classes to know whether we should include cards in Columns or keep them flowing one after the other.
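If your composable doesn’t already receive a window size class, one way to compute it is with the material3-window-size-class library. This is a sketch under that assumption; JetLagged’s actual wiring may differ:

import androidx.activity.ComponentActivity
import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
import androidx.compose.runtime.Composable

@OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
@Composable
fun DashboardScreen(activity: ComponentActivity) {
    // Recomputed on configuration changes such as rotation or window resizing.
    val widthSizeClass = calculateWindowSizeClass(activity).widthSizeClass
    // Pass widthSizeClass down to choose between FlowRow and FlowColumn arrangements.
}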

For example, if the width size class is WindowWidthSizeClass.COMPACT, we keep items in the same FlowRow, whereas if the layout is larger than compact, they are placed in a FlowColumn nested inside the FlowRow:

FlowRow(
    modifier = Modifier.fillMaxSize(),
    horizontalArrangement = Arrangement.Center,
    verticalArrangement = Arrangement.Center,
    maxItemsInEachRow = 3
) {
    JetLaggedSleepGraphCard(uiState.value.sleepGraphData)
    if (windowSizeClass == WindowWidthSizeClass.COMPACT) {
        AverageTimeInBedCard()
        AverageTimeAsleepCard()
    } else {
        FlowColumn {
            AverageTimeInBedCard()
            AverageTimeAsleepCard()
        }
    }
    if (windowSizeClass == WindowWidthSizeClass.COMPACT) {
        WellnessCard(uiState.value.wellnessData)
        HeartRateCard(uiState.value.heartRateData)
    } else {
        FlowColumn {
            WellnessCard(uiState.value.wellnessData)
            HeartRateCard(uiState.value.heartRateData)
        }
    }
}

From the above logic, the UI will appear in the following ways on different device sizes:

Side by side comparisons of the differences in UI on three different sized devices
Different UI on different sized devices

Use movableContentOf to maintain bits of UI state across screen resizes

Movable content allows you to save the contents of a Composable to move it around your layout hierarchy without losing state. It should be used for content that is perceived to be the same - just in a different location on screen.

Imagine this: you’re moving house to a different city, and you pack a box with a clock inside it. When you open the box in your new home, you’d see that the clock is still ticking from where it left off. It might not show the correct time for your new timezone, but it has definitely ticked on from where you left it. The contents of the box don’t reset their internal state when the box is moved around.

What if you could use the same concept in Compose to move items on screen without losing their internal state?

Consider the following scenario: we define Tile composables that display a value animating infinitely between 0 and 100 over 5000ms.


@Composable
fun Tile1() {
    val repeatingAnimation = rememberInfiniteTransition()

    val float = repeatingAnimation.animateFloat(
        initialValue = 0f,
        targetValue = 100f,
        animationSpec = infiniteRepeatable(
            repeatMode = RepeatMode.Reverse,
            animation = tween(5000)
        )
    )
    Box(
        modifier = Modifier
            .size(100.dp)
            .background(purple, RoundedCornerShape(8.dp))
    ) {
        Text(
            "Tile 1 ${float.value.roundToInt()}",
            modifier = Modifier.align(Alignment.Center)
        )
    }
}

We then display them on screen using a Column Layout - showing the infinite animations as they go:

A purple tile stacked in a column above a pink tile. Both tiles show a counter, counting up from 0 to 100 and back down to 0

But what if we wanted to lay the tiles out differently when the phone is in a different orientation (or has a different screen size), without the animation values stopping and restarting? Something like the following:

@Composable
fun WithoutMovableContentDemo() {
    val mode = remember {
        mutableStateOf(Mode.Portrait)
    }
    if (mode.value == Mode.Landscape) {
        Row {
           Tile1()
           Tile2()
        }
    } else {
        Column {
           Tile1()
           Tile2()
        }
    }
}

This looks pretty standard, but running this on a device, we can see that switching between the two layouts causes our animations to restart.

A purple tile stacked in a column above a pink tile. Both tiles show a counter, counting upward from 0. The column changes to a row and back to a column, and the counter restarts every time the layout changes

This is the perfect case for movable content: the same Composables are on screen, just in a different location. So how do we use it? We can define our tiles in a movableContentOf block, using remember to ensure it’s saved across compositions:

val tiles = remember {
    movableContentOf {
        Tile1()
        Tile2()
    }
}

Now, instead of calling our composables directly inside the Column and Row, we call tiles():

@Composable
fun MovableContentDemo() {
    val mode = remember {
        mutableStateOf(Mode.Portrait)
    }
    val tiles = remember {
        movableContentOf {
            Tile1()
            Tile2()
        }
    }
    Box(modifier = Modifier.fillMaxSize()) {
        if (mode.value == Mode.Landscape) {
            Row {
                tiles()
            }
        } else {
            Column {
                tiles()
            }
        }

        Button(onClick = {
            if (mode.value == Mode.Portrait) {
                mode.value = Mode.Landscape
            } else {
                mode.value = Mode.Portrait
            }
        }, modifier = Modifier.align(Alignment.BottomCenter)) {
            Text("Change layout")
        }
    }
}

This will then remember the nodes generated by those Composables and preserve the internal state that these composables currently have.

A purple tile stacked in a column above a pink tile. Both tiles show a counter, counting upward from 0 to 100. The column changes to a row and back to a column, and the counter continues seamlessly when the layout changes

We can now see that our animation state is remembered across the different compositions. Our clock in the box will now keep state when it's moved around the world.

Using this concept, we can keep the animating bubble state of our cards, by placing the cards in movableContentOf:

val timeSleepSummaryCards = remember {
    movableContentOf {
        AverageTimeInBedCard()
        AverageTimeAsleepCard()
    }
}

LookaheadScope {
    FlowRow(
        modifier = Modifier.fillMaxSize(),
        horizontalArrangement = Arrangement.Center,
        verticalArrangement = Arrangement.Center,
        maxItemsInEachRow = 3
    ) {
        // ..
        if (windowSizeClass == WindowWidthSizeClass.Compact) {
            timeSleepSummaryCards()
        } else {
            FlowColumn {
                timeSleepSummaryCards()
            }
        }
        // ..
    }
}

This allows the cards’ state to be remembered, and the cards won’t be recomposed. This is evident when observing the bubbles in the background of the cards: on resizing the screen, the bubble animation continues without restarting.

A purple tile showing Average time in bed stacked in a column above a green tile showing average time asleep. Both tiles show moving bubbles. The column changes to a row and back to a column, and the bubbles continue to move across the tiles as the layout changes

Use Modifier.animateBounds() to have fluid animations between different window sizes

From the above example, we can see that state is maintained between changes in layout size (or layout itself), but the difference between the two layouts is a bit jarring. We’d like this to animate between the two states without issue.

In the latest compose-bom-alpha (2024.09.03), there is a new experimental custom Modifier, Modifier.animateBounds(). The animateBounds modifier requires a LookaheadScope.

LookaheadScope enables Compose to perform intermediate measurement passes of layout changes, notifying composables of the intermediate states between them. LookaheadScope is also used for the new shared element APIs that you may have seen recently.

To use Modifier.animateBounds(), we wrap the top-level FlowRow in a LookaheadScope, and then apply the animateBounds modifier to each card. We can also customize how the animation runs, by specifying the boundsTransform parameter to a custom spring spec:

val boundsTransform = { _: Rect, _: Rect ->
    spring(
        dampingRatio = Spring.DampingRatioNoBouncy,
        stiffness = Spring.StiffnessMedium,
        visibilityThreshold = Rect.VisibilityThreshold
    )
}


LookaheadScope {
    val animateBoundsModifier = Modifier.animateBounds(
        lookaheadScope = this@LookaheadScope,
        boundsTransform = boundsTransform
    )
    val timeSleepSummaryCards = remember {
        movableContentOf {
            AverageTimeInBedCard(animateBoundsModifier)
            AverageTimeAsleepCard(animateBoundsModifier)
        }
    }
    FlowRow(
        modifier = Modifier
            .fillMaxSize()
            .windowInsetsPadding(insets),
        horizontalArrangement = Arrangement.Center,
        verticalArrangement = Arrangement.Center,
        maxItemsInEachRow = 3
    ) {
        JetLaggedSleepGraphCard(uiState.value.sleepGraphData, animateBoundsModifier.widthIn(max = 600.dp))
        if (windowSizeClass == WindowWidthSizeClass.Compact) {
            timeSleepSummaryCards()
        } else {
            FlowColumn {
                timeSleepSummaryCards()
            }
        }
        FlowColumn {
            WellnessCard(
                wellnessData = uiState.value.wellnessData,
                modifier = animateBoundsModifier
                    .widthIn(max = 400.dp)
                    .heightIn(min = 200.dp)
            )
            HeartRateCard(
                uiState.value.heartRateData,
                animateBoundsModifier
                    .widthIn(max = 400.dp, min = 200.dp)
            )
        }
    }
}

Applying this to our layout, we can see the transition between the two states is more seamless without jarring interruptions.

A purple tile showing Average time in bed stacked in a column above a green tile showing average time asleep. Both tiles show moving bubbles. The column changes to a row and back to a column, and the bubbles continue to move across the tiles as the layout changes

Applying this logic to the whole dashboard, resizing the layout now produces a fluid UI interaction throughout the whole screen.

Moving image demonstrating responsive design in Jetlagged where items animate positions automatically

Summary

As you can see from this article, using Compose has enabled us to build a responsive dashboard-like layout by leveraging flow layouts, WindowSizeClasses, movable content and LookaheadScope. These concepts can also be used for your own layouts that may have items moving around in them too.

For more information on these different topics, be sure to check out the official documentation. For the detailed changes to JetLagged, take a look at this pull request.

Android Device Streaming, powered by Firebase, is now in Beta

Posted by Adarsh Fernando, Senior Product Manager, Android Developer Tools

Validating your app on a range of Android screens is an important step in developing a high-quality Android app. However, getting access to the device you need, when you need it, can be challenging and time consuming. From trying to reproduce a device-specific behavior on a Samsung device to testing your adaptive app layouts on the Google Pixel Fold, having the right device at the right time is critical.

To address this app developer use case, we created Android Device Streaming, powered by Firebase. With just a few clicks, you and your team can access real physical devices, such as the latest Pixel and Samsung devices, and use them in the IDE in many of the ways you would use a physical device sitting on your desk.

Animation of using Device Streaming in Android Studio
Android Device Streaming, powered by Firebase, available in Android Studio Jellyfish

Today, Android Device Streaming is in beta and is available to all Android developers using Android Studio Jellyfish or later. We’ve also added new devices to the catalog and introduced flexible pricing that provides low-cost access to the latest Android devices.

Read below to learn what changes are in this release, as well as common questions around uses, security, and pricing. However, if you want to get started right away and try Android Device Streaming at no cost, see our getting started guide.

What can you do with Android Device Streaming?

If you’ve ever used Device Mirroring, you know that Android Studio lets you see the screen of your local physical device within the IDE window. Without having to physically reach out to your device, you’re able to change the device orientation, change the posture of foldables, simulate pressing physical buttons, interact with your app, and more. Android Device Streaming leverages these same capabilities, allowing you to connect and interact with remote physical devices provided by Firebase.

Screen capture of using the debugger with Android Device Streaming
Using the Debugger with Android Device Streaming

When you use Android Studio to request a device from Android Device Streaming, the IDE establishes a secure ADB over SSL connection to the device. The connection also lets you use familiar tools in Android Studio that communicate with the device, such as the Debugger, Profiler, Device Explorer, Logcat, Compose Live Edit, and more. These tools let you more accurately validate, test, and debug the behavior of your app on real OEM hardware.

What devices would my team have access to?

Android Device Streaming gives you and your team access to a number of devices running Android versions 8.1 through 14. You have access to the latest flagship devices from top device manufacturers, such as Google Pixel and Samsung. You can expand testing your app across more form factors with access to the latest foldables and tablets, such as the Samsung Tab S8 Ultra.

Screen capture of browsing the list of devices and selecting the one you want to use in Android Studio
Browse and select devices you want to use from Android Studio

And we’re frequently adding new devices to our existing catalog of 20+ device models, such as the following recent additions:

    • Samsung Galaxy Z Fold5
    • Samsung Galaxy S23 Ultra
    • Google Pixel 8a

Without having to purchase expensive devices, each team member can access Firebase’s catalog of devices in just a few clicks, for as long as they need—giving your team confidence that your app looks great across a variety of popular devices.


Google OEM partner logos - Samsung, Google Pixel, Oppo, and Xiaomi

As we mentioned at Google I/O ‘24, we’re partnering with top Original Equipment Manufacturers (OEMs), such as Samsung, Google Pixel, Oppo, and Xiaomi, to expand device selection and availability even further in the months to come. This helps the catalog of devices grow and stay ahead of ecosystem trends, so that you can validate that your apps work great on the latest devices before they reach the majority of your users.

Is Android Device Streaming secure?

Android Device Streaming, powered by Firebase, takes the security and privacy of your device sessions very seriously. Firebase devices are hosted in secure global data centers and Android Studio uses an SSL connection to connect to the device.

A device that you’ve used to install and test your app on is never shared with another user or Google service before being completely erased and factory reset. When you’re done using a device, you can do this yourself by clicking “Return and Erase Device” to fully erase and factory reset it. The same applies if the session expires and the device is returned automatically.

Screen capture of the Return and Erase Device function in Android Device Streaming
When your session ends, the device is fully erased and factory reset.

How much does Android Device Streaming cost?

Android Device Streaming is billed according to your Firebase project’s pricing plan:

    • Starting June 1, 2024, for a promotional period:
        • Spark plan (no cost): 120 no-cost minutes per project, per month
        • Blaze plan: 120 no-cost minutes per project, per month; 15 cents for each additional minute
    • On or around February 2025, the promotional period will end and billing will be based on the following quota limits:
        • Spark plan (no cost): 30 no-cost minutes per project, per month
        • Blaze plan: 30 no-cost minutes per project, per month; 15 cents for each additional minute

With no monthly or yearly contracts, Android Device Streaming’s per-minute billing provides unparalleled flexibility for you and your team. And importantly, you don’t pay for any period of time required to set up the device before you connect, or erase the device after you end your session. This allows you and your team to save time and costs compared to purchasing and managing your own device lab.

To learn more, see Usage levels, quotas, and pricing.

What’s next

We’re really excited for you and your team to try Android Device Streaming, powered by Firebase. We think it’s an easy and cost-effective way for you to access the devices you need, when you need them, and right from your IDE, so that you can ensure the best quality and functionality of your app for your users.

The best part is, you can try out this new service in just a few clicks and at no cost. And our economical per-minute pricing provides increased flexibility for your team to go beyond the monthly quota, so that you only pay for the time you’re actively connected to a device—no subscriptions or long-term commitments required.

You can expect the service to add more devices from top OEM partners to the catalog, to ensure that device selection remains up-to-date and becomes increasingly diverse. Try Android Device Streaming today and share your experience with the Android developer community on LinkedIn, Medium, YouTube, or X.

Android Studio uses Gemini Pro to make Android development faster and easier

Posted by Sandhya Mohan – Product Manager, Android Studio

As part of the next chapter of our Gemini era, we announced we were bringing Gemini to more products. Today we’re excited to announce that Android Studio is using the Gemini 1.0 Pro model to make Android development faster and easier, and we’ve seen significant improvements in response quality over the last several months through our internal testing. In addition, we are making this transition more apparent by announcing that Studio Bot is now called Gemini in Android Studio.

Gemini in Android Studio is an AI-powered coding assistant which can be accessed directly in the IDE. It can accelerate your ability to develop high-quality Android apps by helping generate code for your app, providing complex code completions, answering your questions, finding relevant resources, adding code comments and more — all without ever having to leave Android Studio. It is available in 180+ countries and territories in Android Studio Jellyfish.

If you were already using Studio Bot in the canary channel, you’ll continue experiencing the same helpful and powerful features, but you’ll notice improved quality in responses compared to earlier versions.

Ask Gemini your Android development questions

Gemini in Android Studio can understand natural language, so you can ask development questions in your own words. You can enter your questions in the chat window ranging from very simple and open-ended ones to specific problems that you need help with.

Here are some examples of the types of queries it can answer:

    • How do I add camera support to my app?
    • Using Compose, I need a login screen with the following: a username field, a password field, a 'Sign In' button, a 'Forgot Password?' link. I want the password field to obscure the input.
    • What's the best way to get location on Android?
    • I have an 'orders' table with columns like 'order_id', 'customer_id', 'product_id', 'price', and 'order_date'. Can you help me write a query that calculates the average order value per customer over the last month?
Moving image demonstrating a conversation in Android Studio

Gemini in Android Studio remembers the context of the conversation, so you can also ask follow-up questions, such as “Can you give me the code for this in Kotlin?” or “Can you show me how to do it in Compose?”

Code faster with AI powered Code Completions

Gemini in Android Studio can help you be more productive by providing you with powerful AI code completions. You can receive multi-line code completions, suggestions for comments in your code, and help adding documentation to your code.

Moving image demonstrating code completion in Android Studio

Designed with privacy in mind

Gemini in Android Studio was designed with privacy in mind. Gemini is only available after you log in and enable it. You don’t need to send your code context to take advantage of most features. By default, Gemini in Android Studio’s chat responses are purely based on conversation history, and you control whether you want to share additional context for customized responses. You can update this anytime in Android Studio > Settings at a granular, per-project level. We also have a custom way for you to exclude certain files and folders through an .aiexclude file. Much like our work on other AI projects, we stick to a set of AI Principles that hold us accountable. Learn more here.

image of share settings in Android Studio
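To make the exclusion mechanism concrete, here’s a hypothetical .aiexclude file. It uses gitignore-style patterns; the file and folder names below are placeholders, not prescribed names.

# Hypothetical .aiexclude, using gitignore-style patterns:
# keep credentials and keys out of Gemini's context.
apikeys.properties
secrets/
*.pem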

Build a Generative AI app using the Gemini API starter template

Not only does Android Studio use Gemini to help you be more productive, it can also help you take advantage of Gemini models to create AI-powered features in your applications. Get started in minutes using the Gemini API starter template, available in the canary release channel for Android Studio under File > New Project > Gemini API Starter. You can also use the code sample available at File > Import Sample > Google Generative AI sample.

The Gemini API is multimodal, meaning it can support image and text inputs. For example, it can support conversational chat, summarization, translation, caption generation and more, using both text and image inputs.

image of starter templates in Android Studio
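To sketch what a multimodal call might look like in app code, here’s a minimal example assuming the Google AI client SDK for Android that the starter template sets up; the model name and API-key field are illustrative, not the template’s exact code.

import android.graphics.Bitmap
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content

// Send an image plus a text prompt to a multimodal Gemini model
// and read back the generated text.
suspend fun captionImage(bitmap: Bitmap): String? {
    val model = GenerativeModel(
        modelName = "gemini-pro-vision",     // multimodal model, illustrative
        apiKey = BuildConfig.GEMINI_API_KEY  // hypothetical build config field
    )
    val response = model.generateContent(
        content {
            image(bitmap)
            text("Write a short caption for this image.")
        }
    )
    return response.text
}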

Try Gemini in Android Studio

Gemini in Android Studio is still in preview, but we have added many feature improvements — and now a major model update — since we released the experience in May 2023. It is currently no-cost for developers to try out. Now is a great time to test it and let us know what you think, before we release this experience to stable.


Stay updated on the latest by following us on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!

Better, faster, stronger time zone updates on Android

Posted by Almaz Mingaleev – Software Engineer and Masha Khokhlova – Technical Program Manager

It's that time of year again when many of us move our clocks! Oh wait, your Android devices did it automatically, didn’t they? For Android users living in many countries, this may not be surprising. For example, the US, EU and UK governments haven't changed their time legislation in a while*, so users wake up every morning to see the correct time.

But, what happens when time laws change? If you look globally, governments can and do change their time laws, sometimes every year, and Android devices have to keep up to support our global user base.

To implement a region’s time legislation, Android devices have to follow a set of encoded rules. What are these rules? Let’s start with why rules are needed in the first place. Clearly, 7am in Los Angeles and 7am in London are not the same time. Moreover, if you are in London and want to know the time in Los Angeles, you have to know how many hours to subtract, and this is not fixed throughout the year**. So to tell local time (the time your watches should show) it is convenient to have a reference clock that everybody on the planet agrees on. This clock is named UTC, coordinated universal time. Local time in London during winter matches UTC; during summer it is calculated by adding one hour to UTC, usually referred to as UTC+1. For Los Angeles, local time during summer is UTC-7 (7 hours behind, a UTC offset of -7 hours) and during winter it is UTC-8 correspondingly. When a region changes from one offset to another, we call that a “transition”. The combination of these offsets and the rules for when a transition happens (such as “last Sunday of March” or “first Sunday on or after 8th March”) defines a time zone. For some countries, the time zone rules can be very simple and primarily determined by their chosen UTC offset: “no transitions, we don’t move our clocks forwards and backwards”.
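To make those offsets concrete, here’s a small illustrative Kotlin snippet using java.time (available on Android 8.0 / API level 26 and above, or via core library desugaring), which renders the same UTC instant as local time under each region’s TZDB rules:

import java.time.Instant
import java.time.ZoneId

fun main() {
    // One shared reference instant on the UTC clock...
    val nowUtc = Instant.now()

    // ...rendered as local time using each region's TZDB rules.
    val london = nowUtc.atZone(ZoneId.of("Europe/London"))
    val losAngeles = nowUtc.atZone(ZoneId.of("America/Los_Angeles"))

    println("London:      ${london.toLocalTime()} (offset ${london.offset})")         // Z (UTC) in winter, +01:00 in summer
    println("Los Angeles: ${losAngeles.toLocalTime()} (offset ${losAngeles.offset})") // -08:00 in winter, -07:00 in summer
}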

Governments can decide to change the UTC offset for regions, introduce new time zone regions, or alter the day that daylight saving transitions occur. When governments do this, the time zone rules on every Android device need to be updated, otherwise the device will continue to follow the old rules, which can lead to an incorrect local time being shown to users in the affected areas.

Android is not alone in needing to keep track of this information. Fortunately, there is a database, supported by IANA (Internet Assigned Numbers Authority) and maintained by a small group of volunteers, known as the TZDB (Time Zone Database), which is used as the basis for local timekeeping on most modern operating systems. The TZDB contains most of the information that Android needs.

There is no fixed schedule, but the TZDB typically publishes a new release 4-5 times a year. The Android team wants to release updates that affect its devices as soon as possible.

How do these changes reach your devices?

    1. A government signs a law or decree.

    2. Someone lets IANA know about these changes.

    3. Depending on how much lead time was given and on changes announced by other countries, IANA publishes a new TZDB release.

    4. The Android team incorporates the TZDB release (along with a small amount of additional information we obtain from related projects and derive ourselves) into our codebase.

    5. We roll out these updates to your devices. How the roll-out happens depends on the type and age of the Android device.

        a. Many mobile Android devices are covered by Google’s Project Mainline, which means that Google sends updates to devices directly.

        b. Some devices are handled by the device’s manufacturer, who takes the Android team’s source code updates and releases them to devices according to their own update schedule.

As you can see, there are quite a few steps. Applying, testing and releasing an update can take weeks. And it is not just Android and other operating systems like it that need to take action. Telecoms, banks, airlines and software companies usually have to make adjustments to their own systems and timetables. Citizens of a country need to be made aware of changes so they know what to expect, especially if they are using older devices that might not receive the necessary updates. All of this takes time and can cause problems for countless people if it isn’t handled well. The amount of disruption caused by a change is usually determined by the clarity of the legislation and the notice period that governments provide. The TZDB volunteers are good at spotting changes, but it helps if governments notify IANA directly, especially when the exact regions or existing laws affected are not clear. Unfortunately, many recent time zone changes were announced with about a month or less of notice. Android has a set of recommendations for how much notice to provide. Other operating systems have similar recommendations.

Android is constantly evolving. One such improvement, Project Mainline, introduced in Android 10, has made a big difference in how we update important parts of the Android operating system. It allows us to deliver select AOSP components directly through Google Play, making updates faster than a full OTA update and reducing the duplication of effort by each OEM.

From the beginning, time zone rules were a Mainline component, called Time Zone Data or the tzdata module. This integration allowed us to react more quickly to government-mandated time zone changes than before. However, until 2023, tzdata updates were still bundled with other Mainline changes, sometimes leading to testing complexities and slower deployment.

In 2023, we made further investments in Mainline's infrastructure and decoupled the tzdata module from the other components. With this isolation, we gained the ability to respond rapidly to time zone legislation changes — often releasing updates to Android users outside of the established release cadence. Additionally, this change means time zone updates can reach a far greater number of Android devices, ensuring you as Android users always see the correct time.

So while your Android phone may not be able to restore that lost hour of sleep, you can rest assured that it will show the accurate time, thanks to volunteers and the Android team.

Curious about the ever-changing world of time zones? Explore the IANA Time Zone Database and learn more about how time and time zones are managed on Android.


*In 2018-2019 there were changes in Alaska. This is a blogpost, not technical documentation!

**Because the US and UK apply their daylight saving changes at different local times and on different days of the year.

Introducing Jetpack Emoji Picker: A New Way to Add Emojis to Your Android App

Posted by Lin Guo, Software Engineer

The use of emojis in communication has become increasingly popular in recent years. These small icons can be used to express a wide range of emotions and can add a personal touch to messages. However, adding emojis to your Android app can be a bit of a challenge. That's where the Emoji picker library comes in. You can simply add a few lines of code to your app, and you'll be able to start using emojis right away. It's the easiest way to get started with emojis, and it will make your app more fun and expressive.

Moving image of using EmojiPicker on Google Pixel 6 Pro
Figure 1. Emoji Picker

Some useful features provided by the library

Up-to-date emojis without tofu (☐)

Every year, new emoji versions are published, and we will regularly update the library to provide these new emojis. Higher-end phones will be able to render these newer emojis without any problem. For lower-end phones, newer emoji may be displayed as a small square box called tofu (☐). The library guarantees to detect and remove them. This ensures the library is compatible across multiple Android versions/devices.

Smooth UI

The library has several optimizations that attempt to reduce startup latency and speed up the scrolling experience, such as caching renderable emojis, drawing emojis asynchronously, and RecyclerView optimizations.

Personalized inclusive experience

User selections persist in the library. Newly chosen emojis are shown in the top row, making it simpler for users to find and share them. The library also offers a variety of emojis that represent different people and cultures in the variant panels. If the user chooses an emoji from one of the variant panels (Figure 2), the choice is retained and set as the default in the main panel.

Image showing the diversity of characters to choose from in EmojiPicker
Figure 2. Emoji variants

Integrate emoji picker into your app in 3 steps

Step 1: Import the library in build.gradle 
dependencies {
    implementation "androidx.emoji2:emojipicker:$version"
}

Step 2: Inflate the EmojiPickerView

Optionally set emojiGridColumns and emojiGridRows based on the desired size of each emoji cell

An example that uses EmojiPickerView in XML
<androidx.emoji2.emojipicker.EmojiPickerView
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:emojiGridColumns="9" />

A very simple emoji picker should now be presented on your app! For the next step, we assume you would like to do something to the picked emoji.


Step 3: Provide listener to the picked emoji
// A listener example
emojiPickerView.setOnEmojiPickedListener {
    findViewById<EditText>(R.id.edit_text).append(it.emoji)
}

Now you have a basic functioning emoji picker. To customize it further (e.g., override some styles or provide a different behavior to the recent emoji row), please refer to our API reference and sample app; a rough sketch of the latter follows.
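For instance, here’s a sketch of providing your own recent-emoji behavior. It assumes the RecentEmojiProvider interface from the emojipicker API reference; treat the exact signatures as an approximation and check the reference before relying on them.

// Hypothetical sketch: an in-memory recent-emoji provider.
class InMemoryRecentEmojiProvider : RecentEmojiProvider {
    private val recents = ArrayDeque<String>()

    // Called when the user picks an emoji; keep the most recent first.
    override fun recordSelection(emoji: String) {
        recents.remove(emoji)
        recents.addFirst(emoji)
    }

    // Supplies the emojis shown in the picker's recents row.
    override suspend fun getRecentEmojiList(): List<String> = recents.toList()
}

// Wire it up:
emojiPickerView.setRecentEmojiProvider(InMemoryRecentEmojiProvider())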

Feel free to file Bug Report or Feature Request to help us improve the library!