More frequent Android SDK releases: faster innovation, higher quality and more polish

Posted by Matthew McCullough – Vice President, Product Management, Android Developer

Android has always worked to get innovation into the hands of users faster. In addition to our annual platform releases, we’ve invested in Project Treble, Mainline, Google Play services, monthly security updates, and the quarterly releases that help power Pixel Drops.

Going forward, Android will have more frequent SDK releases, with two releases planned in 2025 that include new developer APIs. These releases will help drive faster innovation in apps and devices, with higher stability and polish for users and developers.

Two Android releases in 2025

Next year, we’ll have a major release in Q2 and a minor release in Q4, both of which will include new developer APIs. The Q2 major release will be the only release in 2025 to include behavior changes that can affect apps. We’re planning the major release for Q2 rather than Q3 to better align with the schedule of device launches across our ecosystem, so more devices can get the major release of Android sooner.

The Q4 minor release will pick up feature updates, optimizations, and bug fixes since the major release. It will also include new developer APIs, but will not include any app-impacting behavior changes.

Outside of the major and minor Android releases, our Q1 and Q3 releases will provide incremental updates to help ensure continuous quality. We’re actively working with our device partners to bring the Q2 release to as many devices as possible.

2025 SDK release timeline showing a features only update in Q1 and Q3, a major SDK release with behavior changes, APIs, and features in Q2, and a minor SDK release with APIs and features in Q4

What this means for your apps

With the major release coming in Q2, you’ll need to do your annual compatibility testing a few months earlier than in previous years to make sure your apps are ready. Major releases are just like the SDK releases we have today, and can include behavior changes along with new developer APIs – and to help you get started, we’ll soon begin the developer preview and beta program for the Q2 major release.

The minor release in Q4 will include new APIs, but, like the incremental quarterly releases we have today, will have no planned behavior changes, minimizing the need for compatibility testing. To differentiate major releases (which may contain planned behavior changes) from minor releases, minor releases will not increment the API level. Instead, they'll increment a new minor API level value, which will be accessed through a constant that captures both major and minor API levels. A new manifest attribute will allow you to specify a minor API level as the minimum required SDK release for your app. We’ll have an initial version of support for minor API levels in the upcoming Q2 developer preview, so please try building against the SDK and let us know how this works for you.
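To illustrate, here is a minimal sketch of how an app might gate a feature on a minor API level. The helper and its logic are hypothetical placeholders, since the real constant and manifest attribute only arrive with the Q2 developer preview.

import android.os.Build

// Hypothetical sketch only: the actual constant that captures both major and
// minor API levels (and the matching manifest attribute) ships with the Q2
// developer preview. Today's SDKs expose only the major level, so this helper
// treats the device's minor level as 0.
fun isSdkAtLeast(major: Int, minor: Int): Boolean =
    Build.VERSION.SDK_INT > major ||
        (Build.VERSION.SDK_INT == major && minor == 0)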

When planning your targeting for 2026, there’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, and that will be tied to the major API level only.

How to get ready

In addition to compatibility testing on the next major release, you'll want to make sure to test your builds and CI systems with SDKs supporting major and minor API levels – some build systems (including the Android Gradle plugin) might need adapting. Make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes for early testing; see the build-file sketch below.
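For instance, here's what compiling and targeting a preview SDK can look like in a module-level build.gradle.kts. The codename string is a placeholder; use the one published with the Q2 developer preview.

// Module-level build.gradle.kts: a sketch for building against a preview SDK.
// "PreviewCodename" is a placeholder for the Q2 preview's actual codename.
android {
    compileSdkPreview = "PreviewCodename"

    defaultConfig {
        // Targeting the preview opts your app in to its behavior changes
        // during testing.
        targetSdkPreview = "PreviewCodename"
    }
}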

Meta is a great example of how to embrace and test for new releases: they improved their velocity towards targetSdkVersion adoption by 4x. They compiled apps against each platform Beta and conducted thorough automated and smoke tests to proactively identify potential issues. This helped them seamlessly adopt new platform features, and when the release rolled out to users, Meta’s apps were ready - creating a great user experience.

What’s next?

As always, we plan to work closely with you as we move through the 2025 releases. We will make all of our quarterly releases available to you for testing and feedback, with over-the-air Beta releases for our early testers on Pixel and downloadable system images and tools for developers.

Our aim with these changes is to enable faster innovation and a higher level of quality and polish across releases, without introducing more overhead or costs for developers. At the same time, we’re welcoming an even closer collaboration with you throughout the year. Stay tuned for more information on the first developer preview of Android 16.

The shift in platform releases highlights Android's commitment to constant evolution and collaboration. By working closely with partners and listening to the needs of developers, Android continues to push the boundaries of what's possible in the mobile world. It's an exciting time to be part of the Android ecosystem, and I can't wait to see what the future holds!

Gemini in Android Studio, now helping you across the development lifecycle

Posted by Sandhya Mohan – Product Manager, Android Studio

This is Our Biggest Feature Release Since Launch!

AI can accelerate your development experience, and help you become more productive. That's why we introduced Gemini in Android Studio, your AI-powered coding companion. It’s designed to make it easier for you to build high quality Android apps, faster. Today, we're releasing the biggest set of updates to Gemini in Android Studio since launch, and now Gemini brings the power of AI to every stage of the development lifecycle, directly within the Android Studio IDE experience. And for more updates on how to grow your apps and games businesses, check out the latest updates from Google Play.

Download the latest version of Android Studio in the canary channel to take advantage of all these new features, and read on to unpack what's new.



Gemini Can Now Write, Refactor, and Document Android Code

Gemini goes beyond just guidance. It can edit your code, helping you quickly move from prototype to implementation, implement common design patterns, and refactor your code. Gemini also streamlines your workflow with features like documentation and commit message generation, allowing you to focus more time on writing code.

Moving image demonstrating Gemini writing code for an Android Composable in real time in Android Studio

Coding features we are launching include:

    • Gemini Code Transforms - modify and refactor code using custom prompts.

      using Gemini to modify code in Android Studio

    • Commit message generation - analyze changes and propose VCS commit messages to streamline version control operations.

      using Gemini to analyze changes and propose VCS commit messages in Android Studio

    • Rethink and Rename - generate intuitive names for your classes, methods, and variables. This can be invoked while you’re coding, or as a larger refactor action applied to existing code.

      using Gemini to generate intuitive names for variables while you're coding in Android Studio

    • Prompt library - save and manage your most frequently used prompts. You can quickly recall them when you need them.

      save your frequently used prompts for future use with Gemini in Android Studio

    • Generate documentation - get documentation for selected code snippets with a simple right click.

      generating code documentation in Android Studio

Integrating AI into UI Tools

It’s never been easier to build with Compose now that we have integrated AI into Compose workflows. Composable previews help you visualize your composables during design time in Android Studio. We understand that manually crafting mock data for the preview parameters can be time-consuming. Gemini can now help auto-generate Composable previews with relevant context using AI, simplifying the process of visualizing your UI during development.
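For context, a Composable preview is an annotated function like the sketch below; Gemini's help amounts to generating this kind of boilerplate along with plausible mock values for the preview parameters. The composable and its arguments here are illustrative, not from a real sample.

@Preview(showBackground = true)
@Composable
fun ProfileCardPreview() {
    // Hypothetical composable with mock data of the kind Gemini can generate.
    ProfileCard(name = "Jane Doe", followerCount = 42)
}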

Visualize your composables during design time in Android Studio

We are continuing to experiment with Multimodal support to speed up your UI development cycle. Coming soon, we will allow for image attachment as context and utilizing Gemini's multimodal understanding to make it easier to create beautiful and engaging user interfaces.

Deploy with Confidence

Gemini's intelligence can help you release higher quality apps with greater confidence. Gemini can analyze and test code and suggest fixes, and we are continuing to integrate AI into the IDE's App Quality Insights tool window by helping you analyze crashes reported by the Google Play Console and Firebase Crashlytics. Now, with the Ladybug Feature Drop, you can generate deeper insights by using your local code context. This means you can fix bugs faster, and your users will see fewer crashes.

Generate insights using the IDE's App Quality Insights tool window

Some of the features we are launching include:

    • Unit test scenario generation generates unit test scenarios based on local code context.

      generate unit test scenarios based on local code context in Android Studio

    • Build / sync error insights now provides improved coverage for build and sync errors.

      build and sync error insights are now available in Android Studio

    • App Quality Insights explains and suggests fixes for observed crashes from Android Vitals and Firebase Crashlytics, and now allows you to use local code context for improved insights.

    A better Gemini in Android Studio for you

    We recently surveyed many of you to see how AI-powered code completion has impacted your productivity, and 86% of respondents said they felt more productive. Please continue to provide feedback as you use Gemini in your day-to-day workflows. In fact, a few of you wanted to share some of your tips and tricks for how to get the most out of Gemini in Android Studio.



    Along with the Gemini Nano APIs that you can integrate with your own app, Android developers now have access to Google's leading edge AI technologies across every step of their development journey — with Gemini in Android Studio central to that developer experience.

    Get these new features in the latest versions of Android Studio

    These features are all available to try today in the Android Studio canary channel. We expect many of these features to ship in the upcoming Ladybug Feature Drop, landing in the stable channel in late December, with the rest to follow shortly after.

      • Gemini Code Transforms - Modify and refactor your code within the editor
      • Commit message generation - Automatically generate commit messages with Gemini
      • Rethink and Rename - Get help renaming your classes, methods, and variables
      • Prompt library - Save and recall your most commonly used prompts
      • Compose Preview Generation - Generate previews for your composables with Gemini
      • Generate documentation - Have Gemini help you document your code
      • Unit test scenario generation - Generate unit test scenarios
      • Build / sync error insights - Ask Gemini for help in troubleshooting build and sync errors
      • App Quality Insights - Insights on how you can fix crashes from Android Vitals and Firebase Crashlytics

    As always, Google is committed to the responsible use of AI. Android Studio won't send any of your source code to servers without your consent — which means you'll need to opt in to enable Gemini's developer assistance features in Android Studio. You can read more on Gemini in Android Studio's commitment to privacy.

    Try enabling Gemini in your project and tell us what you think on social media with #AndroidGeminiEra. We're excited to see how these enhancements help you build amazing apps!

    Set a reminder: tune in for our Fall episode of #TheAndroidShow on October 31, live from Droidcon!

    Posted by Anirudh Dewani – Director, Android Developer Relations

    In just a few days, on Thursday, October 31st at 10AM PT, we’ll be dropping our Fall episode of #TheAndroidShow, on YouTube and on developer.android.com!

    This time, our quarterly show comes to you live from Droidcon in London, bringing you the latest in Android developer news with demos of Jetpack Compose and more. You can set a reminder to watch the livestream on YouTube, or click here to add it to your calendar.


    In our Fall episode, we'll be taking the lid off the biggest update to Gemini in Android Studio, so you don't want to miss out! There have also been a number of recent wearable, foldable, and large screen device launches and updates, and we'll be unpacking what you need to know to get building for these form factors.

    Get your #AskAndroid questions answered live!

    And we’ve assembled a team of experts from across Android to answer your #AskAndroid questions on building excellent apps, across devices - share your questions now and tune in to see if they are answered live on the show!

    #TheAndroidShow is your conversation with the Android developer community, this time hosted by Simona Milanović and Alejandra Stamato. You'll hear the latest from the developers and engineers who build Android. Don’t forget to tune in live on October 31 at 10AM PT, live on YouTube and on developer.android.com/events/show!

    5 new protections on Google Messages to help keep you safe

    Every day, over a billion people use Google Messages to communicate. That's why we've made security a top priority, building in powerful on-device, AI-powered filters and advanced security that protects users from 2 billion suspicious messages a month. With end-to-end encrypted[1] RCS conversations, you can communicate privately with other Google Messages RCS users. And we're not stopping there. We're committed to constantly developing new controls and features to make your conversations on Google Messages even more secure and private.

    As part of cybersecurity awareness month, we're sharing five new protections to help keep you safe while using Google Messages on Android:

    1. Enhanced detection protects you from package delivery and job scams. Google Messages is adding new protections against scam texts that may seem harmless at first but can eventually lead to fraud. For Google Messages beta users[2], we’re rolling out enhanced scam detection, with improved analysis of scammy texts, starting with a focus on package delivery and job seeking messages. When Google Messages suspects a potential scam text, it will automatically move the message into your spam folder or warn you. Google Messages uses on-device machine learning models to classify these scams, so your conversations stay private and the content is never sent to Google unless you report spam. We’re rolling this enhancement out now to Google Messages beta users who have spam protection enabled.
    2. Intelligent warnings alert you about potentially dangerous links. In the past year, we’ve been piloting more protections for Google Messages users when they receive text messages with potentially dangerous links. In India, Thailand, Malaysia and Singapore, Google Messages warns users when they get a link from unknown senders and blocks messages with links from suspicious senders. We’re in the process of expanding this feature globally later this year.
    3. Controls to turn off messages from unknown international senders. In some cases, scam text messages come from international numbers. Soon, you will be able to automatically hide messages from international senders who are not existing contacts so you don’t have to interact with them. If enabled, messages from international non-contacts will automatically be moved to the “Spam & blocked” folder. This feature will roll out first as a pilot in Singapore later this year before we look at expanding to more countries.
    4. Sensitive Content Warnings give you control over seeing and sending images that may contain nudity. At Google, we aim to provide users with a variety of ways to protect themselves against unwanted content, while keeping them in control of their data. This is why we’re introducing Sensitive Content Warnings for Google Messages.

      Sensitive Content Warnings is an optional feature that blurs images that may contain nudity before viewing, and then prompts with a “speed bump” that contains help-finding resources and options, including the option to view the content. When the feature is enabled and an image that may contain nudity is about to be sent or forwarded, it also provides a speed bump to remind users of the risks of sending nude imagery and to help prevent accidental shares.

      All of this happens on-device to protect your privacy and keep end-to-end encrypted message content private to only the sender and recipient. Sensitive Content Warnings doesn’t allow Google access to the contents of your images, nor does Google know that nudity may have been detected. This feature is opt-in for adults, managed via Android Settings, and is opt-out for users under 18 years of age. Sensitive Content Warnings will be rolling out to Android 9+ devices, including Android Go devices[3] with Google Messages, in the coming months.
    5. More confirmation about who you’re messaging. To help you avoid sophisticated messaging threats where an attacker tries to impersonate one of your contacts, we’re working to add a contact verifying feature to Android. This new feature will allow you to verify your contacts' public keys so you can confirm you’re communicating with the person you intend to message. We’re creating a unified system for public key verification across different apps, which you can verify through QR code scanning or number comparison. This feature will be launching next year for Android 9+ devices, with support for messaging apps including Google Messages.

      These are just some of the new and upcoming features that you can use to better protect yourself when sending and receiving messages. Download Google Messages from the Google Play Store to enjoy these protections and controls and learn more about Google Messages here.

      Notes


      1. End-to-end encryption is currently available between Google Messages users. Availability of RCS varies by region and carrier. 

      2. Availability of features may vary by market and device. Sign up for beta testing and a data plan may be required.  

      3. Requires 2 GB of RAM. 

    Improved comments experience in Google Docs, Sheets, and Slides on Android tablets

    What’s changing

    Earlier this year, we introduced a new comments experience in Google Docs, Sheets, and Slides on web. Today, we’re announcing a similar update for Android tablets for viewing, navigating, and replying to comments, especially on the go. In addition to improved design and filtering functionality to match the web experience, you’ll now be able to easily:

    • Keep a pulse on the latest updates: now you’ll see the first comment and the two most recent replies from a comment thread, with the option to show all comments within a discussion.
    • Review comments with full context: enjoy familiar, in-context commenting, similar to the web experience, while taking advantage of larger screen real estate on tablets. 
    • Navigate and filter comments: navigation tabs and filters within the comments panel help you easily find relevant comments, without having to switch to a separate view.
    Comment experience in Docs

    Comment experience in Sheets

    Comment experience in Slides

    Availability 

    • Available to all Google Workspace customers, Workspace Individual Subscribers, and users with personal Google accounts 


    CameraX update makes dual concurrent camera even easier

    Posted by Donovan McMurray – Developer Relations Engineer

    CameraX, Android's Jetpack camera library, is getting an exciting update to its Dual Concurrent Camera feature, making it even easier to integrate this feature into your app. This feature allows you to stream from 2 different cameras at the same time. The original version of Dual Concurrent Camera was released in CameraX 1.3.0, and it was already a huge leap in making this feature easier to implement.

    Starting with 1.5.0-alpha01, CameraX will now handle the composition of the 2 camera streams as well. This update is purely additive: it doesn’t remove any prior functionality, and it isn’t a breaking change to your existing Dual Concurrent Camera code. To tell CameraX to handle the composition, simply use the new SingleCameraConfig constructor, which has a new parameter for a CompositionSettings object. Since you’ll be creating 2 SingleCameraConfigs, you should be consistent about which constructor you use.

    Nothing has changed in the way you check for concurrent camera support from the prior version of this feature. As a reminder, here is what that code looks like.

    // Set up primary and secondary camera selectors if supported on device.
    var primaryCameraSelector: CameraSelector? = null
    var secondaryCameraSelector: CameraSelector? = null
    
    for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
        primaryCameraSelector = cameraInfos.firstOrNull {
            it.lensFacing == CameraSelector.LENS_FACING_FRONT
        }?.cameraSelector
        secondaryCameraSelector = cameraInfos.firstOrNull {
            it.lensFacing == CameraSelector.LENS_FACING_BACK
        }?.cameraSelector
    
        if (primaryCameraSelector == null || secondaryCameraSelector == null) {
            // If either a primary or secondary selector wasn't found, reset both
            // to move on to the next list of CameraInfos.
            primaryCameraSelector = null
            secondaryCameraSelector = null
        } else {
            // If both primary and secondary camera selectors were found, we can
            // conclude the search.
            break
        }
    }
    
    if (primaryCameraSelector == null || secondaryCameraSelector == null) {
        // Front and back concurrent camera not available. Handle accordingly.
    }
    

    Here’s the updated code snippet showing how to implement picture-in-picture, with the front camera stream scaled down to fit into the lower right corner. In this example, CameraX handles the composition of the camera streams.

    // If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
    // and compose them in a picture-in-picture layout.
    val primary = SingleCameraConfig(
        cameraSelectorPrimary,
        useCaseGroup,
        CompositionSettings.Builder()
            .setAlpha(1.0f)
            .setOffset(0.0f, 0.0f)
            .setScale(1.0f, 1.0f)
            .build(),
        lifecycleOwner)
    val secondary = SingleCameraConfig(
        cameraSelectorSecondary,
        useCaseGroup,
        CompositionSettings.Builder()
            .setAlpha(1.0f)
            .setOffset(2 / 3f - 0.1f, -2 / 3f + 0.1f)
            .setScale(1 / 3f, 1 / 3f)
            .build(),
        lifecycleOwner)

    // Bind to lifecycle
    val concurrentCamera =
        cameraProvider.bindToLifecycle(listOf(primary, secondary))
    

    You are not constrained to a picture-in-picture layout. For instance, you could define a side-by-side layout by setting the offsets and scaling factors accordingly. You’ll want to keep both dimensions scaled by the same amount to avoid stretching the preview. Here’s how that might look.

    // If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
    // and compose them in a side-by-side layout.
    val primary = SingleCameraConfig(
        cameraSelectorPrimary,
        useCaseGroup,
        CompositionSettings.Builder()
            .setAlpha(1.0f)
            .setOffset(0.0f, 0.25f)
            .setScale(0.5f, 0.5f)
            .build(),
        lifecycleOwner)
    val secondary = SingleCameraConfig(
        cameraSelectorSecondary,
        useCaseGroup,
        CompositionSettings.Builder()
            .setAlpha(1.0f)
            .setOffset(0.5f, 0.25f)
            .setScale(0.5f, 0.5f)
            .build(),
        lifecycleOwner)

    // Bind to lifecycle
    val concurrentCamera =
        cameraProvider.bindToLifecycle(listOf(primary, secondary))
    

    We’re excited to offer this improvement to an already developer-friendly feature. Truly the CameraX way! CompositionSettings in Dual Concurrent Camera is currently in alpha, so if you have feature requests to improve upon it before the API is locked in, please give us feedback in the CameraX Discussion Group. And check out the full CameraX 1.5.0-alpha01 release notes to see what else is new in CameraX.

    Chrome on Android to support third-party autofill services natively

    Posted by Eiji Kitamura – Developer Advocate

    Chrome on Android will soon allow third-party autofill services (like password managers) to natively autofill forms on websites. Developers of these services need to tell their users to toggle a setting in Chrome to continue using their service there.


    Background

    Google is the default autofill service on Chrome, providing passwords, passkeys and autofill for other information like addresses and payment data.

    A third-party password manager can be set as the preferred autofill service on Android through System Settings. The preferred autofill service can fill across all Android apps. However, to autofill forms on Chrome, the autofill service needs to use "compatibility mode". This causes glitches on Chrome, such as janky page scrolling and potentially duplicate suggestions from Google and the third-party service.

    With this coming change, Chrome on Android will allow third-party autofill services to natively autofill forms, giving users a smoother and simpler experience. Third-party autofill services can autofill passwords, passkeys, and other information like addresses and payment data, just as they would in other Android apps.


    Try the feature yourself

    You can already test the functionality on Chrome 131 and later. First, set a third-party autofill service as the preferred service in Android 14 or later:

    Note: Instructions may vary by device manufacturer. The below steps are for a Google Pixel device running Android 15.
      1. Open Android's System Settings
      2. Select Passwords, passkeys & accounts
      3. Tap on Change button under Preferred service
      4. Select a preferred service
      5. Confirm changing the preferred autofill service

    Side by side screenshots show the steps involved in enabling third-party autofill service from your device: first tap 'Change', then select the new service, and finally confirm the change.

    Second, enable the third-party autofill service in Chrome:

      1. Open Chrome on Android
      2. Open chrome://flags#enable-autofill-virtual-view-structure
      3. Set the flag to "Enabled" and restart
      4. Open Chrome's Settings and tap Autofill Services
      5. Choose Autofill using another service
      6. Confirm and restart Chrome
    Note: Steps 2 and 3 are not necessary after Chrome 131. Chrome 131 is scheduled to be stable on November 12th, 2024.
    Side by side screenshots show the steps involved in changing your preferred password service on a smartphone: first tap 'Autofill Services', then select 'Autofill using another service', and finally restart Chrome to complete setup.

    You can emulate how Chrome behaves after compatibility mode is disabled by updating chrome://flags#suppress-autofill-via-accessibility to Enabled.

    Actions required from third-party autofill services

    Autofill service developers don't need any additional implementation as long as they have a proper integration with the Android autofill framework. Chrome will respect it and autofill forms natively; a minimal sketch of that integration point follows below.
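    For reference, here is a minimal sketch of an autofill service built on the Android autofill framework. The class name is hypothetical and dataset construction is elided; a real service would parse the view structure and build datasets.

    import android.os.CancellationSignal
    import android.service.autofill.AutofillService
    import android.service.autofill.FillCallback
    import android.service.autofill.FillRequest
    import android.service.autofill.SaveCallback
    import android.service.autofill.SaveRequest

    // Hypothetical example service: Chrome 131+ surfaces web forms through this
    // same framework, so no Chrome-specific code path is needed.
    class ExampleAutofillService : AutofillService() {

        override fun onFillRequest(
            request: FillRequest,
            cancellationSignal: CancellationSignal,
            callback: FillCallback
        ) {
            // Walk request.fillContexts to find autofillable fields and build
            // datasets (elided here); pass null when there is nothing to fill.
            callback.onSuccess(null)
        }

        override fun onSaveRequest(request: SaveRequest, callback: SaveCallback) {
            // Persist the values the user submitted (elided).
            callback.onSuccess()
        }
    }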

    Chrome plans to stop supporting compatibility mode in early 2025. Users must select Autofill using another service in Chrome settings to ensure their autofill experience is unaffected. The new setting is available in Chrome 131. You should encourage your users to toggle the setting, to ensure they have the best autofill experience possible with your service and Chrome on Android.


    Timeline

      • October 16th, 2024: Chrome 131 beta is available
      • November 12th, 2024: Chrome 131 is in stable
      • Early 2025: Compatibility mode is no longer available on Chrome

    Creating a responsive dashboard layout for JetLagged with Jetpack Compose

    Posted by Rebecca Franks - Developer Relations Engineer

    This blog post is part of our series: Adaptive Spotlight Week where we provide resources—blog posts, videos, sample code, and more—all designed to help you adapt your apps to phones, foldables, tablets, ChromeOS and even cars. You can read more in the overview of the Adaptive Spotlight Week, which will be updated throughout the week.


    We’ve heard the news: creating adaptive layouts in Jetpack Compose is easier than ever. As a declarative UI toolkit, Jetpack Compose is well suited for designing and implementing layouts that adjust themselves to render content differently across a variety of sizes. By using logic coupled with Window Size Classes, Flow layouts, movableContentOf and LookaheadScope, we can ensure fluid responsive layouts in Jetpack Compose.

    Following the release of the JetLagged sample at Google I/O 2023, we decided to add more examples to it. Specifically, we wanted to demonstrate how Compose can be used to create a beautiful dashboard-like layout. This article shows how we’ve achieved this.

    Moving image demonstrating responsive design in Jetlagged where items animate positions automatically
    Responsive design in Jetlagged where items animate positions automatically

    Use FlowRow and FlowColumn to build layouts that respond to different screen sizes

    Flow layouts (FlowRow and FlowColumn) make it much easier to implement responsive, reflowing layouts that respond to screen sizes and automatically flow content to a new line when the available space in a row or column is full.

    In the JetLagged example, we use a FlowRow with maxItemsInEachRow set to 3. This maximizes the space available for the dashboard and places each individual card in a row or column where space is used wisely. On mobile devices we mostly have one card per row; only if the items are smaller are two visible per row.

    Some cards use modifiers that don’t specify an exact size, allowing them to grow to fill the available width (for instance Modifier.widthIn(max = 400.dp)), while others set a fixed size (like Modifier.width(200.dp)).

    FlowRow(
        modifier = Modifier.fillMaxSize(),
        horizontalArrangement = Arrangement.Center,
        verticalArrangement = Arrangement.Center,
        maxItemsInEachRow = 3
    ) {
        Box(modifier = Modifier.widthIn(max = 400.dp))
        Box(modifier = Modifier.width(200.dp))
        Box(modifier = Modifier.size(200.dp))
        // etc 
    }
    

    We could also leverage the weight modifier to divide up the remaining area of a row or column; check out the documentation on item weights for more information, and see the sketch below.
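    As a minimal sketch (the box sizes are arbitrary), weighted siblings split whatever width remains in their line after fixed-size items are placed:

    @OptIn(ExperimentalLayoutApi::class)
    @Composable
    fun WeightedRowSketch() {
        FlowRow(maxItemsInEachRow = 3) {
            Box(Modifier.width(100.dp).height(80.dp)) // fixed width
            Box(Modifier.weight(1f).height(80.dp))    // one share of the remainder
            Box(Modifier.weight(2f).height(80.dp))    // two shares of the remainder
        }
    }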


    Use WindowSizeClasses to differentiate between devices

    WindowSizeClasses are useful for building up breakpoints in our UI for when elements should display differently. In JetLagged, we use the classes to know whether we should include cards in Columns or keep them flowing one after the other.

    For example, if the width is WindowWidthSizeClass.COMPACT, we keep items in the same FlowRow, whereas if the layout is larger than compact, they are placed in a FlowColumn nested inside a FlowRow:

    FlowRow(
        modifier = Modifier.fillMaxSize(),
        horizontalArrangement = Arrangement.Center,
        verticalArrangement = Arrangement.Center,
        maxItemsInEachRow = 3
    ) {
        JetLaggedSleepGraphCard(uiState.value.sleepGraphData)
        if (windowSizeClass == WindowWidthSizeClass.COMPACT) {
            AverageTimeInBedCard()
            AverageTimeAsleepCard()
        } else {
            FlowColumn {
                AverageTimeInBedCard()
                AverageTimeAsleepCard()
            }
        }
        if (windowSizeClass == WindowWidthSizeClass.COMPACT) {
            WellnessCard(uiState.value.wellnessData)
            HeartRateCard(uiState.value.heartRateData)
        } else {
            FlowColumn {
                WellnessCard(uiState.value.wellnessData)
                HeartRateCard(uiState.value.heartRateData)
            }
        }
    }
    

    From the above logic, the UI will appear in the following ways on different device sizes:

    Side by side comparisons of the differences in UI on three different sized devices
    Different UI on different sized devices

    Use movableContentOf to maintain bits of UI state across screen resizes

    Movable content allows you to save the contents of a Composable to move it around your layout hierarchy without losing state. It should be used for content that is perceived to be the same - just in a different location on screen.

    Imagine this: you are moving house to a different city, and you pack a box with a clock inside it. When you open the box in your new home, the clock is still ticking from where it left off. It might not show the correct time for your new timezone, but it has definitely ticked on from where you left it. The contents inside the box don’t reset their internal state when the box is moved around.

    What if you could use the same concept in Compose to move items on screen without losing their internal state?

    Take the following scenario into account: Define different Tile composables that display an infinitely animating value between 0 and 100 over 5000ms.


    @Composable
    fun Tile1() {
        val repeatingAnimation = rememberInfiniteTransition()
    
        val float = repeatingAnimation.animateFloat(
            initialValue = 0f,
            targetValue = 100f,
            animationSpec = infiniteRepeatable(repeatMode = RepeatMode.Reverse,
                animation = tween(5000))
        )
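    // "purple" is assumed to be a Color defined elsewhere in the sample.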
        Box(modifier = Modifier
            .size(100.dp)
            .background(purple, RoundedCornerShape(8.dp))){
            Text("Tile 1 ${float.value.roundToInt()}",
                modifier = Modifier.align(Alignment.Center))
        }
    }
    

    We then display them on screen using a Column layout, showing the infinite animations as they go:
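    A minimal sketch of that layout, assuming a Tile2 defined analogously to Tile1:

    @Composable
    fun TilesInColumn() {
        Column {
            Tile1()
            Tile2() // assumed to mirror Tile1 with its own color and label
        }
    }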

    A purple tile stacked in a column above a pink tile. Both tiles show a counter, counting up from 0 to 100 and back down to 0

    But what if we wanted to lay the tiles out differently based on the phone’s orientation (or screen size), without the animation values stopping? Something like the following:

    @Composable
    fun WithoutMovableContentDemo() {
        val mode = remember {
            mutableStateOf(Mode.Portrait)
        }
        if (mode.value == Mode.Landscape) {
            Row {
               Tile1()
               Tile2()
            }
        } else {
            Column {
               Tile1()
               Tile2()
            }
        }
    }
    

    This looks pretty standard, but running it on a device, we can see that switching between the two layouts causes our animations to restart.

    A purple tile stacked in a column above a pink tile. Both tiles show a counter, counting upward from 0. The column changes to a row and back to a column, and the counter restarts every time the layout changes

    This is the perfect case for movable content - it is the same composables on screen, just in a different location. So how do we use it? We can just define our tiles in a movableContentOf block, using remember to ensure it’s saved across compositions:

    val tiles = remember {
        movableContentOf {
            Tile1()
            Tile2()
        }
    }
    

    Now, instead of calling our composables again inside the Column and Row respectively, we simply call tiles().

    @Composable
    fun MovableContentDemo() {
        val mode = remember {
            mutableStateOf(Mode.Portrait)
        }
        val tiles = remember {
            movableContentOf {
                Tile1()
                Tile2()
            }
        }
        Box(modifier = Modifier.fillMaxSize()) {
            if (mode.value == Mode.Landscape) {
                Row {
                    tiles()
                }
            } else {
                Column {
                    tiles()
                }
            }
    
            Button(onClick = {
                if (mode.value == Mode.Portrait) {
                    mode.value = Mode.Landscape
                } else {
                    mode.value = Mode.Portrait
                }
            }, modifier = Modifier.align(Alignment.BottomCenter)) {
                Text("Change layout")
            }
        }
    }
    

    This will then remember the nodes generated by those Composables and preserve the internal state that these composables currently have.

    A purple tile stacked in a column above a pink tile. Both tiles show a counter, counting upward from 0 to 100. The column changes to a row and back to a column, and the counter continues seamlessly when the layout changes

    We can now see that our animation state is remembered across the different compositions. Our clock in the box will now keep state when it's moved around the world.

    Using this concept, we can keep the animating bubble state of our cards, by placing the cards in movableContentOf:

    val timeSleepSummaryCards = remember {
        movableContentOf {
            AverageTimeInBedCard()
            AverageTimeAsleepCard()
        }
    }

    LookaheadScope {
        FlowRow(
            modifier = Modifier.fillMaxSize(),
            horizontalArrangement = Arrangement.Center,
            verticalArrangement = Arrangement.Center,
            maxItemsInEachRow = 3
        ) {
            //..
            if (windowSizeClass == WindowWidthSizeClass.Compact) {
                timeSleepSummaryCards()
            } else {
                FlowColumn {
                    timeSleepSummaryCards()
                }
            }
            //..
        }
    }

    This allows the cards’ state to be remembered, and the cards won’t be recomposed. This is evident when observing the bubbles in the background of the cards: on resizing the screen, the bubble animation continues without restarting.

    A purple tile showing Average time in bed stacked in a column above a green tile showing average time sleep. Both tiles show moving bubbles. The column changes to a row and back to a column, and the bubbles continue to move across the tiles as the layout changes

    Use Modifier.animateBounds() to have fluid animations between different window sizes

    From the above example, we can see that state is maintained between changes in layout size (or layout itself), but the difference between the two layouts is a bit jarring. We’d like this to animate between the two states without issue.

    In the latest compose-bom-alpha (2024.09.03), there is a new experimental custom Modifier, Modifier.animateBounds(). The animateBounds modifier requires a LookaheadScope.

    LookaheadScope enables Compose to perform intermediate measurement passes of layout changes, notifying composables of the intermediate states between them. LookaheadScope is also used for the new shared element APIs that you may have seen recently.

    To use Modifier.animateBounds(), we wrap the top-level FlowRow in a LookaheadScope, and then apply the animateBounds modifier to each card. We can also customize how the animation runs, by specifying the boundsTransform parameter to a custom spring spec:

    val boundsTransform = { _: Rect, _: Rect ->
       spring(
           dampingRatio = Spring.DampingRatioNoBouncy,
           stiffness = Spring.StiffnessMedium,
           visibilityThreshold = Rect.VisibilityThreshold
       )
    }
    
    
    LookaheadScope {
       val animateBoundsModifier = Modifier.animateBounds(
           lookaheadScope = this@LookaheadScope,
           boundsTransform = boundsTransform)
       val timeSleepSummaryCards = remember {
           movableContentOf {
               AverageTimeInBedCard(animateBoundsModifier)
               AverageTimeAsleepCard(animateBoundsModifier)
           }
       }
       FlowRow(
           modifier = Modifier
               .fillMaxSize()
               .windowInsetsPadding(insets),
           horizontalArrangement = Arrangement.Center,
           verticalArrangement = Arrangement.Center,
           maxItemsInEachRow = 3
       ) {
           JetLaggedSleepGraphCard(uiState.value.sleepGraphData, animateBoundsModifier.widthIn(max = 600.dp))
           if (windowSizeClass == WindowWidthSizeClass.Compact) {
               timeSleepSummaryCards()
           } else {
               FlowColumn {
                   timeSleepSummaryCards()
               }
           }
    
    
           FlowColumn {
               WellnessCard(
                   wellnessData = uiState.value.wellnessData,
                   modifier = animateBoundsModifier
                       .widthIn(max = 400.dp)
                       .heightIn(min = 200.dp)
               )
               HeartRateCard(
                   modifier = animateBoundsModifier
                       .widthIn(max = 400.dp, min = 200.dp),
                   uiState.value.heartRateData
               )
           }
       }
    }
    

    Applying this to our layout, we can see the transition between the two states is more seamless without jarring interruptions.

    A purple tile showing Average time in bed stacked in a column above a green tile showing average time sleep. Both tiles show moving bubbles. The column changes to a row and back to a column, and the bubbles continue to move across the tiles as the layout changes

    Applying this logic to our whole dashboard, you will see that resizing the layout now gives a fluid UI interaction throughout the whole screen.

    Moving image demonstrating responsive design in Jetlagged where items animate positions automatically

    Summary

    As you can see from this article, using Compose has enabled us to build a responsive dashboard-like layout by leveraging flow layouts, WindowSizeClasses, movable content and LookaheadScope. These concepts can also be used for your own layouts that may have items moving around in them too.

    For more information on these different topics, be sure to check out the official documentation; for the detailed changes to JetLagged, take a look at this pull request.

    Bringing new theft protection features to Android users around the world

    Janine Roberta Ferreira was driving home from work in São Paulo when she stopped at a traffic light. A man suddenly appeared and broke the window of her unlocked car, grabbing her phone. She struggled with him for a moment before he wrestled the phone away and ran off. The incident left her deeply shaken. Not only was she saddened at the loss of precious data, like pictures of her nephew, but she also felt vulnerable knowing that her banking information was on the phone that had just been stolen.

    Situations like Janine’s highlighted the need for a comprehensive solution to phone theft that exceeded existing tools on any platform. Phone theft is a widespread concern in many countries – 97 phones are robbed or stolen every hour in Brazil. The GSM Association reports millions of devices stolen every year, and the numbers continue to grow.

    With our phones becoming increasingly central to storing sensitive data, like payment information and personal details, losing one can be an unsettling experience. That’s why we developed, and thoroughly beta tested, a full suite of features designed to protect you and your data at every stage – before, during, and after device theft.

    These advanced theft protection features are now available to users around the world through Android 15 and a Google Play Services update (Android 10+ devices).

    AI-powered protection for your device the moment it is stolen

    Theft Detection Lock uses powerful AI to proactively protect you at the moment of a theft attempt. By using on-device machine learning, Theft Detection Lock is able to analyze various device signals to detect potential theft attempts. If the algorithm detects a potential theft attempt on your unlocked device, it locks your screen to keep thieves out.

    To protect your sensitive data if your phone is stolen, Theft Detection Lock uses device sensors to identify theft attempts. We’re working hard to bring this feature to as many devices as possible. This feature is rolling out gradually to ensure compatibility with various devices, starting today with Android devices that cover 90% of active users worldwide. Check your theft protection settings page periodically to see if your device is currently supported.

    In addition to Theft Detection Lock, Offline Device Lock protects you if a thief tries to take your device offline to extract data or avoid a remote wipe via Android’s Find My Device. If an unlocked device goes offline for prolonged periods, this feature locks the screen to ensure your phone can’t be used in the hands of a thief.

    If your Android device does become lost or stolen, Remote Lock can quickly help you secure it. Even if you can’t remember your Google account credentials in the moment of theft, you can use any device to visit Android.com/lock and lock your phone with just a verified phone number. Remote Lock secures your device while you regain access through Android’s Find My Device – which lets you secure, locate or remotely wipe your device. As a security best practice, we always recommend backing up your device on a continuous basis, so remotely wiping your device is not an issue.

    These features are now available on most Android 10+ devices[1] via a Google Play Services update and must be enabled in settings.

    Advanced security to deter theft before it happens

    Android 15 introduces new security features to deter theft before it happens by making it harder for thieves to access sensitive settings, apps, or reset your device for resale:

    • Changes to sensitive settings like Find My Device now require your PIN, password, or biometric authentication.
    • Multiple failed login attempts, which could be a sign that a thief is trying to guess your password, will lock down your device, preventing unauthorized access.
    • And enhanced factory reset protection makes it even harder for thieves to reset your device without your Google account credentials, significantly reducing its resale value and protecting your data.

    Later this year, we’ll launch Identity Check, an opt-in feature that will add an extra layer of protection by requiring biometric authentication when accessing critical Google account and device settings, like changing your PIN, disabling theft protection, or accessing Passkeys from an untrusted location. This helps prevent unauthorized access even if your device PIN is compromised.

    Real-world protection for billions of Android users

    By integrating advanced technology like AI and biometric authentication, we're making Android devices less appealing targets for thieves to give you greater peace of mind. These theft protection features are just one example of how Android is working to provide real-world protection for everyone. We’re dedicated to working with our partners around the world to continuously improve Android security and help you and your data stay safe.

    You can turn on the new Android theft features by clicking here on a supported Android device. Learn more about our theft protection features by visiting our help center.

    Notes


    1. Android Go smartphones, tablets and wearables are not supported