
Meet the Android Studio Team: A Conversation with Engineering Director, Tor Norbye

Posted by Ashley Tschudin – Social Media Specialist, MTP at Google

Welcome to "Meet the Android Studio Team," our new ongoing blog series. Each week, we'll introduce you to the talented people behind Android Studio. Get to know the engineers, designers, product managers, and more who create the best possible experience for Android developers like you. Join us and explore their unique perspectives.


Tor Norbye: Building Android Studio for You


Meet Tor Norbye, an Engineering Director at Google leading the development of Android Studio.

From his early days of coding to leading the charge on AI-powered development tools, Tor shares his insights on the evolution of Android and the vital role Android Studio plays in its future.

We'll delve into the challenges of creating developer tools, the importance of community feedback, and how Google strives to empower developers worldwide.


Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?

I grew up in Norway and I was fascinated by programming; my first exposure was as a middle schooler reading program listings in magazines (yes, in the early 80s, monthly computer magazines would include source code!) and in 1983 I got my hands on a microcomputer, and knew immediately that's what I wanted to do as a career. And now, 40+ years later, I still love programming. It's not my day-job anymore, but I still write bits and pieces of code for Android Studio on the shuttle and during quiet periods.

I've worked on developer tools my whole career - first, 14 years at Sun Microsystems after college. In 2010 I got increasingly interested in the rise of mobile computing and really wanted to be part of it, so I joined the Android team, and I've been here since.

Back then there was no "Android Studio". At the time we were working on Eclipse-based tooling for Android development. But we all knew that IntelliJ was the gold standard for Java development, so a couple of years later we began work on building Android Studio on top of IntelliJ, combining new code with code ported from our Eclipse plugins. I then had the honor of doing the unveiling demo at Google I/O in 2013.

How has the integration of AI and machine learning impacted Android developer capabilities, and how do you see it evolving in the future?

The integration of artificial intelligence has absolutely impacted Android developer capabilities, and this is just the beginning.

I felt very fortunate to be part of bringing about the massive shift from desktop computing to mobile computing when I joined Android, and I can't believe I get to be in the middle of a second massive industry shift as well, with AI and large language models.

I actually spend a lot of my time on this, working with Studio engineers, UX and product managers on our various AI related features, and talking to partner AI teams at Google. We've made a huge amount of progress in the last couple of years, both on the Studio feature integration side, as well as Google-wide on the AI side. While there is some skepticism that we're just doing AI features for AI's sake, I don't see it that way. With AI, we can suddenly, with relatively low effort, build useful features not previously possible.

Here's a very simple example from the latest Studio version: When you invoke the Rename refactoring feature, we use Gemini to add additional naming suggestions into the name popup based on what your code is doing. Here we're helping you pick good names – and naming is famously one of the two hardest problems in computer science – naming, cache invalidation and off-by-one errors. Yet LLMs are good at this – so coupled with the safe refactoring machinery in the IDE, we were able to safely add a useful feature with relatively low engineering cost on the IDE side (of course, this is building on top of a massive investment from Google over on the Gemini side).

The field is moving incredibly quickly, so it's hard to predict where things are going, but we're actively working in several areas, making the AI more aware of your codebase, and making it handle larger, complex tasks via AI Agents, and so much more.

What are some of the biggest challenges you've faced in your career as a developer, and how have those experiences shaped your approach to your job?

Earlier in my career, at a different company, we had big annual releases. I took a lot of pride in my productivity, and as my responsibilities grew, I'd try to do the impossible and deliver, no matter what. I'd not only work long hours, but I'd also try to work as quickly as I could. This led to a lot of stress. I remember putting my (at the time) young children to bed and impatiently waiting for them to fall asleep so that I could head back out to the garage office and start the evening coding shift. And I knew that stress isn't healthy, so I'd also stress about being stressed! This obviously wasn't sustainable.

Now, I emphasize work life balance not only for myself, but also for our team. I want to make sure our work is sustainable, and that people can thrive and be in it for the long term. It's a marathon, not a sprint.

Can you share an example of how feedback from the developer community has directly influenced a feature or improvement?

We have a number of feedback channels; the most important one is the Android Studio issue tracker.

We still have a very large backlog of bugs, so it's easy to get the impression that we're ignoring user reports, but that's not true. As a team, we've actually fixed several thousand bugs in 2024 alone. The best bugs are those that are clear and actionable, ideally with steps to reproduce.

I'm also very thankful to everyone who turns on data sharing in Studio; if you don't already, please consider it! Our analytics is more of an indirect, but still vital, feedback channel from the community. In addition to collecting information on, for example, which menu items are clicked, we also use it to collect quality metrics on system health. For instance, when we detect that the UI is lagging (such as a 1+ second freeze in the UI thread), we grab a thread dump and send it to the server, then aggregate these into a dashboard where we can see top freeze spots in the IDE across the user population, and can focus our efforts on fixing those.

How does the Studio team contribute to Google's broader vision for the Android platform?

In Android Studio we're always making sure we support the latest technologies and recommendations from Android, Firebase, Material, and other Google technologies. That way, it's easier for developers to adopt recommendations, like using Kotlin, Coroutines, Compose, Material, and so on.

Explore the Power of AI

Unlock the full potential of AI in your Android development journey. Explore the latest advancements in Android Studio, including intelligent code completion, automated refactoring, and other AI-driven tools.

Stay tuned!

Don't miss our next and final installment in the "Meet the Android Studio Team" series; we'll feature one more talented team member and share their unique perspective. Stay tuned to learn more about the amazing people behind Android Studio.

Find Tor Norbye on Bluesky.

Meet the Android Studio Team: A Conversation with Product Manager, Paris Hsu

Posted by Ashley Tschudin – Social Media Specialist, MTP at Google

Welcome to "Meet the Android Studio Team"; a short blog series where we pull back the curtain and introduce you to the passionate people who build your favorite Android development tools. Get to know the talented minds – engineers, designers, product managers, and more – who pour their hearts into crafting the best possible experience for Android developers.

Join us each week to meet a new member of the team and explore their unique perspectives.


Paris Hsu: Empowering Android developers with Compose tools

Meet Paris Hsu, a Product Manager at Google passionate about empowering developers to build incredible Android apps.

Her journey to the Android Studio team started with a serendipitous internship at Microsoft, where she discovered the power of developer tools. Now, as part of the UI Tools team, Paris champions intuitive solutions that streamline the development process, like the innovative Compose Tools suite.

In this installment of "Meet the Android Studio Team," Paris shares insights into her work, the importance of developer feedback, and her dream Android feature (hint: it involves acing that forehand).


Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?

Honestly, I joined a bit by chance! The summer before my last year of grad school, I was in Microsoft's Garage incubator internship program. Our project, InkToCode, turned handwritten designs into code. It was my first experience building developer tools and made me realize how powerful developer tools can be, which led me to the Android Studio team. Now, after 6 years, I'm constantly amazed by what Android developers create – from innovative productivity apps to immersive games. It's incredibly rewarding to build tools that empower developers to create more.

In your opinion, what is the most impactful feature or improvement the Android Studio team has introduced in recent years, and why?

As part of the UI Tools team in Android Studio, I'm biased towards Compose Tools! Our team spent a lot of time rethinking how we can take a code-first approach for tools as we transition the community from XML to Compose. Features like the Compose Preview and its submodes (Interactive, Animation, Deploy preview) enable fast UI iteration, while features such as Layout Inspector or Compose UI Check help find and diagnose UI issues with ease. We are also exploring ways to apply multimodal AI to these tools to help developers write high-quality, adaptive, and inclusive Compose code more quickly.

How does the Android Studio team ensure that products or features meet the ever-changing needs of developers?

We are constantly engaging with and listening to developer feedback to ensure we are meeting their needs! Some examples:

    • Direct feedback: UXR studies, annual developer surveys, and Buganizer reports provide valuable insights.
    • Early access: We release Early Access Programs (EAPs) for new features, allowing developers to test them and provide feedback before official launch.
    • Community engagement: We have advisory boards with experienced Android developers, gather feedback from Google Developer Experts (GDEs), and attend conferences to connect directly with the community.

How does the Studio team contribute to Google's broader vision for the Android platform?

I think Android Studio contributes to Google's broader mission by providing Android developers with powerful and intuitive tools. This way, developers are empowered to create amazing apps that bring the best of Google's services and information to our users. Whether it's accessing knowledge through Search, leveraging Gemini, staying connected with Maps, or enjoying entertainment on YouTube, Android Studio helps developers build the experiences that connect people to what matters most.

If you could wave a magic wand and add one dream feature to the Android universe, what would it be and why?

Anyone who knows me knows that I am recently super obsessed with tennis. I would love to see more coaching wearables (e.g. Pixel Watch, Pixel Racket?!). I would love real-time feedback on my serve and especially forehand stroke analysis.

Learn more about Compose Tools

Inspired by Paris’ passion for empowering developers to build incredible Android apps? To learn more about how Compose Tools can streamline your app development process, check out the Compose Tools documentation and get started with the Jetpack Compose Tutorial.

Stay tuned

Keep an eye out for the next installment in our “Meet the Android Studio Team” series, where we’ll shine the spotlight on another team member and delve into their unique insights.

Find Paris Hsu on LinkedIn, X, and Medium.

Here’s what’s happening in our latest Spotlight Week: Adaptive Android Apps

Posted by Alex Vanyo - Developer Relations Engineer

Adaptive Spotlight Week

With Android powering a diverse range of devices, users expect a seamless and optimized experience across their foldables, tablets, ChromeOS devices, and even cars. To meet these expectations, developers need to build their apps with multiple screen sizes and form factors in mind. Changing how you approach UI can drastically improve users' experiences across foldables, tablets, and more, while preventing the tech debt that a portrait-only mindset can create – simply put, building adaptive is a great way to help future-proof your app.

The latest in our Spotlight Week series focuses on building adaptive Android apps all this week (October 14-18), and we’ll highlight the many ways you can improve your mobile app to adapt to all of these different environments.



Here’s what we’re covering during Adaptive Spotlight Week

Monday: What is adaptive?

October 14, 2024

Check out the new documentation for building adaptive apps and catch up on building adaptive Android apps if you missed it at I/O 2024. Also, learn how adaptive apps can be made available on another new form factor: cars!

Tuesday: Adaptive UIs with Compose

October 15, 2024

Learn the principles for how you can use Compose to build layouts that adapt to available window size and how the Material 3 adaptive library enables you to create list-detail and supporting pane layouts with out-of-the-box behavior.

Wednesday: Desktop windowing and productivity

October 16, 2024

Learn what desktop windowing on Android is, together with details about how to handle it in your app and build productivity experiences that let users take advantage of more powerful multitasking Android environments.

Thursday: Stylus

October 17, 2024

Take a closer look at how you can build powerful drawing experiences across stylus and touch input with the new Ink API.

Friday: #AskAndroid

October 18, 2024

Join us for a live Q&A on making apps more adaptive. During Spotlight Week, ask your questions on X and LinkedIn with #AskAndroid.


These are just some of the ways that you can improve your mobile app’s experience for more than just the smartphone with touch input. Keep checking this blog post for updates; we’ll be adding links and more throughout the week. Follow Android Developers on X and Android by Google on LinkedIn to hear even more about ways to adapt your app, and send in your questions with #AskAndroid.

Jetpack Compose APIs for building adaptive layouts using Material guidance now stable

Posted by Alex Vanyo – Developer Relations Engineer

The 1.0 stable version of the Compose adaptive APIs with Material guidance is out, ready to be used in production. The library helps you build adaptive layouts that provide an optimized user experience on any window size.

The team at SAP Mobile Start was an early adopter of the Compose adaptive APIs. It took their developers only five minutes to integrate the NavigationSuiteScaffold from the new Compose Material 3 adaptive library, rapidly adapting the app’s navigation UI to different window sizes.

Each of the new components in the library (NavigationSuiteScaffold, ListDetailPaneScaffold, and SupportingPaneScaffold) is adaptive: based on the window size and posture, different components are displayed to the user based on which one is most appropriate in the current context. This helps build UI that adapts to a wide variety of window sizes instead of just stretching layouts.

For an overview of the components, check out the dedicated I/O session and our new documentation pages to get started.

In this post, we’re going to take a more detailed look at the layering of the new library so you have a better understanding of how customizable it is, to fit a wide variety of use cases you might have.

Similar to Compose itself, the adaptive libraries are layered into multiple dependencies, so that you can choose the appropriate level of abstraction for your application. There are four new artifacts as part of the adaptive libraries:

    • For the core building blocks for building adaptive UI, including computing the window size class and the current posture, add androidx.compose.material3.adaptive:adaptive:1.0.0

    • For implementing multi-pane layouts, add androidx.compose.material3.adaptive:adaptive-layout:1.0.0

    • For standalone navigators for the multi-pane scaffold layouts, add androidx.compose.material3.adaptive:adaptive-navigation:1.0.0

    • For implementing adaptive navigation UI, add androidx.compose.material3:material3-adaptive-navigation-suite:1.3.0
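
In a Gradle Kotlin DSL build file, pulling all four in might look like this (a minimal sketch using the coordinates listed above):

// Module-level build.gradle.kts
dependencies {
    implementation("androidx.compose.material3.adaptive:adaptive:1.0.0")
    implementation("androidx.compose.material3.adaptive:adaptive-layout:1.0.0")
    implementation("androidx.compose.material3.adaptive:adaptive-navigation:1.0.0")
    implementation("androidx.compose.material3:material3-adaptive-navigation-suite:1.3.0")
}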

The libraries have the following dependencies:

Flow diagram showing dependencies between material3-adaptive 1.0.0 and material 1.3.0 libraries
New library dependency graph

To explore this layering more, let’s start with the highest level example with the most built-in functionality using a NavigableListDetailPaneScaffold from androidx.compose.material3.adaptive:adaptive-navigation:

val navigator = rememberListDetailPaneScaffoldNavigator<Any>()

NavigableListDetailPaneScaffold(
    navigator = navigator,
    listPane = {
        // List pane
    },
    detailPane = {
        // Detail pane
    },
)

This snippet of code gives you all of our recommended adaptive behavior out of the box for a list-detail layout: determining how many panes to show based on the current window size, hiding and showing the correct pane when the window size changes depending on the previous state of the UI, and having the back button conditionally bring the user back to the list, depending on the window size and the current state.

A list layout adapting to and from a list detail layout depending on the window size

This encapsulates a lot of behavior – and it might be all you need, in which case you don’t need to go any deeper!

However, you may want to tweak this behavior, or manage the state more directly by hoisting parts of it in a different way.

Remember, each layer builds upon the last. This snippet is at the outermost layer, and we can start unwrapping the layers to customize it where we need.

Let’s drop down one layer from NavigableListDetailPaneScaffold. Behavior won’t change at all with these direct inlinings, since we are just inlining the default behavior at each step:

(Fun fact: You can follow along with this directly in Android Studio for any component you desire. If you choose Refactor > Inline function, you can directly replace a component with its implementation. You can’t delete the original function in the library, of course.)

val navigator = rememberListDetailPaneScaffoldNavigator<Any>()

BackHandler(
    enabled = navigator.canNavigateBack(BackNavigationBehavior.PopUntilContentChange)
) {
    navigator.navigateBack(BackNavigationBehavior.PopUntilContentChange)
}
ListDetailPaneScaffold(
    directive = navigator.scaffoldDirective,
    value = navigator.scaffoldValue,
    listPane = {
        // List pane
    },
    detailPane = {
        // Detail pane
    },
)

With the first inlining, we see the BackHandler that NavigableListDetailPaneScaffold includes by default. If using ListDetailPaneScaffold directly, back handling is left up to the developer to include and hoist to the appropriate place.

This also reveals how the navigator provides two pieces of state to control the ListDetailPaneScaffold:

    • directive — how the panes should be arranged in the ListDetailPaneScaffold, and
    • value — the current state of the panes, as calculated from the directive and the current navigation state.

These are both controlled by the navigator, and the next unpeeling shows us the default arguments to the navigator for directive and the adapt strategy, which is used to calculate value:

val navigator = rememberListDetailPaneScaffoldNavigator<Any>(
    scaffoldDirective = calculatePaneScaffoldDirective(currentWindowAdaptiveInfo()),
    adaptStrategies = ListDetailPaneScaffoldDefaults.adaptStrategies(),
)

BackHandler(
    enabled = navigator.canNavigateBack(BackNavigationBehavior.PopUntilContentChange)
) {
    navigator.navigateBack(BackNavigationBehavior.PopUntilContentChange)
}
ListDetailPaneScaffold(
    directive = navigator.scaffoldDirective,
    value = navigator.scaffoldValue,
    listPane = {
        // List pane
    },
    detailPane = {
        // Detail pane
    },
)

The directive controls the behavior for how many panes to show and the pane spacing, based on currentWindowAdaptiveInfo, which contains the size and posture of the window.

This can be customized with a different directive, to show two panes side-by-side at a smaller medium width:

val navigator = rememberListDetailPaneScaffoldNavigator<Any>(
    scaffoldDirective = calculatePaneScaffoldDirectiveWithTwoPanesOnMediumWidth(currentWindowAdaptiveInfo()),
    adaptStrategies = ListDetailPaneScaffoldDefaults.adaptStrategies(),
)

By default, showing two panes at a medium width can result in UI that is too narrow, especially for complex content. However, this can be a good option to use the window space more optimally by showing two panes for less complex content.

The AdaptStrategy controls what happens to panes when there isn’t enough space to show all of them. Right now, this always hides panes for which there isn’t enough space.

The navigator uses this directive to drive its logic; combined with the adapt strategy, it determines the scaffold value, the resulting target state for each of the panes.

The scaffold directive and the scaffold value are then passed to the ListDetailPaneScaffold, driving the behavior of the scaffold.

This layering allows hoisting the scaffold state away from the display of the scaffold itself. This layering also allows custom implementations for controlling how the scaffold works and for hoisting related state. For example, if you are using a custom navigation solution instead of the navigator, you could drive the ListDetailPaneScaffold directly with state derived from your custom navigation solution.
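
As a rough sketch of that kind of custom wiring (this assumes the adaptive-layout types ThreePaneScaffoldValue and PaneAdaptedValue, plus a hypothetical showDetail flag derived from your own navigation state; in a list-detail scaffold, the detail pane is the primary pane and the list is the secondary pane):

@Composable
fun CustomNavListDetail(showDetail: Boolean) {
    // Compute the directive from the window, just as the navigator would.
    val directive = calculatePaneScaffoldDirective(currentWindowAdaptiveInfo())
    val twoPanes = directive.maxHorizontalPartitions > 1

    // Derive the scaffold value from our own navigation state: with a single
    // partition, show either the detail (primary) or the list (secondary).
    val value = ThreePaneScaffoldValue(
        primary = if (twoPanes || showDetail) PaneAdaptedValue.Expanded else PaneAdaptedValue.Hidden,
        secondary = if (twoPanes || !showDetail) PaneAdaptedValue.Expanded else PaneAdaptedValue.Hidden,
        tertiary = PaneAdaptedValue.Hidden,
    )

    ListDetailPaneScaffold(
        directive = directive,
        value = value,
        listPane = {
            // List pane
        },
        detailPane = {
            // Detail pane
        },
    )
}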

The layering is enforced in the library with the different artifacts:

    • androidx.compose.material3.adaptive:adaptive contains the underlying methods to calculate the current window adaptive info
    • androidx.compose.material3.adaptive:adaptive-layout contains the layouts ListDetailPaneScaffold and SupportingPaneScaffold
    • androidx.compose.material3.adaptive:adaptive-navigation contains the navigator APIs (like rememberListDetailPaneScaffoldNavigator)

Therefore, if you aren’t going to use the navigator and instead use a custom navigation solution, you can skip using androidx.compose.material3.adaptive:adaptive-navigation and depend on androidx.compose.material3.adaptive:adaptive-layout directly.

When adding the Compose Adaptive library to your app, start with the most fully featured layer, and then unwrap if needed to tweak behavior. As we continue to work on the library and add new features, we’ll keep adding them to the appropriate layer. Using the higher-level layers will mean that you will be able to get these new features most easily. If you need to, you can use lower layers to get more fine-grained control, but that also means that more responsibility for behavior is transferred to your app, just like the layering in Compose itself.

Try out the new components today, and send us your feedback for bugs and feature requests.

SAP integrated NavigationSuiteScaffold in just 5 minutes to create adaptive navigation UI

Posted by Alex Vanyo – Developer Relations Engineer

SAP Mobile Start is an app that centralizes access to SAP's mobile business suite, a hub for users to keep track of their companies’ processes and data so they can efficiently manage their daily to-dos while on the move.

Recently, SAP Mobile Start developers prioritized building an adaptive app that looks great across devices, including tablets and foldables, to create a more seamless user experience. Using Jetpack Compose and Material 3 design, the team efficiently implemented intuitive, user-friendly features to increase accessibility across its users’ preferred devices.


Adaptive design across devices

With over 300 million daily active users on foldables, tablets, and Chromebooks today, building apps that adapt to varied screen sizes is important for providing an optimal user experience. But simply stretching the UI to fit different screen sizes can drastically alter it from its original form, obscuring the interface and impairing the user experience.

“We focused on situations where we could make better use of available space on large screens,” said Laura Bergmann, UX designer for SAP. “We wanted to get rid of screens that are stretched from edge to edge, full-screen drill-downs or dialogs, and use space more efficiently.”

Now, after optimizing for different devices, SAP Mobile Start dynamically adjusts its layouts by swapping components and showing or hiding content based on the available window size instead of stretching UI elements to match a device's screen.

The SAP team also implemented canonical layouts, common UI designs that split a screen into panes according to its size. By separating content into panes, SAP’s users can manage their business workflows more productively. Depending on the window size class, the supporting pane adjusts the UI without additional custom logic. For example, compact windows typically utilize one pane, while larger windows can utilize multiple.

“Adopting the new canonical layouts from Google helped us focus more on designing unique app capabilities for SAP’s business scenarios,” said Laura. “With the available navigational elements and patterns, we can now channel our efforts into creating a more engaging user experience without reinventing the wheel.”

SAP developers started by implementing supporting panes to create multi-pane layouts that efficiently utilize on-screen space. The first place developers added supporting panes was on the app’s “To-Do” details page. To-dos used to be managed in a single pane, making it difficult to review the comments and tickets simultaneously. Now, tickets and comments are reviewed in primary and secondary panes on the same screen using SupportingPaneScaffold.

“We focused on making better use of the available space in large screens. We wanted to move away from UIs that are stretched to adaptive layouts that enhance productivity.” — Laura Bergmann, UX designer at SAP

Fast implementation using Compose Material 3 Adaptive library

SAP Mobile Start is built entirely with Jetpack Compose, Android’s modern declarative toolkit for building native UI. Compose helped SAP developers build new UI faster and easier than ever before thanks to composables, reusable code blocks for building common UI components. The team also used Compose Navigation to integrate seamless navigation between composables, making it easy to move between new UI on all screens.

It took developers only five minutes to integrate the NavigationSuiteScaffold from the new Compose Material 3 adaptive library, rapidly adapting the app’s navigation UI to different window sizes, switching between a bottom navigation bar and a vertical navigation rail. It also eliminated the need for custom logic, which previously determined the navigation component based on various window size classes. The NavigationSuiteScaffold also reduced the custom navigation UI logic code by 59%, from 379 lines to 156.
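
For context, the core of such an integration is small; here is a sketch (the destination labels are illustrative, not SAP's actual code):

@Composable
fun AppNavigation() {
    var selected by remember { mutableStateOf(0) }
    val labels = listOf("Home", "To-Dos", "Apps") // illustrative destinations

    // NavigationSuiteScaffold switches between a bottom navigation bar and a
    // vertical navigation rail automatically, based on the window size class.
    NavigationSuiteScaffold(
        navigationSuiteItems = {
            labels.forEachIndexed { index, label ->
                item(
                    selected = index == selected,
                    onClick = { selected = index },
                    icon = { Icon(Icons.Default.Home, contentDescription = label) },
                    label = { Text(label) },
                )
            }
        }
    ) {
        // Content for the selected destination.
    }
}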

“Jetpack Compose simplified UI development,” said Aditya Arora, lead Android developer. “Its declarative nature, coupled with built-in support for Material Design and dark theme, significantly increased our development efficiency. By simply describing the desired UI, we've reduced code complexity and improved maintainability.”

SAP developers used live edit and layout inspector in Android Studio to test and optimize the app for large screens. These features were “total game changers” for the SAP team because they helped iterate and inspect layout issues faster when optimizing for new screens.

With its @PreviewScreenSizes annotation and device streaming powered by Firebase, Jetpack Compose also made testing the app's UI across various screen sizes easier. SAP developers look forward to Compose Screenshot Testing being completed, which will further streamline UI testing and ensure greater visual consistency within the app.
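
The annotation itself is tiny; a sketch (MainScreen stands in for your own composable):

@PreviewScreenSizes
@Composable
fun MainScreenPreviews() {
    // Renders MainScreen once per reference device (phone, foldable, tablet,
    // desktop) in the Android Studio preview panel.
    MainScreen()
}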

Using Jetpack Compose, SAP developers also quickly and easily implemented new Material 3 design concepts from the Compose M3 Adaptive library. Material 3 design emphasizes personalizing the app experience, improving interactions with modern visual aesthetics.

Compose's flexibility made replacing the standard Material Theme with their own custom Fiori Horizon Theme simple, ensuring a consistent visual appearance across SAP apps. “As early adopters of the Compose M3 Adaptive library, we collaborated with Google to refine the API,” said Aditya. “Since our app is completely Compose-based, leveraging the new Compose Material 3 Adaptive library was a piece of cake.”

A list layout adapting to and from a list detail layout depending on the window size

As large-screen devices like tablets, foldables, and Chromebooks become more popular, building layouts that adapt to varied screen sizes becomes increasingly crucial. For SAP Mobile Start developers, reimagining their app across devices using Jetpack Compose and Material 3 design guidelines was simple. Using Android’s collection of tools and resources, creating adaptive UIs for all the new form factors hitting the market today is faster and easier than ever.

“Optimizing for large screens is crucial. The market for tablets, foldables, and Chromebooks is booming. Don't miss out on this opportunity to improve your user experience and expand your app's reach,” said Aditya.

Get started

Learn how to improve your UX by optimizing for large screens and foldables using Jetpack Compose and Material 3 design.

Top 3 Updates with Compose across Form Factors at Google I/O ’24

Posted by Chris Arriola – Developer Relations Engineer

Google I/O 2024 was filled with lots of updates and announcements around helping you be more productive as a developer. Here are the top 3 announcements around Jetpack Compose and form factors:

#1 New updates in Jetpack Compose

The June 2024 release of Jetpack Compose is packed with new features and improvements such as shared element transitions, lazy list item animations, and performance improvements across the board.

With shared element transitions, you can create delightful continuity between screens in your app. This feature works together with Navigation Compose and predictive back so that transitions can happen as users navigate your app. Another highly requested feature, lazy list item animations, is now supported as well, giving lazy lists the ability to animate inserts, deletions, and reordering of items.
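
For the lazy list animations, the opt-in is a per-item modifier; a minimal sketch (the messages list and its id/text fields are hypothetical; stable keys are what let moves be detected):

LazyColumn {
    items(messages, key = { it.id }) { message ->
        Text(
            text = message.text,
            // animateItem() animates this row when it is inserted, removed,
            // or reordered within the list.
            modifier = Modifier.animateItem(),
        )
    }
}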

Jetpack Compose also continues to improve runtime performance with every release. Our benchmarks show a 17% faster time to first pixel in our Jetsnack Compose sample. Additionally, strong skipping mode graduated from experimental to production-ready status, further improving the performance of Compose apps. Simply update your app to take advantage of these benefits.
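
If you use the Compose compiler Gradle plugin, strong skipping can be switched on explicitly; a sketch of the module-level build file (newer plugin versions may already enable it by default):

// build.gradle.kts
composeCompiler {
    enableStrongSkippingMode = true
}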

Read What’s new in Jetpack Compose at I/O ‘24 for more information.


#2 Scaling across screens with new Compose APIs and Tools

During Google I/O, we announced new tools and APIs to make it easier to build across screens with Compose. The new Material 3 adaptive library introduces APIs that allow you to implement common adaptive scenarios such as list-detail and supporting pane. These APIs allow your app to display one or two panes depending on the available size for your app.

Watch Building UI with the Material 3 adaptive library and Building adaptive Android apps to learn more. If you prefer to read, you can check out About adaptive layouts in our documentation.

We also announced that Compose for TV 1.0.0 is now available in beta. The latest updates to Compose for TV include better performance, input support, and a whole range of improved components that look great out of the box. New in this release, we’ve added lists, navigation, chips, and settings screens. We’ve also added a new TV Material Catalog app and updated the developer tools in Android Studio to include a new project wizard to get a running start with Compose for TV.

Finally, Compose for Wear OS has added features such as SwipeToReveal, an expandableItem, and a range of WearPreview supporting annotations. During Google I/O 2024, Compose for Wear OS graduated visual improvements and fixes from beta to stable. Learn more about all the updates to Wear OS by checking out the technical session.

Check out case studies from SoundCloud and Adidas to see how apps are leveraging Compose to build their apps and learn more about all the updates for Compose across screens by reading more here!


#3 Glance 1.1

Jetpack Glance is Android’s modern recommended framework for building widgets. The latest version, Glance 1.1, is now stable. Glance is built on top of Jetpack Compose allowing you to use the same declarative syntax that you’re used to when building widgets.
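
At its core, a Glance widget is a class that emits Glance composables; a minimal sketch:

class HelloWidget : GlanceAppWidget() {
    override suspend fun provideGlance(context: Context, id: GlanceId) {
        provideContent {
            // Glance ships its own Text/Row/Column composables that render
            // to RemoteViews under the hood.
            Text(text = "Hello from Glance")
        }
    }
}

class HelloWidgetReceiver : GlanceAppWidgetReceiver() {
    override val glanceAppWidget: GlanceAppWidget = HelloWidget()
}

As with any app widget, the receiver still needs the usual AppWidget registration in the manifest.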

This release brings a new unit test library, Error UIs, and new components. Additionally, we’ve released new Canonical Widget Layouts on GitHub to allow you to get started faster with a set of layouts that align with best practices, and we’ve published new design guidance on the UI design hub — check it out!

To learn more about using Glance, check out Build beautiful Android widgets with Jetpack Glance. Or if you want something more hands-on, check out the codelab Create a widget with Glance.


You can learn more about the latest updates to Compose and Form Factors by checking out the Compose Across Screens and the What’s new in Jetpack Compose at I/O ‘24 blog posts or watching the spotlight playlist!

A Developer’s Roadmap to Predictive Back (Views)

Posted by Ash Nohe and Tram Bui – Developer Relations Engineers

Before you read on: this topic is scoped to Views. Predictive Back with Compose is easier to implement and is not covered in this blog post. To learn how to implement Predictive Back with Compose, see the Add predictive back animations codelab and the I/O workshop Improve the user experience of your Android app.

This blog post aims to shed light on various dependencies and requirements to support predictive back animations in your views based app.

First, view the Predictive Back Requirements table to understand whether a particular animation requires a manifest flag, a specific compileSdk version, additional libraries, or hidden developer options to function.

Then, start your quest. Here are your milestones:

  1. Upgrade Kotlin milestone
  2. Back-to-home animation milestone
  3. Migrate all activities milestone
  4. Fragment milestone
  5. Material Components (Views) milestone
  6. [Optional] AndroidX transitions milestone

Milestones

Upgrade Kotlin milestone

The first milestone is to upgrade to Kotlin 1.8.0 or higher, which is required for other Predictive Back dependencies.


Back-to-home animation milestone

The back-to-home animation is the keystone predictive back animation.

To get this animation, add android:enableOnBackInvokedCallback="true" in your AndroidManifest.xml for your root activity if you are a multi-activity app (see per-activity opt-in), or at the application level if you are a single-activity app. After this, you’ll see both the back-to-home animation and a cross-task animation where applicable, which are visible to users in Android 15+ and behind a developer option in Android 13 and 14.

If you are intercepting back events in your root activity (e.g. MainActivity), you can continue to do so, but you’ll need to use supported APIs, and you won’t get the back-to-home animation. For this reason, we generally recommend you only intercept back events for UI logic; for example, to show a dialog asking the user to save before they quit.
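
With the supported AndroidX APIs, that kind of UI-logic interception might look like the following sketch (showSaveDialog is a hypothetical stand-in for your app's own UI):

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // OnBackPressedCallback is the supported way to intercept back.
        // Keep it disabled unless there is something to intercept, so the
        // predictive back-to-home animation still plays in the common case.
        val saveCallback = onBackPressedDispatcher.addCallback(this, enabled = false) {
            showSaveDialog() // hypothetical "save before quitting?" dialog
        }
        // Flip saveCallback.isEnabled to true whenever there are unsaved changes.
    }
}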

See the Add support for the predictive back gesture guide for more details.


Migrate all activities milestone

If you are a multi-activity app, you’ll need to opt in and handle back events within those activities too to get a system-controlled cross-activity animation. Learn more about per-activity opt-in, available for devices running Android 14+. The cross-activity animation is visible to users in Android 15+ and behind a developer option in Android 13 and 14.

Custom cross-activity animations are also available with overrideActivityTransition.


Fragment milestone

Next, you’ll want to focus on your fragment animations and transitions. This requires updating to AndroidX Fragment 1.7.0 and Transition 1.5.0 or later and using Animator or AndroidX Transitions. Assuming these requirements are met, your existing fragment animations and transitions will animate in step with the back gesture. You can also use material motion with fragments. Most material motions support predictive back as of 1.12.0-alpha02, including MaterialFadeThrough, MaterialSharedAxis, and MaterialFade.
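
For example, a fragment can opt in to material motion like this (a short sketch; DetailFragment is illustrative):

class DetailFragment : Fragment() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // With AndroidX Fragment 1.7.0+ and Material 1.12.0-alpha02+, these
        // transitions animate in step with the predictive back gesture.
        enterTransition = MaterialSharedAxis(MaterialSharedAxis.X, /* forward = */ true)
        returnTransition = MaterialSharedAxis(MaterialSharedAxis.X, /* forward = */ false)
    }
}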

Don’t strive to make your fragment transitions look like the system’s cross-activity transition. We recommend this full screen surface transition instead.

Learn more about Fragments and Predictive Back.


Material Components milestone

Finally, you’ll want to take advantage of the Material Component View animations available for Predictive Back. Learn more about available components.


After this, you’ve completed your quest to support Predictive Back animations in your view based app.

[Optional] AndroidX Transitions milestone

If you’re up for more, you might also ensure your AndroidX transitions are supported with Predictive Back. Read more about AndroidX Transitions and the Predictive Back Progress APIs.



Everything you need to know about Google TV and Android TV OS


Posted by Shobana Radhakrishnan – Senior Director of Engineering, Google TV, and Paul Lammertsma – Developer Relations Engineer

Over the past year, we’ve seen significant growth of Android TV OS, reaching 220 million monthly active devices with a 47% year-over-year increase. This incredible engagement would not be possible without our dedicated developer community. A massive thank you for your contributions.

Android 14 on TV

We’re bringing Android 14 to TV! The next generation of Android provides improvements in performance, sustainability, accessibility, and multitasking to help you build engaging apps for TVs.

  • Performance and sustainability — Android 14 for TV improves on previous OS versions so users get a snappier, more responsive TV experience. We’ve also added new energy modes to put users in control, helping to reduce a TV’s standby power consumption. Ensure your app integrates with MediaSession correctly to prevent content from continuing when input modes change or the panel switches off.
  • Accessibility — New features include color correction, enhanced text options, and improved navigation for users, which can all be toggled on or off using remote shortcuts. Review the accessibility best practices to make sure your app supports these features.
  • Multitasking — Picture-in-picture mode is now supported on qualified Android 14 TV models. To evaluate whether a device supports the feature, query PackageManager for the picture-in-picture feature flag, as shown in the sketch below:
    hasSystemFeature(PackageManager.FEATURE_PICTURE_IN_PICTURE)
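
In context, that check might look like this (a minimal sketch; where you gate your PiP entry point is up to your app):

val supportsPip = packageManager.hasSystemFeature(
    PackageManager.FEATURE_PICTURE_IN_PICTURE
)
if (supportsPip) {
    // Only offer the picture-in-picture entry point on TV models
    // that declare the feature.
}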


For additional details, consult the updated Android TV app quality guidelines and the Android 14 for TV release notes.

Compose for TV

Compose for TV is now available in 1.0.0-beta01. We’ve updated the developer tools in Android Studio to include a new project wizard to give you a running start with Compose for TV.

Here are just a few ways Compose makes it easier to build apps for TV:

    • Dedicated components for TV apps. Explore these components in our design guide or in practice by using our new TV Material Catalog app. Since the previous alpha release, we’ve added lists, navigation, chips, and settings screens.
    • Improved input support and performance. We’ve worked hard to address focus issues and ensure that the UI appears and animates smoothly.
    • Ease of implementation and extensive styling. Add components to your app and customize them with minimal code.
    • Broad form-factor support. Reuse business logic from your phone, tablet, or foldable app to render a TV UI with changes that can be as small as simply adding a ViewModel.

Beta01 makes two big changes from alpha10:

    • Several components have graduated from experimental.
    • The ImmersiveList composable has been removed from the androidx-tv-material package.

Carousel and chip components, such as FilterChip, are still experimental, so you’ll want to keep the @ExperimentalTvMaterial3Api annotation if you are using these components in your app. For all other components, you can now remove the @ExperimentalTvMaterial3Api annotation, since these APIs are now available in beta.

We heard your feedback about the variety in the data types that represent content, which made it difficult to design a component in such a way that it would result in less code. If you are using the ImmersiveList composable from the alpha release, replace it with a custom implementation of an immersive list. While ImmersiveList is no longer part of Compose for TV, you can create an immersive list with just a few lines of code:

@Composable
fun SampleImmersiveList() {
    val selectedMovie = remember { mutableStateOf<Movie?>(null) }

    // Container
    Box(
        modifier = Modifier
            .fillMaxWidth()
            .height(400.dp)
    ) {
        // Background, driven by whichever movie card currently has focus
        // (assumes the Movie type exposes a background Color)
        Box(
            modifier = Modifier
                .fillMaxWidth()
                .aspectRatio(20f / 7)
                .background(selectedMovie.value?.background ?: Color.Black)
        ) {}

        // Rows
        LazyRow(
            modifier = Modifier.align(Alignment.BottomEnd),
            ...
        ) {
            items(movies) { movie ->
                MyMovieCard(
                    modifier = Modifier
                        .onFocusChanged {
                            if (it.hasFocus) {
                                selectedMovie.value = movie
                            }
                        },
                    ...
                ) {}
            }
        }
    }
}

A complete snippet is available in the immersive list sample.

Also consult the comprehensive list of changes in the release notes to migrate any renamed or moved components.

Migrate from the Leanback UI toolkit

We recommend following our step-by-step migration guide to switch from Leanback to Compose for Android TV.

Resources

Whether you’re new to Compose or are in the process of migrating to Compose already, our large collection of resources is here to help you learn best practices for building TV UIs with the modern Android development toolkit, Jetpack Compose.

Engage with the active Android developer community on Stack Overflow for any bugs you encounter, or submit the bugs through our public bug tracker.

Thank you for your continued support of Android TV OS. We can’t wait to see what you’ll do on Google TV with the Android 14 TV OS!

15 Things to know for Android developers at Google I/O

Posted by Matthew McCullough, Vice President, Product Management, Android Developer  

AI is unlocking experiences that were not even possible a few years ago, and we’ve been hard at work reimagining Android with AI at the core, to help enable you to build a whole new class of apps. At this year’s Google I/O, we’re covering how new tools like Gemini can power building the next generation of apps on Android. Plus, we showcased a range of updates to our tools and services grounded in productivity, helping you make it faster and easier to build excellent experiences across form factors. Let’s dive in!

Powering the next generation of Apps with AI

#1: AI in your tools, with Gemini in Android Studio

Gemini in Android Studio (formerly Studio Bot) is your coding companion for Android development, and thanks to your feedback since its preview at last year’s Google I/O, we’ve evolved our models, expanded to over 200 countries and territories, and brought it into the Gemini family of products. Earlier today, we previewed a number of new features coming soon, like code suggestions and App Quality Insights that leverage Gemini, and previewed the multimodal inputs that are coming with Gemini 1.5 Pro. You can read more about the updates here, and make sure to check out What’s new in Android development tools.

#2: Building with Generative AI

Android provides the solution you need to build Generative AI apps. You can use our most capable models over the Cloud with the Gemini API in Google AI or Vertex AI for Firebase directly in your Android apps. For on-device, Gemini Nano is our most efficient model. We’re working closely with a few early adopters such as Patreon, Grammarly, and Adobe to ensure we’re creating the best APIs that unlock the most innovative experiences. For example, Adobe is experimenting with Gemini Nano to enhance the on-device experience of Acrobat AI Assistant, a tool that allows their users to summarize and interact with documents. Be sure to check out the Build your own generative AI powered Android app, Android on-device gen AI under the hood, and the What’s New in Android sessions to learn more!

Moving image of Gemini Nano operating in Adobe
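
For the cloud path, the Google AI client SDK for Android keeps the call surface small; a minimal sketch (the model name and API key wiring here are assumptions for illustration):

val model = GenerativeModel(
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.GEMINI_API_KEY, // assumed to be injected at build time
)

suspend fun summarize(text: String): String? {
    // generateContent is a suspend call; the response exposes its text directly.
    val response = model.generateContent("Summarize: $text")
    return response.text
}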

Excellent apps, across devices

#3: Think adaptive: apps on phones, foldables, tablets and more

Build and design apps that adapt beyond the phone, with the new Compose adaptive layout libraries built with Material guidance in beta. Add rich stylus and keyboard support to increase user productivity. Check out three of our key Android adaptive sessions at Google I/O: Designing adaptive apps, Building adaptive Android apps, and Increase user productivity with large screens and accessories.


#4: Enhance homescreens with Widgets and Jetpack Glance

Jetpack Glance 1.1 is now available in release candidate and lets you build high quality widgets using your Compose skills. Check out our new canonical layouts, design guidance, and Figma updates to the Android UI kit. To learn more, check out our Improve the user experience of your Android app workshop and Build Android widgets with Jetpack Glance technical session.

#5-9: Come back here tomorrow and Thursday!

We’ll continue to share more updates for Android Developers throughout Google I/O, so check back here tomorrow!

Developer Productivity

#10: Use Kotlin Multiplatform for sharing business logic

Kotlin Multiplatform (KMP) enables sharing Kotlin code across different platforms and several of our Jetpack libraries, like DataStore and Room, have already been migrated to take advantage of KMP. We use Kotlin Multiplatform within Google and recommend using KMP for sharing business logic between platforms. Learn more about it here.

#11: Compose: Shared Elements, performance improvements and more

The upcoming Compose June ‘24 release is packed with the features you’ve been asking for! Shared element transitions, lazy list item reordering animations, strong skipping mode, performance improvements, a new lazy flow layout and more. Read more about it in our blog.

#12: Android Studio: the latest preview, with Gemini and more

Android Studio Koala 🐨 Feature Drop (2024.1.2), available today in the canary channel, builds on top of IntelliJ 2024.1 and adds innovative new features unlocked by Gemini, such as insights for crashes in App Quality Insights, code transformations, and a Gemini API starter template to get you started with Gemini quickly. Additionally, new features such as USB speed detection, a shortcut UI to control device settings, a new way to sign in to Google services, an updated and speedier UI for profilers with a new task-centric approach, and a deep integration with the Google Play SDK Index are intended to make the development process extremely productive. Read more here.

And the latest from the world of Mobile

#13: Grow your business with the latest Google Play updates

Discover new ways to attract and engage users with enhanced custom store listings. Optimize revenue with expanded payment options. Reinforce trust through secure, high-quality experiences made easier with our latest SDK Console improvements. Learn about these updates and more, including our new vertical approach, in our blog.

#14: Simplify app compliance with Checks

Streamline your app's privacy compliance with Checks, Google's AI-powered compliance solution! Checks empowers developers to swiftly identify, address, and resolve privacy issues, enabling you to launch apps faster and with confidence. Harness the power of automation with Checks' intelligent reports, saving you valuable time and resources. Get started now at checks.google.com.

#15: And of course, Android 15

…but for that, you’ll have to stay tuned tomorrow, when we’ve got a bit more up our sleeve!

Google I/O 2024: What’s new in Android Development Tools

Posted by Mayank Jain – Product Manager, Android Studio

At Google I/O 2024, we announced an exciting new set of features and tools aimed at making Android development faster and easier. We also shared updates to Android Studio that will help you leverage AI and make it easier for you to build high quality apps for Android across the Android ecosystem.

You can check out the What’s new in Android Developer Tools session at Google I/O 2024 to see some of the new features in action or better yet, try them out yourself by downloading Android Studio Koala 🐨 Feature Drop in the preview release channel. Here’s a look at our announcements:

Leverage Gemini in Android Studio

Since launching AI features in Android Studio last year, we continue to evolve our underlying models, integrate your feedback, and expand availability to more countries and territories so that you can leverage AI in your workflow and become a more productive Android app developer. Using the built-in AI privacy controls, you can opt in to using the latest AI feature improvements that are tailored for your Android app project.

Code suggestions with Gemini in Android Studio

You can now provide custom prompts for Gemini in Android Studio to generate code suggestions. After you enable Gemini from the View > Tool Windows > Gemini tool window, right-click in the code editor and select Gemini > Transform selected code from the context menu to see the prompt field. You can then prompt Gemini to generate a code suggestion that either adds new code or transforms selected code. You can ask Gemini to simplify complex code by rewriting it, perform very specific code transformations such as “make this code idiomatic,” or generate new functions you describe. Android Studio then shows you Gemini’s code suggestion as a code diff, so that you can review and accept only the suggestions you want.

Code suggestions with Gemini in Android Studio

Gemini for recommendations on crash reports

App Quality Insights in Android Studio seamlessly incorporates both Firebase Crashlytics and Android Vitals data into Android Studio so you can access the most important app stability related information, without having to switch tools.

You can now use Gemini in Android Studio to analyze your crash reports, generate insights which are shown in the Gemini tool window, provide a crash summary, and when possible recommend next steps, including sample code and links to relevant documentation.

You can generate all of this information directly from the App Quality Insights tool window in Android Studio after you enable Gemini from View > Tool Windows > Gemini.

Gemini for recommendations on crash reports

Integrate Gemini API into your app with a starter template

Start prototyping with Gemini models in your apps with our new starter app template provided in Android Studio. In this app template, you can issue prompts directly to the Gemini API, add image sources as input, and display the responses on the screen. Additionally, use Google AI Studio to craft custom prompts for your app.

When you are ready to scale your AI features to production with Google Cloud infrastructure, you can also access the powerful capabilities of Gemini models through Vertex AI. This is Google’s fully-managed development platform designed for building and deploying generative AI. Whether you simply need world class inference capabilities, or want to build end-to-end AI workflows with Vertex, the Gemini API is a great solution.

Integrate Gemini API into your app with a starter template

Gemini 1.5 Pro coming to Android Studio

We previously announced that Gemini in Android Studio uses the Gemini 1.0 Pro model to help you by answering Android development questions, generating code, finding resources, or explaining best practices. In this preview stage of Gemini in Android Studio, we are offering Gemini 1.0 Pro at no cost to all users for now. Gemini 1.0 Pro is a versatile model, making it ideal to scale. However, we acknowledge that its quality of responses may be limited in some cases. Based on your feedback, we are committed to improving the quality for Android development, and excited to add more features using Gemini to make your developer experience even more productive.

Along this journey, the Gemini 1.5 Pro model will be coming to Android Studio later this year. Equipped with a large context window, this model notably leads to higher quality responses and unlocks use cases like the multimodal input that you might have seen in the Google I/O 2024 sessions. Stay tuned for more updates on how you can access more capable models in Android Studio.

Productivity enhancements

Release Monitoring with Firebase

Today we announced the general availability of the Firebase Release Monitoring Dashboard, a single dashboard powered by Firebase Crashlytics to monitor your most recent production releases of your Android app. It updates in real time to give you a high-level view of the most important release metrics, like crash-free sessions, comparisons, and benchmarking based on your previous releases.

Android Device Streaming

Android Device Streaming, powered by Firebase, lets you securely connect to remote physical Android devices hosted in Google's data centers. It is a convenient way to test your app against physical units of some of the latest Android devices, including the Google Pixel 8 and 8 Pro, Pixel Fold, and more.

Starting today, Android Device Streaming now includes the following devices, in addition to the portfolio of 20+ device models already available:

    • Samsung Galaxy Fold5
    • Samsung Galaxy S23 Ultra
    • Google Pixel 8a

Additionally, if you’re new to Firebase, Android Studio automatically creates and sets up a no-cost Firebase project for you when you sign in to Koala Feature Drop to use Device Streaming. So, you can get to streaming the device you need much faster. Learn more about Android Device Streaming quotas, including promotional quota for the Firebase Blaze plan projects available for a limited time.

Connect to the latest physical Android devices in moments with Android Device Streaming, powered by Firebase

USB cable speed detection

Did you know that USB cable bandwidth varies from 480 Mbps (USB-2) to up to 40,000 Mbps (USB-4)? Android Studio Koala Feature Drop now makes it trivial to differentiate low performing USB cables from the high performing ones.

When you connect an Android device, Android Studio automatically detects the device and USB cable bandwidth and warns you if there’s a mismatch in USB bandwidth.

Note: USB cable speed detection requires an updated ADB found in Android SDK Platform Tools v34+, and is currently available for macOS and Linux.

USB cable speed detection.
Learn more about USB speeds here

A new way to sign in with Google in Android Studio

It’s now easier to sign in to multiple Google services with one authentication step. Whether you want to use Gemini in Android Studio, Firebase for Android Device Streaming, Google Play for Android Vitals reports, or all these useful services, the new sign in flow makes it easier to get up and running. If you’re new to Firebase and want to use Android Device Streaming, Android Studio automatically creates a project for you, so you can quickly start streaming a real physical Firebase device. With granular permissions scoping, you will always be in control of which services have access to your account. To get started, just click the profile avatar and sign in with your developer account.

A new way to sign in with Google in Android Studio

Device UI setting shortcut

Using the device UI setting shortcut, you can now effortlessly adjust device settings such as dark theme, font size, display size, app language, and more, all directly from the Running Devices window, so you can test and debug your UI against any of the configurations your use case requires.

Device UI settings shortcuts

Faster and improved Profiler with a task-centric approach

The internals of the Android Studio Profiler have been dramatically improved. Popular profiling tasks like capturing a system trace with profileable apps now start up to 60% faster.*

We've redesigned the profiler to make it easier to start the task you're interested in, whether it's profiling your app's CPU, memory, or power usage. For example, a system trace task for profiling and improving your app's startup time is built right into the UI you see when you open the profiler.

Faster and improved Profiler with a task-centric approach 
*Based on internal data, as tested in April 2024

Google Play SDK Index integration

Android Studio is integrated with the Google Play SDK Index to inform you when there are known policy or version issues with the SDKs used by your app. This enables you to update those dependencies and avoid issues that could prevent you from publishing new versions of your app.

In the Android Studio Koala Feature Drop release, the integration has been expanded to also include warnings from the Google Play SDK Console. This gives you a complete view of any potential version or policy issues in your dependencies before submitting your app to the Google Play Console.

Notes from SDK authors are now also displayed directly in Android Studio to save you time.

A warning from the SDK Index with the corresponding SDK author note

Preview tiles for Wear OS apps

Android Studio now has preview support for Tiles. You can iterate much more quickly when creating tiles, seeing what a Tile looks like in different configurations without needing to run it on a device.

Tiles previews usage for Wear OS apps
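
As a concrete illustration, here is a minimal sketch of a tile preview function. It assumes the Wear tiles tooling-preview APIs (androidx.wear.tiles:tiles-tooling-preview at compile time, plus androidx.wear.tiles:tiles-tooling as a debugImplementation dependency); the function name and text content are our own placeholders.

```kotlin
import android.content.Context
import androidx.wear.protolayout.material.Text
import androidx.wear.protolayout.material.Typography
import androidx.wear.tiles.tooling.preview.Preview
import androidx.wear.tiles.tooling.preview.TilePreviewData
import androidx.wear.tiles.tooling.preview.TilePreviewHelper
import androidx.wear.tooling.preview.devices.WearDevices

// A minimal tile preview: Android Studio renders the returned Tile in the
// Split/Design view without deploying anything to a device or emulator.
@Preview(device = WearDevices.SMALL_ROUND)
fun helloTilePreview(context: Context) = TilePreviewData { request ->
    TilePreviewHelper.singleTimelineEntryTileBuilder(
        // Placeholder layout; in a real app this would be your tile's layout.
        Text.Builder(context, "Hello, Tile!")
            .setTypography(Typography.TYPOGRAPHY_BODY1)
            .build()
    ).build()
}
```

Adding more @Preview functions with different device parameters lets you check the same tile across configurations side by side.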

Generate synthetic sensor data for testing on Wear OS apps

To help you simulate real-life scenarios, you can now generate synthetic (fake) data on a Wear OS emulator for health-related sensors such as heart rate, speed, steps, and more. You can set up and run end-to-end testing of a multi-sport training session in minutes, all within Android Studio and without ever leaving your desk.

Generate synthetic sensor data for testing on Wear OS apps

Compose Glance widget previews

Android Studio Koala Feature Drop makes it easy to preview your Jetpack Compose Glance widgets (1.1.0-rc01) directly within the IDE. Catch potential UI issues and fine-tune your widget's appearance early in the development process. Learn more about how to get started.

Previews for Compose Glance widgets
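
For a sense of what this looks like in code, here is a minimal sketch using Glance's preview annotation; the widget content and preview sizes are our own illustrative placeholders.

```kotlin
import androidx.compose.runtime.Composable
import androidx.compose.ui.unit.dp
import androidx.glance.GlanceModifier
import androidx.glance.layout.Column
import androidx.glance.layout.fillMaxSize
import androidx.glance.layout.padding
import androidx.glance.preview.ExperimentalGlancePreviewApi
import androidx.glance.preview.Preview
import androidx.glance.text.Text

// Hypothetical widget content, written with Glance composables.
@Composable
fun WeatherWidgetContent() {
    Column(modifier = GlanceModifier.fillMaxSize().padding(8.dp)) {
        Text(text = "Sunny, 22°C")
        Text(text = "Updated just now")
    }
}

// The IDE renders this preview at the given size, so you can check the
// widget's appearance without installing it on a home screen.
@OptIn(ExperimentalGlancePreviewApi::class)
@Preview(widthDp = 250, heightDp = 100)
@Composable
fun WeatherWidgetPreview() {
    WeatherWidgetContent()
}
```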

Live Edit for Compose enabled by default

Live Edit for Compose can accelerate your Compose development experience by automatically deploying code changes to the running application on an emulator or physical device. Live Edit can help you see the effect of updates to UX elements (for example, new composables, modifier updates, and animations) on the overall app experience. As you become more familiar with Live Edit you will find many creative ways it can help improve your development experience and productivity.

In Android Studio Koala Feature Drop, Live Edit is enabled by default in manual mode and has increased stability and more robust change detection, including support for import statements.
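
To make the workflow concrete, consider a sketch like the following (the composable and values are our own): with Live Edit on, tweaking the literal in the padding modifier or the greeting text updates the running app in place, without a rebuild and redeploy.

```kotlin
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun Greeting(name: String) {
    // With Live Edit enabled, editing this padding value or the string
    // below is pushed straight to the running app.
    Text(
        text = "Hello, $name!",
        modifier = Modifier.padding(16.dp)
    )
}
```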


Compose preview screenshot testing plugin (alpha)

Host-side screenshot testing is an easy and powerful way to test UIs and prevent regressions. Today, the first alpha of the Compose Preview Screenshot Testing plugin is available as a separate plugin, to be used together with AGP 8.5.0-beta01 or higher. Add your Compose Previews to the src/main/screenshotTest folder and run the task to generate a diff report after UI updates. The generated HTML test report lets you visually detect any changes to your app's UI.

Compose Preview Screenshot Testing with Now in Android app

This alpha version of the plugin is designed for rapid iteration and feedback. We plan to merge it back into AGP in the future, but for now, this separate plugin lets us experiment and improve the feature quickly. Learn more about how to get started.
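
As a rough sketch of the setup, the module's build file pulls in the standalone plugin; note that the plugin id, version, property name, and task names below reflect the alpha documentation at the time of writing and may change as the plugin evolves.

```kotlin
// build.gradle.kts (app module)
plugins {
    id("com.android.application")
    id("org.jetbrains.kotlin.android")
    // The standalone screenshot testing plugin (alpha coordinates).
    id("com.android.compose.screenshot") version "0.0.1-alpha01"
}

dependencies {
    // The screenshotTest source set needs ui-tooling to render previews
    // on the host JVM; the version is typically supplied by your Compose BOM.
    screenshotTestImplementation("androidx.compose.ui:ui-tooling")
}

// In gradle.properties, opt in to the experimental feature:
//   android.experimental.enableScreenshotTest=true
```

From there, running ./gradlew updateDebugScreenshotTest records reference images, and ./gradlew validateDebugScreenshotTest compares the current previews against them and writes the HTML diff report.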

IntelliJ Platform Update (2024.1)

Android Studio Koala Feature Drop includes the IntelliJ 2024.1 platform release, which comes with some very useful IDE improvements:

    • An overhauled terminal featuring both visual and functional enhancements to streamline command-line tasks. Learn more in this blog post.
    • A new sticky lines feature in the editor simplifies working with large files and exploring new codebases. It keeps key structural elements, like the beginnings of classes or methods, pinned to the top of the editor as you scroll, and clicking a pinned line navigates you straight to that point in the code.
    • Basic IDE functionalities like code highlighting and completion now work for Java and Kotlin during project indexing, which should enhance your startup experience.
    • You can now scale the IDE down to 90%, 80%, or 70%, giving you the flexibility to adjust the size of IDE elements both upward and downward.

Read the detailed IntelliJ release notes here.

To summarize

Android Studio Koala Feature Drop (2024.1.2) is now available in the Android Studio canary channel with:

    • Gemini in Android Studio
        • Code suggestions with Gemini in Android Studio
        • Gemini for recommendations on crash reports
        • Gemini API starter app template to help integrate Gemini into your app (also available in Koala 2024.1.1)

    • Productivity enhancements
        • Release Monitoring with Firebase
        • Android Device Streaming
        • USB cable speed detection
        • A new way to sign in with Google in Android Studio
        • Device UI setting shortcut
        • Faster and improved Profiler with a task-centric approach
        • Google Play SDK Index integration
        • Preview tiles for Wear OS apps
        • Generate synthetic sensor data for testing on Wear OS apps
        • Compose Glance widget previews
        • Live Edit for Compose enabled by default
        • Compose preview screenshot testing plugin (alpha) - to be installed additionally

    • IntelliJ Platform Update (2024.1): also available in Koala 2024.1.1
        • An overhauled terminal
        • Sticky lines in editor simplifies working with large files
        • Code highlighting and completion now work during project indexing
        • Flexible IDE size adjustments

And last, a quick reminder: going forward, the initial Android Studio release on each new IntelliJ platform version will carry the .1 Android Studio version and introduce the updated IntelliJ platform, while the subsequent Feature Drop will move the version to .2 and focus on Android-specific features that help you be more productive for Android app development.

How to get started

Ready to try the exciting new features in Android Studio?

You can download the canary version Android Studio Koala 🐨 Feature Drop (2024.1.2) today to incorporate these new features into your workflow or try the stable version Android Studio Jellyfish 🪼. You can also install them side by side by following these instructions.

As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!