
New Resources to Build with Google AI

Posted by Jaimie Hwang, ML Product Marketing and Danu Mbanga, ML Product Management

Today's development environment is only getting more complex, and as machine learning becomes increasingly integrated with mobile, web, and cloud platforms, developers are looking for clear pathways to cut through this growing complexity. To help developers of all levels, we've unified our machine learning products, tools, and guidance on Google AI, so you can spend less time searching and more time building AI solutions.

Whether you are looking for a pre-trained dataset, the latest in Generative AI, or tracking the latest announcements from Google I/O, you’ll be able to find it on ai.google/build/machinelearning. It’s a single destination for building AI solutions, no matter where you are in your machine learning workflow, or where your models are deployed.

cropped screenshot of Google AI landing page

Toolkits: end-to-end guidance

We're taking it one step further with new toolkits that provide end-to-end guidance to help you build the latest AI solutions. These toolkits bring together our products, many of which are open source, alongside walkthroughs so you can learn best practices and implement code. Check out how to build a text classifier using Keras, or how to take a large language model and shrink it to run on Android using Keras and TensorFlow Lite. And we are just getting started: these are our first two toolkits, with many more to come soon – so stay tuned!

moving image of finding a toolkit on Google AI to build an LLM on Android with Keras and TensorFlow Lite
Toolkit to build an LLM on Android with Keras and TensorFlow Lite

Whether you're just starting out with machine learning or you're an experienced developer looking for the latest tools and resources, Google AI has the resources you need to build AI solutions. Visit ai.google/build/machinelearning today to learn more.

14 Things to know for Android developers at Google I/O!

Posted by Matthew McCullough, Vice President, Product Management, Android Developer

Today, at Google I/O 2023, you saw how we are ushering in important breakthroughs in AI across all of Google. For Android developers, we see this technology helping you in your flow and saving you time, so you can focus on building engaging new experiences for your users. Time-saving tools are going to be even more important as your users ask you to support their experiences across an expanding portfolio of screens, large screens and wearables in particular. Across the Google and Developer Keynotes, Android showed you a number of ways we can support you in this mission to build great experiences for your users; read on for our 14 new things to know in the world of Android development (and yes, we also showed you the latest Beta for Android 14!).


BRINGING AI INTO YOUR WORKFLOW

#1: Leverage AI in your development with Studio Bot

As part of Google’s broader push to unlock the power of AI to help you throughout your day, we introduced Studio Bot, an AI-powered conversational experience within Android Studio that helps you generate code, fix coding errors, and be more productive. Studio Bot is in its very early days, and we’re training it to become even better at answering your questions and helping you learn best practices. We encourage you to read the Android Studio blog, download the latest version of Android Studio, and read the documentation to learn how you can get started.


#2: Generate Play Store Listings with AI

Starting today, when you draft a store listing in English, you’ll be able to use Google’s generative AI technology to help you get started. Just open our AI helper in Google Play Console, enter a couple of prompts, like an audience and a key theme, and it will generate a draft you can edit, discard, or use. Because you can always review the draft, you’re in complete control of what you submit and publish on Google Play.

Moving image showing generating Google Play listings with AI

BUILDING FOR A MULTI-DEVICE WORLD

#3: Going big on Android foldables & tablets

Google is all in on large screens, with two new Android devices coming from Pixel - the Pixel Fold and the Pixel Tablet - and 50+ Google apps optimized to look great on the Android large screen ecosystem, alongside a range of apps from developers around the world. It is a great time to invest, with improved tools and guidance like the new Pixel Fold and Pixel Tablet emulator configurations in Android Studio Hedgehog Canary 3, expanded Material design updates, and inspiration for gaming and creativity apps. You can start optimizing for these and other large screen devices by reading the do’s and don’ts of optimizing your Android app for large screens and watching the Developing high quality apps for large screens and foldables session.


#4: Wear OS: Watch faces, Wear OS 4, & Tiles animations

Wear OS active devices have grown 5x since launching Wear OS 3, so there’s more reason than ever to build a great app experience for the wrist. To help you on your way, we announced the new Watch Face Format, a new declarative XML format built in partnership with Samsung to help you bring your great idea to the watch face market. We’re also releasing new APIs to bring rich animations to tiles and helping you get ready for the next generation of platform updates with the Wear OS 4 Developer Preview. Learn more about all the latest updates by checking out our blog, watching the session, and taking a look at the brand new Wear OS gallery.


#5: Android Health: An interconnected health experience across apps and devices

With 50+ apps in our Health Connect ecosystem and 100+ apps integrated with Health Services, we’re improving Android Health offerings so more developers can work together to bring unique health and fitness experiences to users. Health Connect is coming to Android 14 this fall, making it even easier for users to control how their health data is being shared across apps directly from Settings on their device. Read more about what we announced at I/O and check out our Health Services documentation, Health Connect documentation, and code samples to get started!

#6: Android for Cars: New apps & experiences

Our efforts in cars continue to grow: Android Auto will be available in 200 million cars this year and the number of cars with Google built-in will double in the same period. It’s easier than ever to port existing Android apps to cars and bring entirely new experiences to cars, like video and games. To get started, check out the What’s New with Android for Cars session and check out the developer blog.

#7: Android TV: Compose for TV and more!

We continue our commitment to bringing the best of the app ecosystem to Android TV OS. Today, we’re announcing Compose for TV, the latest UI framework for developing beautiful and functional apps for Android TV OS. To learn more, read the blog post and check out the developer guides, design reference, our new codelab and sample code. Also, please continue to give us feedback so we can continue shaping Compose for TV to fit your needs.

#8: Assistant: Simplified voice experiences across Android

Building Google Assistant integrations inside familiar Android development paths is even easier than before. With the new App Actions Test Library and the Google Assistant plugin for Android Studio–which is now also available for Wear and Auto–it is now easier to code, easier to emulate your user’s experience to forecast user expectations, and easier to deploy App Actions integrations across primary and complementary Android devices. To get started, check out the session What's new in Android development tools and check out the developer documentation.


MODERN ANDROID DEVELOPMENT

#9: Build UI with Compose across screens

Jetpack Compose, our modern UI toolkit for Android development, has been steadily growing in the Android community: 24% of the top 1000 apps on Google Play are using Jetpack Compose, double last year's share. We’re bringing Compose to even more surfaces with Compose for TV in alpha, and to homescreen widgets with Glance, now in beta. Read more about what we announced at Google I/O, and get started with Compose for building UI across screens.


#10: Use Kotlin everywhere, throughout your app

The Kotlin programming language is at the core of our development platform, and we keep expanding the scale of Kotlin support for Android apps. We’re collaborating with JetBrains on the new K2 compiler and are actively working on integrating it into our tools, such as Android Studio, Android Lint, KSP, and Compose, while leveraging Google’s large Kotlin codebases to verify the new compiler's compatibility. We now recommend using Kotlin DSL for build scripts. Watch the What’s new in Kotlin for Android talk to learn more.

#11: App Quality Insights now contain Android Vitals reports

Android Studio’s App Quality Insights enables you to access Firebase Crashlytics issue reports directly from the IDE, allowing you to navigate between stack trace and code with a click, use filters to see only the most important issues, and see report details to help you reproduce issues. In the latest release of Android Studio, you can now view important crash reports from Android Vitals, all without adding any additional SDKs or instrumentation to your app. Read more about Android Studio Hedgehog for updates on your favorite Android Studio features.


AND THE LATEST FROM ANDROID & PLAY

#12: What’s new in Play

Get the latest updates from Google Play, including new ways to drive audience growth and monetization. You can now create custom store listings for more user segments including inactive users, and soon for traffic from specific Google Ads campaigns. New listing groups also make it easier to create and maintain multiple listings. Optimize your monetization strategy with price experiments for in-app products and new subscription capabilities that allow you to offer multiple prices per billing period. Learn about these updates and more in our blog post.

#13: Design beautiful Android apps with the new Android UI Design Hub

To make it even easier to build compelling UI across form factors, check out the new Android UI Design Hub: a comprehensive resource for understanding how to create user-friendly interfaces for Android, with guidance covering takeaways, examples, do’s and don’ts, Figma starter kits, UI code samples, and inspirational galleries.

#14: And of course, Android 14!

We just launched Android 14 Beta 2, bringing enhancements to the platform around camera and media, privacy and security, system UI, and developer productivity. Get excited about new features and changes including Health Connect, Ultra HDR for images, and predictive back. On the machine learning side, ML Kit is launching new APIs like face mesh and document scanner, and Acceleration Service in custom ML stack is now in public beta so you can deliver more fluid, lower-latency user experiences. Learn more about Beta 2 and get started by downloading the beta onto a supported device or testing your app in the Emulator.

This was just a small peek at some of the new ways Android is here to support you. Don’t forget to check out the Android track at Google I/O, including some of our favorite talks like how to Reduce reliance on passwords in Android apps with passkey support and Building for the future of Android. The new Activity embedding learning pathway is also now available to enable you to differentiate your apps on tablets, foldables, and ChromeOS devices. Whether you’re joining us online or in-person at one of the events around the world, we hope you have a great Google I/O - and we can’t wait to see the great experiences you build with the updates that are coming out today!

What’s new in Jetpack Compose

Posted by Jolanda Verhoef, Android Developer Relations Engineer

It has been almost two years since we launched the first stable version of Jetpack Compose, and since then, we’ve seen its adoption and feature set grow spectacularly. Whether you write an application for smartphones, foldables, tablets, ChromeOS devices, smartwatches, or TVs, Compose has you covered! We recommend Compose for all new Wear OS, phone, and large-screen apps. With new tooling and library features, extended Material Design 3, large screen, and Wear OS support, and alpha versions of Compose for homescreen widgets and TV, this is an exciting time!

Compose in the community

In the last year, we’ve seen many companies investigating and choosing Compose to build new features and migrate screens in their production applications. 24% of the top 1000 apps on Google Play have already chosen to adopt Compose! For example, Dropbox engineers told us that they rewrote their search experience in Compose in just a few weeks, which was 40% less time than anticipated, and less than half the time it took the team to build the feature on iOS. They also shared that they were interested in adopting Compose “because of its first-class support for design systems and tooling support”. Our Google Drive team cut their development time nearly in half when using Compose combined with architecture improvements.

It’s great to see how these teams experience faster development cycles, and also feel their UI code is more testable. Inspired? Start by reading our guide How to Adopt Compose for your Team, which outlines how and where to start, and shows the areas of development where Compose can bring huge added value.


Library features & development

Since we released the first Compose Bill of Materials in October last year, we’ve been working on new features, bug fixes, performance improvements, and bringing Compose to everywhere you build UI: phones, tablets, foldables, watches, TV, and your home screen. You can find all changes in the May 2023 release and the latest alpha versions of the Compose libraries.

We’ve heard from you that performance is something you care about, and that it’s not always clear how to create performant Compose applications. We’re continuously improving the performance of Compose. For example, as of last October, we started migrating modifiers to a new and more efficient system, and we’re starting to see the results of that migration. For text alone, this work resulted in an average 22% performance gain that can be seen in the latest alpha release, and these improvements apply across the board. To get these benefits in your app, all you have to do is update your Compose version!

Text and TextField got many upgrades in the past months. In addition to the performance improvements mentioned above, Compose now supports the latest emoji version 🫶 and includes new text features such as outlining text, hyphenation support, and configurable line-breaking behavior. Read more in the release notes of the compose-foundation and compose-ui libraries.

The new pager component allows you to flip through content horizontally or vertically, similar to ViewPager2 in Views. It offers deep customization options, making it possible to create visually stunning effects:

Moving image showing the HorizontalPager composable
Choose a song using the HorizontalPager composable. Learn how to implement this and other fancy effects in Rebecca Franks' blog post.
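
For a concrete idea of the API, here is a minimal sketch of a song picker, assuming the Compose Foundation 1.4-era pager signatures (where pageCount is passed directly to the composable); Song and SongCard are hypothetical app types:

    import androidx.compose.foundation.ExperimentalFoundationApi
    import androidx.compose.foundation.pager.HorizontalPager
    import androidx.compose.foundation.pager.rememberPagerState
    import androidx.compose.runtime.Composable

    @OptIn(ExperimentalFoundationApi::class)
    @Composable
    fun SongPager(songs: List<Song>) {
        val pagerState = rememberPagerState()
        // One page per song; horizontal swipes snap between pages.
        HorizontalPager(pageCount = songs.size, state = pagerState) { page ->
            SongCard(song = songs[page]) // hypothetical page content
        }
    }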

The new flow layouts FlowRow and FlowColumn make it easy to arrange content in a vertical or horizontal flow, much like lines of text in a paragraph. They also enable dynamic sizing using weights to distribute the items across the container.

Image of search filters in a real estate app created with flow layouts
Using flow layouts to show the search filters in a real estate app
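
A rough sketch of how such a filter bar can be built with the new layout, assuming the experimental FlowRow in Compose Foundation 1.4 and Material 3's FilterChip; the filter data and callback are hypothetical:

    import androidx.compose.foundation.layout.ExperimentalLayoutApi
    import androidx.compose.foundation.layout.FlowRow
    import androidx.compose.material3.ExperimentalMaterial3Api
    import androidx.compose.material3.FilterChip
    import androidx.compose.material3.Text
    import androidx.compose.runtime.Composable

    @OptIn(ExperimentalLayoutApi::class, ExperimentalMaterial3Api::class)
    @Composable
    fun FilterBar(filters: List<String>, selected: Set<String>, onToggle: (String) -> Unit) {
        // Chips fill the current row and wrap onto the next one when space runs out.
        FlowRow {
            filters.forEach { filter ->
                FilterChip(
                    selected = filter in selected,
                    onClick = { onToggle(filter) },
                    label = { Text(filter) }
                )
            }
        }
    }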

To learn more about the new features, performance improvements, and bug fixes, see the release notes of the latest stable and newest alpha release of the Compose libraries.

Tools

Developing your app using Jetpack Compose is much easier with the new and improved tools around it. We added tons of new features to Android Studio to improve your workflow and efficiency. Here are some highlights:

Android Studio Flamingo is the latest stable release, bringing you:

  • Project templates that use Compose and Material 3 by default, reflecting our recommended practices.
  • Material You dynamic colors in Compose previews to quickly see how your composable responds to differently colored wallpapers on a user device.
  • Compose functions in system traces when you use the System Trace profiler to help you understand which Compose functions are being recomposed.

Android Studio Giraffe is the latest beta release, containing features such as:

  • Live Edit, allowing you to quickly iterate on your code on an emulator or physical device without rebuilding or redeploying your app.
  • Support for new animations APIs in Animation preview so you can debug animations that use animate*AsState, Crossfade, rememberInfiniteTransition, and AnimatedContent.
  • Compose Preview now supports live updates across multiple files; for example, if you make a change in your Theme.kt file, all previews in your UI files update automatically.
  • Improved auto-complete behavior. For example, we now show icon previews when you’re adding Material icons, and we keep the @Composable annotation when running “Implement Members”.

Android Studio Hedgehog contains canary features such as:

  • Showing Compose state information in the debugger. While debugging your app, the debugger will tell you exactly which parameters have “Changed” or have remained “Unchanged”, so you can more efficiently investigate the cause of the recomposition.
  • You can try out the new Studio Bot, an experimental AI powered conversational experience in Android Studio to help you generate code, fix issues, and learn about best practices, including all things Compose. This is an early experiment, but we would love for you to give it a try!
  • Emulator support for the newly announced Pixel Fold and Tablet Virtual Devices, so that you can test your Compose app before these devices launch later this year.
  • A new Espresso Device API that lets you apply rotation changes, folds, and other synchronous configuration changes to your virtual devices under test.

We’re also actively working on visual linting and accessibility checks for previews so you can automatically audit your Compose UI and check for issues across different screen sizes, and on multipreview templates to help you quickly add common sets of previews.

Material 3

Material 3 is the recommended design system for Android apps, and the latest 1.1 stable release adds a lot of great new features. We added new components like bottom sheets, date and time pickers, search bars, tooltips, and others. We also graduated many of the core components to stable, added more motion and interaction support, and included edge-to-edge support in many components. Watch this video to learn how to implement Material You in your app:


Extending Compose to more surfaces

We want Compose to be the programming model for UI wherever you run Android. This means including first-class support for large screens such as foldables and tablets and publishing libraries that make it possible to use Compose to write your homescreen widgets, smartwatch apps, and TV applications.

Large screen support

We’ve continued our efforts to make development for large screens easy when you use Compose. The pager and flow layouts that we released are common patterns on large screen devices. In addition, we added a new Compose library that lets you observe the device’s window size class so you can easily build adaptive UI.
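
As a minimal sketch of what that looks like, assuming the material3-window-size-class artifact; the two layout composables are hypothetical:

    import android.app.Activity
    import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
    import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
    import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
    import androidx.compose.runtime.Composable

    @OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
    @Composable
    fun AdaptiveScreen(activity: Activity) {
        // Recomputed on configuration changes such as rotation or unfolding.
        val windowSizeClass = calculateWindowSizeClass(activity)
        when (windowSizeClass.widthSizeClass) {
            WindowWidthSizeClass.Expanded -> TwoPaneLayout()   // tablets, unfolded foldables
            else -> SinglePaneLayout()                         // phones, folded postures
        }
    }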

When attaching a mouse to an Android device, Compose now correctly changes the mouse cursor to a caret when you hover the cursor over text fields or selectable text. This helps the user to understand what elements on screen they can interact with.

Moving image of Compose adjusting the mouse cursor to a caret when the mouse is hovering over text field

Glance

Today we publish the first beta version of the Jetpack Glance library! Glance lets you develop widgets optimized for Android phone, tablet, and foldable homescreens using Jetpack Compose. The library gives you the latest Android widget improvements out of the box, using Kotlin and Compose:

  • Glance simplifies the implementation of interactive widgets, so you can showcase your app’s top features, right on a user’s home screen.
  • Glance makes it easy to build responsive widgets that look great across form factors.
  • Glance enables faster UI iteration with your designers, ensuring a high-quality user experience.
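
Here is a minimal sketch of a Glance widget, assuming the 1.0-beta API where content is supplied through provideGlance; the widget and receiver names are hypothetical:

    import android.content.Context
    import androidx.glance.GlanceId
    import androidx.glance.appwidget.GlanceAppWidget
    import androidx.glance.appwidget.GlanceAppWidgetReceiver
    import androidx.glance.appwidget.provideContent
    import androidx.glance.text.Text

    class HelloWidget : GlanceAppWidget() {
        override suspend fun provideGlance(context: Context, id: GlanceId) {
            // Composable content that Glance translates into RemoteViews.
            provideContent {
                Text(text = "Hello from the home screen")
            }
        }
    }

    // Registered in the manifest like any other app widget receiver.
    class HelloWidgetReceiver : GlanceAppWidgetReceiver() {
        override val glanceAppWidget: GlanceAppWidget = HelloWidget()
    }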

Wear OS

We launched Compose for Wear OS 1.1 stable last December, and we’re working hard on the new 1.2 release, which is currently in alpha. Here are some highlights of the continuous improvements and new features that we’re bringing to your wrist:

  • The placeholder and placeholderShimmer modifiers add elegant loading animations that can be used on chips and cards while content is loading.
  • expandableItems make it possible to fold long lists or long text, and only expand to show their full length upon user interaction.
  • Rotary input enhancements available in Horologist add intuitive snap and fling behaviors when a user is navigating lists with rotary input.
  • Android Studio now lets you preview multiple watch screen and text sizes while building a Compose app. Use the Annotations that we have added here.

Compose for TV

You can now build pixel-perfect living room experiences with the alpha release of Compose for TV! With the new AndroidX TV library, you can apply all of the benefits of Compose to the unique requirements of Android TV. We worked closely with the community to build an intuitive API with powerful capabilities. Engineers from SoundCloud shared with us that “thanks to Compose for TV, we are able to reuse components and move much faster than the old Leanback View APIs would have ever allowed us to.” And Plex shared that “TV focus and scrolling support on Compose has greatly improved our developer productivity and app performance.”

Compose for TV comes with a variety of components such as ImmersiveList and Carousel that are specifically optimized for the living room experience. With just a few lines of code, you can create great TV UIs.

Moving image of TvLazyGrid on a screen

TvLazyColumn {
    items(contentList) { content ->
        TvLazyRow {
            items(content) { cardItem ->
                Card(cardItem)
            }
        }
    }
}

Learn more about the release in this blog post, check out the “What’s new with TV and intro to Compose” talk, or see the TV documentation!

Compose support in other libraries

It’s great to see more and more internally and externally developed libraries add support for Compose. For example, loading pictures asynchronously can now be done with the GlideImage composable from the Glide library. And Google Maps released a library which makes it much easier to declaratively create your map implementations.

GoogleMap(
    // ...
) {
    Marker(
        state = MarkerState(position = LatLng(-34.0, 151.0)),
        title = "Marker in Sydney"
    )
    Marker(
        state = MarkerState(position = LatLng(35.66, 139.6)),
        title = "Marker in Tokyo"
    )
}

New and updated guidance

No matter where you are in your learning journey, we’ve got you covered! We added and revamped a lot of the guidance on Compose.

Happy Composing!

We hope you're as excited by these developments as we are! If you haven't started yet, it's time to learn Jetpack Compose and see how your team and development process can benefit from it. Get ready for improved velocity and productivity. Happy Composing!

Bringing Kotlin to the Web

Posted by Vivek Sekhar, Product Manager

This post describes early experimental work from JetBrains and Google. You can learn more in the session on WebAssembly at Google I/O 2023.

Application developers want to reach as many users on as many platforms as they can. Until now, that goal has meant building an app on each of Android, iOS and the Web, as well as building the backend servers and infrastructure to power them.

Image showing infrastructure of Web, Android, and iOS Apps in relation to backend servers and programming support - JavaScript, Kotlin, and Swift respectively

To reduce effort, some developers use multiplatform languages and frameworks to develop their app's business logic and UI. Bringing these multiplatform apps to the Web has previously meant "compiling" shared application code to a slower JavaScript version that can run in the browser. Instead, developers often rewrite their apps in JavaScript, or simply direct Web users to download their native mobile apps.

The Web community is developing a better alternative: direct Web support for modern languages thanks to a new technology called WebAssembly GC. This new Web feature allows cross-platform code written in supported languages to run with near-native performance inside all major browsers.

We're excited to roll out experimental support for this new capability on the Web for Kotlin, unlocking new code-sharing opportunities with faster performance for Android and Web developers.


Kotlin Multiplatform Development on the Web

Kotlin is a productive and powerful language used in 95% of the top 1,000 Android apps. Developers say they are more productive and produce fewer bugs after switching to Kotlin.

The Kotlin Multiplatform Mobile and Compose Multiplatform frameworks from JetBrains help developers share code between their Android and iOS apps. These frameworks now offer experimental support for Kotlin compilation to WebAssembly. Early experiments indicate Kotlin code runs up to 2x faster on the Web using WebAssembly instead of JavaScript.


JetBrains shares more details in the release notes for Kotlin 1.8.20, as well as documentation on how you can try Kotlin/Wasm with your app.
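
To give a rough idea of the setup, here is a minimal Gradle sketch assuming Kotlin 1.8.20's experimental wasm target; the DSL may change while the feature is experimental:

    // build.gradle.kts (experimental; the DSL may change)
    plugins {
        kotlin("multiplatform") version "1.8.20"
    }

    kotlin {
        wasm {                     // experimental Kotlin/Wasm target
            binaries.executable()  // produce a runnable Wasm artifact
            browser()              // run and test in a WasmGC-enabled browser
        }
        sourceSets {
            val wasmMain by getting // Wasm-specific code; shared code stays in commonMain
        }
    }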


Pulling it off

Bringing modern mobile languages like Kotlin to the Web required solving challenging technical problems like multi-language garbage collection and JavaScript interoperability. You can learn more in the session on new WebAssembly languages from this year's Google I/O conference.

This work wouldn't have been possible without an open collaboration between browser vendors, academics, and service providers across the Web as part of the W3C WebAssembly Community Group. In the coming weeks, we'll share technical details about this innovative work on the V8 Blog.


Looking ahead: Web and Native Development

For decades, developers have dreamed of the Web as a kind of "universal runtime," while at the same time acknowledging certain feature or performance gaps relative to native platforms. Developers have long had to switch between working on the Web or their native mobile apps.

However, we want to make it possible for you to work on the Web and your native experiences together, not only to help you reduce effort, but also to help you tap into the Web's unique superpowers.

On the open web, your app is just a click away from new users, who can discover it and share it just as easily as they share a web page, with no app stores getting in the way and no revenue split affecting your profitability.

The productivity of cross-platform development, the performance of native mobile apps and the openness of the web. That's why we love WebAssembly.

We can't wait to see what you build next!


"The productivity of cross-platform development, the performance of native mobile apps, and the openness of the Web."

Google I/O 2023: What’s new in Jetpack

Posted by Amanda Alexander, Product Manager, Android

Android Jetpack is a key pillar of Modern Android Development. It is a suite of over 100 libraries, tools, and guidance to help developers follow best practices, reduce boilerplate code, and write code that works consistently across Android versions and devices so that you can focus on building unique features for your app. The majority of apps on Google Play rely on Jetpack; in fact, over 90% of the top 1000 apps use Jetpack.

Below we’ll cover highlights of recent updates in three major areas of Jetpack:

  • Architecture Libraries and Guidance
  • Performance Optimization of Applications
  • User Interface Libraries and Guidance

We’ll then conclude with some additional key updates.


1. Architecture Libraries and Guidance

App architecture libraries and components ensure that apps are robust, testable, and maintainable.

Data Persistence

Most applications need to persist local state - whether it be caching results, managing local lists of user-entered data, or powering data returned in the UI. Room is the recommended data persistence layer, providing an abstraction layer over SQLite that increases usability and safety over using the platform directly.

In Room, we have added many brand-new features, such as the Upsert operation, which inserts an entity when there is no uniqueness conflict and updates it when there is one, and support for using Kotlin value classes with KSP. These new features are available in Room 2.6-alpha; all library sources are written in Kotlin, and both the Java programming language and Kotlin code generation are supported.
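
For example, a DAO using the new operation might look like this minimal sketch, where Song and SongDao are hypothetical app types:

    import androidx.room.Dao
    import androidx.room.Upsert

    @Dao
    interface SongDao {
        // Inserts the song when its key is new; updates the existing row otherwise.
        @Upsert
        suspend fun upsertSong(song: Song)

        @Upsert
        suspend fun upsertSongs(songs: List<Song>)
    }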

Managing tasks with WorkManager

The WorkManager library makes it easy to schedule deferrable, asynchronous tasks that must run reliably, for instance uploading backups or analytics. These APIs let you create a task and hand it off to WorkManager to run when the work constraints are met.

Now, WorkManager allows you to update a WorkRequest after you have already enqueued it. This is often necessary in larger apps that frequently change constraints or need to update their workers on the fly. As of WorkManager 2.8.0, the updateWork() API is the means of doing this without having to go through the process of manually canceling and enqueuing a new WorkRequest. This greatly simplifies the development process.
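
A minimal sketch of the new flow, assuming a hypothetical UploadWorker whose constraints need to loosen after enqueueing; reusing the original request's ID is what tells WorkManager to update rather than duplicate:

    import android.content.Context
    import androidx.work.Constraints
    import androidx.work.NetworkType
    import androidx.work.OneTimeWorkRequestBuilder
    import androidx.work.WorkManager
    import java.util.UUID

    fun relaxUploadConstraints(context: Context, existingId: UUID) {
        val updated = OneTimeWorkRequestBuilder<UploadWorker>()
            .setConstraints(
                Constraints.Builder()
                    .setRequiredNetworkType(NetworkType.CONNECTED) // was UNMETERED
                    .build()
            )
            .setId(existingId) // same ID as the enqueued request, so it is updated in place
            .build()

        WorkManager.getInstance(context).updateWork(updated)
    }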

DataStore

The DataStore library is a robust data storage solution that addresses issues with SharedPreferences and provides a modern coroutines-based API.

In DataStore 1.1 alpha we added a widely requested feature: multi-process support, which allows you to access the DataStore from multiple processes while providing data consistency guarantees between them. Additional features include a new storage interface that enables the underlying storage mechanism for DataStore to be switched out (we have provided implementations for java.io and okio), and support for Kotlin Multiplatform.
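
Creating a multi-process instance is a small change from the single-process setup; here is a minimal sketch assuming the 1.1 alpha factory API, with Settings and SettingsSerializer as hypothetical app-defined types:

    import android.content.Context
    import androidx.datastore.core.DataStore
    import androidx.datastore.core.MultiProcessDataStoreFactory
    import java.io.File

    fun createSettingsStore(context: Context): DataStore<Settings> =
        // Any process can read and write; DataStore coordinates access between them.
        MultiProcessDataStoreFactory.create(
            serializer = SettingsSerializer, // hypothetical Serializer<Settings>
            produceFile = { File(context.filesDir, "settings.pb") }
        )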

Lifecycle management

Lifecycle-aware components perform actions in response to a change in the lifecycle status of another component, such as activities and fragments. These components help you produce better-organized and often lighter-weight code that is easier to maintain.

We released a stable version of Lifecycle 2.6.0 that includes more Compose integration. We added a new extension method on Flow, collectAsStateWithLifecycle(), which collects from a flow and represents its latest value as Compose State in a lifecycle-aware manner. Additionally, a large number of classes have been converted to Kotlin while retaining binary compatibility with previous versions.
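
For instance, collecting a ViewModel flow in a lifecycle-aware way becomes a one-liner; ProfileViewModel, its uiStateFlow, and ProfileContent are hypothetical:

    import androidx.compose.runtime.Composable
    import androidx.compose.runtime.getValue
    import androidx.lifecycle.compose.collectAsStateWithLifecycle

    @Composable
    fun ProfileScreen(viewModel: ProfileViewModel) {
        // Collection pauses when the lifecycle drops below STARTED and resumes automatically.
        val uiState by viewModel.uiStateFlow.collectAsStateWithLifecycle()
        ProfileContent(uiState)
    }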

Predictive Back Gesture

moving image illustrating predictive back texture

In Android 13, we introduced a predictive back gesture for Android devices such as phones, large screens, and foldables. It is part of a multi-year release; when fully implemented, this feature will let users preview the destination or other result of a back gesture before fully completing it, allowing them to decide whether to continue or stay in the current view.

The Activity APIs for Predictive Back for Android are stable, and we have updated the best practices for using the supported system back callbacks: BackHandler (for Compose), OnBackPressedCallback, or OnBackInvokedCallback. We are excited to see Google apps adopt Predictive Back, including Play Store, Calendar, News, and TV!

In the Activity 1.8 alpha releases, the OnBackPressedCallback class now contains new Predictive Back progress callbacks for handling the back gesture starting, progress throughout the gesture, and the back gesture being canceled, in addition to the previous handleOnBackPressed() callback for when the back gesture is committed. We also added ComponentActivity.setUpEdgeToEdge() to easily set up the edge-to-edge display in a backward-compatible manner.
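
A minimal sketch of the new callbacks, assuming the Activity 1.8 alpha API; what each animation does is left hypothetical:

    import androidx.activity.BackEventCompat
    import androidx.activity.OnBackPressedCallback

    val callback = object : OnBackPressedCallback(enabled = true) {
        override fun handleOnBackStarted(backEvent: BackEventCompat) {
            // The back gesture has started: begin the preview animation.
        }

        override fun handleOnBackProgressed(backEvent: BackEventCompat) {
            // backEvent.progress runs from 0 to 1 as the gesture advances.
        }

        override fun handleOnBackCancelled() {
            // The user released without committing: rewind the preview.
        }

        override fun handleOnBackPressed() {
            // The gesture was committed: complete the navigation.
        }
    }
    // Register with onBackPressedDispatcher.addCallback(lifecycleOwner, callback).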

Activity updates for more consistent Photo Picker experience

The Android photo picker is a browsable interface that presents the user with their media library. In Activity 1.7.0, the Photo Picker activity contracts have been updated to contain an additional fallback that allows OEMs and system apps, such as Google Play services, to provide a consistent Photo Picker experience on a wider range of Android devices and API levels by implementing the fallback action. Read more in the Photo Picker Everywhere blog.
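
Using the contracts is unchanged from earlier releases; the fallback comes for free. A minimal sketch (the activity and showPhoto are hypothetical):

    import android.net.Uri
    import androidx.activity.ComponentActivity
    import androidx.activity.result.PickVisualMediaRequest
    import androidx.activity.result.contract.ActivityResultContracts

    class GalleryActivity : ComponentActivity() {
        // Launches the system photo picker where available, or the fallback otherwise.
        private val pickMedia =
            registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri: Uri? ->
                // uri is null if the user backed out of the picker.
                uri?.let(::showPhoto)
            }

        fun pickPhoto() {
            pickMedia.launch(
                PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
            )
        }

        private fun showPhoto(uri: Uri) { /* display the selected photo */ }
    }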

Incremental Data Fetching

The Paging library allows you to load and display small chunks of data to improve network and system resource consumption. App data can be loaded gradually and gracefully within RecyclerViews or Compose lazy lists.

In Paging Compose 1.0.0-alpha19, there is support for all lazy layouts, including custom layouts provided by the Wear and TV libraries. To support more lazy layouts, Paging Compose now provides slightly lower-level extension methods on LazyPagingItems: itemKey and itemContentType. These APIs help you implement the key and contentType parameters of the standard items APIs that already exist for LazyColumn and LazyVerticalGrid, as well as their equivalents in APIs like HorizontalPager. While these changes make the LazyColumn and LazyRow examples a few lines longer, they provide consistency across all lazy layouts.
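
Here is a minimal sketch of the pattern with a LazyColumn, assuming a hypothetical FeedViewModel exposing a Flow of PagingData and a FeedRow composable:

    import androidx.compose.foundation.lazy.LazyColumn
    import androidx.compose.runtime.Composable
    import androidx.paging.compose.collectAsLazyPagingItems
    import androidx.paging.compose.itemContentType
    import androidx.paging.compose.itemKey

    @Composable
    fun FeedList(viewModel: FeedViewModel) {
        val lazyPagingItems = viewModel.pagingFlow.collectAsLazyPagingItems()
        LazyColumn {
            items(
                count = lazyPagingItems.itemCount,
                key = lazyPagingItems.itemKey { it.id },          // stable keys for item reuse
                contentType = lazyPagingItems.itemContentType { "feedItem" }
            ) { index ->
                FeedRow(item = lazyPagingItems[index]) // item may be null while a page loads
            }
        }
    }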


2. Performance Optimization of Applications

Using performance libraries allows you to build performant apps and identify optimizations to maintain high performance, resulting in better end-user experiences.

Improving Start-up Times

Baseline Profiles allow you to partially compile your app at install time to improve runtime and launch performance, and are getting big improvements in new tooling and libraries:

Jetpack provides a new Baseline Profile Gradle Plugin in alpha, which supports AGP 8.0+ and can be easily added to your project in Studio Hedgehog (now in canary). The plugin automates running profile generation, pulling profiles from the device, and integrating them into your build, either periodically or as part of your release process.

The plugin also allows you to easily automate the new Dex Layout Optimization feature in AGP 8.1, which lets you define BaselineProfileRule tests that collect classes used during startup, and move them to the primary dex file in a multidex app to increase locality. In a large app, this can improve cold startup time by 30% on top of Baseline Profiles!
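
A sketch of such a test, assuming the Macrobenchmark 1.2 alpha naming; the package name and user journey are hypothetical:

    import androidx.benchmark.macro.junit4.BaselineProfileRule
    import androidx.test.ext.junit.runners.AndroidJUnit4
    import org.junit.Rule
    import org.junit.Test
    import org.junit.runner.RunWith

    @RunWith(AndroidJUnit4::class)
    class StartupProfileGenerator {
        @get:Rule
        val baselineProfileRule = BaselineProfileRule()

        @Test
        fun generate() = baselineProfileRule.collect(
            packageName = "com.example.app" // hypothetical application ID
        ) {
            // Classes touched during this journey land in the profile (and, with
            // Dex Layout Optimization, in the primary dex file).
            pressHome()
            startActivityAndWait()
        }
    }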

Macrobenchmark 1.2 has shipped a lot of new features in alpha, such as Power metrics and Custom trace metrics, generation of Baseline Profiles without root on Android 13, and recompilation without clearing app data on Android 14.

You can read everything in depth in the blog "What's new in Android Performance".


3. User Interface Libraries and Guidance

Several changes have been made to our UI libraries to provide better support for large-screen compatibility, foldables, and emojis.

Jetpack Compose

Jetpack Compose, Android’s modern toolkit for building native UI, recently had its May 2023 release, which includes new features for text and layouts, continued performance improvements, enhanced tooling support, increased support for large screens, and updated guidance. Read the What’s New in Jetpack Compose I/O blog to learn more.

Glance

The Glance library, now in 1.0-beta, lets you develop app widgets optimized for Android phone, tablet, and foldable homescreens using Jetpack Compose. The library gives you the latest Android widget improvements out of the box, using Kotlin and Compose.

Compose for TV

With the alpha release of the TV library, you can now build experiences for Android TV using components optimized for the living room experience. Compose for TV unlocks all the benefits of Jetpack Compose for your TV apps, allowing you to build apps with less code, easier maintenance and a modern Material 3 look straight out of the box. See the Compose for TV blog for details.

Material 3 for Compose

Material Design 3 is the next evolution of Material Design, enabling you to build expressive, spirited, and personal apps. It is the recommended design system for Android apps, and the 1.1 stable release brings exciting new features such as bottom sheets, date and time pickers, search bars, and tooltips, along with more motion and interaction support. Read more in the release blog.

Understanding Window State

The new WindowManager library helps developers adapt their apps to support multi-window environments and new device form factors by providing a common API surface with support back to API level 14.

In 1.1.0-beta01, new features and capabilities have been added to activity embedding and window layout that enable you to optimize your multi-activity apps for large screens. With the 1.1 release of Jetpack WindowManager, activity embedding APIs are no longer experimental and are recommended for multi-activity applications to provide improved large screen layouts. Check out the What’s new in WindowManager 1.1.0-beta01 blog for details and migration steps.


Other key updates

Kotlin Multiplatform

We continue to experiment with using Kotlin Multiplatform to share business logic between Android and iOS. Collections 1.3.0-alpha03 and DataStore 1.1.0-alpha02 have been updated so you can now use these libraries in KMM projects. If you are using Kotlin Multiplatform in your app, we would like your feedback!

This was a look at all the changes in Jetpack over the past few months to help you build apps more productively. For more details on each Jetpack library, check out the AndroidX release notes, quickly find relevant libraries with the API picker and watch the Google I/O talks for additional highlights.

Java is a trademark or registered trademark of Oracle and/or its affiliates.

Build transformative augmented reality experiences with new ARCore and geospatial features

Posted by Eric Lai, Group Product Manager

With ARCore, Google’s platform for building augmented reality experiences, we continue to enhance the ways we interact with information and experience the people and things around us. ARCore is now available on 1.4 billion Android devices and select features are also available on compatible iOS devices, making it the largest cross-device augmented reality platform.

Last year, we launched the ARCore Geospatial API, which leverages our understanding of the world through Google Maps and helps developers build AR experiences that are more immersive, richer, and more useful. We further engaged with all of you through global hackathons, such as the ARCore Geospatial API Challenge, where we saw high-quality submissions across a range of use cases, including gaming, local discovery, and navigation.

Today, we are introducing new ARCore Geospatial capabilities, including Streetscape Geometry API, Geospatial Depth API, and Scene Semantics API to help you build transformative, world-scale immersive experiences.


Introducing Streetscape Geometry API

With the new Streetscape Geometry API, you can visualize, interact with, and transform building geometry around the user. The Streetscape Geometry API makes it easy for developers to build experiences that interact with real-world geometry, like reskinning buildings, powering more accurate occlusion, or simply placing a virtual asset on a building, by providing a 3D mesh within a 100m radius of the user’s mobile device location.

moving image showing streetscape geometry
Streetscape Geometry API provides a 3D mesh of nearby buildings and terrain geometry

You can use this API to build immersive experiences like transforming building geometry into live plants growing on top of them or using the building geometry as a feature in your game by having virtual balls bounce off and interact with them.

Streetscape Geometry API is available on Android and iOS.
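
As a rough sketch of the flow on Android, assuming the ARCore 1.37-era Kotlin surface (names may shift while the feature is new):

    import com.google.ar.core.*

    // Enable Streetscape Geometry on an existing ARCore session.
    fun enableStreetscape(session: Session) {
        val config = Config(session).apply {
            geospatialMode = Config.GeospatialMode.ENABLED
            streetscapeGeometryMode = Config.StreetscapeGeometryMode.ENABLED
        }
        session.configure(config)
    }

    // Each frame, fetch the building and terrain meshes near the user.
    fun renderStreetscape(session: Session) {
        for (geometry in session.getAllTrackables(StreetscapeGeometry::class.java)) {
            if (geometry.trackingState == TrackingState.TRACKING) {
                val mesh = geometry.mesh     // vertex/index data for rendering or physics
                val pose = geometry.meshPose // where the mesh sits in world space
                // render(mesh, pose)        // hypothetical renderer hook
            }
        }
    }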


Introducing Rooftop Anchors and Geospatial Depth

Previously, we launched Geospatial anchors, which allow developers to place stable anchors at exact locations using latitude, longitude, and altitude. Over the past year, we added Terrain anchors, which are placed on Earth's terrain using only latitude and longitude coordinates, with the altitude calculated automatically.

Today we are introducing a new type of anchor: Rooftop anchors. Rooftop anchors let you anchor digital content securely to building rooftops, respecting the building geometry and the height of buildings.
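
Placing one is a small variation on the existing Geospatial anchor calls; here is a hedged sketch assuming the async Earth API from the same ARCore release, with hypothetical coordinates:

    import com.google.ar.core.*

    fun placeRooftopAnchor(session: Session) {
        val earth = session.earth ?: return   // requires Geospatial mode to be enabled
        earth.resolveAnchorOnRooftopAsync(
            37.4220, -122.0841,               // hypothetical latitude/longitude
            0.5,                              // meters above the rooftop surface
            0f, 0f, 0f, 1f                    // identity rotation quaternion
        ) { anchor, state ->
            if (state == Anchor.RooftopAnchorState.SUCCESS) {
                // Attach your renderable content to `anchor`.
            }
        }
    }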

moving image showing rooftop anchors
Rooftop anchors make it easier to anchor digital content to building rooftops
moving image showing geospatial depth
Geospatial Depth combines real-time depth measurements from the user's device with Streetscape Geometry data to generate a depth map of up to 65 meters

In addition to new anchoring features, we are also leveraging the Streetscape Geometry API to improve one of the most important capabilities in AR: Depth. Depth is critical to enable more realistic occlusion or collision of virtual objects in the real world.

Today, we are launching Geospatial Depth. It combines the mobile device's real-time depth measurements with Streetscape Geometry data, using building and terrain data to improve depth measurements and provide depth for up to 65 meters. With Geospatial Depth you can build increasingly realistic geospatial experiences in the real world.

Rooftop Anchors are available on Android and iOS. Geospatial Depth is available on Android.


Introducing Scene Semantics API

The Scene Semantics API uses AI to provide a class label to every pixel in an outdoor scene, so you can create custom AR experiences based on the features in an area around your user. At launch, twelve class labels are available, including sky, building, tree, road, sidewalk, vehicle, person, water and more.

Scene Semantics API uses AI to provide accurate labels for different features that are present in a scene outdoors

You can use the Scene Semantics API to enable different experiences in your app. For example, you can identify specific scene components, such as roads and sidewalks to help guide a user through the city, people and vehicles to render realistic occlusions, the sky to create a sunset at any time of the day, and buildings to modify their appearance and anchor virtual objects.

The Scene Semantics API is available on Android.
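
A hedged sketch of how an app might query it, assuming the ARCore 1.37-era API names:

    import com.google.ar.core.*

    // Enable Scene Semantics on the session once.
    fun enableSemantics(session: Session) {
        session.configure(session.config.apply { semanticMode = Config.SemanticMode.ENABLED })
    }

    // Then sample labels on each frame of the AR loop.
    fun reactToSky(frame: Frame) {
        // Fraction of the camera image labeled as sky, from 0.0 to 1.0.
        if (frame.getSemanticLabelFraction(SemanticLabel.SKY) > 0.25f) {
            // Enough sky in view to trigger a sunset effect, for example.
        }
        // For per-pixel labels, acquire the semantic image (and always close it).
        frame.acquireSemanticImage().close()
    }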


Mega Golf: The game that brings augmented mini-golf to your neighborhood

To help you get started, we’re also releasing Mega Golf, an open source demo that lets you experience the new APIs in action. In Mega Golf, you use buildings in your city to bounce and propel a golf ball towards a hole while avoiding 3D virtual obstacles. This open source demo is available on GitHub. We're excited to see what you can do with this project.

Mega Golf uses Streetscape Geometry API to transform neighborhoods into a playable mini golf course where players use nearby buildings to bounce and propel a golf ball towards a hole

With these new ARCore capabilities and the new Geospatial Creator in Adobe Aero and Unity, we're making it easier than ever for developers and creators to build realistic augmented reality experiences that delight users and provide utility. Get started today at g.co/ARCore. We’re excited to see what you create when the world is your canvas, playground, gallery, or more!

Create world-scale augmented reality experiences in minutes with Google’s Geospatial Creator

Posted by Stevan Silva, Senior Product Manager

ARCore, our augmented reality developer platform, provides developers and creators alike with simple yet powerful tools to build world-scale and room-scale immersive experiences on 1.4 billion Android devices.

Since last year, we have extended coverage of the ARCore Geospatial API from 87 countries to over 100, powered by Google’s Visual Positioning System and the expansion of Street View coverage, helping developers build and publish more transformative and robust location-based immersive experiences. We continue to push the boundaries with helpful applications and delightful new world-scale use cases, whether it's the innovative hackathon submissions from the ARCore Geospatial API Challenge or our partnership with Gorillaz, where we transformed Times Square and Piccadilly Circus into a music stage to witness Gorillaz play in a larger-than-life immersive performance.

One thing we’ve consistently heard from you over the past year is to broaden access to these powerful resources and ensure anyone can create, visualize, and deploy augmented reality experiences around the world.

Introducing Geospatial Creator


Today, we are launching Geospatial Creator, a tool that helps anyone easily visualize, design, and publish world-anchored immersive content in minutes straight from platforms you already know and love — Unity or Adobe Aero.

Easily visualize, create, and publish augmented reality experiences with Geospatial Creator in Unity (left) and Adobe Aero (right)

Geospatial Creator, powered by ARCore and Photorealistic 3D Tiles from Google Maps Platform, enables developers and creators to easily visualize where in the real-world they want to place their digital content, similar to how Google Earth or Google Street View visualize the world. Geospatial Creator also includes new capabilities, such as Rooftop anchors, to make it even easier to anchor virtual content with the 3D Tiles, saving developers and creators time and effort in the creation process.

These tools help you build world-anchored, cross-platform experiences on supported devices on both Android and iOS. Immersive experiences built in Adobe Aero can be shared via a simple QR code scan or link with no full app download required. Everything you create in Geospatial Creator can be experienced in the physical world through real time localization and real world augmentation.


With Geospatial Creator, developers and creators can now build on top of Photorealistic 3D Tiles from Google Maps Platform (left) which provide real time localization and real time augmentation (right)

When the physical world is augmented with digital content, it redefines the way people play, shop, learn, create, and get information. To give you an idea of what you can achieve with these tools, we’ve been working with partners in gaming, retail, and local discovery, including Gap, Mattel, Global Street Art, Singapore Tourism Board, Gensler, TAITO, and more, to build real-world use cases.

SPACE INVADERS: World Defense immersive game turns the world into a playground

Later this summer you’ll be able to play one of the most acclaimed arcade games in real life, in the real world. To celebrate the 45th anniversary of the original release, TAITO will launch SPACE INVADERS: World Defense. The game, powered by ARCore and Geospatial Creator, is inspired by the original gameplay, where players will have to defend the Earth from SPACE INVADERS in their neighborhood. It combines AR and 3D gameplay to deliver a fully contextual and highly engaging immersive experience that connects multiple generations of players.



Gap and Mattel transform a storefront into an interactive immersive experience

Gap and Mattel will transform the iconic Times Square Gap Store into an interactive Gap x Barbie experience powered by Geospatial Creator in Adobe Aero. Starting May 23, customers will see the store come to life with colors and shapes and be able to interact with Barbie and her friends modeling the new limited edition Gap x Barbie collection of clothing.

moving image of Gap by Mattel

Global Street Art brings street art to a new dimension with AR murals

Google Arts & Culture partnered with Global Street Art and three world-renowned artists to augment physical murals in London (Camille Walala), Mexico City (Edgar Saner), and Los Angeles (Tristan Eaton). The artists used Geospatial Creator in Adobe Aero to create the virtual experience, augmenting physical murals digitally in AR and bringing to life a deeper and richer story about the art pieces.



Singapore Tourism Board creates an immersive guided tour to explore Singapore

Google Partner Innovation team partnered with Singapore Tourism Board to launch a preview of an immersive Singapore guided tour in their VisitSingapore app. Merli, Singapore's tourism mascot, leads visitors on an interactive augmented tour of the city’s iconic landmarks and hidden gems, beginning with the iconic Merlion Park and engaging visitors with an AR symphony performance at Victoria Theatre and Concert Hall. The full guided tour is launching later this summer, and will help visitors discover the best local hawker food, uncover the city's history through scenes from the past, and more.


Gensler helps communities visualize new urban projects

Gensler used Geospatial Creator in Adobe Aero to help communities easily envision what new city projects for the unhoused might look like. The immersive designs of housing projects allow everyone to better visualize the proposed urban changes and their social impact—ultimately bringing suitable shelter to those who need it.

moving image of city projects from Gensler

Geospatial Creator gives anyone the superpower of creating world-scale AR experiences remotely. Both developers and creators can build and publish immersive experiences in minutes in countries where Photorealistic 3D Tiles are available. In just a few clicks, you can create applications that help communities, delight your users, and provide solutions for businesses. Get started today at goo.gle/geospatialcreator. We’re excited to see what you create when the world is your canvas, playground, gallery, or more!

Android Studio @ I/O ‘23: Announcing Studio Bot, an AI-powered coding assistant

Posted by Adarsh Fernando, Senior Product Manager, Android Studio

We first announced Android Studio at I/O 2013 with a promise to deliver a best-in-class integrated development environment (IDE) focused on Android app developers. 10 years later, this commitment to developer productivity still drives the team to deliver new tools and solutions that help teams around the world to create amazing app experiences for their users. And with Google's push to unlock the power of AI to help you throughout your day, Android Studio Hedgehog introduces a key breakthrough: an AI-powered conversational experience designed to make you more productive.

In addition to accelerating coding productivity, this latest version of the IDE provides better tools when you develop for multiple form factors, and helps you improve app quality with new insights, debugging, and testing solutions. All these improvements add to the many updates we’ve included in Android Studio Giraffe, which is now in the Beta channel and helps make it easier to configure your builds with Kotlin DSL support, improve sync times with new data and guidance, target the latest Android SDK version with the new Android SDK Upgrade Assistant, and more.

To see highlights of the new features in action including Studio Bot, watch the What’s new in Android Developer Tools session from Google I/O 2023.

What’s new in Android Development Tools - with Studio Bot Demo

Jump right in and download Android Studio Hedgehog, or learn more about the most exciting new features below.

Coding productivity

Introducing Android Studio Bot

At the heart of our mission is accelerating your ability to write high-quality code for Android. In this release, we are excited to introduce an AI-powered conversational experience called Studio Bot that leverages Codey, Google's foundation model for coding and a descendant of PaLM 2, to help you generate code for your app and make you more productive. You can also ask questions to learn more about Android development or get help fixing errors in your existing code, all without ever having to leave Android Studio. Studio Bot is in its very early days, and we’re training it to become even better at answering your questions and helping you learn best practices. We encourage you to try it out for yourselves, and help it improve by sharing your feedback directly with Studio Bot.

Privacy is top of mind, and what is unique in this integration is that you don’t need to send your source code to Google to use Studio Bot—only the chat dialogue between you and Studio Bot is shared. Much like our work on other AI projects, we stick to a set of principles that hold us accountable. We’re taking a measured approach to our rollout; for this initial launch, Studio Bot is only available to Android developers in the US. You can read more here.

Studio Bot

Live Edit

Live Edit helps keep you in the flow by minimizing interruptions when you make updates to your Compose UI and validates those changes on a running device. You can use it in manual mode to control when the running app should be updated or in automatic mode to update the running app as you make code changes. Live Edit is available in Android Studio Giraffe Beta, with the Hedgehog release providing additional improvements in error handling and reporting.

Moving image showing live edit with Compose
Live Edit with Compose

Build productivity

Kotlin DSL and Version Catalogs

A number of updates help you leverage more modern syntax and conventions when configuring your build. Kotlin is the recommended language when developing for Android. Now, with official support for Kotlin DSL in your Gradle build scripts, it’s also the preferred way to configure your build, because Kotlin is more readable and offers better compile-time checking and IDE support. We’ve also added experimental support for TOML-based Gradle Version Catalogs, a feature that lets you manage dependencies in one central location and share them across modules or projects. Android Studio now makes it easier to configure version catalogs through editor suggestions and integrations with the Project Structure dialog, plus the New Project Wizard.

Screengrab showing Kotlin DSL and Version Catalogs in the New Project Wizard
Kotlin DSL and Version Catalogs in the New Project Wizard
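
As a rough sketch, a catalog can be declared centrally and then referenced with type-safe accessors; the coordinates and versions below are hypothetical:

    // settings.gradle.kts (a catalog can also live in gradle/libs.versions.toml)
    dependencyResolutionManagement {
        versionCatalogs {
            create("libs") {
                version("composeBom", "2023.05.01")
                library("compose-bom", "androidx.compose", "compose-bom")
                    .versionRef("composeBom")
            }
        }
    }

    // app/build.gradle.kts: the alias becomes a type-safe accessor
    dependencies {
        implementation(platform(libs.compose.bom))
    }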

Per-app language preferences

Typically, multilingual users set their system language to one language—such as English—but they want to select other languages for specific apps, such as Dutch, Chinese, or Hindi. Android 13 introduced support for per-app language preferences, and now Android Gradle plugin 8.1 and higher can configure your app to support it automatically. Learn more.
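
A sketch of the opt-in, assuming AGP 8.1's locale config generation (the flag is new and may evolve):

    // app/build.gradle.kts
    android {
        androidResources {
            // Generates the LocaleConfig required for per-app language preferences
            // from the languages your resources actually include.
            generateLocaleConfig = true
        }
    }
    // A default locale is declared in a res/resources.properties file, e.g.:
    // unqualifiedResLocale=en-US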

Download impact during Sync

When using Android Gradle Plugin 7.3 or higher, the Build > Sync tool window now includes a summary of time spent downloading dependencies and a detailed view of downloads per repository, so you can easily determine whether unexpected downloads are impacting build performance. Additionally, it can help you identify inefficiencies in how you configure your repositories. Learn more.

Screengrab of Build Analyzer showing impact of downloads during build
Build Analyzer showing impact of downloads during build

New Android SDK Upgrade Assistant

Android Studio Giraffe introduces the Android SDK Upgrade Assistant, a new tool that helps you upgrade the targetSdkVersion, which is the API level that your app targets. Instead of having to navigate every API change with an Android SDK release, the Android SDK Upgrade Assistant guides you through upgrading targetSdkVersion level by level by creating a customized filter of API changes that are relevant to your app. For each migration step, it highlights the major breaking changes and how to address them, helping you take advantage of what the latest versions of Android have to offer much more quickly. To open the Android SDK Upgrade Assistant, go to Tools > Android SDK Upgrade Assistant. In the Assistant panel, select the API level that you want to upgrade to for guidance.

Upgrade more quickly with the Android SDK Upgrade Assistant

Developing for form factors

Google Pixel Fold and Tablet Virtual Devices

Although these devices won’t launch until later this year, you can start preparing your app to take full advantage of the expanded screen sizes and functionality of these devices by creating virtual devices using new Google Pixel Fold and Google Pixel Tablet device profiles in Android Studio Hedgehog. To start, open Device Manager and select Create Device.

Screengrab of Pixel Tablet running on the Android Emulator
Pixel Tablet running on the Android Emulator

Emulator Support for Wear OS 4 Developer Preview

Wear OS 4 is the next generation of Wear OS. Based on Android 13, it officially launches in the fall and brings a great selection of new features and optimizations. We’re giving you a preview of all the new platform features with the new Wear OS 4 emulator. We recommend trying it with Android Studio Hedgehog to test that your Wear OS app works as intended with the latest platform updates. The Wear OS 4 emulator gives you a faster and smoother transition to Wear OS 4, and helps you get your apps ready in time for the official Wear OS 4 release on real devices. Check out the Wear 4 Preview site to learn how to get started with the new Wear OS 4 emulator.

Watch Face Format support in Wear OS 4 Emulator

Together with Samsung, we’re excited to announce the launch of the Watch Face Format, a new way to build watch faces for Wear OS. The Watch Face Format is a declarative XML format, meaning there will be no code in your watch face APK. The platform takes care of the logic needed to render the watch face, so you no longer have to worry about code optimizations or battery performance. You can design watch faces with creation tools such as Watch Face Studio, or build them directly by editing the Watch Face Format manually or generating it dynamically. You can test the new Watch Face Format on the Wear OS 4 emulator.

Moving image of Watch Face Format Watchface on Wear 4 Emulator
Watch Face Format Watchface on Wear 4 Emulator

Device Mirroring for local devices

Whether you use a direct USB connection or ADB over Wi-Fi, Device Mirroring lets you see and interact with your local physical devices directly within the Android Studio Running Devices window. This feature lets you focus on how you develop and test your app all in one place. With the Hedgehog release, we’re adding more functionality, including the ability to mirror Wear OS devices and simulate folding actions on foldable devices directly from the IDE.

Screengrab showing device mirroring with the Pixel Fold
Device Mirroring with the Pixel Fold

Android Device Streaming

We know sometimes it’s critical for you to see and test how your apps work on physical hardware to ensure that your users have the best experience. However, accessing the latest flagship devices isn’t always easy. Building on top of Device Mirroring for local devices, we’re introducing device streaming of remote physical Google Pixel devices, such as the Pixel Fold and Pixel Tablet, directly within Android Studio. Device streaming will let you deploy your app to these remote devices and interact with them, all without having to leave the IDE. If you’re interested in getting early access later this year, enroll now.

Espresso Device API

Automated testing of your app using Espresso APIs helps you catch potential issues early, before they reach users. However, testing your app across configuration changes, such as rotating or folding a device, has always been a challenge. Espresso Device API is now available to help you write tests that perform synchronous configuration changes when testing on Android virtual devices running API level 24 and higher. You can also set up test filters to ensure that tests that require certain device features, such as a folding action, only run on devices that support them. Learn more.

Example of test code for synchronous device configuration changes using the Espresso Device API
Synchronous device configuration changes using the Espresso Device API
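As a rough sketch of what such a test looks like (the test class, method names, and assertions are hypothetical; the imports reflect the androidx.test.espresso.device artifact):

```kotlin
// A minimal sketch of the Espresso Device API; test names and assertions
// are illustrative assumptions.
import androidx.test.espresso.device.EspressoDevice.Companion.onDevice
import androidx.test.espresso.device.action.ScreenOrientation
import org.junit.Test

class LayoutTest {
    @Test
    fun landscape_showsTwoPaneLayout() {
        // Synchronously rotates the virtual device and waits for the
        // configuration change to settle before the test continues.
        onDevice().setScreenOrientation(ScreenOrientation.LANDSCAPE)
        // ...assert on the landscape layout here...
    }

    @Test
    fun tabletopMode_movesControlsBelowTheFold() {
        // Only meaningful on foldables; pair this with a device-mode test
        // filter so it is skipped on devices that cannot fold.
        onDevice().setTabletopMode()
        // ...assert on the tabletop layout here...
    }
}
```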

Improve your app quality

App Quality Insights with Android vitals

App Quality Insights launched in Android Studio Electric Eel to provide access to Firebase Crashlytics issue reports directly from the IDE. The integration lets you navigate between your stack trace and code with a click, use filters to see only the most important issues, and see report details to help you reproduce issues. In Android Studio Hedgehog, you can now view important crash reports from Android vitals, powered by Google Play. Android vitals reports also include useful insights, such as notes from SDK providers so that you can quickly diagnose and resolve crashes related to SDKs your app might be using.

Screengrab showing Android vitals crash reports in the App Quality Insights window
Android vitals crash reports in the App Quality Insights window

App Quality Insights with improved code navigation

When you publish your app using the latest version of AGP 8.2, crash reports now attach minimal git commit hash data to help Android Studio navigate to your code when investigating Crashlytics crash reports in the IDE. Now, when you view a report that includes the necessary metadata, you can choose to either navigate to the line of code in your current git checkout, or view a diff between the checkout and the version of your codebase that generated the crash. To get started with the right dependencies, see the documentation.

Compose State information in Debugger

When parts of your Compose UI recompose unexpectedly, it can sometimes be difficult to understand why. Now, when setting a breakpoint on a Composable function, the debugger lists the parameters of the composable and their state, so you can more easily identify what changes might have caused the recomposition. For example, when you pause on a composable, the debugger can tell you exactly which parameters have “Changed” or have remained “Unchanged”, so you can more efficiently investigate the cause of the recomposition.

Screengrab showing Compose state information in the debugger
Compose state information in the debugger
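For instance, in a hypothetical composable like the one below, pausing on a breakpoint inside the function lists score and label and marks each as Changed or Unchanged for the recomposition that hit the breakpoint.

```kotlin
// Hypothetical composable for illustration. With a breakpoint on the Text
// line, the debugger lists the parameters `score` and `label` and whether
// each changed since the previous composition.
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable

@Composable
fun ScoreCard(score: Int, label: String) {
    Text(text = "$label: $score")  // breakpoint here pauses on each recomposition
}
```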

New Power Profiler

We are excited to announce a brand new Power Profiler in Android Studio Hedgehog, which shows power consumption on Pixel 6 and higher devices running Android 10 and higher. Data is segmented by sub-system (such as Camera, GPS, and more). This data is made available when recording a System Trace via the Profiler and helps you visually correlate the device's power consumption with the actions happening in your app. For example, you can A/B test multiple algorithms in your video calling app to optimize the power consumed by the camera sensor.

Image of the new power profiler
The new Power Profiler

Device Explorer

The Device File Explorer in Giraffe has been renamed to Device Explorer and updated to include information about debuggable processes running on connected devices. In addition to the Files tab, which retains the existing functionality for exploring a device’s file hierarchy, the new Processes tab lets you view a list of debuggable processes for the connected device. From there you can select a process and perform a Kill process action (which runs am kill), perform a Force stop (which runs am force-stop), or attach the debugger to the selected process.

Image of the Processes tab in the Device Explorer window
Processes tab in the Device Explorer window

Compose animation preview

Compose Animation Preview in Android Studio Hedgehog now supports a number of additional Compose APIs: animate*AsState, CrossFade, rememberInfiniteTransition, and AnimatedContent (in addition to updateTransition and AnimatedVisibility). Compose Animation Preview also has new pickers that let you set non-enum or boolean states, so you can debug your Compose animation using precise inputs. For all supported Compose Animation APIs, you can play, pause, scrub, control speed, and coordinate.

Moving image of Compose Animation preview
Compose Animation Preview
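To see one of the newly supported APIs in the tool, a sketch like the following (composable and preview names are illustrative; the label parameter assumes a recent Compose version) animates a size with animateDpAsState and can be played, paused, and scrubbed from Animation Preview:

```kotlin
// A minimal sketch using animate*AsState, one of the newly supported APIs;
// the composable name and values are illustrative.
import androidx.compose.animation.core.animateDpAsState
import androidx.compose.foundation.background
import androidx.compose.foundation.clickable
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.size
import androidx.compose.runtime.*
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.tooling.preview.Preview
import androidx.compose.ui.unit.dp

@Preview
@Composable
fun GrowingBoxPreview() {
    var expanded by remember { mutableStateOf(false) }
    // The label identifies this animation in tooling such as Animation Preview.
    val boxSize by animateDpAsState(
        targetValue = if (expanded) 96.dp else 48.dp,
        label = "boxSize"
    )
    Box(
        Modifier
            .size(boxSize)
            .background(Color.Blue)
            .clickable { expanded = !expanded }
    )
}
```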

Embedded Layout Inspector

You can now run Layout Inspector directly embedded in the Running Devices window in Android Studio! Try out this feature today in Android Studio Hedgehog to conserve screen real estate and organize your UI debugging workflow in a single tool window. You can access common Layout Inspector features, such as viewing the view hierarchy of your app and inspecting the properties of each view. Additionally, because the embedded Layout Inspector overlays on top of the existing device mirroring stream, overall performance when using the inspector is now much faster. To get started and understand known limitations, read the release notes.

Screengrab of embedded Layout Inspector
Embedded Layout Inspector

Firebase Test Lab support for Gradle Managed Devices

Gradle Managed Devices launched in Android Gradle Plugin (AGP) 7.3 to make it easier to use virtual devices when running automated tests in your continuous integration (CI) infrastructure, by letting Gradle manage all aspects of device provisioning. All you need to do is use the AGP DSL to describe the devices you want Gradle to use. But sometimes you need to run your tests on physical Android devices. With AGP 8.2, we have expanded Gradle Managed Devices with the ability to target real physical (and virtual) devices running in Firebase Test Lab (FTL). This capability makes it easier than ever to test at scale across FTL's large selection of devices with only a few simple steps. Additionally, this version of AGP can take advantage of FTL’s new Smart Sharding capabilities, which gets test results back much more quickly by running tests on multiple devices in parallel. To learn more and get started, read the release notes.

Image of gradle managed devices with support for Firebase Test Lab
Gradle Managed Devices with support for Firebase Test Lab
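For reference, the virtual-device flavor of that DSL (available since AGP 7.3) looks roughly like this; the device name is illustrative, and the FTL-backed devices added in AGP 8.2 are declared through a similar block provided by the Firebase Test Lab Gradle plugin, so check the release notes for the exact syntax.

```kotlin
// build.gradle.kts — a sketch of the Gradle Managed Devices DSL for a
// virtual device; the "pixel2api30" name is illustrative.
android {
    testOptions {
        managedDevices {
            devices {
                maybeCreate<com.android.build.api.dsl.ManagedVirtualDevice>("pixel2api30").apply {
                    device = "Pixel 2"         // device profile to emulate
                    apiLevel = 30              // system image API level
                    systemImageSource = "aosp" // image source for the emulator
                }
            }
        }
    }
}
// Then run tests with: ./gradlew pixel2api30DebugAndroidTest
```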

IntelliJ

IntelliJ Platform Update

Android Studio Hedgehog (2023.1) includes the IntelliJ 2023.1 platform release, which comes with IDE startup performance improvements, faster import of Maven projects, and a more streamlined commit process. Read the IntelliJ release notes here.

New UI

Along with the IntelliJ platform update comes further improvements to the New UI. In large part due to community feedback, there’s a new Compact Mode, which provides a more consolidated look and feel of the IDE, and an option to vertically split the tool window area and conveniently arrange the windows, just like in the old UI. We also improved the Android-specific UI by updating the main toolbar, tool windows, and new iconography. To use the New UI, enable it in Settings > Appearance & Behavior > New UI. For a full list of changes, see the IntelliJ New UI documentation.

Screengrab showing the new UI adopted from IntelliJ
The New UI adopted from IntelliJ

Summary

To recap, Android Studio Giraffe is available in the Beta channel. Android Studio Hedgehog is the latest version of the IDE, available in the Canary channel, and includes all of these new enhancements and features:

Coding productivity

  • Studio Bot, a tightly integrated, AI-powered assistant in Android Studio designed to make you more productive.
  • (Beta) Live Edit, which helps keep you in the flow by minimizing interruptions when you make updates to your Compose UI and validates those changes on a running device.

Build productivity

  • (Beta) Kotlin DSL and Version Catalogs, which helps you take advantage of more modern syntax and conventions when configuring your build.
  • (Beta) Per-app language preferences, built-in support in AGP for automatically configuring per-app language preferences.
  • (Beta) Download impact in Build Analyzer, which provides a summary of time spent downloading dependencies and a detailed view of downloads per repository, so you can easily determine whether unexpected downloads are impacting build performance.
  • (Beta) New Android SDK Upgrade Assistant, which helps you upgrade the targetSdkVersion (the API level that your app targets) much more quickly.

Developing for form factors

  • Google Pixel Fold and Google Pixel Tablet Virtual Devices, which can help you start preparing your app to take full advantage of the expanded screen sizes and functionality of these devices before they are available in stores.
  • Wear OS 4 Developer Preview Emulator, which similarly provides you early access to test and optimize your app against the next generation of Wear OS by Google.
  • Watch Face Format support in Wear OS 4 Developer Preview Emulator, a new way to build watch faces for Wear OS.
  • Device Mirroring for local devices, which lets you see and interact with your local physical devices directly within Android Studio’s Running Devices window.
  • Android Device Streaming, which lets you deploy your app to remote physical Google Pixel devices and interact with them directly from Android Studio; you can register for early access today!
  • Espresso Device API, which helps you write tests that perform synchronous configuration changes when testing on Android virtual devices running API level 24 and higher.

Improve your app quality

  • App Quality Insights: Android vitals, which now lets you view, filter, and navigate important crash reports from Android vitals, powered by Google Play.
  • App Quality Insights with improved code navigation, which lets you now choose to either navigate to the line of code in your current git checkout, or view a diff between the checkout and the version of your codebase that generated the crash.
  • Compose State information in Debugger, which lists the parameters of the composable and their state when paused on a breakpoint in a composable, so you can more easily identify what changes might have caused the recomposition.
  • New Power Profiler, which shows highly accurate power consumption from the device segmented by each sub-system.
  • (Beta) Device Explorer, which now includes information about debuggable processes running on connected devices and actions you can perform on them.
  • (Beta) Compose animation preview, which now supports a number of additional Compose APIs and new pickers that let you set non-enum or boolean states to debug your Compose animation using precise inputs.
  • Embedded Layout Inspector, which runs Layout Inspector directly embedded in the Running Devices window in Android Studio, leading to a more seamless debugging experience and significant performance improvements.
  • Firebase Test Lab support for Gradle Managed Devices, which leverages GMD to help you seamlessly configure Firebase Test Lab devices for your automated testing, and now with additional support for smart sharding.

IntelliJ

  • IntelliJ Platform Update to the IntelliJ 2023.1 platform release, which includes a number of performance and quality of life improvements.
  • New UI update that allows Android Studio to adopt a number of improvements to IntelliJ’s modern design language.

See the Android Studio Preview release notes and the Android Emulator release notes for more details.


Download Android Studio Today!

You can download Android Studio Hedgehog Canary or Android Studio Giraffe Beta today to incorporate the new features into your workflow. You can install them side by side with a stable version of Android Studio by following these instructions. The Beta release is near stable quality, but bugs might still exist, and the Canary build showcases leading-edge features. As always, we appreciate any feedback on things you like or features you would like to see. If you find a bug, please report the issue and also check out known issues. Remember to follow us on Twitter, Medium, or YouTube for more Android development updates!

What’s new in Android Health

Posted by Sara Hamilton, Developer Relations Engineer

Health and fitness data is interconnected – sleep, nutrition, workouts and more all inform one another. For example, consider that your sleep impacts your recovery, which impacts your readiness to run your favorite 5k. Over time, your recovery and workout habits drive metrics like heart rate variability, resting heart rate, VO2Max and more! Often this data exists in silos, making it hard for users to get a holistic view of their health data.

We want to make it simple for people to use their favorite apps and devices to track their health by bringing this data together. They should have full control of what data they share, and when they share it. And, we want to make sure developers can enable this with less complexity and fewer lines of code.

This is why we’ve continued to improve our Android Health offerings, and why today at I/O 2023, we’re announcing key updates across both Health Connect and Health Services for app developers and users.

What is Android Health?

Android Health brings together two important platforms for developers to deliver robust health and fitness apps to users: Health Connect and Health Services.

Health Connect is an on-device data store that provides APIs for storing and sharing health and fitness data between Android apps. Before Health Connect, there was not a consistent way for developers to share data across Android apps. They had to integrate with many different APIs, each with a different set of data types and different permissions management frameworks.

Now, with Health Connect, there is less fragmentation. Health Connect provides a consistent set of 40+ data types and a single permissions management framework for users to control data permissions. This means that developers can share data with less effort, enabling people to access their health data in their favorite apps, and have more control over data permissions.

Screenshot of permissions via Health Connect
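As a sketch of what that single API surface looks like with the androidx.health.connect client library (the time window and function name are illustrative, and the READ_STEPS permission is assumed to be granted):

```kotlin
// A minimal sketch of reading steps through Health Connect; assumes the
// user has already granted read permission for steps.
import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.records.StepsRecord
import androidx.health.connect.client.request.ReadRecordsRequest
import androidx.health.connect.client.time.TimeRangeFilter
import java.time.Instant
import java.time.temporal.ChronoUnit

suspend fun readRecentSteps(client: HealthConnectClient): Long {
    val response = client.readRecords(
        ReadRecordsRequest(
            recordType = StepsRecord::class,
            timeRangeFilter = TimeRangeFilter.between(
                Instant.now().minus(1, ChronoUnit.DAYS),
                Instant.now()
            )
        )
    )
    // Sum step counts written by every app in the time window.
    return response.records.sumOf { it.count }
}
```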

Health Services is our API surface for accessing sensor data on Wear OS devices in a power-efficient way. Before Health Services, developers had to work directly with low-level sensors, which required different configurations on different devices, and was not battery-efficient.

With Health Services, there is now a consistent API surface across all Wear OS 3+ devices, allowing developers to write code once and run it across all devices. And, the Health Services architecture means that developers get great power savings in the process, allowing people to track longer workouts.
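A sketch of that write-once surface, using MeasureClient to stream heart rate (the logging stands in for real handling, and the BODY_SENSORS permission is assumed to be granted):

```kotlin
// A minimal sketch of Health Services' MeasureClient on Wear OS 3+.
import android.content.Context
import android.util.Log
import androidx.health.services.client.HealthServices
import androidx.health.services.client.MeasureCallback
import androidx.health.services.client.data.Availability
import androidx.health.services.client.data.DataPointContainer
import androidx.health.services.client.data.DataType
import androidx.health.services.client.data.DeltaDataType

fun startHeartRateStream(context: Context) {
    val measureClient = HealthServices.getClient(context).measureClient

    val callback = object : MeasureCallback {
        override fun onAvailabilityChanged(
            dataType: DeltaDataType<*, *>,
            availability: Availability
        ) {
            Log.d("HeartRate", "Availability changed: $availability")
        }

        override fun onDataReceived(data: DataPointContainer) {
            val bpm = data.getData(DataType.HEART_RATE_BPM).lastOrNull()?.value
            Log.d("HeartRate", "Heart rate: $bpm bpm")
        }
    }

    // Health Services batches and delivers sensor data power-efficiently,
    // so the same code runs across all Wear OS 3+ devices.
    measureClient.registerMeasureCallback(DataType.HEART_RATE_BPM, callback)
}
```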

Health Connect is coming to Android 14 with new features

Health Connect and Android 14 logos with an X between them to indicate collaboration

Health Connect is currently available for download as an app on the Play Store. We are excited to announce that starting with the release of Android 14 later this year, Health Connect will be a core part of Android and available on all Android mobile devices. Users will be able to access Health Connect directly from Settings on their device, helping to control how their health data is shared across apps.

Screenshot showing Health Connect available in the privacy settings of an Android device

Several new features will be shipped with the Health Connect Android 14 release. We’re adding a new exercise routes feature to allow users to share maps of their workouts through Health Connect. We’ve also made improvements to make it easier for people to log their menstrual cycles. And, Health Connect updates will be delivered through Google Play System Updates, which will allow new features to be updated often.

Health Services now supports more use cases with new API capabilities

We’ve released several exciting changes to Health Services this year to support more use cases. Our new Batching Modes feature allows developers to adjust the delivery frequency of heart rate data to support home gym use cases. We’ve also added new API capabilities, like golf shot detection.

The new version of Wear OS arrives later this year. Wear OS 4 will be the most performant yet, delivering improved battery life for the next generation of Wear OS watches. We will be releasing additional Health Services updates with this change, including improved background body sensor permissions.

Our developer ecosystem is growing

There are over 50 apps already integrated with Health Connect and hundreds of apps with Health Services, including Peloton, Withings, Oura, and more. These apps are using Health Connect to incorporate new data and give people an interconnected health experience, without building out many new API integrations. Learn more about how these health and fitness apps are creating new experiences for users in areas like sleep, exercise, nutrition, and more in our I/O technical session.

We also have over 100 apps integrated with Health Services. Apps using Health Services are seeing higher engagement from users with their Wear apps, and are giving their users longer battery life in the process. For example, Strava found that users with their Wear app did 25% more activities than those without.

Get started with Health Connect

We hope many more developers will join us in bringing unique experiences within Android Health to your users this year.

If you’d like to create a more interconnected health experience for your users, we encourage you to integrate with Health Connect. And if you are a Wear developer, make sure you are using Health Services to get the best battery performance and future proofing for all upcoming Wear OS devices.

Check out our Health Services documentation, Health Connect documentation, and code samples to get started!

To learn more, watch the I/O session:

Price in-app products with confidence by running price experiments in Play Console

Posted by Phalene Gowling, Product Manager, Google Play

At this year’s Google I/O, our “Boost your revenue with Play Commerce” session highlights the newest monetization tools that are deeply integrated into Google Play, with a focus on helping you optimize your pricing strategy. Pricing your products or content correctly is foundational to driving better user lifetime value and can result in reaching new buyers, improving conversion, and encouraging repeat orders. It can be the difference between a successful sale and pricing yourself out of one, or even undervaluing your products and missing out on key sales opportunities.

To help you price with confidence, we’re excited to announce price experiments for in-app products in Play Console, allowing you to test price points and optimize for local purchasing power at scale. Price experiments will launch in the coming weeks, so read on to get the details on the new tool and learn how you can prepare to take full advantage when it's live.

  • A/B test to find optimal local pricing that’s sensitive to the purchasing power of buyers in different markets. Adjusting prices to local markets is already an industry-wide practice among developers, and at launch you will be able to test and manage your global prices, all within Play Console. An optimized price helps reach both new and existing buyers who may have previously been priced out of monetized experiences in apps and games. It can also help increase repeat purchases by buyers of their favorite products.
  • Image of two mobile devices showing A/B price testing in Google Play Console
    Illustrative example only. A/B test price points with ease in Play Console 
  • Experiment with statistical confidence: price experiments enable you to track how close you are to statistical significance with confidence interval tracking, or, for a quick summary, you can view the result at the top of the analysis once enough data has been collected to determine a statistically significant outcome. To make it easier to decide whether to apply the ‘winning’ price, we’ve also included support for tracking key monetization metrics such as revenue uplift, revenue derived from new installers, buyer ratio, orders, and average revenue per paying user. This gives you a more detailed understanding of how buyers behave differently in each experiment arm per market, and can inspire further refinements toward a robust global monetization strategy.
  • Improve return on investment in user acquisition. A localized price and a better understanding of buyer behavior in each market allow you to optimize your user acquisition strategy, knowing how buyers will react to market-specific products or content. It can also inform which products you choose to feature on Google Play.

Set up price experiments in minutes in Play Console

Price experiments will be easy to run with the new dedicated section in Play Console under Monetize > Products > Price experiments. You’ll first need to determine the in-app products, markets, and the price points you’d like to test. The intuitive interface will also allow you to refine the experiment settings by audience, confidence level and sensitivity. And once your experiment has reached statistical significance, simply apply the winning price to your selected products within the tool to automatically populate your new default price point for your experiment markets and products. You also have the flexibility to stop any experiment before it reaches statistical significance if needed.

You’ll have full control of what and how you want to test, reducing any overhead of managing tests independently or with external tools – all without requiring any coding changes.

Learn how to run an effective experiment with Play Academy

Get Started

You can start preparing now by strategizing what type of price experiment you might want to run first. For a metric-driven source of inspiration, game developers can explore strategic guidance, which can identify country-specific opportunities for buyer conversion. Alternatively, start building expertise in running effective pricing experiments for in-app products by taking our new Play Academy course, in preparation for price experiments rolling out in the coming weeks.