
Deep dive into Live Edit for Jetpack Compose UI

Posted by Alan Leung, Staff Software Engineer; Fabien Sanglard, Senior Software Engineer; and Juan Sebastian Oviedo, Senior Product Manager
A close-up look at how the Android Studio team built Live Edit, a feature that accelerates the Compose development process by continuously updating the running application as code changes are made.

What’s Live Edit and how can it help me?

Live Edit introduces a new way to edit your app’s Jetpack Compose UI by instantly deploying code changes to the running application on a physical device or emulator. This means that you can make changes to your app’s UI and immediately see their effect on the running application, enabling you to iterate faster and be more productive. Live Edit was recently released to the stable channel with Android Studio Giraffe and can be enabled in the Editor settings. Developers like Plex and Pocket Casts are already using Live Edit, and it has accelerated their Compose UI development. It is also helping them migrate from XML views to Compose.


Live Edit in action on Android Studio Hedgehog

When should I use Live Edit?

Live Edit is a different feature from Compose Preview and Apply Changes. These features provide value in different ways:

Feature: Live Edit (Kotlin only; supports live recomposition)
Description: Make changes to your Compose app’s UI and immediately see their effect on the running application on an emulator or physical device.
When should I use it? Quickly see the effect of updates to UX elements (such as modifier updates and animations) on the overall app experience while the application is running.

Feature: Compose Preview (Compose only)
Description: Visualize Compose elements in the Design tab within Android Studio and see them automatically refresh as you make code changes.
When should I use it? Preview individual Compose elements in one or many different configurations and states, such as dark theme, locales, and font scale.

Feature: Apply Changes
Description: Deploy code and resource updates to a running app without restarting it—and, in some cases, without restarting the current activity.
When should I use it? Update code and resources in a non-Compose app without having to redeploy it to an emulator or physical device.

How does it work?

At a high level, Live Edit does the following:

  1. Detects source code changes.
  2. Compiles classes that were updated.
  3. Pushes new classes to the device.
  4. Adds a hook in each class method bytecode to redirect calls to the new bytecode.
  5. Edits the app classpath to ensure changes persist even if the app is restarted.

Diagram 1: Live Edit architecture

Keystroke detection

This step is handled via the IntelliJ IDEA Program Structure Interface (PSI) tree. Listeners allow Live Edit to detect the moment a developer makes a change in the Android Studio editor.
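As a rough illustration, a PSI listener in the IntelliJ Platform looks like the following sketch. This is not Live Edit's actual listener; the logging body is purely illustrative.

import com.intellij.openapi.project.Project
import com.intellij.psi.PsiManager
import com.intellij.psi.PsiTreeChangeAdapter
import com.intellij.psi.PsiTreeChangeEvent

// A minimal sketch of the IntelliJ Platform PSI listener API (not Live
// Edit's actual implementation): PSI tree change events fire as the
// developer types in the editor.
fun listenForEdits(project: Project) {
    PsiManager.getInstance(project).addPsiTreeChangeListener(
        object : PsiTreeChangeAdapter() {
            override fun childrenChanged(event: PsiTreeChangeEvent) {
                // Live Edit would queue an incremental recompile of the
                // changed file here; we just log it for illustration.
                event.file?.let { println("Edited: ${it.name}") }
            }
        },
        project // Disposable that bounds the listener's lifetime
    )
}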

Compilation

Fundamentally, Live Edit still relies on the Kotlin compiler to generate code for each incremental change.

Our goal was to create a system where there is less than 250ms latency between the last keystroke and the moment the recomposition happens on the device. Doing a typical incremental build or invoking an external compiler in a traditional sense would not achieve our performance requirement. Instead, Live Edit leverages Android Studio’s tight integration with the Kotlin compiler.

At the highest level, Kotlin compilation can be divided into two stages:

  • Analysis
  • Code generation

The analysis performed in the first stage is not restricted to the build process. In fact, the same step is frequently performed outside the build system, as part of the IDE. From basic syntax checking to auto-complete suggestions, the IDE constantly runs this analysis (Step 1 of Diagram 1) and caches the result to provide Kotlin- and Compose-specific functionality to the developer. Our experiments show that the majority of compilation time during a build is spent in this analysis stage. Live Edit leverages that cached analysis when invoking the Compose compiler, which allows compilation to finish within 200ms on a typical developer laptop. Live Edit further optimizes code generation by producing only the code necessary to update the application.

The result is a plain .class file (not a .dex file) that is passed to the next step in the pipeline, desugaring.

How to desugar

When Android app source code is processed by the build system, it is usually “desugared” after it is compiled. This transformation step lets an app run on older Android versions that lack support for newer syntactic sugar and recent API features. This allows developers to use new APIs in their apps while still making them available to devices running older versions of Android.

There are two kinds of desugaring, known as language desugaring and library desugaring. Both of these transformations are performed by R8. To make sure the injected bytecode will match what is currently running on the device, Live Edit must make sure each class file is desugared in a way that is compatible with the desugaring done by the build system.

Language desugaring:

This type of bytecode rewriting provides newer language features on devices with lower targeted API levels. The goal is to support language features such as default interface methods, lambda expressions, and method references all the way down to the app's min API level. That value is extracted from the .apk file's DEX files using markers left there by R8.

API desugaring:

Also known as library desugaring, this form of desugaring aims to support Java SDK methods and classes; it is configured by a JSON file. Among other things, method call sites are rewritten to target functions located in the desugar library (which is also embedded in the app, in a DEX file). To perform this step, Gradle collaborates with Live Edit by providing the JSON file used during library desugaring.
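For context, here is a sketch of how a typical app opts into both flavors of desugaring in its build script (Kotlin DSL; the desugar_jdk_libs version is illustrative). This is the build-system configuration Live Edit must stay compatible with, not Live Edit's own code.

// build.gradle.kts — a sketch of a typical app's desugaring setup.
android {
    compileOptions {
        // Language desugaring: newer Java language features rewritten
        // for lower API levels.
        sourceCompatibility = JavaVersion.VERSION_11
        targetCompatibility = JavaVersion.VERSION_11
        // Library (API) desugaring: backported java.* APIs, driven by
        // R8's JSON configuration.
        isCoreLibraryDesugaringEnabled = true
    }
}

dependencies {
    // The desugar library whose functions rewritten call sites target;
    // the version shown is illustrative.
    coreLibraryDesugaring("com.android.tools:desugar_jdk_libs:2.0.4")
}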

Function trampoline

To facilitate rapid, per-keystroke updates to a running application, we decided not to invoke the JVMTI code-swap capability of the Android Runtime (ART) for every single edit. Instead, JVMTI is used only once, to perform a code swap that installs trampolines onto a subset of methods within the soon-to-be-modified classes inside the VM. Using a mechanism we call the “Primer” (Step 3 of Diagram 1), invocations of these methods are redirected to a specialized interpreter. When the application has not seen updates for a period of time, Live Edit replaces the interpreted code with traditional DEX code to regain ART's full performance.

Function trampoline process
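To make the control flow concrete, here is a purely conceptual Kotlin sketch. The real trampoline is installed at the bytecode level via JVMTI; every name below is hypothetical and exists only for illustration.

// Conceptual sketch only — not the actual ART/JVMTI mechanism.
object LiveEditHooks {
    // Edited method bodies pushed from the IDE, keyed by method name.
    val updatedBodies = mutableMapOf<String, () -> Any?>()
}

fun greeting(): String {
    // Trampoline: if an edited body exists, divert to it (the
    // interpreter path in the real system)...
    LiveEditHooks.updatedBodies["greeting"]?.let { body ->
        return body() as String
    }
    // ...otherwise fall through to the original compiled code.
    return "Hello"
}

fun main() {
    println(greeting())                                    // "Hello"
    LiveEditHooks.updatedBodies["greeting"] = { "Hi there" }
    println(greeting())                                    // "Hi there"
}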

How code is interpreted

Live Edit compiles code on the fly. The resulting .class files are pushed to the device, trampolined (as previously described), and then interpreted. This interpretation is performed by the LiveEditInterpreter. The interpreter is not a full VM inside ART; it is a frame interpreter built on top of ASM's Frame. ASM's Frame handles low-level logistics such as pushing and loading stack and local variables, but it needs an Interpreter to actually execute opcodes. This is what the OpcodeInterpreter is for.

Live Edit interpretation flow

The Live Edit interpreter is a simple loop that drives opcode interpretation through ASM's Interpreter.
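For readers curious about the underlying machinery, here is a sketch using ASM's analysis package (org.ow2.asm:asm-analysis), not Live Edit's own code: the Analyzer walks a method instruction by instruction while an Interpreter subclass supplies per-opcode semantics, which is the role the post ascribes to the OpcodeInterpreter.

import org.objectweb.asm.ClassReader
import org.objectweb.asm.tree.ClassNode
import org.objectweb.asm.tree.analysis.Analyzer
import org.objectweb.asm.tree.analysis.BasicInterpreter

// A sketch of ASM's Frame/Interpreter pairing: Analyzer drives frame
// computation; the Interpreter (BasicInterpreter here, a custom subclass
// in a real frame interpreter) defines what each opcode does.
fun analyzeFrames(classBytes: ByteArray) {
    val classNode = ClassNode().also { ClassReader(classBytes).accept(it, 0) }
    for (method in classNode.methods) {
        val frames = Analyzer(BasicInterpreter()).analyze(classNode.name, method)
        println("${method.name}: ${frames.size} frames computed")
    }
}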

Some JVM instructions cannot be implemented in a pure Java interpreter (invokespecial and monitorenter/monitorexit are particularly problematic). For these, Live Edit uses JNI.

Dealing with lambdas

Lambdas are handled differently because a change to a lambda's captures can change the generated class in ways that alter its method signatures. Instead of redefining existing loaded classes as described in the previous section, lambda-related updates are sent to the running device and loaded as new classes.
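The following plain-Kotlin sketch (illustrative only, not Live Edit internals) shows why captures matter: each lambda compiles to a synthetic class whose constructor takes exactly what it captures, so editing the captures changes that class's shape.

// Captures only `name`: the synthesized lambda class holds one String.
fun makeLogger(name: String, count: Int): () -> Unit {
    return { println(name) }
    // An edit that also prints `count` would capture (name, count) and
    // synthesize a class with a different constructor signature:
    // return { println("$name x $count") }
}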

How does recomposition work?

Developers wanted a seamless, frictionless new approach to programming Android applications. A key part of the Live Edit experience is the ability to see the application update while the developer continuously writes code, without having to explicitly trigger a re-run with a button press. We needed a UI framework that can listen to model changes within the application and perform optimal redraws accordingly. Luckily, Jetpack Compose fits this task perfectly. With Live Edit, we added an extra dimension to the reactive programming paradigm: the framework also observes changes to the functions’ code.
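For readers new to Compose, here is a minimal sketch of the existing reactive model; the composable is our own example, not code from Live Edit. The runtime already observes state reads and recomposes when they change; Live Edit adds the dimension of also reacting when a function's code changes.

import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.*

// A minimal sketch of Compose's reactive model.
@Composable
fun Counter() {
    var count by remember { mutableStateOf(0) }
    Button(onClick = { count++ }) {
        Text("Clicked $count times") // recomposed whenever `count` changes
    }
}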

To facilitate code modification monitoring, the Jetpack Compose compiler supplies Android Studio with a mapping of function elements to a set of recomposition group keys. The attached JVMTI agent invalidates the Compose state of a changed function in an asynchronous manner and the Compose runtime performs recomposition on Composables that are invalidated.

How we handle runtime errors during recomposition

Live Edit handling a runtime error

While the concept of a continuously updating application is rather exhilarating, our field studies showed that while developers are writing code, the program can be in an incomplete state where updating and re-executing certain functions would lead to undesirable results. Besides the automatic mode, where updates happen almost continuously, we have introduced two manual modes for developers who want a bit more control over when the application gets updated after new code is detected.

Even with that in mind, we want to make sure common issues caused by executing incomplete functions do not cause the application to terminate prematurely. Cases where a loop’s exit condition is still being written are detected by Live Edit to avoid an infinite loop within the program. Also, if a Live Edit update triggers recomposition and causes a runtime exception to be thrown, the Compose runtime will catch such an exception and recompose using the last known good state.

Consider the following piece of code:

var x = y / 10

Suppose the developer wants to change 10 to 50 by deleting the character 1 and then inserting 5. Android Studio could potentially update the application before the 5 is inserted, creating a division-by-zero ArithmeticException. However, with the error handling described above, the application simply reverts to “y / 10” until further updates arrive from the editor.

What’s coming?

The Android Studio team believes Live Edit will change how UI code is written in a positive way, and we are committed to continuously improving the Live Edit development experience. We are working on expanding the types of edits developers can perform. Furthermore, future versions of Live Edit will eliminate the need to invalidate the whole application in certain scenarios.

Additionally, PSI event detection comes with limitations such as when the user edits import statements. To solve this problem, future versions of Live Edit will rely on .class diffing to detect changes. Lastly, the full persisting functionality isn't currently available. Future versions of Live Edit will allow the application to be restarted outside of Android Studio and retain the Live Edit changes.

Get started with Live Edit

Live Edit is ready to be used in production, and we hope it can greatly improve your experience developing for Android, especially for UI-heavy iterations. We would love to hear about your use cases and best practices, as well as your bug reports and suggestions.

Java is a trademark or registered trademark of Oracle and/or its affiliates.

Google I/O 2023 recap: Updates across mobile, web, AI, and cloud

Posted by Jeanine Banks, VP & General Manager of Developer X & Head of Developer Relations

Thank you for another great Google I/O! We’re continuing to make deep investments across AI, mobile, web, and the cloud to make your life easier as a developer. Today you saw many of the ways we’re using generative AI to improve our products. We’re excited about the opportunities these tools can unlock and to see what you build. From simplifying your end-to-end workflows to improving productivity, catch up on the key announcements below.


AI

Making it possible for everyone to build AI-powered products in the most productive and responsible way.

PaLM API and MakerSuite
Build generative AI applications with access to Google’s state-of-the-art large language model through the PaLM API. Quickly create and prototype prompts directly in your browser with MakerSuite — no machine learning expertise or coding required. 
Firebase AI extensions
Developers can now access the PaLM API with Firebase Extensions. The new Chatbot with PaLM API extension allows you to add a chat interface for continuous dialog, text summarization, and more.
MediaPipe Studio and solutions
MediaPipe is an open source cross-platform framework for building machine learning solutions on mobile, desktop, and the web. You can try nine new solutions, like a face landmarker, running locally on-device in the browser with MediaPipe Studio. 
Tools across your workflow
From datasets and pre-trained models with Kaggle to easy-to-use modular libraries for computer vision and natural language processing with KerasCV and KerasNLP, we’re proud to power end-to-end experiences with a diverse set of tools across your workflow.


Mobile

Increase productivity with the power of AI, build for a multi-device world, and do more faster with Modern Android Development.

Studio Bot
We’re introducing Studio Bot, an AI-powered conversational experience in Android Studio which makes you more productive. This is an early experiment that helps you write and debug code, and answers your Android development questions.
Going big on Android foldables & tablets
With two new Android devices coming from Pixel, the Pixel Fold and the Pixel Tablet, Google and our partners are all in on large screens. It's a great time to invest, with improved tools and guidance like the new Pixel Fold and Pixel Tablet emulator configurations in Android Studio Hedgehog Canary 3, expanded Material Design updates, and inspiration for gaming and creativity apps.
Wear OS: Watch faces, Wear OS 4, & Tiles animations
Wear OS active devices have grown 5x since launching Wear OS 3, so there’s more reason to build a great app experience for the wrist. To help you on your way, we announced the new Watch Face Format, a new declarative XML format built in partnership with Samsung to help you bring your great idea to the watch face market.
Modern Android Development
Several updates to Jetpack Compose make it easier to build rich UIs across more surfaces like Compose for TV in alpha and screen widgets with Glance, now in beta. Meanwhile, the new features in Android Studio help you stay productive, including added functionality in App Quality Insights and more.
Flutter 3.10
Tap into Impeller for enhanced graphics performance. The latest version of Flutter now includes a JNI bridge to Jetpack libraries written in Kotlin, enabling you to call a new Jetpack library directly from Dart without needing an external plugin.
Geospatial Creator
Easily design and publish AR content with the new Geospatial Creator powered by ARCore and 3D maps from Google Maps Platform. Geospatial Creator is available in Unity or Adobe Aero.
 

Web

Experience a more powerful and open web, made easier and AI-ready.

WebAssembly (aka WASM) - managed memory language support
WASM now supports Kotlin and Dart, extending to Android and Flutter developers its benefit of reaching new users on the web with native performance while reusing existing code.
WebGPU
This newly available API unlocks the power of GPU hardware and makes the web AI-ready. Save money, increase speed, and build privacy-preserving AI features with access to on-device computing power.
Support for web frameworks
Chrome DevTools has improved debugging for various frameworks. Firebase Hosting is also expanding experimental support to Nuxt, Flutter, and many more. Angular v16 includes better server-side rendering, hydration, Signals, and more. Lastly, Flutter 3.10 reduces load time for web apps and integrates with existing web components.
Baseline
We introduced Baseline, a stable and predictable view of the web, alongside browser vendors in the W3C and framework providers. Baseline captures an evergreen set of cross-browser features and will be updated every year.
 

Cloud

New generative AI cloud capabilities open the door for developers with all different skill levels to build enterprise-ready applications.

Duet AI
Duet AI is a new generative AI-powered interface that acts as your expert pair programmer, providing assistance within Cloud Workstations, Cloud Console, and Chat. It will also allow you to call Google trained models and custom code models, trained directly on your code.
Vertex AI
Vertex AI lets you tune, customize, and deploy foundation models with simple prompts, no ML expertise required. Now you can access foundation models like Imagen 2, our text-to-image foundation model, with enterprise-grade security and governance controls.
Text Embeddings API
This new API endpoint lets developers build recommendation engines, classifiers, question-answering systems, similarity matching, and other sophisticated applications based on semantic understanding of text or images.
Workspace additions
New Chat APIs in Google Workspace will help you build apps that provide link previews and let users create or update records, generally available in the coming weeks. And coming to Preview this summer, new Google Meet APIs and two new SDKs will enable Google Meet and its data capabilities in your apps.
 

And that’s a wrap

These are just a few highlights of a number of new tools and technologies we announced today to help developers more easily harness the power of AI, and to more easily create applications for a variety of form factors and platforms. And we’re not done yet. Visit the Google I/O website to find over 200 sessions and other learning material, and connect with Googlers and fellow developers in I/O Adventure Chat.

We’re also excited to come to you with four Google I/O Connect events, which will bring Google experts and developers together for hands-on demos, code labs, office hours, and more. In addition, you can join one of the more than 250 I/O Extended meetups taking place across the globe over the next few months. We can’t wait to see what you will build next!

Get ready for Google I/O

Posted by Timothy Jordan, Director, Developer Relations & Open Source

I/O is just a few days away and we couldn’t be more excited to share the latest updates across Google’s developer products, solutions, and technologies. From keynotes to technical sessions and hands-on workshops, these announcements aim to help you build smarter and ship faster.

Here are some helpful tips to maximize your experience online.


Start building your personal I/O agenda

Starting now, you can save the Google and developer keynotes to your calendar and explore the program to preview content. Here are just a few noteworthy examples of what you’ll find this year:

What's new in Android
Get the latest news in Android development: Android 14, form factors, Jetpack + Compose libraries, Android Studio, and performance.
What’s new in Web
Explore new features and APIs that became stable across browsers on the Web Platform this year.
What’s new in Generative AI
Discover a new suite of tools that make it easy for developers to leverage and build on top of Google's large language models.
What’s new in Google Cloud
Learn how Google Cloud and generative AI will help you develop faster and more efficiently.

For the best experience, create or connect a developer profile and start saving content to My I/O to build your personal agenda. With over 200 sessions and other learning material, there’s a lot to cover, so we hope this will help you get organized.

This year we’ve introduced development focus filters to help you navigate content faster across mobile, web, AI, and cloud technologies. You can also browse content by topic, type, or experience level so you can find what you’re interested in faster.


Connect with the community

After the keynotes, you can talk to Google experts and other developers online in I/O Adventure chat. Here you can ask questions about new releases and learn best practices from the global developer community.

If you’re craving community now, visit the Community page to meet people with similar interests in your area or find a watch party to attend.

We hope these updates are useful, and we can’t wait to connect online in May!

Managed Android devices must upgrade to Android Device Policy during March 2023

What’s changing 

In 2019, we announced that a new Android management client, Android Device Policy, would replace the legacy Google Apps Device Policy client. We’re now in the final stages of this upgrade. 


All devices with the Google Apps Device Policy will lose access during March 2023 if they have not already upgraded. Existing Google Apps Device Policy app users must switch to Android Device Policy before then to continue syncing work data. Note that, per our last update, the new user registration flow on the legacy Google Apps Device Policy has been blocked since January 2022, and users may see errors during the registration process. Admins can act directly from the alert in the Admin console to identify users who need to upgrade.




Visit the Help Center to learn more about migrating to Android Device Policy, and see our previous announcement for more information.




Rollout pace

  • Devices on the old agent will lose access during March 2023. 
  • Android Device Policy is available now and all users should upgrade to avoid disruption.  


Availability

  • This change impacts Google Workspace customers who use basic and advanced mobile management.




Tips from Android Dev Summit 2022: How to scale made-for-mobile apps to ChromeOS

Posted by Patrick Fuentes, Developer Relations Engineer, Google ChromeOS

People’s appetite for apps on larger screens is growing fast. In Q1 2022 alone, there were 270 million active Android users across Chromebooks, tablets, and foldables. So if you want to grow reach, engagement, and loyalty, taking your app beyond mobile will unlock a world of opportunity.

If your app is available in Google Play, there’s a good chance users are already engaging with it on ChromeOS. And if you’re just starting to think about larger screens, tailoring your app to ChromeOS — which runs a full Android framework — is a great place to start. What’s more, optimizing for ChromeOS is very similar to optimizing for other larger-screen devices, so any work you do for one will scale to the other.

At Android Dev Summit 2022, I shared a few ChromeOS-specific nuances to keep in mind when tailoring your app to larger screens. Let’s explore the top five things devs should consider, as well as workarounds to common challenges.

1) Finessing input compatibility

One of the biggest differences between user behavior on mobile and larger-screen devices is people’s preference for input devices. About 90% of ChromeOS users interact with apps using a mouse and keyboard, and Android users across tablets and foldables often do the same.
The first step to meeting people’s expectations is testing your app’s support for a keyboard, mouse, and stylus. Once you’ve got your basics covered, you can add enhancements such as thoughtful focus states and context menus. You can also further enhance input compatibility on larger screens by testing app-specific input devices, such as game controllers.
Focus states and context menus shown on Chromebooks
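As a concrete example of one such enhancement, here is a small sketch built on standard Android SDK input APIs (API 24+) that opens a context menu on a mouse right-click; the helper function is ours, for illustration only.

import android.view.MotionEvent
import android.view.View

// A sketch: open a context menu when the user right-clicks with a mouse,
// one of the enhancements suggested above.
fun enableRightClickMenu(view: View) {
    view.setOnGenericMotionListener { v, event ->
        val rightClick = event.actionMasked == MotionEvent.ACTION_BUTTON_PRESS &&
            event.actionButton == MotionEvent.BUTTON_SECONDARY
        // showContextMenu(x, y) anchors the menu at the click location.
        if (rightClick) v.showContextMenu(event.x, event.y) else false
    }
}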

2) Creating a fit-for-larger-screen UI

People freely resize apps on ChromeOS, so it’s important to think about how your app looks and feels in a variety of aspect ratios — including landscape orientations. Although ChromeOS offers automatic windowing compatibility support for made-for-mobile experiences, apps that specifically optimize for larger screens tend to drive more engagement.

The extra screen real estate on Chromebooks, tablets, and foldables gives both you and your users more room to play, explore, and create. So why not make the most of it? You can implement a responsive UI for larger screens with toolkits such as Jetpack Compose and create adaptive experiences by sticking to design best practices.
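As one illustration, here is a hedged sketch using Material 3's window size classes (the material3-window-size-class artifact); TwoPaneLayout and SinglePaneLayout are hypothetical placeholders for your own composables.

import android.app.Activity
import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
import androidx.compose.runtime.Composable

// Hypothetical placeholder layouts standing in for your own UI.
@Composable fun TwoPaneLayout() { /* e.g., list and detail side by side */ }
@Composable fun SinglePaneLayout() { /* e.g., a single scrolling column */ }

// Branch the layout on the window's width class so a freely resized
// window gets a layout that fits.
@OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
@Composable
fun AdaptiveScreen(activity: Activity) {
    val sizeClass = calculateWindowSizeClass(activity)
    when (sizeClass.widthSizeClass) {
        WindowWidthSizeClass.Expanded -> TwoPaneLayout()  // desktop-sized window
        else -> SinglePaneLayout()                        // phone-sized window
    }
}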


3) Implementing binary compatibility

If you’ve exclusively run your app on Android phones, you might only be familiar with ARM devices. But Chromebooks and many other desktops often use x86 architectures, which makes binary support critical. Although Gradle builds for all non-deprecated ABIs by default, you’ll still need to specifically account for x86 support if your app or one of your libraries includes C++ code.

Thanks to binary translation, many Android apps will run on x86 ChromeOS devices even if a compatible version isn’t available. But this can hinder app performance and hurt battery life, so it’s best to provide x86 support explicitly whenever you can.
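For apps with native code, the explicit opt-in might look like the following build.gradle.kts sketch (Kotlin DSL; the ABI list is illustrative).

// build.gradle.kts — a sketch of declaring x86/x86_64 ABI support
// explicitly when your app (or a dependency) ships C++ code.
android {
    defaultConfig {
        ndk {
            // Build native libraries for both ARM and x86 families so
            // ChromeOS devices don't have to rely on binary translation.
            abiFilters += listOf("armeabi-v7a", "arm64-v8a", "x86", "x86_64")
        }
    }
}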


4) Giving apps a thorough test run

The surefire way of ensuring a great user experience? Run rigorous checks to make sure your apps and games work as expected on the devices you’re optimizing for. When you’re building for ChromeOS, testing your apps on Chromebooks or another larger-screen device is ideal. But you've still got options if a physical device isn’t available.

For instance, you can still test a keyboard or mouse on an Android handset by plugging them into the USB-C port. And with the new desktop emulator in Android Studio, you can take your app for a spin in a larger-screen setting and test desktop features such as window resizing.

A Chromebook featuring the Desktop Android Virtual Device in Android Studio

5) Polishing apps for publishing

Sometimes, even apps tested on Chromebooks — and listed in Google Play — aren’t actually available to ChromeOS users. This usually happens because the app’s manifest declares that it requires features that aren’t available on those devices.

Let’s say you specify your app requires “android.hardware.camera.” That entry refers to a rear-facing camera — so any devices with only a user-facing camera would be considered unsupported. If any camera will work for your app, you can use “android.hardware.camera.any” instead. And if a hardware feature isn’t a must for your app, it’s best to specify in your manifest that it’s not required by using “required=false.”

A Chromebook featuring recommended manifest entries for hardware features. These manifest entries are also featured on their own next to the Chromebook
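In manifest terms, the guidance above might look like this sketch; the camera entry comes straight from the text, while the touchscreen entry is a commonly recommended ChromeOS addition shown here as an extra example.

<!-- A sketch of the manifest guidance above: accept any camera and mark
     hardware that isn't essential as optional so ChromeOS devices aren't
     filtered out of your Play listing. -->
<uses-feature android:name="android.hardware.camera.any" android:required="false" />
<uses-feature android:name="android.hardware.touchscreen" android:required="false" />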
Once you’ve got your manifest squared away, your app is ready to ship. Your app listing is often your first chance to impress and attract users. That’s why we’re excited the Play Console now enables you to upload screenshots specific to different form factors. With this new functionality, you can show off what your app experience is like on users’ favorite devices and entice them to download.


Connect with millions of larger-screen users

As people’s love for desktops, tablets, and foldables continues to grow, building for these form factors is becoming more and more important. Check out other talks from Android Dev Summit 2022 as well as resources on ChromeOS.dev and developer.android.com for more inspiration and how-tos as you optimize for larger screens. And don’t forget to sign up for the ChromeOS newsletter to keep up with the latest.

Ability to mute all Google Meet participants at once rolling out to mobile platforms

Quick launch summary

Earlier this year, we announced the ability for meeting hosts to mute everyone all at once in Google Meet on desktop and laptop devices. This change gives hosts more control by helping them prevent or stop disruptions coming from unmuted users.