Tag Archives: Android

Tools, not Rules: become a better Android developer with Compiler Explorer

Posted by Shai Barack – Android Platform Performance lead

Introducing Android support in Compiler Explorer

In a previous blog post you learned how Android engineers continuously improve the Android Runtime (ART) in ways that boost app performance on user devices. These changes to the compiler make system and app code faster or smaller. Developers don’t need to change their code and rebuild their apps to benefit from new optimizations, and users get a better experience. In this blog post I’ll take you inside the compiler with a tool called Compiler Explorer and witness some of these optimizations in action.

Compiler Explorer is an interactive website for studying how compilers work. It is an open source project that anyone can contribute to. This year, our engineers added support to Compiler Explorer for the Java and Kotlin programming languages on Android.

You can use Compiler Explorer to understand how your source code is translated to assembly language, and how high-level programming language constructs in a language like Kotlin become low-level instructions that run on the processor.

At Google our engineers use this tool to study different coding patterns for efficiency, to see how existing compiler optimizations work, to share new optimization opportunities, and to teach and learn. Learning is best when it’s done through tools, not rules. Instead of teaching developers to memorize different rules for how to write efficient code or what the compiler might or might not optimize, give the engineers the tools to find out for themselves what happens when they write their code in different ways, and let them experiment and learn. Let’s learn together!

Start by going to godbolt.org. By default we see C++ sample code, so click the dropdown that says C++ and select Android Java. You should see this sample code:

class Square {
   static int square(int num) {
       return num * num;
   }
}
screenshot of sample code in Compiler Explorer

On the left you’ll see a very simple program. You might say that this is a one line program. But this is not a meaningful statement in terms of performance - how many lines of code there are doesn’t tell us how long this program will take to run, or how much memory will be occupied by the code when the program is loaded.

On the right you’ll see a disassembly of the compiler output. This is expressed in terms of assembly language for the target architecture, where every line is a CPU instruction. Looking at the instructions, we can say that the implementation of the square(int num) method consists of 2 instructions in the target architecture. The number and type of instructions give us a better idea of how fast the program is than the number of lines of source code. Since the target architecture is AArch64 (aka ARM64), every instruction is 4 bytes, which means that our program’s code occupies 8 bytes in RAM when the program is compiled and loaded.

Let’s take a brief detour and introduce some Android toolchain concepts.


The Android build toolchain (in brief)

a flow diagram of the Android build toolchain

When you write your Android app, you’re typically writing source code in the Java or Kotlin programming languages. When you build your app in Android Studio, it’s initially compiled by a language-specific compiler into language-agnostic JVM bytecode in a .jar. Then the Android build tools transform the .jar into Dalvik bytecode in .dex files, which is what the Android Runtime executes on Android devices. Typically developers use d8 in their Debug builds, and r8 for optimized Release builds. The .dex files go in the .apk that you push to test devices or upload to an app store. Once the .apk is installed on the user’s device, an on-device compiler which knows the specific target device architecture can convert the bytecode to instructions for the device’s CPU.

We can use Compiler Explorer to learn how all these tools come together, and to experiment with different inputs and see how they affect the outputs.

Going back to our default view for Android Java, on the left is Java source code and on the right is the disassembly for the on-device compiler dex2oat, the very last step in our toolchain diagram. The target architecture is ARM64 as this is the most common CPU architecture in use today by Android devices.

The ARM64 Instruction Set Architecture offers many instructions and extensions, but as you read disassemblies you will find that you only need to memorize a few key instructions. You can look for ARM64 Quick Reference cards online to help you read disassemblies.

At Google we study the output of dex2oat in Compiler Explorer for different reasons, such as:

    • Gaining intuition for what optimizations the compiler performs in order to think about how to write more efficient code.
    • Estimating how much memory will be required when a program with this snippet of code is loaded into memory.
    • Identifying optimization opportunities in the compiler - ways to generate instructions for the same code that are more efficient, resulting in faster execution or in lower memory usage without requiring app developers to change and rebuild their code.
    • Troubleshooting compiler bugs! 🐞

Compiler optimizations demystified

Let’s look at a real example of compiler optimizations in practice. In the previous blog post you can read about compiler optimizations that the ART team recently added, such as coalescing returns. Now you can see the optimization, with Compiler Explorer!

Let’s load this example:

class CoalescingReturnsDemo {
   String intToString(int num) {
       switch (num) {
           case 1:
               return "1";
           case 2:
               return "2";
           case 3:
               return "3";           
           default:
               return "other";
       }
   }
}
screenshot of sample code in Compiler Explorer

How would a compiler implement this code in CPU instructions? Every case would be a branch target, with a case body that has some unique instructions (such as referencing the specific string) and some common instructions (such as assigning the string reference to a register and returning to the caller). Coalescing returns means that some instructions at the tail of each case body can be shared across all cases. The benefits grow for larger switches, proportional to the number of cases.

You can see the optimization in action! Simply create two compiler windows, one for dex2oat from the October 2022 release (the last release before the optimization was added), and another for dex2oat from the November 2023 release (the first release after the optimization was added). You should see that before the optimization, the size of the method body for intToString was 124 bytes. After the optimization, it’s down to just 76 bytes.

This is of course a contrived example for simplicity’s sake. But this pattern is very common in Android code. For instance, consider an implementation of Handler.handleMessage(Message), where you might implement a switch statement over the value of Message#what.
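
To make that concrete, here is a hypothetical Kotlin handler (not part of the Compiler Explorer samples) with the same shape: every branch of the when produces a different String, but the tail work of handing back the chosen value is identical across branches, which is exactly the pattern that benefits from coalescing returns. The class name and message codes are made up for illustration.

import android.os.Handler
import android.os.Looper
import android.os.Message

// Hypothetical message codes for this sketch.
private const val MSG_STARTED = 1
private const val MSG_PROGRESS = 2
private const val MSG_FINISHED = 3

class DownloadHandler(looper: Looper) : Handler(looper) {

    // Every branch ends the same way (produce a String and return it),
    // so the compiler can share that tail across all cases.
    private fun labelFor(what: Int): String = when (what) {
        MSG_STARTED -> "started"
        MSG_PROGRESS -> "progress"
        MSG_FINISHED -> "finished"
        else -> "other"
    }

    override fun handleMessage(msg: Message) {
        val label = labelFor(msg.what)
        // ... update state or UI based on the label ...
    }
}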

How does the compiler implement optimizations such as this? Compiler Explorer lets us look inside the compiler’s pipeline of optimization passes. In a compiler window, click Add New > Opt Pipeline. A new window will open, showing the High-level Internal Representation (HIR) that the compiler uses for the program, and how it’s transformed at every step.

screenshot of the high-level internal representation (HIR) the compiler uses for the program in Compiler Explorer

If you look at the code_sinking pass you will see that the November 2023 compiler replaces Return HIR instructions with Goto instructions.

Most of the passes are hidden when Filters > Hide Inconsequential Passes is checked. You can uncheck this option and see all optimization passes, including ones that did not change the HIR (i.e. have no “diff” over the HIR).

Let’s study another simple optimization, and look inside the optimization pipeline to see it in action. Consider this code:

class ConstantFoldingDemo {
   static int demo(int num) {
       int result = num;
       if (num == 2) {
           result = num + 2;
       }
       return result;
   }
}

The above is functionally equivalent to the below:

class ConstantFoldingDemo {
   static int demo(int num) {
       int result = num;
       if (num == 2) {
           result = 4;
       }
       return result;
   }
}

Can the compiler make this optimization for us? Let’s load it in Compiler Explorer and turn to the Opt Pipeline Viewer for answers.

screenshot of Opt Pipeline Viewer in Compiler Explorer

The disassembly shows us that the compiler never bothers with “two plus two”; it knows that if num is 2 then result needs to be 4. This optimization is called constant folding. Inside the conditional block where we know that num == 2, we propagate the constant 2 into the symbolic name num, then fold num + 2 into the constant 4.

You can see this optimization happening over the compiler’s IR by selecting the constant_folding pass in the Opt Pipeline Viewer.

Kotlin and Java, side by side

Now that we’ve seen the instructions for Java code, try changing the language to Android Kotlin. You should see this sample code, the Kotlin equivalent of the basic Java sample we’ve seen before:

fun square(num: Int): Int = num * num
screenshot of sample code in Kotlin in Compiler Explorer

You will notice that the source code is different but the sample program is functionally identical, and so is the output from dex2oat. Finding the square of a number results in the same instructions, whether you write your source code in Java or in Kotlin.

You can take this opportunity to study interesting language features and discover how they work. For instance, let’s compare Java String concatenation with Kotlin String interpolation.

In Java, you might write your code as follows:

class StringConcatenationDemo {
   void stringConcatenationDemo(String myVal) {
       System.out.println("The value of myVal is " + myVal);
   }
}

Let’s find out how Java String concatenation actually works by trying this example in Compiler Explorer.

screenshot of sample code in Java in Compiler Explorer

First you will notice that we changed the output compiler from dex2oat to d8. Reading Dalvik bytecode, which is the output from d8, is usually easier than reading the ARM64 instructions that dex2oat outputs. This is because Dalvik bytecode uses higher level concepts. Indeed you can see the names of types and methods from the source code on the left side reflected in the bytecode on the right side. Try changing the compiler to dex2oat and back to see the difference.

As you read the d8 output you may realize that Java String concatenation is actually implemented by rewriting your source code to use a StringBuilder. The source code above is rewritten internally by the Java compiler as follows:

class StringConcatenationDemo {
   void stringConcatenationDemo(String myVal) {
       StringBuilder sb = new StringBuilder();
       sb.append("The value of myVal is ");
       sb.append(myVal);
       System.out.println(sb.toString());
  }
}

In Kotlin, we can use String interpolation:

fun stringInterpolationDemo(myVal: String) {
   System.out.println("The value of myVal is $myVal");
}

The Kotlin syntax is easier to read and write, but does this convenience come at a cost? If you try this example in Compiler Explorer, you may find that the Dalvik bytecode output is roughly the same! In this case we see that Kotlin offers an improved syntax, while the compiler emits similar bytecode.

At Google we study examples of language features in Compiler Explorer to learn about how high-level language features are implemented in lower-level terms, and to better inform ourselves on the different tradeoffs that we might make in choosing whether and how to adopt these language features. Recall our learning principle: tools, not rules. Rather than memorizing rules for how you should write your code, use the tools that will help you understand the upsides and downsides of different alternatives, and then make an informed decision.

What happens when you minify your app?

Speaking of making informed decisions as an app developer, you should be minifying your apps with R8 when building your Release APK. Minifying generally does three things to optimize your app to make it smaller and faster:

      1. Dead code elimination: find all the live code (code that is reachable from well-known program entry points), which tells us that the remaining code is not used, and therefore can be removed.

      2. Bytecode optimization: various specialized optimizations that rewrite your app’s bytecode to make it functionally identical but faster and/or smaller.

      3. Obfuscation: renaming all types, methods, and fields in your program that are not accessed by reflection (and therefore can be safely renamed) from their names in source code (com.example.MyVeryLongFooFactorySingleton) to shorter names that fit in less memory (a.b.c).

Let’s see an example of all three benefits! Start by loading this view in Compiler Explorer.

screenshot of sample code in Kotlin in Compiler Explorer

First you will notice that we are referencing types from the Android SDK. You can do this in Compiler Explorer by clicking Libraries and adding Android API stubs.

Second, you will notice that this view has multiple source files open. The Kotlin source code is in example.kt, but there is another file called proguard.cfg.

-keep class MinifyDemo {
   public void goToSite(...);
}

Looking inside this file, you’ll see directives in the format of Proguard configuration flags, which is the legacy format for configuring what to keep when minifying your app. You can see that we are asking to keep a certain method of MinifyDemo. “Keeping” in this context means don’t shrink (we tell the minifier that this code is live). Let’s say we’re developing a library and we’d like to offer our customer a prebuilt .jar where they can call this method, so we’re keeping this as part of our API contract.

We set up a view that will let us see the benefits of minifying. On one side you’ll see d8, showing the dex code without minification, and on the other side r8, showing the dex code with minification. By comparing the two outputs, we can see minification in action:

      1. Dead code elimination: R8 removed all the logging code, since it never executes (as DEBUG is always false). We removed not just the calls to android.util.Log, but also the associated strings.

      2. Bytecode optimization: since the specialized methods goToGodbolt, goToAndroidDevelopers, and goToGoogleIo just call goToUrl with a hardcoded parameter, R8 inlined the calls to goToUrl into the call sites in goToSite. This inlining saves us the overhead of defining a method, invoking the method, and returning from the method.

      3. Obfuscation: we told R8 to keep the public method goToSite, and it did. R8 also decided to keep the method goToUrl as it’s used by goToSite, but you’ll notice that R8 renamed that method to a. This method’s name is an internal implementation detail, so obfuscating its name saved us a few precious bytes.
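
For reference, here is a rough Kotlin sketch of what the example.kt source might look like, reconstructed from the method names mentioned above; the string constants, URLs, and exact bodies are placeholders, and the real Compiler Explorer sample may differ:

import android.content.Context
import android.content.Intent
import android.net.Uri
import android.util.Log

object MinifyDemo {
    private const val DEBUG = false // always false, so the logging below is dead code

    @JvmStatic
    fun goToSite(context: Context, site: String) {
        when (site) {
            "godbolt" -> goToGodbolt(context)
            "androiddev" -> goToAndroidDevelopers(context)
            "io" -> goToGoogleIo(context)
        }
    }

    // Each of these just calls goToUrl with a hardcoded parameter,
    // so R8 can inline them into goToSite.
    private fun goToGodbolt(context: Context) =
        goToUrl(context, "https://godbolt.org")

    private fun goToAndroidDevelopers(context: Context) =
        goToUrl(context, "https://developer.android.com")

    private fun goToGoogleIo(context: Context) =
        goToUrl(context, "https://io.google")

    private fun goToUrl(context: Context, url: String) {
        if (DEBUG) {
            Log.d("MinifyDemo", "Navigating to $url")
        }
        val intent = Intent(Intent.ACTION_VIEW, Uri.parse(url))
            .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
        context.startActivity(intent)
    }
}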

You can use R8 in Compiler Explorer to understand how minification affects your app, and to experiment with different ways to configure R8.

At Google our engineers use R8 in Compiler Explorer to study how minification works on small samples. The authoritative tool for studying how a real app compiles is the APK Analyzer in Android Studio, as optimization is a whole-program problem and a snippet might not capture every nuance. But iterating on release builds of a real app is slow, so studying sample code in Compiler Explorer helps our engineers quickly learn and iterate.

Google engineers build very large apps that are used by billions of people on different devices, so they care deeply about these kinds of optimizations and strive to get the most out of optimizing tools. Because those apps are so large, changing the configuration and rebuilding takes a very long time. Our engineers can now use Compiler Explorer to experiment with minification under different configurations and see results in seconds, not minutes.

You may wonder what would happen if we renamed goToSite in our code. Unfortunately our build would break, unless we also renamed the reference to that method in the Proguard flags. Fortunately, R8 now natively supports Keep Annotations as an alternative to Proguard flags. We can modify our program to use Keep Annotations:

@UsedByReflection(kind = KeepItemKind.CLASS_AND_METHODS)
public static void goToSite(Context context, String site) {
    ...
}

Here is the complete example. You’ll notice that we removed the proguard.cfg file, and under Libraries we added “R8 keep-annotations”, which is how we’re importing @UsedByReflection.

At Google our engineers prefer annotations over flags. Here we’ve seen one benefit of annotations - keeping the information about the code in one place rather than two makes refactors easier. Another is that the annotations have a self-documenting aspect to them. For instance, if this method were actually kept because it’s called from native code, we would annotate it as @UsedByNative instead.

Baseline profiles and you

Lastly, let’s touch on baseline profiles. So far you saw some demos where we looked at dex code, and others where we looked at ARM64 instructions. If you toggle between the different formats you will notice that the high-level dex bytecode is much more compact than low-level CPU instructions. There is an interesting tradeoff to explore here - whether, and when, to compile bytecode to CPU instructions?

For any program method, the Android Runtime has three compilation options:

      1. Compile the method Just in Time (JIT).

      2. Compile the method Ahead of Time (AOT).

      3. Don’t compile the method at all, instead use a bytecode interpreter.

Running code in an interpreter is an order of magnitude slower, but doesn’t incur the cost of loading the representation of the method as CPU instructions, which, as we’ve seen, is more verbose. This is best used for “cold” code - code that runs only once and is not critical to user interactions.

When ART detects that a method is “hot”, it will be JIT-compiled if it hasn’t already been AOT-compiled. JIT compilation accelerates execution times, but pays the one-time cost of compilation during app runtime. This is where baseline profiles come in. Using baseline profiles, you as the app developer can give ART a hint as to which methods are going to be hot or otherwise worth compiling. ART will use that hint before runtime, compiling the code AOT (usually at install time, or when the device is idle) rather than at runtime. This is why apps that use baseline profiles see faster startup times.

With Compiler Explorer we can see Baseline Profiles in action.

Let’s open this example.

screenshot of sample code in Compiler Explorer

The Java source code has two method definitions, factorial and fibonacci. This example is set up with a manual baseline profile, listed in the file profile.prof.txt. You will notice that the profile only references the factorial method. Consequently, the dex2oat output will only show compiled code for factorial, while fibonacci appears in the output with no instructions and a size of 0 bytes.
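
For reference, each rule in a human-readable baseline profile is a set of flags (H for hot, S for startup, P for post-startup) followed by a JVM-style method descriptor. A minimal sketch of the kind of rule profile.prof.txt contains might look like this, assuming a hypothetical class name and an int-to-int signature for factorial:

HSPLFactorialDemo;->factorial(I)I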

In the context of compilation modes, this means that factorial is compiled AOT, and fibonacci will be compiled JIT or interpreted. This is because we applied a different compiler filter in the profile sample. This is reflected in the dex2oat output, which reads: “Compiler filter: speed-profile” (AOT compile only profile code), where previous examples read “Compiler filter: speed” (AOT compile everything).

Conclusion

Compiler Explorer is a great tool for understanding what happens after you write your source code but before it can run on a target device. The tool is easy to use, interactive, and shareable. Compiler Explorer is best used with sample code, but it goes through the same procedures as building a real app, so you can see the impact of all steps in the toolchain.

By learning how to use tools like this to discover how the compiler works under the hood, rather than memorizing a bunch of rules of optimization best practices, you can make more informed decisions.

Now that you've seen how to use the Java and Kotlin programming languages and the Android toolchain in Compiler Explorer, you can level up your Android development skills.

Lastly, don't forget that Compiler Explorer is an open source project on GitHub. If there is a feature you'd like to see then it's just a Pull Request away.


Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates.

#WeArePlay | Nkenne: The app teaching African languages and culture

Posted by Robbie McLachlan, Developer Marketing

In our latest film for #WeArePlay, which celebrates the people behind apps and games, we meet Michael and Shalom - a mother and son duo driven by a passion for sharing and teaching African languages. Discover how their app, Nkenne, goes beyond language learning—serving as a powerful tool for preserving cultural heritage and reconnecting people with their African language and culture.



What inspired you to create Nkenne?

Michael: Nkenne which means "of the mother," really came from a personal place. I wanted to learn Igbo, my native language from Nigeria, but there weren’t many resources out there that made it easy or accessible. My mom, Shalom, raised me in the U.S., and while I grew up hearing bits of Igbo, there wasn’t enough time or structure for me to fully learn it. During the pandemic, when everything paused, I realized how much I wanted to connect with my heritage, and that’s when the idea sparked. We realized that not just Igbo, but many African languages were becoming less common, even among those who speak them. So, we saw this as an opportunity to preserve these languages and help others reconnect with their roots.

Nkenne Founders Cafe in Maine, US

You’ve mentioned the goal of preserving African languages. How does Nkenne contribute to their preservation?

Shalom: African languages are considered low-resource because they don't have as much digital content, formal documentation, or readily available learning tools. With Nkenne, we’re helping to change that. We’re not just teaching the languages, we’re documenting them, building lessons, and creating a resource for future generations. Many people in Nigeria, for example, don’t speak their native languages anymore. By creating Nkenne, we’re essentially building a digital library of African languages.

How does Nkenne integrate both language learning and cultural education? Why is it important to teach both?

Michael: Understanding the cultural meaning behind a language makes learning richer. It’s not just vocabulary—it’s about connecting people with the culture behind it. We include blogs, podcasts, and lessons that dive into the traditions and customs tied to the language, so people understand not just the words, but the history and meaning behind them.

Shalom: Yes, learning a language without the cultural context leaves gaps. For instance, in Nigeria, using your left hand to hand someone an item is considered rude— we teach these cultural nuances in the app to help the user truly grasp the culture.

The Nkenne app on device, showing available languages

What’s next for Nkenne?

Michael: We're focused on expanding our language offerings to 30 by the end of 2025, including more African languages and Creole dialects from around the world. We're also working on enhancing our AI capabilities for language translation.

Shalom: We’re also deepening the community experience, adding more social features where users can connect, share, and practice together. It’s about building not just a language-learning platform, but a space where people from the diaspora and beyond can truly connect with their heritage.


Discover more global #WeArePlay stories and share your favorites.




Developer Preview: Desktop windowing on Android Tablets

Posted by Francesco Romano – Developer Relations Engineer on Android, and Fahd Imtiaz – Product Manager, Android Developer

To empower tablet users to get more done, we're enhancing freeform windowing, allowing them to run multiple apps simultaneously and resize windows for optimal multitasking. Today, we're excited to share that desktop windowing on Android tablets is available in developer preview.

For app developers, the concept of Android apps running in freeform windows has already existed with solutions like Samsung DeX and ChromeOS. Updating your apps to support adaptive layouts, more robust multitasking, and adaptive inputs will ensure your apps work well on large screens across the Android ecosystem.

Let’s explore how to optimize your apps for desktop windowing and deliver the optimal experience to users.

What is desktop windowing?

Desktop windowing allows users to run multiple apps simultaneously and resize app windows, offering a more flexible and desktop-like experience. This, along with a refreshed System UI and new APIs, allows users to be even more productive and creates a more seamless, desktop-like experience on tablets.

In Figure 1, you can see the anatomy of the screen with desktop windowing enabled. Things to make note of:

    • Users can run multiple apps side-by-side, simultaneously
    • The taskbar is fixed and shows the running apps; users can pin apps for quick access
    • A new header bar with window controls at the top of each window, which apps can customize
Desktop windowing on a Pixel Tablet
Figure 1: Desktop windowing on a Pixel Tablet.
Note: Images are examples and subject to change

How can users invoke desktop windowing?

By default, apps open in full screen on Android tablets. To run the apps as a desktop window on Pixel Tablet, press and hold the window handle at the top in the middle of the screen and drag it within the UI, as seen in Figure 2.

Once you are in the desktop space, all future apps will be launched as desktop windows as well.

A moving image demonstrating what completing the action 'press, hold, and drag the window handle to enter desktop windowing' looks like.
Figure 2. Press, hold, and drag the window handle to enter desktop windowing.
Note: Images are examples and subject to change

You can also invoke desktop windowing from the menu that shows up below the window handle when you tap/click on it or use the keyboard shortcut meta key (Windows, Command, or Search) + Ctrl + Down.

You can exit desktop windowing and display an app as full screen by closing all active windows or by grabbing the window handle at the top of the window and dragging the app to the top of the screen. You can also use the meta + H keyboard shortcut to run apps as full screen again.

To return to the desktop, move a full screen app to the desktop space by using the methods mentioned above, or simply tap on the desktop space tile in the Recents screen.

What does this mean for app developers?

Desktop windowing on Android tablets creates new opportunities for your apps, particularly around productivity and multitasking. The possibility to resize and reposition multiple app windows allows users to easily compare documents, reference information while composing emails, and multitask efficiently.

By optimizing for desktop windowing, you can deliver unique user experiences to match the growing demand for tablet-based productivity. At the same time, you'll enhance the overall user experience on tablets, making your apps more versatile and adaptable to different scenarios.

If your app already meets the Tier 2 (Large Screens optimized) quality bar in the Large screen app quality guidelines, then there is minimal additional optimization required! If your app has not been optimized for large screens yet, updating it according to the Large screen app quality guidelines becomes even more crucial in the context of desktop windowing. Let’s see why:

    • Freeform resizing enables users to resize apps to their preference for maximized productivity. Considering this, developers should note:
        • Apps with locked orientation are freely resizable. That means that even if an activity is locked to portrait orientation, users can still resize the app to a landscape orientation window. In a future update, apps declared as non-resizable will have their UI scaled while keeping the same aspect ratio.
        • Adaptive layouts: By adapting your UI, apps have an opportunity to effortlessly handle a wide range of window sizes, from compact to expanded screen layouts. In desktop windowing, apps can be resized down to a minimum size of 386dp x 352dp, so make sure to leverage window size classes to adjust your app's layout, content, and interactions to adapt to different window dimensions.
        • State management: With freeform resizing, configuration changes happen each time the window resizes, so your app should either handle these configuration changes gracefully or make sure you are preserving the app state when the OS initiates the re-creation of the app. As a reminder, users can change the screen density while your app is running, so it’s best to ensure that your app can handle screen density configuration changes as well.

        A moving image demonstrating how apps are fully resizable
        Figure 3. Apps with locked orientation are freely resizable.

      • Desktop windowing takes productivity on tablets to the next level with multiple apps running simultaneously. Similar to split screen, desktop windowing encourages users to have multiple windows open. Considering this, developers should note:
          • Multitasking support: For enhanced productivity, users can have two or more apps open simultaneously, and they expect to easily share content between apps, so add support for drag and drop gestures. Also, ensure your app continues to function correctly even when not in focus, and if your app uses exclusive resources like camera or microphone, the app needs to handle resource loss gracefully when other apps acquire the resource. 
          • Multi-instance support: Users can run multiple instances of your app side-by-side; for example, a document editor application may allow users to start new documents while still being able to reference the already open documents. Apps can set this new Multi-instance property to declare that System UI should be shown for this app to allow it to be launched as multiple instances. Also note that in desktop windowing, new tasks open in a new window, so double-check the user journey if your app starts multiple tasks.

        A moving image demonstrating how you can start another instance of Chrome by dragging a tab out of the app window.
        Figure 4. Start another instance of Chrome by dragging a tab out of the app window.
        Note: Images are examples and subject to change

        • With desktop windowing, input methods beyond touch and insets handling become even more important for a seamless user experience. 
            • More input methods (keyboard, mouse): Users are more likely to use your app with a variety of input methods like external keyboards, mice, and trackpads. Check that users can interact smoothly with your app using keyboard and mouse peripherals or through the emulator. Developers can add support for app shortcuts and publish them using the keyboard shortcuts API, which allows users to easily view the supported app shortcuts through a standardized surface on Android devices.
            • Insets handling: All apps when running in desktop windowing have a header bar, even in immersive mode. Ensure your app's content isn't obscured by this. The new header bar is reported as a caption bar in Compose (androidx.compose.foundation:foundation-layout.WindowInsets.Companion.captionBar) and in Views (android.view.WindowInsets.Type.CAPTION_BAR), which is part of the system bars. API 35 also introduced a new appearance type, to make the header bar transparent, to allow apps to draw custom content inside.
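
        As a minimal sketch of that insets-handling point (assuming a Compose UI; the composable name is made up), you can pad your content by the caption bar insets so the header bar never obscures it:

        import androidx.compose.foundation.layout.Box
        import androidx.compose.foundation.layout.WindowInsets
        import androidx.compose.foundation.layout.captionBar
        import androidx.compose.foundation.layout.fillMaxSize
        import androidx.compose.foundation.layout.windowInsetsPadding
        import androidx.compose.runtime.Composable
        import androidx.compose.ui.Modifier

        @Composable
        fun AppContent() {
            Box(
                modifier = Modifier
                    .fillMaxSize()
                    // Inset by the caption bar (the desktop windowing header bar),
                    // so content is never drawn underneath it.
                    .windowInsetsPadding(WindowInsets.captionBar)
            ) {
                // ... app UI ...
            }
        }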

    Get hands-on! 

    Today we’re announcing a developer preview that provides you with an early opportunity to experience and test desktop windowing. You can try it out on Pixel Tablet before it’s released to AOSP more broadly. Update your Pixel Tablet to the latest Android 15 QPR1 Beta 2 release to try out desktop windowing. If you don’t have a Pixel Tablet handy, access the Pixel Tablet emulator in Android Studio Preview, and select the Android 15.0 (Google APIs Tablet) target. Once your device is set up, select the Enable freeform windows option in Developer options to explore the capabilities of desktop windowing and how your app behaves within this new environment.

    By optimizing your apps for desktop windowing on Pixel Tablet, you are not only enhancing the app experience on that specific device but also future-proofing your apps for the broader Android ecosystem where freeform windowing will become prevalent. We're excited about the windows of opportunities enabled by desktop windowing, and we look forward to seeing how you adapt your apps for an enhanced user experience.

    We're committed to improving the desktop windowing experience through future updates. Make sure to test your app and give us feedback. Stay tuned for more developer guides and resources!

      Jetpack Compose APIs for building adaptive layouts using Material guidance now stable

      Posted by Alex Vanyo – Developer Relations Engineer

      The 1.0 stable version of the Compose adaptive APIs with Material guidance is out, ready to be used in production. The library helps you build adaptive layouts that provide an optimized user experience on any window size.

      The team at SAP Mobile Start were early adopters of the Compose adaptive APIs. It took their developers only five minutes to integrate the NavigationSuiteScaffold from the new Compose Material 3 adaptive library, rapidly adapting the app’s navigation UI to different window sizes.

      Each of the new components in the library - NavigationSuiteScaffold, ListDetailPaneScaffold, and SupportingPaneScaffold - is adaptive: based on the window size and posture, different components are displayed to the user based on which one is most appropriate in the current context. This helps build UI that adapts to a wide variety of window sizes instead of just stretching layouts.

      For an overview of the components, check out the dedicated I/O session and our new documentation pages to get started.

      In this post, we’re going to take a more detailed look at the layering of the new library so you have a better understanding of how customizable it is, to fit a wide variety of use cases you might have.

      Similar to Compose itself, the adaptive libraries are layered into multiple dependencies, so that you can choose the appropriate level of abstraction for your application. There are four new artifacts as part of the adaptive libraries:

        • For the core building blocks for building adaptive UI, including computing the window size class and the current posture, add androidx.compose.material3.adaptive:adaptive:1.0.0

        • For implementing multi-pane layouts, add androidx.compose.material3.adaptive:adaptive-layout:1.0.0


        • For standalone navigators for the multi-pane scaffold layouts, add androidx.compose.material3.adaptive:adaptive-navigation:1.0.0

        • For implementing adaptive navigation UI, add androidx.compose.material3:material3-adaptive-navigation-suite:1.3.0
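
      Taken together, a minimal build.gradle.kts sketch for pulling these in could look like the following (add only the layers you actually need; the coordinates are the ones listed above):

      dependencies {
          // Core building blocks: window size class and posture computation
          implementation("androidx.compose.material3.adaptive:adaptive:1.0.0")
          // Multi-pane layouts: ListDetailPaneScaffold and SupportingPaneScaffold
          implementation("androidx.compose.material3.adaptive:adaptive-layout:1.0.0")
          // Navigators for the multi-pane scaffold layouts
          implementation("androidx.compose.material3.adaptive:adaptive-navigation:1.0.0")
          // Adaptive navigation UI, such as NavigationSuiteScaffold
          implementation("androidx.compose.material3:material3-adaptive-navigation-suite:1.3.0")
      }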

      The libraries have the following dependencies:

      Flow diagram showing dependencies between material3-adaptive 1.0.0 and material 1.3.0 libraries
      New library dependency graph

      To explore this layering more, let’s start with the highest level example with the most built-in functionality using a NavigableListDetailPaneScaffold from androidx.compose.material3.adaptive:adaptive-navigation:

      val navigator = rememberListDetailPaneScaffoldNavigator<Any>()
      
      NavigableListDetailPaneScaffold(
          navigator = navigator,
          listPane = {
              // List pane
          },
          detailPane = {
              // Detail pane
          },
      )
      

      This snippet of code gives you all of our recommended adaptive behavior out of the box for a list-detail layout: determining how many panes to show based on the current window size, hiding and showing the correct pane when the window size changes depending on the previous state of the UI, and having the back button conditionally bring the user back to the list, depending on the window size and the current state.

      A list layout adapting to and from a list detail layout depending on the window size

      This encapsulates a lot of behavior – and this might be all you need, and you don’t need to go any deeper!

      However, there may be reasons why you may want to tweak this behavior, or more directly manage the state by hoisting parts of it in a different way.

      Remember, each layer builds upon the last. This snippet is at the outermost layer, and we can start unwrapping the layers to customize it where we need.

      Let’s go one layer deeper and inline NavigableListDetailPaneScaffold. Behavior won’t change at all with these direct inlinings, since we are just inlining the default behavior at each step:

      (Fun fact: You can follow along with this directly in Android Studio and for any other component you desire. If you choose Refactor > Inline function, you can directly replace a component with its implementation. You can’t delete the original function in the library of course.)

      val navigator = rememberListDetailPaneScaffoldNavigator<Any>()
      
      BackHandler(
          enabled = navigator.canNavigateBack(BackNavigationBehavior.PopUntilContentChange)
      ) {
          navigator.navigateBack(BackNavigationBehavior.PopUntilContentChange)
      }
      ListDetailPaneScaffold(
          directive = navigator.scaffoldDirective,
          value = navigator.scaffoldValue,
          listPane = {
              // List pane
          },
          detailPane = {
              // Detail pane
          },
      )
      

      With the first inlining, we see the BackHandler that NavigableListDetailPaneScaffold includes by default. If using ListDetailPaneScaffold directly, back handling is left up to the developer to include and hoist to the appropriate place.

      This also reveals how the navigator provides two pieces of state to control the ListDetailPaneScaffold:

        • directive - how the panes should be arranged in the ListDetailPaneScaffold, and
        • value - the current state of the panes, as calculated from the directive and the current navigation state.

      These are both controlled by the navigator, and the next unpeeling shows us the default arguments to the navigator for directive and the adapt strategy, which is used to calculate value:

      val navigator = rememberListDetailPaneScaffoldNavigator<Any>(
          scaffoldDirective = calculatePaneScaffoldDirective(currentWindowAdaptiveInfo()),
          adaptStrategies = ListDetailPaneScaffoldDefaults.adaptStrategies(),
      )
      
      BackHandler(
          enabled = navigator.canNavigateBack(BackNavigationBehavior.PopUntilContentChange)
      ) {
          navigator.navigateBack(BackNavigationBehavior.PopUntilContentChange)
      }
      ListDetailPaneScaffold(
          directive = navigator.scaffoldDirective,
          value = navigator.scaffoldValue,
          listPane = {
              // List pane
          },
          detailPane = {
              // Detail pane
          },
      )
      

      The directive controls the behavior for how many panes to show and the pane spacing, based on currentWindowAdaptiveInfo, which contains the size and posture of the window.

      This can be customized with a different directive, to show two panes side-by-side at a smaller medium width:

      val navigator = rememberListDetailPaneScaffoldNavigator<Any>(
          scaffoldDirective = calculatePaneScaffoldDirectiveWithTwoPanesOnMediumWidth(currentWindowAdaptiveInfo()),
          adaptStrategies = ListDetailPaneScaffoldDefaults.adaptStrategies(),
      )
      

      By default, showing two panes at a medium width can result in UI that is too narrow, especially for complex content. However, this can be a good option to use the window space more optimally by showing two panes for less complex content.

      The AdaptStrategy controls what happens to panes when there isn’t enough space to show all of them. Right now, this always hides panes for which there isn’t enough space.

      This directive is used by the navigator to drive its logic and, combined with the adapt strategy, to determine the scaffold value - the resulting target state for each of the panes.

      The scaffold directive and the scaffold value are then passed to the ListDetailPaneScaffold, driving the behavior of the scaffold.

      This layering allows hoisting the scaffold state away from the display of the scaffold itself. This layering also allows custom implementations for controlling how the scaffold works and for hoisting related state. For example, if you are using a custom navigation solution instead of the navigator, you could drive the ListDetailPaneScaffold directly with state derived from your custom navigation solution.

      The layering is enforced in the library with the different artifacts:

        • androidx.compose.material3.adaptive:adaptive contains the underlying methods to calculate the current window adaptive info
        • androidx.compose.material3.adaptive:adaptive-layout contains the layouts ListDetailPaneScaffold and SupportingPaneScaffold
        • androidx.compose.material3.adaptive:adaptive-navigation contains the navigator APIs (like rememberListDetailPaneScaffoldNavigator)

      Therefore, if you aren’t going to use the navigator and instead use a custom navigation solution, you can skip using androidx.compose.material3.adaptive:adaptive-navigation and depend on androidx.compose.material3.adaptive:adaptive-layout directly.

      When adding the Compose Adaptive library to your app, start with the most fully featured layer, and then unwrap if needed to tweak behavior. As we continue to work on the library and add new features, we’ll keep adding them to the appropriate layer. Using the higher-level layers will mean that you will be able to get these new features most easily. If you need to, you can use lower layers to get more fine-grained control, but that also means that more responsibility for behavior is transferred to your app, just like the layering in Compose itself.

      Try out the new components today, and send us your feedback for bugs and feature requests.

      SAP integrated NavigationSuiteScaffold in just 5 minutes to create adaptive navigation UI

      Posted by Alex Vanyo – Developer Relations Engineer

      SAP Mobile Start is an app that centralizes access to SAP's mobile business suite, a hub for users to keep track of their companies’ processes and data so they can efficiently manage their daily to-dos while on the move.

      Recently, SAP Mobile Start developers prioritized building an adaptive app that looks great across devices, including tablets and foldables, to create a more seamless user experience. Using Jetpack Compose and Material 3 design, the team efficiently implemented intuitive, user-friendly features to increase accessibility across its users’ preferred devices.


      Adaptive design across devices

      With over 300 million daily active users on foldables, tablets, and Chromebooks today, building apps that adapt to varied screen sizes is important for providing an optimal user experience. But simply stretching the UI to fit different screen sizes can drastically alter it from its original form, obscuring the interface and impairing the user experience.

      “We focused on situations where we could make better use of available space on large screens,” said Laura Bergmann, UX designer for SAP. “We wanted to get rid of screens that are stretched from edge to edge, full-screen drill-downs or dialogs, and use space more efficiently.”

      Now, after optimizing for different devices, SAP Mobile Start dynamically adjusts its layouts by swapping components and showing or hiding content based on the available window size instead of stretching UI elements to match a device's screen.

      The SAP team also implemented canonical layouts, common UI designs that split a screen into panes according to its size. By separating content into panes, SAP’s users can manage their business workflows more productively. Depending on the window size class, the supporting pane adjusts the UI without additional custom logic. For example, compact windows typically utilize one pane, while larger windows can utilize multiple.

      “Adopting the new canonical layouts from Google helped us focus more on designing unique app capabilities for SAP’s business scenarios,” said Laura. “With the available navigational elements and patterns, we can now channel our efforts into creating a more engaging user experience without reinventing the wheel.”

      SAP developers started by implementing supporting panes to create multi-pane layouts that efficiently utilize on-screen space. The first place developers added supporting panes was on the app’s “To-Do” details page. To-dos used to be managed in a single pane, making it difficult to review the comments and tickets simultaneously. Now, tickets and comments are reviewed in primary and secondary panes on the same screen using SupportingPaneScaffold.

      “We focused on making better use of the available space in large screens. We wanted to move away from UIs that are stretched to adaptive layouts that enhance productivity.”  — Laura Bergmann, UX designer at SAP

      Fast implementation using Compose Material 3 Adaptive library

      SAP Mobile Start is built entirely with Jetpack Compose, Android’s modern declarative toolkit for building native UI. Compose helped SAP developers build new UI faster and easier than ever before thanks to composables, reusable code blocks for building common UI components. The team also used Compose Navigation to integrate seamless navigation between composables, optimizing travel between new UI on all screens.

      It took developers only five minutes to integrate the NavigationSuiteScaffold from the new Compose Material 3 adaptive library, rapidly adapting the app’s navigation UI to different window sizes, switching between a bottom navigation bar and a vertical navigation rail. It also eliminated the need for custom logic, which previously determined the navigation component based on various window size classes. The NavigationSuiteScaffold also reduced the custom navigation UI logic code by 59%, from 379 lines to 156.

      “Jetpack Compose simplified UI development,” said Aditya Arora, lead Android developer. “Its declarative nature, coupled with built-in support for Material Design and dark theme, significantly increased our development efficiency. By simply describing the desired UI, we've reduced code complexity and improved maintainability.”

      SAP developers used live edit and layout inspector in Android Studio to test and optimize the app for large screens. These features were “total game changers” for the SAP team because they helped iterate and inspect layout issues faster when optimizing for new screens.

      With its @PreviewScreenSizes annotation and device streaming powered by Firebase, Jetpack Compose also made testing the app's UI across various screen sizes easier. SAP developers look forward to Compose Screenshot Testing being completed, which will further streamline UI testing and ensure greater visual consistency within the app.

      Using Jetpack Compose, SAP developers also quickly and easily implemented new Material 3 design concepts from the Compose M3 Adaptive library. Material 3 design emphasizes personalizing the app experience, improving interactions with modern visual aesthetics.

      Compose's flexibility made replacing the standard Material Theme with their own custom Fiori Horizon Theme simple, ensuring a consistent visual appearance across SAP apps. “As early adopters of the Compose M3 Adaptive library, we collaborated with Google to refine the API,” said Aditya. “Since our app is completely Compose-based, leveraging the new Compose Material 3 Adaptive library was a piece of cake.”

      A list layout adapting to and from a list detail layout depending on the window size

      As large-screen devices like tablets, foldables, and Chromebooks become more popular, building layouts that adapt to varied screen sizes becomes increasingly crucial. For SAP Mobile Start developers, reimagining their app across devices using Jetpack Compose and Material 3 design guidelines was simple. Using Android’s collection of tools and resources, creating adaptive UIs for all the new form factors hitting the market today is faster and easier than ever.

      “Optimizing for large screens is crucial. The market for tablets, foldables, and Chromebooks is booming. Don't miss out on this opportunity to improve your user experience and expand your app's reach,” said Aditya.

      Get started

      Learn how to improve your UX by optimizing for large screens and foldables using Jetpack Compose and Material 3 design.

      Google Workspace Updates Weekly Recap – September 6, 2024

      1 New update

      Unless otherwise indicated, the features below are available to all Google Workspace customers, and are fully launched or in the process of rolling out. Rollouts should take no more than 15 business days to complete if launching to both Rapid and Scheduled Release at the same time. If not, each stage of rollout should take no more than 15 business days to complete.




      Improved user experience for Google Meet on Android devices
If you’re joining a Google Meet call from an Android phone, tablet, or large screen device, you’ll now see a more streamlined, space-efficient experience with edge-to-edge video. We’ve expanded the video feed to encompass spaces where there were previously margins around the video feed. This helps provide a richer, more immersive viewing experience. You’ll also notice a sleeker user interface for meeting controls, and clearer indicators for information such as the meeting title. | Rollout to Rapid Release and Scheduled Release domains is complete. | Available now for all Google Workspace customers, Workspace Individual Subscribers, and users with personal Google accounts. | Visit the Help Center to learn more about joining a meeting.





      Previous announcements

      The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.


      Gemini (gemini.google.com) now shows related content links in its responses 
      You can now access additional information on topics directly in Gemini’s (gemini.google.com) responses to your prompts. Specifically, you’ll see links to related content in responses to fact-seeking prompts — you can click the arrow chips to dive deeper into the topic. If you have a Gemini for Workspace license and Google Workspace extensions in Gemini are enabled, Gemini will also now include inline links to relevant emails referenced in responses where the Gmail extension is used. | Learn more about related content links shown in Gemini.

      View your most relevant Google Drive folders and files on a single page 
You will now see a combined, unified view for file and folder suggestions on the Drive homepage that leverages machine learning to help you find and organize your most relevant content faster and more intuitively. | Learn more about the view in Drive.

      Empowering Google Workspace customers to take control of their emissions with Electricity Maps
      To help our customers continue to understand and measure the carbon intensity of their cloud computing, we have partnered with Electricity Maps to provide hourly emissions data within the Carbon Footprint report. | Learn more about Electricity Maps. 


      Completed rollouts

      The features below completed their rollouts to Rapid Release domains, Scheduled Release domains, or both. Please refer to the original blog posts for additional details.


      Scheduled Release Domains: 
      Rapid and Scheduled Release Domains: 

      For a recap of announcements in the past six months, check out What’s new in Google Workspace (recent releases).  

      Deploying Rust in Existing Firmware Codebases

      Android's use of safe-by-design principles drives our adoption of memory-safe languages like Rust, making exploitation of the OS increasingly difficult with every release. To provide a secure foundation, we’re extending hardening and the use of memory-safe languages to low-level firmware (including in Trusty apps).


      In this blog post, we'll show you how to gradually introduce Rust into your existing firmware, prioritizing new code and the most security-critical code. You'll see how easy it is to boost security with drop-in Rust replacements, and we'll even demonstrate how the Rust toolchain can handle specialized bare-metal targets.


      Drop-in Rust replacements for C code are not a novel idea and have been used in other cases, such as librsvg’s adoption of Rust which involved replacing C functions with Rust functions in-place. We seek to demonstrate that this approach is viable for firmware, providing a path to memory-safety in an efficient and effective manner.

      Memory Safety for Firmware

      Firmware serves as the interface between hardware and higher-level software. Due to the lack of software security mechanisms that are standard in higher-level software, vulnerabilities in firmware code can be dangerously exploited by malicious actors. Modern phones contain many coprocessors responsible for handling various operations, and each of these run their own firmware. Often, firmware consists of large legacy code bases written in memory-unsafe languages such as C or C++. Memory unsafety is the leading cause of vulnerabilities in Android, Chrome, and many other code bases.


Rust provides a memory-safe alternative to C and C++ with comparable performance and code size. Additionally, it supports interoperability with C with no overhead. The Android team has discussed Rust for bare-metal firmware previously, and has developed training specifically for this domain.

      Incremental Rust Adoption

Our incremental approach focuses on replacing new code and the highest-risk existing code first (for example, code which processes external untrusted input), providing maximum security benefit with the least amount of effort. Simply writing any new code in Rust reduces the number of new vulnerabilities, and over time this can lead to a reduction in the number of outstanding vulnerabilities.


      You can replace existing C functionality by writing a thin Rust shim that translates between an existing Rust API and the C API the codebase expects. The C API is replicated and exported by the shim for the existing codebase to link against. The shim serves as a wrapper around the Rust library API, bridging the existing C API and the Rust API. This is a common approach when rewriting or replacing existing libraries with a Rust alternative.

      Challenges and Considerations

There are several challenges you need to consider before introducing Rust to your firmware codebase. In the following sections we address the general state of no_std Rust (that is, bare-metal Rust code), how to find the right off-the-shelf crate (a Rust library), porting an std crate to no_std, using Bindgen to produce FFI bindings, how to approach allocators and panics, and how to set up your toolchain.

      The Rust Standard Library and Bare-Metal Environments

Rust's standard library consists of three crates: core, alloc, and std. The core crate is always available. The alloc crate requires an allocator for its functionality. The std crate assumes a full-blown operating system and is commonly not supported in bare-metal environments. A third-party crate indicates that it doesn’t rely on std through the crate-level #![no_std] attribute; such a crate is said to be no_std compatible. The rest of this post focuses on these crates.

      Choosing a Component to Replace

When choosing a component to replace, focus on self-contained components with robust testing. Ideally, the component's functionality can be provided by a readily available open-source implementation that supports bare-metal environments.


Parsers which handle standard and commonly used data formats or protocols (such as XML or DNS) are good initial candidates. This ensures that the initial effort focuses on the challenges of integrating Rust with the existing code base and build system rather than on the particulars of a complex component, and it simplifies testing. This approach also eases introducing more Rust later on.

      Choosing a Pre-Existing Crate (Rust Library)

      Picking the right open-source crate (Rust library) to replace the chosen component is crucial. Things to consider are:

      • Is the crate well maintained, for example, are open issues being addressed and does it use recent crate versions?

      • How widely used is the crate? This may be used as a quality signal, but also important to consider in the context of using crates later on which may depend on it.

      • Does the crate have acceptable documentation?

      • Does it have acceptable test coverage?


      Additionally, the crate should ideally be no_std compatible, meaning the standard library is either unused or can be disabled. While a wide range of no_std compatible crates exist, others do not yet support this mode of operation – in those cases, see the next section on converting a std library to no_std.


      By convention, crates which optionally support no_std will provide an std feature to indicate whether the standard library should be used. Similarly, the alloc feature usually indicates using an allocator is optional.
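
For illustration, a crate that optionally supports no_std often gates the attribute on its std feature, along the lines of this minimal sketch (the crate contents here are hypothetical):

// lib.rs of a hypothetical optionally-no_std crate: only pull in the
// standard library when the "std" feature is enabled.
#![cfg_attr(not(feature = "std"), no_std)]

/// Works identically in std and no_std builds because it only uses core.
pub fn checksum(data: &[u8]) -> u32 {
    data.iter().fold(0u32, |acc, b| acc.wrapping_add(u32::from(*b)))
}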


Note: Even when a library declares #![no_std] in its source, there is no guarantee that its dependencies don’t depend on std. We recommend looking through the dependency tree to ensure that all dependencies support no_std, or testing whether the library compiles for a no_std target. Currently, the only reliable way to know is to try compiling the crate for a bare-metal target.



      For example, one approach is to run cargo check with a bare-metal toolchain provided through rustup:

$ rustup target add aarch64-unknown-none
$ cargo check --target aarch64-unknown-none --no-default-features


      Porting a std Library to no_std

      If a library does not support no_std, it might still be possible to port it to a bare-metal environment – especially file format parsers and other OS agnostic workloads. Higher-level functionality such as file handling, threading, and async code may present more of a challenge. In those cases, such functionality can be hidden behind feature flags to still provide the core functionality in a no_std build.

      To port a std crate to no_std (core+alloc):

• In the Cargo.toml file, add a std feature, then add this std feature to the default features

      • Add the following lines to the top of the lib.rs:

#![no_std]

#[cfg(feature = "std")]
extern crate std;
extern crate alloc;

      Then, iteratively fix all occurring compiler errors as follows:

      1. Move any use directives from std to either core or alloc.

      2. Add use directives for all types that would otherwise automatically be imported by the std prelude, such as alloc::vec::Vec and alloc::string::String.

      3. Hide anything that doesn't exist in core or alloc and cannot otherwise be supported in the no_std build (such as file system accesses) behind a #[cfg(feature = "std")] guard.

      4. Anything that needs to interact with the embedded environment may need to be explicitly handled, such as functions for I/O. These likely need to be behind a #[cfg(not(feature = "std"))] guard.

      5. Disable std for all dependencies (that is, change their definitions in Cargo.toml, if using Cargo).

      This needs to be repeated for all dependencies within the crate dependency tree that do not support no_std yet.
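
Putting these steps together, a ported module might end up looking something like this sketch (the parser itself is elided and the function names are hypothetical):

#![no_std]

#[cfg(feature = "std")]
extern crate std;
extern crate alloc;

use alloc::vec::Vec;

// Core parsing logic builds identically for std and no_std targets.
pub fn parse(bytes: &[u8]) -> Vec<u8> {
    // ... actual parsing elided ...
    Vec::from(bytes)
}

// File system access only exists when the std feature is enabled.
#[cfg(feature = "std")]
pub fn parse_file(path: &std::path::Path) -> std::io::Result<Vec<u8>> {
    let bytes = std::fs::read(path)?;
    Ok(parse(&bytes))
}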

      Custom Target Architectures

The Rust compiler officially supports a number of targets; however, many bare-metal targets are missing from that list. Thankfully, the Rust compiler lowers to LLVM IR and uses an internal copy of LLVM to lower to machine code. Thus, it can support any target architecture that LLVM supports by defining a custom target.


      Defining a custom target requires a toolchain built with the channel set to dev or nightly. Rust’s Embedonomicon has a wealth of information on this subject and should be referred to as the source of truth. 


      To give a quick overview, a custom target JSON file can be constructed by finding a similar supported target and dumping the JSON representation:


$ rustc --print target-list
[...]
armv7a-none-eabi
[...]

$ rustc -Z unstable-options --print target-spec-json --target armv7a-none-eabi

This will print out a target JSON that looks something like:

{
  "abi": "eabi",
  "arch": "arm",
  "c-enum-min-bits": 8,
  "crt-objects-fallback": "false",
  "data-layout": "e-m:e-p:32:32-Fi8-i64:64-v128:64:128-a:0:32-n32-S64",
  [...]
}


      This output can provide a starting point for defining your target. Of particular note, the data-layout field is defined in the LLVM documentation.


      Once the target is defined, libcore and liballoc (and libstd, if applicable) must be built from source for the newly defined target. If using Cargo, building with -Z build-std accomplishes this, indicating that these libraries should be built from source for your target along with your crate module:

# set build-std to the list of libraries needed
cargo build -Z build-std=core,alloc --target my_target.json

      Building Rust With LLVM Prebuilts

      If the bare-metal architecture is not supported by the LLVM bundled internal to the Rust toolchain, a custom Rust toolchain can be produced with any LLVM prebuilts that support the target.


      The instructions for building a Rust toolchain can be found in detail in the Rust Compiler Developer Guide. In the config.toml, llvm-config must be set to the path of the LLVM prebuilts.


You can find the latest Rust toolchain supported by a particular version of LLVM by checking the release notes and looking for releases which bump the minimum supported LLVM version. For example, Rust 1.76 bumped the minimum LLVM to 16 and Rust 1.73 bumped the minimum LLVM to 15. That means that with LLVM 15 prebuilts, the latest Rust toolchain that can be built is 1.75.

      Creating a Drop-In Rust Shim

      To create a drop-in replacement for the C/C++ function or API being replaced, the shim needs two things: it must provide the same API as the replaced library and it must know how to run in the firmware’s bare-metal environment.

      Exposing the Same API

      The first is achieved by defining a Rust FFI interface with the same function signatures.


We try to keep the amount of unsafe Rust as minimal as possible by putting the actual implementation in a safe function and exposing a thin unsafe wrapper around it.


      For example, the FreeRTOS coreJSON example includes a JSON_Validate C function with the following signature:

      JSONStatus_t JSON_Validate( const char * buf, size_t max );


      We can write a shim in Rust between it and the memory safe serde_json crate to expose the C function signature. We try to keep the unsafe code to a minimum and call through to a safe function early:

#[no_mangle]
pub unsafe extern "C" fn JSON_Validate(buf: *const c_char, len: usize) -> JSONStatus_t {
    if buf.is_null() {
        JSONStatus::JSONNullParameter as _
    } else if len == 0 {
        JSONStatus::JSONBadParameter as _
    } else {
        json_validate(slice_from_raw_parts(buf as _, len).as_ref().unwrap()) as _
    }
}

// No more unsafe code in here.
fn json_validate(buf: &[u8]) -> JSONStatus {
    if serde_json::from_slice::<Value>(buf).is_ok() {
        JSONStatus::JSONSuccess
    } else {
        ILLEGAL_DOC
    }
}



      Note: This is a very simple example. For a highly resource constrained target, you can avoid alloc and use serde_json_core, which has even lower overhead but requires pre-defining the JSON structure so it can be allocated on the stack.
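
As a rough sketch of that approach, serde_json_core deserializes into a structure declared ahead of time, so no heap allocation is needed (the Reading type here is hypothetical):

use serde::Deserialize;

// The expected JSON shape must be declared up front.
#[derive(Deserialize)]
struct Reading {
    sensor_id: u32,
    value: i32,
}

fn reading_is_valid(buf: &[u8]) -> bool {
    // Fails if the input is not valid JSON matching the Reading structure.
    serde_json_core::from_slice::<Reading>(buf).is_ok()
}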



For further details on how to create an FFI interface, the Rustonomicon covers this topic extensively.

      Calling Back to C/C++ Code

      In order for any Rust component to be functional within a C-based firmware, it will need to call back into the C code for things such as allocations or logging. Thankfully, there are a variety of tools available which automatically generate Rust FFI bindings to C. That way, C functions can easily be invoked from Rust.


      The standard means of doing this is with the Bindgen tool. You can use Bindgen to parse all relevant C headers that define the functions Rust needs to call into. It's important to invoke Bindgen with the same CFLAGS as the code in question is built with, to ensure that the bindings are generated correctly.


      Experimental support for producing bindings to static inline functions is also available.
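
For example, a build.rs that generates bindings with Bindgen might look roughly like the following sketch; the header path, include directory, and flags are hypothetical and should mirror the firmware's own build flags:

// build.rs (sketch): generate Rust FFI bindings for the firmware's C API.
fn main() {
    let bindings = bindgen::Builder::default()
        // Hypothetical header declaring the C functions Rust will call.
        .header("firmware/include/fw_services.h")
        // Match the CFLAGS used to build the firmware so types line up.
        .clang_arg("-Ifirmware/include")
        // Emit core:: types instead of std:: types for no_std builds.
        .use_core()
        .generate()
        .expect("failed to generate bindings");

    let out_dir = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out_dir.join("bindings.rs"))
        .expect("failed to write bindings");
}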

      Hooking Up The Firmware’s Bare-Metal Environment

      Next we need to hook up Rust panic handlers, global allocators, and critical section handlers to the existing code base. This requires producing definitions for each of these which call into the existing firmware C functions.


      The Rust panic handler must be defined to handle unexpected states or failed assertions. A custom panic handler can be defined via the panic_handler attribute. This is specific to the target and should, in most cases, either point to an abort function for the current task/process, or a panic function provided by the environment.
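
As a minimal sketch, a panic handler that forwards to a firmware-provided abort routine could look like this (fw_abort is a hypothetical C function exposed by the firmware):

use core::panic::PanicInfo;

extern "C" {
    // Hypothetical firmware function that aborts the current task and never returns.
    fn fw_abort() -> !;
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    // Optionally log _info through the firmware's logging facility first.
    unsafe { fw_abort() }
}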


      If an allocator is available in the firmware and the crate relies on the alloc crate, the Rust allocator can be hooked up by defining a global allocator implementing GlobalAlloc.
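
A sketch of such a global allocator, delegating to hypothetical fw_malloc and fw_free functions and assuming fw_malloc returns memory aligned for the requested layout, might look like this:

use core::alloc::{GlobalAlloc, Layout};
use core::ffi::c_void;

extern "C" {
    // Hypothetical firmware heap functions.
    fn fw_malloc(size: usize) -> *mut c_void;
    fn fw_free(ptr: *mut c_void);
}

struct FirmwareAllocator;

unsafe impl GlobalAlloc for FirmwareAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Assumes fw_malloc returns memory satisfying layout.align().
        fw_malloc(layout.size()) as *mut u8
    }

    unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) {
        fw_free(ptr as *mut c_void)
    }
}

#[global_allocator]
static ALLOCATOR: FirmwareAllocator = FirmwareAllocator;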


      If the crate in question relies on concurrency, critical sections will need to be handled. Rust's core or alloc crates do not directly provide a means for defining this, however the critical_section crate is commonly used to handle this functionality for a number of architectures, and can be extended to support more.
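
As a sketch, an implementation of the critical_section crate's Impl trait can delegate to the firmware's existing primitives; fw_enter_critical and fw_exit_critical are hypothetical, and this assumes the crate's default restore-state configuration (where RawRestoreState is the unit type):

use critical_section::RawRestoreState;

extern "C" {
    // Hypothetical firmware functions that enter/exit a critical section.
    fn fw_enter_critical();
    fn fw_exit_critical();
}

struct FirmwareCriticalSection;
critical_section::set_impl!(FirmwareCriticalSection);

unsafe impl critical_section::Impl for FirmwareCriticalSection {
    unsafe fn acquire() -> RawRestoreState {
        fw_enter_critical()
    }

    unsafe fn release(_restore_state: RawRestoreState) {
        fw_exit_critical()
    }
}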


      It can be useful to hook up functions for logging as well. Simple wrappers around the firmware’s existing logging functions can expose these to Rust and be used in place of print or eprint and the like. A convenient option is to implement the Log trait.
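
A sketch of a logger that forwards to a hypothetical fw_log C function (and uses alloc for formatting) might look like this:

use alloc::ffi::CString;
use log::{Level, LevelFilter, Log, Metadata, Record};

extern "C" {
    // Hypothetical firmware logging function taking a NUL-terminated string.
    fn fw_log(msg: *const core::ffi::c_char);
}

struct FirmwareLogger;

impl Log for FirmwareLogger {
    fn enabled(&self, metadata: &Metadata) -> bool {
        metadata.level() <= Level::Info
    }

    fn log(&self, record: &Record) {
        if self.enabled(record.metadata()) {
            if let Ok(msg) = CString::new(alloc::format!("{}", record.args())) {
                unsafe { fw_log(msg.as_ptr()) };
            }
        }
    }

    fn flush(&self) {}
}

static LOGGER: FirmwareLogger = FirmwareLogger;

pub fn init_logging() {
    // Ignore the error if a logger is already installed.
    let _ = log::set_logger(&LOGGER);
    log::set_max_level(LevelFilter::Info);
}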

      Fallible Allocations and alloc

Rust's alloc crate normally assumes that allocations are infallible (that is, memory allocations won’t fail). However, due to memory constraints this isn’t true in most bare-metal environments. Under normal circumstances Rust panics and/or aborts when an allocation fails; this may be acceptable behavior for some bare-metal environments, in which case there are no further considerations when using alloc.


      If there’s a clear justification or requirement for fallible allocations however, additional effort is required to ensure that either allocations can’t fail or that failures are handled. 


      One approach is to use a crate that provides statically allocated fallible collections, such as the heapless crate, or dynamic fallible allocations like fallible_vec. Another is to exclusively use try_* methods such as Vec::try_reserve, which check if the allocation is possible.
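
For example, a routine that copies untrusted input into a heap buffer can surface allocation failure to the caller instead of panicking, as in this sketch:

use alloc::collections::TryReserveError;
use alloc::vec::Vec;

fn copy_input(input: &[u8]) -> Result<Vec<u8>, TryReserveError> {
    let mut out = Vec::new();
    // Fails gracefully if the allocation cannot be satisfied.
    out.try_reserve(input.len())?;
    // Cannot allocate here: the capacity was already reserved above.
    out.extend_from_slice(input);
    Ok(out)
}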


      Rust is in the process of formalizing better support for fallible allocations, with an experimental allocator in nightly allowing failed allocations to be handled by the implementation. There is also the unstable cfg flag for alloc called no_global_oom_handling which removes the infallible methods, ensuring they are not used.

      Build Optimizations

Building the Rust library with LTO is necessary to optimize for code size. The existing C/C++ code base does not need to be built with LTO when passing -C lto=true to rustc. Additionally, setting -C codegen-units=1 results in further optimizations in addition to reproducibility.


      If using Cargo to build, the following Cargo.toml settings are recommended to reduce the output library size:


[profile.release]
panic = "abort"
lto = true
codegen-units = 1
strip = "symbols"

# opt-level "z" may produce better results in some circumstances
opt-level = "s"


Pass the -Z remap-cwd-prefix=. flag to rustc (or to Cargo via the RUSTFLAGS env var when building with Cargo) to strip current working directory path strings from the binary.


      In terms of performance, Rust demonstrates similar performance to C. The most relevant example may be the Rust binder Linux kernel driver, which found “that Rust binder has similar performance to C binder”.


      When linking LTO’d Rust staticlibs together with C/C++, it’s recommended to ensure a single Rust staticlib ends up in the final linkage, otherwise there may be duplicate symbol errors when linking. This may mean combining multiple Rust shims into a single static library by re-exporting them from a wrapper module.
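
One way to do this is to build a small wrapper crate as the only staticlib and have it re-export the individual shims (the crate names here are hypothetical):

// lib.rs of the wrapper crate, built with crate-type = ["staticlib"].
// Depending on and re-exporting each shim pulls its #[no_mangle] symbols
// into the single archive handed to the firmware linker.
pub use dns_shim::*;
pub use json_shim::*;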

      Memory Safety for Firmware, Today

Using the process outlined in this blog post, you can begin to introduce Rust into large legacy firmware code bases immediately. Replacing security-critical components with off-the-shelf, open-source, memory-safe implementations and developing new features in a memory-safe language will lead to fewer critical vulnerabilities while also providing an improved developer experience.


      Special thanks to our colleagues who have supported and contributed to these efforts: Roger Piqueras Jover, Stephan Chen, Gil Cukierman, Andrew Walbran, and Erik Gilling