
Eliminating Memory Safety Vulnerabilities at the Source

Memory safety vulnerabilities remain a pervasive threat to software security. At Google, we believe the path to eliminating this class of vulnerabilities at scale and building high-assurance software lies in Safe Coding, a secure-by-design approach that prioritizes transitioning to memory-safe languages.

This post demonstrates why focusing on Safe Coding for new code quickly and counterintuitively reduces the overall security risk of a codebase, finally breaking through the stubbornly high plateau of memory safety vulnerabilities and starting an exponential decline, all while being scalable and cost-effective.

We’ll also share updated data on how the percentage of memory safety vulnerabilities in Android dropped from 76% to 24% over 6 years as development shifted to memory safe languages.

Counterintuitive results

Consider a growing codebase primarily written in memory-unsafe languages, experiencing a constant influx of memory safety vulnerabilities. What happens if we gradually transition to memory-safe languages for new features, while leaving existing code mostly untouched except for bug fixes?

We can simulate the results; a minimal sketch of such a simulation appears below. After some years, as new memory-unsafe development slows down and new memory-safe development takes over, the makeup of the code base shifts accordingly [1].
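Here is a sketch of such a simulation in Kotlin, using the parameters from note 1 (codebase doubling every 6 years, a 2.5-year average vulnerability lifetime, and a sigmoid transition over 10 years). The sigmoid shape, yearly granularity, and all names are illustrative assumptions, not the exact simulation behind the charts:

import kotlin.math.exp
import kotlin.math.pow

fun main() {
    val doublingYears = 6.0       // codebase doubles every 6 years
    val meanLifetime = 2.5        // average vulnerability lifetime, in years
    val transitionYears = 10.0    // time to move new development to memory-safe languages
    val growthRate = 2.0.pow(1.0 / doublingYears) - 1.0  // yearly fractional code growth

    var totalCode = 1.0
    val newUnsafeByYear = mutableListOf<Double>()

    for (year in 0 until 15) {
        val newCode = totalCode * growthRate
        // Share of new code still written in memory-unsafe languages:
        // a sigmoid falling from ~1 to ~0 over the transition period.
        val unsafeShare = 1.0 / (1.0 + exp((year - transitionYears / 2) * 8.0 / transitionYears))
        newUnsafeByYear.add(newCode * unsafeShare)
        totalCode += newCode

        // Vulnerabilities live mostly in recent unsafe code and decay
        // exponentially with age (see "The math" below).
        var vulns = 0.0
        for ((writtenYear, unsafeCode) in newUnsafeByYear.withIndex()) {
            vulns += unsafeCode * exp(-(year - writtenYear) / meanLifetime)
        }
        println("year %2d: relative vulnerability count %.3f".format(year, vulns))
    }
}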

In the final year of our simulation, despite the growth in memory-unsafe code, the number of memory safety vulnerabilities drops significantly, a seemingly counterintuitive result not seen with other strategies.

This reduction might seem paradoxical: how is this possible when the quantity of new memory unsafe code actually grew?

The math

The answer lies in an important observation: vulnerabilities decay exponentially. They have a half-life. The distribution of vulnerability lifetime follows an exponential distribution given an average vulnerability lifetime λ:
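That is, for an average lifetime λ, the lifetime density and the probability that a vulnerability survives to age t take the standard exponential forms:

f(t) = \frac{1}{\lambda}\, e^{-t/\lambda}, \qquad \Pr[\text{lifetime} > t] = e^{-t/\lambda}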

A large-scale study of vulnerability lifetimes [2] published at USENIX Security in 2022 confirmed this phenomenon: researchers found that the vast majority of vulnerabilities reside in new or recently modified code.

This confirms and generalizes our observation, published in 2021, that the density of Android’s memory safety bugs decreased with the age of the code, primarily residing in recent changes.

This leads to two important takeaways:

  • The problem is overwhelmingly with new code, necessitating a fundamental change in how we develop code.
  • Code matures and gets safer with time, exponentially, making the returns on investments like rewrites diminish over time as code gets older.

For example, based on the average vulnerability lifetimes, 5-year-old code has a 3.4x (using lifetimes from the study) to 7.4x (using lifetimes observed in Android and Chromium) lower vulnerability density than new code.
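This ratio follows directly from the exponential decay above: relative to new code, t-year-old code has its vulnerability density scaled by e^{-t/\lambda}, so the ratio between new and 5-year-old code is e^{5/\lambda}. Using the 2.5-year average lifetime from note 1:

e^{5/2.5} = e^{2} \approx 7.4

and the study's longer average lifetimes yield the 3.4x figure in the same way.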

In real life, as with our simulation, when we start to prioritize prevention, the situation starts to rapidly improve.

In practice on Android

The Android team began prioritizing transitioning new development to memory safe languages around 2019. This decision was driven by the increasing cost and complexity of managing memory safety vulnerabilities. There’s much left to do, but the results have already been positive. Here’s the big picture in 2024, looking at total code.


Despite the majority of code still being unsafe (but, crucially, getting progressively older), we’re seeing a large and continued decline in memory safety vulnerabilities. The results align with what we simulated above, and are even better, potentially as a result of our parallel efforts to improve the safety of our memory unsafe code. We first reported this decline in 2022, and we continue to see the total number of memory safety vulnerabilities dropping [3]. Note that the data for 2024 is extrapolated to the full year (represented as 36, but currently at 27 after the September security bulletin).

The percentage of vulnerabilities caused by memory safety issues continues to correlate closely with the language used for new development. Memory safety issues accounted for 76% of Android vulnerabilities in 2019; in 2024 they stand at 24%, well below the 70% industry norm, and continuing to drop.

As we noted in a previous post, memory safety vulnerabilities tend to be significantly more severe, more likely to be remotely reachable, more versatile, and more likely to be maliciously exploited than other vulnerability types. As the number of memory safety vulnerabilities has dropped, the overall security risk has dropped along with it.

Evolution of memory safety strategies

Over the past decades, the industry has pioneered significant advancements to combat memory safety vulnerabilities, with each generation of advancements contributing valuable tools and techniques that have tangibly improved software security. However, with the benefit of hindsight, it’s evident that we have yet to achieve a truly scalable and sustainable solution that reaches an acceptable level of risk:

1st generation: reactive patching. The initial focus was mainly on fixing vulnerabilities reactively. For problems as rampant as memory safety, this incurs ongoing costs on the business and its users. Software manufacturers have to invest significant resources in responding to frequent incidents. This leads to constant security updates, leaving users vulnerable to unknown issues and frequently, albeit temporarily, vulnerable to known issues, which are getting exploited ever faster.

2nd generation: proactive mitigating. The next approach consisted of reducing risk in vulnerable software, including a series of exploit mitigation strategies that raised the costs of crafting exploits. However, these mitigations, such as stack canaries and control-flow integrity, typically impose a recurring cost on products and development teams, often putting security and other product requirements in conflict:

  • They come with performance overhead, impacting execution speed, battery life, tail latencies, and memory usage, sometimes preventing their deployment.
  • Attackers are seemingly infinitely creative, resulting in a cat-and-mouse game with defenders. In addition, the bar to develop and weaponize an exploit is regularly being lowered through better tooling and other advancements.

3rd generation: proactive vulnerability discovery. The following generation focused on detecting vulnerabilities. This includes sanitizers, often paired with fuzzers like libFuzzer, many of which were built by Google. While helpful, these methods address the symptoms of memory unsafety, not the root cause. They typically require constant pressure to get teams to fuzz, triage, and fix their findings, resulting in low coverage. Even when applied thoroughly, fuzzing does not provide high assurance, as evidenced by vulnerabilities found in extensively fuzzed code.

Products across the industry have been significantly strengthened by these approaches, and we remain committed to responding to, mitigating, and proactively hunting for vulnerabilities. Having said that, it has become increasingly clear that those approaches are not only insufficient for reaching an acceptable level of risk in the memory-safety domain, but incur ongoing and increasing costs to developers, users, businesses, and products. As highlighted by numerous government agencies, including CISA, in their secure-by-design report, "only by incorporating secure by design practices will we break the vicious cycle of constantly creating and applying fixes."

The fourth generation: high-assurance prevention

The shift towards memory safe languages represents more than just a change in technology; it is a fundamental shift in how to approach security. This shift is not an unprecedented one, but rather a significant expansion of a proven approach that has already demonstrated remarkable success in eliminating other vulnerability classes like XSS.

The foundation of this shift is Safe Coding, which embeds security invariants directly in the development platform through language features, static analysis, and API design. The result is a secure-by-design ecosystem providing continuous assurance at scale, safe from the risk of accidentally introducing vulnerabilities.

The shift from previous generations to Safe Coding can be seen in the quantifiability of the assertions that are made when developing code. Instead of focusing on the interventions applied (mitigations, fuzzing), or attempting to use past performance to predict future security, Safe Coding allows us to make strong assertions about the code's properties and what can or cannot happen based on those properties.

Safe Coding's scalability lies in its ability to reduce costs by:

  • Breaking the arms race: Instead of an endless arms race of defenders attempting to raise attackers’ costs by also raising their own, Safe Coding leverages our control of developer ecosystems to break this cycle by focusing on proactively building secure software from the start.
  • Commoditizing high assurance memory safety: Rather than precisely tailoring interventions to each asset's assessed risk, all while managing the cost and overhead of reassessing evolving risks and applying disparate interventions, Safe Coding establishes a high baseline of commoditized security, like memory-safe languages, that affordably reduces vulnerability density across the board. Modern memory-safe languages (especially Rust) extend these principles beyond memory safety to other bug classes.
  • Increasing productivity: Safe Coding improves code correctness and developer productivity by shifting bug finding further left, before the code is even checked in. We see this shift showing up in important metrics such as rollback rates (emergency code revert due to an unanticipated bug). The Android team has observed that the rollback rate of Rust changes is less than half that of C++.

From lessons to action

Interoperability is the new rewrite

Based on what we’ve learned, it's become clear that we do not need to throw away or rewrite all our existing memory-unsafe code. Instead, Android is focusing on making interoperability safe and convenient as a primary capability in our memory safety journey. Interoperability offers a practical and incremental approach to adopting memory safe languages, allowing organizations to leverage existing investments in code and systems, while accelerating the development of new features.

We recommend focusing investments on improving interoperability, as we are doing with Rust ↔︎ C++ and Rust ↔︎ Kotlin. To that end, earlier this year, Google provided a $1,000,000 grant to the Rust Foundation, in addition to developing interoperability tooling like Crubit and autocxx.

Role of previous generations

As Safe Coding continues to drive down risk, what will be the role of mitigations and proactive detection? We don’t have definitive answers in Android, but expect something like the following:

  • More selective use of proactive mitigations: We expect less reliance on exploit mitigations as we transition to memory-safe code, leading to not only safer software, but also more efficient software. For instance, after removing the now unnecessary sandbox, Chromium's Rust QR code generator is 95% faster.
  • Decreased use, but increased effectiveness of proactive detection: We anticipate a decreased reliance on proactive detection approaches like fuzzing, but increased effectiveness, as achieving comprehensive coverage over small well-encapsulated code snippets becomes more feasible.

Final thoughts

Fighting against the math of vulnerability lifetimes has been a losing battle. Adopting Safe Coding in new code offers a paradigm shift, allowing us to leverage the inherent decay of vulnerabilities to our advantage, even in large existing systems. The concept is simple: once we turn off the tap of new vulnerabilities, they decrease exponentially, making all of our code safer, increasing the effectiveness of security design, and alleviating the scalability challenges associated with existing memory safety strategies such that they can be applied more effectively in a targeted manner.

This approach has proven successful in eliminating entire vulnerability classes and its effectiveness in tackling memory safety is increasingly evident based on more than half a decade of consistent results in Android.

We'll be sharing more about our secure-by-design efforts in the coming months.

Acknowledgements

Thanks to Alice Ryhl for coding up the simulation. Thanks also to Emilia Kasper, Adrian Taylor, Manish Goregaokar, Christoph Kern, and Lars Bergstrom for their helpful feedback on this post.

Notes


  1. Simulation was based on numbers similar to Android and other Google projects. The code base doubles every 6 years. The average lifetime for vulnerabilities is 2.5 years. It takes 10 years to transition to memory safe languages for new code, and we use a sigmoid function to represent the transition. Note that the use of the sigmoid function is why the second chart doesn’t initially appear to be exponential. 

  2. Alexopoulos et al. "How Long Do Vulnerabilities Live in the Code? A Large-Scale Empirical Measurement Study on FOSS Vulnerability Lifetimes". USENIX Security 22. 

  3. Unlike our simulation, these are vulnerabilities from a real code base, which comes with higher variance, as you can see in the slight increase in 2023. Vulnerability reports were unusually high that year, but in line with expectations given code growth, so while the percentage of memory safety vulnerabilities continued to drop, the absolute number increased slightly. 

AllTrails gains over 1 million downloads after implementing its Wear OS app

Posted by Kseniia Shumelchyk – Developer Relations Engineer

With more than 65 million global users, AllTrails is one of the world’s most popular and trusted platforms for outdoor exploration. The app is designed to be the ultimate adventure companion, so the AllTrails team always works to improve users’ outdoor experience using the latest technology. Recently, its developers created a new Wear OS application, giving users access to their favorite AllTrails features on their Android wearables.

Growing the AllTrails ecosystem

AllTrails has had a great deal of growth from its Android users, and the app’s developers wanted to meet the needs of this growing segment by delivering new ways to get outside. That meant creating an ecosystem of connected experiences, and Wear OS was the perfect starting point. The team started by building essential functions for controlling the app, like pausing, resuming, and finishing hikes, straight from wearables.

“We know that the last thing you want as you’re pulling into the trailhead is to fumble with your phone and look for the trail, so we wanted to bring the trails to your fingertips,” said Sydney Cho, director of product management at AllTrails. “There’s so much cool stuff we want to do with our Wear OS app, but we decided to start by focusing on the fundamentals.”

After implementing core controls, AllTrails developers added more features to take advantage of the watch screen, like a circular progress ring to show users how far they are on their current route. Implementing new user interfaces is efficient since Compose for Wear OS provides built-in Material components for developers, like a CircularProgressIndicator.
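A minimal sketch of such a ring with Compose for Wear OS follows; the function name, progress value, and angles are illustrative choices, not AllTrails’ actual code:

import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.wear.compose.material.CircularProgressIndicator

// Draws a route-progress ring around the watch face.
// routeProgress is the fraction of the route completed, in 0f..1f.
@Composable
fun RouteProgressRing(routeProgress: Float) {
    CircularProgressIndicator(
        progress = routeProgress,
        modifier = Modifier.fillMaxSize(),
        startAngle = 295.5f,   // leave a small gap at the bottom of the ring,
        endAngle = 245.5f      // a common convention on round watch faces
    )
}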

AllTrails’ mobile app warns users when they start to wander off-trail with wrong-turn alerts. AllTrails developers incorporated these alerts into the new Wear OS app, so users can get notified straight from their wrists and keep their phones in their pockets.

The new AllTrails Wear OS application has been very popular, and the team has received substantial positive feedback on the new wearable experience. AllTrails has seen over 1 million downloads since implementing the Wear OS app.

'We’re seeing a lot of growth from Android users, and we want to provide them an ecosystem of connected experiences. Wearables are a core part of that experience.'— Sydney Cho, Director of product management at AllTrails

Streamlined development with Compose for Wear OS

To build the new wearable experience, AllTrails developers used Jetpack Compose for Wear OS. The modern declarative toolkit simplifies UI development by letting developers create reusable code blocks for basic functions, allowing for fast and efficient wearable app development.

“Compose for Wear OS definitely sped up development,” said Sydney. “It also gave our dev team exposure to the toolkit, which we’re obviously huge fans of and use for the majority of our new development.”

This was the first app AllTrails developers created entirely with Jetpack Compose, though they already use it for parts of the mobile app. Even with their brief experience using the toolkit, they knew it would greatly improve development, so it was an obvious choice for the Wear OS integration.

“Jetpack Compose allowed us to iterate much more quickly,” said Sydney. “It’s incredibly simple to create composables, and the simplicity of previewing the app in various states is extremely helpful.”



Connecting health and fitness via Health Connect

AllTrails developers saw another opportunity to improve the user experience while building the new Wear OS application by integrating Health Connect. Health Connect is one of Android’s latest API offerings that gives users a simpler way to consolidate and share their health and fitness data across applications.

When users opt in to Health Connect, they can share their various health and fitness data between applications, giving them a more comprehensive understanding of their activity regardless of the app tracking it.

“Health Connect allows our users to sync their AllTrails activity recordings, like hiking, biking, running, and so on, directly on their phone,” said Sydney. “This activity can then be viewed within Health Connect or from other apps, giving users more freedom to see all their physical activity data, regardless of which app it was recorded on.”

Health Connect streamlines health data management using simple APIs and a straightforward data model. It acts as a centralized repository, consolidating health and fitness data from various apps, simply by having each app write its data to Health Connect. This means that even partial adoption of the API can yield benefits.
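A minimal sketch of that write path using the Health Connect Jetpack client; the session details are placeholders, and exact constructor parameters can vary across library versions:

import android.content.Context
import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.records.ExerciseSessionRecord
import java.time.Instant
import java.time.ZoneOffset

// Writes a completed hike to Health Connect. Permission requests and
// error handling are omitted for brevity.
suspend fun writeHikeSession(context: Context, start: Instant, end: Instant) {
    val client = HealthConnectClient.getOrCreate(context)
    val session = ExerciseSessionRecord(
        startTime = start,
        startZoneOffset = ZoneOffset.UTC,
        endTime = end,
        endZoneOffset = ZoneOffset.UTC,
        exerciseType = ExerciseSessionRecord.EXERCISE_TYPE_HIKING,
        title = "Afternoon hike"
    )
    client.insertRecords(listOf(session))
}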

AllTrails developers enjoyed how easy it was to integrate Health Connect, thanks to its straightforward and well-documented APIs that were “very simple but extremely powerful.”


What’s ahead with Wear OS

Implementing a new Wear OS application did more than give AllTrails’ users a new way to interact with the app: it lets them put their phones back in their pockets so they can enjoy more of what’s on the trail. By prioritizing core functionalities like nearby trail access, recording control, and real-time alerts, AllTrails delivered a seamless and intuitive wearable experience that has driven impressive user adoption and retention.

Get started

Learn more about building wearable apps with design and developer guidance for Wear OS.

Attestation format change for the Android FIDO2 API

Posted by Christiaan Brand – Group Product Manager

In 2019 we introduced a FIDO2 API, adopted by many leading developers, which allows users to generate an attested, device-bound FIDO2 credential on Android devices.

Since this launch, Android has generated an attestation statement based on the SafetyNet API. As the underlying SafetyNet API is being deprecated, the FIDO2 API must move to a new attestation scheme based on hardware-backed key attestation. This change will require action from developers using the FIDO2 API to ensure a smooth transition.

The FIDO2 API is closely related to, but distinct from, the passkeys API and is invoked by setting the residentKey parameter to discouraged. While our goal is to migrate developers to the passkeys API over time, we understand that not all developers who currently use the FIDO2 API are ready for that move, and we continue working on ways to converge these two APIs.

We will update the FIDO2 API on Android to produce attestation statements based on hardware-backed key attestation. As of November 2024, developers can opt in to this attestation scheme with controls for individual requests. This should be useful for testing and incremental rollouts, while also allowing developers full control over the timing of the switch over the next 6 months.

We will begin returning hardware-backed key attestation by default for all developers in early April 2025. From that point, SafetyNet certificates will no longer be granted. It is important to implement support for the new attestation statement, or to move to the passkeys API, before the cutover date; otherwise, your applications might not be able to parse the new attestation statements.

For web apps, requesting hardware-backed key attestation requires Chrome 130 or higher and enrollment in the WebAuthn attestationFormats origin trial. (Learn more about origin trials.) Once these conditions are met, you can specify the attestationFormats parameter in your navigator.credentials.create call with the value ["android-key"].

If you're using the FIDO2 Play Services API in an Android app, switching to hardware-backed key attestation requires Play Services version 22.0.0 on the device. Developers can then specify android-key as the attestation format in the PublicKeyCredentialCreationOptions. You must update your Play Services dependencies to see this new option.

We will continue to evolve FIDO APIs. Please continue to provide feedback using [email protected] to connect with the team and developer community.

Google & Arm – Raising The Bar on GPU Security

Who cares about GPUs?

You, me, and the entire ecosystem! GPUs (graphics processing units) are critical in delivering rich visual experiences on mobile devices. However, the GPU software and firmware stack has become a way for attackers to gain permissions and entitlements (privilege escalation) on Android-based devices. There are plenty of issues in this category that can affect all major GPU brands, for example, CVE-2023-4295, CVE-2023-21106, CVE-2021-0884, and more. Most exploitable GPU vulnerabilities are in the implementation of the GPU kernel mode modules. These modules are pieces of code that load/unload during runtime, extending functionality without the need to reboot the device.

Proactive testing is good hygiene, as it can lead to the detection and resolution of new vulnerabilities before they’re exploited. It’s also one of the most complex investigations to do, as you don’t necessarily know where the vulnerability will appear (that’s the point!). By combining the expertise of Google’s engineers with that of IP owners and OEMs, we can ensure the Android ecosystem retains a strong measure of integrity.

Why investigate GPUs?

When researching vulnerabilities, GPUs are a popular target due to:

  1. Functionality vs. Security Tradeoffs

    Nobody wants a slow, unresponsive device; any hit to GPU performance could result in a noticeably degraded user experience. As such, the GPU software stack in Android relies on an in-process HAL model, where the API and user-space drivers communicating with the GPU kernel-mode module run directly within the context of apps, avoiding IPC (interprocess communication). This opens the door for potentially untrusted code from a third-party app to directly access the interface exposed by the GPU kernel module; if there are any vulnerabilities in the module, the third-party app has an avenue to exploit them.

  2. Variety & Memory Safety

    Additionally, the implementations of GPU subsystems (and kernel modules specifically) from major OEMs are increasingly complex. Kernel modules for most GPUs are typically written in memory unsafe languages such as C, which are susceptible to memory corruption vulnerabilities like buffer overflows.

Can someone do something about this?

Great news, we already have! Who’s we? The Android Red Team and Arm! We’ve worked together to run an engagement on the Mali GPU (more on that below), but first, a brief introduction:

Android Red Team

The Android Red Team performs time-bound security assessment engagements on all aspects of the Android open source codebase and conducts regular security reviews and assessments of internal Android components. Throughout these engagements, the Android Red Team regularly collaborates with third-party software and hardware providers to analyze and understand proprietary and “closed source” code repositories and relevant source code used by Android products, with the sole objective of identifying security risks and potential vulnerabilities before they can be exploited by adversaries outside of Android. This year, the Android Red Team collaborated directly with our industry partner, Arm, to conduct the Mali GPU engagement and further secure millions of Android devices.

Arm Product Security and GPU Teams

Arm has a central product security team that sets the policy and practice across the company. They also have dedicated product security experts embedded in engineering teams. Arm operates a systematic approach designed to prevent, discover, and eliminate security vulnerabilities, including a Security Development Lifecycle (SDL), a monitoring capability, and incident response. For this collaboration, the Android Red Team was supported by the embedded security experts in Arm’s GPU engineering team.

Working together to secure Android devices

Google’s Android Security teams and Arm have been working together for a long time. Security requirements are never static, and challenges exist with all GPU vendors. By frequently sharing expertise, the Android Red Team and Arm were able to accelerate detection and resolution. Investigations of identified vulnerabilities, potential remediation strategies, and hardening measures drove detailed analyses and the implementation of fixes where relevant.

Recent research focused on the Mali GPU because it is the most popular GPU in today's Android devices. Collaborating on GPU security allowed us to:

  1. Assess the impact on the broadest segment of the Android Ecosystem: The Arm Mali GPU is one of the most used GPUs by original equipment manufacturers (OEMs) and is found in many popular mobile devices. By focusing on the Arm Mali GPU, the Android Red Team could assess the security of a GPU implementation running on millions of Android devices worldwide.
  2. Evaluate the reference implementation and vendor-specific changes: Phone manufacturers often modify the upstream implementation of GPUs. This tailors the GPU to the manufacturer's specific device(s). These modifications and enhancements are always challenging to make, and can sometimes introduce security vulnerabilities that are not present in the original version of the GPU upstream. In this specific instance, the Google Pixel team actively worked with the Android Red Team to better understand and secure the modifications they made for Pixel devices.

Improvements

Investigations have led to significant improvements, leveling up the security of the GPU software/firmware stack across a wide segment of the Android ecosystem.

Testing the kernel driver

One key component of the GPU subsystem is its kernel mode driver. During this engagement, both the Android Red Team and Arm invested significant effort looking at the Mali kbase kernel driver. Due to its complexity, fuzzing was chosen as the primary testing approach for this area. Fuzzing automates and scales vulnerability discovery in a way not possible via manual methods. With help from Arm, the Android Red Team added more syzkaller fuzzing descriptions to match the latest Mali kbase driver implementation.

The team built a few customizations to enable fuzzing the Mali kbase driver in the cloud, without physical hardware. This provided a huge improvement to fuzzing performance and scalability. With the Pixel team’s support, we also were able to set up fuzzing on actual Pixel devices. Through the combination of cloud-based fuzzing, Pixel-based fuzzing, and manual review, we were able to uncover two memory issues in Pixel’s customization of driver code (CVE-2023-48409 and CVE-2023-48421).

Both issues occurred inside of the gpu_pixel_handle_buffer_liveness_update_ioctl function, which is implemented by the Pixel team as part of device specific customization. These are both memory issues caused by integer overflow problems. If exploited carefully alongside other vulnerabilities, these issues could lead to kernel privilege escalation from user space. Both issues were fixed and the patch was released to affected devices in Pixel security bulletin 2023-12-01.
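To illustrate the bug class (this is not the actual driver code, and the names are hypothetical), an unchecked multiplication in a size computation can wrap around, so a bounds check passes while the subsequent copy runs far past the buffer:

// Hypothetical illustration of an integer-overflow-driven buffer overflow.
fun updateBufferLiveness(count: Int, elementSize: Int, capacity: Int) {
    val totalBytes = count * elementSize   // Int multiplication can silently wrap
    if (totalBytes <= capacity) {
        // The check passed, but if the multiplication wrapped, the real
        // number of bytes to copy is far larger than `capacity`.
    }
    // Safer: Math.multiplyExact(count, elementSize) throws on overflow
    // instead of wrapping.
}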

Testing the firmware

Firmware is another fundamental building block of the GPU subsystem. It’s the intermediary between kernel drivers and GPU hardware. In many cases, firmware functionality is directly or indirectly accessible from applications, so “application ⇒ kernel ⇒ firmware ⇒ kernel” is a known attack flow in this area. Firmware generally runs on embedded microcontrollers with limited resources, so commonly used kernel security mitigations (ASLR, stack protection, heap protection, certain sanitizers, etc.) might not be applicable due to resource constraints and performance impact. This can make compromising firmware easier, in some cases, than directly compromising kernel drivers from user space. To test the integrity of existing firmware, the Android Red Team and Arm worked together to perform both fuzzing and formal verification, along with manual analysis. This multi-pronged approach led to the discovery of CVE-2024-0153, which had a patch released in the July 2024 Android Security Bulletin.

CVE-2024-0153 occurs when GPU firmware handles certain instructions: the firmware copies register contents into a buffer, and although there are size checks before the copy operation, under very specific conditions an out-of-bounds write to the destination buffer occurs, producing a buffer overflow. When carefully manipulated, this overflow overwrites important structures following the buffer, causing code execution inside the GPU firmware.

The conditions necessary to reach and potentially exploit this issue are very complex, as doing so requires a deep understanding of how instructions are executed. With their collective expertise, the Android Red Team and Arm were able to verify the exploitation path and leverage the issue to gain limited control of GPU firmware, then circle back to the kernel to obtain privilege escalation. Arm did an excellent job responding quickly and remediating the issue. Altogether, this highlights the strength of the collaboration between both teams.

Time to Patch

It’s known that attackers exploit GPU vulnerabilities in the wild, and time to patch is crucial to reduce the risk of exploitation and protect users. As a result of this engagement, nine new Security Test Suite (STS) tests were built to help partners automatically check their builds for missing Mali kbase patches. (STS is software provided by Google to help partners automate checking their builds for missing security patches.)

What’s Next?

The Arm Product Security Team is actively involved in security-focused industry communities and collaborates closely with its ecosystem partners. The engagement with the Android Red Team, for instance, provides valuable enablement that drives best practices and product excellence. Building on this collaborative approach, Arm is complementing its product security assurance capabilities with a bug bounty program. This investment will expand Arm’s efforts to identify potential vulnerabilities. For more information on Arm's product security initiatives, please visit this product security page.

The Android Red Team and Arm continue to work together to proactively raise the bar on GPU security. With thorough testing, rapid fixing, and updates to the security test suite, we’re improving the ecosystem for Android users. The Android Red Team looks forward to replicating this working relationship with other ecosystem partners to make devices more secure.

AI on Android Spotlight Week begins September 30th

Posted by Joseph Lewis – Technical Writer, Android AI

AI on Android Spotlight Week is our latest installment of the Spotlight Weeks series. We'll have a full week of exploration into the latest advancements in AI for Android developers, featuring a variety of exciting activities, including an AMA with Google AI experts, technical talks, early access to our new tools and APIs, and demos of the latest Android generative AI technologies. AI on Android Spotlight Week runs from September 30th through October 4th and will feature information and activities for developers, researchers, and enthusiasts interested in the future of generative AI app development on Android-powered devices.

Get the latest on Android AI developer strategies

During our Spotlight Week: AI on Android, we’ll feature a number of new and exciting opportunities to learn more about how to work with generative AI and machine learning for Android app development, including:

    • Conversations about on-device and cloud-based GenAI solutions with Gemini Nano, Vertex AI in Firebase, and LiteRT (formerly known as TensorFlow Lite)
    • Partner demos and deep dives into the latest AI technologies and how to integrate them in Android apps
    • Discussions around model capabilities, developer tools and integration strategies from web to mobile
    • Answers to top questions from the dev community about AI on Android

How to participate

Our Spotlight Week: AI on Android will happen entirely online, across Android Developers channels - YouTube, X, LinkedIn, and d.android.com. Check the Android AI developer page on Monday, September 30, 2024 to read our next blog post with full details!

Follow @AndroidDev on X for the latest updates, help spread the word about AI on Android Spotlight Week, and use #AndroidAI on your favorite social media platforms to ask questions and share your AI projects with the community. We’re excited for you to join us!

Tools, not Rules: become a better Android developer with Compiler Explorer

Posted by Shai Barack – Android Platform Performance lead

Introducing Android support in Compiler Explorer

In a previous blog post you learned how Android engineers continuously improve the Android Runtime (ART) in ways that boost app performance on user devices. These changes to the compiler make system and app code faster or smaller. Developers don’t need to change their code and rebuild their apps to benefit from new optimizations, and users get a better experience. In this blog post I’ll take you inside the compiler with a tool called Compiler Explorer and witness some of these optimizations in action.

Compiler Explorer is an interactive website for studying how compilers work. It is an open source project that anyone can contribute to. This year, our engineers added support to Compiler Explorer for the Java and Kotlin programming languages on Android.

You can use Compiler Explorer to understand how your source code is translated to assembly language, and how high-level programming language constructs in a language like Kotlin become low-level instructions that run on the processor.

At Google our engineers use this tool to study different coding patterns for efficiency, to see how existing compiler optimizations work, to share new optimization opportunities, and to teach and learn. Learning is best when it’s done through tools, not rules. Instead of teaching developers to memorize different rules for how to write efficient code or what the compiler might or might not optimize, give the engineers the tools to find out for themselves what happens when they write their code in different ways, and let them experiment and learn. Let’s learn together!

Start by going to godbolt.org. By default we see C++ sample code, so click the dropdown that says C++ and select Android Java. You should see this sample code:

class Square {
   static int square(int num) {
       return num * num;
   }
}
screenshot of sample code in Compiler Explorer

On the left you’ll see a very simple program. You might say that this is a one-line program. But this is not a meaningful statement in terms of performance: how many lines of code there are doesn’t tell us how long this program will take to run, or how much memory will be occupied by the code when the program is loaded.

On the right you’ll see a disassembly of the compiler output. This is expressed in terms of assembly language for the target architecture, where every line is a CPU instruction. Looking at the instructions, we can say that the implementation of the square(int num) method consists of 2 instructions in the target architecture. The number and type of instructions give us a better idea for how fast the program is than the number of lines of source code. Since the target architecture is AArch64 aka ARM64, every instruction is 4 bytes, which means that our program’s code occupies 8 bytes in RAM when the program is compiled and loaded.
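For reference, those two instructions look something like the following; this is illustrative, as the exact register assignment depends on ART’s calling convention:

mul w0, w1, w1    // w0 = num * num
ret               // return to the caller with the result in w0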

Let’s take a brief detour and introduce some Android toolchain concepts.


The Android build toolchain (in brief)

a flow diagram of the Android build toolchain

When you write your Android app, you’re typically writing source code in the Java or Kotlin programming languages. When you build your app in Android Studio, it’s initially compiled by a language-specific compiler into language-agnostic JVM bytecode in a .jar. Then the Android build tools transform the .jar into Dalvik bytecode in .dex files, which is what the Android Runtime executes on Android devices. Typically developers use d8 in their Debug builds, and r8 for optimized Release builds. The .dex files go in the .apk that you push to test devices or upload to an app store. Once the .apk is installed on the user’s device, an on-device compiler which knows the specific target device architecture can convert the bytecode to instructions for the device’s CPU.

We can use Compiler Explorer to learn how all these tools come together, and to experiment with different inputs and see how they affect the outputs.

Going back to our default view for Android Java, on the left is Java source code and on the right is the disassembly for the on-device compiler dex2oat, the very last step in our toolchain diagram. The target architecture is ARM64 as this is the most common CPU architecture in use today by Android devices.

The ARM64 Instruction Set Architecture offers many instructions and extensions, but as you read disassemblies you will find that you only need to memorize a few key instructions. You can look for ARM64 Quick Reference cards online to help you read disassemblies.

At Google we study the output of dex2oat in Compiler Explorer for different reasons, such as:

    • Gaining intuition for what optimizations the compiler performs in order to think about how to write more efficient code.
    • Estimating how much memory will be required when a program with this snippet of code is loaded into memory.
    • Identifying optimization opportunities in the compiler - ways to generate instructions for the same code that are more efficient, resulting in faster execution or in lower memory usage without requiring app developers to change and rebuild their code.
    • Troubleshooting compiler bugs! 🐞

Compiler optimizations demystified

Let’s look at a real example of compiler optimizations in practice. In the previous blog post you can read about compiler optimizations that the ART team recently added, such as coalescing returns. Now you can see the optimization, with Compiler Explorer!

Let’s load this example:

class CoalescingReturnsDemo {
   String intToString(int num) {
       switch (num) {
           case 1:
               return "1";
           case 2:
               return "2";
           case 3:
               return "3";           
           default:
               return "other";
       }
   }
}
screenshot of sample code in Compiler Explorer

How would a compiler implement this code in CPU instructions? Every case would be a branch target, with a case body that has some unique instructions (such as referencing the specific string) and some common instructions (such as assigning the string reference to a register and returning to the caller). Coalescing returns means that some instructions at the tail of each case body can be shared across all cases. The benefits grow for larger switches, proportional to the number of cases.

You can see the optimization in action! Simply create two compiler windows, one for dex2oat from the October 2022 release (the last release before the optimization was added), and another for dex2oat from the November 2023 release (the first release after the optimization was added). You should see that before the optimization, the size of the method body for intToString was 124 bytes. After the optimization, it’s down to just 76 bytes.

This is of course a contrived example for simplicity’s sake. But this pattern is very common in Android code. For instance, consider an implementation of Handler.handleMessage(Message), where you might implement a switch statement over the value of Message#what, as in the sketch below.
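Here is a hypothetical Kotlin shape of that pattern; the message codes and work methods are placeholders:

import android.os.Handler
import android.os.Looper
import android.os.Message

// Placeholder message codes; real apps define their own.
private const val MSG_START = 1
private const val MSG_STOP = 2

class DemoHandler(looper: Looper) : Handler(looper) {
    override fun handleMessage(msg: Message) {
        // Each branch does unique work, but all branches share the same
        // return path, which is exactly the tail that coalescing returns merges.
        when (msg.what) {
            MSG_START -> startWork()
            MSG_STOP -> stopWork()
            else -> super.handleMessage(msg)
        }
    }

    private fun startWork() { /* ... */ }
    private fun stopWork() { /* ... */ }
}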

How does the compiler implement optimizations such as this? Compiler Explorer lets us look inside the compiler’s pipeline of optimization passes. In a compiler window, click Add New > Opt Pipeline. A new window will open, showing the High-level Internal Representation (HIR) that the compiler uses for the program, and how it’s transformed at every step.

screenshot of the high-level internal representation (HIR) the compiler uses for the program in Compiler Explorer

If you look at the code_sinking pass you will see that the November 2023 compiler replaces Return HIR instructions with Goto instructions.

Most of the passes are hidden when Filters > Hide Inconsequential Passes is checked. You can uncheck this option and see all optimization passes, including ones that did not change the HIR (i.e. have no “diff” over the HIR).

Let’s study another simple optimization, and look inside the optimization pipeline to see it in action. Consider this code:

class ConstantFoldingDemo {
   static int demo(int num) {
       int result = num;
       if (num == 2) {
           result = num + 2;
       }
       return result;
   }
}

The above is functionally equivalent to the below:

class ConstantFoldingDemo {
   static int demo(int num) {
       int result = num;
       if (num == 2) {
           result = 4;
       }
       return result;
   }
}

Can the compiler make this optimization for us? Let’s load it in Compiler Explorer and turn to the Opt Pipeline Viewer for answers.

screenshot of Opt Pipeline Viewer in Compiler Explorer

The disassembly shows us that the compiler never bothers with “two plus two”; it knows that if num is 2 then result needs to be 4. This optimization is called constant folding. Inside the conditional block where we know that num == 2, we propagate the constant 2 into the symbolic name num, then fold num + 2 into the constant 4.

You can see this optimization happening over the compiler’s IR by selecting the constant_folding pass in the Opt Pipeline Viewer.

Kotlin and Java, side by side

Now that we’ve seen the instructions for Java code, try changing the language to Android Kotlin. You should see this sample code, the Kotlin equivalent of the basic Java sample we’ve seen before:

fun square(num: Int): Int = num * num
screenshot of sample code in Kotlin in Compiler Explorer

You will notice that the source code is different but the sample program is functionally identical, and so is the output from dex2oat. Finding the square of a number results in the same instructions, whether you write your source code in Java or in Kotlin.

You can take this opportunity to study interesting language features and discover how they work. For instance, let’s compare Java String concatenation with Kotlin String interpolation.

In Java, you might write your code as follows:

class StringConcatenationDemo {
   void stringConcatenationDemo(String myVal) {
       System.out.println("The value of myVal is " + myVal);
   }
}

Let’s find out how Java String concatenation actually works by trying this example in Compiler Explorer.

screenshot of sample code in Java in Compiler Explorer

First you will notice that we changed the output compiler from dex2oat to d8. Reading Dalvik bytecode, which is the output from d8, is usually easier than reading the ARM64 instructions that dex2oat outputs. This is because Dalvik bytecode uses higher level concepts. Indeed you can see the names of types and methods from the source code on the left side reflected in the bytecode on the right side. Try changing the compiler to dex2oat and back to see the difference.

As you read the d8 output you may realize that Java String concatenation is actually implemented by rewriting your source code to use a StringBuilder. The source code above is rewritten internally by the Java compiler as follows:

class StringConcatenationDemo {
   void stringConcatenationDemo(String myVal) {
       StringBuilder sb = new StringBuilder();
       sb.append("The value of myVal is ");
       sb.append(myVal);
       System.out.println(sb.toString());
  }
}

In Kotlin, we can use String interpolation:

fun stringInterpolationDemo(myVal: String) {
   System.out.println("The value of myVal is $myVal");
}

The Kotlin syntax is easier to read and write, but does this convenience come at a cost? If you try this example in Compiler Explorer, you may find that the Dalvik bytecode output is roughly the same! In this case we see that Kotlin offers an improved syntax, while the compiler emits similar bytecode.

At Google we study examples of language features in Compiler Explorer to learn about how high-level language features are implemented in lower-level terms, and to better inform ourselves on the different tradeoffs that we might make in choosing whether and how to adopt these language features. Recall our learning principle: tools, not rules. Rather than memorizing rules for how you should write your code, use the tools that will help you understand the upsides and downsides of different alternatives, and then make an informed decision.

What happens when you minify your app?

Speaking of making informed decisions as an app developer, you should be minifying your apps with R8 when building your Release APK. Minifying generally does three things to optimize your app to make it smaller and faster:

      1. Dead code elimination: find all the live code (code that is reachable from well-known program entry points), which tells us that the remaining code is not used, and therefore can be removed.

      2. Bytecode optimization: various specialized optimizations that rewrite your app’s bytecode to make it functionally identical but faster and/or smaller.

      3. Obfuscation: renaming all types, methods, and fields in your program that are not accessed by reflection (and therefore can be safely renamed) from their names in source code (com.example.MyVeryLongFooFactorySingleton) to shorter names that fit in less memory (a.b.c).

Let’s see an example of all three benefits! Start by loading this view in Compiler Explorer.

screenshot of sample code in Kotlin in Compiler Explorer

First you will notice that we are referencing types from the Android SDK. You can do this in Compiler Explorer by clicking Libraries and adding Android API stubs.

Second, you will notice that this view has multiple source files open. The Kotlin source code is in example.kt, but there is another file called proguard.cfg.

-keep class MinifyDemo {
   public void goToSite(...);
}

Looking inside this file, you’ll see directives in the format of Proguard configuration flags, which is the legacy format for configuring what to keep when minifying your app. You can see that we are asking to keep a certain method of MinifyDemo. “Keeping” in this context means don’t shrink (we tell the minifier that this code is live). Let’s say we’re developing a library and we’d like to offer our customer a prebuilt .jar where they can call this method, so we’re keeping this as part of our API contract.

We set up a view that will let us see the benefits of minifying. On one side you’ll see d8, showing the dex code without minification, and on the other side r8, showing the dex code with minification. By comparing the two outputs, we can see minification in action:

      1. Dead code elimination: R8 removed all the logging code, since it never executes (as DEBUG is always false). It removed not just the calls to android.util.Log, but also the associated strings (see the reconstruction sketch after this list).

      2. Bytecode optimization: since the specialized methods goToGodbolt, goToAndroidDevelopers, and goToGoogleIo just call goToUrl with a hardcoded parameter, R8 inlined the calls to goToUrl into the call sites in goToSite. This inlining saves us the overhead of defining a method, invoking the method, and returning from the method.

      3. Obfuscation: we told R8 to keep the public method goToSite, and it did. R8 also decided to keep the method goToUrl as it’s used by goToSite, but you’ll notice that R8 renamed that method to a. This method’s name is an internal implementation detail, so obfuscating its name saved us a few precious bytes.
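For reference, the source in that view is shaped roughly like the following hypothetical reconstruction; the method names come from the description above, but the bodies are illustrative:

import android.content.Context
import android.util.Log

object MinifyDemo {
    private const val DEBUG = false  // always false, so the logging below is dead code

    @JvmStatic
    fun goToSite(context: Context, site: String) {
        when (site) {
            "godbolt" -> goToGodbolt(context)
            "android" -> goToAndroidDevelopers(context)
            "io" -> goToGoogleIo(context)
        }
    }

    // Each specialized method just calls goToUrl with a hardcoded
    // parameter, so R8 can inline them into goToSite.
    private fun goToGodbolt(context: Context) = goToUrl(context, "https://godbolt.org")
    private fun goToAndroidDevelopers(context: Context) = goToUrl(context, "https://developer.android.com")
    private fun goToGoogleIo(context: Context) = goToUrl(context, "https://io.google")

    private fun goToUrl(context: Context, url: String) {
        if (DEBUG) Log.d("MinifyDemo", "Navigating to $url")
        // launch a browser intent here
    }
}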

You can use R8 in Compiler Explorer to understand how minification affects your app, and to experiment with different ways to configure R8.

At Google our engineers use R8 in Compiler Explorer to study how minification works on small samples. The authoritative tool for studying how a real app compiles is the APK Analyzer in Android Studio, as optimization is a whole-program problem and a snippet might not capture every nuance. But iterating on release builds of a real app is slow, so studying sample code in Compiler Explorer helps our engineers quickly learn and iterate.

Google engineers build very large apps that are used by billions of people on different devices, so they care deeply about these kinds of optimizations and strive to make the most of optimizing tools. But because those apps are so large, changing the configuration and rebuilding takes a very long time. Our engineers can now use Compiler Explorer to experiment with minification under different configurations and see results in seconds, not minutes.

You may wonder what would happen if we changed our code to rename goToSite. Unfortunately, our build would break, unless we also renamed the reference to that method in the Proguard flags. Fortunately, R8 now natively supports Keep Annotations as an alternative to Proguard flags. We can modify our program to use Keep Annotations:

@UsedByReflection(kind = KeepItemKind.CLASS_AND_METHODS)
public static void goToSite(Context context, String site) {
    ...
}

Here is the complete example. You’ll notice that we removed the proguard.cfg file, and under Libraries we added “R8 keep-annotations”, which is how we’re importing @UsedByReflection.

At Google our engineers prefer annotations over flags. Here we’ve seen one benefit of annotations: keeping the information about the code in one place rather than two makes refactors easier. Another is that the annotations have a self-documenting aspect to them. For instance, if this method were actually kept because it’s called from native code, we would annotate it as @UsedByNative instead.

Baseline profiles and you

Lastly, let’s touch on baseline profiles. So far you saw some demos where we looked at dex code, and others where we looked at ARM64 instructions. If you toggle between the different formats you will notice that the high-level dex bytecode is much more compact than low-level CPU instructions. There is an interesting tradeoff to explore here: whether, and when, to compile bytecode to CPU instructions.

For any program method, the Android Runtime has three compilation options:

      1. Compile the method Just in Time (JIT).

      2. Compile the method Ahead of Time (AOT).

      3. Don’t compile the method at all, instead use a bytecode interpreter.

Running code in an interpreter is an order of magnitude slower, but doesn’t incur the cost of loading the representation of the method as CPU instructions, which, as we’ve seen, is more verbose. This is best used for “cold” code: code that runs only once and is not critical to user interactions.

When ART detects that a method is “hot”, it will be JIT-compiled if it hasn’t already been AOT-compiled. JIT compilation accelerates execution times, but pays the one-time cost of compilation during app runtime. This is where baseline profiles come in. Using baseline profiles, you as the app developer can give ART a hint as to which methods are going to be hot or otherwise worth compiling. ART will use that hint before runtime, compiling the code AOT (usually at install time, or when the device is idle) rather than at runtime. This is why apps that use Baseline Profiles see faster startup times.

With Compiler Explorer we can see Baseline Profiles in action.

Let’s open this example.

screenshot of sample code in Compiler Explorer

The Java source code has two method definitions, factorial and fibonacci. This example is set up with a manual baseline profile, listed in the file profile.prof.txt. You will notice that the profile only references the factorial method. Consequently, the dex2oat output will only show compiled code for factorial, while fibonacci shows in the output with no instructions and a size of 0 bytes.
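The human-readable profile format is a list of method descriptors, each prefixed with flags: H for hot, S for startup, and P for post-startup. Assuming the methods live in a class named Demo in the default package, a profile covering only factorial might look like this single line (the class name and flag choice are illustrative):

HSPLDemo;->factorial(I)I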

In the context of compilation modes, this means that factorial is compiled AOT, while fibonacci will be JIT-compiled or interpreted. This is because a different compiler filter is applied when a profile is provided, as reflected in the dex2oat output, which reads “Compiler filter: speed-profile” (AOT-compile only the code in the profile), where previous examples read “Compiler filter: speed” (AOT-compile everything).

Conclusion

Compiler Explorer is a great tool for understanding what happens after you write your source code but before it can run on a target device. The tool is easy to use, interactive, and shareable. Compiler Explorer is best used with sample code, but it goes through the same procedures as building a real app, so you can see the impact of all steps in the toolchain.

By learning how to use tools like this to discover how the compiler works under the hood, rather than memorizing a bunch of rules of optimization best practices, you can make more informed decisions.

Now that you've seen how to use the Java and Kotlin programming languages and the Android toolchain in Compiler Explorer, you can level up your Android development skills.

Lastly, don't forget that Compiler Explorer is an open source project on GitHub. If there is a feature you'd like to see then it's just a Pull Request away.


Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates.

#WeArePlay | Nkenne: The app teaching African languages and culture

Posted by Robbie McLachlan, Developer Marketing

In our latest film for #WeArePlay, which celebrates the people behind apps and games, we meet Michael and Shalom - a mother and son duo driven by a passion for sharing and teaching African languages. Discover how their app, Nkenne, goes beyond language learning—serving as a powerful tool for preserving cultural heritage and reconnecting people with their African language and culture.



What inspired you to create Nkenne?

Michael: Nkenne which means "of the mother," really came from a personal place. I wanted to learn Igbo, my native language from Nigeria, but there weren’t many resources out there that made it easy or accessible. My mom, Shalom, raised me in the U.S., and while I grew up hearing bits of Igbo, there wasn’t enough time or structure for me to fully learn it. During the pandemic, when everything paused, I realized how much I wanted to connect with my heritage, and that’s when the idea sparked. We realized that not just Igbo, but many African languages were becoming less common, even among those who speak them. So, we saw this as an opportunity to preserve these languages and help others reconnect with their roots.

Nkenne Founders Cafe in Maine, US

You’ve mentioned the goal of preserving African languages. How does Nkenne contribute to their preservation?

Shalom: African languages are considered low-resource because they don't have as much digital content, formal documentation, or readily available learning tools. With Nkenne, we’re helping to change that. We’re not just teaching the languages, we’re documenting them, building lessons, and creating a resource for future generations. Many people in Nigeria, for example, don’t speak their native languages anymore. By creating Nkenne, we’re essentially building a digital library of African languages.

How does Nkenne integrate both language learning and cultural education? Why is it important to teach both?

Michael: Understanding the cultural meaning behind a language makes learning richer. It’s not just vocabulary—it’s about connecting people with the culture behind it. We include blogs, podcasts, and lessons that dive into the traditions and customs tied to the language, so people understand not just the words, but the history and meaning behind them.

Shalom: Yes, learning a language without the cultural context leaves gaps. For instance, in Nigeria, using your left hand to hand someone an item is considered rude; we teach these cultural nuances in the app to help users truly grasp the culture.

The Nkenne app on device, showing available languages

What’s next for Nkenne?

Michael: We're focused on expanding our language offerings to 30 by the end of 2025, including more African languages and Creole dialects from around the world. We're also working on enhancing our AI capabilities for language translation.

Shalom: We’re also deepening the community experience, adding more social features where users can connect, share, and practice together. It’s about building not just a language-learning platform, but a space where people from the diaspora and beyond can truly connect with their heritage.


Discover more global #WeArePlay stories and share your favorites.




Developer Preview: Desktop windowing on Android Tablets

Posted by Francesco Romano – Developer Relations Engineer on Android, and Fahd Imtiaz – Product Manager, Android Developer

To empower tablet users to get more done, we're enhancing freeform windowing, allowing them to run multiple apps simultaneously and resize windows for optimal multitasking. Today, we're excited to share that desktop windowing on Android tablets is available in developer preview.

For app developers, the concept of Android apps running in freeform windows is not new: solutions like Samsung DeX and ChromeOS already offer it. Updating your apps to support adaptive layouts, more robust multitasking, and adaptive inputs will ensure your apps work well on large screens across the Android ecosystem.

Let’s explore how to optimize your apps for desktop windowing and deliver the optimal experience to users.

What is desktop windowing?

Desktop windowing allows users to run multiple apps simultaneously and resize app windows, offering a more flexible, desktop-like experience. Along with a refreshed System UI and new APIs, this allows users to be even more productive on tablets.

In Figure 1, you can see the anatomy of the screen with desktop windowing enabled. Things to make note of:

    • Users can run multiple apps side by side, simultaneously
    • The taskbar is fixed and shows running apps; users can pin apps for quick access
    • A new header bar with window controls appears at the top of each window, and apps can customize it
Desktop windowing on a Pixel Tablet
Figure 1: Desktop windowing on a Pixel Tablet.
Note: Images are examples and subject to change

How can users invoke desktop windowing?

By default, apps open in full screen on Android tablets. To run an app as a desktop window on Pixel Tablet, press and hold the window handle at the top center of the screen and drag it within the UI, as seen in Figure 2.

Once you are in the desktop space, all future apps will be launched as desktop windows as well.

A moving image demonstrating what completing the action 'press, hold, and drag the window handle to enter desktop windowing' looks like.
Figure 2. Press, hold, and drag the window handle to enter desktop windowing.
Note: Images are examples and subject to change

You can also invoke desktop windowing from the menu that appears below the window handle when you tap or click on it, or with the keyboard shortcut meta key (Windows, Command, or Search) + Ctrl + Down.

You can exit desktop windowing and display an app as full screen by closing all active windows or by grabbing the window handle at the top of the window and dragging the app to the top of the screen. You can also use the meta + H keyboard shortcut to run apps as full screen again.

To return to the desktop, move a full screen app to the desktop space by using the methods mentioned above, or simply tap on the desktop space tile in the Recents screen.

What does this mean for app developers?

Desktop windowing on Android tablets creates new opportunities for your apps, particularly around productivity and multitasking. The ability to resize and reposition multiple app windows lets users easily compare documents, reference information while composing emails, and multitask efficiently.

By optimizing for desktop windowing, you can deliver unique user experiences to match the growing demand for tablet-based productivity. At the same time, you'll enhance the overall user experience on tablets, making your apps more versatile and adaptable to different scenarios.

If your app already meets the Tier 2 (Large Screens optimized) quality bar in the Large screen app quality guidelines, then there is minimal additional optimization required! If your app has not been optimized for large screens yet, updating it according to the Large screen app quality guidelines becomes even more crucial in the context of desktop windowing. Let’s see why:

    • Freeform resizing enables users to resize apps to their preference for maximized productivity. Considering this, developers should note:
        • Apps with locked orientation are freely resizable. That means even if an activity is locked to portrait orientation, users can still resize the app to a landscape window. In a future update, apps declared as non-resizable will have their UI scaled while keeping the same aspect ratio.
        • Adaptive layouts: By adapting your UI, apps can gracefully handle a wide range of window sizes, from compact to expanded layouts. In desktop windowing, apps can be resized down to a minimum size of 386dp x 352dp, so leverage window size classes to adjust your app's layout, content, and interactions to different window dimensions (see the sketch after Figure 3).
        • State management: With freeform resizing, a configuration change happens each time the window is resized, so your app should either handle these configuration changes gracefully or preserve its state when the OS recreates it. As a reminder, users can change the screen density while your app is running, so it's best to ensure that your app can handle screen density configuration changes as well.

        A moving image demonstrating how apps are fully resizable
        Figure 3. Apps with locked orientation are freely resizable.
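As a concrete sketch of the adaptive layouts point above, here is a minimal Compose example using window size classes (assuming the androidx.compose.material3:material3-window-size-class artifact; SinglePaneLayout and TwoPaneLayout are hypothetical placeholders for your own composables):

    import android.app.Activity
    import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
    import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
    import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
    import androidx.compose.runtime.Composable

    @OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
    @Composable
    fun AdaptiveRoot(activity: Activity) {
        // Recomputed on every configuration change, so the layout tracks the
        // window as the user freely resizes it in desktop windowing.
        val sizeClass = calculateWindowSizeClass(activity)
        when (sizeClass.widthSizeClass) {
            WindowWidthSizeClass.Compact -> SinglePaneLayout() // narrow windows
            else -> TwoPaneLayout() // medium and expanded widths
        }
    }

    @Composable fun SinglePaneLayout() { /* compact UI */ }
    @Composable fun TwoPaneLayout() { /* multi-pane UI */ }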

    • Desktop windowing takes productivity on tablets to the next level with multiple apps running simultaneously. Similar to split screen, desktop windowing encourages users to have multiple windows open. Considering this, developers should note:
        • Multitasking support: For enhanced productivity, users can have two or more apps open simultaneously, and they expect to easily share content between apps, so add support for drag and drop gestures. Also, ensure your app continues to function correctly even when not in focus, and if your app uses exclusive resources like the camera or microphone, make sure it handles resource loss gracefully when another app acquires the resource.
        • Multi-instance support: Users can run multiple instances of your app side by side; for example, a document editor may let users start a new document while still referencing documents that are already open. Apps can set the new multi-instance property to declare that System UI should allow the app to be launched as multiple instances (see the manifest sketch after Figure 4). Also note that in desktop windowing, new tasks open in a new window, so double-check the user journey if your app starts multiple tasks.

        A moving image demonstrating how you can start another instance of Chrome by dragging a tab out of the app window.
        Figure 4. Start another instance of Chrome by dragging a tab out of the app window.
        Note: Images are examples and subject to change
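A sketch of declaring the multi-instance property mentioned above in your manifest. The property name android.window.PROPERTY_SUPPORTS_MULTI_INSTANCE_SYSTEM_UI reflects the preview-era API, so check the latest documentation before relying on it:

    <!-- AndroidManifest.xml: allow System UI to offer launching this
         activity as multiple instances in desktop windowing -->
    <activity android:name=".MainActivity" android:exported="true">
        <property
            android:name="android.window.PROPERTY_SUPPORTS_MULTI_INSTANCE_SYSTEM_UI"
            android:value="true" />
    </activity>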

    • With desktop windowing, input methods beyond touch and insets handling become even more important for a seamless user experience.
        • More input methods (keyboard, mouse): Users are more likely to use your app with a variety of input methods like external keyboards, mice, and trackpads. Check that users can interact smoothly with your app using keyboard and mouse peripherals, or through the emulator. You can add support for app shortcuts and publish them using the keyboard shortcuts API, which allows users to view the supported shortcuts through a standardized surface on Android devices (see the first sketch below).
        • Insets handling: When running in desktop windowing, all apps have a header bar, even in immersive mode, so ensure your app's content isn't obscured by it. The new header bar is reported as a caption bar in Compose (androidx.compose.foundation:foundation-layout, WindowInsets.Companion.captionBar) and in Views (android.view.WindowInsets.Type.CAPTION_BAR), and is part of the system bars. API level 35 also introduced a new appearance type to make the header bar transparent, allowing apps to draw custom content inside it (see the second sketch below).
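As a rough illustration of both points, here are two minimal Kotlin sketches. First, publishing shortcuts through the keyboard shortcuts API (Activity.onProvideKeyboardShortcuts, available since API level 24); the "Navigation" group and Ctrl+/ shortcut are illustrative, not from the original post:

    import android.app.Activity
    import android.view.KeyEvent
    import android.view.KeyboardShortcutGroup
    import android.view.KeyboardShortcutInfo
    import android.view.Menu

    class MainActivity : Activity() {
        // Surfaces this app's shortcuts in the system keyboard shortcuts sheet.
        override fun onProvideKeyboardShortcuts(
            data: MutableList<KeyboardShortcutGroup>?,
            menu: Menu?,
            deviceId: Int
        ) {
            data?.add(
                KeyboardShortcutGroup(
                    "Navigation",
                    listOf(
                        KeyboardShortcutInfo("Search", KeyEvent.KEYCODE_SLASH, KeyEvent.META_CTRL_ON)
                    )
                )
            )
        }
    }

And second, a sketch of respecting the caption bar insets in Compose so the header bar never covers your content:

    import androidx.compose.foundation.layout.Box
    import androidx.compose.foundation.layout.WindowInsets
    import androidx.compose.foundation.layout.captionBar
    import androidx.compose.foundation.layout.fillMaxSize
    import androidx.compose.foundation.layout.windowInsetsPadding
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.Modifier

    @Composable
    fun RootContent() {
        // Pad by the caption bar insets reported in desktop windowing so the
        // window's header bar doesn't overlap the app's own UI.
        Box(
            Modifier
                .fillMaxSize()
                .windowInsetsPadding(WindowInsets.captionBar)
        ) {
            // App content goes here.
        }
    }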

Get hands-on!

Today we're announcing a developer preview that gives you an early opportunity to experience and test desktop windowing. You can try it on Pixel Tablet before it's released to AOSP more broadly. To get started, update your Pixel Tablet to the latest Android 15 QPR1 Beta 2 release. If you don't have a Pixel Tablet handy, use the Pixel Tablet emulator in Android Studio Preview and select the Android 15.0 (Google APIs Tablet) target. Once your device is set up, select the Enable freeform windows option in Developer options to explore the capabilities of desktop windowing and see how your app behaves in this new environment.

By optimizing your apps for desktop windowing on Pixel Tablet, you are not only enhancing the app experience on that device but also future-proofing your apps for the broader Android ecosystem, where freeform windowing will become prevalent. We're excited about the windows of opportunity enabled by desktop windowing, and we look forward to seeing how you adapt your apps for an enhanced user experience.

We're committed to improving the desktop windowing experience through future updates. Make sure to test your app and give us feedback. Stay tuned for more developer guides and resources!
