
A Smoother Ride: Android Emulator Stability and Performance Updates

Posted by Neville Sicard-Gregory – Senior Product Manager, Android Studio


Looking for a more stable, reliable, and performant Emulator? Download the latest version of Android Studio or ensure your Emulator is up to date in the SDK Manager.

A split screen shows Kotlin code on the left and the corresponding Android app display on the right in Android Studio. The app displays the Google Play Store, Photos, YouTube, Gmail, and Chrome icons.

We know how critical the stability, reliability, and performance of the Android Emulator is to your everyday work as an Android developer. After listening to valuable feedback about stability, reliability, and performance, the Android Studio team took a step back from large feature work on the Android Emulator for six months and started an initiative called Project Quartz. This initiative was made up of several workstreams aimed at reducing crashes, speeding up startup time, closing out bugs, and setting up better ways to detect and prevent issues in the future.

Improved stability and reliability

A key goal of Project Quartz aimed to reduce Emulator crashes, which can frustrate and block developers, decreasing their productivity. We focused on fixing issues causing backend and UI crashes and freezes, updated the UI framework, updated our hypervisor framework, and our graphics libraries, and eliminated tech debt. This included:

    • Moving to a newer version of Qt, the cross-platform framework for building the graphical user interfaces of the Android Emulator, and making it stable on all platforms (as of version 34.2.13). This was also a required change to ensure things like Google Maps and the location settings UI continued to work in the Android Emulator.
    • Updating gfxstream, the graphics rendering system used in the Android Emulator, to improve our graphics layer.
    • Adding more than 600 end-to-end tests to the existing pytest test suite.

As a result, we have seen 30% fewer crashes in the latest stable version of Android Studio, as reported by developers who have opted in to sharing crash details with us. Along with additional end-to-end testing, this means a more stable, reliable, and higher-quality experience with fewer interruptions while using the Android Emulator to test your apps.

A horizontal bar chart showing reported crash rates for different versions of the Android Emulator

This chart illustrates the reduction in reported crashes across stable versions of the Android Emulator (newer versions are at the top, and shorter is better).

We have also enhanced our opt-in telemetry and logging to better understand and identify the root causes of crashes, and added more testing to our pre-launch release process to improve our ability to detect potential issues prior to release.

Improved release quality

We also implemented several measures to improve release quality, including increasing the number and frequency of end-to-end, automated, and integration tests on macOS, Microsoft Windows, and Linux. Now, more than 1,100 end-to-end tests are run in postsubmit on all supported operating system platforms, up from 500 tests previously. These tests cover various scenarios, including different Android Emulator snapshot configurations, diverse graphics cards, networking and Bluetooth functionality, and performance benchmarks between Android Emulator system image versions.

This comprehensive testing ensures these critical components function correctly and translates to a more reliable testing environment for developers. As a result, Android app developers can accurately assess their app's behavior in a wider range of scenarios.

Reduced open issues and bugs

It was also important for us to reduce the number of open issues and bugs logged for the Android Emulator by addressing their root causes and covering more of the use cases you run into in production. During Project Quartz, we reduced our open issues by 43.5%, from 4,605 to 2,605. Of these, 17% were actively fixed during Quartz; the remainder were closed as obsolete, as previously fixed (for example, in an earlier version of the Android Emulator), or as duplicates of other issues.

Next Steps

While these improvements are exciting, this is not the end. We will continue to build on the quality improvements from Project Quartz to further enhance the Android Emulator experience for Android app developers.

As always, your feedback has been and continues to be invaluable in helping us make the Android Emulator and Android Studio more robust and effective for your development needs. Sharing your metrics and crash dumps is crucial in helping us understand what specifically causes your crashes so we can prioritize fixes.

You can opt in by going to Settings, then Appearance and Behavior, then System Settings, then Data Sharing, and selecting the checkbox marked "Send usage statistics to Google."

The Android Studio settings menu displays the Data Sharing settings page, where 'Send usage statistics to Google' option is selected.

Be sure to download the latest version of the Android Emulator alongside Android Studio to experience these improvements.

As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Together, we can create incredible Android experiences for users worldwide!

Gemini in Android Studio: Code Completion Gains Powerful Model Improvements

Posted by Sandhya Mohan – Product Manager, Android Studio and Sarmad Hashmi – Software Engineer, Labs

The Android team believes AI has the potential to revolutionize coding, drive unprecedented innovation in software development, and supercharge developer productivity. AI code completion is a key part of this effort within Gemini in Android Studio.

Since launching in May 2024, we've been hard at work improving this feature to provide the best possible experience for all Android developers. In this post, we want to take you “under the hood” on how we achieved a 40% relative increase in acceptance rate since release, and share some of our excitement for how we have seen Android developers use this feature. We hope you'll give it a try and let us know what you think.


An AI coding companion for every developer

Our vision for Gemini in Android Studio is to empower developers to build high quality Android apps — making it easy for developers to quickly write correct code aligned with Android's best practices. Launched last year, the first version of Studio Bot provided a chat experience where developers could access Android-specific guidance, powered by Google's latest AI models. Developers are able to ask Gemini in Android Studio to provide developer guidance, summarize technical documentation, and critique their Android code. But in all these cases the feedback is reactive, responding to a user's question.

AI code completion takes these capabilities a step further by providing real-time assistance as you work, thinking ahead and suggesting the next few lines of code that you are likely to type based on context from the surrounding file and what you just typed. You can think of AI code completion as a partner in your work — a coding companion waiting to offer guidance when you need it.

This feature is particularly well suited for tasks like defining business logic, creating database schemas, making network requests, or even writing tests — tasks that are often time-consuming and distract from building the core experience for your app. Many developers have told us how much they enjoy the speed AI completions brings to their app development workflow.

A moving image demonstrating AI autocomplete in Android Studio

Bringing more intelligent code completion to Android development

While we are excited to see how AI Code Completions have improved developers’ workflows, we know there's still more we can do to improve developer productivity. Development of Gemini in Android Studio is an ongoing, large-scale collaborative effort by many teams across Google. Earlier this year, we switched to Gemini 1.5 models and saw a significant improvement in the quality of code completions, resulting in a 2x increase in our developer productivity metrics, including overall acceptance rate for suggestions.

Once we started running A/B experiments to improve AI code completion, we identified several opportunities around model quality, context, and heuristics. This overall effort led to a 40% relative increase in acceptance rate — how often users accept the AI's proposed code suggestions — since we launched. Since then, we've been exploring several improvements like:

    • Retrieval augmentation: With your opt-in consent, we use the files and dependencies most relevant to your current coding context to enhance the accuracy of suggestions. This is just the first step and we're continuing to experiment with adding even more context from the IDE as part of each request.
    • Filtering out low-confidence completions: Prioritize showing high-quality suggestions where they are most relevant, and therefore most likely to be accepted. We do this by combining the probabilities returned by the model with a classifier trained to identify high-quality completions based on developer feedback (a sketch of this idea follows this list).
    • Smarter post-processing: The LLM's output for AI Code Completion is fundamentally different from the output users expect in a chat session. Responses need to be tightly scoped in order to quickly output useful code, without surrounding expository text. We apply additional heuristics on the model output to ensure responses are concise and accurate, as well as making sure that the generated code is valid within the context of the user's codebase.
    • Improved models: We use opt-in feedback from Android Studio users, such as noting when a code suggestion is accepted or rejected, to adapt the code completion model to their coding style and preferences over time. We regularly ship new models with higher quality data based on your feedback.
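
To make the confidence-filtering idea above concrete, here is a purely illustrative Kotlin sketch; the names and threshold values are hypothetical, and this is not Android Studio's actual implementation:

    // Purely illustrative sketch of the filtering heuristic described above;
    // not Android Studio's actual implementation.
    data class Completion(
        val code: String,
        val modelLogProb: Double,     // average log-probability from the model
        val classifierScore: Double   // quality score from the feedback-trained classifier
    )

    fun shouldShow(
        completion: Completion,
        logProbThreshold: Double = -1.5,   // hypothetical tuning values
        qualityThreshold: Double = 0.7
    ): Boolean =
        // Suppress the suggestion unless both signals clear their thresholds.
        completion.modelLogProb >= logProbThreshold &&
            completion.classifierScore >= qualityThreshold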

We are also exploring metrics beyond acceptance rate to better measure AI impact on developer velocity, such as the percentage of total code written by AI.


Try it out!

We are rolling out these successful experiments and others as quickly as possible.

If you haven't tried AI code completions yet, you can enable this feature by clicking the Gemini spark button in your editor window and signing in to your Google account.

A screenshot of Android Studio with a pop-up notification about the Gemini AI coding companion. The notification explains that Gemini is a free feature in preview and requires a Google account login to use.
Figure 1. Launching Gemini in Android Studio for the first time

After doing so, navigate to Settings > Tools > Gemini and select "Enable AI-based inline code completions".

A screenshot of the settings menu within Android Studio, with the 'Gemini' section expanded showing options related to the AI coding companion, including privacy and context awareness.
Figure 2. Enabling "AI-based inline code completions"

As always, Google is committed to the responsible use of AI. Android Studio won't send any of your source code to servers without your consent — which means you'll need to opt-in to enable Gemini's developer assistance features in Android Studio. You can read more on Gemini in Android Studio's commitment to privacy.

Try enabling AI Code Completions in your project and tell us what you think on social media with #AndroidGeminiEra. We're excited to see how these enhancements help you build amazing apps!


This blog post is part of our series: AI on Android Spotlight Week, where we provide resources — blog posts, videos, sample code, and more — all designed to explore the latest in AI and its potential for Android app development.

Android Studio Koala Feature Drop is Stable!

Posted by Sandhya Mohan, Product Manager, Android Studio

Today, we are thrilled to announce the stable release of Android Studio Koala Feature Drop (2024.1.2)!🐨

Earlier this year, we announced that every Android Studio animal version will have two releases: a platform release and a feature drop release. These more frequent releases get important IntelliJ updates to you faster, while we focus on quality and polish for Android-specific features. The Koala platform release was launched in June. Today, we'll walk through the feature drop release.

Get access to cutting-edge features like new devices in device streaming, Compose previews for Glance widgets, USB cable speed detection, support for Android 15 in the Android SDK Upgrade Assistant, and much more. All of these new features are designed to accelerate your Android app development workflow in building next-generation and high-quality apps.

Read on to learn more about all the updates, quality improvements, and new features across your key workflows in Android Studio Koala Feature Drop, and download the latest stable version today to try them out!


Develop

Android Device Streaming: more devices and improved sign-up

Android Device Streaming now includes the following devices, in addition to the portfolio of 20+ device models already available:

    • Google Pixel 9
    • Google Pixel 9 Pro
    • Google Pixel 9 Pro XL
    • Google Pixel 9 Pro Fold
    • Google Pixel 8a
    • Samsung Galaxy Fold5
    • Samsung Galaxy S23 Ultra

Additionally, if you're new to Firebase, Android Studio automatically creates and sets up a no-cost Firebase project for you when you sign in to Android Studio to use Device Streaming. As a result, you can start streaming the device you need much faster. Learn more about Android Device Streaming quotas, including the promotional quota available for a limited time to Firebase Blaze plan projects.

As we announced at Google I/O 2024, we’re further expanding the selection of devices available by working with partners, such as Samsung, Xiaomi, and OnePlus, to allow you to connect to devices hosted in their device labs. To learn more and enroll in the upcoming Early Access Preview, see the official blog post.

a screengrab showing device streaming in Android Studio
Device Streaming

Target Android 15 using Android SDK Upgrade Assistant

The Android SDK Upgrade Assistant provides a step-by-step wizard to help you upgrade your targetSdkVersion. It also pulls documentation directly into Android Studio, saving you time and effort. Android Studio Koala Feature Drop adds support for upgrading projects to Android 15 (API Level 35).

a screengrab showing Android SDK Upgrade Assistant in Android Studio
Android SDK Upgrade Assistant
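
For context, the change the assistant ultimately guides you toward is a one-line bump in your module's build file. A sketch in the Gradle Kotlin DSL (the assistant also walks you through the related behavior changes):

    android {
        defaultConfig {
            // ...
            targetSdk = 35  // Android 15 (API level 35)
        }
    }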

Updated sign-in flow to Google services

It's now easier to sign in to multiple Google services with one authentication step. Whether you use Gemini in Android Studio, Firebase for Android Device Streaming, Crashlytics in App Quality Insights, Google Play for Android Vitals reports, or some combination of these services, the new sign-in flow makes it easier to get up and running. With granular permissions scoping, you'll always be in control of which services have access to your account. To get started, click the profile avatar in the top right corner and sign in with your developer account.

a moving image showing the updated sign-in wizard in Android Studio
Updated sign-in wizard

Wear OS Tile Preview Panel

You can now view snapshots of your Wear OS app's tiles by including version 1.4 of the Jetpack Tiles library. This preview panel is particularly useful if your tile's appearance changes based on certain conditions, such as content that depends on the device's display size, or a sports event reaching halftime.

Wear OS Tile Preview Panel in Android Studio
Wear OS Tile Preview Panel
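
As a rough sketch of how a tile preview is declared, assuming the tile preview tooling APIs that accompany Jetpack Tiles 1.4 (buildHalftimeTile is a hypothetical helper that builds your tile layout):

    import androidx.wear.tiles.tooling.preview.Preview
    import androidx.wear.tiles.tooling.preview.TilePreviewData
    import androidx.wear.tooling.preview.devices.WearDevices

    @Preview(device = WearDevices.SMALL_ROUND)
    @Preview(device = WearDevices.LARGE_ROUND)
    fun halftimeTilePreview() = TilePreviewData { request ->
        // Return the same Tile your TileService would build for this request.
        buildHalftimeTile(request)  // hypothetical: builds your tile layout
    }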

Compose Glance widget previews

Android Studio Koala Feature Drop makes it easy to preview your Jetpack Compose Glance widgets directly within the IDE. You can even use multipreviews to preview at standard widget sizes and their designed breakpoints (sample code). Catch potential problems and fine-tune your widget's appearance early in the development process or while debugging. Learn more.

Previews for Compose Glance widgets in Android Studio
Previews for Compose Glance widgets
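
A minimal sketch of a Glance widget preview, assuming the Glance preview tooling APIs; WeatherWidgetContent and the sizes are illustrative stand-ins for your own widget UI and breakpoints:

    import androidx.compose.runtime.Composable
    import androidx.glance.preview.ExperimentalGlancePreviewApi
    import androidx.glance.preview.Preview

    @OptIn(ExperimentalGlancePreviewApi::class)
    @Preview(widthDp = 172, heightDp = 244)   // a small widget cell
    @Preview(widthDp = 344, heightDp = 244)   // a wider breakpoint
    @Composable
    fun WeatherWidgetPreview() {
        // Render the widget's Glance content directly, as the widget host would.
        WeatherWidgetContent()  // hypothetical composable containing your Glance UI
    }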

Live Edit (Compose)

Live Edit is now enabled in manual mode by default. It has increased stability and more robust change detection, including support for import statements. Note that starting with Android Studio Koala Feature Drop, the default shortcut to push your changes in manual mode has been updated to Control+' (Command+' on macOS). You can customize the shortcut on the Keymap settings page.

a moving demonstration of making an update with Live Edit in manual mode in Android Studio
Making an update with Live Edit in manual mode

Debug

USB Cable Speed Detection

Android Studio now detects when it's possible to connect your Android device with a faster USB cable and suggests an upgrade that maximizes your device capabilities. Using an appropriate USB cable optimizes app installation time and minimizes latency when using tools such as the Android Studio debugger. USB cable speed detection is currently available for macOS and Linux. Learn more.

While most readily available USB cables are still the older USB 2.0 standard, the majority of modern devices support the significantly faster USB 3.0. Upgrading to a USB 3.0 cable can potentially increase your data transfer speeds up to 10x.

USB cable speed detection warning in Android Studio
USB cable speed detection

Device UI Shortcuts

To help you build and debug your UI, we've introduced a Device UI shortcuts button in the Running Devices tool window in Android Studio. Use the shortcuts to view the effect of common UI settings such as dark theme, font size, screen size, app language, and TalkBack. You can use the shortcuts with emulators, mirrored physical devices, and devices streamed from Firebase Test Lab. Device UI shortcuts are available for devices running API level 33 or higher. Learn more.

Device UI Setting Shortcuts in Running Device Window in Android Studio
Device UI Setting Shortcuts in Running Device Window

Pixel 8a in Emulator

The Android Emulator (35.1+) now supports the Pixel 8a in the stable channel, enabling you to test your apps on more Pixel devices without needing a physical device. Find the new Pixel 8a in the phone category when you create a new virtual device. Additionally, you can find Pixel 9 devices in the canary release channel of Android Studio.

Pixel 8a in Emulator in Android Studio
Pixel 8a in Emulator

Optimize

Faster and improved Profiler with a task-centric approach

Popular performance optimization tasks like capturing a system trace with profileable apps now start up to 60% faster*. The Profiler's task-centric redesign also makes it easier to start the task you're interested in, whether it's profiling your app's CPU, memory, or power usage. For example, you can start a system trace task to profile and improve your app's startup time right from the UI as soon as you open the Profiler.

The redesigned task-centric Profiler in Android Studio
Faster and improved Profiler with a task-centric approach
* Based on internal data, as tested in April 2024

Quality improvements

Beyond new features, we also continue to improve the overall quality and stability of Android Studio. In fact, the Android Studio team addressed over 520 bugs during the Koala Feature Drop development cycle.

IntelliJ platform update

Android Studio Koala Feature Drop (2024.1.2) includes the IntelliJ 2024.1 platform release, which has many new features such as comprehensive support for the latest Java** 22 features, an improved terminal, and sticky lines in the editor to simplify working with large files and exploring new codebases.

    • The improved terminal features a fresh new look, with commands separated into distinct blocks, along with an expanded set of features, such as smooth navigation between blocks, command completion, and easy access to the command history. Learn more.
    • Sticky lines in the editor keep key structural elements, like the beginnings of classes or methods, pinned to the top of the editor as you scroll, and let you promptly navigate through the code by clicking a pinned line. Learn more.
    • Basic IDE functionalities like code highlighting and completion now work for Java and Kotlin during project indexing, which should enhance your startup experience.

See the full release notes here.

Summary

To recap, Android Studio Koala Feature Drop includes the following enhancements and features:

Develop

    • Android Device Streaming: more devices and improved sign-up
    • Target Android 15 using Android SDK Upgrade Assistant
    • Updated sign-in flow to Google services
    • Wear OS Tile Preview Panel
    • Compose Glance widget previews
    • Live Edit (Compose)

Debug

    • USB Cable Speed Detection
    • Device UI Shortcuts
    • Pixel 8a in Emulator

Optimize

    • New Task UX for Profilers

Quality Improvements

    • 520+ bugs addressed

IntelliJ Platform Update

    • Improved terminal
    • Sticky lines in the editor to simplify working with large codebases
    • Enhanced startup experience

Getting Started

Ready for next-level Android development? Download Android Studio Koala Feature Drop and unlock these cutting-edge features today! As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!


**Java is a trademark or registered trademark of Oracle and/or its affiliates.

Create exceptional experiences on Pixel’s new watches and foldables

Posted by Maru Ahues Bouza – Product Management Director

Pixel just announced the latest devices coming to the Android ecosystem, including Pixel 9 Pro Fold and Pixel Watch 3. These devices bring innovation to the foldable and wearable spaces, with larger screen sizes and exceptional performance.

Not only are these devices exciting for consumers, but they are also important for developers to consider when building their apps. To prepare you for the new Pixel devices and all the innovations in large screens and wearables, we’re diving into everything you need to know about building adaptive UIs, creating great Wear OS 5 experiences, and enhancing your app for larger watch displays.

Building for Pixel 9 Pro Fold with Adaptive UIs

Pixel unveiled their new foldable, Pixel 9 Pro Fold with Gemini, at Made By Google. This device has the largest inner display on a phone1 and is 80% brighter than last year’s Pixel Fold. When it’s folded, it’s just like a regular phone, with a 6.3-inch front display. Users have options for how to engage and multitask based on the screen they are using and the folded state of their device - meaning there are multiple different experiences that developers should be considering when building their apps.

the Pixel 9 Pro Fold

Developers can help their app look great across the four different postures – inner, front, tabletop, and tent – available on Pixel 9 Pro Fold by making their app adaptive. By dynamically adjusting their layouts—swapping components and showing or hiding content based on the available window size rather than simply stretching UI elements—adaptive apps take full advantage of the available window size to provide a great user experience.

When building an adaptive app, our core guidance remains the same – use WindowSizeClasses to define specific breakpoints for your UI. Window size classes enable you to change your app layout as the display space available to your app changes, for example, when a device folds or unfolds, the device orientation changes, or the app window is resized in multi‑window mode.
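
A minimal sketch of that guidance using Compose Material 3 window size classes (SingleColumnLayout and TwoColumnLayout are hypothetical composables standing in for your own layouts):

    import android.app.Activity
    import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
    import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
    import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
    import androidx.compose.runtime.Composable

    @OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
    @Composable
    fun AdaptiveRoot(activity: Activity) {
        // Recomputed whenever the device folds/unfolds or the window is resized.
        when (calculateWindowSizeClass(activity).widthSizeClass) {
            WindowWidthSizeClass.Compact -> SingleColumnLayout()  // folded / front display
            WindowWidthSizeClass.Medium -> SingleColumnLayout()
            else -> TwoColumnLayout()                             // unfolded inner display
        }
    }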

Announced at Google I/O 2024, we’ve introduced APIs that, under the hood, take advantage of these WindowSizeClasses for you. These APIs provide a new way to implement common adaptive layouts in Compose. The three components in the library – NavigationSuiteScaffold, ListDetailPaneScaffold, and SupportingPaneScaffold – are designed to help you build an adaptive app with UI that looks great across window sizes.
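
For instance, a sketch of NavigationSuiteScaffold, which swaps the navigation UI between a bottom bar, rail, or drawer based on the window size (destinations and DestinationContent are hypothetical stand-ins for your own navigation model):

    import androidx.compose.material3.Icon
    import androidx.compose.material3.Text
    import androidx.compose.material3.adaptive.navigationsuite.NavigationSuiteScaffold
    import androidx.compose.runtime.*

    @Composable
    fun AppRoot() {
        var selected by remember { mutableStateOf(0) }
        NavigationSuiteScaffold(
            navigationSuiteItems = {
                destinations.forEachIndexed { index, destination ->
                    item(
                        selected = index == selected,
                        onClick = { selected = index },
                        icon = { Icon(destination.icon, contentDescription = destination.label) },
                        label = { Text(destination.label) }
                    )
                }
            }
        ) {
            // Main content; the navigation component adapts to the window size.
            DestinationContent(destinations[selected])  // hypothetical
        }
    }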

Finally, developers who want to build a truly exceptional experience for foldables should consider supporting tabletop mode, where the phone sits on a surface, the hinge is in a horizontal position, and the foldable screen is half opened. You can use the Jetpack WindowManager library, leveraging FoldingFeature.State and FoldingFeature.Orientation to determine whether the device is in tabletop mode. Once you know the posture the device is in, update your app layout accordingly. For example, media apps that adapt to tabletop mode typically show audio information or a video above the fold and include controls and supplementary content just below the fold for a hands-free viewing or listening experience.
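
A minimal sketch of that detection with Jetpack WindowManager, assuming it runs inside an Activity named MainActivity:

    import androidx.lifecycle.lifecycleScope
    import androidx.window.layout.FoldingFeature
    import androidx.window.layout.WindowInfoTracker
    import kotlinx.coroutines.launch

    // Inside MainActivity.onCreate:
    lifecycleScope.launch {
        WindowInfoTracker.getOrCreate(this@MainActivity)
            .windowLayoutInfo(this@MainActivity)
            .collect { layoutInfo ->
                val fold = layoutInfo.displayFeatures
                    .filterIsInstance<FoldingFeature>()
                    .firstOrNull()
                val isTabletop = fold != null &&
                    fold.state == FoldingFeature.State.HALF_OPENED &&
                    fold.orientation == FoldingFeature.Orientation.HORIZONTAL
                // When isTabletop is true, show video above the fold and
                // move controls below it, for example.
            }
    }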

Screenshot of gameplay from Asphalt Legends Unite (Gameloft)
Asphalt Legends Unite (Gameloft)

Even games are making use of foldable features: from racing games like Asphalt Legends Unite and Disney Speedstorm to action games like Modern Combat 5 and Dungeon Hunter 5, Gameloft optimized their games so that you can play not just in full-screen but also in split-view tabletop mode which provides a handheld game console experience. With helpful features like detailed game maps and enhanced controls for more immersive gameplay, you’ll be drifting around corners, leveling up your character, and beating the bad guys in record time!

Preparing for Pixel Watch 3: Wear OS 5 and Larger Displays

Pixel Watch 3 is the latest smartwatch engineered by Google, designed for performance inside and out. With this new device, there are also new considerations for developers. Pixel Watch 3 rings in the stable release of Wear OS 5, the latest platform version, and has the largest display ever from the Pixel Watch series - meaning developers should think about the updates introduced in Wear OS 5 and how their UI will look on varied display sizes.

the Pixel Watch 3

Wear OS 5 is based on Android 14, so developers should take note of the system behavior changes specific to Android 14. The system includes support for the privacy dashboard, giving users a centralized view of the data usage for all apps running on Wear OS 5. For apps that have updated their target SDK version to Android 14, there are a few additional changes. For example, the system moves always-on apps to the background after they're visible in ambient mode for a certain period of time. Additionally, watches that launch with Wear OS 5 or higher will only support watch faces that use the Watch Face Format, so we recommend that developers migrate to using the format. You can see all the behavior changes you should prepare your app for.

Another important consideration for developers is that the Pixel Watch 3 is available in two sizes, 41 mm and 45 mm. Both sizes offer more display space than ever2, with 16% smaller bezels giving the 41 mm watch 10% more screen area and the 45 mm watch 40% more than the Pixel Watch 2! As a developer, review and apply the principles on building adaptive layouts to give users an optimal experience. We created tools and guidance on how to develop apps and tiles for different screen sizes. This guidance will help you build responsive layouts on the wrist using the latest Jetpack libraries, and make use of Android Studio's preview support and screenshot testing to confirm that your app works well across all screens.

Learn more about all these exciting updates in the Building for the future of Wear OS technical session, shared during this year’s Google I/O event.

Learn more about how to get started preparing your app

With these new announcements from Pixel, it’s a great time to make sure your app looks great on all the screens your users love most. Get your app ready for large screens by building adaptive layouts and learn more about all things Wear OS on our Wear OS developer site. For game developers, be sure to read our large screen game optimization guide and check the sample project to learn the best practices for leveling up your game for large screen and foldable devices.

For even more of the latest from Android, tune into the Android Show on August 27th. We’ll talk about Wear OS, adaptive apps, Jetpack Compose, and more!


1 Among foldable phones in the United States. Based on inner display. 
2 Compared with Pixel Watch 2.

Android Device Streaming: Announcing Early Access to Samsung, Xiaomi, and Oppo Device Labs

Posted by Grant Yang (Product Manager for OmniLab) & Adarsh Fernando (Product Manager for Android Studio)

At Google I/O 2024, we announced Android Device Streaming in open beta, which allows you as a developer to more easily access and interactively test your app on real physical devices located in Google data centers and streamed directly to Android Studio. This enables teams in any location to access a variety of devices across top Android device manufacturers, including the latest family of Google Pixel and Samsung Galaxy series devices.

We’re significantly expanding the diversity of devices available in this service by working closely with Android device manufacturers (also known as original equipment manufacturers, or OEMs)—such as Samsung, Xiaomi, and Oppo—to connect their device labs to Android Device Streaming, so you can access even more physical devices directly in your workflow in Android Studio. This integration offers the same performance, stability, and security benefits you get with devices provided by Google. Keep reading for more details, as well as how you can sign up for the early access and take advantage of these new devices.

screen grab of Device Streaming in Android Studio
Access devices hosted by Google and other OEMs, such as Samsung, with Android Device Streaming, powered by Firebase

Sign up for Early Access to OEM Lab Devices

If you haven’t already done so, follow the steps to get up and running with the beta release of Android Device Streaming, which will give you access to all the Google-hosted devices to test with directly from Android Studio. Later this year, we will start an Early Access Program that allows participants to use Android Device Streaming to connect to devices hosted by our OEM partners. This expands the catalog of test devices available to you with Android Device Streaming.

To kick off this program, we’re first partnering with Samsung, Xiaomi, and Oppo. These labs will be situated in various locations around the world, and you will be able to use the Firebase project you’re already using with Android Device Streaming in Android Studio to access them. Your Firebase project’s administrator will have control to enable or disable individual OEM labs.

If you’d like to participate in the EAP for accessing OEM device labs, fill out this form, and we will let you know if you and your team have been accepted. During the EAP, OEM-provided devices will not be billed or counted against your promotional monthly quota.

We look forward to sharing more details during Google’s I/O Connect Beijing in early August 2024.

In the meantime, we encourage you to try out the devices currently available in Android Device Streaming. Currently, the Android Device Streaming program is in a promotional period, with a higher amount of monthly minutes offered at no cost, which will last until approximately February 2025.

OEM Labs powered by OmniLab

OmniLab logo

Some of you may wonder how these devices are connected to Android Studio. Under the hood, Android Device Streaming is built on top of OmniLab, Google's device platform. OmniLab, the same platform that powers all of Google's internal device labs, also powers the OEM labs, which it enabled by open sourcing its Android Test Station (ATS) framework.

OmniLab provides a framework to ensure that your Android Device Streaming session is secure and performant. You’re able to deploy, debug, and interact with your app on these remote devices through a direct ADB over SSL connection, all without having to leave the IDE. And when the session ends, the device data is fully wiped and factory reset before it’s made available to another developer.



Be part of our vibrant community on LinkedIn, Medium, YouTube, or X and share your experiences using Android Device Streaming in Android Studio.

Making security easy: How we are helping you fix vulnerabilities in your Android apps

Posted by Bessie Jiang – Software Engineer and Chris Schneider – Security Engineer

Contributors: Maciej Szawłowski – Security Engineer, Hannah Barnes – Technical Program Manager, Dirk Göhmann – Technical Writer, Patrick Mutchler – Software Engineer

Security is tricky, but vital to protecting your users and their data. We’re here to help you build secure Android apps with fewer vulnerabilities for an even safer Android ecosystem for everybody.

Vulnerability Detection – How it Works

Google currently scans every app on Google Play for dozens of common security vulnerability classes. If we spot something, we let you know so you can fix the problem. Imagine a pentesting team hunting for bugs in each of the millions of apps published on Play, rooting out issues like bad TLS configurations that expose network traffic or directory traversal vulnerabilities that let adversaries read from or write to an app’s private files.

We are committed to keeping our joint users protected. In serious cases, if a security vulnerability doesn't get fixed, Google may remove the app from Google Play to keep users safe.

Android Application Security Knowledge Base

We know that it isn’t always enough to just tell you about a vulnerability in your app; you need to know how to fix the issue and how to prevent similar issues from cropping up in the future. To this end, we are introducing our security guidance and recommendations under a new program: the Android Application Security Knowledge Base (AAKB).

AAKB aims to establish guidelines for writing secure Android software. It is a repository of common code issues, with remediation examples and explanations for implementing specific code patterns. The knowledge base is organic in nature: new issues are automatically identified for review with experts across the industry, ensuring broad but well-tested approaches and guidance.

Data collected from your engagement with AAKB is used to improve guidance, and to identify how to make the Android ecosystem more secure by default.

How Does it Work?

AAKB establishes clear, vetted guidance with code examples. Guidance is aligned to OWASP MASVS standards, and content is vetted in partnership with technical peers, such as Microsoft. This helps ensure the content is not biased to one party and represents state-of-the-art standards. This also provides an educational place for you to proactively remediate security risks in your applications using industry-wide standards, with direct access to knowledge from subject-matter experts.

The guidance is available through two mechanisms:

The AAKB homepage lists each article independently, aligned to the relevant OWASP MASVS category (e.g. MASVS-STORAGE). Anyone can view or provide direct feedback to this content. Security is an ever-changing field, and being able to update guidance on the fly means software development lifecycles can be updated dynamically with as little friction as possible.

Android Studio triggers remediation guidance from lint checks by pointing directly to AAKB articles. You can fix problems as you're building the app and before they ever reach users.

There are two methods to view remediation guidance with Android Studio:

Existing security lint checks within Android Studio Giraffe+ have had their descriptions updated to include a link to the relevant AAKB article, allowing you to get more context as to why a particular code snippet might be potentially "at-risk".

Example of a finding with a link to a relevant AAKB article in the Android Studio IDE
Figure 1. Example of a finding with a link to a relevant AAKB article in the Android Studio IDE

Meanwhile, the open-source Android Security lint checks give you access to our most recent guidance and experiments to further protect your mobile applications and get ahead of future security concerns.

Add the open source checks to your project by following the README. These lint checks all contain click-to-fix functionality that make it easy for you to write safer code with minimal effort, as well as links to the relevant AAKB articles like the built-in IDE checks.
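
Wiring the checks in happens through Gradle's standard lintChecks configuration; a sketch in the Kotlin DSL (the artifact coordinates below are placeholders; copy the real ones from the project's README):

    dependencies {
        // Placeholder coordinates; use the ones published in the README.
        lintChecks("com.example.security:android-security-lints:1.0.0")
    }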

Example of an open-source security lint finding, highlighting a vulnerable code snippet and click-to-fix solution
Figure 2. Example of an open-source security lint finding, highlighting a vulnerable code snippet and click-to-fix solution

All built-in IDE lint checks can be found in this list, with many under the Security category containing links to relevant AAKB articles. We would love to hear your feedback and suggestions for new lint checks and other improvements to the open-source lint library.

3 fun experiments to try for your next Android app, using Google AI Studio

Posted by Paris Hsu – Product Manager, Android Studio

We shared an exciting live demo from the Developer Keynote at Google I/O 2024 where Gemini transformed a wireframe sketch of an app's UI into Jetpack Compose code, directly within Android Studio. While we're still refining this feature to make sure you get a great experience inside of Android Studio, it's built on top of foundational Gemini capabilities which you can experiment with today in Google AI Studio.

Specifically, we'll delve into:

    • Turning designs into UI code: Convert a simple image of your app's UI into working code.
    • Smart UI fixes with Gemini: Receive suggestions on how to improve or fix your UI.
    • Integrating Gemini prompts in your app: Simplify complex tasks and streamline user experiences with tailored prompts.

Note: Google AI Studio offers various general-purpose Gemini models, whereas Android Studio uses a custom version of Gemini which has been specifically optimized for developer tasks. While this means that these general-purpose models may not offer the same depth of Android knowledge as Gemini in Android Studio, they provide a fun and engaging playground to experiment and gain insight into the potential of AI in Android development.

Experiment 1: Turning designs into UI code

First, to turn designs into Compose UI code: Open the chat prompt section of Google AI Studio, upload an image of your app's UI screen (see example below) and enter the following prompt:

"Act as an Android app developer. For the image provided, use Jetpack Compose to build the screen so that the Compose Preview is as close to this image as possible. Also make sure to include imports and use Material3."

Then, click "run" to execute your query and see the generated code. You can copy the generated output directly into a new file in Android Studio.

Image uploaded: Designer mockup of an application's detail screen

Moving image showing a custom chat prompt being created from the image provided in Google AI Studio
Google AI Studio custom chat prompt: Image → Compose

Moving image showing running the generated code in Android Studio
Running the generated code (with minor fixes) in Android Studio

With this experiment, Gemini was able to infer details from the image and generate corresponding code elements. For example, the original image of the plant detail screen featured a "Care Instructions" section with an expandable icon — Gemini's generated code included an expandable card specifically for plant care instructions, showcasing its contextual understanding and code generation capabilities.


Experiment 2: Smart UI fixes with Gemini in AI Studio

Inspired by "Circle to Search", another fun experiment you can try is to "circle" problem areas on a screenshot, along with relevant Compose code context, and ask Gemini to suggest appropriate code fixes.

You can explore this concept in Google AI Studio:

    1. Upload Compose code and screenshot: Upload the Compose code file for a UI screen and a screenshot of its Compose Preview, with a red outline highlighting the issue—in this case, items in the Bottom Navigation Bar that should be evenly spaced.

Example: Preview with problem area highlighted

    2. Prompt Gemini: Open the chat prompt section and enter

    "Given this code file describing a UI screen and the image of its Compose Preview, please fix the part within the red outline so that the items are evenly distributed."
Screenshot of Google AI Studio: Smart UI Fixes with Gemini
Google AI Studio: Smart UI Fixes with Gemini

    3. Gemini's solution: Gemini returned code that successfully resolved the UI issue.

Screenshot of Example: Generated code fixed by Gemini
Example: Generated code fixed by Gemini

Example: Preview with fixes applied

Experiment 3: Integrating Gemini prompts in your app

Gemini can streamline experimentation and development of custom app features. Imagine you want to build a feature that gives users recipe ideas based on an image of the ingredients they have on hand. In the past, this would have involved complex tasks like hosting an image recognition library, training your own ingredient-to-recipe model, and managing the infrastructure to support it all.

Now, with Gemini, you can achieve this with a simple, tailored prompt. Let's walk through how to add this "Cook Helper" feature into your Android app as an example:

    1. Explore the Gemini prompt gallery: Discover example prompts or craft your own. We'll use the "Cook Helper" prompt.

Gemini prompt gallery in Google AI for Developers
Google AI for Developers: Prompt Gallery

    2. Open and experiment in Google AI Studio: Test the prompt with different images, settings, and models to ensure the model responds as expected and the prompt aligns with your goals.

Moving image showing the Cook Helper prompt in Google AI for Developers
Google AI Studio: Cook Helper prompt

    3. Generate the integration code: Once you're satisfied with the prompt's performance, click "Get code" and select "Android (Kotlin)". Copy the generated code snippet.

Screengrab of using 'Get code' to obtain a Kotlin snippet in Google AI Studio
Google AI Studio: get code - Android (Kotlin)

    4. Integrate the Gemini API into Android Studio: Open your Android Studio project. You can either use the new Gemini API app template provided within Android Studio or follow this tutorial. Paste the generated code snippet into your project.

That's it - your app now has a functioning Cook Helper feature powered by Gemini. We encourage you to experiment with different example prompts or even create your own custom prompts to enhance your Android app with powerful Gemini features.
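
For reference, the Kotlin that AI Studio generates looks roughly like the sketch below, assuming the Google AI client SDK for Android; the model name, prompt text, and the suggestRecipes wrapper are illustrative:

    import android.graphics.Bitmap
    import com.google.ai.client.generativeai.GenerativeModel
    import com.google.ai.client.generativeai.type.content

    suspend fun suggestRecipes(ingredients: Bitmap, apiKey: String): String? {
        // The model name and generation settings come from your AI Studio prompt.
        val model = GenerativeModel(modelName = "gemini-1.5-flash", apiKey = apiKey)
        val response = model.generateContent(
            content {
                image(ingredients)
                text("Suggest recipes I can cook with these ingredients.")
            }
        )
        return response.text
    }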

Our approach on bringing AI to Android Studio

While these experiments are promising, it's important to remember that large language model (LLM) technology is still evolving, and we're learning along the way. LLMs can be non-deterministic, meaning they can sometimes produce unexpected results. That's why we're taking a cautious and thoughtful approach to integrating AI features into Android Studio.

Our philosophy towards AI in Android Studio is to augment the developer and ensure they remain "in the loop." In particular, when the AI is making suggestions or writing code, we want developers to be able to carefully audit the code before checking it into production. That's why, for example, the new Code Suggestions feature in Canary automatically brings up a diff view for developers to preview how Gemini proposes to modify their code, rather than blindly applying the changes directly.

We want to make sure these features, like Gemini in Android Studio itself, are thoroughly tested, reliable, and truly useful to developers before we bring them into the IDE.

What's next?

We invite you to try these experiments and share your favorite prompts and examples with us using the #AndroidGeminiEra tag on X and LinkedIn as we continue to explore this exciting frontier together. Also, make sure to follow Android Developer on LinkedIn, Medium, YouTube, or X for more updates! AI has the potential to revolutionize the way we build Android apps, and we can't wait to see what we can create together.

Top 3 Updates for Building with AI on Android at Google I/O ‘24

Posted by Terence Zhang – Developer Relations Engineer

At Google I/O, we unveiled a vision of Android reimagined with AI at its core. As Android developers, you're at the forefront of this exciting shift. By embracing generative AI (Gen AI), you'll craft a new breed of Android apps that offer your users unparalleled experiences and delightful features.

Gemini models are powering new generative AI apps both in the cloud and directly on-device. You can now build with Gen AI using our most capable models in the cloud with the Google AI client SDK or Vertex AI for Firebase in your Android apps. For on-device, Gemini Nano is our recommended model. We have also integrated Gen AI into developer tools - Gemini in Android Studio supercharges your developer productivity.

Let’s walk through the major announcements for AI on Android from this year's I/O sessions in more detail!

#1: Build AI apps leveraging cloud-based Gemini models

To kickstart your Gen AI journey, design the prompts for your use case with Google AI Studio. Once you are satisfied with your prompts, integrate the Gemini API directly into your app to access Google’s latest models such as Gemini 1.5 Pro and 1.5 Flash, both with one million token context windows (with two million available via waitlist for Gemini 1.5 Pro).

If you want to learn more about and experiment with the Gemini API, the Google AI SDK for Android is a great starting point. For integrating Gemini into your production app, consider using Vertex AI for Firebase (currently in Preview, with a full release planned for Fall 2024). This platform offers a streamlined way to build and deploy generative AI features.

We are also launching the first Gemini API Developer competition (terms and conditions apply). Now is the best time to build an app integrating the Gemini API and win incredible prizes! A custom Delorean, anyone?


#2: Use Gemini Nano for on-device Gen AI

While cloud-based models are highly capable, on-device inference enables offline use, delivers low-latency responses, and ensures that data won’t leave the device.

At I/O, we announced that Gemini Nano will be getting multimodal capabilities, enabling devices to understand context beyond text – like sights, sounds, and spoken language. This will help power experiences like Talkback, helping people who are blind or have low vision interact with their devices via touch and spoken feedback. Gemini Nano with Multimodality will be available later this year, starting with Google Pixel devices.

We also shared more about AICore, a system service managing on-device foundation models, enabling Gemini Nano to run on-device inference. AICore provides developers with a streamlined API for running Gen AI workloads with almost no impact on the binary size while centralizing runtime, delivery, and critical safety components for Gemini Nano. This frees developers from having to maintain their own models, and allows many applications to share access to Gemini Nano on the same device.

Gemini Nano is already transforming key Google apps, including Messages and Recorder to enable Smart Compose and recording summarization capabilities respectively. Outside of Google apps, we're actively collaborating with developers who have compelling on-device Gen AI use cases and signed up for our Early Access Program (EAP), including Patreon, Grammarly, and Adobe.

Moving image of Gemini Nano operating in Adobe

Adobe is one of these trailblazers, and they are exploring Gemini Nano to enable on-device processing for part of its AI assistant in Acrobat, providing one-click summaries and allowing users to converse with documents. By strategically combining on-device and cloud-based Gen AI models, Adobe optimizes for performance, cost, and accessibility. Simpler tasks like summarization and suggesting initial questions are handled on-device, enabling offline access and cost savings. More complex tasks such as answering user queries are processed in the cloud, ensuring an efficient and seamless user experience.

This is just the beginning - later this year, we'll be investing heavily to enable even more developers, and we aim to launch with them.

To learn more about building with Gen AI, check out the I/O talks Android on-device GenAI under the hood and Add Generative AI to your Android app with the Gemini API, along with our new documentation.


#3: Use Gemini in Android Studio to help you be more productive

Besides powering features directly in your app, we’ve also integrated Gemini into developer tools. Gemini in Android Studio is your Android coding companion, bringing the power of Gemini to your developer workflow. Thanks to your feedback since its preview as Studio Bot at last year’s Google I/O, we’ve evolved our models, expanded to over 200 countries and territories, and now include this experience in stable builds of Android Studio.

At Google I/O, we previewed a number of features available to try in the Android Studio Koala preview release, like natural-language code suggestions and AI-assisted analysis for App Quality Insights. We also shared an early preview of multimodal input using Gemini 1.5 Pro, allowing you to upload images as part of your AI queries — enabling Gemini to help you build fully functional Compose UIs from a wireframe sketch.


You can read more about the updates here, and make sure to check out What’s new in Android development tools.

Android Device Streaming, powered by Firebase, is now in Beta

Posted by Adarsh Fernando, Senior Product Manager, Android Developer Tools

Validating your app on a range of Android screens is an important step to developing a high quality Android app. However, getting access to the device you need, when you need it, can be challenging and time-consuming. From trying to reproduce a device-specific behavior on a Samsung device to testing your adaptive app layouts on the Google Pixel Fold, having the right device at the right time is critical.

To address this app developer use case, we created Android Device Streaming, powered by Firebase. With just a few clicks, you and your team can access real physical devices, such as the latest Pixel and Samsung devices, and use them in the IDE in many of the ways you would use a physical device sitting on your desk.

Animation of using Device Streaming in Android Studio
Android Device Streaming, powered by Firebase, available in Android Studio Jellyfish

Today, Android Device Streaming is in beta and is available to all Android developers using Android Studio Jellyfish or later. We’ve also added new devices to the catalog and introduced flexible pricing that provides low-cost access to the latest Android devices.

Read below to learn what changes are in this release, as well as common questions around uses, security, and pricing. However, if you want to get started right away and try Android Device Streaming at no cost, see our getting started guide.

What can you do with Android Device Streaming?

If you’ve ever used Device Mirroring, you know that Android Studio lets you see the screen of your local physical device within the IDE window. Without having to physically reach out to your device, you’re able to change the device orientation, change the posture of foldables, simulate pressing physical buttons, interact with your app, and more. Android Device Streaming leverages these same capabilities, allowing you to connect and interact with remote physical devices provided by Firebase.

Screen capture of using the debugger with Android Device Streaming
Using the Debugger with Android Device Streaming

When you use Android Studio to request a device from Android Device Streaming, the IDE establishes a secure ADB over SSL connection to the device. The connection also lets you use familiar tools in Android Studio that communicate with the device, such as the Debugger, Profiler, Device Explorer, Logcat, Compose Live Edit, and more. These tools let you more accurately validate, test, and debug the behavior of your app on real OEM hardware.

What devices would my team have access to?

Android Device Streaming gives you and your team access to a number of devices running Android versions 8.1 through 14. You have access to the latest flagship devices from top device manufacturers, such as Google Pixel and Samsung. You can expand testing your app across more form factors with access to the latest foldables and tablets, such as the Samsung Tab S8 Ultra.

Screen capture of browsing the list of devices and selecting the one you want to use in Android Studio
Browse and select devices you want to use from Android Studio

And we’re frequently adding new devices to our existing catalog of 20+ device models, such as the following recent additions:

    • Samsung Galaxy Z Fold5
    • Samsung Galaxy S23 Ultra
    • Google Pixel 8a

Without having to purchase expensive devices, each team member can access Firebase’s catalog of devices in just a few clicks, for as long as they need—giving your team confidence that your app looks great across a variety of popular devices.


Google OEM partner logos - Samsung, Google Pixel, Oppo, and Xiaomi

As we mentioned at Google I/O ‘24, we’re partnering with top Original Equipment Manufacturers (OEMs), such as Samsung, Google Pixel, Oppo, and Xiaomi, to expand device selection and availability even further in the months to come. This helps the catalog of devices grow and stay ahead of ecosystem trends, so that you can validate that your apps work great on the latest devices before they reach the majority of your users.

Is Android Device Streaming secure?

Android Device Streaming, powered by Firebase, takes the security and privacy of your device sessions very seriously. Firebase devices are hosted in secure global data centers and Android Studio uses an SSL connection to connect to the device.

A device that you’ve used to install and test your app on is never shared with another user or Google service before being completely erased and factory reset. When you’re done using a device, you can do this yourself by clicking “Return and Erase Device” to fully erase and factory reset it. The same applies if the session expires and the device is returned automatically.

Screen capture of Return and Erase Device function in Android Device Streaming
When your session ends, the device is fully erased and factory reset.

How much does Android Device Streaming cost?

Depending on your Firebase project’s pricing plan, you can use Android Device Streaming at the following rates:

    • On June 1, 2024, for a promotional period:
        • (no cost) Spark plan: 120 no cost minutes per project, per month
        • Blaze plan: 120 no cost minutes per project, per month, 15 cents for each additional minute
    • On or around February 2025, the promotional period will end and billing will be based on the following quota limits:
        • (no cost) Spark plan: 30 no cost minutes per project, per month
        • Blaze plan: 30 no cost minutes per project, per month, 15 cents for each additional minute

With no monthly or yearly contracts, Android Device Streaming’s per-minute billing provides unparalleled flexibility for you and your team. And importantly, you don’t pay for any period of time required to set up the device before you connect, or erase the device after you end your session. This allows you and your team to save time and costs compared to purchasing and managing your own device lab.

To learn more, see Usage levels, quotas, and pricing.

What’s next

We’re really excited for you and your team to try Android Device Streaming, powered by Firebase. We think it’s an easy and cost-effective way for you to access the devices you need, when you need them, and right from your IDE, so that you can ensure the best quality and functionality of your app for your users.

The best part is, you can try out this new service in just a few clicks and at no cost. And our economical per-minute pricing provides increased flexibility for your team to go beyond the monthly quota, so that you only pay for the time you’re actively connected to a device—no subscriptions or long-term commitments required.

You can expect that the service will be adding more devices from top OEM partners to the catalog, to ensure that device selection remains up-to-date and becomes increasingly diverse. Try Android Device Streaming today and share your experience with the Android developer community on LinkedIn, Medium, YouTube, or X.

15 Things to know for Android developers at Google I/O

Posted by Matthew McCullough, Vice President, Product Management, Android Developer  

AI is unlocking experiences that were not even possible a few years ago, and we’ve been hard at work reimagining Android with AI at the core, to help enable you to build a whole new class of apps. At this year’s Google I/O, we’re covering how new tools like Gemini can power building the next generation of apps on Android. Plus, we showcased a range of updates to our tools and services grounded in productivity, helping you make it faster and easier to build excellent experiences across form factors. Let’s dive in!

Powering the next generation of Apps with AI

#1: AI in your tools, with Gemini in Android Studio

Gemini in Android Studio (formerly Studio Bot) is your coding companion for Android development, and thanks to your feedback since its preview at last year’s Google I/O, we’ve evolved our models, expanded to over 200 countries and territories, and brought it into the Gemini family of products. Earlier today, we previewed a number of new features coming soon, like Code suggestions, App Quality Insights that leverage Gemini, and a preview of the multi-modal inputs that are coming using Gemini 1.5 Pro. You can read more about the updates here, and make sure to check out What’s new in Android development tools.

#2: Building with Generative AI

Android provides the solution you need to build Generative AI apps. You can use our most capable models over the Cloud with the Gemini API in Google AI or Vertex AI for Firebase directly in your Android apps. For on-device, Gemini Nano is our most efficient model. We’re working closely with a few early adopters such as Patreon, Grammarly, and Adobe to ensure we’re creating the best APIs that unlock the most innovative experiences. For example, Adobe is experimenting with Gemini Nano to enhance the on-device experience of Acrobat AI Assistant, a tool that allows their users to summarize and interact with documents. Be sure to check out the Build your own generative AI powered Android app, Android on-device gen AI under the hood, and the What’s New in Android sessions to learn more!

Moving image of Gemini Nano operating in Adobe

Excellent apps, across devices

#3: Think adaptive: apps on phones, foldables, tablets and more

Build and design apps that adapt beyond the phone, with the new Compose adaptive layout libraries built with Material guidance in beta. Add rich stylus and keyboard support to increase user productivity. Check out three of our key Android adaptive sessions at Google I/O: Designing adaptive apps, Building adaptive Android apps, and Increase user productivity with large screens and accessories.


#4: Enhance homescreens with Widgets and Jetpack Glance

Jetpack Glance 1.1 is now available as a release candidate and lets you build high quality widgets using your Compose skills. Check out our new canonical layouts, design guidance, and Figma updates to the Android UI kit. To learn more check out our Improve the user experience of your Android app workshop and Build Android widgets with Jetpack Glance technical session.

#5-9: come back here tomorrow and Thursday!

We’ll continue to share more updates for Android Developers throughout Google I/O, so check back here tomorrow!

Developer Productivity

#10: Use Kotlin Multiplatform for sharing business logic

Kotlin Multiplatform (KMP) enables sharing Kotlin code across different platforms and several of our Jetpack libraries, like DataStore and Room, have already been migrated to take advantage of KMP. We use Kotlin Multiplatform within Google and recommend using KMP for sharing business logic between platforms. Learn more about it here.
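
A minimal sketch of the pattern, with illustrative names: business logic lives in commonMain, and expect/actual declarations bridge to each platform.

    // commonMain: business logic written once, shared across platforms.
    expect fun currentTimeMillis(): Long

    class SessionTimer {
        private var start = 0L
        fun begin() { start = currentTimeMillis() }
        fun elapsedMillis(): Long = currentTimeMillis() - start
    }

    // androidMain: the Android-specific actual declaration.
    actual fun currentTimeMillis(): Long = System.currentTimeMillis()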

#11: Compose: Shared Elements, performance improvements and more

The upcoming Compose June ‘24 release is packed with the features you’ve been asking for! Shared element transitions, lazy list item reordering animations, strong skipping mode, performance improvements, a new lazy flow layout and more. Read more about it in our blog.

#12: Android Studio: the latest preview, with Gemini and more

Android Studio Koala 🐨 Feature Drop (2024.1.2), available today in the canary channel, builds on top of IntelliJ 2024.1 and adds new innovative features unlocked by Gemini, such as insights for crashes in App Quality Insights, code transformations, and a Gemini API starter template to get you started quickly. Additionally, new features such as USB speed detection, a shortcut UI to control device settings, a new way to sign in to Google services, an updated and speedier task-centric UI for profilers, and a deep integration with the Google Play SDK Index are intended to make the development process extremely productive. Read more here.

And the latest from the world of Mobile

#13: Grow your business with the latest Google Play updates

Discover new ways to attract and engage users with enhanced custom store listings. Optimize revenue with expanded payment options. Reinforce trust through secure, high-quality experiences made easier with our latest SDK Console improvements. Learn about these updates and more, including our new vertical approach, in our blog.

#14: Simplify app compliance with Checks

Streamline your app's privacy compliance with Checks, Google's AI-powered compliance solution! Checks empowers developers to swiftly identify, address, and resolve privacy issues, enabling you to launch apps faster and with confidence. Harness the power of automation with Checks' intelligent reports, saving you valuable time and resources. Get started now at checks.google.com.

#15: And of course, Android 15

…but for that, you’ll have to stay tuned tomorrow, when we’ve got a bit more up our sleeve!