Deep Research and Gems in the Gemini app are now available for more Google Workspace customers

What’s changing 

Beginning today, we’re expanding the availability of Deep Research and Gems in the Gemini app to the following Google Workspace editions: 
  • Business Starter 
  • Enterprise Starter 
  • Education Fundamentals, Standard, and Plus 
  • Frontline Starter and Standard 
  • Essentials, Enterprise Essentials, and Enterprise Essentials Plus
  • Nonprofits 
Note: There is no change for Google Workspace or Education customers with access to Gemini Advanced who are already using Deep Research or Gems. 

Who’s impacted

Admins and end users


Why it matters

Gems are AI experts that you can customize for a variety of topics, helping you complete specific goals, tasks, and workflows based on your inputs while reducing repetitive prompting. You can use Gems to gather targeted feedback on new products or services, get suggestions on your writing, brainstorm creative learning experiences, and more. You can further customize your Gems by anchoring them to specific files, including Google Docs or Google Sheets via Drive, for even more relevant responses. 


You can also take advantage of a variety of our pre-made Gems to quickly get started, like “Sales pitch ideator” to create compelling pitch materials that drive conversions, “Copy creator” to draft on-brand marketing copy, “Learning coach” to build knowledge with step-by-step study guidance and practice activities, and “Sentiment analyzer” to dive into customer feedback and identify trends.




Deep Research saves hours of work by browsing the web on your behalf, analyzing information in real time, and developing comprehensive research reports in minutes to get you up to speed on just about anything. You can use Deep Research to understand your business landscape with comprehensive reports on emerging trends in any industry or on competitor products and services. It can also help educators and students 18+ with grant writing, lesson planning, class projects, and more. 






Additional details

  • Currently, some features in Gems are only available in a limited set of languages. Refer to this article in our Help Center for more information: Gems in Gemini Apps.
  • Currently, Deep Research and Gems are limited to the Gemini web app (gemini.google.com) for Google Workspace business and education users 18 years and older. We’re planning to support Deep Research and Gems in the Gemini mobile app for these users at a later date.
  • Deep Research use is limited to five reports per user per 30-day period for the Google Workspace editions indicated in this post. For full usage of Deep Research, we recommend exploring Google Workspace business and education plans with access to Gemini Advanced. 

Getting started



Rollout pace


Availability

Available for Google Workspace:
  • Business Starter
  • Enterprise Starter
  • Education Standard, Plus, and Fundamentals
  • Frontline Starter and Standard
  • Essentials, Enterprise Essentials, and Enterprise Essentials Plus
  • Nonprofits

The Third Beta of Android 16

Posted by Matthew McCullough – VP of Product Management, Android Developer

Android 16 has officially reached Platform Stability today with Beta 3! That means the API surface is locked, the app-facing behaviors are final, and you can push your Android 16-targeted apps to the Play Store right now. Read on for coverage of new security and accessibility features in Beta 3.

Android delivers enhancements and new features year-round, and your feedback on the Android beta program plays a key role in helping Android continuously improve. The Android 16 developer site has more information about the beta, including how to get it onto devices and the release timeline. We’re looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that benefits everyone.

New in Android 16 Beta 3

At this late stage in the development cycle, there are only a few new things in the Android 16 Beta 3 release for you to consider when developing your apps.

Android 16 timeline showing we are on time with Beta releases ending in March

Broadcast audio support

Pixel 9 devices on Android 16 Beta now support Auracast broadcast audio with compatible LE Audio hearing aids, part of Android's work to enhance audio accessibility. Built on the LE Audio standard, Auracast enables compatible hearing aids and earbuds to receive direct audio streams from public venues like airports, concerts, and classrooms. Our Keyword post has more on this technology.

Outline text for maximum text contrast

Users with low vision often have reduced contrast sensitivity, making it challenging to distinguish objects from their backgrounds. To help these users, Android 16 Beta 3 introduces outline text, which replaces high-contrast text and draws a larger contrasting area around text to greatly improve legibility.

Android 16 also contains new AccessibilityManager APIs that let your app check whether this mode is enabled or register a listener for changes. This is primarily for UI toolkits like Compose to offer a similar visual experience. If you maintain a UI toolkit library, or your app performs custom text rendering that bypasses the android.text.Layout class, you can use these APIs to know when outline text is enabled.

Text with enhanced contrast before and after Android 16's new outline text accessibility feature
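If your library or app does render text itself, a minimal sketch of what that check and listener registration could look like follows. Treat the method names used here (isHighContrastTextEnabled and registerHighContrastTextStateChangeListener) as assumptions to verify against the Android 16 API reference, not confirmed signatures.

// Sketch only: verify the exact Android 16 AccessibilityManager method names
// against the API reference; the ones used here are assumptions.
import android.content.Context
import android.view.accessibility.AccessibilityManager
import androidx.core.content.ContextCompat

fun observeOutlineText(context: Context, onChanged: (Boolean) -> Unit) {
    val am = context.getSystemService(AccessibilityManager::class.java)

    // One-time check of the current state for custom text renderers.
    onChanged(am.isHighContrastTextEnabled)

    // Re-render whenever the user toggles the setting.
    am.registerHighContrastTextStateChangeListener(
        ContextCompat.getMainExecutor(context)
    ) { enabled -> onChanged(enabled) }
}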

Test your app with Local Network Protection

Android 16 Beta 3 adds the ability to test the Local Network Protection (LNP) feature which is planned for a future Android major release. It gives users more control over which apps can access devices on their local network.

What's Changing?

Currently, any app with the INTERNET permission can communicate with devices on the user's local network. LNP will eventually require apps to request a specific permission to access the local network.

Beta 3: Opt-In and Test

In Beta 3, LNP is an opt-in feature. This is your chance to test your app and identify any parts that rely on local network access. Use this adb command to enable LNP restrictions for your app:

adb shell am compat enable RESTRICT_LOCAL_NETWORK <your_package_name>

After rebooting your device, your app's local network access is restricted. Test features that might interact with local devices (e.g., device discovery, media casting, connecting to IoT devices). Expect to see socket errors like EPERM or ECONNABORTED if your app tries to access the local network without the necessary permission. See the developer guide for more information, including how to re-enable local network access.
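If your app talks to devices on the local network, a defensive pattern like the sketch below can surface those failures gracefully during testing; the host-and-port probe is purely illustrative and not part of any LNP API.

// Illustrative only: probe a local device and handle LNP-style failures.
// Under LNP restrictions, connect() can fail with EPERM or ECONNABORTED,
// which surface in Kotlin as IOException / SocketException.
import java.io.IOException
import java.net.InetSocketAddress
import java.net.Socket

fun canReachLocalDevice(host: String, port: Int): Boolean =
    try {
        Socket().use { socket ->
            socket.connect(InetSocketAddress(host, port), 2_000) // 2s timeout
            true
        }
    } catch (e: IOException) {
        // Degrade gracefully (hide discovery or casting UI, prompt for the
        // permission, etc.) instead of treating this as a fatal error.
        false
    }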

This is a significant change, and we're committed to working with you to ensure a smooth transition. By testing and providing feedback now, you can help us build a more private and secure Android ecosystem.

Get your apps, libraries, tools, and game engines ready!

If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your developers know if updates are needed to fully support Android 16.

Testing involves installing your production app, or a test app that uses your library or engine, onto a device or emulator running Android 16 Beta 3, either from Google Play or by other means. Work through all your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are several changes to focus on that apply even if you don't yet target Android 16:

    • JobScheduler: JobScheduler quotas are enforced more strictly in Android 16; enforcement will occur if a job executes while the app is on top, while a foreground service is running, or while the app is in the active standby bucket. setImportantWhileForeground is now a no-op. The new stop reason STOP_REASON_TIMEOUT_ABANDONED occurs when the system detects that the app can no longer stop the job (see the sketch after this list).
    • Broadcasts: Ordered broadcasts using priorities only work within the same process. Use other IPC if you need cross-process ordering.
    • ART: If you use reflection, JNI, or any other means to access Android internals, your app might break. This is never a best practice. Test thoroughly.
    • 16KB Page Size: If your app isn't 16KB-page-size ready, you can use the new compatibility mode flag, but we recommend migrating to 16KB for best performance.
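For the JobScheduler change above, a minimal sketch of surfacing the stop reason in a JobService is shown below; the class and log tag names are placeholders, and STOP_REASON_TIMEOUT_ABANDONED is the new Android 16 value to watch for.

// Minimal sketch: log why the system stopped your job so you can spot the
// stricter Android 16 quota enforcement and the new abandonment stop reason
// during testing. Class and tag names are placeholders.
import android.app.job.JobParameters
import android.app.job.JobService
import android.util.Log

class SyncJobService : JobService() {

    override fun onStartJob(params: JobParameters): Boolean {
        // Start real work on a background thread, then call
        // jobFinished(params, false) when it completes.
        return true
    }

    override fun onStopJob(params: JobParameters): Boolean {
        // getStopReason() is available since API 31; on Android 16 it can be
        // STOP_REASON_TIMEOUT_ABANDONED when the job can no longer be stopped.
        Log.w("SyncJobService", "Job stopped early, reason=${params.stopReason}")
        return true // ask the system to reschedule the interrupted job
    }
}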

There are other changes that will be impactful once your app targets Android 16; review the full list of targeted behavior changes on the developer site.

Remember to thoroughly exercise libraries and SDKs that your app is using during your compatibility testing. You may need to update to current SDK versions or reach out to the developer for help if you encounter any issues.

Once you’ve published the Android 16-compatible version of your app, you can start the process to update your app's targetSdkVersion. Review the behavior changes that apply when your app targets Android 16 and use the compatibility framework to help quickly detect issues.

Two Android API releases in 2025

This preview is for the next major release of Android, with a planned launch in Q2 of 2025, and we plan to have another release with new developer APIs in Q4. This Q2 major release will be the only release in 2025 that includes behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; like our non-SDK quarterly releases, it will not include any intentional app-breaking behavior changes.

Android API release timeline 2025

We'll continue to have quarterly Android releases. The Q1 and Q3 updates provide incremental updates to ensure continuous quality. We’re putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.

There’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.

Get started with Android 16

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 2 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 3.

While the API and behaviors are final, we're still looking for your feedback so please report issues on the feedback page. The earlier we get your feedback, the better chance we'll be able to address it in this or a future release.

For the best development experience with Android 16, we recommend that you use the latest feature drop of Android Studio (Meerkat). Once you’re set up, here are some of the things you should do:

    • Compile against the new SDK (see the build configuration sketch below), test in CI environments, and report any issues in our tracker on the feedback page.
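If you build with Gradle, a minimal sketch of compiling against the beta SDK might look like the following; "Baklava" is the Android 16 preview codename used during the beta, and the exact values to use are listed on the Android 16 developer site.

// build.gradle.kts (module) - a minimal sketch, assuming the Android 16
// preview SDK is installed via the SDK Manager; values may change once the
// SDK is finalized.
plugins {
    id("com.android.application")
    id("org.jetbrains.kotlin.android")
}

android {
    // Compile against the Android 16 preview ("Baklava" codename).
    compileSdkPreview = "Baklava"

    defaultConfig {
        // Optionally target the preview in a test build to exercise the
        // Android 16-targeted behavior changes; keep your production
        // targetSdk unchanged until you are ready to ship.
        targetSdkPreview = "Baklava"
    }
}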

We’ll update the beta system images and SDK regularly throughout the Android 16 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas.

For complete information on Android 16 please visit the Android 16 developer site.

#TheAndroidShow: Multimodal for Gemini in Android Studio, news for gaming devs, the latest devices at MWC, XR and more!

Posted by Anirudh Dewani – Director, Android Developer Relations

We just dropped our Winter episode of #TheAndroidShow, on YouTube and on developer.android.com, and this time we were in Barcelona to give you the latest from Mobile World Congress and across the Android developer world. We unveiled a big update to Gemini in Android Studio (multimodal support, so you can turn images into code) and we shared some news for game developers ahead of GDC later this month. Plus we unpacked the latest Android hardware devices from our partners coming out of Mobile World Congress and recapped all of the latest in Android XR. Let’s dive in!


Multimodal image-to-code, now available for Gemini in Android Studio

At every stage of the development lifecycle, Gemini in Android Studio has become your AI-powered companion. Today, we took the wraps off a new feature: Gemini in Android Studio now supports multimodal image-to-code, which lets you attach images directly to your prompts! This unlocks a wealth of new possibilities that improve collaboration and design workflows. You can try out this new feature by downloading the latest canary, Android Studio Narwhal, and read more about multimodal image attachment, now available for Gemini in Android Studio.

Building excellent games with better graphics and performance

Ahead of next week’s Games Developer Conference (GDC), we announced new developer tools that will help improve gameplay across the Android ecosystem. We're making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we're enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplay sessions. Learn more about how we're building excellent games with better graphics and performance.


A deep dive into Android XR

Since we unveiled Android XR in December, it's been exciting to see developers preparing their apps for the next generation of Android XR devices. In the latest episode of #TheAndroidShow we dove into this new form factor and spoke with a developer who has already been building. Developing for this new platform leverages your existing Android development skills and familiar tools like Android Studio, Kotlin, and Jetpack libraries. The Android XR SDK Developer Preview is available now, complete with an emulator, so you can start experimenting and building XR experiences immediately! Visit developer.android.com/xr for more.


New Android foldables and tablets, at Mobile World Congress

Mobile World Congress is a big moment for Android, with partners from around the world showing off their latest devices. And if you’re already building adaptive apps, we wanted to share some of the cool new foldables and tablets that our partners released in Barcelona:

    • OPPO: OPPO launched their Find N5, a slim 8.93mm foldable with an 8.12” large screen, making it as compact or expansive as needed.
    • Xiaomi: Xiaomi debuted the Xiaomi Pad 7 series. Xiaomi Pad 7 provides a crystal-clear display and, with the productivity accessories, users get a desktop-like experience with the convenience of a tablet.
    • Lenovo: Lenovo showcased their Yoga Tab Plus, the latest powerful tablet from their lineup designed to empower creativity and productivity.

These new devices are a great reason to build adaptive apps that scale across screen sizes and device types. Plus, Android 16 removes the ability for apps to restrict orientation and resizability at the platform level, so you’ll want to prepare. To help you get started, the Compose Material 3 adaptive library enables you to quickly and easily create layouts across all screen sizes while reducing the overall development cost.
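As a rough starting point, here is a minimal sketch using the material3-adaptive library to pick a layout from the current window size class; the two layout composables are hypothetical placeholders for your own list and detail UI.

// Minimal sketch (assumes the androidx.compose.material3.adaptive dependency):
// choose a one- or two-pane layout from the current window size class.
// ListDetailLayout and SinglePaneLayout are hypothetical placeholders.
import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.compose.runtime.Composable
import androidx.window.core.layout.WindowWidthSizeClass

@Composable
fun HomeScreen() {
    val widthClass = currentWindowAdaptiveInfo().windowSizeClass.windowWidthSizeClass
    if (widthClass == WindowWidthSizeClass.EXPANDED) {
        ListDetailLayout()   // tablets and unfolded foldables: two panes
    } else {
        SinglePaneLayout()   // phones and folded devices: one pane
    }
}

@Composable fun ListDetailLayout() { /* your expanded, two-pane UI */ }
@Composable fun SinglePaneLayout() { /* your compact, single-pane UI */ }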


Watch the Winter episode of #TheAndroidShow

That’s a wrap on this quarter’s episode of #TheAndroidShow. A special thanks to our co-hosts for the Winter episode, Simona Milanović and Alejandra Stamato! You can watch the full show on YouTube and on developer.android.com/events/show.

Have an idea for our next episode of #TheAndroidShow? It’s your conversation with the broader community, and we’d love to hear your ideas for our next quarterly episode - you can let us know on X or LinkedIn.

Multimodal image attachment is now available for Gemini in Android Studio

Posted by Paris Hsu – Product Manager, Android Studio

At every stage of the development lifecycle, Gemini in Android Studio has become your AI-powered companion, making it easier to build high quality apps. We are excited to announce a significant expansion: Gemini in Android Studio now supports multimodal inputs, which lets you attach images directly to your prompts! This unlocks a wealth of new possibilities that improve team collaboration and UI development workflows.

You can try out this new feature by downloading the latest Android Studio canary. We’ve outlined a few use cases to try, but we’d love to hear what you think as we work through bringing this feature into future stable releases. Check it out:

Image attachment - a new dimension of interaction

We first previewed Gemini's multimodal capabilities at Google I/O 2024. This technology allows Gemini in Android Studio to understand simple wireframes, and transform them into working Jetpack Compose code.

You'll now find an image attachment icon in the Gemini chat window. Simply attach JPEG or PNG files to your prompts and watch Gemini understand and respond to visual information. We've observed that images with strong color contrasts yield the best results.

1.1 New “Attach Image File” icon in chat window

1.2 Example multimodal response in chat

We encourage you to experiment with various prompts and images. Here are a few compelling use cases to get you started:

    • Rapid UI prototyping and iteration: Convert a simple wireframe or high-fidelity mock of your app's UI into working code.
    • Diagram explanation and documentation: Gain deeper insights into complex architecture or data flow diagrams by having Gemini explain their components and relationships.
    • UI troubleshooting: Capture screenshots of UI bugs and ask Gemini for solutions.

Rapid UI prototyping and iteration

Gemini's multimodal support lets you convert visual designs into functional UI code. Simply upload your image and use a clear prompt. It works whether you're working from your own sketches or from a designer mockup.

Here’s an example prompt: "For this image provided, write Android Jetpack Compose code to make a screen that's as close to this image as possible. Make sure to include imports, use Material3, and document the code.” And then you can append any specific or additional instructions related to the image.


2. Example of generating Compose code from a high-fidelity mock using Gemini in Android Studio (code output)
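To set expectations, here is a purely illustrative, much-simplified example of the kind of Material3 Compose scaffold such a prompt can produce; the real output depends entirely on your image and prompt, and the names below are made up for illustration.

// Purely illustrative: a simplified example of the kind of Material3 screen
// such a prompt might generate. Names and layout are made up and will differ
// based on the attached image.
import androidx.compose.foundation.layout.Arrangement
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.Button
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

/** A welcome screen generated from a mock: headline, supporting copy, action. */
@Composable
fun WelcomeScreen(onContinue: () -> Unit) {
    Column(
        modifier = Modifier
            .fillMaxSize()
            .padding(16.dp),
        verticalArrangement = Arrangement.spacedBy(12.dp)
    ) {
        Text(text = "Welcome", style = MaterialTheme.typography.headlineMedium)
        Text(text = "Get started with your new app.")
        Button(onClick = onContinue) { Text("Continue") }
    }
}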

For more complex UIs, refine your prompts to capture specific functionality. For instance, when converting a calculator mockup, adding "make the interactions and calculations work as you'd expect" results in a fully functional calculator:

Example prompt to convert a calculator mockup

3. Example of generating Compose code from a wireframe via Gemini in Android Studio (code output)

Note: this feature provides an initial design scaffold. It’s a good “first draft,” and your own edits and adjustments will be needed. Common refinements include ensuring correct drawable imports and importing icons. Consider the generated code a highly efficient starting point that accelerates your UI development workflow.

Diagram explanation and documentation

With Gemini's multimodal capabilities, you can also try uploading an image of your diagram and asking for explanations or documentation.

Example prompt: Upload the Now in Android architecture diagram and say "Explain the components and data flow in this diagram" or “Write documentation about this diagram”.

4. Example of asking Gemini to help document the Now in Android architecture diagram

UI troubleshooting

Leverage Gemini's visual analysis to identify and resolve bugs quickly. Upload a screenshot of the problematic UI, and Gemini will analyze the image and suggest potential solutions. You can also include relevant code snippets for more precise assistance.

In the example below, we used Compose UI Check and found that a button stretched too wide on tablet screens, so we took a screenshot and asked Gemini for solutions. Gemini was able to leverage window size classes to provide the right fix.

5. Example of fixing UI bugs using image attachment (code output)
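As an illustration of the kind of fix described above (not Gemini's literal output), constraining the button on larger window size classes could look roughly like this:

// Illustrative sketch of a window-size-class fix for an over-stretched button;
// the composable name and the 360.dp cap are arbitrary choices.
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.layout.widthIn
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp
import androidx.window.core.layout.WindowWidthSizeClass

@Composable
fun SubmitButton(onClick: () -> Unit) {
    val widthClass = currentWindowAdaptiveInfo().windowSizeClass.windowWidthSizeClass
    val modifier = if (widthClass == WindowWidthSizeClass.COMPACT) {
        Modifier.fillMaxWidth()          // phones: full-width button is fine
    } else {
        Modifier.widthIn(max = 360.dp)   // tablets: don't stretch edge to edge
    }
    Button(onClick = onClick, modifier = modifier) {
        Text("Submit")
    }
}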

Download Android Studio today

Download the latest Android Studio canary today to try the new multimodal features!

As always, Google is committed to the responsible use of AI. Android Studio won't send any of your source code to servers without your consent. You can read more on Gemini in Android Studio's commitment to privacy.

We appreciate any feedback on things you like or features you would like to see. If you find a bug, please report the issue and also check out known issues. Remember to also follow us on X, Medium, or YouTube for more Android development updates!

Building excellent games with better graphics and performance

Posted by Matthew McCullough – VP of Product Management, Android

We’re stepping up our multiplatform gaming offering with exciting news dropping at this year’s Game Developers Conference (GDC). We’re bringing users more games, more ways to play your games across devices, and improved gameplay. You can read all about the updates for users from The Keyword. At GDC, we’ll be diving into all of the latest games coming to Play, plus new developer tools that’ll help improve gameplay across the Android ecosystem.

Today, we’re sharing a closer look at what’s new from Android. We’re making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we’re enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplay sessions. Check out the video or keep reading below.

More immersive visuals built on Vulkan, now the official graphics API

These days, games require more processing power for realistic graphics and cutting-edge visuals. Vulkan is a low-level graphics API that helps developers maximize the performance of modern GPUs, and today we’re making it the official graphics API for Android. This unlocks advanced features like ray tracing and multithreading for realistic and immersive gaming visuals. For example, Diablo Immortal used Vulkan to implement ray tracing, bringing the world of Sanctuary to life with spectacular special effects, from fiery explosions to icy blasts.

Moving image showing ray tracing in Diablo Immortal on Google Play
Diablo Immortal running on Vulkan

For casual games like Pokémon TCG Pocket, which draws players into the vibrant world of each Pokémon, Vulkan helps optimize graphics across a broad range of devices to ensure a smooth and engaging experience for every player.

Moving image showing gameplay of Pokemon TCG Pocket on Google Play
Pokémon TCG Pocket running on Vulkan

We’re excited to announce that Android is transitioning to a modern, unified rendering stack with Vulkan at its core. Starting with our next Android release, more devices will use Vulkan to process all graphics commands. If your game is running on OpenGL, it will use ANGLE as a system driver that translates OpenGL to Vulkan. We recommend testing your game on ANGLE today to ensure it’s ready for the Vulkan transition.

We’re also partnering with major game engines to make Vulkan integration easier. With Unity 6, you can configure Vulkan per device while older versions can access this setting through plugins. Over 45% of sessions from new games on Unity* use Vulkan, and we expect this number to grow rapidly.

To simplify workflows further, we’re teaming up with the Samsung Austin Research Center to create an integrated GPU profiler toolchain for Vulkan and AI/ML optimization. Coming later this year, this tool will enable developers to make graphics, memory and compute workloads more efficient.

Longer and smoother gameplay sessions with ADPF

Android Dynamic Performance Framework (ADPF) enables developers to adjust the game’s performance in real time based on the thermal state of the device, and it’s getting a big update today to provide longer and smoother gameplay sessions. ADPF is designed to work across a wide range of devices, including models like the Pixel 9 family and the Samsung S25 Series. We’re excited to see MMORPGs like Lineage W integrating ADPF to optimize performance on their core target devices.

Moving image showing gameplay from Lineage w on Google Play
Lineage W running on ADPF

Here’s how we're enhancing ADPF with better performance and simplified integration:

    • Stronger performance: Our collaboration with MediaTek, a leading chip supplier for Android devices, has brought enhanced stability to ADPF. Devices powered by MediaTek's MAGT system-on-chip solution can now fully utilize ADPF's performance optimization capabilities.
    • Easier integration: Major game engines now offer built-in ADPF support with simple interfaces and default configurations. For advanced controls, developers can customize the ADPF behavior in real time.
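For reference, here is a minimal sketch of the kind of thermal-state handling ADPF enables, using the platform PowerManager thermal APIs; the quality-tier decisions are illustrative, and setGraphicsTier is a placeholder for your engine's own settings.

// Minimal sketch of reacting to thermal state with the platform thermal APIs
// that ADPF builds on; the quality-tier decisions are illustrative and
// setGraphicsTier is a placeholder for your engine's own controls.
import android.content.Context
import android.os.PowerManager

fun registerThermalHandling(context: Context, setGraphicsTier: (Int) -> Unit) {
    val pm = context.getSystemService(PowerManager::class.java)

    // Forecast headroom ~10 seconds ahead: values approaching 1.0 mean severe
    // throttling is likely, so the game can drop a graphics tier early.
    if (pm.getThermalHeadroom(10) >= 0.9f) setGraphicsTier(1)

    // React to status changes for longer sessions, e.g. lower resolution or
    // frame rate at SEVERE instead of letting the device throttle the game.
    pm.addThermalStatusListener { status ->
        when (status) {
            PowerManager.THERMAL_STATUS_SEVERE -> setGraphicsTier(0)
            PowerManager.THERMAL_STATUS_MODERATE -> setGraphicsTier(1)
            else -> setGraphicsTier(2)
        }
    }
}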

Performance optimization with more features in Play Console

Once you’ve launched your game, Play Console offers the tools to monitor and improve your game's performance. We’re newly including Low Memory Killers (LMK) in Android vitals, giving you insight into memory constraints that can cause your game to crash. Android vitals is your one-stop destination for monitoring metrics that impact your visibility on the Play Store, like slow sessions. You can find this information next to reach and devices, which provides updates on your game's user distribution and notifies you of device-specific issues.

Android vitals details in Google Play Console
Check your Android vitals regularly to ensure high technical quality

Bringing PC games to mobile, and pushing the boundaries of gaming

We're launching a pilot program to simplify the process of bringing PC games to mobile. It provides support starting from Android game development all the way through publishing your game on Play. Starting this month, games like DREDGE and TABS Mobile are growing their mobile audience using this program. Many more are following in their footsteps this year, including Disco Elysium. You can express your interest to join the PC to mobile program.

Moving image displaying thumbnails of titles of new PC games coming to mobile - Disco Elysium, TABS Mobile, and DREDGE
New PC games are coming to mobile

You can learn more about Android game development from our developer site. We can’t wait to see your title join the ranks of these amazing games built for Android. And if you’ll be at GDC next week, we’d love to say hello - stop by at the Moscone Center West Hall!


* Source: Google internal data measuring games on Android 14 or later launched between August 2024 - February 2025.