We’re pleased to announce the general availability of Gemini in the side panel of Docs, Sheets, Slides, and Drive. Through the side panel, Gemini can assist you with summarizing, analyzing, and generating content by utilizing insights gathered from your emails, documents, and more—all without switching applications or tabs. The updated interface automatically summarizes the content you're working on and provides contextually relevant prompts to help you get started.
Who’s impacted
End users
Why you’d use it
The side panel will use Google’s most capable models, including the Gemini 1.5 Pro model with a longer context window and more advanced reasoning, allowing you to harness the power of Gemini directly from your most-used Google Workspace apps. Here are a few examples of when you’d use it:
Docs: Gemini in Docs side panel can help you write and refine content, summarize information, brainstorm, create content based on other files, and more.
Gemini in Docs side panel
Slides: Gemini in Slides side panel can help you generate new slides, generate custom images, summarize presentations, and more.
Using Gemini in Slides side panel
Sheets: Gemini in Sheets side panel can help you track and organize data. In the side panel, you can quickly create tables, generate formulas, and ask how to accomplish certain tasks in Sheets.
Using Gemini in Sheets side panel
Drive: Gemini in Drive side panel can summarize one or more documents, surface quick facts about a project, and help you dive deep into a topic without needing to find and click through numerous documents.
Using Gemini in Drive side panel
Additional details
We’re also introducing Gemini in the Gmail side panel, which you can leverage to summarize email threads, draft an email, suggest responses to an email thread, and more. For more information, see our announcement on the Workspace Updates blog.
Getting started
Admins: There is no admin control for this feature.
End users: You can access Gemini in the side panel by clicking on “Ask Gemini” (spark button) in the top right corner of Docs, Sheets, Slides, and Drive on the web. Visit the Help Center to learn more about collaborating with Gemini in Google Drive, as well as Google Docs, Sheets, and Slides.
Rollout pace
Rapid Release domains: Full rollout (1-3 days for feature visibility) starting on June 24, 2024
Scheduled Release domains: Gradual rollout (up to 15 days for feature visibility) starting on July 8, 2024
In addition to the recent announcement of Gemini in the side panel of Google Docs, Google Sheets, Google Slides, and Drive, we’re excited to introduce the general availability of Gemini in the Gmail side panel. Built on Google’s most capable models, including Gemini 1.5 Pro with its longer context window and more advanced reasoning, the side panel lets you use Gemini in Gmail on the web to:
Summarize an email thread
Suggest responses to an email thread
Get help drafting an email
Ask questions and find specific information from emails within your inbox or from your Google Drive files
While Gemini in Gmail will provide proactive prompts to help you get started, you can also ask freeform questions. For example, you can ask Gemini to search your inbox for things like “What was the PO number for my agency?”, “How much did the company spend on the last marketing event?”, or “When is the next team meeting?”. And just like that, you’ll have the information you need to reply quickly, without ever leaving Gmail.
Starting today, you can also use Gemini in the Gmail mobile app on Android and iOS to analyze email threads and see a summarized view with the key highlights, just as you can with the side panel on the web. This is useful when you’re on the go, especially because reading through long email threads can be time consuming and even a bit of a challenge on a smaller screen. Additional mobile features like Contextual Smart Reply and Gmail Q&A are coming soon.
Who’s impacted
End users
Why you’d use it
While Gemini in Gmail helps you view, understand, and respond to email content, it also connects to other Workspace apps like Docs, Sheets, Slides, and Drive. For example, let’s say you’re planning a company offsite and get an email from a team member asking for the hotel information so they can book a room. Now you can ask Gemini to look it up from a Google Doc that contains all the offsite details, with a simple prompt like “What is the hotel name and sales manager email listed in @Company Offsite 2024?” Then you can easily insert the answer into your reply to get your team member the help they need.
On web, you can access Gemini in the Gmail side panel by clicking on “Ask Gemini” (spark button) in the top right corner of Gmail. Visit the Help Center to learn more about collaborating with Gemini in Gmail.
On mobile, you can access Gemini by tapping on the “summarize this email” chip in an email thread.
Rollout pace
Web:
Rapid Release domains: Full rollout (1-3 days for feature visibility) starting on June 24, 2024
Scheduled Release domains: Gradual rollout (up to 15 days for feature visibility) starting on July 8, 2024
Posted by Paris Hsu – Product Manager, Android Studio
We shared an exciting live demo from the Developer Keynote at Google I/O 2024 where Gemini transformed a wireframe sketch of an app's UI into Jetpack Compose code, directly within Android Studio. While we're still refining this feature to make sure you get a great experience inside of Android Studio, it's built on top of foundational Gemini capabilities which you can experiment with today in Google AI Studio.
Specifically, we'll delve into:
Turning designs into UI code: Convert a simple image of your app's UI into working code.
Smart UI fixes with Gemini: Receive suggestions on how to improve or fix your UI.
Integrating Gemini prompts in your app: Simplify complex tasks and streamline user experiences with tailored prompts.
Note: Google AI Studio offers various general-purpose Gemini models, whereas Android Studio uses a custom version of Gemini which has been specifically optimized for developer tasks. While this means that these general-purpose models may not offer the same depth of Android knowledge as Gemini in Android Studio, they provide a fun and engaging playground to experiment and gain insight into the potential of AI in Android development.
Experiment 1: Turning designs into UI code
First, to turn designs into Compose UI code: Open the chat prompt section of Google AI Studio, upload an image of your app's UI screen (see example below) and enter the following prompt:
"Act as an Android app developer. For the image provided, use Jetpack Compose to build the screen so that the Compose Preview is as close to this image as possible. Also make sure to include imports and use Material3."
Then, click "run" to execute your query and see the generated code. You can copy the generated output directly into a new file in Android Studio.
Image uploaded: Designer mockup of an application's detail screen
Running the generated code (with minor fixes) in Android Studio
With this experiment, Gemini was able to infer details from the image and generate corresponding code elements. For example, the original image of the plant detail screen featured a "Care Instructions" section with an expandable icon — Gemini's generated code included an expandable card specifically for plant care instructions, showcasing its contextual understanding and code generation capabilities.
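To give a concrete sense of what such generated code can look like, here is a hedged Compose sketch of an expandable “Care Instructions” card in Material3. The component name and placeholder copy are illustrative, not Gemini’s actual output:

```kotlin
import androidx.compose.foundation.clickable
import androidx.compose.foundation.layout.*
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.KeyboardArrowDown
import androidx.compose.material.icons.filled.KeyboardArrowUp
import androidx.compose.material3.*
import androidx.compose.runtime.*
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

// Hypothetical sketch of an expandable care-instructions card (Material3).
@Composable
fun CareInstructionsCard() {
    var expanded by remember { mutableStateOf(false) }
    Card(modifier = Modifier.fillMaxWidth().padding(16.dp)) {
        Column(
            modifier = Modifier
                .clickable { expanded = !expanded }
                .padding(16.dp)
        ) {
            Row(verticalAlignment = Alignment.CenterVertically) {
                Text(
                    text = "Care Instructions",
                    style = MaterialTheme.typography.titleMedium,
                    modifier = Modifier.weight(1f)
                )
                Icon(
                    imageVector = if (expanded) Icons.Filled.KeyboardArrowUp
                                  else Icons.Filled.KeyboardArrowDown,
                    contentDescription = if (expanded) "Collapse" else "Expand"
                )
            }
            if (expanded) {
                // Placeholder copy; a real screen would bind plant data here.
                Text(text = "Water twice a week and keep in indirect sunlight.")
            }
        }
    }
}
```

In the live experiment, Gemini generated a similar structure directly from the image, including state handling for the expand/collapse behavior.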
Experiment 2: Smart UI fixes with Gemini in AI Studio
Inspired by "Circle to Search", another fun experiment you can try is to "circle" problem areas on a screenshot, along with relevant Compose code context, and ask Gemini to suggest appropriate code fixes.
You can explore this concept in Google AI Studio:
1. Upload Compose code and screenshot: Upload the Compose code file for a UI screen and a screenshot of its Compose Preview, with a red outline highlighting the issue—in this case, items in the Bottom Navigation Bar that should be evenly spaced.
Example: Preview with problem area highlighted
2. Prompt Gemini: Open the chat prompt section and enter
"Given this code file describing a UI screen and the image of its Compose Preview, please fix the part within the red outline so that the items are evenly distributed."
Google AI Studio: Smart UI Fixes with Gemini
3. Gemini's solution: Gemini returned code that successfully resolved the UI issue.
Example: Generated code fixed by Gemini
Example: Preview with fixes applied
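The kind of fix Gemini suggests here typically comes down to a small Compose change; a hedged sketch under that assumption (the function and `navItems` parameter are hypothetical, and the actual generated diff will vary):

```kotlin
import androidx.compose.foundation.layout.Arrangement
import androidx.compose.foundation.layout.Row
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier

// Before: Row(modifier = Modifier.fillMaxWidth()) { navItems() }
// packs the items toward the start of the bar.

// After: horizontalArrangement spreads the items evenly.
@Composable
fun BottomBar(navItems: @Composable () -> Unit) {
    Row(
        modifier = Modifier.fillMaxWidth(),
        horizontalArrangement = Arrangement.SpaceEvenly
    ) {
        navItems()
    }
}
```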
Experiment 3: Integrating Gemini prompts in your app
Gemini can streamline experimentation and development of custom app features. Imagine you want to build a feature that gives users recipe ideas based on an image of the ingredients they have on hand. In the past, this would have involved complex tasks like hosting an image recognition library, training your own ingredient-to-recipe model, and managing the infrastructure to support it all.
Now, with Gemini, you can achieve this with a simple, tailored prompt. Let's walk through how to add this "Cook Helper" feature into your Android app as an example:
1. Explore the Gemini prompt gallery: Discover example prompts or craft your own. We'll use the "Cook Helper" prompt.
2. Open and experiment in Google AI Studio: Test the prompt with different images, settings, and models to ensure the model responds as expected and the prompt aligns with your goals.
3. Generate the integration code: Once you're satisfied with the prompt's performance, click "Get code" and select "Android (Kotlin)". Copy the generated code snippet.
4. Integrate the Gemini API into Android Studio: Open your Android Studio project. You can either use the new Gemini API app template provided within Android Studio or follow this tutorial. Paste the generated code you copied into your project.
That's it - your app now has a functioning Cook Helper feature powered by Gemini. We encourage you to experiment with different example prompts or even create your own custom prompts to enhance your Android app with powerful Gemini features.
Our approach on bringing AI to Android Studio
While these experiments are promising, it's important to remember that large language model (LLM) technology is still evolving, and we're learning along the way. LLMs can be non-deterministic, meaning they can sometimes produce unexpected results. That's why we're taking a cautious and thoughtful approach to integrating AI features into Android Studio.
Our philosophy towards AI in Android Studio is to augment the developer and ensure they remain “in the loop.” In particular, when the AI is making suggestions or writing code, we want developers to be able to carefully audit the code before checking it into production. That’s why, for example, the new Code Suggestions feature in Canary automatically brings up a diff view for developers to preview how Gemini proposes to modify their code, rather than applying the changes blindly.
We want to make sure these features, like Gemini in Android Studio itself, are thoroughly tested, reliable, and truly useful to developers before we bring them into the IDE.
What's next?
We invite you to try these experiments and share your favorite prompts and examples with us using the #AndroidGeminiEra tag on X and LinkedIn as we continue to explore this exciting frontier together. Also, make sure to follow Android Developer on LinkedIn, Medium, YouTube, or X for more updates! AI has the potential to revolutionize the way we build Android apps, and we can't wait to see what we can create together.
In addition to the recent announcement of audit logs for API-based actions, we’re introducing the ability for admins to see new audit logs in Google Drive for activity triggered by Gemini for Google Workspace.
For example, if Gemini for Google Workspace accesses data from a set of files in response to a user query, an ‘item content accessed’ event is generated for each of the accessed files in the Drive log events.
Who’s impacted
Admins
Why it matters
These new audit logs offer admins greater transparency into how Gemini for Google Workspace and Gemini apps leverage your content from Drive, providing granular visibility for security and compliance.
Gemini leverages the content of your Drive to provide more personalized responses based on your prompts. These logs focus on instances where Gemini specifically accesses Drive files on behalf of users to fulfill their requests.
Posted by Terence Zhang – Developer Relations Engineer
At Google I/O, we unveiled a vision of Android reimagined with AI at its core. As Android developers, you're at the forefront of this exciting shift. By embracing generative AI (Gen AI), you'll craft a new breed of Android apps that offer your users unparalleled experiences and delightful features.
Gemini models are powering new generative AI apps both over the cloud and directly on-device. You can now build with Gen AI using our most capable models over the cloud with the Google AI client SDK or Vertex AI for Firebase in your Android apps. For on-device, Gemini Nano is our recommended model. We have also integrated Gen AI into developer tools: Gemini in Android Studio supercharges your developer productivity.
Let’s walk through the major announcements for AI on Android from this year's I/O sessions in more detail!
#1: Build AI apps leveraging cloud-based Gemini models
To kickstart your Gen AI journey, design the prompts for your use case with Google AI Studio. Once you are satisfied with your prompts, integrate the Gemini API directly into your app to access Google’s latest models such as Gemini 1.5 Pro and 1.5 Flash, both with one million token context windows (with two million available via waitlist for Gemini 1.5 Pro).
If you want to learn more about and experiment with the Gemini API, the Google AI SDK for Android is a great starting point. For integrating Gemini into your production app, consider using Vertex AI for Firebase (currently in Preview, with a full release planned for Fall 2024). This platform offers a streamlined way to build and deploy generative AI features.
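As a minimal starting point with the Google AI SDK for Android, a text-only call looks roughly like this. The key constant is a placeholder for illustration; in a real app, load the key from a secure location rather than hardcoding it:

```kotlin
import com.google.ai.client.generativeai.GenerativeModel

// Placeholder: in practice, supply the key via BuildConfig or secure storage.
const val GEMINI_API_KEY = "your-api-key"

// Minimal text-only call to the Gemini API via the Google AI client SDK.
suspend fun askGemini(prompt: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash",
        apiKey = GEMINI_API_KEY
    )
    return model.generateContent(prompt).text
}
```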
We are also launching the first Gemini API Developer competition (terms and conditions apply). Now is the best time to build an app integrating the Gemini API and win incredible prizes! A custom Delorean, anyone?
#2: Use Gemini Nano for on-device Gen AI
While cloud-based models are highly capable, on-device inference enables offline use, delivers low-latency responses, and ensures that data never leaves the device.
At I/O, we announced that Gemini Nano will be getting multimodal capabilities, enabling devices to understand context beyond text – like sights, sounds, and spoken language. This will help power experiences like Talkback, helping people who are blind or have low vision interact with their devices via touch and spoken feedback. Gemini Nano with Multimodality will be available later this year, starting with Google Pixel devices.
We also shared more about AICore, a system service managing on-device foundation models, enabling Gemini Nano to run on-device inference. AICore provides developers with a streamlined API for running Gen AI workloads with almost no impact on the binary size while centralizing runtime, delivery, and critical safety components for Gemini Nano. This frees developers from having to maintain their own models, and allows many applications to share access to Gemini Nano on the same device.
Gemini Nano is already transforming key Google apps, including Messages and Recorder, to enable Smart Compose and recording summarization capabilities, respectively. Outside of Google apps, we’re actively collaborating with developers who have compelling on-device Gen AI use cases and have signed up for our Early Access Program (EAP), including Patreon, Grammarly, and Adobe.
Adobe is one of these trailblazers, and they are exploring Gemini Nano to enable on-device processing for part of its AI assistant in Acrobat, providing one-click summaries and allowing users to converse with documents. By strategically combining on-device and cloud-based Gen AI models, Adobe optimizes for performance, cost, and accessibility. Simpler tasks like summarization and suggesting initial questions are handled on-device, enabling offline access and cost savings. More complex tasks such as answering user queries are processed in the cloud, ensuring an efficient and seamless user experience.
This is just the beginning: later this year, we’ll invest heavily in enabling even more developers and aim to launch with them.
#3: Use Gemini in Android Studio to help you be more productive
Besides powering features directly in your app, we’ve also integrated Gemini into developer tools. Gemini in Android Studio is your Android coding companion, bringing the power of Gemini to your developer workflow. Thanks to your feedback since its preview as Studio Bot at last year’s Google I/O, we’ve evolved our models, expanded to over 200 countries and territories, and now include this experience in stable builds of Android Studio.
At Google I/O, we previewed a number of features available to try in the Android Studio Koala preview release, like natural-language code suggestions and AI-assisted analysis for App Quality Insights. We also shared an early preview of multimodal input using Gemini 1.5 Pro, allowing you to upload images as part of your AI queries — enabling Gemini to help you build fully functional compose UIs from a wireframe sketch.
Starting today, we’re introducing Gemini for Workspace usage reports in the Admin console. This report gives admins an overarching view of how Gemini is being used in their organization, specifically:
Assigned Gemini licenses
Active Gemini users
The number of users using Gemini over time
Gemini usage reports in the Admin console
These reports will help admins understand how many users are using Gemini features and make informed decisions about expanding Gemini further within their organizations. We plan to introduce more reporting features over time, such as the ability to filter these reports by Organizational Units and Groups.
Additional details
Admins can access these reports in the Admin console under Menu > Generative AI > Gemini reports. Visit the Help Center to learn more about reviewing Gemini usage in your organization.
Available for Google Workspace customers with the Gemini Business and Gemini Enterprise add-ons.
We plan to introduce Gemini reports for the Gemini Education and Gemini Premium add-ons in the coming weeks. Stay tuned to the Workspace Updates blog for more information.