Tag Archives: Gemini

Gemini Nano is now available on Android via experimental access

Posted by Taj Darra – Product Manager

Gemini, introduced last year, is Google’s most capable family of models yet; designed for flexibility, it can run on everything from data centers to mobile devices. Since announcing Gemini Nano, our most efficient model built for on-device tasks, we've been working with a limited set of partners to support a range of use cases for their apps.

Today, we’re opening up access to experiment with Gemini Nano to all Android developers with the AI Edge SDK via AICore. Developers will initially have access to experiment with text-to-text prompts on Pixel 9 series devices. Support for more devices and modalities will be added in the future. Check out our documentation and video to get started. Note that experimental access is for development purposes, and is not for production usage at this time.
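
Because experimental access currently targets the Pixel 9 series, you may want to gate any experimentation behind a device check. The following is a minimal, hypothetical guard using standard Android APIs; the SDK and its documentation may offer a more robust availability check.

import android.os.Build

// Hypothetical guard: experimental Gemini Nano access currently targets
// Pixel 9 series devices only. A model-name check is a rough heuristic,
// not an official availability API.
fun isLikelyGeminiNanoExperimentDevice(): Boolean =
    Build.MANUFACTURER.equals("Google", ignoreCase = true) &&
        Build.MODEL.startsWith("Pixel 9")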


Fast, private and cost-effective on-device AI

On-device generative AI processes prompts directly on your device without server calls. It offers many benefits: sensitive user data stays on the device, the feature works fully without internet connectivity, and there is no additional monetary cost for each inference.

Since on-device generative AI models run on devices with less computational power than cloud servers, they are significantly smaller and less generalized than their cloud-based equivalents. As a result, the model works best for clearly specified tasks rather than open-ended use cases such as chatbots. Here are some use cases you can try:

    • Rephrasing - Rewriting text to change its tone to be more casual or formal.
    • Smart reply - Given several chat messages in a thread, suggest the next likely response.
    • Proofreading - Removing spelling or grammatical errors from text.
    • Summarization - Generating a summary of a long document, either as a paragraph or as bullet points.

Check out our prompting strategies to achieve the best results when experimenting with the above use cases. If you want to test your own use case, you can download our sample app for an easy way to start experimenting with Gemini Nano.
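
For illustration, task-focused prompts for two of these use cases might look like the hypothetical Kotlin strings below; treat the wording as a starting point and consult the prompting strategies guide for tested patterns.

// Hypothetical prompts for two of the task-focused use cases above.
// The placeholder markers (<message>, <document>) are illustrative only.
val rephrasePrompt =
    "Rewrite the following message in a more formal tone: <message>"
val summaryPrompt =
    "Summarize the following document as three bullet points: <document>"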


Gemini Nano performance and usage

Compared to its predecessor, the model being made available to developers today (referred to in the academic paper as “Nano 2”) delivers a substantial improvement in quality. At nearly twice the size of the predecessor (“Nano 1”), it excels in both academic benchmarks and real-world applications, offering capabilities that rival much larger models.


Model     MMLU (5-shot)*   MATH (4-shot)*   Paraphrasing**   Smart Reply**
Nano 1    46%              14%              44%              44%
Nano 2    56%              23%              90%              82%

* As reported in Gemini: A Family of Highly Capable Multimodal Models. Note that both of these models are part of our Gemini 1.0 series.
** Percentage of good answers measured on public datasets via an autorater powered by Gemini 1.5 Pro.

Gemini Nano is already in use by Google apps. Pixel Screenshots, Talkback, Recorder, and many more have leveraged Gemini Nano’s text and image understanding to deliver new experiences:

    • Talkback - Android’s accessibility app leverages Gemini Nano’s multimodal capabilities to improve image descriptions for blind and low vision users.
    moving image of Talkback app UI highlighting improved image descriptions with multimodality model for users with low vision

    • Pixel Recorder - Gemini Nano with Multimodality enables support for longer recordings and higher-quality summaries.

Seamless model integration with AI Edge SDK using AICore

Integrating generative AI models directly into mobile apps is challenging due to the significant computational resources and storage space they require. To address this challenge, we developed AICore, a new system service in Android. AICore allows you to benefit from AI running directly on the device without needing to distribute runtimes, models and other components yourself.

To run inference with Gemini Nano in AICore, you use the AI Edge SDK. The AI Edge SDK lets developers customize prompts and inference parameters to their specific needs, giving them greater control over each inference.

To experiment with the AI Edge SDK, add the following to your app’s dependencies:

implementation("com.google.ai.edge.aicore:aicore:0.0.1-exp01")

The AI Edge SDK allows you to customize inference parameters, as illustrated in the sketch after this list. Some of the most commonly used parameters include:

    • Temperature, which controls randomness. Higher values increase the diversity and creativity of the output.
    • Top K, which specifies how many of the highest-ranking tokens are considered when sampling.
    • Candidate count, which sets the maximum number of responses to return.
    • Max output tokens, which caps the length of the response.
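
As a sketch, configuring these parameters might look like the following. The generationConfig builder and GenerativeModel constructor follow the integration guide, but treat the exact property names here as assumptions to verify against the SDK documentation.

// Sketch: building a GenerativeModel with custom inference parameters.
// Verify the property names against the AI Edge SDK documentation.
val config = generationConfig {
    context = applicationContext  // AICore needs an Android Context
    temperature = 0.2f            // lower values give more deterministic output
    topK = 16                     // sample from the 16 highest-ranking tokens
    candidateCount = 1            // maximum number of responses to return
    maxOutputTokens = 256         // cap on the length of the response
}
val generativeModel = GenerativeModel(generationConfig = config)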

When you are ready to run the inference with your model, the AI Edge SDK offers an easy way to pass in multiple strings as input to accommodate long inference data.

Here’s an example:

scope.launch {
    // Single string input prompt
    val input = "I want you to act as an English proofreader. I will " +
        "provide you texts, and I would like you to review them for any " +
        "spelling, grammar, or punctuation errors. Once you have finished " +
        "reviewing the text, provide me with any necessary corrections or " +
        "suggestions for improving the text: " +
        "These arent the droids your looking for." // intentional errors: this is the text to proofread
    val singleResponse = generativeModel.generateContent(input)
    print(singleResponse.text)

    // Or multiple strings as input
    val multiResponse = generativeModel.generateContent(
        content {
            text(
                "I want you to act as an English proofreader. I will " +
                    "provide you texts and I would like you to review them " +
                    "for any spelling, grammar, or punctuation errors."
            )
            text(
                "Once you have finished reviewing the text, provide me " +
                    "with any necessary corrections or suggestions for " +
                    "improving the text:"
            )
            text("These arent the droids your looking for.")
        }
    )
    print(multiResponse.text)
}
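
Because on-device inference can fail for reasons outside your app’s control during this experimental phase (for example, the model may still be downloading or the device may be unsupported), it is worth wrapping calls defensively. Here is a minimal sketch; catching a generic Exception is an assumption, and the integration guide documents the SDK’s actual error types.

scope.launch {
    try {
        val response = generativeModel.generateContent("Proofread: ...")
        print(response.text)
    } catch (e: Exception) {
        // Assumption: catch broadly and fall back gracefully, for example
        // to a cloud model or by disabling the feature for this session.
        Log.e("GeminiNano", "On-device inference failed", e)
    }
}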

Our integration guide has more information on the AI Edge SDK as well as detailed instructions to start your experimentation with Gemini Nano. To learn more about prompting, check out the Gemini prompting strategies.


Get Started

Learn more about Gemini Nano for app development by watching our video walkthrough, and try out Gemini Nano experimental access in your own app today.

We are excited to see what you build and welcome your input as you evaluate this new technology for your use cases! Post your creations on social media and include the hashtag #AndroidAI to share what you build. To share your ideas and feedback for on-device GenAI and help shape our APIs, you can file a ticket.

There’s a lot more that we’re covering this week to help you build great AI experiences on Android, so be sure to check out the rest of the AI on Android Spotlight Week content!

Welcome to AI on Android Spotlight Week

Posted by Joseph Lewis – Technical Writer, Android AI

AI on Android Spotlight Week this year runs September 30th to October 4th! As part of the Android “Spotlight Weeks” series, this week’s content and updates are your gateway to understanding how to integrate cutting-edge AI into your Android apps. Whether you're a seasoned Android developer, an AI enthusiast, or just starting out on your development journey, get ready for a week filled with insightful sessions, practical demos, and inspiring success stories that'll help you build intuitive and powerful AI integrations.

Throughout the week, we'll dive into the core technologies driving AI experiences on Android. This blog will be updated throughout the week with links to new announcements and resources, so check back here daily for updates!


Monday: Getting started with AI

September 30, 2024

Learn how to begin with AI on Android development. Understand which AI models and versions you can work with. Learn about developer tools to help you start building features empowered with AI.

We'll guide you through the differences between traditional programming and machine learning, and contrast traditional machine learning with generative AI. The post explains large language models (LLMs), the transformer architecture, and key concepts like context windows and embeddings. It also touches on fine-tuning and the future of LLMs on Android.

Read the blog post: A quick introduction to large language models for Android Developers

We'll then provide a look behind the scenes at our work improving developer productivity with Gemini in Android Studio. We'll discuss Studio's new AI code completion feature, how we've been working to improve the accuracy and quality of suggested code, and how this feature can benefit your workflow.

Read the blog post: Gemini in Android Studio: Code Completion gains powerful model improvements


Tuesday: On-device AI capabilities with Gemini Nano

October 1, 2024

Discover how Gemini Nano empowers Android developers to unlock the full potential of generative AI, offering personalization and privacy benefits for next-generation apps. We'll share how you can begin integrating your Android apps with on-device LLMs. Look for more information and announcements here on Tuesday!


Wednesday: On-device AI with custom models

October 2, 2024

On Wednesday, we'll help you understand how to bring your own AI model to Android devices, and how you can integrate tools and technologies from Google and other sources. The ability to run sophisticated AI models directly on devices – whether it's a smartphone, tablet, or embedded system – opens up exciting possibilities for better performance, privacy, usability, and cost efficiency.

We'll also give you a detailed walkthrough of how Android developers can leverage Google AI Edge Torch to convert PyTorch machine learning models for on-device execution, using the LiteRT and MediaPipe Tasks libraries. This walkthrough includes code examples and explanations for converting a MobileViT model for image classification and a DIS model for segmentation, and highlights the steps involved in preparing these models for seamless integration into Android applications. By following this guide, developers can harness PyTorch models to enhance their Android apps with advanced machine learning capabilities.


Thursday: Access cloud models with Android SDKs

October 3, 2024

Tap into the boundless potential of Gemini 1.5 Pro and Gemini 1.5 Flash, the revolutionary generative AI models that are redefining the capabilities of Android apps. With Gemini 1.5 Pro and 1.5 Flash, you'll have the tools you need to create apps that are truly intelligent and interactive.

On Thursday, we'll give you a codelab that'll help you understand how to integrate the Gemini API capabilities into your Android projects. We'll guide you through crafting effective prompts and integrating Vertex AI in Firebase. By the end of this hands-on tutorial, you'll be able to implement features like text summarization in your own app, all powered by the cutting-edge Gemini models.

Next we'll publish a blog post exploring the potential of the Gemini API with case studies. We'll delve into how Android developers are leveraging generative AI capabilities in innovative ways, showcasing real-world examples of apps that have successfully integrated the Gemini API. From meal planning to journaling and personalized user experiences, the article highlights examples of how Android developers are already taking advantage of Gemini's transformative capabilities in their apps.

We'll also share with you examples of advanced features of the Gemini API to go beyond simple text prompting. You'll learn how system instructions can shape the model behavior, how JSON support streamlines development, and how multimodal capabilities and function calling can unlock exciting new use cases for your apps.


Friday: Build with AI on Android and beyond

October 4, 2024

As the capstone for AI on Android Spotlight Week, we'll host a discussion with Kateryna Semenova, Oli Gaymond, Miguel Ramos, and Khanh LeViet to talk about building with AI on Android. We'll explore the latest AI advancements tailored for Android engineers, showcasing how these technologies can elevate your app development game. Through engaging discussions and real-world examples, we will unveil the potential of AI, from fast, private on-device solutions using Gemini Nano to the powerful capabilities of Gemini 1.5 Flash and Pro. We'll discuss building generative AI solutions rapidly using Vertex AI in Firebase. And we'll dive into harnessing the power of AI with safety and privacy in mind.


Work with Gemini beyond Android

As we wrap things up for AI on Android Spotlight Week, know that we're striving to provide comprehensive AI solutions for cross-platform Gemini development. The AI capabilities showcased during Android AI Week can extend to other platforms, such as built-in AI in Chrome. Web developers can leverage similar tools and techniques to create web experiences enhanced by AI. Developers can run Gemini Pro in the cloud for natural language processing and other complex user journeys. Or, you can explore the benefits of performing AI inference client-side, with Gemini Nano in Chrome.

Build with usability and privacy in mind

As you embark on your AI development journey, we want you to keep in mind a few important considerations:

    • Privacy: Prioritize user privacy and data security when implementing AI features, especially when handling sensitive user information. Where available, opt for on-device AI solutions like Gemini Nano to minimize data exposure.
    • Seamless user experience: Ensure that AI features seamlessly integrate into your app's overall user experience. AI should enhance the user experience, not disrupt it.
    • Ethical considerations: Develop and deploy AI technologies in a way that benefits society while minimizing potential harm. By considering fairness, transparency, privacy, accountability, and societal impact, developers can play a vital role in creating a future where AI serves humanity's best interests. Be mindful of the ethical implications of AI, such as potential biases in your AI models, and strive to create AI-powered features that are fair and inclusive.

AI on Android Spotlight Week is an opportunity to explore the latest in AI and its potential for Android app development. We encourage you to delve into the wealth of resources shared during the week and begin experimenting with AI in your own projects. The future of Android is rooted in AI and machine learning, and with the tools and knowledge shared during Android AI Week, developers are well-equipped to build the next generation of AI-powered apps.


What's next

Come back to this blog post for updates; we’ll add links to blog and video content and more throughout the week. Follow Android Developers on X and on LinkedIn, use the hashtag #AndroidAI to share your AI-powered Android creations, and join the vibrant community of developers pushing the boundaries of mobile AI.

Gemini in Android Studio: Code Completion Gains Powerful Model Improvements

Posted by Sandhya Mohan – Product Manager, Android Studio and Sarmad Hashmi – Software Engineer, Labs

The Android team believes AI has the potential to revolutionize coding and drive unprecedented innovation in software development, supercharging your development productivity. AI code completion is a key part of this effort within Gemini in Android Studio.

Since launching in May 2024, we've been hard at work improving this feature to provide the best possible experience for all Android developers. In this post, we want to take you “under the hood” on how we achieved a 40% relative increase in acceptance rate since release, and share some of our excitement for how we have seen Android developers use this feature. We hope you'll give it a try and let us know what you think.


An AI coding companion for every developer

Our vision for Gemini in Android Studio is to empower developers to build high quality Android apps — making it easy for developers to quickly write correct code aligned with Android's best practices. Launched last year, the first version of Studio Bot provided a chat experience where developers could access Android-specific guidance, powered by Google's latest AI models. Developers can ask Gemini in Android Studio for developer guidance, summaries of technical documentation, and critiques of their Android code. But in all these cases the feedback is reactive, responding to a user's question.

AI code completion takes these capabilities a step further by providing real-time feedback as you work, thinking ahead and suggesting the next few lines of code that you are likely to type based on the context from the surrounding file and what was just typed. You can think of AI code completion as a partner in your work — a coding companion waiting to offer guidance when you need it.

This feature is particularly well suited for tasks like defining business logic, creating database schemas, making network requests, or even writing tests — tasks that are often time-consuming and distract from building the core experience for your app. Many developers have told us how much they enjoy the speed AI completions brings to their app development workflow.

A moving image demonstrating AI autocomplete in Android Studio

Bringing more intelligent code completion to Android development

While we are excited to see how AI Code Completions have improved developers’ workflows, we know there's still more we can do to improve developer productivity. Development of Gemini in Android Studio is an ongoing, large-scale collaborative effort by many teams across Google. Earlier this year, we switched to Gemini 1.5 models and saw a significant improvement in the quality of code completions, resulting in a 2x increase in our developer productivity metrics, including overall acceptance rate for suggestions.

Once we started running A/B experiments to improve AI code completion, we identified several improvements around model quality, context, and heuristics. This overall effort led to a 40% relative increase in acceptance rate — how often users accept the AI's proposed code suggestions — since we launched. Since then, we've been exploring several improvements like:

    • Retrieval augmentation: With your opt-in consent, we use the files and dependencies most relevant to your current coding context to enhance the accuracy of suggestions. This is just the first step and we're continuing to experiment with adding even more context from the IDE as part of each request.
    • Filtering out low-confidence completions: Prioritize showing high quality suggestions where they are most relevant, and therefore most likely to be accepted. We do this by using a combination of the probabilities returned by the model and using a classifier trained to identify high-quality completions based on developer feedback.
    • Smarter post-processing: The LLM's output for AI Code Completion is fundamentally different from the output users expect in a chat session. Responses need to be tightly scoped in order to quickly output useful code, without surrounding expository text. We apply additional heuristics on the model output to ensure responses are concise and accurate, as well as making sure that the generated code is valid within the context of the user's codebase.
    • Improved models: We use opt-in feedback from Android Studio users, such as noting when a code suggestion is accepted or rejected, to adapt the code completion model to their coding style and preferences over time. We regularly ship new models with higher quality data based on your feedback.

We are also exploring metrics beyond acceptance rate to better measure AI impact on developer velocity, such as the percentage of total code written by AI.


Try it out!

We are rolling out these successful experiments and others as quickly as possible.

If you haven't tried AI code completions yet, you can enable this feature by clicking on the Gemini button in your editor window and signing in to your Google account.

Figure 1. Launching Gemini in Android Studio for the first time

After doing so, navigate to Settings > Tools > Gemini and select "Enable AI-based inline code completions".

Figure 2. Enabling "AI-based inline code completions"

As always, Google is committed to the responsible use of AI. Android Studio won't send any of your source code to servers without your consent — which means you'll need to opt-in to enable Gemini's developer assistance features in Android Studio. You can read more on Gemini in Android Studio's commitment to privacy.

Try enabling AI Code Completions in your project and tell us what you think on social media with #AndroidGeminiEra. We're excited to see how these enhancements help you build amazing apps!


This blog post is part of our series: AI on Android Spotlight Week, where we provide resources — blog posts, videos, sample code, and more — all designed to explore the latest in AI and its potential for Android app development.

Google Workspace Updates Weekly Recap – September 27, 2024

3 New updates

Unless otherwise indicated, the features below are available to all Google Workspace customers, and are fully launched or in the process of rolling out. Rollouts should take no more than 15 business days to complete if launching to both Rapid and Scheduled Release at the same time. If not, each stage of rollout should take no more than 15 business days to complete.



Use Gemini in Google Sheets to generate structured tables
We’re excited to announce that users can now generate structured tables with Gemini in the side panel of Google Sheets. Prior to this update, tables were output as plain text ranges without set column types or structure. Now, Gemini can help you simplify and accelerate spreadsheet building by bringing format and structure to unorganized ranges through table generation. | Rolling out to Rapid Release domains now; launch to Scheduled Release domains planned for October 17, 2024. | Available for Google Workspace customers with Gemini Business, Enterprise, Education, Education Premium add-ons and users with the Google One AI Premium subscription. | Visit the Help Center to learn more about using tables in Google Sheets.
Refreshed illustrations in the Google Calendar app on Android and iOS devices
We’re introducing more modern illustrations for Google Calendar events like coffee, lunch or doctor’s appointments on Android and iOS. You’ll also notice updated monthly illustrations for background images in the schedule view on your mobile device or tablet. | Rolling out now to Rapid Release and Scheduled Release domains. | Available to all Google Workspace customers, Workspace Individual Subscribers, and users with personal Google accounts.

New “Gemini Advanced” branding for Gemini for Google Workspace add-on users
The Gemini app (gemini.google.com) will now display a "Gemini Advanced" label for users with a Gemini Enterprise, Business, Education, or Education Premium add-on. This will better indicate to users the advanced capabilities of the app included in these plans. We're also updating this branding across our Help Center to better indicate which features and settings are specific to these add-ons. | Rollout to Rapid Release and Scheduled Release domains is complete. | Visit the Help Center to learn more about Gemini for Google Workspace.

Previous announcements

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.


Kickstart ideas, brainstorm activities, and differentiate content with the help of Gemini in Google Classroom 
This week, we officially introduced a new Gemini Education tab in Classroom that grants quick access to numerous AI tools. | Learn more about Gemini in Classroom. 

Introducing security advisor, a new set of tools and insights to help small businesses protect their organization against cyber attacks 
To help small businesses, we’re introducing security advisor, a set of new insights and tools designed to enhance security for small businesses – including threat defense, account security, and data protection capabilities. | Learn more about security advisor. 

Customize your Google Docs with polished cover images 
We’re making it easy for you to personalize and differentiate documents with full-bleed cover images that extend from one edge of your document to the other. | Learn more about cover images. 

Preview and test upcoming features on Google Meet hardware devices 
Using the new “Feature preview” setting, specific Google Meet hardware devices can be configured to test upcoming features prior to general availability. | Learn more about testing Meet hardware features. 

Gmail allows more senders to protect their brand using BIMI Common Mark Certificates 
We introduced two additional updates for BIMI that will continue to keep inboxes safe: Gmail now supports Common Mark Certificates (CMC) and BIMI verified check marks are now displayed on Android and iOS. | Learn more about BIMI Common Mark Certificates. 

Google Sheets tables are now integrated with conditional notifications 
There is a new integration between conditional notifications and tables in Sheets. | Learn more about tables and conditional notifications. 

Gemini in Gmail will now provide contextual Smart Replies 
We’re excited to announce a new Gemini in Gmail feature, contextual Smart Replies, that will offer more detailed responses to fully capture the intent of your message. | Learn more about contextual Smart Replies.

Completed rollouts

The features below completed their rollouts to Rapid Release domains, Scheduled Release domains, or both. Please refer to the original blog posts for additional details.


For a recap of announcements in the past six months, check out What’s new in Google Workspace (recent releases). 

Gemini in the Gmail app will now provide contextual Smart Replies

What’s changing

In 2017, we introduced Smart Reply in Gmail, a feature that utilizes machine learning to suggest three quick responses to emails based on the email's content. Thanks to this feature, users have saved time, especially when on the go, by easily responding to emails with minimal effort. 

However, we realize there are scenarios in which users would like to respond with more than a simple “Sounds good to me!” or a “Yes, I’m working on it”. As a result, we’re excited to announce a new Gemini in Gmail feature, contextual Smart Replies, that will offer more detailed responses to fully capture the intent of your message. 

After initiating an email reply, users will see a few response options at the bottom of their screen that take the full content of the email thread into consideration. Hover over each response to get a quick preview of the text, select the one that feels right for the situation, and edit it as you see fit or send the response immediately. 
Get reply suggestions with Gemini in Gmail

Who’s impacted 

End users 


Why it’s important 

The contextual Smart Reply feature saves time and makes inbox management easier. 


Getting started


Rollout pace 


Availability 

Available for Google Workspace customers with these add-ons: 
  • Gemini Business, Enterprise, Education, Education Premium 
  • Google One AI Premium 

Resources 

Kickstart ideas, brainstorm activities, and differentiate content with the help of Gemini in Google Classroom

What’s changing

Earlier this year, we introduced the Gemini Education and Gemini Education Premium add-ons to give education customers new and powerful ways of working, teaching and learning with Gemini for Google Workspace. We also piloted Gemini in Google Classroom with new lesson planning features that are informed by LearnLM, our new family of models fine-tuned for learning, based on Gemini and grounded in educational research.


Today, we’re excited to officially introduce a new Gemini Education tab in Classroom that grants quick access to the following AI tools: 

  • Outline a lesson plan: Use a scaffolded experience to generate lesson plan ideas based on what you’d like students to be able to demonstrate 
  • Craft a compelling hook: Spark curiosity and engage your students with a compelling start for your class 
  • Generate a quiz: Generate a quiz and export to Forms based on target grade level, length, and the types of questions you want to include 
  • Re-level text: Generate a new version of your text based on target grade level 
We are working closely with schools and educators globally to develop additional helpful tools. If you’re interested in joining the pilot program, learn more here.

Gemini in Google Classroom



Who’s impacted 

Admins and end users 


Why it’s important 

Gemini in Google Classroom provides educators with a suite of generative AI tools that can generate new and unique content and make learning more personal and engaging for students. 


Additional details 

Gemini in Classroom is only available in English for education users over the age of 18. 


Getting started 

Rollout pace 

Availability 

Available for Google Workspace customers with these add-ons: 
  • Gemini Education and Education Premium 

Resources 

AI on Android Spotlight Week begins September 30th

Posted by Joseph Lewis – Technical Writer, Android AI

AI on Android Spotlight Week is our latest installment of the Spotlight Weeks series. We'll have a full week of investigation into the latest advancements in AI for Android developers, featuring a variety of exciting activities, including an AMA with Google AI experts, technical talks, early access to our new tools and APIs, and demos of the latest Android generative AI technologies. AI on Android Spotlight Week kicks off on September 30th and runs through October 4th, with information and activities for developers, researchers, and enthusiasts interested in the future of generative AI app development on Android-powered devices.

Get the latest on Android AI developer strategies

During our Spotlight Week: AI on Android, we’ll feature a number of new and exciting opportunities to learn more about how to work with generative AI and machine learning for Android app development, including:

    • Conversations about on-device and cloud-based GenAI solutions with Gemini Nano, Vertex AI in Firebase, and LiteRT (formerly known as TensorFlow Lite)
    • Partner demos and deep dives into the latest AI technologies and how to integrate them in Android apps
    • Discussions around model capabilities, developer tools and integration strategies from web to mobile
    • Answers to top questions from the dev community about AI on Android

How to participate

Our Spotlight Week: AI on Android will happen entirely online, across Android Developers channels - YouTube, X, LinkedIn, and d.android.com: check the Android AI developer page on Monday, September 30, 2024 to read our next blog post with full details!

Follow @AndroidDev on X for the latest updates, help spread the word about AI on Android Spotlight Week, and use #AndroidAI on your favorite social media platforms to ask questions and share your AI projects with the community. We’re excited for you to join us!

NotebookLM now available as an Additional Service

What’s changing 

Last year, we introduced an Early Access App called NotebookLM, an experimental product using some of Google's most advanced models, like Gemini 1.5 Pro, that helps you gain critical insights grounded in the content of source documents you trust. 

Today, we’re excited to announce that NotebookLM is officially available as an Additional Service.

Over the past year, NotebookLM, now available globally in over 100 languages, has been made more powerful with new features, and early users have been using NotebookLM to supercharge their learning and work. For example, NotebookLM can: 
  • Be an interactive expert in your trusted sources: Once you upload source documents (e.g. Google Docs and Slides, PDFs, web URLs, copied text) into a notebook, you can ask NotebookLM questions about the information in your sources. It will then respond with an answer grounded in the sources you’ve uploaded, along with inline citations from those documents to show you what NotebookLM based its answers on.

  • Generate new ideas and connect dots: NotebookLM can also be used to generate a variety of content based on your sources, like summaries, briefing docs, timelines, FAQs, study guides or even audio overviews (a new feature that lets you listen to a conversation about your source). NotebookLM can also spark creativity, help you brainstorm new ideas and make connections in your sources. You can save a response to a note, so you can come back to polish it later.

NotebookLM demo


Who’s impacted

Admins and end users 18+ 


Why you’d use it 

NotebookLM is an AI-powered research assistant that lets people in your organization interact with trusted source content to get grounded insights. You can upload sources, such as your research notes, course materials, interview transcripts, or corporate documents, and instantly NotebookLM becomes an expert in the material that matters most to you. 


Additional details 

As an Enterprise or Education user whose use of Google Drive is subject to the Workspace Terms of Service or the Workspace for Education Terms of Service, your uploads, queries and the model's responses in NotebookLM will not be used or reviewed by human reviewers to train AI models. NotebookLM is an Additional Service covered under the Google Terms of Service. 


Getting started 


Rollout pace

  • This feature is available now. 

Availability

  • Available to all Google Workspace customers, Workspace Individual Subscribers, and users with personal Google accounts 

Resources