Tag Archives: Gemini

3 fun experiments to try for your next Android app, using Google AI Studio

Posted by Paris Hsu – Product Manager, Android Studio

We shared an exciting live demo from the Developer Keynote at Google I/O 2024 where Gemini transformed a wireframe sketch of an app's UI into Jetpack Compose code, directly within Android Studio. While we're still refining this feature to make sure you get a great experience inside of Android Studio, it's built on top of foundational Gemini capabilities which you can experiment with today in Google AI Studio.

Specifically, we'll delve into:

    • Turning designs into UI code: Convert a simple image of your app's UI into working code.
    • Smart UI fixes with Gemini: Receive suggestions on how to improve or fix your UI.
    • Integrating Gemini prompts in your app: Simplify complex tasks and streamline user experiences with tailored prompts.

Note: Google AI Studio offers various general-purpose Gemini models, whereas Android Studio uses a custom version of Gemini which has been specifically optimized for developer tasks. While this means that these general-purpose models may not offer the same depth of Android knowledge as Gemini in Android Studio, they provide a fun and engaging playground to experiment and gain insight into the potential of AI in Android development.

Experiment 1: Turning designs into UI code

First, to turn designs into Compose UI code: Open the chat prompt section of Google AI Studio, upload an image of your app's UI screen (see example below) and enter the following prompt:

"Act as an Android app developer. For the image provided, use Jetpack Compose to build the screen so that the Compose Preview is as close to this image as possible. Also make sure to include imports and use Material3."

Then, click "run" to execute your query and see the generated code. You can copy the generated output directly into a new file in Android Studio.

Image uploaded: Designer mockup of an application's detail screen

Moving image showing a custom chat prompt being created from the image provided in Google AI Studio
Google AI Studio custom chat prompt: Image → Compose

Moving image showing running the generated code in Android Studio
Running the generated code (with minor fixes) in Android Studio

With this experiment, Gemini was able to infer details from the image and generate corresponding code elements. For example, the original image of the plant detail screen featured a "Care Instructions" section with an expandable icon — Gemini's generated code included an expandable card specifically for plant care instructions, showcasing its contextual understanding and code generation capabilities.
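
For a sense of what that section looks like in code, here is a minimal, hand-written sketch of an expandable Material3 card; the composable name and placeholder copy are ours, not Gemini's exact output.

    import androidx.compose.animation.animateContentSize
    import androidx.compose.foundation.clickable
    import androidx.compose.foundation.layout.Column
    import androidx.compose.foundation.layout.Row
    import androidx.compose.foundation.layout.fillMaxWidth
    import androidx.compose.foundation.layout.padding
    import androidx.compose.material.icons.Icons
    import androidx.compose.material.icons.filled.KeyboardArrowDown
    import androidx.compose.material.icons.filled.KeyboardArrowUp
    import androidx.compose.material3.Card
    import androidx.compose.material3.Icon
    import androidx.compose.material3.Text
    import androidx.compose.runtime.*
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.unit.dp

    @Composable
    fun CareInstructionsCard() {
        // Whether the card is currently expanded; survives recomposition.
        var expanded by remember { mutableStateOf(false) }

        Card(
            modifier = Modifier
                .fillMaxWidth()
                .padding(16.dp)
                .clickable { expanded = !expanded }
                .animateContentSize() // animates the expand/collapse transition
        ) {
            Column(modifier = Modifier.padding(16.dp)) {
                Row(modifier = Modifier.fillMaxWidth()) {
                    Text(text = "Care Instructions", modifier = Modifier.weight(1f))
                    Icon(
                        imageVector = if (expanded) Icons.Filled.KeyboardArrowUp
                                      else Icons.Filled.KeyboardArrowDown,
                        contentDescription = if (expanded) "Collapse" else "Expand"
                    )
                }
                if (expanded) {
                    Text(text = "Water twice a week and keep in bright, indirect light.")
                }
            }
        }
    }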


Experiment 2: Smart UI fixes with Gemini in AI Studio

Inspired by "Circle to Search", another fun experiment you can try is to "circle" problem areas on a screenshot, along with relevant Compose code context, and ask Gemini to suggest appropriate code fixes.

You can explore this concept in Google AI Studio:

    1. Upload Compose code and screenshot: Upload the Compose code file for a UI screen and a screenshot of its Compose Preview, with a red outline highlighting the issue—in this case, items in the Bottom Navigation Bar that should be evenly spaced.

Example: Preview with problem area highlighted

    2. Prompt Gemini: Open the chat prompt section and enter

    "Given this code file describing a UI screen and the image of its Compose Preview, please fix the part within the red outline so that the items are evenly distributed."
Google AI Studio: Smart UI Fixes with Gemini

    3. Gemini's solution: Gemini returned code that successfully resolved the UI issue (a minimal sketch of one possible fix follows the preview images below).

Example: Generated code fixed by Gemini

Example: Preview with fixes applied
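
To give a flavor of the kind of change involved, here is a minimal, hand-written sketch of one way to distribute items evenly in a bottom bar using Arrangement.SpaceEvenly; the labels are placeholders and this is not the exact code Gemini returned. Material3's NavigationBar composable also spaces its NavigationBarItems evenly out of the box.

    import androidx.compose.foundation.layout.Arrangement
    import androidx.compose.foundation.layout.Row
    import androidx.compose.foundation.layout.fillMaxWidth
    import androidx.compose.foundation.layout.padding
    import androidx.compose.material3.Surface
    import androidx.compose.material3.Text
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.unit.dp

    @Composable
    fun EvenlySpacedBottomBar(labels: List<String> = listOf("Home", "Saved", "Profile")) {
        Surface(tonalElevation = 3.dp) {
            Row(
                modifier = Modifier
                    .fillMaxWidth()
                    .padding(vertical = 12.dp),
                // SpaceEvenly inserts equal gaps before, between, and after the items,
                // which fixes the uneven spacing circled in the preview above.
                horizontalArrangement = Arrangement.SpaceEvenly
            ) {
                labels.forEach { label -> Text(text = label) }
            }
        }
    }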

Experiment 3: Integrating Gemini prompts in your app

Gemini can streamline experimentation and development of custom app features. Imagine you want to build a feature that gives users recipe ideas based on an image of the ingredients they have on hand. In the past, this would have involved complex tasks like hosting an image recognition library, training your own ingredient-to-recipe model, and managing the infrastructure to support it all.

Now, with Gemini, you can achieve this with a simple, tailored prompt. Let's walk through how to add this "Cook Helper" feature into your Android app as an example:

    1. Explore the Gemini prompt gallery: Discover example prompts or craft your own. We'll use the "Cook Helper" prompt.

Google AI for Developers: Prompt Gallery

    2. Open and experiment in Google AI Studio: Test the prompt with different images, settings, and models to ensure the model responds as expected and the prompt aligns with your goals.

Moving image showing the Cook Helper prompt in Google AI for Developers
Google AI Studio: Cook Helper prompt

    3. Generate the integration code: Once you're satisfied with the prompt's performance, click "Get code" and select "Android (Kotlin)". Copy the generated code snippet.

Screengrab of using 'Get code' to obtain a Kotlin snippet in Google AI Studio
Google AI Studio: get code - Android (Kotlin)

    4. Integrate the Gemini API into Android Studio: Open your Android Studio project. You can either use the new Gemini API app template provided within Android Studio or follow this tutorial. Paste the copied generated prompt code into your project.
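
For orientation, a call into the Gemini API from Kotlin using the Google AI client SDK looks roughly like the sketch below. This is an illustration rather than the exact snippet that "Get code" produces; the model name, the BuildConfig.GEMINI_API_KEY field, and the prompt wording are assumptions made for this example.

    import android.graphics.Bitmap
    import com.google.ai.client.generativeai.GenerativeModel
    import com.google.ai.client.generativeai.type.content

    // Assumption: GEMINI_API_KEY is injected at build time (for example via the
    // Secrets Gradle plugin); never hard-code API keys in source.
    val cookHelperModel = GenerativeModel(
        modelName = "gemini-1.5-flash",
        apiKey = BuildConfig.GEMINI_API_KEY
    )

    // Sends a photo of the available ingredients together with a Cook Helper-style
    // prompt and returns the model's recipe suggestion as plain text.
    suspend fun suggestRecipe(ingredientsPhoto: Bitmap): String? {
        val response = cookHelperModel.generateContent(
            content {
                image(ingredientsPhoto)
                text("Suggest a recipe I can make with the ingredients in this photo.")
            }
        )
        return response.text
    }

Call suggestRecipe from a coroutine (for example, viewModelScope.launch) and render the returned text in your UI.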

That's it - your app now has a functioning Cook Helper feature powered by Gemini. We encourage you to experiment with different example prompts or even create your own custom prompts to enhance your Android app with powerful Gemini features.

Our approach to bringing AI to Android Studio

While these experiments are promising, it's important to remember that large language model (LLM) technology is still evolving, and we're learning along the way. LLMs can be non-deterministic, meaning they can sometimes produce unexpected results. That's why we're taking a cautious and thoughtful approach to integrating AI features into Android Studio.

Our philosophy towards AI in Android Studio is to augment the developer and ensure they remain "in the loop." In particular, when the AI is making suggestions or writing code, we want developers to be able to carefully audit the code before checking it into production. That's why, for example, the new Code Suggestions feature in Canary automatically brings up a diff view for developers to preview how Gemini is proposing to modify your code, rather than blindly applying the changes directly.

We want to make sure these features, like Gemini in Android Studio itself, are thoroughly tested, reliable, and truly useful to developers before we bring them into the IDE.

What's next?

We invite you to try these experiments and share your favorite prompts and examples with us using the #AndroidGeminiEra tag on X and LinkedIn as we continue to explore this exciting frontier together. Also, make sure to follow Android Developer on LinkedIn, Medium, YouTube, or X for more updates! AI has the potential to revolutionize the way we build Android apps, and we can't wait to see what we can create together.

Adding audit logs for Gemini for Google Workspace activity

What’s changing 

In addition to the recent announcement of audit logs for API-based actions, we’re introducing the ability for admins to see new audit logs in Google Drive for activity triggered by Gemini for Google Workspace. 

For example, if Gemini for Google Workspace accesses data from a set of files in response to a user query, an ‘item content accessed’ event is generated for each of the accessed files in the Drive log events. 


Who’s impacted 

Admins 


Why it matters 

These new audit logs offer admins greater transparency into how Gemini for Google Workspace and Gemini apps leverage your content from Drive, providing granular visibility for security and compliance. 

Gemini leverages the content of your Drive to provide more personalized responses based on your prompts. These logs focus on instances where Gemini specifically accesses Drive files on behalf of users to fulfill their requests. 


Getting started 

Rollout pace 

  • This feature is now available. 


Availability 

Available for Google Workspace: 
  • Gemini Business, Enterprise, Education, Education Premium 

Resources 

Top 3 Updates for Building with AI on Android at Google I/O ‘24

Posted by Terence Zhang – Developer Relations Engineer

At Google I/O, we unveiled a vision of Android reimagined with AI at its core. As Android developers, you're at the forefront of this exciting shift. By embracing generative AI (Gen AI), you'll craft a new breed of Android apps that offer your users unparalleled experiences and delightful features.

Gemini models are powering new generative AI apps both in the cloud and directly on-device. You can now build with Gen AI using our most capable models in the cloud with the Google AI client SDK or Vertex AI for Firebase in your Android apps. For on-device, Gemini Nano is our recommended model. We have also integrated Gen AI into developer tools - Gemini in Android Studio supercharges your developer productivity.

Let’s walk through the major announcements for AI on Android from this year's I/O sessions in more detail!

#1: Build AI apps leveraging cloud-based Gemini models

To kickstart your Gen AI journey, design the prompts for your use case with Google AI Studio. Once you are satisfied with your prompts, integrate the Gemini API directly into your app to access Google’s latest models such as Gemini 1.5 Pro and 1.5 Flash, both with one million token context windows (with two million available via waitlist for Gemini 1.5 Pro).

If you want to learn more about and experiment with the Gemini API, the Google AI SDK for Android is a great starting point. For integrating Gemini into your production app, consider using Vertex AI for Firebase (currently in Preview, with a full release planned for Fall 2024). This platform offers a streamlined way to build and deploy generative AI features.
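
As a rough, hedged sketch of what that integration can look like with the Vertex AI for Firebase Kotlin SDK (the model choice and helper function below are assumptions for this example, and the preview API surface may change):

    import com.google.firebase.Firebase
    import com.google.firebase.vertexai.vertexAI

    // Requires an app connected to a Firebase project; unlike the Google AI client SDK,
    // no Gemini API key is embedded in the app itself.
    val generativeModel = Firebase.vertexAI.generativeModel("gemini-1.5-flash")

    // Example helper: asks the model to summarize arbitrary text and returns the result.
    suspend fun summarize(text: String): String? =
        generativeModel.generateContent("Summarize the following text:\n$text").text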

We are also launching the first Gemini API Developer competition (terms and conditions apply). Now is the best time to build an app integrating the Gemini API and win incredible prizes! A custom DeLorean, anyone?


#2: Use Gemini Nano for on-device Gen AI

While cloud-based models are highly capable, on-device inference enables offline use, delivers low-latency responses, and ensures that data won’t leave the device.

At I/O, we announced that Gemini Nano will be getting multimodal capabilities, enabling devices to understand context beyond text – like sights, sounds, and spoken language. This will help power experiences like TalkBack, helping people who are blind or have low vision interact with their devices via touch and spoken feedback. Gemini Nano with Multimodality will be available later this year, starting with Google Pixel devices.

We also shared more about AICore, a system service managing on-device foundation models, enabling Gemini Nano to run on-device inference. AICore provides developers with a streamlined API for running Gen AI workloads with almost no impact on the binary size while centralizing runtime, delivery, and critical safety components for Gemini Nano. This frees developers from having to maintain their own models, and allows many applications to share access to Gemini Nano on the same device.

Gemini Nano is already transforming key Google apps, including Messages and Recorder to enable Smart Compose and recording summarization capabilities respectively. Outside of Google apps, we're actively collaborating with developers who have compelling on-device Gen AI use cases and signed up for our Early Access Program (EAP), including Patreon, Grammarly, and Adobe.

Moving image of Gemini Nano operating in Adobe

Adobe is one of these trailblazers: it is exploring Gemini Nano to enable on-device processing for part of its AI assistant in Acrobat, providing one-click summaries and allowing users to converse with documents. By strategically combining on-device and cloud-based Gen AI models, Adobe optimizes for performance, cost, and accessibility. Simpler tasks like summarization and suggesting initial questions are handled on-device, enabling offline access and cost savings. More complex tasks such as answering user queries are processed in the cloud, ensuring an efficient and seamless user experience.

This is just the beginning - later this year, we'll continue investing heavily to enable even more developers and aim to launch with them.

To learn more about building with Gen AI, check out the I/O talks Android on-device GenAI under the hood and Add Generative AI to your Android app with the Gemini API, along with our new documentation.


#3: Use Gemini in Android Studio to help you be more productive

Besides powering features directly in your app, we’ve also integrated Gemini into developer tools. Gemini in Android Studio is your Android coding companion, bringing the power of Gemini to your developer workflow. Thanks to your feedback since its preview as Studio Bot at last year’s Google I/O, we’ve evolved our models, expanded to over 200 countries and territories, and now include this experience in stable builds of Android Studio.

At Google I/O, we previewed a number of features available to try in the Android Studio Koala preview release, like natural-language code suggestions and AI-assisted analysis for App Quality Insights. We also shared an early preview of multimodal input using Gemini 1.5 Pro, allowing you to upload images as part of your AI queries — enabling Gemini to help you build fully functional Compose UIs from a wireframe sketch.


You can read more about the updates here, and make sure to check out What’s new in Android development tools.

Gemini for Workspace usage reports are now available in Admin console

What’s changing 

Starting today, we’re introducing Gemini for Workspace usage reports in the Admin console. This report gives admins an overarching view of how Gemini is being used in their organization, specifically: 
  • Assigned Gemini licenses
  • Active Gemini users
  • The number of users using Gemini over time


Gemini usage reports in the Admin console


These reports will help admins understand how many users are using Gemini features and make informed decisions about expanding Gemini further within their organizations. We plan to introduce more reporting features over time, such as the ability to filter these reports by Organizational Units and Groups.


Additional details

Admins can access these reports in the Admin console under Menu > Generative AI > Gemini reports. Visit the Help Center to learn more about reviewing Gemini usage in your organization.


Getting started

Rollout pace


Availability

  • Available for Google Workspace customers with the Gemini Business and Gemini Enterprise add-ons.
We plan to introduce Gemini reports for the Gemini Education and Gemini Education Premium add-ons in the coming weeks. Stay tuned to the Workspace Updates blog for more information.

Introducing Gemini offerings for Google Workspace for Education customers

What’s changing 

Beginning May 23, 2024, Google for Education customers will be able to leverage new and powerful ways of working, teaching, and learning with Gemini for Google Workspace through two new paid add-ons:

  • Gemini Education is a lower-priced offering best suited to help education institutions get started with generative AI in Workspace, with a monthly usage limit.

    Gemini Education will be available as an add-on for Google Workspace for Education: Education Fundamentals, Education Standard, the Teaching and Learning Upgrade, and Education Plus. 

  • Gemini Education Premium includes everything in Gemini Education, plus more advanced features like AI-powered note taking and summaries in Meet, AI-enhanced data loss prevention, and more coming soon. This add-on provides full access to and usage of generative AI tools in Workspace.

    Gemini Education Premium will be available as an add-on for Google Workspace for Education: Education Fundamentals, Education Standard, Teaching and Learning Upgrade, and Education Plus.

Note that Gemini for Google Workspace features are only available in English, Spanish and Portuguese* for education users over the age of 18.


Who’s impacted

Admins


Why it’s important

Gemini for Google Workspace provides access to our most capable generative AI models widely available across Workspace apps, like Docs, Gmail, Slides, and more. Inside and outside the classroom, you can use Gemini to help transform your work by:

  • Turning a blank page into a lesson plan template or a grant proposal in Docs
  • Creating an agenda for an upcoming professional development session in Sheets
  • Bringing presentations to life or illustrating a topic by creating original images in Slides, and more.

With both add-ons, you’ll also be able to chat with Gemini (gemini.google.com) safely and securely with enterprise-grade data protection. Gemini.google.com can help you speed up time-consuming tasks, from conducting research about IT security best practices to creating an alumni outreach plan. It can also help you generate fresh ideas and make learning more personal for your students, like re-leveling content or creating class exercises or assignments based on their interests.


Check out The Keyword blog for even more information about how we’re bringing Gemini to Google Workspace for Education.


Additional details


Coming soon to Gemini for Google Workspace for Education:

Further data protections
To further our robust privacy commitments, in the future educators and students 18 years and older will have added data protection when accessing Gemini at gemini.google.com with their school accounts, free of charge. This added protection ensures that your data is not reviewed by anyone to improve our models, is not used to train artificial intelligence models, and is not shared with other users or institutions. These protections will be applicable to our free Gemini experience for Workspace for Education customers and will be available in 40+ languages.


As a reminder, gemini.google.com is covered under your Google Workspace for Education Terms of Service*. Check out the Workspace Blog for more information about how we’re protecting your Google Workspace data in the era of Generative AI.


OpenStax and Data Commons extensions
Soon, you’ll be able to use Gemini in combination with OpenStax and Data Commons, along with guided practice quizzes to help people learn more confidently and with trusted sources. For example, you can ask OpenStax to “discuss the scientific significance of solar eclipses” to pull in accurate, trustworthy responses based on Rice University’s OpenStax educational resources. Or you can leverage Data Commons to visualize data about complex topics like climate change, jobs, economics, and more. You’ll also be able to work through guided practice quizzes and receive conversational feedback on each of your responses. We’ll provide more information on The Keyword and the Workspace Updates blog when this functionality becomes available.


Piloting Gemini in Classroom
We're also piloting Gemini in Classroom with new lesson planning features that are informed by LearnLM, our new family of models fine-tuned for learning, based on Gemini and grounded in educational research. See here for more information on joining the Google for Education Pilot Program.


Getting started

Rollout pace

  • The Gemini Education and Gemini Education Premium add-ons will be available beginning May 23, 2024

Resources


*Spanish and Portuguese currently have a limited feature set — learn more.
*See here for more information on the terms of service if you’re using gemini.google.com with a personal Google account.

Gemini for Google Workspace feature Help me write now available in Spanish and Portuguese

This announcement was part of Google I/O ‘24. Visit the Workspace Blog for more about new ways to engage with Gemini for Workspace and the Keyword Blog for more ways to stay productive with Gemini for Google Workspace.


What’s changing

Last year, we introduced AI-powered writing features that help you quickly refine existing work or get you started with something new in Google Docs and Gmail using Gemini for Google Workspace. 

Since then, Help me write has assisted numerous users in drafting content for things like emails, blog posts, business proposals, ad copy and so much more. In fact, 70% of Enterprise users who use Help me write in Docs or Gmail end up using Gemini's suggestions. Today, we’re excited to announce this feature is now available in Spanish and Portuguese. 
Help me write in Google Docs using Portuguese

Who’s impacted 

Admins and end users 


Why it’s important 

Users who write in Spanish and Portuguese can now benefit from AI-powered creation in their own language. 
Help me write in Gmail using Spanish

Getting started 


Rollout pace 


Availability 

Available for Google Workspace: 
  • Gemini Business, Enterprise, Education, Education Premium 
  • Google One AI Premium 

Resources 

Gemini (gemini.google.com) is now available to Google Workspace users in more territories and languages

This announcement was part of Google I/O ‘24. Visit the Workspace Blog for more about new ways to engage with Gemini for Workspace and the Keyword Blog for more ways to stay productive with Gemini for Google Workspace.


What’s changing

Earlier this year, we announced that Google Workspace customers with a Gemini Enterprise or Business add-on now have access to chat with Gemini at gemini.google.com. 


Starting today, we’re pleased to announce that Gemini (gemini.google.com) is now available in more than 35 languages:
  • Arabic
  • Bulgarian
  • Chinese (Simplified / Traditional)
  • Croatian
  • Czech
  • Danish
  • Dutch
  • English
  • Estonian
  • Farsi
  • Finnish
  • French
  • German
  • Greek
  • Hebrew
  • Hungarian
  • Indonesian
  • Italian
  • Japanese
  • Korean
  • Latvian
  • Lithuanian
  • Norwegian
  • Polish
  • Portuguese
  • Romanian
  • Russian
  • Serbian
  • Slovak
  • Slovenian
  • Spanish
  • Swahili
  • Swedish
  • Thai
  • Turkish
  • Ukrainian
  • Vietnamese

Gemini is also now available to Gemini Enterprise and Business users in the following locales:
  • France and French Territories
  • Hong Kong

Getting started

Rollout pace

  • Available immediately.

Availability

  • Gemini Enterprise is available as an add-on for Google Workspace:
    • Business Standard and Plus 
    • Enterprise Standard and Plus 
    • Education Fundamentals, Standard and Plus
    • Frontline Starter and Standard
    • Enterprise Essentials and Essentials Plus
    • Nonprofits

  • Gemini Business is available as an add-on for Google Workspace:
    • Business Starter, Standard and Plus 
    • Enterprise Starter, Standard and Plus
    • Frontline Starter and Standard
    • Essentials Starter
    • Enterprise Essentials and Essentials Plus
    • Nonprofits
Note that Gemini for Google Workspace features are only available for users over the age of 18.

Resources