

In February we announced Gemma, our family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. The community's incredible response – including impressive fine-tuned variants, Kaggle notebooks, integration into tools and services, recipes for RAG using databases like MongoDB, and lots more – has been truly inspiring.
Today, we're excited to announce our first round of additions to the Gemma family, expanding the possibilities for ML developers to innovate responsibly: CodeGemma for code completion, code generation, and instruction following, and RecurrentGemma, an efficiency-optimized architecture for research experimentation. We're also sharing updates to Gemma and our terms, based on invaluable feedback we've heard from the community and our partners.
Harnessing the foundation of our Gemma models, CodeGemma brings powerful yet lightweight coding capabilities to the community. CodeGemma models are available as a 7B pretrained variant that specializes in code completion and code generation tasks, a 7B instruction-tuned variant for code chat and instruction-following, and a 2B pretrained variant for fast code completion that fits on your local computer. CodeGemma models have several advantages:
- Intelligent code completion and generation: Complete lines, functions, and even generate entire blocks of code – whether you're working locally or leveraging cloud resources.
- Enhanced accuracy: Trained on 500 billion tokens of primarily English language data from web documents, mathematics, and code, CodeGemma models generate code that's not only more syntactically correct but also semantically meaningful, helping reduce errors and debugging time.
- Multi-language proficiency: Your invaluable coding assistant for Python, JavaScript, Java, and other popular languages.
- Streamlined workflows: Integrate a CodeGemma model into your development environment to write less boilerplate, and focus on interesting and differentiated code that matters – faster.
[Table: performance of CodeGemma compared with other similar models on single- and multi-line code completion tasks. Learn more in the technical report.]
Learn more about CodeGemma in our report or try it in this quickstart guide.
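If you want to experiment right away, here is a minimal sketch of fill-in-the-middle completion with the 2B variant through Hugging Face Transformers. The checkpoint ID and the FIM control tokens are taken from the public model card; verify them there before relying on this.

```python
# Minimal sketch: fill-in-the-middle completion with the 2B CodeGemma variant
# via Hugging Face Transformers. The checkpoint ID and FIM control tokens below
# are taken from the public model card; verify them before relying on this.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/codegemma-2b"  # assumed checkpoint name; requires accepting the model terms
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ask the model to fill in the body between a function signature and its return.
prompt = (
    "<|fim_prefix|>def mean(xs):\n    "
    "<|fim_suffix|>\n    return total / len(xs)<|fim_middle|>"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)

# Decode only the newly generated tokens (the suggested middle).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```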
RecurrentGemma is a technically distinct model that leverages recurrent neural networks and local attention to improve memory efficiency. While it achieves benchmark performance similar to the Gemma 2B model, RecurrentGemma's unique architecture results in several advantages:
- Reduced memory usage: Lower memory requirements allow for the generation of longer samples on devices with limited memory, such as single GPUs or CPUs.
- Higher throughput: Because of its reduced memory usage, RecurrentGemma can perform inference at significantly higher batch sizes, thus generating substantially more tokens per second (especially when generating long sequences).
- Research innovation: RecurrentGemma showcases a non-transformer model that achieves high performance, highlighting advancements in deep learning research.
[Chart: RecurrentGemma maintains its sampling speed regardless of sequence length, while Transformer-based models like Gemma slow down as sequences get longer.]
To understand the underlying technology, check out our paper. For practical exploration, try the notebook, which demonstrates how to finetune the model.
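As a quick way to try the model, here is a minimal sketch of sampling from RecurrentGemma through the standard Transformers causal-LM interface; the checkpoint ID is an assumption, and support requires a recent Transformers release.

```python
# Minimal sketch: sampling from RecurrentGemma through the generic Transformers
# causal-LM interface (support landed in recent Transformers releases). The
# checkpoint ID "google/recurrentgemma-2b" is an assumption; adjust as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/recurrentgemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Recurrent architectures handle long sequences efficiently because"
inputs = tokenizer(prompt, return_tensors="pt")

# Longer generations are where the fixed-size recurrent state pays off:
# memory use does not grow with the number of generated tokens.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```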
Guided by the same principles of the original Gemma models, the new model variants offer:
- Open availability: Available to everyone under flexible terms of use, encouraging innovation and collaboration.
- High-performance and efficient capabilities: Advances the capabilities of open models with code-specific domain expertise and optimized design for exceptionally fast completion and generation.
- Responsible design: Our commitment to responsible AI helps ensure the models deliver safe and reliable results.
- Flexibility for diverse software and hardware:
- Both CodeGemma and RecurrentGemma: Built with JAX and compatible with JAX, PyTorch, Hugging Face Transformers, and Gemma.cpp, enabling local experimentation and cost-effective deployment across various hardware, including laptops, desktops, NVIDIA GPUs, and Google Cloud TPUs.
- CodeGemma: Additionally compatible with Keras, NVIDIA NeMo, TensorRT-LLM, Optimum-NVIDIA, MediaPipe, and availability on Vertex AI.
- RecurrentGemma: Support for all the aforementioned products will be available in the coming weeks.
Alongside the new model variants, we're releasing Gemma 1.1, which includes performance improvements. Additionally, we've listened to developer feedback, fixed bugs, and updated our terms to provide more flexibility.
These first Gemma model variants are available in various places worldwide, starting today on Kaggle, Hugging Face, and Vertex AI Model Garden. Here's how to get started:
- Access the models: Visit the Gemma website, Vertex AI Model Garden, Hugging Face, NVIDIA NIM APIs, or Kaggle for download instructions.
- Explore integration options: Find guides and resources for integrating the models with your favorite tools and platforms.
- Experiment and innovate: Add a Gemma model variant to your next project and explore its capabilities.
We invite you to try the CodeGemma and RecurrentGemma models and share your feedback on Kaggle. Together, let's shape the future of AI-powered content creation and understanding.
As part of the next chapter of our Gemini era, we announced we were bringing Gemini to more products. Today we’re excited to announce that Android Studio is using the Gemini 1.0 Pro model to make Android development faster and easier, and through our internal testing we’ve seen significant improvements in response quality over the last several months. To make this transition clear, Studio Bot is now called Gemini in Android Studio.
Gemini in Android Studio is an AI-powered coding assistant that you can access directly in the IDE. It helps you develop high-quality Android apps faster by generating code for your app, providing complex code completions, answering your questions, finding relevant resources, adding code comments, and more, all without ever having to leave Android Studio. It is available in 180+ countries and territories in Android Studio Jellyfish.
If you were already using Studio Bot in the canary channel, you’ll continue experiencing the same helpful and powerful features, but you’ll notice improved quality in responses compared to earlier versions.
Gemini in Android Studio understands natural language, so you can ask development questions in your own words. You can enter questions in the chat window, ranging from simple, open-ended ones to specific problems you need help with.
Here are some examples of the types of queries it can answer:
- How do I add camera support to my app?
- Using Compose, I need a login screen with the following: a username field, a password field, a 'Sign In' button, a 'Forgot Password?' link. I want the password field to obscure the input.
- What's the best way to get location on Android?
- I have an 'orders' table with columns like 'order_id', 'customer_id', 'product_id', 'price', and 'order_date'. Can you help me write a query that calculates the average order value per customer over the last month?
Gemini in Android Studio remembers the context of the conversation, so you can also ask follow-up questions, such as “Can you give me the code for this in Kotlin?” or “Can you show me how to do it in Compose?”
Gemini in Android Studio can also help you be more productive with powerful AI code completions. You can receive multi-line code completion suggestions, as well as suggestions for adding comments and documentation to your code.
Gemini in Android Studio was designed with privacy in mind. Gemini is only available after you log in and enable it, and you don’t need to send your code context to take advantage of most features. By default, Gemini in Android Studio’s chat responses are based purely on conversation history, and you control whether to share additional context for customized responses. You can update this anytime in Android Studio > Settings, at a granular, per-project level. You can also exclude specific files and folders through a .aiexclude file. Much like our work on other AI projects, we adhere to a set of AI Principles that hold us accountable. Learn more here.
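For illustration, a hypothetical .aiexclude file might look like the following; the documentation describes a .gitignore-style syntax, and the specific paths here are made up for the example, so consult the official docs for the exact rules.

```
# Hypothetical .aiexclude at the project root; paths are purely illustrative.
# Syntax is described as gitignore-like; confirm details in the official docs.
apikeys.properties
secrets/
app/src/main/java/com/example/payments/
```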
Not only does Android Studio use Gemini to help you be more productive, it can also help you take advantage of Gemini models to create AI-powered features in your own applications. Get started in minutes using the Gemini API starter template, available in the canary release channel of Android Studio under File > New Project > Gemini API Starter. You can also use the code sample available at File > Import Sample > Google Generative AI sample.
The Gemini API is multimodal, meaning it supports both image and text inputs. For example, it can power conversational chat, summarization, translation, caption generation, and more, using text and image inputs.
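While the starter template itself is Kotlin-based, the request shape is easy to see in a short sketch using the Gemini API’s Python client; the model name, API key, and image path below are placeholders and assumptions, so check the current Gemini API documentation.

```python
# Sketch of a multimodal Gemini API call with the Python client
# (pip install google-generativeai pillow). The model name, API key, and image
# path are placeholders/assumptions; consult the current Gemini API docs.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder API key
model = genai.GenerativeModel("gemini-pro-vision")  # assumed multimodal model name

image = Image.open("screenshot.png")  # hypothetical local image
response = model.generate_content([image, "Write a one-sentence caption for this image."])
print(response.text)
```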
Gemini in Android Studio is still in preview, but we have added many feature improvements — and now a major model update — since we released the experience in May 2023. It is currently no-cost for developers to try out. Now is a great time to test it and let us know what you think, before we release this experience to stable.
Stay updated on the latest by following us on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!
Keeping people safe and their data secure and private is a top priority for Android. That is why we took our time when designing the new Find My Device, which uses a crowdsourced device-locating network to help you find your lost or misplaced devices and belongings quickly – even when they’re offline. We gave careful consideration to the potential user security and privacy challenges that come with device finding services.
During development, it was important for us to ensure the new Find My Device was secure by default and private by design. To build a private, crowdsourced device-locating network, we first conducted user research and gathered feedback from privacy and advocacy groups. Next, we developed multi-layered protections across three main areas: data safeguards, safety-first protections, and user controls. This approach provides defense-in-depth for Find My Device users.
How location crowdsourcing works on the Find My Device network
The Find My Device network locates devices by harnessing the Bluetooth proximity of surrounding Android devices. Imagine you drop your keys at a cafe. The keys themselves have no location capabilities, but they may have a Bluetooth tag attached. Nearby Android devices participating in the Find My Device network report the location of the Bluetooth tag. When you realize you have lost your keys and log into the Find My Device mobile app, you can see the aggregated location contributed by nearby Android devices and go retrieve your keys.
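To make the roles concrete, here is a deliberately simplified toy model of the reporting and lookup steps. It is not Google’s actual protocol, which adds end-to-end encryption, aggregation, rate limiting, and the other safeguards discussed below; it only illustrates who contributes what.

```python
# Toy illustration only: a drastically simplified model of the reporting and
# lookup roles in a crowdsourced finding network. The real Find My Device
# network adds end-to-end encryption, aggregation, rate limits, and other
# safeguards that are deliberately omitted here.
from collections import defaultdict

# tag_id -> list of (latitude, longitude) sightings reported by nearby phones
sightings: defaultdict[str, list[tuple[float, float]]] = defaultdict(list)

def report_sighting(tag_id: str, location: tuple[float, float]) -> None:
    """A nearby Android device detects a Bluetooth tag and reports its own location."""
    sightings[tag_id].append(location)

def locate(tag_id: str) -> tuple[float, float] | None:
    """The owner's app aggregates contributed reports into an approximate position."""
    reports = sightings[tag_id]
    if not reports:
        return None
    lat = sum(r[0] for r in reports) / len(reports)
    lon = sum(r[1] for r in reports) / len(reports)
    return (lat, lon)

# Example: two passing phones report the tag near a cafe; the owner looks it up.
report_sighting("keys-tag", (37.4220, -122.0841))
report_sighting("keys-tag", (37.4221, -122.0843))
print(locate("keys-tag"))
```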
Find My Device network protections
Let’s dive into key details of the multi-layered protections for the Find My Device network:
In addition to careful security architectural design, the new Find My Device network has undergone internal Android red team testing. The Find My Device network has also been added to the Android security vulnerability rewards program to take advantage of Android’s global ecosystem of security researchers. We’re also engaging with select researchers through our private grant program to encourage more targeted research.
Prioritizing user safety on Find My Device
Together, these multi-layered user protections help mitigate potential risks to user privacy and safety while allowing users to effectively locate and recover lost devices.
As bad actors continue to look for new ways to exploit users, our work to help keep users safe on Android is never over. We have an unwavering commitment to continue to improve user protections on Find My Device and prioritize user safety.
For more information about Find My Device on Android, please visit our help center. You can read the Find My Device Network Accessory specification here.
The Dev channel has been updated to 125.0.6396.3 for Windows, Mac and Linux.
A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Prudhvi Bommana
Google Chrome