Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Introducing Gemma models in Keras

Posted by Martin Görner – Product Manager, Keras

The Keras team is happy to announce that Gemma, a family of lightweight, state-of-the-art open models built from the same research and technology that we used to create the Gemini models, is now available in the KerasNLP collection. Thanks to Keras 3, Gemma runs on JAX, PyTorch and TensorFlow. With this release, Keras is also introducing several new features specifically designed for large language models: a new LoRA API (Low Rank Adaptation) and large-scale model-parallel training capabilities.

If you want to dive directly into code samples, head here:


Get started

Gemma models come in portable 2B and 7B parameter sizes, and deliver significant advances against similar open models, and even some larger ones. For example:

  • Gemma 7B scores a new best-in-class 64.3% of correct answers on the MMLU language understanding benchmark (vs. 62.5% for Mistral-7B and 54.8% for Llama2-13B)
  • Gemma adds +11 percentage points to the GSM8K benchmark score for grade-school math problems (46.4% for Gemma 7B vs. Mistral-7B 35.4%, Llama2-13B 28.7%)
  • and +6.1 percentage points of correct answers in HumanEval, a coding challenge (32.3% for Gemma 7B, vs. Mistral 7B 26.2%, Llama2 13B 18.3%).

Gemma models are offered with a familiar KerasNLP API and a super-readable Keras implementation. You can instantiate the model with a single line of code:

gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

And run it directly on a text prompt. Yes, tokenization is built in, although you can easily split it out if needed; read the KerasNLP guide to see how.

gemma_lm.generate("Keras is a", max_length=32)
> "Keras is a popular deep learning framework for neural networks..."

Try it out here: Get started with Gemma models


Fine-tuning Gemma Models with LoRA

Thanks to Keras 3, you can choose the backend on which you run the model. Here is how to switch:

os.environ["KERAS_BACKEND"] = "jax"  # Or "tensorflow" or "torch".
import keras # import keras after having selected the backend

Keras 3 comes with several new features specifically for large language models. Chief among them is a new LoRA API (Low Rank Adaptation) for parameter-efficient fine-tuning. Here is how to activate it:

gemma_lm.backbone.enable_lora(rank=4)
# Note: rank=4 adds a trainable low-rank update (the product A x B of two
# rank-4 matrices) to the relevant weight matrices, so only these small
# factors need to be trained.

This single line drops the number of trainable parameters from 2.5 billion to 1.3 million!
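
From there, fine-tuning follows the usual Keras workflow. Here is a minimal sketch, assuming my_dataset is a placeholder for your own text dataset and that the sequence length and learning rate are placeholder choices:

# Minimal LoRA fine-tuning sketch. `my_dataset` stands in for your own
# dataset of text strings (e.g. a tf.data.Dataset or a list of strings).
gemma_lm.preprocessor.sequence_length = 256   # keep sequences short to save memory
gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(learning_rate=5e-5),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(my_dataset, epochs=1, batch_size=1)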

Try it out here: Fine-tune Gemma models with LoRA.


Fine-tuning Gemma models on multiple GPUs/TPUs

Keras 3 also supports large-scale model training and Gemma is the perfect model to try it out. The new Keras distribution API offers data-parallel and model-parallel distributed training options. The new API is meant to be multi-backend but for the time being, it is implemented for the JAX backend only, because of its proven scalability (Gemma models were trained with JAX).
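
If you only need data parallelism (replicating the model and sharding batches across accelerators), a minimal sketch with the same keras.distribution API looks like this; the device list below simply picks up all available accelerators:

# Data-parallel alternative: replicate the model on every accelerator and
# shard the data batches across them.
data_parallel = keras.distribution.DataParallel(
    devices=keras.distribution.list_devices())
keras.distribution.set_distribution(data_parallel)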

To fine-tune the larger Gemma 7B, a distributed setup is useful, for example a TPUv3 with 8 TPU cores that you can get for free on Kaggle, or an 8-GPU machine from Google Cloud. Here is how to configure the model for distributed training, using model parallelism:

device_mesh = keras.distribution.DeviceMesh(
    (1, 8),                                     # mesh topology
    ["batch", "model"],                         # named mesh axes
    devices=keras.distribution.list_devices(),  # actual accelerators
)

# Model layout config: regexes on layer names mapped to shard dimensions
layout_map = keras.distribution.LayoutMap(device_mesh)
layout_map["token_embedding/embeddings"] = (None, "model")
layout_map["decoder_block.*attention.*(query|key|value).*kernel"] = (
    None, "model", None)
layout_map["decoder_block.*attention_output.*kernel"] = (
    None, None, "model")
layout_map["decoder_block.*ffw_gating.*kernel"] = ("model", None)
layout_map["decoder_block.*ffw_linear.*kernel"] = (None, "model")

# Set the distribution and load the model
model_parallel = keras.distribution.ModelParallel(
    device_mesh, layout_map, batch_dim_name="batch")
keras.distribution.set_distribution(model_parallel)
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_7b_en")
# Ready: you can now train with model.fit() or generate text with generate()

This code snippet sets up the 8 accelerators as a 1 x 8 mesh whose two dimensions are called “batch” and “model”. Model weights are sharded on the “model” dimension, here split across the 8 accelerators, while data batches are not partitioned since the “batch” dimension is 1.

Try it out here: Fine-tune Gemma models on multiple GPUs/TPUs.


What’s Next

We will soon be publishing a guide showing you how to correctly partition a Transformer model and write the 6 lines of partitioning setup above. It is not very long but it would not fit in this post.

You will have noticed that layer partitions are defined through regexes on layer names. You can check layer names with this code snippet; we ran it to construct the LayoutMap above.

# This is for the first Transformer block only,
# but they all have the same structure.
tlayer = gemma_lm.backbone.get_layer('decoder_block_0')
for variable in tlayer.weights:
    print(f'{variable.path:<58}  {str(variable.shape):<16}')

Full GSPMD model parallelism works here with just a few partitioning hints because Keras passes these settings to the powerful XLA compiler which figures out all the other details of the distributed computation.


We hope you will enjoy playing with Gemma models. Here is also an instruction-tuning tutorial that you might find useful. And by the way, if you want to share your fine-tuned weights with the community, the Kaggle Model Hub now supports user-tuned weights uploads. Head to the model page for Gemma models on Kaggle and see what others have already created!

Build with Gemini models in Project IDX

Posted by Ali Satter – AI Lead, Roman Nurik – Design Lead

A few weeks ago, we announced a series of product updates to Project IDX to help streamline and simplify full-stack, multiplatform software development. This week, we’re excited to share how Project IDX uses Gemini models to provide you with AI features to further speed up and refine your end-to-end developer workflow.

Project IDX launched with support for AI-powered code completion, an assistive chatbot, and contextual code actions like “add comments” and “explain this code” to help you write high-quality code faster. Since launch, and thanks to your feedback, we’ve been working hard to add new AI functionality to help boost your productivity even more.


Work faster with inline AI assistance

You can now get inline AI assistance inside any file by pressing Cmd/Ctrl + I. Simply describe the changes you want to make to your code and IDX inline AI assistance will provide real-time error correction, code suggestions, and auto-completion in your code.

We integrated these AI enhancements directly into Project IDX’s centralized workspace to equip you with the necessary tools and resources for full-stack app development where and when you need them. From setting up your workspace to testing your app, IDX AI assistance helps accelerate and improve your workflow, ensuring that your end-to-end development experience is faster, easier, and higher quality.

For example, let’s say you want to add an authenticated API endpoint to your server. You can tell IDX AI to write the code necessary to enable secure task management using Firebase Authentication and Cloud Firestore. Given an input prompt, IDX AI assistance can write the code to construct the route, determine which APIs to use to verify the token, and save the data to the database. Instead of writing boilerplate code, you can focus on higher-level design and problem solving.

moving image illustrating the use of an input prompt in Project IDX to generate corresponding code
Input prompt for reference: Create a POST endpoint named /tasks. Get the ID Token from a cookie named _session. Verify this token with the Firebase Admin SDK. Use the UID property to assign the item to the user. Then save a task item with a server timestamp for createdAt to the Firestore database using the admin SDK.

Then, let's say you want to clean up your code a bit to improve its quality, readability, and maintainability. IDX AI assistance can help you quickly and easily refactor your code, so you can get right into optimizing your work without the hassle of manual refactoring.

moving image illustrating the use of input prompt: Refactor to use Node’s promise API.
Input prompt for reference: Refactor to use Node’s promise API.

And, as you wrap up your project, IDX AI can help you test and debug your code to make sure your application is running smoothly before deployment. Tell IDX AI assistance to write you a unit test for a function to ensure it’s working properly, saving you time and effort as you inspect the quality of your app.

moving image illustrating the use of input prompt: Create a unit test for this function
Input prompt for reference: Create a unit test for this function

Easily add AI features with the Gemini API template

We’re also simplifying the process of building with the Gemini API with Project IDX’s new Gemini API template. The Gemini API template uses the Gemini Pro model to embed AI-powered features into your applications without additional configuration on your end, so you can get started working with the Gemini API quickly and easily. There's even an option to use the Gemini API via the popular LangChain framework to simplify the process of building LLM-powered apps.
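
For readers who want a feel for the LangChain route outside the template, here is a minimal Python sketch using the langchain-google-genai integration; the template itself is a web app, and the package, model name, and API key below are assumptions for illustration only.

# Minimal sketch: calling Gemini through LangChain's Google GenAI integration
# (pip install langchain-google-genai). The model name and key handling are
# assumptions; consult the template and SDK docs for specifics.
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key="YOUR_API_KEY")
print(llm.invoke("Suggest a recipe that uses chocolate and bananas.").content)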

The Gemini API template is multimodal, meaning it can provide context-aware prompt output for a myriad of input modalities including images, text and, of course, code. This can help you add features like conversational interfaces, summarization of user reviews, translation, and automatic image caption creation.

To demonstrate its functionality, we pre-configured the Gemini API template with ‘Baking with the Gemini API’, a recipe builder application that, using the Gemini model’s multimodal capabilities, can reverse-engineer possible recipes for baked goods from just a picture.

moving image illustrating the use of an input prompt in Project IDX to generate corresponding code

But this recipe builder is just one example of the Gemini API template in action – with support for different input modalities and context-aware output generation, you can use IDX’s Gemini API template to create a myriad of innovative and impactful applications that deliver AI-enhanced experiences to your users.


Stay tuned for more AI updates

These updates are a continuation of our efforts to leverage Google’s AI innovations for Project IDX, so make sure to keep an eye out for more announcements to come, including the expansion of AI in IDX to more than 150 countries/regions in the coming weeks.

Thank you for your continued support and engagement – please keep the feedback coming by filing bugs and feature requests. For walkthroughs and more information on all the features mentioned above, check out our documentation. If you haven’t already, visit our website to sign up to try Project IDX and join us on our journey. Also, be sure to check out our new Project IDX Blog for the latest product announcements and updates from the team.

We can’t wait to see what you create with Project IDX!

Gemini 1.5: Our next-generation model, now available for Private Preview in Google AI Studio

Posted by Jaclyn Konzelmann and Wiktor Gworek – Google Labs

Last week, we released Gemini 1.0 Ultra in Gemini Advanced. You can try it out now by signing up for a Gemini Advanced subscription. The 1.0 Ultra model, accessible via the Gemini API, has seen a lot of interest and continues to roll out to select developers and partners in Google AI Studio.

Today, we’re also excited to introduce our next-generation Gemini 1.5 model, which uses a new Mixture-of-Experts (MoE) approach to improve efficiency. It routes your request to a group of smaller “expert” neural networks so responses are faster and higher quality.

Developers can sign up for our Private Preview of Gemini 1.5 Pro, our mid-sized multimodal model optimized for scaling across a wide range of tasks. The model features a new, experimental 1 million token context window, and will be available to try out in Google AI Studio. Google AI Studio is the fastest way to build with Gemini models and enables developers to easily integrate the Gemini API in their applications. It’s available in 38 languages across 180+ countries and territories.


1,000,000 tokens: Unlocking new use cases for developers

Before today, the largest context window in the world for a publicly available large language model was 200,000 tokens. We’ve been able to significantly increase this — running up to 1 million tokens consistently, achieving the longest context window of any large-scale foundation model. Gemini 1.5 Pro will come with a 128,000 token context window by default, but today’s Private Preview will have access to the experimental 1 million token context window.

We’re excited about the new possibilities that larger context windows enable. You can directly upload large PDFs, code repositories, or even lengthy videos as prompts in Google AI Studio. Gemini 1.5 Pro will then reason across modalities and output text.

  1. Upload multiple files and ask questions: We’ve added the ability for developers to upload multiple files, like PDFs, and ask questions in Google AI Studio. The larger context window allows the model to take in more information, making the output more consistent, relevant and useful. With this 1 million token context window, we’ve been able to load in over 700,000 words of text in one go.

     Gemini 1.5 Pro can find and reason from particular quotes across the Apollo 11 PDF transcript.
     [Video sped up for demo purposes]

  2. Query an entire code repository: The large context window also enables a deep analysis of an entire codebase, helping Gemini models grasp the complex relationships and patterns in the code. A developer could upload a new codebase directly from their computer or via Google Drive, and use the model to onboard quickly and gain an understanding of the code.

     Gemini 1.5 Pro can help developers boost productivity when learning a new codebase.
     [Video sped up for demo purposes]

  3. Add a full-length video: Gemini 1.5 Pro can also reason across up to 1 hour of video. When you attach a video, Google AI Studio breaks it down into thousands of frames (without audio), and then you can perform highly sophisticated reasoning and problem-solving tasks since the Gemini models are multimodal.

     Gemini 1.5 Pro can perform reasoning and problem-solving tasks across video and other visual inputs.
     [Video sped up for demo purposes]

More ways for developers to build with Gemini models

In addition to bringing you the latest model innovations, we’re also making it easier for you to build with Gemini:

  • Easy tuning. Provide a set of examples, and you can customize Gemini for your specific needs in minutes from inside Google AI Studio. This feature rolls out in the next few days. 
  • New developer surfaces. Integrate the Gemini API to build new AI-powered features today with new Firebase Extensions, across your development workspace in Project IDX, or with our newly released Google AI Dart SDK (see the example sketch following this list).
  • Lower pricing for Gemini 1.0 Pro. We’re also updating the 1.0 Pro model, which offers a good balance of cost and performance for many AI tasks. Today’s stable version is priced 50% less for text inputs and 25% less for outputs than previously announced. Pay-as-you-go plans for AI Studio are coming soon.
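
For a concrete picture of the API itself, here is a minimal Python sketch using the google-generativeai SDK; the model name, prompt, and API-key handling are illustrative, so check the Gemini API documentation for current values.

import os
import google.generativeai as genai

# Configure the client with an API key created in Google AI Studio (assumed
# here to be stored in the GOOGLE_API_KEY environment variable).
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-pro")  # illustrative model name
response = model.generate_content("Write a haiku about long context windows.")
print(response.text)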

Since December, developers of all sizes have been building with Gemini models, and we’re excited to turn cutting edge research into early developer products in Google AI Studio. Expect some latency in this preview version due to the experimental nature of the large context window feature, but we’re excited to start a phased rollout as we continue to fine-tune the model and get your feedback. We hope you enjoy experimenting with it early on, like we have.

Google Pay – Enabling liability shift for eligible Visa device token transactions globally

Posted by Dominik Mengelt – Developer Relations Engineer, Payments, and Florin Modrea – Product Solutions Engineer, Google Pay

We are excited to announce the general availability [1] of liability shift for Visa device tokens for Google Pay.

For Mastercard device tokens, the liability already lies with the issuing bank, whereas for Visa, until now only eligible device tokens with issuing banks in the European region benefited from liability shift.


What is liability shift?

If liability shift is granted for a transaction, the responsibility for covering losses from fraudulent transactions moves from the merchant to the issuing bank. With this change, qualifying Google Pay Visa transactions made with a device token will benefit from liability shift.


How do I know if the liability was shifted to the issuing bank for my transaction?

Eligible Visa transactions will carry an eciIndicator value of 05. PSPs can access the eciIndicator value after decrypting the payment method token. Merchants can check with their PSPs to get a report on liability shift eligible transactions.

{
  "gatewayMerchantId": "some-merchant-id",
  "messageExpiration": "1561533871082",
  "messageId": "AH2Ejtc8qBlP_MCAV0jJG7Er",
  "paymentMethod": "CARD",
  "paymentMethodDetails": {
    "expirationYear": 2028,
    "expirationMonth": 12,
    "pan": "4895370012003478",
    "authMethod": "CRYPTOGRAM_3DS",
    "eciIndicator": "05",
    "cryptogram": "AgAAAAAABk4DWZ4C28yUQAAAAAA="
  }
}
A decrypted payment token for a Google Pay Visa transaction with an eciIndicator value of 05 (liability shifted)
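
For illustration, a small Python check of the decrypted payload might look like the sketch below; the helper is hypothetical, the field names follow the example above, and your PSP integration may expose this information differently.

import json

# Hypothetical helper: returns True when a decrypted Google Pay token payload
# indicates a Visa device token transaction with liability shift (eciIndicator "05").
def has_liability_shift(decrypted_token: str) -> bool:
    details = json.loads(decrypted_token).get("paymentMethodDetails", {})
    return (details.get("authMethod") == "CRYPTOGRAM_3DS"
            and details.get("eciIndicator") == "05")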

Check out the following table for a full list of eciIndicator values we return for our Visa and Mastercard device token transactions:

 eciIndicator value    Card Network      Liable Party         authMethod

 "" (empty)            Mastercard        Merchant/Acquirer    CRYPTOGRAM_3DS
 "02"                  Mastercard        Card issuer          CRYPTOGRAM_3DS
 "06"                  Mastercard        Merchant/Acquirer    CRYPTOGRAM_3DS
 "05"                  Visa              Card issuer          CRYPTOGRAM_3DS
 "07"                  Visa              Merchant/Acquirer    CRYPTOGRAM_3DS
 "" (empty)            Other networks    Merchant/Acquirer    CRYPTOGRAM_3DS

Any other eciIndicator values for Visa and Mastercard that aren't present in this table won't be returned.


How to enroll

Merchants may opt in from within the Google Pay & Wallet console starting this month. Merchants in Europe (already benefiting from liability shift) do not need to take any action, as they will be auto-enrolled.

In order for your Google Pay transactions to qualify for liability shift, the following API parameters are required:

totalPrice

Make sure that totalPrice matches the amount that you actually charge the user. Transactions with totalPrice=0 will not qualify for liability shift to the issuing bank.

totalPriceStatus

Valid values are FINAL or ESTIMATED. Transactions with a totalPriceStatus value of NOT_CURRENTLY_KNOWN do not qualify for liability shift.
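
As a rough, illustrative sketch of these rules in Python (the helper is hypothetical and the field names mirror the transactionInfo object described above; this is not part of any Google Pay SDK):

# Hypothetical pre-flight check mirroring the rules above: the transaction
# only qualifies if totalPrice is the real, non-zero charge amount and
# totalPriceStatus is FINAL or ESTIMATED.
def qualifies_for_liability_shift(transaction_info: dict) -> bool:
    price = float(transaction_info.get("totalPrice", "0"))
    status = transaction_info.get("totalPriceStatus")
    return price > 0 and status in ("FINAL", "ESTIMATED")

transaction_info = {
    "totalPrice": "42.50",        # must match the amount actually charged
    "totalPriceStatus": "FINAL",  # NOT_CURRENTLY_KNOWN does not qualify
    "currencyCode": "USD",
}
assert qualifies_for_liability_shift(transaction_info)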

Not all transactions get liability shift


Ineligible merchants

In the US, the following MCC codes are excluded from getting liability shift:

MCC     Description

4829    Money Transfer
5967    Direct Marketing – Inbound Teleservices Merchant
6051    Non-Financial Institutions – Foreign Currency, Non-Fiat Currency (for example: Cryptocurrency), Money Orders (Not Money Transfer), Account Funding (not Stored Value Load), Travelers Cheques, and Debt Repayment
6540    Non-Financial Institutions – Stored Value Card Purchase/Load
7801    Government Licensed On-Line Casinos (On-Line Gambling) (US Region only)
7802    Government-Licensed Horse/Dog Racing (US Region only)
7995    Betting, including Lottery Tickets, Casino Gaming Chips, Off-Track Betting, Wagers at Race Tracks and games of chance to win prizes of monetary value


Ineligible transactions

In order for your Google Pay transactions to qualify for liability shift, make sure to include the above-mentioned parameters totalPrice and totalPriceStatus. Transactions with totalPrice=0 or a hard-coded totalPrice (always the same amount while users get charged a different amount) will not qualify for liability shift.

Processing transactions

Google Pay API transactions with Visa device tokens qualify for liability shift at facilitation time if all the conditions are met, but a transaction that qualified for liability shift can still be downgraded by the network during transaction authorization processing.


Getting started with Google Pay

Not yet using Google Pay? Refer to the documentation to start integrating Google Pay today. Learn more about the integration by taking a look at our sample application for Android on GitHub or use one of our button components for your web integration. When you are ready, head over to the Google Pay & Wallet console and submit your integration for production access.

Follow @GooglePayDevs on X (formerly Twitter) for future updates. If you have questions, tag @GooglePayDevs and include #AskGooglePayDevs in your tweets.


[1] For merchants and PSPs using dynamic price updates or other callback mechanisms the Visa device token liability shift changes will be rolled out later this year.

#WeArePlay | How two sea turtle enthusiasts are revolutionizing marine conservation

Posted by Leticia Lago – Developer Marketing

When environmental science student Caitlin returned home from a trip monitoring sea turtles in Western Australia, she was inspired to create a conservation tool that could improve tracking of the species. She connected with a French developer and fellow marine life enthusiast, Nicolas, to design their app We Spot Turtles!, which allows anyone to support tracking efforts by uploading pictures of turtles spotted in the wild.

Caitlin and Nicolas shared their journey in our latest film for #WeArePlay, which showcases the amazing stories behind apps and games on Google Play. We caught up with the pair to find out more about their passion and how they are making strides towards advancing sea turtle conservation.

Tell us about how you both got interested in sea turtle conservation.

Caitlin: A few years ago, I did a sea turtle monitoring program for the Department of Biodiversity, Conservation and Attractions in Western Australia. It was probably one of the most magical experiences of my life. After that, I decided I only really wanted to work with sea turtles.

Nicolas: In 2010, in French Polynesia, I volunteered with a sea turtle protection project. I was moved by the experience, and when I came back to France, I knew I wanted to use my tech background to create something inspired by the trip.

How did these experiences lead you to create We Spot Turtles!?

Caitlin: There are seven species of sea turtle, and all are critically endangered. Or rather there’s not enough data on them to inform an accurate endangerment status. This means the needs of the species are going unmet and sea turtles are silently going extinct. Our inspiration is essentially to better track sea turtles so that conservation can be improved.

Nicolas: When I returned to France after monitoring sea turtles, I knew I wanted to make an app inspired by my experience. However, I had put the project on hold for a while. Then, when a friend sent me Caitlin’s social media post looking for a developer for a sea turtle conservation app, it re-ignited my inspiration, and we teamed up to make it together.

close up image of a turtle resting in a reef underwater

What does We Spot Turtles! do?

Caitlin: Essentially, members of the public upload images of sea turtles they spot – and even get to name them. Then, the app automatically geolocates, giving us a date and timestamp of when and where the sea turtle was located. This allows us to track turtles and improve our conservation efforts.

How do you use artificial intelligence in the app?

Caitlin: The advancements in AI in recent years have given us the opportunity to make a bigger impact than we would have been able to otherwise. The machine learning model that Nicolas created uses the facial scale and pigmentations of the turtles to not only identify its species, but also to give that sea turtle a unique code for tracking purposes. Then, if it is photographed by someone else in the future, we can see on the app where it's been spotted before.

How has Google Play supported your journey?

Caitlin: Launching our app on Google Play has allowed us to reach a global audience. We now have communities in Exmouth in Western Australia, Manly Beach in Sydney, and have 6 countries in total using our app already. Without Google Play, we wouldn't have the ability to connect on such a global scale.

Nicolas: I’m a mobile application developer and I use Google’s Flutter framework. I knew Google Play was a good place to release our title as it easily allows us to work on the platform. As a result, we’ve been able to make the app great.

Photo of Caitlin and Nicolas on the beach in Australia at sunset. Both are kneeling in the sand. Caitlin is using her phone to identify something in the distance, and gesturing to Nicolas who is looking in the same direction

What do you hope to achieve with We Spot Turtles!?

Caitlin: We Spot Turtles! puts data collection in the hands of the people. It’s giving everyone the opportunity to make an impact in sea turtle conservation. Because of this, we believe that we can massively alter and redefine conservation efforts and enhance people’s engagement with the natural world.

What are your plans for the future?

Caitlin: Nicolas and I have some big plans. We want to branch out into other species. We'd love to do whale sharks, birds, and red pandas. Ultimately, we want to achieve our goal of improving the conservation of various species and animals around the world.


Discover other inspiring app and game founders featured in #WeArePlay.




Calling all students: Learn how to become a Google Developer Student Club Lead

Posted by Rachel Francois, Global Program Manager, Google Developer Student Clubs

Does the idea of leading a student community at your university appeal to you? Are you enthusiastic about Google technologies or interested in learning more about them? Do you love planning tech-related events and new ways for your campus community to build skills? If so, consider leading a Google Developer Student Club!

What are Google Developer Student Clubs?

Google Developer Student Clubs (GDSC) are community groups for university students interested in learning and building with Google technologies. There are over 2,000 GDSC chapters in over 100 countries around the world, where undergraduate and graduate students explore Artificial Intelligence, Machine Learning, Google Cloud, Android development, Flutter, and other innovative technologies together. GDSC chapters host in-person, project-based events, such as hackathons and the Solution Challenge, with guest speakers and technical experts provided by Google.

Apply to Lead a Google Developer Student Club

You can learn more about the 2024-2025 GDSC Lead application process here.

Leading a GDSC is a great opportunity to learn new programming skills, dive deep into Google technologies and create local impact, while also building your network.

Google Developer Student Club Leads hone their technical and leadership skills as they manage a campus-based community for peers. GDSC Leads:

  • Receive mentorship from Google
  • Join a global community of leaders
  • Train peers to use Google technologies in their developer journey
  • Use technology to find solutions for real-world challenges
Drashtant Chudasama, Lakehead University Google Developer Student Club lead

Meet Drashtant Chudasama, Lakehead University Google Developer Student Club lead. Drashtant hosted a 2-day DevFest On Campus event in Canada to help foster technology in his local area. The city's first DevFest included a handful of guest speakers and a hackathon. These are the types of things you will have the opportunity to do as a GDSC Lead.

If this sounds like your skill set or you’d like to explore a new leadership opportunity in technology, we encourage you to apply to become a GDSC Lead. You can check for application deadlines in your region here.


Google Developer Student Clubs Around the World

GDSC HITS lead, Amitasha Verma and her team

After a year’s hiatus, GDSC HITS lead Amitasha Verma and her team defied the odds to bring an interactive event to life. More than 80 students came together for a 3-hour "Unlocking the Power of Blockchain" event in India. This event demonstrated the unwavering spirit of students eager to explore the world of blockchain.

GDSC Fast National University in Islamabad

GDSC Fast National University in Islamabad collaborated with 15 other GDSC chapters to host the exciting "Techbuzz" competition, bringing together a diverse group of tech enthusiasts to showcase their skills through a variety of engaging activities. The event featured intense rapid-fire tech sessions that tested the participants' knowledge and quick thinking, while bringing a game-based learning platform to add an element of fun and excitement.


How to become a GDSC Lead

Learn more about the GDSC Lead role and criteria here. To get started, click here.


Note: Google Developer Student Clubs are student-led independent organizations, and their presence does not indicate a relationship between Google and the students' universities.

How We Made the CES 2024 AR Experience: Android Virtual Guide, powered by Geospatial Creator

Posted by Kira Rich – Senior Product Marketing Manager, AR and Bradford Lee – Product Marketing Manager, AR

Navigating a large-scale convention like CES can be overwhelming. To enhance the attendee experience, we created a 360° event-scale augmented reality (AR) experience in our Google booth. Our friendly Android Bot served as a digital guide, providing:

  • Seamless wayfinding within our booth, letting you know about the must-try demos
  • Delightful content, only possible with AR, like replacing the Las Vegas Convention Center facade with our Generative AI Wallpapers or designing an interactive version of Android on Sphere for those who missed it in real life
  • Helpful navigation tips and quick directions to transportation hubs (Monorail, shuttle buses)

In partnership with Left Field Labs and Adobe, we used Google’s latest AR technologies to inspire developers, creators, and brands on how to elevate the conference experience for attendees. Here’s a behind-the-scenes look at how we used Geospatial Creator, powered by ARCore and Photorealistic 3D Tiles from Google Maps Platform, to promote the power and usefulness of Google on Android.

Moving image showing end-to-end experience
Using Google’s Geospatial Creator, we helped attendees navigate CES with the Android Bot as a virtual guide, providing helpful and delightful immersive tips on what to experience in our Google Booth.


Tools We Used


Geospatial Creator in Adobe Aero Pre-Release

Geospatial Creator in Adobe Aero enables creators and developers to easily visualize where in the real-world they want to place their digital content, similar to how Google Earth visualizes the world. With Geospatial Creator, we were able to bring up Las Vegas Convention Center in Photorealistic 3D Tiles from Google Maps Platform and understand the surroundings of where the Google Booth would be placed. In this case, the booth did not exist in the Photorealistic 3D Tiles because it was a temporary build for the conference. However, by utilizing the 3D model of the booth and the coordinates of where it would be built, we were able to easily estimate and visualize the booth inside of Adobe Aero and build the experience around it seamlessly, including anchoring points for the digital content and the best attendee viewing points for the experience.


"At CES 2024, the Android AR experience, created in partnership with the Google AR, Android, and Adobe teams, brought smiles and excitement to attendees - ultimately that's what it's all about. The experience not only showcased the amazing potential of app-less AR with Geospatial Creator, but also demonstrated its practical applications in enhancing event navigation and engagement, all accessible with a simple QR scan." 
– Yann Caloghiris, Executive Creative Director at Left Field Labs
Moving image of developer timelapse
Adobe Aero provided us with an easy way to visualize and anchor the AR experience around the 3D model of the Google Booth at the Las Vegas Convention Center.

With Geospatial Creator, we had multiple advantages for designing the experience:

  • Rapid iteration with live previews of 3D assets and high-fidelity visualization of the location with Photorealistic 3D Tiles from Google Maps Platform were crucial for building a location-based AR experience without having to be there physically.
  • Easy selection of the Las Vegas Convention Center and robust previews of the environment, as you would navigate in Google Earth, helped us visualize and develop the AR experience with precision and alignment to the real world location.

In addition, Google Street View imagery generated a panoramic skybox, which helped visualize the sight lines in Cinema 4D for storyboards. We also imported this and Photorealistic 3D Tiles from Google Maps Platform into Unreal Engine to visualize occlusion models at real world scale.

In Adobe Aero, we did the final assembly of all 3D assets and created all interactive behaviors in the experience. We also used it for animating simpler navigational elements, like the info panel assets in the booth.

AR development was primarily done with Geospatial Creator in Adobe Aero. Supplementary tools, including Unreal Engine and Autodesk Maya, were used to bring the experience to life.

Adobe Aero also supports Google Play Instant apps and App Clips[1], which means attendees did not have to download an app to access the experience. They simply scanned a QR code at the booth and launched directly into the experience, which proved to be ideal for onboarding users and reducing friction especially at a busy event like CES.

Unreal Engine was used to bring in the Photorealistic 3D Tiles, allowing the team to build the 3D animated Android Bot that interacted closely with the surrounding environment. This approach was crucial for previews of the experience, allowing us to understand sight lines and where to best locate content for optimal viewing from the Google booth.

Autodesk Maya was used to create the Android Bot character, environmental masks, and additional 3D props for the different scenes in the experience. It was also used for authoring the final materials.

Babylon exporter was used for exporting from Autodesk Maya to glTF format for importing into Adobe Aero.

Figma was used for designing flat user interface elements that could be easily imported into Adobe Aero.

Cinema 4D was used for additional visualization and promotional shots, which helped with stakeholder alignment during the development of the experience.



Designing the experience

During the design phase, we envisioned the AR experience to have multiple interactions, so attendees could experience the delight of seeing precise and robust AR elements blended into the real world around them. In addition, they could experience the helpfulness of contextual information embedded into the real objects around them, providing the right information at the right time.

Image of Creative storyboard
To make the AR experience more engaging for attendees, we created several possibilities for people to interact with their environment.

Creative storyboarding

Creating an effective storyboard for a Geospatial AR experience using Adobe Aero begins with a clear vision of how the digital overlays interact with the real-world locations.

Left Field Labs started by mapping out key geographical points at the Las Vegas Convention Center location where the Google booth was going to stand, integrating physical and digital elements along the way. Each scene sketched in the storyboard illustrated how virtual objects and real-world environments would interplay, ensuring that user interactions and movements felt natural and intuitive.


“Being able to pin content to a location that’s mapped by Google and use Photorealistic 3D Tiles in Google’s Geospatial Creator provided incredible freedom when choosing how the experience would move around the environment. It gave us the flexibility to create the best flow possible.” 
– Chris Wnuk, Technical Director at Left Field Labs

Early on in the storyboarding process, we decided that the virtual 3D Android Bot would act as the guide. Users could follow the Bot around the venue by turning around in 360° while staying at the same vantage point. This allowed us to design the interactive experience and each element in it for the right perspective from where the user would be standing, and give them a full look around the Google Booth and surrounding Google experiences, like the Monorail or Sphere.

The storyboard not only depicted the AR elements but also considered user pathways, sightlines, and environmental factors like time of day, occlusion, and overall layout of the AR content around the Booth and surrounding environment.

We aimed to connect the attendees with engaging, helpful, and delightful content, helping them visually navigate Google Booth at CES.

User experience and interactivity

When designing for AR, we have learned that user interactivity and ensuring that the experience has both helpful and delightful elements are key. Across the experience, we added multiple interactions that allowed users to explore different demo stations in the Booth, get navigation via Google Maps for the Monorail and shuttles, and interact with the Android Bot directly.

The Android brand team and Left Field Labs created the Android character to be both simple and expressive, showing playfulness and contextual understanding of the environment to delight users while managing the strain on users’ devices. Taking an agile approach, the team iterated on a wide range of both Android and iOS mobile devices to ensure smooth performance across different smartphones, form factors such as foldables, as well as operating system versions, making the AR experience accessible and enjoyable to the widest audience.

testing content in Adobe Aero
With Geospatial Creator in Adobe Aero, we were able to ensure that 3D content would be accurate to specific locations throughout the development process.

Testing the experience

We consistently iterated on the interactive elements based on location testing. We performed two location tests: First, in the middle of the design phase, which helped us validate the performance of the Visual Positioning Service (VPS) at the Las Vegas Convention Center. Second, at the end of the design phase and a few days before CES, which further validated the placement of the 3D content and enabled us to refine any final adjustments once the Google booth structure was built on site.


“It was really nice to never worry about deploying. The tracking on physical objects and quickness of localization was some of the best I’ve seen!” 
– Devin Thompson, Associate Technical Director at Left Field Labs

Attendee Experience

When attendees came to the Google Booth, they saw a sign with the QR code to enter the AR experience. We positioned the sign at the best vantage point at the booth, ensuring that people had enough space around them to scan with their device and engage in the AR experience.

Sign with QR code to scan for entry at the Google Booth, Las Vegas Convention Center
By scanning a QR code, attendees entered directly into the experience and saw the virtual Android Bot pop up behind the Las Vegas Convention Center, guiding them through the full AR experience.

Attendees enjoyed seeing the Android Bot take over the Las Vegas Convention Center. Upon initializing the AR experience, the Bot revealed a Generative AI wallpaper scene right inside of a 3D view of the building, all while performing skateboarding tricks at the edge of the building’s facade.

Moving image of GenAI Wallpaper scene
With Geospatial Creator, it was possible for us to “replace” the facade of the Las Vegas Convention Center, revealing a playful scene where the Android Bot highlighted the depth and occlusion capabilities of the technology while showcasing a Generative AI Wallpaper demo.

Many people also called out the usefulness of seeing location-based AR content with contextual information, like navigation through Google Maps, embedded into interesting locations around the Booth. Interactive panels overlaid on the Booth then introduced the key physical demos located at each station. Attendees could quickly scan the different themes and features demoed, orient themselves around the Booth, and decide which area they wanted to visit first.


“I loved the experience! Maps and AR make so much sense together. I found it super helpful seeing what demos are in each booth, right on top of the booth, as well as the links to navigation. I could see using this beyond CES as well!” 
– CES Attendee
Moving image Booth navigation
The Android Bot helped attendees visually understand the different areas and demos at the Google Booth, helping them decide what they wanted to go see first.

From the attendees we spoke to, over half of them engaged with the full experience. They were able to skip parts of the experience that felt less relevant to them and focus only on the interactions that added value. Overall, we’ve learned that most people liked seeing a mix of delightful and helpful content and they felt excited to explore the Booth further with other demos.

Moving image of people navigating augmented reality at CES
Many attendees engaged with the full AR experience to learn more about the Google Booth at CES.

Photo of Shahram Izadi watching a demonstration of the full Geospatial AR experience at CES.
Shahram Izadi, Google’s VP and GM, AR/XR, watching a demonstration of the full Geospatial AR experience at CES.

Location-based AR experiences can transform events for attendees who want more ways to discover and engage with exhibitors. This trend underscores a broader shift in consumer expectations for a more immersive and interactive world around them and the blurring lines between online and offline experiences. At events like CES, AR content can offer a more immersive and personalized experience that not only entertains but also educates and connects attendees in meaningful ways.

To hear the latest updates about Google AR, Geospatial Creator, and more, follow us on LinkedIn (@GoogleARVR) and X (@GoogleARVR). Plus, visit our ARCore and Geospatial Creator websites to learn how to get started building with Google’s AR technology.


[1] Available on select devices and may depend on regional availability and user settings.

People of AI – Season 3

Posted by Ashley Oldacre

If you are joining us for the first time, you can binge listen to Seasons 1 and 2 wherever you get your podcasts.

We are back for another season of People of AI with a new lineup of incredible guests! I am so excited to continue co-hosting with Luiz Gustavo Martins as we meet inspiring people with interesting stories in the field of Artificial Intelligence.

Last season we focused on the big shift in technology spurred on by Generative AI. Fast forward 12 months, with the launch of multimodal models, we are at an interesting point in history.

In Season 3, we will continue to uncover our guests' personal and professional journeys into the field of AI, highlighting the important work and products they are focusing on. At the same time, we want to dig deeper into the societal implications of what our guests create. We will ask questions to understand how they are leveraging AI to solve problems and create new experiences, while also looking to understand what challenges they may face and what potential this technology has for both good and bad. We want to hold both truths to light through conversations with our guests. All this with the goal of aligning our technology with the public narrative and painting a realistic picture of how this technology is being used, the amazing things we can do with it, and the right questions to ask to make sure it is used safely and responsibly.

Starting today, we will release one new episode of Season 3 per week, alternating video and audio. Listen to the first episode on the People of AI site or wherever you get your podcasts.

  • Episode 1: meet Adrit Rao, a 16-year-old high school student, app developer, and research intern at Stanford University. We talk about app development and how learning about TensorFlow enabled him to create life-changing apps in healthcare.
  • Episode 2: meet Indira Negi, a Product and Tech Executive investing in Medical Devices, AI and Digital Health at the Bill and Melinda Gates Foundation, as we learn about the latest investments in AI and healthcare.
  • Episode 3: meet Tris Warkentin, Director of Product Management at Google DeepMind, as we talk about the exciting new launches from Google’s latest Large Language Models.
  • Episode 4: meet Kathleen Kenealy, Senior Software Engineer at Google DeepMind, as we learn about the engineering genius behind Google’s latest Large Language Model launches.
  • Episode 5: meet Jeanine Banks, Vice President and General Manager of Google Developer X and Head of Developer Relations. Join us as we learn about Google’s latest AI innovations and how they will change the developer landscape.
  • Episode 6: meet François Chollet, creator of Keras and senior Software Engineer and AI researcher at Google. Join us as we learn about Google’s latest AI innovations and how they will change the developer landscape.
  • Episode 7: meet Chansung Park, Google Developer Expert and Researcher, as we talk about the importance of building and planning for Large Language Model infrastructure.
  • Episode 8: meet Fergus Hurley and Nia Castelly, co-founders of Checks, a privacy platform for mobile app developers that helps create a safer digital ecosystem by simplifying the path to privacy compliance for development teams and the apps they’re building.
  • Episode 9: meet Sam Sepah and Thad Starner, as they talk about leveraging the power of Generative AI to unlock sign language capabilities.

Listen now to the first episode of Season 3. We can’t wait to share the stories of these exceptional People of AI with you!


This podcast is sponsored by Google. Any remarks made by the speakers are their own and are not endorsed by Google.

How recommerce startup Beni uses AI to help you shop secondhand

Posted by Lillian Chen – Global Brand and Content Marketing Manager, Google Accelerator Programs

Sarah Pinner’s passion to reduce waste began as a child when she would reach over and turn off her sibling’s water when they were brushing their teeth. This passion has fueled her throughout her career, from joining zero-waste grocery startup Imperfect Foods to co-founding Beni, an AI-powered browser extension that aggregates and recommends resale options while users shop their favorite brands. Together with her co-founder and Beni CTO Celine Lightfoot, Sarah built Beni to make online apparel resale accessible to everyday shoppers in order to accelerate the circular economy and reduce the burden of fashion on the planet.

Sarah explains how the platform helps connect shoppers to secondhand clothing: “Let’s say you’re looking at a Nike shoe. While on the Nike site, Beni pulls resale listings for that same shoe from over 40 marketplaces like Poshmark, eBay, or TheRealReal. Users can simply buy the resale version instead of new to save money and purchase more sustainably. On average, Beni users save about 55% from the new item, and it’s also a lot more sustainable to buy the item secondhand.”

Beni was one of the first companies in the recommerce platform software space, and the competitive landscape is growing. “The more recommerce platforms the better, but Beni is ahead in terms of our partnerships and access to data as well as the ability to search across data,” says Sarah.


How Beni Uses AI

AI helps Beni ingest all the data feeds from their 40+ partnerships into Beni’s database so they can surface the most relevant resale items to the shopper. For example, when Beni receives eBay’s feed for a product search, there may be 100,000 different sizes. The team has trained the Beni model to normalize sizing data. That’s one piece of their categorization.

“When we first started Beni, the intention wasn’t to start a company. It was to solve a problem, and AI has been a great tool to be able to do that,” says Sarah.


Participating in Google for Startups Accelerator: Circular Economy

Beni’s product was built using Google technology, is hosted on Google Cloud and utilizes Vision API Product Search, Vertex AI, BigQuery, and the Chrome web store.

When they heard about the Google for Startups Accelerator: Circular Economy program, it seemed like the perfect fit. “Having been in the circular economy space, and being a software business already using a plethora of Google products, and having a Google Chrome extension - getting plugged into the Google world gave us great insights about very niche questions that are very hard to find online,” says Sarah.

As an affiliate business in resale, Beni’s revenue per transaction is low – a challenge for a business model that requires scale. The Beni team worked one-on-one with Google mentors to best use Google tools in a cost-effective way. Keeping search results relevant is a core piece of the zero-waste model. “Being plugged in and being able to work through ways to improve that relevancy and that reliability with the people in Google who know how to build Google Chrome extensions, know how to use the AI tools on the backend, and deeply understand Search is super helpful.” The Google for Startups Accelerator: Circular Economy program also educated the team in how to selectively use AI tools such as Google’s Vision API Product Search versus building their own tech in-house.

“Having direct access to people at Google was really key for our development and sophisticated use of Google tools. And being a part of a cohort of other circular economy businesses was phenomenal for building connections in the same space,” says Sarah.

Google for Startups Accelerator support extended beyond tech. A program highlight for Sarah was a UX writing deep dive specifically for sustainability. “It showed us all this amazing, tangible research that Google has done about what is actually effective in terms of communicating around sustainability to drive behavior change,” said Sarah. “You can’t shame people into doing things. The way in which you communicate is really important in terms of if people will actually make a change or be receptive.”

Additionally, the new connections made with other circular economy startups and experts in their space were a huge benefit of participating in Google for Startups Accelerator. Mentorship, in particular, provided product-changing value. Google technical mentors shared advice that had a huge impact on the decision for Beni to move from utilizing Vision API Product Search to their own reverse image search. “Our mentors guided us to shift a core part of our technology. It was a big decision and was one of the biggest pieces of mentorship that helped drive us forward. This was a prime example of how the Google for Startups Accelerator program is truly here to support us in building the best products,” says Sarah.


What’s next for Beni

Beni’s mission is straightforward: easing the burden for shoppers to find and buy items secondhand, so that they can bring new people into resale and make resale the new norm.

Additionally, Beni is continuing to be built out as a search platform for secondhand clothing. Beni offers its Chrome extension on desktop and mobile, and will also offer a searchable interface. In addition to building out the platform further, Beni is looking at how it can support other e-commerce platforms and integrate resale into their offerings.

Learn about how to get involved in Google accelerator programs here.