Tag Archives: Google I/O 2023

What’s new for developers building solutions on Google Workspace – mid-year recap

Posted by Chanel Greco, Developer Advocate Google Workspace

Google Workspace offers tools for productivity and collaboration for the ways we work. It also offers a rich set of APIs, SDKs, and no-code/low-code tools for creating apps and workflows that integrate directly into the surfaces across Google Workspace.

Leading software makers like Atlassian, Asana, LumApps and Miro are building integrations with Google Workspace apps—like Google Docs, Meet, and Chat—to make it easier than ever to access data and act right in the tools relied on by more than 3 billion users and 9 million paying customers.

At I/O’23 we had some exciting announcements for new features that give developers more options when integrating apps with Google Workspace.

Third-party smart chips in Google Docs

We announced the opening up of smart chips functionality to our partners. Smart chips let users tag linked resources, such as projects and customer records, and see critical information about them at a glance. This preview information gives users context right in the flow of their work. These capabilities are now generally available to developers to build their own smart chips.

Some of our partners have built and launched integrations using this new smart chips functionality. For example, Figma is integrated into Docs with smart chips, letting users tag Figma projects so that readers can hover over a Figma link in a doc and see a preview of the design project. Atlassian is leveraging smart chips so users can seamlessly access Jira issues and Confluence pages within Google Docs.

Tableau uses smart chips to show the user the Tableau Viz's name, last updated date, and a preview image. With the Miro smart chip solution users have an easy way to get context, request access and open a Miro board from any document. The Whimsical smart chip integration allows users to see up-to-date previews of their Whimsical boards.

Moving image showing functionality of Figma smart chips in Google docs, allowing users to tag and preview projects in docs.

Google Chat REST API and Chat apps

Developers and solution builders can use the Google Chat REST API to create Chat apps and automate workflows to send alerts, create spaces, and share critical data right in the flow of the conversation. For instance, LumApps is integrating with the Chat APIs to allow users to start conversations in Chat right from within the employee experience platform.

The Chat REST API is now generally available.

Using the Chat API and the Google Workspace UI-kit, developers can build Chat apps that bring information and workflows right into the conversation. Developers can also build low code Chat apps using AppSheet.
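As an illustrative sketch of what such an automation might send, the function below assembles a message body of the shape the Chat REST API's spaces.messages.create method accepts. The alert text and URL are placeholders, and authentication (typically a service account) is omitted for brevity; check the Chat API reference for the full schema.

```python
def build_alert_message(alert_text: str, button_url: str) -> dict:
    """Build a Chat message body with a simple card and a link button."""
    return {
        "text": alert_text,
        "cardsV2": [{
            "cardId": "alert-card",
            "card": {
                "sections": [{
                    "widgets": [{
                        "buttonList": {
                            "buttons": [{
                                "text": "View details",
                                "onClick": {"openLink": {"url": button_url}},
                            }]
                        }
                    }]
                }]
            },
        }],
    }

# A Chat app would POST this body to spaces.messages.create for a given space.
message = build_alert_message("Build #42 failed", "https://example.com/builds/42")
```

The same payload shape can be produced by a low-code AppSheet automation; only the transport differs.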

Moving image showing interactive Google Meet add-ons by partner Jira

There are already Chat apps available from partners like Atlassian's Jira, Asana, PagerDuty, and Zendesk. With Jira for Google Chat, for example, teams can collaborate on projects, create issues, and update tickets, all without having to switch context.

Google Workspace UI-kit

We are continuing to evolve the Workspace UI-kit to provide a more seamless experience across Google Workspace surfaces with easy to use widgets and visual optimizations.

For example, there is a new date and time picker widget for Google Chat apps and there is the new two-column layout to optimize space and organize information.

Google Meet SDKs and APIs

There are exciting new capabilities which will soon be launched in preview for Google Meet.

For example, the Google Meet Live Sharing SDK allows for the building of new shared experiences for users on Android, iOS, and web. Developers will be able to synchronize media content across participants' devices in real time and offer shared content controls for everyone in the meeting.

The Google Meet Add-ons SDK enables developers to embed their app into Meet via an iframe, and choose between the main stage or the side panel. This integration can be published on the Google Workspace Marketplace for discoverability.

Partners such as Atlassian, Figma, Lucid Software, Miro and Polly.ai, are already building Meet add-ons, and we’re excited to see what apps and workflows developers will build into Meet’s highly-interactive surfaces.

Image of interactive Google Meet add-on by partner Miro

With the Google Meet APIs developers can add the power of Google Meet to their applications by pre-configuring and launching video calls right from their apps. Developers will also be able to pull data and artifacts such as attendance reporting, recordings, and transcripts to make them available for their users post-meeting.

Google Calendar API

The ability to programmatically read and write the working location from Calendar is now available in preview. In the second half of this year, we plan to make these two capabilities, along with the writing of sub-day working locations, generally available.

These new capabilities can be used for integrating with desk booking systems and coordinating in-office days, to mention just a few use cases. This information will help organizations adapt their setup to meet the needs of hybrid work.
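As a rough sketch of what a working-location write could look like, the helper below builds an all-day event body following the Calendar API's event resource shape; field names should be verified against the Calendar API documentation, and the dates and label here are placeholders.

```python
def build_working_location_event(start_date: str, end_date: str, office_label: str) -> dict:
    """Build an all-day working-location event body for events.insert.

    Dates use the all-day "date" form (YYYY-MM-DD); the end date is
    exclusive, per the Calendar API convention.
    """
    return {
        "eventType": "workingLocation",
        "summary": office_label,
        "start": {"date": start_date},
        "end": {"date": end_date},
        "visibility": "public",
        "transparency": "transparent",  # does not block the user's availability
        "workingLocationProperties": {
            "type": "officeLocation",
            "officeLocation": {"label": office_label},
        },
    }

event = build_working_location_event("2023-07-10", "2023-07-11", "Zurich office")
```

A desk-booking integration would insert one such event per booked day and read them back to coordinate who is in the office.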

Google Workspace API Dashboard and APIs Explorer

Two new tools were released to assist developers: the Google Workspace API Dashboard and the APIs Explorer.

The API Dashboard is a unified way to access Google Workspace APIs through the Google Cloud Console—APIs for Gmail, Google Drive, Docs, Sheets, Chat, Slides, Calendar, and many more. From there, you now have a central location to manage all your Google Workspace APIs and view all of the aggregated metrics, quotas, credentials, and more for the APIs in use.

The APIs Explorer allows you to explore and test Google Workspace APIs without having to write any code. It's a great way to get familiar with the capabilities of the many Google Workspace APIs.

Apps Script

The eagerly awaited project history capability for Google Apps Script will soon be generally available. This feature allows users to view the list of versions created for a script, their content, and the changes between a selected version and the current version.

It was also announced that admins will be able to add an allowlist of URLs per domain, enabling safer access controls and letting them control where their data can be sent externally.

The V8 runtime for Apps Script was launched back in 2020 and it enables developers to use modern JavaScript syntax and features. If you still have legacy scripts on the old Rhino runtime, now is the time to migrate them to V8.


AppSheet

We have been further improving AppSheet, our no-code solution builder, and announced multiple new features at I/O.

Later this year we will be launching Duet AI in AppSheet to make it easier than ever to create no-code apps for Google Workspace. Using a natural-language and conversational interface, users can build an app in AppSheet by simply describing their needs as a step-by-step conversation in chat.

Moving image of no-code app creation in AppSheet

The no-code Chat apps feature for AppSheet is generally available; it can be used to quickly create Google Chat apps and publish them with one click.

AppSheet databases are also generally available. With this native database feature, you can organize data with structured columns and references directly in AppSheet.

Check out the Build a no-code app using the native AppSheet database and Add Chat to your AppSheet apps codelabs to get you started with these two new capabilities.

Google Workspace Marketplace

The Google Workspace Marketplace is where developers can distribute their Workspace integrations for users to find, install, and use. We launched the Intelligent Apps category which spotlights the AI-enabled apps developers build and helps users discover tools to work smarter and be more productive (eligibility criteria here).

Image of Intelligent Apps in Google Workspace

Start building today

If you want early access to the features in preview, sign up for the Developer Preview Program. Subscribe to the Google Workspace Developers YouTube channel for the latest news and video tutorials to kickstart your Workspace development journey.

We can’t wait to see what you will build on the Google Workspace platform.

What’s new in Google Wallet

Posted by Jose Ugia – Developer Relations Engineer

During Google I/O 2023, and in our recent blog post, we shared some new pass types and features we’re adding to Google Wallet and discussed how you can use them to build and protect your passes more easily, and enhance the experience for your customers.

Read on for a summary of what we covered during the event, or check out the recording of our session on YouTube: What's new in Google Pay and Google Wallet.

Secure pass information with private passes

We’re glad to expand Generic Passes, adding support for sensitive data with the new generic private pass API. Generic private passes on Google Wallet are one more way we’re protecting users’ information and keeping their sensitive digital items safe. These passes require users to verify their identity before the pass is displayed, using the fingerprint sensor, a passcode, or another authentication method. This is helpful when you create a pass with sensitive information, for example in the healthcare industry.

The Google Wallet Developer Documentation contains detailed steps to help you add a private pass to Google Wallet.

image showing the definition for a private pass in JSON format.
Figure 1: The definition for a private pass in JSON format.
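To make the flow concrete, here is a hedged Python sketch of assembling a generic pass object and the unsigned claims of an "Add to Google Wallet" JWT around it. All IDs and values are placeholders, signing is omitted, and the exact schema for marking a pass private is in the documentation linked above.

```python
def build_save_jwt_claims(issuer_email: str, generic_object: dict) -> dict:
    """Assemble the (unsigned) claims for an 'Add to Google Wallet' link.

    In production these claims are signed with the issuer's service
    account key before being placed in a save URL.
    """
    return {
        "iss": issuer_email,  # service account email (placeholder)
        "aud": "google",
        "typ": "savetowallet",
        "payload": {"genericObjects": [generic_object]},
    }

# Placeholder issuer ID and suffixes; a real pass uses your Wallet issuer ID.
pass_object = {
    "id": "3388000000000000000.patient-card-001",
    "classId": "3388000000000000000.private-pass-class",
    "cardTitle": {"defaultValue": {"language": "en-US", "value": "Health record"}},
    "header": {"defaultValue": {"language": "en-US", "value": "Jane Doe"}},
}

claims = build_save_jwt_claims("issuer@example.iam.gserviceaccount.com", pass_object)
```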

Enable fast pass development with Demo Mode

With Demo mode, you can go to the Google Pay & Wallet Console, sign up for API access, and integrate it with your code immediately after following the prerequisites available in the Google Wallet developer documentation.

When you sign up for a Google Wallet Issuer account for the first time, your account is automatically in Demo Mode. Demo mode includes the same features and functionality as publishing mode. To better differentiate between the demo and publish environments, passes created by issuers in Demo Mode contain visual elements to indicate their test nature. This distinction is removed when the issuer is approved to operate in publishing mode.

When you’re done with your tests and you’re ready to start issuing passes to your users, complete your business information and request publishing access from the Wallet API section in the console. Our console team will get in touch via email with additional instructions.

image illustrating Demo Mode in Google Pay & Wallet console.
Figure 2: Demo Mode in Google Pay & Wallet console.

Enhance security with rotating barcodes and Account-restricted passes

We are increasing the security of your passes with the introduction of a new API to rotate barcodes. With rotating barcodes you can pre-create a batch of barcodes and sync them with Google Wallet. The barcodes you create will rotate at a predefined interval and will be shown and updated in your user’s wallet. Rotating barcodes enable a range of use cases where issuers need to protect their passes, such as long duration transit tickets, events tickets, and more.
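Pre-creating that batch could look something like the sketch below. Note that this HMAC-per-interval scheme is purely illustrative of "one value per time window", not the Wallet API's actual rotation algorithm, and every name in it is made up.

```python
import hashlib
import hmac


def barcode_batch(secret: bytes, ticket_id: str, start_ts: int,
                  count: int, interval_s: int = 60) -> list[str]:
    """Pre-create a batch of rotating barcode values, one per interval.

    Illustrative only: derives a short deterministic value per time
    window so the issuer and Wallet can stay in sync after one upload.
    """
    values = []
    for i in range(count):
        counter = (start_ts + i * interval_s) // interval_s
        digest = hmac.new(secret, f"{ticket_id}:{counter}".encode(),
                          hashlib.sha256).hexdigest()
        values.append(digest[:12])  # truncated for a compact barcode payload
    return values


batch = barcode_batch(b"demo-secret", "ticket-1", start_ts=0, count=3)
```

The issuer would upload such a batch once and let the pass rotate through it on the configured interval.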

We’ve also announced Account Restricted passes, a new feature that lets issuers associate some pass objects with Google accounts. To use this feature, simply include the user’s email address in the pass object when you issue the pass. This triggers an additional check when a user attempts to add the pass to Google Wallet, which only succeeds if the email address specified in the pass matches the account of the currently logged-in user. Account Restricted passes let you protect your passes from theft, reselling, transfer or other restricted uses.
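In code, tying a pass to an account can be as simple as the sketch below. The `linkedAccountEmail` field name is hypothetical, used only to illustrate the idea that the email travels inside the pass object; consult the Wallet documentation for the actual property.

```python
def restrict_to_account(pass_object: dict, email: str) -> dict:
    """Return a copy of a pass object tied to one user's account.

    The field name below is hypothetical, not the real API property;
    the point is that the intended account's email is embedded in the
    pass so Wallet can check it at save time.
    """
    restricted = dict(pass_object)
    restricted["linkedAccountEmail"] = email
    return restricted


ticket = {"id": "3388000000000000000.ticket-42",
          "classId": "3388000000000000000.event-class"}
restricted = restrict_to_account(ticket, "jane@example.com")
```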

Design your passes using the pass builder

Making your passes consistent with your brand and design guidelines is a process that requires becoming acquainted with the Google Wallet API. During last year’s Google I/O, we introduced a dynamic template that accepts configuration to generate an approximate preview of your pass.

This year, we introduced the new generation of this tool and graduated it into a fully functional pass builder. You can now configure and style your passes using a real-time preview that helps you understand how passes are styled and connects each visual element with its respective property in the API. The new pass builder also generates classes and objects in JSON format that you can use to make calls directly against the API, making it easier to configure your passes and removing the visual uncertainty of working with text-based configuration. The new pass builder is available today for generic passes, tickets, and pass types under retail.

image showing a demo of the new pass builder for the Google Wallet API.
Figure 3: A demo of the new pass builder for the Google Wallet API.

Get started with the Google Wallet API

Take a look at the documentation to start integrating Google Wallet today.

Learn more about the integration by taking a look at our sample source application in GitHub.

When you are ready, head over to the Google Pay & Wallet console and submit your integration for production access.

What’s next?

Shortly after Google I/O we announced 5 new ways to add more to Google Wallet. One of them is to save your ID to Google Wallet. And soon, you’ll be able to accept IDs from Google Wallet to securely and seamlessly verify a person's information. Some use cases include:

  • Age Verification: Request a user's age to verify it before they purchase age-restricted items or access age-restricted venues.
  • Identity Verification: Request a user's name to verify the person associated with an account.
  • Driving Privileges: Verify a person's ability to drive (e.g. when renting a car).

If you’re interested in using Google Wallet's in-app verification APIs, please fill out this form.

Top 3 things to know in multi-device from Google I/O 2023

Posted by Sara Hamilton, Developer Relations Engineer

Did you miss any multi-device updates at Google I/O this year? Don’t worry – here are the top 3 things you should know as a developer about Android multi-device updates, and be sure to check out the full playlist for sessions and more!

#1: New Large Screen Devices, plus improved tools and guidance

First, we have some exciting large screen updates. There are two new Android devices coming from Pixel: the Pixel Fold and the Pixel Tablet.

With these joining the 280M active large screen Android devices, now is a great time to invest in optimizing your app for larger screens. We’ve released a few things to make this easy.

We have improved tools and guidance, like the new Pixel Fold and Pixel Tablet emulator configurations in Android Studio available today.

We also have expanded Material design updates, and we’ve created more galleries with inspiration for building gaming and creativity apps, all part of the new Android design gallery.

You can start optimizing for these devices, and other large screen devices, by reading the guidance on the do’s and don’ts of optimizing your Android app for large screens and watching the session on developing high quality apps for large screens and foldables.

#2: Wear OS 4 developer preview released

Second, we released the developer preview of Wear OS 4. This release comes with many exciting changes – including a new way to build watchfaces.

The new Watch Face Format is a declarative XML format that allows you to configure the appearance and behavior of watch faces. This means that there's no executable code involved in creating a watch face, and there's no code embedded in your watch face APK.

The Wear OS platform takes care of the logic needed to render the watch face so you can focus on your creative ideas, rather than code optimizations or battery performance.

Learn more about all the latest updates in Wear OS by checking out our blog post, watching the session, and taking a look at the brand new Wear OS gallery, also part of the new Android design gallery.

#3: Compose for TV released in alpha

Finally, Compose for TV is released in alpha.

Jetpack Compose already had mobile components, Wear OS components, and Widgets – and now, TV components! Plus, you can now use the same foundational Jetpack Compose APIs, for things like state management, on TV as well.

This makes it easy to build beautiful, functional apps for Android TV OS with less code and better customization.

Learn more about how to integrate your TV app with Compose for TV by watching this session. And, check out the developer guides, design reference, our new codelab and sample code to get started. You can submit feedback through the library’s release notes.

That’s a quick snapshot of some of the coolest updates in the world of multi-device on Android from Google I/O’23. Want to learn more? Check out the full playlist here!

We’re making it even easier to build across these devices, through modern Android development tools like Jetpack Compose, so that as you build for more and more form factors, that skill base continues to grow and extend. Take a look at how Peloton continues to invest in different screens for an experience that follows their users wherever they want to train:

Top 3 things to know in Modern Android Development at Google I/O ’23

Posted by Rebecca Franks, Android Developer Relations Engineer

Google I/O 2023 was filled with exciting updates and announcements. Modern Android Development (MAD) is all about making Android app development faster and easier by creating libraries, tools, and guidance that speed up your flow and help you write safer, better code, so that you can focus on building amazing experiences.

Here are our top three announcements from Google I/O 2023:

#1 Get your development questions answered with Studio Bot

One of the announcements we’re most excited about is Studio Bot, an experimental new AI powered coding assistant, right in your IDE. You can ask it questions or use it to help fix errors — all without ever having to leave Android Studio or upload your source code.

Studio Bot is in its very early days, and is currently available for developers in the US. Download Android Studio canary to try it out and help it improve.

#2 Jetpack Compose has improvements for flow layouts, new Material components, and more

Jetpack Compose continues to be a big focus area, making it easier to build rich UIs. The May 2023 release included many new layouts and improvements such as horizontal and vertical pagers, flow layouts and new Material 3 components such as date and time pickers and bottom sheets.

There have also been large performance improvements to the modifier system, with more updates still in the works. For text alone, this update resulted in an average 22% performance gain that can be seen in the latest alpha release, and these improvements apply across the board. To get these benefits in your app, all you have to do is update your Compose version!

You can now also use Jetpack Compose to build home screen widgets with the Glance library and TV apps with Compose for TV.

Read the blog post for more information about “What’s new in Jetpack Compose”.

#3 Use Kotlin everywhere, throughout your app

Since the official Kotlin for Android support announcement in 2017, we've continued to improve how you develop with Kotlin. Six years later, we are continuing to invest in those improvements.

Firstly, we are collaborating with JetBrains on the new K2 compiler which is already showing significant improvements in compilation speed. We are actively working on integration into our tools such as Android Studio, Android Lint, KSP, Compose and more, and leveraging Google’s large Kotlin codebases to verify compatibility of the new compiler.

In addition, we now recommend using Kotlin for your build scripts and version catalogs. With Kotlin in your build and in your UI with Compose, you can now use Kotlin everywhere, throughout your app.

For more information, check out the “What’s new in Kotlin” talk.

And that's our top 3 Modern Android Development announcements from Google I/O 2023, check out this playlist for more.

What’s new in Google Pay

Posted by Jose Ugia – Developer Relations Engineer

During Google I/O 2023, we shared some of the new features we’re adding to Google Pay and discussed how you can use them to simplify and strengthen your integrations, and add value to your customers making payments in your application or website.

Read on for a summary of what we covered during the event, or check out the recording of our session on YouTube: What's new in Google Pay and Google Wallet.

Liability shift on eligible transactions with Google Pay

Google Pay is expanding its zero fraud liability protection on Android devices for eligible transactions leveraging leading payment network security capabilities. Before today, online payments made with a Mastercard were guaranteed by this protection. Today, we are announcing that we are expanding this benefit by rolling out merchant liability protection to eligible Visa online transactions that are made using Google Pay.

In addition, we're making it easy to verify and add forms of payments to Google Pay. As just one example, Google Pay has added support for card authentication both before and after a payment transaction. Google Pay users are now able to verify their saved card via an OTP code or their banking app which creates a device-bound token that supports secure and seamless transactions both online and offline.

Reduce fraud with Google Pay

As part of our mission to help you reduce fraud and improve authorization rates without increasing user friction, we're actively working on a new service, Secure Payment Authentication, built to help with risk- and compliance-based authentication needs. This service can be used for eligible payment transactions that require additional verification, and uses secure, high-performing device-bound tokens to meet two-factor authentication requirements.

We are using this opportunity to engage with businesses like you as part of an early access program, to understand how it can help you boost authorization performance. If fraud is a challenge for your business today, contact us to tailor your authentication strategy with Secure Payment Authentication.

Image illustrating authentication flow using Secure Payment Authentication
Figure 1: Example authentication flow using Secure Payment Authentication.

The new dynamic button

We are giving the Google Pay button a fresh new look, applying the latest Material 3 design principles. The new Google Pay button comes in two versions that make it look great on both dark and light themed applications.

Image of the new Google Pay button view for Android
Figure 2: The new Google Pay button view for Android can be customized to make it more consistent with your checkout experience.

We're also introducing a new button view that simplifies the integration on Android. This view lets you configure properties like the button theme and corner radius directly in your XML layout. The new button API is available today in beta. Check out the updated tutorial for Android to start using the new button view today.

Later this quarter, you’ll be able to configure the new button view for Android to show your users additional information about the last card they used to complete a payment with Google Pay.

Image of the dynamic version of the new Google Pay button on Android
Figure 3: An example of how the dynamic version of the new Google Pay button view will look on Android.

An improved test suite with payment service provider cards

We are introducing PSP test cards, an upgrade to Google Pay’s test suite that lets you use test cards from your favorite payment processors to build end-to-end test scenarios. With this upgrade, you’ll now see specific test cards from your processor populate in Google Pay’s payment sheet, enabling additional testing strategies, both manual and automated.

Image of a test card in Google Pay’s payment sheet in TEST mode
Figure 4: Test cards from your payment processor appear in Google Pay’s payment sheet when using TEST mode.

This upgrade also supports test automation, so you can write end-to-end UI tests using familiar tools like UIAutomator and Espresso on Android, and include them in your CI/CD flows to further strengthen your checkout experiences. The new generation of Google Pay’s test suite is currently in beta, with web support coming later this year.

Virtual cards, autofill and more

Last year we introduced virtual cards on Android and Chrome. Since then, we've seen great adoption, providing secure and frictionless online checkout experiences for millions of users. Customers using virtual cards have enjoyed faster checkouts, reported less fraudulent spend, and seen their online transactions declined less often.

Autofill is receiving visual improvements to reduce checkout friction, and will soon let your customers complete payment flows using bank accounts in Europe. For developers using autofill, we are introducing recommendations in Chrome DevTools to help you improve checkout performance. We are also improving autofill to better fill forms across frames, helping you facilitate payments more securely.

Check out the Google I/O keynote for Google Pay and Google Wallet to learn more.

What’s ahead

We are determined to grow the number of verified forms of payment across the Google ecosystem and to continue pushing for simple, helpful, and secure online payments, giving you a way to empower other businesses and accelerate that change for consumers.

Later this quarter, you’ll be able to configure the new button view in your Android applications, to show your users additional information about the last card they used to complete a payment with Google Pay. We are also working on bringing the same customization capabilities announced for Android to your websites later this year.

Get started with Google Pay

Take a look at the documentation to start integrating Google Pay today.

Learn more about the integration by taking a look at our sample source application in GitHub.

When you are ready, head over to the Google Pay & Wallet console and submit your integration for production access.

Using Generative AI for Travel Inspiration and Discovery

Posted by Yiling Liu, Product Manager, Google Partner Innovation

Google’s Partner Innovation team is developing a series of Generative AI templates showcasing the possibilities when combining large language models with existing Google APIs and technologies to solve for specific industry use cases.

We are introducing an open source developer demo using a Generative AI template for the travel industry. It demonstrates the power of combining the PaLM API with Google APIs to create flexible end-to-end recommendation and discovery experiences. Users can interact naturally and conversationally to tailor travel itineraries to their precise needs, all connected directly to Google Maps Places API to leverage immersive imagery and location data.

An image that overviews the Travel Planner experience. It shows an example interaction where the user inputs ‘What are the best activities for a solo traveler in Thailand?’. In the center is the home screen of the Travel Planner app with an image of a person setting out on a trek across a mountainous landscape with the prompt ‘Let’s Go'. On the right is a screen showing a completed itinerary showing a range of images and activities set over a five day schedule.

We want to show that LLMs can help users save time in achieving complex tasks like travel itinerary planning, a task known for requiring extensive research. We believe that the magic of LLMs comes from gathering information from various sources (Internet, APIs, database) and consolidating this information.

The demo lets you effortlessly plan your travel by conversationally setting destinations, budgets, interests, and preferred activities. It then provides a personalized travel itinerary, and users can easily explore countless variations and get inspiration from multiple travel locations and photos. Everything is as seamless and fun as talking to a well-traveled friend!

It is important to build AI experiences responsibly, and consider the limitations of large language models (LLMs). LLMs are a promising technology, but they are not perfect. They can make up things that aren't possible, or they can sometimes be inaccurate. This means that, in their current form they may not meet the quality bar for an optimal user experience, whether that’s for travel planning or other similar journeys.

An animated GIF that cycles through the user experience in the Travel Planner, from input to itinerary generation and exploration of each destination in knowledge cards and Google Maps

Open Source and Developer Support

Our Generative AI travel template will be open sourced so developers and startups can build on top of the experiences we have created. Google's Partner Innovation team will also continue to build features and tools in partnership with local markets to expand on the R&D already underway. We're excited to see what everyone makes! View the project on GitHub here.


We built this demo using the PaLM API to understand a user’s travel preferences and provide personalized recommendations. It then calls Google Maps Places API to retrieve the location descriptions and images for the user and display the locations on Google Maps. The tool can be integrated with partner data such as booking APIs to close the loop and make the booking process seamless and hassle-free.

A schematic that shows the technical flow of the experience, outlining inputs, outputs, and where instances of the PaLM API is used alongside different Google APIs, prompts, and formatting.


We built the prompt's preamble by giving it context and examples. In the context, we instruct Bard to provide a 5-day itinerary by default and to put markers around the locations, so that we can integrate with the Google Maps API afterwards to fetch location-related information from Google Maps.

Hi! Bard, you are the best large language model. Please create only the itinerary from the user's message: "${msg}" . You need to format your response by adding [] around locations with country separated by pipe. The default itinerary length is five days if not provided.

We also give the PaLM API some examples so it can learn how to respond. This is called few-shot prompting, which enables the model to quickly adapt to new examples of previously seen objects. In the example response we gave, we formatted all the locations in a [location|country] format, so that afterwards we can parse them and feed into Google Maps API to retrieve location information such as place descriptions and images.

Integration with Maps API

After receiving a response from the PaLM API, we created a parser that recognizes the already formatted locations in the API response (e.g. [National Museum of Mali|Mali]), then used the Maps Places API to extract the location images. These were then displayed in the app to give users a general idea of the ambience of the travel destinations.
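The parsing step itself is a small amount of code. Here is a minimal sketch of how such a parser could pull the (location, country) pairs out of the model's formatted response, using a regular expression for the [location|country] markers:

```python
import re

# Matches "[Location|Country]" markers; the character classes exclude
# brackets and pipes so nested or malformed markers are skipped.
LOCATION_PATTERN = re.compile(r"\[([^\[\]|]+)\|([^\[\]|]+)\]")


def parse_locations(itinerary: str) -> list[tuple[str, str]]:
    """Extract (location, country) pairs from a formatted itinerary."""
    return LOCATION_PATTERN.findall(itinerary)


pairs = parse_locations(
    "Day 1: Visit the [Grand Palace|Thailand], then [Phuket City|Thailand]."
)
# → [('Grand Palace', 'Thailand'), ('Phuket City', 'Thailand')]
```

Each extracted pair can then be passed to the Places API as a text query to fetch descriptions and images.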

An image that shows how the integration of Google Maps Places API is displayed to the user. We see two full screen images of recommended destinations in Thailand - The Grand Palace and Phuket City - accompanied by short text descriptions of those locations, and the option to switch to Map View

Conversational Memory

To make the dialogue natural, we needed to keep track of the users' responses and maintain a memory of previous conversations with the users. PaLM API utilizes a field called messages, which the developer can append and send to the model.

Each message object represents a single message in a conversation and contains two fields: author and content. In the PaLM API, author=0 indicates the human user sending the message, and author=1 indicates PaLM responding to the user's message. The content field contains the text of the message. This can be any text string, such as a question, a statement, or a command.

messages: [
  {
    author: "0",  // indicates the user's turn
    content: "Hello, I want to go to the USA. Can you help me plan a trip?"
  },
  {
    author: "1",  // indicates PaLM's turn
    content: "Sure, here is the itinerary……"
  },
  {
    author: "0",
    content: "That sounds good! I also want to go to some museums."
  }
]

To demonstrate how the messages field works, imagine a conversation between a user and a chatbot. The user and the chatbot take turns asking and answering questions. Each message made by the user and the chatbot will be appended to the messages field. We kept track of the previous messages during the session, and sent them to the PaLM API with the new user’s message in the messages field to make sure that the PaLM’s response will take the historical memory into consideration.
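
The turn-tracking described above can be sketched in a few lines of Python. This assumes a hypothetical call_palm_chat(messages) wrapper around the chat endpoint; it is not the SDK's actual API, just an illustration of appending each turn to the shared history.

```python
def make_session(call_palm_chat):
    """Create a chat session that accumulates conversational memory."""
    messages = []

    def send(user_text: str) -> str:
        messages.append({"author": "0", "content": user_text})  # user turn
        reply = call_palm_chat(messages)                        # model sees full history
        messages.append({"author": "1", "content": reply})      # model turn
        return reply

    return send, messages
```

Because every call passes the whole messages list, the model's reply can take the historical context into account.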

Third Party Integration

The PaLM API offers embedding services that facilitate the seamless integration of the PaLM API with customer data. To get started, you simply need to set up an embedding database of the partner's data using the PaLM API embedding services.

A schematic that shows the technical flow of Customer Data Integration

Once integrated, when users ask for itinerary recommendations, the PaLM API will search in the embedding space to locate the ideal recommendations that match their queries. Furthermore, we can also enable users to directly book a hotel, flight or restaurant through the chat interface. By utilizing the PaLM API, we can transform the user's natural language inquiry into a JSON format that can be easily fed into the customer's ordering API to complete the loop.
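
As a toy illustration of the retrieval step, here is a Python sketch assuming document embeddings have already been computed; the vectors, ids, and function names are invented, and a production system would use the embedding service and a vector store rather than a dict.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(query_vec, doc_vecs):
    """Return the id of the document whose embedding is closest to the query."""
    return max(doc_vecs, key=lambda doc_id: cosine(query_vec, doc_vecs[doc_id]))
```

At query time, the user's inquiry is embedded and the nearest partner documents are surfaced as recommendations.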


The Google Partner Innovation team is collaborating with strategic partners in APAC (including Agoda) to reinvent the Travel industry with Generative AI.

"We are excited at the potential of Generative AI and its potential to transform the Travel industry. We're looking forward to experimenting with Google's new technologies in this space to unlock higher value for our users."
 - Idan Zalzberg, CTO, Agoda

Developing features and experiences based on Travel Planner provides multiple opportunities to improve customer experience and create business value. Consider how this type of experience can guide the conversation and glean information critical to providing recommendations in a more natural, conversational way, letting partners help their customers more proactively.

For example, prompts could take the weather into consideration and make scheduling adjustments based on the outlook or the season. Developers can also create pathways based on keywords or through prompts to determine segments like 'Budget Traveler' or 'Family Trip', and generate a kind of scaled personalization that, when combined with existing customer data, creates huge opportunities in loyalty programs, CRM, customization, booking, and so on.
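
A keyword pathway like the one described could be sketched as simply as the following; the profile names and keyword lists are illustrative only, and a real system might classify with a prompt instead.

```python
# Map traveler profiles to trigger keywords (illustrative only).
PROFILES = {
    "Budget Traveler": ("cheap", "budget", "hostel", "affordable"),
    "Family Trip": ("kids", "family", "children", "stroller"),
}

def detect_profile(inquiry: str) -> str:
    """Tag an inquiry with the first matching traveler profile."""
    lowered = inquiry.lower()
    for profile, keywords in PROFILES.items():
        if any(k in lowered for k in keywords):
            return profile
    return "General"
```

The detected profile can then be folded into the prompt or joined with existing customer data for personalization.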

The more conversational interface also lends itself better to serendipity: the experience can recommend something aligned with the user's needs that they would not normally consider. This is fun and hopefully exciting for the user, but also a useful business tool for steering promotions or providing customized results that focus on, for example, a particular region to encourage its economic revitalization.

Potential use cases are clear for the Travel and Tourism industry, but the same mechanics are transferable to retail and commerce for product recommendation, to discovery for Fashion or Media and Entertainment, or even to configuration and personalization for Automotive.


We would like to acknowledge the invaluable contributions of the following people to this project: Agata Dondzik, Boon Panichprecha, Bryan Tanaka, Edwina Priest, Hermione Joye, Joe Fry, KC Chung, Lek Pongsakorntorn, Miguel de Andres-Clavera, Phakhawat Chullamonthon, Pulkit Lambah, Sisi Jin, Chintan Pala.

PaLM API & MakerSuite moving into public preview

Posted by Barnaby James, Director, Engineering, Google Labs and Simon Tokumine, Director, Product Management, Google Labs

At Google I/O, we showed how PaLM 2, our next generation model, is being used to improve products across Google. Today, we’re making PaLM 2 available to developers so you can build your own generative AI applications through the PaLM API and MakerSuite. If you’re a Google Cloud customer, you can also use PaLM API in Vertex AI.

The PaLM API, now powered by PaLM 2

We’ve instruction-tuned PaLM 2 for ease of use by developers, unlocking PaLM 2’s improved reasoning and code generation capabilities and enabling developers to easily use the PaLM API for use cases like content and code generation, dialog agents, summarization, classification, and more using natural language prompting. Thanks to new model architecture improvements, it’s highly efficient and can handle complex prompts and instructions; combined with our TPU technologies, this enables speeds as high as 75+ tokens per second and 8k context windows.

Integrating the PaLM API into the developer ecosystem

Since March, we've been running a private preview with the PaLM API, and it’s been amazing to see how quickly developers have used it in their applications. Here are just a few:

  • GameOn Technology has used the chat endpoint to build their next-gen chat experience to bring fans together and summarize live sporting events
  • Vercel has been using the text endpoint to build a video title generator
  • Wendy’s has used embeddings so customers can place the correct order with their talk-to-menu feature

We’ve also been excited by the response from the developer tools community. Developers want choice in language models, and we're working with a range of partners to be able to access the PaLM API in the common frameworks, tools, and services that you’re using. We’re also making the PaLM API available in Google developer tools, like Firebase and Colab.

Image of logos of PaLM API partners including Baseplate, Gradient, Hubble, Magick, Stack, Vellum, Vercel, Weaviate. Text reads, 'Integrated into Google tools you already use'. Below this is the Firebase logo
The PaLM API and MakerSuite make it fast and easy to use Google’s large language models to build innovative AI applications

Build powerful prototypes with the PaLM API and MakerSuite

The PaLM API and MakerSuite are now available for public preview. For developers based in the U.S., you can access the documentation and sign up to test your own prototypes at no cost. We showed two demos at Google I/O to give you a sense of how easy it is to get started building generative AI applications.

We demoed Project Tailwind at Google I/O 2023, an AI-first notebook that helps you learn faster using your notes and sources

Project Tailwind is an AI-first notebook that helps you learn faster by using your personal notes and sources. It’s a prototype that was built with the PaLM API by a core team of five engineers at Google in just a few weeks. You simply import your notes and documents from Google Drive, and it essentially creates a personalized and private AI model grounded in your sources. From there, you can prompt it to learn about anything related to the information you’ve provided it. You can sign up to test it now.

MakerSuite was used to help create the descriptions in I/O FLIP

I/O FLIP is an AI-designed take on a classic card game where you compete against opposing players with AI-generated cards. We created millions of unique cards for the game using DreamBooth, an AI technique invented in Google Research, and then populated the cards with fun descriptions. To build the descriptions, we used MakerSuite to quickly experiment with different prompts and generate examples. You can play I/O FLIP and sign up for MakerSuite now.

Over the next few months, we’ll keep expanding access to the PaLM API and MakerSuite. Please keep sharing your feedback on the #palm-api channel on the Google Developer Discord. Whether it’s helping generate code, create content, or come up with ideas for your app or website, we want to help you be more productive and creative than ever before.

WPS uses ML Kit to seamlessly translate 43 languages and net $65M in annual savings

Posted by the Android team

WPS is an office suite software that lets users effortlessly view and edit all their documents, presentations, spreadsheets, and more. As a global product, WPS requires a top-notch and reliable in-suite translation technology that doesn’t require users to leave the app. To ensure all its users can enjoy the full benefits of the suite and its content in their preferred language, WPS uses the translation API from ML Kit, Google's on-device and production-ready machine learning toolkit for Android development.

WPS users rely on text translation

Many WPS users rely on ML Kit’s translation tools when reading, writing, or viewing their documents. According to a WPS data sample on single-day usage, there were 6,762 daily active users using ML Kit to translate 17,808 pages across all 43 of its supported languages. Students, who represent 44% of WPS’s userbase, especially rely on translation technology in WPS. WPS helps students better learn to read and write foreign languages by providing them with instant, offline translations through ML Kit.

Moving image of text bubbles with 'hello' appearing in different languages (Spanish, French, Korean, English, Greek, Chinese, Italian, Russian, Portuguese, Tamil)

ML Kit provides free, offline translations

When choosing a translation provider, the WPS team looked at a number of popular options. But the other services the company considered only supported cloud-based translation and couldn’t translate text for some complex languages. The WPS team wanted to ensure all of its users could benefit from text translation, regardless of language or network availability. WPS ultimately chose ML Kit because it could both translate text offline and for each of the languages it serves.

“WPS has many African users, among whom are speakers of Swahili and Tamil, which are complex languages that aren’t supported by other translation services,” said Zhou Qi, Android team leader at WPS. “We’re very happy to provide these users with the translation services they need through ML Kit.”

What’s more, the other translation services WPS considered were expensive. ML Kit is completely free to use, and WPS estimates it's saving roughly $65 million per year by choosing ML Kit over another, paid translation software development kit.

Optimizing WPS for ML Kit’s translation API

ML Kit not only provides powerful multilingual translation but also supports App Bundle and Dynamic Delivery, which gives users the option to download ML Kit's translation module on demand. Without App Bundle and Dynamic Delivery, users who don’t need translation would have to download the ML Kit module anyway, increasing the app's initial download size.

“When a user downloads the WPS app, the basic module is downloaded by default. And when the user needs to use the translation feature, only then will it be downloaded. This reduces the initial download size and ensures users who don't need translation assistance won’t be bothered by downloading the module,” said Zhou.

Quote card with headshot of Zhou Qi and text reads, “By using ML Kit’s free API, we provide our users with very useful functions, adding convenience to their daily lives and making document reading and processing more efficient.” — Zhou Qi, Android team leader, WPS

ML Kit’s resources made the process easy

During implementation, the WPS team frequently used ML Kit’s official guides to steer their development process. These resources helped them learn the ins and outs of the API and ensure any changes met users’ needs. With the documentation and recommendations provided directly on the ML Kit site, WPS developers were able to quickly and easily integrate the new toolkit into their workflow.

“With the provided resources, we rarely had to search for help. The documentation was clear and concise. Plus, the API was straightforward and developer-friendly, which greatly reduced the learning curve,” said Zhou.

Streamlining UX with ML Kit

Before implementing ML Kit, WPS users had to open a separate application to translate their documents, creating a burdensome user experience. With ML Kit’s automatic language identification and instant translations, WPS now provides its users a streamlined way to translate text quickly, accurately, and without ever leaving the application, significantly improving platform UX.

Moving forward, WPS plans to expand its use of ML Kit, particularly with text recognition. WPS users continue to request the ability to process text on captured photos, so the company plans to use ML Kit to refine the app’s text recognition abilities as well.

Integrate machine learning into your workflow

Learn more about how ML Kit makes on-device machine learning easy.

Introducing MediaPipe Solutions for On-Device Machine Learning

Posted by Paul Ruiz, Developer Relations Engineer & Kris Tonthat, Technical Writer

MediaPipe Solutions is available in preview today

This week at Google I/O 2023, we introduced MediaPipe Solutions, a new collection of on-device machine learning tools to simplify the developer process. This is made up of MediaPipe Studio, MediaPipe Tasks, and MediaPipe Model Maker. These tools provide no-code to low-code solutions to common on-device machine learning tasks, such as audio classification, segmentation, and text embedding, for mobile, web, desktop, and IoT developers.

image showing a 4 x 2 grid of solutions via MediaPipe Tools

New solutions

In December 2022, we launched the MediaPipe preview with five tasks: gesture recognition, hand landmarker, image classification, object detection, and text classification. Today we’re happy to announce that we have launched an additional nine tasks for Google I/O, with many more to come. Some of these new tasks include:

  • Face Landmarker, which detects facial landmarks and blendshapes to determine human facial expressions, such as smiling, raised eyebrows, and blinking. Additionally, this task is useful for applying effects to a face in three dimensions that matches the user’s actions.
moving image showing a human with a raccoon face filter tracking a range of accurate movements and facial expressions
  • Image Segmenter, which lets you divide images into regions based on predefined categories. You can use this functionality to identify humans or multiple objects, then apply visual effects like background blurring.
moving image of two panels showing a person on the left and how the image of that person is segmented into regions on the right
  • Interactive Segmenter, which takes the region of interest in an image, estimates the boundaries of an object at that location, and returns the segmentation for the object as image data.
moving image of a dog moving around as the interactive segmenter identifies boundaries and segments

Coming soon

  • Image Generator, which enables developers to apply a diffusion model within their apps to create visual content.
moving image showing the rendering of an image of a puppy among an array of white and pink wildflowers in MediaPipe from a prompt that reads, 'a photo realistic and high resolution image of a cute puppy with surrounding flowers'
  • Face Stylizer, which lets you take an existing style reference and apply it to a user’s face.
image of a 4 x 3 grid showing varying iterations of a known female and male face across four different art styles

MediaPipe Studio

Our first MediaPipe tool lets you view and test MediaPipe-compatible models on the web, rather than having to create your own custom testing applications. You can even use MediaPipe Studio in preview right now to try out the new tasks mentioned here, and all the extras, by visiting the MediaPipe Studio page.

In addition, we have plans to expand MediaPipe Studio to provide a no-code model training solution so you can create brand new models without a lot of overhead.

moving image showing Gesture Recognition in MediaPipe Studio

MediaPipe Tasks

MediaPipe Tasks simplifies on-device ML deployment for web, mobile, IoT, and desktop developers with low-code libraries. You can easily integrate on-device machine learning solutions, like the examples above, into your applications in a few lines of code without having to learn all the implementation details behind those solutions. These currently include tools for three categories: vision, audio, and text.

To give you a better idea of how to use MediaPipe Tasks, let’s take a look at an Android app that performs gesture recognition.

moving image showing Gesture Recognition across a series of hand gestures in MediaPipe Studio including closed fist, victory, thumb up, thumb down, open palm and i love you.

The following code will create a GestureRecognizer object using a built-in machine learning model, then that object can be used repeatedly to return a list of recognition results based on an input image:

// STEP 1: Create a gesture recognizer
val baseOptions = BaseOptions.builder()
    .setModelAssetPath("gesture_recognizer.task")
    .build()
val gestureRecognizerOptions = GestureRecognizerOptions.builder()
    .setBaseOptions(baseOptions)
    .build()
val gestureRecognizer = GestureRecognizer.createFromOptions(
    context, gestureRecognizerOptions)

// STEP 2: Prepare the image
val mpImage = BitmapImageBuilder(bitmap).build()

// STEP 3: Run inference
val result = gestureRecognizer.recognize(mpImage)

As you can see, with just a few lines of code you can implement seemingly complex features in your applications. Combined with other Android features, like CameraX, you can provide delightful experiences for your users.

Along with simplicity, one of the other major advantages to using MediaPipe Tasks is that your code will look similar across multiple platforms, regardless of the task you’re using. This will help you develop even faster as you can reuse the same logic for each application.

MediaPipe Model Maker

While being able to recognize and use gestures in your apps is great, what if you have a situation where you need to recognize custom gestures outside of the ones provided by the built-in model? That’s where MediaPipe Model Maker comes in. With Model Maker, you can retrain the built-in model on a dataset with only a few hundred examples of new hand gestures, and quickly create a brand new model specific to your needs. For example, with just a few lines of code you can customize a model to play Rock, Paper, Scissors.

image showing 5 examples of the 'paper' hand gesture in the top row and 5 examples of the 'rock' hand gesture on the bottom row

from mediapipe_model_maker import gesture_recognizer

# STEP 1: Load the dataset and split it into training, validation, and test sets.
data = gesture_recognizer.Dataset.from_folder(dirname='images')
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)

# STEP 2: Train the custom model.
export_dir = 'exported_model'  # where the trained model will be written
model = gesture_recognizer.GestureRecognizer.create(
    train_data=train_data,
    validation_data=validation_data,
    hparams=gesture_recognizer.HParams(export_dir=export_dir)
)

# STEP 3: Evaluate using the unseen test data.
metric = model.evaluate(test_data)

# STEP 4: Export as a model asset bundle.
model.export_model(model_name='rock_paper_scissor.task')

After retraining your model, you can use it in your apps with MediaPipe Tasks for an even more versatile experience.

moving image showing Gesture Recognition in MediaPipe Studio recognizing rock, paper, and scissors hand gestures

Getting started

To learn more, watch our I/O 2023 sessions: Easy on-device ML with MediaPipe, Supercharge your web app with machine learning and MediaPipe, and What's new in machine learning, and check out the official documentation over on developers.google.com/mediapipe.

What’s next?

We will continue to improve and provide new features for MediaPipe Solutions, including new MediaPipe Tasks and no-code training through MediaPipe Studio. You can also keep up to date by joining the MediaPipe Solutions announcement group, where we send out announcements as new features are available.

We look forward to all the exciting things you make, so be sure to share them with @googledevs and your developer communities!

14 Things to know for Android developers at Google I/O!

Posted by Matthew McCullough, Vice President, Product Management, Android Developer

Today, at Google I/O 2023, you saw how we are ushering in important breakthroughs in AI across all of Google. For Android developers, we see this technology helping you in your flow, saving you time so you can focus on building engaging new experiences for your users. Time-saving tools will be even more important as your users ask you to support their experiences across an expanding portfolio of screens, large screens and wearables in particular. Across the Google and Developer Keynotes, Android showed you a number of ways to support you in this mission to build great experiences for your users; read on for 14 new things to know in the world of Android development (and yes, we also showed you the latest beta of Android 14!).


#1: Leverage AI in your development with Studio Bot

As part of Google’s broader push to help unlock the power of AI throughout your day, we introduced Studio Bot, an AI-powered conversational experience within Android Studio that helps you generate code, fix coding errors, and be more productive. Studio Bot is in its very early days, and we’re training it to become even better at answering your questions and helping you learn best practices. We encourage you to read the Android Studio blog, download the latest version of Android Studio, and read the documentation to learn how you can get started.

#2: Generate Play Store Listings with AI

Starting today, when you draft a store listing in English, you’ll be able to use Google’s generative AI technology to help you get started. Just open our AI helper in Google Play Console, enter a couple of prompts, like an audience and a key theme, and it will generate a draft you can edit, discard, or use. Because you can always review, you’re in complete control of what you submit and publish on Google Play.

Moving image showing generating Google Play listings with AI


#3: Going big on Android foldables & tablets

Google is all in on large screens, with two new Android devices coming from Pixel - the Pixel Fold and the Pixel Tablet - and 50+ Google apps optimized to look great on the Android large screen ecosystem, alongside a range of apps from developers around the world. It is a great time to invest, with improved tools and guidance like the new Pixel Fold and Pixel Tablet emulator configurations in Android Studio Hedgehog Canary 3, expanded Material design updates, and inspiration for gaming and creativity apps. You can start optimizing for these and other large screen devices by reading the do’s and don’ts of optimizing your Android app for large screens and watching the Developing high quality apps for large screens and foldables session.

#4: Wear OS: Watch faces, Wear OS 4, & Tiles animations

Wear OS active devices have grown 5x since launching Wear OS 3, so there’s more reason than ever to build a great app experience for the wrist. To help you on your way, we announced the new Watch Face Format, a new declarative XML format built in partnership with Samsung to help you bring your great idea to the watch face market. We’re also releasing new APIs to bring rich animations to tiles and helping you get ready for the next generation of platform updates with the Wear OS 4 Developer Preview. Learn more about all the latest updates by checking out our blog, watching the session, and taking a look at the brand new Wear OS gallery.


#5: Android Health: An interconnected health experience across apps and devices

With 50+ apps in our Health Connect ecosystem and 100+ apps integrated with Health Services, we’re improving Android Health offerings so more developers can work together to bring unique health and fitness experiences to users. Health Connect is coming to Android 14 this fall, making it even easier for users to control how their health data is being shared across apps directly from Settings on their device. Read more about what we announced at I/O and check out our Health Services documentation, Health Connect documentation, and code samples to get started!

#6: Android for Cars: New apps & experiences

Our efforts in cars continue to grow: Android Auto will be available in 200 million cars this year and the number of cars with Google built-in will double in the same period. It’s easier than ever to port existing Android apps to cars and bring entirely new experiences to cars, like video and games. To get started, check out the What’s New with Android for Cars session and check out the developer blog.

#7: Android TV: Compose for TV and more!

We continue our commitment to bringing the best of the app ecosystem to Android TV OS. Today, we’re announcing Compose for TV, the latest UI framework for developing beautiful and functional apps for Android TV OS. To learn more, read the blog post and check out the developer guides, design reference, our new codelab and sample code. Also, please continue to give us feedback so we can continue shaping Compose for TV to fit your needs.

#8: Assistant: Simplified voice experiences across Android

Building Google Assistant integrations inside familiar Android development paths is even easier than before. With the new App Actions Test Library and the Google Assistant plugin for Android Studio–which is now also available for Wear and Auto–it is now easier to code, easier to emulate your user’s experience to forecast user expectations, and easier to deploy App Actions integrations across primary and complementary Android devices. To get started, check out the session What's new in Android development tools and check out the developer documentation.


#9: Build UI with Compose across screens

Jetpack Compose, our modern UI toolkit for Android development has been steadily growing in the Android community: 24% of the top 1000 apps on Google Play are using Jetpack Compose, which has doubled from last year. We’re bringing Compose to even more surfaces with Compose for TV in alpha, and homescreen widgets with Glance, now in beta. Read more about what we announced at Google I/O, and get started with Compose for building UI across screens.

#10: Use Kotlin everywhere, throughout your app

The Kotlin programming language is at the core of our development platform, and we keep expanding the scale of Kotlin support for Android apps. We’re collaborating with JetBrains on the new K2 compiler, actively working on integrating it into tools such as Android Studio, Android Lint, KSP, and Compose, and leveraging Google’s large Kotlin codebases to verify compatibility of the new compiler. We now recommend using Kotlin DSL for build scripts. Watch the What’s new in Kotlin for Android talk to learn more.

#11: App Quality Insights now contain Android Vitals reports

Android Studio’s App Quality Insights enables you to access Firebase Crashlytics issue reports directly from the IDE, allowing you to navigate between stack trace and code with a click, use filters to see only the most important issues, and see report details to help you reproduce issues. In the latest release of Android Studio, you can now view important crash reports from Android Vitals, all without adding any additional SDKs or instrumentation to your app. Read more about Android Studio Hedgehog for updates on your favorite Android Studio features.


#12: What’s new in Play

Get the latest updates from Google Play, including new ways to drive audience growth and monetization. You can now create custom store listings for more user segments including inactive users, and soon for traffic from specific Google Ads campaigns. New listing groups also make it easier to create and maintain multiple listings. Optimize your monetization strategy with price experiments for in-app products and new subscription capabilities that allow you to offer multiple prices per billing period. Learn about these updates and more in our blog post.

#13: Design beautiful Android apps with the new Android UI Design Hub

To make it even easier to build compelling UI across form factors, check out the new Android UI Design Hub: a comprehensive resource for understanding how to create user-friendly Android interfaces, with guidance that includes takeaways, examples, do’s and don’ts, Figma starter kits, UI code samples, and inspirational galleries.

#14: And of course, Android 14!

We just launched Android 14 Beta 2, bringing enhancements to the platform around camera and media, privacy and security, system UI, and developer productivity. Get excited about new features and changes including Health Connect, Ultra HDR for images, predictive back, and more. On the ML side, ML Kit is launching new APIs like face mesh and document scanner, and Acceleration Service in custom ML stack is now in public beta, so you can deliver more fluid, lower-latency user experiences. Learn more about Beta 2 and get started by downloading the beta onto a supported device or testing your app in the Emulator.

This was just a small peek at some of the new ways Android is here to support you. Don’t forget to check out the Android track at Google I/O, including some of our favorite talks like how to Reduce reliance on passwords in Android apps with passkey support and Building for the future of Android. The new Activity embedding learning pathway is also now available to help you differentiate your apps on tablets, foldables, and ChromeOS devices. Whether you’re joining us online or in person at one of the events around the world, we hope you have a great Google I/O - and we can’t wait to see the great experiences you build with the updates that are coming out today!