Tag Archives: Google I/O 2023

Create world-scale augmented reality experiences in minutes with Google’s Geospatial Creator

Posted by Stevan Silva, Senior Product Manager

ARCore, our augmented reality developer platform, provides developers and creators alike with simple yet powerful tools to build world-scale and room-scale immersive experiences on 1.4 billion Android devices.

Since last year, we have extended coverage of the ARCore Geospatial API from 87 countries to over 100, powered by Google’s Visual Positioning System and the expansion of Street View coverage, helping developers build and publish more transformative and robust location-based immersive experiences. We continue to push the boundaries with helpful applications and delightful new world-scale use cases, whether it's the innovative hackathon submissions from the ARCore Geospatial API Challenge or our partnership with Gorillaz, where we transformed Times Square and Piccadilly Circus into a music stage for a larger-than-life immersive performance.

One thing we’ve consistently heard from you over the past year is the desire for broader access to these powerful resources, so that anyone can create, visualize, and deploy augmented reality experiences around the world.

Introducing Geospatial Creator


Today, we are launching Geospatial Creator, a tool that helps anyone easily visualize, design, and publish world-anchored immersive content in minutes straight from platforms you already know and love — Unity or Adobe Aero.

Easily visualize, create, and publish augmented reality experiences with Geospatial Creator in Unity (left) and Adobe Aero (right)

Geospatial Creator, powered by ARCore and Photorealistic 3D Tiles from Google Maps Platform, enables developers and creators to easily visualize where in the real world they want to place their digital content, similar to how Google Earth or Google Street View visualize the world. Geospatial Creator also includes new capabilities, such as Rooftop anchors, that make it even easier to anchor virtual content to the 3D Tiles, saving developers and creators time and effort in the creation process.

These tools help you build world-anchored, cross-platform experiences on supported devices on both Android and iOS. Immersive experiences built in Adobe Aero can be shared via a simple QR code scan or link, with no full app download required. Everything you create in Geospatial Creator can be experienced in the physical world through real-time localization and real-world augmentation.


With Geospatial Creator, developers and creators can now build on top of Photorealistic 3D Tiles from Google Maps Platform (left), which provide real-time localization and real-time augmentation (right)

When the physical world is augmented with digital content, it redefines the way people play, shop, learn, create, and get information. To give you an idea of what you can achieve with these tools, we’ve been working with partners in gaming, retail, and local discovery, including Gap, Mattel, Global Street Art, Singapore Tourism Board, Gensler, TAITO, and more, to build real world use cases.

SPACE INVADERS: World Defense immersive game turns the world into a playground

Later this summer you’ll be able to play one of the most acclaimed arcade games in real life, in the real world. To celebrate the 45th anniversary of the original release, TAITO will launch SPACE INVADERS: World Defense. The game, powered by ARCore and Geospatial Creator, is inspired by the original gameplay, with players defending the Earth from SPACE INVADERS in their neighborhood. It will combine AR and 3D gameplay to deliver a fully contextual and highly engaging immersive experience that connects multiple generations of players.



Gap and Mattel transform a storefront into an interactive immersive experience

Gap and Mattel will transform the iconic Times Square Gap Store into an interactive Gap x Barbie experience powered by Geospatial Creator in Adobe Aero. Starting May 23, customers will see the store come to life with colors and shapes and be able to interact with Barbie and her friends modeling the new limited edition Gap x Barbie collection of clothing.

moving image of Gap by Mattel

Global Street Art brings street art to a new dimension with AR murals

Google Arts & Culture partnered with Global Street Art and three world-renowned artists to augment physical murals in London (Camille Walala), Mexico City (Edgar Saner), and Los Angeles (Tristan Eaton). The artists used Geospatial Creator in Adobe Aero to create the virtual experience, augmenting physical murals digitally in AR and bringing to life a deeper and richer story about the art pieces.



Singapore Tourism Board creates an immersive guided tour to explore Singapore

Google Partner Innovation team partnered with Singapore Tourism Board to launch a preview of an immersive Singapore guided tour in their VisitSingapore app. Merli, Singapore's tourism mascot, leads visitors on an interactive augmented tour of the city’s iconic landmarks and hidden gems, beginning at Merlion Park and engaging visitors with an AR symphony performance at Victoria Theatre and Concert Hall. The full guided tour is launching later this summer, and will help visitors discover the best local hawker food, uncover the city's history through scenes from the past, and more.


Gensler helps communities visualize new urban projects

Gensler used Geospatial Creator in Adobe Aero to help communities easily envision what new city projects for the unhoused might look like. The immersive designs of housing projects allow everyone to better visualize the proposed urban changes and their social impact, ultimately bringing suitable shelter to those who need it.

moving image of city projects from Gensler

Geospatial Creator gives anyone the superpower of creating world-scale AR experiences remotely. Both developers and creators can build and publish immersive experiences in minutes in countries where Photorealistic 3D Tiles are available. In just a few clicks, you can create applications that help communities, delight your users, and provide solutions to businesses. Get started today at goo.gle/geospatialcreator. We’re excited to see what you create when the world is your canvas, playground, gallery, or more!

Introducing the Watch Face Format for Wear OS

Posted by Anna Bernbaum, Product Manager

We are excited to announce the launch of the Watch Face Format! We worked in partnership with Samsung to introduce a new way for you to build watch faces for Wear OS smartwatches.

The Watch Face Format is a declarative XML format to design the appearance and behavior of watch faces. This means that there is no executable code involved in creating a watch face, and there will be no code embedded in your watch face APK.

The Wear OS platform takes care of the logic needed to render the watch face so you no longer have to worry about code optimizations or battery performance.

Watch faces that are built with this new format require less maintenance and fewer updates than the ones built using the Jetpack Watch Face library. For example, you no longer need to update your watch face to benefit from improvements in performance or battery consumption, or to get the latest bug fixes.

Starting today, you can build watch faces in this new format and publish them on Google Play, ready for when the first Wear OS 4 watches are available.

The Watch Face Format lets you create…

Analog and digital watch faces:

Three watch faces illustrating different analogue and digital styles.

Watch faces with complications:

Three watch faces displaying different complication formats.

Customizable watch faces:

Three instances of the same watch face displayed in different color choices.

And more...

A planets-based watch face, a rainbow-inspired watch face and one based on the sun and moon

Watch Face Editing

With the Watch Face Format, we have included the watch face editor as part of Wear OS itself, so users can customize every watch face using the same editor UI. You no longer need to build your own watch face editor for users to customize their watch face.

Customizing the watch face with the in-built watch face editor
Wear OS 4’s editor for watch faces made using the Watch Face Format

Build Watch Faces, or Watch Face Tools

The new Watch Face Format can be used to build watch faces directly, or it can be integrated into creation tools, allowing designers to create watch faces without having to write any executable code.


Watch Face Studio

Today, Samsung has released the latest version of Watch Face Studio, ready for you to try now. As an alternative to directly writing XML using the Watch Face Format, Watch Face Studio makes it easy for designers to create watch faces without any coding experience.

Watch faces made in the latest version of Watch Face Studio use the Watch Face Format by default when run on a Wear OS 4 watch, and run as traditional watch faces on Wear OS 3 watches.

Using Watch Face Studio to create a watch face.
Using Watch Face Studio to create a watch face

Learn more

Build watch faces using the Watch Face Format today:

What’s new with Android for Cars: I/O 2023

Posted by Jennifer Tsau, Product Management Lead and David Dandeneau, Engineering Lead

For more than a decade, Google has been committed to bringing safe and seamless connected experiences to cars. We’re continuing to see strong momentum and adoption across Android for Cars. Android Auto is supported by nearly every major car maker, and will be in nearly 200 million cars by the end of this year. And the number of cars powered by Android Automotive OS with Google built-in — which includes top brands like Chevrolet, Volvo, Polestar, Honda, Renault and more — is expected to nearly double by the end of this year.

With cars becoming more connected and equipped with immersive displays, there are more opportunities for developers to bring app experiences to cars. We’re excited to share updates and new ways for developers to reach more users in the car.


Apps designed for driving experiences

Helping drivers while on the road, whether they are navigating, listening to music, or checking the weather, is a top priority. We’re continuing to invest in tools and resources, including the Android for Cars App Library, to make it even easier for developers to build new apps or port existing Android apps over to cars.
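To give a feel for the programming model, here is a minimal Kotlin sketch of a car app entry point built with the Android for Cars App Library (androidx.car.app). The class names and strings are illustrative, not from a shipping app:

import android.content.Intent
import androidx.car.app.CarAppService
import androidx.car.app.CarContext
import androidx.car.app.Screen
import androidx.car.app.Session
import androidx.car.app.model.MessageTemplate
import androidx.car.app.model.Template
import androidx.car.app.validation.HostValidator

// The entry point the car host binds to.
class MyCarAppService : CarAppService() {
    // Use a real allow-list in production; ALLOW_ALL is for development only.
    override fun createHostValidator(): HostValidator =
        HostValidator.ALLOW_ALL_HOSTS_VALIDATOR

    override fun onCreateSession(): Session = object : Session() {
        override fun onCreateScreen(intent: Intent): Screen = HelloScreen(carContext)
    }
}

// A Screen returns one of the library's templates; the host renders it,
// which is what keeps experiences consistent and driver-safe across cars.
class HelloScreen(carContext: CarContext) : Screen(carContext) {
    override fun onGetTemplate(): Template =
        MessageTemplate.Builder("Hello from the car!")
            .setTitle("My Car App")
            .build()
}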

New capabilities for navigation apps

Today, we announced Waze rolling out globally on the Google Play Store for all cars with Google built-in, expanding its availability beyond Android Auto. As part of this launch, we created more templates in the Android for Cars App Library to help speed up development time across a number of app categories, including navigation.

For navigation apps, it’s also now possible to integrate with the instrument cluster, providing turn-by-turn directions right in the driver's line of sight. Developers can also access car sensor data to surface helpful information like range, fuel level, and speed, providing more contextual assistance to drivers.
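As a rough sketch of what sensor access can look like with the library's androidx.car.app.hardware package (simplified here; values can be unavailable on some vehicles, which is why the status check matters):

import androidx.car.app.hardware.CarHardwareManager
import androidx.car.app.hardware.common.CarValue
import androidx.core.content.ContextCompat

// Inside a Screen or Session, where carContext is available.
val carInfo = carContext.getCarService(CarHardwareManager::class.java).carInfo

// Listen for energy updates such as EV range and fuel level.
carInfo.addEnergyLevelListener(ContextCompat.getMainExecutor(carContext)) { energy ->
    if (energy.rangeRemainingMeters.status == CarValue.STATUS_SUCCESS) {
        val rangeKm = (energy.rangeRemainingMeters.value ?: 0f) / 1000f
        // Surface rangeKm in your template, e.g. as a Row subtitle.
    }
}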

A car dashboard shows the Waze app open on the display panel
The Waze app is coming to all cars with Google built-in, including the first-ever Chevrolet Blazer EV launching this year.

Tools to easily port your media apps across Android for Cars

Media apps continue to be a top use case in the car, and it’s quicker than ever to bring your media apps to Android Auto and Android Automotive OS. Audible recently joined popular streaming audio apps like Deezer, Soundcloud, and Spotify to offer their apps across both Android Auto and cars with Google built-in. If you have a media app on mobile, port it over to reach new users in the car.
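If you are porting a media app, the car integration point is a browsable media service. Below is a minimal sketch using MediaBrowserServiceCompat; the playback wiring is omitted and the names are illustrative:

import android.os.Bundle
import android.support.v4.media.MediaBrowserCompat
import android.support.v4.media.session.MediaSessionCompat
import androidx.media.MediaBrowserServiceCompat

class MyMediaService : MediaBrowserServiceCompat() {

    override fun onCreate() {
        super.onCreate()
        // The car UI controls playback through this session.
        sessionToken = MediaSessionCompat(this, "MyMediaService").sessionToken
    }

    // The root of the browse tree shown on the car screen.
    override fun onGetRoot(
        clientPackageName: String,
        clientUid: Int,
        rootHints: Bundle?
    ): BrowserRoot = BrowserRoot("root", null)

    // Populate each level of the browse tree; empty here for brevity.
    override fun onLoadChildren(
        parentId: String,
        result: Result<MutableList<MediaBrowserCompat.MediaItem>>
    ) {
        result.sendResult(mutableListOf())
    }
}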

New app categories for driving experiences

The Android for Cars App Library now allows developers to bring new categories of apps to cars, including internet of things (IoT) and weather apps. The IoT category is available for all developers, while weather is in an early access program. In the weather category, The Weather Channel app will join other weather apps like Weather & Radar later this year.

We’re also working with messaging apps like Zoom, Microsoft Teams, and Webex by Cisco to allow you to join meetings by audio from your car display in the coming months.

A car display shows a Zoom meeting schedule next to a route in Google Maps.
Coming soon, join meetings by audio from your car display.

Apps designed for parked and passenger experiences

With screens expanding in size and more being added for passengers, there is growing demand for parked and passenger experiences in cars.

Video, gaming, and browsing in cars

Now, video and gaming app categories are available in the car, with an early access program for browsing apps coming soon. YouTube is now available for car makers to offer in cars with Google built-in. And drivers of cars with Google built-in will soon have access to popular titles like Beach Buggy Racing 2, Solitaire FRVR, and My Talking Tom Friends from publishers like Vector Unit, FRVR and Outfit7 Limited. Developers can now port their large screen optimized apps to cars to take advantage of this opportunity.

A car display shows a YouTube video of an animated character singing.
YouTube is coming to cars with Google built-in, like the Polestar 2.

More screens in cars allow for new experiences between drivers and passengers, including individual and shared entertainment experiences. We're excited to announce multi-screen support is coming to Android Automotive OS 14 — stay tuned for more updates.

A car with a panoramic front display and screens in headrests showing apps and video content.
Support for multiple screens is coming to Android Automotive OS 14.

Start developing apps for cars today

To learn how to bring your apps to cars, check out the technical session, codelab and documentation on the Android for Cars developer site. With all the opportunities across car screens, there has never been a better time to bring your apps and experiences to cars. Thanks for all the contributions to the Android ecosystem. See you on the road!

Building pixel-perfect living room experiences with Compose for TV

Posted by Paul Lammertsma, Developer Relations Engineer

Over the past year, we’ve continued to see significant growth on Android TV OS, now with over 150 million monthly active devices. In fact, according to Strategy Analytics, the Android TV streaming platform shipped on more devices worldwide than any other streaming TV platform in 2022.

Today, we’re launching the Alpha release of Compose for TV, the latest UI framework for developing beautiful and functional apps for Android TV.


Building pixel-perfect living room experiences with Compose for TV

Compose for TV unlocks all the benefits of Jetpack Compose for your TV apps, allowing you to build apps with less code, easier maintenance and a modern Material 3 look straight out of the box:

  • Less code: Do more with less code and avoid entire classes of bugs, so code is simple and easy to maintain. 
  • Intuitive: Describe your UI, and Compose takes care of the rest. As the app state changes, your UI automatically updates. 
  • Accelerate development: Compose for TV is compatible with all your existing code so you can adopt when and where you want. Iterate fast with live previews and full Android Studio support. 
  • Powerful & flexible: Create beautiful apps with direct access to the Android platform APIs that can be easily reused between other form factors, including your existing mobile, tablet, foldable, wearable and TV interfaces.

TV design guidelines

We're also excited to announce the launch of our new TV Design Guidelines for Android TV. This comprehensive guide gives you the tools you need to create TV apps that are visually appealing, intuitive, and immersive. The guidelines cover everything from typography and color to navigation and layout. Follow these guidelines to create high-quality TV apps that are easy to use.

image of a wall mounted, flat screen television in a modern home. The screen is showing the preview for a show titled 'Paws' with an adorable puppy as the show's star, and a Watch Now button

Components you can use today

Here are some components from the TV library that are optimized for the living room experience. You can use them alongside the Material components in Compose you’re already familiar with.


Scroll containers

TvLazyColumn {
    items(contentList) { content ->
        TvLazyRow {
            items(content) { cardItem -> Card(cardItem) }
        }
    }
}

moving image of a grid of content cards
A grid of content cards

TvLazyRow(pivotOffsets = PivotOffsets(0.0f)) {
    items(movies) { movie -> MyContentCard(movie) }
}

moving image of a grid of content cards
Adjusting the pivot of a TvLazyRow


Immersive List

ImmersiveList(
    modifier = Modifier.height(130.dp).fillMaxWidth(),
    background = { index, _ ->
        AnimatedContent(targetState = index) { MyImmersiveListBackground(it) }
    },
) {
    TvLazyRow {
        items(featuredContentList.size) { index ->
            MyCard(
                Modifier.focusableItem(index),
                featuredContentList[index]
            )
        }
    }
}

moving image of a grid of content cards
ImmersiveList allows TvLazyRows to be combined with featured background content

Featured carousel

Carousel(
    itemCount = featuredContentList.size,
) { index ->
    CarouselItem(
        background = { MyBackground(index) },
        content = { MyFeaturedContent(featuredContentList[index]) }
    )
}

moving image of a grid of content cards
Carousel showcases featured content with custom foregrounds and backgrounds

Navigation

var selectedTabIndex by remember { mutableStateOf(0) }

TabRow(selectedTabIndex = selectedTabIndex) {
    tabs.forEachIndexed { index, tab ->
        Tab(
            selected = selectedTabIndex == index,
            onFocus = { selectedTabIndex = index },
        ) {
            Text(tab)
        }
    }
}

MyContentBody(selectedTabIndex)

moving image of a grid of content cards
TabRows can be placed at the top of the screen to provide top navigation

Side navigation with navigation drawer


NavigationDrawer(
    drawerContent = {
        if (DrawerValue.Open == it) {
            MyExpandedSideMenu()
        } else {
            MyCompactSideMenu()
        }
    }
) {
    MyContentBody()
}

moving image of a grid of content cards
NavigationDrawer makes it easy to implement side navigation that expands and collapses

TV-optimized components

Subtle focus hints that work on phones and tablets might not be optimal for TVs, due to environmental factors such as distance from the screen and contrast ratio. To address this, we’ve built dedicated Material3 inspired components that provide big, bold focus for selected elements like Buttons and Cards, designed with accessibility in mind. You can use these Indications for your own custom surfaces as well.
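For example, a focusable surface might scale up and show a bold border when it gains focus. The sketch below uses Surface and ClickableSurfaceDefaults from the androidx.tv.material3 alpha; treat the exact parameter names as assumptions, since they may shift while the library is in alpha:

// Uses androidx.tv.material3 (alpha); parameter names may change.
Surface(
    onClick = { /* open details */ },
    // Grow the surface slightly when it takes focus...
    scale = ClickableSurfaceDefaults.scale(focusedScale = 1.1f),
    // ...and add a bold border so the selection reads from across the room.
    border = ClickableSurfaceDefaults.border(
        focusedBorder = Border(BorderStroke(3.dp, Color.White))
    ),
    glow = ClickableSurfaceDefaults.glow(
        focusedGlow = Glow(elevationColor = Color.White, elevation = 8.dp)
    )
) {
    MyContentCard()
}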


moving image of a grid of content cards
Component focus can be customized through different indication types: Scale, Border, Glow and Color

Built with developers

We worked closely with a group of early adopters to get their feedback on Compose for TV. Here’s what they have to say:


Quote card with headshot of Dai Williams, Plex, smiling and text reads,'TV focus and scrolling support on Compose from Google has greatly improved our developer productivity and app performance. We are excited to launch more and more features using Compose this year.'

Quote card with headshot of Danny Preussler, Android Platform Lead, Soundcloud, smiling and text reads,'Thanks to Compose for TV, we are able to reuse components and move much faster than the old Leanback View APIs would have ever allowed us to'

Quote card with headshot of Petr Introvič, Showmax, smiling and text reads,'Dev-friendly components like ImmersiveList or Carousel helped us put the top content front and center, and NavigationDrawer added top-level navigation—the final piece for our TV app migration to Compose.'

Quote card with headshot of Kishore AK, CTO, Zee5, smiling, and text reads 'We are constantly striving to ensure our users have the best possible experience. We started early on Compose for TV and are confident that its implementation will help in making our app rendering faster and smoother.'

Learning more

To get started, check out the developer guides, design reference, our new codelab and sample code. Be sure to check the latest release notes to keep up to date with the latest updates to Compose for TV.


Feedback from developers & designers like you

We’ve heard your feedback about the Leanback API and your desire to use a modern UI framework that looks great out of the box, but also lends itself to be thoroughly themed and customized. Please continue to give us feedback so we can continue shaping Compose for TV to fit your needs.

Watch the Wear OS updates at I/O 2023

Posted by Kseniia Shumelchyk, Android Developer Relations Engineer

As we continue to evolve the Wear OS platform, we're excited to share with you some of the newest features and improvements that have been added to help you create innovative and engaging experiences for your users.

Partners like Peloton and Todoist have been building exceptional experiences for Wear OS - and seeing the impact on their feature adoption and engagement. Hear directly from Peloton engineers about how they built a differentiated experience for the watch with Compose for Wear OS.


In this blog post, we’ll be highlighting some of the key updates we announced at Google I/O this year, so let’s dive in and explore the latest advancements in Wear OS!

Wear OS 4 Developer Preview

Today we’re releasing the first Developer Preview of Wear OS 4, the next version of Google’s smartwatch platform arriving later this year. It brings enhancements to security and user customization, along with power optimizations.

This preview introduces several new tools to help enhance your Wear OS app experience:

Watch Face Format

We are launching the Watch Face Format, a new way to create watch faces for Wear OS. The format makes it easier to create customizable and more power-efficient watch faces for Wear OS 4. Developed in partnership with Samsung, the Watch Face Format is a declarative XML format, so there is no executable code involved in creating a watch face and there will be no code embedded in your watch face APK. Read more.

Watch faces created using the new Format

Tiles

Wear OS tiles give users fast, predictable access to the information and actions they rely on most. Version 1.2 of the Jetpack Tiles library introduces support for platform data bindings, so if your tile uses platform data sources such as heart rate, step count, or time, your tile is updated once per second.
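As a sketch of what a platform data binding looks like in code, the snippet below binds a text element to heart rate. The builder names follow the androidx.wear.protolayout alpha and should be treated as assumptions to verify against the release notes:

import androidx.wear.protolayout.LayoutElementBuilders.Text
import androidx.wear.protolayout.TypeBuilders.StringLayoutConstraint
import androidx.wear.protolayout.TypeBuilders.StringProp
import androidx.wear.protolayout.expression.DynamicBuilders.DynamicFloat
import androidx.wear.protolayout.expression.PlatformHealthSources

// Heart rate as a dynamic value: the renderer re-evaluates it about once
// per second without the tile pushing a new layout. Requires BODY_SENSORS.
val heartRate: DynamicFloat = PlatformHealthSources.heartRateBpm()

val heartRateText = Text.Builder()
    .setText(
        StringProp.Builder("--") // static fallback while no data is available
            .setDynamicValue(
                heartRate.format(
                    DynamicFloat.FloatFormatter.Builder()
                        .setMaxFractionDigits(0)
                        .build()))
            .build())
    // Reserve layout space for the widest value the dynamic text can take.
    .setLayoutConstraintsForDynamicText(
        StringLayoutConstraint.Builder("888").build())
    .build()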

The new version of tiles also adds support for animations. You can use tween animations to create smooth transitions when part of your layout changes, and transition animations to animate elements entering or leaving the tile.

Image showing examples of animated Tiles
Examples of animated Tiles

Get your app ready

Wear OS 4 is based on Android 13, which is several versions newer than the current Wear OS version, so your app will need to handle the system behavior changes that took effect in Android 12 and Android 13. We recommend you start by testing your app and releasing a compatible update first – as devices get upgraded to Wear OS 4, it’s a basic but critical level of quality that provides a good app experience for users.

Download the Wear OS 4 emulator in Android Studio Hedgehog to explore new features and test your app on Wear OS 4 Developer Preview.


Tooling and library updates

Wear OS support in Firebase Test Lab

Firebase Test Lab will support running tests for your standalone app on physical Google Pixel Watches in the next few weeks. You can run your automated tests on the Google Pixel Watch via Gradle Managed Devices, or use the Firebase Console to also run Robo tests. To learn more, check out available devices.

Wear OS support in the Pre-launch reports

Today we are also excited to announce Wear OS support in Google Play Pre-launch reports for standalone apps. The Pre-launch report helps to identify issues proactively before your app reaches users, so it’s an important tool to help you launch a high-quality app. You can test for stability, accessibility, and security and trust, and review screenshot previews! At the moment the analysis runs on Wear emulators, and it is soon launching on Google Pixel Watches.

Emulator improvements

The Wear OS 4 emulator brings support for emulated Bluetooth, which lets you test more use cases, for example Bluetooth audio.

The new Wear OS 4 emulator doesn’t support unmanaged 32-bit code, so if your app uses native code, make sure that it includes both 32-bit and 64-bit native libraries. This will also prepare your app for upcoming 64-bit only hardware.
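For example, in a Gradle Kotlin DSL build file you might declare both ABIs like this (a sketch; your module's packaging setup may differ):

// build.gradle.kts: package both 32-bit and 64-bit native libraries so the
// app runs on existing watches as well as 64-bit-only environments.
android {
    defaultConfig {
        ndk {
            abiFilters += listOf("armeabi-v7a", "arm64-v8a")
        }
    }
}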

In Android Studio Hedgehog we also added capabilities for capturing screenshots and Logcat snapshots in the Wear OS emulator, so it is now much easier to generate screenshots for your app’s store listing.

Jetpack libraries

Since the latest stable Compose for Wear OS 1.1 release, we continue to bring new features and improvements to the toolkit. Version 1.2 already has a number of alpha releases – check out release notes to find out more.

Health Services version 1.0 has introduced a few new features in the latest beta releases. Most notably, it includes BatchingMode to deliver batched exercise data at a configured interval instead of the default interval, as well as an ExerciseTypeConfig API which enables updates during ongoing exercises, such as golf. If you are interested in learning what's new in Android Health, check out this blog.


Start building for Wear OS now

Wear OS active devices have grown 5x since the launch of Wear OS 3, and it's the fastest growing smartwatch platform.

We’re excited to share our brand new Wear OS Gallery, where you can find even more guidance with proven design and development patterns for messaging, media, and health & fitness apps!

With the latest updates, you'll have even more tools at your disposal to create beautiful, high-quality wearable experiences.


Learn more

Get started building for Wear OS with hands-on experience! Try our Compose for Wear OS codelab, and check out the documentation and samples.

The new Wear OS quality requirements will come into effect on August 31, 2023, so consider them early when designing and developing your app.

We’re looking forward to seeing the experiences that you build!

Introducing the Android Design Hub: The ultimate resource for building exceptional user interfaces across all form factors

Posted by Adhithya Ramakumar, Design Lead, Android Developer Experience and Rebecca Gutteridge, Senior Developer Relations Engineer, Android

Cross-posted from Android Medium

We’re introducing the Android Design Hub to make it easier to build compelling UI across form factors.

What is the Android UI design hub?

The design hub is a comprehensive resource with opinionated guidance designed to empower designers and developers like you to create stunning and user-friendly interfaces for Android apps. It's all about sharing takeaways, examples and do’s and don’ts, starter Figma kits, UI samples and inspiring galleries. This is the beginning of a journey we'd love for you to join us on.

Why the Android UI design hub?

A well-designed UI is crucial to the success of an app, and that's why we created a one-stop shop designed to help you create outstanding user interfaces across all Android form factors:

The design hub goes into depth about what it means to design for Android, while complementing and extending our Material Design open-source design system. As the Android ecosystem adds an increasing variety of devices, it's more important than ever to create seamless and adaptable experiences for your users. The Android UI Design Hub is your comprehensive resource for mastering the art of designing and implementing UI on a diverse range of devices, from smartphones and tablets to foldables, wearables, and TV.

Here’s a sneak peek of what you'll find in the design hub:

Resources organized by form factors

Gain access to a library tailored for each form factor, including guidance, design templates, and sample projects. With these resources at your fingertips, you're well equipped to create exceptional UI experiences for every device category.

moving image of various form factors

Galleries with UI inspiration across form factors

Spark your creativity by exploring our collection of Android app designs for a variety of popular categories of apps such as Social, Productivity, Health & Fitness, Shopping, and more. Get inspired by thoughtfully curated examples and discover design patterns to tackle common design challenges across form factors.

moving image of various Android app designs

Design guidelines

Dive into the nitty-gritty of Android design principles, learning how to implement high-quality Android designs and understanding app layout with Android system bars, navigation modes, theming, and more. Our comprehensive documentation helps ensure that your app's design adheres to the highest standards and directs you to the latest Material Design guidelines.

Guidelines:

Design tools and resources

image of a 6 x 3 grid of illustrated designs

Equip yourself with an extensive selection of design tools, templates, and resources, specially tailored for Android development. Streamline your workflow and bring your app ideas to life more efficiently:

Check out the code samples with their corresponding Figma Kits:

Whether you're an experienced designer or a developer looking to enhance your design skills, the Android Design Hub is here to support and guide you throughout your journey.

So what are you waiting for? Visit the Android UI design hub today, and start creating exceptional user interfaces that captivate your audience and leave a lasting impression. Happy designing! 🚀

How It’s Made: I/O FLIP adds a twist to a classic card game with generative AI

Posted by Jay Chang, Product Marketing Manager for Flutter & Dart and Glenn Cameron, Product Marketing Manager for Core ML

I/O FLIP is an AI-designed take on a classic card game, powered by Google, and created to inspire developers to experiment with what is possible with Google’s new generative AI technologies. Thousands of custom character images were pre-generated with DreamBooth on Muse, and their descriptions were written using the PaLM API. The game’s UI and backend were built in Flutter and Dart, a suite of Firebase tools were used for hosting and sharing, and Cloud Run was used to help scale.

When a user plays I/O FLIP, they:

  1. Select a character class and a power to generate a pack of 12 cards
  2. Select three cards from the pack to create their team
  3. Join a match and win a best-of-3
  4. Win multiple matches in a row to create a streak of wins for a chance to make the leaderboard
  5. Share their deck with players from all over the globe
Four phones side by side showing the I/O FLIP game, including drop-downs to select classes and powers for cards and various card battles.

Let’s dig into how we built the game.

Flutter and Dart: User Interface, Hologram effects, and backend

I/O FLIP’s game logic and UI are built on a foundation provided by features from the Flutter Casual Games Toolkit, including audio functionality and app navigation via the go_router package. Since FLIP is a web app, it was important that it was responsive, resizing to fit the user’s screen size, and that it accepted input from a variety of devices: mobile, tablet, and desktop.

Much of the logic in FLIP is based on the game cards, so they’re a good place to start. Each card consists of an image of one of four Google mascots: Dash, Sparky, Dino, and Android, and a description – both of which are inspired by the class and power the user selects at the beginning of the game. Cards are also randomly assigned an elemental power (air, water, fire, metal, earth) and a number between 10 and 100 indicating the card’s strength. Elemental powers can impact each other in match play, as indicated in the image below.

Five phones side by side showing the I/O FLIP game, including screens including illustrations of the elemental powers and their effects on each other

Elemental powers aren’t just for show. Cards receive a 10 point penalty if they are on the wrong end of an element matchup, as explained in the images above.
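To make the rule concrete, here is an illustrative sketch of the matchup penalty, written in Kotlin for brevity; the real game logic is Dart, shared between frontend and backend via Dart Frog, and the specific element pairings below are assumptions for illustration:

enum class Element { AIR, WATER, FIRE, METAL, EARTH }

// Each element beats exactly one other; these pairings are illustrative.
val beats = mapOf(
    Element.WATER to Element.FIRE,
    Element.FIRE to Element.METAL,
    Element.METAL to Element.AIR,
    Element.AIR to Element.EARTH,
    Element.EARTH to Element.WATER,
)

// A card loses 10 points when its element is beaten by the opponent's.
fun effectiveStrength(strength: Int, element: Element, opposing: Element): Int =
    if (beats[opposing] == element) strength - 10 else strength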

Speaking of matchups, each match is a best-of-3. The winner continues playing with their chosen hand to start (or continue) their streak, while the loser can share their hand or pick a new hand to try again.

New Flutter and Dart features helped us quickly bring this to life: for instance, records, a Dart feature announced at Flutter Forward, helped us to render a frame based on the card element, and Flutter’s official support for fragment shaders on web helped us to create a special hologram effect on some cards, which are the only cards in the game that have 100 points.

A screen recording from I/O FLIP, showing a character card with the shader effect applied

Dreambooth on Muse and PaLM API: AI-generated images and descriptions


Four cards side by side from the I/O FLIP game, including screens

Each card in I/O FLIP is unique because it contains an AI-generated image and description.

Images were pre-generated using two technologies pioneered out of Google Research: Muse, a text-to-image AI model from the Imagen family of models, and DreamBooth, a technique running on top of Muse that allows you to personalize text-to-image models to generate novel images of a specific subject using a small set of your own images for training.

Card descriptions were prototyped in MakerSuite and pre-generated using the PaLM API which accesses Google’s large language models. Based on the power a player selects at the beginning of the game, you may get a card description that provides context to the image, including the character’s special powers such as: “Dash the Wizard lives in a castle with his pet dragon. He loves to cast spells and make people laugh.” Join the PaLM API and MakerSuite waitlist here.

Flutter is used to compose the cards from a name, description, image and power using the GameCard widget. Once the card is created, a border indicating its element is applied. If you’re lucky enough to land a hologram card, a special foil shader effect will be applied to the design.


Firebase: game hosting, sharing, and real-time game play

Cloud Storage for Firebase stores all of the images, descriptions, elements, and numbers that generate players’ card decks. Firestore keeps track of the leaderboard for “Highest Win Streak” with new leaders added using the firedart package.

In all cases where the Flutter app directly accesses Firestore, we've used App Check to ensure that only the code that we wrote ourselves is allowed, and we used Firebase security rules to ensure the code can only access data and make changes that it is authorized to.


Dart Frog: sharing code between the backend and frontend

I/O FLIP also needed ways to prevent cheating. This is where Dart Frog came in handy. It allowed us to keep the game logic, such as determining the winner of each round, on the backend, and to share that code between the Flutter frontend and the Firestore backend. This not only helped prevent cheating, but also let the team move a little faster, since we were writing our backend and frontend code in the same language.

I/O FLIP is most fun when many players are online and playing. By deploying the I/O FLIP Dart Frog server to Cloud Run, the game can take advantage of features like autoscaling, which allows it to handle many players at once.

Finally, Dart Frog also enables downloading or sharing cards on social media. At the end of a round, a player can choose to download or share to Twitter or Facebook. When a user clicks the share button, Dart Frog generates a pre-populated post that contains text to share and a link to a webpage with the corresponding hand or card and a button for visitors to play the game too!


Try it yourself

We hope you’ve had a chance to try I/O FLIP and that it inspires you to think about ways generative AI can be used in your products, safely and responsibly. We’ve open sourced the code for I/O FLIP so you can take a deeper look at how we built it too. If you’d like to try your hand at some of the generative AI technologies used in I/O FLIP, tune in to Google I/O to learn more.

Introducing the new Google Pay button view on Android

Posted by Jose Ugia – Developer Relations Engineer

The Google Pay button is your customer’s entry point to a swift and familiar checkout with Google Pay. Here are some updates that make the button easier to use and customize to your checkout design, while creating a more consistent and informative experience that helps your customers fly through your checkout flows.


A new look and enhanced customization capabilities

Previously, you told us about the importance of a consistent checkout experience for your business. We gave the Google Pay button a fresh new look, applying the latest Material 3 design principles. The new Google Pay button comes in two versions that make it look great on dark and light themed applications. We have also added a subtle stroke that makes the button stand out in low contrast interfaces. In addition, we are introducing new customization capabilities that make it easy to adjust the button shape and corner roundness and help you create more consistent checkout experiences.

image of the updated Pay with Google Pay button annotating the subtle stroke and adjustable rounded corners
Figure 1: The new Google Pay button view for Android can be customized to make it more consistent with your checkout experience.

A new button view for Android

We also improved the Google Pay API for Android, introducing a new button view that simplifies the integration. Now, you can configure the Google Pay button in a more familiar way and add it directly to your XML layout. You can also configure the button programmatically if you prefer. This view lets you configure properties like the button theme and corner radius, and includes graphics and translations so you don't have to download and configure them manually. Simply add the view to your interface, adjust it and that's it.
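For example, a programmatic setup might look like the following sketch against the beta API, where allowedPaymentMethodsJson is the same payment method JSON you already use with the Google Pay API:

import com.google.android.gms.wallet.button.ButtonConstants
import com.google.android.gms.wallet.button.ButtonOptions
import com.google.android.gms.wallet.button.PayButton

val payButton = findViewById<PayButton>(R.id.googlePayButton)
payButton.initialize(
    ButtonOptions.newBuilder()
        .setButtonTheme(ButtonConstants.ButtonTheme.DARK)  // or LIGHT
        .setButtonType(ButtonConstants.ButtonType.PAY)     // e.g. BUY, CHECKOUT
        .setCornerRadius(16)                               // corner roundness, in px
        .setAllowedPaymentMethods(allowedPaymentMethodsJson)
        .build()
)
payButton.setOnClickListener { /* launch the payment flow */ }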

The new button API is available today in beta. Check out the updated tutorial for Android to start using the new button view.

image of an example implementation showing how you can add and configure the new button view directly to your XML layout on Android
Figure 2: An example implementation of how you can add and configure the new button view directly to your XML layout on Android.

Giving additional context to customers with cards’ last 4 digits

Earlier, we introduced the dynamic Google Pay button for Web, which previewed the card network and last four digits for the customer’s last used form of payment directly on the button. Since then, we have seen great results, with this additional context driving conversion improvements and increases of up to 40% in the number of transactions.

Later this quarter, you’ll be able to configure the new button view for Android to show your users additional information about the last card they used to complete a payment with Google Pay. We are also working to offer the Google Pay button as a UI element in Jetpack Compose, to help you build your UIs more rapidly using intuitive Kotlin APIs.

image showing an example of how the dynamic version of the new Google Pay button view will look on Android.
Figure 3: An example of how the dynamic version of the new Google Pay button view will look on Android.

Next steps

Check out the documentation to start using the new Google Pay button view on Android. Also, check out our own button integration in the sample open source application in GitHub.

Test your payments integrations end-to-end with the new test suite for Google Pay

Posted by Jose Ugia – Developer Relations Engineer

Testing is an integral part of software engineering, especially in the context of payments, where small glitches can have significant implications for your business. 

We previously introduced a set of test cards for you to use with the Google Pay API in TEST mode. These cards allow you to build simple test cases to verify that your Google Pay integration operates as expected. While this was a great start, a few predefined cards only let you run a limited number of happy-path test scenarios, confined to the domain of your application.

Improved testing capabilities

Today, we are introducing PSP test cards, an upgrade to Google Pay’s test suite that lets you use test cards from your favorite payment processors to build end-to-end test scenarios, enabling additional testing strategies, both manual and automated.

Image showing test cards in Google Pay TEST mode
Figure 1: Test cards from your payment processor appear in Google Pay’s payment sheet when using TEST mode.

When you select a card, the result is returned to your application via the API, so you can use it to validate comprehensive payment flows end-to-end, including relaying payment information to your backend to complete the order with your processor. These test cards allow you to verify the behavior of your application against diverse payment outcomes, including successful transactions and transactions that fail due to fraud, declines, insufficient funds, and more.

Test automation

This upgrade also supports test automation, so you can write end-to-end UI tests using familiar tools like UIAutomator and Espresso on Android, and include them in your CI/CD flows to further strengthen your checkout experiences.
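Here is a hedged sketch of what such a test could look like with UIAutomator; the card label, resource id, and error copy are hypothetical placeholders for your own app's values:

import androidx.test.platform.app.InstrumentationRegistry
import androidx.test.uiautomator.By
import androidx.test.uiautomator.UiDevice
import androidx.test.uiautomator.Until
import org.junit.Assert.assertNotNull
import org.junit.Test

class CheckoutFlowTest {
    @Test
    fun decliningTestCard_surfacesErrorState() {
        val device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())

        // Tap our app's Google Pay button (the id comes from our own layout).
        device.wait(Until.findObject(By.res("com.example.shop:id/googlePayButton")), 5_000)
            ?.click()

        // In the TEST-mode payment sheet, choose a PSP card set up to decline.
        device.wait(Until.findObject(By.textContains("Test card, declined")), 5_000)
            ?.click()

        // Verify that our checkout surfaces the declined-payment state.
        assertNotNull(device.wait(Until.findObject(By.textContains("declined")), 5_000))
    }
}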

The new generation of Google Pay’s test suite is currently in beta, with web support coming later this year. You’ll be able to use test cards on Android for five of the most widely used PSPs – Stripe, Adyen, Braintree, WorldPay and Checkout.com – and we’ll continue to add test cards from your favorite PSPs.

Next steps

Improved testing capabilities have been one of the most frequent requests from the developer community. The Google Pay team is committed to providing you with the tools you need to harden your payment flows and improve your checkout performance.

Moving image showing automated test being run in the upgraded test suite
Figure 2: With the upgraded test suite you can run end-to-end automated tests for successful and failed payment flows.

Take a look at the documentation to start enhancing your payments tests. Also, check out the sample test suite in the Google Pay demo open source application.