Author Archives: Google Developers

Join us for Google for Games Developer Summit 2021

Posted by Greg Hartrell, Head of Product Management, Games on Android & Google Play

With a surge of new gamers and an increase in time spent playing games in the last year, it’s more important than ever for game developers to delight and engage players. To help developers make the most of this opportunity, the games teams at Google are excited to announce the return of the Google for Games Developer Summit 2021 on July 12th-13th.

Hear from experts across Google about new game solutions they’re building to make it easier for you to continue creating great games, connecting with players, and scaling your business. Registration is free and open to all game developers.

Register for the free online event at g.co/gamedevsummit to get more details in the coming weeks. We can’t wait to share our latest innovations with the developer community.

What’s new for Android developers at Google I/O

Cross-posted on the Android Developers blog by Karen Ng, Director, Product Management & Jacob Lehrbaum, Director of Developer Relations, Android & Play

As Android developers, we are all driven by building experiences that delight people around the world. And with people depending on your apps more than ever, expectations are higher and your jobs as developers aren’t getting easier. Today at Google I/O, we covered a few ways that we’re trying to help out: Android 12, one of the biggest design changes ever, plus Jetpack, Jetpack Compose, Android Studio, and Kotlin to help you build beautiful, high quality apps. We’re also helping you extend your apps wherever your users go, like wearables and larger-screened devices. You can watch the full Developer Keynote, but here are a few highlights:

Android 12: one of the biggest design updates ever.

The first Beta of Android 12 just started rolling out, and it’s packed with lots of cool stuff. From new user safety features like Bluetooth permissions and approximate location, to performance enhancements like expedited jobs and startup animations, to delightful experiences with more interactive widgets and stretch overscrolling, this release is one of the biggest design updates to Android ever. You can read more about what’s in Android 12 Beta 1 here, so you can start preparing your apps for the consumer release coming out later this year. Download the Beta and try it with your apps today!
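
If your app uses location, one change is worth noting right away: on Android 12, requesting the coarse permission alongside the fine one lets people choose to share only an approximate location. Here's a minimal sketch (the activity and request code names are illustrative):

import android.Manifest
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat

class MyActivity : AppCompatActivity() {

    // Requesting both permissions together lets the user pick
    // "approximate" instead of "precise" in the Android 12 dialog.
    private fun requestLocation() {
        ActivityCompat.requestPermissions(
            this,
            arrayOf(
                Manifest.permission.ACCESS_FINE_LOCATION,   // precise
                Manifest.permission.ACCESS_COARSE_LOCATION  // approximate
            ),
            REQUEST_CODE_LOCATION
        )
    }

    companion object {
        private const val REQUEST_CODE_LOCATION = 1
    }
}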

Jetpack Compose: get ready for 1.0 in July!

For the last few years, we’ve been hard at work modernizing the Android development experience, listening to your feedback to preserve the openness that is a hallmark of Android, while becoming more opinionated about the right way to do things. You can see this throughout, from Android Studio, a performant IDE that can keep up with you, to Kotlin, a programming language that enables you to do more with less code, to Jetpack libraries that solve the hardest problems on mobile with backward compatibility.

The next step in this offering is Jetpack Compose - our modern UI toolkit to easily build beautiful apps for all Android devices. We announced Compose here at Google I/O two years ago and since then have been building it in the open, listening to your feedback to make sure we got it right. With the Compose Beta earlier this year, developers around the world have created some truly beautiful, innovative experiences in half the time, and the response to the #AndroidDevChallenge blew our socks off!

With the forthcoming update of Material You (which you can read more about here), we’ll be adding new Material components as well as further support for building for large screens, making it fast and easy to build a gorgeous UI. We’re pressure testing the final bits in Compose and will release 1.0 Stable in July—so get ready!
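
If you haven't tried Compose yet, here's a quick, illustrative sketch (not from the announcement; the names are made up) of the declarative model: you describe the UI as a function of state, and Compose re-executes the affected composables when that state changes.

import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

@Composable
fun ClickCounter() {
    // UI is declared as a function of state; when `count` changes,
    // Compose automatically re-executes just the affected composables.
    var count by remember { mutableStateOf(0) }
    Button(onClick = { count++ }) {
        Text("Clicked $count times")
    }
}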

Android Studio Arctic Fox: Design, Devices, & Developer Productivity!

Android Studio Arctic Fox (2020.3.1) Beta, the latest release of the official Android IDE, is out today to help you build quality apps more easily and quickly. The updated suite of tools centers on three major themes: accelerating your UI design, extending your app to new devices, and boosting your developer productivity. With this latest release you can create modern UIs with Compose tooling, see test results across multiple devices, and debug databases and background tasks with the App Inspector. We’re also making your apps more accessible with the Accessibility Scanner and more performant with the Memory Profiler. And for faster build speeds, we have the Android Gradle plugin 7.0 with new DSL and variant APIs. You can learn more about the Android Studio updates here.

Kotlin: the most used language by professional Android devs

Kotlin is now the most-used primary language among professional Android developers according to our recent surveys; in fact, over 1.2M apps in the Play Store use Kotlin, including 80% of the top 1,000 apps. And here at Google, we love it too: 70+ Google apps like Drive, Home, Maps and Play use Kotlin. And with Kotlin Symbol Processing (KSP), available today, we're offering a brand-new annotation-processing solution for Kotlin built from the ground up: a powerful yet simple API for parsing Kotlin code directly, with speeds up to 2x faster for libraries like Room.
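
To give a feel for the API, here's a rough sketch of a KSP processor; the @com.example.Entity annotation is hypothetical, and details may differ across KSP's pre-stable releases:

import com.google.devtools.ksp.processing.Resolver
import com.google.devtools.ksp.processing.SymbolProcessor
import com.google.devtools.ksp.processing.SymbolProcessorEnvironment
import com.google.devtools.ksp.processing.SymbolProcessorProvider
import com.google.devtools.ksp.symbol.KSAnnotated

// Processes declarations annotated with a hypothetical @com.example.Entity.
class EntityProcessor(private val env: SymbolProcessorEnvironment) : SymbolProcessor {
    override fun process(resolver: Resolver): List<KSAnnotated> {
        // KSP parses Kotlin code directly; no Java stub generation needed.
        resolver.getSymbolsWithAnnotation("com.example.Entity")
            .forEach { symbol -> env.logger.info("Found $symbol") }
        return emptyList() // no symbols deferred to a later round
    }
}

// Registered via META-INF/services so KSP can discover the processor.
class EntityProcessorProvider : SymbolProcessorProvider {
    override fun create(environment: SymbolProcessorEnvironment): SymbolProcessor =
        EntityProcessor(environment)
}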

Android Jetpack: write features, not boilerplate

With Android Jetpack, we built a suite of libraries to help reduce boilerplate code so you can focus on the code you care about. Over 84% of the top 10,000 apps now use a Jetpack library. And today, we’re unpacking some new releases for Jetpack, including Jetpack Macrobenchmark (Alpha) to measure large app interactions, like startup and jank, before your app is released, as well as a new Kotlin coroutines API for persisting data more efficiently via Jetpack DataStore (Beta). You can read about all the updates in Android Jetpack here.
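
As an illustration of the new coroutines API, here's a minimal Preferences DataStore sketch (the store name and key are made up): reads are exposed as a Flow, and writes are suspending and transactional.

import android.content.Context
import androidx.datastore.preferences.core.edit
import androidx.datastore.preferences.core.intPreferencesKey
import androidx.datastore.preferences.preferencesDataStore
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map

// One DataStore instance per file, scoped to the app Context.
val Context.dataStore by preferencesDataStore(name = "settings")

private val LAUNCH_COUNT = intPreferencesKey("launch_count")

// Reads are a coroutines Flow, so the UI can observe changes.
fun launchCountFlow(context: Context): Flow<Int> =
    context.dataStore.data.map { prefs -> prefs[LAUNCH_COUNT] ?: 0 }

// Writes are suspending and applied transactionally.
suspend fun incrementLaunchCount(context: Context) {
    context.dataStore.edit { prefs ->
        prefs[LAUNCH_COUNT] = (prefs[LAUNCH_COUNT] ?: 0) + 1
    }
}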

Now is the time: a big step for Wear

The best thing about modern Android development is that these tools have been purpose built to help make it easy for you to build for the next era of Android, which is all about enabling devices connected to your phone–TVs, cars, watches, tablets–to work better together.

Starting today, we take a huge step forward with wearables. First, we introduced a unified platform built jointly with Samsung, combining the best of Wear and Tizen. Second, we shared a new consumer experience with revamped Google apps. And third, a world-class health and fitness service from Fitbit is coming to the platform. For you as an Android developer, this means more reach, and the ability to use all of the existing skills, tools, and APIs that make your mobile apps great to build for a single wearables platform used by people all over the world.

From new Jetpack APIs for Wear tailored for small screens and designed to optimize battery life, to the Jetpack Tiles API, which lets you create a custom Tile for all the devices in the Wear ecosystem, there are a number of new features to help you build on Wear. And with a new set of APIs for health and fitness, created in collaboration with Samsung, collecting data from sensors and computing metrics like heart rate, calories, and daily distance is streamlined, consistent, and accurate, all from one trusted source. All this comes together in new tooling with the release of Android Studio Arctic Fox Beta, like easier pairing to test apps, and even a virtual heart rate sensor in the emulator. And when your app is ready, users will have a much easier time discovering the world of Wear apps on Google Play, with some big updates to discoverability. You can read more about all of the Wear updates here.

Tapping the momentum of larger screens, like tablets, Chrome OS and foldables

When it comes to larger screens (tablets, foldables, and Chrome OS laptops), there is huge momentum. People are increasingly relying on large screen devices to stay connected with family and friends, go to school, or work remotely. In fact, there are over 250 million active large screen Android devices. Last year, Chrome OS grew 92% year over year, five times the rate of the PC market, making Chrome OS the fastest-growing and second-most popular desktop OS. To help you take advantage of this momentum, we’re giving you APIs and tools that make optimizing the experience easier: SlidingPaneLayout 1.2.0 to have your content resize automatically to the available space, a new vertical navigation rail component, max widths on components to avoid stretched UIs, and updates to the platform, Chrome OS, and Jetpack WindowManager so apps work better by default. You can learn more here.

Google Duo's optimized experience for foldable devices

This is just a taste of some of the new ways we’re making it easier for you to build high quality Android apps. Later today, we’ll be releasing more than 20 technical sessions on Android and Play, covering a wide range of topics such as background tasks, privacy, machine learning on Android, and the top 12 tips to get you ready for Android 12. If building for cars, TVs, and wearables is your thing, we’ve got that covered, too. You can find all these sessions - and more - on the I/O website. Beyond the sessions and news, there are a number of fun ways to virtually connect with Googlers and other developers at this year’s Google I/O. You can check out the Android dome in I/O Adventure, where you can see new blog posts, videos, codelabs, and more. Maybe even test out your Jetpack Compose skills or take a virtual tour of the cars inside our dome!

Google Pay introduces a Flutter plugin for payments

Posted by Jose Ugia, Developer Programs Engineer, Google Pay and Anthony Panissidi, Technical Writer, Google Developer Studio

We made it easier than ever to integrate Google Pay in Flutter apps!

Our open source Flutter plugin simplifies the addition of payments to Flutter apps on iOS and Android.

The plugin gives you the ability to add functionality to your apps across platforms with a single and familiar codebase written in Dart.

It adapts the common steps required to facilitate payments to the way Flutter constructs components, works with the app's user interface, and exchanges information between the native and Dart ends.

Now, as a Flutter developer, you can easily reap the benefits of Google Pay, which lets you provide users with a secure and fast checkout experience that increases conversions, and frees you from the need to manage credit cards and payments.

How it works

To use the plugin, add pay as a dependency in your pubspec.yaml file. For more information, see Adding a package dependency to an app.

To configure a payment, load a payment profile with the desired configuration, either with a local file or one retrieved from a remote server. For a complete list of all configuration options, see the PaymentDataRequest object.

Here's an example of a JSON file that defines payment options:

sample_payment_configuration.json

{
  "provider": "google_pay",
  "data": {
    "environment": "TEST",
    "apiVersion": 2,
    "apiVersionMinor": 0,
    "allowedPaymentMethods": [{
      "type": "CARD",
      "tokenizationSpecification": {
        "type": "PAYMENT_GATEWAY",
        "parameters": {
          "gateway": "example",
          "gatewayMerchantId": "gatewayMerchantId"
        }
      },
      "parameters": {
        "allowedCardNetworks": ["VISA", "MASTERCARD"],
        "allowedAuthMethods": ["PAN_ONLY", "CRYPTOGRAM_3DS"],
        "billingAddressRequired": true,
        "billingAddressParameters": {
          "format": "FULL",
          "phoneNumberRequired": true
        }
      }
    }],
    "merchantInfo": {
      "merchantId": "01234567890123456789",
      "merchantName": "Example Merchant Name"
    },
    "transactionInfo": {
      "countryCode": "US",
      "currencyCode": "USD"
    }
  }
}

For more examples of JSON files that define payment options, take a look at the example/assets/ folder.

Now you can use this configuration to add the Google Pay button to your app and forward the payment method selected by your users.

Here's an example of a Dart file:

import 'package:pay/pay.dart';

const _paymentItems = [
  PaymentItem(
    label: 'Total',
    amount: '99.99',
    status: PaymentItemStatus.final_price,
  )
];

// In your Widget build() method:
GooglePayButton(
  paymentConfigurationAsset: 'sample_payment_configuration.json',
  paymentItems: _paymentItems,
  style: GooglePayButtonStyle.black,
  type: GooglePayButtonType.pay,
  onPaymentResult: onGooglePayResult,
),

// In your StatelessWidget class or State:
void onGooglePayResult(paymentResult) {
  // Send the resulting Google Pay token to your server or PSP.
}

How to use it

The best part of this news is that you can use the plugin today. To get started with it, check out the pay package on pub.dev. We also want to hear your thoughts and feature requests, and look forward to your contributions on GitHub.

Learn more

Want to learn more about Google Pay? Here's what you can do:

A new open source content library from Google

Posted by Sebastian Trzcinski-Clément, Program Manager, Developer Relations

Developers around the world are constantly creating open source tools and tutorials, but they have a hard time getting them discovered, and the content is often spread across many different sites, from GitHub to Medium. So we decided to create a space where we can highlight the best projects related to Google technologies in one place: introducing the Developer Library.

The platform showcases blog posts and open source tools with easy-to-use navigation. Content is categorized by product area: Machine Learning, Flutter, Firebase, Angular, Cloud, and Android, with more to come.

What makes the Developer Library unique is that each piece featured on the site is reviewed in detail by a team of Google experts for accuracy and relevance, so you know that the content you view on the site has the stamp of approval from Google.

To demonstrate the breadth of content on the site, here are some examples of published content pieces and video interviews with the developers who authored these posts:

There are two ways you can help us grow the Developer Library.

Firstly, if you have great content that you would like to see published on the Developer Library, please submit it for review here.

Secondly, the team welcomes feedback, so if you have anything you’d like to see added or changed on the Developer Library site, do complete this short feedback form or just file an issue on GitHub.

We can't wait to see what you build together!

Mercari improves UI development productivity by 56% with Jetpack Compose

Posted by Chiko Shimizu, Partner Developer Advocate and Tamao Imura, Developer Marketing Manager

Mercari allows millions of people to shop and sell almost anything. The company was founded in 2013 in Japan, and it is now the largest smartphone-focused C2C marketplace in Japan. Mercari’s Client Architect Team started using Jetpack Compose in 2020 with the goal of using modern solutions and technologies that can scale for the long term to build their tech stack for new applications.

What they did

The Mercari team needed to implement a design system with complex state management and styling on Android Views, a very complex task. With Jetpack Compose, not only were they able to implement this complex system, they also spent less time developing each screen.

Jetpack Compose also helped the team write UI code for their new app utilizing the design system, making their UI code concise and easy to understand. As a result, the team can spend more time writing screens and business logic, such as practical support for the dark theme.

In addition, the Mercari team wrote a proof-of-concept tool for integrating Figma with the design system, which automatically generates UI code from the component designs. The team said that developing this tool was easier with Compose due to its declarative nature.

“Once Android developers get used to writing Jetpack Compose code, they wouldn’t wish to go back.” - Anthony Allan Conda, Android Tech Lead at Mercari

Results

Between Jetpack Compose and their new design system, Mercari was able to use far less code to write screens. On screens with infinitely-scrollable content — a common use case — they actually reduced their code by about 56%. As a result, they were able to write more screens in the same amount of time, giving them more time to write business logic and other parts of the code.

Also, they were able to do more with the UI itself, such as incorporating animations with intuitive APIs like AnimatedVisibility, Crossfade, and Animatable.
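
To see how little code such animations take, here's an illustrative sketch (not Mercari's code) using AnimatedVisibility:

import androidx.compose.animation.AnimatedVisibility
import androidx.compose.foundation.layout.Column
import androidx.compose.material.Switch
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

@Composable
fun SellerDetails() {
    var expanded by remember { mutableStateOf(false) }
    Column {
        Switch(checked = expanded, onCheckedChange = { expanded = it })
        // Enter/exit transitions are animated automatically.
        AnimatedVisibility(visible = expanded) {
            Text("Seller details go here")
        }
    }
}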

Mercari is planning to continue using Jetpack Compose in their new application until its release. Their design system, with the Android SDK written in Jetpack Compose, is also designed to work with multiple applications within Mercari.

Get started

Learn more about Jetpack Compose.

New for I/O: Assistant tools and features for Android apps and Smart Displays

Posted by Rebecca Nathenson, Director of Product for the Google Assistant Developer Platform

Today at I/O, we shared some exciting new product announcements to help you more easily bring Google Assistant to your Android apps and create more engaging content on smart displays.

Assistant development made easy with new Android APIs

App Actions helps you easily bring Google Assistant to your Android app and complete user queries of all kinds, from booking a ride to posting a message on social media. Companies such as MyFitnessPal and Twitter are already using App Actions to help their users get things done, just by using their voice. You can enable App Actions in Android Studio by mapping built-in intents to specific features and experiences within your apps. Here are new ways you can help users easily navigate your content through voice queries and proactive suggestions.

Better support for Assistant built-in intents with Capabilities

Capabilities is a new framework API available in beta today that lets you declare support for common tasks defined by built-in intents. By leveraging pre-built requests from our catalog of intents, you can offer users ways to jump to specific activities within your app.

For example, the Yahoo Finance app uses Capabilities to let users jump directly to the Verizon stock page just by saying “Hey Google, show me Verizon’s stock on Yahoo Finance.” Similarly, Snapchat users can use their voice to add filters and send them to friends: “Hey Google, send a snap with my Curry sneakers.”

Improved user discoverability with Shortcuts in Android 12

App shortcuts are already a popular way to automate common tasks on Android. Thanks to the new APIs for Shortcuts in Android 12, it’s now easier to find all the Assistant queries that apps support. If you build an Android shortcut, it will automatically show up in the Assistant Shortcuts gallery, so users can choose to set up a personal voice command for your app when they say “Hey Google, shortcuts.”
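
For a sense of what this looks like in code, here's a hedged sketch of pushing a dynamic shortcut with ShortcutManagerCompat; the id, label, URI, and capability name are illustrative examples:

import android.content.Context
import android.content.Intent
import android.net.Uri
import androidx.core.content.pm.ShortcutInfoCompat
import androidx.core.content.pm.ShortcutManagerCompat

// Publishes a dynamic shortcut that Assistant can surface and suggest.
fun pushReorderShortcut(context: Context) {
    val shortcut = ShortcutInfoCompat.Builder(context, "reorder_coffee")
        .setShortLabel("Reorder coffee")
        .setIntent(Intent(Intent.ACTION_VIEW, Uri.parse("https://example.com/reorder")))
        // Associates the shortcut with an Assistant built-in intent.
        .addCapabilityBinding("actions.intent.ORDER_MENU_ITEM")
        .build()
    ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)
}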

Google Assistant can also suggest relevant shortcuts to help drive traffic to your app. For example, when using the eBay app, people will see a suggested Google Assistant Shortcut appear on the screen and have the option to create a shortcut for "show my bids."

We also introduced the Google Shortcuts Integration library, which identifies shortcuts pushed with the Shortcuts Jetpack module and makes them available to Assistant for handling related voice queries. By doing so, Google Assistant can suggest relevant shortcuts to users and help drive traffic to your app.

Get immediate answers and updates right from Assistant using Widgets, coming soon

Improvements in Android 12 also make it easier to discover glanceable content with widgets by mapping them to specific built-in intents using the Capabilities API. We're also looking at how to easily bring driving-optimized widgets to Android Auto in the future. The integration with Assistant will enable one-shot answers, quick updates, and multi-step interactions with the same widget.

For example, with Dunkin’s widget implementation, you can say “Hey Google, reorder from Dunkin’” to select from previous drinks and place the order. Strava’s widget helps users track how many miles they ran in a week by saying “Hey Google, check my miles on Strava”, and the result shows up right on the lock screen.

Build high quality Conversational Actions for smart displays

Last year, we introduced a number of improvements to the Assistant platform for smart displays, such as Actions Builder, Actions SDK and new built-in intents to improve the experience for both developers and users. Here are more improvements rolling out soon to make building conversational actions on smart displays even better.

New features to improve the developer experience

Interactive Canvas helps you build touch- and voice-controlled games and storytelling experiences for the Assistant using web technologies like HTML, CSS, and JavaScript. Companies such as CoolGames, Zynga, and GC Turbo have already used Canvas to build games for smart displays.

Since launch, we've gotten great feedback from developers that it would be simpler and faster to implement core logic in web code. To enable this, the Interactive Canvas API will soon provide access to text-to-speech (TTS), natural language understanding (NLU), and storage APIs that will allow developers to trigger these capabilities from client-side code. These APIs will provide experienced web developers with a familiar development flow and enable more responsive Canvas actions.

We’re also giving you a wider set of options around how to release your actions. Coming soon, in the Actions Console, you will be able to manage your releases by launching in stages. For example, you can launch to one country first and then expand to more later, or you can launch to just a smaller percentage and gradually roll out over time.

Improving the user experience on smart displays

You'll also see improvements that enhance visual experiences on the smart display. For example, you can now remove the persistent header, which allows you to utilize the full real estate of the device and provide users with fully immersive experiences.

Before Interactive Canvas brought customized touch interfaces to the Smart Display, we provided a simple way to stop TTS from playing by tapping anywhere on the screen of the device. However, with more multi-modal experiences being released on Smart Displays, there are use cases where it is important to continue playing TTS while the user touches the display. Developers will soon have the option to enable persistent TTS for their actions.

We’ve also added support for long-form media sessions with updates to the Media API so you can start playback from a specific moment, resume where a previous session stopped, and adapt conversational responses based on media playback context.

Easier transactions for your voice experiences

We know how important it is to have the tools you need to build a successful business on our platform. In October of last year, we made a commitment to make it easier for you to add seamless voice-based and display-based monetization capabilities to your experience. On-device CVC and credit card entry will soon be available on smart displays. Both of these features make on-device transactions much easier, reducing the need to redirect users to their mobile devices.

We hope you are able to leverage all these new features to build engaging experiences and reach your users easily, both on mobile and at home. Check out our technical sessions, workshops and more from Google I/O on YouTube and get started with App Actions and Conversational Actions today!

Unlock new use cases and increase developer velocity with the latest ARCore updates

Posted by Ian Zhang, Product Manager, AR & Zeina Oweis, Product Manager, AR

ARCore was created to provide developers with simple yet powerful tools to seamlessly blend the digital and physical worlds. Over the last few years, we’ve seen developers create apps that entertain, engage, and help people in different ways–from letting fans interact with their favorite characters, to placing virtual electronics and furniture for the perfect home setup and beyond.

At I/O this year, we continue on the mission of improving and building AR developer tools. With the launch of ARCore 1.24, we’re introducing the Raw Depth API and the Recording and Playback API. These new APIs will enable developers to create new types of AR experiences and speed up their development cycles.

Increase AR realism and precision with depth

When we launched the Depth API last year, hundreds of millions of Android devices gained the ability to generate depth maps in real time without needing specialized depth sensors. Data in these depth maps was smoothed, filling in any gaps that would otherwise occur due to missing visual information, making it easy for developers to create depth effects like occlusion.

The new ARCore Raw Depth API provides more detailed representations of the geometry of objects in the scene by generating “raw” depth maps with corresponding confidence images. These raw depth maps include unsmoothed data points, and the confidence images provide the confidence of the depth estimate for each pixel in the raw depth map.
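
In code, the new API looks roughly like this sketch, assuming an already-configured ARCore session on a supported device; note that both acquired images must be closed:

import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session

// Enable raw depth if the device supports it.
fun enableRawDepth(session: Session) {
    val config = session.config
    if (session.isDepthModeSupported(Config.DepthMode.RAW_DEPTH_ONLY)) {
        config.setDepthMode(Config.DepthMode.RAW_DEPTH_ONLY)
        session.configure(config)
    }
}

// Acquire the raw depth map and its confidence image for one frame.
// Both calls can throw NotYetAvailableException during the first frames.
fun readRawDepth(frame: Frame) {
    frame.acquireRawDepthImage().use { rawDepth ->                 // 16-bit depth, in millimeters
        frame.acquireRawDepthConfidenceImage().use { confidence -> // 8-bit confidence, 0..255
            // Keep only pixels whose confidence clears your threshold,
            // e.g. when reconstructing geometry or placing measurements.
        }
    }
}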

Improved geometry from the Raw Depth API enables more accurate depth measurements and spatial awareness. In the ARConnect app, these more accurate measurements give users a deeper understanding of their physical surroundings. The AR Doodads app utilizes raw depth’s spatial awareness to allow users to build realistic virtual Rube Goldberg machines.

ARConnect by PHORIA (left) and AR Doodads by Jam3 (right) use the improved geometry from the Raw Depth API

The confidence image in the Raw Depth API allows developers to filter depth data in real time. For example, TikTok’s newest effect enables users to upload an image and wrap it onto real world objects. The image conforms to surfaces where there is high confidence in the underlying depth estimate. The ability for developers to filter for high confidence depth data is also essential for 3D object and scene reconstruction. This can be seen in the 3D Live Scanner app, which enables users to scan their space and create, edit, and share 3D models.

TikTok by TikTok Pte. Ltd. (left) and 3D Live Scanner by Lubos Vonasek Programmierung (right) use confidence images from the ARCore Raw Depth API

We’re also introducing a new type of hit-test that uses the geometry from the depth map to provide more hit-test results, even in low-texture and non-planar areas. Previously, hit-test worked best on surfaces with lots of visual features.

Hit results with planes (left): works best on horizontal, planar surfaces with good texture. Hit results with depth (right): gives more results, even on non-planar or low-texture areas.

The lifeAR app uses this improved hit-test to bring AR to video calls. Users see accurate virtual annotations on the real-world objects as they tap into the expertise of their social circle for instant help to tackle everyday problems.

lifeAR by TeamViewer uses the improved depth hit-test

As with the previous Depth API, these updates leverage depth from motion, making them available on hundreds of millions of Android devices without relying on specialized sensors. Although depth sensors such as time-of-flight (ToF) sensors are not required, having them will further improve the quality of your experiences.

In addition to these apps, the ARCore Depth Lab has been updated with examples of both the Raw Depth API and the depth hit-test. You can find those and more on the Depth API documentation page and start building with Android and Unity today.

Increase developer velocity and post-capture AR

A recurring pain point for AR developers is the need to continually test in specific places and scenarios. Developers may not always have access to the location, lighting will change, and sensors won’t catch the exact same information during every live camera session.

The new ARCore Recording and Playback API addresses this by enabling developers to record not just video footage, but also IMU and depth sensor data. On playback, this same data can be accessed, enabling developers to duplicate the exact same scenario and test the experience from the comfort of their workspace.
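
Here's a rough sketch of recording a session to an MP4 dataset and playing it back later (the file path handling is up to your app):

import com.google.ar.core.RecordingConfig
import com.google.ar.core.Session

// Record the live session (camera images, IMU, and depth) to an MP4 dataset.
fun startRecording(session: Session, mp4Path: String) {
    val recordingConfig = RecordingConfig(session)
        .setMp4DatasetFilePath(mp4Path)
        .setAutoStopOnPause(true)
    session.startRecording(recordingConfig)
}

// Replay the dataset later: ARCore feeds recorded frames and sensor data
// to your app as if they were coming from the live camera.
fun startPlayback(session: Session, mp4Path: String) {
    session.pause() // the dataset can only be set while the session is paused
    session.setPlaybackDataset(mp4Path)
    session.resume()
}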

DiDi used the Recording and Playback API to build and test AR directions in their DiDi-Rider app. They were able to save 25% on R&D and testing costs, 60% on travel costs, and accelerated their development cycle by 6 months.

DiDi-Rider by Didi Chuxing saves on development resources with the Recording and Playback API

In addition to increasing developer velocity, recording and playback unlocks opportunities for new AR experiences, such as post-capture AR. Using videos enables asynchronous AR experiences that remove time and place constraints. For instance, when visualizing AR furniture, users no longer have to be in their home. They can instead pull up a video of their home and accurately place AR assets, enabling them to “take AR anywhere”.

Jump AR by SK Telecom uses the Recording and Playback API to transport scenes from South Korea right into users’ homes to augment with culturally relevant volumetric and 3D AR content.

JumpAR by SKT uses Recording and Playback to bring South Korea to your home

VoxPlop! by Nexus Studios is experimenting with the notion of Spatial Video co-creation, where users can reach in and interact with a recorded space rather than simply placing content on top of a video. The Recording and Playback API enables users to record videos, drop in 3D characters and messages, and share them with family and friends.

VoxPlop! by Nexus Studios uses the Recording and Playback API to experiment with Spatial Video co-creation

Learn more and get started with the Recording and Playback API docs.

Get started with ARCore today

These latest ARCore updates round out a robust set of powerful developer tools for creating engaging and realistic AR experiences. With over a billion lifetime installs and 850 million compatible devices, ARCore makes augmented reality accessible to nearly everyone with a smartphone. We're looking forward to seeing how you innovate and reach more users with ARCore. To learn more and get started with the new APIs, visit the ARCore developer website.

Google Pay integration patterns that drive conversions on Android

Posted by Jose Ugia, Developer Relations Engineer, Google Pay & Anthony Panissidi, Technical Writer, Google Developer Studio

What do Gilt, MTS, Panera Bread, and SpotHero have in common?

At first glance, you probably only see four totally different businesses:

  • Gilt is an online shopping and lifestyle website.
  • MTS is a mobile network operator with 80 million users in Armenia, Belarus, and Russia.
  • Panera Bread is a chain of more than 2,000 fast-casual bakery-cafe restaurants in the US and Canada.
  • SpotHero is a digital parking marketplace that lets drivers reserve and pay for parking spots in more than 300 cities in the US and Canada.

However, all four businesses partnered with us to identify and adopt integration patterns that drive the most conversions on Google Pay for Android. In this blog post, we share these proven integration practices so that you can get the most out of Google Pay in your Android apps, as well as additional security tips that you can use to further secure your payment flows.

UI and UX patterns

Take a look at the following strategies to improve user experience in your app:

  • Payment-method selection
  • Express checkout
  • Guest checkout
  • Payment notifications

Payment-method selection

If you set Google Pay as the default payment option for ready-to-pay users, your users only need to click or tap twice to complete their transactions, so they enjoy a more seamless payment experience and are less likely to abandon their carts.
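
A sketch of how this might look in code, using the Google Pay API for Android's isReadyToPay() check to decide whether to preselect Google Pay (the trimmed request JSON is illustrative):

import android.app.Activity
import com.google.android.gms.wallet.IsReadyToPayRequest
import com.google.android.gms.wallet.PaymentsClient
import com.google.android.gms.wallet.Wallet
import com.google.android.gms.wallet.WalletConstants

// Check whether the user is ready to pay; if so, preselect Google Pay
// in your payment-method list.
fun preselectGooglePay(activity: Activity, onReady: (Boolean) -> Unit) {
    val paymentsClient: PaymentsClient = Wallet.getPaymentsClient(
        activity,
        Wallet.WalletOptions.Builder()
            .setEnvironment(WalletConstants.ENVIRONMENT_TEST) // ENVIRONMENT_PRODUCTION when live
            .build()
    )
    val request = IsReadyToPayRequest.fromJson(
        """
        {
          "apiVersion": 2,
          "apiVersionMinor": 0,
          "allowedPaymentMethods": [{
            "type": "CARD",
            "parameters": {
              "allowedCardNetworks": ["VISA", "MASTERCARD"],
              "allowedAuthMethods": ["PAN_ONLY", "CRYPTOGRAM_3DS"]
            }
          }]
        }
        """.trimIndent()
    )
    paymentsClient.isReadyToPay(request).addOnCompleteListener { task ->
        // true: set Google Pay as the default option; false: hide or demote it.
        onReady(task.isSuccessful && task.result == true)
    }
}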

Our partners who implemented this pattern reported a significant increase in their success metrics. For example, at Gilt, 34% of total Google Pay checkouts were net-new Gilt member conversions and 57% of total Google Pay checkouts were reactivations of lapsed Gilt members.

Express checkout

This feature lets your users purchase an item directly from the item's detail page without adding it to a cart, which shortens their path to purchase completion.

For example, Gilt integrated this feature into its checkout flow so users can complete a purchase with only a few clicks or taps. The Google Pay button on the product page lets users move directly to checkout with Google Pay set as the default payment option.

Guest checkout

This feature makes it easier for your users to complete purchases and convert, and it makes them more likely to create an account and engage again later.

To enable guest checkout, add Google Pay as an option to continue with the payment process alongside your account-creation elements.

For example, Panera Bread enabled guest checkout and found a 7% increase in order value and a 30% increase in wallet share.

As another example, SpotHero enabled guest checkout, and found that its sales funnel increased by 20 times while 87% of total checkouts were completed with Google Pay.

Payment notifications

This feature lets your users pay directly from notifications, which reduces friction in the payment process and further increases conversions.

Users sometimes receive payment notifications that they expect, such as after they abandon carts, make donations, or need to add credit to a prepaid card. They typically find these transactions simple and familiar, so they're ready to pay quickly with a little nudge.

MTS adopted this pattern to let its customers add credit to their accounts directly from notifications, and experienced an 80% increase in conversions.

Learn more

For more information about how to implement these UI and UX patterns, see our sample open source app and developer documentation.

Security tips

Before we go, we also want to share these security tips to further secure your payment flows:

  • Use SSL for all connections between your apps and backend services over the public internet.
  • Do not collect or store payment data or any other sensitive information in the clear within your app.
  • You can calculate the order price on the client side to show it in your UI and keep the user informed, but only authorize payments against totals recalculated in your backend services, as in the sketch below.
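
To make the last point concrete, here's a hedged, illustrative server-side sketch (not from the post): recompute the total from your own catalog and treat the client-reported total as informational only.

// Hypothetical order types; adapt to your backend's models.
data class LineItem(val sku: String, val quantity: Int)
data class OrderRequest(
    val items: List<LineItem>,
    val clientTotalCents: Long // what the app displayed; informational only
)

fun authorizedTotalCents(
    order: OrderRequest,
    priceCatalogCents: Map<String, Long> // source of truth on the server
): Long {
    val serverTotal = order.items.sumOf { item ->
        val unitPrice = priceCatalogCents[item.sku]
            ?: error("Unknown SKU: ${item.sku}")
        unitPrice * item.quantity
    }
    // Optionally flag mismatches for monitoring; always charge serverTotal.
    if (serverTotal != order.clientTotalCents) {
        println("Client total mismatch: $serverTotal vs ${order.clientTotalCents}")
    }
    return serverTotal
}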

Learn more

Want to learn more about Google Pay? Here's what you can do:

Updated Google Pay app offers more consumer touchpoints

Posted by Soc Sieng, Developer Advocate, Payments & Ola Ben Har, Payments DevRel Lead

We redesigned the Google Pay app to boost user engagement with your business.

The redesigned app makes it easy for users to find your business and provides you with a branded surface that lets you build relationships with your customers at scale.

The app is available in the App Store and Google Play Store in the US, India, and Singapore, with availability in more markets on the way. In this blog post, we focus on features available in the US version of the app.

New in Google Pay

The Google Pay app focuses on users' relationships with people, businesses, and other everyday essentials.

The app lets users send money, save money, and see spending insights.

It makes it easy for users to save money at their favorite businesses and discover new ones.

It also provides your brand with another surface to initiate meaningful reengagement with your customers. The branded experience is automatically created when customers check out with Google Pay or a Google Pay-enrolled card in the app, in stores, or online. This dedicated space for your business is also where customers can redeem offers, sign up for loyalty rewards, and view their transaction histories.

How it works

Google Pay's new features are only part of the story.

Behind the scenes, we worked on the Google Pay APIs and developer tools to enable those experiences, help you acquire new customers, and better serve existing ones.

Google Pay APIs for Web and Android

The Google Pay APIs for Web and Android power the transaction history within your branded experience on Google Pay, in addition to contactless payments in stores. After a user makes a purchase with Google Pay or a Google Pay-enrolled card, they can search for your brand and view their transaction history in Google Pay.

When you integrate with the Google Pay APIs, you're not only providing a convenient and secure checkout option in your app or on your website, but you also let your users track their transactions, independent of the channel, in one central place. Your brand becomes searchable for millions of active Google Pay users, which provides you with more reengagement opportunities.

Loyalty Enrollment and Sign-in API

The Loyalty Enrollment and Sign-in API lets users discover your loyalty program and sign up or sign in to it from your branded experience with a few taps in Google Pay.

When users sign up, they provide their consent and Google Pay securely shares sign-up details with your loyalty program’s sign-up process. They can use information that they already saved to their Google Accounts, which makes the sign-up process a snap. Afterward, users can easily access their loyalty passes at checkout.

That does it for now, but these updates are only the beginning, so stay tuned for more news in this space!

Learn more

Want to learn more about Google Pay? Here's what you can do: