
Faster Rust Toolchains for Android

Posted by Chris Wailes - Senior Software Engineer

The performance, safety, and developer productivity provided by Rust has led to rapid adoption in the Android Platform. Since slower build times are a concern when using Rust, particularly within a massive project like Android, we've worked to ship the fastest version of the Rust toolchain that we can. To do this we leverage multiple forms of profiling and optimization, as well as tuning C/C++, linker, and Rust flags. Much of what I’m about to describe is similar to the build process for the official releases of the Rust toolchain, but tailored for the specific needs of the Android codebase. I hope that this post will be generally informative and, if you are a maintainer of a Rust toolchain, may make your life easier.

Android’s Compilers

While Android is certainly not unique in its need for a performant cross-compiling toolchain, this fact, combined with the large number of daily Android build invocations, means that we must carefully balance the tradeoffs between the time it takes to build a toolchain, the toolchain’s size, and the produced compiler’s performance.

Our Build Process

To be clear, the optimizations listed below are also present in the versions of rustc that are obtained using rustup. What differentiates the Android toolchain from the official releases, besides the provenance, are the cross-compilation targets available and the codebase used for profiling. All performance numbers listed below are the time it takes to build the Rust components of an Android image and may not be reflective of the speedup when compiling other codebases with our toolchain.

Codegen Units (CGU1)

When Rust compiles a crate it will break it into some number of code generation units. Each independent chunk of code is generated and optimized concurrently and then later re-combined. This approach allows LLVM to process each code generation unit separately and improves compile time but can reduce the performance of the generated code. Some of this performance can be recovered via the use of Link Time Optimization (LTO), but this isn’t guaranteed to achieve the same performance as if the crate were compiled in a single codegen unit.

To expose as many opportunities for optimization as possible and to ensure reproducible builds, we add the -C codegen-units=1 option to the RUSTFLAGS environment variable. This reduces the size of the toolchain by ~5.5% while increasing performance by ~1.8%.

Be aware that setting this option will slow down the time it takes to build the toolchain by ~2x (measured on our workstations).

GC Sections

Many projects, including the Rust toolchain, have functions, classes, or even entire namespaces that are not needed in certain contexts. The safest and easiest option is to leave these code objects in the final product. This will increase code size and may decrease performance (due to caching and layout issues), but it should never produce a miscompiled or mislinked binary.

It is possible, however, to ask the linker to remove code objects that aren’t transitively referenced from the main() function using the --gc-sections linker argument. The linker can only operate on a per-section basis, so, if any object in a section is referenced, the entire section must be retained. This is why it is also common to pass the -ffunction-sections and -fdata-sections options to the compiler or code generation backend. This ensures that each code object is given an independent section, thus allowing the linker’s garbage collection pass to collect objects individually.

This is one of the first optimizations we implemented and, at the time, it produced significant size savings (on the order of 100s of MiBs). However, most of these gains have been subsumed by those made from setting -C codegen-units=1 when the two are used in combination, and there is now no difference in size or performance between the two produced toolchains. That said, due to the extra overhead, we do not always use CGU1 when building the toolchain. When testing for correctness, the final speed of the compiler is less important and, as such, we allow the toolchain to be built with the default number of codegen units. In these situations we still run section GC during linking, as it yields some performance and size benefits at a very low cost.

Link-Time Optimization (LTO)

A compiler can only optimize the functions and data it can see. Building a library or executable from independent object files or libraries can speed up compilation but at the cost of optimizations that depend on information that’s only available when the final binary is assembled. Link-Time Optimization gives the compiler another opportunity to analyze and modify the binary during linking.

For the Android Rust toolchain we perform thin LTO on both the C++ code in LLVM and the Rust code that makes up the Rust compiler and tools. Because the IR emitted by our clang might be a different version than the IR emitted by rustc, we can’t perform cross-language LTO or statically link against libLLVM. However, the performance gains from using an LTO-optimized shared library are greater than those from using a non-LTO-optimized static library, so we’ve opted to use shared linking.

Using CGU1, GC sections, and LTO produces a speedup of ~7.7% and size improvement of ~5.4% over the baseline. This works out to a speedup of ~6% over the previous stage in the pipeline due solely to LTO.

Profile-Guided Optimization (PGO)

Command line arguments, environment variables, and the contents of files can all influence how a program executes. Some blocks of code might be used frequently while other branches and functions may only be used when an error occurs. By profiling an application as it executes we can collect data on how often these code blocks are executed. This data can then be used to guide optimizations when recompiling the program.

We use instrumented binaries to collect profiles from both building the Rust toolchain itself and from building the Rust components of Android images for x86_64, aarch64, and riscv64. These four profiles are then combined and the toolchain is recompiled with profile-guided optimizations.

As a result, the toolchain achieves a ~19.8% speedup and a ~5.3% reduction in size over the baseline compiler. This is a ~13.2% speedup over the previous stage in the pipeline.

BOLT: Binary Optimization and Layout Tool

Even with LTO enabled, the linker is still in control of the layout of the final binary. Because it isn’t being guided by any profiling information, the linker might accidentally place a function that is frequently called (hot) next to a function that is rarely called (cold). When the hot function is later called, all functions on the same memory page will be loaded. The cold functions are now taking up space that could be allocated to other hot functions, thus forcing the additional pages that do contain these functions to be loaded.

BOLT mitigates this problem by using an additional set of layout-focused profiling information to re-organize functions and data. For the purposes of speeding up rustc we profiled libLLVM, libstd, and librustc_driver, which are the compiler’s main dependencies. These libraries are then BOLT optimized using the following options:

--peepholes=all
--data=<path-to-profile>
--reorder-blocks=ext-tsp
--reorder-functions=hfsort
--split-functions
--split-all-cold
--split-eh
--dyno-stats

Any additional libraries matching lib/*.so are optimized without profiles using only --peepholes=all.

Applying BOLT to our toolchain produces a speedup over the baseline compiler of ~24.7% at a size increase of ~10.9%. This is a speedup of ~6.1% over the PGOed compiler without BOLT.

If you are interested in using BOLT in your own project/build I offer these two bits of advice: 1) you’ll need to emit additional relocation information into your binaries using the -Wl,--emit-relocs linker argument and 2) use the same input library when invoking BOLT to produce the instrumented and the optimized versions.

Conclusion

[Graph: normalized comparison of toolchain size and Android Rust build time across optimization stages]

Optimizations                    Speedup vs Baseline
Monolithic                       1.8%
Mono + GC Sections               1.9%
Mono + GC + LTO                  7.7%
Mono + GC + LTO + PGO            19.8%
Mono + GC + LTO + PGO + BOLT     24.7%

By compiling as a single code generation unit, garbage collecting our data objects, performing both link-time and profile-guided optimizations, and leveraging the BOLT tool, we were able to speed up the time it takes to compile the Rust components of Android by 24.8%. For every 50K Android builds per day run in our CI infrastructure, we save ~10K hours of serial execution.

Our industry is not one to stand still and there will surely be another tool and another set of profiles in need of collecting in the near future. Until then we’ll continue making incremental improvements in search of additional performance. Happy coding!

Records in Android Studio Flamingo

Posted by Clément Béra, Senior software engineer

Records are a new Java feature for immutable data carrier classes introduced in Java 16 and Android 14. To use records in Android Studio Flamingo, you need an Android 14 (API level 34) SDK so the java.lang.Record class is in android.jar. This is available from the "Android UpsideDownCake Preview" SDK revision 4. Records are essentially classes with immutable properties and implicit hashCode, equals, and toString methods based on the underlying data fields. In that respect they are very similar to Kotlin data classes. To declare a Person record with the fields String name and int age to be compiled to a Java record, use the following code:

@JvmRecord data class Person(val name: String, val age: Int)

The build.gradle file also needs to be extended to use the correct SDK and Java source and target. Currently the Android UpsideDownCake Preview is required, but when the Android 14 final SDK is released use "compileSdk 34" and "targetSdk 34" in place of the preview version.

android {
    compileSdkPreview "UpsideDownCake"

    defaultConfig {
        targetSdkPreview "UpsideDownCake"
    }

    compileOptions {
        sourceCompatibility JavaVersion.VERSION_17
        targetCompatibility JavaVersion.VERSION_17
    }

    kotlinOptions {
        jvmTarget = '17'
    }
}

Records don’t necessarily bring value compared to data classes in pure Kotlin programs, but they let Kotlin programs interact with Java libraries whose APIs include records. For Java programmers this allows Java code to use records. Use the following code to declare the same record in Java:

public record Person(String name, int age) {}

Besides the record flags and attributes, the record Person is roughly equivalent to the following class described using Kotlin source:

class PersonEquivalent(val name: String, val age: Int) {

  override fun hashCode(): Int {
    return 31 * (31 * PersonEquivalent::class.hashCode()
        + name.hashCode()) + Integer.hashCode(age)
  }

  override fun equals(other: Any?): Boolean {
    if (other == null || other !is PersonEquivalent) {
      return false
    }
    return name == other.name && age == other.age
  }

  override fun toString(): String {
    return String.format(
        PersonEquivalent::class.java.simpleName + "[name=%s, age=%s]",
        name,
        age.toString())
  }
}

println(Person("John", 42).toString())
>>> Person[name=John, age=42]

It is possible in a record class to override the hashCode, equals, and toString methods, effectively replacing the JVM runtime generated methods. In this case, the behavior is user-defined for these methods.
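
For example, the following sketch (the PersonWithCustomString class name and output format are illustrative) keeps the generated hashCode and equals but supplies a custom toString:

@JvmRecord
data class PersonWithCustomString(val name: String, val age: Int) {
  // User-defined toString replaces the generated implementation;
  // hashCode and equals are still derived from the fields.
  override fun toString(): String {
    return "$name, aged $age"
  }
}

println(PersonWithCustomString("John", 42))
>>> John, aged 42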

Record desugaring

Since records are not supported on any Android device today, the D8/R8 desugaring engine needs to desugar records: it transforms the record code into code compatible with the Android VMs. Record desugaring involves transforming the record into a roughly equivalent class, without generating or compiling sources. The following Kotlin source shows an approximation of the generated code. To keep application code size small, records are desugared so that helper methods are shared between records.

class PersonDesugared(val name: String, val age: Int) {

  fun getFieldsAsObjects(): Array<Any> {
    return arrayOf(name, age)
  }

  override fun hashCode(): Int {
    return SharedRecordHelper.hash(
        PersonDesugared::class.java, getFieldsAsObjects())
  }

  override fun equals(other: Any?): Boolean {
    if (other == null || other !is PersonDesugared) {
      return false
    }
    return getFieldsAsObjects().contentEquals(other.getFieldsAsObjects())
  }

  override fun toString(): String {
    return SharedRecordHelper.toString(
        getFieldsAsObjects(), PersonDesugared::class.java, "name;age")
  }

  // The SharedRecordHelper is present once in each app using records and its
  // methods are shared in between all records.
  class SharedRecordHelper {
    companion object {
      fun hash(recordClass: Class<*>, fieldValues: Array<Any>): Int {
        return 31 * recordClass.hashCode() + fieldValues.contentHashCode()
      }

      fun toString(
          fieldValues: Array<Any>,
          recordClass: Class<*>,
          fieldNames: String
      ): String {
        val fieldNamesSplit: List<String> =
            if (fieldNames.isEmpty()) emptyList() else fieldNames.split(";")
        val builder: StringBuilder = StringBuilder()
        builder.append(recordClass.simpleName).append("[")
        for (i in fieldNamesSplit.indices) {
          builder
              .append(fieldNamesSplit[i])
              .append("=")
              .append(fieldValues[i])
          if (i != fieldNamesSplit.size - 1) {
            builder.append(", ")
          }
        }
        builder.append("]")
        return builder.toString()
      }
    }
  }
}

Record shrinking

R8 assumes that the default hashCode, equals, and toString methods generated by javac effectively represent the internal state of the record. Therefore, if a field is minified, the methods should reflect that; toString should print the minified name. If a field is removed, for example because it has a constant value across all instances, then the methods should reflect that; the field is ignored by the hashCode, equals, and toString methods. When R8 uses the record structure in the methods generated by javac, for example when it looks up fields in the record or inspects the printed record structure, it's using reflection. As is the case for any use of reflection, you must write keep rules to inform the shrinker of the reflective use so that it can preserve the structure.

In our example, assume that age is the constant 42 across the application while name isn’t constant across the application. Then toString returns different results depending on the rules you set:

Person("John", 42).toString();

// With D8 or R8 with -dontobfuscate -dontoptimize
>>> Person[name=John, age=42]
// With R8 and no keep rule.
>>> a[a=John]
// With R8 and -keep,allowshrinking,allowoptimization class Person
>>> Person[b=John]
// With R8 and -keepclassmembers,allowshrinking,allowoptimization class Person { <fields>; }
>>> a[name=John]
// With R8 and -keepclassmembers,allowobfuscation class Person { <fields>; }
>>> a[a=John, b=42]
// With R8 and -keep class Person { <fields>; }
>>> Person[name=John, age=42]

Reflective use cases

Preserve toString behavior

Say you have code that uses the exact printing of the record and expects it to be unchanged. For that you must keep the full content of the record fields with a rule such as:

-keep,allowshrinking class Person
-keepclassmembers,allowoptimization class Person { <fields>; }

This ensures that if the Person record is retained in the output, any toString call produces the exact same string as it would in the original program. For example:

Person("John", 42).toString(); >>> Person[name=John, age=42]

However, if you only want to preserve the printing for the fields that are actually used, you can let the unused fields be removed or shrunk with allowshrinking:

-keep,allowshrinking class Person
-keepclassmembers,allowshrinking,allowoptimization class Person { <fields>; }

With this rule, the compiler drops the age field:

Person("John", 42).toString(); >>> Person[name=John]

Preserve record members for reflective lookup

If you need to reflectively access a record member, you typically need to access its accessor method. For that you must keep the accessor method:

-keep,allowshrinking class Person
-keepclassmembers,allowoptimization class Person { java.lang.String name(); }

Now if instances of Person are in the residual program you can safely look up the existence of the accessor reflectively:

Person("John", 42)::class.java.getDeclaredMethod("name").invoke(obj); >>> John

Notice that the previous code accesses the record field using the accessor. For direct field access, you need to keep the field itself:

-keep,allowshrinking class Person
-keepclassmembers,allowoptimization class Person { java.lang.String name; }

Build systems and the Record class

If you’re using a build system other than AGP, using records may require you to adapt the build system. The java.lang.Record class is not present until Android 14; it was introduced in the SDK from "Android UpsideDownCake Preview" revision 4. D8/R8 introduces com.android.tools.r8.RecordTag, an empty class, to indicate that a record subclass is a record. RecordTag is used so that instructions referencing java.lang.Record can be rewritten directly by desugaring to reference RecordTag and still work (instanceof, method and field signatures, etc.).

This means that each build containing a reference to java.lang.Record generates a synthetic RecordTag class. In a situation where an application is split into shards, with each shard compiled to its own dex file and the dex files assembled into the Android application without merging, this could lead to duplicate RecordTag classes.

To avoid the issue, any D8 intermediate build generates the RecordTag class as a global synthetic, in a different output than the dex file. The dex merge step is then able to correctly merge global synthetics and avoid unexpected runtime behavior. Any build system that uses multiple compilation steps, such as sharding or intermediate outputs, must support global synthetics for records to work correctly. AGP fully supports records from version 8.1.

Media3 is ready to play!

Posted by Nevin Mital - Developer Relations Engineer, Android Media

Today, we’re pleased to announce the full release of the Jetpack Media3 library. After sharing a first look at the library at Android Developer Summit 2021, we published several alpha and beta releases over the past several months to ensure a high-quality set of APIs that we now encourage everyone to adopt.

Media3 is the new home for APIs that enable you to create rich audio and video experiences. If you’ve used libraries like ExoPlayer, MediaCompat, or Media2, you’ll find Media3 to be familiar. However, instead of using these separate libraries, Media3 provides a unified API for playback use-cases and also expands to cover new use-cases like video editing and transcoding. The APIs are simple to use yet powerful, customizable to meet your needs, and reliable and optimized so you can build for the diverse Android device ecosystem.

In this blog post, we’ll focus on the playback APIs in Media3, so please stay tuned for an upcoming post where we’ll dive deeper into the video editing and transcoding APIs. As a brief introduction, the following table describes key components for playback in Media3:

Player

An interface that defines traditional high-level functionality for an audio or video player, such as playback controls.

ExoPlayer

The default implementation of the Player interface in Media3.

MediaSession

An API that advertises media playback to and receives playback command requests from external clients.

MediaSessionService

A service that holds a MediaSession to enable background playback.

MediaLibraryService

A service that additionally allows you to expose a content library to external clients.

MediaController

An API that is generally used by external clients to retrieve playback information and send playback command requests to your media app. Complementary to a MediaSession. Examples of external clients include the notification and lock screen media controls on mobile and large screen devices, Android Auto, WearOS, and Google Assistant.

MediaBrowser

An API that additionally enables external clients to navigate your media app’s content library. Complementary to a MediaLibraryService.

Our developer documentation has more details on these components. Let’s take a closer look into what this new library offers and how you can start using it.

Keeping it simple

By consolidating the APIs for the playback developer journey into a single library, Media3 is able to introduce a Player interface that is used by several components, such as MediaSession and MediaController. This interface outlines traditional high-level functionality for audio and video playback, such as playback controls and the ability to query properties of the currently playing media.

Having a common interface for all “player-like” components means that creating new instances of these objects is straightforward:

val player = ExoPlayer.Builder(context).build()
val session = MediaSession.Builder(context, player).build()
val controllerFuture = MediaController.Builder(context, session.token).buildAsync()

Media3's MediaSession and MediaController will automatically reflect the state of the components they're connected to. As a result, you can also simplify your app’s architecture by removing connectors like ExoPlayer’s MediaSessionConnector and more easily follow the flow of logic through your app. Calling play() on the MediaController will forward the action to the MediaSession, which will then forward it to the player.

Similarly, Media3 aims to make background playback cases easier to handle. The PlayerNotificationManager from ExoPlayer is no longer needed, as Media3’s MediaSessionService and MediaLibraryService automatically handle publishing a media notification as needed. The library handles configuring, starting, and stopping a foreground service for you as needed, but please also note some known issues summarized in this comment.
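
For example, a minimal background playback service might look like the following sketch (the PlaybackService name is illustrative, and the service still needs to be declared in your app’s manifest):

class PlaybackService : MediaSessionService() {
  private var mediaSession: MediaSession? = null

  // Create the player and session when the service is created.
  override fun onCreate() {
    super.onCreate()
    val player = ExoPlayer.Builder(this).build()
    mediaSession = MediaSession.Builder(this, player).build()
  }

  // Return the session so external clients (System UI, Android Auto, etc.) can connect.
  override fun onGetSession(controllerInfo: MediaSession.ControllerInfo): MediaSession? =
    mediaSession

  // Release the player and session when the service is destroyed.
  override fun onDestroy() {
    mediaSession?.run {
      player.release()
      release()
      mediaSession = null
    }
    super.onDestroy()
  }
}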

ExoPlayer is deprecated, long live ExoPlayer!

ExoPlayer has a new home and is the default implementation of the aforementioned Player interface in Media3. The standalone ExoPlayer project, with package name com.google.android.exoplayer2, will soon be discontinued, and future updates will be published in Media3. For the next few months, we’ll continue publishing equivalent releases of both Media3 and ExoPlayer to help you make the transition to Media3. For example, this means that ExoPlayer 2.18.5 and ExoPlayer in Media3 1.0.0 are identical aside from their package names. However, this is only temporary and we will deprecate the standalone ExoPlayer later this year, so we highly recommend migrating to Media3 as soon as possible. The “Migrating to Media3” section below describes the process in more detail, which includes a script that handles most of the work for you.

Note that Media3 is developed with the same philosophy as ExoPlayer (and in fact, is developed by the same team!). In other words, Media3 retains ExoPlayer’s customizable components, open source development on GitHub, receptivity to pull requests, and public issue tracker, to name a few similarities.

Migrating to Media3

As mentioned previously, the standalone ExoPlayer project, with package name com.google.android.exoplayer2, will soon be discontinued, so to continue receiving updates, you will need to migrate to Media3 ExoPlayer. Other Media APIs that should be migrated to Media3 include, but are not limited to, MediaSessionConnector, MediaBrowserServiceCompat, and MediaBrowserCompat.

We’ve prepared two key resources to help you achieve this migration as smoothly as possible:

  1. A migration guide to walk you through the process step-by-step
  2. A migration script to convert your standalone ExoPlayer project packages to the corresponding new modules and packages under Media3

The good news is that if you’re currently using ExoPlayer, there’s no need for any code changes and no need to re-integrate or re-write any customizations. The standalone ExoPlayer and Media3 ExoPlayer are identical aside from the package name, and the conversion can be done automatically with the aforementioned migration script. Just make sure you’ve updated your project to use the latest version of ExoPlayer before getting started. For full details and steps, please refer to the migration guide.

Furthermore, since Media3 is fully backwards-compatible with prior media APIs such as MediaControllerCompat and MediaMetadataCompat, your existing integrations will continue to work as before even after the migration. Note that new features such as per-controller customization of commands are only available for clients using Media3. That is to say, for example, all legacy controllers, such as MediaControllerCompat, will receive the same set of available commands. You can identify a legacy controller by checking if getControllerVersion() returns 0 in the MediaSession.ControllerInfo.
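
For example, a sketch of such a check inside your MediaSession.Callback’s onConnect (following the session callback pattern shown later in this post):

override fun onConnect(
  session: MediaSession,
  controller: MediaSession.ControllerInfo
): MediaSession.ConnectionResult {
  if (controller.controllerVersion == 0) {
    // Legacy client (e.g. MediaControllerCompat): per-controller
    // customization of commands will not apply to this controller.
  }
  return super.onConnect(session, controller)
}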

The power of Media3, in the palm of your hand

Media3 offers several options for you to adjust its behavior to better fit your needs. The next few sections describe some such mechanisms.

Play it your own way

Although ExoPlayer is the recommended Player implementation to use for audio and video streaming apps, Media3 also introduces the SimpleBasePlayer to minimize the number of methods you need to implement to integrate with a custom player. Start by implementing the getState method. This is where you can declare the Command set supported by your player and configure metadata such as the currently playing media item index and the current timestamp.

class CustomPlayer : SimpleBasePlayer(looper) {
  override fun getState(): State {
    // Set available Commands
    // Configure playWhenReady, mediaItemIndex, currentPosition, etc.
  }

  // Implement methods required by available Commands
}

The SimpleBasePlayer class will enforce valid player state and handle informing listeners of state changes. Additionally, any methods related to a Command you don’t declare as available are ignored, so beyond getState, you only need to implement the methods that will actually be used.
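
As a rough sketch, a custom player that only supports play/pause might track a single playWhenReady flag and implement the matching handler. The names and state details below are illustrative; roughly speaking, the framework re-reads getState once the returned future completes:

class PlayPauseOnlyPlayer(looper: Looper) : SimpleBasePlayer(looper) {
  private var playWhenReady = false

  override fun getState(): State {
    return State.Builder()
      .setAvailableCommands(
        Player.Commands.Builder().add(Player.COMMAND_PLAY_PAUSE).build())
      .setPlayWhenReady(playWhenReady, Player.PLAY_WHEN_READY_CHANGE_REASON_USER_REQUEST)
      .build()
  }

  // Invoked for play()/pause() because COMMAND_PLAY_PAUSE is declared available.
  override fun handleSetPlayWhenReady(playWhenReady: Boolean): ListenableFuture<*> {
    this.playWhenReady = playWhenReady
    return Futures.immediateVoidFuture()
  }
}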

Better control over your commands

The MediaSession and MediaController APIs have also been updated to give you more control. With Media3, you can advertise your app’s playback capabilities on a per-controller basis. Modify the commands available to a client app in the onConnect method of your MediaSession.Callback. For example, to prevent a client app with package name com.example.myClient from having access to the “seek to next media item” Player.Command:

var sessionCallback = object : MediaSession.Callback {
  override fun onConnect(
    session: MediaSession,
    controller: MediaSession.ControllerInfo
  ): MediaSession.ConnectionResult {
    val connectionResult = super.onConnect(session, controller)
    if (controller.packageName == "com.example.myClient") {
      val availablePlayerCommands =
        connectionResult.availablePlayerCommands.buildUpon()
          // Disallow myClient from being able to skip to the next media item
          .remove(Player.COMMAND_SEEK_TO_NEXT_MEDIA_ITEM)
          .build()
      return MediaSession.ConnectionResult.accept(
        connectionResult.availableSessionCommands,
        availablePlayerCommands
      )
    }
    return connectionResult // Other clients retain normal command access
  }
}

var mediaSession = MediaSession.Builder(context, player)
  .setCallback(sessionCallback) // Remember to set the callback on your MediaSession!
  .build()

Creating custom commands

Of course, as with the previous media APIs, you can add custom commands tailored to your app. To implement a custom command, create a new SessionCommand. Similar to the example above, you can give controllers access to this custom command by including it in the list of available session commands. You can handle custom command behavior in the onCustomCommand method of the same Callback:

override fun onCustomCommand(
  session: MediaSession,
  controller: MediaSession.ControllerInfo,
  customCommand: SessionCommand,
  args: Bundle
): ListenableFuture<SessionResult> {
  if (customCommand.customAction == MY_CUSTOM_COMMAND) {
    // Do custom action
    return Futures.immediateFuture(SessionResult(SessionResult.RESULT_SUCCESS))
  }
  // Return error for invalid custom command
  return Futures.immediateFuture(SessionResult(SessionResult.RESULT_ERROR_BAD_VALUE))
}

You can also ask client apps to display your custom command by including it in a setCustomLayout call in the onPostConnect method of the MediaSession.Callback.
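
For example, a sketch of onPostConnect that surfaces the custom command from the previous snippet (the display name and icon resource are illustrative):

override fun onPostConnect(session: MediaSession, controller: MediaSession.ControllerInfo) {
  val customButton = CommandButton.Builder()
    .setDisplayName("My custom action")
    .setIconResId(R.drawable.ic_my_custom_action) // Illustrative icon resource
    .setSessionCommand(SessionCommand(MY_CUSTOM_COMMAND, Bundle.EMPTY))
    .build()
  // Ask this controller to display the custom command in its layout.
  session.setCustomLayout(controller, listOf(customButton))
}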

Next steps

We’d love for you to start using Media3 in your app! 

To start exploring the library, feel free to check out the demo app to see an example of audio and video playback, including how to integrate with a media session. Stay tuned to our developer guides for more detailed guidance on the different components in Media3 landing soon. Our sample app, the Universal Android Music Player, and our testing tool, the Media Controller Test app, will also be updated to Media3 on their main branches in the coming weeks.

If you run into any issues, have any feature requests, or would like to share any other sort of feedback, please let us know using the Media3 issue tracker on GitHub. We look forward to hearing from you!

Introducing the Google Wallet API

Posted by Petra Cross, Engineer, Google Wallet and Jose Ugia, Google Developer Relations Engineer

Google Pay API for Passes is now called Google Wallet API

Formerly known as Google Pay API for Passes, the Google Wallet API lets you digitize everything from boarding passes to loyalty programs, and engage your customers with notifications and real-time updates.

New features in Google Wallet API

Support for Generic Pass Type

The Google Pay API for Passes supported 7 types of passes: offers, loyalty cards, gift cards, event tickets, boarding passes, transit tickets and vaccine cards. But what if you want to issue passes or cards that do not fit into any of these categories, such as membership cards, or insurance cards?

We are thrilled to announce support for generic passes to the Google Wallet API so you can customize your pass objects to adapt to your program characteristics. The options are endless. If it is a card and has some text, a barcode or a QR code, it can be saved as a generic card.

You now have the flexibility to control the look and design of the card itself, by providing a card template that can contain up to 3 rows with 1-3 fields per row. You can also configure a number of attributes such as the barcode, QR code, or a hero image. Check out our documentation to learn more about how to create generic passes.

While generic passes can be used to mimic the appearance of any existing supported pass type (such as a loyalty card), we recommend you continue to use specialized pass types when available. For example, when you use the boarding pass type for boarding passes, your users receive flight delay notifications.

Grouping passes and mixing pass types

With the new Google Wallet API, you can also group passes to offer a better experience to your users when multiple passes are needed. For example, you can group the entry ticket, a parking pass, and food vouchers for a concert.

In their list of passes, your users see a pass tile with a badge showing the number of items in the group. When they tap this tile, a carousel with all passes appears, allowing them to easily swipe between all passes in the group.


Here is an example JSON Web Token payload showing one offer and one event ticket, mixed together and sharing the same groupingId. Later, if you need to add or remove passes to/from the group, you can use the REST API to update the grouping information.

{
  "iss": "OWNER_EMAIL_ADDRESS",
  "aud": "google",
  "typ": "savetowallet",
  "iat": "UNIX_TIME",
  "origins": [],
  "payload": {
    "offerObjects": [
      {
        "classId": "YOUR_ISSUER_ID.OFFER_CLASS_ID",
        "id": "YOUR_ISSUER_ID.OFFER_ID",
        "groupingInfo": {
          "groupingId": "groupId1",
          "sortIndex": 2
        }
      }
    ],
    "eventTicketObjects": [
      {
        "classId": "YOUR_ISSUER_ID.EVENT_CLASS_ID",
        "id": "YOUR_ISSUER_ID.EVENT_ID",
        "groupingInfo": {
          "groupingId": "groupId1",
          "sortIndex": 1
        }
      }
    ]
  }
}


A note about Google Pay API for Passes:

Although we are introducing the Google Wallet API, all existing developer integrations with the previous Google Pay API for Passes will continue to work. When the Google Wallet app launches in just a few weeks, make sure to use the new “Add to Google Wallet” button, following the updated button guidelines.

We’re really excited to build a great digital wallet experience with you, and can’t wait to see how you use the Google Wallet API to enhance your user experience.

Learn more