Introducing the new Google Pay button view on Android

Posted by Jose Ugia – Developer Relations Engineer

The Google Pay button is your customer’s entry point to a swift and familiar checkout with Google Pay. Here are some updates we are bringing to the button to make it easier to use and customize for your checkout design, while creating a more consistent and informative experience that helps your customers fly through your checkout flows.


A new look and enhanced customization capabilities

Previously, you told us about the importance of a consistent checkout experience for your business. We gave the Google Pay button a fresh new look, applying the latest Material 3 design principles. The new Google Pay button comes in two versions that look great in dark- and light-themed applications. We have also added a subtle stroke that makes the button stand out in low-contrast interfaces. In addition, we are introducing new customization capabilities that make it easy to adjust the button’s shape and corner roundness, helping you create more consistent checkout experiences.

Figure 1: The new Google Pay button view for Android can be customized to make it more consistent with your checkout experience.

A new button view for Android

We also improved the Google Pay API for Android, introducing a new button view that simplifies the integration. Now, you can configure the Google Pay button in a more familiar way and add it directly to your XML layout. You can also configure the button programmatically if you prefer. This view lets you configure properties like the button theme and corner radius, and includes graphics and translations so you don’t have to download and configure them manually. Simply add the view to your interface, adjust it, and you’re done.
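
To make this concrete, here is a minimal sketch of the programmatic path in Kotlin. It assumes the class and builder names exposed by the beta wallet library (PayButton, ButtonOptions, ButtonConstants); the layout id, view id, and allowedPaymentMethods JSON are illustrative placeholders for your own integration.

    import android.os.Bundle
    import androidx.appcompat.app.AppCompatActivity
    import com.google.android.gms.wallet.button.ButtonConstants
    import com.google.android.gms.wallet.button.ButtonOptions
    import com.google.android.gms.wallet.button.PayButton

    class CheckoutActivity : AppCompatActivity() {

        // The same allowedPaymentMethods JSON array used for IsReadyToPayRequest.
        private val allowedPaymentMethodsJson = """
            [{"type": "CARD", "parameters": {
                "allowedAuthMethods": ["PAN_ONLY", "CRYPTOGRAM_3DS"],
                "allowedCardNetworks": ["MASTERCARD", "VISA"]}}]
        """.trimIndent()

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(R.layout.activity_checkout)

            // The button can also be declared directly in your XML layout; here we
            // look it up and configure its theme, shape, and payment methods.
            val payButton = findViewById<PayButton>(R.id.google_pay_button)
            payButton.initialize(
                ButtonOptions.newBuilder()
                    .setButtonTheme(ButtonConstants.ButtonTheme.DARK)  // or LIGHT
                    .setButtonType(ButtonConstants.ButtonType.PAY)
                    .setCornerRadius(24)                               // pixels
                    .setAllowedPaymentMethods(allowedPaymentMethodsJson)
                    .build()
            )
            payButton.setOnClickListener {
                // Start loadPaymentData() with your PaymentsClient here.
            }
        }
    }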

The new button API is available today in beta. Check out the updated tutorial for Android to start using the new button view.

Figure 2: An example implementation of how you can add and configure the new button view directly to your XML layout on Android.

Giving customers additional context with their card’s last 4 digits

Earlier, we introduced the dynamic Google Pay button for Web, which shows the card network and last four digits of the customer’s last used form of payment directly on the button. Since then, we have seen great results, with this additional context driving conversion improvements and increases of up to 40% in the number of transactions.

Later this quarter, you’ll be able to configure the new button view for Android to show your users additional information about the last card they used to complete a payment with Google Pay. We are also working to offer the Google Pay button as a UI element in Jetpack Compose, to help you build your UIs more rapidly using intuitive Kotlin APIs.
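
While the dedicated Compose element is still in the works, the existing view can already be embedded in a Compose UI through the standard AndroidView interop API. The sketch below is one way to do that, assuming the PayButton view exposes the standard View constructors and reusing the same hypothetical configuration as the earlier example; it is not the upcoming official Compose component.

    import androidx.compose.runtime.Composable
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.viewinterop.AndroidView
    import com.google.android.gms.wallet.button.ButtonConstants
    import com.google.android.gms.wallet.button.ButtonOptions
    import com.google.android.gms.wallet.button.PayButton

    @Composable
    fun GooglePayButton(
        allowedPaymentMethodsJson: String,
        onClick: () -> Unit,
        modifier: Modifier = Modifier
    ) {
        AndroidView(
            modifier = modifier,
            factory = { context ->
                // Create and configure the view-based button once.
                PayButton(context).apply {
                    initialize(
                        ButtonOptions.newBuilder()
                            .setButtonTheme(ButtonConstants.ButtonTheme.LIGHT)
                            .setCornerRadius(100)  // pixels; large values give a pill shape
                            .setAllowedPaymentMethods(allowedPaymentMethodsJson)
                            .build()
                    )
                }
            },
            update = { button -> button.setOnClickListener { onClick() } }
        )
    }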

Figure 3: An example of how the dynamic version of the new Google Pay button view will look on Android.

Next steps

Check out the documentation to start using the new Google Pay button view on Android. Also, check out our own button integration in the sample open source application on GitHub.

Stable Channel Update for Desktop

The Stable and Extended Stable channels have been updated to 113.0.5672.92/.93 for Windows and 113.0.5672.92 for Mac and Linux, which will roll out over the coming days/weeks. A full list of changes in this build is available in the log.

Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Prudhvikumar Bommana
Google Chrome

PJRT: Simplifying ML Hardware and Framework Integration

Infrastructure fragmentation in Machine Learning (ML) across frameworks, compilers, and runtimes makes developing new hardware and toolchains challenging. This inhibits the industry’s ability to quickly productionize ML-driven advancements. To simplify the growing complexity of ML workload execution across hardware and frameworks, we are excited to introduce PJRT and open source it as part of the recently available OpenXLA Project.

PJRT (used in conjunction with OpenXLA’s StableHLO) provides a hardware- and framework-independent interface for compilers and runtimes. It simplifies the integration of hardware with frameworks, accelerating framework coverage for the hardware and, in turn, making that hardware easier to target for workload execution.

PJRT is the primary interface for TensorFlow and JAX and fully supported for PyTorch, and is well integrated with the OpenXLA ecosystem to execute workloads on TPU, GPU, and CPU. It is also the default runtime execution path for most of Google’s internal production workloads. The toolchain-independent architecture of PJRT allows it to be leveraged by any hardware, framework, or compiler, with extensibility for unique features. With this open-source release, we're excited to allow anyone to begin leveraging PJRT for their own devices.

If you’re developing an ML hardware accelerator or developing your own compiler and runtime, check out the PJRT source code on GitHub and sign up for the OpenXLA mailing list to quickly bootstrap your work.

Vision: Simplifying ML Hardware and Framework Integration

We are entering a world of ambient experiences where intelligent apps and devices surround us, from the edge to the cloud, in a range of environments and scales. ML workload execution currently supports a combinatorial matrix of hardware, frameworks, and workflows, mostly through tight vertical integrations. Examples of such vertical integrations include specific kernels for TPU versus GPU, and specific toolchains to train and serve in TensorFlow versus PyTorch. These bespoke 1:1 integrations are perfectly valid solutions but promote lock-in, inhibit innovation, and are expensive to maintain. This problem of a fragmented software stack is compounded over time as different computing hardware needs to be supported.

A variety of ML hardware exists today and hardware diversity is expected to increase in the future. ML users have options and they want to exercise them seamlessly: users want to train a large language model (LLM) on TPUs in the cloud, batch-infer on GPUs or even CPUs, then distill, quantize, and finally serve the model on mobile processors. Our goal is to make ML workloads portable across hardware by making it easy to integrate the hardware into the ML infrastructure (framework, compiler, runtime).

Portability: Seamless Execution

The workflow to enable this vision with PJRT is as follows (shown in Figure 1):

  1. The hardware-specific compiler and runtime provider implements the PJRT API, packages it as a plugin containing the compiler and runtime hooks, and registers it with the frameworks. The implementation can be opaque to the frameworks.
  2. The frameworks discover and load one or multiple PJRT plugins as dynamic libraries targeting the hardware on which to execute the workload.
  3. That’s it! Execute the workload from the framework onto the target hardware.

The PJRT API will be backward compatible, so plugins will not need to change often and can check API versions for feature availability.

Figure 1: To target specific hardware, provide an implementation of the PJRT API to package a compiler and runtime plugin that can be called by the framework.

Cohesive Ecosystem

As a foundational pillar of the OpenXLA Project, PJRT is well integrated with other OpenXLA components, including StableHLO and the OpenXLA compilers (XLA, IREE). It is the primary interface for TensorFlow and JAX and is fully supported for PyTorch through PyTorch/XLA. It provides the hardware interface layer for solving the combinatorial framework x hardware ML infrastructure fragmentation (see Figure 2).

Figure 2: PJRT provides the hardware interface layer in solving the combinatorial framework x hardware ML infrastructure fragmentation, well-integrated with OpenXLA.

Toolchain Independent

PJRT is hardware and framework independent. With framework integration through the self-contained IR StableHLO, PJRT is not coupled with a specific compiler, and can be used outside of the OpenXLA ecosystem, including with other proprietary compilers. Its public availability and toolchain-independent architecture allow it to be used by any hardware, framework, or compiler, with extensibility for unique features. If you are developing an ML hardware accelerator, compiler, or runtime targeting any hardware, or converging siloed toolchains to solve infrastructure fragmentation, PJRT can minimize bespoke hardware and framework integration, providing greater coverage and improving time-to-market at lower development cost.

Driving Impact with Collaboration

Industry partners such as Intel have already adopted PJRT.

Intel

Intel is leveraging PJRT in Intel® Extension for TensorFlow to provide the Intel GPU backend for TensorFlow and JAX. This implementation is based on the PJRT plugin mechanism (see RFC). Check out how this greatly simplifies the framework and hardware integration with this example of executing a JAX program on Intel GPU.

"At Intel, we share Google's vision of modular interfaces to make integration easier and enable faster, framework-independent development. Similar in design to the PluggableDevice mechanism, PJRT is a pluggable interface that allows us to easily compile and execute XLA's High Level Operations on Intel devices. Its simple design allowed us to quickly integrate it into our systems and start running JAX workloads on Intel® GPUs within just a few months. PJRT enables us to more efficiently deliver hardware acceleration and oneAPI-powered AI software optimizations to developers using a wide range of AI Frameworks." - Wei Li, VP and GM, Artificial Intelligence and Analytics, Intel.

Technology Leader

We’re also working with a technology leader to leverage PJRT to provide the backend targeting their proprietary processor for JAX. More details on this to follow soon.

Get Involved

PJRT is available on GitHub, including the source code for the API, a reference openxla-pjrt-plugin, and integration guides. If you develop ML frameworks, compilers, or runtimes, or are interested in improving the portability of workloads across hardware, we want your feedback. We encourage you to contribute code, design ideas, and feature suggestions. We also invite you to join the OpenXLA mailing list to stay updated on the latest product and community announcements and to help shape the future of an interoperable ML infrastructure.

Acknowledgements

Allen Hutchison, Andrew Leaver, Chuanhao Zhuge, Jack Cao, Jacques Pienaar, Jieying Luo, Penporn Koanantakool, Peter Hawkins, Robert Hundt, Russell Power, Sagarika Chalasani, Skye Wanderman-Milne, Stella Laurenzo, Will Cromar, Xiao Yu.

By Aman Verma, Product Manager, Machine Learning Infrastructure

Test your payments integrations end-to-end with the new test suite for Google Pay

Posted by Jose Ugia – Developer Relations Engineer

Testing is an integral part of software engineering, especially in the context of payments, where small glitches can have significant implications for your business. 

We previously introduced a set of test cards for you to use with the Google Pay API in TEST mode. These cards allow you to build simple test cases to verify that your Google Pay integration operates as expected. While this was a great start, a few predefined cards only let you run a limited number of happy-path test scenarios, confined to the domain of your applications.
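
As a reminder, TEST mode on Android is controlled by the environment you pass when creating your PaymentsClient. The snippet below is a minimal sketch of that setup:

    import android.app.Activity
    import com.google.android.gms.wallet.PaymentsClient
    import com.google.android.gms.wallet.Wallet
    import com.google.android.gms.wallet.WalletConstants

    // Create a PaymentsClient that targets Google Pay's TEST environment,
    // so the payment sheet serves test cards instead of real ones.
    fun createTestPaymentsClient(activity: Activity): PaymentsClient {
        val walletOptions = Wallet.WalletOptions.Builder()
            .setEnvironment(WalletConstants.ENVIRONMENT_TEST)
            .build()
        return Wallet.getPaymentsClient(activity, walletOptions)
    }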

Improved testing capabilities

Today, we are introducing PSP test cards, an upgrade to Google Pay’s test suite that lets you use test cards from your favorite payment processors to build end-to-end test scenarios, enabling additional testing strategies, both manual and automated.

Figure 1: Test cards from your payment processor appear in Google Pay’s payment sheet when using TEST mode.

When you select a card, the result is returned to your application via the API, so you can use it to validate comprehensive payment flows end to end, including relaying payment information to your backend to complete the order with your processor. These test cards let you verify the behavior of your application against diverse payment outcomes, including successful transactions as well as transactions that fail due to fraud, declines, insufficient funds, and more.
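
For illustration, the result handling you already use in production works unchanged with test cards. The sketch below shows a typical handler in Kotlin based on the Google Pay API’s AutoResolveHelper activity-result flow; the request code and sendTokenToBackend are hypothetical placeholders for your own integration.

    import android.app.Activity
    import android.content.Intent
    import android.util.Log
    import com.google.android.gms.wallet.AutoResolveHelper
    import com.google.android.gms.wallet.PaymentData
    import org.json.JSONObject

    const val LOAD_PAYMENT_DATA_REQUEST_CODE = 991  // arbitrary request code

    // Placeholder for relaying the token to your server and processor.
    fun sendTokenToBackend(token: String) { /* ... */ }

    // Call this from your Activity's onActivityResult for the request code above.
    fun handleGooglePayResult(resultCode: Int, data: Intent?) {
        when (resultCode) {
            Activity.RESULT_OK -> {
                val paymentData = data?.let { PaymentData.getFromIntent(it) } ?: return
                // A selected test card yields the same PaymentData structure as a
                // real card, so the token can be relayed to your processor as usual.
                val token = JSONObject(paymentData.toJson())
                    .getJSONObject("paymentMethodData")
                    .getJSONObject("tokenizationData")
                    .getString("token")
                sendTokenToBackend(token)
            }
            Activity.RESULT_CANCELED -> {
                // The user closed the payment sheet without completing the flow.
            }
            AutoResolveHelper.RESULT_ERROR -> {
                val status = AutoResolveHelper.getStatusFromIntent(data)
                Log.w("GooglePay", "loadPaymentData failed: ${status?.statusCode}")
            }
        }
    }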

Test automation

This upgrade also supports test automation, so you can write end-to-end UI tests using familiar tools like UIAutomator and Espresso on Android, and include them in your CI/CD flows to further strengthen your checkout experiences.
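
As an illustration, the sketch below combines Espresso, to tap the button in your own UI, with UIAutomator, to drive the Google Pay sheet, which is rendered by Google Play services outside your app’s view hierarchy. The activity, view ids, and test card label are hypothetical placeholders; match them to your own app and the test cards of your PSP.

    import androidx.test.core.app.ActivityScenario
    import androidx.test.espresso.Espresso.onView
    import androidx.test.espresso.action.ViewActions.click
    import androidx.test.espresso.assertion.ViewAssertions.matches
    import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
    import androidx.test.espresso.matcher.ViewMatchers.withId
    import androidx.test.ext.junit.runners.AndroidJUnit4
    import androidx.test.platform.app.InstrumentationRegistry
    import androidx.test.uiautomator.By
    import androidx.test.uiautomator.UiDevice
    import androidx.test.uiautomator.Until
    import org.junit.Test
    import org.junit.runner.RunWith

    @RunWith(AndroidJUnit4::class)
    class CheckoutFlowTest {

        private val device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())

        @Test
        fun payingWithApprovedTestCardShowsConfirmation() {
            ActivityScenario.launch(CheckoutActivity::class.java)

            // 1. Tap the Google Pay button in our own layout.
            onView(withId(R.id.google_pay_button)).perform(click())

            // 2. The payment sheet belongs to Google Play services, so we drive
            //    it with UIAutomator and pick a PSP test card by its label.
            device.wait(Until.findObject(By.textContains("Test card, approved")), 5_000)
                ?.click()

            // 3. Back in our app: the order confirmation screen should appear.
            onView(withId(R.id.order_confirmation)).check(matches(isDisplayed()))
        }
    }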

The new generation of Google Pay’s test suite is currently in beta, with web support coming later this year. You’ll be able to use test cards on Android for five of the most widely used PSPs: Stripe, Adyen, Braintree, WorldPay, and Checkout.com, and we’ll continue to add test cards from more PSPs over time.

Next steps

Improved testing capabilities have been one of the most frequent requests from the developer community. The Google Pay team is committed to providing you with the tools you need to harden your payment flows and improve your checkout performance.

Figure 2: With the upgraded test suite you can run end-to-end automated tests for successful and failed payment flows.

Take a look at the documentation to start enhancing your payments tests. Also, check out the sample test suite in the Google Pay demo open source application.

Google Workspace Updates Weekly Recap – May 5, 2023

New updates 

There are no new updates to share this week. Please see below for a recap of published announcements. 



Previous announcements

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.


Providing greater visibility with additional Google Calendar statuses in Google Chat
Chat statuses now include even more information, such as how much longer someone is in a meeting or focus time, whether someone has an upcoming commitment within the next 10 minutes, and whether someone has an upcoming out-of-office event within the next business day. | Learn more about additional Google Calendar statuses in Google Chat.


Expanding upon Gmail security with BIMI
Users will now see a checkmark icon for senders who adopted Brand Indicators for Message Identification (BIMI) in Gmail. This will help users identify messages from legitimate senders versus impersonators. | Learn more about Gmail security with BIMI.


Manage all spaces in your organization through the Admin console
We've introduced a new section in the Admin console dedicated to managing spaces in Google Chat. Here, admins can see a list of all spaces in their domain, the members of those spaces, and take actions, such as adding members or changing a member role. | Learn more about managing all spaces in your organization through the Admin console.


Quote a previous message in Google Chat
We've introduced a new feature that allows you to quote a previous message when sending a reply in a direct message, group message, or space (if configured with in-line threading). | Learn more about quoting a previous message in Google Chat.

Turn access to Bard on or off for your users
Earlier this year, we announced Bard, an early experiment by Google that lets you collaborate with generative AI. In the coming days, Google Workspace admins will be able to turn access to Bard on for their users, in the Admin console under Apps > Additional Google services > Early Access Apps. | Learn more about turning access to Bard on or off for your users.

Completed rollouts

The features below completed their rollouts to Rapid Release domains, Scheduled Release domains, or both. Please refer to the original blog post for additional details.


Rapid Release Domains:

Scheduled Release Domains: