Performer-MPC: Navigation via real-time, on-robot transformers

Despite decades of research, we don’t see many mobile robots roaming our homes, offices, and streets. Real-world robot navigation in human-centric environments remains an unsolved problem. These settings demand safe and efficient navigation through tight spaces: squeezing between coffee tables and couches, maneuvering around corners and through doorways, handling untidy rooms, and more. An equally critical requirement is to navigate in a manner that complies with unwritten social norms around people, for example, yielding at blind corners or staying at a comfortable distance. Google Research is committed to examining how advances in ML may enable us to overcome these obstacles.

In particular, Transformer models have achieved stunning advances across various data modalities in real-world machine learning (ML) problems. For example, multimodal architectures have enabled robots to leverage Transformer-based language models for high-level planning. Recent work that uses Transformers to encode robotic policies opens an exciting opportunity to use those architectures for real-world navigation. However, the on-robot deployment of massive Transformer-based controllers can be challenging due to the strict latency constraints of safety-critical mobile robots. The quadratic space and time complexity of the attention mechanism with respect to the input length is often prohibitively expensive, forcing researchers to trim Transformer stacks at the cost of expressiveness.

As part of our ongoing exploration of ML advances for robotic products, we partnered across Robotics at Google and Everyday Robots to present “Learning Model Predictive Controllers with Real-Time Attention for Real-World Navigation” at the Conference on Robot Learning (CoRL 2022). Here, we introduce Performer-MPC, an end-to-end learnable robotic system that combines (1) a JAX-based differentiable model predictive controller (MPC) that back-propagates gradients to its cost function parameters, (2) Transformer-based encodings of the context (e.g., occupancy grids for navigation tasks) that represent the MPC cost function and adapt the MPC to complex social scenarios without hand-coded rules, and (3) Performer architectures: scalable low-rank implicit-attention Transformers with linear space and time complexity attention modules for efficient on-robot deployment (providing 8ms on-robot latency). We demonstrate that Performer-MPC can generalize across different environments to help robots navigate tight spaces while demonstrating socially acceptable behaviors.


Performer-MPC

Performer-MPC aims to blend classic MPCs with ML via their learnable cost functions. Thus, a Performer-MPC can be thought of as an instance of inverse reinforcement learning, where the cost function is inferred by learning from expert demonstrations. Critically, the learnable component of the cost function is parameterized by latent embeddings produced by the Performer-Transformer. The linear-time inference provided by Performers is a gateway to real-time on-robot deployment.

In practice, the occupancy grid produced by fusing the robot's sensors serves as the input to the Vision Performer model. This model never explicitly materializes the attention matrix; instead, it leverages its low-rank decomposition for efficient linear computation of the attention module, resulting in scalable attention. The embedding of a particular fixed input-patch token from the model's last layer then parameterizes the quadratic, learnable part of the MPC's cost function, which is added to the regular hand-engineered cost (distance from obstacles, penalty terms for sudden velocity changes, etc.). The system is trained end-to-end via imitation learning to mimic expert demonstrations.
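To make the hybrid cost concrete, here is a minimal JAX sketch of the idea; the mapping from the embedding z to a quadratic term and the specific hand-engineered penalties are illustrative assumptions, not the paper's exact formulation:

import jax
import jax.numpy as jnp

def hybrid_mpc_cost(trajectory, obstacles, z):
    # trajectory: (H, d) planned states over the horizon.
    # obstacles:  (N, 2) obstacle positions from the occupancy grid.
    # z: latent embedding of the fixed input-patch token; assumed here to
    #    hold at least d*d + d entries that parameterize the learnable term.
    H, d = trajectory.shape
    A = z[: d * d].reshape(d, d)
    b = z[d * d : d * d + d]
    M = A @ A.T + 1e-3 * jnp.eye(d)  # positive definite by construction
    learned = jnp.sum(
        jnp.einsum("hi,ij,hj->h", trajectory, M, trajectory) + trajectory @ b
    )
    # Hand-engineered terms: obstacle clearance and smoothness penalties.
    dists = jnp.linalg.norm(trajectory[:, None, :2] - obstacles[None], axis=-1)
    clearance = jnp.sum(jnp.exp(-dists.min(axis=1)))
    smoothness = jnp.sum(jnp.diff(trajectory, axis=0) ** 2)
    return learned + clearance + smoothness

# The cost is differentiable in z, so imitation-loss gradients can flow
# through the MPC solution back to the Transformer that produced z:
grad_wrt_embedding = jax.grad(hybrid_mpc_cost, argnums=2)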

Performer-MPC overview. The final latent embedding of the patch highlighted in red is used to construct a context-dependent learnable cost. Backpropagation (red arrows) flows through the parameters of the Transformer. The Performer provides scalable attention computation via a low-rank approximate decomposition of the regular attention matrix (matrices Query’ and Key’) and by changing the order of matrix multiplications (indicated by the black brackets).
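For intuition about the bracket trick in the figure above, here is a minimal JAX sketch of linear attention with a ReLU feature map, in the spirit of the Performer-ReLU variant used in this work; the random-projection feature map and feature count are illustrative assumptions rather than the exact kernel construction:

import jax
import jax.numpy as jnp
from jax import random

def performer_relu_attention(q, k, v, num_features=64):
    # q, k, v: (L, d). Instead of materializing the L x L matrix
    # softmax(Q K^T), map queries and keys through a feature function
    # (random projection + ReLU here) and reorder the matrix products.
    L, d = q.shape
    proj = random.normal(random.PRNGKey(0), (d, num_features)) / jnp.sqrt(num_features)
    q_prime = jax.nn.relu(q @ proj)            # (L, r): the "Query'" matrix
    k_prime = jax.nn.relu(k @ proj)            # (L, r): the "Key'" matrix
    kv = k_prime.T @ v                         # (r, d), computed first
    normalizer = q_prime @ k_prime.sum(axis=0)[:, None]  # (L, 1)
    return (q_prime @ kv) / (normalizer + 1e-6)

Computing Key'^T V first keeps every intermediate at size L x r or r x d, so time and space scale linearly in the input length L rather than quadratically.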

Real-world robot navigation

Although, in principle, Performer-MPC can be applied in various robotic settings, we evaluate its performance on navigation in confined spaces with the potential presence of people. We deployed Performer-MPC on a differential wheeled robot that has a 3D LiDAR camera in the front and depth sensors mounted on its head. Our robot-deployable 8ms-latency Performer-MPC has 8.3M Performer parameters. A single Performer run takes about 1ms, and we use the fastest Performer-ReLU variant.

We compare Performer-MPC with two baselines: a regular MPC policy (RMPC) without the learned cost components, and an Explicit Policy (EP) that predicts a reference and goal state using the same Performer architecture, but without being coupled to the MPC structure. We evaluate Performer-MPC in simulation and in three real-world scenarios. For each scenario, the learned policies (EP and Performer-MPC) are trained with scenario-specific demonstrations.

Experiment scenarios: (a) learning to avoid local minima during doorway traversal, (b) maneuvering through highly constrained spaces, (c) enabling socially compliant behaviors at blind corners, and (d) handling pedestrian obstruction interactions.

Our policies are trained through behavior cloning with a few hours of human-controlled robot navigation data in the real world. For more data collection details, see the paper. We visualize the planning results of Performer-MPC (green) and RMPC (red) along with expert demonstrations (gray) in the top half, and the train and test curves in the bottom half, of the following two figures. To measure the distance between the robot trajectory and the expert trajectory, we use the Hausdorff distance.
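The Hausdorff distance is the worst-case nearest-neighbor gap between the two trajectories, taken in both directions; a minimal NumPy sketch over 2D waypoint arrays:

import numpy as np

def hausdorff_distance(A, B):
    # A: (n, 2) and B: (m, 2) waypoint arrays.
    D = np.linalg.norm(A[:, None] - B[None], axis=-1)  # (n, m) pairwise gaps
    # Directed distance A->B is the worst nearest-neighbor gap; symmetrize with max.
    return max(D.min(axis=1).max(), D.min(axis=0).max())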

Top: Visualization of test examples in the doorway traversal (left) and highly constrained obstacle course (right). Performer-MPC trajectories aiming at the goal are always closer to the expert demonstrations compared to the RMPC trajectories. Bottom: Train and test curves, where the vertical axis represents Hausdorff distance and horizontal axis represents training steps.
Top: Visualization of test examples in the blind corner (left) and pedestrian obstruction (right) scenarios. Performer-MPC trajectories aiming at the goal are always closer to the expert demonstrations compared to the RMPC trajectories. Bottom: Train and test curves, where the vertical axis represents Hausdorff distance and horizontal axis represents training steps.

Learning to avoid local minima

We evaluate Performer-MPC in a simulated doorway traversal scenario in which 100 start and goal pairs are randomly sampled from opposing sides of the wall. A planner, guided by a greedy cost function, often leads the robot to a local minimum (i.e., getting stuck at the closest point to the goal on the other side of the wall). Performer-MPC learns a cost function that steers the robot to pass the doorway, even if it must veer away from the goal and travel further. Performer-MPC shows a success rate of 86% compared to RMPC’s 24%.

Comparison of the Performer-MPC with Regular MPC on the doorway passing task.

Learning highly constrained maneuvers

Next, we test Performer-MPC in a challenging real-world scenario, where the robot must perform sharp, near-collision maneuvers in a cluttered home or office setting. A global planner provides coarse waypoints (a skeleton navigation path) that the robot follows. Each policy is run ten times, and we report the success rate (SR) and the average completion percentage (CP) with variance (VAR) of navigating the obstacle course, where success means the robot traverses without failure (collisions or getting stuck). Performer-MPC outperforms both RMPC and EP in SR and CP.

An obstacle course with policy trajectories and failure locations (indicated by crosses) for RMPC, EP, and Performer-MPC.
An Everyday Robots helper robot maneuvering through highly constrained spaces using Regular MPC, Explicit Policy, and Performer-MPC.

Learning to navigate in spaces with people

Going beyond static obstacles, we apply Performer-MPC to social robot navigation, where robots must navigate in a socially acceptable manner for which cost functions are difficult to design. We consider two scenarios: (1) blind corners, where robots should avoid the inner side of a hallway corner in case a person suddenly appears, and (2) pedestrian obstruction, where a person unexpectedly impedes the robot’s prescribed path.

Performer-MPC deployed on an Everyday Robots helper robot. Left: Regular MPC efficiently cuts blind corners, forcing the person to move back. Right: Performer-MPC avoids cutting blind corners, enabling safe and socially acceptable navigation around people.
Comparison with an Everyday Robots helper robot using Regular MPC, Explicit Policy, and Performer-MPC in unseen blind corners.
Comparison with an Everyday Robots helper robot using Regular MPC, Explicit Policy, and Performer-MPC in unseen pedestrian obstruction scenarios.

Conclusion

We introduce Performer-MPC, an end-to-end learnable robotic system that combines several mechanisms to enable real-world, robust, and adaptive robot navigation with real-time, on-robot Transformers. This work shows that scalable Transformer architectures play a critical role in designing expressive attention-based robotic controllers. We demonstrate that real-time millisecond-latency inference is feasible for policies leveraging Transformers with a few million parameters. Furthermore, we show that such policies enable robots to learn efficient and socially acceptable behaviors that can generalize well. We believe this opens an exciting new chapter in applying Transformers to real-world robotics and look forward to continuing our research with Everyday Robots helper robots.


Acknowledgements

Special thanks to Xuesu Xiao for co-leading this effort at Everyday Robots as a Visiting Researcher. This research was done by Xuesu Xiao, Tingnan Zhang, Krzysztof Choromanski, Edward Lee, Anthony Francis, Jake Varley, Stephen Tu, Sumeet Singh, Peng Xu, Fei Xia, Sven Mikael Persson, Dmitry Kalashnikov, Leila Takayama, Roy Frostig, Jie Tan, Carolina Parada and Vikas Sindhwani. Special thanks to Vincent Vanhoucke for his feedback on the manuscript.

Source: Google AI Blog


Distributed differential privacy for federated learning

Federated learning is a distributed way of training machine learning (ML) models where data is locally processed and only focused model updates and metrics that are intended for immediate aggregation are shared with a server that orchestrates training. This allows the training of models on locally available signals without exposing raw data to servers, increasing user privacy. In 2021, we announced that we are using federated learning to train Smart Text Selection models, an Android feature that helps users select and copy text easily by predicting what text they want to select and then automatically expanding the selection for them.

Since that launch, we have worked to improve the privacy guarantees of this technology by carefully combining secure aggregation (SecAgg) and a distributed version of differential privacy. In this post, we describe how we built and deployed the first federated learning system that provides formal privacy guarantees to all user data before it becomes visible to an honest-but-curious server, meaning a server that follows the protocol but could try to gain insights about users from data it receives. The Smart Text Selection models trained with this system have reduced memorization by more than two-fold, as measured by standard empirical testing methods.


Scaling secure aggregation

Data minimization is an important privacy principle behind federated learning. It refers to focused data collection, early aggregation, and minimal data retention required during training. While every device participating in a federated learning round computes a model update, the orchestrating server is only interested in their average. Therefore, in a world that optimizes for data minimization, the server would learn nothing about individual updates and only receive an aggregate model update. This is precisely what the SecAgg protocol achieves, under rigorous cryptographic guarantees.
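To see why the server can recover the sum and nothing else, here is a toy sketch of the additive-masking idea at the heart of SecAgg. The real protocol derives masks via key agreement, works over a finite field, and recovers from dropouts; none of that is modeled here:

import numpy as np

def pairwise_masks(n_clients, dim, rng):
    # Each pair (i, j) with i < j shares a random mask that client i adds
    # and client j subtracts, so all masks cancel in the sum.
    masks = np.zeros((n_clients, dim))
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = rng.normal(size=dim)
            masks[i] += m
            masks[j] -= m
    return masks

rng = np.random.default_rng(0)
updates = rng.normal(size=(5, 4))             # five clients' local updates
masked = updates + pairwise_masks(5, 4, rng)  # what the server receives
# Individual rows look random, but the sum equals the sum of true updates.
assert np.allclose(masked.sum(axis=0), updates.sum(axis=0))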

Important to this work, two recent advancements have improved the efficiency and scalability of SecAgg at Google:

  • An improved cryptographic protocol: Until recently, a significant bottleneck in SecAgg was client computation, as the work required on each device scaled linearly with the total number of clients (N) participating in the round. In the new protocol, client computation now scales logarithmically in N. This, along with similar gains in server costs, results in a protocol able to handle larger rounds. Having more users participate in each round improves privacy, both empirically and formally.
  • Optimized client orchestration: SecAgg is an interactive protocol, where participating devices progress together. An important feature of the protocol is that it is robust to some devices dropping out. If a client does not send a response in a predefined time window, then the protocol can continue without that client’s contribution. We have deployed statistical methods to effectively auto-tune such a time window in an adaptive way, resulting in improved protocol throughput.

The above improvements made it easier and faster to train Smart Text Selection with stronger data minimization guarantees.


Aggregating everything via secure aggregation

A typical federated training system not only involves aggregating model updates but also metrics that describe the performance of the local training. These are important for understanding model behavior and debugging potential training issues. In federated training for Smart Text Selection, all model updates and metrics are aggregated via SecAgg. This behavior is statically asserted using TensorFlow Federated, and locally enforced in Android’s Private Compute Core secure environment. As a result, this enhances privacy even more for users training Smart Text Selection, because unaggregated model updates and metrics are not visible to any part of the server infrastructure.


Differential privacy

SecAgg helps minimize data exposure, but it does not necessarily produce aggregates that guarantee against revealing anything unique to an individual. This is where differential privacy (DP) comes in. DP is a mathematical framework that sets a limit on an individual's influence on the outcome of a computation, such as the parameters of an ML model. This is accomplished by bounding the contribution of any individual user and adding noise during the training process to produce a probability distribution over output models. DP comes with a parameter (ε) that quantifies how much the distribution could change when adding or removing the training examples of any individual user (the smaller the better).
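Formally, a randomized training mechanism M satisfies ε-DP if, for any two datasets D and D′ that differ in the training examples of a single user, and for any set S of possible output models,

\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S].

A smaller ε forces the two output distributions to be nearly indistinguishable, so the trained model reveals less about any individual user's data.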

Recently, we announced a new method of federated training that enforces formal and meaningfully strong DP guarantees in a centralized manner, where a trusted server controls the training process. This protects against external attackers who may attempt to analyze the model. However, this approach still relies on trust in the central server. To provide even greater privacy protections, we have created a system that uses distributed differential privacy (DDP) to enforce DP in a distributed manner, integrated within the SecAgg protocol.


Distributed differential privacy

DDP is a technology that offers DP guarantees with respect to an honest-but-curious server coordinating training. It works by having each participating device clip and noise its update locally, and then aggregating these noisy clipped updates through the new SecAgg protocol described above. As a result, the server only sees the noisy sum of the clipped updates.
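A minimal sketch of that per-device step, with a continuous Gaussian standing in for the discrete mechanisms described below:

import numpy as np

def clip_and_noise(update, clip_norm, noise_stddev, rng):
    # Bound the update's L2 norm, then add noise locally; after SecAgg the
    # server only ever sees the noisy sum of such clipped updates.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(scale=noise_stddev, size=update.shape)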

However, the combination of local noise addition and use of SecAgg presents significant challenges in practice:

  • An improved discretization method: One challenge is properly representing model parameters as integers in SecAgg's finite group with integer modular arithmetic, which can inflate the norm of the discretized model and require more noise for the same privacy level. For example, randomized rounding to the nearest integers could inflate the user's contribution by a factor equal to the number of model parameters. We addressed this by scaling the model parameters, applying a random rotation, and rounding to the nearest integers (a sketch of this appears below the figure). We also developed an approach for auto-tuning the discretization scale during training. This led to an even more efficient and accurate integration between DP and SecAgg.
  • Optimized discrete noise addition: Another challenge is devising a scheme for choosing an arbitrary number of bits per model parameter without sacrificing end-to-end privacy guarantees, which depend on how the model updates are clipped and noised. To address this, we added integer noise in the discretized domain and analyzed the DP properties of sums of integer noise vectors using the distributed discrete Gaussian and distributed Skellam mechanisms.
An overview of federated learning with distributed differential privacy.
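As a rough sketch of the discretization idea referenced above: scale the clipped update, rotate it so no single coordinate dominates, then round stochastically so rounding is unbiased. The dense rotation below is an illustrative stand-in for the structured rotations (e.g., randomized Hadamard) used in practice, and the scale would be auto-tuned during training:

import numpy as np

def discretize_update(x, scale, rng):
    d = x.size
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)))  # random orthogonal rotation
    y = scale * (Q @ x)
    floor = np.floor(y)
    # Round up with probability equal to the fractional part, so the
    # expected value of the result is exactly y (unbiased rounding).
    return (floor + (rng.random(d) < (y - floor))).astype(np.int64)

After secure aggregation, the server inverts the scaling and rotation on the summed integer vector to recover the (noisy) aggregate update.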

We tested our DDP solution on a variety of benchmark datasets and in production and validated that we can match the accuracy of central DP with a SecAgg finite group of size 12 bits per model parameter. This meant that we were able to achieve added privacy advantages while also reducing memory and communication bandwidth. To demonstrate this, we applied this technology to train and launch Smart Text Selection models. This was done with an appropriate amount of noise chosen to maintain model quality. All Smart Text Selection models trained with federated learning now come with DDP guarantees that apply to both the model updates and metrics seen by the server during training. We have also open sourced the implementation in TensorFlow Federated.


Empirical privacy testing

While DDP adds formal privacy guarantees to Smart Text Selection, those formal guarantees are relatively weak (a finite but large ε, in the hundreds). However, any finite ε is an improvement over a model with no formal privacy guarantee for several reasons: 1) A finite ε moves the model into a regime where further privacy improvements can be quantified; and 2) even large ε’s can indicate a substantial decrease in the ability to reconstruct training data from the trained model. To get a more concrete understanding of the empirical privacy advantages, we performed thorough analyses by applying the Secret Sharer framework to Smart Text Selection models. Secret Sharer is a model auditing technique that can be used to measure the degree to which models unintentionally memorize their training data.
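At its core, Secret Sharer plants canary sequences in the training data and checks how strongly the trained model prefers them over similar sequences it never saw. A sketch of the rank metric, where nll is a hypothetical callable returning the model's negative log-likelihood for a sequence:

def secret_sharer_rank(nll, canary, references):
    # Rank of the planted canary among never-seen reference sequences,
    # ordered by model negative log-likelihood (lower = more likely).
    # Rank 1 means heavily memorized; a rank near len(references) + 1
    # means the canary looks no more likely than unseen text.
    return 1 + sum(nll(r) < nll(canary) for r in references)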

To perform Secret Sharer analyses for Smart Text Selection, we set up control experiments that collect gradients using SecAgg. The treatment experiments use distributed differential privacy aggregators with different amounts of noise.

We found that even low amounts of noise reduce memorization meaningfully, more than doubling the Secret Sharer rank metric for relevant canaries compared to the baseline. This means that even though the DP ε is large, we empirically verified that these amounts of noise already help reduce memorization for this model. However, to further improve on this and to get stronger formal guarantees, we aim to use even larger noise multipliers in the future.


Next steps

We developed and deployed the first federated learning and distributed differential privacy system that comes with formal DP guarantees with respect to an honest-but-curious server. While offering substantial additional protections, a fully malicious server might still be able to get around the DDP guarantees either by manipulating the public key exchange of SecAgg or by injecting a sufficient number of "fake" malicious clients that don’t add the prescribed noise into the aggregation pool. We are excited to address these challenges by continuing to strengthen the DP guarantee and its scope.


Acknowledgements

The authors would like to thank Adria Gascon for significant impact on the blog post itself, as well as the people who helped develop these ideas and bring them to practice: Ken Liu, Jakub Konečný, Brendan McMahan, Naman Agarwal, Thomas Steinke, Christopher Choquette, Adria Gascon, James Bell, Zheng Xu, Asela Gunawardana, Kallista Bonawitz, Mariana Raykova, Stanislav Chiknavaryan, Tancrède Lepoint, Shanshan Wu, Yu Xiao, Zachary Charles, Chunxiang Zheng, Daniel Ramage, Galen Andrew, Hugo Song, Chang Li, Sofia Neata, Ananda Theertha Suresh, Timon Van Overveldt, Zachary Garrett, Wennan Zhu, and Lukas Zilka. We’d also like to thank Tom Small for creating the animated figure.

Source: Google AI Blog


AY Creative drives big online impact for clients

Next up in our SMB series - AY Creative in Salt Lake City, a community-based digital media agency helping Utah businesses make the most of their online presence.


At AY Creative, we’re a digital media agency — but we’re also much more than that. At our core, we’re a community-based organization that isn’t afraid to get involved.


The vast majority of our customers are immigrant-owned businesses that don’t have the resources or time to spend on anything beyond day-to-day operational tasks. These clients are local, Salt Lake City-area businesses: restaurants, doctors, lawyers, nonprofits, markets, construction firms and contractors, and many others. Our job at AY Creative is to work with those small businesses and help create a digital presence and cohesive branding. Many of our clients don’t have much digital literacy to begin with, so we help them through a sort of ‘digital transformation.’


We help them do this in English and Spanish. When I first came to the States, language was a huge barrier for me. That’s part of the reason it’s so important to me to do the work that we do, and that every employee at AY Creative is bilingual. Each one of us is dedicated to helping immigrant-owned businesses thrive and learn new skills, including the importance of a digital presence. That also means that when there’s a need for help, we step up.


When businesses are really struggling, we sometimes put our resources and knowledge to use for low rates. Right now, we’re working with a couple of restaurants that have been struggling to recover from the effects of COVID. From photography of their dishes, to menu design, to building a new website, to social presence, to helping them integrate with online ordering (which is now an essential component of owning a restaurant) — we are there to walk them through a complete takeover of branding and digital marketing.



We do this work because it’s important to us to lift up the members of our community — and from a business standpoint, it’s actually been a great investment. A lot of those customers come back in more fruitful times because they remember we helped them when they needed it. Or they refer us to other businesses across the Salt Lake City area. To our team, there’s nothing better than that. 


Functionally speaking, the work we do wouldn’t be possible without the internet. We’re often downloading and uploading massive design and video files from and to the cloud, and that’s part of what makes having Google Fiber as our internet provider so important. 


From the time we signed up for Google Fiber, to the installation, to any time we’ve needed to reach out for assistance — the response time has been great and customer service has been top notch. For example, we were on a 250 Meg plan but I wanted to get us on 1 Gig. I reached out to our local rep one day and by the morning I came in the next day, our service had been fully upgraded. Our files were flying faster than ever.


The switch to Google Fiber also cut our previous internet bill in half. For us, that means a little more wiggle room to be able to help more struggling Salt Lake City businesses — and to AY Creative, that also means just about everything. 


Posted by Alfonso Ayala, Small Business Advisor, AY Creative



Beta Channel Update for ChromeOS / ChromeOS Flex

The Beta channel is being updated to OS version: 15329.37.0, Browser version: 111.0.5563.54 for most ChromeOS devices.

If you find new issues, please let us know. Interested in switching channels? Find out how.

Daniel Gagnon,
Google ChromeOS

Long Term Support Channel Update for ChromeOS

LTS-102 is being updated in the LTS channel to 102.0.5005.197 (Platform Version: 14695.187.0) for most ChromeOS devices. Want to know more about Long Term Support? Click here.


This update contains multiple security fixes, including:


1407701 High CVE-2023-0931 Use after free in Video

1353208 High CVE-2023-0128 Use after free in Overview Mode

High CVE-2022-4139 Linux kernel vulnerability advisory

High CVE-2022-4378 Linux kernel vulnerability advisory

High CVE-2022-45934 Linux kernel vulnerability advisory



Giuliana Pritchard 

Google Chrome OS

Google Keep notes now available on home screen of Android devices

What’s changing

In addition to dual-pane view, drag-out, and a number of other features supporting our mission to provide a top-class user experience with Google Keep on Android devices, we’re introducing the Keep single note widget.

With this new feature, you can “pin” a note or list to your home screen and edit it in the Keep app with a single tap. Lists let you toggle checkboxes directly on the note without having to open the Keep app. Your notes and lists will also reflect any background colors and reminders set in the Keep app. Finally, a collaborator icon will appear at the bottom of the note to indicate when it is shared between two or more people.

By giving you quick and easy access to your most important notes and lists on your home screen, we hope this feature increases your productivity while using the Keep app. 

Getting started 

Rollout pace 

Availability 

  • Available to all Google Workspace customers, as well as legacy G Suite Basic and Business customers 
  • Available to users with personal Google Accounts 

Resources

Dev Channel Update for Desktop

The dev channel has been updated to 112.0.5615.12 for Windows, Linux and Mac.


A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Srinivas Sista
Google Chrome

Google Dev Library Letters: 19th Edition

Posted by the Dev Library team

In this newsletter, we’re highlighting the best projects developed with Google technologies that have been contributed to the Google Dev Library platform. We hope this will spark some inspiration for your next project!


Contributions of the Month


[ML] Serving Stable Diffusion by Chansung Park

Learn the various ways to deploy Stable Diffusion with TensorFlow Serving, Hugging Face Endpoint, and FastAPI.


[ML] Textual inversion pipeline for Stable Diffusion by Chansung Park

Dive into this repository, which demonstrates how to manage multiple models and prototype applications of Stable Diffusion fine-tuned on new concepts via Textual Inversion.

Read more on DevLibrary 


[Flutter] Animated soccer rating hexagon by Prateek Sharma

Create a hexagon widget in Flutter that displays the ratings of a soccer player or team. Each of the six sides represents a different aspect of the player or team's rating, such as speed, strength, or accuracy.

Read more on DevLibrary 


Android & Kotlin


Mastering Kotlin Coroutines by Amit Shekhar

Dive into an introduction to coroutines in the Kotlin programming language. Coroutines are a way to write asynchronous and non-blocking code in a sequential and easy-to-understand manner.

Kotlin Symbol Processing (KSP) for code generation by Tim Lin

Discover more about the KSP API, which you can use to develop lightweight compiler plugins that give you access to complete source code information at compile time.

Form Conductor by Naing Aung Luu

Learn about Form Conductor. More than form validation, it provides a handful of reusable APIs for constructing a form in a few simple steps.

MovieDB by Gabriel Bronzatti Moro

Discover how this Android project fetches data from the Movie DB API, lets users search for movies and view details, and stores them in a local database.


Angular


A complete guide to Angular Multilingual Application by Hossein Mousavi

Dive into the technical aspects of building a multilingual Angular application, starting with the localization of the application's text.


Flutter


Bank cards UI by Ethiel Adiassa

See how Flutter can be used to create aesthetically pleasing and functional UI designs for banking applications.

macOS UI by Reuben Turner

Dive into this repo of resources for designers and developers looking for templates and tutorials to create beautiful macOS applications and interfaces.


Google Cloud


Search for Brazilian laws using Dialogflow CX and matching engine by Rubens Zimbres

Develop a chatbot using Dialogflow CX and a matching engine to help users search for something specific in legislation.

Awesome CloudOps automation by Doug Sillars

Learn how a single repository could satisfy all your day-to-day CloudOps automation needs.

Serverless Kubernetes on Google Cloud Platform by Gursimar Singh

Learn how serverless technologies like Cloud Run can be used to simplify and expedite the process of designing software applications.

Implement secure CI/CD with Workload Identity Federation, GitLab CI, and Cloud Deploy by Ezekias Bokove

See how to implement a secure Continuous Integration/Continuous Deployment (CI/CD) pipeline using Workload Identity Federation and GitLab CI.


Google Trust Services now offers TLS certificates for Google Domains customers



We’re excited to announce changes that make getting Google Trust Services TLS certificates easier for Google Domains customers. With this integration, all Google Domains customers can acquire public certificates for their websites at no additional cost, whether the site runs on a Google service or uses another provider. Additionally, Google Domains now offers an API for DNS-01 challenges against Google Domains DNS servers, so certificates can be issued and renewed automatically.



Like the existing Google Cloud integration, the Automatic Certificate Management Environment (ACME) protocol is used to enable seamless automatic lifecycle management of TLS certificates.



These certificates are issued by the same Certificate Authority (CA) Google uses for its own sites, so they are widely supported across the entire spectrum of devices used to access your services.



How do I use it?



Using ACME ensures your certificates are renewed automatically, and many hosting services already support ACME. If you're running your own web servers or services, there are ACME clients that integrate easily with common servers. To use this feature, you will need an API key called an External Account Binding (EAB) key, which associates your certificate requests with your Google Domains account. You can get one by visiting Google Domains and navigating to the Security page for your domain. There you’ll see a section for Google Trust Services where you can get your EAB key.



Example of EAB Credentials in Google Domains



As an example, with the popular Certbot ACME client, the configuration to register an account looks like:


certbot register --email <CONTACT_EMAIL> --no-eff-email --server "https://dv.acme-v02.api.pki.goog/directory" --eab-kid "<EAB_KEY_ID>" --eab-hmac-key "<EAB_HMAC_KEY>"




The EAB_KEY_ID and EAB_HMAC_KEY are both provided on your Google Domains security page.



After the account is created, you may issue certificates by running:

certbot certonly -d <domain.com> --server "https://dv.acme-v02.api.pki.goog/directory" --standalone



Then follow the prompts to complete validation and download your certificate. If you need additional information, please visit the Google Domains help center.



Google Domains and ACME DNS-01




ACME uses challenges to validate domain control before issuing certificates. The ACME DNS-01 challenge can be an efficient way for users to automate the validation process and integrate with existing websites and web hosting services.
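For reference, the DNS-01 challenge asks the client to publish a TXT record at _acme-challenge.<domain> whose value is derived from the challenge token and the account key (RFC 8555, section 8.4). A small Python sketch of that derivation:

import base64
import hashlib

def dns01_txt_value(token: str, jwk_thumbprint_b64url: str) -> str:
    # TXT record value: base64url(SHA-256(token "." base64url(JWK thumbprint)))
    key_authorization = f"{token}.{jwk_thumbprint_b64url}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

The CA then queries the TXT record; since only someone who controls the domain's DNS can publish it, a matching value proves domain control.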



Google Domains now provides an API for ACME DNS-01 challenges that helps users authenticate domain control quickly and securely. It is supported in some popular ACME clients, including Certbot (via a plugin), Caddy, Certify The Web, and Posh-ACME. You can find additional information on the Google Domains site.






Example of DNS API Access Token in Google Domains



To set up automatic certificate provisioning with ACME and DNS-01, follow these steps:



  1. Sign in to Google Domains.
  2. Select the domain that you want to use.
  3. At the top left, click “Menu” and select “Security”.
  4. Under section “ACME DNS API”, click “Create token”.
  5. A dialog box will appear with an “API Token”. This is the token you will need to enter into your ACME client; copy it by clicking the copy button next to the API Token.
    • NOTE: This value is only shown once. After the dialog box is closed, you will not be able to see this API token again. Store it in a safe place, since anyone who has it can modify some DNS TXT records for your domain.
    • If you did not save this value before closing the dialog box, you can simply delete the token and create a new one.
    • At most 10 API tokens can exist per domain at a time.
  6. Once the dialog box is closed, you will see in the list that the token has been created. You can delete it at any time to revoke its access.
  7. The API token can now be used in an ACME client that supports the Google Domains ACME DNS API. Each client differs slightly in how the token is specified, so consult the documentation for your chosen ACME client.




Regardless of which ACME client you use, Google Domains and Google Trust Services are excited to offer a reliable option for no-cost TLS certificates. This continues the mission of helping build a safer internet by providing a transparent, trusted, and reliable Certificate Authority.