Form Factors at Android Developer Summit ’22

Posted by Alex Vanyo, Developer Relations Engineer

The Android Developer Summit is live with the second stop on our world tour, and we are thrilled to give you the latest updates on Android form factors! Discover the latest tools, APIs, and guidance that make it easier to build apps that look great on large screens, wearables, and TVs. Here are the three things you need to know about form factors at ADS, and check out the full YouTube playlist here:

#1: Android developers are finding BIG success when optimizing their apps for large screens

The large screen category is growing, with over 270 million active large screen Android devices and an expanding portfolio of tablets, desktops, and foldables to choose from. That’s why there has never been a better time to make sure your app looks great across all screen sizes and postures. To learn practical tips for optimizing your app for large screens, check out the Do’s and Don’ts: Mindset for optimizing apps for larger screens session. Throughout the session, the Android team highlights design guidance, app quality, and additional tips for large screens on everything from reachability to canonical layouts. New Android Studio tools like emulators and reference devices make it easier to build and test. In-depth guides help you improve your app by optimizing layouts, avoiding camera issues, and enhancing support for peripherals like mouse, keyboard, and stylus.

Large screens enable users to see more, do more, and experience more. With large screen sizes, there are ever-expanding opportunities to excite and delight your users with differentiated app experiences. That’s why we launched our new large screens gallery page during the Android Dev Summit kickoff, with general design tips, verticalized use cases, and implementation ideas.

#2: It’s easier than ever to develop for Wear OS

Compose for Wear OS is stable, bringing the modern UI toolkit to the wrist and making it simpler than ever to build exceptional Wear OS apps. This toolkit is designed to help you get your app up and running faster than before; Outdooractive adopted Compose for Wear OS and enhanced their wearable experience with 30% fewer development hours. Equally important as development time is the user experience you are able to provide. Todoist rebuilt their app using Compose for Wear OS, saw their growth rate on Google Play increase by 50%, and heard positive feedback internally and on their social media channels. To begin developing with Compose for Wear OS, get started on our curated learning pathway for a step-by-step learning journey. There you can find documentation, including a quick start guide, and get hands-on experience with the Compose for Wear OS codelab! For a first taste of the toolkit, see the minimal sketch below.

Outdooractive cut development time by an estimated 30% with Compose for Wear OS
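
To give a flavor of the toolkit, here is a minimal sketch of a Wear OS composable: a scaling list, the Wear-specific analog of a scrolling column. It assumes the androidx.wear.compose.material dependency; package locations can vary between library versions, so treat it as illustrative rather than canonical.

import androidx.compose.runtime.Composable
import androidx.wear.compose.material.MaterialTheme
import androidx.wear.compose.material.ScalingLazyColumn
import androidx.wear.compose.material.Text

@Composable
fun WearItemList() {
    MaterialTheme {
        // ScalingLazyColumn scales and fades items toward the top and
        // bottom of the screen, suiting round watch displays.
        ScalingLazyColumn {
            items(5) { index ->
                Text("Item ${index + 1}")
            }
        }
    }
}
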
The Android Developer Summit technical sessions dive deep into the content you need to build Wear OS apps, with guidance on app architecture, testing, handling rotary input, and verticalized sessions for media and fitness. We have seen the impact that Health Services has had on developing health and fitness apps for the wrist, and how powerful this can be when extended with Health Connect on mobile. Using Google APIs and tools, Strava improved user engagement and retention: Wear OS users on Strava log 30% more active days than users without a wearable device. For more information on how to start building apps for Wear OS, check out the developer site.

#3: Find tips and tricks for developing a great Android TV app

Finally, for Android TV we have collected tips for building amazing living room user experiences, including some new platform features in Android 12 and 13. TV is an important part of the Android ecosystem: US households watch 25+ hours of content each week. Plus, there are now over 110 million monthly active Android TV OS devices. There is a ton to learn about how you can tap into this audience in our Improving the TV User Experience technical session, including an update on Compose, a look at how App Bundles relate to TV, and guidance and best practices around energy savings and user preferences.

Those were the top three announcements about Form Factors at Android Developer Summit. Want to learn more? Check out the full form factors playlist on YouTube!

What’s next for Android Dev Summit ’22? The Platform track, on November 14

This was the second stop on the Android Dev Summit ’22 tour. Last month, we kicked things off with the keynote as well as our first track, on Modern Android Development. After today’s second track on Form Factors, there’s more to come in our third and final track, on the Platform, which will be broadcast live on YouTube on November 14. We can’t wait to see you again next week!

“Reach” Your Users on Large Screens

Posted by Diana Wong, Product Manager, Android

Large screen devices like foldables and tablets mean your users have more screen to interact with. But they can also make it more difficult for those users to reach certain parts of that screen. Reachability, or which parts of the screen users can comfortably reach without stretching or adjusting their grip, is an important factor in user experience and accessibility, and can help you decide where to place your app’s UI elements.

UI Elements on Large Screens

Large screens, such as tablets and foldables, are not always held and engaged with in the same way as smaller devices like phones. In the image below, you can see an example of how easily users can reach each area of a tablet with a width greater than nine inches.



The green area is easy for the majority of users to reach; the yellow and orange areas are reachable only for some users; and the red area is the most difficult to reach. Within the red area, a user may need to adjust their grip or stretch to reach UI elements. It is important to consider how reachable each of your UI elements is in order to provide your users with the best possible experience.

Reachability isn’t “one size fits all”

Reachability can be impacted by a number of factors. First, device size can change which areas are reachable: on larger devices, it is more difficult for users to reach the center of the screen. Another factor impacting reachability is the task a user is executing, as users may hold their device in different ways for tasks like taking a photo versus using the keyboard. Hand size, measured from the base of the wrist to the tip of the middle finger, can also affect how much of the device a user can reach. For example, take a look at the hand size data below: for tablets with a diagonal size greater than nine inches, users with hands larger than the US average can reach significantly more of the screen than users with hands smaller than average.
Hand size data showing differences in reachability between users with large hands and users with small hands
Additionally, how users hold their device changes depending on device orientation. As shown in the images below, whether a device is used in portrait or landscape mode affects the areas a user can comfortably reach.
Hand size data showing differences in reachability between users who hold their devices in landscape mode versus those who hold their devices in portrait mode

Finally, mostly due to screen size, foldable devices show slightly different reachability patterns. Because they often have smaller screens than tablets, it is easier to reach the center of the device. However, the general pattern holds: when unfolded, the average user cannot reach the top 25% of the screen on a foldable device.

The DOs and DON’Ts of Large Screen Reachability

Reachability may vary by user, but there are some guidelines that can improve your users’ large screen app experience. We have found that placing UI elements in the corners can be less than optimal: UI elements that are too close to the edges are more likely to conflict with the user’s grip. Additionally, our reachability data shows that elements too close to the corners or edges of the device can be more difficult to reach, especially when a user is holding the device with both hands.

Now that you’ve learned all about reachability and the factors that impact it, here’s what you need to remember when building or updating an app for large screens:

DO: Limit interactions on the top 25% of the screen

The upper quarter of the screen can be hard to reach without changing one's grip.

DON’T: Place critical and frequently used elements close to the screen’s bottom edge and corners

Placing essential interactive elements too close to the bottom edge of the screen makes them more difficult for some users, particularly those with larger hands, to reach.
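
To make these two rules concrete, here is a minimal Jetpack Compose sketch that keeps the primary action out of the top quarter of the screen while padding it away from the very bottom edge and corners. The composable name and spacing values are illustrative assumptions, not official guidance code.

import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun ReachablePrimaryAction(onClick: () -> Unit) {
    Box(modifier = Modifier.fillMaxSize()) {
        // Anchor the primary action toward the bottom center, padded away
        // from the bottom edge and corners that are harder to reach.
        Button(
            onClick = onClick,
            modifier = Modifier
                .align(Alignment.BottomCenter)
                .padding(bottom = 48.dp)
        ) {
            Text("Primary action")
        }
    }
}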

You can learn more about designing your app for large screens in our new gallery page or by checking out the Material Design guidance for large screens and foldables.

New feature to help people navigate the energy crisis in Europe

Europe is gearing up for a challenging winter. Rising prices and pressure on the European grid, driven by Russia’s illegal war in Ukraine, are intensifying the need for secure, reliable, sustainable and affordable energy sources.

We know that energy security and affordability are top of mind for many across Europe right now. People are turning to Google to ask questions about conserving energy and managing their costs. In the UK, a year ago just one in ten searches on the topic of energy prices was a ‘why’, ‘how’ or ‘when’ question; now it’s one in four. In Germany, we’ve seen search interest trending for queries like ‘how to save natural gas’, ‘heating cost’ and ‘how to save energy’, while in Belgium, searches for ‘how to save on gas’ are up more than 5,000% since this time last year.

In times of uncertainty, people turn to Google for help and information. As people look for new ways to stay on top of their energy consumption and keep costs manageable, we’re launching a new feature in 29 countries and 22 languages across Europe to enable people to find relevant and actionable information to help them navigate this crisis and save energy.

Animation showing information about the energy crisis on Google Search

Starting today, when people search for information on the energy landscape in Europe, they'll see dedicated features with helpful and reliable information. When you search for things like ‘Europe energy crisis’ and ‘energy price’, you'll see news articles, local information including financial assistance that may be available, and recommended actions from the International Energy Agency to help conserve energy.

Search results showing locally relevant information on energy conservation

Whether it’s turning down the heat or adjusting the settings of your boiler, you will be able to see, at a glance, information about saving energy in your home. These information panels will surface alongside other relevant results from the open web.

The launch of the energy crisis feature is a further addition to products and tools we have already launched in Europe to help people learn more about accessing energy affordably, reliably, and efficiently. For example, earlier this year we launched updates to Google Maps that help you find more fuel-efficient routes to reduce emissions and costs when you need to drive.

Technology can contribute to addressing the challenges facing Europe today. We remain committed to connecting people with timely, relevant, and actionable information when they need it most.

Power your Wear OS fitness app with the latest version of Health Services

Posted by Breana Tate, Developer Relations Engineer

The Health Services API enables developers to use on-device sensor data and related algorithms to provide their apps with high-quality data related to activity, exercise, and health. What’s more, you don’t have to choose between conserving battery life and delivering high-frequency data: Health Services makes it possible to do both. Since announcing Health Services Alpha at I/O ’21, we’ve introduced a number of improvements to the platform aimed at simplifying the development experience. Read on to learn about the exciting features from Health Services Beta in Android Jetpack that your app will be able to take advantage of when you migrate from Alpha.


Capture more with new metrics

The Health Services Jetpack Beta introduces new data and exercise types, including DataType.GOLF_SHOT_COUNT, ExerciseType.HORSE_RIDING, and ExerciseType.BACKPACKING. You can review the full list of new exercise and data types here. These supplement the already large library of data and exercise types available to developers building Wear OS apps with Health Services. Additionally, we’ve added the ability to listen for health events, such as fall detection, through PassiveMonitoringClient.
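
For instance, listening for fall detection might look like the following sketch. It is written against our reading of the Beta API surface, and the exact builder and callback signatures may differ slightly between versions, so treat it as illustrative.

import android.content.Context
import androidx.health.services.client.HealthServices
import androidx.health.services.client.PassiveListenerCallback
import androidx.health.services.client.data.HealthEvent
import androidx.health.services.client.data.PassiveListenerConfig

fun listenForFallEvents(context: Context) {
    val passiveMonitoringClient =
        HealthServices.getClient(context).passiveMonitoringClient

    // Ask Health Services to deliver fall-detection health events.
    val config = PassiveListenerConfig.builder()
        .setHealthEventTypes(setOf(HealthEvent.Type.FALL_DETECTED))
        .build()

    passiveMonitoringClient.setPassiveListenerCallback(
        config,
        object : PassiveListenerCallback {
            override fun onHealthEventReceived(event: HealthEvent) {
                // Respond to the detected fall, e.g. check in with the user.
            }
        }
    )
}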

In addition to new data types, we’ve also introduced a new organization model for data in Health Services. This new model makes the Health Services API more type-safe by adding additional classification information to data types and data points, reducing the chance of errors in code. In Beta, all DataPoint types have their own subclass and are derived from the DataPoint class. You can choose from:

  • SampleDataPoints 
  • IntervalDataPoints 
  • StatisticalDataPoints
  • CumulativeDataPoints

DataTypes are categorized as AggregateDataTypes or DeltaDataTypes.

As a result of this change, Health Services can guarantee the correct type at compile time instead of at runtime, reducing errors and improving the developer experience. For example, location data points are now represented as a strongly-typed LocationData object instead of as a DoubleArray. Take a look at the example below:

Previously:

exerciseUpdate.latestMetrics[DataType.LOCATION]?.forEach {
  val loc = it.value.asDoubleArray()

  val lat = loc[DataPoints.LOCATION_DATA_POINT_LATITUDE_INDEX]
  val lon = loc[DataPoints.LOCATION_DATA_POINT_LONGITUDE_INDEX]
  val alt = loc[DataPoints.LOCATION_DATA_POINT_ALTITUDE_INDEX]

  println("($lat,$lon,$alt) @ ${it.startDurationFromBoot}")
}

Health Services Beta:

exerciseUpdate.latestMetrics.getData(DataType.LOCATION).forEach {
  // it.value is of type LocationData
  val loc = it.value
  val time = it.timeDurationFromBoot
  println("loc = [${loc.latitude}, ${loc.longitude}, ${loc.altitude}] @ $time")
}

As you can see, due to the new approach, Health Services knows that loc is of type List<SampleDataPoint<LocationData>> because DataType.LOCATION is defined as a DeltaDataType<LocationData, SampleDataPoint<LocationData>>.


Consolidated exercise end state

ExerciseState is now included within ExerciseUpdate’s ExerciseStateInfo property. To give you more control over how your app responds to an ending exercise, we’ve added new ExerciseStates called ExerciseState.ENDED and ExerciseState.ENDING to replace what was previously multiple variations of ended and ending states. These new states also include an endReason, such as USER_END, AUTO_END_PREPARE_EXPIRED, and AUTO_END_PERMISSION_LOST.

The following example shows how to check for exercise termination:

val callback = object : ExerciseUpdateCallback {
    override fun onExerciseUpdateReceived(update: ExerciseUpdate) {
        if (update.exerciseStateInfo.state.isEnded) {
            // Workout has either been ended by the user, or otherwise terminated
            val reason = update.exerciseStateInfo.endReason
        }
        ...
    }
    ...
}


Improvements to passive monitoring

Health Services Beta also transitions to a new set of passive listener APIs. These changes largely focus on making daily metrics better typed and easier to integrate. For example, we renamed the PassiveListenerConfig function setPassiveGoals to setDailyGoals. This change reinforces that Health Services only supports daily passive goals. We’ve also condensed multiple APIs for registering passive listeners into a single registration call: clients can directly implement the desired overrides for only the data your app needs, as in the sketches below.
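
As an illustration, a daily step goal might be configured as follows. The constructor and builder calls here reflect our reading of the Beta API surface and may differ by version, so treat them as assumptions.

import androidx.health.services.client.data.ComparisonType
import androidx.health.services.client.data.DataType
import androidx.health.services.client.data.DataTypeCondition
import androidx.health.services.client.data.PassiveGoal
import androidx.health.services.client.data.PassiveListenerConfig

// A hypothetical goal: trigger once the user passes 10,000 daily steps.
val dailyStepsGoal = PassiveGoal(
    DataTypeCondition(DataType.STEPS_DAILY, 10_000L, ComparisonType.GREATER_THAN_OR_EQUAL)
)

val passiveListenerConfig = PassiveListenerConfig.builder()
    .setDailyGoals(setOf(dailyStepsGoal))
    .build()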

Additionally, the Passive Listener BroadcastReceiver was replaced by the PassiveListenerService, which offers stronger typing, along with better reliability and performance. Clients can now register both a service and a callback simultaneously with different requests, making it easier to register a callback for UI updates while reserving the background request for database updates.
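
A background registration might then look something like this sketch, again subject to version differences in the Beta API.

import androidx.health.services.client.PassiveListenerService
import androidx.health.services.client.data.DataPointContainer
import androidx.health.services.client.data.DataType

// Receives passive data even while the app's UI is not running.
class DailyStepsListenerService : PassiveListenerService() {
    override fun onNewDataPointsReceived(dataPoints: DataPointContainer) {
        val steps = dataPoints.getData(DataType.STEPS_DAILY)
        // Persist the latest daily step counts to your database here.
    }
}

// Elsewhere, register the service with the config shown above:
// passiveMonitoringClient.setPassiveListenerService(
//     DailyStepsListenerService::class.java,
//     passiveListenerConfig
// )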


Build for even more devices on Wear OS 3

Health Services is only available for Wear OS 3. The Wear OS 3 ecosystem now includes even more devices, which means your apps can reach even more users. Montblanc, Samsung, and Fossil are just a few of the OEMs that have recently released new devices running Wear OS 3 (with more coming later this year!). The newly released Pixel Watch also features Fitbit health tracking powered by Health Services.

If you haven’t used Health Services before, now is the time to try it out! And if your app is still using Health Services Alpha, here is why you should consider migrating:

  • Ongoing development: since Health Services Beta is the newest version, bug fixes and feature improvements are prioritized there over older versions.
  • Prepares your app infrastructure for the stable release of Health Services.
  • Improves type safety, meaning less chance of errors in code.
  • Adds functionality that makes it easier to work with Health Services data.

You can view the full list of changes and updated documentation at developer.android.com.


Dev Channel Update for ChromeOS

The Dev channel is being updated to 109.0.5399.0 (Platform version: 15231.0.0) for most ChromeOS devices. This build contains a number of bug fixes and security updates.

If you find new issues, please let us know in one of the following ways.

Interested in switching channels? Find out how.


Matt Nelson,
Google ChromeOS  

Chrome for Android Update

Hi, everyone! We've just released Chrome 107 (107.0.5304.105) for Android: it'll become available on Google Play over the next few days.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Android releases contain the same security fixes as their corresponding Desktop release (Windows: 107.0.5304.106/.107, Mac & Linux: 107.0.5304.110), unless otherwise noted.


Krishna Govind
Google Chrome

Stable Channel Update for Desktop

The Stable channel has been updated to 107.0.5304.110 for Mac and Linux and 107.0.5304.106/.107 for Windows, which will roll out over the coming days/weeks. A full list of changes in this build is available in the log.


 Security Fixes and Rewards

Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.


This update includes 10 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.


[$21000][1377816] High CVE-2022-3885: Use after free in V8. Reported by gzobqq@ on 2022-10-24

[$10000][1372999] High CVE-2022-3886: Use after free in Speech Recognition. Reported by anonymous on 2022-10-10

[$7000][1372695] High CVE-2022-3887: Use after free in Web Workers. Reported by anonymous on 2022-10-08

[$7000][1375059] High CVE-2022-3888: Use after free in WebCodecs. Reported by Peter Nemeth on 2022-10-16

[$TBD][1380063] High CVE-2022-3889: Type Confusion in V8. Reported by anonymous on 2022-11-01

[$TBD][1380083] High CVE-2022-3890: Heap buffer overflow in Crashpad. Reported by anonymous on 2022-11-01


We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.

As usual, our ongoing internal security work was responsible for a wide range of fixes:

  • [1382280] Various fixes from internal audits, fuzzing and other initiatives


Many of our security bugs are detected using AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, or AFL.


Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Prudhvikumar Bommana
Google Chrome

Trust rules in Google Drive now generally available

What’s changing 

In July 2022, we announced an open beta for trust rules in Google Drive. Beginning today, this feature is generally available for eligible Google Workspace customers. 

Trust rules give admins more control over how files can be shared, both within and outside of their organization. For example, admins can limit what specific departments can access versus other parts of their organization. See our original announcement for more information. 



Getting started 

  • Admins: Eligible admins can enable this feature in the Admin console by going to Rules > Turn on trust rules. Visit the Help Center to learn more about trust rules.


  • End users: Your Admin’s trust rules will determine who you can share and collaborate with on Drive files.

Rollout pace 


Availability 

  • Available to Google Workspace Enterprise Plus, Enterprise Standard, Education Plus, and Education Standard Customers 
  • Not available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Education Fundamentals, Frontline, and Nonprofits, as well as G Suite Basic and Business customers 

Resources 

ReAct: Synergizing Reasoning and Acting in Language Models

Recent advances have expanded the applicability of language models (LMs) to downstream tasks. On one hand, properly prompted language models can, via chain-of-thought prompting, carry out self-conditioned reasoning traces to derive answers from questions, excelling at various arithmetic, commonsense, and symbolic reasoning tasks. However, with chain-of-thought prompting, a model is not grounded in the external world and relies only on its own internal representations to generate reasoning traces, limiting its ability to reactively explore, reason, or update its knowledge. On the other hand, recent work uses pre-trained language models for planning and acting in various interactive environments (e.g., text games, web navigation, embodied tasks, robotics), with a focus on mapping text contexts to text actions via the language model’s internal knowledge. However, these approaches do not reason abstractly about high-level goals or maintain a working memory to support acting over long horizons.

In “ReAct: Synergizing Reasoning and Acting in Language Models”, we propose a general paradigm that combines reasoning and acting advances to enable language models to solve various language reasoning and decision-making tasks. We demonstrate that the Reason+Act (ReAct) paradigm systematically outperforms reasoning-only and acting-only paradigms, both when prompting larger language models and when fine-tuning smaller language models. The tight integration of reasoning and acting also yields human-aligned task-solving trajectories that improve interpretability, diagnosability, and controllability.


Model Overview

ReAct enables language models to generate both verbal reasoning traces and text actions in an interleaved manner. While actions lead to observation feedback from an external environment (“Env” in the figure below), reasoning traces do not affect the external environment. Instead, they affect the internal state of the model by reasoning over the context and updating it with useful information to support future reasoning and acting.

Previous methods prompt language models (LM) to either generate self-conditioned reasoning traces or task-specific actions. We propose ReAct, a new paradigm that combines reasoning and acting advances in language models.

ReAct Prompting

We focus on the setup where a frozen language model, PaLM-540B, is prompted with few-shot in-context examples to generate both domain-specific actions (e.g., “search” in question answering, and “go to” in room navigation), and free-form language reasoning traces (e.g., “Now I need to find a cup, and put it on the table”) for task solving.

For tasks where reasoning is of primary importance, we alternate the generation of reasoning traces and actions so that the task-solving trajectory consists of multiple reasoning-action-observation steps. In contrast, for decision making tasks that potentially involve a large number of actions, reasoning traces only need to appear sparsely in the most relevant positions of a trajectory, so we write prompts with sparse reasoning and let the language model decide the asynchronous occurrence of reasoning traces and actions for itself.
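
To make the interleaving concrete, below is a minimal sketch of the reason-act-observe loop in Kotlin. The LanguageModel and Environment interfaces, the prompt strings, and the finish[...] action format are illustrative assumptions standing in for a prompted model and a task environment, not the paper’s actual implementation.

// Hypothetical interfaces for a prompted language model and an
// external environment (e.g., a Wikipedia search API).
interface LanguageModel { fun complete(prompt: String): String }
interface Environment { fun step(action: String): String }

fun reactLoop(llm: LanguageModel, env: Environment, task: String, maxSteps: Int = 10): String {
    var context = task
    repeat(maxSteps) {
        // The model first emits a free-form reasoning trace ("thought"),
        // which updates the context but not the environment.
        val thought = llm.complete("$context\nThought:")
        context += "\nThought: $thought"

        // It then emits a domain-specific action, e.g. search[query],
        // or finish[answer] to terminate with an answer.
        val action = llm.complete("$context\nAction:")
        if (action.startsWith("finish[")) {
            return action.removeSurrounding("finish[", "]")
        }

        // Actions yield observations from the environment, which are
        // appended to the context to ground future reasoning.
        val observation = env.step(action)
        context += "\nAction: $action\nObservation: $observation"
    }
    return context
}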

As shown below, there are various types of useful reasoning traces, e.g., decomposing task goals to create action plans, injecting commonsense knowledge relevant to task solving, extracting important parts from observations, tracking task progress while maintaining plan execution, handling exceptions by adjusting action plans, and so on.

The synergy between reasoning and acting allows the model to perform dynamic reasoning to create, maintain, and adjust high-level plans for acting (reason to act), while also interacting with the external environments (e.g., Wikipedia) to incorporate additional information into reasoning (act to reason).


ReAct Fine-tuning

We also explore fine-tuning smaller language models using ReAct-format trajectories. To reduce the need for large-scale human annotation, we use the ReAct prompted PaLM-540B model to generate trajectories, and use trajectories with task success to fine-tune smaller language models (PaLM-8/62B).

Comparison of four prompting methods, (a) Standard, (b) Chain of thought (CoT, Reason Only), (c) Act-only, and (d) ReAct, solving a HotpotQA question. In-context examples are omitted, and only the task trajectory is shown. ReAct is able to retrieve information to support reasoning, while also using reasoning to target what to retrieve next, demonstrating a synergy of reasoning and acting.

Results

We conduct empirical evaluations of ReAct and state-of-the-art baselines across four different benchmarks: question answering (HotPotQA), fact verification (Fever), text-based game (ALFWorld), and web page navigation (WebShop). For HotPotQA and Fever, with access to a Wikipedia API with which the model can interact, ReAct outperforms vanilla action generation models while being competitive with chain of thought reasoning (CoT) performance. The approach with the best results is a combination of ReAct and CoT that uses both internal knowledge and externally obtained information during reasoning.


Method                     HotpotQA (exact match, 6-shot)    FEVER (accuracy, 3-shot)
Standard                   28.7                              57.1
Reason-only (CoT)          29.4                              56.3
Act-only                   25.7                              58.9
ReAct                      27.4                              60.9
Best ReAct + CoT method    35.1                              64.6
Supervised SoTA            67.5 (using ~140k samples)        89.5 (using ~90k samples)

PaLM-540B prompting results on HotpotQA and Fever.

On ALFWorld and WebShop, ReAct prompting with only two-shot and one-shot examples, respectively, outperforms imitation and reinforcement learning methods trained with ~10⁵ task instances, achieving absolute improvements of 34% and 10% in success rate over existing baselines.


Method                          AlfWorld (2-shot)           WebShop (1-shot)
Act-only                        45                          30.1
ReAct                           71                          40
Imitation learning baselines    37 (using ~100k samples)    29.1 (using ~90k samples)

PaLM-540B prompting task success rate results on AlfWorld and WebShop.
Scaling results for prompting and fine-tuning on HotPotQA with ReAct and different baselines. ReAct consistently achieves best fine-tuning performances.
A comparison of the ReAct (top) and CoT (bottom) reasoning trajectories on an example from Fever (observation for ReAct is omitted to reduce space). In this case ReAct provided the right answer, and it can be seen that the reasoning trajectory of ReAct is more grounded on facts and knowledge, in contrast to CoT’s hallucination behavior.

We also explore human-in-the-loop interactions with ReAct by allowing a human inspector to edit ReAct’s reasoning traces. We demonstrate that by simply replacing a hallucinating sentence with inspector hints, ReAct can change its behavior to align with inspector edits and successfully complete a task. Solving tasks becomes significantly easier when using ReAct as it only requires the manual editing of a few thoughts, which enables new forms of human-machine collaboration.

A human-in-the-loop behavior correction example with ReAct on AlfWorld. (a) ReAct trajectory fails due to a hallucinating reasoning trace (Act 17). (b) A human inspector edits two reasoning traces (Act 17, 23), ReAct then produces desirable reasoning traces and actions to complete the task.

Conclusion

We present ReAct, a simple yet effective method for synergizing reasoning and acting in language models. Through various experiments that focus on multi-hop question-answering, fact checking, and interactive decision-making tasks, we show that ReAct leads to superior performance with interpretable decision traces.

ReAct demonstrates the feasibility of jointly modeling thoughts, actions, and feedback from the environment within a language model, making it a versatile agent capable of solving tasks that require interaction with the environment. We plan to further extend this line of research and leverage the strong potential of language models for tackling broader embodied tasks, via approaches like massive multitask training and coupling ReAct with equally strong reward models.


Acknowledgements

We would like to thank Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran and Karthik Narasimhan for their great contributions to this work. We would also like to thank Google’s Brain team and the Princeton NLP Group for their joint support and feedback, including project scoping, advising and insightful discussions.

Source: Google AI Blog


The new Gmail user interface is becoming the standard experience

 What's changing

At the beginning of 2022, we announced a new user interface and a customizable, integrated view for Gmail, bringing critical applications like Gmail, Chat, and Meet together in one unified location. 


Starting this month, this user interface will become the standard experience for Gmail, with no option to revert to the “original view.” With the new UI, users can still change their Gmail theme, inbox type, and more through quick settings. 



The new Gmail interface updated with Material 3 look and feel


The integrated view with Gmail, Chat, Spaces, and Meet on the left side of the window will also become standard for users who have turned on Chat. Through quick settings, you can customize this new interface to include the apps most important to you, whether it’s Gmail by itself or a combination of Gmail, Chat, Spaces, and Meet. This makes it easier to stay on top of what’s important and reduces the need to switch between various applications, windows, or tabs. With Chat now available on the left, users will no longer have the option to configure Chat on the right side of Gmail.  

Easily select the applications you want to use in Gmail

Visit the Help Center and The Keyword to learn more. 


Rollout


Availability

  • Available to Google Workspace Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, Frontline, and Nonprofits, as well as legacy G Suite Basic and Business customers
  • Not available to Google Workspace Essentials customers

Resources