Google Workspace Updates Weekly Recap – October 21, 2022

New updates 

Unless otherwise indicated, the features below are fully launched or in the process of rolling out (rollouts should take no more than 15 business days to complete), launching to both Rapid and Scheduled Release at the same time (if not, each stage of rollout should take no more than 15 business days to complete), and available to all Google Workspace and G Suite customers. 

Initiate dialog workflows from the Chat app using message cards
Previously, the only way for developers and Chat app users to open dialogs was through slash commands. Now, we’re adding the ability to trigger the dialog by using buttons on an in-stream message card. This addition provides a much more convenient way to initiate workflows that involve dialog surfaces. | Learn more


Add shared drives to specific organizational units, now generally available 
Earlier this year, we launched a beta that allows admins to place shared drives into sub organizational units (OUs). Doing so enables admins to configure sharing policies, data regions, access management, and more at a granular level. We’re excited to announce this is now generally available. | Available to Google Workspace Essentials, Business Standard, Business Plus, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Standard, Education Plus, the Teaching and Learning Upgrade, and Nonprofits customers only. | Learn more


See when colleagues are out of the office on Android 
When viewing a person information card in Google Voice, Calendar, Gmail, and Chat on Android, you are now able to see your colleagues’ out-of-office status via an out-of-office banner. The banner also shows when the person is expected to return. 

More ways to work with, display, and organize your content across Google Workspace on Android
  • Link previews in Google Sheets: We’re improving the Android experience by adding link previews to Sheets. This feature is already available on the web and allows you to get context from linked content without bouncing between apps and screens. | Gradual rollout (up to 15 days for feature visibility) starting on October 24, 2022. | Learn more
  • Google Sheets drag & drop improvements: We’ve enhanced drag & drop support for the Sheets Android app by adding the ability to drag, copy, and share charts and in-cell images.


Previous announcements


The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.


Workspace Admins are now notified when Label editing is restricted by set rules
We’ve added a new Label Manager UI feature showing which rules a label is used within. Specifically, a message identifying and linking the label to the exact rule(s) will now appear in the Label Manager to ensure admins understand why label modification is disabled. | Available to Google Workspace Essentials, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Plus, Education Standard customers only. | Learn more.


Encouraging Working Location coverage across organizations 
Admins now have access to a new tool that aims to drive Working Location usage across their organizations. This setting adds a customizable banner to users’ Calendar either encouraging or requiring them to set up their working location. | Available to Google Workspace Business Plus, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, Education Standard, and the Teaching and Learning Upgrade, as well as legacy G Suite Business customers only. | Learn more.


Enhanced menus in Google Slides and Drawings improve findability of key features 
We’re updating the menus in Google Slides and Google Drawings to make it easier to locate the most commonly-used features. | Learn more.


Preview or download client-side encrypted files with Google Drive on Android and iOS 
Admins for select Google Workspace editions can update their client-side encryption configurations to include Drive Android and iOS. When enabled, users can preview or download client-side encrypted files. | Learn more.


Split table cells in Google Docs to better organize information
You can now split table cells into a desired number of rows and columns in Google Docs. | Learn more.


Updates to storage management tools in the Admin console 
To further enhance the set of tools for managing storage, we’re rolling out a new Storage Admin role. The ability to apply storage limits to shared drives and a new column called Shared drive ID in the Manage Shared Drives page are coming soon. | Learn more.


Hold separate conversations in Google Chat spaces with in-line threading 
You can now reply directly to any message in new spaces and some existing spaces. This creates a separate in-line thread where smaller groups of people can continue a conversation on a specific topic. | Learn more.


Conversation summaries in Google Chat help you stay on top of messages in Spaces 
We've introduced conversation summaries in Google Chat on web, which provide a helpful digest of conversations in a space, allowing you to quickly catch up on unread messages and navigate to the most relevant threads. | Available to Google Workspace Essentials, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Plus, Education Standard, the Teaching and Learning Upgrade, Frontline, and Nonprofits customers only. | Learn more.


Present Google Slides directly in Google Meet 
You will now be able to control your Slides and engage with your audience all in one screen by presenting Slides from Meet. This updated experience can help you present with greater confidence and ultimately make digital interactions feel more like when you’re physically together. | Available to Google Workspace Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Education Standard, Enterprise Plus, Education Plus, the Teaching and Learning Upgrade, and Nonprofits customers only. | Learn more.


Easily find Google Workspace Marketplace apps with enhanced search filters 
We’ve introduced enhanced search filters in the Google Workspace Marketplace to help you quickly find relevant apps. These new filters allow you to search by category, price, rating, whether it’s a private app for your organization, whether it works with other apps, and more. | Available to Google Workspace Business Starter, Business Standard, Business Plus, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, and Nonprofits, as well as legacy G Suite Basic and Business customers only. | Learn more.


Improving the Google Chat and Gmail search experience on web and mobile 
In order to help you find more accurate and customized search suggestions and results, we’ve introduced three features that improve the Google Chat and Gmail search experience on web and mobile: Search suggestions, Gmail labels, and Related results. | Learn more.



How AI can help in the fight against breast cancer

In 2020, there were 2.3 million people diagnosed with breast cancer and 685,000 deaths globally. Early cancer detection is key to better health outcomes. But screenings are work intensive, and patients often find getting mammograms and waiting for results stressful.

In response to these challenges, Google Health and Northwestern Medicine partnered in 2021 on a clinical research study to explore whether artificial intelligence (AI) models can reduce the time to diagnosis during the screening process, narrowing the assessment gap and improving the patient experience. This work is among the first prospective randomized controlled studies for AI in breast cancer screening, and the results will be published in early 2023.

Behind this work, are scientists and researchers united in the fight against breast cancer. We spoke with Dr. Sunny Jansen, a technical program manager at Google, and Sally Friedewald, MD, the division chief of Breast and Women's Imaging at Northwestern University’s Feinberg School of Medicine, on how they hope this work will help screening providers catch cancer earlier and improve the patient experience.

What were you hoping to achieve with this work in the fight against breast cancer?

Dr. Jansen: Like so many of us, I know how breast cancer can impact families and communities, and how critical early detection can be. The experiences of so many around me have influenced my work in this area. I hope that AI can make the future of breast cancer screening easier, faster, more accurate — and, ultimately, more accessible for women globally.

So we sought to understand how AI can reduce diagnostic delays and help patients receive diagnoses as soon as possible by streamlining care into a single visit. For patients with abnormal findings at screening, the diagnostic delay to get additional imaging tests is typically a couple of weeks in the U.S. Often, the results are normal after the additional imaging tests, but that waiting period can be nerve-racking. Additionally, it can be harder for some patients to come back to get additional imaging tests, which exacerbates delays and leads to disparities in the timeliness of care.

Dr. Friedewald: I anticipate an increase in the demand for screenings and challenges in having enough providers with the necessary specialized training. Using AI, we can identify patients who need additional imaging when they are still in the clinic. We can expedite their care, and, in many cases, eliminate the need for return visits. Patients who aren’t flagged still receive the care they need as well. This translates into operational efficiencies and ultimately leads to patients getting a breast cancer diagnosis faster. We already know the earlier treatment starts, the better.

What were your initial beliefs about applying AI to identify breast cancer? How have these changed through your work on this project?

Dr. Jansen: Most existing publications about AI and breast cancer analyze AI performance retrospectively by reviewing historical datasets. While retrospective studies have a lot of value, they don’t necessarily represent how AI works in the real world. Sally decided early on that it would be important to do a prospective study, incorporating AI into real-world clinical workflows and measuring the impact. I wasn’t sure what to expect!

Dr. Friedewald: Computer-aided detection (CAD), which was developed a few decades ago to help radiologists identify cancers via mammogram, has proven to be helpful in some environments. Overall, in the U.S., CAD has not resulted in increased cancer detection. I was concerned that AI would be similar to CAD in efficacy. However, AI gathers data in a fundamentally different way. I am hopeful that with this new information we can identify cancers earlier with the ultimate goal of saving lives.

The research will be published in early 2023. What did you find most inspiring and hopeful about what you learned?

Dr. Jansen: The patients who consented to participate in the study inspired me. Clinicians and scientists must conduct quality real-world research so that the best ideas can be identified and moved forward, and we need patients as equal partners in our research.

Dr. Friedewald: Agreed! There’s an appetite to improve our processes and make screening easier and less anxiety-provoking. I truly believe that if we can streamline care for our patients, we will decrease the stress associated with screening and hopefully improve access for those who need it.

Additionally, AI has the potential to go beyond the prioritization of patients who need care. By prospectively identifying patients who are at higher risk of developing breast cancer, AI could help us determine which patients might need a more rigorous screening regimen. I am looking forward to collaborating with Google on this topic and others that could ultimately improve cancer survival.

Helping all New Yorkers pursue a career in tech

As New York emerges from the COVID-19 pandemic, the tech sector continues to play a critical role in the city’s economic recovery. While hiring has slowed in many of the city’s industries, tech is still among the fastest areas of job growth. In fact, there were more openings for tech positions during the pandemic than in any other industry.

We believe the city’s good-paying tech jobs should be within reach of all New Yorkers. That’s why earlier this year we announced the Google NYC Tech Opportunity Fund — a $4 million commitment to computer science (CS) education, career development and job-preparedness to make sure every New Yorker, today and in the future, has the chance to get into tech.

With over 680,000 good-paying tech jobs, New York has more tech workers than any other U.S. city. That means for every one Googler in New York, there are over 50 additional tech jobs here. So we’ve extended our support for tech in New York beyond our own hiring to the city’s overall tech employment pipeline — starting from the classroom all the way to the office.

We’ve had some early success: We’ve trained 1,200 New York City high school students through our CS education programs like Code Next and the Computer Science Summer Institute (CSSI). Meanwhile, Grow with Google has partnered with over 530 organizations to train more than 430,000 New Yorkers on digital skills with the help of organizations like public libraries and chambers of commerce. We also launched an apprenticeship program where over 90% of participants nationally landed quality jobs in tech, including at Google, within six months of completing the program. And we’re supporting New York-based startups through Google’s Black Founders Fund and Latino Founders Fund.

With the Google NYC Tech Opportunity Fund, we’re going a step further. We’ve identified key areas we believe Google can help address larger systemic issues and where we’ll focus our investments.

Support for teaching early tech skills

P-12 students with access to CS classes in school are nearly three times more likely to aspire to have a job in the field. But to offer these courses, schools need teachers who are trained in computational skills. After supporting a CS teacher training program at Hunter College in 2021, we committed an additional $1.5 million to The City University of New York (CUNY) and Hunter College to help them train more CS teachers and incorporate computational thinking into their curricula.

New York City's public libraries are essential learning environments for many, especially in under-resourced communities. Thousands of teens use the city’s three library systems annually to get college and career mentoring, build digital literacy, borrow books and more. So we granted a total of $1.5 million to Brooklyn Public Library, The New York Public Library and Queens Public Library to help them create special teen centers. These spaces will offer access to technology, resources and programs teens need to develop essential career skills for the future.

Resources for job seekers

We’re also providing a $1 million Google.org grant to the New York City Employment and Training Coalition (NYCETC) to assemble a consortium of leaders in tech education and workforce development, and to seed a grant fund for organizations that support BIPOC job seekers in NYC.

As part of this effort, we also offer free Google Career Certificates for community colleges, such as The State University of New York’s (SUNY) online center. Over 10,000 New Yorkers have already completed a Google Career Certificate and built up their qualifications for high-demand tech jobs.

By taking steps to support students and those already in the workforce, we can help ensure all New Yorkers have access to career opportunities so the tech sector in New York really looks like New York.

Google Workspace Client-side encryption beta expanded to include Google Calendar

 This announcement was made at Google Cloud Next ‘22. Visit the Cloud Blog to learn more about the latest Google Workspace innovations for the ever-changing world of work. 



What’s changing 

In 2021, we announced Google Workspace Client-side encryption to help customers strengthen the confidentiality of their data while helping to address a broad range of data sovereignty and compliance requirements. 


Since then, we’ve made this feature available for Google Meet, Drive, Docs, Sheets, and Slides, with support for multiple file types including Office files, PDFs, and more. Today, we’re happy to announce the beta for Client-side encryption for Google Calendar. When using Client-side encryption for Calendar events, your event description, attachments, and Meet data are indecipherable to Google servers. You have control over encryption keys and the identity service to access those keys. 


Google Workspace Enterprise Plus, Education Plus, and Education Standard customers are eligible to apply for the beta here until November 11, 2022. 

Who’s impacted 

Admins and end users 


Why it’s important 

Google Workspace already uses the latest cryptographic standards to encrypt all data at rest and in transit between our facilities. With Client-side encryption, we’re taking this a step further by giving customers direct control of encryption keys and the identity provider used to access those keys. This can help you strengthen the confidentiality of your data while helping to address a broad range of data sovereignty and compliance needs. 


When using Client-side encryption, your event description, attachments, and Meet data are indecipherable to Google. You can create a fundamentally stronger privacy posture, whether that’s to help your organization comply with regulations like ITAR and CJIS or simply to better protect the privacy of your confidential data. 


Getting started 

  • Admins: This feature will be OFF by default and can be enabled at the domain, OU, and Group levels by going to the Admin console > Security > Access and data control > Client-side encryption. Visit the Help Center to learn more about client-side encryption.
  • End users: 
    • You will need to be logged in with your Identity Provider to have access to encrypted content.
    • To add encryption to any event in Calendar, click on the shield icon at the top of the event creation card. This will add encryption to the event description, attachments, and Meet, while other items such as the event title, time, and guests remain under standard encryption. 

Availability 

  • Available to Google Workspace Enterprise Plus, Education Plus, and Education Standard customers 
  • Not available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Education Fundamentals, Frontline, and Nonprofits, as well as legacy G Suite Basic and Business customers 
  • Not available to users with personal Google Accounts 

Resources 

Beta Channel Update for ChromeOS

The Beta channel is being updated to 107.0.5304.51 (Platform version: 15117.66.0 / 15117.67.0) for most ChromeOS devices. This build contains a number of bug fixes and security updates.

If you find new issues, please let us know in one of the following ways:

  1. File a bug
  2. Visit our ChromeOS communities
    1. General: Chromebook Help Community
    2. Beta Specific: ChromeOS Beta Help Community
  3. Report an issue or send feedback on Chrome

Interested in switching channels? Find out how.

Daniel Gagnon,
Google ChromeOS


PI-ARS: Accelerating Evolution-Learned Visual-Locomotion with Predictive Information Representations

Evolution strategy (ES) is a family of optimization techniques inspired by the ideas of natural selection: a population of candidate solutions is evolved over generations to better adapt to an optimization objective. ES has been applied to a variety of challenging decision-making problems, such as legged locomotion, quadcopter control, and even power system control.

Compared to gradient-based reinforcement learning (RL) methods like proximal policy optimization (PPO) and soft actor-critic (SAC), ES has several advantages. First, ES directly explores in the space of controller parameters, while gradient-based methods often explore within a limited action space, which indirectly influences the controller parameters. More direct exploration has been shown to boost learning performance and enable large scale data collection with parallel computation. Second, a major challenge in RL is long-horizon credit assignment, e.g., when a robot accomplishes a task in the end, determining which actions it performed in the past were the most critical and should be assigned a greater reward. Since ES directly considers the total reward, it relieves researchers from needing to explicitly handle credit assignment. In addition, because ES does not rely on gradient information, it can naturally handle highly non-smooth objectives or controller architectures where gradient computation is non-trivial, such as meta–reinforcement learning. However, a major weakness of ES-based algorithms is their difficulty in scaling to problems that require high-dimensional sensory inputs to encode the environment dynamics, such as training robots with complex vision inputs.
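To make the contrast concrete, the core ES loop fits in a few lines. The sketch below is a minimal, generic ES, not the exact algorithm used in this work: it never differentiates the objective; it only perturbs the controller parameters and weights each perturbation by the total reward it earned. The toy objective and hyperparameter values are purely illustrative.

```python
import numpy as np

np.random.seed(0)  # deterministic for this illustration

def evolution_strategy(objective, theta, sigma=0.1, lr=0.02, pop_size=50, iters=200):
    """Minimal ES: sample perturbations of the parameters, score each by its
    total reward, and move toward the better-scoring perturbations."""
    for _ in range(iters):
        noise = np.random.randn(pop_size, theta.size)   # one perturbation per candidate
        rewards = np.array([objective(theta + sigma * n) for n in noise])
        # Normalize rewards so the update is invariant to reward scale
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        # A reward-weighted combination of perturbations approximates a gradient step
        theta = theta + lr / (pop_size * sigma) * noise.T @ adv
    return theta

# Toy objective: reward peaks at theta = [1, -2]; no gradients are ever computed
target = np.array([1.0, -2.0])
reward = lambda th: -np.sum((th - target) ** 2)
theta = evolution_strategy(reward, np.zeros(2))
```

Because only total episode reward enters the update, credit assignment across time steps never has to be made explicit, which is the property the paragraph above highlights.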

In this work, we propose “PI-ARS: Accelerating Evolution-Learned Visual-Locomotion with Predictive Information Representations”, a learning algorithm that combines representation learning and ES to effectively solve high dimensional problems in a scalable way. The core idea is to leverage predictive information, a representation learning objective, to obtain a compact representation of the high-dimensional environment dynamics, and then apply Augmented Random Search (ARS), a popular ES algorithm, to transform the learned compact representation into robot actions. We tested PI-ARS on the challenging problem of visual-locomotion for legged robots. PI-ARS enables fast training of performant vision-based locomotion controllers that can traverse a variety of difficult environments. Furthermore, the controllers trained in simulated environments successfully transfer to a real quadruped robot.

PI-ARS trains reliable visual-locomotion policies that are transferable to the real world.

Predictive Information
A good representation for policy learning should be both compressive, so that ES can focus on solving a much lower dimensional problem than learning from raw observations would entail, and task-critical, so the learned controller has all the necessary information needed to learn the optimal behavior. For robotic control problems with high-dimensional input space, it is critical for the policy to understand the environment, including the dynamic information of both the robot itself and its surrounding objects.

As such, we propose an observation encoder that preserves information from the raw input observations that allows the policy to predict the future states of the environment, thus the name predictive information (PI). More specifically, we optimize the encoder such that the encoded version of what the robot has seen and planned in the past can accurately predict what the robot might see, and the reward it might receive, in the future. One mathematical tool to describe such a property is mutual information, which measures the amount of information we obtain about one random variable X by observing another random variable Y. In our case, X and Y would be what the robot saw and planned in the past, and what the robot sees and is rewarded in the future. Directly optimizing the mutual information objective is challenging because we usually only have access to samples of the random variables, not their underlying distributions. In this work we follow a previous approach that uses InfoNCE, a contrastive variational bound on mutual information, to optimize the objective.

Left: We use representation learning to encode PI of the environment. Right: We train the representation by replaying trajectories from the replay buffer and maximize the predictability between the observation and motion plan in the past and the observation and reward in the future of the trajectory.
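A small sketch may help make the InfoNCE bound concrete. The simplified NumPy version below assumes the encoder has already produced a batch of "past" and "future" embeddings; pairs drawn from the same trajectory are the positives, and the other futures in the batch serve as negatives. The function name and temperature value are illustrative, not from the paper.

```python
import numpy as np

def info_nce_loss(past_emb, future_emb, temperature=0.1):
    """InfoNCE: each (past, future) pair from the same trajectory is a positive;
    futures from other trajectories in the batch serve as negatives."""
    # Cosine-similarity logits between every (past, future) pair in the batch
    past = past_emb / np.linalg.norm(past_emb, axis=1, keepdims=True)
    future = future_emb / np.linalg.norm(future_emb, axis=1, keepdims=True)
    logits = past @ future.T / temperature          # shape (batch, batch)
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Maximizing the diagonal log-probability tightens the bound on mutual information
    return -np.mean(np.diag(log_probs))
```

In PI-ARS, the "past" embedding would encode past observations and motion plans, and the "future" embedding would encode future observations and rewards, matching the figure above.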

Predictive Information with Augmented Random Search
Next, we combine PI with Augmented Random Search (ARS), an algorithm that has shown excellent optimization performance for challenging decision-making tasks. At each iteration, ARS samples a population of perturbed controller parameters, evaluates their performance in the testing environment, and then computes a gradient estimate that moves the controller toward the better-performing candidates.
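A single ARS iteration can be sketched as follows. This is a simplified illustration rather than the paper's implementation: `rollout_reward` stands in for evaluating a perturbed controller in the environment, and the hyperparameter values are illustrative. The antithetic (+δ / −δ) evaluations, top-direction selection, and normalization by the reward standard deviation are the "augmentations" that distinguish ARS from basic random search.

```python
import numpy as np

def ars_step(theta, rollout_reward, n_dirs=8, n_top=4, sigma=0.05, lr=0.02):
    """One ARS iteration: evaluate antithetic perturbations, keep the
    best-performing directions, and step along a reward-weighted average."""
    deltas = np.random.randn(n_dirs, theta.size)
    r_plus = np.array([rollout_reward(theta + sigma * d) for d in deltas])
    r_minus = np.array([rollout_reward(theta - sigma * d) for d in deltas])
    # Rank each direction by the better of its two rollouts
    top = np.argsort(-np.maximum(r_plus, r_minus))[:n_top]
    reward_std = np.concatenate([r_plus[top], r_minus[top]]).std() + 1e-8
    # Reward-difference-weighted sum of directions, normalized by reward scale
    step = (r_plus[top] - r_minus[top]) @ deltas[top] / (n_top * reward_std)
    return theta + lr * step
```

In PI-ARS, `theta` would parameterize the controller that maps the learned PI representation to robot commands, so the search space stays low-dimensional even with vision inputs.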

We use the learned compact representation from PI to connect PI and ARS, which we call PI-ARS. More specifically, ARS optimizes a controller that takes the learned compact representation as input and predicts appropriate robot commands to achieve the task. Optimizing a controller with a smaller input space allows ARS to find the optimal solution more efficiently. Meanwhile, we use the data collected during ARS optimization to further improve the learned representation, which is then fed to the ARS controller in the next iteration.

An overview of the PI-ARS data flow. Our algorithm interleaves between two steps: 1) optimizing the PI objective, which updates the weights of the neural network that extracts the learned representation; and 2) sampling new trajectories and updating the controller parameters using ARS.

Visual-Locomotion for Legged Robots
We evaluate PI-ARS on the problem of visual-locomotion for legged robots. We chose this problem for two reasons: visual-locomotion is a key bottleneck for legged robots to be applied in real-world applications, and the high-dimensional vision-input to the policy and the complex dynamics in legged robots make it an ideal test-case to demonstrate the effectiveness of the PI-ARS algorithm. A demonstration of our task setup in simulation can be seen below. Policies are first trained in simulated environments, and then transferred to hardware.

An illustration of the visual-locomotion task setup. The robot is equipped with two cameras to observe the environment (illustrated by the transparent pyramids). The observations and robot state are sent to the policy to generate a high-level motion plan, such as feet landing location and desired moving speed. The high-level motion plan is then achieved by a low-level Motion Predictive Control (MPC) controller.

Experiment Results
We first evaluate the PI-ARS algorithm on four challenging simulated tasks:

  • Uneven stepping stones: The robot needs to walk over uneven terrain while avoiding gaps.
  • Quincuncial piles: The robot needs to avoid gaps both in front and sideways.
  • Moving platforms: The robot needs to walk over stepping stones that are randomly moving horizontally or vertically. This task illustrates the flexibility of learning a vision-based policy in comparison to explicitly reconstructing the environment.
  • Indoor navigation: The robot needs to navigate to a random location while avoiding obstacles in an indoor environment.

As shown below, PI-ARS is able to significantly outperform ARS in all four tasks in terms of the total task reward it can obtain (by 30-50%).

Left: Visualization of PI-ARS policy performance in simulation. Right: Total task reward (i.e., episode return) for PI-ARS (green line) and ARS (red line). The PI-ARS algorithm significantly outperforms ARS on four challenging visual-locomotion tasks.

We further deploy the trained policies to a real Laikago robot on two tasks: random stepping stone and indoor navigation. We demonstrate that our trained policies can successfully handle real-world tasks. Notably, the success rate of the random stepping stone task improved from 40% in the prior work to 100%.

PI-ARS trained policy enables a real Laikago robot to navigate around obstacles.

Conclusion
In this work, we present a new learning algorithm, PI-ARS, that combines gradient-based representation learning with gradient-free evolutionary strategy algorithms to leverage the advantages of both. PI-ARS enjoys the effectiveness, simplicity, and parallelizability of gradient-free algorithms, while relieving a key bottleneck of ES algorithms on handling high-dimensional problems by optimizing a low-dimensional representation. We apply PI-ARS to a set of challenging visual-locomotion tasks, among which PI-ARS significantly outperforms the state of the art. Furthermore, we validate the policy learned by PI-ARS on a real quadruped robot. It enables the robot to walk over randomly-placed stepping stones and navigate in an indoor space with obstacles. Our method opens the possibility of incorporating modern large neural network models and large-scale data into the field of evolutionary strategy for robotics control.

Acknowledgements
We would like to thank our paper co-authors: Ofir Nachum, Tingnan Zhang, Sergio Guadarrama, and Jie Tan. We would also like to thank Ian Fischer and John Canny for valuable feedback.

Source: Google AI Blog


Improving the Google Chat and Gmail search experience on web and mobile

What’s changing

In order to help you find more accurate and customized search suggestions and results, we’re introducing three features that improve the Google Chat and Gmail search experience on web and mobile:
  • Search suggestions: Search-query suggestions based on your past search history in Chat will now appear as you type in the Chat search bar. This will help you quickly recall important messages, files, and more on mobile. 
  • Gmail labels: You can now search messages under a specific Gmail label in the app to return results only within that label. You can also use search chips in the Gmail search bar to refine label searches. 
 
  • Related results: For Gmail search-queries that return no results, related results will be shown to improve the overall search experience. 

Getting started 


Rollout pace 

Search suggestions: 
  • This feature is now available on Android devices
  • Rollout to iOS devices will complete by the end of October 
Gmail labels: 
  • This feature is now available on Android and iOS devices 
Related results: 
  • This feature is now available on web 

Availability 

  • Available to all Google Workspace customers, as well as legacy G Suite Basic and Business customers 
  • Available to users with personal Google Accounts 

Resources 


Roadmap