Google Workspace Updates Weekly Recap – November 17, 2023

4 New updates

Unless otherwise indicated, the features below are available to all Google Workspace customers, and are fully launched or in the process of rolling out. Rollouts should take no more than 15 business days to complete if launching to both Rapid and Scheduled Release at the same time. If not, each stage of rollout should take no more than 15 business days to complete.


Enhancing Google Drive usability on large screen Android devices 
Building upon improvements to the Google Workspace experience on large screen Android devices, we’re excited to announce additional enhancements that bring our tablet experience more in line with our web experience. Specifically, you'll notice: 
  • Above the main doclist, users will now see a tappable folder hierarchy for their current view. This allows a user to keep track of where they are in Drive and easily navigate out of nested folders. 
  • Per-file data columns that show when a file was last modified and how much storage each file uses. 
  • A color palette that matches the Google Material Design 3 guidelines. 
Rolling out to Rapid Release domains now; launch to Scheduled Release domains planned for November 27th, 2023. | Available to all Google Workspace customers and users with personal Google Accounts. | Learn more about using Google Drive
Expanding Google Drive log events to additional Google Workspace editions 
Drive log events, which give admins access to the audit and investigation page to run searches related to Drive activity, are now available for Cloud Identity Free and Cloud Identity Premium editions. | Rolling out now to Rapid Release and Scheduled Release domains at an extended pace (potentially longer than 30 days for feature visibility). | Learn more about Drive log events


Easily convert hyperlinked text to smart chips using the tab key in Google Sheets 
Building upon the tab to convert feature in Google Sheets, when your hyperlinked text matches an inserted file, people, calendar event, YouTube, or place link, you will now be prompted to press Tab to convert it into a smart chip. For example, if the hyperlinked text is a file name, Sheets will automatically recommend converting it to a file chip. | Rolling out now to Rapid Release and Scheduled Release domains at a gradual pace (up to 15 days for feature visibility). | Available to all Google Workspace customers and users with personal Google Accounts. | Learn more about inserting smart chips in Google Sheets
More languages available for Google Meet captions 
You can now use captions in Google Meet in Finnish and Hebrew. You can use captions to view subtitles as everyone speaks during a meeting — captions are only visible to you. Note that because these are newly supported languages, they will be denoted with a “beta” tag as we continue to optimize performance. See our Help Center for a complete list of supported languages for captions in Meet. We’ve also removed the “beta” tag from the following languages, as they have been validated and are out of beta: 
  • English (UK) 
  • French (Canada) 
  • Thai 
  • Vietnamese 
  • Polish 
  • Romanian 
  • Turkish 
Available now to all Google Workspace customers and users with personal Google accounts. | Learn more about using captions in Google Meet.


Previous announcements

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.



Take action on Google Drive requests and comments directly in Google Chat 
You can now collaborate more easily on Docs, Sheets and Slides comments without ever leaving Chat. | Learn more about Drive comments and requests in Chat.

AppSheet smart chips for Google Docs 
You can now insert smart chips for AppSheet content into documents, allowing you to access AppSheet data directly in Docs. | Learn more about AppSheet smart chips

View full screen tasks lists on Google Calendar 
You will now be able to see all your tasks and task lists in a single full screen view on Calendar web. | Learn more about full screen tasks lists in Calendar

Star important messages in Google Chat 
Following the recent announcement of home and mentions in Google Chat, we’re excited to introduce starred on web, an additional shortcut in the redesigned navigation panel that helps you stay on top of your most important messages in Chat. | Learn more about starring messages in Chat.

Read and write out of office and focus time events using the Calendar API
In addition to reading and writing working location data, we’re expanding the Calendar API functionality to encompass out of office and focus time data. | Learn more about using the Calendar API.

Improved search query suggestions in Google Chat web
In conjunction with recent updates to search in Google Chat, we’re introducing enhanced search query suggestions, a feature already available on mobile, that helps you find the right message, person, file, or space in Chat on the web. | Learn more about searching in Chat.


Completed rollouts

The features below completed their rollouts to Rapid Release domains, Scheduled Release domains, or both. Please refer to the original blog posts for additional details.


Rapid Release Domains: 
Scheduled Release Domains: 
Rapid and Scheduled Release Domains: 


For a recap of announcements in the past six months, check out What’s new in Google Workspace (recent releases).



  

Improved search query suggestions in Google Chat web

What’s changing

In conjunction with recent updates to search in Google Chat, we’re introducing enhanced search query suggestions, a feature already available on mobile, that helps you find the right message, person, file, or space in Chat on the web. 

When searching in Chat, you will now see query suggestions based on historical search activity, sorted by relevance. The top suggestion will also appear in the search box and you can press “Tab” to autofill the query. You can also delete a suggested query by hovering over it and clicking on the “x” icon next to it. 


Getting started 


Rollout pace 


Availability 

  • Available to all Google Workspace customers and users with personal Google Accounts 

Resources 

Emerging practices for Society-Centered AI

The first of Google’s AI Principles is to “Be socially beneficial.” As AI practitioners, we’re inspired by the transformative potential of AI technologies to benefit society and our shared environment at a scale and swiftness that weren’t possible before. From helping address the climate crisis to helping transform healthcare, to making the digital world more accessible, our goal is to apply AI responsibly to be helpful to more people around the globe. Achieving global scale requires researchers and communities to think ahead — and act — collectively across the AI ecosystem.

We call this approach Society-Centered AI. It is both an extension and an expansion of Human-Centered AI, focusing on the aggregate needs of society that are still informed by the needs of individual users, specifically within the context of the larger, shared human experience. Recent AI advances offer unprecedented, societal-level capabilities, and we can now methodically address those needs — if we apply collective, multi-disciplinary AI research to society-level, shared challenges, from forecasting hunger to predicting diseases to improving productivity.

The opportunity for AI to benefit society increases each day. We took a look at our work in these areas and at the research projects we have supported. Recently, Google announced that 70 professors were selected for the 2023 Award for Inclusion Research Program, which supports academic research that addresses the needs of historically marginalized groups globally. Through evaluation of this work, we identified a few emerging practices for Society-Centered AI:

  • Understand society’s needs
    Listening to communities and partners is crucial to understanding major issues deeply and identifying priority challenges to address. As an emerging general purpose technology, AI has the potential to address major global societal issues that can significantly impact people’s lives (e.g., educating workers, improving healthcare, and improving productivity). We have found the key to impact is to be centered on society’s needs. For this, we focus our efforts on goals society has agreed should be prioritized, such as the United Nations’ 17 Sustainable Development Goals, a set of interconnected goals jointly developed by more than 190 countries to address global challenges.
  • Collective efforts to address those needs
    Collective efforts bring stakeholders (e.g., local and academic communities, NGOs, private-public collaborations) into a joint process of design, development, implementation, and evaluation of AI technologies as they are being developed and deployed to address societal needs.
  • Measuring success by how well the effort addresses society’s needs
    It is important and challenging to measure how well AI solutions address society’s needs. In each of our cases, we identified primary and secondary indicators of impact that we optimized through our collaborations with stakeholders.

Why is Society-Centered AI important?

The case examples described below show how the Society-Centered AI approach has led to impact across topics, such as accessibility, health, and climate.


Understanding the needs of individuals with non-standard speech

There are millions of people with non-standard speech (e.g., impaired articulation, dysarthria, dysphonia) in the United States alone. In 2019, Google Research launched Project Euphonia, a methodology that allows individual users with non-standard speech to train personalized speech recognition models. Our success began with the impact we had on each individual who is now able to use voice dictation on their mobile device.

Euphonia started with a Society-Centered AI approach, including collective efforts with the non-profit organizations ALS Therapy Development Institute and ALS Residence Initiative to understand the needs of individuals with amyotrophic lateral sclerosis (ALS) and their ability to use automatic speech recognition systems. Later, we developed the world’s largest corpus of non-standard speech recordings, which enabled us to train a Universal Speech Model that better recognizes disordered speech, improving the word error rate (WER) on real conversations by 37%. This also led to the 2022 collaboration between the University of Illinois Urbana-Champaign, Alphabet, Apple, Meta, Microsoft, and Amazon to begin the Speech Accessibility Project, an ongoing initiative to create a publicly available dataset of disordered speech samples to improve products and make speech recognition more inclusive of diverse speech patterns. Other technologies that use AI to help remove barriers of modality and language include live transcribe, live caption, and read aloud.
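For readers unfamiliar with the metric, WER is the standard edit-distance measure for speech recognition quality. The minimal sketch below shows the textbook computation; it is illustrative only, not Google's evaluation pipeline.

```python
# Word error rate (WER): word-level edit distance divided by reference
# length. Standard definition, shown here purely for illustration.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("turn on the lights", "turn off the light"))  # 0.5 (2 errors / 4 words)
```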


Focusing on society’s health needs

Access to timely maternal health information can save lives globally: every two minutes a woman dies during pregnancy or childbirth, and 1 in 26 children die before reaching age five. In rural India, educating expectant and new mothers about key health issues pertaining to pregnancy and infancy required scalable, low-cost technology solutions. Together with ARMMAN, Google Research supported a program that uses mobile messaging and machine learning (ML) algorithms to predict when women might benefit from receiving interventions (i.e., targeted preventative care information) and encourages them to engage with the mMitra free voice call program. Within a year, the mMitra program showed a 17% increase in infants with tripled birth weight and a 36% increase in women understanding the importance of taking iron tablets during pregnancy. More than 175K mothers have been reached so far through this automated solution, which public health workers use to improve the quality of information delivery.

These efforts have been successful in improving health due to the close collective partnership among the community and those building the AI technology. We have adopted this same approach via collaborations with caregivers to address a variety of medical needs. Some examples include: the use of the Automated Retinal Disease Assessment (ARDA) to help screen for diabetic retinopathy in 250,000 patients in clinics around the world; our partnership with iCAD to bring our mammography AI models to clinical settings to aid in breast cancer detection; and the development of Med-PaLM 2, a medical large language model that is now being tested with Cloud partners to help doctors provide better patient care.


Compounding impact from sustained efforts for crisis response

Google Research’s flood prediction efforts began in 2018 with flood forecasting in India and expanded to Bangladesh to help combat the catastrophic damage from yearly floods. The initial efforts began with partnerships with India’s Central Water Commission, local governments and communities. The implementation of these efforts used SOS Alerts on Search and Maps, and, more recently, broadly expanded access via Flood Hub. Continued collaborations and advancing an AI-based global flood forecasting model allowed us to expand this capability to over 80 countries across Africa, the Asia-Pacific region, Europe, and South, Central, and North America. We also partnered with networks of community volunteers to further amplify flood alerts. By working with governments and communities to measure the impact of these efforts on society, we refined our approach and algorithms each year.

We were able to leverage those methodologies and some of the underlying technology, such as SOS Alerts, from flood forecasting to similar societal needs, such as wildfire forecasting and heat alerts. Our continued engagements with organizations led to the support of additional efforts, such as the World Meteorological Organization's (WMO) Early Warnings For All Initiative. The continued engagement with communities has allowed us to learn about our users' needs on a societal level over time, expand our efforts, and compound the societal reach and impact of our efforts.


Further supporting Society-Centered AI research

We recently funded 18 university research proposals exemplifying a Society-Centered AI approach, a new track within the Google Award for Inclusion Research Program. These researchers are taking the Society-Centered AI methodology and helping create beneficial applications across the world. Examples of some of the projects funded include:

  • AI-Driven Monitoring of Attitude Polarization in Conflict-Affected Countries for Inclusive Peace Process and Women’s Empowerment: This project’s goal is to create LLM-powered tools that can be used to monitor peace in online conversations in developing nations. The initial target communities are those where peace is in flux, and the effort will put a particular emphasis on mitigating polarization that impacts women and promoting harmony.
  • AI-Assisted Distributed Collaborative Indoor Pollution Meters: A Case Study, Requirement Analysis, and Low-Cost Healthy Home Solution for Indian Communities: This project is looking at the use of low-cost pollution monitors combined with an AI-assisted methodology for identifying recommendations that help communities improve air quality and at-home health. The initial target communities are highly impacted by pollution, and the joint work with them includes developing ways to measure improvement in outcomes in the local community.
  • Collaborative Development of AI Solutions for Scaling Up Adolescent Access to Sexual and Reproductive Health Education and Services in Uganda: This project’s goal is to create LLM-powered tools to provide personalized coaching and learning for users' needs on topics of sexual and reproductive health education in low-income settings in Sub-Saharan Africa. The local societal need is significant, with an estimated 25% rate of teenage pregnancy, and the project aims to address the needs with a collective development process for the AI solution.

Future direction

Focusing on society’s needs, working via multidisciplinary collective research, and measuring the impact on society help lead to AI solutions that are relevant, long-lasting, empowering, and beneficial. See AI for the Global Goals to learn more about potential Society-Centered AI research problems. Our efforts with non-profits in these areas are complementary to the research that we are doing and encouraging. We believe that further initiatives using Society-Centered AI will help the collective research community solve problems and positively impact society at large.


Acknowledgements

Many thanks to the many individuals who have worked on these projects at Google including Shruti Sheth, Reena Jana, Amy Chung-Yu Chou, Elizabeth Adkison, Sophie Allweis, Dan Altman, Eve Andersson, Ayelet Benjamini, Julie Cattiau, Yuval Carny, Richard Cave, Katherine Chou, Greg Corrado, Carlos De Segovia, Remi Denton, Dotan Emanuel, Ashley Gardner, Oren Gilon, Taylor Goddu, Brigitte Hoyer Gosselink, Jordan Green, Alon Harris, Avinatan Hassidim, Rus Heywood, Sunny Jansen, Pan-Pan Jiang, Anton Kast, Marilyn Ladewig, Ronit Levavi Morad, Bob MacDonald, Alicia Martin, Shakir Mohamed, Philip Nelson, Moriah Royz, Katie Seaver, Joel Shor, Milind Tambe, Aparna Taneja, Divy Thakkar, Jimmy Tobin, Katrin Tomanek, Blake Walsh, Gal Weiss, Kasumi Widner, Lihong Xi, and teams.

Source: Google AI Blog


Read and write out of office and focus time events using the Calendar API

What’s changing 

In addition to reading and writing working location data, we’re expanding the Calendar API functionality to encompass out of office and focus time data. Developers can use the API to read and write this information and synchronize users’ availability with external systems. For example, you can use the API in conjunction with HR systems to automatically add OOO entries to a user’s calendar when they submit vacation time. Or the API can be used to automatically block focus time on a user’s calendar to complete training courses. 

Reading and writing out of office and focus time data is helpful in a variety of situations such as the following (see the sketch below): 
  • Creating and updating OOO and Focus Time events (Events.Insert, Events.Update, Events.Patch). 
  • Specifying OOO and Focus Time specific features, such as auto-declining meetings and setting do-not-disturb statuses. 
  • Selecting any combination of event types to read from a calendar (Events.List). 

Further, reading and writing this information eliminates the need for users to enter the same information into multiple systems, helping to cut down on manual churn.
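As a minimal illustration, the sketch below creates one event of each type with the v3 API via google-api-python-client. It assumes `creds` holds authorized user credentials with a Calendar scope; the dates and messages are placeholders.

```python
# Minimal sketch: creating OOO and focus time events with the Calendar API v3.
# Assumes `creds` is an authorized user credential with a Calendar scope.
from googleapiclient.discovery import build

service = build("calendar", "v3", credentials=creds)

# Out of office: auto-decline new conflicting invitations while away.
ooo = {
    "eventType": "outOfOffice",
    "summary": "Out of office",
    "start": {"dateTime": "2023-12-04T09:00:00-05:00"},
    "end": {"dateTime": "2023-12-08T17:00:00-05:00"},
    "outOfOfficeProperties": {
        "autoDeclineMode": "declineOnlyNewConflictingInvitations",
        "declineMessage": "I'm on vacation and will reply when I return.",
    },
}
service.events().insert(calendarId="primary", body=ooo).execute()

# Focus time: block two hours and set a do-not-disturb Chat status.
focus = {
    "eventType": "focusTime",
    "summary": "Onboarding training",
    "start": {"dateTime": "2023-12-11T10:00:00-05:00"},
    "end": {"dateTime": "2023-12-11T12:00:00-05:00"},
    "focusTimeProperties": {
        "autoDeclineMode": "declineAllConflictingInvitations",
        "chatStatus": "doNotDisturb",
    },
}
service.events().insert(calendarId="primary", body=focus).execute()
```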


Who’s impacted

Developers


Why you’d use it

Out of office and focus time event support joins support for working location, which was announced earlier this year, to round out API functionality for calendar events. Each specific event type can be synced throughout your organization's IT ecosystem, creating seamless user journeys and helping to connect users with resources and each other. This includes things such as:


  • Mapping working location data to better adapt on-site resources and update other third-party surfaces, such as hot desk booking tools. 
  • Automatically blocking OOO based on vacation or PTO requests.
  • Blocking off focus time events to give users time to go through onboarding or other company training programs.


Additional details

Prior to this update, if you requested to read a user’s calendar via API v3, out of office and focus time events were returned with [email protected] in the organizer field, and without their specific features. With this update, these events will return with all of their properties and the specific user as organizer. Please check your code to ensure it does not make implicit assumptions about the previous API return values, and use the eventType parameter to perform different operations with regular, OOO, Focus Time, or Working Location events.
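Continuing the sketch above, here is one hedged way to read these events with the eventTypes filter and branch on eventType rather than on the legacy organizer address:

```python
# Continues the `service` client from the earlier sketch. Filter the read
# to OOO and focus time entries, then branch on eventType instead of the
# legacy [email protected] organizer address.
events = service.events().list(
    calendarId="primary",
    eventTypes=["outOfOffice", "focusTime"],
    singleEvents=True,
).execute()

for event in events.get("items", []):
    kind = event.get("eventType", "default")
    organizer = event.get("organizer", {}).get("email", "")
    if kind == "outOfOffice":
        print("OOO:", event.get("summary"), "organizer:", organizer)
    elif kind == "focusTime":
        print("Focus time:", event.get("summary"), "organizer:", organizer)
```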


Getting started


Rollout pace

Availability

  • The Calendar API is available to all. 

  • Out of Office events are available to Google Workspace Essentials, Enterprise Essentials, Frontline, Enterprise Starter, Enterprise Standard, Enterprise Plus, Nonprofits, Business Starter, Business Standard, Business Plus, Education Fundamentals, Education Standard, and Education Plus customers.

  • Focus Time events are available to Google Workspace Enterprise Starter, Enterprise Standard, Enterprise Plus, Nonprofits, Business Standard, Business Plus, Education Fundamentals, Education Standard, and Education Plus customers.

Resources


Long Term Support Channel Update for ChromeOS

LTS-114 is being updated in the LTS channel to 114.0.5735.340 (Platform Version: 15437.78.0) for most ChromeOS devices. Want to know more about Long Term Support? Click here.


This update contains multiple Security fixes, including:


1492698 High CVE-2023-5480: Inappropriate implementation in Payments




Giuliana Pritchard
Google Chrome OS

Chrome Dev for Android Update

Hi everyone! We've just released Chrome Dev 121 (121.0.6127.2) for Android. It's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Chrome Dev for Desktop Update

The Dev channel has been updated to 121.0.6129.0 for Windows, Mac and Linux.

A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Daniel Yip
Google Chrome

Responsible AI at Google Research: Adversarial testing for generative AI safety

The Responsible AI and Human-Centered Technology (RAI-HCT) team within Google Research is committed to advancing the theory and practice of responsible human-centered AI through a lens of culturally-aware research, to meet the needs of billions of users today, and blaze the path forward for a better AI future. The BRAIDS (Building Responsible AI Data and Solutions) team within RAI-HCT aims to simplify the adoption of RAI practices through the utilization of scalable tools, high-quality data, streamlined processes, and novel research with a current emphasis on addressing the unique challenges posed by generative AI (GenAI).

GenAI models have enabled unprecedented capabilities leading to a rapid surge of innovative applications. Google actively leverages GenAI to enhance its products' utility and to improve lives. While enormously beneficial, GenAI also presents risks for disinformation, bias, and security. In 2018, Google pioneered the AI Principles, emphasizing beneficial use and prevention of harm. Since then, Google has focused on effectively implementing our principles in Responsible AI practices through 1) a comprehensive risk assessment framework, 2) internal governance structures, 3) education, empowering Googlers to integrate AI Principles into their work, and 4) the development of processes and tools that identify, measure, and analyze ethical risks throughout the lifecycle of AI-powered products. The BRAIDS team focuses on the last area, creating tools and techniques for identification of ethical and safety risks in GenAI products that enable teams within Google to apply appropriate mitigations.


What makes GenAI challenging to build responsibly?

The unprecedented capabilities of GenAI models have been accompanied by a new spectrum of potential failures, underscoring the urgency for a comprehensive and systematic RAI approach to understanding and mitigating potential safety concerns before a model is made broadly available. One key technique used to understand potential risks is adversarial testing: systematically evaluating models to learn how they behave when provided with malicious or inadvertently harmful inputs across a range of scenarios. To that end, our research has focused on three directions:

  1. Scaled adversarial data generation
    Given the diverse user communities, use cases, and behaviors, it is difficult to comprehensively identify critical safety issues prior to launching a product or service. Scaled adversarial data generation with humans-in-the-loop addresses this need by creating test sets that contain a wide range of diverse and potentially unsafe model inputs that stress the model capabilities under adverse circumstances. Our unique focus in BRAIDS lies in identifying societal harms to the diverse user communities impacted by our models.
  2. Automated test set evaluation and community engagement
    Automated test set evaluation helps scale the testing process so that many thousands of model responses can be quickly evaluated to learn how the model responds across a wide range of potentially harmful scenarios. Beyond testing with adversarial test sets, community engagement is a key component of our approach to identify “unknown unknowns” and to seed the data generation process.
  3. Rater diversity
    Safety evaluations rely on human judgment, which is shaped by community and culture and is not easily automated. To address this, we prioritize research on rater diversity.

Scaled adversarial data generation

High-quality, comprehensive data underpins many key programs across Google. Initially reliant on manual data generation, we've made significant strides to automate the adversarial data generation process. A centralized data repository with use-case and policy-aligned prompts is available to jump-start the generation of new adversarial tests. We have also developed multiple synthetic data generation tools based on large language models (LLMs) that prioritize the generation of data sets that reflect diverse societal contexts and that integrate data quality metrics for improved dataset quality and diversity.

Our data quality metrics include:

  • Analysis of language style, including query length, query similarity, and diversity of styles (see the toy sketch after this list).
  • Measurement across a wide range of societal and multicultural dimensions, leveraging datasets such as SeeGULL, SPICE, and the Societal Context Repository.
  • Measurement of alignment with Google’s generative AI policies and intended use cases.
  • Analysis of adversariality to ensure that we examine both explicit (the input is clearly designed to produce an unsafe output) and implicit (where the input is innocuous but the output is harmful) queries.
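As a toy illustration of the first metric above (not Google's internal tooling), the sketch below computes query-length statistics and uses TF-IDF cosine similarity as a simple stand-in for query similarity; a lower mean pairwise similarity suggests a more diverse test set.

```python
# Toy dataset-level diversity metrics: query length and mean pairwise
# TF-IDF cosine similarity. Illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

queries = [
    "How do I build a birdhouse?",
    "Ways to construct a birdhouse at home",
    "What's the weather in Paris today?",
]

lengths = [len(q.split()) for q in queries]
print("mean query length (words):", np.mean(lengths))

tfidf = TfidfVectorizer().fit_transform(queries)
sims = cosine_similarity(tfidf)
# Mean off-diagonal similarity: lower values indicate a more diverse set.
n = len(queries)
mean_sim = (sims.sum() - n) / (n * (n - 1))
print("mean pairwise similarity:", round(mean_sim, 3))
```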

One of our approaches to scaled data generation is exemplified in our paper on AI-Assisted Red Teaming (AART). AART generates evaluation datasets with high diversity (e.g., sensitive and harmful concepts specific to a wide range of cultural and geographic regions), steered by AI-assisted recipes to define, scope and prioritize diversity within an application context. Compared to some state-of-the-art tools, AART shows promising results in terms of concept coverage and data quality. Separately, we are also working with MLCommons to contribute to public benchmarks for AI Safety.


Adversarial testing and community insights

Evaluating model output with adversarial test sets allows us to identify critical safety issues prior to deployment. Our initial evaluations relied exclusively on human ratings, which resulted in slow turnaround times and inconsistencies due to a lack of standardized safety definitions and policies. We have improved the quality of evaluations by introducing policy-aligned rater guidelines to improve human rater accuracy, and are researching additional improvements to better reflect the perspectives of diverse communities. Additionally, automated test set evaluation using LLM-based auto-raters enables efficiency and scaling, while allowing us to direct complex or ambiguous cases to humans for expert rating.

Beyond testing with adversarial test sets, gathering community insights is vital for continuously discovering “unknown unknowns”. To provide high quality human input that is required to seed the scaled processes, we partner with groups such as the Equitable AI Research Round Table (EARR), and with our internal ethics and analysis teams to ensure that we are representing the diverse communities who use our models. The Adversarial Nibbler Challenge engages external users to understand potential harms of unsafe, biased or violent outputs to end users at scale. Our continuous commitment to community engagement includes gathering feedback from diverse communities and collaborating with the research community, for example during The ART of Safety workshop at the Asia-Pacific Chapter of the Association for Computational Linguistics Conference (IJCNLP-AACL 2023) to address adversarial testing challenges for GenAI.


Rater diversity in safety evaluation

Understanding and mitigating GenAI safety risks is both a technical and social challenge. Safety perceptions are intrinsically subjective and influenced by a wide range of intersecting factors. Our in-depth study on demographic influences on safety perceptions explored the intersectional effects of rater demographics (e.g., race/ethnicity, gender, age) and content characteristics (e.g., degree of harm) on safety assessments of GenAI outputs. Traditional approaches largely ignore inherent subjectivity and the systematic disagreements among raters, which can mask important cultural differences. Our disagreement analysis framework surfaced a variety of disagreement patterns between raters from diverse backgrounds, including disagreements with “ground truth” expert ratings. This paves the way to new approaches for assessing the quality of human annotation and model evaluations beyond the simplistic use of gold labels. Our NeurIPS 2023 publication introduces the DICES (Diversity In Conversational AI Evaluation for Safety) dataset that facilitates nuanced safety evaluation of LLMs and accounts for variance, ambiguity, and diversity in various cultural contexts.
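As a toy illustration of moving beyond gold labels (not the DICES methodology itself), one simple way to quantify per-item rater disagreement is the entropy of the label distribution:

```python
# Toy disagreement measure: entropy of the safety-label distribution for
# one item, instead of collapsing ratings into a single gold label.
import math
from collections import Counter

def label_entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

ratings = ["safe", "safe", "unsafe", "safe", "unsafe"]  # five raters, one item
print(round(label_entropy(ratings), 3))  # higher entropy = more disagreement
```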


Summary

GenAI has resulted in a technology transformation, opening possibilities for rapid development and customization even without coding. However, it also comes with a risk of generating harmful outputs. Our proactive adversarial testing program identifies and mitigates GenAI risks to ensure inclusive model behavior. Adversarial testing and red teaming are essential components of a safety strategy and must be conducted comprehensively. The rapid pace of innovation demands that we constantly challenge ourselves to find “unknown unknowns” in cooperation with our internal partners, diverse user communities, and other industry experts.

Source: Google AI Blog