Google Workspace Updates Weekly Recap – September 22, 2023

New updates 

There are no new updates to share this week. Please see below for a recap of published announcements. 


Previous announcements

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.


Find and install third-party add-ons directly within Google Meet 
You can now find, install, and use third-party applications, all without having to leave Google Meet. | Learn more about third-party add-ons for Google Meet.

Add Google Groups to spaces in Google Chat 
We’re introducing the ability for space managers and users with the permission to manage members to add Google Groups to a space. With this update, the group members are automatically added to a space and any changes to the group's membership, such as adding and removing members, are also automatically reflected in the space. | Learn more about Google Groups in spaces

Collaborate more seamlessly with live pointers in Google Slides 
To boost collaboration in Google Slides, we’re introducing live pointers, a new feature that allows you and your colleagues to see each other’s mouse pointers in real time. With this update, co-creators can easily point out specific text or visual elements within a slide to highlight important information and content. | Learn more about live pointers in Slides.

Pair your video tile in Google Meet to improve accessibility for users with language interpreters 
We’re introducing tile pairing for Google Meet, which will allow you to pair your video tile with another meeting participant's tile. Once you pair your tile, other meeting participants will see both tiles shown next to each other. Both pairing partners will have their borders outlined in blue when speaking. Tile pairing will be indicated in the meeting captions as well. | Learn more about pairing tiles in Google Meet.

Differentiate messages better with additional modernizations in Google Chat 
We’re introducing message bubbles to enable users to more easily differentiate incoming versus outgoing messages in the Chat message stream. | Learn more about message bubbles in Chat

Turn Q&As on or off for Google Meet livestream viewers 
Earlier this year, we announced that meeting hosts can now enable Q&A and poll features, which previously were only offered in traditional Meet meetings. Beginning this week, meeting hosts can turn Q&A on and off for livestreams. | Learn more about Q&As in Google Meet.

Additional improvements for monitoring Google Meet hardware issues in the Admin console
Recently, we announced the ability to detect and monitor several additional Google Meet hardware issues from the Admin console. Now that ChromeOS M108 has rolled out to Meet hardware devices, we’re sharing an update on the rollout of some of those features. | Learn more about improvements for monitoring Google Meet hardware issues


Completed rollouts

The features below completed their rollouts to Rapid Release domains, Scheduled Release domains, or both. Please refer to the original blog posts for additional details.


Rapid Release Domains:

Additional improvements for monitoring Google Meet hardware issues in the Admin console

What’s changing

Recently, we announced the ability to detect and monitor several additional Google Meet hardware issues from the Admin console. Now that ChromeOS M108 has rolled out to Meet hardware devices, we’re sharing an update on the rollout of some of those features, including new options to fine-tune your alerts: 

  • Missing display issues began rolling out in the Admin console on September 21 and may take up to 10 days to go into effect on all domains. 
  • You will be able to select which specific peripheral issue types you want to be alerted about from a new Admin console setting, which also began rolling out on September 21 and may take up to 10 days to go into effect on all domains. If you don’t want to receive display alerts (or alerts for any other type of peripheral issue), you can opt out using this new setting. Note that the setting can be modified as soon as it appears in your Admin console, but it won’t actually go into effect until October 11. 
  • Unless you’ve turned them off using the aforementioned setting, you will begin receiving email alerts for missing display issues on October 11. Note that it may take up to 10 days for settings to go into effect on all domains.
Monitoring Google Meet hardware issues, like devices going offline or missing cameras, is crucial to ensuring a smooth meeting experience for your users. We hope this update continues to make it easier and faster for admins to be alerted of issues in their fleet and quickly remedy them. See our original announcement for more information. 


Getting started 

  • Admins: 
    • To view these new issues, you can monitor the status of your peripherals in the Google Meet hardware Admin console.
    • Missing display alerts will begin being sent by email or SMS on or soon after October 11.
    • The new Peripheral issue types setting will go into effect on or soon after October 11. If you want to disable any specific peripheral issue types, be sure to change it ahead of this date. 

Rollout 

  • Missing display issues in the Admin console and peripheral issue type setting: gradual rollout starting on September 21, 2023 (may take up to 10 days for feature visibility on all domains).
  • Configurability of peripheral email alerts by issue type: goes into effect on October 11, 2023; it may take up to 10 days for the setting to apply across all domains.


Availability

  • Available to all Google Workspace customers with Google Meet hardware devices 

Resources 

Chrome Dev for Desktop Update

The Dev channel has been updated to 119.0.6020.3 for Windows, Mac and Linux.

A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Prudhvi Bommana
Google Chrome

Supporting young indigenous journalists through the Te Rito Journalism Training Camp

Image: Te Rito cadets engaging in a korero during the training camp.

Indigenous people are the first storytellers of any land - from Waitangi to Rakiura Stewart Island. It’s important that their stories are told to nurture communities and record histories, and this is critical in our news ecosystem too. That’s why the Te Rito Journalism Project was set up: to help address today’s shortage of Māori and Pasifika journalists and of cultural awareness in newsrooms.

For the second year, the Google News Initiative has supported Te Rito with a digital skills training camp for its cadets, to bring the rich history of indigenous storytelling into the smartphones, laptops and desktops of news audiences across Aotearoa and beyond.

In August last year, we hosted the first Te Rito Journalism Training Camp, where 23 cadets from all over the motu, representing multiple ethnicities, languages, and the rainbow and disability communities, took part in training focused on digital skills and the fundamental principles of digital tools and reporting. 

This year, for the first time, Te Rito included young Indigenous journalists from Australia, supporting critical First Nations storytelling.

Across four days, 24 cadets identifying as First Nations, Māori or Pasifika learnt the fundamentals of Indigenous journalism. A News Lab Teaching Fellow taught skills in recognising and verifying fake images and information and in engaging audiences through digital storytelling, while First Nations editors led sessions on Indigenous storytelling and building resilience, including how to raise issues of conflict when they arise and how to deal with trauma. The cadets were also trained on Pinpoint, an AI-powered research tool from Google that can analyse large numbers of documents.

Image: Te Rito cadets participated in training on digital skills and storytelling.

Te Rito was established by New Zealand Media and Entertainment (NZME), Whakaata Māori, Warner Bros. Discovery ANZ and the Pacific Media Network, with support from NZ On Air's Public Interest Journalism Fund.

News depends on the people who tell the stories. The journalism and broadcast industry will have much to gain from voices across diverse backgrounds that are representative of all communities in Aotearoa and Australia.


Stable Channel Update for Desktop

The Stable channel has been updated to 117.0.5938.92 for Windows, Mac and Linux, which will roll out over the coming days/weeks. A full list of changes in this build is available in the log.


Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Prudhvikumar Bommana
Google Chrome

Distilling step-by-step: Outperforming larger language models with less training data and smaller model sizes

Large language models (LLMs) have enabled a new data-efficient learning paradigm wherein they can be used to solve unseen new tasks via zero-shot or few-shot prompting. However, LLMs are challenging to deploy for real-world applications due to their sheer size. For instance, serving a single 175 billion parameter LLM requires at least 350GB of GPU memory using specialized infrastructure, not to mention that today's state-of-the-art LLMs are composed of over 500 billion parameters. Such computational requirements are out of reach for many research teams, especially for applications that require low-latency performance.
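
The 350GB figure follows from simple arithmetic over the parameter count. Here is a rough back-of-the-envelope sketch, assuming the weights alone are held in 16-bit precision and ignoring optimizer state, activations, and KV caches, which add more on top:

  # Rough estimate of the GPU memory needed just to hold the weights of a
  # 175B-parameter model in 16-bit precision (an illustrative assumption).
  params = 175e9            # 175 billion parameters
  bytes_per_param = 2       # bf16 / fp16 weights
  weight_memory_gb = params * bytes_per_param / 1e9
  print(f"{weight_memory_gb:.0f} GB")  # -> 350 GB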

To circumvent these deployment challenges, practitioners often choose to deploy smaller specialized models instead. These smaller models are trained using one of two common paradigms: fine-tuning or distillation. Fine-tuning updates a pre-trained smaller model (e.g., BERT or T5) using downstream manually-annotated data. Distillation trains the same smaller models with labels generated by a larger LLM. Unfortunately, to achieve comparable performance to LLMs, fine-tuning methods require human-generated labels, which are expensive and tedious to obtain, while distillation requires large amounts of unlabeled data, which can also be hard to collect.
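
Put in code terms, the difference between the two paradigms is where the training targets come from. The following is a minimal sketch; the function names and the callable teacher_llm are illustrative placeholders, not a specific library API:

  # Fine-tuning: targets are human-written labels paired with the inputs.
  def finetuning_data(human_labeled: list[tuple[str, str]]) -> list[tuple[str, str]]:
      return [(x, human_label) for x, human_label in human_labeled]

  # Distillation: targets are labels generated by a larger teacher LLM
  # over a (typically large) pool of unlabeled inputs.
  def distillation_data(unlabeled: list[str], teacher_llm) -> list[tuple[str, str]]:
      return [(x, teacher_llm(x)) for x in unlabeled]

  # Either way, the resulting (input, target) pairs are used to train a small
  # model such as BERT or T5 on the downstream task.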

In “Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes”, presented at ACL 2023, we set out to tackle this trade-off between model size and training data collection cost. We introduce distilling step-by-step, a simple new mechanism for training smaller task-specific models that outperform few-shot prompted LLMs while using much less training data than standard fine-tuning or distillation approaches require. We demonstrate that the distilling step-by-step mechanism enables a 770M parameter T5 model to outperform the few-shot prompted 540B PaLM model using only 80% of the examples in a benchmark dataset, which amounts to a more than 700x model size reduction with much less training data than standard approaches require.

While LLMs offer strong zero-shot and few-shot performance, they are challenging to serve in practice. On the other hand, traditional ways of training small task-specific models require a large amount of training data. Distilling step-by-step provides a new paradigm that reduces both the deployed model size and the amount of data required for training.


Distilling step-by-step

The key idea of distilling step-by-step is to extract informative natural language rationales (i.e., intermediate reasoning steps) from LLMs, which can in turn be used to train small models in a more data-efficient way. Specifically, natural language rationales explain the connections between the input questions and their corresponding outputs. For example, when asked, “Jesse's room is 11 feet long and 15 feet wide. If she already has 16 square feet of carpet, how much more carpet does she need to cover the whole floor?”, an LLM can be prompted with the few-shot chain-of-thought (CoT) prompting technique to provide intermediate rationales, such as “Area = length * width. Jesse’s room has 11 * 15 square feet.”, which better explain the connection from the input to the final answer, “(11 * 15) - 16” (i.e., 149 square feet). These rationales can contain relevant task knowledge, such as “Area = length * width”, that might otherwise require large amounts of data for small models to learn. We utilize these extracted rationales as additional, richer supervision to train small models, alongside the standard task labels.

Overview of distilling step-by-step: First, we utilize CoT prompting to extract rationales from an LLM. We then use the generated rationales to train small task-specific models within a multi-task learning framework, where we prepend task prefixes to the input examples and train the model to output differently based on the given task prefix.

Distilling step-by-step consists of two main stages. In the first stage, we leverage few-shot CoT prompting to extract rationales from LLMs. Specifically, given a task, we prepare few-shot exemplars in the LLM input prompt, where each exemplar is composed of a triplet: (1) input, (2) rationale, and (3) output. Given the prompt, an LLM is able to mimic the triplet demonstration to generate the rationale for any new input. For instance, in a commonsense question answering task, given the input question “Sammy wanted to go to where the people are. Where might he go? Answer Choices: (a) populated areas, (b) race track, (c) desert, (d) apartment, (e) roadblock”, distilling step-by-step provides the correct answer to the question, “(a) populated areas”, paired with a rationale that provides a better connection from the question to the answer, “The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people.” By providing CoT examples paired with rationales in the prompt, we take advantage of LLMs’ in-context learning ability, which allows them to output corresponding rationales for future unseen inputs.

We use few-shot CoT prompting, which contains both an example rationale (highlighted in green) and a label (highlighted in blue), to elicit rationales from an LLM on new input examples. The example is from a commonsense question answering task.
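
To make the first stage concrete, here is a minimal Python sketch of how few-shot exemplars made of (input, rationale, output) triplets can be assembled into a CoT prompt that elicits a rationale for a new input. The exemplar text, the prompt format, and the call_llm placeholder are illustrative assumptions, not the exact prompts or API used in the paper:

  # Stage 1 sketch: build a few-shot CoT prompt from (input, rationale, output)
  # triplets, then parse the LLM's completion into a rationale and a label.
  EXEMPLARS = [
      {
          "input": "Sammy wanted to go to where the people are. Where might he go? "
                   "Answer Choices: (a) populated areas, (b) race track, (c) desert, "
                   "(d) apartment, (e) roadblock",
          "rationale": "The answer must be a place with a lot of people. Of the above "
                       "choices, only populated areas have a lot of people.",
          "output": "(a) populated areas",
      },
      # ... more exemplar triplets for the task ...
  ]

  def build_cot_prompt(new_input: str) -> str:
      """Concatenate exemplar triplets, then ask for a rationale and answer for new_input."""
      parts = [
          f"Q: {ex['input']}\nA: {ex['rationale']} So the answer is {ex['output']}."
          for ex in EXEMPLARS
      ]
      parts.append(f"Q: {new_input}\nA:")
      return "\n\n".join(parts)

  def parse_completion(completion: str) -> tuple[str, str]:
      """Split an LLM completion of the form '<rationale> So the answer is <label>.'"""
      rationale, _, label = completion.rpartition("So the answer is")
      return rationale.strip(), label.strip(" .")

  # Usage (call_llm is a stand-in for whatever LLM endpoint you have access to):
  # rationale, label = parse_completion(call_llm(build_cot_prompt(question)))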

After the rationales are extracted, in the second stage, we incorporate the rationales in training small models by framing the training process as a multi-task problem. Specifically, we train the small model with a novel rationale generation task in addition to the standard label prediction task. The rationale generation task enables the model to learn to generate the intermediate reasoning steps for the prediction, and guides the model to better predict the resultant label. We prepend task prefixes (i.e., [label] and [rationale] for label prediction and rationale generation, respectively) to the input examples for the model to differentiate the two tasks.
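
To make the second stage concrete, here is a minimal Python sketch of how each (input, label, rationale) triple can be expanded into two seq2seq training examples distinguished by the [label] and [rationale] task prefixes, and how the two per-task losses can be combined. The loss weighting and the surrounding training loop are illustrative assumptions rather than the paper's exact implementation:

  # Stage 2 sketch: frame training as multi-task learning over two tasks that
  # share one small model, distinguished only by a task prefix on the input.

  def make_multitask_examples(x: str, label: str, rationale: str) -> list[tuple[str, str]]:
      """Expand one example into (source, target) pairs for both tasks."""
      return [
          ("[label] " + x, label),          # standard label-prediction task
          ("[rationale] " + x, rationale),  # auxiliary rationale-generation task
      ]

  def multitask_loss(label_loss: float, rationale_loss: float, lam: float = 1.0) -> float:
      """Weighted sum of the per-task losses; lam = 1.0 is an illustrative default."""
      return label_loss + lam * rationale_loss

  # At serving time only the "[label]" prefix is used, so the deployed model
  # predicts labels directly and does not need to generate rationales.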


Experimental setup

In the experiments, we consider a 540B PaLM model as the LLM. For task-specific downstream models, we use T5 models. For CoT prompting, we use the original CoT prompts when available and curate our own examples for new datasets. We conduct the experiments on four benchmark datasets across three different NLP tasks: e-SNLI and ANLI for natural language inference; CQA for commonsense question answering; and SVAMP for arithmetic math word problems. We include two sets of baseline methods. For comparison to few-shot prompted LLMs, we compare to few-shot CoT prompting with a 540B PaLM model. For comparison to standard task-specific model training, we compare to both standard fine-tuning and standard distillation. In this blog post, we focus on the comparisons to standard fine-tuning for illustration purposes.


Less training data

Compared to standard fine-tuning, the distilling step-by-step method achieves better performance using much less training data. For instance, on the e-SNLI dataset, we achieve better performance than standard fine-tuning when using only 12.5% of the full dataset (shown in the upper left quadrant below). Similarly, we achieve a dataset size reduction of 75%, 25% and 20% on ANLI, CQA, and SVAMP.

Distilling step-by-step compared to standard fine-tuning using 220M T5 models on varying sizes of human-labeled datasets. On all datasets, distilling step-by-step is able to outperform standard fine-tuning trained on the full dataset while using far fewer training examples.


Smaller deployed model size

Compared to few-shot CoT prompted LLMs, distilling step-by-step achieves better performance using much smaller model sizes. For instance, on the e-SNLI dataset, we achieve better performance than 540B PaLM by using a 220M T5 model. On ANLI, we achieve better performance than 540B PaLM by using a 770M T5 model, which is over 700X smaller. Note that on ANLI, the same 770M T5 model struggles to match PaLM’s performance using standard fine-tuning.

We perform distilling step-by-step and standard fine-tuning on varying sizes of T5 models and compare their performance to LLM baselines, i.e., Few-shot CoT and PINTO Tuning. Distilling step-by-step is able to outperform LLM baselines by using much smaller models, e.g., over 700× smaller models on ANLI. Standard fine-tuning fails to match LLM’s performance using the same model size.


Distilling step-by-step outperforms few-shot LLMs with smaller models using less data

Finally, we explore the smallest model sizes and the least amount of data for distilling step-by-step to outperform PaLM’s few-shot performance. For instance, on ANLI, we surpass the performance of the 540B PaLM using a 770M T5 model. This smaller model only uses 80% of the full dataset. Meanwhile, we observe that standard fine-tuning cannot catch up with PaLM’s performance even using 100% of the full dataset. This suggests that distilling step-by-step simultaneously reduces the model size as well as the amount of data required to outperform LLMs.

We show the minimum size of T5 models and the least amount of human-labeled examples required for distilling step-by-step to outperform the LLM’s few-shot CoT, found by a coarse-grained search. Distilling step-by-step outperforms few-shot CoT not only with much smaller models, but also with far fewer training examples than standard fine-tuning requires.

Conclusion

We propose distilling step-by-step, a novel mechanism that extracts rationales from LLMs as informative supervision for training small, task-specific models. We show that distilling step-by-step reduces both the amount of training data required to curate task-specific smaller models and the model size required to achieve, and even surpass, a few-shot prompted LLM’s performance. Overall, distilling step-by-step presents a resource-efficient paradigm that tackles the trade-off between model size and required training data.


Availability on Google Cloud Platform

Distilling step-by-step is available for private preview on Vertex AI. If you are interested in trying it out, please contact [email protected] with your Google Cloud Project number and a summary of your use case.


Acknowledgements

This research was conducted by Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. Thanks to Xiang Zhang and Sergey Ioffe for their valuable feedback.

Source: Google AI Blog


Studio Bot expands to 170+ international markets!

Posted by Isabella Fiterman – Product Marketing Manager, and Sandhya Mohan – Product Manager

At this year’s Google I/O, one of the most exciting announcements for Android developers was the introduction of Studio Bot, an AI-powered coding assistant that can be accessed directly in Android Studio. Studio Bot can help you write high-quality Android apps faster by generating code for your app, answering your questions, and finding relevant resources, all without ever having to leave Android Studio. After our announcement, you told us how excited you were about this AI-powered coding companion, and those of you outside of the U.S. were eager to get your hands on it. We heard your feedback and have expanded Studio Bot to over 170 countries and territories in the canary release channel of Android Studio.

Ask Studio Bot your Android development questions

Studio Bot is powered by artificial intelligence and can understand natural language, so you can ask development questions in your own words. While it’s now available in most countries, it is designed to be used in English. You can enter questions in Studio Bot’s chat window, ranging from simple, open-ended ones to specific problems that you need help with. Here are some examples of the types of queries it can answer:

How do I add camera support to my app?


I want to create a Room database.

Can you remind me of the format for javadocs?

What's the best way to get location on Android?

Studio Bot remembers the context of the conversation, so you can also ask follow-up questions, such as “Can you give me the code for this in Kotlin?” or “Can you show me how to do it in Compose?”


Moving image showing a user having a conversation with Studio Bot

Designed with privacy in mind

Studio Bot was designed with privacy in mind. You don’t need to send your source code to take advantage of Studio Bot’s features. By default, Studio Bot’s responses are purely based on conversation history, and you control whether you want to share additional context or code for customized responses. Much like our work on other AI projects, we stick to a set of AI Principles that hold us accountable.

Focus on quality

Studio Bot is still in its early days, and we suggest validating its responses before using them in a production app. We’re continuing to improve its Android development knowledge base and quality of responses so that it can better support your development needs in the future. You can help us improve Studio Bot by trying it out and sharing your feedback on its responses using the thumbs up and down buttons.

Try it out!

Download the latest canary release of Android Studio and read more about how you can get started with Studio Bot. You can also sign up to receive updates on Studio Bot as the experience evolves.