The YouTube Effect: New ways to reach engaged audiences

People are gravitating toward content they relate to, whether they’re watching in their living rooms or scrolling short-form video on their phones. And they’re finding it all on YouTube, where creators are constantly pushing creative boundaries to stay relevant with today’s viewers, shaping and reflecting culture as it happens. We’re committed to helping them continue to do this. In fact, last month, we shared major updates in how we support creators — including that we’ve paid creators, artists and media companies over $50 billion over the past three years.

For advertisers, investing in YouTube means reaching people in the moments and places where they’ve sought out content and really connect with what they’re watching. The unique dynamic between creators and viewers creates a halo effect for brands, or what we call the YouTube Effect.

This week at Advertising Week New York, we’re announcing new ways you can tap into the YouTube Effect across streaming, shopping and audio.

New ways to drive awareness in key moments

Linear TV watching continues to decline. And now, even streaming platforms are struggling to hold onto loyal viewers. According to Nielsen, YouTube recently became the co-leader in streaming watch time.

People have always connected in front of the TV screen, but YouTube gives them the unique chance to bond over shared passions — like watching live-streamed concerts, fitness classes or even religious ceremonies together. They feel a similar connection to the ads they get, too. In a new study with Latitude, 59% of respondents agreed that ads on YouTube are more relevant than ads on linear TV or other streaming apps.

"59% of respondents agreed that ads on YouTube are more relevant than ads on linear TV or other streaming apps."

To help you reach these engaged viewers, we’re launching a new offering called Moment Blast. Designed for brands looking to raise awareness during key moments — like major sporting events, movie releases or product launches — Moment Blast gives advertisers prime positioning on YouTube Select content on connected TVs (CTV) and other devices, plus a Branded Title Card and optional Masthead placement.

Help informed buyers shop what they love

As e-commerce continues to rise, so does the informed buyer. New research from the Social Commerce and Video Study, conducted in partnership with TalkShoppe, shows that having trusted information and confidence in purchases is more important to shoppers than ever. Respondents ranked YouTube number one against other video services and social media platforms for finding honest and detailed information. In short, shoppers have more confidence in the products they find on YouTube.

"Respondents ranked YouTube number one against other video services and social media platforms in finding honest and detailed information."

Today, we’re making it easier for people to shop what they love on YouTube. Advertisers can already use product feeds on YouTube, which turn ads into a virtual storefront. We've recently expanded product feeds to Shorts and found that, on average, Video action campaigns with product feeds saw an over 70% increase in conversions on Shorts compared to campaigns without them.

We’re now expanding product feeds to Discovery ads to help you scale your social media creative and reach even more engaged viewers. Soon, product feeds will also include local offers, allowing brands to show real-time availability for products in their Google Merchant Center so people can find the most convenient place to buy. Creators will also be able to transform their content into virtual storefronts; this quarter, more creators will have the ability to tag products in their videos and Shorts.

You’ll see these features, some of your favorite creators and more at the second annual YouTube Shopping holiday event, “From YouTube to You,” kicking off on November 10. This year’s event will include livestreams, videos and Shorts featuring brands and retailers like Ulta Beauty and TULA Skincare.

Reach music lovers and podcast listeners

YouTube has long been a destination for music lovers to find official albums, music videos, live performances and more. And now, according to Edison, YouTube is the second most popular destination for listening to podcasts.

To help you reach these audiences, Audio ads are now globally available to buy in Google Ads and Display & Video 360. Audio ads are designed to reach people on audio surfaces and in listening-first states.

Podcast targeting is also now available globally. With Podcast targeting, brands and agencies can specifically reach podcast listeners.

From the big screen to the mobile screen, CTV to Shorts, immersive video to audio-first formats, YouTube is the only platform that can help advertisers reach viewers wherever they are.

Enhanced menus in Google Slides and Drawings improve findability of key features

What’s changing 

We’re updating the menus in Google Slides and Google Drawings to make it easier to locate the most commonly used features. In this update you’ll notice: 
  • Shortened menus for better navigation 
  • Reorganization for more intuitive feature location 
  • Prominent icons for faster recognition 

This new design improves findability of key features, making it quicker and easier to use Slides and Drawings. 


Getting started 

  • Admins: There is no admin control for this feature. 
  • End users: This feature will be ON by default and cannot be disabled. Use the menus as you would regularly. Visit the Help Center to learn more about using Google Slides. 

Rollout pace 


Availability 

  • Available to all Google Workspace customers, as well as legacy G Suite Basic and Business customers 
  • Available to users with personal Google Accounts 

Resources 

Dev Library Letters: 14th Issue

Posted by Garima Mehra, Program Manager

‘Google Dev Library letters’ is curated to bring you some of the best projects developed with Google tech that have been submitted to the Dev Library platform. We hope this brings you the inspiration you need for your next project!


Android



Image-compressor 
by Vinod Baste

Check out Vinod’s Android Image compress library that helps reduce the size of the image by 90% without losing any of its pixels.


SealedX 
by Jaewoong Eum

Learn how to auto-generate extensive sealed classes and interfaces for Android and Kotlin.

Flutter



GitHub Actions to deploy Flutter Web to gh-pages 
by Sai Rajendra Immadi

Tired of manually deploying the app every time? Or do you want to deploy your Flutter web applications to gh-pages? Use this blog as your guide.



Double And Triple Dots in Flutter 
by Lakshydeep Vikram

Learn why double and triple dots are used in Flutter and where to use them.



Machine Learning



Nystromformer 
by Rishit Dagli

Learn how to use the Nystrom method to approximate standard self-attention. 


Google Cloud



by Ezekias Bokove

Learn how to set up a notification system for Cloud Run services. 



Switch to GCP for cost savings and better performance
by Gaurav Madan

Learn why architects who deal with complex application design and use well-known Google services should consider Google Cloud Platform. 




"The Google community includes people with diverse backgrounds. No matter what an individual circumstance is, the platform should support anyone to explore and be creative. We encourage authors to boldly consider diverse backgrounds and to be inclusive when authoring."

Vinesh Prasanna M

Customer Engineer | Google Cloud 





"Authoring a good code sample is hard. The difficulty comes from the additional pieces you need to add to your respository to keep the code sample fresh and appealing to your developers."

Brett Morgan

Developer Relations Engineer | Flutter







Want to read more? 
Check out the latest projects and community-authored content by visiting Google Dev Library
Submit your projects to showcase your work and inspire developers!


Encouraging Working Location coverage across organizations

What’s changing

Starting today, admins have access to a new tool that aims to drive Working Location usage across their organizations. This setting adds a customizable banner to users’ Calendar either encouraging or requiring them to set up their working location. 


By increasing the usage of working location, admins and colleagues will have better context for location planning, meeting room management, preparing meetings for virtual and in-room attendees, and more.


Example of a default message on a non-dismissible banner 


Example of a custom message on a dismissible banner


Who’s impacted 

Admins 


Why it’s important 

This feature furthers our effort to enable better planning around in-person collaboration and meeting and event coordination, especially in a hybrid work environment. Additionally, the banner will encourage users to take advantage of the many enhancements to Working Location capabilities over the last few months. 

Additional details 

The Calendar banners are flexible and easy to set up: they can be customized to include a message or a link to a landing page, and admins can determine how long they want banners to appear. 


Getting started 


Rollout pace 


Availability 

  • Available to Google Workspace Business Plus, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, Education Standard, and the Teaching and Learning Upgrade, as well as legacy G Suite Business customers 
  • Not available to Google Workspace Essentials, Business Starter, Business Standard, Enterprise Essentials, Nonprofits, Frontline customers as well as legacy G Suite Basic customers 
  • Not available to users with personal Google Accounts 

Resources 

Workspace Admins are now notified when Label editing is restricted by set rules

What’s changing

In addition to a recent feature allowing admins to programmatically manage and apply Drive Labels using new API functionality, we’ve added a new Label Manager UI feature showing which rules a label is used within. 

When labels are published, their semantic meaning can be leveraged to enforce rules, such as a DLP policy based on the presence of a label. Labels used this way are locked to prevent the possibility of breaking a related rule, and to make it easier to use labels to enforce rules, we've added warnings and feedback to the Label Manager UI. 

Specifically, a message identifying and linking the label to the exact rule(s) will now appear in the Label Manager to ensure admins understand why label modification is disabled. 

Label locking prevents admins from inadvertently renaming, deleting, or disabling a Label, which could result in policy breakage. 


Getting started 

  • Admins: Drive Labels must be turned ON for your organization to use this feature. Visit the Help Center to learn more about managing Drive Labels. Once labels are enabled for your organization, Developers can head over to the API Documentation to get started. 
  • End users: There is no end user setting for this feature. 

Rollout pace 


Availability 

  • Available to Google Workspace Essentials, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Plus, Education Standard customers 
  • Not available to Business Starter, Education Fundamentals, the Teaching and Learning Upgrade, Nonprofits, and Frontline, as well as legacy G Suite Basic and Business customers 
  • Not available to users with personal Google Accounts 

Resources 

Show what you know with the new ChromeOS administrator certification

Around the world, ChromeOS admins are hard at work, collectively managing 50 million students and educators using Chromebooks and other ChromeOS devices. Some of them are looking after huge fleets across entire school districts, and others are just starting out.

Whatever size or type of organization they support as ChromeOS admins, we’re here to make life a little easier for them with a range of new policies and updates. We’re also announcing a new Professional ChromeOS Administrator certification, to recognize and reward their ChromeOS expertise.

Configure policies using Google Groups

We’re always adding new policies in Google Admin Console, and now have more than 600 to help customize and curate environments for different schools’ and organizations’ unique needs.

To make it easier to configure apps and extension permissions, we’ve introduced group-based policies for new and existing Google groups. Now if an admin needs to install an app for a specific set of users — who may or may not belong to different organizational units — they can simply add them to a group instead of moving them into a new organizational unit.

Here’s how it could play out in a school. Imagine a small group of students needs access to an app for Science Club. These students are from different grades, so they belong to different organizational units — let’s call them Third Grade, Fourth Grade and Fifth Grade. Instead of setting the app policy as “Allow Install” for all kids in those grades (and then hoping the right kids will install the app themselves), you can create a group-based policy that sets the science app to “Force Install” onto the devices of students in Science Club.

On the flip side, if a group of students were getting distracted by an app or abusing it, you could create a custom group to block their access — without having to disable it for all students.

Group-based policies not only reduce the time and effort involved in configuring apps and extensions, but also help avoid the forced install of apps to entire organizational units, saving valuable disk space and network bandwidth.

Gif showing the benefits of group based policy in Google Admin Console

Become a certified ChromeOS administrator

We’ve long been asked about creating a certification for proficiency in administering ChromeOS, much like our certification for Google Workspace admins. Certifications are not only great for training, but also help with career development and progression and establish professional credibility. According to the Global Knowledge IT Skills and Salary Report, certified IT professionals earn more than non-certified peers, and the more certifications, the higher the salary. Today, we’re introducing a new Professional ChromeOS Administrator certification. It’s a great opportunity for people to demonstrate their skills as ChromeOS IT admins and earn a badge that proves proficiency to peers and prospective employers.

Designed for enterprise and education systems administrators, and junior engineers with at least one year of holistic IT infrastructure experience, the three-hour exam has 50 multiple choice questions and 30 hands-on lab questions. Test-takers have 90 minutes to complete each section. The exam assesses the ability to perform actions from Google Admin Console, including configuring ChromeOS policies and understanding the tenets of ChromeOS.

For the next 12 months, to help organizations build highly skilled and effective teams, Google is waiving the $125 fee and offering the Professional ChromeOS Administrators exam for free to all enterprise and education IT admins. The exam is English only to start, and will be offered in Japanese in early 2023.

Find more information about repairing devices

With 40 million students and educators using Chromebooks, it can be challenging for school IT administrators to find information about which devices they can repair. As part of the Chromebook repair program, we’re partnering with companies like Acer and Lenovo, and now CTL, to spotlight more Chromebooks that are repairable. On our site, schools can easily identify which Chromebooks have commonly repaired components, and find information on how to get them repaired. We’ll continue to expand the program globally soon.

Google Publisher Tags add official TypeScript type definitions

Today we're happy to announce the release of official TypeScript type definitions for Google Publisher Tags (GPT)!

Why TypeScript?

According to a recent State of JS developer survey, nearly 70% of developers regularly use TypeScript in some capacity, up from 60% the year before. As this segment of the community continues to grow, we are committed to providing the best experience possible for those working with TypeScript and GPT. We believe this is important not just because TypeScript is popular, but because it helps developers validate the correctness of their code and provides a number of quality of life improvements that make working with GPT more delightful.

How we got here

Until now, a number of community-led projects such as @types/doubleclick-gpt and @types/googletag have provided unofficial GPT type definitions. While these projects have done a great job of making our API more accessible to TypeScript developers, manually curated type definitions inevitably lag behind changes made to GPT, leading to those definitions sometimes being out of date. To address this, we've updated our release process to automatically generate type definitions from internal source code and sync changes to our own GitHub repository and the DefinitelyTyped project. This ensures that our official definitions are always up to date with the most recently released versions of GPT.

Try it and let us know what you think

For users new to TypeScript, we've published a TypeScript and Google Publisher Tags guide that covers the basics and provides a demo of the new type definitions in action. For those already familiar who want to try the new definitions right away, they can be installed via NPM by running:

npm install --save-dev @types/google-publisher-tag

If you'd like to make a suggestion, report a bug, or leave any other feedback about this new offering, feel free to open an issue on our GitHub issue tracker.

Google Workspace Updates Weekly Recap – October 14, 2022

New updates 


There are no new updates to share this week. Please see below for a recap of published announcements. 


Previous announcements


The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.



In-room meeting participants can now join breakout rooms 
When using Google Meet Hardware devices, meeting hosts can now assign conference rooms to breakout rooms. | Available to Google Workspace Essentials, Business Standard, Business Plus, Enterprise Starter, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Plus, the Teaching and Learning Upgrade, Frontline, and Nonprofits customers only. | Learn more


Transcribe speech during Google Meet calls into a Google Doc 
You can now transcribe a Google Meet video meeting into a Google Doc. The transcribed file is saved in the host’s “Meet Recordings” folder in Google Drive, similar to meeting recordings. | Available to Google Workspace Business Standard, Business Plus, Enterprise Starter, Enterprise Standard, Enterprise Plus, Education Plus, and the Teaching and Learning Upgrade customers only. | Learn more


Use SIP Link to link phone numbers from local carriers to Google Voice 
For Google Voice Standard and Premier customers, admins can now connect a Session Initiation Protocol (SIP) trunk with Voice. This allows phone numbers (PSTN services) from local carriers to be used for Google Voice through a secure set of certified Session Border Controllers (SBCs), such as Audiocodes, Cisco, Oracle, and Ribbon. | Available with Voice Standard and Voice Premier licenses only. | Learn more


Preview and interact with files using smart chips in Google Sheets 
As an extension of smart canvas, you can now add Google Drive files directly into a Google Sheet as a smart chip. | Learn more


Expanding smart chips to include events in Google Sheets 
In addition to the recent announcement of adding files to Google Sheets using smart chips, we're also making it easier for you to quickly insert Calendar events into Sheets. | Learn more


Join or start a meeting directly from Jamboard on the web to kickstart collaboration 
We’re expanding interoperability with Google Meet and Jamboard with the option to join or start a meeting directly from Jamboard on the web. This makes it easier for you to seamlessly present your jam and start collaborating. | Learn more


Data loss prevention for Google Chat now generally available 
Over the next several weeks, data loss prevention (DLP) rules for Google Chat will become generally available for select Google Workspace editions. Data protection rules for Chat help admins and security experts build a stronger framework around sensitive data to prevent personal or proprietary information from ending up in the wrong hands. | Learn more

Improve your visibility in Google Meet video calls
Google Meet can now automatically frame your video before joining a meeting to help ensure equal visibility for all participants. The automatic framing happens only once, so there are no motion distractions that can divert attention from the content of the meeting. | Available to Google Workspace Business Standard, Business Plus, Enterprise Essentials, Enterprise Starter, Enterprise Standard, Enterprise Plus, Education Plus, Education Teaching and Learning Upgrade, and Workspace Individual customers with eligible devices. Also available to Google One subscribers with 2TB or more storage space with eligible devices. | Learn more

For a recap of announcements in the past six months, check out What’s new in Google Workspace (recent releases).

UL2 20B: An Open Source Unified Language Learner

Building models that understand and generate natural language well is one of the grand goals of machine learning (ML) research and has a direct impact on building smart systems for everyday applications. Improving the quality of language models is a key target for researchers to make progress toward such a goal.

Most common paradigms to build and train language models use either autoregressive decoder-only architectures (e.g., PaLM or GPT-3), where the model is trained to predict the next word for a given prefix phrase, or span corruption-based encoder-decoder architectures (e.g., T5, ST-MoE), where the training objective is to recover the subset of words masked out of the input. On the one hand, T5-like models perform well on supervised fine-tuning tasks, but struggle with few-shot in-context learning. On the other hand, autoregressive language models are great for open-ended generation (e.g., dialog generation with LaMDA) and prompt-based learning (e.g., in-context learning with PaLM), but may perform suboptimally on fine-tuning tasks. Thus, there remains an opportunity to create an effective unified framework for pre-training models.

In “Unifying Language Learning Paradigms”, we present a novel language pre-training paradigm called Unified Language Learner (UL2) that improves the performance of language models universally across datasets and setups. UL2 frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers that samples from a varied set of such objectives, each with different configurations. We demonstrate that models trained using the UL2 framework perform well in a variety of language domains, including prompt-based few-shot learning and models fine-tuned for downstream tasks. Additionally, we show that UL2 excels in generation, language understanding, retrieval, long-text understanding and question answering tasks. Finally, we are excited to publicly release the checkpoints for our best-performing UL2 20 billion parameter model.

Background: Language Modeling Objectives and Architectures
Common objective functions for training language models can mostly be framed as learning data transformations that map inputs to targets. The model is conditioned on different forms of input to predict target tokens. To this end, different objectives utilize different properties of the inputs.

The standard Causal Language modeling objective (CausalLM) is trained to predict full sequence lengths and so only recognizes tokens in the target output. The prefix language modeling objective (PrefixLM) modifies this process by randomly sampling a contiguous span of k tokens from the given tokenized text to form the input of the model, referred to as the “prefix”. The span corruption objective masks contiguous spans from the inputs and trains the model to predict these masked spans.

In the table below, we list the common objectives on which state-of-the-art language models are trained along with different characteristics of the input, i.e., how it is presented to the model. Moreover, we characterize the example efficiency of each objective in terms of the model's ability to exploit supervision signals from a single input, e.g., how many of the input tokens contribute to the calculation of the loss.

Objective Function | Inputs (Bi-directional) | Targets (Causal) | Input Properties | Example Efficiency
CausalLM | none | text | N/A | full seq_len
PrefixLM | text (up to position k) | text (after position k) | contiguous | seq_len - k
Span corruption | masked text | masked_tokens | non-contiguous, may be bi-directional | typically lower than others
Common objectives used in today’s language models. Throughout, “text” indicates tokenized text.

UL2 leverages the strengths of each of these objective functions through a framework that generalizes over all of them, making it possible to reason about and unify common pre-training objectives. Based on this framework, the main task for training a language model is to learn the transformation of a sequence of input tokens into a sequence of target tokens. All the objective functions introduced above can then be reduced to different ways of generating input and target tokens. For instance, the PrefixLM objective can be viewed as a transformation that moves a segment of k contiguous tokens from the inputs to the targets. Meanwhile, the span corruption objective is a data transformation that corrupts spans (subsequences of tokens in the input), replacing them with mask tokens that are shifted to the targets.
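These transformations are easy to state in code. The sketch below builds (input, target) pairs for each objective over a toy token list; the sentinel-token convention mirrors T5-style span corruption, and the token values and span positions are purely illustrative:

```python
def causal_lm(tokens):
    # CausalLM: no bi-directional input; the whole sequence is the target.
    return [], list(tokens)

def prefix_lm(tokens, k):
    # PrefixLM: the first k tokens form the (bi-directional) prefix input;
    # the remaining tokens are predicted causally as the target.
    return list(tokens[:k]), list(tokens[k:])

def span_corruption(tokens, spans):
    # Span corruption: mask out each (start, length) span in the input and
    # shift the masked tokens to the target, delimited by sentinel tokens.
    inputs, targets, pos, sentinel = [], [], 0, 0
    for start, length in spans:
        inputs.extend(tokens[pos:start])
        inputs.append(f"<extra_id_{sentinel}>")
        targets.append(f"<extra_id_{sentinel}>")
        targets.extend(tokens[start:start + length])
        pos, sentinel = start + length, sentinel + 1
    inputs.extend(tokens[pos:])
    return inputs, targets

tokens = ["the", "quick", "brown", "fox", "jumps", "over", "the", "dog"]
print(prefix_lm(tokens, 3))
print(span_corruption(tokens, [(1, 2), (5, 1)]))
```

Note how all three functions share one signature, tokens in and an (input, target) pair out, which is exactly the unification the framework exploits.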

It is worth noting that the model architecture can be decoupled from the objective function with which it’s trained. Thus, it is possible to train different architectures, such as the common single-stack decoder-only and two-stack encoder-decoder models, with any of these objectives.

Mixture of Denoisers
The UL2 framework can be used to train a model on a mixture of pre-training objectives and supply it with capabilities and inductive bias benefits from different pre-training tasks. Training on the mixture helps the model leverage the strengths of different tasks and mitigates the weaknesses of others. For instance, the mixture-of-denoisers objective can strongly improve the prompt-based learning capability of the model as opposed to a span corruption-only T5 model.

UL2 is trained using a mixture of three denoising tasks: (1) R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective; (2) X-denoising (or extreme span corruption); and (3) S-denoising (or sequential PrefixLM). During pre-training, we sample from the available denoising tasks based on user-specified ratios (i.e., different combinations of the R, X, and S-denoisers) and prepare the input and target appropriately. Then, a paradigm token is appended to the input (one of [R], [X], or [S]) indicating the denoising task at hand.

An overview of the denoising objectives used in UL2’s mixture-of-denoisers.
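The mixture can be sketched as sampling one denoiser per example according to user-specified ratios and tagging the input with its paradigm token. The ratios and corruption settings below are illustrative, not the paper's hyperparameters, and each denoiser is reduced to a single-span toy version:

```python
import random

# Illustrative configurations (not the paper's exact settings):
# R = regular span corruption, X = extreme corruption, S = sequential PrefixLM.
DENOISER_WEIGHTS = {"[R]": 0.5, "[X]": 0.25, "[S]": 0.25}
SPAN_FRACTION = {"[R]": 0.15, "[X]": 0.5}  # fraction of tokens to mask

def make_example(tokens, rng=random):
    """Sample a denoising task, build (input, target), and tag the input."""
    paradigms = list(DENOISER_WEIGHTS)
    paradigm = rng.choices(paradigms, weights=list(DENOISER_WEIGHTS.values()))[0]
    if paradigm == "[S]":
        # Sequential PrefixLM: predict a causal suffix from a prefix.
        k = rng.randrange(1, len(tokens))
        inputs, targets = tokens[:k], tokens[k:]
    else:
        # Mask one contiguous span (real denoisers mask many spans).
        n = max(1, int(len(tokens) * SPAN_FRACTION[paradigm]))
        start = rng.randrange(0, len(tokens) - n + 1)
        inputs = tokens[:start] + ["<mask>"] + tokens[start + n:]
        targets = tokens[start:start + n]
    return [paradigm] + inputs, targets

inp, tgt = make_example(["a", "b", "c", "d", "e", "f", "g", "h"])
print(inp, tgt)
```

The paradigm token is what lets a single model switch behavior at inference time: conditioning on [S] elicits PrefixLM-style generation, while [R] and [X] elicit infilling.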

Improving Trade-Offs Across Learning Paradigms
Many existing commonly used language learning paradigms typically excel at one type of task or application, such as fine-tuning performance or prompt-based in-context learning. In the plot below, we show baseline objective functions on different tasks compared to UL2: CausalLM (referred to as GPT-like), PrefixLM, Span Corrupt (also referred to as T5 in the plot), and a baseline objective function proposed by UniLM. We use these objectives for training decoder-only architectures (green) and encoder-decoder architectures (blue) and evaluate different combinations of objective functions and architectures on two main sets of tasks:

  1. Fine-tuning, by measuring performance on SuperGLUE (y-axis of the plot below)
  2. In-context learning, by measuring performance of the model on a suite of 1-shot GEM tasks, e.g., XSUM, SGD (Schema-Guided Dialog) and ToTTo (x-axis of the plot below).

For most of the existing language learning paradigms, there is a trade-off between the quality of the model on these two sets of tasks. We show that UL2 bridges this trade-off across in-context learning and fine-tuning.

In both decoder-only and encoder-decoder setups, UL2 strikes a significantly improved balance in performance between fine-tuned discriminative tasks and prompt-based 1-shot open-ended text generation compared to previous methods. (All models are comparable in terms of computational cost, i.e., FLOPs: EncDec models are 300M and Dec models are 150M parameters.)

UL2 for Few-Shot Prompting and Chain-of-Thought Reasoning
We scale up UL2 and train a 20 billion parameter encoder-decoder model on the public C4 corpus and demonstrate some impressive capabilities of the UL2 20B model.

UL2 is a powerful in-context learner that excels at both few-shot and chain-of-thought (CoT) prompting. In the table below, we compare UL2 with other state-of-the-art models (e.g., T5 XXL and PaLM) for few-shot prompting on the XSUM summarization dataset. Our results show that UL2 20B outperforms PaLM and T5, both of which are in the same ballpark of compute cost.

Model | ROUGE-1 | ROUGE-2 | ROUGE-L
LaMDA 137B | — | 5.4 | —
PaLM 62B | — | 11.2 | —
PaLM 540B | — | 12.2 | —
PaLM 8B | — | 4.5 | —
T5 XXL 11B | 0.6 | 0.1 | 0.6
T5 XXL 11B + LM | 13.3 | 2.3 | 10.7
UL2 20B | 25.5 | 8.6 | 19.8
Comparison of UL2 with T5 XXL, PaLM and LaMDA 137B on 1-shot summarization (XSUM) in terms of ROUGE-1/2/L (higher is better; dashes mark scores not reported), which captures the quality by comparing the generated summaries with the gold summaries as reference.
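ROUGE scores reward n-gram overlap with the reference summary. As a rough illustration, ROUGE-1 F1 is unigram-overlap F1; the sketch below omits the stemming and bootstrap resampling used by the official ROUGE tooling:

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap F1 between a generated and a gold summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # multiset intersection of unigrams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"))  # 0.8333...
```

ROUGE-2 applies the same F1 to bigrams, and ROUGE-L scores the longest common subsequence instead of fixed n-grams.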

Most CoT prompting results have been obtained using much larger language models, such as GPT-3 175B, PaLM 540B, or LaMDA 137B. We show that reasoning via CoT prompting can be achieved with UL2 20B, which is both publicly available and several times smaller than prior models that leverage chain-of-thought prompting. This enables an open avenue for researchers to conduct research on CoT prompting and reasoning at an accessible scale. In the table below, we show that for UL2, CoT prompting outperforms standard prompting on math word problems with a range of difficulties (GSM8K, SVAMP, ASDiv, AQuA, and MAWPS). We also show that self-consistency further improves performance.

Chain-of-thought (CoT) prompting and self-consistency (SC) results on five arithmetic reasoning benchmarks.
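A chain-of-thought prompt interleaves a worked rationale with each few-shot exemplar, so the model emits its reasoning before the final answer. A minimal sketch of the prompt format (the exemplar and question are illustrative, not taken from the paper's prompts):

```python
# One hand-written exemplar with an explicit rationale; the model is
# expected to continue the pattern: reason step by step, then answer.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question):
    """Append the new question after the worked exemplar(s)."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

prompt = build_cot_prompt(
    "A baker makes 4 trays of 12 rolls and sells 30. How many are left?"
)
print(prompt)
```

Self-consistency then samples several such reasoning paths at non-zero temperature and takes a majority vote over the final answers, which is what yields the additional gains reported above.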

Conclusion and Future Directions
UL2 demonstrates superior performance on a plethora of fine-tuning and few-shot tasks. We publicly release checkpoints of our best performing UL2 model with 20 billion parameters, which we hope will inspire faster progress in developing better language models in the machine learning community as a whole.

Acknowledgements
It was an honor and privilege to work on this with Vinh Q. Tran, Xavier Garcia, Jason Wei, Xuezhi Wang, Hyung Won Chung, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Denny Zhou, Neil Houlsby and Donald Metzler. We further acknowledge Alexey Gritsenko, Andrew M. Dai, Jacob Devlin, Jai Gupta, William Fedus, Orhan Firat, Sebastian Gehrmann, Nan Du, Dave Uthus, Siamak Shakeri, Slav Petrov and Quoc Le for support and discussions. We thank the JAX and T5X teams for building such wonderful infrastructure that made this research possible.

Source: Google AI Blog


Improve your visibility in Google Meet video calls

This announcement was made at Google Cloud Next ‘22. Check out Next OnAir to tune into the livestream or watch session recordings following the event. Visit the Cloud Blog to learn more about the latest Google Workspace innovations for the ever-changing world of work. 



Quick summary 

Depending on their camera placement, some meeting participants might be less visible than others. Google Meet can now automatically frame your video before joining a meeting to help ensure equal visibility for all participants. The automatic framing happens only once, so there are no motion distractions that can divert attention from the content of the meeting. You can manually reframe the video at any time from the settings. 

Meet frames you in the center of the screen before joining a meeting to improve your visibility




Getting started 


Rollout pace 

  • Rapid Release domains: Extended rollout (potentially longer than 15 days for feature visibility) starting on October 14, 2022 
  • Scheduled Release domains: Gradual rollout (up to 15 days for feature visibility) starting on November 2, 2022 

Availability 

  • Available to Google Workspace Business Standard, Business Plus, Enterprise Essentials, Enterprise Starter, Enterprise Standard, Enterprise Plus, Education Plus, Education Teaching and Learning Upgrade, and Workspace Individual customers with eligible devices. 
  • Also available to Google One subscribers with 2TB or more storage space with eligible devices. Visit the Help Center to learn about device requirements for video framing
  • Not available to Google Workspace Essentials, Business Starter, Education Fundamentals, Frontline, and Nonprofits, as well as G Suite Basic and Business customers. 
  • Not available to users with personal Google Accounts. 

Resources