Monthly Archives: October 2020

Beta Channel Update for Chrome OS

 The Beta channel has been updated to 87.0.4280.38 (Platform version: 13505.27.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. 

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Cindy Bayless

Google Chrome

$10 million to increase diversity in Bay Area STEM classrooms

Editor's Note: This guest post comes from Dr. Allison Scott, Chief Research Officer of the Kapor Center, a nonprofit aiming to increase diversity and inclusion in technology.


Science, technology, engineering and mathematics (also known as STEM) play a critical role in our society, touching every aspect of our lives. STEM occupations are among the fastest-growing and highest-paying, and contribute significantly to our nation’s economy. To get on track for STEM careers, students have to start early: students who take advanced STEM courses in high school are much more likely to major in equivalent subjects in college. Specifically, Black and Latinx students who take advanced Computer Science (CS) in high school are seven to eight times more likely to major in CS in college.


But unfortunately, access to advanced STEM and CS courses is not evenly distributed. Low-income students and students of color across California are less likely to have access to computer science courses than their peers, and as a result, students of color are underrepresented in every AP® math, science, and CS course in California. But we can change these trends.

With a $10 million contribution from Google.org, we’re launching the Rising STEM Scholars Initiative to increase the number of low-income students and students of color in AP STEM and CS courses across the Bay Area. Through a partnership with Equal Opportunity Schools, UC Berkeley’s Graduate School of Education, Kingmakers of Oakland, and DonorsChoose.org, we’ll collaborate with districts, schools, administrators, educators, students and families to place and support 3,000 students of color and low-income students in Bay Area AP STEM and CS classrooms. The project started last year in 15 schools across the Bay Area. Within the first year, the number of Black and Latinx students taking AP STEM classes doubled.

The Rising STEM Scholars initiative will address the challenges in STEM and CS equity by providing data insights on equity gaps, coaching schools to address these gaps, and providing professional development opportunities for teachers. We’ll also provide money for educators to get resources for their classrooms and find ways to inspire students to take AP courses.

Students sitting in high school classrooms right now have the potential to become future leaders in fields from technology to education—they just need the opportunities to get there. Let’s ensure all students in the Bay Area have access to the classes they need to succeed. If you’re located in the Bay Area, help us spread the word and join the movement.


Building a Google Workspace Add-on with Adobe

Posted by Jon Harmer, Product Manager, Google Cloud

We recently introduced Google Workspace, which seamlessly brings together messaging, meetings, docs, and tasks and is a great way for teams to create, communicate, and collaborate. Google Workspace has what you need to get anything done, all in one place. This includes giving developers the ability to extend Google Workspace’s standard functionality, for example with Google Workspace Add-ons, launched earlier this year.

At launch, Google Workspace Add-ons let a developer build a single integration for Google Workspace that surfaces across Gmail, Google Drive, and Google Calendar. We recently announced that the newer add-on framework now extends to more Google Workspace applications: Google Docs, Google Sheets, and Google Slides. With Google Workspace Add-ons, developers can scale their presence across the many touchpoints where users engage, and the framework simplifies building and managing add-ons.

One of our early developers for Google Workspace Add-ons has been Adobe. Adobe has been working to integrate Creative Cloud Libraries into Google Workspace. Using Google Workspace Add-ons, Adobe was able to quickly design a Creative Cloud Libraries experience that felt native to Google Workspace. “With the new add-ons framework, we were able to improve the overall performance and unify our Google Workspace and Gmail Add-ons,” said Ryan Stewart, Director of Product Management at Adobe. “This means a much better experience for our customers and much higher productivity for our developers. We were able to quickly iterate with the updated framework controls and easily connect it to the Creative Cloud services.”

One of the big differences between the Gmail integration and the Google Workspace integration is how it lets users work with Libraries. With Gmail, they’re sharing links to Libraries, but with Docs and Slides, they can add Library elements to their document or presentation. So by offering all of this in a single integration, we are able to provide a more complete Libraries experience. Being able to offer that breadth of experiences in a consistent way for users is exciting for our team.

Adobe’s Creative Cloud Libraries API, announced at Adobe MAX, was also integral to integrating Creative Cloud with Google Workspace, letting developers retrieve, browse, create, and get renditions of the creative elements in libraries.

Adobe’s new Add-on for Google Workspace lets you add brand colors, character styles and graphics from Creative Cloud Libraries to Google Workspace apps like Docs and Slides. You can also save styles and assets back to Creative Cloud.

We understand that teams require many applications to get work done, and we believe that process should be simple, with productivity applications that connect all of a company’s workstreams. With Google Workspace Add-ons, teams can bring their favorite workplace apps like Adobe Creative Cloud into Google Workspace, enabling a more productive day-to-day experience for design and marketing teams. With quick access to Creative Cloud Libraries, the Adobe Creative Cloud Add-on for Google Workspace lets everyone easily access and share assets in Gmail, and apply brand colors, character styles, and graphics to Google Docs and Slides to keep deliverables consistent and on-brand. The rollout to users is phased, starting with Google Docs and then Slides, so if you don’t see it in the Add-on yet, stay tuned; it is coming soon.

For developers, Google Workspace Add-ons lets you build experiences that not only let your customers manage their work, but also simplify how they work.

To learn more about Google Workspace Add-ons, please visit our Google Workspace developer documentation.

Releasing the Healthcare Text Annotation Guidelines

The Healthcare Text Annotation Guidelines are blueprints for capturing a structured representation of the medical knowledge stored in digital text. In order to automatically map the textual insights to structured knowledge, the annotations generated using these guidelines are fed into a machine learning algorithm that learns to systematically extract the medical knowledge in the text. We’re pleased to release to the public the Healthcare Text Annotation Guidelines as a standard.

Google Cloud recently launched AutoML Entity Extraction for Healthcare, a low-code tool used to build information extraction models for healthcare applications. There remains a significant execution roadblock on AutoML DIY initiatives caused by the complexity of translating the human cognitive process into machine-readable instructions. Today, this translation occurs thanks to human annotators who annotate text for relevant insights. Yet, training human annotators is a complex endeavor which requires knowledge across fields like linguistics and neuroscience, as well as a good understanding of the business domain. With AutoML, Google wanted to democratize who can build AI. The Healthcare Text Annotation Guidelines are a starting point for annotation projects deployed for healthcare applications.

The guidelines provide a reference for training annotators in addition to explicit blueprints for several healthcare annotation tasks. The annotation guidelines cover the following (a rough schema sketch follows the list):
  • The task of medical entity extraction with examples from medical entity types like medications, procedures, and body vitals.
  • Additional tasks with defined examples, such as entity relation annotation and entity attribute annotation. For instance, the guidelines specify how to relate a medical procedure entity to the source medical condition entity, or how to capture the attributes of a medication entity like dosage, frequency, and route of administration.
  • Guidance for annotating an entity’s contextual information like temporal assessment (e.g., current, family history, clinical history), certainty assessment (e.g., unlikely, somewhat likely, likely), and subject (e.g., patient, family member, other).
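
To make the shape of these annotations concrete, here is a hypothetical Kotlin sketch of the data they capture; the field names and example values are illustrative only and do not mirror the exact format defined in the guidelines.

// Hypothetical representation of the annotation types described above; not the
// guidelines' actual schema.
data class EntityAnnotation(
    val type: String,                                  // e.g., "MEDICATION", "PROCEDURE", "BODY_VITAL"
    val text: String,                                  // the annotated span in the source document
    val startOffset: Int,                              // character offsets into the source text
    val endOffset: Int,
    val attributes: Map<String, String> = emptyMap(),  // e.g., dosage, frequency, route of administration
    val temporal: String? = null,                      // e.g., "CURRENT", "CLINICAL_HISTORY", "FAMILY_HISTORY"
    val certainty: String? = null,                     // e.g., "LIKELY", "SOMEWHAT_LIKELY", "UNLIKELY"
    val subject: String? = null                        // e.g., "PATIENT", "FAMILY_MEMBER", "OTHER"
)

// Relates two entities, e.g., a procedure to the medical condition that motivated it.
data class EntityRelation(
    val from: EntityAnnotation,
    val to: EntityAnnotation,
    val relationType: String
)
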
Google consulted with industry experts and academic institutions in the process of assembling the Healthcare Text Annotation Guidelines. We took inspiration from other open source and research projects like i2b2 and added context to the guidelines to support information extraction needs for industry applications like Healthcare Effectiveness Data and Information Set (HEDIS) quality reporting. The data types contained in the Healthcare Text Annotation Guidelines are a common denominator across information extraction applications. Each industry application can have additional information extraction needs that are not captured in the current version of the guidelines. We chose to open source this asset so the community can tailor this project to their needs.

We’re thrilled to open source this project. We hope the community will contribute to the refinement and expansion of the Healthcare Text Annotation Guidelines, so they mirror the ever-evolving nature of healthcare.

By Andreea Bodnari, Product Manager and Mikhail Begun, Program Manager—Google Cloud AI

Stadia Savepoint: October updates

It’s time for another update to our Stadia Savepoint series, recapping the new games, features, and changes for Stadia in October.

This month we celebrated some Good Stuff on Stadia, teaming up with YouTube Creators Lamarr Wilson and LaurenZside, who revealed free and exclusive week-long demos for PAC-MAN™ Mega Tunnel Battle and Immortals Fenyx Rising, plus an OpenDev Beta for HUMANKIND. We can’t wait for these amazing games to launch on Stadia, starting with PAC-MAN Mega Tunnel Battle on November 17. Over three days, we also revealed new games and content coming to Stadia, including the Drastic Steps expansion for Orcs Must Die! 3 (Nov. 6), Star Wars Jedi: Fallen Order (Nov. 24), ARK: Survival Evolved (early 2021), Hello Engineer (2021), Young Souls (2021), and Phoenix Point (2021).

Throughout October, players explored a Dungeons & Dragons adventure in Baldur’s Gate 3 Early Access, fought against a surveillance state in Watch Dogs: Legion, and carved their path to vengeance in Sekiro: Shadows Die Twice. All of these games, plus many others that arrived this month, are now available for purchase on the Stadia store. Players subscribed to Stadia Pro received instant access to a library of 29 games in October, with even more available on November 1.

Stadia Games Available Now and Coming Soon

Crowd Choice now available

Crowd Choice, available in Baldur’s Gate 3 and Dead by Daylight, changes how games unfold when live streaming on YouTube. Viewers have the power to vote on decisions made by the player in each game.

Play Stadia with mobile data

Mobile data gameplay has graduated from Experiments and is now a fully supported feature on Stadia, letting you play games over 4G and 5G. Data usage may be up to 2.7 GB/hr. Gameplay is service-, network-, and connection-dependent, and this feature may not be available in all areas.

Referral rewards for friends and family

Refer someone for a free trial of Stadia Pro and they’ll get an extra free month of games. Plus, if they subscribe after their trial is up, you’ll get an extra free month of Stadia Pro as well. Terms apply.

Push notifications on mobile

Receive notifications in the Stadia app on Android and iOS devices about Stadia Pro games, incoming friend requests, and more.

Stadia Pro updates

October content launches on Stadia

New games coming to Stadia announced this month

That’s all for October—we’ll be back soon to share more updates. As always, stay tuned to the Stadia Community Blog, Facebook and Twitter for the latest news.

Accessory inspiration, courtesy of the Pixel team

A few weeks ago, Google introduced the Pixel 4a (5G) and the Pixel 5. And new Pixels mean new Pixel accessories, starting with the new Made by Google cases. 

As part of Google’s ongoing commitment to sustainability, the outer fabric of our new cases is made with 70 percent recycled materials. In fact, assuming each bottle contains about nine grams of plastic, two recycled plastic bottles can provide enough knitted outer fabric for five cases.

We did all of this while delivering a pop of color. In addition to the Blue Confetti, Static Gray and Basically Black, we’re adding two new colors: Green Chameleon for the Pixel 5 and Chili Flakes for Pixel 4a (5G).

Cases are only the beginning, though. How you outfit your phone says a lot about you, so we decided to find out what different members of the Pixel team are using in order to get some accessory inspiration. 

Nicole Laferriere, Pixel Program Manager
No more battery anxiety! The iOttie iON Duo is my perfect WFH companion because it allows me to simultaneously wirelessly charge my Pixel 5 and Pixel Buds. The stand provides a great angle so I never miss a notification and charges my Pixel quickly. And I love the custom Pixel Bud-shaped charging pad because it fits them so perfectly, and there’s no waiting to see if the device starts charging.

Ocie Henderson, Digital Store Certification Lead
I love the Power Support CLAW for Stadia because it’s my favorite way to game on the go. 2020 has definitely impacted the number of places I can go, of course, and the places I’m able to eat. Fortunately, the drive-thru is still an option, and my Power Support CLAW can sit atop my Stadia Controller and transform my wait into an opportunity to adventure, too.

Helen Hui, Technical Program Manager 
Moment lenses are my go-to accessory whenever I go hiking. With the lens on Pixel phones, I can skip the heavy digital camera and still achieve stunning results. Last December, I used the Moment Telephoto 58mm Lens and my Pixel 4 to capture stunning photos of Antelope Canyon in Arizona. I can't wait to try the new Moment case for Pixel 5.

Janelle Stribling, Pixel Product Marketing
When I'm not working, I'm always on the go—I especially love discovering new hiking trails, so my must-have accessory is my iOttie wireless car charger. I can attach my Pixel 5 with one hand and then I'm hands-free the rest of the drive since I can use Google Assistant and Google Maps to find my destination. I love arriving with a full battery so I can start capturing photos of the views immediately!

Nasreen Shad, Pixel Product Manager
Now more than ever, I like starting each work-from-home day with a morning routine built around my Pixel Stand. I keep it on my nightstand and use the Sunrise Alarm to gradually brighten my phone’s screen for a gentle wake up. With the new home controls, I can easily change my thermostat settings and turn on my living room lights before even getting out of bed. Once I’m up and at it, Google Assistant gives me a daily briefing of headlines from my favorite news outlets. And lucky for me, my San Francisco apartment is small enough that I can leave my Pixel on the Pixel Stand and play some music while I get warmed up for a morning jog.

MAD Skills Navigation Wrap-Up

Posted by Chet Haase

MAD Skills navigation illustration of mobile and desktop with Android logo

It’s a Wrap!

We’ve just finished the first series of MAD Skills videos and articles on Modern Android Development. This time, the topic was the Navigation component, the API and tool that helps you create and edit navigation paths through your application.

The great thing about videos and articles is that, unlike performance art, they tend to stick around for later enjoyment. So if you haven’t had a chance to see these yet, check out the links below to see what we covered. Except for the Q&A episode at the end, each episode has essentially identical content in the video and article version, so use whichever format you prefer for content consumption.

Episode 1: Overview

The first episode provides a quick, high-level overview of Navigation Component, including how to create a new application with navigation capability (using Android Studio’s handy application templates), details on the containment hierarchy of a navigation-enabled UI, and an explanation of some of the major APIs and pieces involved in making Navigation Component work.

Or in article form: https://medium.com/androiddevelopers/navigation-component-an-overview-4697a208c2b5
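
As a minimal sketch of the pieces described above (the IDs nav_host_fragment, nav_graph, and action_home_to_details are assumed names for illustration, not taken from the episode), retrieving the NavController from a NavHostFragment looks roughly like this:

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.navigation.fragment.NavHostFragment

// The Activity layout is assumed to host a NavHostFragment with id nav_host_fragment,
// whose app:navGraph attribute points at res/navigation/nav_graph.xml.
class MainActivity : AppCompatActivity(R.layout.activity_main) {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val navHostFragment = supportFragmentManager
            .findFragmentById(R.id.nav_host_fragment) as NavHostFragment
        val navController = navHostFragment.navController

        // Destinations and actions live in the graph; navigating is a single call:
        // navController.navigate(R.id.action_home_to_details)
    }
}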

Episode 2: Dialog Destinations

Episode 2 explores how to use the API to navigate to dialog destinations. Most navigation takes place between different fragment destinations, which are swapped out inside of the NavHostFragment object in the UI. But it is also possible to navigate to external destinations, including dialogs, which exist outside of the NavHostFragment.

Or in article form: https://medium.com/androiddevelopers/navigation-component-dialog-destinations-bfeb8b022759
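
A hedged sketch of what that looks like in code (the settingsDialog destination id and the layouts are assumed names for illustration):

import androidx.fragment.app.DialogFragment
import androidx.fragment.app.Fragment
import androidx.navigation.fragment.findNavController

// Declared in the nav graph as a <dialog> destination rather than a <fragment> destination.
class SettingsDialogFragment : DialogFragment(R.layout.dialog_settings)

class HomeFragment : Fragment(R.layout.fragment_home) {
    private fun showSettings() {
        // Navigating to a dialog destination uses the same call as any other destination;
        // the NavController presents it on top of the current content instead of swapping
        // it into the NavHostFragment.
        findNavController().navigate(R.id.settingsDialog)
    }
}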

Episode 3: SafeArgs

This episode covers SafeArgs, the facility provided by Navigation component for easily passing data between destinations.

Or in article form: https://medium.com/androiddevelopers/navigating-with-safeargs-bf26c17b1269
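
For example, assuming a graph where homeFragment has an action to donutDetailFragment that declares an integer argument named donutId (all names assumed for illustration), SafeArgs generates typed Directions and Args classes that are used roughly like this:

import android.os.Bundle
import android.view.View
import androidx.fragment.app.Fragment
import androidx.navigation.fragment.findNavController
import androidx.navigation.fragment.navArgs

class HomeFragment : Fragment(R.layout.fragment_home) {
    private fun openDetails(donutId: Int) {
        // HomeFragmentDirections is generated by SafeArgs from the action on homeFragment;
        // the argument is type-checked at compile time.
        val action = HomeFragmentDirections.actionHomeToDonutDetail(donutId)
        findNavController().navigate(action)
    }
}

class DonutDetailFragment : Fragment(R.layout.fragment_donut_detail) {
    // The generated Args class gives typed access to the arguments, with no Bundle key strings.
    private val args: DonutDetailFragmentArgs by navArgs()

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        val donutId = args.donutId  // use the typed value to load the right donut
    }
}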

Episode 4: Deep Links

This episode is on Deep Links, the facility provided by Navigation component for helping the user get to deeper parts of your application from UI outside the application.

Or in article form: https://medium.com/androiddevelopers/navigating-with-deep-links-910a4a6588c
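
Implicit deep links are declared with <deepLink> elements in the navigation graph; the sketch below shows the explicit flavor, building a PendingIntent (for example, for a notification) that drops the user onto a specific destination. The graph, destination, and argument names are assumed for illustration:

import android.content.Context
import androidx.core.os.bundleOf
import androidx.navigation.NavDeepLinkBuilder

// Builds a PendingIntent that launches the app directly on the donut detail destination,
// with the back stack synthesized from the navigation graph.
fun donutDeepLink(context: Context, donutId: Int) =
    NavDeepLinkBuilder(context)
        .setGraph(R.navigation.nav_graph)
        .setDestination(R.id.donutDetailFragment)
        .setArguments(bundleOf("donutId" to donutId))
        .createPendingIntent()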

Episode 5: Live Q&A

Finally, to wrap up the series (as we plan to do for future series), I hosted a Q&A session with Ian Lake. Ian fielded questions from you on Twitter and YouTube, and we discussed everything from feature requests like multiple backstacks (spoiler: it’s in the works!) to Navigation support for Jetpack Compose (spoiler: the first version of this was just released!) to other questions people had about navigation, fragments, Up-vs-Back, saving state, and other topics. It was pretty fun — more like a podcast with cameras than a Q&A.

(There is no article for this one; enjoy the video above)

Sample App: DonutTracker

The application used for most of the episodes above is DonutTracker, an app that you can use for tracking important data about donuts you enjoy (or don’t). Or you can just use it for checking out the implementation details of these Navigation features; your choice.

Background Features in Google Meet, Powered by Web ML

Video conferencing is becoming ever more critical in people's work and personal lives. Improving that experience with privacy enhancements or fun visual touches can help center our focus on the meeting itself. As part of this goal, we recently announced ways to blur and replace your background in Google Meet, which use machine learning (ML) to better highlight participants regardless of their surroundings. Whereas other solutions require installing additional software, Meet’s features are powered by cutting-edge web ML technologies built with MediaPipe that work directly in your browser — no extra steps necessary. One key goal in developing these features was to provide real-time, in-browser performance on almost all modern devices, which we accomplished by combining efficient on-device ML models, WebGL-based rendering, and web-based ML inference via XNNPACK and TFLite.

Background blur and background replacement, powered by MediaPipe on the web.

Overview of Our Web ML Solution
The new features in Meet are developed with MediaPipe, Google's open source framework for cross-platform customizable ML solutions for live and streaming media, which also powers ML solutions like on-device real-time hand, iris and body pose tracking.

A core need for any on-device solution is to achieve high performance. To accomplish this, MediaPipe’s web pipeline leverages WebAssembly, a low-level binary code format designed specifically for web browsers that improves speed for compute-heavy tasks. At runtime, the browser converts WebAssembly instructions into native machine code that executes much faster than traditional JavaScript code. In addition, Chrome 84 recently introduced support for WebAssembly SIMD, which processes multiple data points with each instruction, resulting in a performance boost of more than 2x.

Our solution first processes each video frame by segmenting a user from their background (more about our segmentation model later in the post) utilizing ML inference to compute a low resolution mask. Optionally, we further refine the mask to align it with the image boundaries. The mask is then used to render the video output via WebGL2, with the background blurred or replaced.

WebML Pipeline: All compute-heavy operations are implemented in C++/OpenGL and run within the browser via WebAssembly.

In the current version, model inference is executed on the client’s CPU for low power consumption and widest device coverage. To achieve real-time performance, we designed efficient ML models with inference accelerated by the XNNPACK library, the first inference engine specifically designed for the novel WebAssembly SIMD specification. Accelerated by XNNPACK and SIMD, the segmentation model can run in real-time on the web.

Enabled by MediaPipe's flexible configuration, the background blur/replace solution adapts its processing based on device capability. On high-end devices it runs the full pipeline to deliver the highest visual quality, whereas on low-end devices it continues to perform at speed by switching to compute-light ML models and bypassing the mask refinement.

Segmentation Model
On-device ML models need to be ultra lightweight for fast inference, low power consumption, and small download size. For models running in the browser, the input resolution greatly affects the number of floating-point operations (FLOPs) necessary to process each frame, and therefore needs to be small as well. We downsample the image to a smaller size before feeding it to the model. Recovering a segmentation mask as fine as possible from a low-resolution image adds to the challenges of model design.

The overall segmentation network has a symmetric structure with respect to encoding and decoding, while the decoder blocks (light green) also share a symmetric layer structure with the encoder blocks (light blue). Specifically, channel-wise attention with global average pooling is applied in both encoder and decoder blocks, which is friendly to efficient CPU inference.

Model architecture with MobileNetV3 encoder (light blue), and a symmetric decoder (light green).

We modified MobileNetV3-small as the encoder, which has been tuned by network architecture search for the best performance with low resource requirements. To reduce the model size by 50%, we exported our model to TFLite using float16 quantization, resulting in a slight loss in weight precision but with no noticeable effect on quality. The resulting model has 193K parameters and is only 400KB in size.
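
As a rough sanity check on that size (a back-of-the-envelope estimate, assuming two bytes per float16 weight and ignoring graph metadata):

193{,}000 \text{ parameters} \times 2\ \text{bytes/parameter} \approx 386\ \text{KB} \approx 400\ \text{KB}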

Rendering Effects
Once segmentation is complete, we use OpenGL shaders for video processing and effect rendering, where the challenge is to render efficiently without introducing artifacts. In the refinement stage, we apply a joint bilateral filter to smooth the low resolution mask.

Rendering effects with artifacts reduced. Left: Joint bilateral filter smooths the segmentation mask. Middle: Separable filters remove halo artifacts in background blur. Right: Light wrapping in background replace.

The blur shader simulates a bokeh effect by adjusting the blur strength at each pixel proportionally to the segmentation mask values, similar to the circle-of-confusion (CoC) in optics. Pixels are weighted by their CoC radii, so that foreground pixels will not bleed into the background. We implemented separable filters for the weighted blur, instead of the popular Gaussian pyramid, as it removes halo artifacts surrounding the person. The blur is performed at a low resolution for efficiency, and blended with the input frame at the original resolution.
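
One plausible way to write down the CoC-weighted blur described above (a sketch for intuition, not the exact shader math; here $m(q) \in [0,1]$ is the segmentation mask with 1 = foreground, $r_{\max}$ the maximum blur radius, and $N(p)$ a neighborhood of pixel $p$) is

\hat{I}(p) = \frac{\sum_{q \in N(p)} w(q)\, I(q)}{\sum_{q \in N(p)} w(q)}, \qquad w(q) \propto r_{\mathrm{CoC}}(q) = r_{\max}\bigl(1 - m(q)\bigr).

Because foreground pixels have near-zero CoC weight, they contribute little to the blurred result and do not bleed into the background.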

Background blur examples.

For background replacement, we adopt a compositing technique, known as light wrapping, for blending segmented persons and customized background images. Light wrapping helps soften segmentation edges by allowing background light to spill over onto foreground elements, making the compositing more immersive. It also helps minimize halo artifacts when there is a large contrast between the foreground and the replaced background.
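
The exact compositing equation is not given here, but a common way to express light wrapping (a sketch for intuition; $\alpha$ is the segmentation matte, $F$ the camera frame, $B$ the replacement background, $\tilde{B}$ a blurred copy of $B$, and $k$ a wrap-strength parameter) is

C(p) = \alpha(p)\,F(p) + \bigl(1-\alpha(p)\bigr)\,B(p) + k\,\alpha(p)\bigl(1-\alpha(p)\bigr)\,\tilde{B}(p).

The wrap term peaks where $\alpha \approx 0.5$, right along the segmentation edge, which is where background light should spill onto the foreground and where halos would otherwise be most visible.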

Background replacement examples.

Performance
To optimize the experience for different devices, we provide model variants at multiple input sizes (i.e., 256x144 and 160x96 in the current release), automatically selecting the best according to available hardware resources.

We evaluated the speed of model inference and the end-to-end pipeline on two common devices: MacBook Pro 2018 with 2.2 GHz 6-Core Intel Core i7, and Acer Chromebook 11 with Intel Celeron N3060. For 720p input, the MacBook Pro can run the higher-quality model at 120 FPS and the end-to-end pipeline at 70 FPS, while the Chromebook runs inference at 62 FPS with the lower-quality model and 33 FPS end-to-end.

Model   | FLOPs | Device             | Model Inference  | Pipeline
256x144 | 64M   | MacBook Pro 2018   | 8.3 ms (120 FPS) | 14.3 ms (70 FPS)
160x96  | 27M   | Acer Chromebook 11 | 16.1 ms (62 FPS) | 30 ms (33 FPS)
Model inference speed and end-to-end pipeline on high-end (MacBook Pro) and low-end (Chromebook) laptops.

For quantitative evaluation of model accuracy, we adopt the popular metrics of intersection-over-union (IOU) and boundary F-measure. Both models achieve high quality, especially given how lightweight the networks are:

Model   | IOU    | Boundary F-measure
256x144 | 93.58% | 0.9024
160x96  | 90.79% | 0.8542
Evaluation of model accuracy, measured by IOU and boundary F-measure.

We also release the accompanying Model Card for our segmentation models, which details our fairness evaluations. Our evaluation data contains images from 17 geographical subregions of the globe, with annotations for skin tone and gender. Our analysis shows that the model is consistent in its performance across the various regions, skin-tones, and genders, with only small deviations in IOU metrics.

Conclusion
We introduced a new in-browser ML solution for blurring and replacing your background in Google Meet. With this, ML models and OpenGL shaders can run efficiently on the web. The developed features achieve real-time performance with low power consumption, even on low-power devices.

Acknowledgments
Special thanks to those on the Meet team and others who worked on this project, in particular Sebastian Jansson, Rikard Lundmark, Stephan Reiter, Fabian Bergmark, Ben Wagner, Stefan Holmer, Dan Gunnarson, Stéphane Hulaud and to all our team members who worked on the technology with us: Siargey Pisarchyk, Karthik Raveendran, Chris McClanahan, Marat Dukhan, Frank Barchard, Ming Guang Yong, Chuo-Ling Chang, Michael Hays, Camillo Lugaresi, Gregory Karpiak, Siarhei Kazakou, Matsvei Zhdanovich, and Matthias Grundmann.

Source: Google AI Blog


Replace your background in Google Meet

What’s changing

You can now replace your background with an image in Google Meet. You can either use Google’s hand-picked images, which include office spaces, landscapes, and abstract backgrounds, or upload your own image.




Who’s impacted


End users

Why you’d use it


Custom backgrounds can help you show more of your personality, as well as help hide your surroundings.

Additional details

We recently launched the ability to filter out disruptive background noise and blur your background in Google Meet. Together, these features reduce audio and visual distractions, and help ensure more productive meetings.

Virtual backgrounds work directly within your browser and do not require an extension or any additional software. At launch, they’ll work on ChromeOS and on the Chrome browser on Windows and Mac desktop devices. Support on Meet mobile apps will be coming soon; we’ll announce on the Google Workspace Updates blog when they become available.

Getting started

Admins: At launch, there will be no admin control for this feature. Admin controls to select which organizational units can use custom and preset backgrounds for meetings they organize will be introduced later this year. We’ll announce on the Google Workspace Updates blog when they’re available.

End users: This feature is OFF by default. Visit our Help Center to learn more about how to change your background on Google Meet.




Rollout pace

  • Rapid Release domains: Gradual rollout to eligible devices (up to 7 days for feature visibility) starting on October 30, 2020
  • Scheduled Release domains: Gradual rollout to eligible devices (up to 7 days for feature visibility) starting on November 6, 2020

Availability

  • Available to Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Enterprise for Education, and Nonprofits customers and users with personal Google accounts.
  • Selecting your own picture is not available to participants of meetings organized by Education customers.

Resources

Roadmap