Beta Channel Update for Desktop

The Beta channel has been updated to 94.0.4606.50 for Windows and Linux and 94.0.4606.51 for Mac.


A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.




Srinivas Sista
Google Chrome

Stable Channel Update for ChromeOS

The Stable channel is being updated to 93.0.4577.85 (Platform version: 14092.57.0) for most Chrome OS devices. Systems will be receiving updates over the next several days.

This build contains a number of features, bug fixes and security updates.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Geo Hsu

Google Chrome OS


Revisiting Mask-Head Architectures for Novel Class Instance Segmentation

Instance segmentation is the task of grouping pixels in an image into instances of individual things, and identifying those things with a class label (countable objects such as people, animals, cars, etc., and assigning unique identifiers to each, e.g., car_1 and car_2). As a core computer vision task, it is critical to many downstream applications, such as self-driving cars, robotics, medical imaging, and photo editing. In recent years, deep learning has made significant strides in solving the instance segmentation problem with architectures like Mask R-CNN. However, these methods rely on collecting a large labeled instance segmentation dataset. But unlike bounding box labels, which can be collected in 7 seconds per instance with methods like Extreme clicking, collecting instance segmentation labels (called “masks”) can take up to 80 seconds per instance, an effort that is costly and creates a high barrier to entry for this research. And a related task, panoptic segmentation, requires even more labeled data.

The partially supervised instance segmentation setting, where only a small set of classes are labeled with instance segmentation masks and the remaining (majority of) classes are labeled only with bounding boxes, is an approach that has the potential to reduce the dependence on manually-created mask labels, thereby significantly lowering the barriers to developing an instance segmentation model. However, this partially supervised approach also requires a stronger form of model generalization to handle novel classes not seen at training time—e.g., training with only animal masks and then tasking the model to produce accurate instance segmentations for buildings or plants. Further, naïve approaches, such as training a class-agnostic Mask R-CNN while ignoring mask losses for any instances that don’t have mask labels, have not worked well. For example, on the typical “VOC/Non-VOC” benchmark, where one trains on masks for a subset of 20 classes in COCO (called “seen classes”) and is tested on the remaining 60 classes (called “unseen classes”), a typical Mask R-CNN with a ResNet-50 backbone achieves only ~18% mask mAP (mean Average Precision, higher is better) on unseen classes, whereas when fully supervised it can achieve a much higher >34% mask mAP on the same set.
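
To make this naive baseline concrete, here is a minimal sketch (in PyTorch, with illustrative shapes; the has_mask flag is a hypothetical per-instance label indicating whether a mask annotation is available) of a class-agnostic mask loss that simply skips instances without mask labels:

```python
import torch
import torch.nn.functional as F

def partial_mask_loss(pred_masks, gt_masks, has_mask):
    """Sketch of the naive partially supervised baseline.

    pred_masks: [N, H, W] logits from a class-agnostic mask head.
    gt_masks:   [N, H, W] float binary ground truth crops (contents are
                arbitrary for instances without mask labels).
    has_mask:   [N] bool, True only for instances of "seen" classes.
    """
    per_pixel = F.binary_cross_entropy_with_logits(
        pred_masks, gt_masks, reduction="none")   # [N, H, W]
    per_instance = per_pixel.mean(dim=(1, 2))     # [N]
    # Boxes of unseen classes still train the detector elsewhere, but
    # contribute nothing to the mask head.
    masked = per_instance * has_mask.float()
    return masked.sum() / has_mask.float().sum().clamp(min=1.0)
```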

In “The surprising impact of mask-head architecture on novel class segmentation”, to be presented at ICCV 2021, we identify the main culprits for Mask R-CNN’s poor performance on novel classes and propose two easy-to-implement fixes (one training protocol fix, one mask-head architecture fix) that work in tandem to close the gap to fully supervised performance. We show that our approach applies generally to crop-then-segment models, i.e., a Mask R-CNN or Mask R-CNN-like architecture that computes a feature representation of the entire image and then subsequently passes per-instance crops to a second-stage mask prediction network—also called a mask-head network. Putting our findings together, we propose a Mask R-CNN–based model that improves over the current state-of-the-art by a significant 4.7% mask mAP without requiring more complex auxiliary loss functions, offline trained priors, or weight transfer functions proposed by previous work. We have also open-sourced the code bases for two versions of the model, called Deep-MAC and Deep-MARC, and published a colab to interactively produce masks like the video demo below.

A demo of our model, Deep-MAC, which learns to predict accurate masks, given user-specified boxes, even on novel classes that were not seen at training time. Try it yourself in the colab. Image credits: Chris Briggs, Wikipedia and Europeana.

Impact of Cropping Methodology in Partially Supervised Settings
An important step of crop-then-segment models is cropping—Mask R-CNN is trained by cropping a feature map as well as the ground truth mask to a bounding box corresponding to each instance. These cropped features are passed to another neural network (called a mask-head network) that computes a final mask prediction, which is then compared against the ground truth crop in the mask loss function. There are two choices for cropping: (1) cropping directly to the ground truth bounding box of an instance, or (2) cropping to bounding boxes predicted by the model (called proposals). At test time, cropping is always performed with proposals, as ground truth boxes are not assumed to be available.

Cropping to ground truth boxes vs. cropping to proposals predicted by a model during training. Standard Mask R-CNN implementations use both types of crops, but we show that cropping exclusively to ground truth boxes yields significantly stronger performance on novel categories.
We consider a general family of Mask R-CNN–like architectures with one small but critical difference from typical Mask R-CNN training setups: we crop using ground truth boxes (instead of proposal boxes) at training time.
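
To illustrate the two cropping choices, here is a minimal sketch using torchvision's roi_align; the feature stride, crop size, and box coordinates are illustrative, not the settings used in the paper:

```python
import torch
from torchvision.ops import roi_align

features = torch.randn(1, 256, 64, 64)              # backbone feature map
gt_boxes = torch.tensor([[48., 80., 320., 416.]])   # ground truth box (x1, y1, x2, y2)
proposals = torch.tensor([[40., 64., 344., 400.]])  # detector proposal

def crop(boxes):
    # Boxes are in image coordinates; spatial_scale maps them onto the
    # (assumed) 1/16-resolution feature map before extracting a 32x32 crop.
    return roi_align(features, [boxes], output_size=(32, 32),
                     spatial_scale=1.0 / 16, aligned=True)

train_crops = crop(gt_boxes)   # cropping-to-ground-truth, used at training time
test_crops = crop(proposals)   # proposals, always used at test time
```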

Typical Mask R-CNN implementations pass both types of crops to the mask head. This choice has traditionally been considered an unimportant implementation detail, because it does not significantly affect performance in the fully supervised setting. In the partially supervised setting, however, we find that cropping methodology plays a significant role: cropping exclusively to ground truth boxes during training has a surprising and dramatic positive impact, performing significantly better on unseen classes.

Performance of Mask R-CNN on unseen classes when trained with either proposals and ground truth (the default) or with only ground truth boxes. Training mask heads with only ground truth boxes yields a significant boost to performance on unseen classes, upwards of 9% mAP. We report performance with the ResNet-101-FPN backbone.

Unlocking the Full Generalization Potential of the Mask Head
Even more surprisingly, the above approach unlocks a novel phenomenon—with cropping-to-ground truth enabled during training, the mask head of Mask R-CNN takes on a disproportionate role in the ability of the model to generalize to unseen classes. As an example, in the following figure, we compare models that all have cropping-to-ground-truth enabled, but different out-of-the-box mask-head architectures on a parking meter, cell phone, and pizza (classes unseen during training).

Mask predictions for unseen classes with four different mask-head architectures (from left to right: ResNet-4, ResNet-12, ResNet-20, Hourglass-20, where the number refers to the number of layers of the neural network). Despite never having seen masks from the ‘parking meter’, ‘pizza’ or ‘mobile phone’ class, the rightmost mask-head architecture can segment these classes correctly. From left to right, we show better mask-head architectures predicting better masks. Moreover, this difference is only apparent when evaluating on unseen classes — if we evaluate on seen classes, all four architectures exhibit similar performance.

Particularly notable is that these differences between mask-head architectures are not as obvious in the fully supervised setting. Incidentally, this may explain why previous works in instance segmentation have almost exclusively used shallow (i.e., low number of layers) mask heads, as there has been no benefit to the added complexity. Below we compare the mask mAP of three different mask-head architectures on seen versus unseen classes. All three models do equally well on the set of seen classes, but the deep hourglass mask heads stand out when applied to unseen classes. We find hourglass mask heads to be the best among the architectures we tried, and we get the best results with hourglass mask heads of 50 or more layers.

Performance of ResNet-4, Hourglass-10 and Hourglass-52 mask-head architectures on seen and unseen classes. There is a significant difference in performance on unseen classes, even though the performance on seen classes barely changes.
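
To give a sense of what such a mask head looks like, here is a minimal PyTorch sketch of an hourglass-style module (recursive downsampling and upsampling with skip connections); the depth and channel width are illustrative and do not reproduce the paper's exact Hourglass-52 configuration:

```python
import torch
import torch.nn as nn

class Hourglass(nn.Module):
    """One hourglass level: downsample, recurse, upsample, add a skip."""
    def __init__(self, channels, depth):
        super().__init__()
        self.skip = nn.Conv2d(channels, channels, 3, padding=1)
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.inner = (Hourglass(channels, depth - 1) if depth > 1
                      else nn.Conv2d(channels, channels, 3, padding=1))
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.relu = nn.ReLU()

    def forward(self, x):
        skip = self.relu(self.skip(x))
        y = self.relu(self.down(x))
        y = self.inner(y)
        return skip + self.up(y)

# Per-instance 32x32 feature crops in, per-instance mask logits out.
mask_head = nn.Sequential(Hourglass(channels=256, depth=3),
                          nn.Conv2d(256, 1, 1))
logits = mask_head(torch.randn(8, 256, 32, 32))   # -> [8, 1, 32, 32]
```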

Finally, we show that our findings are general, holding for a variety of backbones (e.g., ResNet, SpineNet, Hourglass) and detector architectures including anchor-based and anchor-free detectors and even when there is no detector at all.

Putting It Together
To achieve the best result, we combined the above findings: We trained a Mask R-CNN model with cropping-to-ground-truth enabled and a deep Hourglass-52 mask head with a SpineNet backbone on high resolution images (1280x1280). We call this model Deep-MARC (Deep Mask heads Above R-CNN). Without using any offline training or other hand-crafted priors, Deep-MARC exceeds previous state-of-the-art models by > 4.5% (absolute) mask mAP. Demonstrating the general nature of this approach, we also see strong results with a CenterNet-based (as opposed to Mask R-CNN-based) model (called Deep-MAC), which also exceeds the previous state of the art.

Comparison of Deep-MAC and Deep-MARC to other partially supervised instance segmentation approaches like MaskX R-CNN, ShapeMask and CPMask.
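
For reference, the recipe above can be summarized as a hypothetical configuration sketch; the field names are illustrative, and the exact SpineNet variant is an assumption (see the open-sourced code for the real configuration):

```python
# Hypothetical summary of the Deep-MARC recipe described above.
deep_marc_config = {
    "detector": "mask_rcnn",
    "backbone": "spinenet",                  # assumption: exact variant not stated here
    "image_size": (1280, 1280),              # high resolution training images
    "mask_head": {"type": "hourglass", "num_layers": 52},
    "crop_to_ground_truth_at_train": True,   # the key training-protocol fix
}
```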

Conclusion
We develop instance segmentation models that are able to generalize to classes that were not part of the training set. We highlight the role of two key ingredients that can be applied to any crop-then-segment model (such as Mask R-CNN): (1) cropping to ground truth boxes during training, and (2) strong mask-head architectures. While neither of these ingredients has a large impact on the classes for which masks are available during training, employing both leads to significant improvement on novel classes for which masks are not available during training. Moreover, these ingredients are sufficient for achieving state-of-the-art performance on the partially supervised COCO benchmark. Finally, our findings are general and may also have implications for related tasks, such as panoptic segmentation and pose estimation.

Acknowledgements
We thank our co-authors Zhichao Lu, Siyang Li, and Vivek Rathod. We thank David Ross and our anonymous ICCV reviewers for their comments which played a big part in improving this research.

Source: Google AI Blog


Google Supports Open Source Technology Improvement Fund

 

We recently pledged to provide $100 million to support third-party foundations that manage open source security priorities and help fix vulnerabilities. As part of this commitment, we are excited to announce our support of the Open Source Technology Improvement Fund (OSTIF) to improve the security of eight open-source projects.

Google’s support will allow OSTIF to launch the Managed Audit Program (MAP), which will expand in-depth security reviews to critical projects vital to the open source ecosystem. The eight libraries, frameworks and apps that were selected for this round are those that would benefit the most from security improvements and make the largest impact on the open-source ecosystem that relies on them. The projects include:
  • Git - the de facto version control software used in modern DevOps.
  • Lodash - a modern JavaScript utility library with over 200 functions that facilitate web development; it can be found in most environments that support JavaScript, which is most of the World Wide Web.
  • Laravel - a PHP web application framework used by many modern, full-stack web applications, including integrations with Google Cloud.
  • Slf4j - a logging facade for various Java logging frameworks.
  • Jackson-core & Jackson-databind - the core JSON-for-Java streaming API and shared components, and the data-binding package built on top of them.
  • Httpcomponents-core & Httpcomponents-client - these projects are responsible for creating and maintaining a toolset of low-level Java components focused on HTTP and associated protocols.
We are excited to help OSTIF build a safer open source environment for everyone. If you are interested in getting involved or learning more, please visit the OSTIF blog.

Set the default state for Quick access and Host Management in Google Meet with new Admin settings

Quick summary 

Recently, we announced the expansion of the meeting safety features and the ability to add up to 25 co-hosts in Google Meet. We’re now adding two new controls that will allow admins to configure whether the Host Management and Quick access features will be on or off by default in their domain. 


In both cases, if the feature is set to OFF by default, meeting hosts can manually enable Quick access or Host Management from inside the meeting. Please see below for detailed information on the default Admin setting for each feature. 

Getting started 


Rollout pace 


Availability 

Host management is:
  • OFF by default for: Google Workspace Business Starter, Business Basic, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Essentials, and Non-profits customers. 

  • ON by default for: Google Workspace for Education Fundamentals, Education Standard, Education Plus, the Teaching and Learning upgrade, and Frontline customers.

  • Not available to Google Workspace Individual customers or users with personal Google Accounts. 

Quick access is:
  • ON by default for all Google Workspace customers, as well as G Suite Basic and Business customers. 

  • Not available to Google Workspace Individual customers or users with personal Google Accounts. 

Resources 

Wear OS Jetpack libraries now in stable!

Posted by Jeremy Walker, Engineer


In order to help you develop high quality Wear OS apps, we have been busy updating the Android Jetpack Wear OS libraries and recently delivered the first five stable Jetpack Wear OS libraries:

The libraries and their featured functionality:
  • wear - Lay out elements in an arc to support round watches (ArcLayout) and write curved text following the curvature of a device (CurvedText).
  • wear-input - Identify and interact with hardware buttons on the Wear OS device.
  • wear-ongoing - Surface Ongoing Notifications in new Wear-specific surfaces (code lab).
  • wear-phone-interactions - Detect the type of phone a watch is paired with (iOS or Android) and handle all notification bridging options.
  • wear-remote-interaction - Open Android intents on other devices; for example, when a user wants the app on both the phone and watch, open the Play Store on the device where your app isn't installed.

How these compare to the Wearable Support library

The Android Jetpack Wear OS libraries contain all the familiar functionality from the old Wearable Support library, add better support for Wear OS 3.0, and include the features listed above (many of which are written 100% in Kotlin).

As always with Android Jetpack, the new Wear OS libraries help you follow best practices, reduce boilerplate, and create performant, glanceable experiences for your users.

The core stable libraries are available now. The Watch Face and Complications libraries are in alpha and will be released as stable later this year; once they are, the Wearable Support Library will officially be deprecated.

We strongly recommend you migrate the libraries within your Wear OS apps from the Wearable Support library to their AndroidX equivalents as we make them available in stable.

Note: The Android Jetpack libraries are meant to be replacements for the Wearable Support Libraries and aren't designed to be used together.

Try them out and let us know what you think!

Thank you!

Announcing the latest Open Source Peer Bonus winners

 


The Google Open Source Peer Bonus program is designed to reward external open source contributors nominated by Googlers for their exceptional contributions to open source. We are very excited to announce our latest round of 112 winners—a new record—from 33 countries! We’re also sharing some comments by Googlers about what the Open Source Peer Bonus program means to them.

“I've nominated a number of open source contributors for the Peer Bonus program. Since most people volunteer out of passion for a project and expect nothing in return, getting an email from Google thanking them for their contribution carries a lot of meaning.” — Jason Miller

The Open Source Peer Bonus program rewards open source enthusiasts for contributions across open source, including code contributions, community work, documentation, mentoring, and other types of open source contribution—if a Googler believes that someone has made a positive contribution to an open source project, that person can be nominated for an Open Source Peer Bonus.

“Open Source is core to work at Google—it's the very spirit of its community and users. The Open Source Peer Bonus represents the way we want to share the spirit with everyone who feels the same spirit and puts it into developing cool stuff out there!” — Cristina Conti

Collaboration and innovation lie at the core of open source, advancing modern technology and removing barriers. Google relies on open source for many of our products and services and we are thrilled to have an opportunity to give back to the community by rewarding open source contributors.

“I've been active in the open-source community for many years. I've often been amazed by some contributors who go out of their way to help me and others; fix bugs, implement features, provide support and do code reviews. Since I started working at Google, I've had the privilege of nominating a few of these contributors for the Open Source Peer Bonus. I'm happy to see their effort get support and recognition from the corporate world. I hope that other big tech companies follow Google's lead in this regard.” — Ram Rachum

“Developers that take the time to share their code and expertise with the larger developer community help empower us all to make better software. Android demos can help other devs get their apps working and also helps Google see gaps and room for improvements in APIs or documentation. Open-source developers are an invaluable part of the ecosystem! Thank you!” — Emilie Roberts

Below is the list of current winners who gave us permission to thank them publicly:

Winner - Open Source Project
  • Neil Pang - acmesh-official
  • Bryn Rhodes - Android FHIR SDK
  • Simon Marquis - Android Install Referrer
  • Alexey Knyazev - ANGLE
  • Mike Hardy - ankidroid
  • Jeff Geerling - Ansible, Drupal
  • Jan Lukavský - Apache Beam
  • Phil Sturgeon - APIs You Won't Hate
  • Joseph Kearney - autoimpute
  • Olek Wojnar - Bazel
  • Jesse Chan - Bazel Hardware Description Language Build Rules
  • Pierre Quentel - Brython
  • Elizabeth Barron - CHAOSS
  • Mathias Buus - chromecasts
  • Matthew Kotsenas - CIPD (Part of Chrome CI software)
  • Orta Therox - CocoaPods
  • Matt Godbolt - Compiler Explorer
  • Dmitry Safonov - CRIU
  • Adrian Reber - CRIU (Checkpoint/Restore in User-space)
  • Prerak Mann - Dart - package:ffigen
  • Alessandro Arzilli - delve
  • Derek Parker - delve
  • Sarthak Gupta - DRS-Filer (elixir-cloud-aai)
  • Eddie Jaoude - Eddiehub
  • Josh Holtz - fastlane
  • Eduardo Silva - Fluent Bit
  • Mike Rydstrom - Flutter
  • Balvinder Singh Gambhir - Flutter
  • James Clarke - Flutter
  • Jody Donetti - FusionCache
  • Jenny Bryan - gargle
  • Gennadii Donchyts - gee-community
  • Ævar Arnfjörð Bjarmason - Git
  • Joel Sing - Go
  • Sean Liao - Go
  • Cuong Manh Le - Go
  • Daniel Martí - gofumpt
  • Cristian Bote - Goober
  • Romulo Santos - Google Cloud Community
  • Jenn Viau - GoogleCloudPlatform / gke-poc-toolkit
  • Nikita Shoshin - gopls
  • Muir Manders - gopls
  • Shirou Wakayama - gopsutil
  • Pontus Leitzler - govim
  • Paul Jolly - govim
  • Arsala Bangash - Grey Software
  • Santiago Torres-Arias - In-Toto
  • David Wu - KataGo
  • Alexey Odinokov - kpt, kpt-functions-catalog, and kustomize
  • Alvaro Aleman - Kubernetes
  • Manuel de Brito Fontes - Kubernetes
  • Arnaud Meukam - Kubernetes
  • Federico Gimenez - Kubernetes
  • Elana Hashman - Kubernetes
  • Katrina Verey - Kustomize
  • Max Kellermann - MusicPlayerDaemon/MPD
  • Kamil Myśliwiec - NestJS
  • Weyert de Boer - Node.js Pub/Sub Client Library
  • James McKinney - Open Civic Data Division Identifiers
  • Angelos Tzotsos - OSGeo-Live, pycsw, GeoNode, OSGeo Foundation board member (non-paid), and more ...
  • Daniel Axtens - Patchwork
  • Ero Carrera - pefile
  • Nathaniel Brough - Pigweed
  • Alex Hall - PySnooper
  • Loic Mathieu - Quarkus Google Cloud Services
  • Federico Brigante - Refined GitHub
  • Michael Long - Resolver
  • Bruno Levy - RISC-V Ecosystem on FPGAs
  • Mara Bos - Rust
  • Eddy B. - Rust
  • Aleksey Kladov - Rust Analyzer
  • Noel Power - Samba
  • David Barri - scalajs-react
  • Marco Vermeulen - SDKman
  • Naveen Srinivasan - Security Scorecards
  • Marina Moore - Sigstore
  • Feross Aboukhadijeh - simple-peer
  • Ajay Ramachandran - SponsorBlock
  • Eddú Meléndez Gonzales - Spring Cloud GCP
  • Dominik Honnef - staticcheck
  • Zoe Carver - Swift
  • Rodrigo Melo - SymbiFlow + Open Source FPGA Tooling Ecosystem
  • Carlos de Paula - SymbiFlow and RISC-V ecosystem
  • Naoya Hatta - System Verilog Test Suite
  • Mike Popoloski - System Verilog Test Suite
  • Soule Ba - Tekton
  • Priti Desai - Tekton
  • Joyce Er - TensorBoard
  • Vignesh Kothapalli - TensorFlow
  • Hyeyoon Lee - TensorFlow
  • Akhil Chinnakotla - TensorFlow
  • Stephen Wu - TensorFlow
  • Vishnu Banna - TensorFlow
  • Haidong Rong - TensorFlow
  • Sean Morgan - TensorFlow
  • Jason Zaman - TensorFlow
  • Yong Tang - TensorFlow
  • Mahamed Ali - Terraform Provider Google
  • Sayak Paul - tfhub.dev
  • Aidan Doherty - The Good Docs Project
  • Alyssa Rock - The Good Docs Project
  • Heinrich Schuchardt - U-Boot
  • Aditya Sharma - User Story (GSoC project)
  • Dan Clark - V8
  • Armin Brauns - Verilog to Routing & SymbiFlow
  • Marwan Sulaiman - vscode-go
  • Ryan Christian - WMR & Microbundle
  • Yaroslav Podorvanov - yaroslav-harakternik
  • Anirudh Vegesana - Yolo
  • Alistair Miles - zarr

Thank you for your contributions to open source! Congratulations!

By Erin McKean and Maria Tabak, Google Open Source Programs Office