Lock audio and video during a Google Meet meeting from iOS devices

Quick launch summary

Google Meet hosts and co-hosts can now lock all participants’ audio and video from iOS devices: Audio Lock mutes all participants, and Video Lock prevents participants from using their cameras. These settings can help prevent disruptions, keeping your meetings on track and productive.

Previously, these locks could only be used when using Google Meet on a computer. We anticipate this feature will be available for Android in early 2022 — we will provide an update on the Workspace Updates Blog once it’s available.

Additional details 

Please note:

The Audio Lock and Video Lock settings apply to all devices, regardless of whether they’re set from a computer or an iOS device.

When Audio Lock or Video Lock is enabled, mobile participants may be removed from the meeting if their device doesn’t have:

  • The most updated version of the Meet or Gmail app
  • Android OS version M or newer 
  • iOS version 12 or newer

Once Audio or Video Lock is disabled, removed participants will be able to rejoin.

Getting started

  • Admins: There is no admin control for this feature.
  • End users: Visit the Help Center to learn more about locking audio or video during a Google Meet meeting.


Rollout pace


Availability

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers


Resources


Start and join meetings and audio calls from 1:1 chats using Google Chat in Gmail on mobile

What’s changing

You can now start or join meetings and audio calls from 1:1 chats in Google Chat in Gmail on Android and iOS. At the moment, this feature is available for 1:1 chats only.

To ring someone directly, select the phone or video icon in the top right corner of a 1:1 chat.


 


To join a call, select the phone or video chip within the 1:1 chat. While on a call, you’ll see a banner showing the person you’re on a call with and the call duration, along with a Meet icon in the chat roster.




Missed calls will be indicated with a red phone or video icon within the conversation and the chat roster.





Who’s impacted

End users


Why you’d use it

As some teams begin to return to the office while others remain distributed, we hope this makes it easier to connect with your colleagues in the hybrid work world. This feature allows you to seamlessly switch from chat to a video or audio call when needed, helping you collaborate and move your work forward.


Additional details

While you can select “Join a call” from the Google Chat app, you will be redirected to the Gmail app, where the call will take place. If you do not have the Gmail app on your device, you’ll be prompted to download it from the Google Play Store or the App Store. We’ll provide an update on the Google Workspace Updates Blog when this feature becomes available for the Google Chat mobile app.


Getting started



Rollout pace



Availability

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers
  • Available to users with personal Google accounts

Resources


Stable Channel Update for Desktop

The Stable channel has been updated to 96.0.4664.93 for Windows, Mac and Linux, which will roll out over the coming days/weeks. The Extended Stable channel has also been updated to 96.0.4664.93 for Windows and Mac, which will roll out over the coming days/weeks.

A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Security Fixes and Rewards

Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.


This update includes 20 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.


[$15000][1267661] High CVE-2021-4052: Use after free in web apps. Reported by Wei Yuan of MoyunSec VLab on 2021-11-07

[$10000][1267791] High CVE-2021-4053: Use after free in UI. Reported by Rox on 2021-11-08

[$5000][1239760] High CVE-2021-4054: Incorrect security UI in autofill. Reported by Alesandro Ortiz on 2021-08-13

[$1000][1266510] High CVE-2021-4055: Heap buffer overflow in extensions. Reported by Chen Rong on 2021-11-03

[$TBD][1260939] High CVE-2021-4056: Type Confusion in loader. Reported by @__R0ng of 360 Alpha Lab on 2021-10-18

[$TBD][1262183] High CVE-2021-4057: Use after free in file API. Reported by Sergei Glazunov of Google Project Zero on 2021-10-21

[$TBD][1267496] High CVE-2021-4058: Heap buffer overflow in ANGLE. Reported by Abraruddin Khan and Omair on 2021-11-06

[$TBD][1270990] High CVE-2021-4059: Insufficient data validation in loader. Reported by Luan Herrera (@lbherrera_) on 2021-11-17

[$TBD][1271456] High CVE-2021-4061: Type Confusion in V8. Reported by Paolo Severini on 2021-11-18

[$TBD][1272403] High CVE-2021-4062: Heap buffer overflow in BFCache. Reported by Leecraso and Guang Gong of 360 Alpha Lab on 2021-11-22

[$TBD][1273176] High CVE-2021-4063: Use after free in developer tools. Reported by Abdulrahman Alqabandi, Microsoft Browser Vulnerability Research on 2021-11-23

[$TBD][1273197] High CVE-2021-4064: Use after free in screen capture. Reported by @ginggilBesel on 2021-11-23

[$TBD][1273674] High CVE-2021-4065: Use after free in autofill. Reported by 5n1p3r0010 on 2021-11-25

[$TBD][1274499] High CVE-2021-4066: Integer underflow in ANGLE. Reported by Jaehun Jeong(@n3sk) of Theori on 2021-11-29

[$TBD][1274641] High CVE-2021-4067: Use after free in window manager. Reported by @ginggilBesel on 2021-11-29

[$500][1265197] Low CVE-2021-4068: Insufficient validation of untrusted input in new tab page. Reported by NDevTK on 2021-10-31


We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.

As usual, our ongoing internal security work was responsible for a wide range of fixes:

  • [1276568] Various fixes from internal audits, fuzzing and other initiatives


Many of our security bugs are detected using AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, or AFL.
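For readers unfamiliar with these tools, the sketch below shows what a minimal libFuzzer fuzz target generally looks like; it is an illustrative, hypothetical example rather than Chrome's actual fuzzing code, and ParseMessage is a made-up stand-in for whatever API would really be under test. LLVMFuzzerTestOneInput is libFuzzer's standard entry point, and building with -fsanitize=fuzzer,address links in both the fuzzing engine and AddressSanitizer, so memory errors such as heap buffer overflows or use-after-frees are reported as soon as a generated input triggers them.

    // Minimal libFuzzer harness (illustrative sketch, not Chrome code).
    // Build with: clang++ -g -O1 -fsanitize=fuzzer,address fuzz_target.cc
    #include <cstddef>
    #include <cstdint>
    #include <string>

    // Hypothetical stand-in for the code under test; a real target would
    // call into the parser or API being fuzzed.
    bool ParseMessage(const std::string& message) {
      return !message.empty() && message.front() == '{';
    }

    // Standard libFuzzer entry point, called repeatedly with mutated inputs.
    extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
      ParseMessage(std::string(reinterpret_cast<const char*>(data), size));
      return 0;  // Returning 0 keeps non-crashing inputs in the corpus.
    }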


Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Srinivas Sista
Google Chrome

Google at NeurIPS 2021

This week marks the beginning of the 35th annual Conference on Neural Information Processing Systems (NeurIPS 2021), the biggest machine learning conference of the year. NeurIPS 2021 will be held virtually and includes invited talks, demonstrations and presentations of some of the latest in machine learning research. This year, NeurIPS also announced a new Datasets and Benchmarks track, which will include publications, talks, posters, and discussions related to this research area.

Google will have a strong presence with more than 170 accepted papers, additionally contributing to and learning from the broader academic research community via talks, posters, workshops, and tutorials. You can learn more about our work being presented in the list below (Google affiliations highlighted in bold).

Organizing Committee

Communications Co-Chair: Emily Denton
Program Co-Chair: Yann Dauphin
Workshop Co-Chair: Sanmi Koyejo

Senior Area Chairs: Alekh Agarwal, Amir Globerson, Been Kim, Charles Sutton, Claudio Gentile, Corinna Cortes, Dale Schuurmans, David Duvenaud, Elad Hazan, Hugo Larochelle, Jean-Philippe Vert, Kevin Murphy, Marco Cuturi, Mehryar Mohri, Mohammad Ghavamzadeh, Samory Kpotufe, Sanjiv Kumar, Satyen Kale, Sergey Levine, Tara N. Sainath, Yishay Mansour

Area Chairs: Abhishek Kumar, Abhradeep Guha Thakurta, Alex Kulesza, Alexander A. Alemi, Alexander T. Toshev, Amin Karbasi, Amit Daniely, Ananda Theertha Suresh, Ankit Singh Rawat, Ashok Cutkosky, Badih Ghazi, Balaji Lakshminarayanan, Ben Poole, Bo Dai, Boqing Gong, Chelsea Finn, Chiyuan Zhang, Christian Szegedy, Cordelia Schmid, Craig Boutilier, Cyrus Rashtchian, D. Sculley, Daniel Keysers, David Ha, Denny Zhou, Dilip Krishnan, Dumitru Erhan, Dustin Tran, Ekin Dogus Cubuk, Fabian Pedregosa, George Tucker, Hanie Sedghi, Hanjun Dai, Heinrich Jiang, Hossein Mobahi, Izhak Shafran, Jaehoon Lee, Jascha Sohl-Dickstein, Jasper Snoek, Jeffrey Pennington, Jelani Nelson, Jieming Mao, Justin Gilmer, Karol Hausman, Karthik Sridharan, Kevin Swersky, Maithra Raghu, Mario Lucic, Mathieu Blondel, Matt Kusner, Matthew Johnson, Matthieu Geist, Ming-Hsuan Yang, Mohammad Mahdian, Mohammad Norouzi, Nal Kalchbrenner, Naman Agarwal, Nicholas Carlini, Nicolas Papernot, Olivier Bachem, Olivier Pietquin, Paul Duetting, Praneeth Netrapalli, Pranjal Awasthi, Prateek Jain, Quentin Berthet, Renato Paes Leme, Richard Nock, Rif A. Saurous, Rose Yu, Roy Frostig, Samuel Stern Schoenholz, Sashank J. Reddi, Sercan O. Arik, Sergei Vassilvitskii, Sergey Ioffe, Shay Moran, Silvio Lattanzi, Simon Kornblith, Srinadh Bhojanapalli, Thang Luong, Thomas Steinke, Tim Salimans, Tomas Pfister, Tomer Koren, Uri Stemmer, Vahab Mirrokni, Vikas Sindhwani, Vincent Dumoulin, Virginia Smith, Vladimir Braverman, W. Ronny Huang, Wen Sun, Yang Li, Yasin Abbasi-Yadkori, Yinlam Chow, Yujia Li, Yunhe Wang, Zoltán Szabó

NeurIPS Foundation Board 2021: Michael Mozer, Corinna Cortes, Hugo Larochelle, John C. Platt, Fernando Pereira

Test of Time Award

Online Learning for Latent Dirichlet Allocation
Matthew D. Hoffman, David M. Blei, Francis Bach

Publications

Deep Reinforcement Learning at the Edge of the Statistical Precipice (see blog post)
Outstanding Paper Award Recipient
Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, Marc G. Bellemare

A Separation Result Between Data-Oblivious and Data-Aware Poisoning Attacks
Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Guha Thakurta

Adversarial Robustness of Streaming Algorithms Through Importance Sampling
Vladimir Braverman, Avinatan Hassidim, Yossi Matias, Mariano Schain, Sandeep Silwal, Samson Zhou

Aligning Silhouette Topology for Self-Adaptive 3D Human Pose Recovery
Mugalodi Rakesh, Jogendra Nath Kundu, Varun Jampani, R. Venkatesh Babu

Attention Bottlenecks for Multimodal Fusion
Arsha Nagrani, Shan Yang, Anurag Arnab, Aren Jansen, Cordelia Schmid, Chen Sun

Autonomous Reinforcement Learning via Subgoal Curricula
Archit Sharma, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn

Calibration and Consistency of Adversarial Surrogate Losses
Pranjal Awasthi, Natalie S. Frank, Anqi Mao, Mehryar Mohri, Yutao Zhong

Compressive Visual Representations
Kuang-Huei Lee, Anurag Arnab, Sergio Guadarrama, John Canny, Ian Fischer

Counterfactual Invariance to Spurious Correlations in Text Classification
Victor Veitch, Alexander D'Amour, Steve Yadlowsky, Jacob Eisenstein

Deep Learning Through the Lens of Example Difficulty
Robert J.N. Baldock, Hartmut Maennel, Behnam Neyshabur

Deep Neural Networks as Point Estimates for Deep Gaussian Processes
Vincent Dutordoir, James Hensman, Mark van der Wilk, Carl Henrik Ek, Zoubin Ghahramani, Nicolas Durrande

Delayed Gradient Averaging: Tolerate the Communication Latency for Federated Learning
Ligeng Zhu, Hongzhou Lin, Yao Lu, Yujun Lin, Song Han

Discrete-Valued Neural Communication
Dianbo Liu, Alex Lamb, Kenji Kawaguchi, Anirudh Goyal, Chen Sun, Michael Curtis Mozer, Yoshua Bengio

Do Vision Transformers See Like Convolutional Neural Networks?
Maithra Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, Alexey Dosovitskiy

Dueling Bandits with Team Comparisons
Lee Cohen, Ulrike Schmidt-Kraepelin, Yishay Mansour

End-to-End Multi-Modal Video Temporal Grounding
Yi-Wen Chen, Yi-Hsuan Tsai, Ming-Hsuan Yang

Environment Generation for Zero-Shot Compositional Reinforcement Learning
Izzeddin Gur, Natasha Jaques, Yingjie Miao, Jongwook Choi, Manoj Tiwari, Honglak Lee, Aleksandra Faust

H-NeRF: Neural Radiance Fields for Rendering and Temporal Reconstruction of Humans in Motion
Hongyi Xu, Thiemo Alldieck, Cristian Sminchisescu

Improving Calibration Through the Relationship with Adversarial Robustness
Yao Qin, Xuezhi Wang, Alex Beutel, Ed Chi

Learning Generalized Gumbel-Max Causal Mechanisms
Guy Lorberbom, Daniel D. Johnson, Chris J. Maddison, Daniel Tarlow, Tamir Hazan

MICo: Improved Representations via Sampling-Based State Similarity for Markov Decision Processes
Pablo Samuel Castro, Tyler Kastner, Prakash Panangaden, Mark Rowland

Near-Optimal Lower Bounds For Convex Optimization For All Orders of Smoothness
Ankit Garg, Robin Kothari, Praneeth Netrapalli, Suhail Sherif

Neural Circuit Synthesis from Specification Patterns
Frederik Schmitt, Christopher Hahn, Markus N. Rabe, Bernd Finkbeiner

Non-Local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation
Jogendra Nath Kundu, Siddharth Seth, Anirudh Jamkhandi, Pradyumna YM, Varun Jampani, Anirban Chakraborty, R. Venkatesh Babu

Object-Aware Contrastive Learning for Debiased Scene Representation
Sangwoo Mo, Hyunwoo Kang, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin

On Density Estimation with Diffusion Models
Diederik P. Kingma, Tim Salimans, Ben Poole, Jonathan Ho

On Margin-Based Cluster Recovery with Oracle Queries
Marco Bressan, Nicolo Cesa-Bianchi, Silvio Lattanzi, Andrea Paudice

On Model Calibration for Long-Tailed Object Detection and Instance Segmentation
Tai-Yu Pan, Cheng Zhang, Yandong Li, Hexiang Hu, Dong Xuan, Soravit Changpinyo, Boqing Gong, Wei-Lun Chao

Parallelizing Thompson Sampling
Amin Karbasi, Vahab Mirrokni, Mohammad Shadravan

Reverse-Complement Equivariant Networks for DNA Sequences
Vincent Mallet, Jean-Philippe Vert

Revisiting ResNets: Improved Training and Scaling Strategies
Irwan Bello, William Fedus, Xianzhi Du, Ekin Dogus Cubuk, Aravind Srinivas, Tsung-Yi Lin, Jonathon Shlens, Barret Zoph

Revisiting the Calibration of Modern Neural Networks
Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Ann Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, Mario Lucic

Scaling Vision with Sparse Mixture of Experts
Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, Neil Houlsby

SE(3)-Equivariant Prediction of Molecular Wavefunctions and Electronic Densities
Oliver Thorsten Unke, Mihail Bogojeski, Michael Gastegger, Mario Geiger, Tess Smidt, Klaus Robert Muller

Stateful ODE-Nets Using Basis Function Expansions
Alejandro Francisco Queiruga, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney

Statistically and Computationally Efficient Linear Meta-Representation Learning
Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh

Streaming Belief Propagation for Community Detection
Yuchen Wu, Jakab Tardos, Mohammad Hossein Bateni, André Linhares, Filipe Miguel Gonçalves de Almeida, Andrea Montanari, Ashkan Norouzi-Fard

Synthetic Design: An Optimization Approach to Experimental Design with Synthetic Controls
Nick Doudchenko, Khashayar Khosravi, Jean Pouget-Abadie, Sebastien Lahaie, Miles Lubin, Vahab Mirrokni, Jann Spiess, Guido Imbens

The Difficulty of Passive Learning in Deep Reinforcement Learning
George Ostrovski, Pablo Samuel Castro, Will Dabney

The Pareto Frontier of Model Selection for General Contextual Bandits
Teodor Marinov, Julian Zimmert

VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text
Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, Boqing Gong

Co-Adaptation of Algorithmic and Implementational Innovations in Inference-Based Deep Reinforcement Learning
Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, Shixiang Gu

Conservative Data Sharing for Multi-Task Offline Reinforcement Learning
Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn

Does Knowledge Distillation Really Work?
Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, Andrew Gordon Wilson

Exponential Graph is Provably Efficient for Decentralized Deep Training
Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, Pan Pan, Wotao Yin

Faster Matchings via Learned Duals
Michael Dinitz, Sungjin Im, Thomas Lavastida, Benjamin Moseley, Sergei Vassilvitskii

Improved Transformer for High-Resolution GANs
Long Zhao, Zizhao Zhang, Ting Chen, Dimitris N. Metaxas, Han Zhang

Near-Optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems
Prateek Jain, Suhas S. Kowshik, Dheeraj Mysore Nagaraj, Praneeth Netrapalli

Nearly Horizon-Free Offline Reinforcement Learning
Tongzheng Ren, Jialian Li, Bo Dai, Simon S. Du, Sujay Sanghavi

Overparameterization Improves Robustness to Covariate Shift in High Dimensions
Nilesh Tripuraneni, Ben Adlam, Jeffrey Pennington

Pay Attention to MLPs
Hanxiao Liu, Zihang Dai, David R. So, Quoc V. Le

PLUR: A Unifying, Graph-Based View of Program Learning, Understanding, and Repair
Zimin Chen*, Vincent Josua Hellendoorn*, Pascal Lamblin, Petros Maniatis, Pierre-Antoine Manzagol, Daniel Tarlow, Subhodeep Moitra

Prior-Independent Dynamic Auctions for a Value-Maximizing Buyer
Yuan Deng, Hanrui Zhang

Remember What You Want to Forget: Algorithms for Machine Unlearning
Ayush Sekhari, Jayadev Acharya, Gautam Kamath, Ananda Theertha Suresh

Reverse Engineering Learned Optimizers Reveals Known and Novel Mechanisms
Niru Maheswaranathan*, David Sussillo*, Luke Metz, Ruoxi Sun, Jascha Sohl-Dickstein

Revisiting 3D Object Detection From an Egocentric Perspective
Boyang Deng, Charles R. Qi, Mahyar Najibi, Thomas Funkhouser, Yin Zhou, Dragomir Anguelov

Robust Auction Design in the Auto-Bidding World
Santiago Balseiro, Yuan Deng, Jieming Mao, Vahab Mirrokni, Song Zuo

Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data
Qi Zhu, Natalia Ponomareva, Jiawei Han, Bryan Perozzi

Understanding How Encoder-Decoder Architectures Attend
Kyle Aitken, Vinay V. Ramasesh, Yuan Cao, Niru Maheswaranathan

Understanding the Effect of Stochasticity in Policy Optimization
Jincheng Mei, Bo Dai, Chenjun Xiao, Csaba Szepesvari, Dale Schuurmans

Accurately Solving Rod Dynamics with Graph Learning
Han Shao, Tassilo Kugelstadt, Torsten Hädrich, Wojtek Palubicki, Jan Bender, Sören Pirk, Dominik L. Michels

GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training
Chen Zhu, Renkun Ni, Zheng Xu, Kezhi Kong, W. Ronny Huang, Tom Goldstein

Learnability of Linear Thresholds from Label Proportions
Rishi Saket

MLP-Mixer: An All-MLP Architecture for Vision
Ilya Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy

Neural Additive Models: Interpretable Machine Learning with Neural Nets
Rishabh Agarwal, Levi Melnick, Nicholas Frosst, Xuezhou Zhang, Ben Lengerich, Rich Caruana, Geoffrey Hinton

Neural Production Systems
Anirudh Goyal, Aniket Didolkar, Nan Rosemary Ke, Charles Blundell, Philippe Beaudoin, Nicolas Heess, Michael Mozer, Yoshua Bengio

Physics-Aware Downsampling with Deep Learning for Scalable Flood Modeling
Niv Giladi, Zvika Ben-Haim, Sella Nevo, Yossi Matias, Daniel Soudry

Shape from Blur: Recovering Textured 3D Shape and Motion of Fast Moving Objects
Denys Rozumnyi, Martin R. Oswald, Vittorio Ferrari, Marc Pollefeys

What Matters for Adversarial Imitation Learning?
Manu Orsini, Anton Raichuk, Léonard Hussenot, Damien Vincent, Robert Dadashi, Sertan Girgin, Matthieu Geist, Olivier Bachem, Olivier Pietquin, Marcin Andrychowicz

A Convergence Analysis of Gradient Descent on Graph Neural Networks
Pranjal Awasthi, Abhimanyu Das, Sreenivas Gollapudi

A Geometric Analysis of Neural Collapse with Unconstrained Features
Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu

Agnostic Reinforcement Learning with Low-Rank MDPs and Rich Observations
Christoph Dann, Yishay Mansour, Mehryar Mohri, Ayush Sekhari, Karthik Sridharan

Controlled Text Generation as Continuous Optimization with Multiple Constraints
Sachin Kumar, Eric Malmi, Aliaksei Severyn, Yulia Tsvetkov

Coupled Gradient Estimators for Discrete Latent Variables
Zhe Dong, Andriy Mnih, George Tucker

Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-Training Ensembles
Jiefeng Chen*, Frederick Liu, Besim Avci, Xi Wu, Yingyu Liang, Somesh Jha

Neural Active Learning with Performance Guarantees
Zhilei Wang, Pranjal Awasthi, Christoph Dann, Ayush Sekhari, Claudio Gentile

Optimal Sketching for Trace Estimation
Shuli Jiang, Hai Pham, David Woodruff, Qiuyi (Richard) Zhang

Representing Long-Range Context for Graph Neural Networks with Global Attention
Zhanghao Wu, Paras Jain, Matthew A. Wright, Azalia Mirhoseini, Joseph E. Gonzalez, Ion Stoica

Scaling Up Exact Neural Network Compression by ReLU Stability
Thiago Serra, Xin Yu, Abhinav Kumar, Srikumar Ramalingam

Soft Calibration Objectives for Neural Networks
Archit Karandikar, Nicholas Cain, Dustin Tran, Balaji Lakshminarayanan, Jonathon Shlens, Michael Curtis Mozer, Rebecca Roelofs

Sub-Linear Memory: How to Make Performers SLiM
Valerii Likhosherstov, Krzysztof Choromanski, Jared Davis, Xingyou Song, Adrian Weller

A New Theoretical Framework for Fast and Accurate Online Decision-Making
Nicolò Cesa-Bianchi, Tommaso Cesari, Yishay Mansour, Vianney Perchet

Bridging the Gap Between Practice and PAC-Bayes Theory in Few-Shot Meta-Learning
Nan Ding, Xi Chen, Tomer Levinboim, Sebastian Goodman, Radu Soricut

Differentially Private Multi-Armed Bandits in the Shuffle Model
Jay Tenenbaum, Haim Kaplan, Yishay Mansour, Uri Stemmer

Efficient and Local Parallel Random Walks
Michael Kapralov, Silvio Lattanzi, Navid Nouri, Jakab Tardos

Improving Anytime Prediction with Parallel Cascaded Networks and a Temporal-Difference Loss
Michael Louis Iuzzolino, Michael Curtis Mozer, Samy Bengio*

It Has Potential: Gradient-Driven Denoisers for Convergent Solutions to Inverse Problems
Regev Cohen, Yochai Blau, Daniel Freedman, Ehud Rivlin

Learning to Combine Per-Example Solutions for Neural Program Synthesis
Disha Shrivastava, Hugo Larochelle, Daniel Tarlow

LLC: Accurate, Multi-purpose Learnt Low-Dimensional Binary Codes
Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham Kakade, Ali Farhadi

There Is No Turning Back: A Self-Supervised Approach for Reversibility-Aware Reinforcement Learning (see blog post)

Nathan Grinsztajn, Johan Ferret, Olivier Pietquin, Philippe Preux, Matthieu Geist

A Near-Optimal Algorithm for Debiasing Trained Machine Learning Models
Ibrahim Alabdulmohsin, Mario Lucic

Adaptive Sampling for Minimax Fair Classification
Shubhanshu Shekhar, Greg Fields, Mohammad Ghavamzadeh, Tara Javidi

Asynchronous Stochastic Optimization Robust to Arbitrary Delays
Alon Cohen, Amit Daniely, Yoel Drori, Tomer Koren, Mariano Schain

Boosting with Multiple Sources
Corinna Cortes, Mehryar Mohri, Dmitry Storcheus, Ananda Theertha Suresh

Breaking the Centralized Barrier for Cross-Device Federated Learning
Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, Ananda Theertha Suresh

Canonical Capsules: Self-Supervised Capsules in Canonical Pose
Weiwei Sun, Andrea Tagliasacchi, Boyang Deng, Sara Sabour, Soroosh Yazdani, Geoffrey Hinton, Kwang Moo Yi

Contextual Recommendations and Low-Regret Cutting-Plane Algorithms
Sreenivas Gollapudi, Guru Guruganesh, Kostas Kollias, Pasi Manurangsi, Renato Paes Leme, Jon Schneider

Decision Transformer: Reinforcement Learning via Sequence Modeling
Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch

Deep Learning on a Data Diet: Finding Important Examples Early in Training
Mansheej Paul, Surya Ganguli, Gintare Karolina Dziugaite

Deep Learning with Label Differential Privacy
Badih Ghazi, Noah Golowich*, Ravi Kumar, Pasin Manurangsi, Chiyuan Zhang

Efficient Training of Retrieval Models Using Negative Cache
Erik Lindgren, Sashank J. Reddi, Ruiqi Guo, Sanjiv Kumar

Exploring Cross-Video and Cross-Modality Signals for Weakly-Supervised Audio-Visual Video Parsing
Yan-Bo Lin, Hung-Yu Tseng, Hsin-Ying Lee, Yen-Yu Lin, Ming-Hsuan Yang

Federated Reconstruction: Partially Local Federated Learning
Karan Singhal, Hakim Sidahmed, Zachary Garrett, Shanshan Wu, Keith Rush, Sushant Prakash

Framing RNN as a Kernel Method: A Neural ODE Approach
Adeline Fermanian, Pierre Marion, Jean-Philippe Vert, Gérard Biau

Learning Semantic Representations to Verify Hardware Designs
Shobha Vasudevan, Wenjie Jiang, David Bieber, Rishabh Singh, Hamid Shojaei, C. Richard Ho, Charles Sutton

Learning with User-Level Privacy
Daniel Asher Nathan Levy*, Ziteng Sun*, Kareem Amin, Satyen Kale, Alex Kulesza, Mehryar Mohri, Ananda Theertha Suresh

Logarithmic Regret from Sublinear Hints
Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit

Margin-Independent Online Multiclass Learning via Convex Geometry
Guru Guruganesh, Allen Liu, Jon Schneider, Joshua Ruizhi Wang

Multiclass Boosting and the Cost of Weak Learning
Nataly Brukhim, Elad Hazan, Shay Moran, Indraneel Mukherjee, Robert E. Schapire

Neural-PIL: Neural Pre-integrated Lighting for Reflectance Decomposition
Mark Boss, Varun Jampani, Raphael Braun, Ce Liu*, Jonathan T. Barron, Hendrik Lensch

Never Go Full Batch (in Stochastic Convex Optimization)
Idan Amir, Yair Carmon, Tomer Koren, Roi Livni

On Large-Cohort Training for Federated Learning
Zachary Charles, Zachary Garrett, Zhouyuan Huo, Sergei Shmulyian, Virginia Smith

On the Sample Complexity of Privately Learning Axis-Aligned Rectangles
Menachem Sadigurschi, Uri Stemmer

Online Control of Unknown Time-Varying Dynamical Systems
Edgar Minasyan, Paula Gradu, Max Simchowitz, Elad Hazan

Online Knapsack with Frequency Predictions
Sungjin Im, Ravi Kumar, Mahshid Montazer Qaem, Manish Purohit

Optimal Rates for Random Order Online Optimization
Uri Sherman, Tomer Koren, Yishay Mansour

Oracle-Efficient Regret Minimization in Factored MDPs with Unknown Structure
Aviv Rosenberg, Yishay Mansour

Practical Large-Scale Linear Programming Using Primal-Dual Hybrid Gradient
David Applegate, Mateo Díaz*, Oliver Hinder, Haihao Lu*, Miles Lubin, Brendan O'Donoghue, Warren Schudy

Private and Non-Private Uniformity Testing for Ranking Data
Robert Istvan Busa-Fekete, Dimitris Fotakis, Manolis Zampetakis

Privately Learning Subspaces
Vikrant Singhal, Thomas Steinke

Provable Representation Learning for Imitation with Contrastive Fourier Features
Ofir Nachum, Mengjiao Yang

Safe Reinforcement Learning with Natural Language Constraints
Tsung-Yen Yang, Michael Hu, Yinlam Chow, Peter J. Ramadge, Karthik Narasimhan

Searching for Efficient Transformers for Language Modeling
David R. So, Wojciech Mańke, Hanxiao Liu, Zihang Dai, Noam Shazeer, Quoc V. Le

SLOE: A Faster Method for Statistical Inference in High-Dimensional Logistic Regression
Steve Yadlowsky, Taedong Yun, Cory McLean, Alexander D'Amour

Streaming Linear System Identification with Reverse Experience Replay
Prateek Jain, Suhas S. Kowshik, Dheeraj Mysore Nagaraj, Praneeth Netrapalli

The Skellam Mechanism for Differentially Private Federated Learning
Naman Agarwal, Peter Kairouz, Ziyu Liu*

TokenLearner: Adaptive Space-Time Tokenization for Videos
Michael S. Ryoo, AJ Piergiovanni, Anurag Arnab, Mostafa Dehghani, Anelia Angelova

Towards Best-of-All-Worlds Online Learning with Feedback Graphs
Liad Erez, Tomer Koren

Training Over-Parameterized Models with Non-decomposable Objectives
Harikrishna Narasimhan, Aditya Krishna Menon

Twice Regularized MDPs and the Equivalence Between Robustness and Regularization
Esther Derman, Matthieu Geist, Shie Mannor

Unsupervised Learning of Compositional Energy Concepts
Yilun Du, Shuang Li, Yash Sharma, Joshua B. Tenenbaum, Igor Mordatch

User-Level Differentially Private Learning via Correlated Sampling
Badih Ghazi, Ravi Kumar, Pasin Manurangsi

ViSER: Video-Specific Surface Embeddings for Articulated 3D Shape Reconstruction
Gengshan Yang, Deqing Sun, Varun Jampani, Daniel Vlasic, Forrester Cole, Ce Liu*, Deva Ramanan

A Minimalist Approach to Offline Reinforcement Learning
Scott Fujimoto, Shixiang Gu

A Unified View of cGANs With and Without Classifiers
Si-An Chen, Chun-Liang Li, Hsuan-Tien Lin

CoAtNet: Marrying Convolution and Attention for All Data Sizes (see blog post)
Zihang Dai, Hanxiao Liu, Quoc V. Le, Mingxing Tan

Combiner: Full Attention Transformer with Sparse Computation Cost
Hongyu Ren*, Hanjun Dai, Zihang Dai, Mengjiao Yang, Jure Leskovec, Dale Schuurmans, Bo Dai

Contrastively Disentangled Sequential Variational Autoencoder
Junwen Bai, Weiran Wang, Carla P. Gomes

Controlling Neural Networks with Rule Representations
Sungyong Seo, Sercan O. Arik, Jinsung Yoon, Xiang Zhang, Kihyuk Sohn, Tomas Pfister

Dataset Distillation with Infinitely Wide Convolutional Networks
Timothy Nguyen*, Roman Novak, Lechao Xiao, Jaehoon Lee

Deep Synoptic Monte-Carlo Planning in Reconnaissance Blind Chess
Gregory Clark

Differentially Private Learning with Adaptive Clipping
Galen Andrew, Om Thakkar, Swaroop Ramaswamy, Hugh Brendan McMahan

Differentially Private Model Personalization
Prateek Jain, Keith Rush, Adam Smith, Shuang Song, Abhradeep Thakurta

Efficient Algorithms for Learning Depth-2 Neural Networks with General ReLU Activations
Pranjal Awasthi, Alex Tang, Aravindan Vijayaraghavan

Efficiently Identifying Task Groupings for Multi-Task Learning
Christopher Fifty, Ehsan Amid, Zhe Zhao, Tianhe Yu, Rohan Anil, Chelsea Finn

Generalized Shape Metrics on Neural Representations
Alex H. Williams, Erin Kunz, Simon Kornblith, Scott Linderman

High-Probability Bounds for Non-Convex Stochastic Optimization with Heavy Tails
Ashok Cutkosky, Harsh Mehta

Identity Testing for Mallows Model
Róbert Busa-Fekete, Dimitris Fotakis, Balázs Szörényi, Manolis Zampetakis

Learnable Fourier Features for Multi-dimensional Spatial Positional Encoding
Yang Li, Si Si, Gang Li, Cho-Jui Hsieh, Samy Bengio*

Learning to Select Exogenous Events for Marked Temporal Point Process
Ping Zhang, Rishabh K. Iyer, Ashish V. Tendulkar, Gaurav Aggarwal, Abir De

Meta-learning to Improve Pre-training
Aniruddh Raghu, Jonathan Peter Lorraine, Simon Kornblith, Matthew B.A. McDermott, David Duvenaud

Pointwise Bounds for Distribution Estimation Under Communication Constraints
Wei-Ning Chen, Peter Kairouz, Ayfer Özgür

REMIPS: Physically Consistent 3D Reconstruction of Multiple Interacting People Under Weak Supervision
Mihai Fieraru, Mihai Zanfir, Teodor Alexandru Szente, Eduard Gabriel Bazavan, Vlad Olaru, Cristian Sminchisescu

Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification
Benjamin Eysenbach, Sergey Levine, Ruslan Salakhutdinov

Revealing and Protecting Labels in Distributed Training
Trung Dang, Om Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Peter Chin, Françoise Beaufays

Robust Predictable Control
Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

Robust Visual Reasoning via Language Guided Neural Module Networks
Arjun Reddy Akula, Varun Jampani, Soravit Changpinyo, Song-Chun Zhu

Towards Understanding Retrosynthesis by Energy-Based Models
Ruoxi Sun, Hanjun Dai, Li Li, Steven Kearnes, Bo Dai

Exploring the Limits of Out-of-Distribution Detection
Stanislav Fort, Jie Ren, Balaji Lakshminarayanan

Minimax Regret for Stochastic Shortest Path
Alon Cohen, Yonathan Efroni, Yishay Mansour, Aviv Rosenberg

No Regrets for Learning the Prior in Bandits
Soumya Basu, Branislav Kveton, Manzil Zaheer, Csaba Szepesvari

Structured Denoising Diffusion Models in Discrete State-Spaces
Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, Rianne van den Berg

The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning (see blog post)
Yujin Tang, David Ha

On the Existence of The Adversarial Bayes Classifier
Pranjal Awasthi, Natalie Frank, Mehryar Mohri

Beyond Value-Function Gaps: Improved Instance-Dependent Regret Bounds for Episodic Reinforcement Learning
Christopher Dann, Teodor Vanislavov Marinov, Mehryar Mohri, Julian Zimmert

A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning
Christopher Dann, Mehryar Mohri, Tong Zhang, Julian Zimmert

Datasets & Benchmarks Accepted Papers

Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research
Bernard Koch, Emily Denton, Alex Hanna, Jacob G. Foster
Datasets & Benchmarks Best Paper

Constructing a Visual Dataset to Study the Effects of Spatial Apartheid in South Africa
Raesetje Sefala, Timnit Gebru, Luzango Mfupe, Nyalleng Moorosi

AI and the Everything in the Whole Wide World Benchmark
Inioluwa Deborah Raji, Emily M. Bender, Amandalynne Paullada, Emily Denton, Alex Hanna

A Unified Few-Shot Classification Benchmark to Compare Transfer and Meta Learning Approaches
Vincent Dumoulin, Neil Houlsby, Utku Evci, Xiaohua Zhai, Ross Goroshin, Sylvain Gelly, Hugo Larochelle

The Neural MMO Platform for Massively Multi-agent Research
Joseph Suarez, Yilun Du, Clare Zhu, Igor Mordatch, Phillip Isola

Systematic Evaluation of Causal Discovery in Visual Model-Based Reinforcement Learning
Nan Rosemary Ke, Aniket Didolkar, Sarthak Mittal, Anirudh Goyal, Guillaume Lajoie, Stefan Bauer, Danilo Rezende, Yoshua Bengio, Michael Mozer, Christopher Pal

STEP: Segmenting and Tracking Every Pixel
Mark Weber, Jun Xie, Maxwell Collins, Yukun Zhu, Paul Voigtlaender, Hartwig Adam, Bradley Green, Andreas Geiger, Bastian Leibe, Daniel Cremers, Aljosa Osep, Laura Leal-Taixe, Liang-Chieh Chen

Artsheets for Art Datasets
Ramya Srinivasan, Emily Denton, Jordan Famularo, Negar Rostamzadeh, Fernando Diaz, Beth Coleman

SynthBio: A Case in Human–AI Collaborative Curation of Text Datasets
Ann Yuan, Daphne Ippolito, Vitaly Nikolaev, Chris Callison-Burch, Andy Coenen, Sebastian Gehrmann

Benchmarking Bayesian Deep Learning on Diabetic Retinopathy Detection Tasks
Neil Band, Tim G. J. Rudner, Qixuan Feng, Angelos Filos, Zachary Nado, Michael W. Dusenberry, Ghassen Jerfel, Dustin Tran, Yarin Gal

Brax - A Differentiable Physics Engine for Large Scale Rigid Body Simulation (see blog post)
C. Daniel Freeman, Erik Frey, Anton Raichuk, Sertan Girgin, Igor Mordatch, Olivier Bachem

MLPerf Tiny Benchmark
Colby Banbury, Vijay Janapa Reddi, Peter Torelli, Jeremy Holleman, Nat Jeffries, Csaba Kiraly, Pietro Montino, David Kanter, Sebastian Ahmed, Danilo Pau, Urmish Thakker, Antonio Torrini, Peter Warden, Jay Cordaro, Giuseppe Di Guglielmo, Javier Duarte, Stephen Gibellini, Videet Parekh, Honson Tran, Nhan Tran, Niu Wenxu, Xu Xuesong

Automatic Construction of Evaluation Suites for Natural Language Generation Datasets
Simon Mille, Kaustubh D. Dhole, Saad Mahamood, Laura Perez-Beltrachini, Varun Gangal, Mihir Kale, Emiel van Miltenburg, Sebastian Gehrmann

An Empirical Investigation of Representation Learning for Imitation
Xin Chen, Sam Toyer, Cody Wild, Scott Emmons, Ian Fischer, Kuang-Huei Lee, Neel Alex, Steven Wang, Ping Luo, Stuart Russell, Pieter Abbeel, Rohin Shah

Multilingual Spoken Words Corpus
Mark Mazumder, Sharad Chitlangia, Colby Banbury, Yiping Kang, Juan Manuel Ciro, Keith Achorn, Daniel Galvez, Mark Sabini, Peter Mattson, David Kanter, Greg Diamos, Pete Warden, Josh Meyer, Vijay Janapa Reddi

Workshops

4th Robot Learning Workshop: Self-Supervised and Lifelong Learning
Sponsor: Google
Organizers include Alex Bewley, Vincent Vanhoucke

Differentiable Programming Workshop
Sponsor: Google

Machine Learning for Creativity and Design
Sponsor: Google
Organizers include: Daphne Ippolito, David Ha

LatinX in AI (LXAI) Research @ NeurIPS 2021
Sponsor: Google
Sponsorship Level: Platinum
Workshop Chairs include: Andres Munoz Medina
Mentorship Roundtables include: Jonathan Huang, Pablo Samuel Castro

Algorithmic Fairness Through the Lens of Causality and Robustness
Organizers include: Jessica Schrouff, Awa Dieng

ImageNet: Past, Present, and Future
Organizers include: Lucas Beyer, Xiaohua Zhai
Speakers include: Emily Denton, Vittorio Ferrari, Alex Hanna, Alex Kolesnikov, Rebecca Roelofs

Optimal Transport and Machine Learning
Organizers include: Marco Cuturi

Safe and Robust Control of Uncertain Systems
Speakers include: Aleksandra Faust

CtrlGen: Controllable Generative Modeling in Language and Vision
Speakers include: Sebastian Gehrmann

Deep Reinforcement Learning
Organizers include: Chelsea Finn
Speakers include: Karol Hausman, Dale Schuurmans

Distribution Shifts: Connecting Methods and Applications (DistShift)
Speakers include: Chelsea Finn

ML For Systems
Organizers include: Anna Goldie, Martin Maas, Azade Nazi, Azalia Mirhoseini, Milad Hashemi, Kevin Swersky

Learning in Presence of Strategic Behavior
Organizers include: Yishay Mansour

Bayesian Deep Learning
Organizers include: Zoubin Ghahramani, Kevin Murphy

Advances in Programming Languages and Neurosymbolic Systems (AIPLANS)
Organizers include: Disha Shrivastava, Vaibhav Tulsyan, Danny Tarlow

Ecological Theory of Reinforcement Learning: How Does Task Design Influence Agent Learning?
Organizers include: Shixiang Shane Gu, Pablo Samuel Castro, Marc G. Bellemare

The Symbiosis of Deep Learning and Differential Equations
Organizers include: Lily Hu

Out-of-Distribution Generalization and Adaptation in Natural and Artificial Intelligence
Speakers include: Chelsea Finn

Cooperative AI
Organizers include: Natasha Jaques

Offline Reinforcement Learning
Organizers include: Rishabh Agarwal, George Tucker
Speakers include: Minmin Chen

2nd Workshop on Self-Supervised Learning: Theory and Practice
Organizers include: Kristina Toutanova

Data Centric AI
Organizers include: Lora Aroyo

Math AI for Education (MATHAI4ED): Bridging the Gap Between Research and Smart Education
Organizers include: Yuhai (Tony) Wu

Tutorials

Beyond Fairness in Machine Learning
Organizers include: Emily Denton

Competitions

Evaluating Approximate Inference in Bayesian Deep Learning
Organizers include: Matthew D. Hoffman, Sharad Vikram

HEAR 2021 NeurIPS Challenge: Holistic Evaluation of Audio Representations
Organizers include: Jesse Engel

Machine Learning for Combinatorial Optimization
Organizers include: Pawel Lichocki, Miles Lubin



*Work done while at Google.  

Currently at Google.  

Source: Google AI Blog


5 ways to watch more for less with Google TV

There’s more streaming entertainment than ever — and if you want to watch it all, the cost of subscriptions can add up quickly. In fact, our research showed that 67% of TV streamers are concerned about how much they’ll pay for streaming in the future.

With Google TV, you don’t have to limit your watchlist — or break the bank. Just in time for the holidays, here are a few ways you can watch more, for less, with Google TV in the U.S.

TV showing the Google TV Live Tab with integrated Pluto TV linear channels

Free Live TV channels from Pluto TV now in the Google TV Live tab


  1. Watch free live, linear streaming TV channels on the Live tab. Starting today, we’re partnering with Pluto TV so you can access more than 300 free live TV channels on Google TV. Visit the Live tab to see what’s on now or check out the Free Live TV recommendations in the For You tab. This new integration with Pluto TV will be available on all Google TV devices in the coming weeks.
  2. Check out movies on YouTube. Access thousands of feature-length movies for free with ads from the Movies & Shows tab in the YouTube app. Don’t miss holiday favorites like “Jingle All The Way,” “Serendipity” and “Bridget Jones’s Diary.”
  3. Try out more free Google TV apps. Head on over to the Apps tab to find a row of “Free movies & TV” apps to download, including Tubi, Xumo and Red Bull TV.
  4. Enjoy six months of Peacock Premium, at no extra cost. For a limited time, when you activate a new Google TV (or other Android TV OS device) in the U.S., you can get Peacock Premium at no extra cost. (After that, it’s $4.99 a month plus tax, but you can cancel at any time — check out all the details for more information.) You'll get everything Peacock has to offer — hit movies and shows, exclusive originals, WWE, live sports and more. Visit the For You or Apps tab after you set up your device to redeem the offer.
  5. Redeem Google Play Points for movies, shows, apps and more. If the movie or TV show you’re looking for isn't available from your services or free providers, you can rent or buy over 200,000 movies and TV episodes directly from Google TV, starting at $2.99. Whether it’s a new release or an old favorite, just search for it with your voice and click “Rent.” And with the Play Points loyalty program, you can earn points for every dollar you spend and redeem them for free Play Credits, which you can use to buy more movies, shows, books, apps and games.

And in 2022, we’ll be bringing you more ways to watch for free. In the meantime, kick back with free live TV, premium shows from Peacock and on-demand movies on your Chromecast with Google TV and TVs from Sony and TCL.

Snap faster, hear better and do more with your Pixel

One of the sweet things about being a Pixel user is that your phone continues to get a boost of helpfulness with Feature Drops. Whether you want to quickly tap to access Snapchat from your Pixel lock screen or control the bass levels on your Pixel Buds A-Series, we’ve got an update you’ll love.

This latest Feature Drop will roll out to users over the next few weeks, starting today with relevant updates coming to Pixel 3a through Pixel 5a (5G) devices - see g.co/pixel/updates for details. Pixel 6 and Pixel 6 Pro devices will begin receiving their updates next week.

Snapchat, digital car key and ultra-wideband help Pixel do more

You can already customize the actions your Pixel takes when you use Quick Tap, from taking a screenshot to playing music. With Quick Tap to Snap, you can access Snapchat directly from your lock screen, making Pixel the fastest phone to make a Snap. Quick Tap to Snap is available to all Pixel 4a with 5G or newer Pixel phones. Plus, starting this month, you’ll be able to add a new Pixel-exclusive Lens – Pixel Face – to your Snaps. Look out for more Pixel-exclusive Lenses in future Feature Drops.

Image showing  Quick Tap to Snap on Pixel 6 and Pixel 6 Pro. Two people, an adult and a child, looking into the camera. One is smiling and the other is making a silly face.

As you saw from our friends at Android, we’ve partnered with BMW to enable digital car key for Pixel 6 and Pixel 6 Pro. On select 2020-2022 BMW models in certain countries, you can now unlock and lock your car by tapping your phone on the door handle, and you can start your car by placing your Pixel on the interior key reader and pressing the engine start button.

And ultra-wideband is now enabled on Pixel 6 Pro. This technology improves Nearby Share so you can quickly and securely send files, videos, map locations and more to other ultra-wideband devices nearby.

Personalize your devices

Conversation mode, an early-stage accessibility feature in the Sound Amplifier app, is now available in beta first on Pixel. This feature uses on-device machine learning to help anyone better hear conversations in loud environments by tuning into their conversation partner and tuning out competing noise. While Google Research continues to work on conversation mode, you can get a sneak peek as an early tester and help make it better for everyone.

Animated GIF showing how Sound Amplifier works. A person's face is centered in a circle in the middle of the phone and while they speak, abstract sound icons illustrate the app amplifying their words.

Have you ever heard a catchy new track but had no idea what it is? We’ve updated the Now Playing experience on Pixel to help you find your next favorite song. As always, Now Playing’s automatic recognition is done entirely on-device. If Now Playing hasn’t automatically identified a song playing nearby, turn on the new search button, then tap it to let Pixel find the song for you (available on Pixel 4 or newer Pixel phones). And if you really like it, tap the music note icon next to the recognized track on your lock screen to save it as a favorite.

Animated GIF showing how Now Playing recognizes songs that are playing nearby on a Pixel phone.

On-screen experience is simulated for illustrative purposes. Now Playing may not recognize every song.

Speaking of music: We’re also introducing improved bass-level control for the Pixel Buds A-Series. With any Android 6.0+ device, you can now open the Pixel Buds app and use a slider to adjust bass from -1 to +4, giving you twice the bass range you currently have.

We've also added to our wallpapers. In celebration of International Day of Persons with Disabilities, we collaborated with Dana Kearly, a disabled multidisciplinary artist from Vancouver B.C., to create three beautiful new wallpapers for the Curated Culture collection.

Image showing a Wallpaper by Dana Kearly on a Pixel phone lock screen. It has cartoon flowers standing up on grass with an abstract pink, yellow, purple and orange background behind them.

Wallpaper by Dana Kearly.

Car crash detection and Recorder

Car crash detection is now supported in Taiwan, Italy and France, in addition to Spain, Ireland, Japan, the U.K., Australia, Singapore and the U.S. When car crash detection is turned on in the Personal Safety app, your Pixel 3, Pixel 4 or newer Pixel phone can help detect if you’ve been in a severe car accident. If a crash is detected, your phone will check in with you to see if you’re OK. If there’s no response, Pixel can share your location and other relevant details with emergency responders. (This feature is dependent upon network connectivity and other factors and may not be reliable for emergency communications or available in all areas.)

And while car crash detection is expanding to new countries, we’re also enabling new languages for transcription in the Recorder app. These include Japanese, French and German on Pixel 3 and newer Pixel phones.

If you want to learn more about these updates, visit our Pixel forum. Otherwise, that’s all for now — until our next Feature Drop!

7 apps we couldn’t live without in 2021

As 2021 draws to a close, our Chromebook Apps team is taking the time to reflect on all the ways Chromebooks have helped us tackle another year of doing just about everything from home. This year, we’re starting a new tradition: sharing a few of the many apps we couldn’t live without, from our team to you.

Designing holiday cards

Pixlr. ‘Tis the season to create memories that bring smiles to friends and family. But capturing a photo of my family of five, including toddlers, is no small feat. Pixlr lets you edit photos and create great designs right in your browser. I combined a few photos into one to give the appearance of a calm and serene group, while giving the background a perfect blur. – Maria Lundahl Schmidt, Chrome OS Apps Partnerships

A card that reads “Happy Holidays” with four photos of children playing outside

Maria’s family holiday card created with Pixlr

Staying entertained with cloud gaming

Stadia. Gaming played a crucial role in keeping me entertained (and sane!) in 2021. This year I have been all about cloud gaming, and Celeste is the first game that sold me on it. Latency was my main hesitation with cloud gaming, so I put it to the test with a pixel-perfect platformer. I had played Celeste locally, so I knew that any delay in responsiveness would render one of my favorite indie games unplayable. To my delight, I didn’t notice any lag when playing on Stadia. – Sam Richard, Chrome OS Developer Advocate

NVIDIA GeForce NOW. And for those looking for a new game that can show off the graphical capabilities of cloud gaming, be sure to check out Marvel’s Guardians of the Galaxy on NVIDIA GeForce NOW. RTX support means it can be played with beautifully ray-traced graphics (available on Chromebooks that support 4K), turning your Chromebook into the ultimate high-fidelity gaming rig! – Greg Nemeth, Chrome OS Games Partnerships

Painting with my kids during shelter in place

Krita. Sheltering in place in a cabin outside of Sweden has given my family some extra time to embrace our creative side. Krita – which is in beta – has been an amazing tool for us, and we have been able to create a plethora of princesses, unicorns and cat-like creatures. Krita is designed primarily for digital painting and 2D animation; it is open source and completely free of charge. The name "Krita" is inspired by the Swedish words krita, meaning "crayon," and rita, which means "to draw," so it made perfect sense for us to use this wonderful tool for digital artists. – Maria Lundahl Schmidt, Chrome OS Apps Partnerships

Connecting virtually with family and friends

Rave. When my kids are asleep, I use Rave, a watch party app, with my friends to text and voice message while binging Netflix and watching YouTube videos together. We even hosted a few karaoke nights with our friends who live outside of California. It became the weekend highlight for us. – Sanj Nathwani, Chrome OS Product Manager

Zoom. Making sure my 2 and 4-year-olds and I can spend virtual time with our loved ones has been important for my family. Zoom’s new progressive web app (PWA) for Chromebooks makes it incredibly easy to join any call with a single click. It works in Chrome browser on any operating system — so I never need to worry about whether my friends or family will be able to access a group meeting. – James Wagner, Chrome OS Apps Program Manager

Unleashing my creative side

Sumo. One of my resolutions this year was to get into painting again. When I started using the web-based app Sumopaint, it was impossible to miss the other tools they have — like making music, 3D modeling, coding or editing photos and videos. My favorite part: how easy everything was to learn, and how you can share assets between apps in the suite through a common asset library. – Neel Kshetramade, Chrome OS Apps Program Manager

A colorful painting of Inara, her parents and her little brother

Painting by Neel’s daughter, created using Sumopaint

Hopefully you’ll have some down-time over the holidays. Some of the ways my team plans to spend that time are watching their favorite holiday movies — like Home Alone or The Nightmare Before Christmas on Disney+, learning to code as a family with Piper Make, and making music with Cubasis 3’s custom Chromebook app.

We hope you and your family enjoy these apps as much as we do. Give them a spin during the holidays. Be sure to check out the Perks page to find special offers on some great apps — created exclusively for Chromebooks.

Google, MLSE and the NBA extend partnership through to 2026

Here at Google, we believe progress happens through the power of the ‘we’, not the ‘I’. That’s why we create truly open and helpful products that bring people together, while partnering with organizations that share this same vision. 

With that in mind, we’re thrilled to announce our partnership with Maple Leaf Sports & Entertainment (MLSE) and the NBA to make Google Pixel the official smartphone of the Toronto Raptors, Toronto Maple Leafs, and the NBA for the next five seasons. 

This partnership will leverage the strengths of MLSE’s full ecosystem of sports brands, including the Raptors, Maple Leafs, Toronto FC and Argos, reaching millions of fans across Canada. 

The beginnings of this partnership date back to October 2017, when we collaborated with the Raptors for the launch of Google Home. As part of this deal, Google Nest will remain the official smart home technology partner of both MLSE and the NBA through to the 2025-2026 season, and Chromebook will become the official laptop of the Raptors, Leafs, TFC and Argos. 


Why is this important? The answer is simple. An open ecosystem of accessible and friendly smartphones, home computing and connected devices, powered by Google software, helps Canadians reach their goals while staying connected.

We think human progress, big and small, can only happen when technology speaks to everybody, for everybody, in the most inclusive, open, friendly and human way. This aligns with MLSE’s commitment to establish partnerships with organizations and co-create purpose-driven programs. 

This partnership will further showcase the innovation and helpfulness that Google hardware and products bring through a series of advertisements, entertaining content and fresh activations featuring exciting sports talent from our own backyard. 

This is bold and changemaking, just like our culture and our communities. Together, we’re better - and we can’t wait to share more in the coming months.

Google Korea’s volunteering spirit runs deep

I grew up in a challenging environment, but I always felt fortunate to be surrounded by people who would lend a helping hand when I needed it. Even when I was just 10 years old, their actions motivated me to extend help to the people I interact with, and that brought me joy. Thanks to the generosity of the people around me, I was able to complete my studies and build a career. Someone once told me that the best way to repay kindness was to pay it forward, and I made it a part of who I am today.

Since joining Google seven years ago, I’ve seen how Google has built a vibrant volunteering culture. Every year, we see Googlers around the world come together to participate in community service projects through GoogleServe — our annual volunteering event. I've led GoogleServe in Korea several times, encouraging Googlers to dedicate their time to volunteering. It’s incredibly motivating to hear positive comments from Googlers who have volunteered for the first time — and to see them return the following year to do more for their community.

I also became the local ambassador for Google.org, our philanthropic arm, helping Googlers understand how we can make a bigger impact by connecting our corporate grants with donations and volunteering activities. I truly believe that when we’re able to get everyone involved in doing good, we’re able to keep volunteering an integral part of our culture.

A photo of Googler Eunjin and Jacquelline Fuller, President of Google.org

As a Google.org ambassador, I had the opportunity to meet Jacquelline Fuller, President of Google.org, while she was visiting the Seoul office.

As we commemorate International Volunteer Day, I’d like to highlight other Googlers from our Korea office who share the same passion for giving back.


Narae Jeon

Site Administrative Business Partner

A photo of Googler Narae Jeon

What was your most memorable experience through GoogleServe?

I decided to take care of abandoned dogs as part of my volunteering experience. A long time ago, a dog I’d been raising died in an accident, and I felt guilty for not responding in the right way. I started deepening my knowledge of topics like animal protection and breeding, and looked for opportunities to get involved in the community. I started volunteering with an animal protection center, where I helped rescue an abandoned dog that resembled the dog I had raised before — and made snacks for other abandoned dogs. I also created a Google group named ‘Doglers’ for Googlers looking to get involved with animal shelters, and ran a donation drive to raise awareness among Googlers.

Photo of an animal shelter in Korea

Shelter for abandoned animals in Gyeonggi province, which our ‘Doglers’ visit on a regular basis

Dogs at the animal shelter

I rescued the dog on the right in this photo from the highway.

What is one takeaway you’d like to share with others from your volunteering experience?

Take the first step. You can always start by going to a volunteering site and observing how others are helping the community. You’ll be surprised how being on-site can inspire you to take action. Once you experience giving back, you’ll realize what a rewarding experience volunteering can be.


Jaey Park

Strategy and Insights Manager, Korea

A photo of Googler Jaey Park

What was your most memorable experience through GoogleServe?

This year, I had the opportunity to mentor college students preparing for employment. I was able to share my experiences and knowledge in data analytics. We often think that we don’t have much insightful knowledge to pass on to others, but I was surprised that what I shared with these students was valuable. From this experience, I decided to continue volunteering in this space.

Group mentoring session with other Googlers as part of GoogleServe 2021

Group mentoring session with other Googlers as part of GoogleServe 2021

What is one takeaway you’d like to share with others from your volunteering experience?

Once you start volunteering, you’ll realize how you’re impacting not only others but yourself too. It helps you feel more connected, and it creates a sense of belonging and purpose. I truly believe when we come together to do good, we’re able to make a bigger contribution to the community we live in.