Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 92 (92.0.4515.105) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Google at ICML 2021

Groups across Google are actively pursuing research across the field of machine learning, ranging from theory to application. With scalable tools and architectures, we build machine learning systems to solve deep scientific and engineering challenges in areas of language, music, visual processing, and more.

Google is proud to be a Platinum Sponsor of the thirty-eighth International Conference on Machine Learning (ICML 2021), a premier annual event happening this week. As a leader in machine learning research — with over 100 accepted publications and Googlers participating in workshops — we look forward to our continued partnership with the broader machine learning research community.

Registered for ICML 2021? We hope you’ll visit the Google virtual booth to learn more about the exciting work, creativity, and fun that goes into solving a portion of the field’s most interesting challenges. Take a look below to learn more about the Google research being presented at ICML 2021 (Google affiliations in bold).

Organizing Committee
ICML Board Members include: Corinna Cortes, Hugo Larochelle, Shakir Mohamed
ICML Emeritus Board includes: William Cohen, Andrew McCallum
Tutorial Co-Chair: Quoc V. Le

Publications
Attention Is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth
Yihe Dong, Jean-Baptiste Cordonnier, Andreas Loukas

Scalable Evaluation of Multi-agent Reinforcement Learning with Melting Pot
Joel Z. Leibo, Edgar Duéñez-Guzmán, Alexander Sasha Vezhnevets, John P. Agapiou, Peter Sunehag, Raphael Koster, Jayd Matyas, Charles Beattie, Igor Mordatch, Thore Graepel

On the Optimality of Batch Policy Optimization Algorithms
Chenjun Xiao, Yifan Wu, Tor Lattimore, Bo Dai, Jincheng Mei, Lihong Li*, Csaba Szepesvari, Dale Schuurmans

Low-Rank Sinkhorn Factorization
Meyer Scetbon, Marco Cuturi, Gabriel Peyré

Oops I Took A Gradient: Scalable Sampling for Discrete Distributions
Will Grathwohl, Kevin Swersky, Milad Hashemi, David Duvenaud, Chris J. Maddison

PID Accelerated Value Iteration Algorithm
Amir-Massoud Farahmand, Mohammad Ghavamzadeh

Dueling Convex Optimization
Aadirupa Saha, Tomer Koren, Yishay Mansour

What Are Bayesian Neural Network Posteriors Really Like?
Pavel Izmailov, Sharad Vikram, Matthew D. Hoffman, Andrew Gordon Wilson

Offline Reinforcement Learning with Pseudometric Learning
Robert Dadashi, Shideh Rezaeifar, Nino Vieillard, Léonard Hussenot, Olivier Pietquin, Matthieu Geist

Revisiting Rainbow: Promoting More Insightful and Inclusive Deep Reinforcement Learning Research (see blog post)
Johan S. Obando-Ceron, Pablo Samuel Castro

EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL
Seyed Kamyar Seyed Ghasemipour*, Dale Schuurmans, Shixiang Shane Gu

Variational Data Assimilation with a Learned Inverse Observation Operator
Thomas Frerix, Dmitrii Kochkov, Jamie A. Smith, Daniel Cremers, Michael P. Brenner, Stephan Hoyer

Tilting the Playing Field: Dynamical Loss Functions for Machine Learning
Miguel Ruiz-Garcia, Ge Zhang, Samuel S. Schoenholz, Andrea J. Liu

Model-Based Reinforcement Learning via Latent-Space Collocation
Oleh Rybkin, Chuning Zhu, Anusha Nagabandi, Kostas Daniilidis, Igor Mordatch, Sergey Levine

Momentum Residual Neural Networks
Michael E. Sander, Pierre Ablin, Mathieu Blondel, Gabriel Peyré

OmniNet: Omnidirectional Representations from Transformers
Yi Tay, Mostafa Dehghani, Vamsi Aribandi, Jai Gupta, Philip Pham, Zhen Qin, Dara Bahri, Da-Cheng Juan, Donald Metzler

Synthesizer: Rethinking Self-Attention for Transformer Models
Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, Che Zheng

Towards Domain-Agnostic Contrastive Learning
Vikas Verma, Minh-Thang Luong, Kenji Kawaguchi, Hieu Pham, Quoc V. Le

Randomized Entity-wise Factorization for Multi-agent Reinforcement Learning
Shariq Iqbal, Christian A. Schroeder de Witt, Bei Peng, Wendelin Böhmer, Shimon Whiteson, Fei Sha

LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning
Yuhuai Wu, Markus Rabe, Wenda Li, Jimmy Ba, Roger Grosse, Christian Szegedy

Emergent Social Learning via Multi-agent Reinforcement Learning
Kamal Ndousse, Douglas Eck, Sergey Levine, Natasha Jaques

Improved Contrastive Divergence Training of Energy-Based Models
Yilun Du, Shuang Li, Joshua Tenenbaum, Igor Mordatch

Characterizing Structural Regularities of Labeled Data in Overparameterized Models
Ziheng Jiang*, Chiyuan Zhang, Kunal Talwar, Michael Mozer

Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills
Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jake Varley, Alex Irpan, Benjamin Eysenbach, Ryan Julian, Chelsea Finn, Sergey Levine

PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning
Angelos Filos, Clare Lyle, Yarin Gal, Sergey Levine, Natasha Jaques, Gregory Farquhar

EfficientNetV2: Smaller Models and Faster Training
Mingxing Tan, Quoc V. Le

Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies
Paul Vicol, Luke Metz, Jascha Sohl-Dickstein

Federated Composite Optimization
Honglin Yuan*, Manzil Zaheer, Sashank Reddi

Light RUMs
Flavio Chierichetti, Ravi Kumar, Andrew Tomkins

Catformer: Designing Stable Transformers via Sensitivity Analysis
Jared Quincy Davis, Albert Gu, Krzysztof Choromanski, Tri Dao, Christopher Re, Chelsea Finn, Percy Liang

Representation Matters: Offline Pretraining for Sequential Decision Making
Mengjiao Yang, Ofir Nachum

Variational Empowerment as Representation Learning for Goal-Conditioned Reinforcement Learning
Jongwook Choi*, Archit Sharma*, Honglak Lee, Sergey Levine, Shixiang Shane Gu

Beyond Variance Reduction: Understanding the True Impact of Baselines on Policy Optimization
Wesley Chung, Valentin Thomas, Marlos C. Machado, Nicolas Le Roux

Whitening and Second Order Optimization Both Make Information in the Dataset Unusable During Training, and Can Reduce or Prevent Generalization
Neha S. Wadia, Daniel Duckworth, Samuel S. Schoenholz, Ethan Dyer, Jascha Sohl-Dickstein

Understanding Invariance via Feedforward Inversion of Discriminatively Trained Classifiers
Piotr Teterwak*, Chiyuan Zhang, Dilip Krishnan, Michael C. Mozer

Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning
Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, Shixiang Shane Gu

Hyperparameter Selection for Imitation Learning
Leonard Hussenot, Marcin Andrychowicz, Damien Vincent, Robert Dadashi, Anton Raichuk, Lukasz Stafiniak, Sertan Girgin, Raphael Marinier, Nikola Momchev, Sabela Ramos, Manu Orsini, Olivier Bachem, Matthieu Geist, Olivier Pietquin

Disentangling Sampling and Labeling Bias for Learning in Large-Output Spaces
Ankit Singh Rawat, Aditya Krishna Menon, Wittawat Jitkrittum, Sadeep Jayasumana, Felix X. Yu, Sashank J. Reddi, Sanjiv Kumar

Revenue-Incentive Tradeoffs in Dynamic Reserve Pricing
Yuan Deng, Sebastien Lahaie, Vahab Mirrokni, Song Zuo

Debiasing a First-Order Heuristic for Approximate Bi-Level Optimization
Valerii Likhosherstov, Xingyou Song, Krzysztof Choromanski, Jared Davis, Adrian Weller

Characterizing the Gap Between Actor-Critic and Policy Gradient
Junfeng Wen, Saurabh Kumar, Ramki Gummadi, Dale Schuurmans

Composing Normalizing Flows for Inverse Problems
Jay Whang, Erik Lindgren, Alexandros Dimakis

Online Policy Gradient for Model Free Learning of Linear Quadratic Regulators with √T Regret
Asaf Cassel, Tomer Koren

Learning to Price Against a Moving Target
Renato Paes Leme, Balasubramanian Sivan, Yifeng Teng, Pratik Worah

Fairness and Bias in Online Selection
Jose Correa, Andres Cristi, Paul Duetting, Ashkan Norouzi-Fard

The Impact of Record Linkage on Learning from Feature Partitioned Data
Richard Nock, Stephen Hardy, Wilko Henecka, Hamish Ivey-Law, Jakub Nabaglo, Giorgio Patrini, Guillaume Smith, Brian Thorne

Reserve Price Optimization for First Price Auctions in Display Advertising
Zhe Feng*, Sébastien Lahaie, Jon Schneider, Jinchao Ye

A Regret Minimization Approach to Iterative Learning Control
Naman Agarwal, Elad Hazan, Anirudha Majumdar, Karan Singh

A Statistical Perspective on Distillation
Aditya Krishna Menon, Ankit Singh Rawat, Sashank J. Reddi, Seungyeon Kim, Sanjiv Kumar

Best Model Identification: A Rested Bandit Formulation
Leonardo Cella, Massimiliano Pontil, Claudio Gentile

Generalised Lipschitz Regularisation Equals Distributional Robustness
Zac Cranko, Zhan Shi, Xinhua Zhang, Richard Nock, Simon Kornblith

Stochastic Multi-armed Bandits with Unrestricted Delay Distributions
Tal Lancewicki, Shahar Segal, Tomer Koren, Yishay Mansour

Regularized Online Allocation Problems: Fairness and Beyond
Santiago Balseiro, Haihao Lu, Vahab Mirrokni

Implicit Rate-Constrained Optimization of Non-decomposable Objectives
Abhishek Kumar, Harikrishna Narasimhan, Andrew Cotter

Leveraging Non-uniformity in First-Order Non-Convex Optimization
Jincheng Mei, Yue Gao, Bo Dai, Csaba Szepesvari, Dale Schuurmans

Dynamic Balancing for Model Selection in Bandits and RL
Ashok Cutkosky, Christoph Dann, Abhimanyu Das, Claudio Gentile, Aldo Pacchiano, Manish Purohit

Adversarial Dueling Bandits
Aadirupa Saha, Tomer Koren, Yishay Mansour

Optimizing Black-Box Metrics with Iterative Example Weighting
Gaurush Hiranandani*, Jatin Mathur, Harikrishna Narasimhan, Mahdi Milani Fard, Oluwasanmi Koyejo

Relative Deviation Margin Bounds
Corinna Cortes, Mehryar Mohri, Ananda Theertha Suresh

MC-LSTM: Mass-Conserving LSTM
Pieter-Jan Hoedt, Frederik Kratzert, Daniel Klotz, Christina Halmich, Markus Holzleitner, Grey Nearing, Sepp Hochreiter, Günter Klambauer

12-Lead ECG Reconstruction via Koopman Operators
Tomer Golany, Kira Radinsky, Daniel Freedman, Saar Minha

Finding Relevant Information via a Discrete Fourier Expansion
Mohsen Heidari, Jithin Sreedharan, Gil Shamir, Wojciech Szpankowski

LEGO: Latent Execution-Guided Reasoning for Multi-hop Question Answering on Knowledge Graphs
Hongyu Ren, Hanjun Dai, Bo Dai, Xinyun Chen, Michihiro Yasunaga, Haitian Sun, Dale Schuurmans, Jure Leskovec, Denny Zhou

SpreadsheetCoder: Formula Prediction from Semi-structured Context
Xinyun Chen, Petros Maniatis, Rishabh Singh, Charles Sutton, Hanjun Dai, Max Lin, Denny Zhou

Combinatorial Blocking Bandits with Stochastic Delays
Alexia Atsidakou, Orestis Papadigenopoulos, Soumya Basu, Constantine Caramanis, Sanjay Shakkottai

Beyond log²(T) Regret for Decentralized Bandits in Matching Markets
Soumya Basu, Karthik Abinav Sankararaman, Abishek Sankararaman

Robust Pure Exploration in Linear Bandits with Limited Budget
Ayya Alieva, Ashok Cutkosky, Abhimanyu Das

Latent Programmer: Discrete Latent Codes for Program Synthesis
Joey Hong, David Dohan, Rishabh Singh, Charles Sutton, Manzil Zaheer

Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision (see blog post)
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig

On Linear Identifiability of Learned Representations
Geoffrey Roeder, Luke Metz, Diederik P. Kingma

Hierarchical Clustering of Data Streams: Scalable Algorithms and Approximation Guarantees
Anand Rajagopalan, Fabio Vitale, Danny Vainstein, Gui Citovsky, Cecilia M Procopiuc, Claudio Gentile

Differentially Private Quantiles
Jennifer Gillenwater, Matthew Joseph, Alex Kulesza

Active Covering
Heinrich Jiang, Afshin Rostamizadeh

Sharf: Shape-Conditioned Radiance Fields from a Single View
Konstantinos Rematas, Ricardo Martin-Brualla, Vittorio Ferrari

Learning a Universal Template for Few-Shot Dataset Generalization
Eleni Triantafillou*, Hugo Larochelle, Richard Zemel, Vincent Dumoulin

Private Alternating Least Squares: Practical Private Matrix Completion with Tighter Rates
Steve Chien, Prateek Jain, Walid Krichene, Steffen Rendle, Shuang Song, Abhradeep Thakurta, Li Zhang

Differentially-Private Clustering of Easy Instances
Edith Cohen, Haim Kaplan, Yishay Mansour, Uri Stemmer, Eliad Tsfadia

Label-Only Membership Inference Attacks
Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot

Neural Feature Matching in Implicit 3D Representations
Yunlu Chen, Basura Fernando, Hakan Bilen, Thomas Mensink, Efstratios Gavves

Locally Private k-Means in One Round
Alisa Chang, Badih Ghazi, Ravi Kumar, Pasin Manurangsi

Large-Scale Meta-learning with Continual Trajectory Shifting
Jaewoong Shin, Hae Beom Lee, Boqing Gong, Sung Ju Hwang

Statistical Estimation from Dependent Data
Vardis Kandiros, Yuval Dagan, Nishanth Dikkala, Surbhi Goel, Constantinos Daskalakis

Oneshot Differentially Private Top-k Selection
Gang Qiao, Weijie J. Su, Li Zhang

Unsupervised Part Representation by Flow Capsules
Sara Sabour, Andrea Tagliasacchi, Soroosh Yazdani, Geoffrey E. Hinton, David J. Fleet

Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry
Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar

Practical and Private (Deep) Learning Without Sampling or Shuffling
Peter Kairouz, Brendan McMahan, Shuang Song, Om Thakkar, Abhradeep Thakurta, Zheng Xu

Differentially Private Aggregation in the Shuffle Model: Almost Central Accuracy in Almost a Single Message
Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Rasmus Pagh, Amer Sinha

Leveraging Public Data for Practical Private Query Release
Terrance Liu, Giuseppe Vietri, Thomas Steinke, Jonathan Ullman, Zhiwei Steven Wu

Meta-Thompson Sampling
Branislav Kveton, Mikhail Konobeev, Manzil Zaheer, Chih-wei Hsu, Martin Mladenov, Craig Boutilier, Csaba Szepesvári

Implicit-PDF: Non-parametric Representation of Probability Distributions on the Rotation Manifold
Kieran A Murphy, Carlos Esteves, Varun Jampani, Srikumar Ramalingam, Ameesh Makadia

Improving Ultrametrics Embeddings Through Coresets
Vincent Cohen-Addad, Rémi de Joannis de Verclos, Guillaume Lagarde

A Discriminative Technique for Multiple-Source Adaptation
Corinna Cortes, Mehryar Mohri, Ananda Theertha Suresh, Ningshan Zhang

Self-Supervised and Supervised Joint Training for Resource-Rich Machine Translation
Yong Cheng, Wei Wang*, Lu Jiang, Wolfgang Macherey

Correlation Clustering in Constant Many Parallel Rounds
Vincent Cohen-Addad, Silvio Lattanzi, Slobodan Mitrović, Ashkan Norouzi-Fard, Nikos Parotsidis, Jakub Tarnawski

Hierarchical Agglomerative Graph Clustering in Nearly-Linear Time
Laxman Dhulipala, David Eisenstat, Jakub Łącki, Vahab Mirrokni, Jessica Shi

Meta-learning Bidirectional Update Rules
Mark Sandler, Max Vladymyrov, Andrey Zhmoginov, Nolan Miller, Andrew Jackson, Tom Madams, Blaise Aguera y Arcas

Discretization Drift in Two-Player Games
Mihaela Rosca, Yan Wu, Benoit Dherin, David G.T. Barrett

Reasoning Over Virtual Knowledge Bases With Open Predicate Relations
Haitian Sun*, Pat Verga, Bhuwan Dhingra, Ruslan Salakhutdinov, William W. Cohen

Learn2Hop: Learned Optimization on Rough Landscapes
Amil Merchant, Luke Metz, Samuel Schoenholz, Ekin Cubuk

Locally Adaptive Label Smoothing Improves Predictive Churn
Dara Bahri, Heinrich Jiang

Overcoming Catastrophic Forgetting by Bayesian Generative Regularization
Patrick H. Chen, Wei Wei, Cho-jui Hsieh, Bo Dai

Workshops (only Google affiliations are noted)
LatinX in AI (LXAI) Research at ICML 2021
Hosts: Been Kim, Natasha Jaques

Uncertainty and Robustness in Deep Learning
Organizers: Balaji Lakshminarayanan, Jasper Snoek
Invited Speaker: Dustin Tran

Reinforcement Learning for Real Life
Organizers: Minmin Chen, Lihong Li
Invited Speaker: Ed Chi

Interpretable Machine Learning in Healthcare
Organizer: Alan Karthikesalingam
Invited Speakers: Abhijit Guha Roy, Jim Winkens

The Neglected Assumptions in Causal Inference
Organizer: Alexander D'Amour

ICML Workshop on Algorithmic Recourse
Invited Speakers: Been Kim, Berk Ustun

A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning
Invited Speaker: Nicholas Carlini

Overparameterization: Pitfalls and Opportunities
Organizers: Yasaman Bahri, Hanie Sedghi

Information-Theoretic Methods for Rigorous, Responsible, and Reliable Machine Learning (ITR3)
Invited Speaker: Thomas Steinke

Beyond First-Order Methods in Machine Learning Systems
Invited Speaker: Courtney Paquette

ICML 2021 Workshop: Self-Supervised Learning for Reasoning and Perception
Invited Speaker: Chelsea Finn

Workshop on Reinforcement Learning Theory
Invited Speaker: Bo Dai

Tutorials (only Google affiliations are noted)
Responsible AI in Industry: Practical Challenges and Lessons Learned
Organizer: Ben Packer

Online and Non-stochastic Control
Organizer: Elad Hazan

Random Matrix Theory and ML (RMT+ML)
Organizers: Fabian Pedregosa, Jeffrey Pennington, Courtney Paquette

Self-Attention for Computer Vision
Organizers: Prajit Ramachandran, Ashish Vaswani

* Indicates work done while at Google

Source: Google AI Blog


Beta Channel Update for Desktop

The Beta channel has been updated to 92.0.4515.107 for Windows, Linux and Mac.


A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Srinivas Sista

Hangouts to Google Chat upgrade beginning August 16th, with option to opt-out

What’s changing 

Beginning August 16, 2021, we will start upgrading users who have the “Chat and classic Hangouts” setting selected to “Chat preferred,” unless you explicitly opt out. Users who already have “Chat only”, “Chat preferred”, or “Classic Only” selected, or users with both services turned off will not be affected. 


Additionally, the “Chat and classic Hangouts” setting will be removed for all users in your domain unless you opt out of the upgrade. 


If there are affected users in your domain, you will receive an email notification that contains more information and any necessary action that needs to be taken.


Who’s impacted

Admins and end users


Why it’s important

Unless you opt out, Google Chat will replace classic Hangouts as the default chat application for your affected users.


Beginning late 2021, classic Hangouts will no longer be supported and all remaining users will be migrated to Google Chat. Learn more about the Google Chat upgrade timeline.



Getting started


If you don’t take any action, all users in your organization who have the “Chat and classic Hangouts” setting selected will be automatically upgraded to “Chat preferred,” and the “Chat and classic Hangouts” setting will no longer be accessible. We anticipate this migration will take around two weeks.


No action is required if there are no users in your domain with the “Chat and classic Hangouts” service setting selected.


Additional details

The “Chat and classic Hangouts” setting will be removed from the Admin Console regardless of your selected setting, unless you opt out.  


Conversation History: With the exception of a few special cases, messages sent in classic Hangouts 1:1 and group conversations will be available in Chat. Learn more about Chat and classic Hangouts interoperability.


Direct calling: Chat doesn’t yet support direct calling in the same way as classic Hangouts. We anticipate direct calling to become available for Google Chat later this year — we will provide an update here when that feature becomes available. 


Updates to Google Workspace Public Status Dashboard and service status alerts

What’s changing 

We're introducing a new Public Status Dashboard experience for Google Workspace. As part of this update, we’re enhancing the functionality of the existing Apps outage alert system-defined rule, which provides email notifications regarding service disruptions or outages via the Public Status Dashboard. Specifically, you can now configure the rule to also deliver Apps outage alerts to the Alert Center, and you can retrieve the alerts using the Alert Center API.


Who’s impacted 

Admins 


Why it’s important 

New Public Status Dashboard Experience 
Following the Google Maps Platform, the Google Workspace Status Dashboard will soon have a refreshed user interface, which will allow you to find and view important service status information faster. The location of the Public Status Dashboard will not change with this update, and it will continue to support RSS feed subscribers. 




Enhanced Apps outage alerts 
By bringing Apps outage alerts to the Alert Center, we are aligning with other Google Workspace alert types. The Apps outage alerts will share a familiar format to other alerts your organization may receive in the Alert Center. 




Additionally, we’ve updated the email notification format to contain structured information such as key issue details, the status of the affected services, and a link to the Google Workspace Status Dashboard.




Finally, the Apps outage alerts are now available via the Alert Center API and can be identified with the "AppsOutage" alert type. This will allow integration with your existing alerting or ticketing systems within your organization. 
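As a rough illustration of that integration, the sketch below builds a request against the Alert Center API's `alerts` endpoint, filtered to the "AppsOutage" type. The endpoint and filter syntax follow the v1beta1 REST API; the access token, the helper name, and the use of plain `urllib` rather than a Google client library are assumptions for the sketch, not a prescribed implementation.

```python
# Sketch: building a request to list "AppsOutage" alerts from the
# Alert Center API (v1beta1). The bearer token is assumed to come
# from your own OAuth flow with the appropriate admin scope.
import urllib.parse

ALERTS_ENDPOINT = "https://alertcenter.googleapis.com/v1beta1/alerts"

def apps_outage_request(access_token):
    """Return the URL and headers for listing AppsOutage alerts."""
    params = {"filter": 'type="AppsOutage"'}
    headers = {"Authorization": f"Bearer {access_token}"}
    url = ALERTS_ENDPOINT + "?" + urllib.parse.urlencode(params)
    return url, headers

url, headers = apps_outage_request("example-access-token")
print(url)
```

In a real integration you would issue this GET request (or use an official Google API client) and feed the returned alert records into your ticketing system.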


Email Notification Sender Changes 
To align with other email notifications from the Alert Center, the sender email address used for Apps outage alert email notifications is changing from [email protected] to [email protected].


The subject of these emails will not change (it will still be "Google Workspace status alert"). Any email routing or filtering based on the old sender address should be updated accordingly. 


Getting started 


Rollout pace 

  • Rapid Release and Scheduled Release domains: Full rollout (1-3 days for feature visibility) starting on July 19, 2021. 
  • We anticipate the updated Google Workspace Status Dashboard to become available by July 21, 2021. 

Availability 

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers. 

Resources 

Explore the undeciphered writing of the Incas

Isaac Newton once said, "If I have seen further it is by standing on the shoulders of giants." With this age-old phrase, he acknowledged that all “new” discoveries depend on all that preceded them.

At Google, we firmly believe that history has much to teach us. For me personally, as a Latin American, I have no doubt that the native peoples who inhabited our beautiful, diverse and inspiring region left us countless treasures — many of which still patiently wait to be discovered.


The MALI Collection on Google Arts & Culture

That is why I am so pleased and proud to present the new online exhibition The Khipu Keepers on Google Arts & Culture.

“Khipus,” which means “knots” in the Quechua language, are the colorful, intricate cords made by the Incas, who inhabited some parts of South America before the Spanish colonization of the Americas. These knotted strings are still an enigma waiting to be unraveled. What secrets are hidden in these colorful knots dating back centuries? What messages from the Incas echo in these intricate cords? Could the ancestral knowledge they hold inform us about our future?

Currently, there are about 1,400 surviving khipus in private collections and museums around the world. While approximately 85% of these contain knots representing numbers, the remaining 15% are believed to be an ancient form of writing without written words on paper or stone. Researchers are still working to decipher the meanings of these coded messages.

With the exhibition launching online today, the Lima Art Museum (MALI) and Google Arts & Culture are opening a window into one of the greatest mysteries the Inca people left behind.

By putting the centuries-old khipus on display online for the first time, this exhibition will let people from across the world engage with the fantastic legacy of the Inca civilization. Yet even more importantly, by creating a digital record of these enigmatic treasures that still have stories to tell, we are also preserving them forever. In this sense, The Khipu Keepers is also a first step of a promising journey for researchers to find new opportunities thanks to the power of technologies such as digitization. 

Trace the history of khipus back to Latin America’s first empire in the words of anthropologist Dr. Sabine Hyland, and listen to St Andrews researcher Manny Medrano as he answers the most pressing questions about what we know of khipus. Watch an intro to the basic components of a khipu and what experts have discovered so far, or explore the Attendance Board that provides a rare connection between words and cords. Zoom into a large double khipu and learn about what it takes to conserve the khipus from the Temple Radicati collection.


Seven interesting facts about the enigmatic khipus

  1. The Quechua word “khipu” means knot.
  2. The pre-Columbian khipus were made of camelid hair or cotton fiber.
  3. The Incas used three types of knots: single, long and figure-eight.
  4. The colors of the khipu cords have different meanings.
  5. The distance between the knots also has a meaning and conveys a message.
  6. A cord without knots represents the number zero.
  7. Of all the known khipus, 85% convey numerical values and the remaining 15% are believed to tell stories.
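Facts 5 and 6 hint at how the numerical khipus work: researchers read each cluster of knots along a cord as one decimal digit, with positions closer to the main cord representing higher powers of ten and a knotless position standing for zero. A toy sketch of that positional reading, assuming a simple list-of-knot-counts encoding (the function name and encoding are illustrative, not a real research tool):

```python
# Toy decoder for a numeric khipu cord. Each entry is the knot count
# of one cluster, ordered from the position nearest the main cord
# (most significant digit) outward. A cluster with no knots is zero.
def decode_cord(knot_clusters):
    """Read a list of knot counts as a base-ten number."""
    value = 0
    for count in knot_clusters:
        value = value * 10 + count
    return value

# Clusters of 4, 0 and 5 knots read as the number 405.
print(decode_cord([4, 0, 5]))  # 405
```

Real khipus are richer than this, of course: knot type, cord color and spacing all carry meaning, which is exactly what the remaining 15% of khipus may encode.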


From Latin America to the world

As seen in other Google Arts & Culture projects like Woolaroo and Fabricius, technology can be a powerful tool in the hands of researchers to preserve, research and understand the legacy of the ancient cultures and communities who came before us.

For “The Khipu Keepers,” researchers are once again the ones entrusted with “untangling” this chapter of our past and providing us with answers. They now know that they are not alone in this endeavor and that Google technologies can help them delve deeper into elements of history.

Give it up for the woman who helps Googlers give back

Over the past month, Googlers around the world have virtually volunteered in their communities — from mentoring students to reviewing resumes for job seekers. It’s all a part of GoogleServe, our month-long campaign that encourages Googlers to lend their time and expertise to others. GoogleServe is just one of many opportunities employees have to give back, and one of the projects that Megan Colla Wheeler is responsible for running. 

As the lead for Google.org’s global employee giving and volunteering campaigns, Megan’s role is to create and run programs like GoogleServe and connect the nearly 150,000 Googlers around the world to them. Ultimately, her job is to help Googlers dedicate their time, money or expertise to their communities. How’s that for paying it forward?

With more than ten years of experience at Google, we wanted to hear more about how she ended up in this job, her advice to others and all the ways volunteering at Google has changed — particularly this past year. 


How do you explain your job to friends?

My goal is to create meaningful ways for Googlers to contribute to their communities — by offering their time, expertise or money — and help connect them to those opportunities. 


When did you realize you were interested in philanthropy and volunteering?

I was a Kinesiology major in college. Toward the end of my sophomore year, I took a course on social justice and it struck a chord in me. Though I loved sports, I realized I wanted my career to be about something bigger, something meaningful. I wanted to lend my skills for good. So even though I graduated with a kinesiology major, I focused my job search on the nonprofit sector and got a job working for a nonprofit legal organization.


How did you go from there to leading volunteer programs for Google.org?

I never knew that the job I have now was even possible. I left my nonprofit job to become a recruiting coordinator at Google. My plan was to do it for a year, diversify my skills, then go back to the nonprofit world. 

I remember going to my first GoogleServe event. We helped paint and organize a senior citizen community center — all during the workday! It blew me away that Google placed such an importance on volunteering. Coming from the nonprofit world, it felt meaningful seeing a company that cares deeply about these things and encourages employees to get involved. So I stayed at Google and kept finding ways to work on these programs. 


Fast forward 10 years and you’re one of the masterminds behind these events. How has employee volunteering and giving at Google changed over the years?

So many of the things that Google has created, like Gmail, came out of grassroots ideas that then grew as the company did. The same is true of our work to help Googlers get involved in their communities. 


Take GoogleServe for example. In 2008, a Googler came up with the idea to create a company day of service. Over a decade later that campaign has gone from a day-long event to a month of service that encourages over 25,000 employees to volunteer in over 90 offices around the world. And it all started with one Googler saying, "This would be a cool idea." Along the way, more Googlers have come up with ideas to get involved in the communities where we live and work through giving and volunteering. Although the programs have grown and evolved over the years, we’ve maintained the sentiment that inspired those campaigns in the first place.


We’ve also been focused on connecting Googlers to opportunities that use their distinct skills, like coding or data analysis. For example, a team of Googlers, including software engineers, program managers and UX designers, is currently working with the City of Detroit to build a mobile-friendly search tool that helps people find affordable housing. 


How has it changed in the past year?

At the core, these programs are about giving back, but they’re also culturally iconic moments at Google. They’re a chance for teams to connect and do something together that’s more than just your average team-building activity. You’re building a shared experience and meeting people from completely different roles and departments. They’re also a chance for teams to learn and grow from people outside of Google and to bring that perspective back to their job. 


Over the past year, people have felt generally disconnected. So even though our volunteering has become virtual, it’s still a chance to interact and contribute. Virtual or not, it really does create a positive work culture. 


What advice would you give to people who have a day job in one area and a passion in another?

Be willing to work hard and get your core job done and carve out time to keep doing what you’re passionate about. When you are working on projects that you love, it keeps you engaged in a really special way. And you never know when those passion projects will intersect with your core work, or when they’ll turn into something bigger. 


Allowing developers to apply for more time to comply with Play Payments Policy

Posted by Purnima Kochikar, VP Play Partnerships

Every day we work with developers to help make Google Play a safe, secure and seamless experience for everyone, and to ensure that developers can build sustainable businesses. Last September, we clarified our Payments Policy to be more explicit about when developers should use Google Play’s billing system. While most developers already complied with this policy, we understood that some existing apps currently using an alternative billing system may need to make changes to their apps, and we gave one year for them to make these updates.

Many of our partners have been making steady progress toward the September 30 deadline. However, we continue to hear from developers all over the world that the past year has been particularly difficult, especially for those with engineering teams in regions that continue to be hard hit by the effects of the global pandemic, making it tougher than usual for them to make the technical updates related to this policy.

After carefully considering feedback from both large and small developers, we are giving developers an option to request a 6-month extension, which will give them until March 31, 2022 to comply with our Payments policy. Starting on July 22nd, developers can appeal for an extension through the Help Center, and we will review each request and respond as soon as possible.

Check out the Help Center and the Policy Center for details, timelines, and frequently asked questions. You can also check out Play Academy or watch the PolicyBytes video for additional information.