How one Googler is raising awareness around ALS

When you’re young, life is filled with a chorus of well-intentioned advice: “Work hard.” “Be brave.” “Follow your dreams.” 

And of course: “Be the hero of your own life.” 

When it comes to “heroes,” I’ve found that even if you’re really lucky, the person staring back at you in the mirror won’t make your top five. Heroes are those unexpected people who step into your life long enough to teach you something about grace, courage and persistence. 

For me, one of those people is a woman named Stacy Title.

A personal connection to ALS

I met Stacy after competing with her husband, Jonathan Penner, on season 13 of Survivor: Cook Islands. My friendship with Jonathan is one of the things I treasure most from that experience. In time, I befriended his brilliant and lovely wife, Stacy, and their two children, Cooper and Ava.  

Yul with Stacy in 2019

Two years ago, Stacy was diagnosed with familial ALS, a devastating neurodegenerative disease that slowly robs a person of all muscle control. Her thoughts immediately turned to her children. Beyond the diagnosis, the real horror was knowing that each of her children has a 50 percent chance of inheriting the genetic abnormality that causes this disease. She was determined to fight and somehow spare her children this diagnosis—but she didn’t know how. No one did. 

As Stacy’s disease progressed, ALS took away her ability to move her arms, hug her children, and even to speak. She lost the ability to communicate except by using her eyes to slowly spell out words using eye-tracking technology, which would then be read out loud by an electronic voice.

ALS left my friend's mind intact but otherwise cut her off from the world, and it left a family cut off from a wife and mother—so I asked her if we could try something. 

Stacy, Jonathan and their children Cooper and Ava in San Diego in 2006. Jonathan had recently returned from competing with Yul on Survivor: Cook Islands.

A newfound sense of autonomy and connectedness

Last spring, I went to Stacy’s home and set her up with a Nest Hub Max smart display and several Google Home Mini speakers. I also got her a subscription for Google Play Music and a gift card for Google Play Books. I didn’t know if any of these would actually be helpful to her. But as it turned out, they changed her life.  

Google Assistant on the Nest Hub Max understood Stacy’s electronic voice perfectly. Suddenly she could play her favorite songs, listen to the news or audiobooks, watch YouTube videos, and ask questions whenever she wanted. Using Google Assistant’s broadcast feature, she could call people in other rooms for help through the Mini speakers. Jonathan could also check on her easily from his phone using the Hub Max’s built-in Nest Cam. 

Jonathan and Stacy with their Nest Hub Max in 2020

Jonathan, Cooper and Ava then installed Google Photos on their phones so that any photos they took were automatically uploaded to a live album and streamed to Stacy’s Hub Max. For the first time in over a year, she could keep up with what her kids were doing, and be present in their lives outside the confines of her bed or chair.

While far from a cure, these products brought back a sense of autonomy, connectedness, and enjoyment she had lost, not because of the tools themselves, but because of the moments these tools allowed her to experience. This was when it really hit home for me how much technology can help people.

Today, Stacy knows it’s only a matter of time. She endures the discomforting intervention of a ventilator and other systems, and lives for one urgent purpose: raising awareness of ALS so that her children will have a chance of escaping her fate.  

Raising awareness for ALS with Survivor

Last year, I received an unexpected invitation to compete in Survivor: Winners at War. This 40th season, now running on CBS, brings together past winners for the ultimate showdown. Though I was initially unsure about returning, I saw it as an opportunity to bring attention to Stacy’s story, and raise funds in a way that I wouldn't be able to do otherwise. So I returned to the South Pacific, and pledged that whatever money I earn from the show will go toward supporting ALS research and other ALS charities.

I’m grateful to Google for building technology that helps people everywhere. I’m grateful to CBS for sharing Stacy’s story and creating a fundraising page to support her family and thousands of other families in need. And most of all, I’m grateful to Stacy for showing me how someone can face the impossible each day with more bravery, persistence, and love than I could ever imagine. Because she reminded me that what’s important isn’t finding hope for her. It’s finding hope for Ava and Cooper and countless others who can still be spared this terrible disease. 

In the end, I think that’s the best way we can honor the people who inspire us: by helping build the future they imagined. I hope we can build Stacy’s future together.


Beta Channel Update for Desktop

The beta channel has been updated to 81.0.4044.62 for Windows, Mac, and Linux.

A full list of changes in this build is available in the log. Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Prudhvikumar Bommana
Google Chrome

Coronavirus: An update on creator support and resources

Dear Creators and Artists,



Over the past several weeks, as we’ve all seen the growing crisis around the coronavirus, we at YouTube have been thinking about how these developments might affect you. We often say that you’re the heart of YouTube, and during this difficult time we wanted to share how we’re working to help and support you.



Keeping our community informed




There’s a lot of uncertainty right now, and we understand the importance of helping people find authoritative sources of news and information. We're using our homepage to direct users to the World Health Organization (WHO), the Centers for Disease Control and Prevention (CDC), and other local authoritative organizations around the world to ensure users can easily find updates. We're also donating advertising inventory to governments and NGOs in impacted regions, who are using it to spotlight timely, helpful information.



It remains our top priority to provide information to users in a responsible way. From the very beginning of this outbreak, we’ve worked to prevent misinformation associated with the spread of the virus. We’re also raising up authoritative sources in search and recommendations and showing information panels on relevant videos. YouTube will continue to quickly remove videos that violate our policies when they are flagged, including those that discourage people from seeking medical treatment or claim harmful substances have health benefits. Finding trustworthy content is especially critical as news is breaking, and we’ll continue to make sure YouTube delivers accurate information for our users.



Partnering to help others




The situation is evolving every day, and we're committed to providing you updates along the way, including any changes that may impact our processes and support systems. Check in here to find the latest resources from YouTube.



People around the world come to YouTube for information, but they’re also looking for something more: to find relief and connect as a community. Creators and artists bring us together, offering entertainment and solace through conversations that help us feel less alone. We’re working to help make those connections possible by meeting the increased demand for live streaming as university events, conferences, and religious services move their gatherings online.



Supporting creators: updates to monetization for coronavirus-related content




YouTube’s policies are designed to support your work on the platform, to protect users, and to give advertisers confidence about where their ads run. We know many of you have had questions about our sensitive events policy, which currently does not allow monetization if a video includes more than a passing mention of the coronavirus. Our sensitive events policy was designed to apply to short-term events of significant magnitude, like a natural disaster. It’s becoming clear this issue is now an ongoing and important part of everyday conversation, and we want to make sure news organizations and creators can continue producing quality videos in a sustainable way. In the days ahead, we will enable ads for content discussing the coronavirus on a limited number of channels, including creators who accurately self-certify and a range of news partners. We’re preparing our policies and enforcement processes to expand monetization to more creators and news organizations in the coming weeks.



The power of community




YouTube creators have shown time and again the difference it makes when we come together. We appreciate everything you do to create positive communities that allow people to turn to each other in times of need. Let’s continue to support each other as we navigate these challenging times.



Susan Wojcicki


Real-Time 3D Object Detection on Mobile Devices with MediaPipe



Object detection is an extensively studied computer vision problem, but most of the research has focused on 2D object prediction. While 2D prediction only provides 2D bounding boxes, by extending prediction to 3D, one can capture an object’s size, position and orientation in the world, leading to a variety of applications in robotics, self-driving vehicles, image retrieval, and augmented reality. Although 2D object detection is relatively mature and has been widely used in the industry, 3D object detection from 2D imagery is a challenging problem, due to the lack of data and diversity of appearances and shapes of objects within a category.

Today, we are announcing the release of MediaPipe Objectron, a mobile real-time 3D object detection pipeline for everyday objects. This pipeline detects objects in 2D images, and estimates their poses and sizes through a machine learning (ML) model, trained on a newly created 3D dataset. Implemented in MediaPipe, an open-source cross-platform framework for building pipelines to process perceptual data of different modalities, Objectron computes oriented 3D bounding boxes of objects in real-time on mobile devices.
 
3D Object Detection from a single image. MediaPipe Objectron determines the position, orientation and size of everyday objects in real-time on mobile devices.
Obtaining Real-World 3D Training Data
While there are ample amounts of 3D data for street scenes, due to the popularity of research into self-driving cars that rely on 3D capture sensors like LIDAR, datasets with ground truth 3D annotations for more granular everyday objects are extremely limited. To overcome this problem, we developed a novel data pipeline using mobile augmented reality (AR) session data. With the arrival of ARCore and ARKit, hundreds of millions of smartphones now have AR capabilities and the ability to capture additional information during an AR session, including the camera pose, sparse 3D point clouds, estimated lighting, and planar surfaces.

To label ground truth data, we built a novel annotation tool for use with AR session data, which allows annotators to quickly label 3D bounding boxes for objects. The tool uses a split-screen view: on the left, 2D video frames with the 3D bounding boxes overlaid; on the right, a view showing the 3D point clouds, camera positions and detected planes. Annotators draw 3D bounding boxes in the 3D view and verify their placement by reviewing the projections in the 2D video frames. For static objects, we only need to annotate an object in a single frame and propagate its location to all frames using the ground truth camera pose information from the AR session data, which makes the procedure highly efficient.
Real-world data annotation for 3D object detection. Right: 3D bounding boxes are annotated in the 3D world with detected surfaces and point clouds. Left: Projections of annotated 3D bounding boxes are overlaid on top of video frames making it easy to validate the annotation.
AR Synthetic Data Generation
A popular approach is to complement real-world data with synthetic data in order to increase the accuracy of prediction. However, attempts to do so often yield poor, unrealistic data or, in the case of photorealistic rendering, require significant effort and compute. Our novel approach, called AR Synthetic Data Generation, places virtual objects into scenes that have AR session data, which allows us to leverage camera poses, detected planar surfaces, and estimated lighting to generate placements that are physically plausible, with lighting that matches the scene. This approach results in high-quality synthetic data with rendered objects that respect the scene geometry and fit seamlessly into real backgrounds. By combining real-world data and AR synthetic data, we are able to increase the accuracy by about 10%.
An example of AR synthetic data generation. The virtual white-brown cereal box is rendered into the real scene, next to the real blue book.
An ML Pipeline for 3D Object Detection
We built a single-stage model to predict the pose and physical size of an object from a single RGB image. The model backbone has an encoder-decoder architecture, built upon MobileNetV2. We employ a multi-task learning approach, jointly predicting an object's shape with detection and regression. The shape task predicts the object's shape signals depending on what ground truth annotation is available, e.g., segmentation; this task is optional if no shape annotations are available in the training data. For the detection task, we use the annotated bounding boxes and fit a Gaussian to the box, with center at the box centroid, and standard deviations proportional to the box size. The goal for detection is then to predict this distribution with its peak representing the object’s center location. The regression task estimates the 2D projections of the eight bounding box vertices. To obtain the final 3D coordinates for the bounding box, we leverage a well-established pose estimation algorithm (EPnP). It can recover the 3D bounding box of an object, without a priori knowledge of the object dimensions. Given the 3D bounding box, we can easily compute the pose and size of the object. The diagram below shows our network architecture and post-processing. The model is light enough to run in real time on mobile devices (at 26 FPS on an Adreno 650 mobile GPU).
Network architecture and post-processing for 3D object detection.
Sample results of our network — [left] original 2D image with estimated bounding boxes, [middle] object detection by Gaussian distribution, [right] predicted segmentation mask.
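
To make the detection target above concrete, here is a minimal sketch that builds a 2D Gaussian heatmap whose peak sits at the annotated box centroid and whose standard deviations are proportional to the box size. The helper name and the proportionality constant sigma_scale are illustrative assumptions, not the released implementation.

import numpy as np

def gaussian_detection_target(heatmap_shape, box_center, box_size, sigma_scale=0.1):
    """Illustrative detection target: a 2D Gaussian whose peak marks the
    object's center, with standard deviations proportional to the box size."""
    h, w = heatmap_shape                 # heatmap height and width in pixels
    cx, cy = box_center                  # annotated box centroid (pixels)
    sigma_x = sigma_scale * box_size[0]  # sigma_scale is an assumed constant
    sigma_y = sigma_scale * box_size[1]
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-(((xs - cx) ** 2) / (2 * sigma_x ** 2) +
                    ((ys - cy) ** 2) / (2 * sigma_y ** 2)))

# Example: a 40x40 heatmap for a box centered at (20, 24) with size 16x10 pixels.
target = gaussian_detection_target((40, 40), (20, 24), (16, 10))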
Detection and Tracking in MediaPipe
When the model is applied to every frame captured by the mobile device, it can suffer from jitter due to the ambiguity of the 3D bounding box estimated in each frame. To mitigate this, we adopt the detection+tracking framework recently released in our 2D object detection and tracking solution. This framework mitigates the need to run the network on every frame, allowing the use of heavier and therefore more accurate models, while keeping the pipeline real-time on mobile devices. It also retains object identity across frames and ensures that the prediction is temporally consistent, reducing the jitter.

For further efficiency in our mobile pipeline, we run our model inference only once every few frames. Next, we take the prediction and track it over time using the approach described in our previous blogs for instant motion tracking and Motion Stills. When a new prediction is made, we consolidate the detection result with the tracking result based on the area of overlap.
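
As a rough sketch of that consolidation step (the exact policy in the released pipeline may differ), the snippet below keeps the temporally smoother tracked box when a new detection overlaps it strongly, and otherwise switches to the new detection. Boxes are axis-aligned (x1, y1, x2, y2) tuples and the 0.5 threshold is an assumption.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def consolidate(tracked_box, new_detection, iou_threshold=0.5):
    """Merge a fresh detection with the current track based on area of overlap:
    a large overlap means it is the same object, so keep the smoother tracked box;
    otherwise start tracking the newly detected object."""
    if tracked_box is None:
        return new_detection
    return tracked_box if iou(tracked_box, new_detection) >= iou_threshold else new_detection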

To encourage researchers and developers to experiment and prototype based on our pipeline, we are releasing our on-device ML pipeline in MediaPipe, including an end-to-end demo mobile application and our trained models for two categories: shoes and chairs. We hope that sharing our solution with the wide research and development community will stimulate new use cases, new applications, and new research efforts. In the future, we plan to scale our model to many more categories, and further improve our on-device performance.
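
If you want to try the pipeline from Python, MediaPipe later shipped Objectron as part of its Python Solutions API; the sketch below assumes that package and the 'Shoe' model released with this post. Parameter names and the input file are illustrative, so check the current MediaPipe documentation before relying on them.

import cv2
import mediapipe as mp

mp_objectron = mp.solutions.objectron
mp_drawing = mp.solutions.drawing_utils

# 'Chair' is the other category released alongside 'Shoe'.
with mp_objectron.Objectron(static_image_mode=True,
                            max_num_objects=2,
                            min_detection_confidence=0.5,
                            model_name='Shoe') as objectron:
    image = cv2.imread('shoe.jpg')  # hypothetical input image
    results = objectron.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    for detected in results.detected_objects or []:
        # Draw the projected vertices and edges of the estimated 3D bounding box.
        mp_drawing.draw_landmarks(image, detected.landmarks_2d,
                                  mp_objectron.BOX_CONNECTIONS)
    cv2.imwrite('shoe_with_box.png', image)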
   
Examples of our 3D object detection in the wild.
Acknowledgements
The research described in this post was done by Adel Ahmadyan, Tingbo Hou, Jianing Wei, Matthias Grundmann, Liangkai Zhang, Jiuqiang Tang, Chris McClanahan, Tyler Mullen, Buck Bourdon, Esha Uboweja, Mogan Shieh, Siarhei Kazakou, Ming Guang Yong, Chuo-Ling Chang, and James Bruce. We thank Aliaksandr Shyrokau and the annotation team for their diligence in producing high-quality annotations.

Source: Google AI Blog




Measuring Compositional Generalization

People are capable of learning the meaning of a new word and then applying it to other language contexts. As Lake and Baroni put it, “Once a person learns the meaning of a new verb ‘dax’, he or she can immediately understand the meaning of ‘dax twice’ and ‘sing and dax’.” Similarly, one can learn a new object shape and then recognize it with different compositions of previously learned colors or materials (e.g., in the CLEVR dataset). This is because people exhibit the capacity to understand and produce a potentially infinite number of novel combinations of known components, or as Chomsky said, to make “infinite use of finite means.” In the context of a machine learning model learning from a set of training examples, this skill is called compositional generalization.

A common approach for measuring compositional generalization in machine learning (ML) systems is to split the training and testing data based on properties that intuitively correlate with compositional structure. For instance, one approach is to split the data based on sequence length—the training set consists of short examples, while the test set consists of longer examples. Another approach uses sequence patterns, meaning the split is based on randomly assigning clusters of examples sharing the same pattern to either train or test sets. For instance, the questions "Who directed Movie1" and "Who directed Movie2" both fall into the pattern "Who directed <MOVIE>" so they would be grouped together. Yet another method uses held-out primitives—some linguistic primitives are shown very rarely during training (e.g., the verb “jump”), but are very prominent in testing. While each of these experiments is useful, it is not immediately clear which experiment is a "better" measure for compositionality. Is it possible to systematically design an “optimal” compositional generalization experiment?
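
To make the pattern-based split concrete, here is a small sketch that maps each question to its pattern by anonymizing entity-like spans and then assigns whole pattern clusters to either train or test. The regular expression and example questions are illustrative assumptions rather than the preprocessing used in any of the cited papers.

import random
import re
from collections import defaultdict

def question_pattern(question):
    """Crude pattern extraction: replace capitalized spans with a placeholder.
    Real pipelines use entity annotations; consistent grouping is all that matters here."""
    return re.sub(r"\b[A-Z][\w']*(?:\s+[A-Z][\w']*)*", "<ENTITY>", question)

def pattern_split(questions, test_fraction=0.2, seed=0):
    """Assign entire pattern clusters to train or test so no pattern is shared."""
    clusters = defaultdict(list)
    for q in questions:
        clusters[question_pattern(q)].append(q)
    patterns = sorted(clusters)
    random.Random(seed).shuffle(patterns)
    n_test = max(1, int(test_fraction * len(patterns)))
    test = [q for p in patterns[:n_test] for q in clusters[p]]
    train = [q for p in patterns[n_test:] for q in clusters[p]]
    return train, test

train, test = pattern_split(["Who directed Inception", "Who directed Goldfinger",
                             "Who produced Inception", "Did Greta Gerwig direct Goldfinger"])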

In “Measuring Compositional Generalization: A Comprehensive Method on Realistic Data”, we attempt to address this question by introducing the largest and most comprehensive benchmark for compositional generalization using realistic natural language understanding tasks, specifically, semantic parsing and question answering. In this work, we propose a metric—compound divergence—that allows one to quantitatively assess how much a train-test split measures the compositional generalization ability of an ML system. We analyze the compositional generalization ability of three sequence-to-sequence ML architectures, and find that they fail to generalize compositionally. We are also releasing the Compositional Freebase Questions dataset used in the work as a resource for researchers wishing to improve upon these results.

Measuring Compositionality

In order to measure the compositional generalization ability of a system, we start with the assumption that we understand the underlying principles of how examples are generated. For instance, we begin with the grammar rules to which we must adhere when generating questions and answers. We then draw a distinction between atoms and compounds. Atoms are the building blocks that are used to generate examples, and compounds are concrete (potentially partial) compositions of these atoms. For example, in the figure below, every box is an atom (e.g., Shane Steel, brother, <entity>'s <entity>, produce, etc.); these atoms fit together to form compounds, such as produce and <verb>, Shane Steel’s brother, Did Shane Steel’s brother produce and direct Revenge of the Spy?, etc.
Building compositional sentences (compounds) from building blocks (atoms)


An ideal compositionality experiment then should have a similar atom distribution, i.e., the distribution of words and sub-phrases in the training set is as similar as possible to their distribution in the test set, but with a different compound distribution. To measure compositional generalization on a question answering task about a movie domain, one might, for instance, have the following questions in train and test:

Train set:
  • Who directed Inception?
  • Did Greta Gerwig direct Goldfinger?
  • ...

Test set:
  • Did Greta Gerwig produce Goldfinger?
  • Who produced Inception?
  • ...
While atoms such as “directed”, “Inception”, and “who <predicate> <entity>” appear in both the train and test sets, the compounds are different.

The Compositional Freebase Questions dataset

In order to conduct an accurate compositionality experiment, we created the Compositional Freebase Questions (CFQ) dataset, a simple, yet realistic, large dataset of natural language questions and answers generated from the public Freebase knowledge base. The CFQ can be used for text-in / text-out tasks, as well as semantic parsing. In our experiments, we focus on semantic parsing, where the input is a natural language question and the output is a query, which when executed against Freebase, produces the correct outcome. CFQ contains around 240k examples and almost 35k query patterns, making it significantly larger and more complex than comparable datasets — about 4 times that of WikiSQL with about 17x more query patterns than Complex Web Questions. Special care has been taken to ensure that the questions and answers are natural. We also quantify the complexity of the syntax in each example using the “complexity level” metric (L), which corresponds roughly to the depth of the parse tree, examples of which are shown below.

Complexity level (L): Question → Answer
L = 10: What did Commerzbank acquire? → Eurohypo; Dresdner Bank
L = 15: Did Dianna Rhodes’s spouse produce Soldier Blue? → No
L = 20: Which costume designer of E.T. married Mannequin’s cinematographer? → Deborah Lynn Scott
L = 40: Was Weekend Cowgirls produced, directed, and written by a film editor that The Evergreen State College and Fairway Pictures employed? → No
L = 50: Were It’s Not About the Shawerma, The Fifth Wall, Rick’s Canoe, White Stork Is Coming, and Blues for the Avatar executive produced, edited, directed, and written by a screenwriter’s parent? → Yes

Compositional Generalization Experiments on CFQ

For a given train-test split, if the compound distributions of the train and test sets are very similar, then their compound divergence would be close to 0, indicating that they are not difficult tests for compositional generalization. A compound divergence close to 1 means that the train-test sets have many different compounds, which makes it a good test for compositional generalization. Compound divergence thus captures the notion of "different compound distribution", as desired.
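
As a sketch of how such a number can be computed, the snippet below compares the normalized compound frequency distributions of the train and test sets using a weighted (Chernoff-style) similarity and reports one minus that similarity. The alpha value of 0.1 for compounds (and 0.5 for atoms) reflects our reading of the paper; treat the exact weighting as something to verify against it.

from collections import Counter

def divergence(train_items, test_items, alpha):
    """One minus a Chernoff-style similarity between the normalized frequency
    distributions of items (compounds or atoms) in the train and test sets."""
    p, q = Counter(train_items), Counter(test_items)
    p_total, q_total = sum(p.values()), sum(q.values())
    similarity = sum((p[k] / p_total) ** alpha * (q[k] / q_total) ** (1 - alpha)
                     for k in p.keys() & q.keys())
    return 1.0 - similarity

# Toy example with compounds represented as template strings.
train_compounds = ["who directed <M>", "who directed <M>", "did <P> direct <M>"]
test_compounds = ["who produced <M>", "did <P> produce <M>", "who directed <M>"]
compound_divergence = divergence(train_compounds, test_compounds, alpha=0.1)
atom_alpha = 0.5  # a more symmetric weighting is used for atom divergence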

We algorithmically generate train-test splits using the CFQ dataset that have a compound divergence ranging from 0 to 0.7 (the maximum that we were able to achieve). We fix the atom divergence to be very small. Then, for each split we measure the performance of three standard ML architectures — LSTM+attention, Transformer, and Universal Transformer. The results are shown in the graph below.
Compound divergence vs accuracy for three ML architectures. There is a surprisingly strong negative correlation between compound divergence and accuracy.

We measure the performance of a model by comparing the correct answers with the output string given by the model. All models achieve an accuracy greater than 95% when the compound divergence is very low. The mean accuracy on the split with highest compound divergence is below 20% for all architectures, which means that even a large training set with a similar atom distribution between train and test is not sufficient for the architectures to generalize well. For all architectures, there is a strong negative correlation between the compound divergence and the accuracy. This seems to indicate that compound divergence successfully captures the core difficulty for these ML architectures to generalize compositionally.

Potentially promising directions for future work might be to apply unsupervised pre-training on input language or output queries, or to use more diverse or more targeted learning architectures, such as syntactic attention. It would also be interesting to apply this approach to other domains such as visual reasoning, e.g. based on CLEVR, or to extend our approach to broader subsets of language understanding, including the use of ambiguous constructs, negations, quantification, comparatives, additional languages, and other vertical domains. We hope that this work will inspire others to use this benchmark to advance the compositional generalization capabilities of learning systems.

By Marc van Zee, Software Engineer, Google Research – Brain Team

Anna Vainer knows what makes her remarkable

Even as a teenager, Anna Vainer knew what she wanted. “I remember, at 14, telling my sister ‘I’m going to be working in marketing,’” she says, smiling. “I don’t know how I knew that.” She was right: Anna is the head of B2B Growth Marketing for Google in Europe, the Middle East and Africa (EMEA), and runs the regional team for Think with Google, a destination for marketing trends and insights. Anna says she’s driven by working with people, and it’s in her other role as the co-founder of #IamRemarkable that she truly gets to flex this skill. 

#IamRemarkable is an initiative that empowers women and underrepresented groups to celebrate their achievements in the workplace and beyond. The goal is to challenge the social perception that surrounds self-promotion, an issue that not only affects individuals, but also hinders progress when it comes to diversity, equity and inclusion. 

#IamRemarkable also has a workshop component, which to date has reached more than 100,000 participants in more than 100 countries with the help of 5,000 facilitators; many participants credit the workshop with helping them make real, positive career and personal growth. 

The idea for #IamRemarkable came to Anna during a training that asked women to write down and read lists of their accomplishments. She was shocked by her own reaction. “I remember sitting there, looking at the women reading their achievements and I was thinking to myself, ‘wow, why do they brag? Why do they have to show off?’” she says. “And then it started to hit me that there was something wrong with this feeling. They were asked to stand in front of the room and talk about their achievements; that was the exercise.” Today, Anna helps others learn to acknowledge and announce what makes them great—while also making sure to practice what she preaches. 

What was your career path to Google?

At university, I studied economics and management and then I kind of rolled into doing an internship at a pharmaceutical company working as an economist. I told myself, “you know what, I studied economics, let’s see what it means to be an actual economist,” but soon enough I realized this was not going to be my preferable field of professional engagement. Shortly after that, I applied for an internship at Google and got it, and that was it. I’ve been at Google for nearly 10 years. 

In a parallel universe, what’s a different career you would have pursued? 

I would love to run a boutique hotel in the countryside of Israel, where I grew up. I think about my grandparents' summer house in Minsk, Belarus, and the amazing summers we spent there as a family every year until we moved to Israel. And running a hotel means I could create this experience for travellers from all over the world in one place. 

How did #IamRemarkable first get started?

After that training where I felt like the women reading their achievements out loud were bragging, I talked to a colleague of mine, Anna Zapesochini, who had the same feeling when she took the course. She told me we should make a video about the process people go through during this exercise. I went to my previous manager, Riki Drori, and said, “I need to make this video, we have a really great idea to help women overcome their confidence gaps and their modesty gaps.” She said, “I’m going to give you the budget for the video, but if this is as important as you say it is, how are you actually going to bring it to every woman on the planet?” That question led to so many ideas. Soon after that conversation, Anna [Zapesochini] and I, with a ton of support from my managers Janusz Moneta and Yonca Dervişoğlu, founded the #IamRemarkable initiative, at the heart of which lies a 90-minute workshop aimed at empowering women and underrepresented groups to celebrate their achievements and break modesty norms and glass ceilings. 

The original #IamRemarkable video that Anna requested the budget to make.

What’s your favorite part of the workshop?

After we ask people to fill out a whole page with statements about what makes them remarkable, we ask them to read it out loud. And the moment you ask them to read it out loud you hear “hhhuuuuhhh!”—like the air is sucked out of the room. That’s definitely my favorite part. 

Have any of them in particular really stuck with you? 

One of the most memorable ones was in the past year at Web Summit in Lisbon. It was my first week back from maternity leave and we ran a workshop for 250 people. The room was packed, people were sitting on the floor. After we asked people to read their lists of what makes them remarkable in their small groups, we invited 10 brave people to stand on stage and read one of their statements out loud, and everybody wept. It was such a high level of intimacy for such a large room, I was astonished. 

My baby and husband were actually at that workshop, which was so great. It made me think of the future generation and how I want the workplace to be for my daughter, and I think we’ve made really good steps in the past couple of years. #IamRemarkable is creating really great tools for people. 


Without putting you on the spot, what are some things that make you remarkable?

Professionally, there are a few achievements I’m proud of. The first is that I created #IamRemarkable; another is that I started a campaign similar to Black Friday in Israel to drive e-commerce in the country. And personally, I’m remarkable because I was part of the Israeli national synchronized swimming team. You won’t see me in the pool with a nose clip now, but I did that for seven years. 

What’s one piece of advice you have for women who struggle with self-promotion? 

The piece of homework we give people after the workshop is to write down your top three achievements from the past month or past period, and practice saying them in front of the mirror. Then practice saying them to a friend or colleague you trust. Then, put time down with your manager to go through that list. 

The most recent #IamRemarkable video, featuring Anna.

With today’s overload of data—whether it’s email, ads, whatever—you can’t assume people see and understand what you’ve worked on. The ability to talk about your personal contribution is critical, and many times, women specifically use team-based language; “we” as opposed to “I.” Learning to use self-promoting language is important as well. Practice, practice, practice. It’s like flexing a muscle; it’s going to feel awkward the first time, and even maybe the third time—but the tenth time, it will feel natural. 

Was there a time in your life when you could have benefited from these skills? 

To be honest, to this day I still have those moments where I need to practice those skills. I don’t think it’s that you just learn it and then you’re amazing at it. But it definitely would have benefited me earlier in my career, and during school as well. It’s really important to learn from a young age to talk about achievements in an objective way. You see this in the workshop, where people look at their full page and see their lives unfold, all of their achievements on the page, and suddenly it fills them up with so much pride; it gives you this sense of ability and confidence that you can achieve anything. The original video we made with that scrappy budget ends with a woman saying, “I wonder what else I can do.” I think that’s a pretty important feeling to have at any stage of your life. 


Important changes to less secure apps and account recovery management in the Admin console

What’s changing

We’re making some updates to how you manage less secure app (LSA) settings and account recovery (AR) settings in the Admin console. This is part of a wider migration of our Admin console pages to a simplified and more streamlined experience, and will affect the sections at Admin console > Security > Settings > Less Secure Apps and Admin console > Security > Settings > Account Recovery. In those sections you may notice:

  • An updated interface, which reorganizes the settings to make them easier to find and change.
  • A new system to apply group-based policies in these areas. As a result of this change, existing settings will be migrated to the new system. See "Additional details" below for more information.


Who’s impacted

Admins

Why it’s important

The interface updates will make security settings easier to find and scan, reducing the number of clicks it takes to manage these settings. The new group-based policy system is the same one used in other areas of the Admin console and so should be more familiar and intuitive than the legacy system. The new system allows multiple group-based policies to be applied in a single UI view, and makes it possible to manage policies exclusively using groups, instead of a combination of OU-based policy with group-based exceptions.

Additional details

As part of migrating the LSA and AR pages to the new UI, we will migrate any currently applied group-based policies to the new group-based system. This migration will have no functional impact for most customers.

However, for a very small number of organizations (specifically, those that currently have group-based policies for LSA and AR applied at child-OU levels), this transition may impact your existing settings. We will email the primary admin at affected domains with more details on how we will handle the transition, and instructions for how to prepare. If you don’t receive an email, no action is required.

Getting started

Admins: Existing policies will be migrated to the new group-based policy system automatically unless you’re notified by email (see “Additional details” above). Visit the Help Center to learn more about using groups to manage Admin console settings, controlling access to LSAs, or setting up account recovery for users.

End users: There is no end-user impact unless admins change settings applied to them.
Before and after views of the updated settings pages.

Rollout pace




Availability


  • Available to all G Suite customers


Resources


Set custom table ranges for charts in Google Sheets

What’s changing 

We’re improving how data is suggested and selected when creating a chart in Google Sheets. It’s now easier to locate and select the data you need when creating a dashboard over a dataset with slicers, pivot tables, charts, and more.

Who’s impacted 

End users

Why you’d use it 

When creating reports in Sheets, it’s common to create multiple charts from the same data table, but using different column ranges. Previously, all data ranges on a table would be used when creating a chart. Now, you’ll be able to select which columns to use for the chart axis and series. This allows you to quickly customize your charts so that they display the most relevant data.

Getting started

Admins: There is no admin action required for this feature.

End users: This feature will be available by default. In the chart editor, you can select a column as the X-axis and under “Series” you can select additional columns to populate your chart.
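
For teams that build reports programmatically, the same explicit choice of axis and series columns can be expressed through the Sheets API, although that is outside the scope of this announcement. The request below is a hypothetical sketch that uses column A as the chart’s domain (X-axis) and column C as its only series, skipping column B; the sheet ID, row counts, and anchor cell are placeholder assumptions.

# Hypothetical AddChartRequest for the Sheets API (google-api-python-client).
# It would be sent via spreadsheets().batchUpdate(spreadsheetId=..., body={"requests": [request]}).
request = {
    "addChart": {
        "chart": {
            "spec": {
                "title": "Chart over selected columns",
                "basicChart": {
                    "chartType": "COLUMN",
                    "headerCount": 1,
                    # Domain (X-axis): column A of the data table, rows 1-20.
                    "domains": [{"domain": {"sourceRange": {"sources": [{
                        "sheetId": 0, "startRowIndex": 0, "endRowIndex": 20,
                        "startColumnIndex": 0, "endColumnIndex": 1}]}}}],
                    # Series: column C only, deliberately skipping column B.
                    "series": [{"series": {"sourceRange": {"sources": [{
                        "sheetId": 0, "startRowIndex": 0, "endRowIndex": 20,
                        "startColumnIndex": 2, "endColumnIndex": 3}]}},
                        "targetAxis": "LEFT_AXIS"}],
                },
            },
            # Place the chart as an overlay anchored at cell F1.
            "position": {"overlayPosition": {"anchorCell": {
                "sheetId": 0, "rowIndex": 0, "columnIndex": 5}}},
        }
    }
}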


Rollout pace


Availability

  • Available to all G Suite customers and users with personal Google accounts

Announcing our first GCP VRP Prize winner and updates to 2020 program




Last year, we announced a yearly Google Cloud Platform (GCP) VRP Prize to promote security research of GCP. Since then, we’ve received many interesting entries from the security research community as part of this new initiative. Today, we are announcing the winner as well as several updates to our program for 2020.

After careful evaluation of all the submissions, we are excited to announce our winner of the 2019 GCP VRP prize: Wouter ter Maat, who submitted a write-up about Google Cloud Shell vulnerabilities. You can read his winning write-up here.

There were several other excellent reports submitted to our GCP VRP in 2019. To learn more about them watch this video by LiveOverflow, which explains some of the top submissions in detail.

To encourage more security researchers to look for vulnerabilities in GCP and to better reward our top bug hunters, we're tripling the total amount of the GCP VRP Prize this year. We will pay out a total of $313,337 for the top vulnerability reports in GCP products submitted in 2020. The following prize amounts will be distributed among the top 6 submissions:
  • 1st prize: $133,337
  • 2nd prize: $73,331
  • 3rd prize: $73,331
  • 4th prize: $31,337
  • 5th prize: $1,001
  • 6th prize: $1,000

Like last year, submissions should have public write-ups in order to be eligible for the prize. The number of vulnerability reports in a single write-up is not a factor. You can even make multiple submissions, one for each write-up. These prizes are only for vulnerabilities found in GCP products. If you have budget constraints regarding access to testing environments, you can use the free tier of GCP. Note that this prize is not a replacement of our Vulnerability Reward Program (VRP), and that we will continue to pay security researchers under the VRP for disclosing security issues that affect Google services, including GCP. Complete details, terms and conditions about the prize can be found here.


Thank you to everyone who submitted entries in 2019! Make sure to nominate your VRP reports and write-ups for the 2020 GCP VRP prize here before December 31, 2020 at 11:59 GMT.