Google OAuth incremental authorization improvement

Posted by Vikrant Rana, Product Manager, and Badi Azad, Group Product Manager

Summary

Google Identity strives to be the best steward for Google Account users who entrust us to protect their data. At the same time, we want to help our developer community build apps that give users amazing experiences. Together, Google and developers can give users three important ways to manage how they share their data:

  1. Give users control in deciding who has access to their account data
  2. Make it easier and safer for users to share their Google Account data with your app when they choose to do so
  3. Make it clear to users the specific data they are sharing with apps

What we are doing today

In service of that stewardship, today we are announcing an OAuth consent experience that simplifies how users share data with apps. The new experience also improves consent conversion for apps that use incremental authorization, which requests only one scope at a time. Users can now grant this kind of request with a single tap.

Screenshot comparing the previous consent screen and the new screen shown when an example app wants to access your account.

A quick recap

Let’s summarize a few past improvements so you have a full picture of the work we have been doing on the OAuth consent flow.

In mid-2019, we significantly overhauled the consent screen to give users fine-grained control over the account data they chose to share with a given app. In that flow, when an app requested access to multiple Google resources, the user would see one screen for each scope.

In July 2021, we consolidated these multiple-permission requests into a single screen, while still allowing granular data sharing control for users. Our change today represents a continuation of improvements on that experience.

Screenshot that shows the option to select what Example app can access

The Identity team will continue to gather feedback and further enhance the overall user experience around Google Identity Services and sharing account data.

What do developers need to do?

There is no change you need to make to your app. However, we recommend using incremental authorization and requesting only one resource at the time your app needs it. We believe this makes your account data request more relevant to the user and therefore improves consent conversion. Read more about incremental authorization in our developer guides.
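As a rough sketch of what requesting one scope at a time can look like, the snippet below builds a Google OAuth 2.0 authorization URL that asks for a single scope and sets `include_granted_scopes=true`, the parameter Google's OAuth endpoints use to merge a new grant with scopes the user already approved. The client ID, redirect URI, and scope values here are placeholders, and a production app would typically use a client library rather than building the URL by hand.

```python
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def build_incremental_auth_url(client_id, redirect_uri, scope, state):
    """Build an authorization URL that requests a single scope and asks
    Google to combine it with scopes the user has already granted."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": scope,  # request exactly one scope, at the moment it is needed
        "include_granted_scopes": "true",  # opt in to incremental authorization
        "state": state,
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

# Placeholder values for illustration only.
url = build_incremental_auth_url(
    client_id="YOUR_CLIENT_ID",
    redirect_uri="https://example.com/oauth2callback",
    scope="https://www.googleapis.com/auth/drive.file",
    state="xyz123",
)
print(url)
```

Because only one scope is requested, the resulting consent screen is the single-tap experience described above.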

If your app requires multiple resources at once, make sure it can handle partial consent gracefully and reduce its functionality appropriately as per the OAuth 2.0 policy.
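One way to handle partial consent gracefully is to compare the scopes the user actually granted (returned in the token response's `scope` field) against what each feature needs, and enable only the features that were approved. The feature names and scope mapping below are illustrative assumptions, not part of any Google API.

```python
# Map of app features to the single Google scope each one needs.
# Feature names and this mapping are hypothetical, for illustration only.
FEATURE_SCOPES = {
    "import_photos": "https://www.googleapis.com/auth/photoslibrary.readonly",
    "save_to_drive": "https://www.googleapis.com/auth/drive.file",
}

def enabled_features(granted_scopes):
    """Return the features the app may enable, given the space-delimited
    scope string the user actually consented to. Features whose scope was
    declined are left out, so the app degrades gracefully instead of failing."""
    granted = set(granted_scopes.split())
    return {feature for feature, scope in FEATURE_SCOPES.items() if scope in granted}

# Example: the user granted Drive access but declined Photos access.
features = enabled_features("https://www.googleapis.com/auth/drive.file openid")
```

In this example only `save_to_drive` is enabled; the app would hide or disable its photo-import feature rather than erroring out.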

Related content

Google at ICCV 2021

The International Conference on Computer Vision 2021 (ICCV 2021), one of the world's premier conferences on computer vision, starts this week. A Champion Sponsor and leader in computer vision research, Google will have a strong presence at ICCV 2021 with more than 50 research presentations and involvement in the organization of a number of workshops and tutorials.

If you are attending ICCV this year, we hope you’ll check out the work of our researchers who are actively pursuing the latest innovations in computer vision. Learn more about our research being presented in the list below (Google affiliation in bold).

Organizing Committee
Diversity and Inclusion Chair: Negar Rostamzadeh
Area Chairs: Andrea Tagliasacchi, Boqing Gong, Ce Liu, Dilip Krishnan, Jordi Pont-Tuset, Michael Rubinstein, Michael S. Ryoo, Negar Rostamzadeh, Noah Snavely, Rodrigo Benenson, Tsung-Yi Lin, Vittorio Ferrari

Publications
MosaicOS: A Simple and Effective Use of Object-Centric Images for Long-Tailed Object Detection
Cheng Zhang, Tai-Yu Pan, Yandong Li, Hexiang Hu, Dong Xuan, Soravit Changpinyo, Boqing Gong, Wei-Lun Chao

Learning to Resize Images for Computer Vision Tasks
Hossein Talebi, Peyman Milanfar

Joint Representation Learning and Novel Category Discovery on Single- and Multi-Modal Data
Xuhui Jia, Kai Han, Yukun Zhu, Bradley Green

Explaining in Style: Training a GAN to Explain a Classifier in StyleSpace
Oran Lang, Yossi Gandelsman, Michal Yarom, Yoav Wald, Gal Elidan, Avinatan Hassidim, William T. Freeman, Phillip Isola, Amir Globerson, Michal Irani, Inbar Mosseri

Learning Fast Sample Re-weighting without Reward Data
Zizhao Zhang, Tomas Pfister

Contrastive Multimodal Fusion with TupleInfoNCE
Yunze Liu, Qingnan Fan, Shanghang Zhang, Hao Dong, Thomas Funkhouser, Li Yi

Learning Temporal Dynamics from Cycles in Narrated Video
Dave Epstein*, Jiajun Wu, Cordelia Schmid, Chen Sun

Patch Craft: Video Denoising by Deep Modeling and Patch Matching
Gregory Vaksman, Michael Elad, Peyman Milanfar

How to Train Neural Networks for Flare Removal
Yicheng Wu*, Qiurui He, Tianfan Xue, Rahul Garg, Jiawen Chen, Ashok Veeraraghavan, Jonathan T. Barron

Learning to Reduce Defocus Blur by Realistically Modeling Dual-Pixel Data
Abdullah Abuolaim*, Mauricio Delbracio, Damien Kelly, Michael S. Brown, Peyman Milanfar

Hybrid Neural Fusion for Full-Frame Video Stabilization
Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang

A Dark Flash Normal Camera
Zhihao Xia*, Jason Lawrence, Supreeth Achar

Efficient Large Scale Inlier Voting for Geometric Vision Problems
Dror Aiger, Simon Lynen, Jan Hosang, Bernhard Zeisl

Big Self-Supervised Models Advance Medical Image Classification
Shekoofeh Azizi, Basil Mustafa, Fiona Ryan*, Zachary Beaver, Jan Freyberg, Jonathan Deaton, Aaron Loh, Alan Karthikesalingam, Simon Kornblith, Ting Chen, Vivek Natarajan, Mohammad Norouzi

Physics-Enhanced Machine Learning for Virtual Fluorescence Microscopy
Colin L. Cooke, Fanjie Kong, Amey Chaware, Kevin C. Zhou, Kanghyun Kim, Rong Xu, D. Michael Ando, Samuel J. Yang, Pavan Chandra Konda, Roarke Horstmeyer

Retrieve in Style: Unsupervised Facial Feature Transfer and Retrieval
Min Jin Chong, Wen-Sheng Chu, Abhishek Kumar, David Forsyth

Deep Survival Analysis with Longitudinal X-Rays for COVID-19
Michelle Shu, Richard Strong Bowen, Charles Herrmann, Gengmo Qi, Michele Santacatterina, Ramin Zabih

MUSIQ: Multi-Scale Image Quality Transformer
Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, Feng Yang

imGHUM: Implicit Generative Models of 3D Human Shape and Articulated Pose
Thiemo Alldieck, Hongyi Xu, Cristian Sminchisescu

Deep Hybrid Self-Prior for Full 3D Mesh Generation
Xingkui Wei, Zhengqing Chen, Yanwei Fu, Zhaopeng Cui, Yinda Zhang

Differentiable Surface Rendering via Non-Differentiable Sampling
Forrester Cole, Kyle Genova, Avneesh Sud, Daniel Vlasic, Zhoutong Zhang

A Lazy Approach to Long-Horizon Gradient-Based Meta-Learning
Muhammad Abdullah Jamal, Liqiang Wang, Boqing Gong

ViViT: A Video Vision Transformer
Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid

The Surprising Impact of Mask-Head Architecture on Novel Class Segmentation (see the blog post)
Vighnesh Birodkar, Zhichao Lu, Siyang Li, Vivek Rathod, Jonathan Huang

Generalize Then Adapt: Source-Free Domain Adaptive Semantic Segmentation
Jogendra Nath Kundu, Akshay Kulkarni, Amit Singh, Varun Jampani, R. Venkatesh Babu

Unified Graph Structured Models for Video Understanding
Anurag Arnab, Chen Sun, Cordelia Schmid

The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, Justin Gilmer

Learning Rare Category Classifiers on a Tight Labeling Budget
Ravi Teja Mullapudi, Fait Poms, William R. Mark, Deva Ramanan, Kayvon Fatahalian

Composable Augmentation Encoding for Video Representation Learning
Chen Sun, Arsha Nagrani, Yonglong Tian, Cordelia Schmid

Multi-Task Self-Training for Learning General Representations
Golnaz Ghiasi, Barret Zoph, Ekin D. Cubuk, Quoc V. Le, Tsung-Yi Lin

With a Little Help From My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations
Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, Andrew Zisserman

Understanding Robustness of Transformers for Image Classification
Srinadh Bhojanapalli, Ayan Chakrabarti, Daniel Glasner, Daliang Li, Thomas Unterthiner, Andreas Veit

Impact of Aliasing on Generalization in Deep Convolutional Networks
Cristina Vasconcelos, Hugo Larochelle, Vincent Dumoulin, Rob Romijnders, Nicolas Le Roux, Ross Goroshin

von Mises-Fisher Loss: An Exploration of Embedding Geometries for Supervised Learning
Tyler R. Scott*, Andrew C. Gallagher, Michael C. Mozer

Contrastive Learning for Label Efficient Semantic Segmentation
Xiangyun Zhao*, Raviteja Vemulapalli, Philip Andrew Mansfield, Boqing Gong, Bradley Green, Lior Shapira, Ying Wu

Interacting Two-Hand 3D Pose and Shape Reconstruction from Single Color Image
Baowen Zhang, Yangang Wang, Xiaoming Deng, Yinda Zhang, Ping Tan, Cuixia Ma, Hongan Wang

Telling the What While Pointing to the Where: Multimodal Queries for Image Retrieval
Soravit Changpinyo, Jordi Pont-Tuset, Vittorio Ferrari, Radu Soricut

SO-Pose: Exploiting Self-Occlusion for Direct 6D Pose Estimation
Yan Di, Fabian Manhardt, Gu Wang, Xiangyang Ji, Nassir Navab, Federico Tombari

Patch2CAD: Patchwise Embedding Learning for In-the-Wild Shape Retrieval from a Single Image
Weicheng Kuo, Anelia Angelova, Tsung-Yi Lin, Angela Dai

NeRD: Neural Reflectance Decomposition From Image Collections
Mark Boss, Raphael Braun, Varun Jampani, Jonathan T. Barron, Ce Liu, Hendrik P.A. Lensch

THUNDR: Transformer-Based 3D Human Reconstruction with Markers
Mihai Zanfir, Andrei Zanfir, Eduard Gabriel Bazavan, William T. Freeman, Rahul Sukthankar, Cristian Sminchisescu

Discovering 3D Parts from Image Collections
Chun-Han Yao, Wei-Chih Hung, Varun Jampani, Ming-Hsuan Yang

Multiresolution Deep Implicit Functions for 3D Shape Representation
Zhang Chen*, Yinda Zhang, Kyle Genova, Sean Fanello, Sofien Bouaziz, Christian Hane, Ruofei Du, Cem Keskin, Thomas Funkhouser, Danhang Tang

AI Choreographer: Music Conditioned 3D Dance Generation With AIST++ (see the blog post)
Ruilong Li*, Shan Yang, David A. Ross, Angjoo Kanazawa

Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering
Bangbang Yang, Han Zhou, Yinda Zhang, Hujun Bao, Yinghao Xu, Guofeng Zhang, Yijin Li, Zhaopeng Cui

VariTex: Variational Neural Face Textures
Marcel C. Buhler, Abhimitra Meka, Gengyan Li, Thabo Beeler, Otmar Hilliges

Pathdreamer: A World Model for Indoor Navigation (see the blog post)
Jing Yu Koh, Honglak Lee, Yinfei Yang, Jason Baldridge, Peter Anderson

4D-Net for Learned Multi-Modal Alignment
AJ Piergiovanni, Vincent Casser, Michael S. Ryoo, Anelia Angelova

Episodic Transformer for Vision-and-Language Navigation
Alexander Pashevich*, Cordelia Schmid, Chen Sun

Graph-to-3D: End-to-End Generation and Manipulation of 3D Scenes Using Scene Graphs
Helisa Dhamo, Fabian Manhardt, Nassir Navab, Federico Tombari

Unconditional Scene Graph Generation
Sarthak Garg, Helisa Dhamo, Azade Farshad, Sabrina Musatian, Nassir Navab, Federico Tombari

Panoptic Narrative Grounding
Cristina González, Nicolás Ayobi, Isabela Hernández, José Hernández, Jordi Pont-Tuset, Pablo Arbeláez

Cross-Camera Convolutional Color Constancy
Mahmoud Afifi*, Jonathan T. Barron, Chloe LeGendre, Yun-Ta Tsai, Francois Bleibel

Defocus Map Estimation and Deblurring from a Single Dual-Pixel Image
Shumian Xin*, Neal Wadhwa, Tianfan Xue, Jonathan T. Barron, Pratul P. Srinivasan, Jiawen Chen, Ioannis Gkioulekas, Rahul Garg

COMISR: Compression-Informed Video Super-Resolution
Yinxiao Li, Pengchong Jin, Feng Yang, Ce Liu, Ming-Hsuan Yang, Peyman Milanfar

Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields
Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan

Nerfies: Deformable Neural Radiance Fields
Keunhong Park*, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Steven M. Seitz, Ricardo Martin-Brualla

Baking Neural Radiance Fields for Real-Time View Synthesis
Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, Paul Debevec

Stacked Homography Transformations for Multi-View Pedestrian Detection
Liangchen Song, Jialian Wu, Ming Yang, Qian Zhang, Yuan Li, Junsong Yuan

COTR: Correspondence Transformer for Matching Across Images
Wei Jiang, Eduard Trulls, Jan Hosang, Andrea Tagliasacchi, Kwang Moo Yi

Large Scale Interactive Motion Forecasting for Autonomous Driving: The Waymo Open Motion Dataset
Scott Ettinger, Shuyang Cheng, Benjamin Caine, Chenxi Liu, Hang Zhao, Sabeek Pradhan, Yuning Chai, Ben Sapp, Charles R. Qi, Yin Zhou, Zoey Yang, Aurélien Chouard, Pei Sun, Jiquan Ngiam, Vijay Vasudevan, Alexander McCauley, Jonathon Shlens, Dragomir Anguelov

Low-Shot Validation: Active Importance Sampling for Estimating Classifier Performance on Rare Categories
Fait Poms, Vishnu Sarukkai, Ravi Teja Mullapudi, Nimit S. Sohoni, William R. Mark, Deva Ramanan, Kayvon Fatahalian

Vector Neurons: A General Framework for SO(3)-Equivariant Networks
Congyue Deng, Or Litany, Yueqi Duan, Adrien Poulenard, Andrea Tagliasacchi, Leonidas J. Guibas

SLIDE: Single Image 3D Photography with Soft Layering and Depth-Aware Inpainting
Varun Jampani, Huiwen Chang, Kyle Sargent, Abhishek Kar, Richard Tucker, Michael Krainin, Dominik Kaeser, William T. Freeman, David Salesin, Brian Curless, Ce Liu

DeepPanoContext: Panoramic 3D Scene Understanding with Holistic Scene Context Graph and Relation-Based Optimization
Cheng Zhang, Zhaopeng Cui, Cai Chen, Shuaicheng Liu, Bing Zeng, Hujun Bao, Yinda Zhang

Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image
Andrew Liu, Richard Tucker, Varun Jampani, Ameesh Makadia, Noah Snavely, Angjoo Kanazawa

Workshops (only Google affiliations are noted)
Visual Inductive Priors for Data-Efficient Deep Learning Workshop
Speakers: Ekin Dogus Cubuk, Chelsea Finn

Instance-Level Recognition Workshop
Organizers: Andre Araujo, Cam Askew, Bingyi Cao, Jack Sim, Tobias Weyand

Unsup3D: Unsupervised 3D Learning in the Wild
Speakers: Adel Ahmadyan, Noah Snavely, Tali Dekel

Embedded and Real-World Computer Vision in Autonomous Driving (ERCVAD 2021)
Speakers: Mingxing Tan

Adversarial Robustness in the Real World
Speakers: Nicholas Carlini

Neural Architectures: Past, Present and Future
Speakers: Been Kim, Hanxiao Liu
Organizers: Azade Nazi, Mingxing Tan, Quoc V. Le

Computational Challenges in Digital Pathology
Organizers: Craig Mermel, Po-Hsuan Cameron Chen

Interactive Labeling and Data Augmentation for Vision
Speakers: Vittorio Ferrari

Map-Based Localization for Autonomous Driving
Speakers: Simon Lynen

DeeperAction: Challenge and Workshop on Localized and Detailed Understanding of Human Actions in Videos
Speakers: Chen Sun
Advisors: Rahul Sukthankar

Differentiable 3D Vision and Graphics
Speakers: Angjoo Kanazawa

Deep Multi-Task Learning in Computer Vision
Speakers: Chelsea Finn

Computer Vision for AR/VR
Speakers: Matthias Grundmann, Ira Kemelmacher-Shlizerman

GigaVision: When Gigapixel Videography Meets Computer Vision
Organizers: Feng Yang

Human Interaction for Robotic Navigation
Speakers: Peter Anderson

Advances in Image Manipulation Workshop and Challenges
Organizers: Ming-Hsuan Yang

More Exploration, Less Exploitation (MELEX)
Speakers: Angjoo Kanazawa

Structural and Compositional Learning on 3D Data
Speakers: Thomas Funkhouser, Kyle Genova
Organizers: Fei Xia

Simulation Technology for Embodied AI
Organizers: Li Yi

Video Scene Parsing in the Wild Challenge Workshop
Speakers: Liang-Chieh (Jay) Chen

Structured Representations for Video Understanding
Organizers: Cordelia Schmid

Closing the Loop Between Vision and Language
Speakers: Cordelia Schmid

Segmenting and Tracking Every Point and Pixel: 6th Workshop on Benchmarking Multi-Target Tracking
Organizers: Jun Xie, Liang-Chieh Chen

AI for Creative Video Editing and Understanding
Speakers: Angjoo Kanazawa, Irfan Essa

BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments
Speakers: Chelsea Finn
Organizers: Fei Xia

Computer Vision for Automated Medical Diagnosis
Organizers: Maithra Raghu

Computer Vision for the Factory Floor
Speakers: Cordelia Schmid

Tutorials (only Google affiliations are noted)
Towards Robust, Trustworthy, and Explainable Computer Vision
Speakers: Sara Hooker

Multi-Modality Learning from Videos and Beyond
Organizers: Arsha Nagrani

Tutorial on Large Scale Holistic Video Understanding
Organizers: David Ross

Efficient Video Understanding: State of the Art, Challenges, and Opportunities
Organizers: Arsha Nagrani

* Indicates work done while at Google

Source: Google AI Blog


Continuous framing comes to more Google Meet hardware devices, available by default for all eligible devices

What’s changing

Continuous framing is a feature for Google Meet hardware devices that ensures participants are automatically framed no matter where they are in the room. We're making two updates to the feature:
  • Available on more devices: We're adding support for continuous framing on Meet-certified cameras that ship with our ASUS & Logitech room kits, specifically the Logitech PTZ Pro 2, Logitech MeetUp, and the Logitech Rally. This feature was previously only available on the Series One Room Kit Smart Camera and Smart Camera XL.
  • Available for eligible devices by default: The continuous framing toggle in camera settings will now be available by default on eligible devices. Previously, it had to be enabled in the Admin console. Now, unless you've explicitly turned the feature off in the Admin console, it will be available as an option in the meeting room, but will be toggled off by default. See more details below.

Who’s impacted

Admins and end users

Why you’d use it

Continuous framing allows in-room participants to be better represented in a hybrid meeting as it captures the speakers in the frame regardless of where they're sitting in the room. Seeing all the participants in the room up close makes it easier to:
  • Read facial expressions and body language
  • Maintain eye contact with meeting participants
  • Smoothly take turns in the conversation

Additional details

Unless you’ve already explicitly turned off the feature in the Admin console, continuous framing will be ON by default on eligible Google Meet hardware in your organization.

If you would like to turn off this feature so it is not available on an individual device, the continuous framing setting is found on the device details page. This toggle can also be modified via a bulk action on the device list page.

Getting started

  • Admins: Unless you’ve already explicitly turned off the feature, continuous framing will be ON by default on eligible Google Meet hardware in your organization. Visit the Help Center to learn more about continuous framing.
  • End users: Unless disabled by your admin, this toggle will be available on the device for all eligible meeting room cameras but will be set to OFF at the beginning of each call by default. You can turn it on in the device settings.

Availability

  • Available to all Google Workspace customers using eligible “Meet hardware” licensed devices.

Highlights from the Web Stories Workshop

In May, over 100 content creators, publishers, agencies, and other businesses joined the Google Web Creators team for a virtual workshop on Web Stories. The workshop was designed to teach attendees about media-rich, tappable, web-based Stories so they could create their own. And now, we want to share the information and presentations from that workshop with you!

An introduction to Web Stories

In the workshop’s intro session, Raunak Mahesh from Google’s Global Product Partnerships team covered the basics of Web Stories. He explained how they’re an effective storytelling format for all types of content creators — from large news outlets like USA Today to individual creators like The Tiny Herbivore. He also shared that more people continue to access content on their mobile phones, and that 64% of readers prefer tappable content over scrolling articles.

Raunak Mahesh is seated in front of the camera, with a green plant and white walls behind him.

Google’s Raunak Mahesh presents Web Stories basics and benefits at the Web Stories Workshop.

Raunak also shared the benefits of Web Stories. Unlike closed platforms, you own your Web Stories, control how long they’re active, and can use them to direct readers to other content on your website or blog. And you don’t need to know how to write code to create Web Stories. Tools such as MakeStories, Newsroom AI, and the WordPress Web Stories plugin put building Stories within reach of busy bloggers and journalists.

Stories allow you to reach new audiences through Google Search and Discover, and can be embedded on your blog for added visual flair. They also help you monetize your content on your own terms, using affiliate links and ads.

Web Stories best practices

Next, Google Web Ecosystem Consultant Shishir Malani discussed best practices for creating Web Stories. Here are his top tips:

  1. Include a clear title and branding elements on your cover page.
  2. Create a complete narrative and flesh out your content with interviews, research, lists of items, and destinations or steps. Stories with incomplete narratives perform poorly compared to stories with complete narrative arcs.
  3. When deciding what to put on a page, think of the media first. “We like to think of Web Stories as writing a blog, but backwards,” Shishir said. “So start with the visuals that will best and most vibrantly tell your story, and then add in text to clarify the narrative.”
  4. Ideally, a Web Story should be 10 to 15 pages long. On each page, text should be readable — ideally less than 280 characters or the length of a tweet. “Shorter is better,” Shishir shared.
  5. Accessibility should be baked in, not an afterthought. In addition to making sure all text is readable, remember to caption your videos. When designing, stay within your tools’ safe zone — where important text and graphics won’t get cut off — so that all readers can fully understand your story.

Shishir wears a blue shirt and virtually presents a slide titled “Drafting the narrative.”

Web Ecosystem Consultant Shishir Malani discusses Web Stories best practices.

You can watch both sessions in the Web Stories workshop video. You can also check out other sessions from the workshop, including a presentation from Forbes about how they use Web Stories in their content mix, and a Q&A session with Google Web Stories experts. For more tips and resources for creating compelling Web Stories, visit the Storytime section of the Google Web Creators YouTube channel.

“Hey Google, tell me a girl hero story”

Today is International Day of the Girl, a chance to recognize the 1.1 billion girls who are changing the world — as well as acknowledge the challenges they face. This year’s theme is “Digital Inclusion for Girls,” and to me it means focusing on our responsibility at Google to make sure our technologies create opportunities for girls in the digital space. And that includes the ongoing work we do on Google Assistant to challenge gender stereotypes.

This effort was very personal to me – my son is five and I see him developing his views on people and differences between them every day. It matters deeply to me that he sees girls, and people from all over the world, as heroic, fierce, smart and successful.

I’m really excited that he can listen to some of the new inspiring stories of girls and women heroes available on Google Assistant. These stories feature girls and women from diverse backgrounds, push back against traditional tropes, and focus on problem-solving and leadership.

Just say “Hey Google, tell me a girl hero story” to select from a list of more than 25 new nonfiction and fiction stories from Capstone and The English Schoolhouse, developed by Earplay, on your Assistant-enabled phones and smart displays, like Nest Hub.

Image of the story “Marielle's Sweet Shop” from The English Schoolhouse

Some of my favorites include “My Sister, Daisy,” which unpacks the relationship of siblings and gender identity; “Aunt Bunny,” a heartwarming story highlighting the importance of community, family and friendship; and “Marielle's Sweet Shop,” a story about a mom and daughter entrepreneur duo.

I also love learning more about personal heroes of mine through stories like “Amelia Earhart Builds a Roller Coaster” or “Wilma Rudolph Plays Basketball.”

Take a listen today, and learn more about the girls and women you think of as heroes — or maybe learn about a new one.

Cooking up a new approach to Mennonite food

When Googler Jo Snyder was 20 years old, she left her family crop farm outside of Kitchener-Waterloo, Ontario, with a backpack, a guitar and a bike. She got on a train, traveled 30 hours to Winnipeg, Manitoba, enrolled in university and later started a punk rock band. She was the first one in her “pretty big, close and liberal Mennonite family” to move away (and is certainly the first punk rocker among them), but the values of her upbringing stayed with her. She still values community, kindness, and, well, food.

Many Mennonites are farmers, and traditionally their diets rely heavily on meat, eggs, dairy and seasonal produce. One recipe book — “The Mennonite Community Cookbook” — has been called the “grandmother” of all Mennonite cookbooks and has taken residence in Mennonite kitchens for generations. First printed in 1950, it’s a collection of 1,400 recipes from Mennonite communities across the U.S. and Canada compiled by its author, Mary Emma Showalter.

Jo’s Grandma Lena, who grew up in Floradale, a rural community in Southwestern Ontario, and was raised an Old Order Mennonite, was one of many to own “The Mennonite Community Cookbook.” After she died, Jo — who has been a vegetarian for 25 years and vegan for the last 10 years — inherited her well-worn copy. In 2018, flipping through the book, she was inspired to make plant-based versions of the recipes her family loved growing up.

“I wanted to remember and pay tribute to my grandmothers but I wanted to do it my way,” Jo says. “I wanted to take the things that are beautiful about a community cookbook with traditional recipes and local food and take it forward into a culture that could be thinking about a different way of eating.”

She spent two years hosting dinner parties and asking for feedback (“Too dry? Too salty? Do you even like it?”). Throughout, she kept a detailed spreadsheet and more than 100 Google Docs containing recipes she constantly tweaked. She shared the Docs with friends and family, asking them to attempt the recipes and see if they worked.

“I think the way I made and tested these recipes embodied the spirit of the book and the Mennonite community in a very important way,” Jo says. “I brought people together around my dinner table — often people I hadn’t seen for a while, or people who didn’t really know each other.”

After three years of testing and refining, “The Vegan Mennonite Kitchen” was published in March this year. Containing more than 80 recipes, including vegan versions of classic dishes like Fried Seitan Chick’n and the simply titled but rather ambitious “Ham,” the book also weaves in stories from her childhood in Southern Ontario.

“Grandma Lena would have been interested to see what I was doing and maybe would have corrected me a little bit,” Jo says, thinking of how her grandmothers would have received her book. “My grandmother Marjorie would have been delighted. She would have been very excited by the idea and flattered to see her recipes shared.”

Want to try one of Jo’s recipes? She suggests trying your hand at an old classic — Dutch Apple Pie.

Dutch Apple Pie recipe card, with the following recipe:

FOR THE PIE CRUST
2 cups cake and pastry flour
½ teaspoon salt
⅔ cup plant-based butter, room temperature
⅓ cup cold water

FOR THE FILLING AND THE CRUMBLE TOP
3 tablespoons all-purpose flour
1 cup brown sugar
1 teaspoon cinnamon
4 tablespoons plant-based butter
4 cups apples, cored, peeled and sliced (don’t go fancy with the apples; never use Red Delicious. Try a good old McIntosh)
4 tablespoons soy cream or soy milk for the top (Silk coffee creamer works best)

FOR THE PIE CRUST
In a mixing bowl, combine flour and salt. Cut the plant-based butter into the flour with a pastry blender or two knives. You want the mixture to be the size of peas. Add the water slowly, sprinkling 1 tablespoon at a time over the mixture. Toss lightly with a fork. This shouldn’t be sticky or wet at all, but everything should be dampened. Use only enough water so that the pastry holds together when pressed between your fingers. Form the dough into a round ball with your hands. Some say you shouldn’t handle pie crust dough too much, but I find if I don’t really get in there with my hands and make a nice dough ball, then it won’t roll out as nicely. So, get in there and make sure it’s mixed well. On a lightly floured surface, roll out into a circle about ⅛ inch thick and about 1 inch larger than the diameter of the top of your pie plate. You want it to hang over the sides when you place it gently on top so that when you press it into the pan you have full coverage. Don’t bake the pie shell first if you’re making a Dutch Apple Pie, but if you want to make a pastry shell for a different dessert, bake it at 450°F for 12–15 minutes or until it’s golden brown.

FOR THE FILLING AND THE CRUMBLE TOP
Combine the flour, sugar and cinnamon in a bowl, then cut in the butter with a pastry blender, two knives, a fork, or just get in there with your hands and crumble it up. You don’t want it to be a paste, though, so take it easy when you mix. Put the peeled and sliced apples into the unbaked pie shell. Pinch a little salt and squeeze about a tablespoon of lemon on top. Then pile the crumb mixture on top, but don’t press it down too tight. Make sure there’s enough to generously cover everything. Pour the cream evenly over the top of the crumble. You want it to seep down into the apple mixture. Bake at 375°F for 35 minutes or until it looks and smells good. You want the apples to be soft and the top and inside to be gooey. It should have a creamy, rich taste and feel.

Hands down my favorite pie. It’s sweet and creamy. When I was a kid I worked at the St. Jacobs Farmers’ Market stall for the Stone Crock Bakery and we used to make giant trays of these and cut them into big squares. Every Saturday morning I would waffle between a tea ball sprinkled with cinnamon sugar, a warm veggie cheese bun and one of these delicious squares. The Dutch Apple Square almost always won. Here is the original, but plant-based, pie version. - Jo

A Matter of Impact: September updates from Google.org

Jacquelline’s Corner

The pandemic laid bare existing inequalities across gender, race, class and country lines. And at the same time, other disasters — like hurricanes, wildfires and earthquakes — continue to affect people globally and strain already tight resources. To have the greatest impact, we rely on strong relationships with nonprofit organizations around the world that are working on disaster preparedness, relief and recovery — like the Center for Disaster Philanthropy and GiveDirectly, which you’ll hear more about below. We learn about their needs and search for where our philanthropic capital — coupled with technology, data and an eye toward equity — can help make the biggest difference.

But we’re also asking ourselves this: what if cities and organizations could predict disasters and be better prepared with resources before they even happen? With a changing climate, we know there’s more to do in advance of crises to mitigate loss of lives and livelihoods. That’s why we’re betting more and more on the role that technologies like AI and machine learning can play in generating the data we need to be better informed and prepared ahead of disasters.

Last year, our grantees provided 6.9 million people around the world with crisis relief support and resources for long-term recovery. An additional 2.8 million people were better prepared with resources and supplies, and nearly three-quarters of our grantees are developing tools to improve the availability of information during a crisis. Together, we can ensure that those who are most vulnerable during a crisis are more protected — before, during and after it hits.

In case you missed it 

Kent Walker, Google SVP for Global Affairs, recently announced a $1.5M grant to the United Nations Office for the Coordination of Humanitarian Affairs’ (UN OCHA) Centre for Humanitarian Data. The grant will go toward supporting their “Anticipatory Action” work, which focuses on developing forecasting models to anticipate humanitarian crises and trigger earlier, smarter action before conditions worsen.

Hear from one of our grantees: Center for Disaster Philanthropy

Regine A. Webster is the founding executive director and vice president of the Center for Disaster Philanthropy (CDP), an organization that seeks to strengthen the ability of communities to withstand disasters and recover equitably when they occur. Since 2010, CDP has provided donors with timely and effective strategies to increase their disaster giving impact, and they work to amp up philanthropy’s game when it comes to disaster and humanitarian assistance giving.

A woman with a short bob haircut smiling at the camera.

Regine A. Webster, founding executive director and vice president of the Center for Disaster Philanthropy (CDP).

“In 2020 alone, with support from Google.org and other donors, the Center for Disaster Philanthropy (CDP) awarded $29.9 million to 173 organizations. These grants helped communities in 50 countries, including the entire United States and its territories, respond to COVID-19, hurricanes, typhoons, wildfires, flooding, earthquakes, complex humanitarian emergencies and other disasters. The partnership we have with Google.org allows CDP to implement what we know to be effective disaster grantmaking around the globe. Perhaps more importantly though, philanthropic funding from Google.org and Googlers gives our expert team the freedom to test the way that race and power play themselves out in the disaster recovery context as we award grants to historically marginalized populations. We can test our assumptions on how to direct dollars to lift up Black, Indigenous and other communities of color and the organizations that serve them even in the face of disaster adversity, and how to work with disaster-serving organizations worldwide. We seek to inform the future of disaster philanthropy.”

A few words with a Google.org Fellow: GiveDirectly

Growing up in a family of community advocates, I developed both a strong sense of social justice and an interest in using my skills for the community. This made me excited to work with GiveDirectly for my Google.org Fellowship. GiveDirectly gives no-strings-attached cash to its recipients, empowering people to improve their lives.

A man standing in front of greenery, smiling at the camera.

Janak Ramakrishnan is a software engineer and Google.org Fellow.

“The fellowship focused on expanding GiveDirectly’s work into a new sector: Americans affected by hurricanes. With climate change, hurricanes are becoming deadlier and more frequent. In their aftermath, aid organizations struggle to get help to affected areas; cash aid can be a great option in those cases. We worked with GiveDirectly to use Google’s expertise in big data and mapping, like Google Earth Engine, on this problem. Our project helped GiveDirectly quickly identify the most affected people right after a disaster hits, when every hour counts.”

15 milestones, moments and more for Google Docs’ 15th birthday

In 2005, an easy-to-use, online word processor called Writely launched. A year later, the collaborative writing tool became part of Google, and over time it evolved into Google Docs. Officially launched to the world in 2006, Google Docs is a core part of Google Workspace. It’s also, as of today, 15 years old. But it wasn’t always so obvious how useful — and loved — Docs would become.

Jen Mazzon was part of the original Docs team, or the Google Writely Team as it was then called. “Everyone told us it was crazy to try and give people a way to access their documents from anywhere — not to mention share documents instantly, or collaborate online within their browser,” she wrote in a March 2006 blog post. “But that's exactly what we did.”

As a much-deserved gift to Docs, here are 15 things about Google Docs that we’re celebrating — from important moments to tips and tricks, there’s a lot to love.

  1. In 2010, Docs got its first big update, adding things like the ability to see others editing and writing in shared documents and better importing features.
  2. Internally, the Docs team has breakfast-themed names for the widgets you see when you edit in Docs. For instance, the yellow messages up at the top are called "butter," and the dialogs that pop out from the bottom right corner are called "toasts" because they pop out of a corner just like toast popping out from an upright toaster. The red error message at the top? That's “ketchup.”
  3. When COVID-19 sent students and educators home, we shared ways they could make use of features like offline Docs and real-time commenting to keep learning and collaboration going remotely.
  4. Lizzo and Sad13 used Google Docs to write music together, and they let us in on their creative process.
  5. There was that time when none other than the Reading Rainbow team designed a book report template for Docs, which you can still use today.
  6. Laura Mae Martin, Google’s Chief Productivity Advisor, always knows the best ways to get the most out of Docs. She shares her tips and tricks regularly on her YouTube channel.
  7. In 2018, the Docs team came up with an Easter egg: Typing #blackhistorymonth into a Doc would trigger Explore in your doc, with information about Black history and the Black community.
Animated GIF of a Google Doc with the words "#blackhistorymonth" on the page. The Explore panel then pops out to surface more information about Black history.

8. Here’s a tip: If you click the “+” icon on the right-hand side panel of the page, you’ll find add-ons — from there, select the hamburger menu (the three lines) and check out Editor’s Choice or Top charts for helpful recommendations.

Screenshot of the right side panel of a Google Doc showing the plus sign icon.

9. Over the years, Docs has become a crucial creative asset for writers of all kinds. Author Viviana Rivero even uses Google Docs to tell stories that people read in real time, as she writes.

10. The Google Workspace team has thought a lot about how to make the most of its tools for hybrid work, including Docs. Learn more in the Google Workspace Guide to Productivity and Wellbeing, which includes tips about how you can make the best use of your time working from home — while also making time for yourself.

11. This past May, the Google Workspace team launched smart canvas — which, among other things, lets you @ mention people in Docs, add checklists and use templates. Soon you’ll also start to see Docs suggesting more inclusive language as you write and edit.

Image showing a screenshot of a Google Doc with an open Doc that says "document review" at the top. A dialog pop up hovers over part of the page with inclusive language suggestions.

12. Thanks to new features like Smart Compose and Smart Reply — made possible by machine learning and artificial intelligence — Docs has become a stronger collaboration tool for the more than three billion users who rely on Google Workspace.

13. Anyone who’s ever worked on a group Doc knows the upper right-hand corner can sometimes populate with Anonymous Animals — so in 2019, we partnered with the World Wildlife Fund to raise awareness about animals we hope don’t become anonymous.

14. We saw the New York Times share how its staff turned to Google Docs during the pandemic to keep journalists and readers connected. They’ve used Docs to celebrate everyday victories, discover music and recommend movies. As a result of COVID-19 and quarantine, we also saw people use Docs to create virtual escape rooms and organize mutual aid efforts.

15. Over here on the Keyword team, we’re big Docs users: Everything you read on this very blog starts in a Doc — including our weekly newsletter, which we launched last year. And fittingly, this very post.

Happy birthday, Google Docs; we literally couldn’t do it without you.

Make Google TV more you with personalized profiles

If you're like me, you live in a house where each person has their own taste in entertainment. Everyone brings their likes, dislikes and “no WAYs” to the sofa. To keep everyone in the house happy -- whether that's your partner, kids, other family members or roommates -- we’re bringing features to Google TV that will make TV a little more tailored for whoever’s sitting on the couch.

TV screen with 4 heads showing different profiles in circles.

Set up a personalized profile.

Create a space for your own tastes

Google TV profiles let everyone in your home enjoy their own personalized space with their Google Account. With a personalized profile, you’ll get TV show and movie recommendations just for you, easy access to your personal watchlist and help from your Google Assistant.

Recommendations tailored to you, and only you: As you watch TV, your profile takes into account your interests and preferences to help you discover more of what’s out there for you. And for the little ones, you can always set up a kids profile to help them access a fun collection of movies and shows under your guidance.

Access to your own watchlist: When a friend tips you off to a hot new show, you can always add it to your watchlist to save it for later. Each Google Account has its own watchlist, so your finds will show up right in your profile and stay separate from others’ lists in your household.

Help from your Google Assistant: Ask for recommendations by saying, “What should I watch?” or get help streamlining your day by saying “Show me my day.” Your profile is linked to your account’s Google Assistant, so you’ll get the personalized answers you are looking for.

And setting up a new profile is easy. Your downloaded apps and app login details will be used across profiles, so you won’t have to start from scratch each time you set up a new profile.

TV screen showing a photo of three kids as the background, with a rectangular overlay naming the location.

Stay up to date at a glance in ambient mode.

Stay up to date even when your TV is idle

Google TV already lets you see your favorite memories from Google Photos when your TV is idle. Now, we're making ambient mode more useful by bringing in more personalized information and recommendations at a glance. From the latest game scores to the weather, news and more, your TV will keep you up to date with info based on your profile. You can even scroll through the on-screen shortcuts to jump into your photos or start playing your music and podcasts with just a click. If you are off for a longer break, your TV will shift fully to your ambient mode’s photos or curated artwork after a few minutes.

Pick your favorite live TV service

With Google TV’s Live and For you tabs, your favorite live shows are just a click away. To give you more live TV provider options, we’ve now integrated Philo into our live TV features, in addition to YouTube TV and SLING TV. To see your shows on the Live TV tab and in your recommendations, just add your Live TV provider.

Support for profiles and glanceable cards in ambient mode will begin rolling out soon on Chromecast with Google TV and on Google TVs from Sony and TCL. Profiles will be available globally, while ambient mode cards will first be available in the U.S. only. Philo is now available as an integrated live TV provider in the U.S. We hope all of this makes your TV feel a little more you.

Some features and availability may vary by OEM and/or device manufacturer.

Fitbit Premium launches StrongWill with Will Smith

Will Smith recently joined the Fitbit family and made his public commitment to improve every aspect of his health and wellness. The rapper and actor is creating and curating a Fitbit Premium-exclusive collection of whole-health guidance. Six sweat-inducing, endorphin-boosting workouts and mindfulness sessions in the Will Smith: StrongWill curriculum are now available. From room-shaking workouts to smooth stress-relief techniques, Premium members can now virtually work with Will and his trainers to get their minds and bodies strong. And yes, Big Will-isms do abound!

Premium gets a lot more fresh

Fitbit is inviting you to join in prioritizing your holistic health -- from physical fitness to better sleep habits and maintaining mental wellness. Will’s own drive to get in the best shape of his life was inspired by a desire to improve his overall wellbeing, and the StrongWill collection focuses on both the physical and mental aspects of strength.

As he mentioned on social media, “I spent countless days grazing on snacks and didn’t feel my best physically. I love my body, so I want to get my overall health and wellness back on track. To me, being in the ‘best shape of my life’ really means taking better care of my body.”

No matter where you are in your wellness pursuit, the hardest part is getting started. Fitbit and Will are helping Premium members reach their goals with an approachable curriculum that fits into your life, enhances your routine and brings calorie-burning moves, form modifications, guided mindfulness and plenty of jokes from Will to keep the energy levels high.

Get motivated with Will’s trainers and join in a variety of guided sessions, exclusively within Fitbit Premium – no equipment or gym required:

  • Bodyweight Strength: Will Smith is no stranger to lifting weights, but sometimes even the Fresh Prince can’t make it to the gym. Join trainer Roz the Diva to learn strength building techniques you can do without much equipment and explore one of Will’s favorite exercises.
  • Core Challenge: To achieve the freshest fitness goals, start with your core, since that’s where peak performance is “born and raised.” Join trainer Jahdy to explore Will’s favorite techniques to strengthen, engage and stretch your core.
  • Find Your Center: Will Smith always makes this clear: when training your body, it’s just as important to train your mind. Join trainer Faith Hunter on a mindful look inward to hone your mental fitness with deep breathing exercises and meditation.
  • Mobility Flow Yoga: Yoga has been key to Will’s fitness success. Follow trainer Hiro Landazuri through a progressive mobility yoga flow to work your dexterity, flexibility and stability. Namaste!
  • Let’s Go Cardio!: In this workout, trainer Maya Monza takes you from warmup, through 10 cardio-intensive exercises, to cool down without skipping a beat. Have some water and a towel ready, because elevating your heart rate and breaking a sweat is what it’s all about. Like Will says, “Let’s turn this furnace on!”
  • Upper Body HIIT: The faster the better: Trainer Bianca G delivers a high-energy, high-intensity interval training workout focused on the upper body and core to burn calories, gain endurance and build muscle fast.

While he has been working on his wellness, Will is beginning to wear Fitbit’s newest and most advanced tracker, Charge 5, which complements his regimen, reminds him to keep moving and also helps him to manage stress and his mental wellbeing.

Will Smith does bicep curls with two dumbbells in a gym, wearing a white t-shirt and black shorts.

Finishing “strong”

As we enter the last stretch of 2021, we all want to finish strong and establish better habits for the new year -- but what does that truly mean? Being strong looks different for everybody, whether strong in body, mind or heart. Take it from Will. “I’ve really come to understand that strength is so much more than a physical ability, it’s mental and emotional too,” Will says. “It’s not just about how many abs you have or how big your biceps are, but can you push yourself to try new things, get better at the work you’re already doing and stick with it? That requires a different kind of strength.”

You don’t have to do it alone -- grab a friend, family member or neighbor and get started together. Will’s community has already been sharing their own inspiring stories with him.

You can peek into the strides Will is making to round out the year with better health in YouTube Originals’ new unscripted series, “Best Shape of My Life,” premiering Monday, November 8 on his YouTube channel. This emotionally packed five-day event from Westbrook Media peels back the curtain on what makes Will Smith truly tick as he is pushed to his limits and questions the very behaviors that have led to his success -- and it’s ultimately through this search that his healing can begin.

Also in November, Fitbit is sponsoring the five-city tour of Will’s memoir, “Will,” in support of his path toward wellness and the many “steps” that go into an international book tour. The “Will” memoir shines a light on his path to understanding where outer success, inner happiness and human connection are aligned.

Follow Fitbit on social for more updates on Will’s wellness efforts and stay tuned for more StrongWill collections coming to Fitbit Premium in 2022 for truly “fresh” ways to continue growing stronger.

For more inspiration and guidance about fitness, nutrition, health and wellness, read more on the Fitbit blog.