Fast Pair makes it easier to use your Bluetooth headphones

Bluetooth headphones help us take calls, listen to music while working out, and use our phones anywhere without getting tangled up in wires. And though pairing Bluetooth accessories is an increasingly common activity, it can be a frustrating process for many people.

Fast Pair makes Bluetooth pairing easier on Android 6.0+ phones (learn how to check your Android version). When you turn on your Fast Pair-enabled accessory, it automatically detects and pairs with your Android phone in a single tap. So far, there have been over three million Fast Pairings between Bluetooth accessories, like speakers and earbuds, and Android phones. Here are some new capabilities that make the Fast Pair experience even easier.
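
Under the hood, Fast Pair accessories announce themselves over Bluetooth Low Energy using the service UUID assigned to Google’s Fast Pair Service (0xFE2C). As a rough sketch only (not product code, and assuming an app already holds the standard Android Bluetooth scan permissions), here is how such advertisements could be observed:

```kotlin
import android.bluetooth.BluetoothAdapter
import android.bluetooth.le.ScanCallback
import android.bluetooth.le.ScanFilter
import android.bluetooth.le.ScanResult
import android.bluetooth.le.ScanSettings
import android.os.ParcelUuid

// 0xFE2C is the 16-bit Fast Pair service UUID, expanded here to the
// full 128-bit Bluetooth base UUID.
val FAST_PAIR_SERVICE: ParcelUuid =
    ParcelUuid.fromString("0000fe2c-0000-1000-8000-00805f9b34fb")

// Requires Bluetooth scan permissions; error handling omitted for brevity.
fun startFastPairScan(adapter: BluetoothAdapter) {
    val filter = ScanFilter.Builder()
        .setServiceUuid(FAST_PAIR_SERVICE)   // only Fast Pair advertisements
        .build()
    val settings = ScanSettings.Builder()
        .setScanMode(ScanSettings.SCAN_MODE_LOW_LATENCY)
        .build()
    adapter.bluetoothLeScanner.startScan(listOf(filter), settings, object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            // The service data carries the accessory's model ID, which the
            // platform resolves to a device name and image for the one-tap
            // pairing sheet.
            val data = result.scanRecord?.getServiceData(FAST_PAIR_SERVICE)
            println("Fast Pair device ${result.device.address}: ${data?.size ?: 0} bytes of service data")
        }
    })
}
```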

Easily find your lost accessory

It can be frustrating to put your Bluetooth headphones down and immediately forget where you placed them. If they’re connected to your phone, you can locate them by ringing them. If you have true wireless earbuds (earbuds that aren’t attached by cables or wires), you can choose to ring only the left or right bud. And in the coming months, if you misplace your headphones, you’ll be able to check their last known location in the Find My Device app, as long as you have Location History turned on.


Know when to charge your true wireless earbuds

When you open the case of your true wireless earbuds, you’ll receive a phone notification showing the battery level of each component (right bud, left bud, and the case itself, if supported). You’ll also receive a notification when your earbuds’ or case’s battery is running low, so you know when to charge them.
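
Those per-component levels travel in a battery extension of the accessory’s BLE advertisement. A minimal decoding sketch, assuming the layout in the published Fast Pair battery-notification spec (one byte per component: a charging bit plus a 0–100 level, with 0x7F meaning unknown):

```kotlin
// One advertised byte per component (buds and case). Layout assumed from the
// published Fast Pair battery-notification extension: bit 7 = charging flag,
// bits 0-6 = level 0-100, 0x7F = "unknown". Verify exact offsets and
// component order against the spec before relying on them.
data class BatteryStatus(val percent: Int?, val charging: Boolean)

fun parseBatteryByte(b: Byte): BatteryStatus {
    val raw = b.toInt() and 0xFF
    val charging = (raw and 0x80) != 0
    val level = raw and 0x7F
    return BatteryStatus(if (level == 0x7F) null else level, charging)
}

// e.g. parseBatteryValues(byteArrayOf(0xD9.toByte(), 0x59, 0x7F))
//  -> [89% and charging, 89% not charging, level unknown]
fun parseBatteryValues(bytes: ByteArray): List<BatteryStatus> =
    bytes.map(::parseBatteryByte)
```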


Manage and personalize your accessory easily

To personalize your headset or speakers, your accessory’s name will include your first name after it successfully pairs over Bluetooth. For example, Pixel Buds will be renamed “Alex’s Pixel Buds.”


On phones running Android 10, you can also adjust headphone settings, like linking them to Google Assistant and accessing Find My Device, right from the device details page. The available settings vary depending on your headphone model.


Harman Kardon FLY and the new Google Pixel Buds will be the first true wireless earbuds to enjoy all of these new features, with many others to come. We’ll continue to work with our partners to bring Fast Pair to more headset models. Learn how to connect your Fast Pair accessory here.

Google at ICLR 2020



This week marks the beginning of the 8th International Conference on Learning Representations (ICLR 2020), a fully virtual conference focused on how one can learn meaningful and useful representations of data for machine learning. ICLR offers conference and workshop tracks, both of which include invited talks along with oral and poster presentations of some of the latest research on deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction and issues regarding non-convex optimization.

As a Diamond Sponsor of ICLR 2020, Google will have a strong virtual presence with over 80 publications accepted, in addition to serving on organizing committees and participating in workshops. If you have registered for ICLR 2020, we hope you'll watch our talks and learn about the projects and opportunities at Google that go into solving interesting problems for billions of people. You can also learn more about our research being presented at ICLR 2020 in the list below.

Officers and Board Members
Includes: Hugo Larochelle, Samy Bengio, Tara Sainath

Organizing Committee
Includes: Kevin Swersky, Timnit Gebru

Area Chairs
Includes: Balaji Lakshminarayanan, Been Kim, Chelsea Finn, Dale Schuurmans, George Tucker, Honglak Lee, Hossein Mobahi, Jasper Snoek, Justin Gilmer, Katherine Heller, Manaal Faruqui, Michael Ryoo, Nicolas Le Roux, Sanmi Koyejo, Sergey Levine, Tara Sainath, Yann Dauphin, Anders Søgaard, David Duvenaud, Jamie Morgenstern, Qiang Liu

Publications
SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference (see the blog post)
Lasse Espeholt, Raphaël Marinier, Piotr Stanczyk, Ke Wang, Marcin Michalski‎

Differentiable Reasoning Over a Virtual Knowledge Base
Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, William W. Cohen

Dynamics-Aware Unsupervised Discovery of Skills
Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, Karol Hausman

GenDICE: Generalized Offline Estimation of Stationary Values
Ruiyi Zhang, Bo Dai, Lihong Li, Dale Schuurmans

Mathematical Reasoning in Latent Space
Dennis Lee, Christian Szegedy, Markus N. Rabe, Kshitij Bansal, Sarah M. Loos

Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One
Will Grathwohl, Kuan-Chieh Wang, Jorn-Henrik Jacobsen, David Duvenaud, Kevin Swersky, Mohammad Norouzi

Adjustable Real-time Style Transfer
Mohammad Babaeizadeh, Golnaz Ghiasi

Are Transformers Universal Approximators of Sequence-to-sequence Functions?
Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank J. Reddi, Sanjiv Kumar

AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures
Michael S. Ryoo, AJ Piergiovanni, Mingxing Tan, Anelia Angelova

AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, Balaji Lakshminarayanan

BatchEnsemble: an Alternative Approach to Efficient Ensemble and Lifelong Learning
Yeming Wen, Dustin Tran, Jimmy Ba

Black-box Off-policy Estimation for Infinite-Horizon Reinforcement Learning (see the blog post)
Ali Mousavi, Lihong Li, Qiang Liu, Dengyong Zhou

Can Gradient Clipping Mitigate Label Noise?
Aditya Krishna Menon, Ankit Singh Rawat, Sashank J. Reddi, Sanjiv Kumar

CAQL: Continuous Action Q-Learning
Moonkyung Ryu, Yinlam Chow, Ross Anderson, Christian Tjandraatmadja, Craig Boutilier

Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation
Byung Hoon Ahn, Prannoy Pilligundla, Amir Yazdanbakhsh, Hadi Esmaeilzadeh

Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization
Satrajit Chatterjee

Consistency Regularization for Generative Adversarial Networks
Han Zhang, Zizhao Zhang, Augustus Odena, Honglak Lee

Contrastive Representation Distillation
Yonglong Tian, Dilip Krishnan, Phillip Isola

Deep Audio Priors Emerge from Harmonic Convolutional Networks
Zhoutong Zhang, Yunyun Wang, Chuang Gan, Jiajun Wu, Joshua B. Tenenbaum, Antonio Torralba, William T. Freeman

Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions
Yao Qin, Nicholas Frosst, Sara Sabour, Colin Raffel, Garrison Cottrell, Geoffrey Hinton

Detecting Extrapolation with Local Ensembles
David Madras, James Atwood, Alexander D'Amour

Disentangling Factors of Variations Using Few Labels
Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Distance-Based Learning from Errors for Confidence Calibration
Chen Xing, Sercan Ö. Arik, Zizhao Zhang, Tomas Pfister

ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators (see the blog post)
Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning

ES-MAML: Simple Hessian-Free Meta Learning (see the blog post)
Xingyou Song, Yuxiang Yang, Krzysztof Choromanski, Aldo Pacchiano, Wenbo Gao, Yunhao Tang

Exploration in Reinforcement Learning with Deep Covering Options
Yuu Jinnai, Jee Won Park, Marlos C. Machado, George Konidaris

Extreme Tensoring for Low-Memory Preconditioning
Xinyi Chen, Naman Agarwal, Elad Hazan, Cyril Zhang, Yi Zhang

Fantastic Generalization Measures and Where to Find Them
Yiding Jiang, Behnam Neyshabur, Hossein Mobahi, Dilip Krishnan, Samy Bengio

Generalization Bounds for Deep Convolutional Neural Networks
Philip M. Long, Hanie Sedghi

Generalized Convolutional Forest Networks for Domain Generalization and Visual Recognition
Jongbin Ryu, GiTaek Kwon, Ming-Hsuan Yang, Jongwoo Lim

Generative Models for Effective ML on Private, Decentralized Datasets
Sean Augenstein, H. Brendan McMahan, Daniel Ramage, Swaroop Ramaswamy, Peter Kairouz, Mingqing Chen, Rajiv Mathews, Blaise Aguera y Arcas

Generative Ratio Matching Networks
Akash Srivastava, Kai Xu, Michael U. Gutmann, Charles Sutton

Global Relational Models of Source Code
Vincent J. Hellendoorn, Petros Maniatis, Rishabh Singh, Charles Sutton, David Bieber

Hierarchical Foresight: Self-Supervised Learning of Long-Horizon Tasks via Visual Subgoal Generation
Suraj Nair, Chelsea Finn

Identity Crisis: Memorization and Generalization Under Extreme Overparameterization
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Michael C. Mozer, Yoram Singer

Imitation Learning via Off-Policy Distribution Matching
Ilya Kostrikov, Ofir Nachum, Jonathan Tompson

Language GANs Falling Short
Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joëlle Pineau, Laurent Charlin

Large Batch Optimization for Deep Learning: Training BERT in 76 Minutes
Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, Cho-Jui Hsieh

Learning Execution through Neural Code Fusion
Zhan Shi, Kevin Swersky, Daniel Tarlow, Parthasarathy Ranganathan, Milad Hashemi

Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning
Gil Lederman, Markus N. Rabe, Edward A. Lee, Sanjit A. Seshia

Learning to Learn by Zeroth-Order Oracle
Yangjun Ruan, Yuanhao Xiong, Sashank Reddi, Sanjiv Kumar, Cho-Jui Hsieh

Learning to Represent Programs with Property Signatures
Augustus Odena, Charles Sutton

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang

Measuring Compositional Generalization: A Comprehensive Method on Realistic Data
Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, Olivier Bousquet

Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies
Sungryull Sohn, Hyunjae Woo, Jongwook Choi, Honglak Lee

Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples
Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, Hugo Larochelle

Model-based Reinforcement Learning for Biological Sequence Design
Christof Angermueller, David Dohan, David Belanger, Ramya Deshpande, Kevin Murphy, Lucy Colwell

Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning
Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee

Observational Overfitting in Reinforcement Learning
Xingyou Song, Yiding Jiang, Stephen Tu, Behnam Neyshabur, Yilun Du

On Bonus-based Exploration Methods In The Arcade Learning Environment
Adrien Ali Taiga, William Fedus, Marlos C. Machado, Aaron Courville, Marc G. Bellemare

On Identifiability in Transformers
Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, Roger Wattenhofer

On Mutual Information Maximization for Representation Learning
Michael Tschannen, Josip Djolonga, Paul K. Rubenstein, Sylvain Gelly, Mario Lucic

On the Global Convergence of Training Deep Linear ResNets
Difan Zou, Philip M. Long, Quanquan Gu

Phase Transitions for the Information Bottleneck in Representation Learning
Tailin Wu, Ian Fischer

Pre-training Tasks for Embedding-based Large-scale Retrieval
Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, Sanjiv Kumar

Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control
Nir Levine, Yinlam Chow, Rui Shu, Ang Li, Mohammad Ghavamzadeh, Hung Bui

Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks
Wei Hu, Lechao Xiao, Jeffrey Pennington

Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML
Aniruddh Raghu, Maithra Raghu, Samy Bengio, Oriol Vinyals

Reinforced Genetic Algorithm Learning for Optimizing Computation Graphs
Aditya Paliwal, Felix Gimeno, Vinod Nair, Yujia Li, Miles Lubin, Pushmeet Kohli, Oriol Vinyals

ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring
David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, Colin Raffel, Kihyuk Sohn

Scalable Model Compression by Entropy Penalized Reparameterization
Deniz Oktay, Johannes Ballé, Saurabh Singh, Abhinav Shrivastava

Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base
William W. Cohen, Haitian Sun, R. Alex Hofer, Matthew Siegler

Semi-Supervised Generative Modeling for Controllable Speech Synthesis
Raza Habib, Soroosh Mariooryad, Matt Shannon, Eric Battenberg, RJ Skerry-Ryan, Daisy Stanton, David Kao, Tom Bagby

Span Recovery for Deep Neural Networks with Applications to Input Obfuscation
Rajesh Jayaram, David Woodruff, Qiuyi Zhang

Thieves on Sesame Street! Model Extraction of BERT-based APIs
Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, Mohit Iyyer

Thinking While Moving: Deep Reinforcement Learning with Concurrent Control
Ted Xiao, Eric Jang, Dmitry Kalashnikov, Sergey Levine, Julian Ibarz, Karol Hausman, Alexander Herzog

VideoFlow: A Conditional Flow-Based Model for Stochastic Video Generation
Manoj Kumar, Mohammad Babaeizadeh, Dumitru Erhan, Chelsea Finn, Sergey Levine, Laurent Dinh, Durk Kingma

Watch, Try, Learn: Meta-Learning from Demonstrations and Rewards
Allan Zhou, Eric Jang, Daniel Kappler, Alex Herzog, Mohi Khansari, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Sergey Levine, Chelsea Finn

Weakly Supervised Disentanglement with Guarantees
Rui Shu, Yining Chen, Abhishek Kumar, Stefano Ermon, Ben Poole

You Only Train Once: Loss-Conditional Training of Deep Networks
Alexey Dosovitskiy, Josip Djolonga

A Mutual Information Maximization Perspective of Language Representation Learning
Lingpeng Kong, Cyprien de Masson d’Autume, Wang Ling, Lei Yu, Zihang Dai, Dani Yogatama

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations (see the blog post)
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut

Asymptotics of Wide Networks from Feynman Diagrams
Ethan Dyer, Guy Gur-Ari

DDSP: Differentiable Digital Signal Processing
Jesse Engel, Lamtharn Hantrakul, Chenjie Gu, Adam Roberts

Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation
Ziyang Tang, Yihao Feng, Lihong Li, Dengyong Zhou, Qiang Liu

Dream to Control: Learning Behaviors by Latent Imagination (see the blog post)
Danijar Hafner, Timothy Lillicrap, Jimmy Ba, Mohammad Norouzi

Emergent Tool Use From Multi-Agent Autocurricula
Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, Igor Mordatch

Gradientless Descent: High-Dimensional Zeroth-Order Optimization
Daniel Golovin, John Karro, Greg Kochanski, Chansoo Lee, Xingyou Song, Qiuyi (Richard) Zhang

HOPPITY: Learning Graph Transformations to Detect and Fix Bugs in Programs
Elizabeth Dinella, Hanjun Dai, Ziyang Li, Mayur Naik, Le Song, Ke Wang

Learning to Plan in High Dimensions via Neural Exploration-Exploitation Trees
Binghong Chen, Bo Dai, Qinjie Lin, Guo Ye, Han Liu, Le Song

Model Based Reinforcement Learning for Atari (see the blog post)
Łukasz Kaiser, Mohammad Babaeizadeh, Piotr Miłos, Błazej Osinski, Roy H. Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George Tucker, Henryk Michalewski

Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension
Xinyun Chen, Chen Liang, Adams Wei Yu, Denny Zhou, Dawn Song, Quoc V. Le

SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models
Yucen Luo, Alex Beatson, Mohammad Norouzi, Jun Zhu, David Duvenaud, Ryan P. Adams, Ricky T. Q. Chen

Measuring the Reliability of Reinforcement Learning Algorithms
Stephanie C.Y. Chan, Samuel Fishman, John Canny, Anoop Korattikara, Sergio Guadarrama

Meta-Learning without Memorization
Mingzhang Yin, George Tucker, Mingyuan Zhou, Sergey Levine, Chelsea Finn

Neural Tangents: Fast and Easy Infinite Neural Networks in Python (see the blog post)
Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Sohl-Dickstein, Samuel S. Schoenholz

Scaling Autoregressive Video Models
Dirk Weissenborn, Oscar Täckström, Jakob Uszkoreit

The Intriguing Role of Module Criticality in the Generalization of Deep Networks
Niladri Chatterji, Behnam Neyshabur, Hanie Sedghi

Reformer: The Efficient Transformer (see the blog post)
Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya

Workshops
Computer Vision for Global Challenges
Organizing Committee: Ernest Mwebaze
Advisory Committee: Timnit Gebru, John Quinn

Practical ML for Developing Countries: Learning under limited/low resource scenarios
Organizing Committee: Nyalleng Moorosi, Timnit Gebru
Program Committee: Pablo Samuel Castro, Samy Bengio
Keynote Speaker: Karmel Allison

Tackling Climate Change with Machine Learning
Organizing Committee: Moustapha Cisse
Co-Organizer: Natasha Jaques
Program Committee: John C. Platt, Kevin McCloskey, Natasha Jaques
Advisor and Panel: John C. Platt

Towards Trustworthy ML: Rethinking Security and Privacy for ML
Organizing Committee: Nicholas Carlini, Nicolas Papernot
Program Committee: Shuang Song

Source: Google AI Blog


Healthcare AI systems that put people at the center

Over the past four years, Google has advanced its AI technologies to address critical problems in healthcare. We’ve developed tools to detect eye disease, AI systems to identify cardiovascular risk factors and signs of anemia, and technology to improve breast cancer screening.

For these and other AI healthcare applications, the journey from initial research to useful product can take years. One part of that journey is conducting user-centered research. Applied to healthcare, this type of research means studying how care is delivered and how it benefits patients, so we can better understand how algorithms could help, or even inadvertently hinder, assessment and diagnosis.

Our research in practice

For our latest research paper, "A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for the Detection of Diabetic Retinopathy," we built on a partnership with the Ministry of Public Health in Thailand to conduct field research in clinics across the provinces of Pathum Thani and Chiang Mai. It’s one of the first published studies examining how a deep learning system is used in patient care, and it’s the first study of its kind that looks at how nurses use an AI system to screen patients for diabetic retinopathy. 

Over a period of eight months, we made regular visits to 11 clinics. At each clinic, we observed how diabetes nurses handle eye screenings, and we interviewed them to understand how to refine this technology. We did our field research alongside a study to evaluate the feasibility and performance of the deep learning system in the clinic, with patients who agreed to be carefully observed and medically supervised during the study. 

A nurse operates the fundus camera, taking images of a patient’s retina.

The observational process

In our research, we provide key recommendations for continued product development, along with guidance for other research projects on deploying AI in real-world scenarios.

Developing new products with a user-centered design process requires involving the people who would interact with the technology early in development. This means getting a deep understanding of people’s needs, expectations, values and preferences, and testing ideas and prototypes with them throughout the entire process. When it comes to AI systems in healthcare, we pay special attention to the healthcare environment, current workflows, system transparency, and trust.

The impact of environment on AI

In addition to these factors, our fieldwork found that we must also account for environmental differences, like lighting, which vary among clinics and can affect image quality. Just as an experienced clinician knows how to account for these variables when assessing an image, AI systems need to be trained to handle them as well.

For instance, some images captured in screening might have issues like blurs or dark areas. An AI system might conservatively call some of these images “ungradable” because the issues might obscure critical anatomical features that are required to provide a definitive result. For clinicians, the gradability of an image may vary depending on one’s own clinical set-up or experience. Building an AI tool to accommodate this spectrum is a challenge, as any disagreements between the system and the clinician can lead to frustration. In response to our observations, we amended the research protocol to have eye specialists review such ungradable images alongside the patient’s medical records, instead of automatically referring patients with ungradable images to an ophthalmologist. This helped to ensure a referral was necessary, and reduced unnecessary travel, missed work, and anxiety about receiving a possible false positive result. 
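
Schematically, the amended protocol changes only where ungradable images are routed. The types and names below are hypothetical, sketched from the description above rather than taken from the study’s actual system:

```kotlin
// Hypothetical types illustrating the amended study protocol described above;
// not the study's actual software.
enum class Grade { NO_REFERABLE_DR, REFERABLE_DR, UNGRADABLE }

enum class Disposition { NO_REFERRAL, REFER_TO_OPHTHALMOLOGIST, SPECIALIST_REVIEW_WITH_RECORDS }

fun route(grade: Grade): Disposition = when (grade) {
    Grade.NO_REFERABLE_DR -> Disposition.NO_REFERRAL
    Grade.REFERABLE_DR -> Disposition.REFER_TO_OPHTHALMOLOGIST
    // Before the amendment, ungradable images were referred automatically;
    // now an eye specialist reviews them alongside the patient's records
    // and refers only when a referral is actually necessary.
    Grade.UNGRADABLE -> Disposition.SPECIALIST_REVIEW_WITH_RECORDS
}
```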

Finally, alongside evaluating the performance, reliability, and clinical safety of an AI system, the study also accounts for the human impacts of integrating an AI system into patient care. For example, the study found that the AI system could empower nurses to confidently and immediately identify a positive screening, resulting in quicker referrals to an ophthalmologist.

So what does all of this mean? 

Deploying an AI system by considering a diverse set of perspectives in the design and development process is just one part of introducing new health technology that requires human interaction. It's important to also study and incorporate real-life evaluations in the clinic, and engage meaningfully with clinicians and patients, before the technology is widely deployed. That’s how we can best inform improvements to the technology, and how it is integrated into care, to meet the needs of clinicians and patients. 

Libraries are helping bridge the digital divide during COVID-19

It’s National Library Week, and though many libraries are closed due to COVID-19, they continue to work to serve their patrons and keep them connected to the larger world. To mark the week and honor the critical role libraries play every day, both during this crisis and in more normal times, we’re sharing a post from Jill Joplin, Executive Director of the DeKalb County Library Foundation. The DCLF provides support beyond tax dollars to DeKalb County Public Library in Georgia, and DCPL is just one of Google Fiber’s many library partners across the country working to help connect their communities during this time. For example, in San Antonio we’ve partnered with Libraries Without Borders to bring their Wash & Learn Initiative to local laundromats—and right now the WiFi has been extended to the parking lots so people can get online from the safety of their cars. In Nashville, Salt Lake City, Austin and other cities, we have provided longtime support for Digital Inclusion Fellows and digital literacy programs at public libraries.


At DeKalb County Public Library (DCPL), the Take the Internet Home with You initiative is one of the library’s most popular services, and in today’s COVID-19 environment it is also one of the most valuable. Normally, patrons can check out a WiFi hotspot for 21 days, and the devices are constantly checked out. Patrons wait by the front desk or call the library each day looking for returned hotspots. Our user data reveals that more than 50% of patrons who check out these devices do not have internet access at home.



Two of our regular patrons, who check out the devices as often as they can, were able to check out a device prior to the library’s closure due to COVID-19. The library is allowing patrons with the devices to keep them during the entirety of the closure and no late fines are being assessed. We checked in with them to see how they were using their devices. Joan is a retiree without home internet. She is very grateful to be able to keep the device she checked out during the library’s closure. She is staying in touch with her family and up to date with news and updates related to COVID-19. 

Our other patron, William, says what he once considered a pleasure — the ability to get online at home — is now a blessing. He has been able to file his unemployment paperwork online because he also had checked out a hotspot prior to the library closing. He also is keeping in touch with friends and enjoying streaming movies he wouldn’t be able to see without cable or an internet connection in his home. 

Although to many of us it seems like the entire world is virtually connected, in reality 10% of Americans don’t have access to the Internet—that number goes up to 30% for low-income Americans. A few years ago, staff at DeKalb County Public Library realized that patrons were sitting in the parking lot or on the steps of the building to use the library’s WiFi signal while the library was closed. Once we’d identified this need, DCPL began seeking funding to provide mobile hotspot devices for checkout.

Thanks to our partners at Google Fiber, Mailchimp, and New York Life, DCPL has been able to provide 200 hotspots to patrons across the library system. The library would not be able to offer this service without this funding from our partners.  While demand was always high for this initiative, with the economic impact of COVID-19, we anticipate it will be even more important in the future. We are proud we can support our patrons with this essential service. 

Posted by Jill Joplin, Executive Director,
DeKalb County Library Foundation




The science of why remote meetings don’t feel the same

As COVID-19 has pushed more teams to work remotely, many of us are turning to video calls. And if you’ve ever been on a video call and wondered why it doesn’t feel quite the same as an in-person conversation, we have something in common. As a researcher at Google, it’s my job to dig into the science behind remote communication. Here are a few things I’ve discovered along the way. 


#1: Milliseconds matter. 



As a species, we’re hardwired for the fast-paced exchange of in-person conversation. Humans have spent about 70,000 years learning to communicate face-to-face, but video conferencing is only about 100 years old. When the sound from someone’s mouth doesn’t reach your ears until a half second later, you notice, because we’re wired to avoid talking over each other while minimizing the silence between turns. A delay of five-tenths of a second (500 ms)—whether from laggy audio or fumbling for the unmute button—is more than double the gap we’re used to in person. These delays mess with the fundamental turn-taking mechanics of our conversations.
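
To make the arithmetic concrete, here is a tiny sketch using an illustrative 200 ms as the typical in-person inter-turn gap (an assumption consistent with the “more than double” claim above, not a measurement):

```kotlin
// Illustrative numbers only: in-person turn gaps are assumed to average
// roughly 200 ms, so 500 ms of added latency more than doubles the silence
// a listener sits through before hearing your reply.
const val NATURAL_GAP_MS = 200

fun perceivedGapMs(oneWayLatencyMs: Int): Int = NATURAL_GAP_MS + oneWayLatencyMs

fun main() {
    for (latency in listOf(0, 150, 300, 500)) {
        println("one-way latency ${latency} ms -> perceived gap ${perceivedGapMs(latency)} ms")
    }
}
```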

On your next video conference, pump the brakes on your speaking speed to avoid unintended interruptions. If it’s a smaller group, try staying unmuted to provide little bits of verbal feedback (“mmhmm,” “okay”) to show you’re actively listening. 

#2: Virtual hallway conversations boost group performance. 


At the office, my meetings usually start with some impromptu, informal small talk. We share personal tidbits that build rapport and empathy. Making time for personal connections in remote meetings not only feels good, it helps you work better together. Science shows that teams who periodically share personal information perform better than teams who don’t. And when leaders model this, it can boost team performance even more. 

Carve out time at the start of a meeting to catch up and set aside time to connect with colleagues over virtual coffee or lunch breaks.

#3: Visual cues make conversations smoother.



If you’re face-to-face with someone, you might notice they’ve leaned forward and invite them to jump into the conversation. Or, you might pick up on a sidelong glance in the audience while you’re giving a presentation, and pause to address a colleague’s confusion or skepticism. Research shows that on video calls where social cues are harder to see, we take 25 percent fewer speaking turns. 

But video calls have something email doesn’t: eye contact. We feel more comfortable talking when our listeners’ eyes are visible because we can read their emotions and attitudes. This is especially important when we need more certainty—like when we meet a new team member or listen to a complex idea.

Keep your camera on when you can, and resist the browser tabs competing for your attention.


#4: Distance can amplify team trust issues.


When things go wrong, remote teams are more likely to blame individuals rather than examine the situation, which hurts cohesion and performance. Different ways of working can be frustrating, but they’re important. Biological anthropologist Helen Fisher has shown that we can harness the “productive friction” of diverse work styles today, much as hunter-gatherers did 50,000 years ago when deciding whether a newly discovered plant was poisonous, medicinal or delicious.

Have an open conversation with your remote teammates about your preferred working styles and how you might complement each other. 


#5: Passing the talking stick makes remote teams smarter.



Conversations on calls are less dynamic, and the proverbial “talking stick” gets passed less often. That’s a big deal for remote teams, because sharing the floor more equally is a significant factor in what makes one group smarter than another. Computational social scientists like Alex ‘Sandy’ Pentland and Anita Woolley have shown that higher-performing groups aren’t made up of individuals with higher IQs, but of people who are more sensitive to emotions and share the floor more equally.
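
As an illustration of what “sharing the floor more equally” could mean in numbers (this is a sketch, not the researchers’ metric), you can score a meeting’s balance with the normalized entropy of per-speaker talk time, where 1.0 is perfectly even:

```kotlin
import kotlin.math.ln

// Normalized Shannon entropy of speaking-time shares: 1.0 = perfectly even,
// values near 0 = one person dominating the call.
fun floorShareEvenness(speakingSeconds: Map<String, Double>): Double {
    val total = speakingSeconds.values.sum()
    if (total <= 0.0 || speakingSeconds.size < 2) return 1.0
    val entropy = speakingSeconds.values
        .map { it / total }          // convert raw seconds to shares
        .filter { it > 0.0 }
        .sumOf { p -> -p * ln(p) }
    return entropy / ln(speakingSeconds.size.toDouble())
}

fun main() {
    val balanced = mapOf("Ana" to 300.0, "Ben" to 280.0, "Chi" to 320.0)
    val dominated = mapOf("Ana" to 800.0, "Ben" to 60.0, "Chi" to 40.0)
    println("balanced: %.2f, dominated: %.2f"
        .format(floorShareEvenness(balanced), floorShareEvenness(dominated)))
}
```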

Identify calls where conversational dynamics could be better. Encourage more balanced conversation, help quieter participants get their voices heard, and remind others to pass the talking stick.

If you’re interested in learning more about any of this science, you can check out my sources here.


Street View is helping this tour guide stay in business

On March 24, government restrictions due to COVID-19 went into effect across the United Kingdom. With nonessential businesses forced to close, public gatherings banned, and most people required to stay at home, these regulations instantly transformed daily life. They also presented a serious threat to Katie Wignall’s business: Katie makes her living as a tour guide, showing curious visitors the highlights of London.

But instead of trying to simply wait out the crisis, Katie looked to technology for a solution to creatively keep her business going. We chatted with her to find out how she’s successfully managed to take her walking tours virtual.

One of the beaver statues on Oxford Street

Describe your business, Look Up London.

I provide walking tours all over London for public and private groups. I’m a Blue Badge Tourist Guide, which is the top accreditation for tourist guides in the UK. We do two years of training, pass 11 exams, and we’re the only guides that can take you inside the Tower of London and Westminster Abbey.

Look Up London started originally as a blog and social media channels, where I shared quirkier bits of London history. The name is all about spotting the little details in the architecture around you, to tell the story of why something looks the way it does. For example, on Oxford Street, which is famous for its shopping, there's a building decorated with sculptures of beavers. They're a clue to the fact it used to be a hat factory—slightly gruesome, but a detail that is so often missed by passersby!

How has your business been impacted by COVID-19 and the government restrictions?

I’ve had to shut down, basically. I can’t go out; we’re not able to meet up in groups to deliver the normal tours. All of the work I had booked going into the summer—the busiest time—has just been cancelled or postponed. Literally overnight there was no work at all. 

Katie giving a tour of the Guildhall

What gave you the idea for virtual tours?

It was actually a suggestion from a follower on Instagram who asked, “Is there a way you could do virtual tours?” I started out by going out myself and having my husband film me on London streets, but then as the situation escalated, we weren’t allowed outside.

So then I thought I’d experiment with Google Street View. If I couldn’t go outside, I could offer people the next best thing, through a screen. I was already using Street View a lot for my work—it’s really good for my research. I love the feature where you can go back in time. It’s not possible for every location, but for a lot of central London, you can select a place in Google Maps for desktop, drag the Street View pegman into the picture and click on the clock in the top left corner to explore imagery from the past. You can see where buildings have been demolished and what used to be standing where.

So now, every Monday, Wednesday and Friday at 2 PM London time, I use Street View to give a virtual tour on Instagram Live. And for anyone who can’t make that time, I post the recordings on my website. They’re all free, and if people enjoy them, they can make a donation.

What’s been the response?

People have been so lovely. From the comments, I think it’s been very helpful for people in lockdown, who maybe are older and can’t get out of the house as often, or people who’ve had to leave London and are feeling homesick. Lots have messaged me to say it’s made them feel like they’ve been outside. They’ve really learned something new and taken their minds off the situation for twenty minutes or so.

Any advantages to using Street View compared to being there in person?

The great thing about Street View is that you can hop about—you can jump a mile down the road and people don’t have to get on a bus or actually walk, so you can cover a lot of ground.

And then there’s that feature to go back in time and see things how they appeared years ago, back to 2008. On a normal tour, you can show pictures and give people an idea, but if people are on Street View and feel like they’re standing in a space and seeing the changes right there, it’s a different experience.

One example, on my Aldgate tour, is a garden space that has been relandscaped. The garden looks beautiful now, but three years ago you could see the cobbles of Victorian London. And those cobbles happen to have been the site of the murder of Catherine Eddowes, who was a victim of Jack the Ripper. That was an evocative thing to be able to show.

Any advice for other small business owners who are trying to figure out how to adapt right now?

I think you have to do the thing that you enjoy doing. I don’t think I’d be able to do these three times a week if I didn’t enjoy them. If you have something that you want to share, there’s no reason you shouldn’t do that. Technology has made everything so accessible, and if you care about something, chances are others care about that as well.

Source: Google LatLong


Go on a cultural rendezvous with “Art For Two”

If you don’t work for a cultural institution, you’ve probably never had the opportunity to wander all alone through a museum’s hallways, exhibition spaces and galleries, after hours, with no one else around. That’s a privilege usually reserved for staff—until now. 


In the first installment of Google Arts & Culture’s new video series called “Art for Two”, curators from three cultural institutions are extending a special invitation to explore their collections, minus the crowds, as they discuss their favorite rooms and pieces with digital curators Mr. Bacchus and The Art Assignment.


You'll hear from the experts themselves: The director of the Museo d’Arte Orientale shows his favorite figurine and explains why it’s unusual. Sit at an antique kitchen table with Olivier Gabet, director of the Musée des arts décoratifs, or learn more about what makes Lucio Fontana’s installation at the Galleria Civica di Arte Moderna e Contemporanea so special.

Marco Guglielminotti Trivel, director of the Museo d’Arte Orientale meets digital curator Mr. Bacchus

Still itching to explore more? Another new series called “Perspectives” invites you to learn about important cultural destinations through the eyes and with the commentary of an inspirational guide. For the first edition, Grammy-nominated Indian-American artist Raja Kumari takes us on a personal ride to temples in India, including the famous Mahabalipuram—a cultural jewel and popular tourist destination, referred to as “Sculpture by the Sea.”

Raja Kumari shows you the Temples of India

Travel isn't just about checking things off your bucket list. At a slow “couch travel” pace, Quiet Journeys, accompanied by the soothing sound of classical music, will help you relax and drift off into museums and masterpieces from all around the world.


“Art for Two”, “Perspectives” and “Quiet Journeys” are the latest additions to our growing library of video formats that connect art and culture in new and unexpected ways. Check out Art Zoom to explore masterpieces through the eyes of famous musicians, and find other videos on the Google Arts & Culture YouTube channel.


Discover more on Google Arts & Culture—or download our free app for iOS or Android.


More ways to fine tune Google Assistant for you

Smart speakers and Smart Displays often sit on the kitchen counter or living room table and are used by more than one member of the household. So we’ve made sure that each person can tweak their preferences for interacting with Google Assistant. When setting up your Google Assistant, you can choose to enable Voice Match and teach the Assistant to recognize your voice so you can receive personalized results, like calendar reminders and favorite playlists—even if you share a device with other people in your household.

Now when you set up Voice Match, Google Assistant will prompt you to say full phrases instead of just the hotword "Hey Google." For example, during Voice Match setup, the Assistant will ask you to say “Hey Google, play my workout playlist” so it can identify who is speaking with significantly higher accuracy. With Voice Match, you can link up to six people to a single Google Assistant-powered device, so you each get tailored results when using the device.
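
Google hasn’t published how Voice Match works internally, but the general family of techniques, matching a new utterance against enrolled voiceprint embeddings, can be sketched in toy form. Everything below (the embedding source, the scoring, the threshold) is an assumption for illustration, not Google’s implementation:

```kotlin
import kotlin.math.sqrt

// Toy speaker matcher: assumes some speaker-encoder model has already turned
// each utterance into a fixed-length embedding. Voice Match's real models,
// features and thresholds are not public.
fun cosine(a: FloatArray, b: FloatArray): Float {
    var dot = 0f; var na = 0f; var nb = 0f
    for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
    return dot / (sqrt(na) * sqrt(nb))   // assumes non-zero embeddings
}

fun matchSpeaker(
    utterance: FloatArray,
    profiles: Map<String, List<FloatArray>>,  // name -> enrollment embeddings
    threshold: Double = 0.8                   // hypothetical accept threshold
): String? = profiles
    // Average similarity against each person's enrollment utterances...
    .mapValues { (_, enrolled) -> enrolled.map { cosine(utterance, it) }.average() }
    .entries
    // ...then accept the best match only if it clears the threshold.
    .maxByOrNull { it.value }
    ?.takeIf { it.value >= threshold }
    ?.key
```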


Adjust how your devices activate 

Different factors, like how noisy an environment is, may affect the Assistant’s responsiveness to the hotword or cause it to accidentally activate when it hears something similar to “Hey Google.” To better tailor Google Assistant to your environment and desired responsiveness, we’re rolling out a new feature that allows you to adjust how sensitive smart speakers and Smart Displays are to the hotword. You can make Google Assistant more sensitive if you want it to respond more often, or less sensitive to reduce unintentional activations. 
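
Conceptually, a sensitivity control like this maps a user-facing setting onto the confidence threshold a hotword detector must clear before waking the device. The mapping and the numbers below are invented for illustration, not Google’s implementation:

```kotlin
// Hypothetical mapping from a user-facing sensitivity setting (0.0 = least
// sensitive, 1.0 = most) to a detector's accept threshold. Higher sensitivity
// lowers the bar, so the device responds more often but also risks more
// unintentional activations.
fun hotwordThreshold(sensitivity: Float): Float {
    require(sensitivity in 0f..1f) { "sensitivity must be in [0, 1]" }
    val leastSensitive = 0.9f   // invented bounds for illustration
    val mostSensitive = 0.5f
    return leastSensitive + (mostSensitive - leastSensitive) * sensitivity
}

fun shouldWake(detectorScore: Float, sensitivity: Float): Boolean =
    detectorScore >= hotwordThreshold(sensitivity)
```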

In the coming weeks, you’ll start seeing the option to adjust how sensitive Google Assistant is in your settings through the Google Home app. These settings can be changed at any time, and you can fine-tune your preferences for each device if, for example, one is in a busy area like the kitchen while the other is on the bedroom nightstand. This feature will be supported in English, with more languages to follow.