Tag Archives: Health



Our commitment to COVID-19 vaccine equity



As more people have access to the COVID-19 vaccine, we’re making it easier to learn why, when and where you can get immunized. Today, you can now find vaccination locations on Google Maps and Search in the U.S., Canada, France, Chile, India and Singapore.

Still, there’s a lot of work ahead to make sure everyone who wants to get vaccinated can. In the U.S., COVID-19 has disproportionately impacted Black and Latino populations, yet these groups have lower rates of vaccinations. Vaccines may be harder for people to access based on factors like where they live, how far they have to drive to a vaccination site, and if they have reliable internet access to book an appointment. And globally, it could be years before some countries even have enough vaccines. 

Overcoming the pandemic will require a coordinated effort on a global scale. To do our part, today we're announcing that we're providing 250,000 COVID-19 vaccinations to countries in need, helping fund pop-up vaccine sites in the U.S., and committing an additional $250 million in Ad Grants to connect people to accurate vaccine information.


Securing vaccines for people around the world 

Today, Gavi, The Vaccine Alliance, launched a drive for additional funding to secure vaccines for low and middle-income countries. Google.org is funding vaccinations for 250,000 people and providing Gavi with pro bono technical assistance to accelerate global distribution. We’re also kicking off an employee giving campaign, and both the Gavi Matching Fund and Google.org will match each donation to triple the impact. 

Since February, we’ve been providing vaccine-related insights to help Gavi better educate communities about the COVID-19 vaccine. They’ve used that information to create educational content that reaches more than half a million people each day. We’re now committing $15 million in Ad Grants to help Gavi build on these efforts and amplify their fundraising campaign.

  

Funding pop-up vaccine sites and making it easier to book appointments 

Nearly a quarter of people in the U.S. are now vaccinated. Yet we know that vaccination rates vary by geography and community. Reaching everyone will require partnerships with community-based organizations and local health centers that have on-the-ground expertise and the trust of the people they serve.

Google.org is providing $2.5 million in grant funding to Partners in Health, Stop the Spread and Team Rubicon, who are working directly with over 500 community-based organizations to serve Black, Latino and rural communities. This funding will go toward efforts like pop-up vaccination sites.  

To make sure more people — especially those with limited internet access — can sign up for a vaccine, Google Cloud is launching an expanded virtual agent as part of its Intelligent Vaccine Impact solution (IVIs). People will be able to schedule vaccine appointments and ask common questions through a virtual agent, in up to 28 languages and dialects, via chat, text, web, mobile or over the phone. 


Committing $250 million to connect communities to trusted vaccine information 

Since the beginning of the pandemic, hundreds of Google employees have helped organizations connect people with up-to-date information — particularly in communities that are not typically reached by mainstream public service announcements. 

For example, we’re working with UnidosUS on a bilingual vaccination campaign that to date has reached more than two million people in hard-hit communities in Miami, Chicago, Houston, New York City and Los Angeles. We’ve conducted research with the World Health Organization (WHO) on what information improves vaccine confidence, and governments worldwide are using these insights to inform their public service announcements.   

To expand this work, we’re committing an additional $250 million in Ad Grants to governments, community and public health organizations, including the WHO, that will fund more than 2.5 billion vaccine-related PSAs. This brings our total commitment for COVID-related public service announcements to more than $800 million. 

As we’ve learned throughout the pandemic, no one is safe from COVID-19 until everyone is safe. Getting vaccines to everyone around the world is a challenging, but necessary, undertaking. We’ll keep doing our part and working together until we get there.



Tune in to YouTube on May 8 at 5 p.m. PST / 8 p.m. EST for Vax Live: The Concert to Reunite the World, a fundraising campaign to vaccinate health workers working on the frontlines of the pandemic. 


Arts and culture activities for your health and wellbeing

Our collective health and wellbeing has taken center stage as the world continues to grapple with the COVID-19 pandemic.  While extraordinary advances in science have delivered vaccines and new hope, for over a year we’ve had to consider what wellbeing means on a personal and global level. We’ve also asked ourselves how best to cope in an era of tremendous stress, grief and isolation.

Many of us intuitively turned to arts and cultural activities as a source of comfort and healing. To honor World Health Day and support our recovery and resilience, we are launching a new experience: Arts + Health & Wellbeing.

Artists have always deeply understood the healing power of the arts, from music, poetry and painting to dance and design. Technological leaps in brain imagery and biomarkers are now helping scientists confirm what we’ve all sensed: art heals. Evidence shows that many forms of art can play an important role during the treatment and recovery of people living with illnesses such as cancer, Alzheimer’s and Parkinson’s disease, and PTSD. More broadly, the arts relieve stress, anxiety and depression, boost our mood and create stronger connections to ourselves and others.

Like regular exercise or a good night’s sleep, the arts are proving important to our health and wellbeing.

The best discovery is that the arts are for everyone. Regardless of experience or talent, you can enjoy their health benefits today. Take a moment to support your own health and wellbeing — start by doing “The Cultural 5” with the World Health Organization, or enjoy a daily dose of arts and culture activities below:


Opera singer Renée Fleming

1. Try these breathing exercises with soprano Renée Fleming to help increase breath capacity. For many people who have had COVID-19, breathing remains a challenge even after they recover from the most acute phase of the illness, and Fleming shares exercises for anyone trying to regain better breath after illness.

Dr. Adam Perlman and boxer Ryan Garcia talk about creativity

2. Talk about mental health: Dr. Adam Perlman of the Mayo Clinic and boxer Ryan Garcia explore creativity and the role it plays in mental health and wellbeing.

Art emotions map

3. Dive into a sea of images and explore which artworks represent your emotions. Working with scientists from the University of California, Berkeley, who research the emotions artworks have evoked through time and across cultures, we asked 1,300 people to describe how certain images make them feel and plotted these feelings on an interactive map for you to explore. See how your emotions compare to others’.

For the imperfect people image

4. Watch "For the imperfect people," a spoken word video on the topic of mental health, written by students of SocialWorks’ OpenMike program, and in collaboration with Johns Hopkins International Arts + Mind Lab. Learn about the science behind how spoken word and poetry can help people heal emotionally while creating community connections and reducing stress and isolation.

Slow your body down with Stille

5. Slow down your body tempo with Stille (Silence Film), an experimental film aimed at giving viewers a visceral and meditative experience of silence, viewed through the lens of the German film director Thomas Riedelsheimer.

Contactless Sleep Sensing in Nest Hub

People often turn to technology to manage their health and wellbeing, whether it is to record their daily exercise, measure their heart rate, or increasingly, to understand their sleep patterns. Sleep is foundational to a person’s everyday wellbeing and can be impacted by (and in turn, have an impact on) other aspects of one’s life — mood, energy, diet, productivity, and more.

As part of our ongoing efforts to support people’s health and happiness, today we announced Sleep Sensing in the new Nest Hub, which uses radar-based sleep tracking in addition to an algorithm for cough and snore detection. While not intended for medical purposes1, Sleep Sensing is an opt-in feature that can help users better understand their nighttime wellness using a contactless bedside setup. Here we describe the technologies behind Sleep Sensing and discuss how we leverage on-device signal processing to enable sleep monitoring (comparable to other clinical- and consumer-grade devices) in a way that protects user privacy.

Soli for Sleep Tracking
Sleep Sensing in Nest Hub demonstrates the first wellness application of Soli, a miniature radar sensor that can be used for gesture sensing at various scales, from a finger tap to movements of a person’s body. In Pixel 4, Soli powers Motion Sense, enabling touchless interactions with the phone to skip songs, snooze alarms, and silence phone calls. We extended this technology and developed an embedded Soli-based algorithm that could be implemented in Nest Hub for sleep tracking.

Soli consists of a millimeter-wave frequency-modulated continuous wave (FMCW) radar transceiver that emits an ultra-low power radio wave and measures the reflected signal from the scene of interest. The frequency spectrum of the reflected signal contains an aggregate representation of the distance and velocity of objects within the scene. This signal can be processed to isolate a specified range of interest, such as a user’s sleeping area, and to detect and characterize a wide range of motions within this region, ranging from large body movements to sub-centimeter respiration.
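
The range measurement described above can be sketched numerically. The following Python toy (all radar parameters here are hypothetical, not Soli's actual configuration) synthesizes the FMCW "beat" tone produced by a single static target and recovers its range from the peak of the range FFT:

```python
import numpy as np

# In FMCW radar, mixing the transmitted chirp with its reflection yields
# a beat tone whose frequency is proportional to target range:
#   range = c * f_beat * T_chirp / (2 * bandwidth)
C = 3e8            # speed of light (m/s)
BANDWIDTH = 4e9    # chirp bandwidth (Hz) -- hypothetical value
T_CHIRP = 1e-3     # chirp duration (s) -- hypothetical value
FS = 2e6           # ADC sample rate (Hz) -- hypothetical value

def beat_signal(target_range_m, n_samples):
    """Synthesize the beat tone for a single static target at a given range."""
    f_beat = 2 * target_range_m * BANDWIDTH / (C * T_CHIRP)
    t = np.arange(n_samples) / FS
    return np.cos(2 * np.pi * f_beat * t)

def estimate_range(samples):
    """Locate the dominant peak of the range FFT and convert it to meters."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    f_beat = np.argmax(spectrum) * FS / len(samples)
    return C * f_beat * T_CHIRP / (2 * BANDWIDTH)
```

With these made-up parameters, a target placed at 1 m is recovered to within a few centimeters, which is why sub-centimeter motion such as respiration shows up as a measurable change in the reflected signal.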

Soli spectrogram illustrating its ability to detect a wide range of motions, characterized as (a) an empty room (no variation in the reflected signal demonstrated by the black space), (b) large pose changes, (c) brief limb movements, and (d) sub-centimeter chest and torso displacements from respiration while at rest.

In order to make use of this signal for Sleep Sensing, it was necessary to design an algorithm that could determine whether a person is present in the specified sleeping area and, if so, whether the person is asleep or awake. We designed a custom machine-learning (ML) model to efficiently process a continuous stream of 3D radar tensors (summarizing activity over a range of distances, frequencies, and time) and automatically classify each feature into one of three possible states: absent, awake, and asleep.
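
As a toy stand-in for the production model (the weights below are random, purely to illustrate the input/output shape of such a classifier, and the feature size is made up), one frame of flattened radar features could be mapped to the three states like this:

```python
import numpy as np

STATES = ["absent", "awake", "asleep"]
N_FEATURES = 64  # flattened (range x frequency) bins -- hypothetical size

rng = np.random.default_rng(0)
W = rng.normal(size=(N_FEATURES, 3))  # a real model learns these weights
b = np.zeros(3)

def classify_frame(features):
    """Return (state, probabilities) for one radar feature vector."""
    logits = features @ W + b
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return STATES[int(np.argmax(probs))], probs

state, probs = classify_frame(rng.normal(size=N_FEATURES))
```

The actual model is a learned network over 3D radar tensors rather than a single linear layer, but the interface is the same: a stream of per-timestep feature tensors in, a probability over {absent, awake, asleep} out.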

To train and evaluate the model, we recorded more than a million hours of radar data from thousands of individuals, along with thousands of sleep diaries, reference sensor recordings, and external annotations. We then leveraged the TensorFlow Extended framework to construct a training pipeline to process this data and produce an efficient TensorFlow Lite embedded model. In addition, we created an automatic calibration algorithm that runs during setup to configure the part of the scene on which the classifier will focus. This ensures that the algorithm ignores motion from a person on the other side of the bed or from other areas of the room, such as ceiling fans and swaying curtains.

The custom ML model efficiently processes a continuous stream of 3D radar tensors (summarizing activity over a range of distances, frequencies, and time) to automatically compute probabilities for the likelihood of user presence and wakefulness (awake or asleep).

To validate the accuracy of the algorithm, we compared it to the gold standard of sleep-wake determination, the polysomnogram sleep study, in a cohort of 33 “healthy sleepers” (those without significant sleep issues, like sleep apnea or insomnia) across a broad age range (19-78 years of age). Sleep studies are typically conducted in clinical and research laboratories in order to collect various body signals (brain waves, muscle activity, respiratory and heart rate measurements, body movement and position, and snoring), which can then be interpreted by trained sleep experts to determine stages of sleep and identify relevant events. To account for variability in how different scorers apply the American Academy of Sleep Medicine’s staging and scoring rules, our study used two board-certified sleep technologists to independently annotate each night of sleep and establish a definitive ground truth.

We compared our Sleep Sensing algorithm’s outputs to the corresponding ground-truth sleep and wake labels for every 30-second epoch of time to compute standard performance metrics (e.g., sensitivity and specificity). While not a true head-to-head comparison, this study’s results can be compared against previously published studies in similar cohorts with comparable methodologies in order to get a rough estimate of performance. In “Sleep-wake detection with a contactless, bedside radar sleep sensing system”, we share the full details of these validation results, demonstrating sleep-wake estimation equivalent to or, in some cases, better than current clinical and consumer sleep tracking devices.
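
The epoch-level metrics can be computed in a few lines of Python. This helper (names and labels are my own, not from the study) treats each element of a sequence as one 30-second epoch, with 1 = asleep and 0 = awake:

```python
def sleep_wake_metrics(predicted, truth):
    """Sensitivity: fraction of true sleep epochs detected as sleep.
    Specificity: fraction of true wake epochs detected as wake."""
    tp = sum(1 for p, t in zip(predicted, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(predicted, truth) if p == 0 and t == 0)
    sleep_epochs = sum(truth)
    wake_epochs = len(truth) - sleep_epochs
    sensitivity = tp / sleep_epochs if sleep_epochs else float("nan")
    specificity = tn / wake_epochs if wake_epochs else float("nan")
    return sensitivity, specificity
```

Reporting both numbers matters because sleep trackers see far more sleep than wake epochs overnight, so a tracker that simply labels everything "asleep" would score perfect sensitivity while missing every awakening.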

Aggregate performance from previously published accuracies for detection of sleep (sensitivity) and wake (specificity) of a variety of sleep trackers against polysomnography in a variety of different studies, accounting for 3,990 nights in total. While this is not a head-to-head comparison, the performance of Sleep Sensing on Nest Hub in a population of healthy sleepers who simultaneously underwent polysomnography is added to the figure for rough comparison. The size of each circle is a reflection of the number of nights and the inset illustrates the mean±standard deviation for the performance metrics.

Understanding Sleep Quality with Audio Sensing
The Soli-based sleep tracking algorithm described above gives users a convenient and reliable way to see how much sleep they are getting and when sleep disruptions occur. However, to understand and improve their sleep, users also need to understand why their sleep is disrupted. To assist with this, Nest Hub uses its array of sensors to track common sleep disturbances, such as light level changes or uncomfortable room temperature. In addition to these, respiratory events like coughing and snoring are also frequent sources of disturbance, but people are often unaware of these events.

As with other audio-processing applications like speech or music recognition, coughing and snoring exhibit distinctive temporal patterns in the audio frequency spectrum, and with sufficient data an ML model can be trained to reliably recognize these patterns while simultaneously ignoring a wide variety of background noises, from a humming fan to passing cars. The model uses entirely on-device audio processing with privacy-preserving analysis, with no raw audio data sent to Google’s servers. A user can then opt to save the outputs of the processing (sound occurrences, such as the number of coughs and snore minutes) in Google Fit, in order to view personal insights and summaries of their night time wellness over time.
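
A simplified version of the log-mel front end such a model might consume can be written with NumPy alone. The frame size, hop, and filter count below are common illustrative defaults, not Nest Hub's actual configuration:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        for b in range(lo, mid):
            fb[i, b] = (b - lo) / max(mid - lo, 1)   # rising edge
        for b in range(mid, hi):
            fb[i, b] = (hi - b) / max(hi - mid, 1)   # falling edge
    return fb

def log_mel_spectrogram(audio, sr=16000, n_fft=512, hop=256, n_mels=40):
    """Frame the signal, take the power spectrum, pool into mel bands, log."""
    frames = [audio[s:s + n_fft] * np.hanning(n_fft)
              for s in range(0, len(audio) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    mel = power @ mel_filterbank(n_mels, n_fft, sr).T
    return np.log(mel + 1e-6)  # small floor avoids log(0)
```

The resulting (time x mel-band) matrix is the kind of spectrogram-like feature in which coughs appear as broadband bursts and snores as periodic low-frequency energy.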

The Nest Hub displays when snoring and coughing may have disturbed a user’s sleep (top) and can track weekly trends (bottom).

To train the model, we assembled a large, hand-labeled dataset, drawing examples from the publicly available AudioSet research dataset as well as hundreds of thousands of additional real-world audio clips contributed by thousands of individuals.

Log-Mel spectrogram inputs comparing cough (left) and snore (right) audio snippets.

When a user opts in to cough and snore tracking on their bedside Nest Hub, the device first uses its Soli-based sleep algorithms to detect when a user goes to bed. Once it detects that a user has fallen asleep, it then activates its on-device sound sensing model and begins processing audio. The model works by continuously extracting spectrogram-like features from the audio input and feeding them through a convolutional neural network classifier in order to estimate the probability that coughing or snoring is happening at a given instant in time. These estimates are analyzed over the course of the night to produce a report of the overall cough count and snoring duration and highlight exactly when these events occurred.
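
One plausible way to aggregate the per-instant probabilities into a nightly report (the threshold and frame rate below are hypothetical, not the shipped logic) is to count a cough event for each contiguous run of above-threshold frames and to sum above-threshold snore frames into minutes:

```python
FRAME_SECONDS = 1.0  # assumed spacing between classifier outputs
THRESHOLD = 0.5      # assumed decision threshold

def nightly_summary(cough_probs, snore_probs):
    """Return (cough event count, snore duration in minutes)."""
    coughs, in_event = 0, False
    for p in cough_probs:
        if p >= THRESHOLD and not in_event:
            coughs += 1          # a new run of cough frames starts an event
            in_event = True
        elif p < THRESHOLD:
            in_event = False     # run ended; next spike is a new event
    snore_minutes = (sum(1 for p in snore_probs if p >= THRESHOLD)
                     * FRAME_SECONDS / 60.0)
    return coughs, snore_minutes
```

Counting runs rather than individual frames keeps one sustained cough from inflating the count, while snoring, being continuous, is more naturally reported as a duration.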

Conclusion
The new Nest Hub, with its underlying Sleep Sensing features, is a first step in empowering users to understand their nighttime wellness using privacy-preserving radar and audio signals. We continue to research additional ways that ambient sensing and the predictive ability of consumer devices could help people better understand their daily health and wellness in a privacy-preserving way.

Acknowledgements
This work involved collaborative efforts from a multidisciplinary team of software engineers, researchers, clinicians, and cross-functional contributors. Special thanks to D. Shin for his significant contributions to this technology and blogpost, and Dr. Logan Schneider, visiting sleep neurologist affiliated with the Stanford/VA Alzheimer’s Center and Stanford Sleep Center, whose clinical expertise and contributions were invaluable to continuously guide this research. In addition to the authors, key contributors to this research from Google Health include Jeffrey Yu, Allen Jiang, Arno Charton, Jake Garrison, Navreet Gill, Sinan Hersek, Yijie Hong, Jonathan Hsu, Andi Janti, Ajay Kannan, Mukil Kesavan, Linda Lei, Kunal Okhandiar‎, Xiaojun Ping, Jo Schaeffer, Neil Smith, Siddhant Swaroop, Bhavana Koka, Anupam Pathak, Dr. Jim Taylor, and the extended team. Another special thanks to Ken Mixter for his support and contributions to the development and integration of this technology into Nest Hub. Thanks to Mark Malhotra and Shwetak Patel for their ongoing leadership, as well as the Nest, Fit, Soli, and Assistant teams we collaborated with to build and validate Sleep Sensing on Nest Hub.


1 Not intended to diagnose, cure, mitigate, prevent or treat any disease or condition. 

Source: Google AI Blog




Low-Power Sleep Tracking on Android

Posted by Nick Grayson, Product Manager


Android works best when it helps developers create apps that people love. That’s why we are dedicated to providing useful APIs like Activity Recognition which, with the user’s permission, can detect a user’s activities (such as whether a user is biking or walking) to help apps provide contextually aware experiences.

So much of what we do relies on a good night's rest. Our phones have become great tools for making more informed decisions about our sleep. And by understanding their sleep habits, people can make better decisions throughout the day, since sleep affects things like concentration and mental health.

In an effort to help our users stay informed about their sleep, we are making our Sleep API publicly available.

What is the Sleep API?

The Sleep API is an Android Activity Recognition API that surfaces information about the user’s sleep. It can be used to power features like the Bedtime mode in Clock.

This sleeping information is reported in two ways:

  1. A ‘sleep confidence’, which is reported at a regular interval (up to 10 minutes)
  2. A daily sleep segment which is reported after a wakeup is detected

The API uses an on-device artificial intelligence model that uses the device’s light and motion sensors as inputs.

As with all of our Activity Recognition APIs, the app must be granted the Physical Activity Recognition runtime permission from the user to detect sleep.
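To make the two report types concrete, here is an illustrative sketch, in Python rather than Android code and with made-up thresholds (the real Sleep API computes segments on-device), of how a run of high-confidence samples could become a daily sleep segment:

```python
from datetime import datetime, timedelta

def detect_sleep_segment(samples, confidence_threshold=70,
                         min_duration=timedelta(hours=1)):
    """Turn (timestamp, sleep-confidence 0-100) samples, reported at a
    regular interval, into a single (start, end) sleep segment.

    Returns the longest contiguous run of high-confidence samples, or
    None if no run meets the minimum duration. Illustrative only.
    """
    best = None
    run_start = None
    prev_ts = None

    def consider(candidate):
        nonlocal best
        if best is None or candidate[1] - candidate[0] > best[1] - best[0]:
            best = candidate

    for ts, conf in samples:
        if conf >= confidence_threshold:
            if run_start is None:
                run_start = ts
            prev_ts = ts
        elif run_start is not None:
            consider((run_start, prev_ts))
            run_start = None
    if run_start is not None:
        consider((run_start, prev_ts))

    if best and best[1] - best[0] >= min_duration:
        return best
    return None
```

In a real Android app, the equivalent inputs would arrive via a `PendingIntent` registered for sleep updates, and the segment would be delivered by the API after a wakeup is detected rather than computed by the app.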

Why is this important for developers?

Developers spend valuable engineering time to combine sensor signals to determine when the user has started or ended activities like sleep. These detection algorithms are inconsistent between apps and when multiple apps independently and continuously check for changes in user activity, battery life suffers.

The Sleep API is a simple API that centralizes sleep detection processing in a battery-efficient manner. For this launch, we are proud to collaborate with Urbandroid, the developer of the popular alarm app Sleep As Android.

Sleep as Android is a Swiss Army knife for getting a better night’s rest. It tracks sleep duration, regularity, phases, snoring, and more. Sleep duration is one of the most important parameters to watch for ensuring a good night’s rest. The new Sleep API gives us a fantastic opportunity to track it automatically in the most battery-efficient way imaginable.

- Sleep as Android Team



When can I start using this API?

The Sleep API is available for developers to use now as part of the latest version of Google Play Services.

This API is one step in our efforts to help our users get a better night's rest. We look forward to working more on this API and in this area in the future.

If you are interested in exploring or using this API, check out our API Documentation.

Using artificial intelligence in breast cancer screening

Every year, approximately 40 million women undergo breast-cancer screening in the U.S. using a procedure called mammography. For some, this can be a nerve-wracking experience; many wait days or weeks before a radiologist can review their scan and provide initial screening results. Between 10 and 15 percent of women must return for a second visit and undergo more scans before receiving a final diagnostic assessment – drawing out the process further. 


Together with Northwestern Medicine, Google Health is working on a new clinical research study to explore whether artificial intelligence (AI) models can help reduce the time to diagnosis, narrowing the assessment gap and improving the patient experience. 


Women who choose to take part in the study may have their mammograms reviewed by an investigational AI model that flags scans for immediate review by a radiologist if they show a higher likelihood of breast cancer. If a radiologist determines that further imaging is required, the woman will have the option to undergo this imaging on the same day. This study will evaluate whether this prioritization could reduce the amount of time that women spend waiting for a diagnostic assessment.  Women whose mammograms are not flagged will continue to have their images reviewed within regular timeframes. 


“Through this study, Northwestern Medicine aims to improve the excellent care we deliver to our patients every day. With the use of artificial intelligence, we hope to expedite the process to diagnosis of breast cancer by identifying suspicious findings on patients’ screening examinations earlier than the standard of care,” says study principal investigator Dr. Sarah Friedewald, chief of breast imaging at Northwestern Medicine and vice chair for women's imaging in radiology at Northwestern University’s Feinberg School of Medicine. “Every patient in the study will continue to have their mammograms interpreted by a radiologist, but the artificial intelligence will flag and prioritize patients that need additional imaging, facilitating the flow of care.”


This research study with Northwestern Medicine builds on previous research which demonstrated the potential of AI models to analyze de-identified retrospectively collected screening mammograms with similar or better accuracy than clinicians. 


Artificial intelligence has shown great potential to improve health care outcomes; the next challenge is to demonstrate how AI can be applied in the real world. At Google Health, we’re committed to working with clinicians, patients and others to harness advances in research and ultimately bring about better and more accessible care. 

VaxCare simplifies vaccine management with Android Enterprise

Editor’s note: Today’s post is by Evan Landis, Chief Product Officer with VaxCare. The company aims to simplify vaccination for healthcare providers. VaxCare partnered with Social Mobile to create custom devices managed with Android Enterprise for its customers. 

The intense worldwide effort to vaccinate against COVID-19 has highlighted some of the core challenges that have always existed in expanding protections against preventable diseases.  

At VaxCare, our mission for more than 10 years has been to simplify vaccination programs, easing the logistical barriers to increasing vaccination rates. Our digital platform is designed to help healthcare professionals modernize their vaccination programs, reduce costs and focus on their patients. 

Android devices are central to this strategy. Recently, we partnered with Social Mobile, which designed and built bespoke, Google Mobile Services-certified devices that interface with our digital platform. The flexibility of Android Enterprise enabled us to build solutions aligned to our customer needs with simple, flexible management and security tools.

A better customer experience with Android

Social Mobile helped us create custom devices that are simple to set up, use and update, while still meeting HIPAA and HITRUST certification compliance. We were inspired by consumer-facing, point-of-sale devices and the flexibility of the Android platform to create an ideal hardware solution for our customers. 

The VaxCare Hub is our stationary, in-practice integrated device with a 13-inch touchscreen, a camera and a scanner that is the main gateway to our platform. When vaccinating patients, healthcare providers scan the dose and view the vaccine and patient information, ensuring accuracy before administering the vaccine. 

As a dedicated device tied to our service, healthcare providers always have access to quickly look up the status of their inventory and get updates on new vaccine shipments.



The VaxCare Hub, a custom device powered by Android Enterprise, is the key portal to our service.

To design for the new contexts and places where vaccines are administered, we also worked with Social Mobile to create the VaxCare Mobile Hub. This smaller dedicated Android Enterprise device also connects to our Portal service and gives healthcare providers the flexibility to get the information they need no matter where they are administering vaccines.



The VaxCare Mobile Hub helps our customers ensure accurate vaccine administration.

Having this vital information readily available in a purpose-built, rugged device has improved efficiency for our network of over 10,000 providers. Since we launched the Mobile Hub device in September 2020, our providers have administered over 650,000 flu shots during the 2020 season. One partner practice saw its immunization rates increase 54 percent year-over-year.

Flexible management solutions

Android Enterprise provides comprehensive tools for rapid and secure device enrollment and flexible management, which we enable for our devices through Social Mobile’s Enterprise Mobility Management (EMM) platform, Mambo.  

With zero-touch enrollment, we enable a quick and simple device startup experience for customers. After unboxing and powering on the device, it’s automatically enrolled and configured for use with our application. Devices are managed in lock task mode, which locks a device to a specific set of apps, so customers are always connected to our VaxCare Portal.
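As a rough illustration of what a lock-task ("kiosk") policy can look like, here is a hypothetical policy shaped like an Android Management API policy resource. The package name and field choices are invented; VaxCare's actual Mambo configuration is not public.

```python
import json

# Hypothetical kiosk-style device policy, shaped like an Android
# Management API policy resource. The package name is made up.
KIOSK_POLICY = {
    "applications": [
        {
            "packageName": "com.example.vaxcare.portal",  # hypothetical
            "installType": "KIOSK",  # pin the device to this one app
            "defaultPermissionPolicy": "GRANT",
        }
    ],
    # Keep users out of settings that would break the dedicated setup.
    "statusBarDisabled": True,
    "keyguardDisabled": True,
}

def policy_json(policy=KIOSK_POLICY):
    """Serialize the policy for an EMM API call."""
    return json.dumps(policy, indent=2)
```

With zero-touch enrollment, a device fetches a policy like this on first boot, which is what makes the out-of-box experience "power on and go".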

Security and privacy are critical to any healthcare setting. As a device with Google Mobile Services, the VaxCare Hub and Mobile Hub use Android multi-layered security to continually monitor and protect critical data. We have confidence in the platform security features to ensure we meet the security and privacy promise we make to our customers.

Help for a vaccine surge

With Android Enterprise, we’ve set ourselves up to scale as we see an increased demand for vaccines and offerings like VaxCare. We've been able to quickly bring online support for our partners in the public phase of the COVID-19 vaccine rollout. We’ve optimized our platform to assist any of our providers who enroll in a public vaccination program to manage inventory, record-keeping and billing. 

As we continue our mission of helping the healthcare community more simply deliver vaccines, we’re confident that Android and Social Mobile’s custom solutions will continue to be a major component of our hardware and software strategy to support the healthcare community.

How anonymized data helps fight against disease

Data has always been a vital tool in understanding and fighting disease — from Florence Nightingale’s 1800s hand drawn illustrations that showed how poor sanitation contributed to preventable diseases to the first open source repository of data developed in response to the 2014 Ebola crisis in West Africa. When the first cases of COVID-19 were reported in Wuhan, data again became one of the most critical tools to combat the pandemic. 

A group of researchers, who documented the initial outbreak, quickly joined forces and started collecting data that could help epidemiologists around the world model the trajectory of the novel coronavirus outbreak. The researchers came from University of Oxford, Tsinghua University, Northeastern University and Boston Children’s Hospital, among others. 

However, their initial workflow was not designed for the exponential rise in cases. The researchers turned to Google.org for help. As part of Google’s $100 million contribution to COVID relief, Google.org granted $1.25 million in funding and provided a team of 10 full-time Google.org Fellows and 7 part-time Google volunteers to assist with the project.  

Google volunteers worked with the researchers to create Global.health, a scalable and open-access platform that pulls together millions of anonymized COVID-19 cases from over 100 countries. This platform helps epidemiologists around the world model the trajectory of COVID-19, and track its variants and future infectious diseases. 

The need for trusted and anonymized case data

When an outbreak occurs, timely access to organized, trustworthy and anonymized data is critical for public health leaders to inform early policy decisions, medical interventions, and allocations of resources — all of which can slow disease spread and save lives. The insights derived from “line-list” data (e.g. anonymized case level information), as opposed to aggregated data such as case counts, are essential for epidemiologists to perform more detailed statistical analyses and model the effectiveness of interventions. 
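The distinction matters in practice: aggregated counts can always be derived from a line list, but not the reverse. A toy example with invented, anonymized-style records:

```python
from collections import Counter
from datetime import date

# Toy line-list: one anonymized record per case (all values invented).
line_list = [
    {"confirmed": date(2020, 3, 1), "age_bucket": "20-29", "outcome": "recovered"},
    {"confirmed": date(2020, 3, 1), "age_bucket": "60-69", "outcome": "hospitalized"},
    {"confirmed": date(2020, 3, 2), "age_bucket": "30-39", "outcome": "recovered"},
]

# Aggregated case counts can be recovered from the line list...
daily_counts = Counter(case["confirmed"] for case in line_list)

# ...but the per-case detail needed for, say, age-stratified analyses
# only exists at the line-list level.
by_age = Counter(case["age_bucket"] for case in line_list)
```

Starting from `daily_counts` alone, there is no way back to `by_age`, which is why line-list data is so valuable to epidemiologists.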

Volunteers at the University of Oxford started manually curating this data, but it was spread over hundreds of websites, in dozens of formats, in multiple languages. The HealthMap team at Boston Children’s Hospital also identified early reports of COVID-19 through automated indexing of news sites and official sources. These two teams joined forces, shared the data, and published peer-reviewed findings to create a trusted resource for the global community.

Enter the Google.org Fellowship

To help the global community of researchers in this meaningful endeavour, Google.org decided to offer the support of 10 Google.org Fellows who spent 6 months working full-time on Global.health, in addition to $1.25M in grant funding. Working hand in hand with the University of Oxford and Boston Children’s Hospital, the Google.org team spoke to researchers and public health officials working on the frontline to understand real-life challenges they faced when finding and using high-quality trusted data — a tedious and manual process that often takes hours. 

Upholding data privacy is key to the platform’s design. The anonymized data used at Global.health comes from open-access authoritative public health sources, and a panel of data experts rigorously checks it to make sure it meets strict anonymity requirements. The Google.org Fellows assisted the Global.health team to design the data ingestion flow to implement best practices for data verification and quality checks to make sure that no personal data made its way into the platform. (All line-list data added to the platform is stored and hosted in Boston Children’s Hospital’s secure data infrastructure, not Google’s.)
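A minimal sketch of such an ingestion check (illustrative only; the field names are invented and this is not Global.health's actual pipeline):

```python
# Approved, anonymized schema: anything outside it is rejected at ingest.
ALLOWED_FIELDS = {"confirmed_date", "age_bucket", "location_admin1", "outcome"}

def quality_check(record):
    """Reject any record carrying fields outside the approved schema,
    so stray personal data (names, contact details) never reaches the
    platform. Returns the record unchanged when it passes."""
    unexpected = set(record) - ALLOWED_FIELDS
    if unexpected:
        raise ValueError(
            f"rejected record with unapproved fields: {sorted(unexpected)}")
    return record
```

An allow-list like this fails closed: a new or misspelled field blocks the record until a human approves it, rather than slipping through.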

Looking to the future

With the support of Google.org and The Rockefeller Foundation, Global.health has grown into an international consortium of researchers at leading universities curating the most comprehensive line-list COVID-19 database in the world.  It includes millions of anonymized records from trusted sources spanning over 100 countries, including India.

Today, Global.health helps researchers across the globe access data in a matter of minutes and a series of clicks. The flexibility of the Global.health platform means that it can be adapted to any infectious disease data and local context as new outbreaks occur. Global.health lays a foundation for researchers and public health officials to access this data no matter their location, be it New York, São Paulo, Munich, Kyoto or Nairobi.

Posted by Stephen Ratcliffe, Google.org Fellow and the Global.health team
