Tag Archives: Health

Arts and culture activities for your health and wellbeing

Our collective health and wellbeing have taken center stage as the world continues to grapple with the COVID-19 pandemic. While extraordinary advances in science have delivered vaccines and new hope, for over a year we’ve had to consider what wellbeing means on a personal and global level. We’ve also asked ourselves how best to cope in an era of tremendous stress, grief and isolation.

Many of us intuitively turned to arts and cultural activities as a source of comfort and healing. To honor World Health Day and support our recovery and resilience, we are launching a new experience: Arts + Health & Wellbeing.

Artists have always deeply understood the healing power of the arts, from music, poetry and painting to dance and design. Technological leaps in brain imaging and biomarkers are now helping scientists confirm what we’ve all sensed: art heals. Evidence shows that many forms of art can play an important role in the treatment and recovery of people living with illnesses such as cancer, Alzheimer’s and Parkinson’s disease, and PTSD. More broadly, the arts relieve stress, anxiety and depression, boost our mood and create stronger connections to ourselves and others.

Like regular exercise or a good night’s sleep, the arts are proving important to our health and wellbeing.

The best discovery is that the arts are for everyone. Regardless of experience or talent, you can enjoy their health benefits today. Take a moment to support your own health and wellbeing — start by doing “The Cultural 5” with the World Health Organization, or enjoy a daily dose of arts and culture activities below:


Opera singer Renée Fleming

1. Try these breathing exercises with soprano Renée Fleming to help increase breath capacity. For many people who have had COVID-19, breathing is a challenge that can remain difficult even after the most acute phase of the illness. Fleming shares exercises for anyone trying to regain fuller breath after illness.

Dr. Adam Perlman and boxer Ryan Garcia talk about creativity

2. Talk about mental health: Dr. Adam Perlman of the Mayo Clinic and boxer Ryan Garcia explore creativity and the role it plays in mental health and wellbeing.

Art emotions map

3. Dive into a sea of images and explore which artworks represent your emotions. Scientists from the University of California, Berkeley conducted research on the emotions evoked by artworks across time and cultures. We asked 1,300 people to describe how certain images made them feel, and plotted these feelings on an interactive map for you to explore. See how your emotions compare to others’.

For the imperfect people image

4. Watch "For the imperfect people," a spoken word video on the topic of mental health, written by students in SocialWorks’ OpenMike program in collaboration with the Johns Hopkins International Arts + Mind Lab. Learn about the science behind how spoken word and poetry can help people heal emotionally while creating community connections and reducing stress and isolation.

Slow your body down with Stille

5. Slow your body down with Stille (Silence Film), an experimental film from German director Thomas Riedelsheimer that aims to give viewers a visceral and meditative experience of silence.

Contactless Sleep Sensing in Nest Hub

People often turn to technology to manage their health and wellbeing, whether it is to record their daily exercise, measure their heart rate, or increasingly, to understand their sleep patterns. Sleep is foundational to a person’s everyday wellbeing and can be impacted by (and in turn, have an impact on) other aspects of one’s life — mood, energy, diet, productivity, and more.

As part of our ongoing efforts to support people’s health and happiness, today we announced Sleep Sensing in the new Nest Hub, which uses radar-based sleep tracking in addition to an algorithm for cough and snore detection. While not intended for medical purposes1, Sleep Sensing is an opt-in feature that can help users better understand their nighttime wellness using a contactless bedside setup. Here we describe the technologies behind Sleep Sensing and discuss how we leverage on-device signal processing to enable sleep monitoring (comparable to other clinical- and consumer-grade devices) in a way that protects user privacy.

Soli for Sleep Tracking
Sleep Sensing in Nest Hub demonstrates the first wellness application of Soli, a miniature radar sensor that can be used for gesture sensing at various scales, from a finger tap to movements of a person’s body. In Pixel 4, Soli powers Motion Sense, enabling touchless interactions with the phone to skip songs, snooze alarms, and silence phone calls. We extended this technology and developed an embedded Soli-based algorithm that could be implemented in Nest Hub for sleep tracking.

Soli consists of a millimeter-wave frequency-modulated continuous wave (FMCW) radar transceiver that emits an ultra-low power radio wave and measures the reflected signal from the scene of interest. The frequency spectrum of the reflected signal contains an aggregate representation of the distance and velocity of objects within the scene. This signal can be processed to isolate a specified range of interest, such as a user’s sleeping area, and to detect and characterize a wide range of motions within this region, ranging from large body movements to sub-centimeter respiration.
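The range-extraction step described above can be sketched numerically: in an FMCW radar, the de-chirped beat signal’s frequency is proportional to target distance, so an FFT of one chirp yields a range profile. The parameters below are toy values chosen for illustration, not Soli’s actual configuration.

```python
import numpy as np

# Toy FMCW parameters (illustrative only, not Soli's actual configuration).
c = 3e8            # speed of light, m/s
bandwidth = 6e9    # chirp sweep bandwidth, Hz
chirp_time = 1e-3  # chirp duration, s
fs = 2e6           # ADC sample rate, Hz
slope = bandwidth / chirp_time  # frequency sweep rate, Hz/s

def range_profile(beat_signal: np.ndarray) -> np.ndarray:
    """FFT magnitude of the windowed beat signal; each bin maps to a distance."""
    return np.abs(np.fft.rfft(beat_signal * np.hanning(len(beat_signal))))

def bin_to_distance(bin_index: int, n_samples: int) -> float:
    """Convert an FFT bin index back to target distance in meters."""
    beat_freq = bin_index * fs / n_samples
    return beat_freq * c / (2 * slope)

# Simulate a reflector 1 m away: its beat frequency is 2*R*slope/c = 40 kHz.
n = 2000
t = np.arange(n) / fs
f_beat = 2 * 1.0 * slope / c
profile = range_profile(np.cos(2 * np.pi * f_beat * t))
peak_bin = int(np.argmax(profile))
print(f"estimated range: {bin_to_distance(peak_bin, n):.2f} m")
```

Sub-centimeter respiration then shows up not as a change of range bin, but as a slow phase modulation of the signal within that bin over successive chirps.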

Soli spectrogram illustrating its ability to detect a wide range of motions, characterized as (a) an empty room (no variation in the reflected signal demonstrated by the black space), (b) large pose changes, (c) brief limb movements, and (d) sub-centimeter chest and torso displacements from respiration while at rest.

In order to make use of this signal for Sleep Sensing, it was necessary to design an algorithm that could determine whether a person is present in the specified sleeping area and, if so, whether the person is asleep or awake. We designed a custom machine-learning (ML) model to efficiently process a continuous stream of 3D radar tensors (summarizing activity over a range of distances, frequencies, and time) and automatically classify each feature into one of three possible states: absent, awake, and asleep.
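The classifier’s interface can be illustrated with a toy stand-in (the production model’s architecture is not described here): a 3D radar tensor goes in, and a probability for each of the three states comes out. The tensor dimensions and the linear model below are hypothetical.

```python
import numpy as np

STATES = ("absent", "awake", "asleep")
rng = np.random.default_rng(0)

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

class SleepStateClassifier:
    """Toy stand-in for the embedded model: maps a 3D radar tensor
    (range x frequency x time) to probabilities over three states."""

    def __init__(self, tensor_shape):
        n_features = int(np.prod(tensor_shape))
        self.w = rng.normal(0.0, 0.01, size=(n_features, len(STATES)))
        self.b = np.zeros(len(STATES))

    def predict(self, radar_tensor: np.ndarray) -> dict:
        x = radar_tensor.reshape(-1)           # flatten the 3D tensor
        return dict(zip(STATES, softmax(x @ self.w + self.b)))

clf = SleepStateClassifier(tensor_shape=(16, 32, 30))  # hypothetical dims
frame = rng.random((16, 32, 30))
probs = clf.predict(frame)
print(max(probs, key=probs.get), probs)
```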

To train and evaluate the model, we recorded more than a million hours of radar data from thousands of individuals, along with thousands of sleep diaries, reference sensor recordings, and external annotations. We then leveraged the TensorFlow Extended framework to construct a training pipeline to process this data and produce an efficient TensorFlow Lite embedded model. In addition, we created an automatic calibration algorithm that runs during setup to configure the part of the scene on which the classifier will focus. This ensures that the algorithm ignores motion from a person on the other side of the bed or from other areas of the room, such as ceiling fans and swaying curtains.
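One way such a calibration step might work, sketched here with made-up data (the production algorithm is not public): record range profiles during setup and restrict the classifier to the contiguous block of range bins showing the most motion, since motion appears as variance over time.

```python
import numpy as np

def calibrate_range_window(range_profiles: np.ndarray, window_bins: int):
    """Pick the contiguous block of range bins with the most motion.

    range_profiles: array of shape (time, range_bins) recorded during
    setup. Per-bin variance over time measures motion energy; the
    classifier is then restricted to the window with maximal energy,
    ignoring fans, curtains, and people elsewhere in the room.
    """
    variance = range_profiles.var(axis=0)
    window_energy = np.convolve(variance, np.ones(window_bins), mode="valid")
    start = int(np.argmax(window_energy))
    return start, start + window_bins

# Toy setup recording: motion concentrated in bins 20-27, noise elsewhere.
rng = np.random.default_rng(1)
profiles = rng.normal(0.0, 0.1, size=(500, 64))
profiles[:, 20:28] += np.sin(np.linspace(0, 40, 500))[:, None]
lo, hi = calibrate_range_window(profiles, window_bins=8)
print(f"classifier focuses on range bins [{lo}, {hi})")
```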

The custom ML model efficiently processes a continuous stream of 3D radar tensors (summarizing activity over a range of distances, frequencies, and time) to automatically compute probabilities for the likelihood of user presence and wakefulness (awake or asleep).

To validate the accuracy of the algorithm, we compared it to the gold standard of sleep-wake determination, the polysomnogram sleep study, in a cohort of 33 “healthy sleepers” (those without significant sleep issues, like sleep apnea or insomnia) across a broad age range (19-78 years of age). Sleep studies are typically conducted in clinical and research laboratories in order to collect various body signals (brain waves, muscle activity, respiratory and heart rate measurements, body movement and position, and snoring), which can then be interpreted by trained sleep experts to determine stages of sleep and identify relevant events. To account for variability in how different scorers apply the American Academy of Sleep Medicine’s staging and scoring rules, our study used two board-certified sleep technologists to independently annotate each night of sleep and establish a definitive ground truth.

We compared our Sleep Sensing algorithm’s outputs to the corresponding ground-truth sleep and wake labels for every 30-second epoch to compute standard performance metrics (e.g., sensitivity and specificity). While not a true head-to-head comparison, this study’s results can be compared against previously published studies in similar cohorts with comparable methodologies in order to get a rough estimate of performance. In “Sleep-wake detection with a contactless, bedside radar sleep sensing system”, we share the full details of these validation results, demonstrating sleep-wake estimation equivalent to or, in some cases, better than current clinical and consumer sleep tracking devices.
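The epoch-level metrics are straightforward to compute from aligned label sequences; this sketch treats “sleep” as the positive class, the usual convention in this literature.

```python
def sleep_wake_metrics(predicted: list, truth: list) -> dict:
    """Per-epoch sensitivity (sleep detection) and specificity (wake
    detection) from aligned 30-second epoch labels."""
    tp = sum(p == t == "sleep" for p, t in zip(predicted, truth))
    tn = sum(p == t == "wake" for p, t in zip(predicted, truth))
    fn = sum(p == "wake" and t == "sleep" for p, t in zip(predicted, truth))
    fp = sum(p == "sleep" and t == "wake" for p, t in zip(predicted, truth))
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true sleep epochs found
        "specificity": tn / (tn + fp),  # fraction of true wake epochs found
    }

# A toy night of 24 epochs (12 minutes), with one wake epoch mislabeled.
truth     = ["wake"] * 4 + ["sleep"] * 12 + ["wake"] * 2 + ["sleep"] * 6
predicted = ["wake"] * 3 + ["sleep"] * 13 + ["wake"] * 2 + ["sleep"] * 6

m = sleep_wake_metrics(predicted, truth)
print(m)
```

Because healthy sleepers spend most of the night asleep, specificity (wake detection) is typically the harder metric for sleep trackers, which is why both numbers are reported.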

Aggregate performance from previously published accuracies for detection of sleep (sensitivity) and wake (specificity) of a variety of sleep trackers against polysomnography in a variety of different studies, accounting for 3,990 nights in total. While this is not a head-to-head comparison, the performance of Sleep Sensing on Nest Hub in a population of healthy sleepers who simultaneously underwent polysomnography is added to the figure for rough comparison. The size of each circle is a reflection of the number of nights and the inset illustrates the mean±standard deviation for the performance metrics.

Understanding Sleep Quality with Audio Sensing
The Soli-based sleep tracking algorithm described above gives users a convenient and reliable way to see how much sleep they are getting and when sleep disruptions occur. However, to understand and improve their sleep, users also need to understand why their sleep is disrupted. To assist with this, Nest Hub uses its array of sensors to track common sleep disturbances, such as light level changes or uncomfortable room temperature. In addition to these, respiratory events like coughing and snoring are also frequent sources of disturbance, but people are often unaware of these events.

As with other audio-processing applications like speech or music recognition, coughing and snoring exhibit distinctive temporal patterns in the audio frequency spectrum, and with sufficient data an ML model can be trained to reliably recognize these patterns while simultaneously ignoring a wide variety of background noises, from a humming fan to passing cars. The model uses entirely on-device audio processing with privacy-preserving analysis, with no raw audio data sent to Google’s servers. A user can then opt to save the outputs of the processing (sound occurrences, such as the number of coughs and snore minutes) in Google Fit, in order to view personal insights and summaries of their night time wellness over time.

The Nest Hub displays when snoring and coughing may have disturbed a user’s sleep (top) and can track weekly trends (bottom).

To train the model, we assembled a large, hand-labeled dataset, drawing examples from the publicly available AudioSet research dataset as well as hundreds of thousands of additional real-world audio clips contributed by thousands of individuals.

Log-Mel spectrogram inputs comparing cough (left) and snore (right) audio snippets.

When a user opts in to cough and snore tracking on their bedside Nest Hub, the device first uses its Soli-based sleep algorithms to detect when a user goes to bed. Once it detects that a user has fallen asleep, it then activates its on-device sound sensing model and begins processing audio. The model works by continuously extracting spectrogram-like features from the audio input and feeding them through a convolutional neural network classifier in order to estimate the probability that coughing or snoring is happening at a given instant in time. These estimates are analyzed over the course of the night to produce a report of the overall cough count and snoring duration and highlight exactly when these events occurred.
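The nightly aggregation step might look like the following sketch, with hypothetical thresholding and event-counting rules (the production logic is not described in this post): count a cough at each rising edge of the thresholded cough probability, and total the frames classified as snoring.

```python
def summarize_night(frame_probs, frame_seconds=1.0, threshold=0.5):
    """Turn per-frame (cough_prob, snore_prob) pairs into a nightly report.

    A cough is counted at each rising edge of the thresholded cough
    signal; snoring is reported as total minutes above threshold.
    Threshold and edge rules are illustrative assumptions.
    """
    cough_count = 0
    prev_cough = False
    snore_frames = 0
    for cough_p, snore_p in frame_probs:
        coughing = cough_p >= threshold
        if coughing and not prev_cough:
            cough_count += 1          # rising edge = one cough event
        prev_cough = coughing
        snore_frames += snore_p >= threshold
    return {"coughs": cough_count,
            "snore_minutes": snore_frames * frame_seconds / 60}

# Toy night: two separate coughs, then ten minutes of snoring.
frames = ([(0.9, 0.1)] * 2 + [(0.1, 0.1)] * 5 + [(0.8, 0.1)] * 3
          + [(0.1, 0.9)] * 600)
print(summarize_night(frames))
```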

Conclusion
The new Nest Hub, with its underlying Sleep Sensing features, is a first step in empowering users to understand their nighttime wellness using privacy-preserving radar and audio signals. We continue to research additional ways that ambient sensing and the predictive ability of consumer devices could help people better understand their daily health and wellness in a privacy-preserving way.

Acknowledgements
This work involved collaborative efforts from a multidisciplinary team of software engineers, researchers, clinicians, and cross-functional contributors. Special thanks to D. Shin for his significant contributions to this technology and blogpost, and Dr. Logan Schneider, visiting sleep neurologist affiliated with the Stanford/VA Alzheimer’s Center and Stanford Sleep Center, whose clinical expertise and contributions were invaluable to continuously guide this research. In addition to the authors, key contributors to this research from Google Health include Jeffrey Yu, Allen Jiang, Arno Charton, Jake Garrison, Navreet Gill, Sinan Hersek, Yijie Hong, Jonathan Hsu, Andi Janti, Ajay Kannan, Mukil Kesavan, Linda Lei, Kunal Okhandiar‎, Xiaojun Ping, Jo Schaeffer, Neil Smith, Siddhant Swaroop, Bhavana Koka, Anupam Pathak, Dr. Jim Taylor, and the extended team. Another special thanks to Ken Mixter for his support and contributions to the development and integration of this technology into Nest Hub. Thanks to Mark Malhotra and Shwetak Patel for their ongoing leadership, as well as the Nest, Fit, Soli, and Assistant teams we collaborated with to build and validate Sleep Sensing on Nest Hub.


1 Not intended to diagnose, cure, mitigate, prevent or treat any disease or condition. 

Source: Google AI Blog



Low-Power Sleep Tracking on Android

Posted by Nick Grayson, Product Manager

Illustration of phone with moon and Android logo on screen

Android works best when it helps developers create apps that people love. That’s why we are dedicated to providing useful APIs like Activity Recognition which, with the user’s permission, can detect a user’s activities (such as whether they are biking or walking) to help apps provide contextually aware experiences.

So much of what we do relies on a good night's rest. Our phones have become great tools for making more informed decisions about our sleep. By staying informed about their sleep habits, people can make better decisions throughout the day about sleep, which affects things like concentration and mental health.

In an effort to help our users stay informed about their sleep, we are making our Sleep API publicly available.

What is the Sleep API?

The Sleep API is an Android Activity Recognition API that surfaces information about the user’s sleep. It can be used to power features like the Bedtime mode in Clock.

This sleeping information is reported in two ways:

  1. A ‘sleep confidence’, which is reported at a regular interval (up to 10 minutes)
  2. A daily sleep segment which is reported after a wakeup is detected

The API uses an on-device artificial intelligence model that takes the device’s light and motion sensors as inputs.
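Although the Sleep API itself is an Android API, the shape of its two outputs can be illustrated in a language-agnostic way: periodic confidence samples can be reduced to a sleep segment. The 0-100 confidence scale matches the API; the threshold and the single-segment assumption below are purely illustrative.

```python
from datetime import datetime, timedelta

def sleep_segment(samples, threshold=70):
    """Reduce periodic (timestamp, confidence 0-100) samples to the
    longest contiguous run at or above a confidence threshold.
    The threshold value is an illustrative assumption."""
    best = current = None
    for ts, confidence in samples:
        if confidence >= threshold:
            current = (current[0], ts) if current else (ts, ts)
            if best is None or (current[1] - current[0]) > (best[1] - best[0]):
                best = current
        else:
            current = None
    return best

# Hypothetical confidence readings reported every 10 minutes from 22:00.
start = datetime(2021, 4, 1, 22, 0)
samples = [(start + timedelta(minutes=10 * i), c)
           for i, c in enumerate([10, 30, 85, 90, 95, 92, 88, 40, 20])]
segment = sleep_segment(samples)
print(segment[0].strftime("%H:%M"), "->", segment[1].strftime("%H:%M"))
```

In practice an app never does this reduction itself: the API already delivers the daily sleep segment after a wakeup is detected, which is exactly the battery-saving centralization described below.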

As with all of our Activity Recognition APIs, the app must be granted the Physical Activity Recognition runtime permission by the user in order to detect sleep.

Why is this important for developers?

Developers spend valuable engineering time to combine sensor signals to determine when the user has started or ended activities like sleep. These detection algorithms are inconsistent between apps and when multiple apps independently and continuously check for changes in user activity, battery life suffers.

The Sleep API is a simple API that centralizes sleep detection processing in a battery-efficient manner. For this launch, we are proud to collaborate with Urbandroid, the developer of the popular alarm app Sleep as Android.

Android logo sleeping
Sleep as Android is a Swiss Army knife for getting a better night’s rest. It tracks sleep duration, regularity, phases, snoring, and more. Sleep duration is one of the most important parameters to watch for ensuring a good night’s rest. The new Sleep API gives us a fantastic opportunity to track it automatically in the most battery-efficient way imaginable.

- Sleep as Android Team



When can I start using this API?

The Sleep API is available for developers to use now as part of the latest version of Google Play Services.

This API is one step in our efforts to help our users get a better night's rest. We look forward to working more on this API and in this area in the future.

If you are interested in exploring or using this API, check out our API Documentation.

Using artificial intelligence in breast cancer screening

Every year, approximately 40 million women undergo breast-cancer screening in the U.S. using a procedure called mammography. For some, this can be a nerve-wracking experience; many wait days or weeks before a radiologist can review their scan and provide initial screening results. Between 10 and 15 percent of women must return for a second visit and undergo more scans before receiving a final diagnostic assessment – drawing out the process further. 


Together with Northwestern Medicine, Google Health is working on a new clinical research study to explore whether artificial intelligence (AI) models can help reduce the time to diagnosis, narrowing the assessment gap and improving the patient experience. 


Women who choose to take part in the study may have their mammograms reviewed by an investigational AI model that flags scans for immediate review by a radiologist if they show a higher likelihood of breast cancer. If a radiologist determines that further imaging is required, the woman will have the option to undergo this imaging on the same day. This study will evaluate whether this prioritization could reduce the amount of time that women spend waiting for a diagnostic assessment.  Women whose mammograms are not flagged will continue to have their images reviewed within regular timeframes. 
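The prioritization workflow described above amounts to a simple triage rule; this sketch uses a hypothetical suspicion score and threshold, not the study’s actual model or operating point.

```python
def triage(scans, flag_threshold=0.5):
    """Split a worklist into an immediate-review queue and a standard
    queue based on a model's suspicion score. The threshold is a
    hypothetical illustration; every scan is still read by a
    radiologist either way, only the ordering changes."""
    flagged = [s for s in scans if s["score"] >= flag_threshold]
    routine = [s for s in scans if s["score"] < flag_threshold]
    flagged.sort(key=lambda s: s["score"], reverse=True)  # most suspicious first
    return flagged, routine

scans = [{"id": "A", "score": 0.12}, {"id": "B", "score": 0.81},
         {"id": "C", "score": 0.64}, {"id": "D", "score": 0.05}]
flagged, routine = triage(scans)
print([s["id"] for s in flagged], [s["id"] for s in routine])
```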


“Through this study, Northwestern Medicine aims to improve the excellent care we deliver to our patients every day. With the use of artificial intelligence, we hope to expedite the process to diagnosis of breast cancer by identifying suspicious findings on patients’ screening examinations earlier than the standard of care,” says study principal investigator Dr. Sarah Friedewald, chief of breast imaging at Northwestern Medicine and vice chair for women's imaging in radiology at Northwestern University’s Feinberg School of Medicine. “Every patient in the study will continue to have their mammograms interpreted by a radiologist, but the artificial intelligence will flag and prioritize patients that need additional imaging, facilitating the flow of care.”


This research study with Northwestern Medicine builds on previous research, which demonstrated the potential of AI models to analyze de-identified, retrospectively collected screening mammograms with similar or better accuracy than clinicians. 


Artificial intelligence has shown great potential to improve health care outcomes; the next challenge is to demonstrate how AI can be applied in the real world. At Google Health, we’re committed to working with clinicians, patients and others to harness advances in research and ultimately bring about better and more accessible care. 

VaxCare simplifies vaccine management with Android Enterprise

Editor’s note: Today’s post is by Evan Landis, Chief Product Officer with VaxCare. The company aims to simplify vaccination for healthcare providers. VaxCare partnered with Social Mobile to create custom devices managed with Android Enterprise for its customers. 

The intense worldwide effort to vaccinate against COVID-19 has highlighted some of the core challenges that have always existed in expanding protections against preventable diseases.  

At VaxCare, our mission for more than 10 years has been to simplify vaccination programs, easing the logistical barriers to increasing vaccination rates. Our digital platform is designed to help healthcare professionals modernize their vaccination programs, reduce costs and focus on their patients. 

Android devices are central to this strategy. Recently, we partnered with Social Mobile, which designed and built bespoke, Google Mobile Services-certified devices that interface with our digital platform. The flexibility of Android Enterprise enabled us to build solutions aligned to our customer needs with simple, flexible management and security tools.

A better customer experience with Android

Social Mobile helped us create custom devices that are simple to set up, use and update, while still meeting HIPAA and HITRUST certification compliance. We were inspired by consumer-facing, point-of-sale devices and the flexibility of the Android platform to create an ideal hardware solution for our customers. 

The VaxCare Hub, our stationary, in-practice integrated device with a 13-inch touchscreen, a camera and a scanner, is the main gateway to our platform. When vaccinating patients, healthcare providers scan the dose and view the vaccine and patient information, ensuring accuracy before administering the vaccine. 

As a dedicated device tied to our service, healthcare providers always have access to quickly look up the status of their inventory and get updates on new vaccine shipments.


vaxcare hub

The VaxCare Hub, a custom device powered by Android Enterprise, is the key portal to our service.

To design for the new contexts and places where vaccines are administered, we also worked with Social Mobile to create the VaxCare Mobile Hub. This smaller dedicated Android Enterprise device also connects to our Portal service and gives healthcare providers the flexibility to get the information they need no matter where they are administering vaccines.



The VaxCare Mobile Hub helps our customers ensure accurate vaccine administration.

Having this vital information readily available in a purpose-built, rugged device has improved efficiency across our network of over 10,000 providers. Since launching the Mobile Hub device in September 2020, our providers have administered over 650,000 flu shots during the 2020 season. One partner practice saw its immunization rates increase 54 percent year over year.

Flexible management solutions

Android Enterprise provides comprehensive tools for rapid and secure device enrollment and flexible management, which we enable for our devices through Social Mobile’s Enterprise Mobility Management (EMM) platform, Mambo.  

With zero-touch enrollment, we enable a quick and simple device startup experience for customers. After unboxing and powering on the device, it’s automatically enrolled and configured for use with our application. Devices are managed in lock task mode, which locks a device to a specific set of apps, so customers are always connected to our VaxCare Portal.

Security and privacy are critical to any healthcare setting. As a device with Google Mobile Services, the VaxCare Hub and Mobile Hub use Android multi-layered security to continually monitor and protect critical data. We have confidence in the platform security features to ensure we meet the security and privacy promise we make to our customers.

Help for a vaccine surge

With Android Enterprise, we’ve set ourselves up to scale as we see increased demand for vaccines and offerings like VaxCare. We’ve been able to quickly bring support online for our partners in the public phase of the COVID-19 vaccine rollout, and we’ve optimized our platform to help any of our providers who enroll in a public vaccination program manage inventory, record-keeping and billing.

As we continue our mission of helping the healthcare community more simply deliver vaccines, we’re confident that Android and Social Mobile’s custom solutions will continue to be a major component of our hardware and software strategy to support the healthcare community.

How anonymized data helps fight against disease

Data has always been a vital tool in understanding and fighting disease — from Florence Nightingale’s hand-drawn illustrations from the 1800s, which showed how poor sanitation contributed to preventable diseases, to the first open source repository of data developed in response to the 2014 Ebola crisis in West Africa. When the first cases of COVID-19 were reported in Wuhan, data again became one of the most critical tools to combat the pandemic.

A group of researchers who documented the initial outbreak quickly joined forces and started collecting data that could help epidemiologists around the world model the trajectory of the novel coronavirus outbreak. The researchers came from the University of Oxford, Tsinghua University, Northeastern University and Boston Children’s Hospital, among others.

However, their initial workflow was not designed for the exponential rise in cases. The researchers turned to Google.org for help. As part of Google’s $100 million contribution to COVID relief, Google.org granted $1.25 million in funding and provided a team of 10 full-time Google.org Fellows and 7 part-time Google volunteers to assist with the project.

Google volunteers worked with the researchers to create Global.health, a scalable and open-access platform that pulls together millions of anonymized COVID-19 cases from over 100 countries. This platform helps epidemiologists around the world model the trajectory of COVID-19, and track its variants and future infectious diseases. 

The need for trusted and anonymized case data

When an outbreak occurs, timely access to organized, trustworthy and anonymized data is critical for public health leaders to inform early policy decisions, medical interventions and allocations of resources — all of which can slow disease spread and save lives. The insights derived from “line-list” data (i.e., anonymized case-level information), as opposed to aggregated data such as case counts, are essential for epidemiologists to perform more detailed statistical analyses and model the effectiveness of interventions.
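To make the distinction concrete, here is a small Python sketch with made-up field names (not the actual Global.health schema) showing an analysis that line-list rows support but a bare case count cannot:

```python
from statistics import median

# Hypothetical line-list records: one anonymized row per case.
# Field names here are illustrative, not the real Global.health schema.
line_list = [
    {"age": 34, "onset_to_confirm_days": 3, "outcome": "recovered"},
    {"age": 71, "onset_to_confirm_days": 9, "outcome": "died"},
    {"age": 52, "onset_to_confirm_days": 5, "outcome": "recovered"},
    {"age": 66, "onset_to_confirm_days": 7, "outcome": "recovered"},
]

# Aggregated data collapses everything into a single number...
total_cases = len(line_list)

# ...while case-level rows allow stratified analysis, e.g. the median
# delay from symptom onset to confirmation among cases aged 60 and over.
delays_60_plus = [r["onset_to_confirm_days"] for r in line_list if r["age"] >= 60]
median_delay = median(delays_60_plus)
```

An aggregate count could never answer the age-stratified question; the row-level structure is what makes interventions modellable.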

Volunteers at the University of Oxford started manually curating this data, but it was spread over hundreds of websites, in dozens of formats, in multiple languages. The HealthMap team at Boston Children’s Hospital also identified early reports of COVID-19 through automated indexing of news sites and official sources. These two teams joined forces, shared the data, and published peer-reviewed findings to create a trusted resource for the global community.

Enter the Google.org Fellowship

To help the global community of researchers in this meaningful endeavor, Google.org decided to offer the support of 10 Google.org Fellows who spent six months working full-time on Global.health, in addition to $1.25M in grant funding. Working hand in hand with the University of Oxford and Boston Children’s Hospital, the Google.org team spoke to researchers and public health officials working on the frontline to understand the real-life challenges they faced when finding and using high-quality, trusted data — a tedious and manual process that often takes hours.

Upholding data privacy is key to the platform’s design. The anonymized data used at Global.health comes from open-access authoritative public health sources, and a panel of data experts rigorously checks it to make sure it meets strict anonymity requirements. The Google.org Fellows assisted the Global.health team to design the data ingestion flow to implement best practices for data verification and quality checks to make sure that no personal data made its way into the platform. (All line-list data added to the platform is stored and hosted in Boston Children’s Hospital’s secure data infrastructure, not Google’s.)
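A minimal sketch of what such an ingestion check might look like — the field names and rules are hypothetical stand-ins, not the real Global.health pipeline, which is considerably more involved:

```python
# Hypothetical schema: fields a line-list record may carry, and fields
# that would constitute personal data and must be rejected outright.
ALLOWED_FIELDS = {"age_range", "sex", "location_admin1", "confirmation_date", "outcome"}
DISALLOWED_FIELDS = {"name", "address", "phone", "email", "date_of_birth"}

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record can be ingested."""
    problems = []
    for field in record:
        if field in DISALLOWED_FIELDS:
            problems.append(f"personal data field rejected: {field}")
        elif field not in ALLOWED_FIELDS:
            problems.append(f"unknown field rejected: {field}")
    return problems

clean = validate_record({"age_range": "60-69", "outcome": "recovered"})
dirty = validate_record({"age_range": "60-69", "name": "J. Doe"})
```

Running every incoming record through a gate like this — allow-listing known anonymized fields rather than block-listing known personal ones alone — is one common way to keep personal data from ever reaching the platform.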

Looking to the future

With the support of Google.org and The Rockefeller Foundation, Global.health has grown into an international consortium of researchers at leading universities curating the most comprehensive line-list COVID-19 database in the world.  It includes millions of anonymized records from trusted sources spanning over 100 countries, including India.

Today, Global.health helps researchers across the globe access data in a matter of minutes and a series of clicks. The flexibility of the Global.health platform means that it can be adapted to any infectious disease data and local context as new outbreaks occur. Global.health lays a foundation for researchers and public health officials to access this data no matter their location, be it New York, São Paulo, Munich, Kyoto or Nairobi.

Posted by Stephen Ratcliffe, Google.org Fellow and the Global.health team

Our Care Studio pilot is expanding to more clinicians

Healthcare professionals are healers, not data clerks. Yet many clinicians spend half their day on a computer navigating electronic health records (EHRs) and other systems. Because health records are often scattered across multiple systems, getting a full picture of a patient’s health requires a great deal of clinicians’ time, energy, and resources. These gaps in patient information can contribute to less effective and efficient care. The Google Health team started to think about how we could bring Google’s experience in organizing complex information to healthcare.

Driven by this idea, we created Care Studio, a software solution that provides a comprehensive view of a patient’s records and allows clinicians to quickly search through complex patient information. Care Studio is built for clinicians and works alongside EHR systems; it streamlines workflows and supports more proactive care. We’ve been working with the healthcare organization Ascension on a pilot of Care Studio focused on data quality and product safety with a small group of clinicians based in Nashville, TN and Jacksonville, FL. The pilot is now expanding to more physicians and nurses in the clinical setting.


How Care Studio supports clinicians 

Care Studio streamlines key clinician workflows so that teams can quickly get the information they need to care for patients. It brings together patient records from the multiple EHRs an organization uses – giving clinicians a centralized view of patient data and the ability to search across these records.

We’ve honed our search capabilities based on medical terminology and clinical shorthand, so that clinicians can simply type what they're looking for into a search bar and instantly surface relevant patient record information. Still, a patient’s history can be long and complex, making important details difficult to find. Care Studio uses Google technology to display relevant information in fewer clicks. For example, Care Studio can automatically organize the medications in a patient’s history with information on dosing and when they were prescribed. The tool also makes it easy to find pertinent information, including lab results, procedure orders, medication orders and progress notes. 
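As a rough illustration of the idea — not Care Studio’s actual implementation — a shorthand-aware search can expand a query like “bp” into its clinical synonyms before matching against notes. The synonym table below is a toy assumption:

```python
# Toy synonym table mapping clinical shorthand to equivalent terms.
SYNONYMS = {
    "hr": {"hr", "heart rate", "pulse"},
    "bp": {"bp", "blood pressure"},
    "hba1c": {"hba1c", "a1c", "glycated hemoglobin"},
}

def expand(query: str) -> set:
    """Expand a query into its synonym group, or leave it as-is."""
    q = query.lower().strip()
    for terms in SYNONYMS.values():
        if q in terms:
            return terms
    return {q}

def search(notes: list, query: str) -> list:
    """Return notes that mention the query or any of its synonyms."""
    terms = expand(query)
    return [n for n in notes if any(t in n.lower() for t in terms)]

notes = ["Blood pressure 128/82 at last visit", "A1c 6.9% on 2021-01-12"]
```

Typing the shorthand “bp” then matches a note that only spells out “blood pressure”, which is the kind of behavior that saves a clinician clicks.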

Care Studio harmonizes medical data across different systems. For example, even though health systems report measurements like blood pressure or glucose levels using different units, Care Studio automatically converts them so they are easier for a clinician to understand and compare.
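Glucose is a good example: health systems report it in either mg/dL or mmol/L. A sketch of the kind of normalization involved, using the standard conversion factor of roughly 18.02 mg/dL per mmol/L (the function and its interface are illustrative, not Care Studio code):

```python
# Standard conversion factor for glucose: 1 mmol/L ≈ 18.016 mg/dL.
GLUCOSE_MMOL_TO_MGDL = 18.016

def glucose_mgdl(value: float, unit: str) -> float:
    """Normalize a glucose reading to mg/dL regardless of the source unit."""
    if unit == "mg/dL":
        return value
    if unit == "mmol/L":
        return value * GLUCOSE_MMOL_TO_MGDL
    raise ValueError(f"unsupported unit: {unit}")

# Readings from two systems using different units become directly comparable.
readings = [(5.5, "mmol/L"), (99.0, "mg/dL")]
normalized = [round(glucose_mgdl(v, u), 1) for v, u in readings]
```

After normalization, both readings sit on the same scale, so a clinician can compare them at a glance instead of converting mentally.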

Animated GIF showing various tabs in Care Studio.

Search using clinical shorthand or everyday language. All data shown is synthetic (realistic but not real) patient data.


Keeping health information private and secure 

We know healthcare data is sensitive and personal, and it’s our responsibility to keep it private and secure. Google does not own, nor do we ever sell, patient data. Data from Care Studio cannot be used for advertising. Our team designed Care Studio to adhere to industry best practices and regulations, including HIPAA.

We implement administrative, technical and physical safeguards to protect information. Patient data is encrypted and isolated in a controlled environment, separate from other customer data and consumer data. Consistent with industry best practices, we also participate in regular audits and external certifications such as ISO 27001 and SOC2/3, where auditors validate Care Studio’s processes for safeguarding customer data. With these certifications, third-party specialists make sure we follow a framework of controls for a comprehensive and continually evolving model for managing security.


Taking our next step toward clinical impact 

Based on feedback from Ascension, we've fine-tuned Care Studio so it displays relevant clinical information from their systems accurately and in a way that's useful to their physicians and nurses. Now we’re ready to expand our pilot in the clinical setting to further optimize the product for broader usage at Ascension. A select group of clinicians at facilities in Nashville, TN and Jacksonville, FL will use an early release of Care Studio alongside their existing tools during care delivery. Their feedback will help us improve the tool’s usability, make it more useful to them and integrate it better into current workflows.

Our aim is to bring Google’s experience in organizing complex information into intuitive, useful formats for the healthcare industry. As more Ascension clinicians begin using Care Studio, we look forward to supporting them in caring for their patients. 

Take a pulse on health and wellness with your phone

Mobile devices have become essential daily tools for people all over the world — from staying connected to taking pictures and accessing information. Thanks to sensors that are already built into smartphones — like your microphone, camera and accelerometer — these devices can also be helpful for daily health and wellness.

Heart rate and respiratory rate are two vital signs commonly used to assess your health and wellness. Starting next month, Google Fit will allow you to measure your heart rate and respiratory rate using just your phone’s camera. These features will be available in the Google Fit app for Pixel phones, with plans to expand to more Android devices.

An image of a phone showing how you use Google Fit to monitor your respiratory rate.

Measure and monitor respiratory rate directly in the Google Fit app.

To measure your respiratory rate, you just need to place your head and upper torso in view of your phone’s front-facing camera and breathe normally. To measure your heart rate, simply place your finger on the rear-facing camera lens. 

While these measurements aren’t meant for medical diagnosis or to evaluate medical conditions, we hope they can be useful for people using the Google Fit app to track and improve day-to-day wellness. Once the measurements are made, you can choose to save them in the app to monitor trends over time, alongside other health and wellness information.

Developed to work for more people in real-world conditions

Thanks to increasingly powerful sensors and advances in computer vision, these features let you use your smartphone’s camera to track tiny physical signals at the pixel level — like chest movements to measure your respiratory rate and subtle changes in the color of your fingers for your heart rate.
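As a toy illustration of the principle — not Google Fit’s algorithm, which must also filter noise and adapt to lighting and skin tone — a pulse rate can be read off a clean periodic brightness signal by timing its upward zero crossings:

```python
import math

FPS = 30            # camera frames per second
DURATION_S = 10
PULSE_HZ = 1.1      # synthetic "true" pulse: 1.1 Hz = 66 beats per minute

# Stand-in for the average fingertip brightness of each camera frame.
signal = [math.sin(2 * math.pi * PULSE_HZ * (i / FPS))
          for i in range(FPS * DURATION_S)]

# Indices where the signal crosses zero going upward: one per heartbeat.
rising = [i + 1 for i in range(len(signal) - 1)
          if signal[i] < 0 <= signal[i + 1]]

# Average beat-to-beat period across the detected crossings, then to bpm.
period_s = (rising[-1] - rising[0]) / (len(rising) - 1) / FPS
bpm = 60 / period_s
```

On this noise-free synthetic signal the estimate lands on the true 66 bpm; the hard part in practice is that the real fingertip signal is tiny and buried in noise, which is where the computer vision advances come in.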

We developed both features — and completed initial clinical studies to validate them — so they work in a variety of real-world conditions and for as many people as possible. For example, since our heart rate algorithm relies on approximating blood flow from color changes in someone’s fingertip, it has to account for factors such as lighting, skin tone, age and more in order to work for everyone. 

With continued advances in hardware and software, sometimes the device that could be most helpful to your health and wellness is already in your pocket. Our team of researchers, engineers and clinicians is exploring how everyday devices and inexpensive sensors can give people the information and insights they need to take control of their health.

You can learn more about our work in this area by tuning in to The Check Up, a virtual event showcasing how Google is working to tackle some of the biggest challenges in health.