Extending Care Studio with a new healthcare partnership

Today at the HIMSS Conference in Orlando, Florida, we’re introducing a collaboration between Google Health and MEDITECH to jointly work on an integrated clinical solution. This partnership aims to bring the data harmonization, search and summarization capabilities from Google Health’s Care Studio product suite into MEDITECH’s electronic health record (EHR), MEDITECH Expanse.

Health information is complex — it’s often siloed or stored across different information systems and in different formats. As a result, it can be challenging for clinicians to find the information they need all in one place and quickly make sense of it to care for their patients.

Google Health’s Care Studio technology is designed to make it easier for clinicians to find critical patient information when they need it most. Built to adhere to strict privacy controls, Care Studio works alongside electronic health records (EHRs) to enhance existing workflows. Since we launched Care Studio, we’ve continued to hone our search capabilities for medical data, notes and scanned documents, and are using AI to help make sense of clinical information. We recently introduced our Conditions feature, which summarizes a patient’s conditions and uses natural language processing to link to related information — like medications or lab results — so clinicians have the context they need to understand and assess a condition.

We’re proud of what we’ve built with Care Studio so far, and we know that partnering is fundamental to improving health outcomes at scale — no one product or company can overcome these obstacles alone.

Collaboration with the healthcare industry

MEDITECH has made significant commitments to advancing interoperability — a commitment we share. To best support clinicians, we need to fit into the way they work now. Collaborations with EHRs, like MEDITECH, will help us seamlessly integrate Google Health tools into existing clinical workflows, so we can help remove friction for clinicians.

With MEDITECH, we’re working on a deeply integrated solution to bring some of our data harmonization, search and summarization capabilities to their web-based EHR, MEDITECH Expanse. Using Google Health’s tools, MEDITECH will form a longitudinal health data layer, bringing together data from different sources into a standard format and offering clinicians a full view of patient records. With Google Health’s search functionality embedded into the EHR, clinicians can find salient information faster, and intelligent summarization can highlight critical information directly in the Expanse workflow. This will help advance healthcare data interoperability, building on MEDITECH’s vision for a more connected ecosystem. Our collaboration expands on the partnership between MEDITECH and Google Cloud and will utilize Google Cloud’s infrastructure.

The healthcare industry is at an inflection point when it comes to interoperability. As COVID accelerated the need for interoperable systems, more organizations were eager to embrace Fast Healthcare Interoperability Resources (FHIR) as the standard format for healthcare data. We’re using FHIR to support our data harmonization, yet there is more to be done before FHIR is widely adopted and systems can effectively exchange information. We’re hopeful that collaborative approaches, much like what we’re working on with MEDITECH, will help create more interoperable solutions and facilitate an open ecosystem of data interoperability that benefits everyone.
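To make the idea of a standard format concrete: FHIR expresses clinical data as small JSON "resources" with standardized fields and coding systems. The sketch below is a simplified, illustrative FHIR-style Observation for a hemoglobin A1c lab result (the LOINC code is real; the patient reference and value are invented, and this is not drawn from the MEDITECH integration):

```python
import json

# A minimal, illustrative FHIR R4 Observation resource. Because every
# system that adopts FHIR emits the same structure, a consumer can read
# this lab result without knowing which EHR produced it.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "4548-4",
            "display": "Hemoglobin A1c/Hemoglobin.total in Blood",
        }]
    },
    "subject": {"reference": "Patient/example"},
    "valueQuantity": {"value": 6.2, "unit": "%"},
}

# Round-trip through JSON, as it would travel between systems.
decoded = json.loads(json.dumps(observation))
print(decoded["valueQuantity"]["value"])
```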

Upholding our privacy commitments

As we deepen our partnerships across the healthcare industry, privacy and security remain our top priorities. As with all our Google Health Care Studio partners, Google does not own or ever sell patient data, and patient data cannot be used for advertising. Our tools are designed to adhere to industry best practices and regulations, including HIPAA. Patient data is encrypted and isolated in a controlled access environment, separate from other Google customer data and consumer data.

Industry collaboration is a critical path to overcoming pervasive data fragmentation challenges. While we’re in the early stages with MEDITECH, this new collaboration marks an exciting step forward in creating a more open healthcare ecosystem and improving health outcomes.

Join us at HIMSS at 3:00 p.m. EST today, in room WF3 to learn more.

Take a look at Conditions, our new feature in Care Studio

At Google Health, we’re always thinking about how we can make our tools most useful for clinicians. This includes Care Studio, our clinical software that harmonizes healthcare data from different sources to give clinicians a comprehensive view of a patient’s records.

Today, at the ViVE Conference in Miami Beach, we previewed Conditions, a new Care Studio feature that helps clinicians make even better sense of patient records.

Instant insights for clinicians

Getting a holistic summary of a patient's medical history can be challenging as key clinical insights are often buried in unstructured notes and data silos. With Conditions, we use our deep understanding of data to provide a quick and concise summary of a patient’s medical conditions along with critical context from clinical notes. Conditions are organized by acuity, so a clinician can quickly determine if a patient’s condition is acute or chronic.

We also provide easy access to information related to a condition — including labs, medications, reports, specialist notes and more — to help clinicians manage and treat a condition. So if a clinician clicks on a condition, like diabetes, they may see blood sugar levels, insulin administrations, endocrinology consult notes and retinopathy screening studies. And if critical information is missing, we will highlight its absence from the chart. For example, we’d flag if standard labs for a patient with diabetes are missing, like hemoglobin A1c results. With these resources, a clinician can quickly understand a new patient’s medical history or easily review an existing patient’s insulin regimen before their appointment.

Bringing natural language processing to medical data

Healthcare data is structured in numerous ways, making it difficult to organize. Clinical notes may be written differently and stored across different systems. Notes also differ based on whether content is meant for clinical decision making, billing or regulatory uses. Further, clinicians use different abbreviations or acronyms depending on their personal preference, the health system they’re a part of, their region and other factors. All of this has made it difficult to synthesize clinical data — until now.

The Conditions feature works by algorithmically understanding medical concepts from notes that may be written in incomplete sentences, shorthand or with misspelled words. We use Google’s advances in AI in an area called natural language processing (NLP) to understand the actual context in which a condition is mentioned and map these concepts to a vocabulary of tens of thousands of medical conditions. For example: One clinician might write “multiple sclerosis exacerbation” while another might document the same problem as “MS flare”. Care Studio is able to recognize that these different terms are linked to the same condition, and supported by the same evidence.

Similarly, Care Studio understands that the statement “Patient has a history of dm” means that diabetes mellitus (dm) is present, and that in the statement “Pneumonia is not likely at this time”, pneumonia is absent.
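As a toy illustration of the two ideas above — normalizing variant terms to one canonical condition, and detecting negation — here is a hand-written sketch. The dictionary and cue list are invented stand-ins; Care Studio’s actual system uses learned NLP models over a vocabulary of tens of thousands of conditions, not a lookup table:

```python
import re

# Hypothetical synonym table mapping surface terms to canonical conditions.
SYNONYMS = {
    "multiple sclerosis exacerbation": "multiple sclerosis",
    "ms flare": "multiple sclerosis",
    "dm": "diabetes mellitus",
    "pneumonia": "pneumonia",
}

# Hypothetical negation cues; real negation detection is contextual.
NEGATION_CUES = ("not likely", "denies", "negative for", "ruled out")

def extract_condition(note):
    """Return (canonical_condition, is_present) or None if nothing matches."""
    text = note.lower()
    for term, canonical in SYNONYMS.items():
        if re.search(r"\b" + re.escape(term) + r"\b", text):
            negated = any(cue in text for cue in NEGATION_CUES)
            return canonical, not negated
    return None

print(extract_condition("Patient has a history of dm"))
print(extract_condition("Pneumonia is not likely at this time"))
print(extract_condition("MS flare, started steroids"))
```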

Care Studio then ranks each condition to determine its importance using various factors — such as the condition itself, its frequency, recency and more — to elevate the most important conditions to the top. Finally, based on input from medical specialists and clinicians on the Google team, Care Studio organizes conditions to support clinical thinking and decision making. For instance, acute conditions are highlighted, and related conditions are presented next to each other.
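A minimal sketch of this kind of ranking, using only frequency, recency and acuity. The scoring weights and chart data below are entirely invented for illustration; the real system weighs many more signals chosen with input from clinicians:

```python
from dataclasses import dataclass

@dataclass
class Condition:
    name: str
    mentions: int          # how often the condition appears in the chart
    days_since_last: int   # recency of the most recent mention
    acute: bool

def score(c: Condition) -> float:
    # Hypothetical weights: frequent, recent and acute conditions rank higher.
    recency = 1.0 / (1 + c.days_since_last)
    return 0.5 * c.mentions + 10.0 * recency + (5.0 if c.acute else 0.0)

chart = [
    Condition("seasonal allergies", mentions=2, days_since_last=400, acute=False),
    Condition("diabetes mellitus", mentions=30, days_since_last=3, acute=False),
    Condition("pneumonia", mentions=4, days_since_last=1, acute=True),
]

ranked = sorted(chart, key=score, reverse=True)
print([c.name for c in ranked])
```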

Healthcare data is complex, and clinicians often have to manually sift through information to make sense of a patient’s conditions. We’re excited to bring this feature to clinicians in the coming months so they can instantly access the information they need all in one place to provide better care.

Machine Learning for Mechanical Ventilation Control

Mechanical ventilators provide critical support for patients who have difficulty breathing or are unable to breathe on their own. They see frequent use in scenarios ranging from routine anesthesia to neonatal intensive care and life support during the COVID-19 pandemic. A typical ventilator consists of a compressed air source, valves to control the flow of air into and out of the lungs, and a "respiratory circuit" that connects the ventilator to the patient. In some cases, a sedated patient may be connected to the ventilator via a tube inserted through the trachea to their lungs, a process called invasive ventilation.

A mechanical ventilator takes breaths for patients who are not fully capable of doing so on their own. In invasive ventilation, a controllable, compressed air source is connected to a sedated patient via tubing called a respiratory circuit.

In both invasive and non-invasive ventilation, the ventilator follows a clinician-prescribed breathing waveform based on a respiratory measurement from the patient (e.g., airway pressure, tidal volume). In order to prevent harm, this demanding task requires both robustness to differences or changes in patients' lungs and adherence to the desired waveform. Consequently, ventilators require significant attention from highly-trained clinicians in order to ensure that their performance matches the patients’ needs and that they do not cause lung damage.

Example of a clinician-prescribed breathing waveform (orange) in units of airway pressure and the actual pressure (blue), given some controller algorithm.

In “Machine Learning for Mechanical Ventilation Control”, we present exploratory research into the design of a deep learning–based algorithm to improve medical ventilator control for invasive ventilation. Using signals from an artificial lung, we design a control algorithm that measures airway pressure and computes necessary adjustments to the airflow to better and more consistently match prescribed values. Compared to other approaches, we demonstrate improved robustness and better performance while requiring less manual intervention from clinicians, which suggests that this approach could reduce the likelihood of harm to a patient’s lungs.

Current Methods
Today, ventilators are controlled with methods belonging to the PID family (i.e., Proportional, Integral, Derivative), which control a system based on the history of errors between the observed and desired states. A PID controller combines three terms for ventilator control: proportional (“P”) — the current error between the measured and target pressure; integral (“I”) — the accumulation of previous errors; and derivative (“D”) — the rate of change of the error, estimated from the difference between two previous measurements. Variants of PID have been used since the 17th century and today form the basis of many controllers in both industrial (e.g., controlling heat or fluids) and consumer (e.g., controlling espresso pressure) applications.
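The three terms can be sketched in a few lines of code. The controller below is a standard discrete-time PID; the one-line "lung" it drives (pressure rate proportional to the control signal) and all gains are invented for illustration and are far simpler than real airway dynamics:

```python
class PID:
    """Minimal discrete-time PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, target, measured):
        error = target - measured                      # P: current error
        self.integral += error * self.dt               # I: accumulated error
        derivative = (                                 # D: rate of change
            0.0 if self.prev_error is None
            else (error - self.prev_error) / self.dt
        )
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy plant toward a target pressure of 20 cmH2O.
pid = PID(kp=0.8, ki=0.5, kd=0.05, dt=0.01)
pressure = 0.0
for _ in range(2000):                  # 20 seconds of simulated time
    valve = pid.update(20.0, pressure)
    pressure += valve * 0.01           # toy plant: pressure rises with flow
```

Tuning `kp`, `ki` and `kd` for a given patient is exactly the balancing act described in the next paragraph.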

PID control forms a solid baseline, relying on the sharp reactivity of P control to rapidly increase lung pressure when breathing in and the stability of I control to hold the breath in before exhaling. However, operators must tune the ventilator for specific patients, often repeatedly, to balance the “ringing” of overzealous P control against the ineffectually slow rise in lung pressure of dominant I control.

Current PID methods are prone to over- and then under-shooting their target (ringing). Because patients differ in their physiology and may even change during treatment, highly-trained clinicians must constantly monitor and adjust existing methods to ensure such violent ringing as in the above example does not occur.

To more effectively balance these characteristics, we propose a neural network–based controller to create a set of control signals that are broader and more adaptable than PID-generated controls.

A Machine-Learned Ventilator Controller
While one could tune the coefficients of a PID controller (either manually or via an exhaustive grid search) through a limited number of repeated trials, it is impossible to apply such a direct approach to a deep controller, as deep neural networks (DNNs) are often parameter-rich and require significant training data. Similarly, popular model-free approaches, such as Q-Learning or Policy Gradient, are data-intensive and therefore unsuitable for the physical system at hand. Further, these approaches don't take into account the intrinsic differentiability of the ventilator dynamical system, which is deterministic, continuous and contact-free.

We therefore adopt a model-based approach, where we first learn a DNN-based simulator of the ventilator-patient dynamical system. An advantage of learning such a simulator is that it provides a more accurate data-driven alternative to physics-based models, and can be more widely distributed for controller research.

To train a faithful simulator, we built a dataset by exploring the space of controls and the resulting pressures, while balancing against physical safety, e.g., not over-inflating a test lung and causing damage. Though PID control can exhibit ringing behavior, it performs well enough to use as a baseline for generating training data. To safely explore and to faithfully capture the behavior of the system, we use PID controllers with varied control coefficients to generate the control-pressure trajectory data for simulator training. Further, we add random deviations to the PID controllers to capture the dynamics more robustly.

We collect data for training by running mechanical ventilation tasks on a physical test lung using an open-source ventilator designed by Princeton University's People's Ventilator Project. We built a ventilator farm housing ten ventilator-lung systems on a server rack, which captures multiple airway resistance and compliance settings that span a spectrum of patient lung conditions, as required for practical applications of ventilator systems.

We use a rack-based ventilator farm (10 ventilators / artificial lungs) to collect training data for a ventilator-lung simulator. Using this simulator, we train a DNN controller that we then validate on the physical ventilator farm.

The true underlying state of the dynamical system is not available to the model directly, but rather only through observations of the airway pressure in the system. In the simulator we model the state of the system at any time as a collection of previous pressure observations and the control actions applied to the system (up to a limited lookback window). These inputs are fed into a DNN that predicts the subsequent pressure in the system. We train this simulator on the control-pressure trajectory data collected through interactions with the test lung.
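A heavily simplified stand-in for this setup is sketched below. The "lung" is a toy first-order system with noise, the "DNN" is replaced by a linear model trained with SGD, and all constants are invented; only the data layout — a lookback window of past pressures and controls predicting the next pressure — mirrors the real simulator:

```python
import random

random.seed(0)

def toy_lung(controls):
    """Invented stand-in dynamics; real data comes from physical test lungs."""
    p, trace = 0.0, []
    for u in controls:
        p = 0.9 * p + 0.5 * u + random.gauss(0.0, 0.01)
        trace.append(p)
    return trace

controls = [random.uniform(0.0, 1.0) for _ in range(2000)]
pressures = toy_lung(controls)

H = 3                       # lookback window length
weights = [0.0] * (2 * H)   # H past pressures + H controls
bias, lr = 0.0, 0.01

def features(t):
    # Past pressures, plus controls up to and including the one applied at t.
    return pressures[t - H:t] + controls[t - H + 1:t + 1]

for _ in range(20):                        # SGD over the trajectory
    for t in range(H, len(pressures)):
        x = features(t)
        err = bias + sum(w * xi for w, xi in zip(weights, x)) - pressures[t]
        bias -= lr * err
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]

# One-step prediction error of the learned "simulator".
errors = [abs(bias + sum(w * xi for w, xi in zip(weights, features(t)))
              - pressures[t]) for t in range(H, len(pressures))]
mae = sum(errors) / len(errors)
```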

The performance of the simulator is measured via the sum of deviations of the simulator’s predictions (under self-simulation) from the ground truth.

While it is infeasible to compare real dynamics with their simulated counterparts over all possible trajectories and control inputs, we measure the distance between simulation and the known safe trajectories. We introduce some random exploration around these safe trajectories for robustness.

Having learned an accurate simulator, we then use it to train a DNN-based controller completely offline. This approach allows us to rapidly apply updates during controller training. Furthermore, the differentiable nature of the simulator allows for the stable use of the direct policy gradient, where we analytically compute the gradient of the loss with respect to the DNN parameters.  We find this method to be significantly more efficient than model-free approaches.
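A tiny worked analogue of the direct policy gradient: take a known differentiable "simulator" p′ = a·p + b·u, a one-parameter "policy" u = k·(target − p), unroll the simulator, and compute the analytic gradient of the tracking loss with respect to k by forward-mode differentiation. The real work replaces both pieces with DNNs and uses automatic differentiation, so treat every number below as illustrative only:

```python
a, b = 0.9, 0.5          # assumed simulator coefficients
target, steps = 20.0, 50

def loss_and_grad(k):
    p, g = 0.0, 0.0       # pressure p and its derivative g = dp/dk
    loss, grad = 0.0, 0.0
    for _ in range(steps):
        e = target - p
        loss += e * e
        grad += -2.0 * e * g          # d(e^2)/dk, since de/dk = -g
        # p_next = a*p + b*k*e; differentiate w.r.t. k before updating p
        g = a * g + b * (e - k * g)
        p = a * p + b * k * e
    return loss, grad

k = 0.0
for _ in range(100):
    _, grad = loss_and_grad(k)
    k = min(max(k - 1e-6 * grad, 0.0), 3.0)  # small step, kept in a safe range
```

Because the gradient is exact rather than estimated from rollouts, each update is cheap and stable — the property the text credits for the method's sample efficiency.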

Results
To establish a baseline, we run an exhaustive grid of PID controllers for multiple lung settings and select the best performing PID controller as measured by average absolute deviation between the desired pressure waveform and the actual pressure waveform. We compare these to our controllers and provide evidence that our DNN controllers are better performing and more robust.
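The baseline-selection procedure can be sketched as a coarse grid search: simulate each candidate gain pair against a target waveform and keep the pair with the lowest MAE. The toy plant, the constant target and the grid values below are invented for illustration:

```python
def mae_for_gains(kp, ki, target=20.0, dt=0.01, steps=1000):
    """Mean absolute tracking error of a PI controller on a toy lung."""
    p, integral, total = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = target - p
        integral += e * dt
        u = kp * e + ki * integral
        p += u * dt          # toy lung: pressure rate proportional to flow
        total += abs(target - p)
    return total / steps

best = min(
    ((kp, ki) for kp in (0.5, 1.0, 2.0, 4.0) for ki in (0.0, 0.5, 1.0)),
    key=lambda gains: mae_for_gains(*gains),
)
print(best)
```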

  1. Breathing waveform tracking performance:

    We compare the best PID controller for a given lung setting against our controller trained on the learned simulator for the same setting. Our learned controller shows a 22% lower mean absolute error (MAE) between target and actual pressure waveforms.

    Comparison of the MAE between target and actual pressure waveforms (lower is better) for the best PID controller (orange) for a given lung setting (shown for two settings, R=5 and R=20) against our controller (blue) trained on the learned simulator for the same setting. The learned controller performs up to 22% better.
  2. Robustness:

    Further, we compare the performance of the single best PID controller across the entire set of lung settings with our controller trained on a set of learned simulators over the same settings. Our controller performs up to 32% better in MAE between target and actual pressure waveforms, suggesting that it could require less manual intervention between patients or even as a patient's condition changes.

    As above, but comparing the single best PID controller across the entire set of lung settings against our controller trained over the same settings. The learned controller performs up to 32% better, suggesting that it may require less manual intervention.

Finally, we investigated the feasibility of using model-free and other popular RL algorithms (PPO, DQN), in comparison to a direct policy gradient trained on the simulator. We find that the simulator-trained direct policy gradient achieves slightly better scores and does so with a more stable training process that uses orders of magnitude fewer training samples and a significantly smaller hyperparameter search space.

In the simulator, we find that model-free and other popular algorithms (PPO, DQN) perform approximately as well as our method.
However, these other methods take an order of magnitude more episodes to train to similar levels.

Conclusions and the Road Forward
We have described a deep-learning approach to mechanical ventilation based on simulated dynamics learned from a physical test lung. However, this is only the beginning. To make an impact on real-world ventilators, there are numerous other considerations to take into account. Most important among them are non-invasive ventilators, which are significantly more challenging due to the difficulty of distinguishing lung pressure from mask pressure. Other directions include handling spontaneous breathing and coughing. To learn more and become involved in this important intersection of machine learning and health, see an ICML tutorial on control theory and learning, and consider participating in one of our Kaggle competitions for creating better ventilator simulators!

Acknowledgements
The primary work was based in the Google AI Princeton lab, in collaboration with Cohen lab at the Mechanical and Aerospace Engineering department at Princeton University. The research paper was authored by contributors from Google and Princeton University, including: Daniel Suo, Naman Agarwal, Wenhan Xia, Xinyi Chen, Udaya Ghai, Alexander Yu, Paula Gradu, Karan Singh, Cyril Zhang, Edgar Minasyan, Julienne LaChance, Tom Zajdel, Manuel Schottdorf, Daniel Cohen, and Elad Hazan.

Source: Google AI Blog


Group effort: How we helped launch an NYC vaccine site

In March 2021, Google initiated a considerable project: helping to distribute COVID-19 vaccines to New Yorkers in the Chelsea neighborhood of Manhattan.

Google’s New York City office is located in Chelsea, and the neighborhood is also home to two New York City Housing Authority (NYCHA) developments, the Fulton Houses and the Elliott-Chelsea Houses. The city had identified NYCHA residents as hard-to-reach populations for vaccines — so to help, Google provided a total of more than $1 million in resources to the city to support more vaccine education and to Hudson Guild toward the creation of a local vaccination center.

“Google recognizes that equitable population vaccination is a complex problem to solve,” says Dr. Karen DeSalvo, Google’s chief health officer, “and we’re committed to doing our part.” That commitment led us to partnering with the Hudson Guild, a local nonprofit founded in 1895. Hudson Guild, which Google has worked with as a community partner for more than a decade, is a settlement house that serves 14,000 New Yorkers every year, mostly members of the Chelsea community. The nonprofit has a special relationship with local residents, and organizers and volunteers have taken their grassroots, one-on-one approach to make sure residents have the information and support they need to get vaccinated.

“The reputation you have in the community means everything. Residents who were very hesitant to get the vaccines eventually came around because we took the time to explain the science, give them reliable information and build trust,” says LeeAnn Scaduto, Hudson Guild’s chief operating officer.

From left to right: Googlers Connie Choi, David Goodman, Dale Allsopp, Duncan Wood, Hudson Guild's Daisy Mendoza, Googlers Wenjie Sun and Thomas Coleman

“We’ve been able to be a consistent force in the neighborhood,” says Daisy Mendoza, director of community building for Hudson Guild. She says the team has walked the streets of the communities daily, knocked on doors, made phone calls and even stopped by local businesses to encourage owners and workers to get vaccinated. “The residents see us every day and know we care about them,” Daisy says.

Googlers got involved in this grassroots effort, too — knocking on doors, helping in the registration efforts and serving as translators to help the vaccination site get up and running. “Many people were still isolating at that time,” says Stavroula Maliarou, a program manager at Google who helped organize the volunteer efforts. “There was a fear of being close to people who could potentially be sick. But so many Googlers showed up to help in any way they could. We know this community — and we knew they needed our help, especially at that moment.”

Of course, there were challenges. The Hudson Guild organizers said they’ve had to combat vaccine hesitancy and residents’ lack of access to technology. But they’ve overcome these obstacles by relying on community volunteers and Hudson Guild staff to share information in plain language, dispel misinformation and make the vaccination process as simple as possible for recipients.

Since April 7, 2021, Hudson Guild’s Fulton Vaccine Hub, funded in part by Google, has helped vaccinate 21,250 people, 1,700 of whom are NYCHA residents. The vaccination site has been so successful it was initially extended to October 2021, and then extended indefinitely to continue bringing vaccines and boosters to the local neighborhood.

“Without Google’s help, this isn’t something we would have been able to do — this isn’t our area of expertise,” LeeAnn says. “Google gave us the opportunity to be part of the solution in a really meaningful way for our community. This allowed us to really find a solution that worked.”

If you’re interested in learning more about Hudson Guild and helping those who live, work or go to school in Chelsea and the west side of New York City, with a focus on those in need, visit hudsonguild.org.

Ask a Techspert: What does AI do when it doesn’t know?

As humans, we constantly learn from the world around us. We experience inputs that shape our knowledge — including the boundaries of both what we know and what we don’t know.

Many of today’s machines also learn by example. However, these machines are typically trained on datasets that don’t always include the rare or out-of-the-ordinary examples that inevitably come up in real-life scenarios. What is an algorithm to do when faced with the unknown?

I recently spoke with Abhijit Guha Roy, an engineer on the Health AI team, and Ian Kivlichan, an engineer on the Jigsaw team, to hear more about using AI in real-world scenarios and better understand the importance of training it to know when it doesn’t know.

Abhijit, tell me about your recent research in the dermatology space.

We’re applying deep learning to a number of areas in health, including in medical imaging where it can be used to aid in the identification of health conditions and diseases that might require treatment. In the dermatological field, we have shown that AI can be used to help identify possible skin issues and are in the process of advancing research and products, including DermAssist, that can support both clinicians and people like you and me.

In these real-world settings, the algorithm might come up against something it's never seen before. Rare conditions, while individually infrequent, might not be so rare in aggregate. These so-called “out-of-distribution” examples are a common problem for AI systems, which can perform less well when exposed to things they haven’t seen before in training.

Can you explain what “out-of-distribution” means for AI?

Most traditional machine learning examples that are used to train AI deal with fairly unsubtle — or obvious — changes. For example, if an algorithm that is trained to identify cats and dogs comes across a car, then it can typically detect that the car — which is an “out-of-distribution” example — is an outlier. Building an AI system that can recognize the presence of something it hasn’t seen before in training is called “out-of-distribution detection,” and is an active and promising field of AI research.
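One simple way to picture out-of-distribution detection is with distances: summarize each training class by its centroid, and flag anything far from every centroid as "unknown" rather than forcing it into a class. The 2-D points, labels and threshold below are invented; real systems apply this kind of mechanism to learned high-dimensional embeddings:

```python
import math

# Invented training examples: 2-D feature vectors for two classes.
TRAIN = {
    "cat": [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "dog": [(4.0, 4.1), (3.9, 3.8), (4.2, 4.0)],
}

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

CENTROIDS = {label: centroid(pts) for label, pts in TRAIN.items()}

def classify(point, threshold=1.5):
    # Nearest centroid wins, unless even the nearest one is too far away.
    label, center = min(
        CENTROIDS.items(),
        key=lambda item: math.dist(point, item[1]),
    )
    if math.dist(point, center) > threshold:
        return "unknown"   # out-of-distribution: defer rather than guess
    return label

print(classify((1.0, 1.0)))   # near the cat cluster
print(classify((9.0, 0.5)))   # a "car": far from everything seen in training
```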

Okay, let’s go back to how this applies to AI in medical settings.

Going back to our research in the dermatology space, the differences between skin conditions can be much more subtle than distinguishing a car from a dog or a cat, and even more subtle than distinguishing a previously unseen “pick-up truck” from a “truck”. As such, the out-of-distribution detection task in medical AI demands even more of our focused attention.

This is where our latest research comes in. We trained our algorithm to recognize even the most subtle of outliers (a so-called “near-out of distribution” detection task). Then, instead of the model inaccurately guessing, it can take a safer course of action — like deferring to human experts.

Ian, you’re working on another area where AI needs to know when it doesn’t know something. What’s that?

The field of content moderation. Our team at Jigsaw used AI to build a free tool called Perspective that scores comments according to how likely they are to be considered toxic by readers. Our AI algorithms help identify toxic language and online harassment at scale so that human content moderators can make better decisions for their online communities. A range of online platforms use Perspective more than 600 million times a day to reduce toxicity and the human time required to moderate content.

In the real world, online conversations — both the things people say and even the ways they say them — are continually changing. For example, two years ago, nobody would have understood the phrase “non-fungible token (NFT).” Our language is always evolving, which means a tool like Perspective doesn't just need to identify potentially toxic or harassing comments, it also needs to “know when it doesn’t know,” and then defer to human moderators when it encounters comments very different from anything it has encountered before.

In our recent research, we trained Perspective to identify comments it was uncertain about and flag them for separate human review. By prioritizing these comments, human moderators can correct more than 80% of the mistakes the AI might otherwise have made.
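One simple way to picture this kind of triage (purely illustrative, not Perspective's actual pipeline) is to treat scores nearest the decision boundary as the most uncertain and route those comments to human moderators first:

```python
def triage_comments(scored_comments, review_budget=2):
    """Split comments into a human-review queue and an auto-handled queue.

    scored_comments: list of (comment, toxicity_score) pairs, where the
    score is the model's estimated probability that readers would find
    the comment toxic. Scores near 0.5 are the ones the model is least
    sure about, so those are prioritized for human moderators.
    """
    by_uncertainty = sorted(scored_comments,
                            key=lambda item: abs(item[1] - 0.5))
    needs_review = [c for c, _ in by_uncertainty[:review_budget]]
    auto_handled = [c for c, _ in by_uncertainty[review_budget:]]
    return needs_review, auto_handled

comments = [("great point!", 0.02),
            ("you absolute clown", 0.55),
            ("this is borderline", 0.48),
            ("obvious spam abuse", 0.97)]
review, auto = triage_comments(comments)
# The two borderline comments land in `review`; the clear-cut ones
# (scores near 0 or 1) are handled automatically.
```

Spending the limited human-review budget on the model's least confident calls is what lets moderators correct the largest share of its would-be mistakes.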

What connects these two examples?

Even though the problems we're trying to solve are so different, we have more in common with the dermatology work than you'd expect at first glance.

Building AI that knows when it doesn’t know something means you can prevent certain errors that might have unintended consequences. In both cases, the safest course of action for the algorithm entails deferring to human experts rather than trying to make a decision that could lead to potentially negative effects downstream.

There are some fields where this isn’t as important and others where it’s critical. You might not care if an automated vegetable sorter incorrectly sorts a purple carrot after being trained on orange carrots, but you would definitely care if an algorithm didn’t know what to do about an abnormal shadow on an X-ray that a doctor might recognize as an unexpected cancer.

How is AI uncertainty related to AI safety?

Most of us are familiar with safety protocols in the workplace. In safety-critical industries like aviation or medicine, protocols like “safety checklists” are routine and very important in order to prevent harm to both the workers and the people they serve.

It’s important that we also think about safety protocols when it comes to machines and algorithms, especially when they are integrated into our daily workflow and aid in decision-making or triaging that can have a downstream impact.

Teaching algorithms to refrain from guessing in unfamiliar scenarios and to ask for help from human experts falls within these protocols, and is one of the ways we can reduce harm and build trust in our systems. This is something Google is committed to, as outlined in its AI Principles.

Advancing genomics to better understand and treat disease

Genome sequencing can help us better understand, diagnose and treat disease. For example, healthcare providers are increasingly using genome sequencing to diagnose rare genetic conditions, such as an elevated risk for breast cancer or pulmonary arterial hypertension, which are estimated to affect roughly 8% of the population.

At Google Health, we’re applying our technology and expertise to the field of genomics. Here are recent research and industry developments we’ve made to help quickly identify genetic diseases and improve the equity of genomic tests across ancestries. This includes an exciting new partnership with Pacific Biosciences to further advance genomic technologies in research and the clinic.

Helping identify life-threatening disease when minutes matter

Genetic diseases can cause critical illness, and in many cases, a timely identification of the underlying issue can allow for life-saving intervention. This is especially true in the case of newborns. Genetic or congenital conditions affect nearly 6% of births, but clinical sequencing tests to identify these conditions typically take days or weeks to complete.

We recently worked with the University of California Santa Cruz Genomics Institute to build a method – called PEPPER-Margin-DeepVariant – that can analyze data from Oxford Nanopore sequencers, one of the fastest commercial sequencing technologies used today. This week, the New England Journal of Medicine published a study led by the Stanford University School of Medicine detailing the use of this method to identify suspected disease-causing variants in five critical newborn intensive care unit (NICU) cases.

In the fastest cases, a likely disease-causing variant was identified less than 8 hours after sequencing began, compared to the prior fastest time of 13.5 hours. In five cases, the method influenced patient care. For example, the team quickly turned around a diagnosis of Poirier–Bienvenu neurodevelopmental disorder for one infant, allowing for timely, disease-specific treatment.

Time required to sequence and analyze individuals in the pilot study. Disease-causing variants were identified in patient IDs 1, 2, 8, 9, and 11.

Applying machine learning to maximize the potential in sequencing data

Looking forward, new sequencing instruments can lead to dramatic breakthroughs in the field. We believe machine learning (ML) can further unlock the potential of these instruments. Our new research partnership with Pacific Biosciences (PacBio), a developer of genomic sequence platforms, is a great example of how Google’s machine learning and algorithm development tools can help researchers unlock more information from sequencing data.

PacBio’s long-read HiFi sequencing provides the most comprehensive view of genomes, transcriptomes and epigenomes. Using PacBio’s technology in combination with DeepVariant, our award-winning variant detection method, researchers have been able to accurately identify diseases that are otherwise difficult to diagnose with alternative methods.

Additionally, we developed a new open source method called DeepConsensus that, in combination with PacBio’s sequencing platforms, creates more accurate reads of sequencing data. This boost in accuracy will help researchers apply PacBio’s technology to more challenges, such as the final completion of the Human Genome and assembling the genomes of all vertebrate species.

Supporting more equitable genomics resources and methods

Like other areas of health and medicine, the genomics field grapples with health equity issues that, if not addressed, could exclude certain populations. For example, the overwhelming majority of participants in genomic studies have historically been of European ancestry. As a result, the genomics resources that scientists and clinicians use to identify and filter genetic variants and to interpret the significance of these variants are not equally powerful across individuals of all ancestries.

In the past year, we’ve supported two initiatives aimed at improving methods and genomics resources for under-represented populations. We collaborated with 23andMe to develop an improved resource for individuals of African ancestry, and we worked with the UCSC Genomics Institute to develop pangenome methods, work that was recently published in Science.

In addition, we recently published two open-source methods that improve genetic discovery by more accurately identifying disease labels and improving the use of health measurements in genetic association studies.

We hope that our work developing and sharing these methods with those in the field of genomics will improve overall health and the understanding of biology for everyone. Working together with our collaborators, we can apply this work to real-world applications.

Working with the WHO to power digital health apps

Nearly 4 billion people around the world don’t have access to the essential healthcare services they need, like immunizations or pediatric care. Complicating matters, the World Health Organization (WHO) estimates a global shortage of 18 million healthcare workers by 2030 — primarily in low- and middle-income countries (LMICs).

In many countries, healthcare workers use smartphone applications to manage data specific to certain diseases like malaria and tuberculosis. However, the data is often stored across multiple applications using different data formats, making it difficult for healthcare workers to have all the information they need. Additionally, it’s difficult for healthcare providers and organizations to exchange data, so they often don’t have a holistic view of individual or community health data to inform health decisions.

To give healthcare workers access to advanced mobile digital health solutions, we’re collaborating with the WHO to build an open-source software development kit (SDK). This SDK will help Android developers around the world, including in LMICs, build secure mobile solutions using Fast Healthcare Interoperability Resources (FHIR), a global standard for healthcare data that is being widely adopted to address fragmentation and foster more patient-centered care. With Android OS powering 3 billion active devices worldwide, this collaboration provides an opportunity to support more healthcare workers on the frontlines.
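For context on what FHIR data looks like: resources are plain JSON objects with fields standardized by the FHIR specification. A minimal Patient resource, with made-up values, might look like this:

```python
import json

# A minimal FHIR (R4) Patient resource. The field names follow the
# FHIR specification; the values are invented for illustration.
patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "name": [{"family": "Okafor", "given": ["Amina"]}],
    "gender": "female",
    "birthDate": "2021-06-15",
}

# FHIR resources serialize to plain JSON, which is what makes exchanging
# data between applications and servers straightforward.
print(json.dumps(patient, indent=2))
```

Because every conforming application reads and writes the same resource shapes, data recorded in one tool can be understood by another without custom conversion code.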

Supporting developers and frontline health workers

Frontline health workers often work in areas where connectivity is unreliable. The SDK allows Android applications to run offline by storing and processing data locally, so health workers can deliver care without worrying about connectivity. When there is connectivity, the SDK will send the server the latest data collected on the device, and receive new updates to patient records.
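As a rough illustration of this offline-first pattern (not the SDK's actual API; every name below is hypothetical), a store can write to local storage immediately and queue changes for upload once connectivity returns:

```python
class OfflineRecordStore:
    """Toy offline-first store: writes always succeed against the local
    on-device copy, and changes made while offline queue up until
    sync() can reach the server."""

    def __init__(self):
        self.local = {}      # on-device copy of patient records
        self.pending = []    # changes awaiting upload

    def save(self, record_id, record):
        self.local[record_id] = record
        self.pending.append((record_id, record))

    def sync(self, upload, download):
        """Drain the upload queue, then pull server-side updates.
        `upload` and `download` stand in for the network layer."""
        while self.pending:
            upload(*self.pending.pop(0))
        for record_id, record in download():
            self.local[record_id] = record

# Offline: a health worker records a visit; it lands locally right away.
store = OfflineRecordStore()
store.save("patient-7", {"visit": "2022-03-01", "note": "follow-up"})

# Back online: queued changes go up, fresh records come down.
server = {}
store.sync(upload=lambda rid, rec: server.update({rid: rec}),
           download=lambda: [("patient-9", {"visit": "2022-02-20"})])
```

The point of the design is that care delivery never blocks on the network: the device is always the source of truth for new entries, and synchronization happens opportunistically.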

The SDK is being designed to provide healthcare workers with access to decision support tools. For example, the WHO is using the SDK to develop EmCare, an app for healthcare workers in emergency settings. This application provides clinical decision support, based on the WHO SMART Guidelines content, which ensures compliance with evidence-based recommendations at the point of care.

By providing a common set of application components — like on-device storage, data-access and search APIs — the SDK reduces the time and effort it takes to build FHIR-based, interoperable digital health applications on Android, maximizing the efforts of local developers and unlocking their potential to meet their community’s needs.

The FHIR SDK facilitates interoperability and high-quality data exchange and is designed with a high level of security. Interoperability not only opens up the ability for healthcare workers to more easily gather community health data, but also makes it possible to use high-quality data to understand health trends, better prioritize high-risk patients and deliver more patient-centered care to everyone. All data stored by apps built on the SDK is strongly encrypted, and the SDK does not send or share any data with Google.

Extending interoperability globally

The global digital health community is rallying around FHIR to help improve health data interoperability, and we are committed to helping developers everywhere safely use our SDK to build secure and interoperable digital health solutions for their communities.

We are collaborating with WHO and a group of developers to make sure the SDK meets the needs of the community. We plan to release it more widely in the coming months and look forward to supporting developers as they build digital health tools for healthcare workers everywhere.

This year, we searched for ways to stay healthy

Every day, millions of people come to Google Search to ask important questions about their wellbeing. The COVID-19 pandemic drove even more concern for our health and the health of our loved ones – and this year, searches for ways to heal reached record highs. We saw questions about vaccinations, therapists, body positivity and mental wellbeing, to name a few. Today, we launched our annual Year in Search, which takes a look back at the top-trending searches of the year. Here’s a glimpse into some of the trending searches of 2021, a year we looked for ways to feel better and heal together.

Finding resources near me

Across the world, people searched for information on COVID-19 vaccinations and testing. The top trending "near me" queries in 2021 were "covid vaccine near me" and "covid testing near me." To help people find credible, timely testing and vaccine information, we updated Google Search information panels, and worked with national and international partners to help people get vaccinated and tested.

Learning how to help

Helping ourselves and our communities was a priority for many of us. We asked questions about how to help others with anxiety and depression, and we also looked for help with our own mental wellbeing. Search interest for “therapists near me” hit record highs in 2021, and the phrase "why do I feel anxious for no reason" also hit an all-time high this year, spiking more than 400%. In addition to providing mental health resources and helplines, a quick Google Search also surfaces self-assessments to help you learn more about mental health topics like depression, anxiety, PTSD and postpartum depression.

Evaluating information effectively

Is it allergies or COVID? A sinus infection or COVID? Pfizer or Moderna? As many of us searched for health-related information online, we wanted to know whether what we found was trustworthy. Connecting people with critical, timely and authoritative health information has been a crucial part of our role over the last year, and our team is constantly working to find ways to help people everywhere find credible and actionable information to help manage their health. To help people evaluate information online, we launched a new tool called About This Result, so you can learn more about the pages you see across a range of topics. About This Result helps people evaluate the credibility of sources, and decide which results are useful for them.

Search continues to be one of the first stops people make when making decisions, big and small, about their health — and so much more. To dive deeper into some of the other trending topics that defined 2021, visit yearinsearch.google/trends.

Making healthcare options more accessible on Search

Navigating the U.S. healthcare system can be quite challenging, so it’s no wonder three in four people turn to the internet first in their search for health information. By providing timely and authoritative health information, plus relevant resources and tools on Google Search, we’re always exploring ways to help people make more informed choices about their health. Here are a few new ways we’re helping.

New ways to find insurance information on Google

In the U.S., finding a doctor who accepts your health insurance is often a top priority. When searching for a specific provider, people can check which insurance networks that provider may accept. And if they’re searching for a new provider, they can now filter on mobile for nearby providers who accept Medicare — a health plan predominantly for people over the age of 65.

Mobile image showing Accepts Medicare filter on Healthcare Business Profiles.

How providers can keep patients up to date

To help people get connected to the care they need, we’re conducting checks to ensure details of local doctors are up to date, and giving all healthcare providers the ability to update their information by claiming and updating their Google Business Profile.

We continue to expand the features and tools that doctors can use to communicate about the services they offer. After claiming their profile, health professionals can edit and update information about their hours, services, and more.

Whether it’s helping people find information to self-assess their symptoms for mental health conditions like depression or providing real-time information about nearby COVID-19 vaccine availability, we continue to explore ways to connect people around the world to relevant, actionable information so they can better manage their health.