Tag Archives: Health

Improving Genomic Discovery with Machine Learning

Each person’s genome, which collectively encodes the biochemical machinery they are born with, is composed of over 3 billion letters of DNA. However, only a small subset of the genome (~4-5 million positions) varies between two people. Nonetheless, each person’s unique genome interacts with the environment they experience to determine the majority of their health outcomes. A key method of understanding the relationship between genetic variants and traits is a genome-wide association study (GWAS), in which each genetic variant present in a cohort is individually examined for correlation with the trait of interest. GWAS results can be used to identify and prioritize potential therapeutic targets by identifying genes that are strongly associated with a disease of interest, and can also be used to build a polygenic risk score (PRS) to predict disease predisposition based on the combined influence of variants present in an individual. However, while accurate measurement of traits in an individual (called phenotyping) is essential to GWAS, it often requires painstaking expert curation and/or subjective judgment calls.

In “Large-scale machine learning-based phenotyping significantly improves genomic discovery for optic nerve head morphology”, we demonstrate how using machine learning (ML) models to classify medical imaging data can be used to improve GWAS. We describe how models can be trained for phenotypes to generate trait predictions and how these predictions are used to identify novel genetic associations. We then show that the novel associations discovered improve PRS accuracy and, using glaucoma as an example, that the improvements for anatomical eye traits relate to human disease. We have released the model training code and detailed documentation for its use on our Genomics Research GitHub repository.

Identifying genetic variants associated with eye anatomical traits
Previous work has demonstrated that ML models can identify eye diseases, skin diseases, and abnormal mammogram results with accuracy approaching or exceeding state-of-the-art methods by domain experts. Because identifying disease is a subset of phenotyping, we reasoned that ML models could be broadly used to improve the speed and quality of phenotyping for GWAS.

To test this, we chose a model that uses a fundus image of the eye to accurately predict whether a patient should be referred for assessment for glaucoma. This model uses the fundus images to predict the diameters of the optic disc (the region where the optic nerve connects to the retina) and the optic cup (a whitish region in the center of the optic disc). The ratio of the diameters of these two anatomical features (called the vertical cup-to-disc ratio, or VCDR) correlates strongly with glaucoma risk.

A representative retinal fundus image showing the vertical cup-to-disc ratio, which is an important diagnostic measurement for glaucoma.
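Concretely, the VCDR is just the ratio of the cup and disc diameters predicted by the model. Below is a minimal illustrative sketch of that final step; the function name and example values are hypothetical and not part of the published model.

```python
# Minimal sketch: compute the vertical cup-to-disc ratio (VCDR) from
# model-predicted diameters. Names and example values are illustrative only.
def vertical_cup_to_disc_ratio(cup_diameter: float, disc_diameter: float) -> float:
    """Return the VCDR given predicted vertical diameters in the same units."""
    if disc_diameter <= 0:
        raise ValueError("Disc diameter must be positive.")
    return cup_diameter / disc_diameter

# Example: a predicted cup diameter of 0.42 and disc diameter of 1.5
# (in any consistent unit) gives a VCDR of 0.28.
print(vertical_cup_to_disc_ratio(0.42, 1.5))
```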

We applied this model to predict VCDR in all fundus images from individuals in the UK Biobank, the world’s largest dataset available to researchers for health-related research in the public interest, containing extensive phenotyping and genetic data for ~500,000 pseudonymized (the UK Biobank’s standard for de-identification) individuals. We then performed GWAS in this dataset to identify genetic variants that are associated with the model-based predictions of VCDR.

Applying a VCDR prediction model trained on clinical data to generate predicted values for VCDR to enable discovery of genetic associations for the VCDR trait.
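For readers unfamiliar with GWAS mechanics, the core of the analysis is a per-variant association test between genotype and the (here, model-predicted) phenotype. The sketch below shows that idea on simulated data; it is not the pipeline used in the study, which also adjusts for covariates such as age, sex and genetic ancestry.

```python
import numpy as np
from scipy import stats

# Toy per-variant association test: regress predicted VCDR on genotype dosage
# (0, 1 or 2 copies of the alternate allele). Real GWAS tools (e.g., PLINK,
# BOLT-LMM) include covariates and corrections this sketch omits.
def gwas_single_variant(genotypes: np.ndarray, phenotype: np.ndarray):
    slope, intercept, r_value, p_value, stderr = stats.linregress(genotypes, phenotype)
    return slope, p_value

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=1_000)                # simulated dosages for one variant
phenotype = 0.05 * genotypes + rng.normal(0, 0.1, 1_000)  # simulated VCDR values
effect, p = gwas_single_variant(genotypes, phenotype)
print(f"effect size = {effect:.3f}, p = {p:.2e}")
```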

The ML-based GWAS identified 156 distinct genomic regions associated with VCDR. We compared these results to a VCDR GWAS conducted by another group on the same UK Biobank data, Craig et al. 2020, in which experts had painstakingly labeled all images for VCDR. The ML-based GWAS replicated 62 of the 65 associations found in Craig et al., indicating that the model accurately predicts VCDR in the UK Biobank images. Additionally, the ML-based GWAS discovered 93 novel associations.

Number of statistically significant GWAS associations discovered by exhaustive expert labeling approach (Craig et al., left), and by our ML-based approach (right), with shared associations in the middle.

The ML-based GWAS improves polygenic model predictions
To validate that the novel associations discovered in the ML-based GWAS are biologically relevant, we developed independent PRSes using the Craig et al. and ML-based GWAS results, and tested their ability to predict human-expert-labeled VCDR in a subset of UK Biobank as well as a fully independent cohort (EPIC-Norfolk). The PRS developed from the ML-based GWAS showed greater predictive ability than the PRS built from the expert labeling approach in both datasets, providing strong evidence that the novel associations discovered by the ML-based method influence VCDR biology, and suggesting that the improved phenotyping accuracy (i.e., more accurate VCDR measurement) of the model translates into a more powerful GWAS.

The correlation between a polygenic risk score (PRS) for VCDR generated from the ML-based approach and the exhaustive expert labeling approach (Craig et al.). In these plots, higher values on the y-axis indicate a greater correlation and therefore greater prediction from only the genetic data. [* — p ≤ 0.05; *** — p ≤ 0.001]
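A polygenic risk score is, at its simplest, a weighted sum of an individual’s allele dosages, with weights taken from the GWAS effect sizes. The sketch below illustrates that idea with simulated numbers; it skips the variant selection and shrinkage steps used in practice.

```python
import numpy as np

# Toy PRS: score = sum over variants of (dosage x GWAS effect size).
# Dosages and effect sizes below are simulated, not values from the study.
def polygenic_score(dosages: np.ndarray, effect_sizes: np.ndarray) -> np.ndarray:
    """dosages: (n_individuals, n_variants) of 0/1/2; effect_sizes: (n_variants,)."""
    return dosages @ effect_sizes

rng = np.random.default_rng(1)
dosages = rng.integers(0, 3, size=(5, 100)).astype(float)  # 5 people, 100 variants
effects = rng.normal(0, 0.01, size=100)                     # hypothetical GWAS betas
print(polygenic_score(dosages, effects))                    # one score per person
```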

As a second validation, because we know that VCDR is strongly correlated with glaucoma, we also investigated whether the ML-based PRS was correlated with individuals who had either self-reported that they had glaucoma or had medical procedure codes suggestive of glaucoma or glaucoma treatment. We found that the PRS for VCDR determined using our model predictions was also predictive of the probability that an individual had indications of glaucoma. Individuals with a PRS 2.5 or more standard deviations higher than the mean were more than 3 times as likely to have glaucoma in this cohort. We also observed that the VCDR PRS from ML-based phenotypes was more predictive of glaucoma than the VCDR PRS produced from the extensive manual phenotyping.

The odds ratio of glaucoma (self-report or ICD code) stratified by the PRS for VCDR determined using the ML-based phenotypes (in standard deviations from the mean). In this plot, the y-axis shows the probability that the individual has glaucoma relative to the baseline rate (represented by the dashed line). The x-axis shows standard deviations from the mean for the PRS. Data are visualized as a standard box plot, which illustrates values for the mean (the orange line), first and third quartiles, and minimum and maximum.
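To illustrate how a figure like “more than 3 times as likely” is derived, the sketch below standardizes a PRS and compares glaucoma odds in the tail of the distribution with the baseline odds. The data are simulated and are not intended to reproduce the published estimate.

```python
import numpy as np

# Toy odds-ratio calculation for individuals with a PRS >= 2.5 standard
# deviations above the mean, relative to the cohort baseline (simulated data).
def odds(p: float) -> float:
    return p / (1 - p)

def odds_ratio_above_cutoff(prs: np.ndarray, has_glaucoma: np.ndarray, sd_cutoff: float = 2.5) -> float:
    z = (prs - prs.mean()) / prs.std()
    high = z >= sd_cutoff
    return odds(has_glaucoma[high].mean()) / odds(has_glaucoma.mean())

rng = np.random.default_rng(2)
prs = rng.normal(size=100_000)
# Simulate a disease whose probability increases with the PRS.
has_glaucoma = rng.random(100_000) < 1 / (1 + np.exp(-(0.8 * prs - 3.5)))
print(odds_ratio_above_cutoff(prs, has_glaucoma))
```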

Conclusion
We have shown that ML models can be used to quickly phenotype large cohorts for GWAS, and that these models can increase statistical power in such studies. Although these examples were shown for eye traits predicted from retinal imaging, we look forward to exploring how this concept could generally apply to other diseases and data types.

Acknowledgments
We would like to especially thank co-author Dr. Anthony Khawaja of Moorfields Eye Hospital for contributing his extensive medical expertise. We also recognize the efforts of Professor Jamie Craig and colleagues for their exhaustive labeling of UK Biobank images, which allowed us to make comparisons with our method. Several authors of that work, as well as Professor Stuart MacGregor and collaborators in Australia and at Max Kelsen, have independently replicated these findings, and we value these scientific contributions as well.

Source: Google AI Blog


New tools to support vaccine access and distribution

While over half of U.S. adults are fully vaccinated, vaccine uptake is slowing across the country. Research shows a variety of factors are preventing people from getting vaccinated — from physical access issues, like transportation challenges and not being able to take time off work, to concerns about safety and side effects. 

To help public health officials and researchers in the U.S. reach people facing these challenges, we’re introducing new tools to better understand the vaccination needs of a community. This builds on our work of providing data, insights and tools to public health, epidemiologists, researchers and policymakers since the early days of the pandemic. 

Equitable access to vaccinations 

For some people, getting vaccinated is as simple as walking a few blocks to their local pharmacy. For others, it may be much more difficult and involve a long drive or navigating public transit. If public health officials, researchers and healthcare providers can identify areas where vaccination sites are inaccessible or hard to reach, they may be able to implement measures like pop-up vaccine sites or transportation support like ride vouchers.

Our COVID-19 Vaccination Access Dataset, which is available to the public today, calculates travel time to vaccination sites to identify areas where it may be difficult to reach a site, whether someone is walking, driving or taking public transportation. We prepared this dataset using the Google Maps Platform Directions API, the same API that powers navigation in Google Maps. This dataset does not contain any user data.
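As a rough illustration of the underlying computation, the sketch below queries the Google Maps Platform Directions API for travel time between a location and a vaccination site for a given travel mode. The API key, coordinates and error handling are placeholders; the published dataset was computed at scale and contains no user data.

```python
import requests

API_KEY = "YOUR_MAPS_PLATFORM_KEY"  # placeholder

def travel_time_seconds(origin: str, destination: str, mode: str = "walking") -> int:
    """Return estimated travel time in seconds for one origin/destination pair."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/directions/json",
        params={"origin": origin, "destination": destination, "mode": mode, "key": API_KEY},
        timeout=10,
    )
    data = resp.json()
    return data["routes"][0]["legs"][0]["duration"]["value"]

# Hypothetical example comparing modes to the same vaccination site:
# travel_time_seconds("40.7580,-73.9855", "40.7484,-73.9857", mode="transit")
```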

This dataset will help power a new Vaccine Equity Planner dashboard from Ariadne Labs, a joint center for health systems innovation at Brigham & Women’s Hospital and the Harvard T.H. Chan School of Public Health, and Boston Children’s Hospital, the primary pediatric teaching affiliate of Harvard Medical School. This dashboard integrates our dataset with data from other organizations, such as the CDC’s social vulnerability index, to identify “vaccine deserts,” or areas where people have little or no convenient access to a vaccine site, to inform interventions such as pop-up clinics or new sites.

Vaccine Equity Planner dashboard for New York and Mississippi.

Understanding vaccine information needs 

Public health organizations have been the go-to sources for authoritative information throughout the pandemic, and have provided educational campaigns about the safety, efficacy and availability of vaccines. We’ve heard from public health organizations and researchers that they want access to localized and timely data about what information their communities are seeking so they can tailor their communication to people not yet vaccinated. 

In the coming weeks, we’ll introduce a COVID-19 Vaccination Search Insights tool to help public health officials and researchers explore vaccine-related concerns and the information needs of local communities. The tool will show trends representing the relative search interests across three search categories: all vaccine information, intent to get vaccinated (such as eligibility, availability and sites), and safety and side effects. Insights will be provided at the county and zip code level and updated weekly.  

The trends are based on aggregate and anonymized Google Search data so that no user information is included. The process to anonymize the COVID-19 Vaccination Search Insights is powered by differential privacy, a technique that adds noise to the data to provide privacy guarantees while preserving the overall quality of the data. The data can be compared across different regions and over time, without sharing the absolute number of queries in any given area. 
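As a simplified illustration of the differential privacy step described above, the sketch below adds Laplace noise to aggregated query counts before converting them into relative trends, so absolute counts are never shared. The epsilon and sensitivity values are illustrative, not the parameters used for the published insights.

```python
import numpy as np

def dp_count(true_count: int, sensitivity: float = 1.0, epsilon: float = 0.5) -> float:
    """Add Laplace noise calibrated to sensitivity/epsilon (illustrative values)."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return max(0.0, true_count + noise)

weekly_counts = {"vaccine information": 1200, "intent to get vaccinated": 640, "safety and side effects": 310}
noisy = {category: dp_count(count) for category, count in weekly_counts.items()}
total = sum(noisy.values())
relative_trends = {category: value / total for category, value in noisy.items()}
print(relative_trends)  # relative interest only; no absolute query counts exposed
```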

Both tools will initially be available in English and for the U.S. As we get more feedback from public health organizations, researchers, and the community at large, we’ll evaluate expanding these tools internationally.

With these insights, we hope that public health organizations and healthcare providers can more easily and effectively reach their communities. Google will continue to do its part by providing timely and accurate vaccine information and appointment availability to people in Search and supporting organizations focused on vaccine equity. 

Source: Search


Tracking data to advance health equity


Last year, I saw COVID-19 impact the lives of some of the strongest people I know because of their race, class and zip code — especially in my hard-hit hometown of Detroit. But I wasn’t the only one who witnessed this. We’ve all heard how the pandemic has affected vulnerable communities across the country due to structural and long-standing health inequities. Even so, there was no central resource to help consolidate, visualize and understand the data on a national scale. 


Over the past year a team of Google.org Fellows and I worked with the Satcher Health Leadership Institute at Morehouse School of Medicine and a multi-disciplinary Health Equity Taskforce to understand COVID-19 health inequities. Today, we released The Health Equity Tracker (HET), a publicly available data platform that visually displays and contextualizes the health disparities facing communities of color throughout the U.S.


With $1.5 million of Google.org grant funding and over 15,000 pro bono hours donated from 18 Google.org Fellows, the HET parses through a mountain of public health data to record COVID-19 cases, deaths and hospitalizations nationwide across race and ethnicity, sex and age, as well as state and county. The tracker also measures social and systemic factors — like poverty and lack of health insurance — that exacerbate these inequities and have resulted in higher COVID-19 death rates for people of color, especially Black and Latinx communities.  

The HET allows users to compare public health data on a local and national level.
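As a small illustration of the kind of disaggregation the tracker performs, the sketch below computes COVID-19 case rates per 100,000 people by race and ethnicity within a state. The column names and numbers are hypothetical, not the HET’s actual schema or data.

```python
import pandas as pd

# Hypothetical disaggregated counts for one state (not real HET data).
df = pd.DataFrame({
    "state": ["MI", "MI", "MI"],
    "race_ethnicity": ["Black", "White", "Latinx"],
    "cases": [95_000, 310_000, 42_000],
    "population": [1_360_000, 7_470_000, 530_000],
})
df["cases_per_100k"] = df["cases"] / df["population"] * 100_000
print(df.sort_values("cases_per_100k", ascending=False))
```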

Collecting this data showed us where there are gaps in our knowledge. Public health data can be inconsistent, collected in silos or missing completely. Knowing where these blindspots are is valuable. When we’re aware of unknown or missing data, we’re able to take action toward improving data collection and reporting standards.


The tracker currently focuses on data analysis for COVID-19, but in the future we expect to be able to track additional conditions, like mental and behavioral health. And we’ll include analysis of health inequities for people with disabilities, the LGBTQ+ community and those facing socioeconomic challenges.


For me, the process of creating this during a time of devastation has helped me translate mourning into meaning. Future generations deserve more complete, accurate, and representative data that can advance health equity in times of crisis and beyond.


Watch Satcher Health Leadership Institute’s YouTube series to learn more about the Health Equity Tracker and the Google.org Fellows who worked on it.

Tackling tuberculosis screening with AI

Today we’re sharing new AI research that aims to improve screening for one of the top causes of death worldwide: tuberculosis (TB). TB infects 10 million people per year and disproportionately affects people in low-to-middle-income countries. Diagnosing TB early is difficult because its symptoms can mimic those of common respiratory diseases.

Cost-effective screening, specifically chest X-rays, has been identified as one way to improve the screening process. However, experts aren’t always available to interpret results. That’s why the World Health Organization (WHO) recently recommended the use of computer-aided detection (CAD) for screening and triaging.

To help catch the disease early and work toward eventually eradicating it, Google researchers developed an AI-based tool that builds on our existing work in medical imaging to identify potential TB patients for follow-up testing. 

A deep learning system to detect active pulmonary tuberculosis  

In a new study released this week, we found that the right deep learning system can be used to accurately identify patients who are likely to have active TB based on their chest X-ray. By using this screening tool as a preliminary step before ordering a more expensive diagnostic test, our study showed that effective AI-powered screening could save up to 80% of the cost per positive TB case detected. 

Our AI-based tool was able to accurately detect active pulmonary TB cases with false-negative and false-positive detection rates that were similar to those of 14 radiologists. This accuracy was maintained even when examining patients who were HIV-positive, a population that is at higher risk of developing TB and is challenging to screen because their chest X-rays may differ from typical TB cases.

To make sure the model worked for patients from a wide range of races and ethnicities, we used de-identified data from nine countries to train the model and tested it on cases from five countries. These findings build on our previous research that showed AI can detect common issues like collapsed lungs, nodules or fractures in chest X-rays.

Applying these findings in the real world

The AI system produces a number between 0 and 1 that indicates the risk of TB. For the system to be useful in a real-world setting, there needs to be agreement about what risk level indicates that patients should be recommended for additional testing. Calibrating this threshold can be time-consuming and expensive because administrators can only come to this number after running the system on hundreds of patients, testing these patients, and analyzing the results. 

Based on the performance of our model, our research suggests a default threshold from which any clinic could start, confident that the model will perform similarly to radiologists, making it easier to deploy this technology. From there, clinics can adjust the threshold based on local needs and resources. For example, regions with fewer resources may use a higher cut-off point to reduce the number of follow-up tests needed.
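To make the threshold discussion concrete, the sketch below applies a cut-off to simulated risk scores and reports the fraction of TB cases caught (sensitivity) alongside the fraction of all patients referred for confirmatory testing. The scores, prevalence and thresholds are simulated, not the study’s calibrated operating points.

```python
import numpy as np

def screening_stats(scores: np.ndarray, has_tb: np.ndarray, threshold: float):
    flagged = scores >= threshold
    sensitivity = flagged[has_tb].mean()   # fraction of true TB cases referred
    referral_rate = flagged.mean()         # fraction of all patients referred
    return sensitivity, referral_rate

rng = np.random.default_rng(3)
has_tb = rng.random(10_000) < 0.05                                 # simulated 5% prevalence
scores = np.clip(rng.normal(0.2 + 0.5 * has_tb, 0.15), 0.0, 1.0)   # simulated risk scores in [0, 1]
for threshold in (0.3, 0.5):  # a higher cut-off refers fewer patients for follow-up tests
    print(threshold, screening_stats(scores, has_tb, threshold))
```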

The path to eradicating tuberculosis

The WHO’s End TB Strategy lays out the global efforts that are underway to dramatically reduce the incidence of tuberculosis in the coming decade. Because TB can remain pervasive in communities, even if a relatively low number of people have it at a given time, more and earlier screenings are critical to reducing its prevalence.

We’ll keep contributing to these efforts — especially when it comes to research and development. Later this year, we plan to expand this work through two separate research studies with our partners, Apollo Hospitals in India and the Centre for Infectious Disease Research in Zambia (CIDRZ). 

Using AI to help find answers to common skin conditions

Artificial intelligence (AI) has the potential to help clinicians care for patients and treat disease — from improving the screening process for breast cancer to helping detect tuberculosis more efficiently. When we combine these advances in AI with other technologies, like smartphone cameras, we can unlock new ways for people to stay better informed about their health, too.  


Today at I/O, we shared a preview of an AI-powered dermatology assist tool that helps you understand what’s going on with issues related to your body’s largest organ: your skin, hair and nails. Using many of the same techniques that detect diabetic eye disease or lung cancer in CT scans, this tool gets you closer to identifying dermatologic issues — like a rash on your arm that’s bugging you — using your phone’s camera.

How our AI-powered dermatology tool works 

Each year we see almost ten billion Google Searches related to skin, nail and hair issues. Two billion people worldwide suffer from dermatologic issues, but there’s a global shortage of specialists. While many people’s first step involves going to a Google Search bar, it can be difficult to describe what you’re seeing on your skin through words alone.

Our AI-powered dermatology assist tool is a web-based application that we hope to launch as a pilot later this year, to make it easier to figure out what might be going on with your skin. Once you launch the tool, simply use your phone’s camera to take three images of the skin, hair or nail concern from different angles. You’ll then be asked questions about your skin type, how long you’ve had the issue and other symptoms that help the tool narrow down the possibilities. The AI model analyzes this information and draws from its knowledge of 288 conditions to give you a list of possible matching conditions that you can then research further.
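Conceptually, the final step is ranking the model’s per-condition probabilities and surfacing the top matches for the user to research. The sketch below shows that ranking step only; the condition labels and scores are invented, and the real tool combines image and symptom inputs in ways this example does not capture.

```python
import numpy as np

def top_conditions(probabilities: np.ndarray, condition_names: list, k: int = 3):
    """Return the k highest-probability (name, score) pairs."""
    order = np.argsort(probabilities)[::-1][:k]
    return [(condition_names[i], float(probabilities[i])) for i in order]

condition_names = [f"condition_{i}" for i in range(288)]          # placeholder labels
probabilities = np.random.default_rng(4).dirichlet(np.ones(288))  # simulated model output
print(top_conditions(probabilities, condition_names))
```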

For each matching condition, the tool will show dermatologist-reviewed information and answers to commonly asked questions, along with similar matching images from the web. The tool is not intended to provide a diagnosis nor be a substitute for medical advice, as many conditions require clinician review, in-person examination, or additional testing like a biopsy. Rather, we hope it gives you access to authoritative information so you can make a more informed decision about your next step.

Image of a phone showing you each step of using the AI-powered dermatology assist tool.

Based on the photos and information you provide, our AI-powered dermatology assist tool will offer suggested conditions. This product has been CE marked as a Class I medical device in the EU. It is not available in the United States.

Developing an AI model that assesses issues for all skin types 

Our tool is the culmination of over three years of machine learning research and product development. To date, we’ve published several peer-reviewed papers that validate our AI model and more are in the works. 

Our landmark study, featured in Nature Medicine, debuted our deep learning approach to assessing skin diseases and showed that our AI system can achieve accuracy that is on par with U.S. board-certified dermatologists. Our most recent paper in JAMA Network Open demonstrated how non-specialist doctors can use AI-based tools to improve their ability to interpret skin conditions.

To make sure we’re building for everyone, our model accounts for factors like age, sex, race and skin types — from pale skin that does not tan to brown skin that rarely burns. We developed and fine-tuned our model with de-identified data encompassing around 65,000 images and case data of diagnosed skin conditions, millions of curated skin concern images and thousands of examples of healthy skin — all across different demographics. 

Recently, the AI model that powers our tool successfully passed clinical validation, and the tool has been CE marked as a Class I medical device in the EU.¹ In the coming months, we plan to build on this work so more people can use this tool to answer questions about common skin issues. If you’re interested in this tool, sign up here to be notified (subject to availability in your region).

¹This tool has not been evaluated by the U.S. FDA for safety or efficacy. It is not available in the United States.

When it comes to mental health, what are we searching for?

You know that exhaustion you’re feeling — the one that no amount of espresso shots or power naps can remedy? Well, it turns out you’re not alone. 

Last month in the U.S., we saw spikes in fatigue-related Google searches, and the question “why do I feel bad?” reached a record high. There’s a collective feeling of exhaustion, and we’re all looking for ways to cope with it. Over the past year, we’ve seen an increase in searches related to meditation, virtual therapy, walking and digital detoxes.

Since this week marks the beginning of Mental Health Awareness Month in the U.S., we chatted with two of Google’s experts on the topic: Dr. David Feinberg, a psychiatrist by training and head of Google Health, and Dr. Jessica DiVento, a licensed clinical psychologist and the Chief Mental Health Advisor for YouTube. David and Jessica talk about why we’re feeling this way and what we can do about it. 


What’s going on with our collective wellbeing at this moment in time? 

Jessica: Our body’s threat detection system is working in overdrive. We’re constantly making sense of what’s happening so we know what’s causing us stress and can react to it. People don’t realize how much mental energy that takes. Even though you might not be doing much physically, it makes sense to feel fatigued. 

In the U.S., more people are getting vaccinated and guidelines are changing. Adjusting to this new routine takes a lot of cognitive processing. 

David: It's a hard transition. Our bodies are good at achieving homeostasis. I’ve become comfortable working from home, eating outside and socializing within my pod — these are abnormal things that I’ve incorporated as normal. In parts of the world, you’re telling me to go back to my old ways. Things that used to require minimal thinking — like meeting a friend for dinner — now require so much processing.


How do you expect people’s emotions to change over the coming months? 

David: Fear is when you open the door and a bear is there. Anxiety is when there’s no bear and you don’t know why you’re feeling that way. We’ve been in a constant state of both with the pandemic. Already, I’ve felt a bit of these heavy feelings lift. When I got my first shot of the vaccine at CVS, I felt some of the anxiety and fear I was carrying release — it was almost a spiritual experience.

This is a dramatic life experience. It will be part of our narrative and change how we respond to things. When a vase falls and breaks, you glue it back together. When it falls again, it usually breaks in the same spot. When there are triggers — like seeing COVID-19 case spikes in India — it brings back emotions from this collective trauma.

Jessica: As a global society, there’s a long way to go. Some of us going through the reconstruction phase will ask, “Why am I not feeling better yet?” Transitioning out of this will take time.


What have you both done to maintain your own mental health?

Jessica: We know all the things to do to minimize stress and anxiety: eat well, exercise, sleep and so on. We also know what doesn’t help. For me, that’s the overconsumption of technology. Digital wellbeing features, like Pixel’s Flip to Shhh and app timers, help me stop scrolling so I can be more present.

David: I’ve focused on my sleep. Dreams are a way to consolidate new information. I’ve measured my sleep with my Fitbit smartwatch and now with Sleep Sensing on my new Nest Hub, and have learned that eating or working out late at night negatively affects my sleep. So I’ve made adjustments.


As more people search for ways to cope, what are Google and YouTube doing to help?

David: Part of coping with anxiety is researching and taking action on the things you can control. I love seeing Google connect people to actionable information through things like our mental health self-assessments, information on vaccination and testing locations, and authoritative data about things like symptoms and guidelines to stay safe.

Jessica: The rise in searches for mental health content shows that it’s becoming okay to say that you’re not okay. The more conversations we spark and the more places we share content about mental health, the less stigma there will be. At YouTube, we work closely with experts in the mental health space to make sure there are credible and engaging videos out there. When someone searches specifically for anxiety or depression resources, we’ll show information about symptoms, treatment resources and self-assessments. And for searches that may indicate someone in crisis, we’re committed to connecting them with free 24/7 crisis support resources. Also, Fitbit recently teamed up with Deepak Chopra to create an exclusive wellness collection for its Premium members, making it easier for them to create a mindfulness practice. Things like that help make sure anyone can take care of their mental health and wellbeing. I hope that lives on past this moment.


What questions do you hope the world is searching for in the next six months?  

Jessica: I think we’ll see people searching for ways they can help others — looking at careers in counseling and epidemiology — and how they can keep leaning into wellbeing. 

David: I hope people are searching “Am I in love?” and “Why do I feel great?”


AI assists doctors in interpreting skin conditions

Globally, skin conditions affect about 2 billion people. Diagnosing and treating these skin conditions is a complex process that involves specialized training. Due to a shortage of dermatologists and long wait times to see one, most patients first seek care from non-specialists.

Typically, a clinician examines the affected areas and the patient's medical history before arriving at a list of potential diagnoses, sometimes known as a “differential diagnosis”. They then use this information to decide on the next step such as a test, observation or treatment. 

To see if artificial intelligence (AI) could improve the process, we conducted a randomized retrospective study that was published today in JAMA Network Open. The study examined whether a research tool we developed could help non-specialist clinicians — such as primary care physicians and nurse practitioners — more accurately interpret skin conditions. The tool uses Google’s deep learning system (which you can learn more about in Nature Medicine) to interpret de-identified images and medical history and provide a list of matching skin conditions.

In the study, 40 non-specialist clinicians interpreted de-identified images of patients’ skin conditions from a telemedicine dermatology service, identified the condition, and made recommendations such as biopsy or referral to a dermatologist. Each clinician examined over 1,000 cases — clinicians used the AI-powered tool for half of the cases and didn’t have access to the assistive AI tool in the other half.


Main takeaways of study: AI-assisted clinicians were better able to interpret skin conditions and more often arrive at the same diagnosis as dermatologists. 

Clinicians with AI assistance were significantly more likely to arrive at the same diagnosis as dermatologists, compared to clinicians reviewing cases without AI assistance. The chances of identifying the correct top condition improved by more than 20% on a relative basis, though the degree of improvement varied across individual clinicians.
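For clarity on what “more than 20% on a relative basis” means, here is the arithmetic with placeholder agreement rates (these are not the study’s reported numbers):

```python
baseline_top1_agreement = 0.48   # unassisted clinicians (hypothetical rate)
assisted_top1_agreement = 0.60   # AI-assisted clinicians (hypothetical rate)
relative_improvement = (assisted_top1_agreement - baseline_top1_agreement) / baseline_top1_agreement
print(f"{relative_improvement:.0%} relative improvement")  # 25% in this example
```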

We believe AI must be designed to improve care for everyone. In the study, clinicians' performance was consistently higher with AI assistance across a broad range of skin types — from pale skin that does not tan to brown skin that rarely burns. In addition to improving diagnostic ability, the AI assistance helped clinicians in the study feel more confident about their assessment and reassuringly did not increase their likelihood to recommend biopsies or referrals to dermatologists as the next appropriate step.

These research study results are promising and show that AI-based assistive tools could help non-specialist clinicians assess skin conditions. AI has shown great potential to improve health care outcomes; the next challenge is to demonstrate how AI can be applied in the real world. At Google Health, we’re committed to working with clinicians, patients and others to harness advances in research and ultimately bring about better and more accessible care. 

Dr. Ivor Horn talks about technology and health equity

Dr. Ivor Horn’s career has spanned medicine, academia and technology. Along the way she’s been focused on one thing: making sure that people get what they need out of the healthcare system and attain their fullest health potential — no matter who they are. 

She recently joined Google as the Director of Health Equity & Product Inclusion. We sat down with her to learn more about what health equity looks like, how technology can help and what she’s working on at Google. 

Where did your passion for this work come from? 

Growing up, I spent a lot of time in hospitals. When I was in the fourth grade, my dad had a head injury and developed a seizure disorder. Being Black in Mississippi, where I grew up, my mom would make sure that we all dressed up when we went to the doctor so they would recognize that my dad was someone who was cared for and who was loved — all that with the hope that we’d get better care. Living through that made me want to go to medical school so I could change the healthcare system. I didn't want other people to go through what we did. 

Once I was a practicing pediatrician, I saw patients in communities that were underserved by health care. I noticed young parents bringing their child and their flip phones into the clinic. They’d pull out their phone to show me things like a photo of their child’s rash that faded overnight. This tool helped them communicate with me more effectively, and I became interested in figuring out how we could use technology like that to improve health care more broadly. 

Can you tell us more about health inequity and the pandemic?

It’s important to remember that health inequity is the product of systemic and structural racism, particularly in the U.S. We know that people’s experience with health can be impacted by where they live, how wealthy they are, and their ethnicity or skin color. Before the pandemic, studies showed that people of color had less access to primary care, received a lower quality of treatment in places like emergency departments, and were less likely to be given additional examinations like blood tests. 

When you have a broken foundation, those cracks eventually become tremendous fissures — and that's what we saw with COVID-19. Health inequities surfaced at every level — from the lack of available protective equipment in developing countries to the higher than average death rates and infection of people of color. Health inequity has been an endemic aspect of the pandemic.

How do you even begin to solve that?

We cannot continue to build on something that's broken. Mending the cracks starts with building technology that helps those who are experiencing what's most broken about the healthcare system. If you build for that community, it will work for others — then you can transform healthcare.

This week’s news about vaccines is a great example. We’ve created virtual agents so anyone — especially those without access to the internet or people with limited tech skills — can book appointments and get critical vaccine information in whatever way they’re most comfortable with. It's available in multiple languages and modes of communication — whether that’s over the phone, through text, or on the web. We’ve also made vaccination locations available on Google Maps in the U.S. and other countries. All of this is to help reduce inequities, both in the outcomes and in the distribution of vaccines. 

But, technology has its limits; it can facilitate this work, but it’s not the complete solution. That’s why it’s important to partner with community-based organizations to reach people who might not otherwise see mainstream public service announcements or have easy access to vaccinations.

What role does Google play?

When you look at Google through the lens of health equity, so much of what we do touches people along their health journey. Research shows that roughly 7 in 10 people turn to the internet first when they’re looking for health information. We have the chance to build products that guide them to the right resources and find the information they need. 

My job is to look across all of our products to make sure we embed health equity into the DNA of everything we do. 

When tackling big problems, like health equity, what keeps you motivated? 

This generation of young people is fighting for lasting change with an energy that’s contagious. Seeing the things that we’ve worked so hard for, for so long, become the passion of a new generation makes everything I’ve done and continue to do so worth it. If I can help make the structural changes so that they can fly, I’ll count that as a win.