How anonymized data helps fight against disease

Data has always been a vital tool in understanding and fighting disease — from Florence Nightingale’s 1800s hand-drawn illustrations showing how poor sanitation contributed to preventable diseases, to the first open-source repository of data developed in response to the 2014 Ebola crisis in West Africa. When the first cases of COVID-19 were reported in Wuhan, data again became one of the most critical tools for combating the pandemic.

A group of researchers who documented the initial outbreak quickly joined forces and started collecting data that could help epidemiologists around the world model the trajectory of the novel coronavirus outbreak. The researchers came from the University of Oxford, Tsinghua University, Northeastern University and Boston Children’s Hospital, among others.

However, their initial workflow was not designed for the exponential rise in cases. The researchers turned to Google.org for help. As part of Google’s $100 million contribution to COVID-19 relief, Google.org granted $1.25 million in funding and provided a team of 10 full-time Google.org Fellows and 7 part-time Google volunteers to assist with the project.

Google volunteers worked with the researchers to create Global.health, a scalable and open-access platform that pulls together millions of anonymized COVID-19 cases from over 100 countries. This platform helps epidemiologists around the world model the trajectory of COVID-19, and track its variants and future infectious diseases. 

The need for trusted and anonymized case data

When an outbreak occurs, timely access to organized, trustworthy and anonymized data is critical for public health leaders to inform early policy decisions, medical interventions, and allocation of resources — all of which can slow disease spread and save lives. The insights derived from “line-list” data (i.e., anonymized case-level information), as opposed to aggregated data such as case counts, are essential for epidemiologists to perform more detailed statistical analyses and model the effectiveness of interventions.
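To make the distinction concrete, here is a minimal sketch in Python, using entirely made-up records rather than real Global.health data, of how line-list records differ from the aggregated counts derived from them:

```python
from collections import Counter

# Hypothetical anonymized line-list records: one row per case, no direct identifiers.
line_list = [
    {"case_id": 1, "age_bucket": "30-39", "location": "City A",
     "date_confirmed": "2020-03-02", "outcome": "recovered"},
    {"case_id": 2, "age_bucket": "60-69", "location": "City A",
     "date_confirmed": "2020-03-03", "outcome": "hospitalized"},
    {"case_id": 3, "age_bucket": "20-29", "location": "City B",
     "date_confirmed": "2020-03-03", "outcome": "recovered"},
]

# The aggregated view collapses the same information into daily case counts,
# which is far less useful for modeling the effectiveness of interventions.
aggregated = Counter(record["date_confirmed"] for record in line_list)
print(aggregated)  # Counter({'2020-03-03': 2, '2020-03-02': 1})
```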

Volunteers at the University of Oxford started manually curating this data, but it was spread over hundreds of websites, in dozens of formats, in multiple languages. The HealthMap team at Boston Children’s Hospital also identified early reports of COVID-19 through automated indexing of news sites and official sources. These two teams joined forces, shared the data, and published peer-reviewed findings to create a trusted resource for the global community.

Enter the Google.org Fellowship

To help the global community of researchers in this meaningful endeavour, Google.org decided to offer the support of 10 Google.org Fellows who spent 6 months working full-time on Global.health, in addition to $1.25M in grant funding. Working hand in hand with the University of Oxford and Boston Children’s Hospital, the Google.org team spoke to researchers and public health officials working on the frontline to understand real-life challenges they faced when finding and using high-quality trusted data — a tedious and manual process that often takes hours. 

Upholding data privacy is key to the platform’s design. The anonymized data used at Global.health comes from open-access, authoritative public health sources, and a panel of data experts rigorously checks it to make sure it meets strict anonymity requirements. The Google.org Fellows helped the Global.health team design the data ingestion flow and implement best practices for data verification and quality checks, ensuring that no personal data makes its way into the platform. (All line-list data added to the platform is stored and hosted in Boston Children’s Hospital’s secure data infrastructure, not Google’s.)

Looking to the future

With the support of Google.org and The Rockefeller Foundation, Global.health has grown into an international consortium of researchers at leading universities curating the most comprehensive line-list COVID-19 database in the world.  It includes millions of anonymized records from trusted sources spanning over 100 countries, including India.

Today, Global.health helps researchers across the globe access data in a matter of minutes and a series of clicks. The flexibility of the Global.health platform means that it can be adapted to any infectious disease data and local context as new outbreaks occur. Global.health lays a foundation for researchers and public health officials to access this data no matter their location, be it New York, São Paulo, Munich, Kyoto or Nairobi.

Posted by Stephen Ratcliffe, Google.org Fellow and the Global.health team


Sunset date for deprecated DBM API services postponed to April 15, 2021

The sunset of deprecated services in the DoubleClick Bid Manager (DBM) API, originally scheduled for February 26, 2021, has been postponed to April 15, 2021. The deprecated services scheduled for sunset include the entirety of DBM API v1, the DBM API v1.1 SDF Download service, and the DBM API v1.1 Line Item service.

Prior blog posts regarding this sunset give instructions on how to migrate from these deprecated services to either the DBM API v1.1 Reporting service or the Display & Video 360 (DV360) API. Consult these previous announcements for more information.
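As a rough illustration of one of those paths, the sketch below uses the google-api-python-client library to build a DBM API v1.1 client and list existing report queries under the v1.1 Reporting service. It assumes you already have authorized credentials with the DBM scope, and the exact resource and method names should be confirmed against the v1.1 reference documentation.

```python
from googleapiclient.discovery import build

# `credentials` is assumed to be an authorized google.oauth2 credentials object
# carrying the https://www.googleapis.com/auth/doubleclickbidmanager scope.
service = build("doubleclickbidmanager", "v1.1", credentials=credentials)

# List existing report queries through the v1.1 Reporting service rather than
# the deprecated v1 endpoints.
response = service.queries().listqueries().execute()
for query in response.get("queries", []):
    print(query["queryId"], query["metadata"]["title"])
```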

If you encounter issues with your migration or want to report a separate issue, please contact us using our support contact form.


The Technology Behind Cinematic Photos

Looking at photos from the past can help people relive some of their most treasured moments. Last December we launched Cinematic photos, a new feature in Google Photos that aims to recapture the sense of immersion felt the moment a photo was taken, simulating camera motion and parallax by inferring 3D representations in an image. In this post, we take a look at the technology behind this process, and demonstrate how Cinematic photos can turn a single 2D photo from the past into a more immersive 3D animation.

Camera 3D model courtesy of Rick Reitano.
Depth Estimation
Like many recent computational photography features such as Portrait Mode and Augmented Reality (AR), Cinematic photos requires a depth map to provide information about the 3D structure of a scene. Typical techniques for computing depth on a smartphone rely on multi-view stereo, a geometric method that solves for the depth of objects in a scene by simultaneously capturing multiple photos from different viewpoints, where the distances between the cameras are known. On Pixel phones, the views come from two cameras or dual-pixel sensors.

To enable Cinematic photos on existing pictures that were not taken in multi-view stereo, we trained a convolutional neural network with encoder-decoder architecture to predict a depth map from just a single RGB image. Using only one view, the model learned to estimate depth using monocular cues, such as the relative sizes of objects, linear perspective, defocus blur, etc.

Because monocular depth estimation datasets are typically designed for domains such as AR, robotics, and self-driving, they tend to emphasize street scenes or indoor room scenes instead of features more common in casual photography, like people, pets, and objects, which have different composition and framing. So, we created our own dataset for training the monocular depth model using photos captured on a custom 5-camera rig as well as another dataset of Portrait photos captured on Pixel 4. Both datasets included ground-truth depth from multi-view stereo that is critical for training a model.

Mixing several datasets in this way exposes the model to a larger variety of scenes and camera hardware, improving its predictions on photos in the wild. However, it also introduces new challenges, because the ground-truth depth from different datasets may differ from each other by an unknown scaling factor and shift. Fortunately, the Cinematic photo effect only needs the relative depths of objects in the scene, not the absolute depths. Thus we can combine datasets by using a scale-and-shift-invariant loss during training and then normalize the output of the model at inference.
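A minimal sketch of that idea, not the exact production loss, is to align each predicted depth map to its ground truth with the least-squares optimal scale and shift before measuring the error, and to normalize predictions to a canonical range at inference:

```python
import numpy as np

def scale_shift_invariant_loss(pred, gt, valid_mask):
    """Mean squared error after aligning `pred` to `gt` with the least-squares
    optimal scale s and shift t, computed only over valid ground-truth pixels."""
    p = pred[valid_mask].ravel()
    g = gt[valid_mask].ravel()
    # Solve min_{s,t} ||s * p + t - g||^2 in closed form.
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return np.mean((s * p + t - g) ** 2)

def normalize_depth(pred):
    """Normalize raw model output to zero median and unit scale at inference,
    which is enough when only relative depth matters."""
    d = pred - np.median(pred)
    return d / (np.mean(np.abs(d)) + 1e-6)
```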

The Cinematic photo effect is particularly sensitive to the depth map’s accuracy at person boundaries. An error in the depth map can result in jarring artifacts in the final rendered effect. To mitigate this, we apply median filtering to improve the edges, and also infer segmentation masks of any people in the photo using a DeepLab segmentation model trained on the Open Images dataset. The masks are used to pull forward pixels of the depth map that were incorrectly predicted to be in the background.
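One heuristic way to express this post-processing step, a simplification rather than the production pipeline, is to median-filter the depth map and then clamp pixels inside the person mask so they sit no deeper than the person’s typical depth:

```python
import numpy as np
from scipy.ndimage import median_filter

def refine_depth(depth, person_mask, filter_size=5):
    """Smooth depth edges and pull person pixels forward using a segmentation mask.

    depth: HxW array where smaller values are assumed to be closer to the camera.
    person_mask: HxW boolean array, True where a person was detected.
    """
    refined = median_filter(depth, size=filter_size)
    if person_mask.any():
        # Person pixels wrongly pushed into the background are clamped to the
        # person's median depth, pulling them forward toward the subject.
        person_depth = np.median(refined[person_mask])
        refined[person_mask] = np.minimum(refined[person_mask], person_depth)
    return refined
```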

Camera Trajectory
There can be many degrees of freedom when animating a camera in a 3D scene, and our virtual camera setup is inspired by professional video camera rigs to create cinematic motion. Part of this is identifying the optimal pivot point for the virtual camera’s rotation in order to yield the best results by drawing one’s eye to the subject.

The first step in 3D scene reconstruction is to create a mesh by extruding the RGB image onto the depth map. In doing so, neighboring points in the mesh can end up with large depth differences. While this is not noticeable from the “face-on” view, the more the virtual camera is moved, the more likely it is to see polygons spanning large changes in depth. In the rendered output video, this will look like the input texture is stretched. The biggest challenge when animating the virtual camera is to find a trajectory that introduces parallax while minimizing these “stretchy” artifacts.

The parts of the mesh with large depth differences become more visible (red visualization) once the camera is away from the “face-on” view. In these areas, the photo appears to be stretched, which we call “stretchy artifacts”.
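A bare-bones version of the extrusion step, assuming a simple pinhole camera model and making no attempt to handle depth discontinuities, lifts each pixel to a 3D vertex and connects neighboring pixels into triangles:

```python
import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy):
    """Lift an HxW depth map to 3D vertices and triangulate neighboring pixels.

    fx, fy, cx, cy are pinhole intrinsics, assumed known or approximated.
    Returns (vertices, faces): vertices is (H*W, 3), faces is (F, 3) index triples.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    vertices = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Two triangles per 2x2 block of pixels; triangles spanning a large depth
    # discontinuity are exactly the polygons that later look "stretchy".
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])
    return vertices, faces
```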

Because of the wide spectrum in user photos and their corresponding 3D reconstructions, it is not possible to share one trajectory across all animations. Instead, we define a loss function that captures how much of the stretchiness can be seen in the final animation, which allows us to optimize the camera parameters for each unique photo. Rather than counting the total number of pixels identified as artifacts, the loss function triggers more heavily in areas with a greater number of connected artifact pixels, which reflects a viewer’s tendency to more easily notice artifacts in these connected areas.

We utilize padded segmentation masks from a human pose network to divide the image into three different regions: head, body and background. The loss function is normalized inside each region before computing the final loss as a weighted sum of the normalized losses. Ideally, the generated output video is free from artifacts, but in practice this is rare. Weighting the regions differently biases the optimization process to pick trajectories that prefer artifacts in the background regions, rather than artifacts near the image subject.

During the camera trajectory optimization, the goal is to select a path for the camera with the least amount of noticeable artifacts. In these preview images, artifacts in the output are colored red while the green and blue overlay visualizes the different body regions.
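As an illustrative stand-in for such an objective, with made-up normalization and weighting choices rather than the ones actually used, one might label connected artifact regions, weight large connected areas super-linearly, normalize within each body region, and combine the regions with different weights:

```python
import numpy as np
from scipy.ndimage import label

def artifact_loss(artifact_mask, region_masks, region_weights):
    """Penalize stretchy artifacts, emphasizing large connected areas.

    artifact_mask: HxW boolean array of pixels flagged as stretched.
    region_masks: dict of region name -> HxW boolean mask (head, body, background).
    region_weights: dict of region name -> scalar weight (higher near the subject).
    """
    total = 0.0
    for name, region in region_masks.items():
        components, _ = label(artifact_mask & region)
        # Squaring each connected component's size makes one large artifact blob
        # cost more than the same number of scattered single pixels.
        sizes = np.bincount(components.ravel())[1:]  # skip background label 0
        score = float(np.sum(sizes.astype(np.float64) ** 2))
        # Normalize by region area so differently sized regions are comparable.
        area = max(int(region.sum()), 1)
        total += region_weights[name] * score / (area ** 2)
    return total
```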

Framing the Scene
Generally, the reprojected 3D scene does not neatly fit into a rectangle with portrait orientation, so it was also necessary to frame the output with the right aspect ratio while still retaining the key parts of the input image. To accomplish this, we use a deep neural network that predicts per-pixel saliency of the full image. When framing the virtual camera in 3D, the model identifies and captures as many salient regions as possible while ensuring that the rendered mesh fully occupies every output video frame. This sometimes requires the model to shrink the camera’s field of view.

Heatmap of the predicted per-pixel saliency. We want the creation to include as much of the salient regions as possible when framing the virtual camera.
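In two dimensions, the framing idea can be sketched as searching for the smallest crop window of the target aspect ratio that still covers most of the predicted saliency; the real system does this for a virtual camera in 3D and may instead shrink the field of view:

```python
import numpy as np

def tightest_salient_crop(saliency, aspect_ratio, coverage=0.9, stride=8):
    """Find the smallest crop of the given aspect ratio (width / height) that
    still contains at least `coverage` of the total predicted saliency."""
    h, w = saliency.shape
    # 2D integral image: rectangle sums in O(1).
    integral = np.pad(saliency, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    total = integral[-1, -1]

    def rect_sum(top, left, ch, cw):
        return (integral[top + ch, left + cw] - integral[top, left + cw]
                - integral[top + ch, left] + integral[top, left])

    # Try crops from smallest to largest and return the first one that suffices.
    for scale in np.linspace(0.3, 1.0, 15):
        ch = min(int(round(h * scale)), h)
        cw = min(int(round(ch * aspect_ratio)), w)
        ch = min(int(round(cw / aspect_ratio)), h)
        for top in range(0, h - ch + 1, stride):
            for left in range(0, w - cw + 1, stride):
                if rect_sum(top, left, ch, cw) >= coverage * total:
                    return top, left, ch, cw
    return 0, 0, h, w  # fall back to the full frame
```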

Conclusion
Through Cinematic photos, we implemented a system of algorithms – with each ML model evaluated for fairness – that work together to allow users to relive their memories in a new way, and we are excited about future research and feature improvements. Now that you know how they are created, keep an eye open for automatically created Cinematic photos that may appear in your recent memories within the Google Photos app!

Acknowledgments
Cinematic Photos is the result of a collaboration between Google Research and Google Photos teams. Key contributors also include: Andre Le, Brian Curless, Cassidy Curtis, Ce Liu‎, Chun-po Wang, Daniel Jenstad, David Salesin, Dominik Kaeser, Gina Reynolds, Hao Xu, Huiwen Chang, Huizhong Chen‎, Jamie Aspinall, Janne Kontkanen, Matthew DuVall, Michael Kucera, Michael Milne, Mike Krainin, Mike Liu, Navin Sarma, Orly Liba, Peter Hedman, Rocky Cai‎, Ruirui Jiang‎, Steven Hickson, Tracy Gu, Tyler Zhu, Varun Jampani, Yuan Hao, Zhongli Ding.

Source: Google AI Blog


Combining Similar Bid Strategies

The v6 release of the Google Ads API added support for Maximize conversions and Maximize conversion value bid strategies in Search campaigns. This includes a new read-only MaximizeConversions.target_cpa field. Bid strategies having either this new target_cpa field or the read-only MaximizeConversionValue.target_roas field act identically to TargetCpa and TargetRoas bid strategies, respectively. In the future, bid strategies for Search campaigns will be reorganized for simplification.

What’s Changing

For Search campaigns, TargetCpa and TargetRoas will no longer be separate from the MaximizeConversions and MaximizeConversionValue bid strategies. Instead, they will be represented as MaximizeConversions and MaximizeConversionValue bid strategies with their respective target_cpa and target_roas fields set. Use of TargetCpa and TargetRoas as separate strategies will be deprecated.

There will be no impact to bidding behavior due to these changes. The MaximizeConversions bid strategy using the new optional target_cpa setting will still behave like the TargetCpa strategy does today, and likewise, MaximizeConversionValue using the new optional target_roas setting will behave like TargetRoas.

Before          After
TargetRoas      MaximizeConversionValue.target_roas
TargetCpa       MaximizeConversions.target_cpa


Starting in April 2021, the Google Ads UI will allow some users to create MaximizeConversions and MaximizeConversionValue bid strategies with their target_cpa and target_roas fields set, in lieu of the old-style TargetCpa and TargetRoas bid strategies. This change will gradually ramp up to more accounts over time.

The target_roas and target_cpa fields will remain read-only to API users until a future version of the API enables mutate functionality.

What to Do
Developers should ensure their code treats Search campaigns that have MaximizeConversions with a set target_cpa field and MaximizeConversionValue with a set target_roas field the same way it treats TargetCpa and TargetRoas bid strategies, respectively.
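For example, a small normalization helper along these lines can map both representations onto the same effective target. The campaign object and field names here simply mirror the names used in this post and are hypothetical; real code should read the corresponding fields through the Google Ads API client library and check the exact field paths and units for its API version.

```python
def effective_bidding_target(campaign):
    """Treat old-style and new-style tCPA / tROAS Search bidding the same way.

    `campaign` is a hypothetical object exposing the bid strategy fields named
    in this post; field paths and units may differ by Google Ads API version.
    """
    if getattr(campaign, "target_cpa", None) is not None:
        # Old-style TargetCpa strategy.
        return "TARGET_CPA", campaign.target_cpa.target_cpa_micros
    if getattr(campaign, "target_roas", None) is not None:
        # Old-style TargetRoas strategy.
        return "TARGET_ROAS", campaign.target_roas.target_roas
    mc = getattr(campaign, "maximize_conversions", None)
    if mc is not None and getattr(mc, "target_cpa", None):
        # New-style MaximizeConversions with the optional target_cpa set.
        return "TARGET_CPA", mc.target_cpa
    mcv = getattr(campaign, "maximize_conversion_value", None)
    if mcv is not None and getattr(mcv, "target_roas", None):
        # New-style MaximizeConversionValue with the optional target_roas set.
        return "TARGET_ROAS", mcv.target_roas
    return "OTHER", None
```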

We will publish an update on the blog when the above fields are mutable, along with several months' notice before TargetCpa and TargetRoas strategies are deprecated in Search campaigns.

If you have any questions or need additional help, contact us via the forum or at [email protected].


Here’s how to watch #TheAndroidShow in just under 24 hours

Posted by The Jetpack Compose Team

In less than 24 hours, we're giving you a backstage pass to Jetpack Compose, Android's modern toolkit for building native UIs, on #TheAndroidShow. Hosted by Kari Byron, you'll hear the latest on Jetpack Compose from the people who built it, plus a fireside interview with Android's Dave Burke.

The show kicks off live at 9AM PT!

Broadcasting live on February 24th at 9AM PT, you’ll be able to watch the show at goo.gle/TheAndroidShow, where you’ll also be able to find more information and links to all of the things we covered in the show. Or if you prefer, you can watch directly on YouTube or Twitter.

There’s still time to ask your Jetpack Compose questions, use #TheAndroidShow

Got a burning Jetpack Compose question? Want to learn about annotating a function type with @Composable? Or how to add a static parameter to Composable functions at the compiler level? Tweet us your Jetpack Compose questions now, using #TheAndroidShow. We’ve assembled a team of experts, ready to answer your questions live on #TheAndroidShow; tune in on February 24 to see if we cover your question!

GameSnacks brings HTML5 games to Google products

Last February we announced GameSnacks, an HTML5 gaming platform from Area 120, Google’s workshop for experimental products. We launched GameSnacks to test whether lightweight, casual games would resonate with people who use the internet via low-memory devices on 2G and 3G networks, especially in countries like India and Indonesia.

Since then, millions of people from around the world have played our games. GameSnacks now has more than 100 games built by early game development partners. These games span multiple genres: classics (e.g. Chess), racing games (e.g. Retro Drift), puzzle games (e.g. Element Blocks), and hypercasual games (e.g. Cake Slice Ninja) to list a few. You can check out the full catalog by visiting gamesnacks.com.


Today, we’re sharing how we’ve broadened our efforts by bringing HTML5 games to Google products. We’re also inviting more game developers to join us as we grow the platform.

Finding HTML5 games to play is hard

When I mention HTML5 web gaming to friends and family, they fondly remember Flash gaming sites from 10 or 15 years ago. Web games have come a long way since then. Mobile browsers can now render rich graphics, and engines like Phaser, Construct and Cocos make it easier for developers to build HTML5 games. 

HTML5 games tend to be small, enabling them to load quickly in a variety of network conditions, whether on 2G near the outskirts of New Delhi or on an intermittent connection on a New York City subway. Users can play them on any device with a web browser: Android, iOS, and desktop. And across these devices, users don’t need to install anything to play. They simply tap on a link and start playing games immediately.

However, the distribution landscape for HTML5 games is fragmented. Developers have to painstakingly modify their HTML5 games to work across each app they integrate with or web portal they upload to. Discovering HTML5 games to play is often difficult.

We’ve been thinking about how we can make HTML5 game developers’ lives easier to ultimately get more HTML5 games out to more users. Here’s a closer look at how we’re doing this.

A new way to discover HTML5 games across Google products

Back in February 2020, we announced our partnership with Gojek to bring HTML5 games to their users and give developers a new distribution opportunity. Since then, we’ve been bringing the GameSnacks catalog to users across a variety of different Google apps. 

First, we’ve made it easy to access GameSnacks games directly from the New Tab page in Chrome, starting with users in India, Indonesia, Nigeria and Kenya. Users can get to gamesnacks.com via the Top Sites icon on Chrome on Android. The Games section is one of the most frequently visited sections of the page.

The game Stack Bounce played on Google Chrome in mobile.

Blast through blocks in Stack Bounce on GameSnacks on Google Chrome.

Second, we’ve brought GameSnacks games to Google Pay users in India. Google Pay initially started as a way to help users pay friends. Increasingly, it helps users get many more things done: book rides, order food and now, entertain themselves.


Google Pay users in India can play GameSnacks games from the Games section of the app.

Bolly Beat played on Google Pay in India

Bounce to the rhythm in Bolly Beat on GameSnacks on Google Pay.

Third, we’re experimenting with bringing GameSnacks games to the Google Assistant. When select Android Assistant users ask to play a GameSnacks game, they can start playing instantly.


The game 99 Balls played on Google Assistant.

Ask Google to play 99 Balls on GameSnacks on Google Assistant.

And finally, we’re experimenting with surfacing GameSnacks games in Discover. Select users in India will see GameSnacks games appear in their feed:

The game Tiger Run on Google Discover.

See how far you can run in Tiger Run on GameSnacks on Google Discover.


GameSnacks will be a one-stop shop for developers to bring their HTML5 games to Google users, no matter what product they’re using. Over the coming months, we’ll look for more opportunities to bring GameSnacks games to more Google products.

An open call to game developers

We’re committed to helping game developers succeed with HTML5. Beyond continuing to help developers reach more users, we’ll help developers build meaningful gaming businesses by helping them better monetize HTML5 games. We’ll soon start experimenting with next-generation AdSense for Games ad formats with a select number of GameSnacks games.

Meanwhile, we’re continuing to add more high quality HTML5 games to our catalog. If you’re a game developer interested in being an early GameSnacks partner, reach out and let’s work together.

New Password Checkup Feature Coming to Android

With the proliferation of digital services in our lives, it’s more important than ever to make sure our online information remains safe and secure. Passwords are usually the first line of defense against hackers, and with the number of data breaches that could publicly expose those passwords, users must be vigilant about safeguarding their credentials.

To make this easier, Chrome introduced the Password Checkup feature in 2019, which notifies you when one of the passwords you’ve saved in Chrome is exposed. We’re now bringing this functionality to your Android apps through Autofill with Google. Whenever you fill or save credentials into an app, we’ll check those credentials against a list of known compromised credentials and alert you if your password has been compromised. The prompt can also take you to your Password Manager page, where you can do a comprehensive review of your saved passwords. Password Checkup on Android apps is available on Android 9 and above, for users of Autofill with Google.

Follow the instructions below to enable Autofill with Google on your Android device:

  1. Open your phone’s Settings app
  2. Tap System > Languages & input > Advanced
  3. Tap Autofill service
  4. Tap Google to make sure the setting is enabled

If you can’t find these options, check out this page with details on how to get information from your device manufacturer.

How it works

User privacy is top of mind, especially when it comes to features that handle sensitive data such as passwords. Autofill with Google is built on the Android autofill framework, which enforces strict privacy and security invariants that ensure we have access to a user’s credentials in only two cases: 1) the user has already saved said credential to their Google account; 2) the user was offered the chance to save a new credential by the Android OS and chose to save it to their account.

When the user interacts with a credential by either filling it into a form or saving it for the first time, we use the same privacy preserving API that powers the feature in Chrome to check if the credential is part of the list of known compromised passwords tracked by Google.

This implementation ensures that:

  • Only an encrypted hash of the credential leaves the device (the first two bytes of the hash are sent unencrypted to partition the database)
  • The server returns a list of encrypted hashes of known breached credentials that share the same prefix
  • The actual determination of whether the credential has been breached happens locally on the user’s device
  • The server (Google) does not have access to the unencrypted hash of the user’s password and the client (User) does not have access to the list of unencrypted hashes of potentially breached credentials

For more information on how this API is built under the hood, check out this blog from the Chrome team.
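To make the flow above concrete, here is a heavily simplified Python sketch of the partition-and-compare idea. It deliberately omits the memory-hard hashing and cryptographic blinding that the real protocol uses to keep both sides’ hashes encrypted, and the server function is a hypothetical stand-in:

```python
import hashlib

def hash_credential(username, password):
    """Simplified stand-in for the real (memory-hard, blinded) credential hash."""
    return hashlib.sha256(f"{username}:{password}".encode()).digest()

def credential_is_breached(username, password, query_server):
    """Return True if the credential appears in the breached-credential list.

    `query_server` is a hypothetical function taking a 2-byte prefix and
    returning the set of breached-credential hashes that share that prefix.
    """
    digest = hash_credential(username, password)
    prefix = digest[:2]                # only this coarse prefix is revealed
    candidates = query_server(prefix)  # every breached hash in that partition
    return digest in candidates        # the final match is decided on-device

# Toy in-memory "server" over a local breach list, for illustration only.
breached = {hash_credential("alice", "hunter2")}
server = lambda prefix: {h for h in breached if h[:2] == prefix}
print(credential_is_breached("alice", "hunter2", server))               # True
print(credential_is_breached("alice", "correct horse battery", server)) # False
```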

Additional security features

In addition to Password Checkup, Autofill with Google offers other features to help you keep your data secure:

  • Password generation: With so many credentials to manage, it’s easy for users to recycle the same password across multiple accounts. With password generation, we’ll generate a unique, secure password for you and save it to your Google account so you don’t have to remember it at all. On Android, you can request password generation for an app by long pressing the password field and selecting “Autofill” in the pop-up menu.
  • Biometric authentication: You can add an extra layer of protection on your device by requiring biometric authentication any time you autofill your credentials or payment information. Biometric authentication can be enabled inside of the Autofill with Google settings.

As always, stay tuned to the Google Security blog to keep up to date on the latest ways we’re improving security across our products.

How one trailblazer uses Maps to explore the outdoors

Lydia Kluge is an active member of the Google Maps Local Guides community, the everyday people passionate about sharing their experiences on Maps. In 2020, she added more than 1,100 contributions on Google Maps in the form of reviews, photos, and places. Coincidentally, Lydia also hiked, ran, and biked 1,100 miles last year. All those adventures earned her the well-deserved Expert Trailblazer and Expert Fact Finder badges on Google Maps.


But Lydia’s journey has been full of adventures long before 2020. Originally from England, Lydia landed in Utah in 2005 for what was meant to be a six-month stint as a ski instructor. She’s been there ever since after falling in love with (and on) the slopes where she met her now-husband.


Over the past fifteen years, the couple has traveled to over 30 countries. Along the way, Lydia used Google Maps to find hidden gems — from the best restaurants in Paris to snorkeling spots in Australia.


In 2019, Lydia and her husband welcomed their beautiful baby girl into their family and couldn’t wait to travel with her. But COVID-19 changed their international jet-setting plans. Like many of us, Lydia’s spending more time closer to home. She’s explored Utah's mountains, deserts, and national and state parks. And, just like in her international travels, Google Maps has been her companion. She’s added and reviewed dozens of nature trails, trailheads, and parks, and created lists of family-friendly activities in Utah. “One thing I've missed about working outside of the home is how I can contribute to others and my community,” Lydia said. “Adding these things to Google Maps is a way I can give back.”


Here are Lydia’s tips on how to use Google Maps to explore natural attractions near you:


Find parks and hiking trails on Google Maps

Search outdoor terms like “hiking trails” or “parks near me” to find nearby treks. For most hiking trails, you’ll be able to find ratings, reviews and photos from other hikers. Some may also have useful details like open hours and phone numbers. You can also use the Lists feature on Google Maps to see curated recommendations, like Lydia’s Things to See and Do in St. George and Food and Fun in Park City. Simply search for a town and scroll down to see Featured Lists.
A photo of a search for hiking trails in Google Maps

Use the search bar in Google Maps to find things to do, like hiking trails nearby or in a specific town or city

Quickly sort through reviews to find popular topics or search for specific words

Lydia leaves detailed reviews on parks and hikes with searchable terms like “family,” “steep,” or “kid-friendly.” Search for specific words to quickly sort through reviews and get a better sense of the place. If you want an idea of what most people are talking about, you can see a list of popular keywords in reviews — from “banana slug” and “poison ivy” to “parking lot” and “sunset.”
An image showing popular topics in Google Maps reviews

You can see what the popular topics are for hikes and places by seeing the most common keywords. Tap a topic to see what people are saying.

Preview your trek with photos

Lydia has left more than 3,500 photos on Google Maps that have been viewed more than 25 million times. To get a sense of what your outdoor trip will look like, browse photos that people like Lydia have uploaded. Sort photos to see the latest, pan through Street View and 360-degree images, and even see videos. Pay it forward to the next trekker and leave photos of what made your hike memorable.
A photo of the castle-like rock formations at Turret Arch in Moab, Utah.

Add and update hiking areas yourself

Some trails may not have traditional signage and can be hard to find. If you know where an unmarked (or poorly marked) trailhead is, you can confirm that the pin location is in the right spot. To do so, open your Google Maps app and navigate to the place. Tap “suggest an edit” to update information about the hiking area.
An image of a hiking trail added to Google Maps by Lydia

Lydia added Limber Pine Nature Trail to Google Maps

To follow Lydia’s adventures, check out and follow her Google Maps profile.

Source: Google LatLong