Author Archives: Tanya Birch

Land cover data just got real-time

Our planet is changing dramatically in ways that are visible even from space. These changes stem partly from climate change amplifying environmental disturbances, like wildfires and floods, and partly from human activity, like deforestation and urban development. Detailed information about these changes and their impact on people, the climate and ecosystems can help governments and researchers develop effective solutions and minimize their impact on problems like climate change, food insecurity and loss of biodiversity.

Historically, it’s been difficult to access detailed, up-to-date land cover data that documents how much of a region is covered by different land and water types, such as wetlands, forests, agricultural crops, trees, urban development and more.

To help turn satellite imagery into more useful information for quantifying change, we worked with the World Resources Institute (WRI) to create Dynamic World. Powered by Google Earth Engine and AI Platform, Dynamic World provides global, near real-time land cover data at a ten-meter resolution, giving an unprecedented level of detail about what's on the land and how it's being used — whether it’s forests in the Amazon, agriculture in Asia, urban development in Europe or seasonal water resources in North America. With this information, people — like scientists and policymakers — can monitor and understand land and ecosystems so they can make more accurate predictions and effective plans to protect our planet in the future.

A more detailed understanding of Earth’s land than ever before

Most existing datasets assign a single land cover type to an area of land — like trees, built-up, crops or snow — based on what’s most prominent in a satellite image, combined with an expert’s determination of the land cover. So a current dataset might classify a satellite image of a city as ‘built-up,’ but visit any city and you’ll see our world is far more dynamic. While you might see lots of buildings, you’ll also see trees, or even snow on the ground from a recent storm.

To create a more accurate understanding of land cover with Dynamic World, our partners at WRI identified the nine most critical land cover types we wanted to classify: water, flooded vegetation, built-up areas, trees, crops, bare ground, grass, shrub/scrub, and snow/ice. Dynamic World uses our AI and cloud computing to detect combinations of different land cover types and make conclusions about how likely it is for each of the nine types to be present in every pixel (about 1,100 square feet of land) of a satellite image.
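For readers who want to explore those per-pixel probabilities themselves, here is a minimal sketch using the Earth Engine Python API. The collection ID ("GOOGLE/DYNAMICWORLD/V1"), the band names and the example coordinates are assumptions based on the public Earth Engine Data Catalog — check the catalog entry before relying on them.

```python
# A minimal sketch of reading Dynamic World class probabilities with the
# Earth Engine Python API. Collection ID and band names reflect the public
# "GOOGLE/DYNAMICWORLD/V1" dataset as documented; verify before use.
import ee

ee.Initialize()

# One probability band per land cover class, plus a "label" band giving the
# most likely class for each 10-meter pixel.
CLASS_BANDS = [
    "water", "trees", "grass", "flooded_vegetation", "crops",
    "shrub_and_scrub", "built", "bare", "snow_and_ice",
]

point = ee.Geometry.Point([-120.53, 38.76])  # example location (hypothetical)

dw = (
    ee.ImageCollection("GOOGLE/DYNAMICWORLD/V1")
    .filterBounds(point)
    .filterDate("2021-08-01", "2021-09-01")
)

# Average the per-pixel class probabilities over the month and sample them
# at the point of interest.
mean_probs = dw.select(CLASS_BANDS).mean()
sample = mean_probs.sample(region=point, scale=10).first().getInfo()
print(sample["properties"])
```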

This level of insight into how land is being used can help public, private and non-profit decision makers better understand what’s happening to the world’s land. With this knowledge, they can develop plans to protect, manage and restore land, and monitor the effectiveness of those plans using alert systems to notify when unforeseen land changes are taking place.

As Craig Hanson, Vice President of Food, Forests, Water and the Ocean at the World Resources Institute, explains: “The global land squeeze pressures us to find smarter, efficient, and more sustainable ways to use land. If the world is to produce what is needed from land, protect the nature that remains and restore some of what has been lost, we need trusted, near real-time monitoring of every hectare of the planet.”

A near real-time, regularly updated dataset

Not only is our world more dynamic than individual land types, it’s also constantly changing. Current global land cover maps can take months to produce and typically provide land cover data only on a monthly or annual basis. With our AI model analyzing Copernicus Sentinel-2 satellite images as they become available, over 5,000 Dynamic World images are produced every day, providing land cover data from June 2015 through as recently as two days ago.

This means that not only is the land cover information in Dynamic World more detailed, but it's also more timely within any given day, week or month than existing datasets. This level of detail allows scientists and policymakers to detect and quantify the extent of recent events anywhere on the globe — such as snowstorms, wildfires or volcanic eruptions — within days.


Satellite imagery translated into Dynamic World imagery showing land in El Dorado County, California changing from trees, indicated in green, to shrub/scrub, indicated in yellow, days after the Caldor Fire burned 221,775 acres of land beginning August 14, 2021.


Sentinel-2 satellite imagery (left) and Dynamic World dataset (right) show typical seasonal changes in the Okavango Delta in Botswana.

Dynamic World allows researchers to build their own maps based on the outputs of our machine learning model, a major advancement in mapmaking. Researchers can combine local information with the data from Dynamic World to produce a new map, for example a map that analyzes crop harvests between particular dates. Dynamic World is also useful for understanding longer-term trends of seasonal ecosystem change, as seen in the Okavango Delta, an area that attracts thirsty wildlife when it floods in July and August and then dries from September to October.
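As a hedged illustration of that kind of custom map, the sketch below compares the most likely class before and after a date to flag pixels that flipped from trees to shrub/scrub, echoing the Caldor Fire example above. The class index values, date windows and bounding box are assumptions chosen for illustration, not part of the published methodology.

```python
# Toy change map built from Dynamic World's "label" band: compare the most
# frequently observed class before and after a date, and flag pixels that
# switched from trees to shrub_and_scrub. Class indices assume the public
# dataset documentation (1 = trees, 5 = shrub_and_scrub); confirm before use.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([-120.7, 38.6, -120.3, 38.9])  # example bounds

def mode_label(start, end):
    """Most frequently observed class per pixel over a date range."""
    return (
        ee.ImageCollection("GOOGLE/DYNAMICWORLD/V1")
        .filterBounds(region)
        .filterDate(start, end)
        .select("label")
        .reduce(ee.Reducer.mode())
    )

before = mode_label("2021-06-01", "2021-08-14")
after = mode_label("2021-09-15", "2021-11-15")

TREES, SHRUB_AND_SCRUB = 1, 5
changed = before.eq(TREES).And(after.eq(SHRUB_AND_SCRUB))

# Area, in square meters, of pixels that changed from trees to shrub/scrub.
area = changed.multiply(ee.Image.pixelArea()).reduceRegion(
    reducer=ee.Reducer.sum(), geometry=region, scale=10, maxPixels=1e10
)
print(area.getInfo())
```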

We’re excited to put this open, freely available dataset and the methodology behind it into the hands of scientists, researchers, governments and companies. Together, we can make wiser decisions to protect, manage and restore our forests, nature and ecosystems.

Dynamic World is one of the largest global-scale land cover datasets produced to date, and the first of its kind at 10-meter resolution in near real time. A peer-reviewed paper about Dynamic World was published today in Nature Scientific Data. Explore the data at dynamicworld.app and access Dynamic World in Google Earth Engine and on Resource Watch.

This World Wildlife Day, the key word is adapt

Wolverines are stocky, energetic carnivores that resemble small bears. They travel up to 15 miles a day and summit peaks in the wildest lands. Currently, their habitat range includes parts of the northern U.S. and Canada, where they have access to huge swaths of remote land with abundant winter and spring snowpack to build dens for their kits. However, like other species across the world, their habitat is at risk of shrinking due to climate change.

As entire habitats change, land managers and policymakers need to be able to make local land-use decisions that support regionally important species and ecosystems. Cloud-based mapping tools, like TerrAdapt which launched to the public today on World Wildlife Day, can help prioritize areas for conservation actions — like habitat restoration, increasing protection status, and building wildlife crossings. TerrAdapt uses satellite monitoring technology powered by Google Earth Engine and Google Cloud Platform to project habitat conditions given future climate and land-use scenarios.

Using TerrAdapt to monitor wolverines

TerrAdapt is initially being developed in the Cascadia region — which spans parts of Washington in the U.S. and British Columbia in Canada — to model habitat ranges for species like the wolverine, as well as the fisher, grizzly bear, greater sage-grouse and Canada lynx. Working with the Cascadia Partner Forum and the Washington Department of Natural Resources, the TerrAdapt team partnered with leading wolverine biologists to model changes in the wolverines’ habitat and connectivity from 1990 to 2100.

Areas in orange and red show the shrinking of montane wet forest habitats where snow-dependent wildlife like the wolverine live, projected to 2100.


According to this model, wolverines and other snow-dependent species are expected to see significant changes to their habitat — especially when climate change scenarios are factored in. By 2100, the model projects little remaining wolverine habitat in the U.S.

Projections of how the suitable habitat for snow-dependent species changes from 1990 to 2100, based on the amount of liquid water contained in the snowpack (snow water equivalent, or SWE), under a “business as usual” climate scenario.

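TerrAdapt’s actual habitat model isn’t reproduced here, but as a toy sketch of the SWE-threshold idea in the caption above, one could threshold spring snow water equivalent in Earth Engine and see where it disappears over time. The Daymet dataset, the date window, the threshold and the bounding box below are purely illustrative assumptions.

```python
# Toy illustration only: classify pixels as suitable for snow-dependent
# species when spring snow water equivalent (SWE) stays above a threshold,
# then measure where that suitability was lost. Dataset, dates and the
# 200 kg/m^2 cutoff are assumptions, not TerrAdapt's methodology.
import ee

ee.Initialize()

cascadia = ee.Geometry.Rectangle([-124.5, 45.5, -117.0, 52.0])  # rough bounds

def spring_swe(year):
    """Mean April 1 to May 15 snow water equivalent (kg/m^2) for one year."""
    return (
        ee.ImageCollection("NASA/ORNL/DAYMET_V4")
        .filterDate(f"{year}-04-01", f"{year}-05-15")
        .select("swe")
        .mean()
    )

SWE_THRESHOLD = 200  # kg/m^2, illustrative only

suitable_1990 = spring_swe(1990).gte(SWE_THRESHOLD)
suitable_2020 = spring_swe(2020).gte(SWE_THRESHOLD)

# Fraction of the region that was suitable in 1990 but no longer in 2020.
lost = suitable_1990.And(suitable_2020.Not())
stats = lost.reduceRegion(
    reducer=ee.Reducer.mean(), geometry=cascadia, scale=1000, maxPixels=1e9
)
print(stats.getInfo())
```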

Conservationists are concerned we’re not adequately preparing to protect wolverines and their habitat, which is also home to other species of animals and plants. In 2020, a proposal to federally list the wolverine as threatened under the Endangered Species Act was rejected on the basis that there’s still sufficient snowpack.

Moving forward, land managers and policymakers can use TerrAdapt projections to better inform decisions like this. Carly Vynne, TerrAdapt co-founder and Director of Biodiversity and Climate at RESOLVE says that TerrAdapt helps them keep these animals on the landscape. “TerrAdapt allows us to visualize future scenarios and plan management responses,” she says. “This helps make sure that our region is as resilient as possible for wolverines and the other plants, animals, and human communities that depend on our natural landscapes.”

Making decisions that benefit the planet

The ability to turn findings like these into conservation decisions and policy needs to grow. Equipped with information from TerrAdapt on how our current and future land-use decisions affect our natural world, we can increase ecological resilience to climate change risks and make land-use decisions that benefit our planet.

Explore how Google’s technology, such as Google Earth Engine, is being used to help decision makers improve resilience and adapt to climate change. And learn more about how TerrAdapt is helping us plan for a positive future with wolverines in this short video.

Using AI to find where the wild things are

According to the World Wildlife Fund, vertebrate populations have shrunk an average of 60 percent since the 1970s. And a recent UN global assessment found that we’re at risk of losing one million species to extinction, many of which may become extinct within the next decade. 

To better protect wildlife, seven organizations, led by Conservation International, and Google have mapped more than 4.5 million animals in the wild using photos taken from motion-activated cameras known as camera traps. The photos are all part of Wildlife Insights, an AI-enabled, Google Cloud-based platform that streamlines conservation monitoring by speeding up camera trap photo analysis.

With photos and aggregated data available for the world to see, people can change the way protected areas are managed, empower local communities in conservation, and bring the best data closer to conservationists and decision makers.

Wildlife managers at Instituto Humboldt take advantage of a new AI-enabled tool for processing wildlife data.


Ferreting out insights from mountains of data

Camera traps help researchers assess the health of wildlife species, especially those that are reclusive and rare. Worldwide, biologists and land managers place motion-triggered cameras in forests and wilderness areas to monitor species, snapping millions of photos a year. 


But what do you do when you have millions of wildlife selfies to sort through? On top of that, how do you quickly process photos where animals are difficult to find, like when an animal is in the dark or hiding behind a bush? And how do you quickly sort through up to 80 percent of photos that have no wildlife at all because the camera trap was triggered by the elements, like grass blowing in the wind?


Processing all these photos is time consuming and painstaking, and it isn’t the only challenge. For decades, one of the biggest hurdles has been simply collecting them in one place. Today, millions of camera trap photos languish on the hard drives and discs of individuals and organizations worldwide.


Illuminating the natural world with AI

With Wildlife Insights, conservation scientists with camera trap photos can now upload their images to Google Cloud and run Google’s species identification AI models over the images, collaborate with others, visualize wildlife on a map and develop insights on species population health.


It’s the largest and most diverse public camera-trap database in the world, letting people explore millions of camera-trap images and filter them by species, country and year.



Seven leading conservation organizations and Google released Wildlife Insights to better protect wildlife.

On average, human experts can label 300 to 1,000 images per hour. With the help of Google AI Platform Predictions, Wildlife Insights can classify the same images up to 3,000 times faster, analyzing 3.6 million photos an hour. To make this possible, we trained an AI model to automatically classify species in an image using Google’s open source TensorFlow framework. 

Even though species identification can be a challenging task for AI, across the 614 species that Google’s AI models have been trained on, species like jaguars, white-lipped peccaries and African elephants have an 80 to 98.6 percent probability of being correctly predicted. Most importantly, images detected with very high confidence to contain no animals are removed automatically, freeing biologists to do science instead of looking at empty images of blowing grass.
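Wildlife Insights’ production pipeline isn’t shown here, but the sketch below illustrates the two ideas in this section with TensorFlow: classify a camera-trap photo and automatically drop images that are almost certainly empty. The model path, class list and confidence threshold are hypothetical placeholders, assuming you already have a trained classifier whose outputs include a “blank” class alongside the species classes.

```python
# Minimal camera-trap triage sketch: run a (hypothetical) trained classifier
# on a photo and skip frames that are confidently empty.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("species_model/")  # hypothetical path
class_names = np.load("class_names.npy", allow_pickle=True)  # hypothetical
BLANK_CONFIDENCE = 0.99  # discard images that are almost certainly empty


def load_image(path, size=(224, 224)):
    """Decode and resize a camera-trap photo for the classifier."""
    data = tf.io.read_file(path)
    img = tf.image.decode_jpeg(data, channels=3)
    img = tf.image.resize(img, size) / 255.0
    return tf.expand_dims(img, 0)


def classify(path):
    probs = model.predict(load_image(path), verbose=0)[0]
    top = int(np.argmax(probs))
    label, confidence = str(class_names[top]), float(probs[top])
    if label == "blank" and confidence >= BLANK_CONFIDENCE:
        return None  # skip empty frames so biologists never see them
    return label, confidence


result = classify("camera_trap/IMG_0001.jpg")
print(result or "no animal detected")
```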

With this data, managers of protected areas or anti-poaching programs can gauge the health of specific species, and local governments can use data to inform policies and create conservation measures. 


The Wildlife Insights Animal Classifier tool helps researchers classify 614 species.

Acting before it’s too late

Thanks to the combination of advanced technology, data sharing, partnerships and science-based analytics, we have a chance to bend the curve of species decline.

While we’re just at the beginning of applying AI to better understand wildlife from sensors in the field, solutions like Wildlife Insights can help us protect our planet so that future generations can live in a world teeming with wildlife. 

Learn more about Wildlife Insights and watch the documentary film Eyes in the Forest: Saving Wildlife In Colombia Using Camera Traps and AI. The film tells the story of a camera trapper who uses Wildlife Insights to document and preserve the biological diversity in Caño Cristales, a reserve in Colombia’s remote upper Amazon region. 

Wildlife Insights is a collaboration among Conservation International, the Smithsonian’s National Zoo and Conservation Biology Institute, the North Carolina Museum of Natural Sciences, Map of Life, World Wide Fund for Nature, Wildlife Conservation Society, Zoological Society of London and Google Earth Outreach. It was built by Vizzuality and is supported by the Gordon and Betty Moore Foundation and Lyda Hill Philanthropies.

Take a walk on the wild side in Google Earth

This World Wildlife Day, become one with nature—and its animal inhabitants—on Voyager, Google Earth’s storytelling feature. We’ve launched three interactive tours with Explore.org, National Geographic Society and The Nature Conservancy that let you get up close with our planet’s magnificent animals and the challenges they face.

This live cam is owl you need

First, fly to the treetops of Montana with Explore.org to see owls and ospreys in the wild. You can watch live streams of three different owl species — Long-eared, Great Horned and Great Gray Owls — raising their young in their nests.


All aboard

Hop on the National Geographic Photo Ark, an ambitious project from photographer Joel Sartore to document every species living in human care. Peek behind the scenes to see how Sartore captures these amazing shots, and don’t miss the last page for a choose-your-own-adventure look at 30 of the feathered, furry and finned friends that have already joined the Photo Ark.


Turtle power

Finally, dive into the South Pacific near the Arnavon Islands. Here you’ll find The Nature Conservancy and local communities working to protect the largest nesting site of the endangered hawksbill turtle.


Google Earth Live: Explore.org invites you to hang out with Alaskan Brown Bears

Watch bears at Brooks Falls LIVE in Google Earth.

Spring comes quickly to Alaska. The snowpack melts, rivers swell with crisp water, delicate blue forget-me-nots bloom near the water’s edge, and brown bears emerge from a six- or seven-month hibernation in Katmai National Park. On their annual migration from the ocean to their spawning grounds, sockeye salmon rush up the Brooks River until they meet the falls. Waiting for them there are the bears, who eagerly paw the air, striking for some fresh protein as the fish leap out of the water.


Beginning today, we’re bringing live content to Google Earth’s storytelling platform, Voyager. In a story by Explore.org, you can journey into Katmai National Park — watch the hungry bears dining out at Brooks Falls, or salmon darting toward the underwater livecam.

Hear a personal perspective from the founder of Explore.org, Charles Annenberg, in which he shares his motivations for putting the Explore.org livecam network together, including the Katmai bear livecams.

Ready, Set, Explore!

Walk, climb and swim with wildlife in Google Earth

This week we’re giving you a taste of what you can find in Voyager, a showcase of interactive tours and stories from experts, nonprofits and more in the new Google Earth.

For 10 years, Google Earth Outreach has empowered nonprofits to create positive change in the world with Google’s mapping tools. Learn more about the efforts of many of these organizations in today’s Voyager spotlight.

Start with Dr. Jane Goodall, as she introduces you to the G-Family—that's chimpanzees Gremlin, Gaia and Google (!)—in Tanzania’s Gombe National Park. From East Africa, head to the Gulf of California with Dr. Sylvia Earle and dive into the vibrant waters off Baja, Mexico, to witness leaping mobula rays and other vibrant ocean life. Finally, walk alongside the Hardwoods elephant family of Kenya’s Samburu National Reserve with the organization working to save them, Save the Elephants.

In addition to chimpanzees, we’ve got lions and tigers and bears (oh my!), along with most of the other species on the planet. Visit Voyager today to dive with sharks, waddle with penguins and learn about wildlife conservation efforts around the globe.
