Our efforts to fight child sexual abuse online

Across Google and YouTube, we are always working to protect our users from harmful content, especially the kind of horrific, illegal content referred to as child sexual abuse material (CSAM). Since our earliest days, we’ve been committed to fighting online child sexual exploitation and abuse both on our platforms and in the broader online ecosystem. We have invested in the teams, tools, and resources to deter, remove, and report this kind of content, and to help other companies do so. But we know this issue cannot be solved by any one company alone, and we’re committed to tackling it with others in our industry and partners who are dedicated to protecting children around the world. Today, we’re sharing more information about our work, including new efforts to combat this abuse, and how we’re supporting organizations that are committed to protecting kids online.

How we identify and remove CSAM

We identify and report CSAM with a combination of specialized, trained teams of people and cutting-edge technology. We use both hash-matching software like CSAI Match (a technology developed by YouTube engineers to identify re-uploads of previously identified child sexual abuse material in videos) and machine learning classifiers that can identify never-before-seen CSAM imagery. These tools allow us to proactively scan our platforms and identify potentially abusive content so that it can be removed and reported, and the corresponding accounts disabled, as quickly as possible. A crucial part of our efforts to tackle this kind of abuse is working with the National Center for Missing & Exploited Children (NCMEC), the U.S.-based reporting center for CSAM. NCMEC tracks reports from platforms and individuals and then routes those reports to law enforcement agencies around the world.
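
For illustration, here is a minimal sketch of the exact-match side of this approach: hashing an uploaded file and checking it against a set of known hashes. The file name and hash set are hypothetical, and production systems like CSAI Match use perceptual video-hashing that survives re-encoding and editing, which the plain SHA-256 digest used here does not.

```python
import hashlib

# Hypothetical set of hashes of previously identified material.
# Real systems (e.g. CSAI Match) use perceptual hashes robust to
# re-encoding and cropping; a cryptographic hash like SHA-256 only
# catches byte-identical re-uploads.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_reupload(path: str) -> bool:
    """Flag a file whose hash matches previously identified material."""
    return sha256_of_file(path) in KNOWN_HASHES

if __name__ == "__main__":
    if is_known_reupload("upload.bin"):  # hypothetical file name
        print("Match found: queue for removal, reporting, and review.")
```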

New insights into our work to fight CSAM

We recently launched a new transparency report on Google’s Efforts to Combat Online Child Sexual Abuse Material, where we detail the number of reports we made to NCMEC in the first and second halves of 2020. The report also provides data on our efforts on YouTube, how we detect and remove CSAM from Google Search results, and how many accounts are disabled for CSAM violations across our services. We also include the number of “hashes” of newly identified CSAM that we share with NCMEC. These hashes (unique digital fingerprints) help other platforms identify CSAM automatically at scale. Contributing to the NCMEC hash database is one of the most important ways we, and others in the industry, can help combat CSAM: it reduces the recirculation of this material and the re-victimization of children who have been abused.

Working to combat CSAM across the internet

Because CSAM is an issue that spans beyond any one platform, in 2018 we developed and launched the Content Safety API. Built on the AI classifiers we created for our own products, the API helps organizations classify and prioritize the content most likely to be CSAM for human review. Today, the API is used by NGOs like SaferNet Brazil and companies including Facebook and Yubo. Along with CSAI Match, these tools are offered free of charge to qualifying organizations and companies. In 2020, our partners used the Content Safety API to classify more than 2 billion images, helping them identify the small fraction of violative content faster and with greater precision. We encourage interested organizations to apply to use CSAI Match or the Content Safety API.
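
To show the classify-and-prioritize workflow in rough outline, here is a hedged sketch of a client sending images to a classification endpoint and sorting a review queue by the returned score. The endpoint URL, request and response fields, and the priorityScore name are illustrative assumptions, not the actual Content Safety API schema; real access requires partner approval.

```python
import base64
import requests

# Hypothetical endpoint and key: the real Content Safety API is
# available only to vetted partners, and its request/response
# format may differ from this sketch.
API_URL = "https://example.googleapis.com/v1/images:classify"
API_KEY = "YOUR_API_KEY"  # placeholder

def priority_score(image_bytes: bytes) -> float:
    """Ask the classifier how urgently an image needs human review."""
    payload = {"image": {"content": base64.b64encode(image_bytes).decode()}}
    resp = requests.post(API_URL, params={"key": API_KEY},
                         json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["priorityScore"]  # hypothetical field name

def triage(images: dict) -> list:
    """Order a review queue so moderators see likely matches first."""
    return sorted(images, key=lambda name: priority_score(images[name]),
                  reverse=True)
```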

For many years, we’ve had dedicated teams working to prevent access to CSAM on google.com by de-indexing and reporting illegal sites and by filtering autocomplete suggestions for search terms associated with CSAM. Last summer, we redesigned and expanded a feature we’ve been running since 2013: users who enter CSAM-related queries are shown a prominent message that CSAM is illegal, along with instructions on how to report this content to their local authorities. We also provide information about local resources to connect users with NGOs that support children or families who may have been victims of abuse. We’re already seeing an impact from these efforts: hundreds of thousands of users each month click through to the reporting hotlines we surface, including the Internet Watch Foundation in the UK, the Canadian Centre for Child Protection, and Te Protejo in Colombia. And, crucially, we’ve seen that when these warning boxes are shown, we’re less likely to see follow-up searches seeking similar material. We will expand this feature over the course of this year.
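
In simplified form, this warning flow can be thought of as matching a query against a list of flagged terms and, on a match, returning a warning with a locale-appropriate hotline instead of ordinary results. The term list, country mapping, and matching logic below are illustrative placeholders; the production system is far more sophisticated.

```python
from typing import Optional

# Placeholder flagged terms; the real list is curated and not public.
BLOCKED_TERMS = {"example-term-1", "example-term-2"}

# Illustrative mapping from country code to a reporting hotline.
HOTLINES = {
    "GB": "Internet Watch Foundation - https://report.iwf.org.uk",
    "CA": "Canadian Centre for Child Protection - https://cybertip.ca",
    "CO": "Te Protejo - https://teprotejo.org",
}

def warning_for_query(query: str, country_code: str) -> Optional[str]:
    """Return a warning message if the query matches a flagged term."""
    tokens = set(query.lower().split())
    if tokens & BLOCKED_TERMS:
        hotline = HOTLINES.get(country_code, "your local reporting hotline")
        return ("Child sexual abuse imagery is illegal.\n"
                f"To report it, contact: {hotline}")
    return None  # no warning; serve results as usual
```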

Supporting organizations to fight CSAM globally

The scale and complexity of fighting CSAM online mean we must take a global, multi-stakeholder approach. That’s why we’re working across industry and with leading child safety organizations like the WeProtect Global Alliance, Thorn, and the Global Partnership to End Violence Against Children, and why we continue to empower and support organizations that are creating real and lasting change for children. For example, we’ve funded a three-year Google Fellow at NCMEC to modernize and integrate their systems. We’ve also extended our Ad Grants program to qualifying child protection nonprofits during the pandemic, providing funding and campaign help for organizations like the INHOPE hotline network and ECPAT International. Since 2003, we’ve given almost $90 million in Ad Grants to global child protection organizations. We also supported the Five Country Ministerial’s Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse and collaborated across industry to produce a practical guide for companies considering applying these principles. This builds on our work on Project Protect as part of the Technology Coalition.

Working together, we can make meaningful progress in the global fight against CSAM.