Working together to combat terrorists online

Last week Google hosted a regional event in Jakarta exploring how ‘counter narratives’ can be used to prevent extremism. Counter narratives are responses to extremist ideas or propaganda, and they can take many forms. The discussions extended to far-right extremism, to experiences using counter narratives within that community, and to the definition of extremism itself.
At the event there was strong representation from Australia and New Zealand, with YouTube content creators, civil society groups, academics and policymakers joining the conversation. There was unanimous agreement that none of us can address this challenge alone - we need to come together in fora such as the one in Jakarta to share information and ideas, and to explore opportunities where we can actively collaborate or support each other’s activities. We look forward to continuing these conversations over the coming months, including within the Global Internet Forum to Counter Terrorism that Kent Walker, our Global General Counsel, describes below.


[Editor’s note: This is a revised and abbreviated version of a speech Kent delivered at the United Nations in New York City, NY, on behalf of the members of the Global Internet Forum to Counter Terrorism.]
The Global Internet Forum to Counter Terrorism is a group of four technology companies—Facebook, Microsoft, Twitter, and YouTube—that are committed to working together and with governments and civil society to address the problem of online terrorist content.
For our companies, terrorism isn’t just a business concern or a technical challenge; it is a deeply personal threat. We are citizens of London, Paris, Jakarta, and New York. And in the wake of each terrorist attack, we too frantically check in on our families and co-workers to make sure they are safe. We’ve all had to do this far too often.
The products that our companies build lower barriers to innovation and empower billions of people around the world. But we recognize that the internet and other tools have also been abused by terrorists in their efforts to recruit, fundraise, and organize. And we are committed to doing everything in our power to ensure that our platforms aren't used to distribute terrorist material.
The Forum’s efforts are focused on three areas: leveraging technology, conducting research on patterns of radicalization and misuse of online platforms, and sharing best practices to accelerate our joint efforts against dangerous radicalization. Let me say more about each pillar.
First, when it comes to technology, you should know that our companies are applying our best talent and technology to the task of getting terrorist content off our services. There is no silver bullet when it comes to finding and removing this content, but we’re getting much better.
One early success in collaboration has been our “hash sharing” database, which allows a company that discovers terrorist content on one of its sites to create a digital fingerprint and share it with the other companies in the coalition, who can then more easily detect and review similar content for removal.
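To illustrate the general idea, here is a minimal sketch in Python of how a shared hash database might work. The class and method names are illustrative assumptions, not the coalition’s actual design, and the real system reportedly relies on perceptual hashes that survive re-encoding rather than the exact cryptographic hash used here for simplicity.

```python
import hashlib

class SharedHashDatabase:
    """Toy model of a shared fingerprint database (illustrative only)."""

    def __init__(self):
        self._known_hashes = set()  # fingerprints of confirmed terrorist content

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # One company computes a digital fingerprint of the file.
        # SHA-256 is a stand-in; a production system would use a
        # perceptual hash that is robust to re-encoding and cropping.
        return hashlib.sha256(content).hexdigest()

    def share(self, content: bytes) -> None:
        # The fingerprint (not the content itself) is shared with partners.
        self._known_hashes.add(self.fingerprint(content))

    def matches_known_content(self, upload: bytes) -> bool:
        # Partner companies check new uploads against the shared set,
        # then queue any matches for human review and removal.
        return self.fingerprint(upload) in self._known_hashes

db = SharedHashDatabase()
db.share(b"...video bytes flagged by one company...")
print(db.matches_known_content(b"...video bytes flagged by one company..."))  # True
```

Note that sharing only fingerprints lets companies cooperate on detection without exchanging the underlying material itself.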
We have to deal with these problems at tremendous scale. The haystacks are unimaginably large and the needles are both very small and constantly changing. People upload over 400 hours of content to YouTube every minute. Our software engineers have spent years developing technology that can spot certain telltale cues and markers. In recent months we have more than doubled the number of videos we've removed for violent extremism and have located these videos twice as fast. And what’s more, 75 percent of the violent extremism videos we’ve removed in recent months were found using technology before they received a single human flag.
These efforts are working. Between August 2015 and June 2017, Twitter suspended more than 935,000 accounts for the promotion of terrorism. During the first half of 2017, over 95 percent of the accounts it removed were detected using its in-house technology. Facebook is using new advances in artificial intelligence to root out "terrorist clusters" by mapping out the pages, posts, and profiles with terrorist material and then shutting them down.
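The “cluster” approach can be pictured as a graph problem: pages, posts, and profiles are nodes, and interactions such as shares, likes, and friendships are edges. The sketch below is a hedged illustration of that idea using a simple breadth-first search from a seed set of confirmed material; the graph, names, and seed data are invented, and Facebook’s actual systems are far more sophisticated.

```python
from collections import deque

def map_cluster(graph: dict[str, list[str]], seeds: set[str]) -> set[str]:
    """Breadth-first search from confirmed nodes to their connected cluster.

    Returns the full set of connected pages/posts/profiles, which would
    then be surfaced for human review rather than removed automatically.
    """
    cluster = set(seeds)
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in cluster:
                cluster.add(neighbor)
                queue.append(neighbor)
    return cluster

# Toy data: edges represent interactions between accounts and content.
toy_graph = {
    "page_A": ["profile_1", "post_X"],
    "profile_1": ["page_A", "profile_2"],
    "post_X": ["page_A"],
    "profile_2": ["profile_1"],
    "page_unrelated": [],
}
print(map_cluster(toy_graph, {"page_A"}))
# -> {'page_A', 'profile_1', 'post_X', 'profile_2'}; 'page_unrelated' is untouched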
Despite this recent progress, machines are simply not at the stage where they can replace human judgment. For example, portions of a terrorist video in a news broadcast might be entirely legitimate, but a computer program will have difficulty distinguishing documentary coverage from incitement.
The Forum’s second pillar is focused on conducting and sharing research about how terrorists use the internet to influence their audiences so that we can stay one step ahead.
Today, the members of the Forum are pleased to announce that we are making a multi-million dollar commitment to support research on terrorist abuse of the internet and how governments, tech companies, and civil society can fight back against online radicalization.
The Forum has also set a goal of working with 50 smaller tech companies to help them better tackle terrorist content on their platforms. On Monday, we hosted dozens of companies for a workshop with our partners under the UN Counter-Terrorism Executive Directorate. There will be a workshop in Brussels in December and another in Indonesia in the coming months. And we are also working to expand the hash-sharing database to smaller companies.
The Forum’s final pillar is working together to find powerful messages and avenues to reach out to those at greatest risk of radicalization.
Members of the Forum are doing a better job of sharing breakthroughs with each other. One success we’ve seen is the Redirect Method, developed at Alphabet’s Jigsaw group. Redirect uses targeted advertising to reach people searching for terrorist content and presents videos that undermine extremist recruiting efforts. During a recent eight-week study, more than 300,000 users clicked on our targeted ads and watched more than 500,000 minutes of video. This past April, Microsoft started a similar program on Bing. And Jigsaw and Bing are now exploring a partnership to share best practices and expertise.
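In outline, the Redirect Method keys counter-narrative content to search queries associated with extremist interest. The sketch below shows only the basic matching logic; the keyword list, URLs, and function are invented placeholders, not Jigsaw’s implementation, which builds its targeting on real advertising infrastructure and curated video playlists.

```python
# Placeholder terms and URLs for illustration only.
RISK_KEYWORDS = {"example_recruitment_phrase", "example_propaganda_term"}

COUNTER_NARRATIVE_PLAYLIST = [
    "https://video.example/testimony-of-a-defector",
    "https://video.example/clerics-rebut-propaganda",
]

def maybe_redirect(query: str) -> list[str]:
    """Return counter-narrative videos if the query signals extremist interest."""
    tokens = set(query.lower().split())
    if tokens & RISK_KEYWORDS:
        # The targeted ad slot surfaces curated counter-narrative videos.
        return COUNTER_NARRATIVE_PLAYLIST
    return []  # ordinary results; no intervention

print(maybe_redirect("where to find example_recruitment_phrase"))
```

The design choice worth noting is that the method does not block the search; it places credible alternative voices alongside what the person was already looking for.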
At the same time, we’re elevating the voices that are most credible in speaking out against terrorism, hate, and violence. YouTube’s Creators for Change program highlights online stars taking a stand against xenophobia and extremism. Facebook’s P2P program has brought together more than 5,000 students from 68 countries to create campaigns to combat hate speech. And together the companies have participated in hundreds of meetings and trainings to counter violent extremism, including events in Beirut, Bosnia, and Brussels, and summits at the White House, here at the United Nations, and in London and Sydney, to empower credible non-governmental voices against violent extremism.
There is no magic computer program that will eliminate online terrorist content, but we are committed to working with everyone in this room as we continue to ramp up our own efforts to stop terrorists’ abuse of our services. This forum is an important step in the right direction. We look forward to working with national and local governments, and civil society, to prevent extremist ideology from spreading in communities and online.
- Kent Walker, Global General Counsel, Google