Author Archives: Kent Walker

Working together to combat terrorists online

Editor’s note: This is a revised and abbreviated version of a speech Kent delivered today at the United Nations in New York City, NY, on behalf of the members of the Global Internet Forum to Counter Terrorism.

The Global Internet Forum to Counter Terrorism is a group of four technology companies—Facebook, Microsoft, Twitter, and YouTube—that are committed to working together and with governments and civil society to address the problem of online terrorist content.

For our companies, terrorism isn’t just a business concern or a technical challenge. These are deeply personal threats. We are citizens of London, Paris, Jakarta, and New York. And in the wake of each terrorist attack we too frantically check in on our families and co-workers to make sure they are safe. We’ve all had to do this far too often.

The products that our companies build lower barriers to innovation and empower billions of people around the world. But we recognize that the internet and other tools have also been abused by terrorists in their efforts to recruit, fundraise, and organize. And we are committed to doing everything in our power to ensure that our platforms aren't used to distribute terrorist material.

The Forum’s efforts are focused on three areas: leveraging technology, conducting research on patterns of radicalization and misuse of online platforms, and sharing best practices to accelerate our joint efforts against dangerous radicalization. Let me say more about each pillar.

First, when it comes to technology, you should know that our companies are putting our best talent and technology against the task of getting terrorist content off our services. There is no silver bullet when it comes to finding and removing this content, but we’re getting much better.

One early success in collaboration has been our “hash sharing” database, which allows a company that discovers terrorist content on one of its sites to create a digital fingerprint and share it with the other companies in the coalition, which can then more easily detect and review similar content for removal.
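The mechanics of such a database can be illustrated with a short sketch. Everything below is an illustrative assumption, not the Forum's actual implementation: the real system reportedly uses perceptual hashes that survive re-encoding, whereas this sketch uses a plain cryptographic hash as a stand-in for any fingerprint function.

```python
import hashlib

# Hypothetical shared database of known fingerprints, populated by all
# member companies. SHA-256 is a stand-in; a production system would use
# a perceptual hash robust to cropping and re-encoding.
shared_hashes = set()

def fingerprint(content: bytes) -> str:
    """Compute a digital fingerprint of a piece of content."""
    return hashlib.sha256(content).hexdigest()

def report_terrorist_content(content: bytes) -> None:
    """A member company shares a fingerprint after removing content."""
    shared_hashes.add(fingerprint(content))

def matches_known_content(content: bytes) -> bool:
    """Other companies check uploads against the shared database."""
    return fingerprint(content) in shared_hashes

# Company A removes a video and shares its fingerprint...
report_terrorist_content(b"example video bytes")
# ...so Company B can flag a re-upload of the same bytes for human review.
flagged = matches_known_content(b"example video bytes")
```

Note that a match here only surfaces content for review; as discussed below, human judgment still decides whether a given use (say, documentary coverage) is legitimate.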

We have to deal with these problems at tremendous scale. The haystacks are unimaginably large and the needles are both very small and constantly changing. People upload over 400 hours of content to YouTube every minute. Our software engineers have spent years developing technology that can spot certain telltale cues and markers. In recent months we have more than doubled the number of videos we've removed for violent extremism and have located these videos twice as fast. And what’s more, 75 percent of the violent extremism videos we’ve removed in recent months were found using technology before they received a single human flag.

These efforts are working. Between August 2015 and June 2017, Twitter suspended more than 935,000 accounts for the promotion of terrorism. During the first half of 2017, over 95 percent of the accounts it removed were detected using its in-house technology. Facebook is using new advances in artificial intelligence to root out "terrorist clusters" by mapping out the pages, posts, and profiles with terrorist material and then shutting them down.

Despite this recent progress, machines are simply not at the stage where they can replace human judgment. For example, portions of a terrorist video in a news broadcast might be entirely legitimate, but a computer program will have difficulty distinguishing documentary coverage from incitement.  

The Forum’s second pillar is focused on conducting and sharing research about how terrorists use the internet to influence their audiences so that we can stay one step ahead.

Today, the members of the Forum are pleased to announce that we are making a multi-million dollar commitment to support research on terrorist abuse of the internet and how governments, tech companies, and civil society can fight back against online radicalization.

The Forum has also set a goal of working with 50 smaller tech companies to help them better tackle terrorist content on their platforms. On Monday, we hosted dozens of companies for a workshop with our partners under the UN Counter Terrorism Executive Directorate. There will be a workshop in Brussels in December and another in Indonesia in the coming months. And we are also working to expand the hash-sharing database to smaller companies.

The Forum’s final pillar is working together to find powerful messages and avenues to reach out to those at greatest risk of radicalization.

Members of the forum are doing a better job of sharing breakthroughs with each other. One success we’ve seen is with the Redirect Method developed at Alphabet’s Jigsaw group. Redirect uses targeted advertising to reach people searching for terrorist content and presents videos that undermine extremist recruiting efforts. During a recent eight-week study more than 300,000 users clicked on our targeted ads and watched more than 500,000 minutes of video. This past April, Microsoft started a similar program on Bing. And Jigsaw and Bing are now exploring a partnership to share best practices and expertise.

At the same time, we’re elevating the voices that are most credible in speaking out against terrorism, hate, and violence. YouTube’s Creators for Change program highlights online stars taking a stand against xenophobia and extremism. And Facebook's P2P program has brought together more than 5,000 students from 68 countries to create campaigns to combat hate speech. Together, the companies have participated in hundreds of meetings and trainings to counter violent extremism, including events in Beirut, Bosnia, and Brussels, and summits at the White House, here at the United Nations, and in London and Sydney, to empower credible non-governmental voices against violent extremism.

There is no magic computer program that will eliminate online terrorist content, but we are committed to working with everyone in this room as we continue to ramp up our own efforts to stop terrorists’ abuse of our services. This forum is an important step in the right direction. We look forward to working with national and local governments, and civil society, to prevent extremist ideology from spreading in communities and online.

Supporting new ideas in the fight against hate

Addressing the threat posed by violence and hate is a critical challenge for us all. Google has taken steps to tackle violent extremist content online—putting our best talent and technology to the task, and partnering with law enforcement agencies, civil society groups, and the wider technology industry. We can’t do it alone, but we’re making progress.

Our efforts to disrupt terrorists’ ability to use the Internet focus on three areas: leveraging technology, conducting and sharing research, and sharing best practices and encouraging affirmative efforts against dangerous radicalization. Today we’re announcing a new effort to build on that third pillar. Over the last year we’ve made $2 million in grants to nonprofits around the world seeking to empower and amplify counter-extremist voices. Today we’re expanding that effort and launching a $5 million Google.org innovation fund to counter hate and extremism. Over the next two years, this funding will support technology-driven solutions, as well as grassroots efforts like community youth projects that help build communities and promote resistance to radicalization.

We’re making our first grant from the fund to the Institute for Strategic Dialogue (ISD), an expert counter-extremist organization in the U.K. ISD will use our $1.3 million grant to help leaders from the U.K.’s technology, academic, and charity sectors develop projects to counter extremism. This will be the largest project of its kind outside of government and aims to produce innovative, effective and data-driven solutions that can undermine and overcome radicalization propaganda. We’ll provide an update in the coming months with more information on how to apply.

By funding experts like ISD, we hope to support sustainable solutions to extremism both online and offline. We don’t have all the answers, but we’re committed to playing our part. We’re looking forward to helping bring new ideas and technologies to life.

The European Commission decision on online shopping: the other side of the story

When you shop online, you want to find the products you’re looking for quickly and easily. And advertisers want to promote those same products. That's why Google shows shopping ads, connecting our users with thousands of advertisers, large and small, in ways that are useful for both.

We believe the European Commission’s online shopping decision underestimates the value of those kinds of fast and easy connections. While some comparison shopping sites naturally want Google to show them more prominently, our data show that people usually prefer links that take them directly to the products they want, not to websites where they have to repeat their searches.

We think our current shopping results are useful and are a much-improved version of the text-only ads we showed a decade ago. Showing ads that include pictures, ratings, and prices benefits us, our advertisers, and most of all, our users. And we show them only when your feedback tells us they are relevant. Thousands of European merchants use these ads to compete with larger companies like Amazon and eBay.

Google shopping screengrab

When the Commission asks why some comparison websites have not done as well as others, we think it should consider the many sites that have grown in this period, including platforms like Amazon and eBay. With its comparison tools, reviews, millions of retailers, and vast range of products from sneakers to groceries, Amazon is a formidable competitor and has become the first port of call for product searches. And as Amazon has grown, it’s natural that some comparison services have proven less popular than others. We compete with Amazon and other sites for shopping-related searches by showing ever more useful product information.

When you use Google to search for products, we try to give you what you’re looking for. Our ability to do that well isn’t favoring ourselves, or any particular site or seller; it’s the result of hard work and constant innovation, based on user feedback.

Given the evidence, we respectfully disagree with the conclusions announced today. We will review the Commission’s decision in detail as we consider an appeal, and we look forward to continuing to make our case.

Digital security and due process: A new legal framework for the cloud era

Editor’s note: This is an abbreviated version of a speech Kent delivered today at The Heritage Foundation in Washington, D.C.

For as long as we’ve had legal systems, prosecutors and police have needed to gather evidence. And for each new advance in communications, law enforcement has adapted. With the advent of the post office, police got warrants to search letters and packages. With the arrival of telephones, police served subpoenas for the call logs of suspects. Digital communications have now gone well beyond the Postal Service and Ma Bell. But the laws that govern evidence-gathering on the internet were written before the Information Revolution, and are now both hindering the flow of information to law enforcement and jeopardizing user privacy as a result.

These rules are due for a fundamental realignment in light of the rapid growth of technology that relies on the cloud, the very real security threats that face people and communities, and the expectations of privacy that internet users have in their communications.

Today, we’re proposing a new framework that allows countries that commit to baseline privacy, human rights, and due process principles to gather evidence more quickly and efficiently. We believe these reforms would not only help law enforcement conduct more effective investigations but also encourage countries to improve and align on privacy and due process standards. Further, reducing the amount of time countries have to wait to gather evidence would reduce the pressure to pursue more problematic ways of trying to gather data.

Current laws hinder law enforcement and user privacy

The U.S. Electronic Communications Privacy Act (ECPA) governs requests for content from law enforcement. Under ECPA, foreign countries largely have to rely on diplomatic mechanisms such as Mutual Legal Assistance Treaties (MLAT) to obtain content that is held by a company in the United States. The last data we’ve seen suggests that the average wait to receive content through the MLAT process is 10 months, far too long for most criminal cases. While law enforcement waits for this data, crimes could remain unsolved or a trial might proceed without key evidence.

The current legal framework poses a threat to users’ privacy as well. Faced with the extended delays under the MLAT process, some countries are now asserting that their laws apply to companies and individuals outside of their borders. Countries asserting extraterritorial authority potentially put companies in an untenable situation where we risk violating either the law of the requesting country or the law of the country where we are headquartered.

We are also seeing various proposals to require companies to store data within local borders as a means to gain easier access. There are a host of problems with this approach: small, one-off data centers are easier targets for attackers and jeopardize data security and privacy. Further, requiring businesses to build these data centers will raise the costs of cloud services, erecting significant barriers for smaller companies.

The legal ambiguity concerning cross-border law enforcement requests has also created complications for law enforcement in the United States. Last year, the Second Circuit Court of Appeals was asked to determine the reach of ECPA search warrants issued under the now out-of-date statute. The Court ruled that under existing law, an ECPA search warrant cannot be used to compel service providers to disclose user data that is stored outside of the U.S. But even those judges agreed that ECPA should be updated by Congress to reflect the new reality of today’s global networks.

Principles for reform

Our proposal to address these challenges for domestic and international law enforcement, for companies, and for users has two core principles:

First, countries that honor baseline principles of privacy, human rights, and due process should be able to make direct requests to service providers for user data that pertains to serious crimes that happen within their borders and users who are within their jurisdiction.  

While the U.S. cannot solve the problem on its own, and many countries have blocking regulations, policy reform in the U.S. is a necessary first step. We’ve been pleased to see serious debate in Washington around ways to update digital evidence laws.

In May, the U.S. Department of Justice presented legislation that would amend ECPA and authorize U.S. providers to disclose records and communications content to foreign governments that adhere to baseline due process, human rights, and privacy standards. This legislation would be the critical starting point for the new framework of direct requests.

ECPA should also be updated to address what data is available using an ECPA search warrant in a way that serves broader public policy objectives. Law enforcement requests for digital evidence should be based on the location and nationality of users, not the location of data. A key component of this reform is the International Communications Privacy Act (ICPA), which Google supports. ICPA provides a unique opportunity for Congress to update laws governing digital evidence both for investigations in the U.S. and abroad. While refinements to ICPA may be necessary, we believe the principles upon which ICPA is based are sound.

Second, provided that countries can meet baseline standards and the U.S. amends ECPA, the next step would be for the United States and foreign governments to sign new agreements that could provide an alternative to the MLAT process. The bilateral agreements that could be authorized by the legislation put forward by the Department of Justice provide a promising avenue to improve global privacy standards and create a pathway for foreign governments to obtain digital evidence for investigations.

We’re ready to do our part

We know that this will be an involved process. It’ll require action here in Washington and in capitals around the world. However, we can’t accept the complexity of action as a reason for inaction in addressing an important and growing problem.

Our proposal asks for a lot of movement from governments. But we recognize our role as well. Google is ready to work with legislators, regulators, civil society, academics, and other companies to progress these proposals and make sure that we get this right. And I look forward to conversations that we’ll have in Washington, D.C. and beyond in the months to come.


Our proposal asks for a lot of movement from governments. But we recognize our role as well. Google is ready to work with legislators, regulators, civil society, academics, and other companies to progress these proposals and make sure that we get this right. And I look forward to conversations that we’ll have in Washington, D.C. and beyond in the months to come.

Digital security and due process: A new legal framework for the cloud era

Editor’s note: This is an abbreviated version of a speech Kent delivered today at The Heritage Foundation in Washington, D.C.

For as long as we’ve had legal systems, prosecutors and police have needed to gather evidence. And for each new advance in communications, law enforcement has adapted. With the advent of the post office, police got warrants to search letters and packages. With the arrival of telephones, police served subpoenas for the call logs of suspects. Digital communications have now gone well beyond the Postal Service and Ma Bell. But the laws that govern evidence-gathering on the internet were written before the Information Revolution, and as a result they now both hinder the flow of information to law enforcement and jeopardize user privacy.

These rules are due for a fundamental realignment in light of the rapid growth of technology that relies on the cloud, the very real security threats that face people and communities, and the expectations of privacy that internet users have in their communications.

Today, we’re proposing a new framework that allows countries that commit to baseline privacy, human rights, and due process principles to gather evidence more quickly and efficiently. We believe these reforms would not only help law enforcement conduct more effective investigations but also encourage countries to improve and align on privacy and due process standards. Further, reducing the time countries must wait to gather evidence would reduce the pressure to pursue more problematic ways of obtaining data.

Current laws hinder law enforcement and user privacy

The U.S. Electronic Communications Privacy Act (ECPA) governs requests for content from law enforcement. Under ECPA, foreign countries largely have to rely on diplomatic mechanisms such as Mutual Legal Assistance Treaties (MLAT) to obtain content that is held by a company in the United States. The last data we’ve seen suggests that the average wait to receive content through the MLAT process is 10 months, far too long for most criminal cases. While law enforcement waits for this data, crimes could remain unsolved or a trial might happen missing key evidence.

The current legal framework poses a threat to users’ privacy as well. Faced with the extended delays under the MLAT process, some countries are now asserting that their laws apply to companies and individuals outside of their borders. Countries asserting extraterritorial authority potentially put companies in an untenable situation where we risk violating either the law of the requesting country or the law of the country where we are headquartered.

We are also seeing various proposals to require companies to store data within local borders as a means to gain easier access. There are a host of problems with this: small, one-off data centers are easier targets for attackers and jeopardize data security and privacy. Further, requiring businesses to build these data centers will raise the costs of cloud services, erecting significant barriers for smaller companies.

The legal ambiguity concerning cross-border law enforcement requests has also created complications for law enforcement in the United States. Last year, the Second Circuit Court of Appeals was asked to determine the reach of ECPA search warrants issued under the now out-of-date statute. The Court ruled that under existing law, an ECPA search warrant cannot be used to compel service providers to disclose user data that is stored outside of the U.S. But even those judges agreed that ECPA should be updated by Congress to reflect the new reality of today’s global networks.

Principles for reform

Our proposal to address these challenges for domestic and international law enforcement, for companies, and for users has two core principles:

First, countries that honor baseline principles of privacy, human rights, and due process should be able to make direct requests to service providers for user data pertaining to serious crimes committed within their borders and to users within their jurisdiction.

While the U.S. cannot solve the problem on its own, and many countries have blocking regulations, policy reform in the U.S. is a necessary first step. We’ve been pleased to see serious debate in Washington around ways to update digital evidence laws.

In May, the U.S. Department of Justice presented legislation that would amend ECPA and authorize U.S. providers to disclose records and communications content to foreign governments that adhere to baseline due process, human rights, and privacy standards. This legislation would be the critical starting point for the new framework of direct requests.

ECPA should also be updated to address what data is available using an ECPA search warrant in a way that serves broader public policy objectives. Law enforcement requests for digital evidence should be based on the location and nationality of users, not the location of data. A key component of this reform is the International Communications Privacy Act (ICPA), which Google supports. ICPA provides a unique opportunity for Congress to update laws governing digital evidence both for investigations in the U.S. and abroad. While refinements to ICPA may be necessary, we believe the principles upon which ICPA is based are sound.

Second, provided that countries can meet baseline standards and the U.S. amends ECPA, the next step would be for the United States and foreign governments to sign new agreements that could provide an alternative to the MLAT process. The bilateral agreements that could be authorized by the legislation put forward by the Department of Justice provide a promising avenue to improve global privacy standards and create a pathway for foreign governments to obtain digital evidence for investigations.

We’re ready to do our part

We know that this will be an involved process. It’ll require action here in Washington and in capitals around the world. However, we can’t accept the complexity of action as a reason for inaction in addressing an important and growing problem.

Our proposal asks for a lot of movement from governments. But we recognize our role as well. Google is ready to work with legislators, regulators, civil society, academics, and other companies to advance these proposals and make sure that we get this right. And I look forward to conversations that we’ll have in Washington, D.C. and beyond in the months to come.

Four steps we’re taking today to fight terrorism online

Editor’s Note: This post appeared as an op-ed in the Financial Times earlier today.

Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all. Google and YouTube are committed to being part of the solution. We are working with government, law enforcement and civil society groups to tackle the problem of violent extremism online. There should be no place for terrorist content on our services.

While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now.

We have thousands of people around the world who review and counter abuse of our platforms. Our engineers have developed technology to prevent re-uploads of known terrorist content using image-matching technology. We have invested in systems that use content-based signals to help identify new videos for removal. And we have developed partnerships with expert groups, counter-extremism agencies, and the other technology companies to help inform and strengthen our efforts.
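The core idea behind preventing re-uploads of known content can be illustrated with a short sketch: fingerprint each discovered file and check new uploads against a shared set of fingerprints. This is a simplified illustration only, not the actual system described above; the function names are hypothetical, and it uses an exact-match cryptographic hash, whereas production image-matching relies on robust perceptual hashes that survive re-encoding and minor edits.

```python
import hashlib

# Hypothetical shared database of fingerprints of known violating content.
known_hashes = set()

def fingerprint(content: bytes) -> str:
    """Create a digital fingerprint (here, a SHA-256 digest) of raw content."""
    return hashlib.sha256(content).hexdigest()

def flag_known_content(upload: bytes) -> bool:
    """Return True if the upload matches a previously shared fingerprint."""
    return fingerprint(upload) in known_hashes

# One reviewer identifies violating content and shares its fingerprint...
known_hashes.add(fingerprint(b"previously identified violating video bytes"))

# ...so an identical re-upload is caught automatically,
assert flag_known_content(b"previously identified violating video bytes")
# while unrelated content is unaffected.
assert not flag_known_content(b"unrelated home video bytes")
```

The design choice worth noting is that companies can share the fingerprints without sharing the underlying content itself, which is what makes a cross-company database workable.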

Today, we are pledging to take four additional steps.

First, we are increasing our use of technology to help identify extremist and terrorism-related videos. This can be challenging: a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user. We have used video analysis models to find and assess more than 50 per cent of the terrorism-related content we have removed over the past six months. We will now devote more engineering resources to apply our most advanced machine learning research to train new “content classifiers” to help us more quickly identify and remove extremist and terrorism-related content.
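To give a sense of what a "content classifier" is, the toy sketch below trains a tiny linear classifier over hashed bag-of-words features to separate flag-worthy text from benign text. This is purely an illustrative sketch of the general technique; it bears no resemblance in scale or sophistication to the video analysis models described above, and the class names and toy data are invented for the example.

```python
import hashlib
from collections import defaultdict

DIM = 65536  # size of the hashed feature space

def features(text: str) -> dict:
    """Map tokens into a fixed-size hashed bag-of-words vector."""
    vec = defaultdict(float)
    for tok in text.lower().split():
        idx = int(hashlib.md5(tok.encode()).hexdigest(), 16) % DIM
        vec[idx] += 1.0
    return vec

class PerceptronClassifier:
    """A minimal linear classifier trained with the perceptron update rule."""

    def __init__(self):
        self.w = defaultdict(float)

    def score(self, vec: dict) -> float:
        return sum(self.w[i] * v for i, v in vec.items())

    def train(self, labeled, epochs: int = 10) -> None:
        # labels: +1 = flag for human review, -1 = allow
        for _ in range(epochs):
            for text, label in labeled:
                vec = features(text)
                pred = 1 if self.score(vec) > 0 else -1
                if pred != label:  # update weights only on mistakes
                    for i, v in vec.items():
                        self.w[i] += label * v

    def predict(self, text: str) -> int:
        return 1 if self.score(features(text)) > 0 else -1

# Invented toy training data, for illustration only.
train_data = [
    ("join the fight recruit now", 1),
    ("cute cat video compilation", -1),
    ("recruit fighters join us", 1),
    ("cooking pasta recipe video", -1),
]
clf = PerceptronClassifier()
clf.train(train_data)
```

In practice the point of such a classifier is triage, not final judgment: as the next paragraph notes, borderline cases still go to human experts.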

Second, because technology alone is not a silver bullet, we will greatly increase the number of independent experts in YouTube’s Trusted Flagger programme. Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 per cent of the time and help us scale our efforts and identify emerging areas of concern. We will expand this programme by adding 50 expert NGOs to the 63 organisations that are already part of the programme, and we will support them with operational grants. This allows us to benefit from the expertise of specialised organisations working on issues like hate speech, self-harm, and terrorism. We will also expand our work with counter-extremist groups to help identify content that may be used to radicalise and recruit extremists.

Third, we will be taking a tougher stance on videos that do not clearly violate our policies — for example, videos that contain inflammatory religious or supremacist content. In future these will appear behind an interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements. That means these videos will have less engagement and be harder to find. We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.

Finally, YouTube will expand its role in counter-radicalisation efforts. Building on our successful Creators for Change programme promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the “Redirect Method” more broadly across Europe. This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.

We have also recently committed to working with industry colleagues—including Facebook, Microsoft, and Twitter—to establish an international forum to share and develop technology, support smaller companies, and accelerate our joint efforts to tackle terrorism online.

Collectively, these changes will make a difference. And we’ll keep working on the problem until we get the balance right. Extremists and terrorists seek to attack and erode not just our security, but also our values: the very things that make our societies open and free. We must not let them. Together, we can build lasting solutions that address the threats to our security and our freedoms. It is a sweeping and complex challenge. We are committed to playing our part.