
Google’s views on government, policy and politics

Extending domain opt-out and AdWords API tools

In 2012, Google made voluntary commitments to the Federal Trade Commission (FTC) that are set to expire on December 27th, 2017. At that time, we agreed to remove certain clauses from our AdWords API Terms and Conditions. We also agreed to provide a mechanism for websites to opt out of the display of their crawled content on certain Google web pages linked to google.com in the United States on a domain-by-domain basis.  

We believe that these policies provide continued flexibility for developers and websites, and we will continue our current practices regarding the AdWords API Terms and Conditions and the domain-by-domain opt-out after the voluntary commitments expire. Additional information can be found here.

New government removals and National Security Letter data

Since 2010, we’ve shared regular updates in our Transparency Report about the effects of government and corporate policies on users’ data and content. Our goal has always been to make this information as accessible as possible, and to continue expanding this report with new and relevant data.

Today, we’re announcing three updates to our Transparency Report. We’re expanding the National Security Letters (NSL) section, releasing new data on requests from governments to remove content from services like YouTube and Blogger, and making it easier for people to share select data and charts from the Transparency Report.

National Security Letters

Following the 2015 USA Freedom Act, the FBI started lifting indefinite gag restrictions—prohibitions against publicly sharing details—on particular NSLs. Last year, we began publishing NSLs we have received where, either through litigation or legislation, we have been freed of these nondisclosure obligations. We have added a new subsection to the NSL page of the Transparency Report where we publish these letters. We have also added new letters to the collection, and plan to update this section regularly.

Government requests for content removals

As usage of our services increases, we remain committed to keeping internet users safe, working with law enforcement to remove illegal content, and complying with local laws. During this latest reporting period, we’ve continued to expand our work with local law enforcement. From January to June 2017, we received 19,176 requests from governments around the world to remove 76,714 pieces of content. This was a 20 percent increase in removal requests over the second half of 2016.

Making our Transparency Report easier to use

Finally, we’ve implemented a new “deep linking” feature that makes it easier to bookmark and share specific charts in the Transparency Report. Sorting data by country, time period, and other categories now generates a distinct web address at the top of your browser window. This allows you to create a link that will show, for example, just government removals data in France, by Google product, for the first half of 2015. We hope this will make it easier for citizens to find and reference information in the report, and for journalists and researchers to highlight specific details that they may be examining as well.
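The mechanics of such deep links can be sketched as encoding the report’s current filter state into URL query parameters, so that a link reproduces the same view. The sketch below is illustrative only — the base URL and parameter names are assumptions, not the Transparency Report’s actual scheme:

```python
from urllib.parse import urlencode, parse_qs, urlparse

# Hypothetical base URL and parameter names, for illustration only.
BASE = "https://transparencyreport.google.com/government-removals/overview"

def deep_link(country: str, period: str, product: str) -> str:
    """Encode the current filter selection as a shareable URL."""
    return BASE + "?" + urlencode(
        {"country": country, "period": period, "product": product}
    )

def filters_from_link(url: str) -> dict:
    """Recover the filter selection from a shared deep link."""
    qs = parse_qs(urlparse(url).query)
    return {key: values[0] for key, values in qs.items()}

# Round trip: a link for French government removals data, first half of 2015.
link = deep_link("FR", "2015-H1", "YouTube")
assert filters_from_link(link) == {
    "country": "FR", "period": "2015-H1", "product": "YouTube"
}
```

Because the filter state lives entirely in the URL, bookmarking or pasting the link elsewhere reproduces the exact chart without any extra navigation.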

By continuing to make updates like these, we aim to spark new conversations about transparency, accountability and the role of governments and companies in the flow of information online.

Update on the Global Internet Forum to Counter Terrorism

At last year's EU Internet Forum, Facebook, Microsoft, Twitter and YouTube declared our joint determination to curb the spread of terrorist content online. Over the past year, we have formalized this partnership with the launch of the Global Internet Forum to Counter Terrorism (GIFCT). We hosted our first meeting in August, where representatives from the tech industry, government and non-governmental organizations came together to focus on three key areas: technological approaches, knowledge sharing, and research. Since then, we have participated in a Heads of State meeting at the UN General Assembly in September and the G7 Interior Ministers meeting in October, and we look forward to hosting a GIFCT event and attending the EU Internet Forum in Brussels on December 6.

The GIFCT is committed to working on technological solutions to help thwart terrorists' use of our services, and has built on the groundwork laid by the EU Internet Forum, particularly through a shared industry hash database, where companies can create “digital fingerprints” for terrorist content and share it with participating companies.

The database, which we committed to building last December and which became operational last spring, now contains more than 40,000 hashes. It allows member companies to use those hashes to identify and remove matching videos and images that violate their respective policies or, in some cases, to block terrorist content before it is even posted.
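In outline, a shared hash database works like the toy sketch below: a member company fingerprints flagged media and contributes the hash, and other members check uploads against the shared set. This is an illustrative assumption, not the consortium’s actual implementation — production systems typically use perceptual hashes that survive re-encoding, whereas SHA-256 here only matches byte-identical files:

```python
import hashlib

class HashSharingDB:
    """Toy model of a shared media-fingerprint database (illustrative only)."""

    def __init__(self):
        self.hashes = set()

    @staticmethod
    def fingerprint(media_bytes: bytes) -> str:
        # Stand-in for a perceptual hash; SHA-256 only catches exact copies.
        return hashlib.sha256(media_bytes).hexdigest()

    def share(self, media_bytes: bytes) -> None:
        """A member company flags content and contributes its hash."""
        self.hashes.add(self.fingerprint(media_bytes))

    def is_known(self, media_bytes: bytes) -> bool:
        """Another member checks an upload against the shared set."""
        return self.fingerprint(media_bytes) in self.hashes

db = HashSharingDB()
db.share(b"flagged-video-bytes")            # one company flags content
assert db.is_known(b"flagged-video-bytes")  # another detects a re-upload
assert not db.is_known(b"unrelated-bytes")  # unflagged content passes through
```

Crucially, only fingerprints are exchanged, not the media itself, and each member still applies its own policies when deciding whether a match should be removed.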

We are pleased that Ask.fm, Cloudinary, Instagram, Justpaste.it, LinkedIn, Oath, and Snap have also recently joined this hash-sharing consortium, and we will continue working to add more companies throughout 2018.

In order to disrupt the distribution of terrorist content across the internet, companies have invested in collaborating and sharing expertise with one another. GIFCT's knowledge-sharing work has grown quickly in large measure because companies recognize that in countering terrorism online we face many of the same challenges.

Although our companies have been sharing best practices around counterterrorism for several years, in recent months GIFCT has provided a more formal structure to accelerate and strengthen this work. In collaboration with the Tech Against Terror initiative — which recently launched a Knowledge Sharing Platform with the support of GIFCT and the UN Counter-Terrorism Committee Executive Directorate — we have held workshops for smaller tech companies in order to share best practices on how to disrupt the spread of violent extremist content online.

Our initial goal for 2017 was to work with 50 smaller tech companies to share best practices on how to disrupt the spread of violent extremist material. We have exceeded that goal, engaging with 68 companies over the past several months through workshops in San Francisco, New York, and Jakarta, with another workshop planned for next week in Brussels on the sidelines of the EU Internet Forum.

We recognize that our work is far from done, but we are confident that we are heading in the right direction. We will continue to provide updates as we forge new partnerships and develop new technology in the face of this global challenge.

Defending access to lawful information at Europe’s highest court

Under the right to be forgotten, Europeans can ask for information about themselves to be removed from search results for their name if it is outdated or irrelevant. From the outset, we have publicly stated our concerns about the ruling, but we have still worked hard to comply—and to do so conscientiously and in consultation with Data Protection Authorities. To date, we’ve handled requests to delist nearly 2 million search results in Europe, removing more than 800,000 of them. We have also taken great care not to erase results that are clearly in the public interest, as the European Court of Justice directed. Most Data Protection Authorities have concluded that this approach strikes the right balance.


But two right to be forgotten cases now in front of the European Court of Justice threaten that balance.


In the first case, four individuals—who we can’t name—present an apparently simple argument: European law protects sensitive personal data; sensitive personal data includes information about your political beliefs or your criminal record; so all mentions of criminality or political affiliation should automatically be purged from search results, without any consideration of public interest.


If the Court accepted this argument, it would give carte blanche to people who might wish to use privacy laws to hide information of public interest—like a politician’s political views, or a public figure’s criminal record. This would effectively erase the public’s right to know important information about people who represent them in society or provide them services.


In the second case, the Court must decide whether Google should enforce the right to be forgotten not just in Europe, but in every country around the world. We—and a wide range of human rights and media organizations, and others, like Wikimedia—believe that this runs contrary to the basic principles of international law: no one country should be able to impose its rules on the citizens of another country, especially when it comes to linking to lawful content. Adopting such a rule would encourage other countries, including less democratic regimes, to try to impose their values on citizens in the rest of the world.


We’re speaking out because restricting access to lawful and valuable information is contrary to our mission as a company and keeps us from delivering the comprehensive search service that people expect of us.


But the threat is much greater than this. These cases represent a serious assault on the public’s right to access lawful information.


We will argue in court for a reasonable interpretation of the right to be forgotten and for the ability of countries around the world to set their own laws, not have those of others imposed on them. Until November 20, European countries and institutions have the chance to make their views known to the Court. And we encourage everyone who cares about public access to information to stand up and fight to preserve it.

Security and disinformation in the U.S. 2016 election

We’ve seen many types of efforts to abuse Google’s services over the years. And, like other internet platforms, we have found some evidence of efforts to misuse our platforms during the 2016 U.S. election by actors linked to the Internet Research Agency in Russia. 

Preventing the misuse of our platforms is something that we take very seriously; it’s a major focus for our teams. We’re committed to finding a way to stop this type of abuse, and to working closely with governments, law enforcement, other companies, and leading NGOs to promote electoral integrity and user security, and combat misinformation. 

We have been conducting a thorough investigation related to the U.S. election across our products, drawing on the work of our information security team, research into misinformation campaigns from our teams, and leads provided by other companies. Today, we are sharing results from that investigation. While we have found only limited activity on our services, we will continue to work to prevent all of it, because there is no amount of interference that is acceptable.

We will be launching several new initiatives to provide more transparency and enhance security, which we also detail in these information sheets: what we found, steps against phishing and hacking, and our work going forward.

Our work doesn’t stop here, and we’ll continue to investigate as new information comes to light. Improving transparency is a good start, but we must also address new and evolving threat vectors for misinformation and attacks on future elections. We will continue to do our best to help people find valuable and useful information, an essential foundation for an informed citizenry and a robust democratic process.

Towards a future of work that works for everyone

The future of work concerns us all. Our grandchildren will have jobs that don’t yet exist, and will live lives we cannot imagine. In Europe, getting the future of work right for individuals, societies and industries means having an open debate about the possibilities right now. We want to be a part of that discussion, and help contribute to a future of work that works for everyone. So last week in Stockholm and The Hague we brought together a range of leading international experts from academia, trade unions, public sector and businesses to discuss the impact of technology on jobs. We also asked McKinsey for a report on the impact of automation on work, jobs and skills.


As advances in machine learning and robotics make headlines, there’s a heated debate about whether innovation is a magic fix for an aging workforce, or a fast track to mass unemployment. Data can illuminate that debate, and McKinsey focused their research on the Nordics, Benelux, Ireland and Estonia—a diverse group which have at least one thing in common: They’re Europe’s digital frontrunners. The report from McKinsey shows us that while automation will impact existing jobs, innovation and adopting new technology can increase the total number of jobs available.


The report makes it very clear that divergent paths are possible. To make a success of the digital transition, countries should promote adoption of new technologies and double down on skills training and education. We want to play our part here. One example of how we contribute is our Digitalakademin program in Sweden: so far, we’ve trained more than 20,000 people at small and medium-sized businesses in digital skills. And together with the Swedish National Employment Agency we’ve developed training to help unemployed people get the skills necessary for the jobs of the future.

As Erik Sandström from the National Employment Agency stressed at our event in Stockholm, it “all starts with digital competence—if you’re lacking in digital competence you will overestimate the risks and underestimate the opportunities.” That sentiment was echoed in a keynote by Ylva Johansson, the Swedish Minister for Employment and Integration: “Why do we have an attitude where unions, employees are positively accepting ongoing changes? Because we’ve been able to protect people and to present new opportunities through reskilling.”

For our event in The Hague we partnered with Dutch company Randstad to discuss the same topic of future of work. Their CEO, Jacques van den Broek, struck an optimistic tone: “The digital transformation is an opportunity, not a threat,” he said. “The lesson we’ve learned is that whilst some jobs disappear, tech creates jobs. The longer you wait to embrace that change, the longer it takes to be able to compete.”


The coming changes will likely affect a wide range of tasks and jobs. “In Denmark, we discussed the destruction of jobs,” Thomas Søby from the Danish Steelworkers Union said. “New ones are created,” he added. “But some people will lose their jobs and feel left behind, and as a society we need to take care of those people.”


Those new jobs aren’t simply replacements—they’re roles we don’t have yet. “In a few years something else will be hot,” said Aart-Jan de Geus of Bertelsmann Stiftung, a German private foundation which looks at managing future challenges. He stressed that fears about job losses shouldn’t be overstated, especially as consumer demand and spending won’t go away. “The big mistake would be to try to protect jobs; we need to protect workers.”


In The Hague, Eric Schmidt, Alphabet’s executive chairman, ended on a positive note, saying that anxiety about change was understandable but that society can make sure the digital transition includes everyone. “Incumbents resist change. This is not new and in fact we have seen it throughout every stage of history,” he said. “But if history has taught us anything, it is that when disruptors and pioneers are right, society always recalibrates.”

Updating our Transparency Report and electronic privacy laws

Today, we are releasing the latest version of our Transparency Report concerning government requests for user data. This includes government requests for user data in criminal cases, as well as national security matters under U.S. law. Google fought for the right to publish this information in court and before Congress, and we continue to believe that this type of transparency can inform the broader debate about the nature and scope of government surveillance laws and programs.


In the first half of 2017, worldwide, we received 48,941 government requests that relate to 83,345 accounts. You can see more detailed figures, including a country-by-country breakdown of requests, here. We’ve also posted updated figures for the number of users/accounts impacted by Foreign Intelligence Surveillance Act (FISA) requests for content in previous reporting periods. While the total number of FISA content requests was reported accurately, we inadvertently under-reported the user/account figures in some reporting periods and over-reported the user/account figures in the second half of 2010. The corrected figures are in the latest report and reflected on our visible changes page.


Updating Electronic Privacy Laws


We are publishing the latest update to our Transparency Report as the U.S. Congress embarks upon an important debate concerning the nature and scope of key FISA provisions. Section 702 of the FISA Amendments Act of 2008 expires at the end of 2017. This is the section of FISA that authorizes the U.S. government to compel service providers like Google to disclose user data (including communications content) about non-U.S. persons in order to acquire “foreign intelligence information.”


Earlier this year, we expressed support for specific reforms to Section 702. We continue to believe that Congress can enact reforms to Section 702 in a way that enhances privacy protection for internet users while protecting national security. Independent bodies have concluded that Section 702 is valuable and effective in protecting national security and producing useful foreign intelligence. These assessments, however, do not preclude reforms that improve privacy protections for U.S. and non-U.S. persons and that do not disturb the core purposes of Section 702.


Government access laws are due for a fundamental realignment and update in light of the proliferation of technology, the very real security threats to people, and the expectations of privacy that Internet users have in their communications. Our General Counsel, Kent Walker, delivered a speech earlier this year calling for a new framework to address cross-border law enforcement requests. Updates to the Electronic Communications Privacy Act (ECPA) will be necessary to create a legal framework that addresses both law enforcement and civil liberties concerns.


The recent introduction of the International Communications Privacy Act (ICPA) in the Senate and the House is a significant step in the right direction, and we applaud Senators Hatch, Coons, and Heller and Representatives Collins, Jeffries, Issa, and DelBene for their leadership on this important bill. ECPA should also be updated to enable countries that commit to baseline privacy, due process, and human rights principles to make direct requests to U.S. providers. Providing a pathway for such countries to obtain electronic evidence directly from service providers in other jurisdictions will remove incentives for the unilateral, extraterritorial assertion of a country’s laws, data localization proposals, aggressive expansion of government access authorities, and dangerous investigative techniques. These measures ultimately weaken privacy, due process, and human rights standards.


We look forward to continuing the constructive discussion of these issues.


Working together to combat terrorists online

Editor’s note: This is a revised and abbreviated version of a speech Kent Walker delivered today at the United Nations in New York City, on behalf of the members of the Global Internet Forum to Counter Terrorism.

The Global Internet Forum to Counter Terrorism is a group of four technology companies—Facebook, Microsoft, Twitter, and YouTube—that are committed to working together and with governments and civil society to address the problem of online terrorist content.

For our companies, terrorism isn’t just a business concern or a technical challenge. These are deeply personal threats. We are citizens of London, Paris, Jakarta, and New York. And in the wake of each terrorist attack we too frantically check in on our families and co-workers to make sure they are safe. We’ve all had to do this far too often.

The products that our companies build lower barriers to innovation and empower billions of people around the world. But we recognize that the internet and other tools have also been abused by terrorists in their efforts to recruit, fundraise, and organize. And we are committed to doing everything in our power to ensure that our platforms aren't used to distribute terrorist material.

The Forum’s efforts are focused on three areas: leveraging technology, conducting research on patterns of radicalization and misuse of online platforms, and sharing best practices to accelerate our joint efforts against dangerous radicalization. Let me say more about each pillar.

First, when it comes to technology, you should know that our companies are putting our best talent and technology against the task of getting terrorist content off our services. There is no silver bullet when it comes to finding and removing this content, but we’re getting much better.

One early success in collaboration has been our “hash sharing” database, which allows a company that discovers terrorist content on one of their sites to create a digital fingerprint and share it with the other companies in the coalition, who can then more easily detect and review similar content for removal.  

We have to deal with these problems at tremendous scale. The haystacks are unimaginably large and the needles are both very small and constantly changing. People upload over 400 hours of content to YouTube every minute. Our software engineers have spent years developing technology that can spot certain telltale cues and markers. In recent months we have more than doubled the number of videos we've removed for violent extremism and have located these videos twice as fast. And what’s more, 75 percent of the violent extremism videos we’ve removed in recent months were found using technology before they received a single human flag.

These efforts are working. Between August 2015 and June 2017, Twitter suspended more than 935,000 accounts for the promotion of terrorism. During the first half of 2017, over 95 percent of the accounts it removed were detected using its in-house technology. Facebook is using new advances in artificial intelligence to root out "terrorist clusters" by mapping out the pages, posts, and profiles with terrorist material and then shutting them down.

Despite this recent progress, machines are simply not at the stage where they can replace human judgment. For example, portions of a terrorist video in a news broadcast might be entirely legitimate, but a computer program will have difficulty distinguishing documentary coverage from incitement.  

The Forum’s second pillar is focused on conducting and sharing research about how terrorists use the internet to influence their audiences so that we can stay one step ahead.

Today, the members of the Forum are pleased to announce that we are making a multi-million dollar commitment to support research on terrorist abuse of the internet and how governments, tech companies, and civil society can fight back against online radicalization.

The Forum has also set a goal of working with 50 smaller tech companies to help them better tackle terrorist content on their platforms. On Monday, we hosted dozens of companies for a workshop with our partners under the UN Counter-Terrorism Committee Executive Directorate. There will be a workshop in Brussels in December and another in Indonesia in the coming months. And we are also working to expand the hash-sharing database to smaller companies.

The Forum’s final pillar is working together to find powerful messages and avenues to reach out to those at greatest risk of radicalization.

Members of the forum are doing a better job of sharing breakthroughs with each other. One success we’ve seen is with the Redirect Method developed at Alphabet’s Jigsaw group. Redirect uses targeted advertising to reach people searching for terrorist content and presents videos that undermine extremist recruiting efforts. During a recent eight-week study more than 300,000 users clicked on our targeted ads and watched more than 500,000 minutes of video. This past April, Microsoft started a similar program on Bing. And Jigsaw and Bing are now exploring a partnership to share best practices and expertise.

At the same time, we’re elevating the voices that are most credible in speaking out against terrorism, hate, and violence. YouTube’s Creators for Change program highlights online stars taking a stand against xenophobia and extremism. Facebook’s P2P program has brought together more than 5,000 students from 68 countries to create campaigns to combat hate speech. And together the companies have participated in hundreds of meetings and trainings to counter violent extremism, including events in Beirut, Bosnia, and Brussels and summits at the White House, here at the United Nations, London, and Sydney, to empower credible non-governmental voices against violent extremism.

There is no magic computer program that will eliminate online terrorist content, but we are committed to working with everyone in this room as we continue to ramp up our own efforts to stop terrorists’ abuse of our services. This forum is an important step in the right direction. We look forward to working with national and local governments, and civil society, to prevent extremist ideology from spreading in communities and online.

Google’s fight against human trafficking

Google has made it a priority to tackle the heinous crime of sex trafficking. I know, because I’ve worked on this from the day I joined in 2012. We have hired and funded advocates in this area. We have developed and built extensive technology to connect victims with the resources they need. And we have helped pioneer the use of technologies that identify trafficking networks to make it easier and quicker for law enforcement to arrest these abusers. You can read about these efforts here. We’ve supported over 40 bills on human trafficking. And we are determined to do more to stop this evil, including support for tougher legislation. 

There is currently a debate over a proposed bill to combat sex trafficking by amending section 230 of the Communications Decency Act. While we agree with the intentions of the bill, we are concerned that it erodes the “good samaritan” protection and would actually hinder the fight against sex trafficking. While large companies are more likely to continue their proactive enforcement efforts and can afford to fight lawsuits, if smaller platforms are made liable for “knowledge” of human trafficking occurring on their platforms, there is a risk that some will seek to avoid that “knowledge”; they will simply stop looking for it. This would be a disaster. We think it’s much better to foster an environment in which all technology companies can continue to clean their platforms and support effective tools for law enforcement and advocacy organizations to find and disrupt these networks. We’ve met with the sponsors of the particular bill and provided alternatives that will encourage this environment, and we’ll continue to seek a constructive approach to advance a shared goal.

We’re not alone in this view. Organizations as broad and diverse as Engine Advocacy, PEN America, Charles Koch Institute, Heritage Action, ACLU, U.S. Chamber Technology Engagement Center, Business Software Alliance, Internet Commerce Coalition, Internet Association (whose members include Microsoft, Twitter, Facebook, Amazon, Snap, Match.com, Pinterest, etc.), TechFreedom, Medium, GitHub, Reddit, Wikimedia, National Venture Capital Association and many others have raised concerns about the bill. We—and many others—stand ready to work with Congress on changes to the bill, and on other legislation and measures to fight human trafficking and protect and support victims and survivors.

A lot of the discussion around this issue focuses on the role of a website called Backpage.com. I want to make our position on this clear. Google believes that Backpage.com can and should be held accountable for its crimes. We strongly applaud the work of the Senate Permanent Subcommittee on Investigations in exposing Backpage's intentional promotion of child sex trafficking through ads. Based on those findings, Google believes that Backpage.com should be criminally prosecuted by the US Department of Justice for facilitating child sex trafficking, something it can do today without the need to amend any laws. And years before the Senate's investigation and report, we prohibited Backpage from advertising on Google, and we have criticized Backpage publicly.

I understand that when important legislation is being discussed, public debate is robust. That’s how it should be. But on the crucial issue of sex trafficking, we’ve been a deeply committed partner in the fight. Let’s not let a genuine disagreement over the likely unintended impact of a particular piece of legislation obscure that fact.
