
2018 Google North America Public Policy Fellowship now accepting applications

Applications are now open for the 2018 North America Google Public Policy Fellowship, a paid fellowship that connects students interested in emerging technology policy issues with leading nonprofits, think tanks and advocacy groups in Washington, DC, and California. This year’s fellows will have the opportunity to work at a diverse group of organizations at the forefront of addressing some of today’s most challenging tech policy questions. Whether working on issues at the intersection of accessibility and technology or researching the future of work at a preeminent think tank, students will gain valuable hands-on experience tackling critical tech policy issues throughout the summer.


The application period opens today for the North America region, and all applications must be received by midnight ET on Tuesday, March 20. This year’s program will run from June 5–August 11, with regular programming throughout the summer. More specific information, including a list of this year’s hosts, can be found on our site.


More fellowship opportunities in Asia, Africa, and Europe will be coming soon. You can learn about the program, application process and host organizations on the Google Public Policy Fellowship website.

Updating our “right to be forgotten” Transparency Report

In May 2014, in a landmark ruling, the European Court of Justice established the “right to be forgotten,” or more accurately, the “right to delist,” allowing Europeans to ask search engines to delist information about themselves from search results. In deciding what to delist, search engines like Google must consider if the information in question is “inaccurate, inadequate, irrelevant or excessive”—and whether there is a public interest in the information remaining available in search results.


Understanding how we make these types of decisions—and how people are using new rights like those granted by the European Court—is important. Since 2014, we’ve provided information about “right to be forgotten” delisting requests in our Transparency Report, including the number of URLs submitted to us, the number of URLs delisted and not delisted, and anonymized examples of some of the requests we have received.

New data in the Transparency Report


Today, we’re expanding the scope of our transparency reporting about the “right to be forgotten” and adding new data going back to January 2016, when our reviewers started manually annotating each URL submitted to us with additional information, including the categories below (a brief code sketch of this annotation scheme follows the list):


  • Requesters: We show a breakdown of the requests made by private individuals vs. non-private individuals—e.g., government officials or corporate entities.

  • Content of the request: We classify the content that the individual has asked us to delist into a set of categories: personal information, professional information, crime, and name not found, meaning that we were not able to find the individual’s name on the page.

  • Content of the site: When we evaluate a URL for potential delisting, we classify the website that hosts the page as a directory site, news site, social media, or other.

  • Content delisting rate: This is the rate at which we delist content by category on a quarterly basis.
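
To make the shape of this data concrete, here is a minimal Python sketch of how annotated URLs could be aggregated into the quarterly delisting rate described above. The category names, fields, and function are illustrative assumptions, not Google’s internal annotation schema.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

# Hypothetical categories modeled on the report's description; the actual
# taxonomy and field names used by Google's reviewers are not public.
class RequestContent(Enum):
    PERSONAL_INFO = "personal information"
    PROFESSIONAL_INFO = "professional information"
    CRIME = "crime"
    NAME_NOT_FOUND = "name not found"

@dataclass
class AnnotatedUrl:
    quarter: str              # e.g., "2016-Q1"
    category: RequestContent
    delisted: bool            # outcome of the manual review

def delisting_rate_by_category(urls):
    """Return the fraction of URLs delisted per (quarter, category) pair."""
    totals, delisted = Counter(), Counter()
    for url in urls:
        key = (url.quarter, url.category)
        totals[key] += 1
        if url.delisted:
            delisted[key] += 1
    return {key: delisted[key] / totals[key] for key in totals}
```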


Looking back: analyzing three years of delisting requests


In addition to updating the Transparency Report, we’re also providing a snapshot of our efforts to process these requests over the last three years.

[Infographic: a snapshot of three years of “right to be forgotten” delisting requests]

We’re also releasing the draft of a new research paper called Three Years of the Right to be Forgotten, which has been submitted to the Privacy Enhancing Technologies Symposium for peer review. This paper uses our manual reviewers’ annotations to provide a comprehensive analysis of the ways Europeans are using the “right to be forgotten.”


We hope the new data we’ve added to the Transparency Report and our research paper will help inform the ongoing discussion about the interplay between the right to privacy and the right to access lawful information online.

Extending domain opt-out and AdWords API tools

In 2012, Google made voluntary commitments to the Federal Trade Commission (FTC) that are set to expire on December 27th, 2017. At that time, we agreed to remove certain clauses from our AdWords API Terms and Conditions. We also agreed to provide a mechanism for websites to opt out of the display of their crawled content on certain Google web pages linked to google.com in the United States on a domain-by-domain basis.  

We believe that these policies provide continued flexibility for developers and websites, and we will continue our current practices regarding the AdWords API Terms and Conditions and the domain-by-domain opt-out after the voluntary commitments expire. Additional information can be found on our site.


New government removals and National Security Letter data

Since 2010, we’ve shared regular updates in our Transparency Report about the effects of government and corporate policies on users’ data and content. Our goal has always been to make this information as accessible as possible, and to continue expanding this report with new and relevant data.

Today, we’re announcing three updates to our Transparency Report. We’re expanding the National Security Letters (NSL) section, releasing new data on requests from governments to remove content from services like YouTube and Blogger, and making it easier for people to share select data and charts from the Transparency Report.

National Security Letters

Following the 2015 USA Freedom Act, the FBI started lifting indefinite gag restrictions—prohibitions against publicly sharing details—on particular NSLs. Last year, we began publishing NSLs we have received where, either through litigation or legislation, we have been freed of these nondisclosure obligations. We have added a new subsection to the NSL page of the Transparency Report where we publish these letters. We have also added new letters to the collection and plan to update this section regularly.

Government requests for content removals

As usage of our services increases, we remain committed to keeping internet users safe, working with law enforcement to remove illegal content, and complying with local laws. During this latest reporting period, we’ve continued to expand our work with local law enforcement. From January to June 2017, we received 19,176 requests from governments around the world to remove 76,714 pieces of content. This was a 20 percent increase in removal requests over the second half of 2016.

Making our Transparency Report easier to use

Finally, we’ve implemented a new “deep linking” feature that makes it easier to bookmark and share specific charts in the Transparency Report. Sorting data by country, time period, and other categories now generates a distinct web address in your browser’s address bar. This allows you to create a link that shows, for example, just government removals data in France, by Google product, for the first half of 2015. We hope this will make it easier for citizens to find and reference information in the report, and for journalists and researchers to highlight the specific details they are examining.
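
As a rough illustration of how deep links like these can work, the Python sketch below serializes a chart’s filter selections into URL query parameters. The base path and parameter names are assumptions made for the example; the Transparency Report defines its own URL format.

```python
from urllib.parse import urlencode

# Hypothetical base path; the report's real URL structure may differ.
BASE = "https://transparencyreport.google.com/government-removals/overview"

def deep_link(country: str, product: str, period: str) -> str:
    """Build a shareable URL that pins a specific chart view."""
    query = urlencode({"country": country, "product": product, "period": period})
    return f"{BASE}?{query}"

# For example: government removals in France, by Google product, first half of 2015.
print(deep_link("FR", "Google Search", "2015-H1"))
```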

By continuing to make updates like these, we aim to spark new conversations about transparency, accountability and the role of governments and companies in the flow of information online.


Update on the Global Internet Forum to Counter Terrorism

At last year’s EU Internet Forum, Facebook, Microsoft, Twitter and YouTube declared our joint determination to curb the spread of terrorist content online. Over the past year, we have formalized this partnership with the launch of the Global Internet Forum to Counter Terrorism (GIFCT). We hosted our first meeting in August, where representatives from the tech industry, government and non-governmental organizations came together to focus on three key areas: technological approaches, knowledge sharing, and research. Since then, we have participated in a Heads of State meeting at the UN General Assembly in September and the G7 Interior Ministers meeting in October, and we look forward to hosting a GIFCT event and attending the EU Internet Forum in Brussels on December 6.

The GIFCT is committed to working on technological solutions to help thwart terrorists’ use of our services, and has built on the groundwork laid by the EU Internet Forum, particularly through a shared industry hash database, where companies can create “digital fingerprints” of terrorist content and share them with participating companies.

The database, which we committed to building last December and which became operational last spring, now contains more than 40,000 hashes. It allows member companies to use those hashes to identify and remove matching content — videos and images — that violates our respective policies or, in some cases, to block terrorist content before it is even posted.
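
To illustrate the mechanics of hash sharing, here is a minimal Python sketch in which one member company contributes a fingerprint and another checks an upload against the shared set. It uses a plain SHA-256 digest purely for illustration; the consortium’s actual fingerprinting scheme (likely perceptual hashes that survive re-encoding, unlike a cryptographic hash) is not described in this post.

```python
import hashlib

# Stand-in for the shared industry database of "digital fingerprints."
shared_hashes = set()

def fingerprint(content: bytes) -> str:
    """Illustrative fingerprint: a SHA-256 digest of the raw bytes."""
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """A member company flags terrorist content and shares its hash."""
    shared_hashes.add(fingerprint(content))

def matches_shared_database(upload: bytes) -> bool:
    """Check an upload against the shared hashes, e.g., before it is posted."""
    return fingerprint(upload) in shared_hashes
```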

We are pleased that Ask.fm, Cloudinary, Instagram, Justpaste.it, LinkedIn, Oath, and Snap have also recently joined this hash-sharing consortium, and we will continue working to add more companies throughout 2018.

In order to disrupt the distribution of terrorist content across the internet, companies have invested in collaborating and sharing expertise with one another. GIFCT's knowledge-sharing work has grown quickly in large measure because companies recognize that in countering terrorism online we face many of the same challenges.

Although our companies have been sharing best practices around counterterrorism for several years, in recent months GIFCT has provided a more formal structure to accelerate and strengthen this work. In collaboration with the Tech Against Terrorism initiative — which recently launched a Knowledge Sharing Platform with the support of GIFCT and the UN Counter-Terrorism Committee Executive Directorate — we have held workshops for smaller tech companies to share best practices on how to disrupt the spread of violent extremist content online.

Our initial goal for 2017 was to work with 50 smaller tech companies to share best practices on how to disrupt the spread of violent extremist material. We have exceeded that goal, engaging with 68 companies over the past several months through workshops in San Francisco, New York, and Jakarta, with another workshop planned for next week in Brussels on the sidelines of the EU Internet Forum.

We recognize that our work is far from done, but we are confident that we are heading in the right direction. We will continue to provide updates as we forge new partnerships and develop new technology in the face of this global challenge.


Defending access to lawful information at Europe’s highest court

Under the right to be forgotten, Europeans can ask for information about themselves to be removed from search results for their name if it is outdated or irrelevant. From the outset, we have publicly stated our concerns about the ruling, but we have still worked hard to comply—and to do so conscientiously and in consultation with Data Protection Authorities. To date, we’ve handled requests to delist nearly 2 million search results in Europe, removing more than 800,000 of them. We have also taken great care not to erase results that are clearly in the public interest, as the European Court of Justice directed. Most Data Protection Authorities have concluded that this approach strikes the right balance.


But two right to be forgotten cases now in front of the European Court of Justice threaten that balance.


In the first case, four individuals—whom we can’t name—present an apparently simple argument: European law protects sensitive personal data; sensitive personal data includes information about your political beliefs or your criminal record; so all mentions of criminality or political affiliation should automatically be purged from search results, without any consideration of public interest.


If the Court accepted this argument, it would give carte blanche to people who might wish to use privacy laws to hide information of public interest—like a politician’s political views, or a public figure’s criminal record. This would effectively erase the public’s right to know important information about people who represent them in society or provide them services.


In the second case, the Court must decide whether Google should enforce the right to be forgotten not just in Europe, but in every country around the world. We—and a wide range of human rights and media organizations, and others, like Wikimedia—believe that this runs contrary to the basic principles of international law: no one country should be able to impose its rules on the citizens of another country, especially when it comes to linking to lawful content. Adopting such a rule would encourage other countries, including less democratic regimes, to try to impose their values on citizens in the rest of the world.


We’re speaking out because restricting access to lawful and valuable information is contrary to our mission as a company and keeps us from delivering the comprehensive search service that people expect of us.


But the threat is much greater than this. These cases represent a serious assault on the public’s right to access lawful information.


We will argue in court for a reasonable interpretation of the right to be forgotten, and for the ability of countries around the world to set their own laws rather than have the laws of others imposed on them. Until November 20, European countries and institutions have the chance to make their views known to the Court. And we encourage everyone who cares about public access to information to stand up and fight to preserve it.