Author Archives: Kent Walker

Supporting election integrity through greater advertising transparency

Last year, Google committed to make political advertising more transparent. This week, we’re rolling out new policies for U.S. election ads across our platforms as we work to meet those commitments.

As a first step, we’ll now require additional verification for anyone who wants to purchase an election ad on Google in the U.S., and advertisers will have to confirm that they are U.S. citizens or lawful permanent residents, as required by law. That means advertisers will have to provide a government-issued ID and other key information. To help people better understand who is paying for an election ad, we’re also requiring that each ad incorporate a clear disclosure of who is paying for it.

There’s more to come. This summer, we’ll also release a new Transparency Report focused specifically on election ads. The report will describe who is buying election-related ads on our platforms and how much money is being spent. We’re also building a searchable library for election ads, where anyone can see which election ads were purchased on Google and who paid for them.

As we learn from these changes and our continued engagement with leaders and experts in the field, we’ll work to improve transparency of political issue ads and expand our coverage to a wider range of elections.

Our work on elections goes far beyond improving policies for advertising. We’re investing heavily in keeping our own platforms secure and working with campaigns, elections officials, journalists, and others to help ensure the security of the online platforms that they depend on. In addition to the industry-leading protections in our consumer products, we’ve developed a range of Protect Your Election tools with Alphabet’s Jigsaw that are specifically tailored for people who are at particularly high risk of online attacks.

Yesterday, we announced improvements to one such product. Google’s Advanced Protection Program, our strongest level of account security for people at heightened risk of sophisticated phishing attacks, now supports Apple’s native applications on iOS devices, including Apple Mail, Calendar, and Contacts. We expect this will help more campaigns and officials, who are often the targets of such attacks.

We are also working across the industry and beyond to strengthen protections around elections. We’ve partnered with the National Cyber Security Alliance and the Defending Digital Democracy Project at Harvard Kennedy School’s Belfer Center to fund security training programs for elected officials, campaigns, and staff members. We are also supporting the “Disinfo Lab” at the Harvard Kennedy School’s Shorenstein Center, which will employ journalists to use computational tools to monitor misinformation in the run-up to and during elections.

For over a decade we’ve built products that provide information about elections around the world, to help voters make decisions on the leadership of their communities, their cities, their states and their countries. We are continuing that work through our efforts to increase election advertising transparency, to improve online security for campaigns and candidates, and to help combat misinformation.  Stay tuned for more announcements in the coming months.

Searching for new solutions to the evolving jobs market

We’ve all seen lots of articles about the future of work in today’s rapidly changing economy. Too often, the loudest voices propose just one of two visions for the future. Either globalization and technology will eliminate quality jobs, or we'll adapt to change just like we always have.


Google may be built on code, but we don't believe the future is binary. What lies ahead is hard to predict, and the most likely scenario for the future of work is a new sort of hybrid—with technology both transforming and creating jobs and new models of employment. But we’re confident that, working together, we can shape a labor market where everyone has access to opportunity.


Last year, we launched Grow with Google, an initiative that aims to help everyone across America access the best of Google’s training and tools to grow their skills, careers, and businesses. Google Hire helps employers find great employees. And Google for Jobs helps job seekers find new opportunities.


But making a difference requires more than just one company. Today, as part of our commitment to jobs and opportunity, Walmart and Google are making $5 million in grants to three organizations testing solutions for reskilling the American workforce and matching skills to roles.


  • Learning throughout life: The Drucker Institute is partnering with the Mayor of South Bend, Indiana, to bring together the city’s educational and workforce resources so that everyone has access to skill-building throughout their careers. This “City of Lifelong Learning” will serve as a national model for communities looking to make learning available throughout life.
  • Improving matching between skills and roles: Opportunity@Work is launching the techhire.careers platform, a new tool that helps underserved groups validate their skills for employers and connect to opportunities. This inclusive hiring marketplace helps job seekers and entry-level workers connect to trainings and jobs that make the best use of their skills, and helps companies consider and hire nontraditional talent.
  • Backing social innovators with new skilling and job-matching ideas: MIT’s Initiative on the Digital Economy is holding the Inclusive Innovation Challenge, which invites social innovators to use technology to reinvent the future of work. Through this tournament, the IDE will seek out and fund social innovators experimenting with new ways of helping people develop the skills they need for the digital economy and connect to job opportunities.

These grants are part of Google.org’s Work Initiative, a search for new solutions to prepare people for the changing nature of work. Last year, we committed $50 million to nonprofits experimenting with new ideas in skill-building, job matching, job quality, and social protections. In response to an open call for proposals, we received hundreds of ideas from across the U.S. In addition to our joint funding with Walmart, today we’re announcing four more grantees:


  • Assessing and credentialing soft skills: Southern New Hampshire University is developing the Authentic Assessment Platform (AAP), an assessment of in-demand soft skills. Results from this assessment will feed into a job placement process for young job seekers, and SNHU will award an official badge to those who complete it.
  • Training workers for the gig economy: Samaschool is developing a new training program, with both in-person and online components, that helps independent workers learn the basics of finding freelance work, building their careers, managing contracts and taxes, and more.
  • Helping communities adjust to workforce transitions: Just Transition Fund is working with communities in coal country to develop a blueprint for coal-affected communities undergoing workforce transitions, helping them to effectively prepare for jobs in emerging sectors.
  • Aiding employers in clearly signaling their needs: The U.S. Chamber of Commerce Foundation is developing new, open resources to help employers better convey their needs. These tools will include new standards for job descriptions, a digital library of open-sourced competency and credential resources, and a repository of job descriptions for benchmarking.

Through these new grants, we aim to back leading social innovators who are rethinking how work can give more people access not just to income, but also to purpose and meaning. Over the next several months, we’ll announce more grantees and, most importantly, share what Google and all our grantees are learning through these efforts.

Defending access to lawful information at Europe’s highest court

Under the right to be forgotten, Europeans can ask for information about themselves to be removed from search results for their name if it is outdated or irrelevant. From the outset, we have publicly stated our concerns about the ruling, but we have still worked hard to comply—and to do so conscientiously and in consultation with Data Protection Authorities. To date, we’ve handled requests to delist nearly 2 million search results in Europe, removing more than 800,000 of them. We have also taken great care not to erase results that are clearly in the public interest, as the European Court of Justice directed. Most Data Protection Authorities have concluded that this approach strikes the right balance.


But two right to be forgotten cases now in front of the European Court of Justice threaten that balance.


In the first case, four individuals—who we can’t name—present an apparently simple argument: European law protects sensitive personal data; sensitive personal data includes information about your political beliefs or your criminal record; so all mentions of criminality or political affiliation should automatically be purged from search results, without any consideration of public interest.


If the Court accepted this argument, it would give carte blanche to people who might wish to use privacy laws to hide information of public interest—like a politician’s political views, or a public figure’s criminal record. This would effectively erase the public’s right to know important information about people who represent them in society or provide them services.


In the second case, the Court must decide whether Google should enforce the right to be forgotten not just in Europe, but in every country around the world. We—and a wide range of human rights and media organizations, and others, like Wikimedia—believe that this runs contrary to the basic principles of international law: no one country should be able to impose its rules on the citizens of another country, especially when it comes to linking to lawful content. Adopting such a rule would encourage other countries, including less democratic regimes, to try to impose their values on citizens in the rest of the world.


We’re speaking out because restricting access to lawful and valuable information is contrary to our mission as a company and keeps us from delivering the comprehensive search service that people expect of us.


But the threat is much greater than this. These cases represent a serious assault on the public’s right to access lawful information.


We will argue in court for a reasonable interpretation of the right to be forgotten and for the ability of countries around the world to set their own laws, not have those of others imposed on them. Until November 20, European countries and institutions have the chance to make their views known to the Court. And we encourage everyone who cares about public access to information to stand up and fight to preserve it.

Security and disinformation in the U.S. 2016 election

We’ve seen many types of efforts to abuse Google’s services over the years. And, like other internet platforms, we have found some evidence of efforts to misuse our platforms during the 2016 U.S. election by actors linked to the Internet Research Agency in Russia. 

Preventing the misuse of our platforms is something that we take very seriously; it’s a major focus for our teams. We’re committed to finding a way to stop this type of abuse, and to working closely with governments, law enforcement, other companies, and leading NGOs to promote electoral integrity and user security, and combat misinformation. 

We have been conducting a thorough investigation related to the U.S. election across our products, drawing on the work of our information security team, our own research into misinformation campaigns, and leads provided by other companies. Today, we are sharing the results of that investigation. While we found only limited activity on our services, we will continue to work to prevent all of it, because no amount of interference is acceptable.

We will be launching several new initiatives to provide more transparency and enhance security, which we also detail in these information sheets: what we found, steps against phishing and hacking, and our work going forward.

Our work doesn’t stop here, and we’ll continue to investigate as new information comes to light. Improving transparency is a good start, but we must also address new and evolving threat vectors for misinformation and attacks on future elections. We will continue to do our best to help people find valuable and useful information, an essential foundation for an informed citizenry and a robust democratic process.

Working together to combat terrorists online

Editor’s note: This is a revised and abbreviated version of a speech Kent delivered today at the United Nations in New York City, on behalf of the members of the Global Internet Forum to Counter Terrorism.

The Global Internet Forum to Counter Terrorism is a group of four technology companies—Facebook, Microsoft, Twitter, and YouTube—that are committed to working together and with governments and civil society to address the problem of online terrorist content.

For our companies, terrorism isn’t just a business concern or a technical challenge; it is a deeply personal threat. We are citizens of London, Paris, Jakarta, and New York. In the wake of each terrorist attack, we, too, frantically check in on our families and co-workers to make sure they are safe. We’ve all had to do this far too often.

The products that our companies build lower barriers to innovation and empower billions of people around the world. But we recognize that the internet and other tools have also been abused by terrorists in their efforts to recruit, fundraise, and organize. And we are committed to doing everything in our power to ensure that our platforms aren't used to distribute terrorist material.

The Forum’s efforts are focused on three areas: leveraging technology, conducting research on patterns of radicalization and misuse of online platforms, and sharing best practices to accelerate our joint efforts against dangerous radicalization. Let me say more about each pillar.

First, when it comes to technology, you should know that our companies are putting our best talent and technology against the task of getting terrorist content off our services. There is no silver bullet when it comes to finding and removing this content, but we’re getting much better.

One early success in this collaboration has been our “hash sharing” database, which allows a company that discovers terrorist content on one of its sites to create a digital fingerprint of it and share that fingerprint with the other companies in the coalition, which can then more easily detect and review similar content for removal.
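The workflow above can be sketched in a few lines. This is a minimal illustration, not the Forum's actual system: production systems use perceptual hashes that survive re-encoding and cropping, whereas plain SHA-256 here only matches byte-identical files, and the shared "database" is just an in-memory set.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Create a digital fingerprint of a piece of content.

    SHA-256 is illustrative only; real hash-sharing systems use
    perceptual hashes that tolerate re-encoding.
    """
    return hashlib.sha256(content).hexdigest()

# Fingerprints contributed by coalition members (illustrative stand-in
# for the shared database).
shared_database: set[str] = set()

def report_content(content: bytes) -> str:
    """Company A discovers violating content and shares its fingerprint."""
    h = fingerprint(content)
    shared_database.add(h)
    return h

def should_review(content: bytes) -> bool:
    """Company B checks a new upload against the shared fingerprints."""
    return fingerprint(content) in shared_database

known_bad = b"example flagged video bytes"
report_content(known_bad)
print(should_review(known_bad))            # True: queue for human review
print(should_review(b"unrelated upload"))  # False: no match
```

The key design point is that only fingerprints cross company boundaries, never the underlying content, and a match triggers review rather than automatic removal.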

We have to deal with these problems at tremendous scale. The haystacks are unimaginably large and the needles are both very small and constantly changing. People upload over 400 hours of content to YouTube every minute. Our software engineers have spent years developing technology that can spot certain telltale cues and markers. In recent months we have more than doubled the number of videos we've removed for violent extremism and have located these videos twice as fast. And what’s more, 75 percent of the violent extremism videos we’ve removed in recent months were found using technology before they received a single human flag.

These efforts are working. Between August 2015 and June 2017, Twitter suspended more than 935,000 accounts for the promotion of terrorism. During the first half of 2017, over 95 percent of the accounts it removed were detected using its in-house technology. Facebook is using new advances in artificial intelligence to root out "terrorist clusters" by mapping out the pages, posts, and profiles with terrorist material and then shutting them down.

Despite this recent progress, machines are simply not at the stage where they can replace human judgment. For example, portions of a terrorist video in a news broadcast might be entirely legitimate, but a computer program will have difficulty distinguishing documentary coverage from incitement.  

The Forum’s second pillar is focused on conducting and sharing research about how terrorists use the internet to influence their audiences so that we can stay one step ahead.

Today, the members of the Forum are pleased to announce that we are making a multi-million dollar commitment to support research on terrorist abuse of the internet and how governments, tech companies, and civil society can fight back against online radicalization.

The Forum has also set a goal of working with 50 smaller tech companies to help them better tackle terrorist content on their platforms. On Monday, we hosted dozens of companies for a workshop with our partners under the UN Counter Terrorism Executive Directorate. There will be a workshop in Brussels in December and another in Indonesia in the coming months. And we are also working to expand the hash-sharing database to smaller companies.

The Forum’s final pillar is working together to find powerful messages and avenues to reach out to those at greatest risk of radicalization.

Members of the forum are doing a better job of sharing breakthroughs with each other. One success we’ve seen is with the Redirect Method developed at Alphabet’s Jigsaw group. Redirect uses targeted advertising to reach people searching for terrorist content and presents videos that undermine extremist recruiting efforts. During a recent eight-week study more than 300,000 users clicked on our targeted ads and watched more than 500,000 minutes of video. This past April, Microsoft started a similar program on Bing. And Jigsaw and Bing are now exploring a partnership to share best practices and expertise.
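The Redirect approach described above can be sketched as a simple query-matching step. This is a hypothetical illustration of the idea only: the phrase list and video IDs below are invented placeholders, and the real system relies on ad-platform keyword targeting rather than application code.

```python
# Placeholder targeting list and counter-narrative video IDs; the real
# Redirect Method's targeting criteria and content are not public here.
RISK_PHRASES = {"example recruitment phrase", "example propaganda title"}
COUNTER_VIDEOS = ["defector_testimony_01", "community_leader_02"]

def counter_ads_for(query: str) -> list[str]:
    """Return counter-narrative videos to surface as ads for a risky query."""
    q = query.lower()
    if any(phrase in q for phrase in RISK_PHRASES):
        return COUNTER_VIDEOS
    return []  # ordinary queries trigger no redirect campaign
```

The design choice worth noting is that nothing is blocked: the same search still runs, but targeted ads place credible counter-messaging in front of the at-risk audience.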

At the same time, we’re elevating the voices that are most credible in speaking out against terrorism, hate, and violence. YouTube’s Creators for Change program highlights online stars taking a stand against xenophobia and extremism. And Facebook’s P2P program has brought together more than 5,000 students from 68 countries to create campaigns to combat hate speech. Together, the companies have participated in hundreds of meetings and trainings to counter violent extremism, including events in Beirut, Bosnia, and Brussels, and summits at the White House, here at the United Nations, and in London and Sydney, to empower credible non-governmental voices against violent extremism.

There is no magic computer program that will eliminate online terrorist content, but we are committed to working with everyone in this room as we continue to ramp up our own efforts to stop terrorists’ abuse of our services. This forum is an important step in the right direction. We look forward to working with national and local governments, and civil society, to prevent extremist ideology from spreading in communities and online.

Supporting new ideas in the fight against hate

Addressing the threat posed by violence and hate is a critical challenge for us all. Google has taken steps to tackle violent extremist content online—putting our best talent and technology to the task, and partnering with law enforcement agencies, civil society groups, and the wider technology industry. We can’t do it alone, but we’re making progress.

Our efforts to disrupt terrorists’ ability to use the Internet focus on three areas: leveraging technology, conducting and sharing research, and sharing best practices and encouraging affirmative efforts against dangerous radicalization. Today we’re announcing a new effort to build on that third pillar. Over the last year we’ve made $2 million in grants to nonprofits around the world seeking to empower and amplify counter-extremist voices. Today we’re expanding that effort and launching a $5 million Google.org innovation fund to counter hate and extremism. Over the next two years, this funding will support technology-driven solutions, as well as grassroots efforts like community youth projects that help build communities and promote resistance to radicalization.

We’re making our first grant from the fund to the Institute for Strategic Dialogue (ISD), an expert counter-extremist organization in the U.K. ISD will use our $1.3 million grant to help leaders from the U.K.’s technology, academic, and charity sectors develop projects to counter extremism. This will be the largest project of its kind outside of government and aims to produce innovative, effective and data-driven solutions that can undermine and overcome radicalization propaganda. We’ll provide an update in the coming months with more information on how to apply.

By funding experts like ISD, we hope to support sustainable solutions to extremism both online and offline. We don’t have all the answers, but we’re committed to playing our part. We’re looking forward to helping bring new ideas and technologies to life.