Responding to the European Commission’s AI white paper

In January, our CEO Sundar Pichai visited Brussels to talk about artificial intelligence and how Google could help people and businesses succeed in the digital age through partnership. Much has changed since then due to COVID-19, but one thing hasn’t—our commitment to the potential of partnership with Europe on AI, especially to tackle the pandemic and help people and the economy recover. 

As part of that effort, earlier today we filed our response to the European Commission’s Consultation on Artificial Intelligence, giving our feedback on the Commission’s initial proposal for how to regulate and accelerate the adoption of AI.

Excellence, skills, trust

Our filing applauds the Commission’s focus on building out the European “ecosystem of excellence.” European universities already boast renowned leaders in dozens of areas of AI research—Google partners with some of them via our machine learning research hubs in Zurich, Amsterdam, Berlin, Paris and London—and many of their students go on to make important contributions to European businesses.  

We support the Commission’s plans to help businesses develop the AI skills they need to thrive in the new digital economy. Next month, we’ll contribute to those efforts by extending our machine learning check-up tool to 11 European countries, helping small businesses implement AI and grow. Google Cloud already works closely with scores of businesses across Europe to help them innovate using AI.

We also support the Commission’s goal of building a framework for AI innovation that will create trust and guide ethical development and use of this widely applicable technology. We appreciate the Commission's proportionate, risk-based approach. It’s important that AI applications in sensitive fields—such as medicine or transportation—are held to the appropriate standards. 

Based on our experience working with AI, we also offered a couple of suggestions for making future regulation more effective. We want to be a helpful and engaged partner to policymakers, and we have provided more details in our formal response to the consultation.

Definition of high-risk AI applications

AI has a broad range of current and future applications, including some that involve significant benefits and risks. We think any future regulation would benefit from a more carefully nuanced definition of “high-risk” applications of AI. We agree that some uses warrant extra scrutiny and safeguards to address genuine and complex challenges around safety, fairness, explainability, accountability, and human interactions.

Assessment of AI applications

When thinking about how to assess high-risk AI applications, it's important to strike a balance. AI won’t always be perfect, but it has great potential to improve on the performance of existing systems and processes. The development process for AI must give people confidence that the systems they’re using are reliable and safe. That’s especially true for applications like new medical diagnostic techniques, which could allow skilled medical practitioners to offer more accurate diagnoses, earlier interventions, and better patient outcomes. At the same time, the requirements need to be proportionate to the risk, and shouldn’t unduly limit innovation, adoption, and impact.

This is not an easy needle to thread. The Commission’s proposal suggests “ex ante” assessment of AI applications (i.e., upfront assessment, based on forecasted rather than actual use cases). Our contribution recommends expanding established due diligence and regulatory review processes to include the assessment of AI applications. This would avoid unnecessary duplication of effort and likely speed up implementation.

For the (probably) rare instances when high-risk applications of AI are not obviously covered by existing regulations, we would encourage clear guidance on the “due diligence” criteria companies should use in their development processes. This would enable robust upfront self-assessment and documentation of any risks and their mitigations, and could also include further scrutiny after launch.

This approach would give European citizens confidence about the trustworthiness of AI applications, while also fostering innovation across the region. And it would encourage companies—especially smaller ones—to launch a range of valuable new services. 

Principles and process

Responsible development of AI presents new challenges and critical questions for all of us. In 2018 we published our own AI Principles to help guide our ethical development and use of AI, and also established internal review processes to help us avoid bias, test rigorously for safety, and design with privacy top of mind. Our principles also specify areas where we will not design or deploy AI, such as to support mass surveillance or violate human rights. Look out for an update on our work around these principles in the coming weeks.

AI is an important part of Google’s business and our aspirations for the future. We share a common goal with policymakers—a desire to build trust in AI through responsible innovation and thoughtful regulation, so that European citizens can safely enjoy the full social and economic benefits of AI. We hope that our contribution to the consultation is useful, and we look forward to participating in the discussion in coming months.

The case for open innovation

Software programs work better when they work together. Open software interfaces let smartphone apps and other services connect across devices and operating systems. And interoperability—the ability of different software systems to exchange information—lets people mix and match great features, and helps developers create new products that work across platforms. The result? Consumers get more choices for how they use software tools; developers and startups can challenge bigger incumbents; and businesses can move data from one platform to another without missing a beat. 


This kind of open and collaborative innovation, from peer-reviewed scientific papers to open-source software, has been key to America’s achievements in science and technology.


That’s why today we filed our opening Supreme Court brief in Oracle’s lawsuit against us. We’re asking the Court to reaffirm the importance of the software interoperability that has allowed millions of developers to write millions of applications that work on billions of devices. As Microsoft said in an earlier filing in this case: "Consumers ... expect to be able to take a photo on their Apple phone, save it onto Google’s cloud servers, and edit it on their Surface tablets." 


The Court will review whether copyright should extend to nuts-and-bolts software interfaces, and if so, whether it can be fair to use those interfaces to create new technologies, as the jury in this case found. Software interfaces are the access points that allow computer programs to connect to each other, like plugs and sockets. Imagine a world in which every time you went to a different building, you needed a different plug to fit the proprietary socket, and no one was allowed to create adapters.
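
To make the plugs-and-sockets analogy concrete, here is a minimal sketch in Java, the language at issue in the case. All names below are invented for illustration—this is not code from Android or the Java platform—but it shows the distinction the case turns on: the interface declarations are the “socket” that callers depend on, while the implementations behind them can differ freely.

```java
// The interface is the "socket shape": a declaration callers can rely on.
interface PhotoStore {
    void save(String photoId);  // declaration only; no implementation here
}

// Two independent implementations can honor the same interface...
class CloudPhotoStore implements PhotoStore {
    public void save(String photoId) {
        System.out.println("Uploading " + photoId + " to a cloud server");
    }
}

class LocalPhotoStore implements PhotoStore {
    public void save(String photoId) {
        System.out.println("Writing " + photoId + " to local disk");
    }
}

// ...so a program written against the interface works with either, unchanged.
public class InteropDemo {
    static void backUp(PhotoStore store) {
        store.save("IMG_0042.jpg");  // the caller knows only the interface
    }

    public static void main(String[] args) {
        backUp(new CloudPhotoStore());
        backUp(new LocalPhotoStore());
    }
}
```

Because the calling code depends only on the declarations, a new platform that reimplements the same interface can run existing programs without copying anyone’s implementation—which is why the question of whether those declaration lines can themselves be copyrighted matters so much for interoperability.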


This case will make a difference for everyone who touches technology—from startups to major tech platforms, software developers to product manufacturers, businesses to consumers—and we’re pleased that many leading representatives of those groups will be filing their own briefs to support our position.


Open interfaces between programs are the building blocks of many of the services and products we use today, as well as of technologies we haven’t yet imagined. An Oracle win would upend the way the technology industry has always approached the important issue of software interfaces. It would for the first time grant copyright owners a monopoly power to stymie the creation of new implementations and applications. And it would make it harder and costlier for developers and startups to create more products for people to use.


We welcome the opportunity to appear before the Supreme Court this spring to argue for software interoperability that has promoted the progress of science and useful arts—the core purpose of American copyright law.

Supporting election integrity through greater advertising transparency

Last year, Google committed to make political advertising more transparent. This week, we’re rolling out new policies for U.S. election ads across our platforms as we work to meet those commitments.

As a first step, we’ll now require additional verification for anyone who wants to purchase an election ad on Google in the U.S., and require advertisers to confirm that they are U.S. citizens or lawful permanent residents, as required by law. That means advertisers will have to provide a government-issued ID and other key information. To help people better understand who is paying for an election ad, we’re also requiring that each ad incorporate a clear disclosure of who is paying for it.

There's more to come. This summer, we’ll also release a new Transparency Report specifically focused on election ads. This report will describe who is buying election-related ads on our platforms and how much money is being spent. We’re also building a searchable library for election ads, where anyone can find election ads purchased on Google and see who paid for them.

As we learn from these changes and our continued engagement with leaders and experts in the field, we’ll work to improve the transparency of political issue ads and expand our coverage to a wider range of elections.

Our work on elections goes far beyond improving policies for advertising. We’re investing heavily in keeping our own platforms secure and working with campaigns, election officials, journalists, and others to help ensure the security of the online platforms that they depend on. In addition to the industry-leading protections in our consumer products, we’ve developed a range of Protect Your Election tools with Alphabet’s Jigsaw that are specifically tailored for people at particularly high risk of online attacks.

Yesterday, we announced improvements to one such product. Google’s Advanced Protection Program, our strongest level of account security for those who face an increased risk of sophisticated phishing attacks, now supports Apple’s native applications on iOS devices, including Apple Mail, Calendar and Contacts. We expect this will help more campaigns and officials who are often the targets of such attacks.

We are also working across the industry and beyond to strengthen protections around elections. We’ve partnered with the National Cyber Security Alliance and the Digital Democracy Project at the Belfer Center at Harvard Kennedy School to fund security training programs for elected officials, campaigns, and staff members. We are also supporting the Harvard Kennedy School’s Shorenstein Center’s “Disinfo Lab,” which will employ journalists using computational tools to monitor misinformation in the run-up to and during elections.

For over a decade we’ve built products that provide information about elections around the world, to help voters make decisions on the leadership of their communities, their cities, their states and their countries. We are continuing that work through our efforts to increase election advertising transparency, to improve online security for campaigns and candidates, and to help combat misinformation. Stay tuned for more announcements in the coming months.

Searching for new solutions to the evolving jobs market

We’ve all seen lots of articles about the future of work in today’s rapidly changing economy. Too often, the loudest voices propose just one of two visions for the future. Either globalization and technology will eliminate quality jobs, or we'll adapt to change just like we always have.


Google may be built on code, but we don't believe the future is binary. What lies ahead is hard to predict, and the most likely scenario for the future of work is a new sort of hybrid—with technology both transforming and creating jobs and new models of employment. But we’re confident that, working together, we can shape a labor market where everyone has access to opportunity.


Last year, we launched Grow with Google, an initiative that aims to help everyone across America access the best of Google’s training and tools to grow their skills, careers, and businesses. Google Hire helps employers find great employees. And Google for Jobs helps job seekers find new opportunities.


But making a difference requires more than just one company. Today, as part of our commitment to jobs and opportunity, Walmart and Google are making $5 million in grants to three organizations testing solutions for reskilling the American workforce and matching skills to roles.


  • Learning throughout life: The Drucker Institute is partnering with the Mayor of South Bend, Indiana, to bring together the city’s educational and workforce resources so that everyone has access to skill-building throughout their careers. This “City of Lifelong Learning” will serve as a national model for communities looking to make learning available throughout life.
  • Improving matching between skills and roles: Opportunity@Work is launching the techhire.careers platform, a new tool that helps underserved groups validate their skills for employers and connect to opportunities. This inclusive hiring marketplace helps job seekers and entry-level workers connect to training and jobs that make the best use of their skills, and helps companies consider and hire nontraditional talent.
  • Backing social innovators with new skilling and job matching ideas: MIT’s Initiative on the Digital Economy is holding the Inclusive Innovation Challenge, a competition for social innovators to use technology to reinvent the future of work. Through this tournament, the IDE will seek out and fund social innovators experimenting with new ways of helping people develop the skills they need for the digital economy and connect to job opportunities.

These grants are part of Google.org’s Work Initiative, a search for new solutions to prepare people for the changing nature of work. Last year, we committed $50 million to nonprofits experimenting with new ideas in skill-building, job matching, job quality, and social protections. In response to an open call for proposals, we received hundreds of ideas from across the U.S. In addition to our joint funding with Walmart, today we’re announcing four more grantees:


  • Assessing and credentialing soft skills: Southern New Hampshire University is developing the Authentic Assessment Platform (AAP), which assesses in-demand soft skills. Results will feed into a job placement process for young job seekers, and SNHU will award an official badge to those who complete the assessment.
  • Training workers for the gig economy: Samaschool is developing a new training program, with both in-person and online components, that helps independent workers learn the basics of finding freelance work, building their careers, managing contracts and taxes, and more.
  • Helping communities adjust to workforce transitions: The Just Transition Fund is working with communities in coal country to develop a blueprint for coal-affected regions undergoing workforce transitions, helping them prepare effectively for jobs in emerging sectors.
  • Aiding employers in clearly signaling their needs: The U.S. Chamber of Commerce Foundation is developing new, open resources to help employers better convey their needs. These tools will include new standards for job descriptions, a digital library of open-sourced competency and credential resources, and a repository of job descriptions for benchmarking.

Through these new grants, we aim to back leading social innovators thinking about how work can help more people access not just income, but also purpose and meaning. Over the next several months, we’ll be announcing more grantees and, most importantly, sharing what Google and all our grantees are learning through these efforts.

Defending access to lawful information at Europe’s highest court

Under the right to be forgotten, Europeans can ask for information about themselves to be removed from search results for their name if it is outdated or irrelevant. From the outset, we have publicly stated our concerns about the ruling, but we have still worked hard to comply—and to do so conscientiously and in consultation with Data Protection Authorities. To date, we’ve handled requests to delist nearly 2 million search results in Europe, removing more than 800,000 of them. We have also taken great care not to erase results that are clearly in the public interest, as the European Court of Justice directed. Most Data Protection Authorities have concluded that this approach strikes the right balance.


But two right to be forgotten cases now in front of the European Court of Justice threaten that balance.


In the first case, four individuals—whom we can’t name—present an apparently simple argument: European law protects sensitive personal data; sensitive personal data includes information about your political beliefs or your criminal record; so all mentions of criminality or political affiliation should automatically be purged from search results, without any consideration of the public interest.


If the Court accepted this argument, it would give carte blanche to people who might wish to use privacy laws to hide information of public interest—like a politician’s political views, or a public figure’s criminal record. This would effectively erase the public’s right to know important information about people who represent them in society or provide them services.


In the second case, the Court must decide whether Google should enforce the right to be forgotten not just in Europe, but in every country around the world. We—and a wide range of human rights and media organizations, and others, like Wikimedia—believe that this runs contrary to the basic principles of international law: no one country should be able to impose its rules on the citizens of another country, especially when it comes to linking to lawful content. Adopting such a rule would encourage other countries, including less democratic regimes, to try to impose their values on citizens in the rest of the world.


We’re speaking out because restricting access to lawful and valuable information is contrary to our mission as a company and keeps us from delivering the comprehensive search service that people expect of us.


But the threat is much greater than this. These cases represent a serious assault on the public’s right to access lawful information.


We will argue in court for a reasonable interpretation of the right to be forgotten and for the ability of countries around the world to set their own laws, not have those of others imposed on them. Until November 20, European countries and institutions have the chance to make their views known to the Court. And we encourage everyone who cares about public access to information to stand up and fight to preserve it.

Security and disinformation in the U.S. 2016 election

We’ve seen many types of efforts to abuse Google’s services over the years. And, like other internet platforms, we have found some evidence of efforts to misuse our platforms during the 2016 U.S. election by actors linked to the Internet Research Agency in Russia. 

Preventing the misuse of our platforms is something that we take very seriously; it’s a major focus for our teams. We’re committed to finding a way to stop this type of abuse, and to working closely with governments, law enforcement, other companies, and leading NGOs to promote electoral integrity and user security, and combat misinformation. 

We have been conducting a thorough investigation across our products related to the 2016 U.S. election, drawing on the work of our information security team, our teams’ research into misinformation campaigns, and leads provided by other companies. Today, we are sharing results from that investigation. While we have found only limited activity on our services, we will continue working to prevent all of it, because no amount of interference is acceptable.

We will be launching several new initiatives to provide more transparency and enhance security, which we also detail in these information sheets: what we found, steps against phishing and hacking, and our work going forward.

Our work doesn’t stop here, and we’ll continue to investigate as new information comes to light. Improving transparency is a good start, but we must also address new and evolving threat vectors for misinformation and attacks on future elections. We will continue to do our best to help people find valuable and useful information, an essential foundation for an informed citizenry and a robust democratic process.
