Author Archives: Kent Walker

What people are saying about Australia’s proposed News Media Bargaining Code

Microsoft’s take on Australia’s proposed law is unsurprising — of course they'd be eager to impose an unworkable levy on a rival and increase their market share. 

But in its eagerness, Microsoft makes numerous claims that have been thoroughly and independently debunked.

We have long been committed to supporting high-quality content on the web. Our issue is absolutely not with paying news organizations — we’ve done this for many years. Today Google News Showcase is paying publishers and supporting local journalism in Australia and more than a dozen other countries. Through these partnerships, we are paying significant amounts to support news organizations large and small — with more to come.

But we and others have pointed to significant concerns with the proposed Australian law, while proposing reasonable amendments to make it work. The issue isn’t whether companies pay to support quality content; the issue is how. The law would unfairly require unknown payments simply for showing links to news businesses, while giving a favored few advance notice of changes to search ranking. Those aren’t workable solutions; they would fundamentally change the Internet, hurting the people and businesses who use it. But there are better ways, and we’re committed to making progress.

Don't take our (or Microsoft’s) word for it. Here's what others are saying, from a former Australian Prime Minister, to the Business Council of Australia, to the inventor of the web.

A quote from the Business Council of Australia
A quote from the FT editorial board
A quote from Sir Tim Berners-Lee

Our continuing support for Dreamers

For generations, talented immigrants have helped America drive technological breakthroughs and scientific advancements that have created millions of new jobs in new industries, enriching our culture and our economy.

That’s why we have long supported the Deferred Action for Childhood Arrivals (DACA) program. Established in 2012, DACA allows “Dreamers” who came to the United States as children to request deferred action and work authorization for a renewable two-year period. Google proudly employs Dreamers who work to build the products you use every day. And we’ve defended their right to stay in the United States by joining amici briefs in court supporting DACA.

Unfortunately, DACA’s immediate future is uncertain. At the end of 2020, a U.S. District Court indicated that it could soon issue a ruling against DACA, barring new applications and ultimately renewals as well, and leaving countless Dreamers in limbo.

We believe it’s important that Dreamers have a chance to apply for protection under the program so that they can safeguard their status in the United States. But in the middle of a global pandemic that has led to economic hardship, especially for the many immigrants playing essential roles on the front lines, there is concern that many Dreamers cannot afford to pay the application fee.

We want to do our part, so Google.org is making a $250,000 grant to United We Dream to cover the DACA application fees of over 500 Dreamers. This grant builds on over $35 million in support that Google.org and Google employees have contributed over the years to support immigrants and refugees worldwide, including more than $1 million from Googlers and Google.org specifically supporting DACA and domestic immigration efforts through employee giving campaigns led by HOLA (Google’s Latino Employee Resource Group).

We know this is only a temporary solution. We need legislation that not only protects Dreamers, but also delivers other much-needed reforms. We will support efforts by the new Congress and incoming Administration to pass comprehensive immigration reform that improves employment-based visa programs that enhance American competitiveness, gives greater assurance to immigrant workers and employers, and promotes better and more humane immigration processing and border security practices.

Dreamers and other talented immigrants enrich our communities, contribute to our economy, and exemplify the innovative spirit of America. We’re proud to support them.

The opportunity for “Digital Sprinters”

People around the world are confronting once-in-a-generation challenges: a global pandemic, an economic downturn of unprecedented proportions, rising demands for equity, and dramatic strains on financial resources. 

The rain from this perfect storm is falling hardest on emerging markets. In many cases, they’re struggling to manage the pandemic with fewer public health resources and also suffer from greater economic vulnerabilities. Yet emerging markets also have some of the most vibrant economies and greatest entrepreneurial energy in the world. With the right policy frameworks, they can become ideal launching pads for future innovation. This challenging moment may be exactly the right time for these economies to pursue ambitious digital transformation, using their immediate recovery efforts to develop sustainable economic gains. 

Nearly a third of U.S. small business owners are using digital tools to save their businesses during the COVID-19 crisis. In emerging markets, too, digital technologies are often providing a lifeline: a plus-size clothing designer in Manaus, Brazil, a musical instrument maker in Istanbul, Turkey, and the owner of a guest house in Durban, South Africa have all been able to survive by using digital technologies and online commerce.

Becoming “Digital Sprinters”

We call these emerging economies “Digital Sprinters” because, by becoming more digital, they have the potential to sprint ahead toward economic development. Based on our experiences, we believe governments and the private sector should focus on four key areas, as detailed in a report we're releasing today:

Four key areas to focus on regarding "Digital Sprinters"
  • Physical capital: this is about digital connectivity and infrastructure. It’s not just about investment but also how infrastructure is managed.
  • Human capital: countries need a comprehensive approach to worker training, economic security, entrepreneurship, and combating discrimination.
  • Technology: increasing the use of data, artificial intelligence, and cloud computing, which empower the growth of next-generation technologies and unlock future growth. This means new opportunities alongside new questions about how best to harness these technologies.
  • Competitiveness: policies that promote competitive and open markets, interoperable regulatory standards, and tax regimes that are predictable and based on international standards.

Our recommendations reflect just one perspective on public policy frameworks for digital transformation. We hope that the report will help advance conversations about digitally-driven growth among governments, civil society, international organizations, academic institutions and entrepreneurs.

Potential economic gains

The economic potential from digital transformation is huge. A new study finds that, by 2030, digital transformation could generate as much as $3.4 trillion of economic value in these Digital Sprinter markets. At a country level, this translates to 25 percent of GDP in Brazil, 31 percent in Saudi Arabia and 33 percent in Nigeria, to name a few examples.


Emerging markets face a watershed moment today. As COVID-19 is disrupting world order and breaking supply chains, emerging markets have an opportunity to transform and emerge as stronger players. We hope these reports published today can play a part in helping decision-makers take advantage of these opportunities.

A deeply flawed lawsuit that would do nothing to help consumers

Google Search has put the world’s information at the fingertips of over a billion people. Our engineers work to offer the best search engine possible, constantly improving and fine-tuning it. We think that’s why a wide cross-section of Americans value and often love our free products. 

Today’s lawsuit by the Department of Justice is deeply flawed. People use Google because they choose to, not because they're forced to, or because they can't find alternatives. 

This lawsuit would do nothing to help consumers. To the contrary, it would artificially prop up lower-quality search alternatives, raise phone prices, and make it harder for people to get the search services they want to use.

The Department's dubious complaint

Let's talk specifics. The Department's complaint relies on dubious antitrust arguments to criticize our efforts to make Google Search easily available to people. 

Yes, like countless other businesses, we pay to promote our services, just like a cereal brand might pay a supermarket to stock its products at the end of a row or on a shelf at eye level. For digital services, when you first buy a device, it has a kind of home screen “eye level shelf.” On mobile, that shelf is controlled by Apple, as well as companies like AT&T, Verizon, Samsung and LG. On desktop computers, that shelf space is overwhelmingly controlled by Microsoft. 

So, we negotiate agreements with many of those companies for eye-level shelf space. But let's be clear—our competitors are readily available too, if you want to use them. 

Our agreements with Apple and other device makers and carriers are no different from the agreements that many other companies have traditionally used to distribute software. Other search engines, including Microsoft’s Bing, compete with us for these agreements. And our agreements have passed repeated antitrust reviews. 

Here's more detail:

Apple devices

Apple features Google Search in its Safari browser because they say Google is “the best.” This arrangement is not exclusive—our competitors Bing and Yahoo! also pay to be featured prominently, and other rival services appear as well.
Bing and Yahoo! pay Apple to be featured in Safari. iPhone 11 and MacBook Pro showing iOS 14 and macOS Catalina with callouts showing Yahoo!, Bing and Google icons

Changing your search engine in Safari is easy. On desktop, one click and you’re presented with a range of options.

Setting your search engine on Safari desktop. Laptop showing a dropdown menu in browser with options to select Google, Yahoo, Bing and DuckDuckGo

Apple’s iPhone makes it simple to change your settings and use alternative search engines in Safari—and it’s even easier in iOS 14, where you can add widgets from different providers or swipe on the home screen to search.

Microsoft

Google doesn't come preloaded on Windows devices. Microsoft preloads its Edge browser on Windows devices, where Bing is the default search engine.

Microsoft Edge is preloaded on Windows devices and Bing is the default search engine. HP 14" laptop with Windows 10 showing Bing preloaded.

Android

On Android devices, we have promotional agreements with carriers and device makers to feature Google services. These agreements enable us to distribute Android for free, so they directly reduce the price that people pay for phones. But even with these agreements, carriers and device makers often preload numerous competing apps and app stores.

Rival apps and app stores are often preloaded onto Android devices. Samsung Galaxy A51 running Android 10 with a callout box showing Samsung Bixby Assistant, Samsung Galaxy Store, Samsung Browser, Facebook, and Microsoft Outlook Email

Look how easy it is to add a different search app or widget on Android.

Downloading a search engine on Android. Samsung Galaxy A51 running Android 10 showing Bing being downloaded
Setting a search widget on Android. Samsung Galaxy A51 running Android 10 showing Bing widget being set up

The bigger point the lawsuit misses 

The bigger point is that people don’t use Google because they have to, they use it because they choose to. This isn’t the dial-up 1990s, when changing services was slow and difficult, and often required you to buy and install software with a CD-ROM. Today, you can easily download your choice of apps or change your default settings in a matter of seconds—faster than you can walk to another aisle in the grocery store. 

This lawsuit claims that Americans aren’t sophisticated enough to do this. But we know that’s not true. And you know it too: people downloaded a record 204 billion apps in 2019. Many of the world's most popular apps aren't preloaded—think of Spotify, Instagram, Snapchat, Amazon and Facebook.

The data shows that people choose their preferred service: take Mozilla’s Firefox browser as an example. It’s funded almost entirely by revenue from search promotional agreements. When Yahoo! paid to be the default search engine in Firefox, most Americans promptly switched their search engine to their first choice—Google. (Mozilla later chose Google to be its default search provider, citing an “effort to provide quality search” and its “focus on user experience.”)

It’s also trivially easy to change your search engine in our browser, Chrome.

Setting your search engine on Chrome mobile. Samsung Galaxy A51 running Android 10 showing someone changing search engine from Google to Bing
Setting your search engine on Chrome desktop. Chrome browser on desktop showing someone changing search engine to Bing


How people access information today

There's another area in which the lawsuit is wrong about how Americans use the Internet. It claims that we compete only with other general search engines. But that’s demonstrably wrong. People find information in lots of ways: They look for news on Twitter, flights on Kayak and Expedia, restaurants on OpenTable, recommendations on Instagram and Pinterest. And when searching to buy something, around 60 percent of Americans start on Amazon. Every day, Americans choose to use all these services and thousands more.

Next steps

We understand that with our success comes scrutiny, but we stand by our position. American antitrust law is designed to promote innovation and help consumers, not tilt the playing field in favor of particular competitors or make it harder for people to get the services they want. We’re confident that a court will conclude that this suit doesn’t square with either the facts or the law. 

In the meantime, we remain absolutely focused on delivering the free services that help Americans every day. Because that’s what matters most.

You can learn more about our approach to competition at g.co/competition.


A more responsible, innovative and helpful internet in Europe

Over the last 20 years, digital tools have played an increasingly important role in our everyday lives, in societal debate and in the economy. In 2020, many of us have found digital tools to be a real lifeline. We've used them to connect with loved ones and teach our children during lockdown. Governments have used them to share vital information with citizens. And businesses across Europe are using them to reach customers and recover more quickly and sustainably. As we look to the future, it's important that regulation keeps pace with change, and Google supports Europe's effort to create a more responsible, innovative and helpful internet for everyone.

That's why we are submitting our response today to the consultation for the European Digital Services Act (DSA), drawing on our 20+ years of experience in building technology that both helps people and creates greater economic opportunity. Well-designed regulation gives consumers confidence that their interests are being protected as they shop, search and socialize online. It also protects businesses from opaque or unfair practices.

Our response encourages European policymakers to build on the success of the e-Commerce Directive and focus on three key areas: 


  • A more responsible internet: Introducing clearer rules for notifying platforms of illegal content while protecting fundamental rights of expression and access to information 
  • A more innovative internet: Encouraging economic growth and innovation by enabling Europeans to build the next generation of apps, businesses and services, and exporting European creativity and culture around the world
  • A more helpful internet: Competition regulation that supports product innovation, helps people manage their data and provides businesses with the tools to grow

A more responsible internet

Because of our commitment to safety, we invest heavily in technology and people to combat illegal content, and we welcome an updated legal framework. We would encourage legislators to provide greater clarity on the rules, roles and responsibilities of online platforms.

The e-Commerce Directive set vital ground rules for conduct and responsibility online, which helped online innovation thrive. Whether an individual is claiming defamation, a studio is claiming that a video infringes copyright or a government is seeking to remove a terrorist video, it’s essential to give the online platform clear notice about the specific piece of content at issue. The platform then has a responsibility to take appropriate action on that content. This is especially important given the significant differences in what is considered illegal content across EU Member States.

We are continually seeking to improve our technical systems and processes for identifying illegal content. While breakthroughs in machine learning and other technology have significantly enhanced our ability to detect bad content, such technology is still unable to reliably understand context, which is often critical in determining whether content is legal: distinguishing, for example, gratuitously violent content from footage a human rights organization has posted to document abuses. Mandating the use of such technology would lead to overblocking of Europeans’ speech and access to information. This is why platforms should be encouraged to keep investing in these innovations while retaining the invaluable nuance and judgment that come with human input.

Google's products are designed to encourage people to share their views safely and respectfully, and have been a force for creativity, learning and expression. In order to ensure that fundamental rights are respected, it's important for the DSA to focus on capturing illegal content, so lawful speech isn't caught in the net. However, this should not prevent further actions on lawful-but-harmful content, such as cyber-bullying, through self- and co-regulatory initiatives, such as the EU Code of Practice on Disinformation and the EU Code of Conduct on Hate Speech, both of which Google joined from the start. Google also invests in easy-to-use reporting processes and clear guidelines to help ensure a positive online experience.

We are committed to providing greater transparency for our users and governments so that they better understand the content they are seeing and how to notify us of concerns. The DSA should support these kinds of constructive transparency measures while ensuring that platforms can continue to protect user privacy, ensure commercially sensitive information is not revealed and prevent bad actors from gaming the system. Google has long been a leader in transparency, including disclosing data on content moderation, content removal requests and blocking bad ads.  


A more innovative internet

The e-Commerce Directive, which the DSA will update, has guided Internet services, users and European society through 20 years of economic growth fueled by innovation, including entirely new industries ranging from app developers to YouTube creators. The next wave of online innovation will play a vital role in helping people, governments and businesses overcome the many challenges (medical, societal and economic) that come with a global pandemic.

To foster innovation, the DSA should reflect the wide range of services offered by the tech industry. No two services are the same, and the new act should be rooted in objectives and principles that can be applied, as appropriate, across this broad, diverse ecosystem. This will ensure that everyone (platforms, regulators, people and businesses) is responsible for the part they play.


A more helpful internet

People want to save time and get things done when they are online. Our testing has consistently shown that people want quick access to information, so over the years we’ve developed new ways to organize and display results. For example, when you search online for a restaurant, you can quickly access directions at the same time because a map is integrated into Google’s Search results pages, saving you the time and effort of a second search through a map app or website. Integrations also help small businesses be found more easily and provide relevant information to their customers, such as delivery, curbside pickup or takeaway options during lockdown periods; they can also help people in times of emergency, as with Android’s Emergency Location feature. New rules should encourage new and improved features and products that help European consumers get things done and access information quickly and easily.


European startups and entrepreneurs also need online tools to help grow their businesses more easily and at a lower cost. For example, online ads help businesses of all sizes find new customers around the world, while cloud computing helps reduce operating costs and increase productivity. As the Commission updates its regulations, it should ensure new rules don't add undue cost and burden for European businesses in ways that make it harder to scale quickly and offer their services across the EU and around the world.

We agree that competition between digital platforms is strengthened by measures that allow people to move between platforms without losing access to their data, which also makes it easier for new players to enter or expand in digital markets. Google offers a wide range of tools that allow people to be in control of their online experience, such as Google “My Account”, which helps users choose the privacy settings that are right for them, or Google Takeout, which allows users to export their data. Similarly, providing access to aggregated datasets could benefit R&D in a range of industries while safeguarding user data privacy. As new rules are being evaluated, the question is not whether data mobility or data access should be facilitated, but how to achieve their benefits without sacrificing product quality or innovation incentives. 


Modernizing regulation

Creating a more responsible, innovative and helpful internet is a societal challenge, and we acknowledge the need for companies, governments and civil society to work together towards reaching our shared goals. That’s why we support modernizing rules for the digital age. 

In our response today, we commit to working toward a balanced regulatory framework that can adapt to future technological innovations, so we can build on the momentum and benefits that online services have provided European citizens and businesses over the past two decades.

In support of interoperability

Open software interfaces have been an integral part of the innovation economy. They enable the interoperability that has always let software developers build on each other’s work. And the interoperability of open software interfaces is what lets different technologies like apps work together on a variety of devices and platforms: That’s why you can take a photo on an Apple phone, save it onto Google’s cloud servers, and edit it on a Surface tablet. Our legal case with Oracle turns on our belief that interoperability has been good for innovation, good for developers, and good for consumers.


The Supreme Court has heard from 250 leading computer scientists, businesses, and software developers who share this conviction. The Court also recently asked for additional information about how courts should respect a jury’s decision that a given use (like the reuse of software interfaces) constitutes allowable fair use. 


Today, we filed a supplemental brief explaining how the jury in our case heard from over a dozen witnesses, reviewed hundreds of documents, and then unanimously agreed with our position. America’s Constitution enshrines the right to a jury trial. The Supreme Court has recognized the important role of a jury in deciding nuanced, fact-specific questions like the ones in this case.


A decision in Oracle’s favor would limit consumers’ freedom to use technologies on a range of devices. It would upend the way developers have always used software interfaces, locking them into existing platforms and giving copyright owners new power to control the building blocks of new technologies. And it would erode the traditional role of the jury in evaluating all the facts relevant to a decision.  


We look forward to making this case to the Court on October 7.


A digital jobs program to help America’s economic recovery

Technology has been a lifeline to help many small businesses during the COVID-19 crisis. And online tools can help people get new skills and find good-paying jobs. Nearly two-thirds of all new jobs created since 2010 require either high-level or medium-level digital skills. This presents a challenge for many job seekers, as well as to America’s long-term economic security. People need good jobs, and the broader economy needs their energy and skills to support our future growth. 


College degrees are out of reach for many Americans, and you shouldn’t need a college diploma to have economic security. We need new, accessible job-training solutions—from enhanced vocational programs to online education—to help America recover and rebuild.


Our Grow with Google initiative helps people get the skills they need to get a job or grow their business. Today we’re announcing a new suite of Google Career Certificates that will help Americans gain qualifications in high-paying, high-growth job fields—no college degree required. We will fund 100,000 need-based scholarships, and at Google we will consider these new career certificates the equivalent of a four-year degree for related roles. We’re also committing $10 million in job-training Google.org grants for communities across America, working with partners like YWCA, NPower and JFF.


Here are more details on today’s announcements: 


  • Three new Google Career Certificates in the high-paying, high-growth career fields of Data Analytics, Project Management, and User Experience (UX) Design. Like our IT Support and Automation in Python Certificates, these new career programs are designed and taught by Google employees who work in these fields. The programs equip participants with the essential skills they need to get a job. No degree or prior experience is required to take the courses. 

  • 100,000 need-based scholarships, funded by Google, to complete any of these career certificates. 

  • An expansion of our IT Certificate Employer Consortium, which currently includes over 50 employers like Walmart, Hulu, Sprint and of course Google.

  • Hundreds of apprenticeship opportunities at Google for people completing these career certificate programs to provide real on-the-job training.

  • Availability of Google Career Certificates in Career and Technical Education high schools throughout America, starting with our IT Support Certificate this fall. These certificates build on our established partnership with more than 100 community colleges. 

  • $10 million in Google.org grants to the YWCA, NPower and JFF to help workforce boards and nonprofits improve their job training programs and increase access to digital skills for women, veterans, and underserved Americans. As part of our Future of Work initiative, since 2017 Google.org has provided over $200 million in grants to nonprofits working to promote economic opportunity. 


The new Google Career Certificates build on our existing programs to create pathways into IT Support careers for people without college degrees. Launched in 2018, the Google IT Certificate program has become the single most popular certificate on Coursera, and thousands of people have found new jobs and increased their earnings after completing the course. Take Yves Cooper, who enrolled in the program through our Grow with Google Partner, Merit America, while working as a van driver. Within five days of completing the program, he was offered a role as an IT helpdesk technician at a nonprofit in his hometown of Washington, D.C. We’re especially proud that the Google IT Certificate provides a pathway to jobs for groups that are underrepresented in the tech industry: 58 percent of IT Certificate learners identify as Black, Latino, female or veteran. 


Yves Cooper was offered a role as an IT helpdesk technician at a nonprofit after completing the Google IT Certificate program.

As America rebuilds our local communities, it’s important to start with the people who give them life. Since 2017, we’ve helped 5 million Americans learn digital skills through Grow with Google, and we promise to do our part to help even more people prepare for jobs, creating economic opportunity for everyone.

An update on our work on AI and responsible innovation

AI is a powerful tool that will have a significant impact on society for many years to come, from improving sustainability around the globe to advancing the accuracy of disease screenings. As a leader in AI, we’ve always prioritized the importance of understanding its societal implications and developing it in a way that gets it right for everyone. 


That’s why we first published our AI Principles two years ago and why we continue to provide regular updates on our work. As our CEO Sundar Pichai said in January, developing AI responsibly and with social benefit in mind can help avoid significant challenges and increase the potential to improve billions of lives. 

The world has changed a lot since January, and in many ways our Principles have become even more important to the work of our researchers and product teams. As we develop AI we are committed to testing safety, measuring social benefits, and building strong privacy protections into products. Our Principles give us a clear framework for the kinds of AI applications we will not design or deploy, like those that violate human rights or enable surveillance that violates international norms. For example, we were the first major company to have decided, several years ago, not to make general-purpose facial recognition commercially available.

Over the last 12 months, we’ve shared our point of view on how to develop AI responsibly—see our 2019 annual report and our recent submission to the European Commission’s Consultation on Artificial Intelligence. This year, we’ve also expanded our internal education programs, applied our principles to our tools and research, continued to refine our comprehensive review process, and engaged with external stakeholders around the world, while identifying emerging trends and patterns in AI. 

Building on previous AI Principles updates we shared here on the Keyword in 2018 and 2019, here’s our latest overview of what we’ve learned, and how we’re applying these learnings in practice.

Internal education

In addition to the initial Tech Ethics training that more than 800 Googlers have taken since its launch last year, this year we developed a new training on spotting AI Principles issues. We piloted the course with more than 2,000 Googlers, and it is now available as an online self-study course to all Googlers across the company. The course coaches employees on asking critical questions to spot potential ethical issues, such as whether an AI application might lead to economic or educational exclusion, or cause physical, psychological, social or environmental harm. We recently released a version of this training as a mandatory course for customer-facing Cloud teams, and 5,000 Cloud employees have already taken it.

Tools and research

Our researchers are working on computer science and technology not just for today, but for tomorrow as well. They continue to play a leading role in the field, publishing more than 200 academic papers and articles in the last year on new methods for putting our principles into practice. These publications address technical approaches to fairness, safety, privacy, and accountability to people, including effective techniques for improving fairness in machine learning at scale, a method for incorporating ethical principles into a machine-learned model, and design principles for interpretable machine learning systems.
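To make one such fairness notion concrete (this is an illustrative textbook metric, not a method from the publications cited above), demographic parity compares a classifier’s positive-prediction rates across groups; a minimal sketch:

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rate between the most- and
    least-favored groups; 0.0 means perfect demographic parity."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy data: group "a" receives positive predictions 2/3 of the time,
# group "b" only 1/3 of the time.
preds = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
```

In practice, metrics like this are computed over evaluation slices and monitored at scale; the research referenced above addresses the harder problems of choosing among competing fairness criteria and mitigating gaps during training.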

Over the last year, a team of Google researchers and collaborators published an academic paper proposing a framework called Model Cards that’s similar to a food nutrition label and designed to report an AI model’s intent of use, and its performance for people from a variety of backgrounds. We’ve applied this research by releasing Model Cards for Face Detection and Object Detection models used in Google Cloud’s Vision API product.
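For illustration, a model card can be thought of as structured data attached to a model. The sketch below uses hypothetical, simplified field names to convey the idea of a nutrition-label-style report; it is not Google’s actual Model Cards schema:

```python
from dataclasses import dataclass, field

@dataclass
class SlicePerformance:
    # Performance on one slice of the evaluation data,
    # e.g. a demographic group or an image condition.
    slice_name: str
    metric: str
    value: float

@dataclass
class ModelCard:
    # Hypothetical, simplified structure inspired by the Model Cards idea:
    # a concise summary of a model's intended use, known limitations,
    # and per-slice performance.
    model_name: str
    intended_use: str
    limitations: str
    performance: list = field(default_factory=list)

card = ModelCard(
    model_name="face-detector-v1",
    intended_use="Detect the presence and location of faces in photos.",
    limitations="Not intended for identity recognition.",
)
card.performance.append(SlicePerformance("low-light images", "recall", 0.88))
```

Reporting performance per slice, rather than a single aggregate number, is what lets users see how a model behaves for people from a variety of backgrounds.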

Our goal is for Google to be a helpful partner not only to researchers and developers who are building AI applications, but also to the billions of people who use them in everyday products. We’ve gone a step further, releasing 14 new tools that help explain how responsible AI works, from simple data visualizations on algorithmic bias for general audiences to Explainable AI dashboards and tool suites for enterprise users. You’ll find a number of these within our new Responsible AI with TensorFlow toolkit.

Review process 

As we’ve shared previously, Google has a central, dedicated team that reviews proposals for AI research and applications for alignment with our principles. Operationalizing the AI Principles is challenging work. Our review process is iterative, and we continue to refine and improve our assessments as advanced technologies emerge and evolve. The team also consults with internal domain experts in machine-learning fairness, security, privacy, human rights, and other areas. 

Whenever relevant, we conduct additional expert human rights assessments of new products in our review process, before launch. For example, we enlisted the nonprofit organization BSR (Business for Social Responsibility) to conduct a formal human rights assessment of the new Celebrity Recognition tool, offered within Google Cloud Vision and Video Intelligence products. BSR applied the UN’s Guiding Principles on Business and Human Rights as a framework to guide the product team to consider the product’s implications across people’s privacy and freedom of expression, as well as potential harms that could result, such as discrimination. This assessment informed not only the product’s design, but also the policies around its use. 

In addition, because any robust evaluation of AI needs to consider not just technical methods but also social context(s), we consult a wider spectrum of perspectives to inform our AI review process, including social scientists and Google’s employee resource groups.

As one example, consider how we’ve built upon learnings from a case we published in our last AI Principles update: the review of academic research on text-to-speech (TTS) technology. Since then, we have applied what we learned in that earlier review to establish a Google-wide approach to TTS. Google Cloud’s Text-to-Speech service, used in products such as Google Lens, puts this approach into practice.

Because TTS could be used across a variety of products, a group of senior Google technical and business leads was consulted. They considered the proposal against our AI Principles of being socially beneficial and accountable to people, as well as the need to incorporate privacy by design and to avoid technologies that cause or are likely to cause overall harm.

  • Reviewers identified the benefits of an improved user interface for various products, and significant accessibility benefits for people with hearing impairments. 

  • They considered the risks of voice mimicry and impersonation, media manipulation, and defamation.

  • They took into account how an AI model is used, and recognized the importance of adding layers of barriers for potential bad actors, to make harmful outcomes less likely.

  • They recommended on-device privacy and security precautions that serve as barriers to misuse, reducing the risk of overall harm from use of TTS technology for nefarious purposes.  

  • The reviewers recommended approving TTS technology for use in our products, but only with user consent and on-device privacy and security measures.

  • They did not approve open-sourcing of TTS models, due to the risk that someone might misuse them to build harmful deepfakes and distribute misinformation. 


External engagement

To increase the number and variety of outside perspectives, this year we launched the Equitable AI Research Roundtable, which brings together advocates for communities of people who are currently underrepresented in the technology industry, and who are most likely to be impacted by the consequences of AI and advanced technology. This group of community-based nonprofit leaders and academics meets with us quarterly to discuss AI ethics issues, and learnings from these discussions help shape our operational efforts and decision-making frameworks. 


Our global efforts this year included new programs to support non-technical audiences in their understanding of, and participation in, the creation of responsible AI systems, whether they are policymakers, first-time ML (machine learning) practitioners or domain experts. These included:

 

  • Partnering with Yielding Accomplished African Women to implement the first-ever Women in Machine Learning Conference in Africa. We built a network of 1,250 female machine learning engineers from six different African countries. Using the Google Cloud Platform, we trained and certified 100 women at the conference in Accra, Ghana. More than 30 universities and 50 companies and organizations were represented. The conference schedule included workshops on Qwiklabs, AutoML, TensorFlow, a human-centered approach to AI, mindfulness and #IamRemarkable.

  • Releasing, in partnership with the Ministry of Public Health in Thailand, the first study of its kind on how researchers apply nurses’ and patients’ input to make recommendations on future AI applications, based on how nurses deployed a new AI system to screen patients for diabetic retinopathy. 

  • Launching an ML workshop for policymakers featuring content and case studies covering the topics of Explainability, Fairness, Privacy, and Security. We’ve run this workshop, via Google Meet, with over 80 participants in the policy space with more workshops planned for the remainder of the year. 

  • Hosting the PAIR (People + AI Research) Symposium in London, which focused on participatory ML and marked PAIR’s expansion to the EMEA region. The event drew 160 attendees across academia, industry, engineering, and design, and featured cross-disciplinary discussions on human-centered AI and hands-on demos of ML Fairness and interpretability tools. 

We remain committed to external, cross-stakeholder collaboration. We continue to serve on the board and as a member of the Partnership on AI, a multi-stakeholder organization that studies and formulates best practices on AI technologies. As an example of our work together, the Partnership on AI is developing best practices that draw from our Model Cards proposal as a framework for accountability among its member organizations. 

Trends, technologies and patterns emerging in AI

We know no system, whether human or AI powered, will ever be perfect, so we don’t consider the task of improving it to ever be finished. We continue to identify emerging trends and challenges that surface in our AI Principles reviews. These prompt us to ask questions such as when and how to responsibly develop synthetic media, keep humans in an appropriate loop of AI decisions, launch products with strong fairness metrics, deploy affective technologies, and offer explanations on how AI works, within products themselves. 


As Sundar wrote in January, it’s crucial that companies like ours not only build promising new technologies, but also harness them for good—and make them available for everyone. This is why we believe regulation can offer helpful guidelines for AI innovation, and why we share our principled approach to applying AI. As we continue to responsibly develop and use AI to benefit people and society, we look forward to continuing to update you on specific actions we’re taking, and on our progress.

Responding to the European Commission’s AI white paper

In January, our CEO Sundar Pichai visited Brussels to talk about artificial intelligence and how Google could help people and businesses succeed in the digital age through partnership. Much has changed since then due to COVID-19, but one thing hasn’t—our commitment to the potential of partnership with Europe on AI, especially to tackle the pandemic and help people and the economy recover. 

As part of that effort, we earlier today filed our response to the European Commission’s Consultation on Artificial Intelligence, giving our feedback on the Commission’s initial proposal for how to regulate and accelerate the adoption of AI. 

Excellence, skills, trust

Our filing applauds the Commission’s focus on building out the European “ecosystem of excellence.” European universities already boast renowned leaders in dozens of areas of AI research—Google partners with some of them via our machine learning research hubs in Zurich, Amsterdam, Berlin, Paris and London—and many of their students go on to make important contributions to European businesses.  

We support the Commission’s plans to help businesses develop the AI skills they need to thrive in the new digital economy. Next month, we’ll contribute to those efforts by extending our machine learning check-up tool to 11 European countries to help small businesses implement AI and grow their businesses. Google Cloud already works closely with scores of businesses across Europe to help them innovate using AI.  

We also support the Commission’s goal of building a framework for AI innovation that will create trust and guide ethical development and use of this widely applicable technology. We appreciate the Commission's proportionate, risk-based approach. It’s important that AI applications in sensitive fields—such as medicine or transportation—are held to the appropriate standards. 

Based on our experience working with AI, we also offered a couple of suggestions for making future regulation more effective. We want to be a helpful and engaged partner to policymakers, and we have provided more details in our formal response to the consultation.

Definition of high-risk AI applications

AI has a broad range of current and future applications, including some that involve significant benefits and risks.  We think any future regulation would benefit from a more carefully nuanced definition of “high-risk” applications of AI. We agree that some uses warrant extra scrutiny and safeguards to address genuine and complex challenges around safety, fairness, explainability, accountability, and human interactions. 

Assessment of AI applications

When thinking about how to assess high-risk AI applications, it's important to strike a balance. While AI won’t always be perfect, it has great potential to help us improve over the performance of existing systems and processes. But the development process for AI must give people confidence that the AI system they’re using is reliable and safe. That’s especially true for applications like new medical diagnostic techniques, which potentially allow skilled medical practitioners to offer more accurate diagnoses, earlier interventions, and better patient outcomes. But the requirements need to be proportionate to the risk, and shouldn’t unduly limit innovation, adoption, and impact. 

This is not an easy needle to thread. The Commission’s proposal suggests “ex ante” assessment of AI applications (i.e., upfront assessment, based on forecasted rather than actual use cases). Our contribution recommends expanding established due diligence and regulatory review processes to include the assessment of AI applications. This would avoid unnecessary duplication of efforts and likely speed up implementation.

For the (probably) rare instances when high-risk applications of AI are not obviously covered by existing regulations, we would encourage clear guidance on the “due diligence” criteria companies should use in their development processes. This would enable robust upfront self-assessment and documentation of any risks and their mitigations, and could also include further scrutiny after launch.

This approach would give European citizens confidence about the trustworthiness of AI applications, while also fostering innovation across the region. And it would encourage companies—especially smaller ones—to launch a range of valuable new services. 

Principles and process

Responsible development of AI presents new challenges and critical questions for all of us. In 2018 we published our own AI Principles to help guide our ethical development and use of AI, and also established internal review processes to help us avoid bias, test rigorously for safety, and design with privacy top of mind. Our Principles also specify areas where we will not design or deploy AI, such as to support mass surveillance or violate human rights. Look out for an update on our work around these principles in the coming weeks. 

AI is an important part of Google’s business and our aspirations for the future. We share a common goal with policymakers—a desire to build trust in AI through responsible innovation and thoughtful regulation, so that European citizens can safely enjoy the full social and economic benefits of AI. We hope that our contribution to the consultation is useful, and we look forward to participating in the discussion in coming months.