Author Archives: Kent Walker

Making Open Source software safer and more secure

We welcomed the opportunity to participate in the White House Open Source Software Security Summit today, building on our work with the Administration to strengthen America’s collective cybersecurity through critical areas like open source software.

Industries and governments have been making strides to tackle the frequent security issues that plague legacy, proprietary software. The recent log4j open source software vulnerability shows that we need the same attention and commitment to safeguarding open source tools, which are just as critical.

Open source software code is available to the public, free for anyone to use, modify, or inspect. Because it is freely available, open source facilitates collaborative innovation and the development of new technologies to help solve shared problems. That’s why many aspects of critical infrastructure and national security systems incorporate it. But there’s no official resource allocation and few formal requirements or standards for maintaining the security of that critical code. In fact, most of the work to maintain and enhance the security of open source, including fixing known vulnerabilities, is done on an ad hoc, volunteer basis.

For too long, the software community has taken comfort in the assumption that open source software is generally secure due to its transparency and the assumption that “many eyes” were watching to detect and resolve problems. But in fact, while some projects do have many eyes on them, others have few or none at all.

At Google, we’ve been working to raise awareness of the state of open source security. We’ve invested millions in developing frameworks and new protective tools. We’ve also contributed financial resources to groups and individuals working on securing foundational open source projects like Linux. Just last year, as part of our $10 billion commitment to advancing cybersecurity, we pledged to expand the application of our Supply chain Levels for Software Artifacts (SLSA or “Salsa”) framework to protect key open source components. That includes $100 million to support independent organizations, like the Open Source Security Foundation (OpenSSF), that manage open source security priorities and help fix vulnerabilities.

But we know more work is needed across the ecosystem to create new models for maintaining and securing open source software. During today’s meeting, we shared a series of proposals for how to do this:

Identifying critical projects

We need a public-private partnership to identify a list of critical open source projects — with criticality determined based on the influence and importance of a project — to help prioritize and allocate resources for the most essential security assessments and improvements.

Longer term, we need new ways of identifying software that might pose a systemic risk — based on how it will be integrated into critical projects — so that we can anticipate the level of security required and provide appropriate resourcing.

Establishing security, maintenance & testing baselines

Growing reliance on open source means that it’s time for industry and government to come together to establish baseline standards for security, maintenance, provenance, and testing — to ensure national infrastructure and other important systems can rely on open source projects. These standards should be developed through a collaborative process, with an emphasis on frequent updates, continuous testing, and verified integrity.

Fortunately, the software community is off to a running start. Organizations like the OpenSSF are already working across industry to create these standards (including supporting efforts like our SLSA framework).

Increasing public and private support

Many leading companies and organizations don’t recognize how many parts of their critical infrastructure depend on open source. That’s why it’s essential that we see more public and private investment in keeping that ecosystem healthy and secure. In the discussion today, we proposed setting up an organization to serve as a marketplace for open source maintenance, matching volunteers from companies with the critical projects that most need support. Google stands ready to contribute resources to this effort.

Given the importance of digital infrastructure in our lives, it’s time to start thinking of it in the same way we do our physical infrastructure. Open source software is a connective tissue for much of the online world — it deserves the same focus and funding we give to our roads and bridges. Today’s meeting at the White House was both a recognition of the challenge and an important first step towards addressing it. We applaud the efforts of the National Security Council, the Office of the National Cyber Director, and DHS CISA in leading a concerted response to cybersecurity challenges and we look forward to continuing to do our part to support that work.

A digital fast lane for emerging economies

A look at the new Future Readiness Economic Index for decision makers

The pandemic has had devastating effects on emerging economies, threatening to undo thirty years of progress. In countries like Kenya, India, and Brazil, COVID drove up unemployment, disrupted supply chains, and devastated entire sectors.

If we do nothing, it could take years for these countries to recover, creating even greater divides between people in developed and emerging economies. But looking beyond the short-term headlines to longer-term trends tells a different story, one that could dramatically turn things around.

As last year’s Digital Sprinters Framework outlined, if emerging economies adopt the right digital policies, they could actually emerge stronger and better prepared to accelerate economic growth and opportunity.

While COVID has accelerated the use of technology for learning and conducting business, almost half of all households in the developing world still lack access to broadband and high-speed internet. Greater digital adoption could help emerging economies generate as much as $3.4 trillion of economic value by 2030. That amount of growth would mean an astonishing 25 percent increase in GDP in Brazil, a 31 percent increase in Saudi Arabia and a 33 percent increase in Nigeria.

Unlocking this growth will require focused initiatives. Governments in emerging markets want to know where to invest limited resources, and how to support and grow their national talent pool. That’s why, building on the Digital Sprinters framework, Google commissioned the Portulans Institute to develop a “Future Readiness Economic Index” — a ranking of digital progress, and a roadmap for the future.


The Future Readiness Economic Index

The Future Readiness Economic Index gives governments, businesses and analysts comprehensive metrics and milestones to assess their digital transformation.

Assessing countrywide trends can be an inexact science. But by breaking down the data in critical areas like infrastructure, talent development, skills matching, and technology adoption, the Portulans Institute’s Index can help countries focus their efforts to get the biggest returns on investment. For example, the Index suggests that Brazil, which ranks 67th globally on the Index, could sprint ahead with more adoption of digital technologies like cloud, AI and machine learning.


Seizing the chance to sprint ahead

Emerging economies have a key advantage. Unlike developed economies — which need to upgrade or replace outdated legacy infrastructure — many emerging markets can leapfrog ahead, building advanced tools from scratch rather than remodeling existing ones. (Think of how many countries without extensive landline telephone infrastructures in the 1990s have become leaders in mobile telephony adoption.)

Starting with the latest technologies can streamline progress. But which technologies, and what’s the right balance of investing in human capital, infrastructure and other critical elements? And which policies will accelerate progress and yield the greatest gains in competitiveness? The Index provides some objective comparisons to help answer those questions.

Good public policy that supports technology innovation can expand the pie for everyone.  Widely dispersed technological progress has doubled human lifespans over the last century and lifted more than a billion people from poverty in the last thirty years alone. Evidence-based investments, policies and digital tools will equip emerging economies to make even more progress in the years ahead.

Why we’re committing $10 billion to advance cybersecurity

We welcomed the opportunity to participate in President Biden’s White House Cyber Security Meeting today, and appreciated the chance to share our recommendations to advance this important agenda. The meeting comes at a timely moment, as widespread cyberattacks continue to exploit vulnerabilities targeting people, organizations, and governments around the world.


That’s why today, we are announcing that we will invest $10 billion over the next five years to strengthen cybersecurity, including expanding zero-trust programs, helping secure the software supply chain, and enhancing open-source security. We are also pledging, through the Google Career Certificate program, to train 100,000 Americans in fields like IT Support and Data Analytics, learning in-demand skills including data privacy and security. 


Governments and businesses are at a watershed moment in addressing cybersecurity. Cyber attacks are increasingly endangering valuable data and critical infrastructure. While we welcome increased measures to reinforce cybersecurity, governments and companies are both facing key challenges: 


First, organizations continue to depend on vulnerable legacy infrastructure and software, rather than adopting modern IT and security practices. Too many governments still rely on legacy vendor contracts that limit competition and choice, inflate costs, and create privacy and security risks. 


Second, nation-state actors, cybercriminals and other malicious actors continue to target weaknesses in software supply chains and many vendors don’t have the tools or expertise to stop them. 


Third, countries simply don’t have enough people trained to anticipate and deal with these threats. 


For the past two decades, Google has made security the cornerstone of our product strategy. We don’t just plug security holes, we work to eliminate entire classes of threats for consumers and businesses whose work depends on our services. We keep more users safe than anyone else in the world — blocking malware, phishing attempts, spam messages, and potential cyber attacks. We’ve published over 160 academic research papers on computer security, privacy, and abuse prevention, and we warn other software companies of weaknesses in their systems. And dedicated teams like our Threat Analysis Group work to counter government-backed hacking and attacks against Google and our users, making the internet safer for everyone.


Extending the zero-trust security model 

We’re one of the pioneers inzero-trust computing, in which no person, device, or network enjoys inherent trust.  Trust that allows access to information must be earned.  We’ve learned a lot about both the power and the challenges of running this model at scale. 


Implemented properly, zero-trust computing provides the highest level of security for organizations. We support the White House effort to deploy this model across the federal government.


As government and industry work together to develop and implement zero-trust solutions for employee access to corporate assets, we also need to apply the approach to production environments. This is necessary to address events like SolarWinds, where attackers used access to the production environment to compromise dozens of outside entities. The U.S. government can encourage adoption by expanding zero-trust guidelines and reference architecture language in the Executive Order implementation process to include production environments, which, together with application segmentation, substantially improves an organization’s defense-in-depth strategy.
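The core zero-trust idea described above, that every access request must independently prove identity, device health, and context before trust is granted, can be sketched in a few lines of Python. The signal names below are illustrative placeholders, not Google's actual policy engine:

```python
from dataclasses import dataclass


@dataclass
class AccessRequest:
    """Signals evaluated per request; all fields are hypothetical examples."""
    user_authenticated: bool   # e.g. strong multi-factor authentication passed
    device_compliant: bool     # e.g. managed, patched, attested device
    context_risk: str          # e.g. "low" or "high", from a risk engine


def grant_access(req: AccessRequest) -> bool:
    """Zero trust: deny by default; every signal must independently check out."""
    if not req.user_authenticated:
        return False
    if not req.device_compliant:
        return False
    if req.context_risk != "low":
        return False
    return True
```

In a real deployment each check would consult live identity, endpoint, and risk services rather than static fields, and decisions would be re-evaluated continuously rather than once per session.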


Securing the software supply chain 

Following the SolarWinds attack, the software world gained a deeper understanding of the real risks and ramifications of supply chain attacks. Today, the vast majority of modern software development makes use of open source software, including software incorporated in many aspects of critical infrastructure and national security systems. Despite this, there is no formal requirement or standard for maintaining the security of that software. Most of the work that is done to enhance the security of open source software, including fixing known vulnerabilities, is done on an ad hoc basis.


That’s why we worked with the Open Source Security Foundation (OpenSSF) to develop and release Supply Chain Levels for Software Artifacts (SLSA or “salsa”), a proven framework for securing the software supply chain. In our view, wide support for and adoption of the SLSA framework will raise the security bar for the entire software ecosystem. 


To further advance our work and the broader community’s work in this space, we committed to invest in the expansion of the application of our SLSA framework to protect the key components of open-source software widely used by many organizations. We also pledged to provide $100 million to support third-party foundations, like OpenSSF, that manage open source security priorities and help fix vulnerabilities.


Strengthening the digital security skills of the American workforce

Robust cybersecurity ultimately depends on having the people to implement it. That includes people with digital skills capable of designing and executing cybersecurity solutions, as well as promoting awareness of cybersecurity risks and protocols among the broader population. In short, we need more and better computer security education and training.  


Over the next three years, we're pledging to help 100,000 Americans earn Google Career Certificates in fields like IT Support and Data Analytics to learn in-demand skills including data privacy and security. The certificates are industry-recognized and supported credentials that equip Americans with the skills they need to get high-paying, high-growth jobs. To date, more than half of our graduates have come from backgrounds underserved in tech (Black, Latinx, veteran, or female). 46% of our graduates come from the lowest income tertile in the country. And the results are strong: 82% of our graduates report a positive career impact within six months of graduation. Additionally, we will train over 10 million Americans in digital skills from basic to advanced by 2023.


Leading the world in cybersecurity is critical to our national security. Today’s meeting at the White House was both an acknowledgment of the threats we face and a call to action to address them. It emphasized cybersecurity as a global imperative and encouraged new ways of thinking and partnering across government, industry and academia. We look forward to working with the Administration and others to define and drive a new era in cybersecurity. Our collective safety, economic growth, and future innovation depend on it.


Why we’re committing $10 billion to advance cybersecurity

We welcomed the opportunity to participate in President Biden’s White House Cyber Security Meeting today, and appreciated the chance to share our recommendations to advance this important agenda. The meeting comes at a timely moment, as widespread cyberattacks continue to exploit vulnerabilities targeting people, organizations, and governments around the world.


That’s why today, we are announcing that we will invest $10 billion over the next five years to strengthen cybersecurity, including expanding zero-trust programs, helping secure the software supply chain, and enhancing open-source security. We are also pledging, through the Google Career Certificate program, to train 100,000 Americans in fields like IT Support and Data Analytics, learning in-demand skills including data privacy and security. 


Governments and businesses are at a watershed moment in addressing cybersecurity. Cyber attacks are increasingly endangering valuable data and critical infrastructure. While we welcome increased measures to reinforce cybersecurity, governments and companies are both facing key challenges: 


First, organizations continue to depend on vulnerable legacy infrastructure and software, rather than adopting modern IT and security practices. Too many governments still rely on legacy vendor contracts that limit competition and choice, inflate costs, and create privacy and security risks. 


Second, nation-state actors, cybercriminals and other malicious actors continue to target weaknesses in software supply chains and many vendors don’t have the tools or expertise to stop them. 


Third, countries simply don’t have enough people trained to anticipate and deal with these threats. 


For the past two decades, Google has made security the cornerstone of our product strategy. We don’t just plug security holes, we work to eliminate entire classes of threats for consumers and businesses whose work depends on our services. We keep more users safe than anyone else in the world — blocking malware, phishing attempts, spam messages, and potential cyber attacks. We’ve published over 160 academic research papers on computer security, privacy, and abuse prevention, and we warn other software companies of weaknesses in their systems. And dedicated teams like our Threat Analysis Group work to counter government-backed hacking and attacks against Google and our users, making the internet safer for everyone.


Extending the zero-trust security model 

We’re one of the pioneers inzero-trust computing, in which no person, device, or network enjoys inherent trust.  Trust that allows access to information must be earned.  We’ve learned a lot about both the power and the challenges of running this model at scale. 


Implemented properly, zero-trust computing provides the highest level of security for organizations.  We support the White House effort to deploy this model across the federal government. 


As government and industry work together to develop and implement zero-trust solutions for employee access to corporate assets, we also need to apply the approach to production environments. This is necessary to address events like Solarwinds, where attackers used access to the production environment to compromise dozens of outside entities. The U.S. government can encourage adoption by expanding zero-trust guidelines and reference architecture language in the Executive Order implementation process to include production environments, which in addition to application segmentation substantially improves an organization’s defense in depth strategy. 


Securing the software supply chain 

Following the Solarwinds attack, the software world gained a deeper understanding of the real risks and ramifications of supply chain attacks. Today, the vast majority of modern software development makes use of open source software, including software incorporated in many aspects of critical infrastructure and national security systems. Despite this, there is no formal requirement or standard for maintaining the security of that software. Most of the work that is done to enhance the security of open source software, including fixing known vulnerabilities, is done on an ad hoc basis. 


That’s why we worked with the Open Source Security Foundation (OpenSSF) to develop and release Supply Chain Levels for Software Artifacts (SLSA or “salsa”), a proven framework for securing the software supply chain. In our view, wide support for and adoption of the SLSA framework will raise the security bar for the entire software ecosystem. 


To further advance our work and the broader community’s work in this space, we committed to invest in the expansion of the application of our SLSA framework to protect the key components of open-source software widely used by many organizations. We also pledged to provide $100 million to support third-party foundations, like OpenSSF, that manage open source security priorities and help fix vulnerabilities.


Strengthening the digital security skills of the American workforce

Robust cybersecurity ultimately depends on having the people to implement it. That includes people with digital skills capable of designing and executing cybersecurity solutions, as well as promoting awareness of cybersecurity risks and protocols among the broader population. In short, we need more and better computer security education and training.  


Over the next three years, we're pledging to help 100,000 Americans earn Google Career Certificates in fields like IT Support and Data Analytics to learn in-demand skills including data privacy and security. The certificates are industry-recognized and supported credentials that equip Americans with the skills they need to get high-paying, high-growth jobs. To date, more than half of our graduates have come from backgrounds underserved in tech (Black, Latinx, veteran, or female). 46% of our graduates come from the lowest income tertile in the country. And the results are strong: 82% of our graduates report a positive career impact within six months of graduation. Additionally, we will train over 10 million Americans in digital skills from basic to advanced by 2023.


Leading the world in cybersecurity is critical to our national security. Today’s meeting at the White House was both an acknowledgment of the threats we face and a call to action to address them. It emphasized cybersecurity as a global imperative and encouraged new ways of thinking and partnering across government, industry and academia. We look forward to working with the Administration and others to define and drive a new era in cybersecurity. Our collective safety, economic growth, and future innovation depend on it.


An update on our progress in responsible AI innovation

Over the past year, responsibly developed AI has transformed health screenings, supported fact-checking to battle misinformation and save lives, predicted Covid-19 cases to support public health, and protected wildlife after bushfires. Developing AI in a way that gets it right for everyone requires openness, transparency, and a clear focus on understanding the societal implications. That is why we were among the first companies to develop and publish AI Principles and why, each year, we share updates on our progress.

Building on previous AI Principles updates in 2018, 2019, and 2020, today we’re providing our latest overview of what we’ve learned, and how we’re applying these learnings.


Internal Education

In the last year, to ensure our teams have clarity from day one, we’ve added an introduction to our AI Principles for engineers and incoming hires in technical roles. The course presents each of the Principles as well as the applications we will not pursue.

Integrating our Principles into the work we do with enterprise customers is key, so we’ve continued to make our AI Principles in Practice training mandatory for customer-facing Cloud employees. A version of this training is available to all Googlers.

There is no single way to apply the AI Principles to specific features and product development. Training must consider not only the technology and data, but also where and how AI is used. To offer a more comprehensive approach to implementing the AI Principles, we’ve been developing opportunities for Googlers to share their points of view on the responsible development of future technologies, such as the AI Principles Ethics Fellowship for Google’s Employee Resource Groups. Fellows receive AI Principles training and craft hypothetical case studies to inform how Google prioritizes socially beneficial applications. In the fellowship’s inaugural year, 27 fellows selected from 191 applicants around the world wrote and presented case studies on topics such as genome datasets and a Covid-19 content moderation workflow.

Other programs include a bi-weekly Responsible AI Tech Talk Series featuring external experts, such as the Brookings Institution’s Dr. Nicol Turner Lee presenting on detecting and mitigating algorithmic bias.


Tools and Research

To bring together multiple teams working on technical tools and research, this year we formed the Responsible AI and Human-Centered Technology organization. The basic and applied researchers in the organization are devoted to developing technology and best practices for the technical realization of the AI Principles guidance.

As discussed in our December 2020 End-of-Year report, we regularly release these tools to the public. Currently, researchers are developing Know Your Data (in beta) to help developers understand datasets with the goal of improving data quality, helping to mitigate fairness and bias issues.

Know Your Data, a Responsible AI tool in beta

Product teams use these tools to evaluate their work’s alignment with the AI Principles. For example, the Portrait Light feature available in both Pixel’s Camera and Google Photos uses multiple machine learning components to instantly add realistic synthetic lighting to portraits. Using computational methods to achieve this effect, however, raised several responsible innovation challenges, including potentially reinforcing unfair bias (AI Principle #2) despite the goal of building a feature that works for all users. So the Portrait Light team generated a training dataset containing millions of photos based on an original set of photos of different people in a diversity of lighting environments, with their explicit consent. The engineering team used various Google Responsible AI tools to test proactively whether the ML models used in Portrait Light performed equitably across different audiences.
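Testing whether a model “performed equitably across different audiences,” as described above, typically means comparing a quality metric across subgroup slices. Here is a minimal sketch of that idea, with illustrative group labels rather than any of Google’s internal tooling:

```python
from collections import defaultdict


def accuracy_by_group(predictions, labels, groups):
    """Compute per-subgroup accuracy for a batch of model outputs."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}


def max_accuracy_gap(per_group):
    """Largest accuracy difference between any two subgroups; a common,
    simple disparity signal (a large gap flags a potential fairness issue)."""
    values = list(per_group.values())
    return max(values) - min(values)
```

Real fairness evaluations go further, using metrics suited to the task (false-positive-rate gaps, calibration by slice) and statistically meaningful sample sizes per group, but slice-based comparison is the common starting point.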

Our ongoing technical research related to responsible innovation in AI in the last 12 months has led to more than 200 research papers and articles that address AI Principles-related topics. These include exploring and mitigating data cascades; creating the first model-agnostic framework for partially local federated learning suitable for training and inference at scale; and analyzing the energy- and carbon-costs of training six recent ML models to reduce the carbon footprint of training an ML system by up to 99.9%.
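For readers unfamiliar with the federated learning setting mentioned above: clients train on local data and share only model updates, which a server aggregates, classically by a size-weighted average (FedAvg). The sketch below shows just that aggregation step, not the partially local framework described in the research:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: average client model weights, weighted by how
    many local examples each client trained on.

    client_weights: list of weight vectors (one flat list per client)
    client_sizes:   list of local dataset sizes, aligned with client_weights
    """
    total_examples = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total_examples
        for i in range(dim)
    ]
```

The privacy benefit comes from the protocol around this step (raw data never leaves the client, and updates can be secured or aggregated privately), not from the averaging arithmetic itself.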


Operationalizing the Principles

To help our teams put the AI Principles into practice, we deploy our decision-making process for reviewing new custom AI projects in development — from chatbots to newer fields such as affective technologies. These reviews are led by a multidisciplinary Responsible Innovation team, which draws on expertise in disciplines including trust and safety, human rights, public policy, law, sustainability, and product management. The team also advises product areas on specific issues related to serving enterprise customers. Any Googler, at any level, is encouraged to apply for an AI Principles review of a project or planned product or service.

Teams can also request other Responsible Innovation services, such as informal consultations or product fairness testing with the Product Fairness (ProFair) team. ProFair tests products from the user perspective to investigate known issues and find new ones, similar to how an academic researcher would go about identifying fairness issues.

Our Google Cloud, Image Search, Next Billion Users and YouTube product teams have engaged with ProFair to test potential new projects for fairness. Consultations include collaborative testing scenarios, focus groups, and adversarial testing of ML models to address improving equity in data labels, fairness in language models, and bias in predictive text, among other issues. Recently, the ProFair team spent nine months consulting with Google researchers on creating an object recognition dataset for physical landmarks (such as buildings), developing criteria for how to choose which classes to recognize and how to determine the amount of training data per class in ways that would assign a fairer relevance score to each class.

Reviewers weigh the nature of the social benefit involved and whether it substantially exceeds potential challenges. For example, in the past year, reviewers decided not to publicly release ML models that can create photo-realistic synthetic faces made with generative adversarial networks (GANs), because of the risk of potential misuse by malicious actors to create “deepfakes” for misinformation purposes.

As another example, a research team requested a review of a new ML training dataset for computer vision fairness techniques that offered more specific attributes about people, such as perceived gender and age-range. The team worked with Open Images, an open data project containing ~9 million images spanning thousands of object categories and bounding box annotations for 600 classes. Reviewers weighed the risk of labeling the data with the sensitive labels of perceived gender presentation and age-range, and the societal benefit of these labels for fairness analysis and bias mitigation. Given these risks, reviewers required creation of a data card explaining the details and limitations of the project. We released the MIAP (More Inclusive Annotations for People) dataset in the Open Images Extended collection. The collection contains more complete bounding box annotations for the person class hierarchy in 100K images containing people. Each annotation is also labeled with fairness-related attributes. MIAP was accepted and presented at the 2021 Artificial Intelligence, Ethics and Society conference.


External Engagement

We remain committed to contributing to emerging international principles for responsible AI innovation. For example, we submitted a response to the European Commission consultation on the inception impact assessment on ethical and legal requirements for AI and feedback on NITI Aayog’s working document on Responsible Use of AI to guide national principles for India. We also supported the Singaporean government’s Guide to Job Redesign in the Age of AI, which outlines opportunities to optimize AI benefits by redesigning jobs and reskilling employees.

Our efforts to engage global audiences, from students to policymakers, center on educational programs and discussions to offer helpful and practical ML education, including:

  • A workshop on Federated Learning and Analytics, making all research talks and a TensorFlow Federated tutorial publicly available.
  • Machine Learning for Policy Leaders (ML4PL), a 2-hour virtual workshop on the basics of ML. To date, we’ve expanded this globally, reaching more than 350 policymakers across the EU, LatAm, APAC, and US.
  • A workshop co-hosted with the Indian Ministry of Electronics and Information Technology on the Responsible Use of AI for Social Empowerment, exploring the potential of AI adoption in the government to address COVID-19, agricultural and environmental crises.

To support these workshops and events with actionable and equitable programming designed for long-term collaboration, over the past year we’ve helped launch:

  • AI for Social Good workshops, bringing together NGOs applying AI to tough challenges in their communities with academic experts and Google researchers to encourage collaborative AI solutions. So far we’ve supported more than 30 projects in Asia Pacific and Sub-Saharan Africa with expertise, funding and Cloud Credits.
  • Two collaborations with the U.S. National Science Foundation: one to support the National AI Research Institute for Human-AI Interaction and Collaboration with $5 million in funding, along with AI expertise, research collaborations and Cloud support; another to join other industry partners and federal agencies as part of a combined $40 million investment in academic research for Resilient and Intelligent Next-Generation (NextG) Systems, in which Google will offer expertise, research collaborations, infrastructure and in-kind support to researchers.
  • Quarterly Equitable AI Research Roundtables (EARR), focused on the potential downstream harms of AI with experts from the Othering and Belonging Institute at UC Berkeley, PolicyLink, and Emory University School of Law.
  • MLCommons, a non-profit that will administer MLPerf, a suite of ML performance benchmarks used by Google and industry peers.


Committed to sharing tools and discoveries

In the three years since Google released our AI Principles, we’ve worked to integrate this ethical charter across our work — from the development of advanced technologies to business processes. As we learn and engage with people and organizations across society, we’re committed to sharing tools, processes and discoveries with the global community. You’ll find many of these in our recently updated People + AI Research Guidebook and on the Google AI responsibilities site, which we update quarterly with case studies and other resources.

How Google supports today’s critical cybersecurity efforts

The past six months have seen some of the most widespread and alarming cyber attacks against our digital infrastructure in history — against public utilities, private sector companies, government entities and people living in democracies around the world. Attacks by nation-states and criminals are increasingly brazen and effective, penetrating even widely used products and services that are supposed to keep you safe.

We are deeply concerned by these trends. Security is the cornerstone of our product strategy, and we’ve spent the last decade building infrastructure and designing products that implement security at scale: every day, Gmail blocks more than 100 million phishing attempts before they ever reach you, and Google Play Protect scans over 100 billion apps for malware and other issues. We strive to deliver the most trusted cloud in the industry. And we have dedicated teams like Project Zero who focus on finding and fixing vulnerabilities across the web to make the internet safer for all of us.

Our security-first approach builds on awareness of an evolving threat environment, industry-wide information sharing, and the leadership of the international security community. We welcome growing efforts by governments around the world to address cybersecurity challenges. The recent cyber attacks create an opportunity to improve international cooperation and collaboration on areas of common concern. 

In the United States, we are committed to supporting the most recent White House Cybersecurity Executive Order, which makes critical strides to improve America’s cyber defenses in three key areas: 


Modernization and security innovation 

One of the most promising aspects of the U.S. government’s approach is to set agencies and departments on a path to modernize security practices and strengthen cyber defenses across the federal government. We strongly support modernizing computing systems, making security simple and scalable by default, and adopting best practices like zero trust frameworks. As we saw with SolarWinds and the Microsoft Exchange attacks, proprietary systems and restrictions on interoperability and data portability can amplify a network’s vulnerability, helping attackers scale up their efforts. Being tied to a single legacy system also keeps public sector agencies and businesses from taking advantage of the latest cloud-based security solutions. 

Modern systems make it possible to apply frequent security updates and changes safely, a critical part of cyber-defense for both the government and the private sector. If we are going to solve big security problems, we need to move beyond security band-aids to eliminating entire classes of vulnerabilities, like the risk of clicking on bad links.


Secure software development

The U.S. government’s call to action to secure software development practices could bring about the most significant progress on cybersecurity in a decade and will likely have a lasting impact on government risk postures.

At Google, we’ve emphasized securing the software supply chain and we’ve long built technologies and advocated for standards that enhance the integrity and security of software. We continue to work with the U.S. Commerce Department on these issues and support their effort to develop and share best practices. 

Public-private partnerships

In the last few weeks, ransomware attacks have targeted our schools, hospitals, oil pipelines and food supply. Meaningful improvement in cybersecurity will require the public and private sectors to work together in areas like sharing information on cyber threats; developing a comprehensive, defensive security posture to protect against ransomware; and coordinating how they identify and invest in next-generation security tools. 

We are committed to advancing our collective cybersecurity. We have had to block many attacks, including some from nation-states. Those experiences have given us insights into what works in practice, so our government and private-sector customers don’t have to tackle these issues on their own or depend on the same enterprise technology that created the issues in the first place. Governments need industry-wide support, and we are ready and willing to do our part.

We look forward to expanding our work with the United States and other governments, as well as with private sector partners, to develop security technologies and standards that make us all safer. 


Seizing the moment – A framework for American innovation

Decades of government investment in R&D led to scientific breakthroughs that gave us the tools we use every day, and public-private partnerships have sparked innovations from the microchip to the internet. Government R&D investment has led to economic growth, jobs and new startups. As just one example, some of Google’s earliest work was made possible, in part, by the Digital Library Initiative, funded by the National Science Foundation.

But fast forward to today, and U.S. government investment in tech has moved to the slow lane. Government-funded research in the U.S. has fallen by more than 60% as a percentage of GDP — from 1.9% of GDP in 1964 to just 0.7% today. Meanwhile, many countries around the world are investing significantly in research and development. For example, China has said that it will be increasing government R&D funding by 7% annually and recently announced a five-year plan to invest an additional $1.4 trillion in developing next-generation technologies.

As a nation we now have a historic opportunity to put aside partisanship and come together on an issue that will determine our future competitiveness. The United States must seize the moment to cultivate science and technology by setting out a national innovation strategy, and we commit to doing our part.

Senators Schumer and Young have introduced the bipartisan Endless Frontier Act — an important step in putting to work America’s strengths in science and technology to tackle some of the biggest issues of our time, from climate change to global health. Legislative proposals to increase funding for the National Science Foundation will accelerate innovation in the technologies of the future — including quantum computing, AI, biotech and genomics, advanced wireless networks, and robotics — and strengthen the U.S. innovation ecosystem through regional hubs spread throughout the country.

We are also encouraged to see that key components of President Biden’s American Jobs Plan call for increased investment in R&D, including focus on advanced manufacturing, support for underrepresented students in STEM, and collaboration with U.S. universities. We hope Congress can come together in a bipartisan way to support extra investment in research and development.

Beyond direct support for R&D, our national innovation strategy should include support for immigration reform, entrepreneurial start-ups, regulatory clarity, and open data and interoperability.

America’s leadership in science and technology comes in part from our unmatched ability to recruit, train, and retain the world’s best talent. Our doors must be open to the best and the brightest, and we should make it easier for experts in vital technology fields to come to the United States and help grow our innovation economy. In parallel, a renewed focus on STEM education, skills-based training, and school-to-work apprenticeship programs will empower American workers and promote job and wage growth around the country.

America’s innovation framework should work for businesses of all sizes.  At a time when we’re seeing record-setting investment in the promise of new companies, the government can pitch in by expanding access to public resources such as data, software, and computing infrastructure. Streamlined government contracting will also make it easier for startups to bid for large contracts and gain commercial opportunities.

Tech breakthroughs are built on accumulated and shared knowledge — we all stand on the shoulders of others. Data interoperability and open-source software help all of us, including smaller companies and research organizations. The government should promote interoperability, open data, and open-source applications by more actively sharing public data and contributing to open-source platforms.

Clear, balanced and consistent regulations can unlock innovation while protecting consumers and ensuring an equal playing field. Any new generation of technology raises important new questions and requires a balancing of concerns. At the same time, streamlining regulatory burdens can speed great new products to market, helping smaller companies who can struggle to comply with costly or complex rules.

Larger companies like Google and Alphabet have an important role to play in supporting this work, and our public reports show that we’ve invested more than $100 billion in R&D over the last five years. We’ll keep publishing our findings in scientific journals and support public research through public-private partnerships, like our work with NSF on the National AI Research Institute for Human-AI Interaction and Collaboration and our breakthroughs in quantum computing. We also launched Google Career Certificates to help workers develop the skills they need and share in the benefits of growing industries. And because open data and open-source code are essential for innovation, we host a large number of publicly available datasets, services, and software accessible to everyone.

We welcome the moves made in recent days and weeks to support America’s innovation leadership.  We’ll continue to look for opportunities to collaborate with government, academic institutions, and others to do our part.

Our ongoing commitment to supporting journalism

Google has always been committed to providing high-quality and relevant information, and to supporting the news publishers who help create it. We are one of the world’s leading financial supporters of journalism. We’ve shared billions of dollars in revenue with news publishers via our ad network, helped news organizations develop new business models and revenue streams, and committed $1 billion over the next three years to license news content through Google News Showcase.  

We welcome the discussion of ways to create a better economic future for quality journalism, especially as the news media business model has been facing increased challenges for many years. But proposals that would disrupt access to the open web (such as requiring payment for just showing links to websites) would hurt consumers, small businesses, and publishers. That’s why we’ve engaged constructively with publishers around the world on better solutions and will continue to do so. 

We also believe that this important debate should be about the substance of the issue, and not derailed by naked corporate opportunism … which brings us to Microsoft’s sudden interest in this discussion. We respect Microsoft’s success and we compete hard with them in cloud computing, search, productivity apps, video conferencing, email and many other areas. Unfortunately, as competition in these areas intensifies, they are reverting to their familiar playbook of attacking rivals and lobbying for regulations that benefit their own interests. They are now making self-serving claims and are even willing to break the way the open web works in an effort to undercut a rival. And their claims about our business and how we work with news publishers are just plain wrong.

This latest attack marks a return to Microsoft’s longtime practices. And it’s no coincidence that Microsoft’s newfound interest in attacking us comes on the heels of the SolarWinds attack and at a moment when they’ve allowed tens of thousands of their customers — including government agencies in the U.S., NATO allies, banks, nonprofits, telecommunications providers, public utilities, police, fire and rescue units, hospitals and, presumably, news organizations — to be actively hacked via major Microsoft vulnerabilities. Microsoft was warned about the vulnerabilities in its systems, knew they were being exploited, and is now doing damage control while its customers scramble to pick up the pieces from what has been dubbed the Great Email Robbery. So maybe it’s not surprising to see them dusting off the old diversionary Scroogled playbook.

Microsoft is the second-largest company in the U.S. by market capitalization and the owner of LinkedIn, MSN, Microsoft News and Bing, all of which are places where news is regularly consumed and shared. But their track record is spotty: they have paid out a much smaller amount to the news industry than we have. And given the chance to support or fund their own journalists, Microsoft replaced them with AI bots.

Microsoft’s attempts at distraction aside, we’ll continue to collaborate with news organizations and policymakers around the world to enable a strong future for journalism. We're doing a lot to support journalism, and will do much more. We look forward to continuing to engage with regulators and news publishers to ensure a thriving and healthy publishing industry.

What people are saying about Australia’s proposed News Media Bargaining Code

Microsoft’s take on Australia’s proposed law is unsurprising — of course they'd be eager to impose an unworkable levy on a rival and increase their market share. 

But in its eagerness, Microsoft makes numerous claims that have been thoroughly and independently debunked.

We have long been committed to supporting high-quality content on the web. Our issue is absolutely not with paying news organizations — we’ve done this for many years. Today Google News Showcase is paying publishers, and supporting local journalism, in Australia and in more than a dozen other countries. Through these partnerships, we are paying significant amounts to support news organizations large and small — with more to come.

But we and others have pointed to significant concerns with the proposed Australian law, while proposing reasonable amendments to make it work. The issue isn’t whether companies pay to support quality content; the issue is how. The law would unfairly require unknown payments for simply showing links to news businesses, while giving a favored few special previews of search ranking. Those aren’t workable solutions and would fundamentally change the Internet, hurting the people and businesses who use it. But there are better ways, and we’re committed to making progress.

Don't take our (or Microsoft’s) word for it. Here's what others are saying, from a former Australian Prime Minister, to the Business Council of Australia, to the inventor of the web.

[Image: quote from the Business Council of Australia]
[Image: quote from the FT editorial board]
[Image: quote from Sir Tim Berners-Lee]

Our continuing support for Dreamers

For generations, talented immigrants have helped America drive technological breakthroughs and scientific advancements that have created millions of new jobs in new industries, enriching our culture and our economy.

That’s why we have long supported the Deferred Action for Childhood Arrivals (DACA) program. Established in 2012, DACA allows “Dreamers” who came to the United States as children to request deferred action and work authorization for renewable two-year periods. Google proudly employs Dreamers who work to build the products you use every day. And we’ve defended their right to stay in the United States by joining amici briefs in court supporting DACA.

Unfortunately, DACA’s immediate future is uncertain. At the end of 2020, a U.S. District Court indicated that it could soon issue a ruling against DACA that would bar new applications and, ultimately, renewals as well, leaving countless Dreamers in limbo.

We believe it’s important that Dreamers have a chance to apply for protection under the program so that they can safeguard their status in the United States. But in the middle of a global pandemic that has led to economic hardship, especially for the many immigrants playing essential roles on the front lines, there is concern that many Dreamers cannot afford to pay the application fee.

We want to do our part, so Google.org is making a $250,000 grant to United We Dream to cover the DACA application fees of over 500 Dreamers. This grant builds on over $35 million in support that Google.org and Google employees have contributed over the years to support immigrants and refugees worldwide, including more than $1 million from Googlers and Google.org specifically supporting DACA and domestic immigration efforts through employee giving campaigns led by HOLA (Google’s Latino Employee Resource Group).

We know this is only a temporary solution. We need legislation that not only protects Dreamers, but also delivers other much-needed reforms. We will support efforts by the new Congress and incoming Administration to pass comprehensive immigration reform that improves employment-based visa programs to enhance American competitiveness, gives greater assurance to immigrant workers and employers, and promotes better and more humane immigration processing and border security practices.

Dreamers and other talented immigrants enrich our communities, contribute to our economy, and exemplify the innovative spirit of America. We’re proud to support them.