
Doing our part to share open data responsibly

This past weekend marked Open Data Day, an annual celebration of making data freely available to everyone. Communities around the world organized events, and we’re taking a moment here at Google to share our own perspective on the importance of open data. More accessible data can meaningfully help people and organizations, and we’re doing our part by opening datasets, providing access to APIs and aggregated product data, and developing tools to make data more accessible and useful.

Responsibly opening datasets

Sharing datasets is increasingly important as more people adopt machine learning through open frameworks like TensorFlow. We’ve released over 50 open datasets for other developers and researchers to use. These include YouTube-8M, a corpus of annotated videos used by external researchers for video understanding; the HDR+ Burst Photography dataset, which helps others experiment with the technology that powers Pixel features like Portrait Mode; and Open Images, along with the Open Images Extended dataset, which increases the diversity of images represented.

Just because data is open doesn’t mean it will be useful, however. First, a dataset needs to be cleaned so that any insights developed from it are based on well-structured and accurate examples. Cleaning a large dataset is no small feat; before opening up our own, we spend hundreds of hours standardizing data and validating quality. Second, a dataset should be shared in a machine-readable format that’s easy for others to use, such as JSON rather than PDF. Finally, consider whether the dataset is representative of the intended content. Even if data is usable and representative of some situations, it may not be appropriate for every application. For instance, if a dataset contains mostly North American animal images, it may help you classify a deer, but not a giraffe. Tools like Facets can help you analyze the makeup of a dataset and evaluate the best ways to put it to use. We’re also working to build more representative datasets through interfaces like the Crowdsource application. To guide others’ use of your own dataset, consider publishing a data card which denotes authorship, composition and suggested use cases (here’s an example from our Open Images Extended release).
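As an illustration of those last few steps—and not the pipeline used for any particular Google dataset—a minimal cleanup-and-export pass might look like the sketch below. The file names, labels and fields are invented; the point is simply that records get validated and standardized, then published in a machine-readable format alongside a small data card.

```python
import csv
import json

# Hypothetical raw annotation export; the file paths, labels and fields here are
# illustrative only, not the format of any specific Google dataset.
RAW_CSV = "raw_annotations.csv"
CLEAN_JSON = "clean_annotations.json"

VALID_LABELS = {"deer", "moose", "bear"}  # labels the dataset is meant to cover


def clean_record(row):
    """Standardize one record, or return None if it fails basic validation."""
    label = row.get("label", "").strip().lower()
    url = row.get("image_url", "").strip()
    if label not in VALID_LABELS or not url.startswith("https://"):
        return None
    return {"image_url": url, "label": label, "source": row.get("source", "unknown")}


with open(RAW_CSV, newline="") as f:
    cleaned = [r for r in (clean_record(row) for row in csv.DictReader(f)) if r]

# Publish in a machine-readable format (JSON rather than PDF), alongside a simple
# data card noting authorship, composition and suggested use cases.
data_card = {
    "authorship": "Example Research Team",
    "composition": f"{len(cleaned)} validated image annotations; labels: {sorted(VALID_LABELS)}",
    "suggested_use": "Benchmarking North American animal classifiers; not representative of global fauna.",
}

with open(CLEAN_JSON, "w") as f:
    json.dump({"data_card": data_card, "records": cleaned}, f, indent=2)
```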

Making data findable and useful

It’s not enough just to make good data open, though; it also needs to be findable. Researchers, developers, journalists and other curious data-seekers often struggle to locate data scattered across the web’s thousands of repositories. Our Dataset Search tool helps people find data sources wherever they’re hosted, as long as the data is described in a way that search engines can locate. Since the tool launched a few months ago, we’ve seen the number of unique datasets on the platform double to 10 million, including contributions from the U.S. National Oceanic and Atmospheric Administration (NOAA), the National Institutes of Health (NIH), the Federal Reserve, the European Data Portal, the World Bank and government portals from every continent.
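For publishers, making a dataset locatable generally means describing it with schema.org Dataset metadata embedded in the dataset’s web page. Below is a minimal sketch of that markup, generated with Python for convenience; the property names come from schema.org, while the dataset, URLs and license are invented examples.

```python
import json

# A minimal sketch of schema.org "Dataset" metadata, the structured data that
# dataset search tools crawl. The dataset name, URLs and license below are
# hypothetical; the property names are standard schema.org vocabulary.
dataset_metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example City Air Quality Readings",
    "description": "Hourly PM2.5 readings from municipal sensors, 2015-2018.",
    "url": "https://example.org/datasets/air-quality",
    "creator": {"@type": "Organization", "name": "Example City Open Data Portal"},
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": [{
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/datasets/air-quality.csv",
    }],
}

# Publishers typically embed this JSON-LD in the dataset's landing page inside a
# <script type="application/ld+json"> tag so crawlers can pick it up.
print(json.dumps(dataset_metadata, indent=2))
```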

What makes data useful is how easily it can be analyzed. Though there’s more open data today, data scientists spend significant time analyzing it across multiple sources. To help solve that problem, we’ve created Data Commons. It’s a knowledge graph of data sources that lets users treat various datasets of interest—regardless of source and format—as if they were all in a single local database. Anyone can contribute datasets or build applications powered by the infrastructure. For people using the platform, that means less time engineering data and more time generating insights. We’re already seeing exciting use cases of Data Commons. In one UC Berkeley data science course taught by Josh Hug and Fernando Perez, students used Census, CDC and Bureau of Labor Statistics data to correlate obesity levels across U.S. cities with other health and economic factors. Typically, that analysis would take days or weeks; using Data Commons, students were able to build high-fidelity models in less than an hour. We hope to partner with other educators and researchers—if you’re interested, reach out to collaborate@datacommons.org.
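As a rough illustration of the kind of cross-source analysis described above—and not the Data Commons API itself, which handles this joining for you—a hand-rolled version with hypothetical files might look like this:

```python
import pandas as pd

# A hypothetical, simplified version of the analysis described above: joining
# city-level health and economic indicators and measuring how they co-vary.
# The file names and columns are made up; Data Commons exposes this kind of
# joined data directly, without hand-aligning sources.
obesity = pd.read_csv("cdc_obesity_by_city.csv")    # columns: city, obesity_rate
income = pd.read_csv("census_income_by_city.csv")   # columns: city, median_income

merged = obesity.merge(income, on="city", how="inner")

# Pearson correlation between obesity prevalence and median household income.
corr = merged["obesity_rate"].corr(merged["median_income"])
print(f"Correlation across {len(merged)} cities: {corr:.2f}")
```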

Balancing trade-offs

There are trade-offs to opening up data, and we aim to balance various sensitivities with the potential benefits of sharing. One consideration is that broad data openness can facilitate uses that don’t align with our AI Principles. For instance, we recently made synthetic speech data available only to researchers participating in the 2019 ASVspoof Challenge, to ensure that the data can be used to develop tools to detect deepfakes, while limiting misuse.

Extreme data openness can also risk exposing user or proprietary information, causing privacy breaches or threatening the security of our platforms. We allow third-party developers to build on services like Maps, Gmail and more via APIs, so they can create their own products while user data is kept safe. We also publish aggregated product data like Search Trends to share information of public interest in a privacy-preserving way.

While there can be benefits to using sensitive data in controlled and principled ways, like predicting medical conditions or events, it’s critical that safeguards are in place so that training machine learning models doesn’t compromise individual privacy. Emerging research provides promising new avenues to learn from sensitive data. One is Federated Learning, a technique for training global ML models without data ever leaving a person’s device, which we’ve recently made available as open source through TensorFlow Federated. Another is Differential Privacy, which can offer strong guarantees that training data details aren’t inappropriately exposed in ML models. Additionally, researchers are experimenting more and more with using small training datasets and zero-shot learning, as we demonstrated in our recent prostate cancer detection research and work on Google Translate.
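To make the differential privacy idea concrete, here is a minimal sketch of the classic Laplace mechanism for a counting query. It illustrates the general technique only; it is not the mechanism used in any particular Google system, and the patient records below are invented.

```python
import numpy as np

def private_count(records, predicate, epsilon):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy. Smaller epsilon means more noise and
    stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: how many patients in a dataset have a given condition.
patients = [{"condition": "flu"}, {"condition": "flu"}, {"condition": "cold"}]
print(private_count(patients, lambda r: r["condition"] == "flu", epsilon=0.5))
```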

We hope that our efforts will help people access and learn from clean, useful, relevant and privacy-preserving open data from Google to solve the problems that matter to them. We also encourage other organizations to consider how they can contribute—whether by opening their own datasets, facilitating usability by cleaning them before release, using schema.org metadata standards to increase findability, enhancing transparency through data cards or considering trade-offs like user privacy and misuse. To everyone who has come together over the past week to celebrate open data: we look forward to seeing what you build.

To help fight the opioid crisis, a new tool from Maps and Search

In 2017, the Department of Health and Human Services (HHS) declared the opioid crisis a public health emergency, with over 130 Americans dying every day from opioid-related drug overdoses.  Last month, we saw that search queries for “medication disposal near me” reached an all-time high on Google.


Fifty-three percent of prescription drug abuse starts with drugs obtained from family or friends, so we’re working alongside government agencies and nonprofit organizations to help people safely remove excess or unused opioids from their medicine cabinets. Last year, we partnered with the U.S. Drug Enforcement Administration (DEA) for National Prescription Take Back Day by developing a Google Maps API locator tool to help people dispose of their prescription drugs at temporary collection sites set up twice a year. With the help of this tool, the DEA and its local partners collected a record 1.85 million pounds of unused prescription drugs in 2018.

Today, we’re making it easier for Americans to quickly find disposal locations on Google Maps and Search all year round. A search for queries like “drug drop off near me” or “medication disposal near me” will display permanent disposal locations at your local pharmacy, hospital or government building so you can quickly and safely discard your unneeded medication.




This pilot has been made possible thanks to the hard work of many federal agencies, states and pharmacies. Companies like Walgreens and CVS Health, along with state governments in Alabama, Arizona, Colorado, Iowa, Massachusetts, Michigan and Pennsylvania, have been instrumental in this project, contributing extensive lists of public and private disposal locations. The DEA is already working with us to provide additional location data to expand the pilot.

For this pilot, we also looked to public health authorities—like HHS—for ideas on how technology can help communities respond to the opioid crisis. In fact, combining disposal location data from different sources was inspired by a winning entry at the HHS’s Opioid Code-A-Thon held a year ago.

We’ll be working to expand coverage and add more locations in the coming months. To learn more about how your state or business can bring more disposal locations to Google Maps and Search, contact RXdisposal-data@google.com today.


Source: Google LatLong


Smart regulation for combating illegal content

We've written before about how we're working to support smart regulation, and one area of increasing attention is regulation to combat illegal content.

As online platforms have become increasingly popular, there’s been a rich debate about the best legal framework for combating illegal content in a way that respects other social values, like free expression, diversity and innovation. Today, various laws provide detailed regulations, including Section 230 of the Communications Decency Act in the United States and the European Union’s e-Commerce Directive.

Google invests millions of dollars in technology and people to combat illegal content in an effective and fair way. It’s a complex task, and, just as in offline contexts, it’s not a problem that can be totally solved. Rather, it’s a problem that must be managed, and we are constantly refining our practices.

In addressing illegal content, we’re also conscious of the importance of protecting legal speech. Context often matters when determining whether content is illegal. Consider a video of military conflict. In one context the footage might be documentary evidence of atrocities in areas that are difficult and dangerous for journalists to access. In another context the footage could be promotional material for an illegal organization. Even a highly trained reviewer could have a hard time telling the difference, and we need to get those decisions right across many different languages and cultures, and across the vast scale of audio, video, text, and images uploaded online. We make it easy to submit takedown notices; at the same time, we also create checks and balances against misuse of removal processes. And we look to the work of international agencies and principles from leading groups like the Global Network Initiative.

A smart regulatory framework is essential to enabling an appropriate approach to illegal content. We wanted to share four key principles that inform our practices and that (we would suggest) make for an effective regulatory framework:

  • Shared Responsibility: Tackling illegal content is a societal challenge—in which companies, governments, civil society, and users all have a role to play. Whether a company is alleging copyright infringement, an individual is claiming defamation, or a government is seeking removal of terrorist content, it’s essential to provide the online platform with clear notice about the specific piece of content; platforms then have a responsibility to take appropriate action on it. In some cases, content may not be clearly illegal, either because the facts are uncertain or because the legal outcome depends on a difficult balancing act; in turn, courts have an essential role to play in fact-finding and reaching legal conclusions on which platforms can rely.

  • Rule of law and creating legal clarity: It’s important to clearly define what platforms can do to fulfill their legal responsibilities, including removal obligations. An online platform that takes other voluntary steps to address illegal content should not be penalized for doing so. (This is sometimes called “Good Samaritan” protection.)

  • Flexibility to accommodate new technology: While laws should accommodate relevant differences between platforms, given the fast-evolving nature of the sector, laws should be written in ways that address the underlying issue rather than focusing on existing technologies or mandating specific technological fixes.

  • Fairness and transparency: Laws should support companies’ ability to publish transparency reports about content removals, and provide people with notice and an ability to appeal removal of content. They should also recognize that fairness is a flexible and context-dependent notion—for example, improperly blocking newsworthy content or political expression could cause more harm than mistakenly blocking other types of content. 

With these principles in mind, we support refinement of notice-and-takedown regimes, but we have significant concerns about laws that would mandate proactive monitoring or filtering of content, impose overly rigid timelines for content removal, or otherwise impose harsh penalties even on those acting in good faith. These types of laws create a risk that platforms won’t take a balanced approach to content removals, but instead take a “better safe than sorry” approach—blocking content at upload or implementing a “take down first, ask questions later (or never)” approach. We regularly receive overly broad removal requests, and analyses of cease-and-desist and takedown letters have found that many seek to remove potentially legitimate or protected speech.

There’s ample room for debate and nuance on these topics—we discuss them every day—and we’ll continue to seek ongoing collaboration among governments, industry, and civil society on this front. Over time, an ecosystem of tools and institutions—like the Global Internet Forum to Counter Terrorism, and the Internet Watch Foundation, which has taken down child sexual abuse material for more than two decades—has evolved to address the issue. Continuing to develop initiatives like these and other multistakeholder efforts remains critical, and we look forward to progressing those discussions.

Investing across the U.S. in 2019

One year ago this week, I was in Montgomery County, Tennessee to break ground for a new data center in Clarksville. It was clear from the excitement at the event that the jobs and economic investment meant a great deal to the community. I’ve seen that same optimism in communities around the country that are helping to power our digital economy. And I’m proud to say that our U.S. footprint is growing rapidly: In the last year, we’ve hired more than 10,000 people in the U.S. and made over $9 billion in investments. Our expansion across the U.S. has been crucial to finding great new talent, improving the services that people use every day, and investing in our business.

Today we’re announcing over $13 billion in investments throughout 2019 in data centers and offices across the U.S., with major expansions in 14 states. These new investments will give us the capacity to hire tens of thousands of employees, and enable the creation of more than 10,000 new construction jobs in Nebraska, Nevada, Ohio, Texas, Oklahoma, South Carolina and Virginia. With this new investment, Google will now have a home in 24 total states, including data centers in 13 communities. 2019 marks the second year in a row we’ll be growing faster outside of the Bay Area than in it.

This growth will allow us to invest in the communities where we operate, while we improve the products and services that help billions of people and businesses globally. Our new data center investments, in particular, will enhance our ability to provide the fastest and most reliable services for all our users and customers. As part of our commitment to 100 percent renewable energy purchasing, we’re also making significant renewable energy investments in the U.S. as we grow. Our data centers make a significant economic contribution to local communities, as do the associated $5 billion in energy investments that our energy purchasing supports.

Here’s a closer look at the investments we’re making state by state:


Midwest

We’re continuing to expand our presence in Chicago and are developing new data centers in Ohio and Nebraska. The Wisconsin office is set to move into a larger space in the next few months—and last November we opened a Detroit office in Little Caesars Arena, where you can see into the space where the Detroit Red Wings play.


Googlers and partners at our office opening in Detroit last November

South

With new office and data center development, our workforce in Virginia will double. And with a new office in Georgia, our workforce will double there as well. Data centers in Oklahoma and South Carolina will expand, and we’re developing a new office and data center in Texas.


Opening one of our data centers last year.

Northeast

Massachusetts has one of our largest sales and engineering communities outside of the Bay Area, and we’re building new office space there. In New York, the Google Hudson Square campus—a major product, engineering and business hub—will come to life over the next couple of years.

West

We’ll open our first data center in Nevada and will expand our Washington office, a key product and engineering hub. In addition to investments in the Bay Area, our investments in California continue with the redevelopment of the Westside Pavilion and the Spruce Goose Hangar in the Los Angeles area.


Googlers at work. Our investments this year will go toward expansions in data centers and offices across the U.S.

All of this growth is only possible with our local partners. Thank you for welcoming Google into your communities—we look forward to working together to grow our economy and support jobs in the U.S.


Oracle v. Google and the future of software development

Today we asked the Supreme Court of the United States to review our long-running copyright dispute with Oracle over the use of software interfaces. The outcome will have a far-reaching impact on innovation across the computer industry.

Standardized software interfaces have driven innovation in software development. They let computer programs interact with each other and let developers easily build technologies for different platforms. Unless the Supreme Court steps in here, the industry will be hamstrung by court decisions finding that the use of software interfaces in creating new programs is not allowed under copyright law.

With smartphone apps now common, we sometimes forget how hard it once was for developers to build apps across a wide range of different platforms. Our 2008 release of the open-source Android platform changed the game. It helped developers overcome the challenges of smaller processors, limited memory, and short battery life, while providing innovative features and functionality for smartphone development. The result was a win for everyone: Developers could build new apps, manufacturers could build great new devices, and the resulting competition gave consumers both lower prices and an extraordinary range of choice.

We built Android following the computer industry’s long-accepted practice of re-using software interfaces, which provide sets of commands that make it easy to implement common functionality—in the same way that computer keyboard shortcuts like pressing “control” and “p” make it easy to print. Android created a transformative new platform, while letting millions of Java programmers use their existing skills to create new applications. And the creators of Java backed the release of Android, saying that it had “strapped another set of rockets to the [Java] community’s momentum.”
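To make the distinction concrete, here is a small hypothetical sketch (in Python rather than Java, and unrelated to the specific interfaces at issue in the case) of what re-using a software interface means: the declared operations stay the same, implementations differ, and code written against the interface keeps working on every platform.

```python
from abc import ABC, abstractmethod

# The interface is just the set of declared operations. Different platforms can
# supply their own, independently written implementations, and code written
# against the interface keeps working on all of them. The names are invented.
class MessageService(ABC):
    @abstractmethod
    def send(self, recipient: str, body: str) -> None: ...

class DesktopMessageService(MessageService):
    def send(self, recipient: str, body: str) -> None:
        print(f"[desktop] to {recipient}: {body}")

class MobileMessageService(MessageService):
    def send(self, recipient: str, body: str) -> None:
        print(f"[mobile] to {recipient}: {body}")

def notify(service: MessageService) -> None:
    # Written once against the interface; unchanged across platforms.
    service.send("alice", "Your order shipped.")

notify(DesktopMessageService())
notify(MobileMessageService())
```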

But after acquiring Java (through its 2010 purchase of Sun Microsystems), Oracle sued us for using these software interfaces, trying to profit by changing the rules of software development after the fact. Oracle’s lawsuit claims the right to control software interfaces—the building blocks of software development—and as a result, the ability to lock in a community of developers who have invested in learning the free and open Java language.

A court initially ruled that the software interfaces in this case are not copyrightable, but that decision was overruled. A unanimous jury then held that our use of the interfaces was a legal fair use, but that decision was likewise overruled. Unless the Supreme Court corrects these twin reversals, this case will end developers’ traditional ability to freely use existing software interfaces to build new generations of computer programs for consumers. Just like we all learn to use computer keyboard shortcuts, developers have learned to use the many standard interfaces associated with different programming languages. Letting these reversals stand would effectively lock developers into the platform of a single copyright holder—akin to saying that keyboard shortcuts can work with only one type of computer.

The U.S. Constitution authorized copyrights to “promote the progress of science and useful arts,” not to impede creativity or promote lock-in of software platforms. Leading voices from business, technology, academia, and the nonprofit sector agree and have spoken out about the potentially devastating impacts of this case.

We support software developers’ ability to develop the applications we all have come to use every day, and we hope that the Supreme Court will give this case the serious and careful consideration it deserves.  

Engaging policy stakeholders on issues in AI governance

AI has become part of the fabric of modern life, with applications in sectors ranging from agriculture to retail to health to education. We believe that AI, used appropriately, can deliver great benefit for economies and society, and help people to make decisions that are fairer, safer, and more inclusive and informed.

As with other technologies, there are new policy questions that arise with the use of AI, and governments and civil society groups worldwide have a key role to play in the AI governance discussion. In a white paper we’re publishing today, we outline five areas where government can work with civil society and AI practitioners to provide important guidance on responsible AI development and use: explainability standards, fairness appraisal, safety considerations, human-AI collaboration and liability frameworks.

There are many trade-offs within each of these areas and the details are paramount for responsible implementation. For example, how should explainability and the need to hold an algorithm accountable be balanced with safeguarding the security of the system against hackers, protecting proprietary information and the desire to make AI experiences user-friendly? How should benchmarks and testing to ensure the safety of an AI system be balanced with the potential safety costs of not using the system?

No one company, country, or community has all the answers; on the contrary, it’s crucial for policy stakeholders worldwide to engage in these conversations. In the majority of cases, general legal frameworks and existing sector-specific processes will continue to provide an appropriate governance structure; for example, medical device regulations should continue to govern medical devices, regardless of whether AI is used in the device or not. However, in cases where additional oversight is needed, we hope this paper can help to promote pragmatic and forward-looking “rules of the road” and approaches to governance that keep pace with changing attitudes and technology.

Applications are open for the Google North America Public Policy Fellowship

Starting today, we’re accepting applications for the 2019 North America Google Policy Fellowship. Our fellowship gives undergraduate and graduate students a paid opportunity to spend 10 weeks diving head-first into Internet policy at leading nonprofits, think tanks and advocacy groups. In addition to opportunities in Washington, D.C. and California, we’ve expanded our program to include academic institutions and advocacy groups in New York and Utah, where students will have the chance to be at the forefront of debates on internet freedom and economic opportunity. We’re looking for students from all majors and degree programs who are passionate about technology and want to gain hands-on experience exploring important issues at the intersection of technology and policy.

The application period opens today for the North America region, and all applications must be received by 12:00 p.m. ET / 9:00 a.m. PT on Friday, February 15. This year's program will run from early June through early August, with regular programming throughout the summer. More specific information, including a list of this year’s hosts and locations, can be found on our site.

You can learn about the program, application process and host organizations on the Google Public Policy Fellowship website.

Principles for evolving technology policy in 2019

The past year has seen a range of public debates about the roles and responsibilities of technology companies. As 2019 begins, I’d like to share my thoughts on these important discussions and why Google supports smart regulation and other ways to address emerging issues.

We’ve always been and still are fundamentally optimistic about the power of innovative technology. We’re proud that Google’s products and services empower billions of people, drive economic growth and offer important tools for your everyday life. This takes many forms, whether it’s instant access to the world’s information, an infinite gallery of sortable photos, tools that let you share documents and calendars with friends, directions that help you avoid traffic jams, or whatever Google tool you find most helpful.

But this optimism doesn’t obscure the challenges we face—including those posed by misuse of new technologies. New tools inevitably affect not just the people and businesses who use them, but also cultures, economies and societies as a whole. We’ve come a long way from our days as a scrappy startup, and with billions of people using our services every day, we recognize the need to confront tough issues regarding technology's impacts.

The scrutiny of lawmakers and others often improves our products and the policies that govern them. It’s sometimes claimed that the internet is an unregulated “wild west,” but that's not the case. Many laws and regulations have contributed to the internet’s vitality: competition and consumer protection laws, advertising regulations, and copyright, to name just a few. Existing legal frameworks reflect trade-offs that help everyone reap the benefits of modern technologies, minimize social costs, and respect fundamental rights. As technology evolves, we need to stay attuned to how best to improve those rules.

In some cases, laws do need updates, as we laid out in our recent post on data protection and our proposal regarding law enforcement access to data. In other cases, collaboration among industry, government, and civil society may lead to complementary approaches, like joint industry efforts to fight online terrorist content, child sexual abuse material and copyright piracy. Shared concerns can also lead to ways to empower people with new tools and choices, like helping people control and move their data—that’s why we have been a leader since 2007 in developing data portability tools and last year helped launch the cross-company Data Transfer Project.

We don’t see smart regulation as a singular end state; it must develop and evolve. In an era (and a sector) of rapid change, one-size-fits-all solutions are unlikely to work out well. Instead, it's important to start with a focus on a specific problem and seek well-tailored and well-informed solutions, thinking through the benefits, the second-order impacts, and the potential for unintended side-effects.

Efforts to address illegal and harmful online content illustrate how tech companies can play a supportive role in this process:

  • First, to support constructive transparency, we launched our Transparency Report more than eight years ago, and we have continued to extend our transparency efforts over time, most recently with YouTube’s Community Guidelines enforcement report.

  • Second, to cultivate best practices for responsible content removals, we’ve supported initiatives like the Global Internet Forum to Counter Terrorism, where tech companies, governments and civil society have worked together to stop exploitation of online services.

  • Finally, we have participated in government-overseen systems of accountability. For instance, the EU’s Hate Speech Code of Conduct includes an audit process to monitor how platforms are meeting our commitments. And in the recent EU Code of Practice On Disinformation, we agreed to help researchers study this topic and to regular reporting and assessment of our next steps in this fight.

While the world is no longer at the start of the Information Revolution, the most important and exciting chapters are still to come. Google has pioneered a number of new artificial intelligence (AI) tools, and published a set of principles to guide our work and inform the larger public debate about the use of these remarkable technologies. We’ll have more to say about issues in AI governance in the coming weeks. Of course, every new breakthrough will raise its own set of new issues—and we look forward to hearing from others and sharing our own thoughts and ideas.

To stop terror content online, tech companies need to work together

Wherever we live, whatever our background, we’ve all seen the pain caused by senseless acts of terrorism. Just last week, the tragic murder of Christmas shoppers in Strasbourg was a sobering reminder that terrorist attacks can happen at any time.


What is clear from such attacks is that we all—government, industry, and civil society—have to remain vigilant and work together to address this continuing threat. While governments and civil society groups face a complex challenge in deterring terrorist violence, collaboration across the industry to responsibly address terrorist content online is delivering progress. And more tech companies must join the fight against terrorist content online.


In June 2017, senior representatives from Facebook, Microsoft, Twitter and YouTube came together to form the Global Internet Forum to Counter Terrorism (GIFCT), a coalition to share information on how best to curb the spread of terrorism online. I’ve had the responsibility of chairing this Forum for its initial year and a half, and I’m pleased to report that the Forum has helped to deliver significant results across a number of areas.


In September 2017 at the United Nations General Assembly, I joined the leaders of the United Kingdom, France, and Italy to discuss what more the tech industry could do to combat terrorist content. I was there on behalf of the GIFCT member companies to present our commitments to tackle terrorism online: We collectively pledged to develop and share technology to responsibly address terrorist content across the industry; to fund research and share good practices that help all companies stay abreast of the latest trends; and to elevate positive counter messages.  


We understand that we must responsibly lead the way in developing new technologies and standards for identifying and removing harmful terrorist content. As EU Commissioner Avramopoulos said: “The tools you are developing yourselves on your platforms are the most effective counter-measures we all have. That is why I am a strong supporter of your efforts under the Global Internet Forum to Counter Terrorism.” A key pillar of GIFCT’s work to drive progress is maintaining a shared database of digital fingerprints (hashes) of known terrorist content that lets any member of the coalition automatically find and remove identical terrorist content on their platforms. In 2018, we set—and achieved—an ambitious goal of depositing 100,000 new hashes in the database.
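Conceptually, the hash-sharing workflow resembles the minimal sketch below: members contribute fingerprints of known violating files, and each platform checks new uploads against the shared set. It uses an exact SHA-256 digest purely for illustration; the shared industry database relies on fingerprinting techniques designed to match re-encoded copies, and every value here is a placeholder.

```python
import hashlib

# Shared set of fingerprints contributed by member platforms (placeholder only).
shared_hashes = set()

def fingerprint(file_bytes: bytes) -> str:
    """Compute a fingerprint of the file; here, a simple exact SHA-256 digest."""
    return hashlib.sha256(file_bytes).hexdigest()

def contribute(file_bytes: bytes) -> None:
    """A member platform deposits the hash of confirmed violating content."""
    shared_hashes.add(fingerprint(file_bytes))

def matches_known_content(upload_bytes: bytes) -> bool:
    """Another member checks a new upload against the shared database."""
    return fingerprint(upload_bytes) in shared_hashes

contribute(b"example violating file contents")
print(matches_known_content(b"example violating file contents"))  # True
print(matches_known_content(b"some unrelated upload"))             # False
```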


Over the past year and a half, we’ve also engaged smaller businesses around the world to discuss their unique needs and to share ways to responsibly address terrorist content online. With the UN’s counterterrorism directorate and the UN-initiated TechAgainstTerrorism program, we’ve worked with more than 100 tech companies on four continents. We also convened forums in Europe, the Asia Pacific region, and Silicon Valley for companies, civil society groups, and governments to share experiences and get suggestions for further efforts.


To enhance our understanding of the latest trends in online terrorist propaganda, GIFCT has been working with a research network led by the Royal United Services Institute. We are speaking with its network of eight think tanks around the world about how terrorist networks operate online, the ethics of content moderation, and the interplay between online content and offline actions. That network will publish ten academic papers over the next six months to benefit everyone working on the problem of terrorist content online.


We’ve also successfully worked alongside governments and Internet Referral Units like Europol’s to get terrorist content taken down even more quickly. With civil society organizations, we’ve developed a tool that will help them mount counter-extremism campaigns across many online platforms at once. And together with Google.org, we launched a $5 million innovation fund to counter hate and extremism. The fund gives grants to nonprofits that are countering hate, both online and off. Our £1M pilot program in the UK received over 230 applications, and we awarded grants to 22 initiatives.


These are significant developments for the industry, but we know we have much more to do. The Forum will continue to expand our membership, vastly increase the size of our database of hashes, and do even more to help small companies and academic websites responsibly address terrorist content.


We can never be complacent against the continuing threat of terrorism. The work being done today by our coalition members has helped limit the use of our platforms by terrorist organizations, and we have extended an open invitation to others in the industry to join with us in this effort. Working together, we will continue to develop and implement solutions across the industry to protect our users, our societies, and a free and open internet.

A “First Step” towards criminal justice reform

For the first time in 22 years, Alice Johnson will be home for the holidays. Now a great-grandmother, Johnson was sentenced to life in prison without parole for a first-time, non-violent drug felony in 1997. She had spent over two decades behind bars when her story gained national attention—prompting President Trump to commute Johnson’s sentence and send her home earlier this year. Johnson’s case set off a long-overdue debate around the country about harsh sentencing laws and the need to reform our criminal justice system.

We’ve long supported efforts to end mass incarceration and help individuals like Johnson get a second chance. In 2017, we collaborated on a YouTube video in which Johnson urged the public from her prison cell to advocate for the release of those serving life sentences for nonviolent offenses. We later partnered with Mic.com to produce a digital op-ed, which caught the attention of Kim Kardashian West and inspired her to take up Johnson’s cause.

America’s thirty-year experiment with mandatory minimum sentences and sweeping criminalization has too often imposed unfair and disproportionate penalties on people across the country. As a former prosecutor, I have witnessed many individuals and families bear the consequences of these policies—policies that haven’t made us any safer, but have cost millions in taxpayer dollars and cast a pall over many lives.

This week, Congress—in a rare show of bipartisan consensus—passed the First Step Act, changing these policies and reforming our criminal justice system. The legislation lowers mandatory minimum sentences for drug felonies, reduces the disparity in sentencing guidelines between crack and powder cocaine offenses, and gives judges the discretion to shorten mandatory minimum sentences for low-level crimes. President Trump has already expressed support for the bill, and we look forward to him quickly signing it into law.

The Act marks an important step forward in restoring equal justice and due process, and promoting consistency and fairness in sentencing. Moreover, the Act includes measures that will bolster rehabilitation programs in prisons across the country to help incarcerated women and men successfully re-enter society, reduce recidivism rates, and make our communities safer.

Google.org has long backed these kinds of efforts to improve our criminal justice system. We’ve supported work by non-profits promoting reform and by police departments working to improve interactions with their communities. We have promoted the use of data to increase the transparency of our criminal justice system. And we have launched programs like our digital LoveLetters initiative, which supports children with imprisoned parents.  

While we’re encouraged by the passage of the First Step Act, there is still more work to be done at the federal, state, and local level to improve our criminal justice system. And we all have a part to play. As an example, our company policies seek to promote fair hiring by “banning the box” (requiring job applicants to disclose criminal history only once they get a chance to interview) and encouraging our suppliers to do the same. And we don’t accept ads for bail bonds, an industry with an unfortunate history of predatory practices.

We look forward to continuing to work with people from many backgrounds and across a spectrum of views, united in our belief that America’s legal and criminal justice systems can and should be an example to the world.