Tag Archives: Public Policy

Presenting search app and browser options to Android users in Europe

People have always been able to customize their Android devices to suit their preferences. That includes personalizing the design, installing any apps they want and choosing which services to use as defaults in apps like Google Chrome.

Following the changes we made to comply with the European Commission's ruling last year, we’ll start presenting new screens to Android users in Europe with an option to download search apps and browsers.  

These new screens will be displayed the first time a user opens Google Play after receiving an upcoming update. Two screens will surface: one for search apps and another for browsers, each containing a total of five apps, including any that are already installed. Apps that are not already installed on the device will be included based on their popularity and shown in a random order.

Android screen

An illustration of how the screens will look. The apps shown will vary by country.

Users can tap to install as many apps as they want. If an additional search app or browser is installed, the user will be shown an additional screen with instructions on how to set up the new app (e.g., placing app icons and widgets or setting defaults). If a user downloads a search app from the screen, the next time they open Chrome we'll also ask whether they want to change Chrome's default search engine.

Chrome

The prompt in Google Chrome to ask the user whether they want to change their default search engine.

The screens are rolling out over the next few weeks and will apply to both existing and new Android phones in Europe.

These changes are being made in response to feedback from the European Commission. We will be evolving the implementation over time.  

USMCA: A trade framework for the digital age

When the North American Free Trade Agreement (NAFTA) was signed in 1992, the global economy and the world looked a lot different than they do today. There was no such thing as a web search engine. Most people didn't know what email was (let alone use it). And to participate in international trade, a business needed big financial resources, offices and staff around the world, and lots of fax machines.

Thanks to the internet, that's all changed. Today, even the smallest of businesses can be global players and have customers in every corner of the world. Using the internet and online tools, the family-run Missouri Star Quilt Company has built an international business by sharing quilting how-to videos on YouTube, and the social impact brand Sword & Plough has sold thousands of bags and accessories globally that support veteran jobs.

The web has fundamentally changed not only how we trade, but also who trades. Small businesses using online tools are five times more likely to export than their offline counterparts. U.S. manufacturers are now the leading exporters of products and services online.

That’s why we need trade agreements that reflect the reality of today's economy. NAFTA references “telegrams” multiple times, but doesn’t even mention the internet. In contrast, the new U.S.-Mexico-Canada Trade Agreement (USMCA) includes a comprehensive set of digital trade provisions that keep the internet open, and protect the businesses and consumers that rely on it:

  • Trusted infrastructure: USMCA promotes an open and secure global technical infrastructure that supports a new kind of trade. For example, the agreement prohibits the U.S., Mexico and Canada from requiring that data be stored and replicated locally, reducing the cost of doing business in other countries and ensuring that data isn’t vulnerable to attack.

  • Innovation-enabling rules: USMCA promotes the open online framework that’s been key to the success of the U.S. internet economy. This framework allows for platform-based trade, and empowers internet platforms to combat harmful content online and fight piracy.

  • Protecting data: Consumers’ privacy should be protected no matter what country an individual or business is located in, and USMCA reflects this important principle. The agreement promotes strong privacy laws and cybersecurity standards to protect people’s data.

  • Access to information: USMCA limits government restrictions on information flow across borders, recognizing that wide availability of information leads to more trade and economic growth. The agreement also encourages governments to release non-sensitive data in an open and machine-readable format, so companies of all sizes have the opportunity to build commercial applications and services with public information.

  • Modernizing trade: Finally, USMCA prohibits our trading partners from imposing customs duties on things like e-books, videos, music, software, games and apps—ensuring consumers can continue to enjoy free or low-cost digital products.

USMCA will establish a strong framework to promote the new digital economy, and will unlock new sources of opportunity, creativity and job growth in North America. We look forward to seeing the agreement approved and implemented in a way that allows everyone to benefit from a free and open internet.

Supporting choice and competition in Europe

For nearly a decade, we’ve been in discussions with the European Commission about the way some of our products work. Throughout this process, we’ve always agreed on one thing—that healthy, thriving markets are in everyone’s interest.

A key characteristic of open and competitive markets—and of Google’s products—is constant change. Every year, we make thousands of changes to our products, spurred by feedback from our partners and our users. Over the last few years, we’ve also made changes—to Google Shopping; to our mobile apps licenses; and to AdSense for Search—in direct response to formal concerns raised by the European Commission.

Since then, we’ve been listening carefully to the feedback we’re getting, both from the European Commission, and from others. As a result, over the next few months, we’ll be making further updates to our products in Europe.

Since 2017, when we adapted Google Shopping to comply with the Commission’s order, we’ve made a number of changes to respond to feedback. Recently, we’ve started testing a new format that gives direct links to comparison shopping sites, alongside specific product offers from merchants.  

On Android phones, you’ve always been able to install any search engine or browser you want, irrespective of what came pre-installed on the phone when you bought it. In fact, a typical Android phone user installs around 50 additional apps on their phone.

After the Commission’s July 2018 decision, we changed the licensing model for the Google apps we build for use on Android phones, creating new, separate licenses for Google Play, the Google Chrome browser, and for Google Search. In doing so, we maintained the freedom for phone makers to install any alternative app alongside a Google app.

Now we’ll also do more to ensure that Android phone owners know about the wide choice of browsers and search engines available to download to their phones. This will involve asking users of existing and new Android devices in Europe which browser and search apps they would like to use.

We’ve always tried to give people the best and fastest answers—whether direct from Google, or from the wide range of specialist websites and app providers out there today. These latest changes demonstrate our continued commitment to operating in an open and principled way.

Source: Android


Doing our part to share open data responsibly

This past weekend marked Open Data Day, an annual celebration of making data freely available to everyone. Communities around the world organized events, and we’re taking a moment here at Google to share our own perspective on the importance of open data. More accessible data can meaningfully help people and organizations, and we’re doing our part by opening datasets, providing access to APIs and aggregated product data, and developing tools to make data more accessible and useful.

Responsibly opening datasets

Sharing datasets is increasingly important as more people adopt machine learning through open frameworks like TensorFlow. We’ve released over 50 open datasets for other developers and researchers to use. These include YouTube-8M, a corpus of annotated videos used externally for video understanding; the HDR+ Burst Photography dataset, which helps others experiment with the technology that powers Pixel features like Portrait Mode; and Open Images, along with the Open Images Extended dataset which increases photo diversity.

Just because data is open doesn’t mean it will be useful, however. First, a dataset needs to be cleaned so that any insights developed from it are based on well-structured and accurate examples. Cleaning a large dataset is no small feat; before opening up our own, we spend hundreds of hours standardizing data and validating quality. Second, a dataset should be shared in a machine-readable format that’s easy for others to use, such as JSON rather than PDF. Finally, consider whether the dataset is representative of the intended content. Even if data is usable and representative of some situations, it may not be appropriate for every application. For instance, if a dataset contains mostly North American animal images, it may help you classify a deer, but not a giraffe. Tools like Facets can help you analyze the makeup of a dataset and evaluate the best ways to put it to use. We’re also working to build more representative datasets through interfaces like the Crowdsource application. To guide others’ use of your own dataset, consider publishing a data card which denotes authorship, composition and suggested use cases (here’s an example from our Open Images Extended release).
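To make the cleaning and format points above concrete, here is a minimal sketch in Python. The records, field names, and validation rule are all hypothetical, not drawn from any Google dataset; the idea is simply that validation, a machine-readable output format, and a quick representativeness check happen before release.

```python
import json
from collections import Counter

# Hypothetical raw records: field names and values are illustrative only.
raw_records = [
    {"species": "deer", "region": "north_america", "image_id": "img_001"},
    {"species": "", "region": "north_america", "image_id": "img_002"},      # missing label
    {"species": "giraffe", "region": None, "image_id": "img_003"},          # missing region
    {"species": "moose", "region": "north_america", "image_id": "img_004"},
]

def is_valid(record):
    """Keep only records where every field is a non-empty string."""
    return all(isinstance(v, str) and v for v in record.values())

# Step 1: clean, so downstream insights rest on well-structured examples.
clean = [r for r in raw_records if is_valid(r)]

# Step 2: share in a machine-readable format (JSON), not an opaque one like PDF.
payload = json.dumps(clean, indent=2)

# Step 3: check representativeness, e.g. how balanced the data is by region.
region_counts = Counter(r["region"] for r in clean)
```

In this toy run, the two malformed records are dropped and the region tally makes the North-America skew (the "deer but not giraffe" problem) immediately visible.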

Making data findable and useful

It’s not enough to just make good data open, though—it also needs to be findable. Researchers, developers, journalists and other curious data-seekers often struggle to locate data scattered across the web’s thousands of repositories. Our Dataset Search tool helps people find data sources wherever they’re hosted, as long as the data is described in a way that search engines can locate. Since the tool launched a few months ago, we’ve seen the number of unique datasets on the platform double to 10 million, including contributions from the U.S. National Oceanic and Atmospheric Administration (NOAA), the National Institutes of Health (NIH), the Federal Reserve, the European Data Portal, the World Bank and government portals from every continent.
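The "described in a way that search engines can locate" part relies on schema.org markup. Here is a minimal sketch, built in Python, of what such a schema.org/Dataset description might look like; the dataset name, URLs, and license are placeholders, not a real listing:

```python
import json

# A minimal schema.org/Dataset description; all values are placeholders.
dataset_markup = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example Ocean Temperature Readings",        # hypothetical dataset
    "description": "Daily sea-surface temperature readings, 2010-2018.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/data/sst.csv",  # placeholder URL
    },
}

# Publishers embed this JSON-LD in a page (in a script tag of type
# "application/ld+json"), which is how crawlers discover the description.
json_ld = json.dumps(dataset_markup, indent=2)
```

A repository that publishes markup like this for each dataset becomes discoverable to Dataset Search and any other crawler that reads schema.org metadata.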

What makes data useful is how easily it can be analyzed. Though there’s more open data today, data scientists spend significant time analyzing it across multiple sources. To help solve that problem, we’ve created Data Commons. It’s a knowledge graph of data sources that lets users treat various datasets of interest—regardless of source and format—as if they are all in a single local database. Anyone can contribute datasets or build applications powered by the infrastructure. For people using the platform, that means less time engineering data and more time generating insights. We’re already seeing exciting use cases of Data Commons. In one UC Berkeley data science course taught by Josh Hug and Fernando Perez, students used Census, CDC and Bureau of Labor Statistics data to correlate obesity levels across U.S. cities with other health and economic factors. Typically, that analysis would take days or weeks; using Data Commons, students were able to build high-fidelity models in less than an hour. We hope to partner with other educators and researchers—if you’re interested, reach out to collaborate@datacommons.org.

Balancing trade-offs

There are trade-offs to opening up data, and we aim to balance various sensitivities with the potential benefits of sharing. One consideration is that broad data openness can facilitate uses that don’t align with our AI Principles. For instance, we recently made synthetic speech data available only to researchers participating in the 2019 ASVspoof Challenge, to ensure that the data can be used to develop tools to detect deepfakes, while limiting misuse.

Extreme data openness can also risk exposing user or proprietary information, causing privacy breaches or threatening the security of our platforms. We allow third-party developers to build on services like Maps, Gmail and more via APIs, so they can build their own products while user data is kept safe. We also publish aggregated product data like Search Trends to share information of public interest in a privacy-preserving way.

While there can be benefits to using sensitive data in controlled and principled ways, like predicting medical conditions or events, it’s critical that safeguards are in place so that training machine learning models doesn’t compromise individual privacy. Emerging research provides promising new avenues to learn from sensitive data. One is Federated Learning, a technique for training global ML models without data ever leaving a person’s device, which we’ve recently made available open-source with TensorFlow Federated. Another is Differential Privacy, which can offer strong guarantees that training data details aren’t inappropriately exposed in ML models. Additionally, researchers are experimenting more and more with using small training datasets and zero-shot learning, as we demonstrated in our recent prostate cancer detection research and work on Google Translate.
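As a rough illustration of the differential privacy idea, here is a toy Laplace-mechanism sketch in Python. The epsilon value and the counting query are illustrative assumptions, and real systems use vetted libraries rather than hand-rolled noise; the sketch only shows why calibrated noise can hide any one individual's contribution:

```python
import math
import random

def dp_count(true_count, epsilon, seed=None):
    """Release a count with Laplace noise of scale 1/epsilon.

    A single person changes a counting query by at most 1 (sensitivity 1),
    so Laplace noise with scale 1/epsilon yields an epsilon-DP release.
    """
    rng = random.Random(seed)
    # Sample Laplace(0, 1/epsilon) by inverse transform from a uniform.
    u = rng.random() - 0.5
    scale = 1.0 / epsilon
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Larger epsilon means less noise: weaker privacy, higher accuracy.
noisy = dp_count(true_count=100, epsilon=0.5, seed=0)
```

The released value is close to the true count, but no observer can tell from it whether any particular individual was in the data, which is the guarantee that makes learning from sensitive data defensible.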

We hope that our efforts will help people access and learn from clean, useful, relevant and privacy-preserving open data from Google to solve the problems that matter to them. We also encourage other organizations to consider how they can contribute—whether by opening their own datasets, facilitating usability by cleaning them before release, using schema.org metadata standards to increase findability, enhancing transparency through data cards or considering trade-offs like user privacy and misuse. To everyone who has come together over the past week to celebrate open data: we look forward to seeing what you build.

To help fight the opioid crisis, a new tool from Maps and Search

In 2017, the Department of Health and Human Services (HHS) declared the opioid crisis a public health emergency, with over 130 Americans dying every day from opioid-related drug overdoses.  Last month, we saw that search queries for “medication disposal near me” reached an all-time high on Google.

opioids_data

Fifty-three percent of prescription drug abuse starts with drugs obtained from family or friends, so we’re working alongside government agencies and nonprofit organizations to help people safely remove excess or unused opioids from their medicine cabinets. Last year, we partnered with the U.S. Drug Enforcement Administration (DEA) for National Prescription Take Back Day by developing a Google Maps API locator tool to help people dispose of their prescription drugs at temporary locations twice a year. With the help of this tool, the DEA and its local partners collected a record 1.85 million pounds of unused prescription drugs in 2018.

Today, we’re making it easier for Americans to quickly find disposal locations on Google Maps and Search all year round. A search for queries like “drug drop off near me” or “medication disposal near me” will display permanent disposal locations at your local pharmacy, hospital or government building so you can quickly and safely discard your unneeded medication.



opioid_gif

This pilot has been made possible thanks to the hard work of many federal agencies, states and pharmacies. Companies like Walgreens and CVS Health, along with state governments in Alabama, Arizona, Colorado, Iowa, Massachusetts, Michigan and Pennsylvania have been instrumental in this project, contributing data with extensive lists of public and private disposal locations. The DEA is already working with us to provide additional location data to expand the pilot.

For this pilot, we also looked to public health authorities—like HHS—for ideas on how technology can help communities respond to the opioid crisis. In fact, combining disposal location data from different sources was inspired by a winning entry at the HHS’s Opioid Code-A-Thon held a year ago.

We’ll be working to expand coverage and add more locations in the coming months. To learn more about how your state or business can bring more disposal locations to Google Maps and Search, contact RXdisposal-data@google.com today.


Source: Google LatLong


Smart regulation for combating illegal content

We've written before about how we're working to support smart regulation, and one area of increasing attention is regulation to combat illegal content.

As online platforms have become increasingly popular, there’s been a rich debate about the best legal framework for combating illegal content in a way that respects other social values, like free expression, diversity and innovation. Today, various laws provide detailed regulations, including Section 230 of the Communications Decency Act in the United States and the European Union’s e-Commerce Directive.

Google invests millions of dollars in technology and people to combat illegal content in an effective and fair way. It’s a complex task, and—just as in offline contexts—it’s not a problem that can be totally solved. Rather, it’s a problem that must be managed, and we are constantly refining our practices.

In addressing illegal content, we’re also conscious of the importance of protecting legal speech. Context often matters when determining whether content is illegal. Consider a video of military conflict. In one context, the footage might be documentary evidence of atrocities in areas that journalists can reach only with great difficulty and danger. In another, the footage could be promotional material for an illegal organization. Even a highly trained reviewer could have a hard time telling the difference, and we need to get those decisions right across many different languages and cultures, and across the vast scale of audio, video, text and images uploaded online. We make it easy to submit takedown notices; at the same time, we also create checks and balances against misuse of removal processes. And we look to the work of international agencies and principles from leading groups like the Global Network Initiative.

A smart regulatory framework is essential to enabling an appropriate approach to illegal content. We wanted to share four key principles that inform our practices and that (we would suggest) make for an effective regulatory framework:

  • Shared Responsibility: Tackling illegal content is a societal challenge—in which companies, governments, civil society, and users all have a role to play. Whether a company is alleging copyright infringement, an individual is claiming defamation, or a government is seeking removal of terrorist content, it’s essential to provide clear notice about the specific piece of content to an online platform, and then platforms have a responsibility to take appropriate action on the specific content. In some cases, content may not be clearly illegal, either because the facts are uncertain or because the legal outcome depends on a difficult balancing act; in turn, courts have an essential role to play in fact-finding and reaching legal conclusions on which platforms can rely.

  • Rule of law and creating legal clarity: It’s important to clearly define what platforms can do to fulfill their legal responsibilities, including removal obligations. An online platform that takes other voluntary steps to address illegal content should not be penalized. (This is sometimes called “Good Samaritan” protection.)

  • Flexibility to accommodate new technology: While laws should accommodate relevant differences between platforms, given the fast-evolving nature of the sector, laws should be written in ways that address the underlying issue rather than focusing on existing technologies or mandating specific technological fixes.

  • Fairness and transparency: Laws should support companies’ ability to publish transparency reports about content removals, and provide people with notice and an ability to appeal removal of content. They should also recognize that fairness is a flexible and context-dependent notion—for example, improperly blocking newsworthy content or political expression could cause more harm than mistakenly blocking other types of content. 

With these principles in mind, we support refinement of notice-and-takedown regimes, but we have significant concerns about laws that would mandate proactively monitoring or filtering content, impose overly rigid timelines for content removal, or otherwise impose harsh penalties even on those acting in good faith. These types of laws create a risk that platforms won’t take a balanced approach to content removals, but instead take a “better safe than sorry” approach—blocking content at upload or implementing a “take down first, ask questions later (or never)” approach. We regularly receive overly broad removal requests, and analyses of cease-and-desist and takedown letters have found that many seek to remove potentially legitimate or protected speech.

There’s ample room for debate and nuance on these topics—we discuss them every day—and we’ll continue to seek ongoing collaboration among governments, industry, and civil society on this front. Over time, an ecosystem of tools and institutions—like the Global Internet Forum to Counter Terrorism, and the Internet Watch Foundation, which has taken down child sexual abuse material for more than two decades—has evolved to address the issue. Continuing to develop initiatives like these and other multistakeholder efforts remains critical, and we look forward to progressing those discussions.

Investing across the U.S. in 2019

One year ago this week, I was in Montgomery County, Tennessee to break ground for a new data center in Clarksville. It was clear from the excitement at the event that the jobs and economic investment meant a great deal to the community. I’ve seen that same optimism in communities around the country that are helping to power our digital economy. And I’m proud to say that our U.S. footprint is growing rapidly: In the last year, we’ve hired more than 10,000 people in the U.S. and made over $9 billion in investments. Our expansion across the U.S. has been crucial to finding great new talent, improving the services that people use every day, and investing in our business.

Today we’re announcing over $13 billion in investments throughout 2019 in data centers and offices across the U.S., with major expansions in 14 states. These new investments will give us the capacity to hire tens of thousands of employees, and enable the creation of more than 10,000 new construction jobs in Nebraska, Nevada, Ohio, Texas, Oklahoma, South Carolina and Virginia. With this new investment, Google will now have a home in 24 total states, including data centers in 13 communities. 2019 marks the second year in a row we’ll be growing faster outside of the Bay Area than in it.

This growth will allow us to invest in the communities where we operate, while we improve the products and services that help billions of people and businesses globally. Our new data center investments, in particular, will enhance our ability to provide the fastest and most reliable services for all our users and customers. As part of our commitment to 100 percent renewable energy purchasing, we’re also making significant renewable energy investments in the U.S. as we grow. Our data centers make a significant economic contribution to local communities, as do the associated $5 billion in energy investments that our energy purchasing supports.

Here’s a closer look at the investments we’re making state by state:

Map gif

Midwest

We’re continuing to expand our presence in Chicago and are developing new data centers in Ohio and Nebraska. The Wisconsin office is set to move into a larger space in the next few months—and last November we opened a Detroit office in Little Caesars Arena, where you can see into the space where the Detroit Red Wings play.

detroit office opening

Googlers and partners at our office opening in Detroit last November

South

With new office and data center development, our workforce in Virginia will double. And with a new office in Georgia, our workforce will double there as well. Data centers in Oklahoma and South Carolina will expand, and we’re developing a new office and data center in Texas.

ribbon cutting

Opening one of our data centers last year.

Northeast

Massachusetts has one of our largest sales and engineering communities outside of the Bay Area, and we’re building new office space there. In New York, the Google Hudson Square campus—a major product, engineering and business hub—will come to life over the next couple of years.

West

We’ll open our first data center in Nevada and will expand our Washington office, a key product and engineering hub. In addition to investments in the Bay Area, our investments in California continue with the redevelopment of the Westside Pavilion and the Spruce Goose Hangar in the Los Angeles area.

googlers

Googlers at work. Our investments this year will go toward expansions in data centers and offices across the U.S.

All of this growth is only possible with our local partners. Thank you for welcoming Google into your communities—we look forward to working together to grow our economy and support jobs in the U.S.


Oracle v. Google and the future of software development

Today we asked the Supreme Court of the United States to review our long-running copyright dispute with Oracle over the use of software interfaces. The outcome will have a far-reaching impact on innovation across the computer industry.

Standardized software interfaces have driven innovation in software development. They let computer programs interact with each other and let developers easily build technologies for different platforms. Unless the Supreme Court steps in here, the industry will be hamstrung by court decisions finding that the use of software interfaces in creating new programs is not allowed under copyright law.

With smartphone apps now common, we sometimes forget how hard it once was for developers to build apps across a wide range of different platforms. Our 2008 release of the open-source Android platform changed the game. It helped developers overcome the challenges of smaller processors, limited memory, and short battery life, while providing innovative features and functionality for smartphone development. The result was a win for everyone: Developers could build new apps, manufacturers could build great new devices, and the resulting competition gave consumers both lower prices and an extraordinary range of choice.

We built Android following the computer industry’s long-accepted practice of re-using software interfaces, which provide sets of commands that make it easy to implement common functionality—in the same way that computer keyboard short-cuts like pressing “control” and “p” make it easy to print. Android created a transformative new platform, while letting millions of Java programmers use their existing skills to create new applications. And the creators of Java backed the release of Android, saying that it had “strapped another set of rockets to the [Java] community’s momentum.”

But after it acquired Java through its purchase of Sun Microsystems in 2010, Oracle sued us for using these software interfaces, trying to profit by changing the rules of software development after the fact. Oracle’s lawsuit claims the right to control software interfaces—the building blocks of software development—and as a result, the ability to lock in a community of developers who have invested in learning the free and open Java language.

A court initially ruled that the software interfaces in this case are not copyrightable, but that decision was overruled. A unanimous jury then held that our use of the interfaces was a legal fair use, but that decision was likewise overruled. Unless the Supreme Court corrects these twin reversals, this case will end developers’ traditional ability to freely use existing software interfaces to build new generations of computer programs for consumers. Just like we all learn to use computer keyboard shortcuts, developers have learned to use the many standard interfaces associated with different programming languages. Letting these reversals stand would effectively lock developers into the platform of a single copyright holder—akin to saying that keyboard shortcuts can work with only one type of computer.

The U.S. Constitution authorized copyrights to “promote the progress of science and useful arts,” not to impede creativity or promote lock-in of software platforms. Leading voices from business, technology, academia, and the nonprofit sector agree and have spoken out about the potentially devastating impacts of this case.

We support software developers’ ability to develop the applications we all have come to use every day, and we hope that the Supreme Court will give this case the serious and careful consideration it deserves.  

Engaging policy stakeholders on issues in AI governance

AI has become part of the fabric of modern life, with applications in sectors ranging from agriculture to retail to health to education. We believe that AI, used appropriately, can deliver great benefit for economies and society, and help people to make decisions that are fairer, safer, and more inclusive and informed.

As with other technologies, there are new policy questions that arise with the use of AI, and governments and civil society groups worldwide have a key role to play in the AI governance discussion. In a white paper we’re publishing today, we outline five areas where government can work with civil society and AI practitioners to provide important guidance on responsible AI development and use: explainability standards, fairness appraisal, safety considerations, human-AI collaboration and liability frameworks.

There are many trade-offs within each of these areas and the details are paramount for responsible implementation. For example, how should explainability and the need to hold an algorithm accountable be balanced with safeguarding the security of the system against hackers, protecting proprietary information and the desire to make AI experiences user-friendly? How should benchmarks and testing to ensure the safety of an AI system be balanced with the potential safety costs of not using the system?

No one company, country, or community has all the answers; on the contrary, it’s crucial for policy stakeholders worldwide to engage in these conversations. In the majority of cases, general legal frameworks and existing sector-specific processes will continue to provide an appropriate governance structure; for example, medical device regulations should continue to govern medical devices, regardless of whether AI is used in the device or not. However, in cases where additional oversight is needed, we hope this paper can help to promote pragmatic and forward-looking “rules of the road” and approaches to governance that keep pace with changing attitudes and technology.

Applications are open for the Google North America Public Policy Fellowship

Starting today, we’re accepting applications for the 2019 North America Google Policy Fellowship. Our fellowship gives undergraduate and graduate students a paid opportunity to spend 10 weeks diving headfirst into internet policy at leading nonprofits, think tanks and advocacy groups. In addition to opportunities in Washington, D.C. and California, we’ve expanded our program to include academic institutions and advocacy groups in New York and Utah, where students will have the chance to be at the forefront of debates on internet freedom and economic opportunity. We’re looking for students from all majors and degree programs who are passionate about technology and want to gain hands-on experience exploring important intersections of tech policy.

The application period opens today for the North America region, and all applications must be received by 12:00 p.m. ET / 9:00 a.m. PT on Friday, February 15. This year's program will run from early June through early August, with regular programming throughout the summer. More specific information, including a list of this year’s hosts and locations, can be found on our site.

You can learn about the program, application process and host organizations on the Google Public Policy Fellowship website.