Author Archives: Danny Sullivan

How we update Search to improve your results

Our computers, smartphones and apps are regularly updated to help make them better. The same thing happens with Google Search. In fact, Google Search is updated thousands of times a year to improve the experience and the quality of results. Here’s more on how that process works.


Why updates are important

Google Search receives billions of queries every day from countries around the world in 150 languages. Our automated systems identify the most relevant and reliable information from hundreds of billions of pages in our index to help people find what they’re looking for. Delivering great results at this type of scale and complexity requires many different systems, and we’re always looking for ways to improve these systems so we can display the most useful results possible.

Thanks to ongoing improvements, our evaluation processes show we’ve decreased the number of irrelevant results appearing on a search results page by over 40% over the past five years. Google sends billions of visits to websites each day, and by providing highly relevant results, we've been able to continue growing the traffic we send to sites every year since our founding.

We also send visitors to a wide range of sites — more than 100 million every day — so we’re helping sites from across the web and around the world get discovered. As new sites emerge and the web changes, continued updates are key to ensuring we’re supporting a wide range of publishers, creators and businesses, while providing searchers with the best information available.

How updates make Search better

Here are a few examples of what these updates look like:

Last month we launched an improvement to help people find better product reviews through Search. We have an automated system that tries to determine if a review seems to go beyond just sharing basic information about a product and instead demonstrates in-depth research or expertise. This helps people find high quality information from the content producers creating it.

Another example is an update we made several years ago that tries to determine if content is mobile-friendly. In situations where there are many possible matches with relatively equal relevancy, giving a preference to those that render better on mobile devices is more useful for users searching on those devices.

In any given week, we might implement dozens of updates that are meant to improve Search in incremental ways. These are improvements that have been fully tested and evaluated through our rating process. People using Search generally don’t notice these updates, but Google gets a little better with each one. Collectively, they add up to help Search continue providing great results.

Because there are so many incremental updates, it’s not useful for us to share details about all of them. However, we try to do so when we feel there is actionable information that site owners, content producers or others might consider applying, as was the case with both of the updates mentioned above.

Core updates involve broad improvements to Search

Periodically, we make more substantial improvements to our overall ranking processes. We refer to these as core updates, and they can produce some noticeable changes — though typically these are more often noticed by people actively running websites or performing search engine optimization (SEO) than ordinary users.

This is why we give notice when these kinds of updates are coming. We want site owners to understand these changes aren't because of something they've done but rather because of how our systems have been improved to better assess content overall and better address user expectations. We also want to remind them that nothing in a core update (or any update) is specific to a particular site, but is rather about improving Search overall. As we’ve said previously in our guidance about this:


There's nothing wrong with pages that may perform less well in a core update. They haven't violated our webmaster guidelines nor been subjected to manual or algorithmic action, as can happen to pages that do violate those guidelines. In fact, there's nothing in a core update that targets specific pages or sites. Instead, the changes are about improving how our systems assess content overall. These changes may cause some pages that were previously under-rewarded to do better.

One way to think of how a core update operates is to imagine that in 2015 you made a list of the top 100 movies. A few years later in 2019, you refresh the list. It's going to naturally change. Some new and wonderful movies that never existed before will now be candidates for inclusion. You might also reassess some films and realize they deserved a higher place on the list than they had before.

The list will change, and films previously higher on the list that move down aren't bad. There are simply more deserving films that are coming before them.

Core updates are designed to increase the overall relevancy of our search results. In terms of traffic we send, it’s largely a net exchange. Some content might do less well, but other content gains. In the long term, improving our systems in this way is how we’ve continued to improve Search and send more traffic to sites across the web every year.


How we help businesses and creators with guidance and tools 

While there’s nothing specific sites need to implement for core updates, we provide guidance and actionable advice that may help them be successful with Search overall. Following this guidance isn't a guarantee a site will rank well for every query it wants to. That’s not something Google or any other search engine could guarantee.

Any particular query can have thousands of pages or other content that's all relevant in some way. It’s impossible to show all this content at the top of our results. And that wouldn’t be useful for searchers, who come to Search precisely because they expect us to show the most helpful information first.

By following our core update guidance, businesses, site owners and content creators can help us better understand when they really have the most relevant and useful content to display. We also recommend sites follow our quality guidelines, implement our optimization tips and make use of the free Search Console tool that anyone can use.

These kinds of updates, along with the tools and advice we offer, are how we make sure we keep connecting searchers to content creators, businesses and others who have the helpful information they’re looking for.

Source: Search


When (and why) we remove content from Google search results

Access to information is at the core of Google’s mission, and every day we work to make information from the web available to everyone. We design our systems to return the most relevant and reliable information possible, but our search results include pages from the open web. Depending on what you search for, the results can include content that people might find objectionable or offensive.

While we’re committed to providing open access to information, we also have a strong commitment and responsibility to comply with the law and protect our users. When content is against local law, we remove it from being accessible in Google Search. 

Overall, our approach to information quality and webpage removals aims to strike a balance between ensuring that people have access to the information they need, while also doing our best to protect against harmful information online. Here’s an overview of how we do that.

Complying with the law

We hold ourselves to a high standard when it comes to our legal requirements to remove pages from Google search results. For many issues, such as privacy or defamation, our legal obligations may vary country by country, as different jurisdictions have come to different conclusions about how to deal with these complex topics.


We encourage people and authorities to alert us to content they believe violates the law. In fact, in most cases, this is necessary, because determining whether content is illegal is not always a determination that Google is equipped to make, especially without notice from those who are affected. 

For example, in the case of copyrighted material, we can’t automatically confirm whether a given page hosting that particular content has a license to do so, so we need rightsholders to tell us. By contrast, the mere presence of child sex abuse material (CSAM) on a page is illegal in most jurisdictions, so we develop ways to automatically identify that content and prevent it from showing in our results.

In the case of all legal removals, we share information about government requests for removal in our Transparency Report. Where possible, we inform website owners about requests for removal via Search Console.

Voluntary removal policies

Beyond removing content as required by law, we also have a set of policies that go beyond what’s legally required, mostly focused on highly personal content appearing on the open web. Examples of this content include financial or medical information, government-issued IDs, and intimate imagery published without consent.


These types of content are information that people generally intend to keep private and can cause serious harm, like identity theft, so we give people the ability to request removal from our search results.


We also look for new ways to carefully expand these policies to allow further protections for people online. For example, we allow people to request the removal of pages about themselves on sites with exploitative removals policies, as well as pages that include contact information alongside personal threats, a form of “doxxing.” In these cases, while people may want to access these sites to find potentially useful information or understand their policies and practices, the pages themselves provide little value or public interest, and might lead to reputational or even physical harm that we aim to help protect against.

Solving issues at scale

It might seem intuitive to solve content problems by removing more content — either page by page, or by limiting access to entire sites. However, in addition to being in tension with our mission, this approach also doesn’t effectively scale to the size of the open web, with trillions of pages and more being added each minute. Building scalable, automated approaches allows us to not only solve these challenges more effectively, but also avoid unnecessarily limiting access to legal content online.


Our most effective protection is to design systems that rank high-quality, reliable information at the top of our results. And while we do remove pages in compliance with our policies and legal obligations, we also use insights from those removals to improve our systems overall.


For example, when we receive a high volume of valid copyright removal requests from a given site, we are able to use that as a quality signal and demote the site in our results. We’ve developed similar approaches for sites whose pages we’ve removed under our voluntary policies. This allows us to not only help the people requesting the removals, but also scalably fight against the issue in other cases.
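As an illustration only (this is not Google's actual implementation), here is a minimal Python sketch of how a site-level demotion signal could be derived from the volume of valid removal requests; the site names, threshold and decay formula are all hypothetical.

```python
from collections import Counter

# Hypothetical sketch: turn valid removal notices into a per-site demotion factor.
# Site names, the threshold and the decay formula are invented for illustration.
valid_removal_notices = ["example-a.test", "example-a.test", "example-a.test", "example-b.test"]

notices_per_site = Counter(valid_removal_notices)

def demotion_factor(site: str, threshold: int = 2) -> float:
    """Return a multiplier below 1.0 for sites with many valid removal notices."""
    count = notices_per_site[site]
    if count < threshold:
        return 1.0                        # too few notices: no demotion
    return 1.0 / (count - threshold + 2)  # more notices: stronger demotion

print(demotion_factor("example-a.test"))  # ~0.33 (demoted)
print(demotion_factor("example-b.test"))  # 1.0 (untouched)
```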

An evolving web

Ultimately, it’s important to remember that even when we remove content from Google Search, it may still exist on the web, and only a website owner can remove content entirely. But we do fight against the harmful effects of sensitive personal information appearing in our results, and have strict practices to ensure we’re complying with the law. We’re always evolving our approach to protect against bad actors on the web and ensure Google continues to deliver high-quality, reliable information for everyone. 


Beyond how we handle removals of web pages, if you’d like to learn more about how we approach our policies for search features, visit this post. And if you’re still looking for more details about Search, check out more past articles in our How Search Works series.

Source: Search


Google Search sends more traffic to the open web every year

This week, we saw some discussion about a claim that the majority of searches on Google end without someone clicking off to a website — or what some have called “zero-click” searches. As practitioners across the search industry have noted, this claim relies on flawed methodology that misunderstands how people use Search. In reality, Google Search sends billions of clicks to websites every day, and we’ve sent more traffic to the open web every year since Google was first created. And beyond just traffic, we also connect people with businesses in a wide variety of ways through Search, such as enabling a phone call to a business. 


To set the record straight, we wanted to provide important context about this misleading claim.

How people use Search 

People use Search to find a wide range of information, and billions of times per day, Google Search sends someone to a website. But not every query results in a click to a website, and there are a lot of very good reasons why:


People reformulate their queries

People don’t always know how to word their queries when they begin searching. They might start with a broad search, like “sneakers” and, after reviewing results, realize that they actually wanted to find “black sneakers.” In this case, these searches would be considered a “zero-click” — because the search didn’t result immediately in a click to a website. In the case of shopping for sneakers, it may take a few “zero-click” searches to get there, but if someone ultimately ends up on a retailer site and makes a purchase, Google has delivered a qualified visitor to that site, less likely to bounce back dissatisfied.


Because this happens so frequently, we offer many features (like “related searches” links) to help people formulate their searches and get to the most helpful result, which is often on a website.


People look for quick facts

People look for quick, factual information, like weather forecasts, sports scores, currency conversions, the time in different locations and more. As many search engines do, we provide this information directly on the results page, drawing from licensing agreements or tools we’ve developed. These results are helpful for users, and part of our ongoing work to make Google Search better every day.

In 2020, for example, we showed factual information about important topics like COVID and the U.S. elections, which generated some of the most interest we’ve ever seen on Search. Our elections results feature was seen billions of times, delivering high-quality information in real time as people awaited the outcome. We also provided factual information about COVID symptoms in partnership with the WHO and local health authorities, making critical information readily accessible and upholding our responsibility to fight against potential misinformation online. 


People connect with a business directly

When it comes to local businesses, we provide many ways for consumers to connect directly with businesses through Google Search, many of which don’t require a traditional click. As an example, people might search for business hours, then drive to the store after confirming a location is open. Or they find restaurants on Google and call for information or to place an order, using phone numbers we list. On average, local results in Search drive more than 4 billion connections for businesses every month. This includes more than 2 billion visits to websites as well as connections like phone calls, directions, ordering food and making reservations.


We also help the many local businesses that don’t have their own website. Through Google My Business, businesses can create and manage their own page on Google, and get found online. Each month, Google Search connects people with more than 120 million businesses that don’t have a website. 


People navigate directly to apps

Some searches take people directly to apps, rather than to websites. For example, if you search for a TV show, you'll see links to various streaming providers like Netflix or Hulu. If you have that streaming app on your phone, these links will take you directly into the app. The same is true for many other apps, such as Instagram, Amazon, Spotify and more.

More opportunity for websites and businesses

We send billions of visits to websites every day, and the traffic we’ve sent to the open web has increased every year since Google Search was first created. 


Over the years, we’ve worked to constantly improve Google Search by designing and rolling out helpful features to help people quickly find what they’re looking for, including maps, videos, links to products and services you can buy directly, flight and hotel options, and local business information like hours of operation and delivery services. In doing so, we’ve dramatically grown the opportunity for websites to reach people. In fact, our search results page, which used to show 10 blue links, now shows an average of 26 links to websites on a single search results page on mobile. 

Building for the future of the web

We care deeply about the open web and have continually improved Google Search over the years, helping businesses, publishers and creators thrive. Some would argue that we should revert to showing only 10 blue website links. While we do show website links for many queries today when they are the most helpful response, we also want to build new features that organize information in more helpful ways than just a list of links. And we’ve seen that as we’ve introduced more of these features over the last two decades, the traffic we’re driving to the web has also grown — showing that this is helpful for both consumers and businesses.

Source: Search


How location helps provide more relevant search results

There are many factors that play a role in providing helpful results when you search for something on Google. These factors help us rank or order results and can include the words of your query, the relevance or usability of web pages in our index, and the expertise of sources.

Location is another important factor to provide relevant Search results. It helps you find the nearest coffee shop when you need a pick-me-up, traffic predictions along your route, and even important emergency information for your area. In this post, we’ll share details about the vital role that location plays in generating great search results.

Finding businesses and services in your community

It’s a Friday night. You’re hungry and want some pizza delivered. If Google couldn’t consider location in search ranking, our results might display random pizza restaurants that are nowhere near you. With location information, we can better ensure you’re getting webpages and business listings about pizza places that are local and relevant to you.

The same is true for many types of businesses and services with physical locations, such as banks, post offices, restaurants or stores. Consider two people who search for zoos—one in Omaha, Nebraska and the other in Mobile, Alabama. Location information helps both get the right local information that they need:

Searches for "zoos" in Omaha, Nebraska and Mobile, Alabama

Same query, different local contexts

Location can matter even when you’re searching for something that doesn’t necessarily have a physical location. For example, a search for “air quality” in San Diego, California versus Tulsa, Oklahoma might lead you to pages with local information relevant to each area.

Searches for “air quality” in San Diego, California and Tulsa, Oklahoma

Similarly, certain information in Search can be more useful if it’s specific to your city or neighborhood. If you were to search Google for “parking information,” you might see information about municipal codes and parking enforcement for your local area that would differ from what someone else might see in another city. 

Local information in search results can also be helpful in an emergency. If you search for “hurricane,” our Crisis Response features can show you local shelter information if there’s a hurricane close by, rather than just generic information about what a hurricane is.

Of course, just because some searches have local results, it’s not the case that everyone gets completely different results just because they are in different cities (or even different countries). If a search topic has no local aspect to it, there won’t be local results shown. If there is, we’ll show a mix of local results that are relevant to particular places along with non-local results that are generally useful.
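To make the idea concrete, here is a small, hypothetical Python sketch of blending local and non-local results: when a query looks local, pages with a nearby location get a modest distance-based boost. The intent terms, scores and coordinates are invented for illustration and are not how Google's systems actually work.

```python
import math

# Hypothetical sketch: blend local and non-local results when a query has local intent.
LOCAL_INTENT_TERMS = {"pizza", "parking", "zoo", "air quality"}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def score(result, query, user_location):
    base = result["relevance"]
    if any(term in query for term in LOCAL_INTENT_TERMS) and result.get("location"):
        distance = haversine_km(user_location, result["location"])
        base += 1.0 / (1.0 + distance)   # nearer pages get a small boost
    return base

results = [
    {"title": "Pizza place downtown", "relevance": 0.7, "location": (41.26, -95.94)},
    {"title": "History of pizza",     "relevance": 0.8, "location": None},
]
ranked = sorted(results, key=lambda r: score(r, "pizza near me", (41.25, -95.93)), reverse=True)
print([r["title"] for r in ranked])
```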

How location works at Google

You might be wondering how location works at Google. Google determines location from a few different sources, and then uses this information to deliver more relevant experiences when it will be more helpful for people. Learn more about the different ways we may understand location, as well as how to manage your data in a way that works best for you, on our help center page about location and Search.

Location is a critical part of how Google is able to deliver the most relevant and helpful search results possible—whether you need emergency information in a snap, or just some late-night pizza delivered. For more under-the-hood information, check out our How Search Works series. 

Source: Search


How Google autocomplete predictions are generated

You come to Google with an idea of what you’d like to search for. As soon as you start typing, predictions appear in the search box to help you finish what you’re typing. These time-saving predictions are from a feature called Autocomplete, which we covered previously in this How Search Works series.


In this post, we’ll explore how Autocomplete’s predictions are automatically generated based on real searches and how this feature helps you finish typing the query you already had in mind. We’ll also look at why not all predictions are helpful, and what we do in those cases.


Where predictions come from

Autocomplete predictions reflect searches that have been done on Google. To determine what predictions to show, our systems begin by looking at common and trending queries that match what someone starts to enter into the search box. For instance, if you were to type in “best star trek…”, we’d look for the common completions that would follow, such as “best star trek series” or “best star trek episodes.”


Autocomplete predictions for “best star trek”

That’s how predictions work at the most basic level. However, there’s much more involved. We don’t just show the most common predictions overall. We also consider things like the language of the searcher or where they are searching from, because these make predictions far more relevant. 
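As a rough illustration of the basic idea (and nothing like the real system), the sketch below ranks candidate completions for a prefix by how often they appear in a hypothetical log of past searches, filtered by locale. The query log and locales are made up.

```python
from collections import Counter

# Illustrative sketch only: rank candidate completions for a prefix by how often
# they appear in a hypothetical log of past searches, filtered by locale.
past_searches = [
    ("best star trek series", "en-US"),
    ("best star trek episodes", "en-US"),
    ("best star trek series", "en-US"),
    ("best star trek movies", "en-GB"),
]

def predictions(prefix: str, locale: str, k: int = 3):
    counts = Counter(q for q, loc in past_searches
                     if loc == locale and q.startswith(prefix))
    return [query for query, _ in counts.most_common(k)]

print(predictions("best star trek", "en-US"))
# ['best star trek series', 'best star trek episodes']
```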


Below, you can see predictions for those searching for “driving test” in the U.S. state of California versus the Canadian province of Ontario. Predictions differ in naming relevant locations or even spelling “centre” correctly for Canadians rather than using the American spelling of “center.”


Autocomplete predictions for “driving test” in California and Ontario

To provide better predictions for long queries, our systems may automatically shift from predicting an entire search to portions of a search. For example, we might not see a lot of queries for “the name of the thing at the front” of some particular object. But we do see a lot of queries for “the front of a ship” or “the front of a boat” or “the front of a car.” That’s why we’re able to offer these predictions toward the end of what someone is typing.


Autocomplete predictions completing the end of a long query

We also take freshness into account when displaying predictions. If our automated systems detect there’s rising interest in a topic, they might show a trending prediction even if it isn’t typically the most common of all related predictions that we know about. For example, searches for a basketball team are probably more common than individual games. However, if that team just won a big face-off against a rival, timely game-related predictions may be more useful for those seeking information that’s relevant in that moment.


Predictions also will vary, of course, depending on the specific topic that someone is searching for. People, places and things all have different attributes that people are interested in. For example, someone searching for “trip to New York” might see a prediction of “trip to New York for Christmas,” as that’s a popular time to visit that city. In contrast, “trip to San Francisco” may show a prediction of “trip to San Francisco and Yosemite.” Even if two topics seem to be similar or fall into similar categories, you won’t always see the same predictions if you try to compare them.  Predictions will reflect the queries that are unique and relevant to a particular topic.


Overall, Autocomplete is a complex time-saving feature that’s not simply displaying the most common queries on a given topic. That’s also why it differs from and shouldn’t be compared against Google Trends, which is a tool for journalists and anyone else who’s interested to research the popularity of searches and search topics over time.


Predictions you likely won’t see

Predictions, as explained, are meant to be helpful ways for you to more quickly finish completing something you were about to type. But like anything, predictions aren’t perfect. There’s the potential to show unexpected or shocking predictions. It’s also possible that people might take predictions as assertions of facts or opinions. We also recognize that some queries are less likely to lead to reliable content.


We deal with these potential issues in two ways. First and foremost, we have systems designed to prevent potentially unhelpful and policy-violating predictions from appearing. Secondly, if our automated systems don’t catch predictions that violate our policies, we have enforcement teams that remove predictions in accordance with those policies.


Our systems are designed to recognize terms and phrases that might be violent, sexually-explicit, hateful, disparaging or dangerous. When we recognize that such content might surface in a particular prediction, our systems prevent it from displaying. 
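A heavily simplified sketch of this kind of filtering might look like the Python below; the denylist terms and substring matching are placeholders, since the real systems rely on far more nuanced signals than word lists.

```python
# Hypothetical sketch: suppress predictions that match a policy denylist.
# The terms and the simple substring matching here are placeholders only.
POLICY_VIOLATING_TERMS = {"placeholder-violent-term", "placeholder-hateful-term"}

def filter_predictions(candidates):
    """Drop candidate predictions containing policy-violating terms."""
    return [p for p in candidates
            if not any(term in p.lower() for term in POLICY_VIOLATING_TERMS)]

print(filter_predictions(["harmless query", "query with placeholder-violent-term"]))
# ['harmless query']
```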


People can still search for such topics using those words, of course. Nothing prevents that. We simply don’t want to unintentionally shock or surprise people with predictions they might not have expected.


Using our automated systems, we can also recognize if a prediction is unlikely to return much reliable content. For example, after a major news event, there can be any number of unconfirmed rumors or information spreading, which we would not want people to think Autocomplete is somehow confirming. In these cases, our systems identify if there’s likely to be reliable content on a particular topic for a particular search. If that likelihood is low, the systems might automatically prevent a prediction from appearing. But again, this doesn’t stop anyone from completing a search on their own, if they wish.


While our automated systems typically work very well, they don’t catch everything. This is why we have policies for Autocomplete, which we publish for anyone to read. Our systems aim to prevent policy-violating predictions from appearing. But if any such predictions do get past our systems, and we’re made aware (such as through public reporting options), our enforcement teams work to review and remove them, as appropriate. In these cases, we remove both the specific prediction in question and often use pattern-matching and other methods to catch closely-related variations.


As an example of all this in action, consider our policy about names in Autocomplete, which began in 2016. It’s designed to prevent showing offensive, hurtful or inappropriate queries in relation to named individuals, so that people aren’t potentially forming an impression about others solely off predictions.  We have systems that aim to prevent these types of predictions from showing for name queries. But if violations do get through, we remove them in line with our policies. 


You can always search for what you want

Having discussed why some predictions might not appear, it’s also helpful to remember that predictions are not search results. Occasionally, people concerned about predictions for a particular query might suggest that we’re preventing actual search results from appearing. This is not the case. Autocomplete policies only apply to predictions. They do not apply to search results. 


We understand that our protective systems may prevent some useful predictions from showing. In fact, our systems take a particularly cautious approach when it comes to names and might prevent some non-policy violating predictions from appearing. However, we feel that taking this cautious approach is best. That’s especially because even if a prediction doesn’t appear, this does not impact the ability for someone to finish typing a query on their own and finding search results. 


We hope this has helped you understand more about how we generate predictions that allow you to more quickly complete the query you started, whether that’s while typing on your laptop or swiping the on-screen keyboard on your phone.


Why is the sky orange? How Google gave people the right info

On the morning of September 10, millions of people in Northern California woke up to an orange sky after wildfire smoke spread like a thick layer across the West Coast. It persisted for days, and it was the first time lots of people had ever seen something like this. 

To understand what was happening, many people turned to Search. According to Google Trends, searches for “why is the sky orange” hit an all-time high this month in the United States. As you can see in the graph below, this wasn't a totally new query. There are many pages on the web with general scientific explanations of what can cause the sky to turn orange. But people wanted to know why, in that moment, where they were, the sky was tangerine tinted.


Search interest for “why is the sky orange” since 2004, US (Google Trends)


So how does Google respond to a query spike like this? Well, language understanding is at the core of Search, but it’s not just about the words. Critical context, like time and place, also helps us understand what you’re really looking for. This is particularly true for featured snippets, a feature in Search that highlights pages that our systems determine are likely a great match for your search. We’ve made improvements to better understand when fresh or local information -- or both -- is key to delivering relevant results to your search. 

In the case of the orange sky phenomenon, for people in Northern California, the time and location was really important to understanding what these searches were looking for. Our freshness indicators identified a rush of new content was being produced on this topic that was both locally relevant and different from the more evergreen content that existed. This signaled to our systems to ignore most of the specifics that they previously understood about the topic of “orange sky”--like the relation to a sunset--but to retain broad associations like “air” and “ocean” that were still relevant. In a matter of minutes, our systems learned this new pattern and provided fresh featured snippet results for people looking for this locally relevant information in the Bay Area.
Featured snippet for “why is the sky orange”

Put simply, instead of surfacing general information on what causes a sunset, when people searched for “why is the sky orange” during this time period, our systems automatically pulled in current, location-based information to help people find the timely results they were searching for. 

Over the course of the week, we saw even more examples of these systems at work. As a residual effect of the wildfires, New York City and Massachusetts started experiencing a hazy sky. But that wasn’t the case in all states. So for a query like “why is it hazy?” local context was similarly important for providing a relevant result.

Search results for “why is it hazy?” in New York City

For this query, people in New York found an explanation of how the wildfire smoke was caught in a jet stream, which caused the haze to move east. People in Boston would have found a similar featured snippet, but specific to the conditions in that city. And those in Alaska, who were not impacted, would not see these same results. 

These are just two of billions of queries we get each day, and as new searches arise and information in the world changes, we’ll continue to provide fresh, relevant results in these moments.

How Google delivers reliable information in Search

For many people, Google Search is a place they go when they want to find information about a question, whether it’s to learn more about an issue, or fact check a friend quoting a stat about your favorite team. We get billions of queries every day, and one of the reasons people continue to come to Google is they know that they can often find relevant, reliable information that they can trust.


Delivering a high-quality search experience is core to what makes Google so helpful. From the early days when we introduced the PageRank algorithm, understanding the quality of web content was what set Google apart from other search engines.


But people often ask: What do you mean by quality, and how do you figure out how to ensure that the information people find on Google is reliable?


A simple way to think about it is that there are three key elements to our approach to information quality:


  • First, we fundamentally design our ranking systems to identify information that people are likely to find useful and reliable. 

  • To complement those efforts, we also have developed a number of Search features that not only help you make sense of all the information you’re seeing online, but that also provide direct access to information from authorities—like health organizations or government entities. 

  • Finally, we have policies for what can appear in Search features to make sure that we’re showing high quality and helpful content.


With these three approaches, we’re able to continue to improve Search and raise the bar on quality to deliver a trusted experience for people around the world. Let’s take a closer look at how we approach each of these areas.


Orienting our ranking systems around quality 

To understand what results are most relevant to your query, we have a variety of language understanding systems that aim to match the words and concepts in your query with related information in our index. This ranges from systems that understand things like misspellings or synonyms, to more advanced AI-based systems like our BERT-based language capabilities that can understand more complex, natural-language queries. 


Updates to our language understanding systems certainly make Search results more relevant and improve the experience overall. But when it comes to high-quality, trustworthy information, even with our advanced information understanding capabilities, search engines like Google do not understand content the way humans do. We often can’t tell from the words or images alone if something is exaggerated, incorrect, low-quality or otherwise unhelpful.


Instead, search engines largely understand the quality of content through what are commonly called “signals.” You can think of these as clues about the characteristics of a page that align with what humans might interpret as high quality or reliable. For example, the number of quality pages that link to a particular page is a signal that a page may be a trusted source of information on a topic.
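As a toy illustration of what a link-based signal can look like (a much-simplified stand-in, not PageRank or any actual Google signal), the sketch below counts how many known high-quality pages link to a candidate page. The link graph and site names are invented.

```python
# Much-simplified stand-in for a link-based quality signal: count how many
# known high-quality pages link to each candidate page. Data is invented.
link_graph = {
    "trusted-site.test/guide": ["example.test/article", "example.test/other"],
    "trusted-site.test/news":  ["example.test/article"],
    "random-site.test/page":   ["example.test/other"],
}
quality_pages = {"trusted-site.test/guide", "trusted-site.test/news"}

def quality_link_count(target: str) -> int:
    return sum(1 for source, targets in link_graph.items()
               if source in quality_pages and target in targets)

print(quality_link_count("example.test/article"))  # 2
print(quality_link_count("example.test/other"))    # 1
```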


We consider a variety of other quality signals, and to understand if our mixture of quality signals is working, we run a lot of tests. We have more than 10,000 search quality raters, people who collectively perform millions of sample searches and rate the quality of the results according to how well they measure up against what we call E-A-T: Expertise, Authoritativeness and Trustworthiness. 


Raters, following instructions anyone can read in our Search Quality Rater Guidelines, evaluate results for sample queries and assess how well the pages listed appear to demonstrate these characteristics of quality.


We recently explained the search rater process in more depth, but it’s worth noting again the ratings we receive are not used directly in our ranking algorithms. Instead, ratings provide data that, when taken in aggregate, help us measure how well our systems are working to deliver quality content that’s aligned with how people—across the country and around the world—evaluate information. This data helps us to improve our systems and ensure we’re delivering high quality results.


For topics where quality information is particularly important—like health, finance, civic information, and crisis situations—we place an even greater emphasis on factors related to expertise and trustworthiness. We’ve learned that sites that demonstrate authoritativeness and expertise on a topic are less likely to publish false or misleading information, so if we can build our systems to identify signals of those characteristics, we can continue to provide reliable information. The design of these systems is our greatest defense against low-quality content, including potential misinformation, and is work that we’ve been investing in for many years.


Info from experts, right in Search

In most cases, our ranking systems do a very good job of making it easy to find relevant and reliable information from the open web, particularly for topics like health, or in times of crisis. But in these areas, we also develop features to make information from authoritative organizations like local governments, health agencies and elections commissions available directly on Search.


For example, we’ve long had knowledge panels in Search with information about health conditions and symptoms, vetted by medical experts. More recently, we saw a significant increase in people searching for information about unemployment benefits, so we worked with administrative agencies to highlight details about eligibility and how to access this civic service. And for many years, we’ve offered features that help you find out how to vote and where your polling place is. Through the Google Civic Information API, we help other sites and services make this information available across the web. This type of information is not always easy to find, especially in rapidly changing situations, so features like these help ensure people get critical guidance when they need it most.


Helping you understand information you see

For many searches, people aren’t necessarily looking for a quick fact, but rather to understand a more complex topic. We also know that people come to Search having heard information elsewhere, with the aim of seeing what others are saying to form their own opinion.


In these cases, we want to give people tools to make sense of the information they’re seeing online, to find reliable sources and explore the full picture about a topic. 


For example, we make it easy to spot fact checks in Search, News, and now in Google Images by displaying fact check labels. These labels come from publishers that use ClaimReview schema to mark up fact checks they have published. For years now we’ve offered Full Coverage on Google News and Search, helping people explore and understand how stories have evolved and explore different angles and perspectives.
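For publishers, those fact check labels are powered by ClaimReview markup embedded in the fact-check article. Below is a minimal, hypothetical example of that structured data, built here as a Python dictionary and serialized to JSON-LD; the claim, URL and rating are invented, and the exact fields publishers should use are documented at schema.org/ClaimReview and in Google's fact check markup guidance.

```python
import json

# Hypothetical ClaimReview structured data a publisher might embed in a fact-check
# article; values are invented. Check schema.org/ClaimReview for the full field list.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.test/check/sky-color-claim",
    "claimReviewed": "An example claim being checked",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}
print(json.dumps(claim_review, indent=2))
```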


Protecting Search features through policies

We also offer more general Search features, like knowledge panels, featured snippets and Autocomplete, that highlight and organize information in unique ways or predict queries you might want to do. Because of the way these features highlight information in Search, we hold ourselves to a very high standard for quality and have guidelines around what content should appear in those spaces.


Within these features, we first and foremost design our automated ranking systems to show helpful content. But our systems aren’t always perfect. So if our systems fail to prevent policy-violating content from appearing, our enforcement team will take action in accordance with our policies. 


To learn more about how we approach policies for our search features, visit this post. And if you’re still looking for more details about Search, check out more past articles in our How Search Works series.


Source: Search


How insights from people around the world make Google Search better

Every Google search you do is one of billions we receive that day. In less than half a second, our systems sort through hundreds of billions of web pages to try and find the most relevant and helpful results available.


Because the web and people’s information needs keep changing, we make a lot of improvements to our search algorithms to keep up. Thousands per year, in fact. And we’re always working on new ways to make our results more helpful whether it’s a new feature, or bringing new language understanding capabilities to Search.


The improvements we make go through an evaluation process designed so that people around the world continue to find Google useful for whatever they’re looking for. Here are some ways that insights and feedback from people around the world help make Search better.


Our research team at work

Changes that we make to Search are aimed at making it easier for people to find useful information, but depending on their interests, what language they speak, and where they are in the world, different people have different information needs. It’s our mission to make information universally accessible and useful, and we are committed to serving all of our users in pursuit of that goal.


This is why we have a research team whose job it is to talk to people all around the world to understand how Search can be more useful. We invite people to give us feedback on different iterations of our projects and we do field research to understand how people in different communities access information online.


For example, we’ve learned over the years about the unique needs and technical limitations that people in emerging markets have when accessing information online. So we developed Google Go, a lightweight search app that works well with less powerful phones and less reliable connections. On Google Go, we’ve also introduced uniquely helpful features, including one that lets you listen to web pages out loud, which is particularly useful for people learning a new language or who may be less comfortable with reading long text. Features like these would not be possible without insights from the people who will ultimately use them.


Search quality raters

A key part of our evaluation process is getting feedback from everyday users about whether our ranking systems and proposed improvements are working well. But what do we mean by “working well”? We publish publicly available rater guidelines that describe in great detail how our systems intend to surface great content. These guidelines are more than 160 pages long, but if we have to boil it down to just a phrase, we like to say that Search is designed to return relevant results from the most reliable sources available.


Our systems use signals from the web itself—like where words in your search appear on web pages, or how pages link to one another on the web—to understand what information is related to your query and whether it’s information that people tend to trust. But notions of relevance and trustworthiness are ultimately human judgments, so to measure whether our systems are in fact understanding these correctly, we need to gather insights from people.


To do this, we have a group of more than 10,000 people all over the world we call “search quality raters.” Raters help us measure how people are likely to experience our results. They provide ratings based on our guidelines and represent real users and their likely information needs, using their best judgment to represent their locale. These people study and are tested on our rater guidelines before they can begin to provide ratings.


How rating works

Here’s how a rater task works: we generate a sample of queries (say, a few hundred). A group of raters will be assigned this set of queries, and they’re shown two versions of results pages for those searches. One set of results is from the current version of Google, and the other set is from an improvement we’re considering.


Raters review every page listed in the results set and evaluate that page against the query, based on our rater guidelines. They evaluate whether those pages meet the information needs based on their understanding of what that query was seeking, and they consider things like how authoritative and trustworthy that source seems to be on the topic in the query. To evaluate things like expertise, authoritativeness, and trustworthiness—sometimes referred to as “E-A-T”—raters are asked to do reputational research on the sources.


Here’s what that looks like in practice: imagine the sample query is “carrot cake recipe.” The results set may include articles from recipe sites, food magazines, food brands and perhaps blogs. To determine if a webpage meets their information needs, a rater might consider how easy the cooking instructions are to understand, how helpful the recipe is in terms of visual instructions and imagery, and whether there are other useful features on the site, like a shopping list creator or calculator for recipe doubling. 


To understand if the author has subject matter expertise, a rater would do some online research to see if the author has cooking credentials, has been profiled or referenced on other food websites, or has produced other great content that has garnered positive reviews or ratings on recipe sites. Basically, they do some digging to answer questions like: is this page trustworthy, and does it come from a site or author with a good reputation?  


Ratings are not used directly for search ranking

Once raters have done this research, they then provide a quality rating for each page. It’s important to note that this rating does not directly impact how this page or site ranks in Search. Nobody is deciding that any given source is “authoritative” or “trustworthy.” In particular, pages are not assigned ratings as a way to determine how well to rank them. Indeed, that would be an impossible task and a poor signal for us to use. With hundreds of billions of pages that are constantly changing, there’s no way humans could evaluate every page on a recurring basis.


Instead, ratings are a data point that, when taken in aggregate, helps us measure how well our systems are working to deliver great content that’s aligned with how people—across the country and around the world—evaluate information.
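As a simple illustration of what “taken in aggregate” can mean (a deliberately minimal sketch, not our actual methodology), the Python below averages per-result ratings for a control and an experiment arm across a handful of made-up queries.

```python
from statistics import mean

# Illustrative sketch: aggregate per-result ratings (on a 0-1 scale) to compare
# the current system against a proposed change. Ratings and queries are made up.
ratings = {
    "control":    {"carrot cake recipe": [0.8, 0.7, 0.9], "driving test": [0.6, 0.5]},
    "experiment": {"carrot cake recipe": [0.9, 0.8, 0.9], "driving test": [0.7, 0.6]},
}

def average_rating(arm: str) -> float:
    all_scores = [s for per_query in ratings[arm].values() for s in per_query]
    return mean(all_scores)

print(f"control:    {average_rating('control'):.3f}")
print(f"experiment: {average_rating('experiment'):.3f}")
# A higher aggregate score for the experiment arm is evidence (not proof) the change helps.
```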


Last year alone, we did more than 383,605 search quality tests and 62,937 side-by-side experiments with our search quality raters to measure the quality of our results and help us make more than 3,600 improvements to our search algorithms. 


In-product experiments

Our research and rater feedback isn’t the only feedback we use when making improvements. We also need to understand how a new feature will work when it’s actually available in Search and people are using it as they would in real life. To make sure we’re able to get these insights, we test how people interact with new features through live experiments.


They’re called “live” experiments because they’re actually available to a small proportion of randomly selected people using the current version of Search. To test a change, we will launch a feature to a small percentage of all queries we get, and we look at a number of different metrics to measure the impact.
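A minimal sketch of how traffic might be deterministically split for such an experiment is shown below; the 1% rate, the hashing scheme and the unit identifiers are all hypothetical and only illustrate the general idea of stable random assignment.

```python
import hashlib

# Hypothetical sketch: deterministically assign a small fraction of traffic to a
# live experiment. The 1% rate and hashing scheme are illustrative only.
def in_experiment(unit_id: str, experiment: str, fraction: float = 0.01) -> bool:
    digest = hashlib.sha256(f"{experiment}:{unit_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # uniform value in [0, 1]
    return bucket < fraction

assignments = [in_experiment(f"query-{i}", "new-feature-test") for i in range(10000)]
print(sum(assignments))  # roughly 100 of 10,000 units land in the experiment arm
```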


Did people click or tap on the new feature? Did most people just scroll past it? Did it make the page load slower? These insights can help us understand quite a bit about whether a new feature or change is helpful and if people will actually use it.


In 2019, we ran more than 17,000 live traffic experiments to test out new features and improvements to Search. If you compare that with how many launches actually happened (around 3600, remember?), you can see that only the best and most useful improvements make it into Search.


Always improving

While our search results will never be perfect, these research and evaluation processes have proven to be very effective over the past two decades. They allow us to make frequent improvements and ensure that the changes we make represent the needs of people around the world coming to Search for information.


Source: Search


Why keeping spam out of Search is so important

When you come to Search with a query in mind, you trust that Google will find a number of relevant and helpful pages to choose from. We put a lot of time and effort into improving our search systems to ensure that’s the case.


Working on improvements to our language understanding and other search systems is only part of why Google remains so helpful. Equally important is our ability to fight spam. Without our spam-fighting systems and teams, the quality of Search would be reduced--it would be a lot harder to find helpful information you can trust. 


If low-quality pages were able to spam their way into the top results, there would be a greater chance that people could get tricked by phony sites trying to steal personal information or infect their computers with malware. If you’ve ever gone into your spam folder in Gmail, that’s akin to what Search results would be like without our spam detection capabilities.


Every year we publish a Webspam Report that details the efforts behind reducing spam in your search results and supporting the community of site creators whose websites we help you discover. To coincide with this year’s report, we wanted to give some additional context for why spam-fighting is so important, and how we go about it.


Defining “spam”

We’ve always designed our systems to prioritize the most relevant and reliable webpages at the top. We publicly describe the factors that go into our ranking systems so that web creators can understand the types of content that our systems will recognize as high quality.

We define “spam” as using techniques that attempt to mimic these signals without actually delivering on the promise of high-quality content, or other tactics that might prove harmful to searchers.

Our Webmaster Guidelines detail the types of spammy behavior that are discouraged and can lead to a lower ranking: everything from scraping pages and keyword stuffing to participating in link schemes and implementing sneaky redirects.


Fighting spam is a never-ending battle, a constant game of cat-and-mouse against existing and new spammy behaviors. This threat of spam is why we’ve continued to be very careful about how much detail we reveal about how our systems work. However, we do share a lot, including resources that provide transparency about the positive behaviors creators should follow to create great information and gain visibility and traffic from Search.


Spotting the spammers

The first step of fighting spam is detection. So how do we spot it? We employ a combination of manual reviews by our analysts and a variety of automated detection systems.


We can’t share the specific techniques we use for spam fighting because that would weaken our protections and ultimately make Search much less useful. But we can describe the kinds of spammy behavior that can be detected systematically. 


After all, a low quality page might include the right words and phrases that match what you searched for, so our language systems wouldn’t be able to detect unhelpful pages from content alone. The telltale signs of spam are in the behavioral tactics sites use to try to manipulate our ranking systems against our Webmaster Guidelines.


Our spam-fighting systems detect these behaviors so we can tackle this problem at scale. In fact, the scale is huge. Last year, we observed that more than 25 billion of the pages we find each day are spammy. (If each of those pages were a page in a book, that would be more than 20 million copies of “War & Peace” each day!) This leads to an important question: once we find all this spam, what happens next?
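For the curious, a quick back-of-the-envelope check of that comparison, assuming roughly 1,200 pages per copy of “War and Peace” (the exact count varies by edition):

```python
# Back-of-the-envelope check, assuming ~1,200 pages per copy (varies by edition).
spammy_pages_per_day = 25_000_000_000
pages_per_copy = 1_200
print(spammy_pages_per_day / pages_per_copy)  # ~20.8 million copies per day
```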


Stopping the spammers

When it comes to how we handle spam, it depends on the type of spam and how severe the violation is. For most of the 25 billion spammy pages detected each day, we’re able to automatically recognize their spammy behavior and ensure they don’t rank well in our results. But that’s not the case for everything. 


As with anything, our automated systems aren’t perfect. That’s why we also supplement them with human review, a team that does its own spam sleuthing to understand if content or sites are violating our guidelines. Often, this human review process leads to better automated systems. We look to understand how that spam got past our systems and then work to improve our detection, so that we catch the particular case and automatically detect many other similar cases overall.


In other cases, we may issue what’s called a manual action, when one of our human spam reviewers finds content that isn’t complying with our Webmaster Guidelines. This can lead to a demotion or a removal of spam content from our search results, especially if it’s deemed to be particularly harmful, like a hacked site that has pages distributing malware to visitors.


When a manual action takes place, we send a notice to the site owner via Search Console, which webmasters can see in their Manual Actions Report. We send millions of these notices each year, and they give site owners the opportunity to fix the issue and submit for reconsideration. After all, not all “spam” is purposeful, so if a site owner has inadvertently tried tactics that run afoul of our guidelines, or if their site has been compromised by hackers, we want to ensure they can make things right and have their useful information available to people in Search again. This brings us back to why we invest so much effort in fighting spam: so that Search can bring you good, helpful and safe content from sites across the web.


Discovering great information

It’s unfortunate that there’s so much spam, and so much effort that has to be spent fighting it. But that shouldn’t overshadow the fact there are millions upon millions of businesses, publishers and websites with great content for people to discover. We want them to succeed, and we provide tools, support and guidance to help.


We publish our own Search Engine Optimization Starter Guide to provide tips on how to succeed with appropriate techniques in Search. Our Search Relations team conducts virtual office hours, monitors our Webmaster Community forums, and (when possible!) hosts and participates in events around the world to help site creators improve their presence in Search. We provide a variety of support resources, as well as the Search Console toolset to help creators with search.


We’d also encourage anyone to visit our How Google Search Works site, which shares more generally about how our systems work to generate great search results for everyone.


Source: Search