Tag Archives: How Search Works

How location helps provide more relevant search results

There are many factors that play a role in providing helpful results when you search for something on Google. These factors help us rank or order results and can include the words of your query, the relevance or usability of web pages in our index, and the expertise of sources.

Location is another important factor in providing relevant Search results. It helps you find the nearest coffee shop when you need a pick-me-up, traffic predictions along your route, and even important emergency information for your area. In this post, we’ll share details about the vital role that location plays in generating great search results.

Finding businesses and services in your community

It’s a Friday night. You’re hungry and want some pizza delivered. If Google couldn’t consider location in search ranking, our results might display random pizza restaurants that are nowhere near you. With location information, we can better ensure you’re getting webpages and business listings about pizza places that are local and relevant to you.

The same is true for many types of businesses and services with physical locations, such as banks, post offices, restaurants or stores. Consider two people who search for zoos—one in Omaha, Nebraska and the other in Mobile, Alabama. Location information helps both get the right local information that they need:

Searches for "zoos" in Omaha, Nebraska and Mobile, Alabama

Same query, different local contexts

Location can matter even when you’re searching for something that doesn’t necessarily have a physical location. For example, a search for “air quality” in San Diego, California versus Tulsa, Oklahoma might lead you to pages with local information relevant to each area.

Searches for “air quality” in San Diego, California and Tulsa, Oklahoma

Similarly, certain information in Search can be more useful if it’s specific to your city or neighborhood. If you were to search Google for “parking information,” you might see information about municipal codes and parking enforcement for your local area that would differ from what someone else might see in another city. 

Local information in search results can also be helpful in an emergency. If you search for “hurricane,” our Crisis Response features can show you local shelter information if there’s a hurricane close by, rather than just generic information about what a hurricane is.

Of course, the fact that some searches have local results doesn’t mean that everyone gets completely different results just because they’re in different cities (or even different countries). If a search topic has no local aspect to it, no local results are shown. If it does, we’ll show a mix of local results that are relevant to particular places along with non-local results that are generally useful.

How location works at Google

You might be wondering how location works at Google. Google determines location from a few different sources, and then uses this information to deliver more relevant experiences when it will be more helpful for people. Learn more about the different ways we may understand location, as well as how to manage your data in a way that works best for you, on our help center page about location and Search.

Location is a critical part of how Google is able to deliver the most relevant and helpful search results possible—whether you need emergency information in a snap, or just some late-night pizza delivered. For more under-the-hood information, check out our How Search Works series. 

Source: Search


How Google Search ads work

If you’re searching for information on Google where businesses might have relevant services or products to provide, you will likely come across ads. These could be ads from the flower shop down the street, your favorite nonprofit or a large retailer. If there are no useful ads to show for your search, you won’t see any--which is actually the case for a large majority of searches. Our goal is to provide you with relevant information to help you find exactly what you’re looking for. And we only make money if ads are useful and relevant, as indicated by your click on the ad.

How and when we show ads in Search

Nearly all of the ads you see are on searches with commercial intent, such as searches for “sneakers,” “t-shirt” or “plumber.” We’ve long said that we don’t show ads--or make money--on the vast majority of searches.  In fact, on average over the past four years, 80 percent of searches on Google haven’t had any ads at the top of search results. Further, only a small fraction of searches--less than 5 percent--currently have four top text ads. 

Though we follow established principles for how and when ads can appear in Google Search, a variety of factors beyond our control influence how many top ads people see: whether a search is commercial, whether advertisers are interested in advertising against that subject, and whether there are ads relevant to the query. For example, in April 2020, people saw an average of 40 percent fewer top text ads per search than in April 2019. This was primarily due to COVID-related effects: advertisers reduced their ad budgets and users searched less for commercial topics, so Google Search users saw fewer ads on average.

It’s about quality, not quantity

Organizations that want to advertise in Google Search participate in an auction and set their own bids (the amount they are willing to pay per click) and budgets. But advertisers’ bids are only one component of our ad ranking algorithms. When it comes to the ads you see, the relevance of those ads to your search and the overall quality of the advertisers’ ads and websites are key components of our algorithms as well. That means that no matter how large or small the organization is, they have an opportunity to reach potential customers with their message in Google Search. In the Google Ads auction, advertisers are often charged less, and sometimes much less, than their bid.
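The auction mechanics described above can be sketched in a few lines. This is a deliberately simplified model, not Google’s actual Ad Rank formula: the bids, quality scores and pricing rule below are invented for illustration, and the real system uses many more signals.

```python
# Hypothetical sketch of a quality-weighted ad auction.
# Bids, quality scores, and the pricing rule are invented for illustration.

def rank_ads(ads):
    """Rank ads by bid * quality -- a simplified 'ad rank' score."""
    return sorted(ads, key=lambda a: a["bid"] * a["quality"], reverse=True)

def price_per_click(winner, runner_up):
    """Charge the winner just enough to beat the runner-up's score
    (a generalized second-price idea), never more than their own bid."""
    needed = runner_up["bid"] * runner_up["quality"] / winner["quality"]
    return min(winner["bid"], round(needed + 0.01, 2))

ads = [
    {"name": "big_retailer", "bid": 2.00, "quality": 0.5},
    {"name": "local_shop",   "bid": 1.20, "quality": 0.9},
]

ranked = rank_ads(ads)
# local_shop ranks first (1.20 * 0.9 = 1.08 beats 2.00 * 0.5 = 1.00)
# and pays less than its bid.
```

Note how, in this toy model, the lower bidder with the higher-quality ad wins the top slot and still pays less than its bid, which mirrors the point above that bids are only one component of ranking.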

The experience of our users comes first, which is why we only show ads that are helpful to people. Even for the fraction of search queries where we do show ads, we don’t make a cent unless people find an ad relevant enough to click on it. We invest significantly in our ads quality systems to continuously improve our ability to show ads that are highly relevant to people and helpful to what they’re searching for. Over time, this has led to better, more relevant ads and major improvements in the overall user experience. In fact, over the last four years, we’ve reduced the rate of low-quality and irrelevant ads threefold.

Part of delivering a great user experience is also ensuring that Google Search ads are clearly labeled as coming from an advertiser―and we’ve long been an industry leader in providing prominent ad labeling. When Search ads do appear, they have the word “Ad” clearly labeled in bold black text in the current design. We rely on extensive user testing both on mobile and desktop to ensure ad labels meet our high standards for being prominent and distinguishable from unpaid results. 

A level playing field for all businesses, regardless of budget or size

With ads on Google Search we give businesses, organizations and governments around the world an opportunity to reach millions of people with information. Every advertiser, regardless of their budget, has the opportunity to reach people with ads in Google Search. That’s why, for small and local businesses in particular, Google Search ads help them compete with the largest companies for the same customers and opportunities, not just in their communities but also around the world. Every day, countless small businesses use Google Search ads to help drive awareness for their products and services and reach new customers. Just this year, in the midst of the COVID-19 pandemic, Google Search ads helped a bakery in New York reach more local customers and expand their business, provided a bicycle shop in South Dakota with the tools to go global, and gave a Black-owned chocolate shop in Dallas the opportunity to find new chocolate lovers from as far away as India. 

We also help connect people to causes through our Ad Grants program which gives nonprofits $10,000 a month in Search ads to help them attract donors, recruit volunteers and promote their missions. Earlier this year as the coronavirus pandemic worsened, we increased our grants to help government agencies take swift action and get lifesaving information to the public and local communities. In the U.S. alone, we served more than 100 million PSAs containing critical information on public health to millions of people this year.

Protecting your privacy


We make money from advertising, not from selling personal information. When you use our products, you trust us with your data and it's our responsibility to keep it private, safe and secure. That’s why we’ve built controls to help you choose the privacy settings that are right for you, and the ability to permanently delete your data. And we’ll never sell your personal information to anyone. As always, our goal is to ensure the ads that you see are as helpful and relevant as possible. This benefits millions of businesses and organizations, and most importantly, the people who rely on Google Search every day.

How Google organizes information to find what you’re looking for

When you come to Google and do a search, there might be billions of pages that are potential matches for your query, and millions of new pages being produced every minute. In the early days, we updated our search index once per month. Now, like other search engines, Google is constantly indexing new information to make it accessible through Search.


But to make all of this information useful, it’s critical that we organize it in a way that helps people quickly find what they’re looking for. With this in mind, here’s a closer look at how we approach organizing information on Google Search.


Organizing information in rich and helpful features

Google indexes all types of information--from text and images in web pages, to real-world information, like whether a local store has a sweater you’re looking for in stock. To make this information useful to you, we organize it on the search results page in a way that makes it easy to scan and digest. When looking for jobs, you often want to see a list of specific roles, whereas if you’re looking for a restaurant, a map can help you easily find a spot nearby.


We offer a wide range of features--from video and news carousels, to results with rich imagery, to helpful labels like star reviews--to help you navigate the available information more seamlessly. These features include links to web pages, so you can easily click to a website to find more information. In fact, we’ve grown the average number of outbound links to websites on a search results page from only 10 (“10 blue links”) to now an average of 26 links on a mobile results page. As we’ve added more rich features to Google Search, people are more likely to find what they’re looking for, and websites have more opportunity to appear on the first page of search results.


google search results page for pancake in 2012 v. 2020

When you searched for “pancake” in 2012, you mostly saw links to webpages. Now, you can easily find recipe links, videos, facts about pancakes, nutritional information, restaurants that serve pancakes, and more.

Presenting information in rich features, like an image carousel or a map, makes Google Search more helpful, both to people and to businesses. These features are designed so you can find the most relevant and useful information for your query. By improving our ability to deliver relevant results, we’ve seen that people are spending more time on the webpages they find through Search. The amount of time spent on websites following a click from Google Search has significantly grown year over year. 


Helping you explore and navigate topics

Another important element of organizing information is helping you learn more about a topic. After all, most queries don’t just have a single answer--they’re often open-ended questions like “dessert ideas.”


Our user experience teams spend a lot of time focused on how we can make it easy and intuitive to refine your search as you go. This is why we’ve introduced features like carousels, where you can easily swipe your phone screen to get more results. For instance, if you search for “meringue”, you might see a list of related topics along with related questions that other people have asked to help you on your journey.


google search results page for query meringue

How features and results are ranked

Organizing information into easy-to-use formats is just one piece of the puzzle. To make all of this information truly useful, we also must order, or “rank,” results in a way that ensures the most helpful and reliable information rises to the top.


Our ranking systems consider a number of factors--from what words appear on the page, to how fresh the content is--to determine what results are most relevant and helpful for a given query. Underpinning these systems is a deep understanding of information--from language and visual content to context like time and place--that allows us to match the intent of your query with the most relevant, highest quality results available.


In cases where there’s a single answer, like “When was the first Academy Awards?,” directly providing that answer is the most helpful result, so it will appear at the top of the page. But sometimes queries can have many interpretations. Take a query like “pizza”--you might be looking for restaurants nearby, delivery options, pizza recipes, and more. Our systems aim to compose a page that is likely to have what you’re looking for, ranking results for the most likely intents at the top of the page. Ranking a pizza recipe first would certainly be relevant, but our systems have learned that people searching for “pizza” are more likely to be looking for restaurants, so we’re likely to show a map with local restaurants first. Contrast that to a query like “pancake” where we find that people are more likely looking for recipes, so recipes often rank higher, and a map with restaurants serving pancakes may appear lower on the page.
google search results pages for pizza and pancake
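The intent mixing described above can be sketched as ordering candidate result blocks by an estimated likelihood for each interpretation of the query. The block names and probabilities below are invented for illustration; real systems learn these from many signals.

```python
# Hypothetical sketch: compose a results page by ranking candidate result
# blocks by the estimated likelihood of each query intent.
# All names and probabilities are invented for illustration.

INTENT_LIKELIHOOD = {
    "pizza":   {"local_restaurants": 0.60, "delivery": 0.25, "recipes": 0.15},
    "pancake": {"recipes": 0.70, "local_restaurants": 0.20, "nutrition": 0.10},
}

def compose_page(query):
    """Order result blocks for a query by descending intent likelihood."""
    intents = INTENT_LIKELIHOOD.get(query, {})
    return [block for block, _ in
            sorted(intents.items(), key=lambda kv: kv[1], reverse=True)]
```

Under these made-up numbers, `compose_page("pizza")` puts the restaurant map first, while `compose_page("pancake")` leads with recipes, matching the contrast described above.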

An important thing to remember is that ranking is dynamic. New things are always happening in the world, so the available information and the meaning of queries can change day-by-day. This summer, searches for “why is the sky orange” turned from a general question about sunsets to a specific, locally relevant query about weather conditions on the West Coast of the U.S. due to wildfires. We constantly evaluate the quality of our results to ensure that even as queries or content changes, we’re still providing helpful information. More than 10,000 search quality raters around the world help us conduct hundreds of thousands of tests every year, and it’s through this process that we know that our investments in Google Search truly benefit people.


We’ve heard people ask if we design our search ranking systems to benefit advertisers, and we want to be clear: that is absolutely not the case. We never provide special treatment to advertisers in how our search algorithms rank their websites, and nobody can pay us to do so. 


Ongoing investment in a high quality experience

As we’ve seen for many years, and as was particularly apparent in the wake of COVID, information needs can change rapidly. As the world changes, we are always looking for new ways we can make Google Search better and help people improve their lives through access to information.


Every year, we make thousands of improvements to Google Search, all of which we test to ensure they’re truly making the experience more intuitive, modern, delightful, helpful and all-around better for the billions of queries we get every day. Search will never be a solved problem, but we’re committed to continuing to innovate to make Google better for you.


Organizing the world’s information: where does it all come from?

Since Google was founded more than 22 years ago, we’ve continued to pursue an ambitious mission of organizing the world’s information and making it universally accessible and useful. While we started with organizing web pages, our mission has always been much more expansive. We didn’t set out to organize the web’s information, but all the world’s information. 

Quickly, Google expanded beyond the web and began to look for new ways to understand the world and make information and knowledge accessible for more people. The internet--and the world--have changed a lot since those early days, and we’ve continued to improve Google Search to both anticipate and respond to the ever-evolving information needs that people have. 

It’s no mystery that the search results you saw back in 1998 look different than what you might find today. So we wanted to share an overview of where the information on Google comes from and, in another post, how we approach organizing an ever-expanding universe of web pages, images, videos, real-world insights and all the other forms of information out there.

Information from the open web

You’re probably familiar with web listings on Google--the iconic “blue link” results that take you to pages from across the web. These listings, along with many other features on the search results page, link out to pages on the open web that we’ve crawled and indexed, following instructions provided by the site creators themselves.

Site owners have the control to tell our web crawler (Googlebot) what pages we should crawl and index, and they even have more granular controls to indicate which portions of a page should appear as a text snippet on Google Search. Using our developer tools, site creators can choose if they want to be discovered via Google and optimize their sites to improve how they’re presented, with the aim to get more free traffic from people looking for the information and services they’re offering. 
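Concretely, these crawler instructions are usually expressed in a robots.txt file at the root of a site. A minimal example (the paths are illustrative):

```
# robots.txt -- crawler instructions read by Googlebot and other crawlers
User-agent: Googlebot
Disallow: /private/
Allow: /

Sitemap: https://example.com/sitemap.xml
```

For page-level control, site owners can add a `<meta name="robots" content="noindex">` tag to keep a page out of the index, and the more granular snippet controls mentioned above include directives like `max-snippet` and the `data-nosnippet` HTML attribute for excluding specific passages from text snippets.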

Google Search is one of many ways people find information and websites.  Every day, we send billions of visitors to sites across the web, and the traffic we send has grown every year since Google started. This traffic goes to a wide range of websites, helping people discover new companies, blogs, and products, not just the largest, well known sites on the web. Every day, we send visitors to well over 100 million different websites. 

Common knowledge and public data sources

Creators, publishers and businesses of all sizes work to create unique content, products and services. But there is also information that falls into the category of what you might describe as common knowledge--information that wasn’t uniquely created or doesn’t “belong” to any one person, but represents a set of facts that is broadly known. Think: the birthdate of a historical figure, the height of the tallest mountain in South America, or even what day it is today. 

We help people easily find these types of facts through a variety of Google Search features like knowledge panels. The information comes from a wide range of openly licensed sources such as Wikipedia, The Encyclopedia of Life, Johns Hopkins University CSSE COVID-19 Data, and the Data Commons Project, an open knowledge database of statistical data we started in collaboration with the U.S. Census, Bureau of Labor Statistics, Eurostat, World Bank and many others.

Another type of common knowledge is the product of calculations, and this is information that Google often generates directly. So when you search for a conversion of time (“What time is it in London?”) or measurement (“How many pounds in a metric ton?”), or want to know the square root of 348, those are pieces of information that Google calculates. Fun fact: we also calculate the sunrise and sunset times for locations based on latitude and longitude!
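Calculations like these are straightforward to reproduce. A small sketch of the two examples above, using standard conversion constants:

```python
import math

# Direct calculations like the ones described above -- answers that are
# computed rather than retrieved from a web page.

def pounds_in_metric_ton():
    # 1 metric ton = 1,000 kg; 1 kg is approximately 2.20462 lb
    return 1000 * 2.20462

print(round(math.sqrt(348), 4))       # square root of 348 -> 18.6548
print(round(pounds_in_metric_ton()))  # pounds in a metric ton -> 2205
```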

Licenses and partnerships

When it comes to organizing information, unstructured data (words and phrases on web pages) is more challenging for our automated systems to understand. Structured databases, including public knowledge bases like Wikidata, make it a lot easier for our systems to understand, organize and present facts in helpful features and formats.

For some specialized types of data, like sports scores, information about TV shows and movies, and song lyrics, there are providers who work to organize information in a structured format and offer technical solutions (like APIs) to deliver fresh info. We license data from these companies to ensure that providers and creators (like music publishers and artists) are compensated for their work. When people come to Google looking for this information, they can access it right away.

We always work to deliver high quality information, and for topics like health or civic participation that affect people’s livelihoods, easy access to reliable, authoritative information is critically important. For these types of topics, we work with organizations like local health authorities, such as the CDC in the U.S., and nonpartisan, nonprofit organizations like Democracy Works to make authoritative information readily available on Google.

Information that people and businesses provide

There’s a wide range of information that exists in the world that isn’t currently available on the open web, so we look for ways to help people and businesses share these updates, including by providing information directly to Google. Local businesses can claim their Business Profile and share the latest with potential customers on Search, even if they don’t have a website. In fact, each month Google Search connects people with more than 120 million businesses that don’t have a website. On average, local results in Search drive more than 4 billion connections for businesses every month, including more than 2 billion visits to websites as well as connections like phone calls, directions, ordering food and making reservations.

We’re also deeply investing in new techniques to ensure that we’re reflecting the latest accurate information. This can be especially challenging as local information is constantly changing and not often accurately reflected on the web. For example, in the wake of COVID-19, we’ve used our Duplex conversational technology to call businesses, helping to update their listings by confirming details like modified store hours or whether they offer takeout and delivery. Since this work began, we’ve made over 3 million updates to businesses like pharmacies, restaurants and grocery stores that have been seen over 20 billion times in Maps and Search. 

Other businesses like airlines, retailers and manufacturers also provide Google and other sites with data about their products and inventory through direct feeds. So when you search for a flight from Bogota to Lima, or want to learn more about the specs of the hottest new headphones, Google can provide high quality information straight from the source.

We also provide ways for people to share their knowledge about places across more than 220 countries and territories. Thanks to millions of contributions submitted by users every day--from reviews and ratings to photos, answers to questions, address updates and more--people all around the world can find the latest, accurate local information on Google Search and Maps. 

Newly created information and insights from Google

Through advancements in AI and machine learning, we’ve developed innovative ways to derive new insights from the world around us, providing people with information that can not only help them in their everyday lives, but also keep them safe.

For years, people have turned to our Popular Times feature to help gauge the crowds at their favorite brunch spots or visit their local grocery store when it’s less busy. We're continually improving the accuracy and coverage of this feature, currently available for 20 million places around the world on Maps and Search. Now, this technology is serving more critical needs during COVID. With an expansion of our live busyness feature, these Google insights are helping people take crowdedness into account as they patronize businesses through the pandemic. 

We also generate new insights to aid in crisis response--from wildfire maps based on satellite data to AI-generated flood forecasting--to help people stay out of harm’s way when disaster strikes.

Organizing information and making it accessible and useful

Simply compiling a wide range of information is not enough. Core to making information accessible is organizing it in a way that people can actually use it. 

How we organize information continues to evolve, especially as new information and content formats become available. To learn more about our approach to providing you with helpful, well-organized search results pages, check out the next blog in our How Search Works series.

How Google autocomplete predictions are generated

You come to Google with an idea of what you’d like to search for. As soon as you start typing, predictions appear in the search box to help you finish what you’re typing. These time-saving predictions are from a feature called Autocomplete, which we covered previously in this How Search Works series.


In this post, we’ll explore how Autocomplete’s predictions are automatically generated based on real searches and how this feature helps you finish typing the query you already had in mind. We’ll also look at why not all predictions are helpful, and what we do in those cases.


Where predictions come from

Autocomplete predictions reflect searches that have been done on Google. To determine what predictions to show, our systems begin by looking at common and trending queries that match what someone starts to enter into the search box. For instance, if you were to type in “best star trek…”, we’d look for the common completions that would follow, such as “best star trek series” or “best star trek episodes.”


Autocomplete star trek

That’s how predictions work at the most basic level. However, there’s much more involved. We don’t just show the most common predictions overall. We also consider things like the language of the searcher or where they are searching from, because these make predictions far more relevant. 
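At that most basic level, prediction selection resembles prefix matching against past queries, ranked by frequency. A minimal sketch, where the query log and its counts are entirely invented:

```python
from collections import Counter

# Hypothetical sketch of prefix-based autocomplete: rank completions of a
# prefix by how often the full query was searched. The log is invented.

QUERY_LOG = Counter({
    "best star trek series": 120,
    "best star trek episodes": 95,
    "best star trek movie": 40,
    "best star wars movie": 300,
})

def predict(prefix, k=3):
    """Return up to k logged queries starting with prefix, most common first."""
    matches = {q: n for q, n in QUERY_LOG.items() if q.startswith(prefix)}
    return [q for q, _ in Counter(matches).most_common(k)]

print(predict("best star trek"))
# ['best star trek series', 'best star trek episodes', 'best star trek movie']
```

A real system would additionally condition this lookup on language and location, as described above, rather than ranking on raw global counts.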


Below, you can see predictions for those searching for “driving test” in the U.S. state of California versus the Canadian province of Ontario. Predictions differ in naming relevant locations or even spelling “centre” correctly for Canadians rather than using the American spelling of “center.”


Autocomplete driving test

To provide better predictions for long queries, our systems may automatically shift from predicting an entire search to portions of a search. For example, we might not see a lot of queries for “the name of the thing at the front” of some particular object. But we do see a lot of queries for “the front of a ship” or “the front of a boat” or “the front of a car.” That’s why we’re able to offer these predictions toward the end of what someone is typing.


Autocomplete name of a thing

We also take freshness into account when displaying predictions. If our automated systems detect there’s rising interest in a topic, they might show a trending prediction even if it isn’t typically the most common of all related predictions that we know about. For example, searches for a basketball team are probably more common than individual games. However, if that team just won a big face-off against a rival, timely game-related predictions may be more useful for those seeking information that’s relevant in that moment.
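One way to picture this freshness adjustment is as a trending boost blended into long-term popularity, so a spiking prediction can outrank a historically more common one. The counts and weight below are invented for illustration:

```python
# Hypothetical sketch: blend long-term popularity with a recent-interest
# signal. All counts and the TREND_WEIGHT are invented for illustration.

TREND_WEIGHT = 5.0

def prediction_score(long_term_count, recent_spike):
    return long_term_count + TREND_WEIGHT * recent_spike

team_query = prediction_score(long_term_count=1000, recent_spike=10)   # 1050.0
game_query = prediction_score(long_term_count=200, recent_spike=400)   # 2200.0
# Right after a big win, the game-related prediction scores higher,
# even though the team query is far more common over the long run.
```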


Predictions also will vary, of course, depending on the specific topic that someone is searching for. People, places and things all have different attributes that people are interested in. For example, someone searching for “trip to New York” might see a prediction of “trip to New York for Christmas,” as that’s a popular time to visit that city. In contrast, “trip to San Francisco” may show a prediction of “trip to San Francisco and Yosemite.” Even if two topics seem to be similar or fall into similar categories, you won’t always see the same predictions if you try to compare them.  Predictions will reflect the queries that are unique and relevant to a particular topic.


Overall, Autocomplete is a complex time-saving feature that’s not simply displaying the most common queries on a given topic. That’s also why it differs from, and shouldn’t be compared against, Google Trends, a tool for journalists and anyone else interested in researching the popularity of searches and search topics over time.


Predictions you likely won’t see

Predictions, as explained, are meant to be helpful ways for you to more quickly finish typing what you had in mind. But like anything, predictions aren’t perfect. There’s the potential to show unexpected or shocking predictions. It’s also possible that people might take predictions as assertions of facts or opinions. We also recognize that some queries are less likely to lead to reliable content.


We deal with these potential issues in two ways. First and foremost, we have systems designed to prevent potentially unhelpful and policy-violating predictions from appearing. Second, if our automated systems don’t catch predictions that violate our policies, we have enforcement teams that remove predictions in accordance with those policies.


Our systems are designed to recognize terms and phrases that might be violent, sexually-explicit, hateful, disparaging or dangerous. When we recognize that such content might surface in a particular prediction, our systems prevent it from displaying. 


People can still search for such topics using those words, of course. Nothing prevents that. We simply don’t want to unintentionally shock or surprise people with predictions they might not have expected.


Using our automated systems, we can also recognize if a prediction is unlikely to return much reliable content. For example, after a major news event, there can be any number of unconfirmed rumors or information spreading, which we would not want people to think Autocomplete is somehow confirming. In these cases, our systems identify if there’s likely to be reliable content on a particular topic for a particular search. If that likelihood is low, the systems might automatically prevent a prediction from appearing. But again, this doesn’t stop anyone from completing a search on their own, if they wish.
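That suppression step can be pictured as a threshold on an estimated reliability signal. The threshold and estimates below are invented; the real systems are far more involved.

```python
# Hypothetical sketch: show a prediction only when the estimated chance of
# finding reliable content for it is high enough. Values are invented.

RELIABILITY_THRESHOLD = 0.4

def maybe_show(prediction, reliability_estimate):
    """Return the prediction, or None to withhold it. Withholding never
    stops the user from finishing the query themselves."""
    if reliability_estimate < RELIABILITY_THRESHOLD:
        return None
    return prediction
```

For instance, `maybe_show("local shelter locations", 0.9)` would surface the prediction, while `maybe_show("unconfirmed rumor", 0.1)` would quietly withhold it.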


While our automated systems typically work very well, they don’t catch everything. This is why we have policies for Autocomplete, which we publish for anyone to read. Our systems aim to prevent policy-violating predictions from appearing. But if any such predictions do get past our systems, and we’re made aware (such as through public reporting options), our enforcement teams work to review and remove them as appropriate. In these cases, we remove the specific prediction in question and often use pattern-matching and other methods to catch closely related variations.


As an example of all this in action, consider our policy about names in Autocomplete, which began in 2016. It’s designed to prevent showing offensive, hurtful or inappropriate queries in relation to named individuals, so that people aren’t forming an impression about others based solely on predictions. We have systems that aim to prevent these types of predictions from showing for name queries. But if violations do get through, we remove them in line with our policies.


You can always search for what you want

Having discussed why some predictions might not appear, it’s also helpful to remember that predictions are not search results. Occasionally, people concerned about predictions for a particular query might suggest that we’re preventing actual search results from appearing. This is not the case. Autocomplete policies only apply to predictions. They do not apply to search results. 


We understand that our protective systems may prevent some useful predictions from showing. In fact, our systems take a particularly cautious approach when it comes to names and might prevent some non-policy-violating predictions from appearing. However, we feel that taking this cautious approach is best, especially because even if a prediction doesn’t appear, this does not affect someone’s ability to finish typing a query on their own and find search results.


We hope this has helped you understand more about how we generate predictions that allow you to more quickly complete the query you started, whether that’s while typing on your laptop or swiping the on-screen keyboard on your phone.


Why is the sky orange? How Google gave people the right info

On the morning of September 10, millions of people in Northern California woke up to an orange sky after wildfire smoke spread like a thick layer across the West Coast. It persisted for days, and it was the first time lots of people had ever seen something like this. 

To understand what was happening, many people turned to Search. According to Google Trends, searches for “why is the sky orange” hit an all-time high this month in the United States. As you can see in the graph below, this wasn't a totally new query. There are many pages on the web with general scientific explanations of what can cause the sky to turn orange. But people wanted to know why, in that moment, where they were, the sky was tangerine tinted.


Search interest for “why is the sky orange” since 2004, US (Google Trends)


So how does Google respond to a query spike like this? Well, language understanding is at the core of Search, but it’s not just about the words. Critical context, like time and place, also helps us understand what you’re really looking for. This is particularly true for featured snippets, a feature in Search that highlights pages that our systems determine are likely a great match for your search. We’ve made improvements to better understand when fresh or local information, or both, is key to delivering relevant results to your search.

In the case of the orange sky phenomenon, for people in Northern California, time and location were really important to understanding what these searches were looking for. Our freshness indicators identified that a rush of new content was being produced on this topic that was both locally relevant and different from the more evergreen content that existed. This signaled to our systems to set aside most of the specifics they previously understood about the topic of “orange sky,” like the relation to a sunset, but to retain broad associations like “air” and “ocean” that were still relevant. In a matter of minutes, our systems learned this new pattern and provided fresh featured snippet results for people looking for this locally relevant information in the Bay Area.

Put simply, instead of surfacing general information on what causes a sunset, when people searched for “why is the sky orange” during this time period, our systems automatically pulled in current, location-based information to help people find the timely results they were searching for. 

Over the course of the week, we saw even more examples of these systems at work. As a residual effect of the wildfires, New York City and Massachusetts started experiencing a hazy sky. But that wasn’t the case in all states. So for a query like “why is it hazy?” local context was similarly important for providing a relevant result.


For this query, people in New York found an explanation of how the wildfire smoke was caught in a jet stream, which caused the haze to move east. People in Boston would have found a similar featured snippet, but specific to the conditions in that city. And those in Alaska, who were not impacted, would not see these same results.

These are just two of billions of queries we get each day, and as new searches arise and information in the world changes, we’ll continue to provide fresh, relevant results in these moments.

How Google delivers reliable information in Search

For many people, Google Search is a place they go when they want to find information about a question, whether it’s to learn more about an issue, or fact check a friend quoting a stat about their favorite team. We get billions of queries every day, and one of the reasons people continue to come to Google is they know that they can often find relevant, reliable information that they can trust.


Delivering a high-quality search experience is core to what makes Google so helpful. From the early days when we introduced the PageRank algorithm, understanding the quality of web content was what set Google apart from other search engines.


But people often ask: What do you mean by quality, and how do you figure out how to ensure that the information people find on Google is reliable?


A simple way to think about it is that there are three key elements to our approach to information quality:


  • First, we fundamentally design our ranking systems to identify information that people are likely to find useful and reliable. 

  • To complement those efforts, we also have developed a number of Search features that not only help you make sense of all the information you’re seeing online, but that also provide direct access to information from authorities—like health organizations or government entities. 

  • Finally, we have policies for what can appear in Search features to make sure that we’re showing high quality and helpful content.


With these three approaches, we’re able to continue to improve Search and raise the bar on quality to deliver a trusted experience for people around the world. Let’s take a closer look at how we approach each of these areas.


Orienting our ranking systems around quality 

To understand what results are most relevant to your query, we have a variety of language understanding systems that aim to match the words and concepts in your query with related information in our index. This ranges from systems that understand things like misspellings or synonyms, to more advanced AI-based systems like our BERT-based language capabilities that can understand more complex, natural-language queries. 


Updates to our language understanding systems certainly make Search results more relevant and improve the experience overall. But when it comes to high-quality, trustworthy information, even with our advanced information understanding capabilities, search engines like Google do not understand content the way humans do. We often can’t tell from the words or images alone if something is exaggerated, incorrect, low-quality or otherwise unhelpful.


Instead, search engines largely understand the quality of content through what are commonly called “signals.” You can think of these as clues about the characteristics of a page that align with what humans might interpret as high quality or reliable. For example, the number of quality pages that link to a particular page is a signal that a page may be a trusted source of information on a topic.
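The link signal mentioned above is the intuition behind the original PageRank algorithm. The sketch below is a minimal, illustrative power-iteration version of that idea, not Google's production system, which combines many more signals.

```python
# A minimal power-iteration PageRank, illustrating the link-based
# quality signal described above. Real ranking uses many more signals.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline share (the "random jump").
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy web: "a" is linked to by both other pages, so it scores highest.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(graph)
```

Pages that attract links from other well-linked pages accumulate rank, which is what makes inbound links a usable clue about trust.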


We consider a variety of other quality signals, and to understand if our mixture of quality signals is working, we run a lot of tests. We have more than 10,000 search quality raters, people who collectively perform millions of sample searches and rate the quality of the results according to how well they measure up against what we call E-A-T: Expertise, Authoritativeness and Trustworthiness. 


Raters, following instructions anyone can read in our Search Quality Rater Guidelines, evaluate results for sample queries and assess how well the pages listed appear to demonstrate these characteristics of quality.


We recently explained the search rater process in more depth, but it’s worth noting again the ratings we receive are not used directly in our ranking algorithms. Instead, ratings provide data that, when taken in aggregate, help us measure how well our systems are working to deliver quality content that’s aligned with how people—across the country and around the world—evaluate information. This data helps us to improve our systems and ensure we’re delivering high quality results.


For topics where quality information is particularly important—like health, finance, civic information, and crisis situations—we place an even greater emphasis on factors related to expertise and trustworthiness. We’ve learned that sites that demonstrate authoritativeness and expertise on a topic are less likely to publish false or misleading information, so if we can build our systems to identify signals of those characteristics, we can continue to provide reliable information. The design of these systems is our greatest defense against low-quality content, including potential misinformation, and is work that we’ve been investing in for many years.


Info from experts, right in Search

In most cases, our ranking systems do a very good job of making it easy to find relevant and reliable information from the open web, particularly for topics like health, or in times of crisis. But in these areas, we also develop features to make information from authoritative organizations like local governments, health agencies and elections commissions available directly on Search.


For example, we’ve long had knowledge panels in Search with information about health conditions and symptoms, vetted by medical experts. More recently, we saw a significant increase in people searching for information about unemployment benefits, so we worked with administrative agencies to highlight details about eligibility and how to access this civic service. And for many years, we’ve offered features that help you find out how to vote and where your polling place is. Through the Google Civic Information API, we help other sites and services make this information available across the web. This type of information is not always easy to find, especially in rapidly changing situations, so features like these help ensure people get critical guidance when they need it most.


Helping you understand information you see

For many searches, people aren’t necessarily looking for a quick fact, but rather to understand a more complex topic. We also know that people come to Search having heard information elsewhere, with the aim of seeing what others are saying to form their own opinion.


In these cases, we want to give people tools to make sense of the information they’re seeing online, to find reliable sources and explore the full picture about a topic. 


For example, we make it easy to spot fact checks in Search, News, and now in Google Images by displaying fact check labels. These labels come from publishers that use ClaimReview schema to mark up fact checks they have published. For years now we’ve offered Full Coverage on Google News and Search, helping people explore and understand how stories have evolved and explore different angles and perspectives.
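For readers curious what ClaimReview markup looks like, here is a hedged example of the JSON-LD structure publishers embed; the URL, organization and claim are invented, and publishers should consult the schema.org ClaimReview definition for the authoritative field list.

```python
import json

# Example ClaimReview structured data; all field values below are
# invented for illustration, but the field names are from schema.org.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.com/fact-check/sample",  # hypothetical
    "claimReviewed": "Example claim being checked",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "datePublished": "2020-09-01",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "alternateName": "False",  # the human-readable verdict
    },
}

# Publishers typically embed this as JSON-LD inside a <script> tag
# on the fact-check article itself.
jsonld = json.dumps(claim_review, indent=2)
```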


Protecting Search features through policies

We also offer more general Search features, like knowledge panels, featured snippets and Autocomplete, that highlight and organize information in unique ways or predict queries you might want to run. Because of the way these features highlight information in Search, we hold ourselves to a very high standard for quality and have guidelines around what content should appear in those spaces.


Within these features, we first and foremost design our automated ranking systems to show helpful content. But our systems aren’t always perfect. So if our systems fail to prevent policy-violating content from appearing, our enforcement team will take action in accordance with our policies. 


To learn more about how we approach policies for our search features, visit this post. And if you’re still looking for more details about Search, check out more past articles in our How Search Works series.


Source: Search


How insights from people around the world make Google Search better

Every Google search you do is one of billions we receive that day. In less than half a second, our systems sort through hundreds of billions of web pages to try and find the most relevant and helpful results available.


Because the web and people’s information needs keep changing, we make a lot of improvements to our search algorithms to keep up. Thousands per year, in fact. And we’re always working on new ways to make our results more helpful, whether it’s a new feature or bringing new language understanding capabilities to Search.


The improvements we make go through an evaluation process designed so that people around the world continue to find Google useful for whatever they’re looking for. Here are some ways that insights and feedback from people around the world help make Search better.


Our research team at work

Changes that we make to Search are aimed at making it easier for people to find useful information, but depending on their interests, what language they speak, and where they are in the world, different people have different information needs. It’s our mission to make information universally accessible and useful, and we are committed to serving all of our users in pursuit of that goal.


This is why we have a research team whose job it is to talk to people all around the world to understand how Search can be more useful. We invite people to give us feedback on different iterations of our projects and we do field research to understand how people in different communities access information online.


For example, we’ve learned over the years about the unique needs and technical limitations that people in emerging markets have when accessing information online. So we developed Google Go, a lightweight search app that works well with less powerful phones and less reliable connections. On Google Go, we’ve also introduced uniquely helpful features, including one that lets you listen to web pages out loud, which is particularly useful for people learning a new language or who may be less comfortable with reading long text. Features like these would not be possible without insights from the people who will ultimately use them.


Search quality raters

A key part of our evaluation process is getting feedback from everyday users about whether our ranking systems and proposed improvements are working well. But what do we mean by “working well”? We publish publicly available rater guidelines that describe in great detail how our systems intend to surface great content. These guidelines are more than 160 pages long, but if we had to boil it down to just a phrase, we like to say that Search is designed to return relevant results from the most reliable sources available.


Our systems use signals from the web itself—like where words in your search appear on web pages, or how pages link to one another on the web—to understand what information is related to your query and whether it’s information that people tend to trust. But notions of relevance and trustworthiness are ultimately human judgments, so to measure whether our systems are in fact understanding these correctly, we need to gather insights from people.


To do this, we have a group of more than 10,000 people all over the world we call “search quality raters.” Raters help us measure how people are likely to experience our results. They provide ratings based on our guidelines and represent real users and their likely information needs, using their best judgment to represent their locale. These people study and are tested on our rater guidelines before they can begin to provide ratings.


How rating works

Here’s how a rater task works: we generate a sample of queries (say, a few hundred). A group of raters will be assigned this set of queries, and they’re shown two versions of results pages for those searches. One set of results is from the current version of Google, and the other set is from an improvement we’re considering.


Raters review every page listed in the results set and evaluate that page against the query, based on our rater guidelines. They evaluate whether those pages meet the information needs based on their understanding of what that query was seeking, and they consider things like how authoritative and trustworthy that source seems to be on the topic in the query. To evaluate things like expertise, authoritativeness, and trustworthiness—sometimes referred to as “E-A-T”—raters are asked to do reputational research on the sources.


Here’s what that looks like in practice: imagine the sample query is “carrot cake recipe.” The results set may include articles from recipe sites, food magazines, food brands and perhaps blogs. To determine if a webpage meets their information needs, a rater might consider how easy the cooking instructions are to understand, how helpful the recipe is in terms of visual instructions and imagery, and whether there are other useful features on the site, like a shopping list creator or calculator for recipe doubling. 


To understand if the author has subject matter expertise, a rater would do some online research to see if the author has cooking credentials, has been profiled or referenced on other food websites, or has produced other great content that has garnered positive reviews or ratings on recipe sites. Basically, they do some digging to answer questions like: is this page trustworthy, and does it come from a site or author with a good reputation?  
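The side-by-side comparison described above boils down to aggregating rater scores for the two result sets. The sketch below is a hypothetical simplification: the 1-to-5 scores and the plain averaging are assumptions for illustration, not Google's actual rating scale or statistics.

```python
from statistics import mean

# Hypothetical sketch: compare rater scores for two result sets
# (current Search vs. a proposed change) over a query sample.
# The 1-5 ratings and simple averaging are invented for illustration.

def side_by_side(ratings_control, ratings_experiment):
    """Each argument maps query -> list of ratings from several raters."""
    control = mean(r for scores in ratings_control.values() for r in scores)
    experiment = mean(r for scores in ratings_experiment.values() for r in scores)
    return experiment - control  # > 0 suggests the change helps

control = {"carrot cake recipe": [4, 4, 3], "zoos": [5, 4, 4]}
experiment = {"carrot cake recipe": [4, 5, 4], "zoos": [5, 5, 4]}
delta = side_by_side(control, experiment)
# delta > 0 here: raters preferred the experimental result sets
```

Only the aggregate delta matters for the decision; no single page's rating feeds back into ranking, as the next section explains.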


Ratings are not used directly for search ranking

Once raters have done this research, they then provide a quality rating for each page. It’s important to note that this rating does not directly impact how this page or site ranks in Search. Nobody is deciding that any given source is “authoritative” or “trustworthy.” In particular, pages are not assigned ratings as a way to determine how well to rank them. Indeed, that would be an impossible task and a poor signal for us to use. With hundreds of billions of pages that are constantly changing, there’s no way humans could evaluate every page on a recurring basis.


Instead, ratings are a data point that, when taken in aggregate, helps us measure how well our systems are working to deliver great content that’s aligned with how people—across the country and around the world—evaluate information.


Last year alone, we did more than 383,605 search quality tests and 62,937 side-by-side experiments with our search quality raters to measure the quality of our results and help us make more than 3,600 improvements to our search algorithms. 


In-product experiments

Our research and rater feedback isn’t the only feedback we use when making improvements. We also need to understand how a new feature will work when it’s actually available in Search and people are using it as they would in real life. To make sure we’re able to get these insights, we test how people interact with new features through live experiments.


They’re called “live” experiments because they’re actually available to a small proportion of randomly selected people using the current version of Search. To test a change, we will launch a feature to a small percentage of all queries we get, and we look at a number of different metrics to measure the impact.


Did people click or tap on the new feature? Did most people just scroll past it? Did it make the page load slower? These insights can help us understand quite a bit about whether a new feature or change is helpful and if people will actually use it.
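A common way to run this kind of live experiment is deterministic traffic bucketing: hash a stable identifier so a small, consistent slice of traffic sees the new feature. The sketch below is an assumption about how such bucketing can work in general, not a description of Google's actual experiment infrastructure.

```python
import hashlib

# Illustrative sketch of assigning a small, stable slice of traffic to
# an experiment arm; the 1% slice and hashing scheme are assumptions.

def bucket(user_id, experiment_name, percent=1.0):
    """Deterministically assign roughly `percent`% of users to the experiment."""
    key = f"{experiment_name}:{user_id}".encode()
    h = int(hashlib.sha256(key).hexdigest(), 16) % 10000
    return "experiment" if h < percent * 100 else "control"

# Deterministic: the same user always lands in the same arm,
# so their experience stays consistent for the experiment's duration.
assert bucket("user-42", "new-feature") == bucket("user-42", "new-feature")

arms = [bucket(f"user-{i}", "new-feature") for i in range(10000)]
share = arms.count("experiment") / len(arms)  # close to 0.01
```

Metrics like clicks, scroll-past rate and page latency are then compared between the two arms before deciding whether to launch.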


In 2019, we ran more than 17,000 live traffic experiments to test out new features and improvements to Search. If you compare that with how many launches actually happened (around 3,600, remember?), you can see that only the best and most useful improvements make it into Search.


Always improving

While our search results will never be perfect, these research and evaluation processes have proven to be very effective over the past two decades. They allow us to make frequent improvements and ensure that the changes we make represent the needs of people around the world coming to Search for information.


Source: Search


Why keeping spam out of Search is so important

When you come to Search with a query in mind, you trust that Google will find a number of relevant and helpful pages to choose from. We put a lot of time and effort into improving our search systems to ensure that’s the case.


Working on improvements to our language understanding and other search systems is only part of why Google remains so helpful. Equally important is our ability to fight spam. Without our spam-fighting systems and teams, the quality of Search would be reduced--it would be a lot harder to find helpful information you can trust. 


The more that low-quality pages spam their way into the top results, the greater the chances that people could get tricked by phony sites trying to steal personal information or infect their computers with malware. If you’ve ever gone into your spam folder in Gmail, that’s akin to what Search results would be like without our spam detection capabilities.


Every year we publish a Webspam Report that details the efforts behind reducing spam in your search results and supporting the community of site creators whose websites we help you discover. To coincide with this year’s report, we wanted to give some additional context for why spam-fighting is so important, and how we go about it.


Defining “spam”

We’ve always designed our systems to prioritize the most relevant and reliable webpages at the top. We publicly describe the factors that go into our ranking systems so that web creators can understand the types of content that our systems will recognize as high quality.

We define “spam” as the use of techniques that attempt to mimic these signals without actually delivering on the promise of high-quality content, or other tactics that might prove harmful to searchers.

Our Webmaster Guidelines detail the types of spammy behavior that are discouraged and can lead to a lower ranking: everything from scraping pages and keyword stuffing to participating in link schemes and implementing sneaky redirects.
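To make one of these tactics concrete, keyword stuffing leaves a measurable fingerprint: an unusually high density of a single term. The naive check below is purely illustrative; real spam detection relies on far more sophisticated signals than a single ratio.

```python
# A naive, illustrative keyword-density check. Real spam detection
# combines many signals; this only shows the basic intuition.

def keyword_density(text, keyword):
    """Fraction of words in `text` equal to `keyword` (case-insensitive)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

stuffed = "cheap pizza cheap pizza best cheap pizza deals cheap pizza"
normal = "our restaurant serves wood-fired pizza with fresh ingredients"

# The stuffed page repeats "pizza" in 4 of 10 words; the normal page
# mentions it once in 8 words.
assert keyword_density(stuffed, "pizza") > keyword_density(normal, "pizza")
```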


Fighting spam is a never-ending battle, a constant game of cat-and-mouse against existing and new spammy behaviors. This threat of spam is why we’ve continued to be very careful about how much detail we reveal about how our systems work. However, we do share a lot, including resources that provide transparency about the positive behaviors creators should follow to create great information and gain visibility and traffic from Search.


Spotting the spammers

The first step of fighting spam is detection. So how do we spot it? We employ a combination of manual reviews by our analysts and a variety of automated detection systems.


We can’t share the specific techniques we use for spam fighting because that would weaken our protections and ultimately make Search much less useful. But we can describe the kinds of spammy behavior that can be detected systematically.


After all, a low-quality page might include the right words and phrases that match what you searched for, so our language systems wouldn’t be able to detect unhelpful pages from content alone. The telltale signs of spam are in the behavioral tactics used and how they try to manipulate our ranking systems against our Webmaster Guidelines.


Our spam-fighting systems detect these behaviors so we can tackle this problem at scale. In fact, the scale is huge. Last year, we observed that more than 25 billion of the pages we find each day are spammy. (If each of those pages were a page in a book, that would be more than 20 million copies of “War & Peace” each day!) This leads to an important question: once we find all this spam, what happens next?
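The book comparison above checks out as back-of-the-envelope arithmetic. The page count for “War and Peace” (about 1,225 pages) is an assumption based on common print editions.

```python
# Back-of-the-envelope check of the "War and Peace" comparison.
# The ~1,225-page count is an assumption from common print editions.
spammy_pages_per_day = 25_000_000_000
war_and_peace_pages = 1_225

copies = spammy_pages_per_day / war_and_peace_pages
# roughly 20 million copies' worth of spammy pages per day
```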


Stopping the spammers

When it comes to how we handle spam, it depends on the type of spam and how severe the violation is. For most of the 25 billion spammy pages detected each day, we’re able to automatically recognize their spammy behavior and ensure they don’t rank well in our results. But that’s not the case for everything. 


As with anything, our automated systems aren’t perfect. That’s why we also supplement them with human review, a team that does its own spam sleuthing to understand if content or sites are violating our guidelines. Often, this human review process leads to better automated systems. We look to understand how that spam got past our systems and then work to improve our detection, so that we catch the particular case and automatically detect many other similar cases overall.


In other cases, we may issue what’s called a manual action, when one of our human spam reviewers finds content that isn’t complying with our Webmaster Guidelines. This can lead to a demotion or removal of spam content from our search results, especially if it’s deemed to be particularly harmful, like a hacked site that has pages distributing malware to visitors.


When a manual action takes place, we send a notice to the site owner via Search Console, which webmasters can see in their Manual Actions Report. We send millions of these notices each year, and it gives site owners the opportunity to fix the issue and submit for reconsideration. After all, not all “spam” is purposeful, so if a site owner has inadvertently tried tactics that run afoul of our guidelines, or if their site has been compromised by hackers, we want to ensure they can make things right and have their useful information again available to people in Search. This brings us back to why we invest so much effort in fighting spam: so that Search can bring you good, helpful and safe content from sites across the web.


Discovering great information

It’s unfortunate that there’s so much spam, and so much effort that has to be spent fighting it. But that shouldn’t overshadow the fact there are millions upon millions of businesses, publishers and websites with great content for people to discover. We want them to succeed, and we provide tools, support and guidance to help.


We publish our own Search Engine Optimization Starter Guide to provide tips on how to succeed with appropriate techniques in Search. Our Search Relations team conducts virtual office hours, monitors our Webmaster Community forums, and (when possible!) hosts and participates in events around the world to help site creators improve their presence in Search. We provide a variety of support resources, as well as the Search Console toolset to help creators with search.


We’d also encourage anyone to visit our How Google Search Works site, which shares more generally about how our systems work to generate great search results for everyone.


Source: Search