Author Archives: Danny Sullivan

A reintroduction to our Knowledge Graph and knowledge panels

Sometimes Google Search will show special boxes with information about people, places and things. We call these knowledge panels. They’re designed to help you quickly understand more about a particular subject by surfacing key facts and to make it easier to explore a topic in more depth. Information within knowledge panels comes from our Knowledge Graph, which is like a giant virtual encyclopedia of facts. In this post, we’ll share more about how knowledge panels are automatically generated, how data for the Knowledge Graph is gathered and how we monitor and react to reports of incorrect information.

What’s a knowledge panel?

Knowledge panels are easy to recognize when searching on desktop, where they appear to the right of the search results:
[Image: a knowledge panel displayed to the right of desktop search results]

Our systems aim to show the most relevant and popular information for a topic within a knowledge panel. Because no two topics are the same, exactly what is shown in a knowledge panel will vary. But typically, they’ll include:

  • Title and short summary of the topic
  • A longer description of the subject
  • A picture or pictures of the person, place or thing
  • Key facts, such as when a notable figure was born or where something is located
  • Links to social profiles and official websites

Knowledge panels might also include special information related to particular topics. For example:

  • Songs from musical artists
  • Upcoming episodes from TV shows
  • Rosters of sports teams

Sources of information for the Knowledge Graph

The information about an “entity”—a person, place or thing—in our knowledge panels comes from our Knowledge Graph, which was launched in 2012. It’s a system that understands facts and information about entities from materials shared across the web, as well as from open source and licensed databases. It has amassed over 500 billion facts about five billion entities.
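As a loose illustration only, and not a claim about how the Knowledge Graph is actually implemented, an entity-fact store can be pictured as entities keyed by ID, each carrying typed facts and links to related entities. All IDs and values below are invented:

```python
# A toy knowledge-graph fragment: entities keyed by ID, each holding
# typed facts and typed relationships to other entities. All IDs and
# values here are invented for illustration.
knowledge_graph = {
    "/entity/marie_curie": {
        "name": "Marie Curie",
        "type": "Person",
        "facts": {"born": "1867-11-07", "field": "Physics and Chemistry"},
        "related": {"spouse": "/entity/pierre_curie"},
    },
    "/entity/pierre_curie": {
        "name": "Pierre Curie",
        "type": "Person",
        "facts": {"born": "1859-05-15"},
        "related": {"spouse": "/entity/marie_curie"},
    },
}

def lookup(entity_id: str, fact: str):
    """Return a single fact about an entity, or None if unknown."""
    return knowledge_graph.get(entity_id, {}).get("facts", {}).get(fact)

print(lookup("/entity/marie_curie", "born"))  # -> 1867-11-07
```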


Wikipedia is a commonly cited source, but it’s not the only one. We draw from hundreds of sources across the web, including licensed data that appears in knowledge panels for music, sports and TV. We work with medical providers to create carefully vetted content for knowledge panels on health issues. We also draw from special coding that content owners can add to their pages, such as markup indicating upcoming events.
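The “special coding” here is structured data markup such as schema.org. As a rough, hedged sketch (the event details below are invented; the @type and property names follow schema.org’s published Event type), this is the kind of JSON-LD a site owner might embed in a page:

```python
import json

# A minimal schema.org "Event" in JSON-LD, the kind of markup a site
# owner can embed to describe an upcoming event. The event itself is
# fictional; the @context/@type/property names follow schema.org.
event_markup = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Example Band: Summer Tour",
    "startDate": "2025-07-21T19:30",
    "location": {
        "@type": "Place",
        "name": "Example Arena",
        "address": "123 Main St, Springfield",
    },
}

# Serialized, this would sit inside a <script type="application/ld+json"> tag.
print(json.dumps(event_markup, indent=2))
```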

On mobile, multiple knowledge panels provide facts

When we first launched knowledge panels, most search activity happened on desktop, where there was room to easily show knowledge panels alongside search results. Today, most search activity happens on mobile, where screen size doesn’t allow for a side-by-side display.


Because of this, information from the Knowledge Graph is often not presented through a single knowledge panel on mobile. Instead, one or more knowledge panels may appear interspersed among the overall results.

[Image: knowledge panels interspersed among mobile search results]

How we work to improve the Knowledge Graph

Inaccuracies in the Knowledge Graph can occasionally happen. Just as we have automatic systems that gather facts for the Knowledge Graph, we also have automatic systems designed to prevent inaccuracies from appearing. However, as with anything, the systems aren’t perfect. That’s why we also accept reports from anyone about issues.


Selecting the “Feedback” link at the bottom of a knowledge panel or the three dots at the top of one on mobile brings up options to provide feedback to us:

[Image: the feedback options for a knowledge panel]

We analyze feedback like this to understand how any actual inaccuracies got past our systems, so that we can make improvements across the Knowledge Graph overall. We also remove facts that violate our policies when they come to our attention, prioritizing issues relating to public interest topics such as civic, medical, scientific and historical information, or where there’s a risk of serious and immediate harm.

How entities can claim and suggest changes to a knowledge panel

Many knowledge panels can be “claimed” by the subject they are about, such as a person or a company. The claiming process—what we call getting verified—allows subjects to provide feedback directly to us about potential changes or to suggest things like a preferred photo. For local businesses, there’s a separate process of claiming that operates through Google My Business. This enables local businesses to manage special elements in their knowledge panels, such as opening hours and contact phone numbers.

For more information about topics like this, check out our How Search Works blog series and website.

Source: Search


How Google autocomplete works in Search

Autocomplete is a feature within Google Search designed to make it faster to complete searches that you’re beginning to type. In this post—the second in a series that goes behind-the-scenes about Google Search—we’ll explore when, where and how autocomplete works.

Using autocomplete

Autocomplete is available almost anywhere you find a Google search box, including the Google home page, the Google app for iOS and Android, the quick search box within Android and the “Omnibox” address bar within Chrome. Just begin typing, and you’ll see predictions appear:

[Image: autocomplete predictions appearing for the typed letters “san f”]

In the example above, you can see that typing the letters “san f” brings up predictions such as “san francisco weather” or “san fernando mission,” making it easy to finish entering your search on these topics without typing all the letters.

Sometimes, we’ll also help you complete individual words and phrases, as you type:

[Image: autocomplete completing an individual word as it’s typed]

Autocomplete is especially useful for those using mobile devices, making it easy to complete a search on a small screen where typing can be hard. For both mobile and desktop users, it’s a huge time saver all around. How much? Well:

  • On average, it reduces typing by about 25 percent
  • Cumulatively, we estimate it saves over 200 years of typing time per day. Yes, per day!
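To show how an estimate of this shape can be assembled, here’s a back-of-envelope sketch. Every input below (search volume, characters saved, typing speed) is an assumption invented for illustration, not a figure from Google; the point is only the arithmetic:

```python
# Back-of-envelope: how "years of typing saved per day" could be derived.
# Every input here is an assumption for illustration, not Google data.
searches_per_day = 3.5e9        # assumed daily search volume
chars_saved_per_search = 5      # assumed: ~25% of a ~20-character query
typing_speed_cps = 3.3          # assumed: roughly 40 words per minute

seconds_saved = searches_per_day * chars_saved_per_search / typing_speed_cps
years_saved_per_day = seconds_saved / (365 * 24 * 3600)
print(f"{years_saved_per_day:.0f} years of typing saved per day")  # ~168
```

Even with these made-up inputs, the result lands in the same ballpark as the figure above, which is the point of the exercise.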

Predictions, not suggestions

You’ll notice we call these autocomplete “predictions” rather than “suggestions,” and there’s a good reason for that. Autocomplete is designed to help people complete a search they were intending to do, not to suggest new types of searches to be performed. These are our best predictions of the query you were likely to continue entering.

How do we determine these predictions? We look at the real searches that happen on Google and show common and trending ones that match the characters entered, taking into account your location and previous searches as well.
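As a highly simplified sketch of just the prefix-matching core (the query log and counts below are invented, and the real system also weighs trends, location and history), predictions can be modeled as a popularity-ranked match over past queries:

```python
# Toy autocomplete: rank stored queries that share the typed prefix by
# popularity. The query log and counts are invented for illustration;
# the real system also weighs freshness, location and personal history.
query_log = {
    "san francisco weather": 9000,
    "san francisco giants": 7500,
    "san fernando mission": 4200,
    "san fernando valley": 3900,
}

def predict(prefix: str, k: int = 3) -> list[str]:
    matches = [q for q in query_log if q.startswith(prefix)]
    return sorted(matches, key=query_log.get, reverse=True)[:k]

print(predict("san f"))   # San Francisco queries rank first
print(predict("san fe"))  # the extra letter leaves only San Fernando queries
```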

The predictions change in response to new characters being entered into the search box. For example, going from “san f” to “san fe” causes the San Francisco-related predictions shown above to disappear, with those relating to San Fernando then appearing at the top of the list:

[Image: predictions updating to San Fernando topics after typing “san fe”]

That makes sense. It becomes clear from the additional letter that someone isn’t doing a search that would relate to San Francisco, so the predictions change to something more relevant.

Why some predictions are removed

The predictions we show are common and trending ones related to what someone begins to type. However, Google removes predictions that are against our autocomplete policies, which bar:


  • Sexually explicit predictions that are not related to medical, scientific, or sex education topics
  • Hateful predictions against groups and individuals on the basis of race, religion or several other demographics
  • Violent predictions
  • Dangerous and harmful activity in predictions

In addition to these policies, we may remove predictions that we determine to be spam or that are closely associated with piracy. We may also remove predictions in response to valid legal requests.

A guiding principle here is that autocomplete should not shock users with unexpected or unwanted predictions.

This principle and our autocomplete policies are also why popular searches as measured in our Google Trends tool might not appear as predictions within autocomplete. Google Trends is designed as a way for anyone to deliberately research the popularity of search topics over time. Autocomplete removal policies are not used for Google Trends.
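To make the shape of this filtering concrete, here’s a simplified sketch. The blocklist is an invented stand-in; real enforcement surely relies on classifiers and human review rather than a word list:

```python
# Sketch: a policy pass over candidate predictions before display.
# The blocklist below is an invented stand-in; real enforcement
# involves classifiers and human review, not a simple word list.
POLICY_BLOCKLIST = {"example-slur", "example-violent-phrase"}

def violates_policy(prediction: str) -> bool:
    return any(term in prediction for term in POLICY_BLOCKLIST)

def filter_predictions(candidates: list[str]) -> list[str]:
    """Drop candidates that trip a policy check; show only what remains."""
    return [p for p in candidates if not violates_policy(p)]

shown = filter_predictions(["harmless query", "query with example-slur"])
print(shown)  # ['harmless query']
```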

Why inappropriate predictions happen

We have systems in place designed to automatically catch inappropriate predictions and not show them. However, we process billions of searches per day, which in turn means we show many billions of predictions each day. Our systems aren’t perfect, and inappropriate predictions can get through. When we’re alerted to these, we strive to quickly remove them.

It’s worth noting that while some predictions may seem odd, shocking or cause a “Who would search for that!” reaction, looking at the actual search results they generate sometimes provides needed context. As we explained earlier this year, the search results themselves may make it clearer that a prediction doesn’t necessarily reflect an awful opinion some may hold; it may instead come from people seeking specific content that isn’t problematic. It’s also important to note that predictions aren’t search results and don’t limit what you can search for.

Regardless, even if the context behind a prediction is good, and even if a prediction is infrequent, it’s still an issue if the prediction is inappropriate. It’s our job to reduce these as much as possible.

Our latest efforts against inappropriate predictions

To better deal with inappropriate predictions, we launched a feedback tool last year and have been using that data to make improvements to our systems. In the coming weeks, we’ll put expanded removal policies into force covering hate and violence.

Our existing policy protecting groups and individuals against hateful predictions only covers cases involving race, ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation or gender identity. Our expanded policy for search will cover any case where predictions are reasonably perceived as hateful or prejudiced toward individuals and groups, without requiring that particular demographics be involved.

With the greater protections for individuals and groups, there may be exceptions where compelling public interest allows for a prediction to be retained. With groups, predictions might also be retained if there’s clear “attribution of source” indicated. For example, predictions for song lyrics or book titles that might be sensitive may appear, but only when combined with words like “lyrics” or “book” or other cues that indicate a specific work is being sought.

As for violence, our policy will expand to cover removal of predictions which seem to advocate, glorify or trivialize violence and atrocities, or which disparage victims.

How to report inappropriate predictions

Our expanded policies will roll out in the coming weeks. We hope that the new policies, along with other efforts with our systems, will improve autocomplete overall. But with billions of predictions happening each day, we know that we won’t catch everything that’s inappropriate.

Should you spot something, you can report using the “Report inappropriate predictions” link we launched last year, which appears below the search box on desktop:

[Image: the “Report inappropriate predictions” link below the desktop search box]

For those on mobile or using the Google app for Android, long press on a prediction to get a reporting option. Those using the Google app on iOS can swipe to the left to get the reporting option.

By the way, if we take action on a reported prediction that violates our policies, we don’t just remove that particular prediction. We expand our action to cover closely related predictions as well. Doing this work means an inappropriate prediction might not disappear immediately, but spending a little extra time lets us provide a broader solution.
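As a purely illustrative sketch of what “expanding” a removal could look like (the normalization rules here are invented, not Google’s actual matching), one approach is to act on close variants of a reported prediction rather than only the exact string:

```python
import re

# Sketch: remove not just a reported prediction but near-variants of it.
# The normalization here (lowercasing, collapsing whitespace, dropping a
# trailing "s") is an invented, illustrative stand-in for real matching.
def normalize(prediction: str) -> str:
    words = re.sub(r"\s+", " ", prediction.lower().strip()).split()
    return " ".join(w.rstrip("s") for w in words)

def remove_related(reported: str, all_predictions: list[str]) -> list[str]:
    target = normalize(reported)
    return [p for p in all_predictions if normalize(p) != target]

pool = ["Bad Prediction", "bad  predictions", "fine prediction"]
print(remove_related("bad prediction", pool))  # ['fine prediction']
```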

Making predictions richer and more useful

As noted above, our predictions appear in search boxes ranging from desktop to mobile to the Google app. The appearance, the order and some of the predictions themselves can vary accordingly.

When you’re using Google on desktop, you’ll typically see up to 10 predictions. On a mobile device, you’ll typically see up to five, as there’s less screen space.

On mobile or Chrome on desktop, we may show you information like dates, the local weather, sports information and more below a prediction:

[Image: a prediction shown with extra information such as dates and local weather]

In the Google app, you may also notice that some of the predictions have little logos or images next to them. That’s a sign that we have special Knowledge Graph information about that topic, structured information that’s often especially useful to mobile searchers:

[Image: predictions with logos or images indicating Knowledge Graph topics]

Predictions also will vary because the list may include any related past searches you’ve done. We show these to help you quickly get back to a previous search you may have conducted:

[Image: past searches appearing among the predictions]

You can tell when a past search is appearing because on desktop, you’ll see the word “Remove” next to the prediction. Click on that word if you want to delete the past search.

On mobile, you’ll see a clock icon on the left and an X on the right. Tap the X to delete a past search. In the Google app, you’ll also see a clock icon; to remove a prediction, long press on it on Android or swipe left on iOS to reveal a delete option.

You can also delete your past searches in bulk, by particular dates or by matching terms, using My Activity in your Google Account.

More about autocomplete

We hope this post has helped you understand more about autocomplete, including how we’re working to reduce inappropriate predictions and to increase the usefulness of the feature. For more, you can also see our help page about autocomplete.

You can also check out the recent Wired video interview below, where our vice president of search, Ben Gomes, and the product manager of autocomplete, Chris Haire, answer questions about autocomplete that came from…autocomplete!

A reintroduction to Google’s featured snippets

Sometimes when you do a search, you’ll find that there’s a descriptive box at the top of Google’s results. We call this a “featured snippet.” In this post—the first in a new series going behind-the-scenes on how Google Search works—we’ll explore when, where and why we provide featured snippets.

What is a featured snippet?

Let’s start with a look at a featured snippet, in this case one that appears for a search on “Why is the sky blue?”

[Image: featured snippet for the search “Why is the sky blue?”]

We call these featured snippets because, unlike our regular web listings, the page’s description—what we call a “snippet”—comes first. We’re featuring the snippet, hence the name. We also generate featured snippets differently from our regular snippets, so that they’re easier to read.

We display featured snippets in search when we believe this format will help people more easily discover what they’re seeking, both from the description and when they click on the link to read the page itself. It’s especially helpful for those on mobile or searching by voice.

Featured snippets enhance the search experience by making it easier to access information from good sources, big and small.

Featured snippets aren’t just for written content. Our recently launched video featured snippets jump you directly to the right place in a video, such as for how to braid your own hair:

[Image: a video featured snippet jumping to the right place in a hair-braiding video]

Featured snippets help with mobile and voice search

Mobile search traffic has surpassed desktop traffic worldwide. And with the growth in voice-activated digital assistants, more people are doing voice queries. In these cases, the traditional "10 blue links" format doesn't work as well, making featured snippets an especially useful format.

Of course, we continue to show regular listings in response to searches along with featured snippets. That’s because featured snippets aren’t meant as a sole source of information. They’re part of an overall set of results we provide, giving people information from a wide range of sources.

People click on featured snippets to learn more

When we introduced featured snippets in January 2014, there were some concerns that they might cause publishers to lose traffic. What if someone learns all they need to know from the snippet and doesn’t visit the source site?

It quickly became clear that featured snippets do indeed drive traffic. That’s why publishers share tips on how to increase the chances of having a page appear as one: they recognize that being featured in this way is a traffic driver.

When it comes to spoken featured snippets, we cite the source page in the spoken result and provide a link to the page within the Google Home app, so people can click and learn more:

[Image: a spoken featured snippet’s source page linked within the Google Home app]

We recognize that featured snippets have to work in a way that helps support the sources that ultimately make them possible. That’s why we always take publishers into account when we make updates to this feature.

Working to improve featured snippets

The vast majority of featured snippets work well, as we can tell from usage stats and from what our search quality raters (people paid to evaluate the quality of our results) report to us. A third-party test last year by Stone Temple found a 97.4 percent accuracy rate for featured snippets and related formats like Knowledge Graph information.

Because featured snippets are so useful, especially with mobile and voice-only searches, we’re working hard to smooth out bumps with them as they continue to grow and evolve.

Last year, we took deserved criticism for featured snippets that said things like “women are evil” or that former U.S. President Barack Obama was planning a coup. We failed in these cases because we didn’t weigh the authoritativeness of results strongly enough for such rare and fringe queries.

To improve, we launched an effort that included updates to our Search Quality Rater Guidelines to provide more detailed examples of low-quality webpages for raters to appropriately flag, which can include misleading information, unexpected offensive results, hoaxes and unsupported conspiracy theories. This work has helped our systems better identify when results are prone to low-quality content. If detected, we may opt not to show a featured snippet.

Even when a featured snippet has good content, we occasionally appear to goof because it might not seem like the best response to a query. On the face of it, it might not appear to answer the query at all.

For example, a search for “How did the Romans tell time at night” until recently suggested sundials, which would be useless in the dark:

[Image: side-by-side featured snippets for “How did the Romans tell time at night”]
Left: Until recently, a search for “How did the Romans tell time at night” resulted in a featured snippet suggesting sundials. Right: We now provide a better response: water clocks.

While the example above might give you a chuckle, we take issues like this seriously, as we do with any problems reported to us or that we spot internally. We study them and use those learnings to make improvements for featured snippets overall. In this case, it led to us providing a better response: water clocks.

When near-matches can be helpful

Another improvement we’re considering is to better communicate when we give you a featured snippet that’s not exactly what you searched for but close enough that it helps you get to the information you seek.

For example, the original “sundial” featured snippet above was actually a response for “How did Romans tell time.” We displayed this near-match then because we didn’t have enough confidence to show a featured snippet specifically about how Romans told time at night. We knew sundials were used by Romans to tell time generally, because so many pages discussed this. How they told time at night was less discussed, so we had less data to make a firm connection.
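In sketch form, this is a back-off: if no candidate for the full query clears a confidence bar, try a more general form of the query and mark the result as a near-match. The scores and threshold below are invented, not Google’s actual logic; real confidence comes from how much supporting content exists across the web:

```python
# Sketch: back off to a more general query when confidence is too low.
# The scores and threshold are invented for illustration.
CONFIDENCE = {
    "how did romans tell time at night": 0.35,  # little supporting content
    "how did romans tell time": 0.90,           # widely discussed
}
THRESHOLD = 0.7

def pick_snippet_query(query: str, fallback: str):
    """Return (query actually answered, is_near_match), or None for no snippet."""
    if CONFIDENCE.get(query, 0.0) >= THRESHOLD:
        return query, False
    if CONFIDENCE.get(fallback, 0.0) >= THRESHOLD:
        return fallback, True  # answer the broader question instead
    return None  # not confident enough to show any featured snippet

print(pick_snippet_query("how did romans tell time at night",
                         "how did romans tell time"))
```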

Showing a near-match may seem odd at first glance, but we know that in such cases, people often explore the source of a featured snippet and discover what they’re looking for. In this case, the page that the featured snippet originally came from did explain that Romans used water clocks to tell time at night. We just didn’t have enough confidence at the time to display that information as a featured snippet.

We’re considering increasing the use of a format we currently employ only in some limited situations, to make it clearer when we serve a near-match. For example, we might display "How did Romans tell time?" above the featured snippet, as illustrated in the mockup below:

[Image: mockup of a featured snippet labeled with the near-match question “How did Romans tell time?”]

Our testing and experiments will guide what we ultimately do here. We might not expand use of the format if our testing finds that people often inherently understand a near-match is being presented without the need for an explicit label.

Improving results by showing more than one featured snippet

Sometimes, a single featured snippet isn’t right for every question. For example, the best answer to “how to setup call forwarding” varies by carrier. That’s why we recently launched a feature that lets you interactively select a featured snippet specific to your situation. In the example below, you can see how it allows people to quickly locate solutions from various providers:

[Image: an interactive featured snippet with selectable call-forwarding instructions per carrier]

Another format coming soon is designed to help people better locate information by showing more than one featured snippet that’s related to what they originally searched for:

[Image: multiple related featured snippets shown together]

Showing more than one featured snippet may also eventually help in cases where you can get contradictory information when asking about the same thing but in different ways.

For instance, people who search for “are reptiles good pets” should get the same featured snippet as “are reptiles bad pets” since they are seeking the same information: how do reptiles rate as pets? However, the featured snippets we serve contradict each other.

[Image: contradictory featured snippets for “are reptiles good pets” and “are reptiles bad pets”]

This happens because sometimes our systems favor content that’s strongly aligned with what was asked. A page arguing that reptiles are good pets seems the best match for people who search about them being good. Similarly, a page arguing that reptiles are bad pets seems the best match for people who search about them being bad. We’re exploring solutions to this challenge, including showing multiple responses.
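One way to picture a possible direction (a sketch only, not a claim about Google’s solution; the antonym pairs and queries are invented) is to map oppositely framed queries onto one canonical question, so a balanced set of responses can be chosen once for both framings:

```python
# Sketch: map oppositely framed queries to one canonical question so
# both framings can receive the same, balanced treatment.
ANTONYMS = {"good": "bad", "bad": "good"}

def canonical_form(query: str) -> str:
    # Rewrite each polar word to the alphabetically-first of its pair,
    # so the "good" and "bad" framings collapse to the same key.
    words = [
        min(w, ANTONYMS[w]) if w in ANTONYMS else w
        for w in query.lower().split()
    ]
    return " ".join(words)

q1 = canonical_form("are reptiles good pets")
q2 = canonical_form("are reptiles bad pets")
print(q1 == q2)  # True: both framings map to the same canonical question
```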

"There are often legitimate diverse perspectives offered by publishers, and we want to provide users visibility and access into those perspectives from multiple sources,” Matthew Gray, the software engineer who leads the featured snippets team, told me.

Your feedback wanted

Featured snippets will never be absolutely perfect, just as search results overall will never be absolutely perfect. On a typical day, 15 percent of the queries we process have never been asked before. That’s just one of the challenges along with sifting through trillions of pages of information across the web to try and help people make sense of the world.

Last year, we made it easier to send us feedback in cases where a featured snippet warrants review. Just use the “feedback” link at the bottom of a featured snippet box. Your feedback, along with our own internal testing and review, helps us keep improving the quality of featured snippets.

[Image: the feedback link at the bottom of a featured snippet box]

We'll explore more about how Google Search works in future posts in this series. In the meantime, you can learn more on our Inside Google Search and How Search Works sites and follow @searchliaison on Twitter for ongoing updates.

Source: Search