Goodbye Google Webmasters, hello Google Search Central

Googlebot reading a book with a new spider friend

The history behind Google Webmasters

Merriam-Webster claims the first known use of the word "webmaster" was in 1993, years before Google even existed. However, the term is becoming archaic, and according to the data found in books, its use is in sharp decline. A user experience study we ran revealed that very few web professionals identify themselves as webmasters anymore. They're more likely to call themselves Search Engine Optimizer (SEO), online marketer, blogger, web developer, or site owner.

We're changing our name

In brainstorming our new name, we realized that there's not one term that perfectly summarizes the work people do on websites. To focus more on the topic that we talk about (Google Search), we're changing our name from "Google Webmasters Central" to "Google Search Central", both on our websites and on social media. Our goal is still the same; we aim to help people improve the visibility of their website on Google Search. The change will happen on most platforms in the next couple of days.

Centralizing help information to one site

To help people learn how to improve their website's visibility on Google Search, we're also consolidating our help documentation and blogs to one site.

Moving forward, the Search Console Help Center will contain only documentation related to using Search Console. It's also still the home of our help forum, newly renamed from "Webmasters Help Community" to "Google Search Central Community". The information related to how Google Search works, crawling and indexing, Search guidelines, and other Search-related topics are moving to our new site, which previously focused only on web developer documentation. The content move will happen over the next few days.

We will continue to create content for anyone who wants their websites to show up on Google Search, whether you're just getting started with SEO or you're an experienced web professional.

Consolidating the blogs

The blog that you're reading right now is also moving to our main site. However, we will wait one week to allow subscribers to read this last post on the old platform. Moving this blog, along with our 13 localized blogs, to one place brings the following benefits:

  • Easier discovery of related content (help documentation, localized blogs, and event information, all on one site)
  • Easier switching between languages (no more hunting for the localized blog URL)
  • A better platform that lets us maintain content, localize blog posts more easily, and format posts consistently

Going forward, all archived and new blog posts will appear on https://developers.google.com/search/blog. You don't need to take any action in order to keep getting updates from us; we will redirect the current set of RSS and email subscribers to the new blog URL.

Googlebot mascot gets a refresh

Our Googlebot mascot is also getting an upgrade. Googlebot's days of wandering the web solo come to a close as a new sidekick joins Googlebot in crawling the internet.

When we first met this curious critter, we wondered, "Is it really a spider?" After some observation, we noticed this spider bot hybrid can jump great distances and sees best when surrounded by green light. We think Googlebot's new best friend is a spider from the genus Phidippus, though it seems to also have bot-like characteristics. Googlebot's been trying out new nicknames for the little spider bot, but they haven't settled on anything yet. Maybe you can help?

As parting words, update your bookmarks and if you have any questions or comments, you can find us on Twitter and in our Google Search Central Help Community.

Timing for bringing page experience to Google Search

This past May, we announced that page experience signals would be included in Google Search ranking. These signals measure how users perceive the experience of interacting with a web page and contribute to our ongoing work to ensure people get the most helpful and enjoyable experiences from the web. In the past several months, we’ve seen a median 70% increase in the number of users engaging with Lighthouse and PageSpeed Insights, and many site owners using Search Console’s Core Web Vitals report to identify opportunities for improvement.

Today we’re announcing that the page experience signals in ranking will roll out in May 2021. The new page experience signals combine Core Web Vitals with our existing search signals, including mobile-friendliness, safe-browsing, HTTPS-security, and intrusive interstitial guidelines.

A diagram illustrating the components of Search’s signal for page experience.

The change for non-AMP content to become eligible to appear in the mobile Top Stories feature in Search will also roll out in May 2021. Any page that meets the Google News content policies will be eligible and we will prioritize pages with great page experience, whether implemented using AMP or any other web technology, as we rank the results.

In addition to the timing updates described above, we plan to test a visual indicator that highlights pages in search results that have great page experience.

A New Way of Highlighting Great Experiences in Google Search

We believe that providing information about the quality of a web page’s experience can be helpful to users in choosing the search result that they want to visit. On results, the snippet or image preview helps provide topical context for users to know what information a page can provide. Visual indicators on the results are another way to do the same, and we are working on one that identifies pages that have met all of the page experience criteria. We plan to test this soon; if the testing is successful, it will launch in May 2021, and we’ll share more details on our progress in the coming months.

The Tools Publishers Need for Improving Page Experience

To get ready for these changes, we have released a variety of tools that publishers can use to start improving their page experience. The first step is doing a site-wide audit of your pages to see where there is room for improvement. Search Console’s Core Web Vitals report gives you an overview of how your site is doing and a deep dive into issues. Once you’ve identified opportunities, PageSpeed Insights and Lighthouse can help you as you iterate on fixing any issues that you’ve uncovered. Head over to web.dev/vitals-tools for a roundup of all the tools you need to get started.
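
A quick way to fold the same field data into your own auditing scripts is the PageSpeed Insights API. The sketch below (Python, standard library only) queries the public v5 endpoint; the endpoint path and response field names reflect our best understanding of the API and may change, so treat them as assumptions and double-check the API documentation.

```python
# A minimal sketch of pulling Core Web Vitals field data from the
# PageSpeed Insights API (v5). Endpoint and response keys are assumptions
# based on the public API documentation; verify before relying on them.
import json
import urllib.parse
import urllib.request

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def fetch_core_web_vitals(page_url: str) -> dict:
    """Return the real-user (field) metrics PageSpeed Insights reports for a URL."""
    query = urllib.parse.urlencode({"url": page_url, "strategy": "mobile"})
    with urllib.request.urlopen(f"{PSI_ENDPOINT}?{query}") as response:
        data = json.load(response)
    # "loadingExperience" holds Chrome UX Report field data; it can be absent
    # when the URL has too little traffic to be included in the report.
    return data.get("loadingExperience", {}).get("metrics", {})

if __name__ == "__main__":
    metrics = fetch_core_web_vitals("https://example.com/")
    for name, values in metrics.items():
        # Each metric carries a percentile value and a FAST/AVERAGE/SLOW category.
        print(f"{name}: p75={values.get('percentile')} category={values.get('category')}")
```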

Additionally, AMP is one of the easiest and most cost-effective ways for publishers to achieve great page experience outcomes. Based on the analysis that the AMP team has done, the majority of AMP pages achieve great page experiences. If you’re an AMP publisher, check out the recently launched AMP Page Experience Guide, a diagnostic tool that provides developers with actionable advice.

We continue to support AMP content in Google Search. If you publish an AMP version of your content, Google Search will link to that cache-optimized AMP version to help optimize delivery to users, just as is the case today.

Conclusion

At Google Search our mission is to help users find the most relevant and quality sites on the web. The goal with these updates is to highlight the best experiences and ensure that users can find the information they’re looking for. Our work is ongoing, which is why we plan to incorporate more page experience signals going forward and update them on a yearly basis. We hope that the tools and resources we’ve provided make it easier for you to create great websites, and thereby build a web ecosystem that users love.

If you have questions or feedback, please visit our help forums or let us know through Twitter.

PES@Home 2020: Google’s first virtual summit for Product Experts

More than ever, users are relying on Google products to do their jobs, educate their kids, and stay in touch with loved ones. Our Google Product Experts (PEs) play a vital role in supporting these users in community forums, like the Webmaster Help Community, across many languages.

For several years now, we have been inviting our most advanced PEs from around the world to a bi-annual, 3-day PE Summit. These events provide an opportunity not only to share the latest product updates with PEs, but also to acknowledge and celebrate their tireless contributions to helping our users. As the world was going into lockdown in March, we quickly decided that we didn't want to miss out on this celebration. So the summit team shifted focus and started planning an all-virtual event, which came to be called PES@Home.

Through the house-themed virtual event platform, PEs participated in over 120 sessions in the "office", got a chance to engage with each other and Googlers in the "kitchen", had fun on the "rooftop" learning more about magic or mixology, and - hopefully - came out feeling reconnected and re-energized. In addition to a large number of general talks, Webmaster PEs were able to attend and ask questions during eight product specific breakout sessions with Search product managers and engineers, which covered topics like page experience, Web Stories, crawling, Image Search, and free shopping listings.

We are truly overwhelmed by and grateful for how Webmaster PEs continue to grow the community, connection, and engagement in such a strange time. As a testament to the helpful spirit in the Webmaster community, we were thrilled to present the "Silver Lining Award", which recognizes someone who demonstrates a sense of humour and emphasizes the positive side of every situation, to one of our own Webmaster PEs.

In the name of all the countless people asking product questions in the forums, we'd like to express our thanks to the patient, knowledgeable, helpful, and friendly Webmaster Product Experts, who help to make the web shine when it comes to presence on Search.

If you want to read more about the summit, check out this summary from a Webmaster PE point of view.

Best practices for Black Friday and Cyber Monday pages

The end of the year holiday season is a peak time for many merchants with special sales events such as Black Friday and Cyber Monday. As a merchant, you can help Google highlight your sales events by providing landing pages with relevant content and high quality images.

Best Practices

The following are recommended best practices for your landing pages:

  • Create the page early. Create the page well before the sale so that Googlebot has time to discover and index it, and make sure you are not blocking Google from crawling the URL (the Google URL Inspection Tool can be used to check this; see also the sketch after this list).
  • Follow standard SEO best practices. A list of SEO best practices for landing pages can be found in our Search Engine Optimization (SEO) Starter Guide.
  • Link to the landing pages from your home page (or similar) to increase their prominence, helping users (and Google) find the landing page quicker.
  • Use a recurring URL, not a new URL for each occurrence of the event. Give the landing page for a recurring event a meaningful URL that you reuse each year (for example: use /sale/black-friday, not /sale/2020/black-friday).
  • Include a relevant, high quality image. Provide a static image with an up-to-date representation of your sale. Trim any whitespace around the borders of the image, and ensure that the image is visually engaging and is of good quality. For additional guidance on image quality, review the Google Images best practices and Images Web Fundamentals.
  • Get your page recrawled. After you've tested your structured data for validity, ask Google to recrawl your page to get your content updated more quickly. (Note: as of publication this tool is undergoing maintenance, but we hope to have it up-and-running again soon.)
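
The URL Inspection Tool is the authoritative check, but for a quick pre-flight pass across many landing pages, a small script can confirm that a URL isn't blocked by robots.txt and currently returns HTTP 200. The sketch below uses Python's standard library; the example URL is, of course, a placeholder.

```python
# A rough pre-flight check for a sale landing page: confirm the URL is not
# disallowed for Googlebot in robots.txt and that it currently returns 200.
# This is a convenience sketch, not a replacement for the URL Inspection Tool.
import urllib.request
import urllib.robotparser
from urllib.parse import urljoin, urlparse

def check_landing_page(url: str, crawler_agent: str = "Googlebot") -> None:
    origin = "{0.scheme}://{0.netloc}".format(urlparse(url))
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(urljoin(origin, "/robots.txt"))
    robots.read()

    if not robots.can_fetch(crawler_agent, url):
        print(f"Blocked by robots.txt for {crawler_agent}: {url}")
        return

    # Fetch with a plain user agent; we only use the crawler name for the robots check.
    request = urllib.request.Request(url, headers={"User-Agent": "landing-page-check"})
    with urllib.request.urlopen(request) as response:
        print(f"{url} is allowed for {crawler_agent} and returned HTTP {response.status}")

check_landing_page("https://www.example.com/sale/black-friday")
```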

If you have any questions, let us know through the Help forum or on Twitter.

The Search Console Training lives on

In November 2019 we announced the Search Console Training YouTube series and started publishing videos regularly. The goal of the series was to create updated video content to be used alongside Search documentation, for example in the Help Center and on the Developers site.

The wonderful Google Developer Studio team (the engine behind those videos!) put together this fun blooper reel for the first wave of videos that we recorded in the Google London studio.

So far we’ve published twelve episodes in the series, each focusing on a different part of the tool. We’ve seen it helping lots of people learn how to use Search Console, so we decided to continue recording videos… at home! Please bear with the trucks, ambulances, neighbors, passing clouds, and of course the doorbell. ¯\_(ツ)_/¯

In addition to the location change, we’re also changing the scope of the new videos. Instead of focusing on one report at a time, we’ll discuss how Search Console can help YOUR business. Each episode will focus on a type of website, like ecommerce sites, or a job role, like developers.

To hear about new videos as soon as they're published, subscribe to our YouTube channel, and feel free to leave feedback on Twitter.

Stay tuned!

Daniel Waisberg, Search Advocate

New Schema.org support for retailer shipping data

Quick summary: Starting today, we support shippingDetails schema.org markup as an alternative way for retailers to be eligible for shipping details in Google Search results.

Since June 2020, retailers have been able to list their products across different Google surfaces for free, including on Google Search. We are committed to supporting ways for the ecosystem to better connect with users who come to Google to look for the best products, brands, and retailers, by investing both in more robust tooling in Google Merchant Center and in new kinds of schema.org options.

Shipping details, including cost and expected delivery times, are often a key consideration for users making purchase decisions. In our own studies, we’ve heard that users abandon shopping checkouts because of unforeseen or uncertain shipping costs. This is why we will often show shipping cost information in certain result types, including on free listings on Google Search (currently in the US, in English only).

Shipping details in Search results

Retailers have always been able to configure shipping settings in Google Merchant Center in order to display this information in listings. Starting today, we also support the shippingDetails schema.org markup type for retailers who don't have active Merchant Center accounts with product feeds.

If you're a retailer interested in this new markup, check out our documentation to get started.
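
To make the shape of the markup concrete, here is a sketch of a Product offer carrying shippingDetails, built as a Python dictionary and serialized to JSON-LD. The property names follow schema.org's OfferShippingDetails type as we understand it (shippingRate, shippingDestination, deliveryTime, and so on); the documentation mentioned above remains the source of truth for what Google actually supports.

```python
# Sketch of shippingDetails markup as JSON-LD inside a Product offer.
# Property names are based on schema.org's OfferShippingDetails type and
# may not match Google's current requirements exactly; check the docs.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example wool sweater",
    "offers": {
        "@type": "Offer",
        "price": "39.99",
        "priceCurrency": "USD",
        "shippingDetails": {
            "@type": "OfferShippingDetails",
            "shippingRate": {"@type": "MonetaryAmount", "value": "4.95", "currency": "USD"},
            "shippingDestination": {"@type": "DefinedRegion", "addressCountry": "US"},
            "deliveryTime": {
                "@type": "ShippingDeliveryTime",
                "handlingTime": {"@type": "QuantitativeValue", "minValue": 0, "maxValue": 1, "unitCode": "DAY"},
                "transitTime": {"@type": "QuantitativeValue", "minValue": 1, "maxValue": 5, "unitCode": "DAY"},
            },
        },
    },
}

# Embed the structure in the page as a JSON-LD script block.
print('<script type="application/ld+json">')
print(json.dumps(product, indent=2))
print("</script>")
```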

New open source robots.txt projects

Last year we released the robots.txt parser and matcher that we use in our production systems to the open source world. Since then, we've seen people build new tools with it, contribute to the open source library (effectively improving our production systems - thanks!), and release new language versions like Go and Rust, which make it easier for developers to build new tools.

With the intern season ending here at Google, we wanted to highlight two new releases related to robots.txt that were made possible by two interns working on the Search Open Sourcing team, Andreea Dutulescu and Ian Dolzhanskii.

Robots.txt Specification Test

First, we are releasing a testing framework for robots.txt parser developers, created by Andreea. The project provides a testing tool that can validate whether a robots.txt parser follows the Robots Exclusion Protocol, or to what extent. Currently there is no official and thorough way to assess the correctness of a parser, so Andreea built a tool that can be used to create robots.txt parsers that follow the protocol.
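
The framework itself lives in the open source repository; purely as an illustration of the idea, the sketch below runs a parser (Python's built-in urllib.robotparser, standing in for any parser under test) against a small table of robots.txt rules and expected allow/disallow verdicts. The cases and interface are invented for this example and are not taken from the released framework.

```python
# Illustrative sketch of a robots.txt compliance check: feed a parser a
# robots.txt body plus (user agent, URL, expected verdict) cases and count
# how many it gets right. urllib.robotparser stands in for the parser under
# test; the rules and cases below are made up for illustration.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: *
Disallow: /private/

User-agent: FooBot
Allow: /private/reports/
Disallow: /private/
"""

CASES = [
    ("FooBot", "https://example.com/public/page.html", True),
    ("FooBot", "https://example.com/private/secret.html", False),
    ("FooBot", "https://example.com/private/reports/2020.html", True),
    ("BarBot", "https://example.com/private/secret.html", False),
]

def run_cases() -> None:
    parser = urllib.robotparser.RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    passed = 0
    for agent, url, expected in CASES:
        actual = parser.can_fetch(agent, url)
        status = "PASS" if actual == expected else "FAIL"
        passed += actual == expected
        print(f"{status} {agent} {url} expected={expected} got={actual}")
    print(f"{passed}/{len(CASES)} cases passed")

run_cases()
```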

Java robots.txt parser and matcher

Second, we are releasing an official Java port of the C++ robots.txt parser, created by Ian. Java is the third most popular programming language on GitHub and it's extensively used at Google as well, so it's no wonder it's been the most requested language port. The parser is a 1-to-1 translation of the C++ parser in terms of functions and behavior, and it's been thoroughly tested for parity against a large corpus of robots.txt rules. Teams are already planning to use the Java robots.txt parser in Google production systems, and we hope that you'll find it useful, too.

As usual, we welcome your contributions to these projects. If you built something with the C++ robots.txt parser or with these new releases, let us know so we can potentially help you spread the word! If you found a bug, help us fix it by opening an issue on GitHub or directly contributing with a pull request. If you have questions or comments about these projects, catch us on Twitter!

It was our genuine pleasure to host Andreea and Ian, and we're sad that their internship is ending. Their contributions help make the Internet a better place and we hope that we can welcome them back to Google in the future.

Googlebot will soon speak HTTP/2

Quick summary: Starting in mid-November 2020, Googlebot will begin crawling some sites over HTTP/2.

Ever since mainstream browsers started supporting the next major revision of HTTP, HTTP/2 or h2 for short, web professionals have asked us whether Googlebot can crawl over the upgraded, more modern version of the protocol.

Today we're announcing that starting in mid-November 2020, Googlebot will support crawling over HTTP/2 for select sites.

What is HTTP/2

As we said, it's the next major version of HTTP, the protocol the internet primarily uses for transferring data. HTTP/2 is much more robust, efficient, and faster than its predecessor, due to its architecture and the features it implements for clients (for example, your browser) and servers. If you want to read more about it, we have a long article on the HTTP/2 topic on developers.google.com.

Why we're making this change

In general, we expect this change to make crawling more efficient in terms of server resource usage. With h2, Googlebot is able to open a single TCP connection to the server and efficiently transfer multiple files over it in parallel, instead of requiring multiple connections. The fewer connections open, the fewer resources the server and Googlebot have to spend on crawling.
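
As a small illustration of that saving, the sketch below issues several requests through one h2-capable client so they can share a single multiplexed connection. It uses the third-party httpx library (install with `pip install "httpx[http2]"`) and a placeholder hostname; whether multiplexing actually happens depends on the server supporting h2.

```python
# Demonstrates the resource saving described above: with HTTP/2, several
# requests can be multiplexed over one TCP connection instead of opening
# one connection per request. The URLs below are placeholders.
import asyncio
import httpx

URLS = [
    "https://www.example.com/",
    "https://www.example.com/robots.txt",
    "https://www.example.com/sitemap.xml",
]

async def fetch_all() -> None:
    # One client holds one connection pool; with http2=True the requests
    # below can share a single multiplexed connection.
    async with httpx.AsyncClient(http2=True) as client:
        responses = await asyncio.gather(*(client.get(url) for url in URLS))
    for response in responses:
        print(response.url, response.http_version, response.status_code)

asyncio.run(fetch_all())
```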

How it works

In the first phase, we'll crawl a small number of sites over h2, and we'll ramp up gradually to more sites that may benefit from the initially supported features, like request multiplexing.

Googlebot decides which site to crawl over h2 based on whether the site supports h2, and whether the site and Googlebot would benefit from crawling over HTTP/2. If your server supports h2 and Googlebot already crawls a lot from your site, you may be already eligible for the connection upgrade, and you don't have to do anything.

If your server still only talks HTTP/1.1, that's also fine. There's no explicit drawback for crawling over this protocol; crawling will remain the same, quality and quantity wise.

How to opt out

Our preliminary tests showed no issues or negative impact on indexing, but we understand that, for various reasons, you may want to opt your site out from crawling over HTTP/2. You can do that by instructing the server to respond with a 421 HTTP status code when Googlebot attempts to crawl your site over h2. If that's not feasible at the moment, you can send a message to the Googlebot team (however, this solution is temporary).
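
In most setups the 421 response would be configured at the web server or CDN level rather than in application code. Purely to illustrate the logic, here is a WSGI middleware sketch that answers 421 when a request identifying itself as Googlebot arrives over HTTP/2; whether SERVER_PROTOCOL reflects the negotiated protocol depends on your server stack, so treat this as a sketch of the idea rather than a drop-in solution.

```python
# Illustrative only: opting out of h2 crawling is normally done in the web
# server or CDN configuration. This WSGI middleware sketches the logic of
# answering 421 (Misdirected Request) when Googlebot connects over HTTP/2.
# Whether SERVER_PROTOCOL reports "HTTP/2.0" depends on your server stack.

def reject_h2_for_googlebot(app):
    def middleware(environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "")
        protocol = environ.get("SERVER_PROTOCOL", "")
        if "Googlebot" in user_agent and protocol.startswith("HTTP/2"):
            start_response("421 Misdirected Request", [("Content-Type", "text/plain")])
            return [b"Please retry over HTTP/1.1\n"]
        return app(environ, start_response)
    return middleware

# Usage with any WSGI app, e.g. a Flask app object:
#   app.wsgi_app = reject_h2_for_googlebot(app.wsgi_app)
```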

If you have more questions about Googlebot and HTTP/2, check the questions we thought you might ask. If you can't find your question, write to us on Twitter and in the help forums.

Posted by Jin Liang and Gary

Questions that we thought you might ask

Why are you upgrading Googlebot now?

The software we use to enable Googlebot to crawl over h2 has matured enough that it can be used in production.

Do I need to upgrade my server ASAP?

It's really up to you. However, we will only switch to crawling over h2 for sites that support it and that will clearly benefit from it. If there's no clear benefit to crawling over h2, Googlebot will continue to crawl over h1.

How do I test if my site supports h2?

Cloudflare has a blog post with a plethora of different methods to test whether a site supports h2; check it out!
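
If you prefer checking from a script, ALPN negotiation alone reveals whether a server offers h2. Here is a minimal sketch using only the Python standard library; it reports which protocol the server selects during the TLS handshake.

```python
# Check h2 support by opening a TLS connection and seeing whether the
# server negotiates "h2" via ALPN. Standard library only.
import socket
import ssl
import sys

def negotiated_protocol(hostname: str, port: int = 443) -> str:
    context = ssl.create_default_context()
    context.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.selected_alpn_protocol() or "no ALPN protocol negotiated"

if __name__ == "__main__":
    host = sys.argv[1] if len(sys.argv) > 1 else "www.google.com"
    print(f"{host}: {negotiated_protocol(host)}")
```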

How do I upgrade my site to h2?

This really depends on your server. We recommend talking to your server administrator or hosting provider.

How do I convince Googlebot to talk h2 with my site?

You can't. If the site supports h2, it is eligible for being crawled over h2, but only if that would be beneficial for the site and Googlebot. For example, if crawling over h2 would not result in noticeable resource savings, we would simply continue to crawl the site over HTTP/1.1.

Why are you not crawling every h2-enabled site over h2?

In our evaluations we found little to no benefit for certain sites (for example, those with very low qps) when crawling over h2. Therefore we have decided to switch crawling to h2 only when there's clear benefit for the site. We'll continue to evaluate the performance gains and may change our criteria for switching in the future.

How do I know if my site is crawled over h2?

When a site becomes eligible for crawling over h2, the owners of that site registered in Search Console will get a message saying that some of the crawling traffic may be over h2 going forward. You can also check in your server logs (for example, in the access.log file if your site runs on Apache).
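
For the log-file route, a short script can tally Googlebot requests by protocol version. The log path and the Apache-style format below are assumptions; adjust them to your setup.

```python
# Tally Googlebot requests by protocol version in an Apache-style access
# log. The path and the common/combined log format are assumptions.
from collections import Counter

LOG_PATH = "/var/log/apache2/access.log"  # adjust to your setup

protocol_counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "Googlebot" not in line:
            continue
        # In the common/combined log format the request field looks like
        # "GET /path HTTP/2.0"; match on the protocol token inside it.
        if ' HTTP/2.0"' in line or ' HTTP/2"' in line:
            protocol_counts["HTTP/2"] += 1
        elif ' HTTP/1.1"' in line:
            protocol_counts["HTTP/1.1"] += 1
        else:
            protocol_counts["other"] += 1

for protocol, count in protocol_counts.most_common():
    print(f"{protocol}: {count} Googlebot requests")
```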

Which h2 features are supported by Googlebot?

Googlebot supports most of the features introduced by h2. Some features like server push, which may be beneficial for rendering, are still being evaluated.

Does Googlebot support plaintext HTTP/2 (h2c)?

No. Your website must use HTTPS and support HTTP/2 in order to be eligible for crawling over HTTP/2. This is equivalent to how modern browsers handle it.

Is Googlebot going to use the ALPN extension to decide which protocol version to use for crawling?

Application-layer protocol negotiation (ALPN) will only be used for sites that are opted in to crawling over h2, and the only accepted protocol for responses will be h2. If the server responds during the TLS handshake with a protocol version other than h2, Googlebot will back off and come back later on HTTP/1.1.

How will different h2 features help with crawling?

Some of the most prominent benefits of h2 include:

  • Multiplexing and concurrency: Fewer TCP connections open means fewer resources spent.
  • Header compression: Drastically reduced HTTP header sizes will save resources.
  • Server push: This feature is not yet enabled; it's still in the evaluation phase. It may be beneficial for rendering, but we don't have anything specific to say about it at this point.

If you want to know more about specific h2 features and their relation to crawling, ask us on Twitter.

Will Googlebot crawl more or faster over h2?

The primary benefit of h2 is resource savings, both on the server side and on Googlebot's side. Whether we crawl using h1 or h2 does not affect how your site is indexed, and hence it does not affect how much we plan to crawl from your site.

Is there any ranking benefit for a site in being crawled over h2?

No.