GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2
The whole website moved to the https://www. HTTP/2 version 3 years ago. When we review log files, it is clear that, for the home page, GoogleBot continues to access it only via the HTTP/1.1 protocol.

- Robots file is correct (simply allowing all and referring to the https://www. sitemap)
- Sitemap references the https://www. pages, including the homepage
- Hosting provider has confirmed the server is correctly configured to support HTTP/2 and provided evidence that access via HTTP/2 works
- 301 redirects are set up from the non-secure and non-www versions of the website, all to the https://www. version
- Not using a CDN or proxy
- GSC reports the home page as correctly indexed (with the https://www. version canonicalised) but still shows the non-secure version of the website as the referring page in the Discovery section. GSC also reports the homepage as being crawled every day or so.

We totally understand it can take time to update the index, but we are at a complete loss to understand why GoogleBot continues to use only HTTP/1.1 and not HTTP/2.

A possibly related issue - and of course what is causing concern - is that new pages on the site seem to index and perform well in the SERPs... except the home page. It never makes it to page 1 (other than for the brand name), despite rating multiples higher in terms of content, speed, etc. than other pages, which still get indexed in preference to the home page.

Any thoughts, further tests, ideas, direction or anything will be much appreciated!
-
-
Quoting here to ask again: why is this happening with our pages too? Is Google going crazy, or what?
@James-Avery said in GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2:
First off, it's great that your entire website made the transition to HTTPS and HTTP/2 three years ago. That's definitely a step in the right direction for performance and security.
Since your hosting provider has confirmed that the server is configured correctly for HTTP/2 and you've got the 301 redirects set up properly, it's puzzling that GoogleBot is still sticking to HTTP/1.1 for the homepage. One thing you might want to double-check is whether any specific directives in your server configuration could be affecting how GoogleBot accesses your site; sometimes even seemingly minor settings have unintended consequences.
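If you want to sanity-check the HTTP/2 negotiation from outside the server yourself, a short script can show which protocol a client actually ends up on. This is just a minimal sketch, assuming the third-party httpx library (installed with its http2 extra via `pip install "httpx[http2]"`) and a placeholder URL:

```python
import httpx

# Placeholder URL -- substitute your own homepage.
url = "https://www.example.com/"

# http2=True makes the client offer h2 during the TLS (ALPN) handshake;
# the server then decides whether to accept it.
with httpx.Client(http2=True, follow_redirects=True) as client:
    response = client.get(url)
    # http_version is "HTTP/2" if h2 was negotiated, "HTTP/1.1" otherwise.
    print(response.http_version, response.status_code, response.url)
```

If this prints HTTP/2, the server side is doing its job; bear in mind that even then the protocol is ultimately the client's choice per connection, so a positive result doesn't by itself force GoogleBot onto HTTP/2.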
Regarding the non-secure version of your website still showing up in the Discovery section of Google Search Console (GSC), despite the homepage being correctly indexed with the HTTPS version, it could be a matter of Google's index taking some time to catch up. However, it's worth investigating further to ensure there aren't any lingering issues causing this discrepancy.
As for the home page not ranking as well in SERPs compared to other pages, despite having better content and speed, this could be due to a variety of factors. It's possible that Google's algorithms are prioritizing other pages for certain keywords or that there are specific technical issues with the homepage that are affecting its visibility.
In terms of next steps, I'd recommend continuing to monitor the situation closely and perhaps reaching out to Google's support team for further assistance. They may be able to provide additional insights or suggestions for resolving these issues.
Overall, it sounds like you've done a thorough job of troubleshooting so far, but sometimes these technical SEO mysteries require a bit of persistence to unravel. Keep at it, and hopefully, you'll be able to get to the bottom of these issues soon!
-
-
@john1408 said in GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2:
It's baffling that GoogleBot persists with HTTP/1.1 for the homepage despite proper setup. Consider exploring Google Search Console further for indexing insights, and reach out to Google Support for assistance in resolving this unusual behavior.
-
-
It seems like you've taken several steps to ensure the correct protocol (HTTP/2) for your website, and it's puzzling that GoogleBot still accesses the home page via HTTP/1.1. A few additional suggestions:
- Crawl Rate Settings: Check your crawl settings and the Crawl Stats report in Google Search Console (GSC). Google might be intentionally crawling your site slowly.
- Server Logs: Reanalyze server logs to confirm that GoogleBot is indeed accessing the home page via HTTP/1.1; this could help identify patterns or anomalies (a small sketch for this follows the list below).
- Mobile Usability: Ensure your home page is mobile-friendly, since Google indexes mobile-first.
- URL Inspection Tool: Use GSC's URL Inspection tool (the successor to Fetch and Render) to see how Google renders your home page. It might provide insights into how Google sees the page.
- Structured Data and Markup: Ensure structured data and markup on your home page are correct and up to date.
- Manual Submission: Consider manually requesting indexing for your home page through GSC.
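On the server-logs point, here's a minimal sketch of the kind of tally that makes the pattern easy to see. It assumes a standard Apache/Nginx combined log format and a hypothetical file name of access.log, so adjust both to your setup:

```python
import re
from collections import Counter

# Matches the request line ("GET / HTTP/1.1") plus the final quoted
# field of the combined log format, which is the user-agent string.
LINE = re.compile(r'"(\S+) (\S+) (HTTP/[\d.]+)".*"([^"]*)"$')

counts = Counter()
with open("access.log") as log:  # hypothetical file name
    for raw in log:
        m = LINE.search(raw)
        if not m:
            continue
        method, path, proto, agent = m.groups()
        # Count only Googlebot hits to the homepage; note that HTTP/2
        # requests are typically logged as "HTTP/2.0".
        if "Googlebot" in agent and path == "/":
            counts[proto] += 1

print(counts)  # e.g. Counter({'HTTP/1.1': 42})
```

If the tally shows only HTTP/1.1 for Googlebot while ordinary browser traffic shows HTTP/2.0 on the same page, the server configuration is probably fine and the protocol choice is happening on Googlebot's side.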
Regarding the new pages performing well compared to the home page, it might be worth revisiting your on-page SEO elements and analyzing the competition for relevant keywords.