Unsolved: Why does the Moz crawler start with HTTP/1.1?
-
We have run Moz Pro's Site Crawl for example-domain.com
Why has Moz's crawler listed http://www.example-domain.com (i.e. the non-secure version) as the crawl-depth-zero page and reported the actual live https://www.example-domain.com (i.e. the secure HTTP/2 version) at a crawl depth of 2?
Surely the main live page should be the first crawled and reported as crawl depth = 0?
-
I hope you're doing well. I have a question about the behavior of the Moz crawler and a concern about crawling issues on my website.
I've observed that the Moz crawler initiates its sessions over HTTP/1.1. Could you provide some insight into why that is? I'm curious whether this is standard behavior for the Moz crawler and whether there are any implications for website owners.
Additionally, I'm experiencing crawling issues with my website, CFMS BILL STATUS. Despite my efforts to optimize crawlability, I continue to encounter challenges. The website seems to have difficulty being effectively crawled by Moz.
Could you offer guidance or suggestions on how to address these crawling issues and ensure that CFMS BILL STATUS is properly indexed by the Moz crawler?
-
Here's a comprehensive answer, aiming for clarity and drawing on Moz's best practices:
Understanding Crawl Depth and the Issue:
Crawl depth refers to the number of clicks (or links) it takes a crawler to reach a specific page from the starting point (usually the homepage).
In this case, Moz's crawler is reporting a crawl depth of 0 for the non-secure HTTP version of the homepage (http://www.example-domain.com), while the secure HTTPS version (https://www.example-domain.com) has a crawl depth of 2. This discrepancy suggests a potential issue with how the site is configured or how the crawler is interpreting it.
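To make the metric concrete, here is a minimal, hypothetical sketch of how a crawler assigns depth: a breadth-first walk from the start URL, where a page's depth is the fewest clicks needed to reach it. The link graph and URLs below are illustrative placeholders, not Moz's actual implementation.

```python
from collections import deque

# Hypothetical link graph: each page maps to the pages it links to.
# In a real crawl these links would be extracted from fetched HTML.
LINKS = {
    "http://www.example-domain.com/": ["https://www.example-domain.com/"],
    "https://www.example-domain.com/": ["https://www.example-domain.com/blog/"],
    "https://www.example-domain.com/blog/": [],
}

def crawl_depths(start_url):
    """Breadth-first walk: depth = fewest clicks from the start URL."""
    depths = {start_url: 0}
    queue = deque([start_url])
    while queue:
        url = queue.popleft()
        for linked in LINKS.get(url, []):
            if linked not in depths:  # first (shortest) path wins
                depths[linked] = depths[url] + 1
                queue.append(linked)
    return depths

# If the crawl starts at the http URL, every https page inherits a depth
# greater than zero, which is how the live homepage can end up reported
# deeper than expected.
for url, depth in crawl_depths("http://www.example-domain.com/").items():
    print(depth, url)
```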
Potential Reasons for the Discrepancy:
Redirect Configuration: If http://www.example-domain.com redirects to https://www.example-domain.com, the crawler might initially treat the non-secure version as the starting point (crawl depth = 0) and the secure version as a secondary page (crawl depth = 2).
Canonical Tags: If the canonical tag on https://www.example-domain.com points to http://www.example-domain.com, Moz might prioritize the non-secure version.
Sitemap and Internal Linking: Ensure your sitemap lists the https version of URLs and that internal links use https URLs consistently.
Crawler Settings: Some tools allow specifying which version (http or https) to prioritize. Check for such settings in Moz Pro.
Historical Data: If the site recently migrated from http to https, historical data might influence crawl behavior.
Resolving the Issue:
Review Redirects: Ensure redirects are set up correctly to prioritize https (a quick verification sketch follows below).
Check Canonical Tags: Verify that canonical tags point to the https version.
Update Sitemap and Internal Links: Use https URLs consistently.
Adjust Crawler Settings: If possible, prioritize https in Moz Pro's settings.
Contact Moz Support: If the issue persists, seek guidance from Moz support.
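As a sanity check outside of Moz, a minimal script along these lines (a sketch only, assuming Python with the third-party requests library installed; the domain is a placeholder) can show whether the http homepage returns a single 301 hop to the https version and where the canonical tag on the final page actually points:

```python
import re
import requests  # third-party: pip install requests

HTTP_URL = "http://www.example-domain.com/"  # placeholder domain

# 1) Follow the redirect chain from the non-secure URL.
resp = requests.get(HTTP_URL, allow_redirects=True, timeout=10)
for hop in resp.history:
    print(f"{hop.status_code}  {hop.url}  ->  {hop.headers.get('Location')}")
print(f"Final URL: {resp.url} ({resp.status_code})")
# Ideally there is exactly one 301 hop, straight to the https homepage.

# 2) Check where the canonical tag on the final (https) page points.
for tag in re.findall(r"<link\b[^>]*>", resp.text, flags=re.IGNORECASE):
    if re.search(r'rel=["\']canonical["\']', tag, flags=re.IGNORECASE):
        href = re.search(r'href=["\']([^"\']+)["\']', tag, flags=re.IGNORECASE)
        print("Canonical:", href.group(1) if href else "no href found")
```
-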
@AKCAC When using Moz Pro's Site Crawl for your website and encountering a situation where the non-secure (http) version of your domain is reported as having a crawl depth of zero, while the secure (https) version shows a greater crawl depth, there are several potential reasons and implications to consider:
-
Redirect Configuration: The most common reason for this is how redirects are set up on your site. If http://www.example-domain.com is the primary address that Moz encounters due to your server's configuration, and it redirects to https://www.example-domain.com, Moz might initially treat the non-secure version as the starting point (crawl depth = 0) and the secure version as a secondary page (thus a greater crawl depth).
-
Canonical Tags: Check your canonical tags. If the canonical tag on your https pages points to the http version, Moz (and other search engines) might treat the http version as the primary page.
-
Sitemap and Internal Linking: Ensure that your sitemap lists the https version of your URLs and that internal linking on your site uses https URLs. If your internal links or sitemap reference the http version, crawlers may initially prioritize these.
-
Crawler Settings: In some tools, including Moz, you can specify which version of the site (http or https) to prioritize in a crawl. Check if such a setting is influencing the crawl behavior.
-
Historical Data: If your site recently migrated from http to https, and Moz has historical data from previous crawls, it might temporarily reflect the older structure until it fully updates its index with the new configuration.
-
DNS and Server Configuration: Verify your DNS and server settings to ensure that they correctly redirect all http traffic to https and that the https version is set as the primary endpoint.
-
Robots.txt File: Make sure your robots.txt file doesn't unintentionally block or deprioritize https URLs (a quick robots.txt check sketch follows the steps below).
Steps to Resolve the Issue:
- Ensure Consistent Redirects: All http URLs should 301 redirect to their https counterparts.
- Update Canonical Tags: Canonical tags on all pages should point to the https versions.
- Verify Sitemap and Internal Links: Both should consistently use and reference https URLs.
- Re-crawl the Site: After making changes, re-run the Moz Site Crawl to confirm that the https homepage is now reported at crawl depth 0.
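To double-check the robots.txt and sitemap points above, a small stdlib-only sketch like this (the domain and user agent are placeholders) can confirm that the https homepage is crawlable and that any sitemaps declared in robots.txt use https URLs:

```python
from urllib.robotparser import RobotFileParser

ROBOTS_URL = "https://www.example-domain.com/robots.txt"  # placeholder domain
rp = RobotFileParser(ROBOTS_URL)
rp.read()

# Is the secure homepage crawlable for a generic crawler?
print("https homepage allowed:",
      rp.can_fetch("*", "https://www.example-domain.com/"))

# Do the sitemap declarations in robots.txt point at https URLs?
for sitemap in (rp.site_maps() or []):
    print(f"Sitemap: {sitemap}  (https: {sitemap.startswith('https://')})")
```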
-
Moz's crawler, like many web crawlers, typically starts with HTTP/1.1 because it is a widely accepted and universally supported protocol for communication between web clients and servers. HTTP/1.1 has long been the baseline version of the protocol, offering improvements over its predecessor, HTTP/1.0, such as persistent connections, chunked transfer encoding, and request pipelining, all of which make data transmission more efficient. Starting with HTTP/1.1 lets the crawler interact reliably with virtually any web server, even one that has not enabled newer protocols such as HTTP/2, and it does not prevent a page from being crawled or evaluated.
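If you want to see what your own server offers, here is a minimal stdlib-only sketch (the hostname is a placeholder) that asks the server, via TLS ALPN, whether it will negotiate HTTP/2 or fall back to HTTP/1.1:

```python
import socket
import ssl

HOST = "www.example-domain.com"  # placeholder hostname

# Offer both h2 (HTTP/2) and http/1.1 during the TLS handshake and see
# which protocol the server selects via ALPN.
context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])

with socket.create_connection((HOST, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated protocol:", tls.selected_alpn_protocol())
        # "h2"       -> the server supports HTTP/2 for this connection
        # "http/1.1" -> the server fell back to HTTP/1.1
```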
-
The crawl depth reported by tools like Moz Pro is determined by the level of clicks it takes to reach a particular page from the homepage or root domain. It's not solely based on whether the page is HTTP or HTTPS.
In your scenario, if Moz Pro is reporting that the HTTP version (http://www.example-domain.com) has a crawl depth of 0, it means that this page is directly accessible from the root domain. On the other hand, if the HTTPS version (https://www.example-domain.com) is reported as having a crawl depth of 2, it implies that it takes two clicks (or two levels deep) from the homepage to reach this particular HTTPS page.
There could be various reasons for such a situation, such as the site structure, internal linking, or redirects. It's not uncommon for websites to have different versions (HTTP and HTTPS) of their pages, and the crawler may follow links or redirects differently, leading to variations in crawl depth.
To further investigate, you may want to examine your site's internal linking structure, make sure that there are no unexpected redirects or canonicalization issues, and ensure that your preferred version (HTTPS in this case) is correctly configured and prioritized in your website settings and sitemap. Additionally, Moz Pro may provide more detailed insights into the specific reasons for the reported crawl depth if you review the crawl report or log files.
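One concrete way to start that investigation is to look at the scheme your internal links actually use. Below is a small stdlib-only sketch (the domain is a placeholder) that fetches the homepage and counts internal links that still point at http:// URLs:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

START = "https://www.example-domain.com/"  # placeholder domain

class LinkCollector(HTMLParser):
    """Collects absolute URLs from every <a href> on the page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(urljoin(START, href))

html = urlopen(START, timeout=10).read().decode("utf-8", errors="replace")
parser = LinkCollector()
parser.feed(html)

site_host = urlparse(START).hostname
insecure = [u for u in parser.links
            if urlparse(u).hostname == site_host and u.startswith("http://")]
print(f"{len(parser.links)} links found, {len(insecure)} internal links still use http://")
for url in insecure:
    print("  ", url)
```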