Crawl depth seems off?
-
I'm reviewing my site crawl data and am seeing some very strange things such as:
- The homepage URL has a listed crawl depth of 2.
- Pages featured in the main site navigation (which appears on every page, including the homepage) are showing a crawl depth of 3.
What am I missing here? Shouldn't my homepage have a crawl depth of 0 or 1? And why would pages linked directly from my homepage (a single click away) have a crawl depth other than 1?
Thank you!
-
Hi Samantha,
I set up a new campaign using the https:// version of the site and ran a new crawl, but I'm running into the same issue as before. Perhaps this is a bigger question of how site redirects work. I was under the impression that large-scale redirects (such as non-www to www, or http to https across all pages) can affect crawl time/load time. Rereading your comment, it sounds like you're saying those redirects also count as layers of crawl depth. By the same token, I'm assuming any redirect (301s in particular) adds a layer of crawl depth.
So, my larger question then is: how can I minimize crawl depth if my site has been redirected from http to https? Will that "extra layer" of crawling always be there as long as the redirect is in place, or is there a way to compress/expedite how the crawl happens?
Thanks for your input on this!
-
Hi Samantha,
That makes sense, thank you. I'll set up a new campaign tracking with "https://" instead!
-
Hey there,
Sam from Moz's Help Team here!
The thing to keep in mind when you set up a campaign at the root domain level is that we start the crawl from the http protocol (non-www), in this case http://logic2020.com/. If you filter by crawl depth in your Site Crawl, you'll see that URL with a crawl depth of 0.
It redirects to http://www.logic2020.com/, which has a crawl depth of 1. That URL then redirects again to https://www.logic2020.com/, which is listed with a crawl depth of 2, which is why links we found on that page have a crawl depth of 3.
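The chain above can be sketched as a tiny simulation: each 301 hop the crawler follows consumes one level of crawl depth before it ever reaches a page whose links it can enumerate. (A minimal sketch; the redirect map and example.com URLs are hypothetical stand-ins for the chain described above, not actual Moz crawler behavior.)

```python
# Hypothetical redirect map mirroring the chain above:
# root http -> www http -> www https.
redirects = {
    "http://example.com/": "http://www.example.com/",
    "http://www.example.com/": "https://www.example.com/",
}

def crawl_depth_of_final_page(start_url, redirects):
    """Follow the redirect chain, counting each hop as one depth level."""
    depth, url = 0, start_url
    while url in redirects:
        url = redirects[url]
        depth += 1
    return url, depth

final_url, depth = crawl_depth_of_final_page("http://example.com/", redirects)
print(f"{final_url} sits at crawl depth {depth}")
# depth is 2 here, so links discovered on that page are reported at depth 3.
```

Starting the campaign at the https://www version instead makes that URL itself the depth-0 start point, which is why setting up tracking with "https://" removes the extra layers.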
I hope this helps to clarify but let me know if you have any other questions!