Crawl depth seems off?
-
I'm reviewing my site crawl data and am seeing some very strange things, such as:
- The homepage URL has a listed crawl depth of 2.
- Pages featured in the main site navigation (which is present on all pages, including the homepage) are showing a crawl depth of 3.
What am I missing here? Shouldn't my homepage have a crawl depth of 0 or 1? And why would pages linked directly from my homepage (a single click away) have a crawl depth other than 1?
Thank you!
-
Hi Samantha,
I set up a new campaign using the https:// version of the site and ran a new crawl, but I'm running into the same issue as before. Perhaps this is a bigger question of how site redirects work? I was under the impression that large-scale redirects (such as non-www to www, or http to https across all pages) can affect crawl time and load time. Rereading your comment, it sounds like you're saying those redirects also count as layers of crawl depth. By the same token, I'm assuming any redirects (301s in particular) add a layer of crawl depth as well.
So my larger question is: how can I minimize crawl depth if my site has been redirected from http to https? Will that "extra layer" of crawling always be there as long as the redirect is in place, or is there a way to compress/expedite the crawl?
Thanks for your input on this!
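In case it helps the thread: the extra depth comes from the redirects being chained (http non-www, then www, then https) rather than from the https migration itself. If the server sends every non-canonical variant straight to the final https://www host in a single 301 hop, only one redirect layer remains between the seed URL and the real homepage. A minimal sketch, assuming an nginx setup (server names taken from the thread; SSL directives omitted):

```nginx
# Collapse the redirect chain: every variant 301s directly to the
# canonical https://www host in one hop instead of two or three.
server {
    listen 80;
    server_name logic2020.com www.logic2020.com;
    return 301 https://www.logic2020.com$request_uri;
}

server {
    listen 443 ssl;
    server_name logic2020.com;
    # ssl_certificate / ssl_certificate_key omitted for brevity
    return 301 https://www.logic2020.com$request_uri;
}
```

The equivalent single-hop rule can be written for Apache, IIS, or a CDN edge rule; the point is the same either way: never redirect to a URL that itself redirects.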
-
Hi Samantha,
That makes sense, thank you. I'll set up a new campaign tracking the "https://" version instead!
-
Hey there,
Sam from Moz's Help Team here!
So the thing to keep in mind when you set up a campaign at the root domain level is that we start the crawl from the http protocol (non-www) - in this case, http://logic2020.com/. If you filter by crawl depth in your Site Crawl, you'll see that URL with a crawl depth of 0.
It redirects to http://www.logic2020.com/, which has a crawl depth of 1. That URL then redirects again to https://www.logic2020.com/, which is listed with a crawl depth of 2 - which is why links we found on that page have a crawl depth of 3.
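The chain Sam describes can be sketched numerically. This is a hypothetical illustration (the `crawl_depths` helper and the `/about/` URL are made up for the example, not Moz's actual code): the seed URL starts at depth 0, each 301 hop adds one level, and links found on the resolved page sit one level deeper still.

```python
# Hypothetical model of crawl depth through a redirect chain.
# Not Moz's code - just the arithmetic behind Sam's explanation.

def crawl_depths(redirect_chain, links_on_final_page):
    """Assign a crawl depth to every URL in a redirect chain, then to
    the links found on the final (resolved) page."""
    depths = {}
    for depth, url in enumerate(redirect_chain):
        depths[url] = depth  # seed URL is 0; each redirect adds 1
    final_depth = len(redirect_chain) - 1
    for link in links_on_final_page:
        depths[link] = final_depth + 1  # one click deeper than the page
    return depths

chain = [
    "http://logic2020.com/",       # campaign seed: depth 0
    "http://www.logic2020.com/",   # first 301: depth 1
    "https://www.logic2020.com/",  # second 301: depth 2
]
depths = crawl_depths(chain, ["https://www.logic2020.com/about/"])
print(depths["https://www.logic2020.com/about/"])  # → 3
```

With a single-hop redirect from the seed URL, the chain would have length 2, the homepage would land at depth 1, and navigation links at depth 2.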
I hope this helps to clarify but let me know if you have any other questions!