Crawl depth seems off?
-
I'm reviewing my site crawl data and am seeing some very strange things such as:
- The homepage URL has a listed crawl depth of 2.
- Pages that are featured in the main site navigation (which is present on all pages, including the homepage) are showing a crawl depth of 3.
What am I missing here? Shouldn't my homepage have a crawl depth of 0 or 1? And why would pages linked directly from my homepage have a crawl depth other than 1, since they're a single click from the homepage?
Thank you!
-
Hi Samantha,
I set up a new campaign using the https:// version of the site and ran a new crawl, but I'm running into the same issue as before. Perhaps this is a bigger question of how site redirects work? I was under the impression that large-scale redirects (such as non-www to www, or http to https across all pages) can affect crawl time/load time. Rereading your comment, it sounds like you're saying those redirects also count as layers of crawl depth. By the same token, I'm assuming any redirects (301s in particular) add a layer of crawl depth as well.
So, my larger question then is: how can I minimize crawl depth if my site has been redirected from http to https? Will that "extra layer" of crawling always be there as long as the redirect is in place, or is there a way to compress/expedite how the crawl happens?
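To make the "extra layer" question concrete, here's a minimal sketch (hypothetical URLs and a deliberately simplified breadth-first crawler model, not Moz's actual implementation) comparing a two-hop redirect chain against a single collapsed redirect:

```python
def assign_depths(start, redirects, links):
    """Breadth-first depth assignment where each redirect hop,
    like each link hop, counts as one level of crawl depth."""
    depths = {start: 0}
    frontier = [start]
    while frontier:
        url = frontier.pop(0)
        # A redirect target (or a linked page) is one hop deeper
        # than the URL it was reached from.
        targets = [redirects[url]] if url in redirects else links.get(url, [])
        for nxt in targets:
            if nxt not in depths:
                depths[nxt] = depths[url] + 1
                frontier.append(nxt)
    return depths

site_links = {"https://www.example.com/": ["https://www.example.com/about"]}

# Two-hop chain: http -> http+www -> https+www
chained = assign_depths(
    "http://example.com/",
    redirects={
        "http://example.com/": "http://www.example.com/",
        "http://www.example.com/": "https://www.example.com/",
    },
    links=site_links,
)

# Collapsed: http redirects straight to https+www in one hop
collapsed = assign_depths(
    "http://example.com/",
    redirects={"http://example.com/": "https://www.example.com/"},
    links=site_links,
)
```

Under this model, collapsing the chain into a single hop moves every page up one level (the /about page drops from depth 3 to depth 2), and starting the crawl at the final https URL would avoid the redirect levels entirely.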
Thanks for your input on this!
-
Hi Samantha,
That makes sense, thank you. I'll set up a new campaign tracking with "https://" instead!
-
Hey there,
Sam from Moz's Help Team here!
So the thing to keep in mind when you set up a campaign at the root domain level is that we start the crawl from the http, non-www version of the site. In this case, that's http://logic2020.com/. If you filter by crawl depth in your Site Crawl, you'll see that URL with a crawl depth of 0.
It redirects to http://www.logic2020.com/, which has a crawl depth of 1. That URL then redirects again to https://www.logic2020.com/, which is listed with a crawl depth of 2 - which is why links we found on that page have a crawl depth of 3.
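The chain described above can be sketched as a simple depth counter (a toy model, assuming the crawler treats each redirect hop as exactly one level of depth, as the explanation states):

```python
# Toy model of how crawl depth accumulates across a redirect chain:
# each redirect hop adds one level of depth.
redirect_chain = [
    "http://logic2020.com/",       # campaign starting URL -> depth 0
    "http://www.logic2020.com/",   # first redirect        -> depth 1
    "https://www.logic2020.com/",  # second redirect       -> depth 2
]

depths = {url: i for i, url in enumerate(redirect_chain)}

# Pages linked from the final URL land one level deeper.
link_depth = depths[redirect_chain[-1]] + 1

for url, depth in depths.items():
    print(f"{url} -> crawl depth {depth}")
print(f"Pages linked from the homepage -> crawl depth {link_depth}")
```

This is why starting the campaign at https://www.logic2020.com/ directly, as suggested in this thread, puts the homepage back at depth 0: the redirect hops are no longer part of the crawl path.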
I hope this helps to clarify but let me know if you have any other questions!