Crawl Diagnostics Summary Problem
-
We added a robots.txt file to our website, and it blocks some pages. However, the Crawl Diagnostics Summary page shows no pages blocked by robots.txt. Why?
-
Hey there,
Thanks for the question. The way your robots.txt is set up is actually preventing all bots from reaching those pages, not just the search engines.
If you had a directive allowing RogerBot access to those pages, it would be able to crawl them and register that they are blocked from the search engines in the robots.txt.
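For illustration, a robots.txt along these lines would let rogerbot crawl the site while still declaring the block for everyone else (the /private/ path is hypothetical):

```text
# Sketch: rogerbot gets its own permissive record (empty Disallow
# means "allow everything"), while all other crawlers are blocked
# from the /private/ section.
User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /private/
```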
Since our crawler strictly adheres to the robots.txt file, you won't have anything populated there. I hope that makes sense. Feel free to reach out if you need more information.
Cheers,
Joel. -
Thanks Federico,
Can we use meta robots noindex and robots.txt together?
-
I am guessing here, but the Moz crawler does respect your robots.txt file, so it never reaches those pages to report them as blocked. If you want the pages kept out of the index, try using meta robots noindex instead and see what happens.
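For reference, a noindex directive is just a tag in each page's head, as in the sketch below. One caveat worth knowing: a crawler can only see the tag if robots.txt allows it to fetch the page, so a page that is both disallowed in robots.txt and tagged noindex may never have the tag read.

```text
<!-- Place in the <head> of each page to be kept out of the index -->
<meta name="robots" content="noindex">
```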
Related Questions
-
Why did Moz crawl our development site?
In our Moz Pro account we have one campaign set up to track our main domain. This week Moz threw up around 400 new crawl errors, 99% of which were meta noindex issues. What happened is that Moz somehow found the development/staging site and crawled it. I have no idea how it was able to do this: the robots.txt is set to disallow all, and the site is password protected. It looks like Moz ignored the robots.txt, but I still don't understand how it completed a crawl; it should have received a 401 Unauthorized and gone no further. How do I a) clean this up without manually ignoring each issue, and b) stop this from happening again? Thanks!
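For anyone setting up similar protection, the two layers described above usually amount to a disallow-all robots.txt plus HTTP basic auth. A sketch (file paths and realm name hypothetical):

```text
# robots.txt on the staging site: disallow everything
User-agent: *
Disallow: /

# .htaccess (Apache) basic auth, which returns 401 Unauthorized
# to any unauthenticated client, crawlers included
AuthType Basic
AuthName "Staging"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
```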
-
Crawlers crawl weird long URLs
I ran a crawl for the first time and I get many errors, but the weird thing is that the crawler keeps tracking duplicate long, non-existent URLs. For example (to be clear): there is a page www.website.com/dogs/dog.html, but then it continues crawling:
www.website.com/dogs/dog.html
www.website.com/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dogs/dogs/dog.html
What can I do about this? Screaming Frog gave me the same issue, so I know it's something with my website.
-
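The nested /dogs/dogs/… pattern in the question above is the classic signature of a relative link written without a leading slash (e.g. href="dogs/dog.html" instead of href="/dogs/dog.html"): each link resolves against the current page's directory, so every crawl step nests one level deeper. A small Python sketch of that resolution, using the URLs from the question:

```python
from urllib.parse import urljoin

# A relative href like "dogs/dog.html" resolves against the current
# page's directory, producing an ever-deeper chain of URLs.
page = "http://www.website.com/dogs/dog.html"
for _ in range(3):
    page = urljoin(page, "dogs/dog.html")
    print(page)
```

The usual fix is to make such links root-relative ("/dogs/dog.html") or absolute, so every page resolves them to the same URL.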
Pages Crawled: 1 Why?
I have some campaigns where only one page was crawled, while other campaigns with completely similar URLs (subdomains) and numbers of keywords and pages have all their pages crawled. Why is that? I have also been waiting a while, and so far nothing has changed.
-
SEOMOZ Crawl Test
Guys, I have an issue that I know exists but cannot see, if that makes sense. Three months ago I did a site-wide 301 from economyleasinguk.co.uk to www.economy-car-leasing.co.uk. Everything looks good: I get all the correct header responses, all canonicals work perfectly, Google Webmaster Tools is updated, and Fetch as Googlebot shows the old site is 301'd. I tried the SEOmoz crawl test on the old domain today and got this message: "Oh no! Looks like the page you were trying to access is temporarily down." At first I thought, OK, the site is not there, so it won't crawl an old 301'd domain. However, I then tried it on a domain I know has just been 301'd and got this message instead: "The URL http://www.site1.com/ redirects to http://site2.com/. Do you want to crawl http://site2.com/ instead? Would you like to: Continue with www.site1.com / Continue with site2.com." I really do not know what to do. Either the redirect script is missing something (though it does what it should), or the server is a problem (but again, it does what it should), so why can't SEOmoz crawl the old URL like the example site above? The strange thing is that Open Site Explorer does see the 301 and asks if I want to check the new URL instead. PS: the redirect is currently done with a PHP redirect, which I am asking to be changed to .htaccess since the site is now on an Apache server, and I was wondering if this could be the issue. All pages go to the correct pages as requested. Thanks in advance.
-
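For the PHP-to-.htaccess change mentioned above, a site-wide Apache 301 might look something like this (a sketch assuming mod_rewrite is enabled on the old domain's server):

```text
# .htaccess on the old domain: 301 every request to the new domain,
# preserving the requested path
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?economyleasinguk\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://www.economy-car-leasing.co.uk/$1 [R=301,L]
```

Doing the redirect at the server level also removes any dependence on PHP being invoked for every old URL.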
Crawl Test has taken over 5 days and still has yet to complete
I am running some crawls on some sites and I have a number still pending. I have one from 7 days ago, a couple from 6 days ago, and 1 from 5 days ago. The confusing thing is that I have run a few others in that same period that have finished already. Do I need to restart the crawls or cancel them and start over?
-
Crawl Diagnostics Error Spike
With the last crawl update to one of my sites there was a huge spike in reported errors. The errors jumped by 16,659, the majority of which fall under the duplicate title and duplicate content categories. When I look at the specific issues, it seems that the crawler is crawling a ton of blank pages on the site's blog through pagination. The odd thing is that the site has not been updated in a while, and prior to this crawl on Jun 4th there were no reports of these blank pages. Could this be an error on the crawler's side? Any suggestions on next steps would be greatly appreciated. I'm attaching an image of the error spike.
-
Crawl Diagnostics Update
I have corrected some errors in my SEOmoz Crawl Diagnostics; however, the errors are still showing. It says a crawl has happened since. Any ideas why?
-
Can I exclude pages from my Crawl Diagnostics?
Right now my crawl diagnostics information is being skewed because it includes the on-site search from my website. Is there a way to exclude certain pages, like search pages, from the errors and warnings of the crawl diagnostics? My search pages are coming up as:
Long URL
Title Element Too Long
Missing Meta Description
Blocked by meta-robots (which is how I want it)
Rel Canonical
Here is what the crawl diagnostic thinks my page URL looks like: website.com/search/gutter%25252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252Bcleaning/
Thank you, Jonathan
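As a side note, the long run of %2525… in that URL is the signature of repeated percent-encoding: %25 is the encoding of the % sign itself, so each extra encoding pass wraps the URL in another layer. A Python sketch, assuming (hypothetically) an original search term like gutter+cleaning:

```python
from urllib.parse import quote

# "%" encodes to "%25", so re-encoding an already-encoded string
# stacks "%25" prefixes: "+" -> "%2B" -> "%252B" -> "%25252B" ...
term = "gutter+cleaning"  # hypothetical original search term
for _ in range(3):
    term = quote(term)
    print(term)
```

Repeatedly unquoting such a URL recovers the original term, which suggests the search pages are re-encoding already-encoded URLs somewhere in their link generation.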