Crawl Diagnostics Summary Problem
-
We added a robots.txt file to our website, and there are pages blocked by robots.txt. However, the Crawl Diagnostics Summary page shows that no pages are blocked by robots.txt. Why?
-
Hey there,
Thanks for the question. The way you have your robots.txt set up is actually preventing all bots from even touching those pages, not just the search engines.
If you had a directive allowing RogerBot access to those pages, it would be able to crawl them and register that they are blocked from the search engines in robots.txt.
Since our crawler strictly adheres to the robots.txt file, you won't have anything populated there. I hope that makes sense. Feel free to reach out if you need more information.
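As a minimal sketch of what Joel is describing, a robots.txt along these lines would let RogerBot crawl the pages while still blocking other bots (the /private/ path here is only a hypothetical example, not from the original question):

```
# Hypothetical sketch: allow Moz's RogerBot to crawl everything,
# so it can see these pages and report the blocks that apply to other bots
User-agent: rogerbot
Disallow:

# Block all other crawlers (including search engine bots) from /private/
User-agent: *
Disallow: /private/
```

Because a crawler obeys the most specific user-agent group that matches it, RogerBot follows its own group and ignores the `*` group, so it can fetch /private/ pages and report them as blocked for the search engines.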
Cheers,
Joel.
-
Thanks Federico,
Can we use meta robots noindex and robots.txt together?
-
I am guessing here, but the Moz crawler may simply be respecting your robots.txt file and never fetching those pages at all. Instead, if you want pages kept out of the index, try using the meta robots noindex tag for a change and see what happens.
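As a sketch of that suggestion, the noindex directive is a standard HTML tag (nothing Moz-specific) that goes in each page's head:

```html
<!-- In the <head> of each page you want excluded from search results -->
<meta name="robots" content="noindex">
```

Note that a crawler can only obey this tag if it is allowed to fetch the page, so a URL that carries noindex should not also be blocked in robots.txt, or the tag will never be seen.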
Related Questions
-
Crawl findings 301 redirects I didn't make?
Hi, I'm new to SEOMOZ Pro and loving it so far, but I was confused as to how the 51-page crawl of my site (http://cryptophoneaustralia.com) found so many 301 redirects: 18, to be exact. It's a WordPress site, and my htaccess file has no 301s in it, so I'm not sure where to start looking for why they've shown up in the crawl. I've been building sites for years and use 301s quite regularly, but this site should have none. The site was originally on a subdomain until it was ready to go live; then I moved the site to its current domain and ran the Velvet Blues plugin to update all the URLs. I then went through and manually changed the ones in areas where this plugin tends to miss. The site still functions fine; it just bothers me why the 301s are being found in the crawl. Thank you.
Moz Pro | TrentDrake0
-
Crawl report - duplicate page title/content issue
When the crawl report finished, it said that there are duplicate content/page title issues. However, there is a canonical tag that is formatted correctly, so I just wondered if this was a bug or if anyone else was having the same issue? For example, I'm getting an error warning for this page: http://www.thegreatgiftcompany.com/categories/categories_travel?sort=name_asc&searchterm=&page=1&layout=table
Moz Pro | KarlBantleman0
-
Crawl Diagnostics 403 on home page...
In the crawl diagnostics it says oursite.com/ has a 403. It doesn't say what's causing it, but it mentions there is no robots.txt. There is a robots.txt, and I see no problems with it. How can I find out more information about this error?
Moz Pro | martJ0
-
Why does SEOMoz only crawl 1 page of my site?
My site is: www.thetravelingdutchman.com. It has quite a few pages, but for some reason SEOMoz only crawls one. Please advise. Thanks, Jasper
Moz Pro | Japking0
-
Joined yesterday, today crawl errors (incorrectly) shows as zero...
Hi. We set up our SEOMoz account yesterday, and the initial crawl showed up a number of errors and warnings, which we were in the process of looking at and resolving. I logged into SEOMoz today and it's showing 0 errors: Pages Crawled: 0 | Limit: 10,000 | Last Crawl Completed: Nov. 27th, 2012 | Next Crawl Starts: Dec. 4th, 2012. Errors, warnings, and notices show as 0, and the issues found yesterday show only in the change indicators. Is there no way of getting to the results seen yesterday other than waiting a week? We were hoping to continue working through the found issues!
Moz Pro | WorldText0
-
Not all pages are being crawled
I am set up on the PRO plan, and I was under the impression that it would crawl up to 10,000 pages. My site has just over 200 pages, but whenever it is crawled, only 121 pages are crawled. Is this normal? It's hard to know how reliable my data is when a significant number of pages are missing.
Moz Pro | KristinHarding0
-
Dynamic URL pages in Crawl Diagnostics
The crawl diagnostics have found errors for pages that do not exist within the site. These pages do not appear in the SERPs and are seemingly dynamic URL pages. Most of the URLs that appear are formatted like http://mysite.com/keyword,%20_keyword_,%20key_word_/, which look like dynamic URLs for potential search phrases within the site. The other popular variety among these pages has a URL format of http://mysite.com/tag/keyword/filename.xml?sort=filter, which is only generated by a filter utility on the site. These pages account for about 90% of the 401 errors, duplicate page content/title, overly dynamic URL, missing meta description tag, and other issues, and many of the same pages appear in multiple error/warning/notice categories. So why are these pages being included in the crawl test, and how do I stop it to get a better analysis of my site via SEOmoz?
Moz Pro | Visually0
-
How long does a crawl take?
A crawl of my site started on the 8th of July and is still going on. Is there something wrong?
Moz Pro | Brian_Worger1