Why is Roger crawling pages that are disallowed in my robots.txt file?
-
I have specified the following in my robots.txt file:
Disallow: /catalog/product_compare/
Yet Roger is crawling these pages: 1,357 errors.
Is this a bug or am I missing something in my robots.txt file?
Here's one of the URLs that Roger pulled:
Please let me know if my problem is in robots.txt or if Roger spaced this one. Thanks!
-
Digging in further, I discovered that rogerbot had been blocked from a portion of these URL variations, but two-thirds slipped through. I sent an email to support. Thanks for the suggestion.
-
Digging back through the Q&A... I'm seeing several posts reporting this sort of thing.
http://www.seomoz.org/dp/rogerbot
Perhaps you could try specifically blocking rogerbot? If that doesn't work, an email to the SEOmoz team may do the trick.
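A sketch of the kind of rogerbot-specific block being suggested (standard robots.txt syntax; the path mirrors the one quoted above):

```
User-agent: rogerbot
Disallow: /catalog/product_compare/

User-agent: *
Disallow: /catalog/product_compare/
```

One thing worth checking: Disallow lines only take effect inside a User-agent group, so a robots.txt that opens with a bare Disallow line and no User-agent line may be ignored by some crawlers.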
-
Yes, I'm blocking all user agents with User-agent: *
-
Have you specified a User-Agent?
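One way to check whether a given set of rules actually blocks a URL for a particular user-agent is Python's built-in urllib.robotparser (the domain and example paths below are illustrative, not taken from the thread):

```python
import urllib.robotparser

# Illustrative rules: a group for rogerbot plus a catch-all group.
rules = """\
User-agent: rogerbot
Disallow: /catalog/product_compare/

User-agent: *
Disallow: /catalog/product_compare/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Compare pages should be disallowed for rogerbot...
print(parser.can_fetch("rogerbot", "http://example.com/catalog/product_compare/index/"))  # False

# ...while ordinary pages remain fetchable.
print(parser.can_fetch("rogerbot", "http://example.com/catalog/widgets/"))  # True
```

If the disallowed URL comes back fetchable here, the problem is in the rules themselves rather than in the crawler.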
Related Questions
-
Robots.txt blocking Moz
Moz are reporting the robots.txt file is blocking them from crawling one of our websites. But as far as we can see this file is exactly the same as the robots.txt files on other websites that Moz is crawling without problems. We have never come up against this before, even with this site. Our stats show Rogerbot attempting to crawl our site, but it receives a 404 error. Can anyone enlighten us to the problem please? http://www.wychwoodflooring.com -Christina
Moz Pro | ChristinaRadisic
-
How to remove 404 pages in WordPress
I used the crawl tool and it returned a 404 error for several pages that I no longer have published in WordPress. They must still be on the server somewhere? Do you know how to remove them? I think they are not files on the server like HTML files, since WordPress uses databases. I figure that getting rid of the 404 errors will improve SEO; is this correct? Thanks, David
Moz Pro | DJDavid
-
Will pages marked noindex with robots still be crawled and flagged as duplicate page content in SEOmoz?
When we mark a page as noindex with robots, like <meta name="robots" content="noindex" />, will it still be crawled and marked as duplicate page content? (It is already duplicate page content within the site, i.e. two links pointing to the same page.) So we are marking both links as not to be indexed by search engines. But after we made this change, the crawl reports show no difference: they still include the noindex-marked pages as duplicates. Please help to solve this problem.
Moz Pro | trixmediainc
-
On-page reports are empty!
Hi Rogerbot, right now I have no SEOmoz on-page reports for any campaign 😞 What happened? I previously had several reports. Thanks,
Moz Pro | Max84
-
Duplicate page reported in Wordpress site, but I can't find it in All Pages list
In a crawl report a duplicate page content warning has been displayed. The two URLs are: 1. http://www.superheroes.com.au/shop and 2. http://www.superheroes.com.au/shop/category/catalog/ It's a WordPress site and I cannot find the second page anywhere in the list of All Pages in the admin (I want to add canonicalisation code). When I view the 2nd page and click Edit Page, it redirects to the 1st page. Any ideas where SEOMoz would be finding this 2nd page or how it might be being generated? (btw I didn't build this site) Thanks, Simon
Moz Pro | Andyfools
-
Domain and Page authority dropped on home page
My two strongest links did not show up in the latest Open Site Explorer run: Business.com and BOTW.com. Linking domains went from 21 to 15. Domain authority dropped 3 points and page authority dropped 3 points, I believe. Is there a function in SEOmoz.org that lets you track your domain scores over time? Thanks, Boo
Moz Pro | Boodreaux
-
Does the SEOMoz weekly crawl, when it highlights missing meta description tags, take into account a meta robots 'noindex,follow' tag on those pages?
The weekly crawl website report is telling me that there are pages that have missing meta description tags, yet I've implemented meta robots tags to 'noindex, follow' those pages, which are visible in those page source files. As far as Google is concerned, surely this then won't be a problem, since it is being instructed NOT to consider these specific pages for indexing. I am assuming that the weekly SEOmoz website crawl is simply throwing the missing meta description findings into its report without itself observing that the particular URLs contain the meta robots 'noindex,follow' tag? Appreciate if you can clarify if this is the case. It would help me understand that (at least in terms of my efforts towards Google) your own crawl doesn't observe the meta robots tag instruction, hence the resultant report's flagging the discrepancy.
Moz Pro | callassist
-
Redirecting duplicate .asp pages?
Hi all, I have a bit of a problem with duplicate content on our website. The CMS has been creating identical duplicate pages depending on which menu route a user takes to get to a product (i.e. via the side menu button or the top menu bar). Anyway, the web design company we use are sorting it out going forward, and creating 301 redirects on the duplicate pages. My question is, some of the duplicates take two different forms. E.g. for the home page: www.<my-domain>.co.uk, www.<my-domain>.co.uk/index.html, and www.<my-domain>.co.uk/index.asp. Now I understand the 'index.html' page should be redirected, but does the 'index.asp' need to be redirected also? What makes this more confusing is when I run the SEOMoz diagnostics report (which brought my attention to the duplicate content issue in the first place - thanks SEOMoz), not all the .asp pages are identified as duplicates. For example, the above 'index.asp' page is identified as a duplicate, but 'contact-us.asp' is not highlighted as a duplicate of 'contact-us.html'? I'm a bit new to all this (I'm not an IT specialist), so any clarification anyone can give would be appreciated. Thanks, Gareth
Moz Pro | gdavies09031977
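Since the site serves .asp pages it is presumably on IIS, so a hypothetical sketch of a web.config rule (assuming the IIS URL Rewrite module is installed; the rule name is made up) that 301-redirects both index variants to the root would look like:

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Permanently redirect /index.html and /index.asp to / -->
        <rule name="CollapseIndexVariants" stopProcessing="true">
          <match url="^index\.(html|asp)$" />
          <action type="Redirect" url="/" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

If server-level redirects aren't an option, a rel=canonical tag pointing at the bare domain would cover the same duplicates.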