Spider Indexed Disallowed URLs
-
Hi there,
In order to reduce the huge amount of duplicate content and titles for a client, in August we disallowed all spiders from some areas of the site via the robots.txt file. This was followed by a huge decrease in errors in our SEOmoz crawl report, which, of course, left us satisfied.
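For reference, the kind of setup we used looks roughly like this (the directory names below are placeholders for illustration; I'm not listing our client's real paths):

```
User-agent: *
Disallow: /search/
Disallow: /print/
Disallow: /filter/
```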
In the meantime, we haven't changed anything in the back end, the robots.txt file, the FTP setup, or the website itself. But our crawl report came in this November and all of a sudden all the errors were back. We've checked the errors and noticed URLs that are definitely disallowed. That these URLs are blocked is also confirmed by Google Webmaster Tools, by other robots.txt checkers, and by Google itself: when we search for a disallowed URL, it says the page is blocked for spiders. Where did these errors come from? Was it the SEOmoz spider that ignored our disallow rules, or something else? You can see the drop and the subsequent increase in errors in the attached image.
Thanks in advance.
[Attached image: LAAFj.jpg]
-
This was what I was looking for! The pages are indexed by Google, yes, but they aren't being crawled by Googlebot (as Webmaster Tools and the Matt Cutts video tell me). They are probably just being crawled occasionally (not every month) by Rogerbot. Thank you very much!
-
Yes, canonicalization or a meta noindex tag would of course be better for passing on any possible link juice, but we aren't worried about that. I was worried that Google would still see the pages as duplicates (I couldn't really distill that from the article, although it was useful!). Barry Smith answered that last issue in the answer below, but I do want to thank you for your insight.
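For anyone reading along, the two alternatives mentioned above are implemented as plain elements in the page head; a minimal sketch (the URL is a placeholder, not our client's actual page):

```html
<!-- Consolidate duplicates onto a preferred URL and pass on link equity -->
<link rel="canonical" href="http://www.example.com/preferred-page/" />

<!-- Or keep a page crawlable but out of the index -->
<meta name="robots" content="noindex, follow" />
```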
-
The directives issued in a robots.txt file are just a suggestion to bots, albeit one that Google does follow.
Malicious bots will ignore them, and occasionally even bots that do follow the directives may mess up (which is probably what happened here).
Google may also index pages that you've blocked if it has found them via a link, as explained here - http://www.youtube.com/watch?v=KBdEwpRQRD0 - or, for an overview of what Google does with robots.txt files, you can read this - http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449
I'd suggest looking at other ways of fixing the problem than just blocking 1,500 pages, but I see you've already considered what would be required to fix the issues without removing the pages from the crawl and decided the value isn't there.
If WMT is telling you the pages are blocked from being crawled, I'd believe it.
Try searching Google for a URL that should be blocked and see if it's indexed, or do site:http://yoursitehere.com and see if blocked pages come up.
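If you want to double-check the robots.txt logic itself outside of WMT, Python's standard library ships a robots.txt parser; here's a minimal sketch, where yoursitehere.com and the path are placeholders for your real URLs:

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the live robots.txt file
rp = RobotFileParser()
rp.set_url("http://yoursitehere.com/robots.txt")
rp.read()

url = "http://yoursitehere.com/some-blocked-page/"

# False means the URL is disallowed for that user-agent
print(rp.can_fetch("*", url))         # any compliant bot
print(rp.can_fetch("rogerbot", url))  # same rules unless rogerbot has its own group
```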
-
The assumptions about what to expect from using robots.txt may not be in line with the realities. Crawling a page isn't the same thing as indexing its content to appear in the SERPs, and even with robots.txt directives in place, your pages can still be crawled.
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
-
Thanks, Mr. Goyal. Of course we have been thinking about ways to do that and figured out some options, but implementing those solutions would be disastrous from a time/financial perspective. The pages that we have blocked from the spiders aren't needed for visibility in the search engines and don't carry much link juice; they are only there for the visitors. So we decided we don't really need them for our SEO efforts in a positive way. But if these pages do get crawled and the engines notice the huge amount of duplicates, I reckon it would have a negative influence on our site as a whole.
So the problem we have is focused on our doubts about the legitimacy of the report. If SEOmoz can crawl these pages, Googlebot probably could too, right? After all, we've used: User-agent: *
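As I understand it, a wildcard group applies to every compliant crawler unless that crawler is matched by a more specific group of its own, so something like this hypothetical file should bind Googlebot and Rogerbot alike:

```
# Applies to every compliant crawler without a group of its own
User-agent: *
Disallow: /duplicates/

# A bot-specific group would override the wildcard for that bot only
# (an empty Disallow means everything is allowed for that bot)
User-agent: rogerbot
Disallow:
```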
-
Mark
Are you blocking all bots from spidering these erroneous URLs? Is there a way for you to fix them so that they either no longer exist or are no longer duplicates?
I'd recommend looking at it from that perspective as well, not just with the intent of making the errors disappear from the SEOmoz report.
I hope this helps.
-
Related Questions
-
URL Indexed But Not Submitted to Sitemap
Hi guys, In Google's Webmaster Tools it says that the URL has been indexed but not submitted to the sitemap. Is it necessary for the URL to be submitted to the sitemap if it has already been indexed? I appreciate your help with this. Mark
Technical SEO | | marktheshark100 -
Google Indexing Pages with Made Up URL
Hi all, Google is indexing a URL on my site that doesn't exist and never existed in the past. The URL is completely made up. Does anyone know why this is happening and, more importantly, how to get rid of it? Thanks 🙂
Technical SEO | | brian-madden0 -
New URL Structure
Hi guys, For our webshop we're considering a new URL structure, because long-tail keywords rank so well. Now we have:
/category (main focus keywords)
/product/the-product345897345123/ (nice to rank on, but not that much volume)
We have over 500 categories, and every one of them is placed directly after our domain. I think it's better to work with a good structure, and I've worked out a way to make categories and sub-categories. The 500 categories may also be the reason why not every one of them is ranking so well, which was another reason to think about a new structure. So the new URL structure will be:
/category (main focus keywords)
/category/subcat/ (also main focus keywords)
Everything will be redirected (301, the proper way), so I think there won't be too many problems. I'm wondering what to do with the /product/ URLs, because they will now be on the same level as the subcategories, and I'm afraid that when they're on that level, Google will give the same value to both. The options I'm considering are:
**Old way**
/product/the-product-345897345123/.html (seen this on big webshops)
/product/the-product-345897345123.html/
**Level deeper SKU**
/product/the-product/345897345123/
What would you suggest? The new structure would be: 20 categories, 500+ subs divided under the main categories, 5000+ products. Thanks!
Technical SEO | | Happy-SEO0 -
Index subpages but not homepage
Hello, Google indexes all my subpages, but not the homepage. How is this possible? https://www.google.nl/#q=http:%2F%2Fwww.lavistarelatiegeschenken.nl It does index the homepage of the neighbouring country's site in the same language. How is that possible? Regards, Marcel
Technical SEO | | Bossie720 -
Canonical URL
I previously set the canonical URL in Google Webmaster Tools to the non-www version. When I check my on-page optimization, it tells me that I have a critical issue with this. Should I change it back to the www version in Google Webmaster Tools? If so, is there a possibility of negative results? Or is there a better way to deal with this? Note: I have inbound links pointing to both versions.
Technical SEO | | bronxpad0 -
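A common way to resolve a www/non-www split, whichever version is settled on, is a sitewide 301 redirect so that links to both hostnames consolidate onto one. A minimal .htaccess sketch, assuming an Apache server and a preference for the www version (example.com is a placeholder):

```
RewriteEngine On
# 301-redirect requests for the bare domain to the www hostname
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```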
Google Indexing
Hi everybody, I am having a bit of an issue with the results Google shows for my site. I have a multilingual site whose main language is Catalan. But of course, if I am looking at results in Spanish (google.es) or in English (google.com), I want Google to show the results with the proper URL, title and description. My brand is "Vallnord", so if you type this into Google you will be shown the result in Catalan (which is not optimized at all yet), but only if you search for "vallnord.com/es" will you be shown the result in Spanish. What do I have to do in order for Google to display this the way I want? Regards, Guido.
Technical SEO | | SilbertAd0 -
About Google Spider
Hello, people! I have some questions regarding the Google spider. Many people are saying that "Google spiders only have US IP addresses." Is this really true? But I also saw a video from Google's official blog that said "Google spiders come from all around the world." At this point I am really confused. Q1) I researched this and it seems like Google spiders have only US IP addresses. Then what exactly does "Google spiders come from all around the world" mean? Q2) If Google spiders have only US IP addresses, what happens to sites that use IP delivery? Does that mean Google spiders are always redirected to the US site, since they only have US IPs? Can anyone help me understand? One more question! When Google analyzes a site for cloaking issues, do you think it analyzes during the crawl, or after the site has been crawled?
Technical SEO | | Artience0
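One practical way to see where Google actually crawls from is to take a crawler IP out of your server logs and run a reverse-DNS lookup, which is also Google's recommended method for verifying a genuine Googlebot. A minimal Python sketch (the sample IP is just an illustration from a commonly seen Googlebot range):

```python
import socket

def is_googlebot(ip):
    """Reverse-DNS check: genuine Googlebot IPs resolve to a
    googlebot.com or google.com hostname, and that hostname must
    resolve back to the same IP (forward confirmation)."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    # Forward-confirm to guard against spoofed PTR records
    return ip in socket.gethostbyname_ex(host)[2]

print(is_googlebot("66.249.66.1"))
```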