Spider Indexed Disallowed URLs
-
Hi there,
In order to reduce the huge amount of duplicate content and titles for a client, we disallowed all spiders from some areas of the site in August via the robots.txt file. This was followed by a huge decrease in errors in our SEOmoz crawl report, which, of course, pleased us.
In the meantime we haven't changed anything in the back-end, the robots.txt file, FTP, the website or anything else. But our crawl report came in this November and all of a sudden all the errors were back. We checked the errors and noticed URLs that are definitely disallowed. That these URLs are disallowed is also confirmed by Google Webmaster Tools, by other robots.txt checkers, and by the fact that when we search for a disallowed URL in Google, it says the page is blocked for spiders. Where did these errors come from? Was it the SEOmoz spider that ignored our disallow rules, or something else? You can see the drop and the increase in errors in the attached image.
Thanks in advance.
[Attached image: LAAFj.jpg, showing crawl errors dropping after the robots.txt change and spiking again in November.]
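For anyone who wants to double-check a disallow rule outside of Webmaster Tools, here is a minimal sketch using Python's standard-library urllib.robotparser; the domain and path below are placeholders for your own site:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical URLs -- substitute your own domain and a path that your
# robots.txt disallows.
robots_url = "http://www.example.com/robots.txt"
blocked_url = "http://www.example.com/blocked-area/page.html"

parser = RobotFileParser()
parser.set_url(robots_url)
parser.read()  # fetch and parse the live robots.txt

# can_fetch() reports whether a compliant bot may crawl the URL.
for agent in ("*", "Googlebot", "rogerbot"):
    verdict = "allowed" if parser.can_fetch(agent, blocked_url) else "blocked"
    print(f"{agent}: {verdict}")
```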
-
This was what I was looking for! The pages are indexed by Google, yes, but they aren't being crawled by Googlebot (as Webmaster Tools and the Matt Cutts video tell me); they are probably just being crawled occasionally by Rogerbot (not monthly). Thank you very much!
-
Yes, a canonical tag or a meta noindex tag would of course be better to pass on the possible link juice, but we aren't worried about that. I was worried that Google would still see the pages as duplicates (I couldn't really distil that from the article, although it was useful!). Barry Smith addressed that last issue in the answer below, but I do want to thank you for your insight.
-
The directives issued in a robots.txt file are just a suggestion to bots, albeit one that Google does follow.
Malicious bots will ignore them, and occasionally even bots that normally follow the directives may mess up (which is probably what happened here).
Google may also index pages that you've blocked if it has found them via a link, as explained here - http://www.youtube.com/watch?v=KBdEwpRQRD0 - or, for an overview of what Google does with robots.txt files, you can read this - http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449
I'd suggest you look at other ways of fixing the problem than just blocking 1,500 pages, but I see you've already considered what would be required to fix the issues without removing the pages from the crawl and decided the value isn't there.
If WMT is telling you the pages are blocked from being crawled, I'd believe it.
Try searching Google for a URL that should be blocked and see if it's indexed, or do a site: search (site:http://yoursitehere.com) and see if blocked pages come up.
-
The assumptions about what to expect from robots.txt may not be in line with reality. Crawling a page isn't the same thing as indexing its content to appear in the SERPs, and even with robots.txt directives in place, your pages can end up crawled.
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
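That article's alternative to robots.txt blocking is a noindex signal, which keeps a page out of the index but only works if bots are allowed to crawl the page and see it. A rough sketch of how you might check a page for that signal, assuming a hypothetical URL:

```python
import re
import urllib.request

# Hypothetical URL -- substitute a page you want kept out of the index.
url = "http://www.example.com/duplicate-page.html"

with urllib.request.urlopen(url) as response:
    x_robots = response.headers.get("X-Robots-Tag", "")
    body = response.read().decode("utf-8", errors="replace")

# A noindex signal can live in the HTTP header or in a meta robots tag.
# Bots must be ALLOWED to crawl the page to ever see it, which is why
# a robots.txt block and noindex don't combine well.
meta_noindex = re.search(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex',
    body,
    re.IGNORECASE,
)

print("X-Robots-Tag header:", x_robots or "(none)")
print("meta robots noindex:", bool(meta_noindex))
```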
-
Thanks, Mr. Goyal. Of course we have thought about ways to do this and figured out some options, but implementing those solutions would be disastrous from a time/financial perspective. The pages we have blocked from the spiders aren't needed for visibility in the search engines and don't carry much link juice; they are only there for the visitors, so we decided we don't really need them for our SEO efforts in a positive sense. But when these pages do get crawled and the engines notice the huge amount of duplicates, I reckon this would have a negative influence on our site as a whole.
So our problem comes down to doubts about the legitimacy of the report. If SEOmoz can crawl these pages, Googlebot probably could too, right? After all, we've used: User-agent: *
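To illustrate that point, a small sketch (the Disallow path and URL are placeholders) showing that a wildcard rule should bind every compliant crawler equally, so Rogerbot ought to be refused by the same line that blocks Googlebot:

```python
from urllib.robotparser import RobotFileParser

# A minimal robots.txt using the wildcard user-agent, mirroring the
# setup described above.
robots_txt = """\
User-agent: *
Disallow: /duplicate-area/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Because the rule targets User-agent: *, every compliant crawler --
# Googlebot and SEOmoz's rogerbot alike -- should be blocked here.
url = "http://www.example.com/duplicate-area/page.html"
for agent in ("Googlebot", "rogerbot"):
    print(agent, "can fetch:", parser.can_fetch(agent, url))
```

Both checks should print False; if Rogerbot crawled those URLs anyway, the bot (not your robots.txt) slipped up.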
-
Mark
Are you blocking all bots from spidering these erroneous URLs? Is there a way for you to fix them so that either they no longer exist or they are no longer duplicates?
I'd recommend looking at it from that perspective as well, not just with the intent of making those errors disappear from the SEOmoz report.
I hope this helps.