.htaccess problem causing 605 Error?
-
I'm working on a site, it's just a few HTML pages plus a WP blog I've added. I've just noticed that Moz is giving me the following error for http://website.com (Webmaster Tools is set to show the www subdomain, where everything appears OK).
Error Code 605: Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag
Here's the code from my htaccess, is this causing the problem?
RewriteEngine on
Options +FollowSymLinks

RewriteCond %{THE_REQUEST} ^.*/index\.html
RewriteRule ^(.*)index\.html$ http://www.website.com/$1 [R=301,L]

RewriteCond %{THE_REQUEST} ^.*/index\.php
RewriteRule ^(.*)index\.php$ http://www.website.com/$1 [R=301,L]

RewriteCond %{HTTP_HOST} ^website\.com$ [NC]
RewriteRule ^(.*)$ http://www.website.com/$1 [R=301,L]

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Thanks for any advice you can offer!
-
Hey Matt Antonio!
I just wanted to clarify that this error isn't specific to the robots.txt file; it can also mean we are being blocked by an X-Robots-Tag HTTP header or a meta robots tag. Usually this error does indicate an actual issue with the site we are crawling rather than with our crawler.
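Since a 605 can come from any of those three sources, it helps to check all of them when debugging. Here is a minimal sketch of that check; the function name and the regex-based meta-tag parsing are illustrative, not Moz's actual implementation:

```python
import re

def is_blocked_from_indexing(headers, html):
    """Return True if the response asks crawlers not to index the page.

    `headers` is a dict of HTTP response headers; `html` is the page source.
    Both the X-Robots-Tag header and the meta robots tag are checked, since
    either one can trigger a 605 even when robots.txt itself is fine.
    """
    # X-Robots-Tag HTTP header, e.g. "X-Robots-Tag: noindex, nofollow"
    x_robots = headers.get("X-Robots-Tag", "").lower()
    if "noindex" in x_robots or "none" in x_robots:
        return True

    # Meta robots tag, e.g. <meta name="robots" content="noindex">
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html, re.IGNORECASE)
    if meta:
        directives = meta.group(1).lower()
        if "noindex" in directives or "none" in directives:
            return True
    return False
```

Running it against a live page would just mean fetching the URL and passing the response headers and body in.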
The other Q&A post you mentioned is definitely an exception to that rule, but that issue was resolved in August 2014 and has not occurred again.
I hope that clears things up a bit. We are always happy to look into the specific issue causing the crawl error with a site, so I do agree that contacting the help team for these types of issues is often a good idea.
Thanks,
Chiaryn
-
Hi Stevie-G,
I just took a look at your campaign, and I am actually getting a 300 HTTP response for your robots.txt file both in the browser and in a cURL request from our crawler: http://www.screencast.com/t/UjiIU0MD
The only robots.txt responses we accept as allowing access to your site are 200 and 404. (301s are also okay if the target URL resolves as a 200 or 404.) Any other HTTP response is treated as denying access, so we aren't able to crawl the site due to the 300 response code we receive from the robots.txt file.
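That acceptance rule can be written down as a tiny function, which makes the logic easy to check against your own robots.txt response codes (a sketch of the rule as described, not Moz's actual code):

```python
def robots_txt_allows_crawl(status, redirect_target_status=None):
    """Apply the robots.txt acceptance rule described above.

    A 200 or 404 response for /robots.txt allows crawling. A 301 is fine
    only if the redirect target itself resolves as a 200 or 404. Anything
    else, including the 300 Multiple Choices seen here, denies access.
    """
    if status in (200, 404):
        return True
    if status == 301 and redirect_target_status in (200, 404):
        return True
    return False
```

So a 300 from /robots.txt fails the rule outright, which is exactly why the crawl is blocked here.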
I hope this helps!
Chiaryn
Help Team Sensei -
This has happened before, but in that case the site seemed to be blocking Roger:
I'm not sure if that's the current issue, but if your actual /robots.txt file isn't blocking rogerbot, I can't imagine why you'd pull a 605 short of a technical Moz issue. You may want to contact support and direct them here to see if it's a similar issue.
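For comparison, a robots.txt that blocks Roger explicitly would contain something like this (illustrative fragment only; check your own file for any rule matching rogerbot or a blanket `User-agent: *` disallow):

```
User-agent: rogerbot
Disallow: /
```

If nothing like that is present and the file returns a 200, the 605 is coming from somewhere else.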