.htaccess problem causing 605 Error?
-
I'm working on a site; it's just a few HTML pages, and I've added a WP blog. I've just noticed that Moz is giving me the following error for http://website.com (Webmaster Tools is set to the www subdomain, so there it appears OK).
Error Code 605: Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag
Here's the code from my .htaccess; is this causing the problem?
RewriteEngine on
Options +FollowSymLinks
RewriteCond %{THE_REQUEST} ^.*/index\.html
RewriteRule ^(.*)index\.html$ http://www.website.com/$1 [R=301,L]
RewriteCond %{THE_REQUEST} ^.*/index\.php
RewriteRule ^(.*)index\.php$ http://www.website.com/$1 [R=301,L]
RewriteCond %{HTTP_HOST} ^website\.com$ [NC]
RewriteRule ^(.*)$ http://www.website.com/$1 [R=301,L]
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Thanks for any advice you can offer!
-
Hey Matt Antonio!
I just wanted to clarify that this error isn't specific to the robots.txt file; it can also indicate that we are being blocked by an X-Robots-Tag HTTP header or a meta robots tag. Usually this error indicates an actual issue with the site we are crawling rather than with our crawler.
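As a purely illustrative sketch (not rogerbot's actual logic), a check like the one described above has to look in two places, since a noindex directive can arrive either in the HTTP response headers or in the page markup:

```python
# Illustrative sketch only -- not Moz's actual crawler code.
# Checks an X-Robots-Tag header value and a page's meta robots
# tags for a "noindex" (or "none") directive.
import re

def blocked_by_robots_directives(x_robots_tag, html):
    """Return True if either the header or a meta tag says noindex."""
    # X-Robots-Tag may carry several comma-separated directives,
    # e.g. "noindex, nofollow".
    if x_robots_tag:
        directives = {d.strip().lower() for d in x_robots_tag.split(",")}
        if directives & {"noindex", "none"}:
            return True
    # Look for <meta name="robots" content="... noindex ...">.
    for match in re.finditer(
            r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
            html, re.IGNORECASE):
        directives = {d.strip().lower() for d in match.group(1).split(",")}
        if directives & {"noindex", "none"}:
            return True
    return False
```

So even with a clean robots.txt, a stray header added by a plugin or server config can still trigger the 605.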
The other Q&A post you mentioned is definitely an exception to that rule, but that issue was resolved in August 2014 and has not occurred again.
I hope that clears things up a bit. We are always happy to look into the specific issue causing the crawl error with a site, so I do agree that contacting the help team for these types of issues is often a good idea.
Thanks,
Chiaryn
-
Hi Stevie-G,
I just took a look at your campaign, and I am actually getting a 300 HTTP response for your robots.txt file both in the browser and in a cURL request from our crawler: http://www.screencast.com/t/UjiIU0MD
The only robots.txt responses we can accept as allowing access to your site are 200 and 404. (301s are also okay if the target URL resolves as a 200 or 404.) Any other HTTP response is treated as denying access to your site, so we aren't able to crawl the site due to the 300 response code we receive for your robots.txt file.
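For reference, that acceptance policy can be sketched in a few lines of Python (my own illustration of the rule described above, not Moz's code):

```python
# Sketch of the robots.txt acceptance rule described above --
# purely illustrative, not Moz's actual crawler code.

OK_STATUSES = {200, 404}  # treated as "site may be crawled"

def robots_txt_allows_crawl(status, redirect_target_status=None):
    """Apply the stated policy to a robots.txt HTTP status code.

    A 301 is acceptable only if the redirect target itself
    resolves with a 200 or 404; anything else (including the
    300 seen in this case) is treated as denying access.
    """
    if status in OK_STATUSES:
        return True
    if status == 301:
        return redirect_target_status in OK_STATUSES
    return False
```

By this rule the 300 response falls straight into the "deny" bucket, which is why the crawl stops before any page is fetched.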
I hope this helps!
Chiaryn
Help Team Sensei -
This happened before, but they seemed to be blocking Roger:
I'm not sure if that's the current issue, but if your actual /robots.txt file isn't blocking rogerbot, I can't imagine why you'd pull a 605 short of a technical Moz issue. You may want to contact support and direct them here to see if it's a similar issue.
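To rule out the robots.txt rules themselves, you can test locally whether rogerbot would be allowed, using Python's standard-library robotparser. The rules and URLs below are placeholders; paste in your real file:

```python
# Quick local check: does a given robots.txt block rogerbot?
# The rules and URLs below are placeholders -- substitute your own.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: rogerbot",
    "Disallow: /private/",
    "",
    "User-agent: *",
    "Disallow:",
]

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("rogerbot", "http://www.website.com/"))           # homepage allowed?
print(parser.can_fetch("rogerbot", "http://www.website.com/private/x"))  # blocked path
```

If this says the homepage is allowed but you still get a 605, the block is coming from the HTTP response itself (status code, X-Robots-Tag, or meta robots) rather than the robots.txt rules.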