.htaccess problem causing 605 Error?
-
I'm working on a site that is just a few HTML pages plus a WordPress blog I've added. I've just noticed that Moz is giving me the following error for http://website.com (Webmaster Tools is set to the www subdomain, where everything appears OK):
Error Code 605: Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag
Here's the code from my .htaccess file. Is this causing the problem?
RewriteEngine on
Options +FollowSymLinks

RewriteCond %{THE_REQUEST} ^.*/index\.html
RewriteRule ^(.*)index\.html$ http://www.website.com/$1 [R=301,L]

RewriteCond %{THE_REQUEST} ^.*/index\.php
RewriteRule ^(.*)index\.php$ http://www.website.com/$1 [R=301,L]

RewriteCond %{HTTP_HOST} ^website\.com$ [NC]
RewriteRule ^(.*)$ http://www.website.com/$1 [R=301,L]

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Thanks for any advice you can offer!
-
Hey Matt Antonio!
I just wanted to clarify that this error isn't specific to the robots.txt file; it can also indicate that we are being blocked by an X-Robots-Tag HTTP header or a meta robots tag. Usually this error indicates an actual issue with the site we are crawling rather than with our crawler.
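For example, any one of the following would trigger a 605 (these are generic illustrations, not something we necessarily found on your site):

In robots.txt, a site-wide block:
User-agent: *
Disallow: /

As an HTTP header, e.g. set in Apache config (requires mod_headers):
Header set X-Robots-Tag "noindex, nofollow"

As a meta tag in a page's <head>:
<meta name="robots" content="noindex, nofollow">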
The other Q&A post you mentioned is definitely an exception to that rule, but that issue was resolved in August 2014 and has not occurred again.
I hope that clears things up a bit. We are always happy to look into the specific issue causing a crawl error on a site, so I do agree that contacting the Help Team for these types of issues is often a good idea.
Thanks,
Chiaryn
-
Hi Stevie-G,
I just took a look at your campaign, and I am actually getting a 300 HTTP response for your robots.txt file both in the browser and in a cURL request from our crawler: http://www.screencast.com/t/UjiIU0MD
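If you want to reproduce that check yourself, a quick way (assuming you have curl available, and using the placeholder domain from your post in place of your real one) is a HEAD request for the file:

curl -sI http://www.website.com/robots.txt

The first line of the output is the status line, which shows the response code the server is returning.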
The only responses we can accept from the robots.txt file as allowing access to your site are 200 and 404 responses. (301s are also okay if the target URL resolves as a 200 or 404.) Any other HTTP response is treated as denying access to your site, so we aren't able to crawl the site because of the 300 response code we receive for the robots.txt file.
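If the 300 is coming from the rewrite rules in your .htaccess, one possible fix (a minimal sketch based on that assumption, so adjust it for your own setup) is to let robots.txt pass through before any redirect rules run:

RewriteEngine On
# Pass robots.txt through unchanged and stop processing further rules
RewriteRule ^robots\.txt$ - [L]

Placed above the redirect rules, this serves the file directly so it can return a plain 200.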
I hope this helps!
Chiaryn
Help Team Sensei
-
This has happened before, but in that case the site seemed to be blocking Roger:
I'm not sure if that's the current issue, but if your actual /robots.txt file isn't blocking rogerbot, I can't imagine why you'd pull a 605 short of a technical issue on Moz's end. You may want to contact support and direct them here to see if it's a similar issue.
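For reference, a robots.txt rule that blocks Moz's crawler specifically would look like this:

User-agent: rogerbot
Disallow: /

If you see that (or a blanket User-agent: * with Disallow: /), that would explain the 605.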