How can I make Google Webmaster Tools see the robots.txt file when I am doing an .htaccess redirect?
-
We are moving a site to a new domain. I have set up an .htaccess file and it is working fine. My problem is that Google Webmaster Tools now says it cannot access the robots.txt file on the old site. How can I make it still see the robots.txt file while the .htaccess is doing a full-site redirect?
.htaccess currently has:
Options +FollowSymLinks -MultiViews
RewriteEngine on
RewriteCond %{HTTP_HOST} ^(www\.)?michaelswilderhr\.com$ [NC]
RewriteRule ^ http://www.s2esolutions.com/ [R=301,L]
Google Webmaster Tools is reporting:
Over the last 24 hours, Googlebot encountered 1 errors while attempting to access your robots.txt. To ensure that we didn't crawl any pages listed in that file, we postponed our crawl. Your site's overall robots.txt error rate is 100.0%.
-
Possible solutions for your problem:
.htaccess authentication blocking robots.txt
301 redirect. How to make an exception for the robots.txt
http://forum.cs-cart.com/topic/23747-301-redirect-how-to-make-an-exception-for-the-robotstxt/
1. Canonical robots.txt
http://digwp.com/2011/03/htaccess-wordpress-seo-security/
General .htaccess tutorials: http://httpd.apache.org/docs/2.0/howto/htaccess.html and http://httpd.apache.org/docs/2.0/misc/rewriteguide.html
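The "canonical robots.txt" idea from the digwp article can be sketched roughly like this (untested, illustrative rules; adjust to your setup): any request for a robots.txt that isn't at the site root gets redirected to the one canonical copy:

```apache
# Redirect e.g. /blog/robots.txt to the canonical /robots.txt
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{REQUEST_URI} !^/robots\.txt$
RewriteRule robots\.txt$ /robots.txt [R=301,L]
</IfModule>
```

This is a separate concern from the redirect exception below, but it uses the same pattern: a RewriteCond that carves robots.txt out of the general rule.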
-
Thank you, that seems to be working.
-
You could add an exception to the .htaccess to allow robots.txt to be loaded. You would do this by adding another condition. I'd use something like:
<code>
Options +FollowSymLinks -MultiViews
RewriteEngine on
RewriteCond %{REQUEST_URI} !^/robots\.txt$
RewriteCond %{HTTP_HOST} ^(www\.)?michaelswilderhr\.com$ [NC]
RewriteRule ^ http://www.s2esolutions.com/ [R=301,L]
</code>
Disclaimer: I am lucky enough to have people at work who check these things. This hasn't been checked! Use at your own discretion.
However, I'll admit that I've never used this. I just stick the 301 in and it all seems to work out fine. I've probably done it on hundreds of domains over the years.
Related Questions
-
Can an .htaccess file affect page load times?
We have a large and old site. As we've transitioned from one CMS to another, there's been a need to create 301 redirects using our .htaccess file. I'm not a technical SEO person, but I'm concerned that the size of our .htaccess file might be a contributing factor in long page download times. Can large .htaccess files cause slow page load times? Or is the coding of the 301 redirects a cause of slow page downloads? Thanks
Technical SEO | | ahw1 -
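On the performance question: .htaccess files are read and re-evaluated on every request, so hundreds of individual 301 rules do add a small per-request cost. A common mitigation (a sketch, assuming you have access to the main server config; RewriteMap is not allowed inside .htaccess itself) is to move one-to-one redirects into a lookup map:

```apache
# In httpd.conf or a vhost, not .htaccess.
# redirects.map contains lines like:  /old-page  /new-page
RewriteEngine on
RewriteMap redirects "txt:/etc/apache2/redirects.map"
RewriteCond ${redirects:%{REQUEST_URI}} !=""
RewriteRule ^ ${redirects:%{REQUEST_URI}} [R=301,L]
```

Apache then does a single cached file lookup per request instead of testing every rule in turn.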
Changes to 'links to your site' in WebMaster Tools?
We're writing more out of curiosity... Clicking on "Download latest links" within 'Links to your site' in Google's WebMaster Tools would usually bring back links discovered recently. However, the last few times (for numerous accounts) it has brought back a lot of legacy links - some from 2011 - and includes nothing recent. We would usually expect to see a dozen at least each month. ...Has anyone else noticed this? Or, do you have any advice? Thanks in advance, Ant!
Technical SEO | | AbsoluteDesign0 -
Google webmaster errors
If you know what these Google Webmaster Tools errors mean, can explain them to me in simple English, and can tell me how to locate the problem, I would really appreciate it. The error types are: Server error, Soft 404, Access denied, Not found, Not followed. I have many of these errors; is it harming SEO? Yoseph
Technical SEO | | Joseph-Green-SEO0 -
Restricted by robots.txt: does this cause problems?
I have restricted around 1,500 links, which are links to retailers' websites and affiliate links, according to Webmaster Tools. Is this the right approach, as I thought it would affect the link juice? Or should I take the nofollow out of the links restricted by robots.txt?
Technical SEO | | ocelot0 -
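For context, robots.txt and nofollow do different jobs: a Disallow stops crawling, while nofollow stops link equity flowing. A typical pattern (hypothetical path; /go/ stands in for wherever the affiliate redirects live) looks like:

```
# robots.txt - keep crawlers out of the affiliate-redirect directory
User-agent: *
Disallow: /go/
```

Blocking the directory in robots.txt and using nofollow on the links are complementary rather than interchangeable.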
How a Googlebot sees your site
So I have stumbled across various websites like this: http://www.smart-it-consulting.com/internet/google/googlebot-spoofer/ The concept here is to be able to view your site as a Googlebot sees it. However, the results are a little puzzling. Google is reading the text on my page but not the title tags, according to the results. Are websites like this accurate, or does Google not read title tags and H1 tags anymore? Also, on a slightly related note, I noticed the results show the navigation bar is being read first by Google. Is this bad, and should the navigation bar be optimized for keywords as well? If it did, it would read a bit funny and the "humans" would be confused.
Technical SEO | | StreetwiseReports0 -
Client accidently blocked entire site with robots.txt for a week
Our client was having a design firm do some website development work for them. The work was done on a staging server that was blocked with a robots.txt to prevent duplicate content issues. Unfortunately, when the design firm made the changes live, they also moved over the robots.txt file, which blocked the good, live site from search for a full week. We saw the error (!) as soon as the latest crawl report came in. The error has been corrected, but... Does anyone have any experience with a snafu like this? Any idea how long it will take for the damage to be reversed and the site to get back in the good graces of the search engines? Are there any steps we should take in the meantime that would help to rectify the situation more quickly? Thanks for all of your help.
Technical SEO | | pixelpointpress0 -
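A common safeguard for this exact snafu is to keep the two robots.txt variants clearly distinct so the wrong one is easy to spot (illustrative files, not the poster's actual setup):

```
# staging robots.txt - block all crawlers
User-agent: *
Disallow: /

# production robots.txt - allow everything
User-agent: *
Disallow:
```

Some teams also exclude robots.txt from the deployment sync entirely so a staging copy can never overwrite the live one.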
Magento - Google Webmaster Crawl Errors
Hi guys, Started my free trial - very impressed - just thought I'd ask a question or two while I can. I've set up the website for http://www.worldofbooks.com (a large bookseller in the UK), using Magento. I'm getting a huge number of 'not found' crawl errors (27,808). I think this is due to URL rewrites; all the errors are in this format (non-search-friendly): http://www.worldofbooks.com/search_inventory.php?search_text=&category=&tag=Ure&gift_code=&dd_sort_by=price_desc&dd_records_per_page=40&dd_page_number=1 As opposed to this format: http://www.worldofbooks.com/arts-books/history-of-art-design-styles/the-art-book-by-phaidon.html (the rewritten URL). This doesn't seem to really be affecting our rankings; we targeted 'cheap books' and 'bargain books' heavily, and we're up to 2nd for Cheap Books and 3rd for Bargain Books. So my question is: are these large numbers of crawl errors cause for concern, or is it something that will work itself out? And secondly, if it is cause for concern, will it be affecting our rankings negatively in any way, and what could we do to resolve this issue? Any points in the right direction much appreciated. If you need any more clarification regarding any points I've raised, just let me know. Benjamin Edwards
Technical SEO | | Benj250 -
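One way to stop crawlers from piling up errors on the old, unrewritten URLs (a sketch, assuming the script name above is the only entry point for them) is a robots.txt exclusion:

```
# robots.txt - keep crawlers off the raw search script
User-agent: *
Disallow: /search_inventory.php
```

Note this only stops the crawl; if the old URLs have inbound links, 301-redirecting them to the rewritten versions preserves more value.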
Is robots.txt a must-have for 150 page well-structured site?
By looking in my logs I see dozens of 404 errors each day from different bots trying to load robots.txt. I have a small site (150 pages) with clean navigation that allows the bots to index the whole site (which they are doing). There are no secret areas I don't want the bots to find (the secret areas are behind a Login so the bots won't see them). I have used rel=nofollow for internal links that point to my Login page. Is there any reason to include a generic robots.txt file that contains "user-agent: *"? I have a minor reason: to stop getting 404 errors and clean up my error logs so I can find other issues that may exist. But I'm wondering if not having a robots.txt file is the same as some default blank file (or 1-line file giving all bots all access)?
Technical SEO | | scanlin0
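For the last point: an empty-but-valid robots.txt behaves the same as having no file at all, but it returns a 200 instead of a 404 and so cleans up the error logs. The minimal allow-all file is just:

```
User-agent: *
Disallow:
```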