Robots.txt file getting a 500 error - is this a problem?
-
Hello all!
While doing some routine health checks on a few of our client sites, I spotted that a new client of ours - whose website was not designed or built by us - is returning a 500 Internal Server Error when I try to view the robots.txt file.
As we don't host or maintain their site, I would have to go through their head office to get this changed, which isn't a problem, but I just wanted to check whether this error is actually having a negative effect on their site, and whether there's any benefit to getting it fixed?
Thanks in advance!
-
Hi Barry,
Thanks for your swift response on this. The pages certainly seem to be getting cached correctly, and when we initially took over the SEO and made wholesale changes to the site there were huge improvements, so it looks for all the world like the main pages, at least, are being crawled.
But I think you make a good point about getting it fixed anyway, so we can identify any problems that may be occurring now or could crop up later.
-
robots.txt isn't a requirement - indeed, it's only followed voluntarily by spiders (they can choose to ignore it) - so I think you'll be fine without it. The default is 'allow all' and 'follow, index', so they should still be crawling the site correctly.
Check in Webmaster Tools by fetching as Googlebot, or alternatively pick a page and put cache:pageurl.html into Google to see whether it has been cached correctly.
That said, returning a 500 instead of a 404 may be causing an issue that isn't immediately apparent - a 500 is too generic a message to say specifically what - but I would try to solve it as quickly as possible. The benefit will depend on what you put in your robots.txt file.
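For reference, once the server error is fixed, even a minimal 'allow all' robots.txt served with a 200 status removes any ambiguity for crawlers. A bare-bones sketch (add real Disallow rules or a Sitemap line only if the site actually needs them):

User-agent: *
Disallow:

An empty Disallow value blocks nothing, which matches the 'allow all' default described above.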
-
Related Questions
-
One robots.txt file for multiple sites?
I have two sites hosted with Bluehost and was told to put the robots.txt in the root folder and just use the one robots.txt file for both sites. Is this right? It seems wrong, as I want to block certain things on one site only. Thanks for the help, Rena
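For context, crawlers request robots.txt from the root of each hostname, so if the two sites resolve to separate document roots, each site normally gets its own file with its own rules. A hypothetical sketch (hostnames and paths are placeholders):

# robots.txt served from site one's document root
User-agent: *
Disallow: /private-section/

# robots.txt served from site two's document root
User-agent: *
Disallow:

If both domains point at the same document root, they will share whatever single robots.txt sits there.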
Technical SEO | | renalynd270 -
Will a robots.txt 'disallow' of a directory keep Google from seeing 301 redirects for pages/files within the directory?
Hi - I have a client that had thousands of dynamic PHP pages indexed by Google that shouldn't have been. He has since blocked these PHP pages via a robots.txt disallow. Unfortunately, many of those PHP pages were linked to by high quality sites multiple times (instead of the static URLs) before he put up the PHP 'disallow'. If we create 301 redirects for some of these PHP URLs that are still showing high value backlinks and send them to the correct static URLs, will Google even see these 301 redirects and pass link value to the proper static URLs? Or will the robots.txt keep Google away so we lose all these high quality backlinks? I guess the same question applies if we use the canonical tag instead of the 301. Will the robots.txt keep Google from seeing the canonical tags on the PHP pages? Thanks very much, V
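Worth noting the mechanics behind this question: a robots.txt disallow stops Googlebot from fetching the blocked URLs at all, so it never sees a 301 (an HTTP response to a request it isn't making) or a canonical tag (which lives in HTML it isn't downloading). The usual fix is to lift the disallow on the redirected paths and let the 301s pass the link value. A rough, hypothetical sketch with placeholder paths and domain:

# robots.txt - the old disallow on the PHP paths is removed so Googlebot can recrawl them
User-agent: *
# Disallow: /old-php-pages/   (removed)

# .htaccess - permanently redirect an old PHP URL to its static equivalent
Redirect 301 /old-php-pages/product.php http://www.example.com/products/widget/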
Technical SEO | | Voodak0 -
WordPress hAtom problem
Hi, in Webmaster Tools I receive the following warnings for hatom-feed / hatom-entry: "At least one field must be set for HatomEntry", "Missing required field 'entry-title'", "Missing required field 'updated'" and "Missing required hCard 'author'". I googled a few strategies for solving this, but is it really necessary for SEO purposes to edit the theme's core code to satisfy Google's warnings?
Technical SEO | | reisefm0 -
Identifying a 301-redirect problem?
I was looking at the Search Engine Optimization reports for one of my clients in Google Analytics, and I saw that their two biggest landing pages are www.website.com and http://website.com. Does this mean that Google is serving both the 'www' and 'non-www' versions of the website, and thus harming the website's overall ranking? Thanks for any input!
Technical SEO | | williammarlow0 -
Best use of robots.txt for "garbage" links from Joomla!
I recently started out on SEOmoz and am trying to do some cleanup according to the campaign report I received. One of my biggest gripes is the "Duplicate Page Content" issue: right now I have over 200 pages flagged with duplicate page content. This is triggered because SEOmoz has picked up auto-generated links from my site. My site has a "send to friend" feature, and every time someone wants to send an article or a product to a friend via email, a pop-up appears. It seems these pop-up pages have been picked up by the SEOmoz spider; however, they are pages I would never want indexed in Google, so I just want to get rid of them. Now to my question: I guess the best solution is a general rule in robots.txt so that these pages are not indexed or considered by Google at all. But how do I do this? What should my syntax be? A lot of the links look like this, but with different ID numbers according to the product being sent: http://mywebshop.dk/index.php?option=com_redshop&view=send_friend&pid=39&tmpl=component&Itemid=167 I guess I need a rule that catches the following and makes Google ignore links that contain it: view=send_friend
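One common approach - a sketch only, not tested against this particular Joomla! setup - is a wildcard disallow on that query parameter. The * wildcard isn't part of the original robots.txt standard, but Googlebot and Bingbot both understand it:

User-agent: *
# block any URL whose query string contains the send-to-friend component
Disallow: /*view=send_friend

Bear in mind robots.txt stops crawling rather than guaranteeing removal from the index; URLs that are already indexed may also need a noindex or a URL removal request to disappear.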
Technical SEO | | teleman0 -
Can't find mistake in robots.txt
Hi all, we recently filled in our robots.txt file to prevent some directories from being crawled. It looks like this: User-agent: * Disallow: /Views/ Disallow: /login/ Disallow: /routing/ Disallow: /Profiler/ Disallow: /LILLYPROFILER/ Disallow: /EventRweKompaktProfiler/ Disallow: /AccessIntProfiler/ Disallow: /KellyIntProfiler/ Disallow: /lilly/ Now, as Google Webmaster Tools hasn't updated our robots.txt yet, I checked it in some checkers. They tell me that the "User-agent: *" line contains an error, for example: "Line 1: Syntax error! Expected <field>: <value>" for the line "User-agent: *". I checked other robots.txt files written the same way and they work, according to the checkers... Where is the mistake???
Technical SEO | | accessKellyOCG0 -
Confirming robots.txt code for deep directories
Just want to make sure I understand exactly what I am doing. If I place this in my robots.txt: Disallow: /root/this/that By doing this I want to make sure that I am ONLY blocking the /that/ directory and anything below it. I want to make sure that /root/this/ still stays in the index; it's just the /that/ directory I want gone. Am I correct in understanding this?
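For reference, Disallow works as a simple path-prefix match on the URL, so a sketch of how that single line is interpreted (paths are the hypothetical ones from the question):

User-agent: *
Disallow: /root/this/that
# Blocked (prefix matches): /root/this/that, /root/this/that/, /root/this/that/page.html
# Still crawlable: /root/, /root/this/, /root/this/other-page.html

Because it is a prefix match, a URL such as /root/this/that-other-page would also be caught; adding a trailing slash (Disallow: /root/this/that/) limits the rule to the directory itself.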
Technical SEO | | cbielich0 -
How should I properly set up my .htaccess file?
I have searched Google for 'how to set up an .htaccess file' and it seems that every website has some variation. For example:

RewriteCond %{HTTP_HOST} ^yoursite.com
RewriteRule ^(.*)$ http://www.yoursite.com/$1 [R=permanent,L]

On SEOmoz someone posted this:

RewriteCond %{HTTP_HOST} !^www.yoursite.com [NC]
RewriteRule (.*) http://www.yoursite.com/$1 [L,R=301]

On yet another website, I found this:

RewriteEngine On
RewriteCond %{HTTP_HOST} !^your-site.com$ [NC]
RewriteRule ^(.*)$ http://your-site.com/$1 [L,R=301]

As you can see there are slight differences. Which one do I use? I'm on Apache on CentOS and I have HTML5 websites and several Joomla! websites. Would the .htaccess file be different for both?
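For what it's worth, those three snippets are all doing the same basic job: a permanent redirect to a single canonical hostname ([R=permanent] and [R=301] are equivalent). The first two force the www version, the third forces the non-www version, so the 'right' one depends on which hostname you want to rank. A minimal sketch of the www-forcing variant, with yoursite.com as a placeholder and assuming mod_rewrite is enabled:

RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.yoursite\.com$ [NC]
RewriteRule ^(.*)$ http://www.yoursite.com/$1 [L,R=301]

The hostname redirect itself is the same for a plain HTML site and a Joomla! site; Joomla!'s stock .htaccess already contains its own rewrite rules, so this block is usually placed near the top of that file.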
Technical SEO | | maxduveen0