Robots.txt file getting a 500 error - is this a problem?
-
Hello all!
While doing some routine health checks on a few of our client sites, I spotted that a new client of ours - whose website was not designed or built by us - is returning a 500 internal server error when I try to view the robots.txt file.
As we don't host or maintain their site, I would have to go through their head office to get this changed. That isn't a problem, but I just wanted to check whether this error is actually having a negative effect on their site, and whether there's a benefit to getting it fixed?
Thanks in advance!
-
Hi Barry,
Thanks for your swift response on this. The pages certainly seem to be getting cached correctly, and when we initially took over the SEO and made wholesale changes to the site, there were huge improvements, so it looks for all the world like the main pages at least are being looked at.
But I think you make a good point about getting it solved anyway so we can identify any problems that may be occurring / will occur later.
-
robots.txt isn't a requirement - it's only voluntarily followed by spiders (they can choose to ignore it) - so I think you'll be fine without it. The default is 'allow all' and 'follow, index', so they should still be crawling the site correctly.
Check in Webmaster Tools by fetching as Googlebot, or alternatively pick a page, put cache:pageurl.html into Google, and see whether it has been cached correctly.
That said, returning a 500 instead of a 404 may be causing an issue that isn't immediately apparent - a 500 is too generic a message to say specifically what - so I would try to solve it as quickly as possible. The benefits will depend on what you put in your robots.txt file.
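A quick way to confirm exactly what the robots.txt request is returning is to fetch it directly and look at the status code. A minimal sketch, assuming Python with the requests library and a placeholder domain:

import requests

# Placeholder URL - swap in the client's actual domain
url = "https://www.example.com/robots.txt"

response = requests.get(url, timeout=10)

# 200 means the file is being served, 404 simply means no file exists;
# a 500 means the server is erroring out on the request itself.
print("Status:", response.status_code)
if response.ok:
    print(response.text)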
Related Questions
-
Spike in server errors
Hi, we've recently changed shopping cart platforms. In doing so a lot of our URLs changed, but I 301'ed all of the significant landing pages (as determined by Google Analytics) prior to the switch. However, WMT is now warning me about a spike in server errors for all the pages that no longer exist. Google is only crawling them because they used to exist or are linked from pages that used to exist. Is this something I should worry about, or should I let it run its course?
Technical SEO | | absoauto0 -
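For reference, the kind of 301 mapping described above is often handled with per-URL redirect rules at the server level. A minimal sketch, assuming an Apache .htaccess file and placeholder old/new paths (the real URLs would come from the Analytics export mentioned in the question):

# Permanently redirect old cart URLs to their new equivalents (placeholder paths)
Redirect 301 /old-cart/product-123 https://www.example.com/products/product-123
Redirect 301 /old-cart/category/widgets https://www.example.com/collections/widgets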
Robots.txt Download vs Cache
We made an update to the robots.txt file this morning after the initial download of the robots.txt file. I then submitted the page through Fetch as Googlebot to get the changes in ASAP. The cache timestamp on the page now shows Sep 27, 2013 15:35:28 GMT. I believe that would put the cache timestamp at about 6 hours ago. However, the Blocked URLs tab in Google WMT shows the robots.txt last downloaded 14 hours ago - and therefore it's showing the old file. This leads me to believe that for robots.txt the cache date and the download time are independent. Is there any way to get Google to recognize the new file other than waiting this out?
Technical SEO | | Rich_A0 -
Is it a problem to have an image + link in your menu
Hi, My menu has an image with links to some of the main pages on the site and text underneath it explaining what the banner is. Will it be beneficial or harmful to have the text hyperlinked to the same pages the images go to?
Technical SEO | | theLotter0 -
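For illustration, a menu entry like the one described - an image link plus a text caption linked to the same page - might look something like this (placeholder URLs, file names, and copy):

<li>
  <a href="/services/"><img src="/images/services-banner.jpg" alt="Our services"></a>
  <a href="/services/">Browse our full range of services</a>
</li>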
Help! Getting 5XX error
I keep getting a 5XX error and my site is obviously losing ranking. I've asked the host, but nobody seems to know what is wrong. The site is www.monteverdetours.com. I know this is probably an obvious problem and easy to fix, but I don't know how to do it! Any comments will be greatly appreciated.
Technical SEO | | Llanero0 -
Error Reporting
http://pro.seomoz.org/campaigns/33868/issues/18
Rel Canonical - found about 16 hours ago. Tag value: http://www.geeks.com/ Description: Using rel=canonical suggests to search engines which URL should be seen as canonical.
We do have rel=canonical on some of the pages this report is recommending that we "fix" this issue.
Rel Canonical - found about 16 hours ago. Tag value: http://www.geeks.com/products.asp?cat=MBB Description: Using rel=canonical suggests to search engines which URL should be seen as canonical.
Technical SEO | | JustinGeeks0 -
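For context, the rel=canonical markup the report is flagging is simply a link element in the page head pointing at whichever URL should be treated as the canonical version - for example, for the category page quoted above:

<link rel="canonical" href="http://www.geeks.com/products.asp?cat=MBB">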
Robots.txt blocking site or not?
Here is the robots.txt from a client site. Am I reading this right -- that the robots.txt is saying to ignore the entire site, but the #'s are saying to ignore the robots.txt command?
# See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
# To ban all spiders from the entire site uncomment the next two lines:
# User-Agent: *
# Disallow: /
Technical SEO | | 540SEO0 -
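To make the distinction concrete: lines starting with # are comments and have no effect, so a file made up entirely of commented lines blocks nothing. If the last two lines were uncommented, the file would read as below and would ask all compliant crawlers to stay out of the entire site:

User-Agent: *
Disallow: /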
Code problem and the impact on links
We have a specific URL naming convention for 'city landing pages': .com/Burbank-CA, .com/Boston-MA, etc. We use this naming convention almost exclusively as the URLs for links. Our website had a code breakdown and all the URLs within that naming convention led to an error message on the website. Will this impact our links?
Technical SEO | | Storitz0 -
Restricted by robots.txt and soft bounce issues (related).
In our Webmaster Tools we have 35K(ish) URLs that are restricted by robots.txt, as well as 1,200(ish) soft 404s. We can't seem to figure out how to properly resolve these URLs so that they no longer show up this way. Our traffic from SEO has taken a major hit over the last 2 weeks because of this. Any help? Thanks, Libby
Technical SEO | | GristMarketing0
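One way to check which specific URLs the live robots.txt is actually blocking (rather than working backwards from the WMT counts) is Python's built-in robots.txt parser. A minimal sketch with placeholder domain and paths:

from urllib.robotparser import RobotFileParser

# Placeholder domain and paths - swap in the real site and a few affected URLs
rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()

for url in ["https://www.example.com/", "https://www.example.com/some-blocked-page"]:
    verdict = "allowed" if rp.can_fetch("Googlebot", url) else "disallowed"
    print(url, verdict)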