Robots.txt file returning a 500 error - is this a problem?
-
Hello all!
While doing some routine health checks on a few of our client sites, I spotted that a new client of ours - whose website was not designed or built by us - is returning a 500 internal server error when I try to view the robots.txt file.
As we don't host or maintain their site, I would have to go through their head office to get this changed. That isn't a problem, but I wanted to check whether this error is actually having a negative effect on their site, and whether there's any benefit to getting it fixed.
Thanks in advance!
-
Hi Barry,
Thanks for your swift response on this. The pages certainly seem to be getting cached correctly, and when we initially took over the SEO and made wholesale changes to the site, there were huge improvements, so it looks for all the world like the main pages, at least, are being crawled.
But I think you make a good point about getting it fixed anyway, so we can identify any problems that may be occurring now or could occur later.
-
robots.txt isn't a requirement - indeed, it's only voluntarily followed by spiders (they can choose to ignore it) - so I think you'll be fine without one. The default is 'allow all' and 'follow, index', so they should still be crawling the site correctly.
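For reference, that 'allow all' default is what you'd get from serving a robots.txt like this (a minimal sketch - an empty Disallow value means nothing is blocked):

User-agent: *
Disallow: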
Check in Webmaster Tools by fetching as Googlebot, or alternatively find a page, put cache:pageurl.html into Google, and see if it's been cached correctly.
That said, returning a 500 instead of a 404 may be causing an issue that isn't immediately apparent - a 500 is too generic a message to say specifically what - but I would try to solve it as quickly as possible. It's worth knowing that Google treats a missing robots.txt (a 404) as 'no restrictions', whereas a 5xx response can be read as the file being temporarily unavailable, which may cause Googlebot to hold off crawling the site altogether. Beyond that, the benefits will depend on what you put in your robots.txt file.
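A quick way to double-check exactly what status code the file is returning is curl from a terminal (a sketch - example.com stands in for the client's domain):

# -I requests the headers only; the first line shows the HTTP status code
curl -I http://www.example.com/robots.txt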
Related Questions
-
Robots.txt and Multiple Sitemaps
Hello, I have a hopefully simple question, but I wanted to ask to get a "second opinion" on what to do in this situation. I am working on a client's robots.txt and we have multiple sitemaps. Using Yoast, I have my sitemap_index.xml and I also have a sitemap-image.xml. I do submit them in Google and Bing by hand, but wanted to have them added to the robots.txt for insurance. So my question is: when having multiple sitemaps called out in a robots.txt file, does it matter if one is before the other? From my reading it looks like you can have multiple sitemaps called out, but I wasn't sure of the best practice when writing it up in the file. Example:

User-agent: *
Disallow:
Disallow: /cgi-bin/
Disallow: /wp-admin/
Disallow: /wp-content/plugins/

Sitemap: http://sitename.com/sitemap_index.xml
Sitemap: http://sitename.com/sitemap-image.xml

Thanks a ton for the feedback, I really appreciate it! :) J
Technical SEO | allstatetransmission
-
Are robots.txt wildcards still valid? If so, what is the proper syntax for setting this up?
I've got several URLs that I need to disallow in my robots.txt file. For example, I've got several documents that I don't want indexed, and filters that are getting flagged as duplicate content. Rather than typing in thousands of URLs, I was hoping that wildcards are still valid.
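For reference, Google and Bing do still honour the * (any sequence of characters) and $ (end of URL) wildcards. A sketch of the syntax, with hypothetical paths:

User-agent: *
# Block every URL ending in .pdf ($ anchors the match to the end of the URL)
Disallow: /*.pdf$
# Block every URL containing a filter query parameter
Disallow: /*?filter=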
Technical SEO | mkhGT
-
HTTP 500 Internal Server Error, Need help
Hi, For a few days now Google crawlers have been getting 500 errors from our dedicated server whenever they try to crawl the site. Using the "Fetch as Google" tool under Health in Webmaster Tools, I get "Unreachable page" every time I fetch the homepage. Here is exactly what the Google crawler is getting:

HTTP/1.1 500 Internal Server Error
Date: Fri, 21 Jun 2013 19:52:27 GMT
Server: Apache/2.2.15 (CentOS)
X-Powered-By: PHP/5.3.3
X-Pingback: http://www.communityadvocate.com/xmlrpc.php
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8

My url is http://www.communityadvocate.com and here's the screenshot from Google Webmaster Tools: http://screencast.com/t/FoWvqRRtmoEQ How can I fix that? Thank you
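The Server header above points to Apache on CentOS, so a sensible first step is checking the server's error log around the time of a failed fetch (a sketch - the log path is the CentOS default and varies by setup):

# Show the most recent Apache error log entries; adjust the path for your configuration
tail -n 50 /var/log/httpd/error_log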
Technical SEO | Vmezoz
-
How to fix this 404 error (4XX Client Error)
In my report this URL indicates a 404 error: http://www.thexxxhouse.com/what_sets_us_aparat.html The web page was removed from the server. How do I fix this in an SEO-friendly way?
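The usual SEO-friendly fix is a 301 redirect from the removed URL to the closest relevant live page. A minimal sketch for Apache's .htaccess, where the target page is hypothetical:

# Permanently redirect the removed page to its nearest replacement
Redirect 301 /what_sets_us_aparat.html /replacement-page.html

If no close replacement exists, letting the URL return a plain 404 (or 410) is also fine; search engines will eventually drop it from the index.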
Technical SEO | innofidelity
-
What can I do if Google Webmaster Tools doesn't recognize the robots.txt file?
I'm working on a recently hacked site for a client, and in trying to identify how exactly the hack is running I need to use the Fetch as Googlebot feature in GWT. I'd love to use this, but it thinks the robots.txt is blocking its access - yet the only thing in the robots.txt file is a link to the sitemap. Under the Blocked URLs section of GWT it shows that the robots.txt was last downloaded yesterday, but that's incorrect information. Is there a way to force Google to look again?
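One thing worth trying in the meantime is making the file explicitly permissive, so there's nothing for Google to misread (a sketch - the sitemap URL is hypothetical):

User-agent: *
Disallow:

Sitemap: http://www.example.com/sitemap.xml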
Technical SEO | DotCar
-
Robots.txt for subdomain
Hi there Mozzers! I have a subdomain with duplicate content and I'd like to remove these pages from the mighty Google index. The problem is: the website is built in Drupal and this subdomain does not have its own robots.txt. So I want to ask you how to disallow and noindex this subdomain. Is it possible to add this to the root robots.txt:

User-agent: *
Disallow: /subdomain.root.nl/

User-agent: Googlebot
Noindex: /subdomain.root.nl/

Thank you in advance! Partouter
Technical SEO | Partouter
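Worth noting: crawlers resolve robots.txt per host, so directives in the root domain's file won't apply to the subdomain - Google will only look for http://subdomain.root.nl/robots.txt. A sketch of a file served at that subdomain address that blocks all crawling:

User-agent: *
Disallow: /

Bear in mind that a Disallow alone won't remove pages already in the index; a noindex robots meta tag or X-Robots-Tag header on the subdomain's pages is the more reliable removal route.
-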
Problems with Google cache
Hi, Can you please advise whether the following website is corrupted in the eyes of Google? It has been written in Umbraco and I have taken it over from another developer, and I am confused as to why it is behaving the way it is. cache:www.tangoholidaysolutions.com When I run this, all I see is the header, the start of the main content and then the footer. If I switch to the text-only view, all the content is visible. The second issue I have with this site is as follows: Main page: http://www.tangoholidaysolutions.com/holiday-lettings-spain/ This page is made up of widgets, i.e. locations, featured villas, content. However, the widgets are their own web pages in their own right: http://www.tangoholidaysolutions.com/holiday-lettings-spain/location-picker/ My concern is that these part-pages will affect the performance of the SEO on the site. In an ideal world I would have the CMS set up so these widgets are not classed as pages, but I am working on this. Thanks Andy
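Until the CMS can stop exposing the widget partials as standalone pages, a common stopgap is marking those URLs noindex - a sketch, either in the page head:

<meta name="robots" content="noindex" />

or as an HTTP response header:

X-Robots-Tag: noindex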
Technical SEO | iprosoftware
-
REL Canonical Error
In my crawl diagnostics it's showing a rel=canonical notice on almost every page. I'm using WordPress. Is there a default WordPress behaviour that would cause this?
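For context, WordPress has added a self-referencing canonical tag to single posts and pages by default since version 2.9, and many crawl tools simply report its presence rather than flag a genuine problem. In the page source it looks like this (hypothetical URL):

<link rel="canonical" href="http://example.com/sample-post/" />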
Technical SEO | mmaes