Crawl Diagnostics Report 500 Error
-
How can I find out what is causing my website to return 500 errors, and how do I locate and fix the problem?
-
500 errors can have a multitude of causes, and for non-technical users they can be very hard to track down and fix.
The first thing I would check is whether it's a recurring problem in Google Webmaster Tools or a one-time issue. These errors will show up in GWT for a long time, but if it's not a recurring problem, it's probably nothing you need to worry about.
Wait, I assumed you found the problems in GWT, but you may have found them using the SEOmoz crawl report instead. Either way, you should log into the Google Webmaster Tools Crawl Errors report and see whether Google is experiencing the same problems.
Sometimes 500 errors are caused by over-aggressive robots and/or improperly configured servers that can't handle the load. In this case, a simple Crawl-delay directive in your robots.txt file may do the trick. It would look something like this:
User-agent: *
Crawl-delay: 5
This would request that robots wait at least 5 seconds between page requests. But note, this doesn't necessarily solve the problem of why your server was returning 500s in the first place.
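If you want to sanity-check that the directive parses the way you expect before relying on it, Python's standard-library robots.txt parser can read it back. This is a minimal sketch, not part of the original answer; the two lines fed to the parser simply mirror the example above.

```python
# Minimal sketch: confirm a Crawl-delay directive parses as intended,
# using only the Python standard library (Python 3.6+ for crawl_delay()).
from urllib.robotparser import RobotFileParser

# The same two directives as the robots.txt example above.
robots_lines = [
    "User-agent: *",
    "Crawl-delay: 5",
]

parser = RobotFileParser()
parser.parse(robots_lines)

# crawl_delay() returns the delay (in seconds) that applies to the given
# user agent, or None if no Crawl-delay directive matches it.
print(parser.crawl_delay("*"))  # -> 5
```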
You may need to consult your hosting provider for advice. For example, Bluehost has this excellent article on dealing with 500 errors from their servers: https://my.bluehost.com/cgi/help/594
Hope this helps! Best of luck with your SEO.
-
Thank you, Corey, for your advice. I can see which links are affected in Google Webmaster Tools, but I can't reproduce the error and don't know the best way to fix it.
-
Thomas, thank you so much for your advice, and Keri, thanks for offering to help.
My problem is that I can't reproduce the 500 error, so the host can't help me figure out how to fix it.
Any help?
-
Hey Keri, how are you? Merry Christmas! I believe that 500 errors are almost always server-related errors, and unless he tells me about the host, or some other strange, unique problem with the computer's registry, I don't have enough to go on. It would be interesting to find out what it is. All the best, Tom
-
Hi Yoseph,
Did you get this figured out, or would you still like some assistance?
-
HTTP Error 500 is an Internal Server Error. It's a server-side error, which means there's a problem either with your web server or with the code it's trying to interpret. It may not happen in 100% of scenarios, so you may not always see it happening yourself, but it prevents the page from loading. Obviously, that's bad for search engines and users.
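Since the error may never show up in a normal browser session, one way to try to reproduce it is to request the flagged URL the way a crawler would. Here is a minimal sketch using Python's standard library; the URL is a placeholder, and the Googlebot User-Agent string is the one Google publishes for its crawler.

```python
# Minimal sketch: request a flagged URL with a browser-like and a
# crawler-like User-Agent and compare status codes, since some 500s only
# appear for bot traffic. The URL below is a placeholder.
import urllib.error
import urllib.request

URL = "https://www.example.com/flagged-page"  # swap in a URL from your report

USER_AGENTS = {
    "browser": "Mozilla/5.0",
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                 "+http://www.google.com/bot.html)",
}

for label, agent in USER_AGENTS.items():
    request = urllib.request.Request(URL, headers={"User-Agent": agent})
    try:
        with urllib.request.urlopen(request, timeout=30) as response:
            print(f"{label}: HTTP {response.status}")
    except urllib.error.HTTPError as error:
        # urlopen raises on 4xx/5xx responses; .code holds the status.
        print(f"{label}: HTTP {error.code}")
```

If the bot-style request consistently returns a 500 while the browser-style one doesn't, that points at user-agent-specific handling on the server.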
Your best bet for tracking down this error is to go through your web server's error logs. Or, if you can replicate it on the live site, you could enable error reporting and see what errors pop up there. That should tell you how to fix the issue, whatever it may be.
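To make the log-digging concrete, here is a minimal sketch that tallies which URLs your access log records as returning 500. The log path and the common/combined log format are assumptions; adjust both to match your server, or ask your host where the logs live.

```python
# Minimal sketch: count 500 responses per URL in an access log written in
# the common/combined log format. The path below is an assumption.
import re
from collections import Counter

LOG_PATH = "/var/log/apache2/access.log"  # adjust for your server/host

# Matches request/status pairs such as: "GET /some/page HTTP/1.1" 500
REQUEST_RE = re.compile(r'"[A-Z]+ (?P<path>\S+) [^"]*" (?P<status>\d{3}) ')

counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = REQUEST_RE.search(line)
        if match and match.group("status") == "500":
            counts[match.group("path")] += 1

# The URLs that error most often are usually the place to start digging.
for path, hits in counts.most_common(10):
    print(f"{hits:6d}  {path}")
```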
-
I have Googled it for you, and I definitely think you should contact your web host. Here's what comes up: https://my.bluehost.com/cgi/help/594
-
Go into the campaign section on SEOmoz and run your site through it. You will then see where the errors are. When you see the errors light up, click them, use the drop-down to select 500 errors, and you will see exactly which links are causing the error.
There is no way I can guess what is causing your website not to work correctly; however, a 500 error is a very serious one, most likely involving a problem with the server.
If you give me your domain I might be able to help more; however, if your site is just giving 500 errors, you might want to call your web host, as it sounds like it is not so much an SEO problem as a hosting issue.
-
Related Questions
-
GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2
The whole website moved to the https://www. HTTP/2 version 3 years ago. When we review log files, it is clear that, for the home page, GoogleBot continues to access only via the HTTP/1.1 protocol. The robots file is correct (simply allowing all and referring to the https://www. sitemap). The sitemap is referencing https://www. pages, including the homepage. The hosting provider has confirmed the server is correctly configured to support HTTP/2 and provided evidence of access via HTTP/2 working. 301 redirects are set up for the non-secure and non-www versions of the website, all to the https://www. version. We are not using a CDN or proxy. GSC reports the home page as correctly indexed (with the https://www. version canonicalised) but still shows the non-secure version of the website as the referring page in the Discovery section. GSC also reports the homepage as being crawled every day or so. We totally understand it can take time to update the index, but we are at a complete loss to understand why GoogleBot continues to go only through the HTTP/1.1 version, not HTTP/2. A possibly related issue, and of course what is causing concern, is that new pages of the site seem to index and perform well in the SERPs... except the home page. This never makes it to page 1 (other than for the brand name) despite rating multiples higher in terms of content, speed, etc. than other pages, which still get indexed in preference to the home page. Any thoughts, further tests, ideas, direction or anything will be much appreciated!
Technical SEO | | AKCAC1 -
Do YouTube videos in iFrames get crawled?
There seem to be quite a few articles out there that say iframes cause problems with organic search and that the various bots can't or won't crawl them. Most of the articles are a few years old (including Moz's video sitemap article). I'm wondering if this is still the case with YouTube/Vimeo/etc. videos, all of which only offer iframes as an embed option. I have a hard time believing that a Google property (YT) would offer an embed option that its own bot couldn't crawl. However, let me know if that is in fact the case. Thanks! Jim
Technical SEO | | DigitalAnarchy0 -
Google has deindexed 40% of my site because it's having problems crawling it
Hi, last week I got my fifth email saying 'Google can't access your site'. The first one I got in early November. Since then my site has gone from almost 80k pages indexed to fewer than 45k, and the number keeps dropping even though we post about 100 new articles daily (it's an online newspaper). The site I'm talking about is http://www.gazetaexpress.com/ We have to deal with DDoS attacks most of the time, so our server guy has implemented a firewall to protect the site from these attacks. We suspect that it's the firewall that is blocking Google's bots from crawling and indexing our site. But then things get more interesting: some parts of the site are being crawled regularly and others not at all. If the firewall were stopping Google's bots from crawling the site, why are some parts of the site being crawled with no problems while others aren't? In the screenshot attached to this post you will see how Google Webmaster Tools is reporting these errors. In this link, it says that if the 'Error' status happens again you should contact Google Webmaster support, because something is preventing Google from fetching the site. I used the feedback form in Google Webmaster Tools to report this error about two months ago but haven't heard from them. Did I use the wrong form to contact them? If so, how can I reach them and tell them about my problem? If you need more details, feel free to ask. I will appreciate any help. Thank you in advance.
Technical SEO | | Bajram.Kurtishaj1 -
During my last crawl, suddenly no errors or warnings were found except one: a 403 error on my homepage.
No changes were made, and all my old errors disappeared; I think something went wrong. Is it possible to start another crawl earlier than scheduled?
Technical SEO | | KnowHowww0 -
CDN Being Crawled and Indexed by Google
I'm doing an SEO site audit, and I've discovered that the site uses a Content Delivery Network (CDN) that's being crawled and indexed by Google. Two sub-domains from the CDN are being crawled and indexed. A small number of organic search visitors have come through these two sub-domains, so the CDN-based content is out-ranking the root domain in a small number of cases. It's a huge duplicate content issue (tens of thousands of URLs being crawled). What's the best way to prevent the crawling and indexing of a CDN like this? Exclude via robots.txt? Additionally, the use of relative canonical tags (instead of absolute) appears to be contributing to this problem as well. As I understand it, these canonical tags are telling the SEs that each sub-domain is the "home" of the content/URL. Thanks! Scott
Technical SEO | | Scott-Thomas0 -
Crawl issue
Hi, I have a problem with crawl stats. Crawls only return 3k pages while my site has 27k pages indexed (mostly duplicate-content pages). Why such a low number of pages crawled? Any help is more than welcome. Dario PS: I have more campaigns in place; might that be the reason?
Technical SEO | | Mrlocicero0 -
Internal Links not Crawled by Open Site Explorer
Can someone please tell me why www.hotelelgreco.gr has only 2 internal links in OSE, despite the fact that the text content has a plethora of them? Thanks in advance.
Technical SEO | | socrateskirtsios0 -
Why has just 1 page been crawled to date?
We have started SEO for our nestle-family.com/english/ site. However, to date only one page has been crawled. What are the reasons for the pages not being crawled?
Technical SEO | | Francis_GlobalMediaInsight0