Crawl Diagnostics Report 500 error
-
How can I find out what is causing my website to return 500 errors, and how do I locate and fix the problem?
-
500 errors can be caused by a multitude of reasons, and for non-technical folks they can be very hard to track down and fix.
The first thing I would look at is whether it's a recurring problem in Google Webmaster Tools or a one-time issue. These errors will show up in GWT for a long time, but if it's not a recurring problem it's probably nothing you need to worry about.
Wait, I assumed you found the problems in GWT, when you may actually have found them in the SEOmoz crawl report. Either way, you should probably log into the Crawl Errors report in Google Webmaster Tools and see whether Google is experiencing the same problems.
Sometimes 500 errors are caused by over-aggressive robots and/or improperly configured servers that can't handle the load. In this case, a simple crawl delay directive in your robots.txt file may do the trick. It would look something like this:
User-agent: *
Crawl-delay: 5
This would request that robots wait at least 5 seconds between page requests. Note, though, that this doesn't necessarily solve the underlying problem of why your server was returning 500s in the first place.
You may need to consult your hosting provider for advice. For example, Bluehost has this excellent article on dealing with 500 errors from their servers: https://my.bluehost.com/cgi/help/594
Hope this helps! Best of luck with your SEO.
-
Thank you Corey for your advice. I can see which links are affected in Google Webmaster Tools, but I can't reproduce the error and don't know the best way to fix it.
-
Thomas, thank you so much for your advice, and Keri, thanks for offering to help.
My problem is that I can't reproduce the 500 error, so the host can't help me figure out how to fix it.
Any help?
-
Hey Keri, how are you? Merry Christmas! I believe that 500 errors are almost always server-related errors, and unless he tells me about the host, or some other strange, unique problem with the computer's registry, I don't have enough to go on. It would be interesting to find out what it is. All the best, Tom
-
Hi Yoseph,
Did you get this figured out, or would you still like some assistance?
-
HTTP Error 500 is an Internal Server Error. It's a server-side error, which means there's either a problem with your web server or with the code it's trying to interpret. It may not happen in 100% of scenarios, so you may not always see it yourself, but when it does occur it prevents the page from loading. Obviously, that's bad for search engines and users.
Your best bet for tracking down this error is to go through your web server's error logs. Or, if you can replicate the error in a browser, you could enable error reporting and see what errors pop up there. That should tell you how to fix the issue, whatever it may be.
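If the error is intermittent and hard to catch by hand, one way to pin it down is to poll the affected URL for a while and record each response. Here's a minimal sketch in Python; the URL is a placeholder, so swap in one of the pages flagged in your crawl report:

import time
import urllib.error
import urllib.request

URL = "http://www.example.com/some-page"  # placeholder: use a URL flagged with a 500

for attempt in range(20):
    try:
        with urllib.request.urlopen(URL, timeout=15) as response:
            print(attempt, "status:", response.status)
    except urllib.error.HTTPError as error:
        # A server-side failure surfaces here, e.g. error.code == 500
        print(attempt, "HTTP error:", error.code)
    except urllib.error.URLError as error:
        print(attempt, "connection problem:", error.reason)
    time.sleep(5)  # pause between requests so the check itself doesn't hammer the server

Any 500s it catches give you a timestamp that you (or your host) can match against entries in the server's error log.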
-
I have Googled it for you, and I definitely think you should contact your web host. Here's what comes up: https://my.bluehost.com/cgi/help/594
-
Go into the campaign section on SEOmoz and run your site through it. You will then see where the errors are. When you see the errors lit up, click in, use the drop-down to select 500 errors, and then you will see exactly which links are causing the error.
There is no way I can guess what is causing your website not to work correctly; however, a 500 error is a very serious one, most likely involving a problem with the server.
If you give me your domain I might be able to help more. However, if your site is just giving 500 errors, you might want to call your web host, as it sounds like it is not so much an SEO problem as a hosting issue.
Related Questions
-
Uninstalled WordPress, now getting 200 errors with a 500 response code
Hello there, A little while ago I installed WordPress on the server I use with Bluehost to try out a theme. My business domain name is also the primary account on this server. WordPress was causing some serious issues on the server, so I uninstalled it, and now I have over 200 "500 response code" errors according to Webmaster Tools. I've included a screenshot of some of them. Could anyone advise me on what to do about this? Thanks so much!
Technical SEO | lulu710
-
How to stop crawls for product review pages? Volusion site
Hi guys, I have a new Volusion website. The template we are using has its own product review page for EVERY product I sell (1,500+). When a customer purchases a product, a week later they receive a link back to review the product. This link sends them to my site, but to its own individual page strictly for reviewing the product (as opposed to a page like Amazon's, where you review the product on the same page as the actual listing). This is creating countless "duplicate content" and missing "title" errors. What is the most effective way to block a bot from crawling all these pages? Via robots.txt? A meta tag? Here's the catch: I do not have access to every individual review page, so I think it will need to be blocked by a robots.txt file? What code will I need to implement? Do I need to do this on the admin side of the site? Do I also have to do something on the Google Analytics side to tell Google about the crawl block? Note: the individual URLs for these pages end with: *****.com/ReviewNew.asp?ProductCode=458VB Can I create a block for all URLs that end with /ReviewNew.asp etc.? Thanks! Pardon my ignorance. Learning slowly, loving the MOZ community 😃
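For what it's worth, robots.txt Disallow rules match the URL path by prefix, so if those review pages really do all sit at the root as /ReviewNew.asp?ProductCode=..., a single rule along these lines (a sketch, not Volusion-specific advice) would cover them, query strings included:

User-agent: *
Disallow: /ReviewNew.asp

Blocking the crawl this way doesn't remove pages already indexed or clear existing duplicate-content warnings; it only stops compliant bots from fetching those URLs going forward.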
Technical SEO | Jerrion0
-
Google not crawling the website from 22nd October
Hi, this is Suresh. I made changes to my website, and I see that Google has been unable to crawl it since 22nd October. It is also not showing any content when I use cache:www.vonexpy.com. Can anybody help me understand why Google is unable to crawl my website? Is there a technical issue with it? The website is www.vonexpy.com. Thanks in advance.
Technical SEO | sureshchowdary1
-
Lots of backlinks from Woorank reported by GWT
Hello, we just saw a lot of links (138) from the Woorank website reported in our "Links to your site" report in Google Webmaster Tools. Do you think we should consider submitting that website for disavowal in Google Webmaster Tools? Regards
Technical SEO | helpgoabroad0
-
Bug in Competitor Rankings Report?
I am looking at the report called "Rankings Report for [Competitor XXX]". For all keywords in the report, the rankings on the main page say "Not in Top 50"; however, when I drill down I can see that this is not true: there is a graph with valid rankings which were gathered as recently as March 20, 2013 (two days ago). Is this a known bug? Regards, Jim Donovan
Technical SEO | wethink0
-
Crawling and indexing content
If a page element (a div, for example) is initially hidden and shown only by a hover effect or JavaScript call, will Google crawl and index its content?
Technical SEO | Mont0
-
Issue with 'Crawl Errors' in Webmaster Tools
Have an issue with a large number of 'Not Found' webpages being listed in Webmaster Tools. In the 'Detected' column, the dates are recent (May 1st - 15th). However, clicking into the 'Linked From' column, all of the link sources are old, many from 2009-10. Furthermore, I have checked a large number of the source pages to double-check that the links don't still exist, and they don't, as I expected. Firstly, I am concerned that Google thinks there is a vast number of broken links on this site when in fact there is not. Secondly, if the errors do not actually exist (and never actually have), why do they remain listed in Webmaster Tools, which claims they were found again this month?! Thirdly, what's the best and quickest way of getting rid of these errors? Google advises that using the 'URL Removal Tool' will only remove the pages from the Google index, NOT from the crawl errors. The info is that if they keep returning 404s, they will automatically be removed. Well, I don't know how many times they need to get that 404 in order to get rid of a URL and link that haven't existed for 18-24 months?! Thanks.
Technical SEO | RiceMedia0
-
Trying to reduce pages crawled to within 10K limit via robots.txt
Our site has far too many pages for our 10K-page PRO account which are not SEO-worthy. In fact, only about 2,000 pages qualify for SEO value. Limitations of the store software only permit me to use robots.txt to sculpt the rogerbot site crawl. However, I am having trouble getting this to work. Our biggest problem is the 35K individual product pages and the related shopping cart links (at least another 35K); these aren't needed as they duplicate the SEO-worthy content in the product category pages. The signature of a product page is that it is contained within a folder ending in -p. So I made the following addition to robots.txt:
User-agent: rogerbot
Disallow: /-p/
However, the latest crawl results show the 10K limit is still being exceeded. I went to Crawl Diagnostics and clicked on Export Latest Crawl to CSV. To my dismay I saw the report was overflowing with product page links, e.g. www.aspenfasteners.com/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm. The value for the column "Search Engine blocked by robots.txt" = FALSE; does this mean blocked for all search engines? Then it's correct. If it means blocked for rogerbot, then it shouldn't even be in the report, as the report seems to only contain 10K pages. Any thoughts or hints on how to attain my goal would REALLY be appreciated; I've been trying for weeks now. Honestly, virtual beers for everyone! Carlo
Technical SEO | AspenFasteners
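One possible reason the rule above isn't working, offered as a hedged aside: Disallow values are matched as prefixes from the start of the path, so /-p/ only blocks URLs that literally begin with /-p/. For crawlers that support wildcard patterns (Googlebot does; whether rogerbot did at the time, I can't say), a pattern like the following would match folders ending in -p anywhere in the path:

User-agent: rogerbot
Disallow: /*-p/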