5xx Server Errors
-
Hey all,
Recently my site has seen a rise in 5xx errors, all of which have popped up since the first of the year.
Our company made a big announcement on 1/4 that drove a lot of traffic to the site, which I thought might explain the sudden errors. But the errors have continued since then. Also, the crawl from last week showed no errors at all (which I'm sure was a fluke).
Taking a closer look in GA, I can see that the pages receiving errors have not been visited for months.
Is this a random occurrence or should I consult someone to investigate our server?
Thanks,
Chris -
The crawl is showing that the errors are happening on very specific pages, so it may come down to something simpler, like what you're describing.
-
One client I recently worked with had CloudFlare installed, and it was causing tons of 500 errors. We removed it and they went away. Like I said, see if you can identify the specific pages it's happening on and isolate the code that is on those pages only. Good luck!
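If it helps, here's a rough sketch in Python that fetches a list of suspect URLs and flags anything coming back with a 5xx status. The URLs are made up - swap in the pages from your crawl report:

# Rough sketch: flag pages that respond with a 5xx status.
# The URL list is made up - replace it with the pages from your crawl report.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

urls = [
    "https://www.example.com/old-page-1/",
    "https://www.example.com/old-page-2/",
]

for url in urls:
    req = Request(url, headers={"User-Agent": "Mozilla/5.0 (error check)"})
    try:
        with urlopen(req, timeout=10) as resp:
            status = resp.status
    except HTTPError as e:
        status = e.code  # urlopen raises on 4xx/5xx; the status is on the exception
    except URLError as e:
        print(f"{url} -> connection problem: {e.reason}")
        continue
    flag = "  <-- server error" if 500 <= status < 600 else ""
    print(f"{url} -> {status}{flag}")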
-
Thanks for the input, guys.
Looks like the move to make right now is to double-check the code, then, if the problem persists, dig into server issues.
-
This could also be something as simple as bad code on some pages. Look at those old pages and see if there is a template or script that only they share. The fix might be as simple as isolating a script or disabling a plugin.
-
5xx errors are server errors, and they can be caused by serious issues with the way your server is configured. I would definitely investigate it further.
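If you or your host can pull the raw access logs, a quick tally of 5xx responses will also show which URLs are failing and how often. A minimal sketch - it assumes a combined-style Apache/Nginx log and a hypothetical file name, so adjust both for your setup:

# Minimal sketch: count 5xx responses per URL in an access log.
# Assumes a combined-style log where the status code follows the quoted
# request line - adjust the pattern for your own log format.
import re
from collections import Counter

LOG_PATH = "access.log"  # hypothetical path - point this at your real log
pattern = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[^"]*" (?P<status>\d{3})')

errors = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        m = pattern.search(line)
        if m and m.group("status").startswith("5"):
            errors[(m.group("path"), m.group("status"))] += 1

for (path, status), count in errors.most_common(20):
    print(f"{count:6d}  {status}  {path}")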
Related Questions
-
Error Code 804: HTTPS (SSL) Error Encountered
I'm seeing the following error in Moz, as below. I have not seen any errors when crawling with different tools - is this common, or is it something that needs to be looked at? So far I only have the info below. I'm assuming I would need to open a ticket with the hosting provider for this? Thanks!
Error Code 804: HTTPS (SSL) Error Encountered
Your page requires an SSL security certificate to load (using HTTPS), but the Moz Crawler encountered an error when trying to load the certificate. Our crawler is pretty standard, so it's likely that other browsers and crawlers may also encounter this error. If you have this error on your homepage, it prevents the Moz crawler (and some search engines) from crawling the rest of your site.
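One way to sanity-check this yourself before opening a ticket is to try a plain TLS handshake from outside and see whether the certificate loads and validates. A small sketch using Python's standard library - the hostname is a placeholder, so use your own:

# Small sketch: attempt a standard TLS handshake and print certificate details.
# The hostname is a placeholder - replace it with your own domain.
import socket
import ssl

hostname = "www.example.com"

context = ssl.create_default_context()  # uses the system's trusted CA bundle
try:
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("Handshake OK, protocol:", tls.version())
            print("Issued to:", dict(item[0] for item in cert.get("subject", ())))
            print("Expires:  ", cert.get("notAfter"))
except ssl.SSLError as e:
    print("TLS problem (likely what the crawler is hitting):", e)
except OSError as e:
    print("Could not connect:", e)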
-
Htaccess and robots.txt and 902 error
Hi, this is my first question in here and I truly hope someone will be able to help. It's quite a detailed problem and I'd love to be able to fix it through your kind help. It concerns htaccess files, robots.txt files, and a 902 error.

In October I created a WordPress website from what was previously a non-WordPress site that was quite dated. I built the new site on a subdomain I created on the existing site, so that the live site could remain live while I worked on the subdomain. The site I built on the subdomain is now live, but I am concerned about the existence of the old htaccess and robots.txt files, and I wonder if I should just delete the old ones so that only the new ones remain on the new site. I created new htaccess and robots.txt files on the new site and have left the old htaccess files there. Just to mention that all the old content files are still sitting on the server under a folder called 'old files', so I am assuming that these aren't affecting matters. I access the htaccess and robots.txt files by clicking on 'public html' via FTP.

I did a Moz crawl and was astonished to get a 902 network error saying that it wasn't possible to crawl the site, but then I was alerted by Moz later on to say that the report was ready. I see 641 crawl errors (449 medium priority | 192 high priority | zero low priority). Please see the attached image. Each of the errors seems to have status code 200; this seems to apply mainly to the images on each of the pages, e.g. domain.com/imagename.

The new website is built around the 907 Theme, which has some page sections on the home page and parallax sections on the home page and throughout the site. To my knowledge the content and the images on the pages are not duplicated, because I have made each page as unique and original as possible. The report says 190 pages have been duplicated, so I have no clue how this can be or how to approach fixing it.

Since October, when the new site was launched, approx 50% of incoming traffic has dropped off at the home page, and that is still the case, but the site still continues to get new traffic according to Google Analytics statistics. However, Bing, Yahoo and Google show a low level of indexing and exposure, which may be indicative of the search engines having difficulty crawling the site. In Google Analytics / Webmaster Tools, the screen text reports no crawl errors.

W3TC is a WordPress caching plugin which I installed just a few days ago to improve page speed, so I am not querying anything here about W3TC unless someone spots that this might be a problem; but like I said, there have been problems with traffic dropping off when visitors arrive on the home page. The Yoast SEO plugin is being used. I have included information about the htaccess and robots.txt files below. The pages on the subdomain are pointing to the live domain, as has been explained to me by the person who did the site migration.

I'd like the site to be free from pages and files that shouldn't be there, and I feel that the site needs a clean-up, as well as knowing whether the robots.txt and htaccess files that are included in the old site should actually be there or whether they should be deleted. OK, here goes with the information in the files. Site 1) refers to the current website. Site 2) refers to the subdomain. Site 3) refers to the folder that contains all the old files from the old non-WordPress file structure.
**************** 1) htaccess on the current site: *********************

# BEGIN W3TC Browser Cache
<IfModule mod_deflate.c>
<IfModule mod_headers.c>
Header append Vary User-Agent env=!dont-vary
</IfModule>
<IfModule mod_filter.c>
AddOutputFilterByType DEFLATE text/css text/x-component application/x-javascript application/javascript text/javascript text/x-js text/html text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon application/json
<IfModule mod_mime.c>
# DEFLATE by extension
AddOutputFilter DEFLATE js css htm html xml
</IfModule>
</IfModule>
</IfModule>
# END W3TC Browser Cache

# BEGIN W3TC CDN
<FilesMatch "\.(ttf|ttc|otf|eot|woff|font.css)$">
<IfModule mod_headers.c>
Header set Access-Control-Allow-Origin "*"
</IfModule>
</FilesMatch>
# END W3TC CDN

# BEGIN W3TC Page Cache core
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteRule .* - [E=W3TC_ENC:_gzip]
RewriteCond %{HTTP_COOKIE} w3tc_preview [NC]
RewriteRule .* - [E=W3TC_PREVIEW:_preview]
RewriteCond %{REQUEST_METHOD} !=POST
RewriteCond %{QUERY_STRING} =""
RewriteCond %{REQUEST_URI} /$
RewriteCond %{HTTP_COOKIE} !(comment_author|wp-postpass|w3tc_logged_out|wordpress_logged_in|wptouch_switch_toggle) [NC]
RewriteCond "%{DOCUMENT_ROOT}/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" -f
RewriteRule .* "/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" [L]
</IfModule>
# END W3TC Page Cache core

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

....(I have 7 301 redirects in place for old page URLs to link to new page URLs)....

# Force non-www:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www.domain.co.uk [NC]
RewriteRule ^(.*)$ http://domain.co.uk/$1 [L,R=301]

**************** 1) robots.txt on the current site: *********************

User-agent: *
Disallow:
Sitemap: http://domain.co.uk/sitemap_index.xml

**************** 2) htaccess in the subdomain folder: *********************

# Switch rewrite engine off in case this was installed under HostPay.
RewriteEngine Off
SetEnv DEFAULT_PHP_VERSION 53
DirectoryIndex index.cgi index.php

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /WPnewsiteDee/
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /subdomain/index.php [L]
</IfModule>
# END WordPress

**************** 2) robots.txt in the subdomain folder: *********************

(this robots.txt file is empty)

**************** 3) htaccess in the Old Site folder: *********************

Deny from all

**************** 3) robots.txt in the Old Site folder: *********************

User-agent: *
Disallow: /

I have tried to be thorough, so please excuse the length of my message here. I really hope one of you great people in the Moz community can help me with a solution. I have SEO knowledge and I love SEO, but I have not come across this before and I really don't know where to start with this one. Best regards to you all, and thank you for reading this.

(Attached image: moz-site-crawl-report-image_zpsirfaelgm.jpg)
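A quick way to sanity-check the robots.txt side of this is to test whether specific URLs are actually blocked for a given crawler. A small sketch with Python's built-in parser - the domain is the placeholder from the post above, and the paths and user-agents are just examples:

# Quick check: is a given URL allowed for a given user-agent?
# The domain is the placeholder from the post above; the paths and
# user-agent strings are examples - substitute your own.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("http://domain.co.uk/robots.txt")
rp.read()  # fetch and parse the live robots.txt

for path in ["/", "/some-old-page/", "/wp-content/cache/page.html"]:
    url = "http://domain.co.uk" + path
    for agent in ["*", "rogerbot"]:
        verdict = "allowed" if rp.can_fetch(agent, url) else "BLOCKED"
        print(f"{agent:10s} {url} -> {verdict}")

-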
Since July 1, we've had a HUGE jump in errors on our weekly crawl. We don't think anything has changed on our website. Has MOZ changed something that would account for a large leap in duplicate content and duplicate title errors?
Our error report went from 1,900 to 18,000 in one swoop, starting right around the first of July. The errors are duplicate content and duplicate title, as if it does not see our 301 redirects. Any insights?
-
SEOmoz has only crawled 2 pages of my site. I've been notified of a 403 error and need an answer as to why my pages are not being crawled.
SEOmoz has only crawled 2 pages of my client's site. I have noticed the following: a 403 error message. Screaming Frog also cannot crawl the site, but IIS can. Due to the lack of crawling ability, I'm getting no feedback on my on-page optimization rankings or crawl diagnostics summary, so my competitive analysis and optimization are suffering. Does anybody have any idea what needs to be done to rectify this issue, as access to the coding or CMS platform is out of my hands? Thank you
-
Advice for 4000+ duplicate errors on 1st check
Hi, first-time use of the SEOmoz scan has thrown up a lot of duplicate errors. It looks like my site has a .com.au/ and a .com.au/default URL for the same pages. We had the domain on a hosted CMS solution and have now migrated to Magento. We duplicated the pages, but had to redirect all of the old URLs to the new Magento structure. This was done via a developer adding a 301 wildcard rule to the .htaccess. Would that many errors be normal for a first scan? Where should I look for someone to fix them? Thanks
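One thing you can verify yourself is whether the old URLs really answer with a single 301 pointing at the new Magento URLs. A rough sketch that shows the status code and Location header without following the redirect - the URLs are invented, so substitute a few of your old ones:

# Rough sketch: print the status code and Location header for old URLs
# without following the redirect. The URLs below are invented examples.
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # tell urllib not to follow the redirect

opener = urllib.request.build_opener(NoRedirect)

old_urls = [
    "http://www.example.com.au/default/some-page",
    "http://www.example.com.au/some-page",
]

for old_url in old_urls:
    try:
        resp = opener.open(old_url, timeout=10)
        print(old_url, "->", resp.status, "(no redirect)")
    except urllib.error.HTTPError as e:
        # 301/302 responses land here once we refuse to follow them
        print(old_url, "->", e.code, "Location:", e.headers.get("Location"))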
-
Campaign report errors
One of the most heavily noted errors for the first crawl of our domain was duplicate titles. I did not see a list of the pages and their current titles, but I am pretty sure it is somewhere to be found. That would help focus the work to do. Am I wrong about that?
-
Help with URL parameters in the SEOmoz crawl diagnostics Error report
The crawl diagnostics error report is showing tons of duplicate page titles for my pages that have filtering parameters. These parameters are blocked inside Google and Bing webmaster tools. How do I block them within the SEOmoz crawl diagnostics report?
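As far as I know the Moz crawler (rogerbot) obeys robots.txt, so one option is to disallow the filtered URLs there by pattern - assuming the crawler honours wildcard rules, which is worth confirming in Moz's documentation first. A hypothetical example (the parameter names are invented; substitute your real filter parameters and test carefully, since these rules affect every crawler you list them for):

User-agent: rogerbot
Disallow: /*?sort=
Disallow: /*&sort=
Disallow: /*?color=
Disallow: /*&color=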
-
Crawl Errors Confusing Me
The SEOmoz crawl tool is telling me that I have a slew of crawl errors on the blog of one domain. All are related to MSNbot, and they involve trackbacks (which we do want to block, right?) and attachments (it makes sense to block those, too). Any idea why these are crawl issues with MSNbot and not Google? My robots.txt is here: http://www.wevegotthekeys.com/robots.txt. Thanks, MJ