Thousands of 503 Errors
-
I was just checking Google Webmaster Tools for one of the first times (I know this should be a regular habit).
I noticed that on Feb 8th we had almost 80K errors of type 503. This is obviously very alarming because as far as I know our site was up and available that whole day. This makes me wonder if there is a firewall issue or something else that I'm not aware of.
Any ideas for the best way to determine what's causing this?
Thanks,
Chris
-
Cyrus,
Thanks for the props, and also cool on the crawl delay link. I wish I could say I knew about it before this answer, but I didn't; cool stuff for bigger, frequently updated sites.
Always appreciate what you have to say as I learn a lot from you.
Best
-
Hi Chris,
This is a really hard problem to diagnose from the outside, so I'll just give you my thoughts.
1. Are the URLs throwing the 503 errors real pages? Can they be accessed normally by human visitors through the site? I only mention this because sometimes software generates a bunch of random links that go nowhere, and weird stuff starts to happen when Google crawls those URLs. Normally, though, you'd see those show up as 404s.
2. Is the date in Google Webmaster Tools for the 503 errors recent? Sometimes they log those for a long time after the problem is actually solved, especially for URLs they don't visit much.
3. How often does your site go down?
4. Try performing a "Fetch as Googlebot" test on some of the affected URLs.
5. I doubt Googlebot is crashing your site, but you could always try a crawl delay.
6. If nothing else, you'll find the problem at the serving/hosting level. Can't be much help there, unfortunately.
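On point 5, a crawl delay is a one-line robots.txt directive, for example:

```text
User-agent: *
Crawl-delay: 10
```

One caveat: Googlebot ignores Crawl-delay, so for Google specifically you'd set the maximum crawl rate in Webmaster Tools under Site Settings instead; Bing and some other crawlers do honor the directive.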
-
It turns out that the Magento patch did NOT fix the problem. We are still receiving tens of thousands of 503 errors when Googlebot requests a page. The site is not down; I can look in the access_log and see that the requests are being answered with a 503.
Any ideas? This has to be killing our chances for organic traffic until it gets resolved.
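Since the 503s are visible in the access log, it helps to quantify them before and after any fix. Here's a rough sketch that tallies the status codes served to Googlebot, assuming the common Apache "combined" log format (the log path and pattern may need adjusting for your server):

```python
import re
from collections import Counter

# Matches the common Apache "combined" log format:
# host ident user [time] "request" status bytes "referer" "user-agent"
LOG_RE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?P<request>[^"]*)" (?P<status>\d{3}) \S+ '
    r'"[^"]*" "(?P<agent>[^"]*)"'
)

def count_googlebot_statuses(lines):
    """Tally HTTP status codes for requests whose user agent claims to be Googlebot."""
    counts = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m and "Googlebot" in m.group("agent"):
            counts[m.group("status")] += 1
    return counts

# Usage: count_googlebot_statuses(open("access_log"))  # adjust path to your log
```

If the 503 count for Googlebot stays high while human-agent requests show 200s, that points at user-agent-specific handling rather than real downtime.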
-
Hi Robert,
Thanks for the response. It turns out that this is due to a bug in our e-commerce platform, Magento, that results in Googlebot not being handled correctly. Apparently there's a patch being tested now.
Thanks,
Chris
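For context on how a platform bug can return 503s only to Googlebot while the site looks fine to humans: many platforms branch on the User-Agent header (for bot-specific caching, rate limiting, or session handling), so an error in that branch only ever fires for crawlers. A purely hypothetical illustration of the pattern, not Magento's actual code:

```python
def is_googlebot(user_agent: str) -> bool:
    # Googlebot announces itself with a User-Agent like:
    # "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    return "googlebot" in user_agent.lower()

def handle_request(user_agent: str) -> int:
    # Hypothetical buggy pattern: the crawler-specific code path fails and
    # surfaces as a 503, while ordinary visitors keep getting 200s -- so the
    # site appears "up" even as the access log fills with 503s for Googlebot.
    if is_googlebot(user_agent):
        return 503
    return 200
```

A quick way to test for this: request the same URL with curl twice, once with a normal browser User-Agent and once with Googlebot's. Different status codes for the two requests are strong evidence the problem is user-agent handling rather than actual downtime.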
-
Do you know where you are hosted? Have you called them to see whether the server is down or intermittently down?
Here is a how-to-resolve link.
Look at the bottom and follow the directions on using the Wayback Machine to see whether the downtime is temporary or the server is down for maintenance.
That said, if you give us a URL, it will be easier to assist you.
Best, let us know.