What's the best way to eliminate "429 : Received HTTP status 429" errors?
-
My company website is built on WordPress. It receives very few crawl errors, but it does regularly receive a few (typically 1-2 per crawl) "429 : Received HTTP status 429" errors through Moz.
Based on my research, my understanding is that my server is essentially telling Moz to cool it with the requests. That means it could be doing the same to search engines' bots and even visitors, right? This raises two questions for me, which I would greatly appreciate your help with:
-
Are "429 : Received HTTP status 429" errors harmful for my SEO? I imagine the answer is "yes" because Moz flags them as high priority issues in my crawl report.
-
What can I do to eliminate "429 : Received HTTP status 429" errors?
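For context, from what I've read, a 429 response means "Too Many Requests" and is the server's standard rate-limiting signal. An illustrative response looks something like this (the header values here are just examples):

HTTP/1.1 429 Too Many Requests
Retry-After: 60
Content-Type: text/html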
Any insight you can offer is greatly appreciated!
Thanks,
Ryan
-
I have a customer who is using GoDaddy website hosting (at least according to BuiltWith), and I'm experiencing this same issue.
Any updates on this experiment from user rsigg? I'd love to know if I can remove this from my customer's robots.txt file...
FWIW, Netrepid is a hosting provider for colocation, infrastructure, and applications (website hosting being considered an application), and we would never force a crawl delay on a WordPress install!
Not hating on the other hosting service providers... #justsayin
-
I am also on the same hosting, and they have not been able to help with the 429 errors. I have now started getting 429 errors when I attempt to log in. Definitely something wrong with WP premium hosting.
-
Interesting. I look forward to hearing your results, as my robots.txt file is also set to:
Crawl-delay: 1
-
We host on Media Temple's Premium WordPress hosting (which I do not recommend, but that's another post for another place), and the techs there told me that it could be an issue with the robots.txt file:
"The issue may be with the settings in the robots.txt file. It looks fine to me but the "Crawl-delay" line might be causing issues. I understand. For the most part, crawlers tend to use robots.txt to determine how to crawl your site, so you may want to see if Moz requires some special settings in there to work correctly."
Ours is set to:
Crawl-delay: 1
I haven't tried changing this value in our file yet, but I may experiment with it very soon. If I get results, I'll post back here as well as start a new forum thread.
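One variation I'm considering is scoping the delay per crawler instead of applying it globally, on the theory that slowing Moz's crawler (rogerbot) would keep it under our host's rate limit. A sketch with placeholder values, not a tested fix:

User-agent: rogerbot
Crawl-delay: 10

User-agent: *
Crawl-delay: 1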
-
Chase,
They ran a bunch of internal diagnostic tools on my site and were unable to replicate the 429 errors. They ended up telling me exactly what they told you. I haven't noticed any issues with my site's rankings, or any flags in Webmaster Tools, so it looks like they are right so far. I just hate logging into Moz and seeing all those crawl errors!
-
What'd they say, Ryan? Having the same issue, and I just contacted GoDaddy, who told me that basically Moz's software is pinging my client's server too frequently, so GoDaddy is temporarily blocking their IP. They said it's not a concern, though, as they would never block Google from pinging/indexing the site.
-
Many thanks - I will contact them now!
-
Contact your host and let them know about the errors. More than likely they have mod_security enabled to limit request rates. Ask them to raise the limit, and explain that you are getting 429 errors from crawlers and do not want them.
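You can also spot-check what your server returns to a crawler-style request yourself, before and after your host makes changes. A quick sketch with curl (www.example.com is a placeholder, and the user-agent string is simplified):

curl -I -A "rogerbot" https://www.example.com/
# HTTP/1.1 200 OK                  -> not being rate limited
# HTTP/1.1 429 Too Many Requests   -> the limiter is firing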
Related Questions
-
What is the best way to treat URLs ending in /?s=
Hi community, I'm going through the list of crawl errors visible in my Moz dashboard, and there are a few URLs ending in /?s=. How should I treat these URLs? Redirects? Thanks for any help.
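For what it's worth, /?s= is WordPress's built-in search parameter, and a common approach is to block crawling of internal search results rather than redirect them. A sketch, assuming the default WordPress search URL format:

User-agent: *
Disallow: /?s=
Disallow: /*?s=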
-
Do we need rel="prev" and rel="next" if we have a rel="canonical" for the first page of a series?
Despite having a canonical on page 1 of a series of paginated pages for different topics, Google is indexing several, sometimes many, pages in each topic. This shows up as duplicate page title issues in Moz and Screaming Frog. Ideally, Google would only index the first page in the series. Do we need to use rel="prev" etc. rather than a canonical on page 1? How can we make sure Google crawls but doesn't index the rest of the series?
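For reference, one way the page 1 markup can combine the two (placeholder URLs; a sketch of the pattern, not a guaranteed fix):

<link rel="canonical" href="http://www.example.com/topic/" />
<link rel="next" href="http://www.example.com/topic/page/2/" />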
-
I have 2 linking root domains on my URL, but I don't get the whole root domain thing, so I don't understand how I can improve it
I have 2 linking root domains on my URL, but I don't get the whole root domain thing, so I don't understand how I can improve it. I copied and pasted this from the Links page in my campaign because I can't seem to grasp what a root domain is: 'A higher number of good quality linking root domains improves a page's ranking potential'. Can someone explain to me what this is, as simply as possible? Here's my site: www.Thumannagency.com. Thanks in advance :)
-
SEOmoz showing crawl errors but Webmaster Tools says no errors, need help!
Hi, this is my first question and I couldn't find a similar question on here. Basically, I have a client's website that is showing 150 duplicate page title and content errors, plus others. SEOmoz's analysis is showing me, for example, 3 duplicate homepage URLs: 1. www.domain.com 2. domain.com 3. www.domain.com/index.html. All 3 are the same page. After explaining the errors to the guy who built the website, he assured me that the main URL is URL 1 and the other 2 are 301 redirects. However, the SEOmoz analysis results don't change, and Webmaster Tools doesn't seem to show any errors at all. Also, if I try all 3 URLs, there are no redirects to URL 1. Any help or clarity would be awesome! Thanks, e-bob
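For reference, a 301 setup that collapses all three onto URL 1 might look like this in .htaccess on Apache (a sketch using the domain.com placeholder from the question, not a drop-in fix):

RewriteEngine On
# send the bare domain to the www version
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
# send /index.html to the root
RewriteRule ^index\.html$ http://www.domain.com/ [R=301,L]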
-
Why can't I find uipl?
Hi guys, I'm working with the API at the moment. I have most of the information I want, but I cannot find the uipl in any of the returned data using several methods. The methods I have tried are: http://lsapi.seomoz.com/linkscape/links/www.google.co.uk?Scope=page_to_domain&AccessID=xxx&Expires=xxx&Signature=xxx http://lsapi.seomoz.com/linkscape/url-metrics/www.google.co.uk?AccessID=xxx&Expires=xxx&Signature=xxx I can't seem to work out how to get the uipl. Any ideas? Thanks in advance!
-
404 Page/Content Duplicates & its "Warning"
My website has MANY duplicate page and content issues, both derived from the MANY 404 pages on my website. While these are flagged in SEOmoz as "Warnings," should they be a concern for SEO effectiveness?
-
Why am I getting 400 client errors on pages that work?
Hi, I just ran the initial crawl on my domain and I seem to have 80 "400" client errors. However, when I visit the URLs, the pages load fine. Any ideas on why this is happening and how I can resolve the problem?
-
SEOmoz Crawl CSV in Excel: already split by semicolon. Is this Excel's fault or SEOmoz's?
If, for example, a page title contains a ë, the .csv created by the SEOmoz Crawl Test is already split into columns at that point, even though I haven't used Excel's Text to Columns yet. When I try to do the latter, Excel warns me that I'm overwriting non-empty cells, which of course is something I would rather not do, since that would make me lose valuable data. My question is: is this caused by opening the .csv in Excel, or earlier in the process, when the .csv is created?