429 Errors?
-
I have over 500,000 429 errors in webmaster tools. Do I need to be concerned about these errors?
-
I am getting the same 429 errors. The only change I have made is that I switched my GoDaddy account from Deluxe hosting to a Managed WordPress hosting account.
-
I highly doubt this error has anything to do with that. I would also recommend cross-checking those rankings with a third-party tool like Authority Labs, or you can look at your average position in Google Webmaster Tools. Moz runs rankings once a week, and sometimes it may happen to pick up a temporary fluctuation. So I'd confirm the ranking drop before deciding what to do next.
-
I have the same question. Also, these are the only errors on my site. Moz shows that the site's main keyword just dropped from #1 to #21 on Google. Does this error have anything to do with it?
-
Sounds like probably the same issue wcbuckner describes. If it's a problem in any way, I would contact GoDaddy about it and see what they have to say.
-
This error shows up only in Moz Analytics. Everything is okay everywhere except here.
-
I just got a lot of these on my Moz report, and I also host with GoDaddy. My question is: how do we know whether this happens when Google crawls our site? I am trying to get ranked, and I hope this is not stopping my site from doing so.
-
What exactly is happening, the same 429 errors? Does wcbuckner's response explain it for you?
-
I'm facing the same problem with GoDaddy. Can anybody say how to resolve it, please?
-
Having the same issue and just spoke with GoDaddy, who said it's not a concern. What's happening is that Moz's software is pinging the client's server too many times within a given time period, so GoDaddy's system temporarily blocks Moz's IP, which causes the error. They never, according to this rep, block Google's services that hit the server.
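To picture what the GoDaddy rep is describing, here's a rough sketch (in Python, with completely made-up limits) of per-IP rate limiting: the server counts recent requests from each client and answers 429 once a window limit is exceeded, which is why a fast tool crawl trips it while a gentler crawler never does.

```python
import time

# Hypothetical illustration only: the real limits and counting rules
# are GoDaddy's, and they haven't published them.
WINDOW_SECONDS = 60
MAX_REQUESTS = 5

hits = {}  # client id -> timestamps of recent requests

def handle_request(client, now=None):
    """Return an HTTP status code for one request from this client."""
    now = time.monotonic() if now is None else now
    # Keep only requests still inside the rate-limit window.
    recent = [t for t in hits.get(client, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_REQUESTS:
        hits[client] = recent
        return 429  # Too Many Requests: this client is rate limited
    recent.append(now)
    hits[client] = recent
    return 200

# A fast crawler trips the limit; a slow one's requests age out
# of the window before the count ever reaches the cap.
fast = [handle_request("fast-bot", now=i * 0.1) for i in range(8)]
slow = [handle_request("slow-bot", now=i * 30.0) for i in range(8)]
print(fast)  # [200, 200, 200, 200, 200, 429, 429, 429]
print(slow)  # [200, 200, 200, 200, 200, 200, 200, 200]
```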
-
Interesting. I hadn't come across a 429 error before either.
I crawled your site once with Screaming Frog at normal speed and got some 429 errors. Those pages are indexed and cached - so there does not seem to be a dire emergency.
I did a second crawl, slower, with Screaming Frog - and still got a few 429 errors but not nearly as many. Thing is though, even though pages are getting indexed and cached, some pages will throw the 429 error on some crawls, and then maybe not the next crawl. So it's enough to get through, but would be better to not have them.
From what I can tell, it seems this code is set at the server level - so perhaps you should contact your host to inquire about it. Are you on a normal hosting setup or are you going through something like WP Engine? The number of requests allowed needs to be increased. Or as Mike said, this could be an included API call that's causing it.
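If anyone wants to reproduce the slower-crawl approach in a script rather than in Screaming Frog, here's a minimal sketch: pause between requests, and back off further whenever the server answers 429. The fetch function and delay values are placeholders, not any real tool's behavior.

```python
import time

def crawl(urls, fetch, delay=2.0):
    """Fetch each URL with a pause between requests, doubling the
    pause and retrying once whenever the server answers 429."""
    results = {}
    for url in urls:
        status = fetch(url)
        if status == 429:
            delay *= 2           # back off: the server is rate limiting us
            time.sleep(delay)
            status = fetch(url)  # one retry after the longer pause
        results[url] = status
        time.sleep(delay)        # keep the crawl rate gentle
    return results

# Stubbed-out server: 429s the first request, then recovers.
responses = iter([429, 200, 200])
statuses = crawl(["http://example.com/a", "http://example.com/b"],
                 fetch=lambda url: next(responses), delay=0.01)
print(statuses)
```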
Hope that helps!
-Dan
-
I found this from the Internet Engineering Task Force (IETF):
"429 Too Many Requests
The 429 status code indicates that the user has sent too many requests in a given amount of time ("rate limiting").
The response representations SHOULD include details explaining the condition, and MAY include a Retry-After header indicating how long to wait before making a new request.
For example:
HTTP/1.1 429 Too Many Requests
Content-Type: text/html
Retry-After: 3600

<html>
<head><title>Too Many Requests</title></head>
<body>
<h1>Too Many Requests</h1>
<p>I only allow 50 requests per hour to this Web site per logged in user. Try again soon.</p>
</body>
</html>
Note that this specification does not define how the origin server identifies the user, nor how it counts requests. For example, an origin server that is limiting request rates can do so based upon counts of requests on a per-resource basis, across the entire server, or even among a set of servers. Likewise, it might identify the user by its authentication credentials, or a stateful cookie.
Responses with the 429 status code MUST NOT be stored by a cache."
From a quick read, it looks like this error is thrown when API requests are made too quickly... so... yeah?
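For what it's worth, a polite client would read that optional Retry-After header before re-requesting. A tiny sketch of just that decision logic (the 5-second fallback is my own made-up default, not anything from the spec):

```python
def retry_delay(status, headers, default=5.0):
    """Seconds to wait before retrying a request, given the response
    status and headers. Returns None when no retry is needed."""
    if status != 429:
        return None
    try:
        # The spec says Retry-After MAY be present; honor it if so.
        return float(headers.get("Retry-After", default))
    except ValueError:
        return default  # e.g. an HTTP-date value we choose not to parse

print(retry_delay(200, {}))                       # None: no retry needed
print(retry_delay(429, {"Retry-After": "3600"}))  # 3600.0
print(retry_delay(429, {}))                       # 5.0 fallback
```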
Sorry I can't be any more helpful.
Mike
-
Here is a screenshot of webmaster errors: http://prntscr.com/zc5p2.
-
Hmmm...
I have never heard of a 429 error. And that error isn't listed by Google Webmaster Tools or W3.org either.
If you mean a 409 error, that means Conflict, "The server encountered a conflict fulfilling the request. The server must include information about the conflict in the response. The server might return this code in response to a PUT request that conflicts with an earlier request, along with a list of differences between the requests."
If you do mean 429, can you provide a screenshot of it?
Thanks,
Mike