Site Crawl Status code 430
-
Hello,
In the site crawl report we have a few pages that are status 430 - but that's not a valid HTTP status code. What does this mean / refer to?
https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_errors
If I visit the URL from the report I get a 404 response code, so is this a bug in the Site Crawl report?
Thanks,
Ian.
-
Which (editing your robots.txt file), of course, you can't do in Shopify.
Maybe we should just collectively get on Shopify to implement this by default.
-
It's all in this help document:
https://moz.com/help/moz-procedures/crawlers/rogerbot
"Crawl Delay To Slow Down Rogerbot
We want to crawl your site as fast as we can, so we can complete a crawl in good time, without causing issues for your human visitors.
If you want to slow rogerbot down, you can use the Crawl Delay directive. The following directive would only allow rogerbot to access your site once every 10 seconds:
User-agent: rogerbot
Crawl-delay: 10"
So you'd put the specified rule in your robots.txt file.
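If you want to sanity-check that the directive is actually being picked up, here's a minimal sketch using Python's standard-library robots.txt parser. The domain is just a placeholder for your own site:

import time
from urllib.robotparser import RobotFileParser

# Placeholder domain -- substitute your own site.
parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()  # fetch and parse the live robots.txt

# crawl_delay() returns the delay in seconds for the given user agent,
# or None if no Crawl-delay directive applies to it.
print("Crawl-delay for rogerbot:", parser.crawl_delay("rogerbot"))

If that prints the value you set (10 in the example above), then rogerbot should be seeing the same thing.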
-
This is happening to a client of mine too. Is there a way to set my regular MOZ Pro account to crawl the site slower?
-
This is a common issue with Shopify hosted stores, see this post:
It seems to be related to crawling speed. If a bot crawls your site too fast, you'll get 430s.
It may also be related to the proposed 'additional' status code 430, documented here:
"430 Request Header Fields Too Large
This status code indicates that the server is unwilling to process the request because its header fields are too large. The request MAY be resubmitted after reducing the size of the request header fields."
I'd probably look at that Shopify thread and see if anything sounds familiar.
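If you want to test the rate-limiting theory yourself, here's a rough sketch in Python using the requests library. The store URLs and the 10-second pause are placeholders, not Shopify's documented limits:

import time
import requests

# Placeholder URLs -- swap in the pages flagged in your crawl report.
urls = [
    "https://www.example-store.com/products/sample-page",
    "https://www.example-store.com/collections/sample",
]

def fetch_codes(page_urls, delay):
    codes = []
    for url in page_urls * 5:  # repeat the list to simulate a fast crawl
        codes.append(requests.get(url, timeout=10).status_code)
        time.sleep(delay)
    return codes

print("Rapid requests:    ", fetch_codes(urls, delay=0))
print("Throttled requests:", fetch_codes(urls, delay=10))

If the rapid run starts coming back with 430s while the throttled run only returns 200s and 404s, rate limiting is the likely culprit, and the crawl-delay suggestion earlier in the thread would be the fix.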
-
@Angler - yeah thought the same - but why not log it as a 403 in the report? The site is hosted on Shopify, so we don't get access to logs, unfortunately.
Was wondering if it was related to rate limiting, as in a few cases it's a false positive and the page loads fine.
Have emailed Eli - thanks,
Best.
Ian.
-
Hey Ian,
Thanks for reaching out to us!
Would you be able to contact us at help@moz.com so that we can take a closer look at your Campaign?
Looking forward to hearing from you,
Eli