Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies.
Site Crawl Status code 430
-
Hello,
In the site crawl report we have a few pages that are status 430 - but that's not a valid HTTP status code. What does this mean / refer to?
https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_errors
If I visit the URL from the report I get a 404 response code - is this a bug in the site crawl report?
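For what it's worth, a quick way to compare what a browser-like client and a bot-like client get back for one of the reported URLs is something like this (placeholder URL, Python with the third-party requests library - just a rough check, not how Moz's crawler works):

import requests  # third-party: pip install requests

url = "https://www.example.com/reported-page"  # placeholder for a URL from the crawl report

for label, user_agent in [
    ("browser-like", "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"),
    ("crawler-like", "rogerbot"),
]:
    response = requests.get(url, headers={"User-Agent": user_agent}, allow_redirects=False)
    print(label, response.status_code)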
Thanks,
Ian.
-
Which, of course, you can't do in Shopify.
Maybe we should just collectively get on Shopify to implement this by default.
-
It's all in this help document:
https://moz.com/help/moz-procedures/crawlers/rogerbot
"Crawl Delay To Slow Down Rogerbot
We want to crawl your site as fast as we can, so we can complete a crawl in good time, without causing issues for your human visitors.
If you want to slow rogerbot down, you can use the Crawl Delay directive. The following directive would only allow rogerbot to access your site once every 10 seconds:
User-agent: rogerbot
Crawl-delay: 10"
So you'd put the specified rule in your robots.txt file.
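If you want to sanity-check what rogerbot would actually read from your robots.txt, here's a small sketch using Python's standard-library robotparser (swap in your own robots.txt URL):

from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://www.example.com/robots.txt")  # replace with your site's robots.txt
robots.read()

# crawl_delay() returns the Crawl-delay value that applies to the given user agent,
# or None if no delay is set for it; can_fetch() checks whether a URL is allowed.
print(robots.crawl_delay("rogerbot"))
print(robots.can_fetch("rogerbot", "https://www.example.com/some-page"))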
-
This is happening to a client of mine too. Is there a way to set my regular Moz Pro account to crawl the site slower?
-
This is a common issue with Shopify-hosted stores - see this post:
It seems to be related to crawling speed. If a bot crawls your site too fast, you'll get 430s.
It may also be related to the proposed 'additional' status code 430, documented here:
"430 Request Header Fields Too Large
This status code indicates that the server is unwilling to process the request because its header fields are too large. The request MAY be resubmitted after reducing the size of the request header fields."
I'd probably look at that Shopify thread and see if anything sounds familiar.
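If you want to test the rate-limiting theory yourself, a rough sketch along these lines (placeholder URL, Python with the requests library) fires a quick burst of requests and then repeats them slowly - on a rate-limited host the burst may start coming back as 430 or 429 while the slow pass stays 200:

import time
import requests  # third-party: pip install requests

url = "https://www.example.com/collections/all"  # placeholder - use a page from your crawl report
headers = {"User-Agent": "rogerbot"}

# Burst: no delay between requests.
for i in range(20):
    print("burst", i, requests.get(url, headers=headers).status_code)

# Slow pass: wait between requests, similar to a Crawl-delay of 10 seconds.
for i in range(5):
    print("slow", i, requests.get(url, headers=headers).status_code)
    time.sleep(10)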
-
@Angler - yeah, thought the same - but why not log it as a 403 in the report? The site is hosted on Shopify, so we don't get access to the logs, unfortunately.
Was wondering if it was related to rate limiting, as in a few cases it's a false positive and the page loads fine.
Have emailed Eli - thanks,
Best.
Ian.
-
Hey Ian,
Thanks for reaching out to us!
Would you be able to contact us at help@moz.com so that we can take a closer look at your Campaign?
Looking forward to hearing from you,
Eli