Crawler triggering Spam Throttle and creating 4xx errors
-
Hey Folks,
We have a client with an experience I want to ask about.
The Moz crawler is reporting 4xx errors, which are happening because the crawler is triggering my client's spam throttling. They could raise the limit from 240 to 480 page loads per minute, but that could open the door to spam as well.
Any thoughts on how to proceed?
Thanks! Kirk
-
Thank you Dave!
-
Hey Kirk! We built our crawler to obey robots.txt crawl-delay directives. In the future, if this is ever an issue, you can use the crawl delay to slow Rogerbot down to a more reasonable speed. However, we don't recommend adding a crawl delay larger than 10 or Rogerbot might not be able to finish the crawl of your site.
Just add a crawl-delay directive to your robots.txt file like this:

User-agent: rogerbot
Crawl-delay: 10

Here's a good article that explains more about this technique: https://moz.com/learn/seo/robotstxt. I hope this helps; feel free to reach out if you have any other questions!
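If you want to confirm the directive is written in a way crawlers can pick up, Python's standard-library robots.txt parser can read the same snippet and report the delay for a given user agent. This is just a sanity-check sketch; Rogerbot's own parser may behave differently:

```python
from urllib import robotparser

# The robots.txt snippet from the answer above, as a crawler would fetch it
robots_txt = """\
User-agent: rogerbot
Crawl-delay: 10
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# crawl_delay() returns the delay (in seconds) declared for that agent
print(rp.crawl_delay("rogerbot"))  # → 10
```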
Related Questions
-
804 : HTTPS (SSL) Error in Crawl Test
So I am getting this 804 error, but I have checked our security certificate and it looks to be just fine; in fact, we have another 156 days before renewal. We did have some issues with this a couple of months ago, but they have been fixed. Now, there is a 301 from http to https, and I did not start the crawl on https, so I am curious whether that is the issue. Has anybody else seen this, and were you able to remedy it? Thanks,
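One way to double-check what the certificate actually reports, independent of any crawler, is to read it from a client's point of view. A minimal Python sketch (the host name would be your own domain; the `notAfter` date format shown is the one `getpeercert()` returns):

```python
import socket
import ssl
from datetime import datetime

def days_until_expiry(not_after: str) -> int:
    # notAfter format returned by getpeercert(), e.g. "Jun  1 12:00:00 2026 GMT"
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expires - datetime.utcnow()).days

def check_cert(host: str, port: int = 443) -> int:
    # Open a TLS connection and inspect the certificate the server presents
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"])

# check_cert("yourdomain.example") would return roughly the days-to-renewal figure
```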
Moz Bar | DRSearchEngOpt (Chris Birkholm)
-
What is Considered Duplicate Content by Crawlers?
I am asking because I have a couple of site audit tools that I use to crawl a site I work on every week, and they are showing duplicate content issues (which I know this site has a lot of), but some of what is flagged as duplicate content makes no sense. For example, the following URLs were grouped together as duplicate content:

https://www.firefold.com/contact-us
https://www.firefold.com/gabe
https://www.firefold.com/sale

How are these pages duplicate content? I am confused about what site audit tools consider duplicate content. Just FYI, this is data from Moz crawl diagnostics, but the SEMrush site auditor is giving me the same type of data. Any help would be greatly appreciated. Ryan
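Audit tools don't publish their exact matching logic, but many near-duplicate detectors work on something like word shingles plus Jaccard similarity, which is why two thin pages that share mostly template text can score as "duplicates" even when they look unrelated. A rough sketch of the idea (illustrative only, not Moz's or SEMrush's actual algorithm):

```python
def shingles(text: str, k: int = 3) -> set:
    """Break text into overlapping k-word chunks."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a: str, b: str) -> float:
    """Similarity between two texts: shared shingles / total shingles."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Two thin pages that share mostly boilerplate look fairly similar
page1 = "free shipping on orders over 50 contact us for details"
page2 = "free shipping on orders over 50 view our sale items"
print(round(jaccard(page1, page2), 2))  # → 0.33
```

Pages with little unique body copy push that score toward 1.0, which is one plausible reason a contact page and a sale page end up grouped together.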
Moz Bar | RyanRhodes
-
New Spam Analysis Tool Results Questions
First off, I am incredibly excited about this tool. Secondly, I have a slew of questions, but the ones lingering for me are as follows. There are a few URLs we used to control that are showing up in our list as spammy. The websites no longer exist as of roughly a week ago, and I presume they still show up simply because of the last indexing. That being said, if a website doesn't exist anymore, yet the link still shows up in GWMT or Moz, is it still necessary to disavow? Or is that overkill? http://alcoholdrugrehablosangeles.com/ is an example of a website we used to control and have since removed. My second lingering question: if there are a handful of links registering as spammy, presumably due to a lack of content or duplicate content, and I move that content to its appropriate place on another website and 301 the old domain to its new home, will the "spam score" carry over?
Moz Bar | HashtagHustler
-
Moz Crawler URL parameters & duplicate content
Hi all, this is my first post on Moz Q&A 🙂 Questions:
1. Does the Moz crawler take rel="canonical" into account for search results pages with sorting/filtering URL parameters?
2. How much time does it take for an issue to disappear from the issues list after it's been corrected? Does it come up in the next weekly report?
I'm asking because the crawler is reporting 50k+ pages crawled when, in reality, this number should be closer to 1,000. All pages with query parameters have the correct canonical tag pointing to the root URL, so I'm wondering whether I need to noindex the other pages for the crawler to report correct data:
Original (canonical) URL: DOMAIN.COM/charters/search/mx/BS?search_location=cabo-san-lucas
Filter-active URL: DOMAIN.COM/charters/search/mx/BS?search_location=cabo-san-lucas&booking_date=&booking_days=1&booking_persons=1&priceFilter%5B%5D=0%2C500&includedPriceFilter%5B%5D=drinks-soft
Also, if noindex is the only solution, will it impact the ranking of the pages involved? Note: Google and Bing are only semi-successful at reporting the index page count, each showing around 2.5k results for a site:DOMAIN.com query. The rel=canonical tag was missing for a short period about 4 weeks ago, but since the fix these pages still haven't been deindexed. I appreciate any suggestions regarding the Moz crawler and the Google/Bing index counts!
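One way to sanity-check the fix is to confirm what canonical tag a crawler would actually see on a filtered URL. A minimal sketch with the standard-library HTML parser (the markup below is made up for illustration; feed it a real page's source instead):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of the first <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

# Hypothetical page source for a filtered search URL
html = ('<html><head>'
        '<link rel="canonical" href="/charters/search/mx/BS?search_location=cabo-san-lucas"/>'
        '</head><body>...</body></html>')

finder = CanonicalFinder()
finder.feed(html)
print(finder.canonical)  # → /charters/search/mx/BS?search_location=cabo-san-lucas
```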
Moz Bar | Vukan_Simic
-
4XX errors reported by the SEOmoz tool
Hi, I am an SEOmoz user. Can anybody guide me on how to fix the 4XX errors reported in my "Crawl Diagnostics Summary"? There are many referring URLs reporting the same error. Please guide me on what to do and how to fix it. Thanks
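The usual fix is to find the referring pages and repair the broken internal links there. If you export the crawl issues to CSV, a few lines of Python can group each 4XX URL by its referrers; the column names here are assumptions, so match them to your actual export:

```python
import csv
import io
from collections import defaultdict

# Hypothetical crawl-issues export with "URL", "Status Code", "Referrer" columns
csv_data = """URL,Status Code,Referrer
/old-page,404,/blog/post-1
/old-page,404,/blog/post-2
/typo-link,404,/blog/post-1
"""

by_target = defaultdict(list)
for row in csv.DictReader(io.StringIO(csv_data)):
    if row["Status Code"].startswith("4"):
        by_target[row["URL"]].append(row["Referrer"])

# Each broken URL, with the pages whose links need fixing
for url, referrers in sorted(by_target.items()):
    print(f"{url}: linked from {len(referrers)} page(s)")
```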
Moz Bar | acelerar
-
Error for a page that doesn't exist.
Hi, I'm just trialling this service, and I have a couple of questions that I hope someone can help with. 1. I am getting a high-priority error about a page that cannot be crawled (a 4XX error). The problem is, no such page exists. The URL is my site/comments/feed. It's driving me crazy. 2. I'm also getting errors about missing meta tags in blog posts. I add tags at the time of posting, so I am unsure why these errors show up. Actually, I didn't add tags to all posts, but there are errors on ALL posts, even those I added tags to. Any help would be wonderful. Thanks! Hugh
Moz Bar | hughanderson
-
Duplicate content errors
Hi, I am getting some duplicate content errors in my crawl report for some of our products:

www.....com/brand/productname1.html
www.....com/section/productname1.html
www.....com/productname1.html

We have a canonical tag in the header of all three pages: <link rel="canonical" href="www....com/productname1.html" />
Moz Bar | phes
-
My 301 error and duplicate title content issue is growing!
When I redirect some of my pages, the crawl still shows errors: the pages are reported as not redirecting, even though I set up the redirects 3-4 months ago, with no effect. All the errors under each category are making me sick.
Moz Bar | Esaky