Crawler triggering Spam Throttle and creating 4xx errors
-
Hey Folks,
We have a client with an experience I want to ask about.
The Moz crawler is showing 4xx errors. These are happening because the crawler is triggering my client's spam throttling. They could increase the limit from 240 to 480 page loads per minute, but this could open the door for spam as well.
Any thoughts on how to proceed?
Thanks! Kirk
-
Thank you Dave!
-
Hey Kirk! We built our crawler to obey robots.txt crawl-delay directives. In the future, if this is ever an issue, you can use the crawl delay to slow Rogerbot down to a more reasonable speed. However, we don't recommend adding a crawl delay larger than 10 or Rogerbot might not be able to finish the crawl of your site.
Just add a crawl delay directive to your robots.txt file like this:
User-agent: rogerbot
Crawl-delay: 10

Here's a good article that explains more about this technique: https://moz.com/learn/seo/robotstxt. I hope this helps; feel free to reach out if you have any other questions!
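If you want to sanity-check how a standards-following parser reads that directive before relying on it (Rogerbot's exact parser isn't shown here), a minimal sketch using Python's stdlib `urllib.robotparser`:

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt body directly (parse() takes an iterable of lines)
# instead of fetching it from a live site.
robots_txt = """
User-agent: rogerbot
Crawl-delay: 10
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# crawl_delay() returns the delay in seconds for the given user agent,
# or None if no directive applies to it.
print(parser.crawl_delay("rogerbot"))  # → 10
```

A delay of 10 seconds caps the crawler at 6 requests per minute for that user agent, which is far below the 240/minute throttle described above.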
Related Questions
-
Erroneous 404 Errors?
On a New Rankings & Insights email I got today, one of our sites showed over 50 404 errors, totaling 49% of the site. When viewing the details, every one of the errors shows the URL in the following structure: http://domainname.com/page/domainname.com. I'm not sure why this is happening, but the site and all of its pages are fine. We were having an SSL issue, but that is cleared up now. I just ran a crawl report and everything checked out OK, and there were no results in the .csv file with the domain name concatenated to the end of the URL string. That doesn't seem like it would be the issue, but it's the only issue we were aware of with this site. This is the only one of our sites where this is happening. Does anyone have any thoughts on why this happened? Thank you.
Moz Bar | Indikon
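The `/page/domainname.com` pattern is typical of a link written without a scheme (e.g. `href="domainname.com"` instead of `href="http://domainname.com"`), which browsers and crawlers resolve as a path relative to the current page. Whether that is the actual markup on this site is an assumption, but the mechanism is easy to demonstrate with Python's `urllib.parse`:

```python
from urllib.parse import urljoin

# A link written as href="domainname.com" (no "http://") is treated as a
# relative path and resolved against the page it appears on.
base_page = "http://domainname.com/page/"
bad_href = "domainname.com"          # missing scheme: treated as a path
good_href = "http://domainname.com"  # absolute URL: resolved as-is

print(urljoin(base_page, bad_href))   # → http://domainname.com/page/domainname.com
print(urljoin(base_page, good_href))  # → http://domainname.com
```

If a template or widget emitted links like the first form, a crawler would request exactly the broken URLs described above even though every real page is fine.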
Error: 804 : HTTPS (SSL) error encountered when requesting page
In my crawl report I'm getting the error "804: HTTPS (SSL) error encountered when requesting page." How can I fix this?
Moz Bar | Yesi.Ortega
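An 804 usually points at a certificate problem the crawler hits during the TLS handshake: an expired certificate, a hostname mismatch, or an incomplete certificate chain. One quick check is `openssl s_client -connect yourdomain.com:443` from a shell; a rough Python sketch of the same idea, which performs a verifying handshake and reports the certificate's expiry (the host name is a placeholder):

```python
import socket
import ssl
import time

def cert_days_remaining(not_after: str) -> float:
    """Days until expiry, given the 'notAfter' string from getpeercert()."""
    return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400

def fetch_cert_expiry(host: str, port: int = 443) -> str:
    """Complete a verifying TLS handshake and return the cert's expiry string.

    Raises ssl.SSLCertVerificationError for the same classes of problem
    (expired cert, wrong hostname, untrusted chain) a crawler would hit.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]

# Example (requires network access):
# print(cert_days_remaining(fetch_cert_expiry("yourdomain.com")))
```

If the handshake raises a verification error, the exception message typically names the specific problem (expired, hostname mismatch, self-signed), which is the thing to fix on the server.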
902 Error and Page Size Limit
Hello, I am getting a 902 error when attempting to crawl one of my websites that was recently upgraded to a modern platform to be mobile friendly, HTTPS, etc. After doing some research, it appears this is related to page size. Moz's 902 error description states: "Pages larger than 2MB will not be crawled. For best practices, keep your page sizes to be 75k or less." It appears all pages on my site are over 2MB, because Rogerbot is no longer doing any crawling and is not reporting any issues besides the 902. This is terrible for us because we purchased Moz to track and crawl this site specifically. There are many articles showing that the average page size on the web is now well over 2MB: http://www.wired.com/2016/04/average-webpage-now-size-original-doom/ Given that, I would imagine other users have come up against this as well, and I'm wondering how they handled it. I hope Moz is planning to increase the size limit on Rogerbot, as it seems we are on a course towards sites becoming larger and larger. Any insight or help is much appreciated!
Moz Bar | Paul_FL
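While waiting on a limit change, it is worth measuring which pages actually trip the threshold, since the 2 MB figure from the 902 description quoted above typically includes the full HTML body (whether the limit applies before or after gzip compression is an assumption to verify with support). A minimal sketch:

```python
CRAWL_LIMIT_BYTES = 2 * 1024 * 1024  # the 2 MB ceiling from the 902 description

def over_crawl_limit(body: bytes) -> bool:
    """True if an HTML body exceeds the crawl size ceiling."""
    return len(body) > CRAWL_LIMIT_BYTES

def size_report(url: str, body: bytes) -> str:
    """One-line report of a page's size against the limit."""
    mb = len(body) / (1024 * 1024)
    flag = "OVER" if over_crawl_limit(body) else "ok"
    return f"{url}: {mb:.2f} MB ({flag})"
```

Running every template's rendered HTML through a check like this shows whether the bloat is site-wide (e.g. inlined CSS/JS or base64 images in the page source) or confined to a few page types that can be slimmed down.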
What is Considered Duplicate Content by Crawlers?
I am asking this because I have a couple of site audit tools that I use to crawl a site I work on every week, and they are showing duplicate content issues (I know there is a lot of duplicate content on this site), but some of what is flagged as duplicate content makes no sense. For example, the following URLs were grouped together as duplicate content:

https://www.firefold.com/contact-us
https://www.firefold.com/gabe
https://www.firefold.com/sale

How are these pages duplicate content? I am confused about what site audit tools consider duplicate content. Just FYI, this is data from Moz crawl diagnostics, but SEMrush's site auditor is giving me the same type of data. Any help would be greatly appreciated. Ryan
Moz Bar | RyanRhodes
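Audit tools generally don't publish their exact duplicate test, but a common approach is to compare the full HTML of pages, so pages that are mostly shared template (header, navigation, footer) with very little unique body text get grouped together even when their unique content differs. A rough way to reproduce that kind of comparison yourself is word-shingle Jaccard similarity; the 0.9 threshold below is an illustrative assumption, not any tool's documented cutoff:

```python
def shingles(text: str, k: int = 3) -> set:
    """The set of k-word sequences (shingles) in a page's visible text."""
    words = text.lower().split()
    if len(words) < k:
        return {" ".join(words)} if words else set()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of two pages' shingle sets, from 0.0 to 1.0."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def looks_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    """Flag two pages as duplicates when their shingle overlap is high."""
    return similarity(a, b) >= threshold
```

Feeding the pages' full HTML (rather than just their body copy) into a comparison like this often explains the surprising groupings: if the unique text is a tiny fraction of the page, the template dominates the score.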
Spam score 9/17 and redirect Question
I sat on a .com domain whose name became increasingly popular (xyzselfie) for 2 years. Four months ago I hired a VA to do a task. A miscommunication made this person submit my domain to the spammiest directories the internet has to offer. Also, because of the domain name and the .com, a lot of Asian or otherwise odd sites posted links to my site. I have worked on my site for the last 4 months trying to lower my spam score from a 9. I have:

- Disavowed all the sites that pointed to my site.
- Made more internal links.
- Tried to make my content thicker.
- Included my email and social profiles on the site.

In the process, my competitor's site with the exact domain name but .net and more authority came up for auction. I bought it and pointed it with a permanent redirect to my site (hoping my site would in time lose its spam score). This site will generate an income by appearing in search and AdSense ads. After months of work I'm at a loss what to do. Does the spam score generally take long to drop? Should I try to stop the permanent redirect and direct my .com to the .net domain instead? Are there experts who can lower my score? Should I look for non-spammy directories in its niche and submit my site to them to increase link authority and nofollow links? Any feedback or insight would be highly appreciated.
Moz Bar | onlinegusto
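For the disavow step mentioned above, Google expects a plain-text file with one entry per line, either a full URL or a `domain:` prefix to disavow an entire domain, with `#` lines as comments. An illustrative fragment (all domains here are placeholders, not the actual directories involved):

```text
# Spammy directories the site was mistakenly submitted to
domain:spammy-directory-example.com
domain:another-bad-directory-example.net

# Disavow a single linking page rather than a whole domain
http://example-forum.com/thread/123
```

Disavowing whole domains is usually safer than listing individual URLs for directory spam, since such sites tend to link from many pages.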
Create a report with keyword, label, difficulty, global search volume, and ranking?
Is it possible to create a report containing keyword, label, difficulty, global search volume, and ranking? Currently, in order to get the data, it seems like I need to manage two lists (the keyword list we are tracking and the keyword list in the Difficulty tool) and then somehow manually combine the data. Is there an easier way?
Moz Bar | promfgsystems
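Until the tools export a combined report, the two CSV exports can be joined on the keyword column with a few lines of scripting. A sketch using plain dicts (as produced by `csv.DictReader`); the column names are assumptions, so match them to your actual export headers:

```python
def merge_keyword_data(tracked_rows, difficulty_rows, key="Keyword"):
    """Left-join two exported keyword lists (lists of dicts, e.g. from
    csv.DictReader) on the keyword column, case-insensitively."""
    diff_by_kw = {row[key].strip().lower(): row for row in difficulty_rows}
    merged = []
    for row in tracked_rows:
        combined = dict(row)
        extra = diff_by_kw.get(row[key].strip().lower(), {})
        # Copy difficulty/volume columns in, keeping the tracked keyword text.
        combined.update({k: v for k, v in extra.items() if k != key})
        merged.append(combined)
    return merged
```

Read each export with `csv.DictReader`, pass the two row lists in, and write the merged rows back out with `csv.DictWriter` to get one report with keyword, label, difficulty, volume, and rank side by side.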
We Launched a new site and Rogerbot is still reporting on links/errors from the old site, is there a way to clear those out?
We are mostly a branding agency and have not put a lot of effort into SEO for ourselves... SEO tends to take a backseat to design most of the time, which makes things a little difficult for me at times. We recently launched a new site, http://Roninadv.com/, and the developer and I have done quite a bit of work to make it work well for Google. I was really looking forward to a new crawl report from Roger, but alas, it's like Roger crawled the old site. The new site has been up since last Monday. Is there a way to clear out the old errors? Do I just need to give Roger more time?
Moz Bar | PaulRonin
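Separately from waiting for a fresh crawl, it helps to make sure old-site URLs that still appear in reports 301-redirect to their new equivalents, so any lingering links resolve instead of erroring. An illustrative Apache `.htaccess` fragment (the paths are placeholders; map them to the real old and new URLs, and the equivalent rules differ on nginx or other servers):

```apache
# Permanently redirect known old-site URLs to their new locations
Redirect 301 /old-portfolio /work
Redirect 301 /old-about-us /about

# Pattern example: send anything under /old-blog/ to the new blog index
RedirectMatch 301 ^/old-blog/.*$ /blog/
```

Once the redirects are in place, the next scheduled crawl should follow them and the old-URL errors should age out of the reports on their own.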
Crawl Diagnostics: Exclude known errors and others that have been detected by mistake? New Moz Analytics feature?
I'm curious whether the new Moz Analytics will have a feature (filter) to exclude known errors from the crawl diagnostics. For example, the attached screenshot shows the URL as a 404 error, but it works fine: http://en.steag.com.br/references/owners-engineering-services-gas-treatment-ogx.php. To maintain a better overview of which errors can't be solved, I would like to mark them as "don't take this URL into account..." so I don't try to fix them again next time. On the other hand, I have hundreds of errors generated by forums or by the CMS that I cannot resolve on my own. These kinds of crawl errors I would also like to filter out and categorize as "errors to review later with a specialist." Will this come with the new Moz Analytics? Is there a list that shows which new features will be implemented?
Moz Bar | inlinear
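While waiting on a built-in filter, the exported crawl CSV can be post-processed against a hand-maintained ignore list. A minimal sketch; the `"URL"` column name is an assumption, so check it against your export's actual header:

```python
def exclude_known(issues, known_ok_urls):
    """Drop crawl issues whose URL is on a maintained 'known OK / won't fix'
    list, ignoring trailing slashes so near-identical URLs still match."""
    known = {url.rstrip("/") for url in known_ok_urls}
    return [issue for issue in issues if issue["URL"].rstrip("/") not in known]
```

Keeping the ignore list in a plain text file (one URL per line) gives exactly the two buckets described above: anything on the list is a known false positive or "review later with a specialist" item, and everything that survives the filter is genuinely actionable.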