Rogerbot does not catch all existing 4XX Errors
-
Hi, I've noticed that after each new crawl Rogerbot presents me with new 4XX errors, so why doesn't it report them all at once?
I have a small static site, and 9 crawls ago I had 10 4XX errors, so I tried to fix them all.
On the next crawl Rogerbot still found 5 errors, so I thought I had not fixed them all... but this has now happened many times, so before the latest crawl I checked that I had really, 101%, fixed all the errors. Today, although I really did correct those 5 errors, Rogerbot digs out 2 "new" errors. So does Rogerbot not catch all the errors that have been on my site for many weeks?
Please see the screenshot of how I was chasing the errors.
-
I understand.
I am not using a CMS and the site is not very big, so I wondered why Rogerbot did not find all the 404 errors the first time, because they have been there for many months.
Holger
-
Hey Holger,
Our crawler will catch as many errors as it can. It's possible that these errors were not present, or simply were not found, at the time of the crawl. I'm running a crawl test to see if there's any discrepancy between your current campaign crawl and mine, just to double-check.
In general, Kyle is correct that sometimes those errors just crop up, especially if you're using any sort of CMS.
I hope that helps. I'll update here after my crawl test is done.
Cheers,
Joel.
-
Hi Holger,
4XX errors can be quite common depending on your site setup, so don't be surprised that Roger will keep returning errors for you to fix.
I would advise checking this data against Google Webmaster Tools' own crawl error data, which you can find in Webmaster Tools under Health > Crawl Errors.
I hope that helps,
K
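One way to double-check fixes independently, before waiting for the next crawl, is to re-request each flagged URL yourself. Below is a minimal sketch, assuming Python with the requests library and a hand-compiled list of the URLs Rogerbot reported (the example URLs are placeholders):

    import requests

    # URLs that the crawl flagged as 4XX - replace with your own list
    urls = [
        "http://www.example.com/old-page",
        "http://www.example.com/another-missing-page",
    ]

    for url in urls:
        try:
            # Don't follow redirects, so a 301/302 "fix" is visible as such
            response = requests.get(url, allow_redirects=False, timeout=10)
            print(response.status_code, url)
        except requests.RequestException as exc:
            print("ERROR", url, exc)

Anything still returning a 4XX here is likely to reappear in the next crawl. Keep in mind that Rogerbot only reports errors on URLs it actually discovers through links during that crawl, which is one reason the list can change from crawl to crawl even on a static site.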
Related Questions
-
Unsolved: Rogerbot blocked by Cloudflare and not displaying the full user agent string
Hi, we're trying to get Moz to crawl our site, but when we use Create Your Campaign we get the error: "Ooops. Our crawlers are unable to access that URL - please check to make sure it is correct. If the issue persists, check out this article for further help." The robots.txt is fine, and we can actually see that Cloudflare is blocking the crawl with Bot Fight Mode. We've added some rules to allow rogerbot, but these seem to be getting ignored. If we use a robots.txt testing tool (https://technicalseo.com/tools/robots-txt/) with rogerbot as the user agent, it gets through fine and we can see that our rule has allowed it. When viewing the Cloudflare activity log (attached), it seems that Create Your Campaign is trying to crawl the site with the user agent set simply to "rogerbot 1.2", whereas the robots.txt testing tool uses the full user agent string "rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-, rogerbot-crawler+shiny@moz.com)", albeit at version 1.0. So it seems Cloudflare doesn't like the simple user agent. Is it correct that when Moz crawls the site it now uses the simple string "rogerbot 1.2"? Thanks,
Ben
[Screenshot: Cloudflare activity log showing the differences in user agent strings]
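A minimal sketch of a workaround, assuming the mismatch really is the short user agent string: a Cloudflare custom firewall rule that matches on a substring should catch both "rogerbot 1.2" and the full "rogerbot/1.0 (...)" form. The expression below uses Cloudflare's rule syntax as an assumption to verify against their current documentation:

    (http.user_agent contains "rogerbot")

with the action set to allow/skip and the rule placed above any blocking rules. Note that Bot Fight Mode may not honour custom allow rules on every plan, so it might need to be disabled temporarily while the crawl runs.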
Moz Pro | | BB_NPG
-
Error Code 902 & 403
Several thousand of these popped up on my crawl report, and the links appear to be searches, i.e., like the ones below. 902: http://thespacecollective.com/index.php?route=product/search&tag=nasa+ma-1+jacket%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F 403: http://thespacecollective.com/index.php?route=product/search&tag=periodic+table+tshirt%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F I don't want Moz, let alone Google, finding this kind of nonsensical link, but I don't know exactly what the problem is or how to fix it. Am I right in thinking these are pages people have searched for? Can anyone shed some light on this, please?
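A minimal sketch of one way to keep compliant crawlers out of internal search results like these, assuming the route parameter always appears first as in the examples above:

    User-agent: *
    Disallow: /index.php?route=product/search

If the parameter order can vary, a wildcard pattern such as /*route=product/search covers it for crawlers that support wildcards. This only stops the URLs being crawled; the repeated %2F sequences themselves are still worth tracing back to whatever template or widget is generating them.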
Moz Pro | | moon-boots
-
Multiple Countries, Same Language: Receiving Duplicate Page & Content Errors
Hello! I have a site that serves three English-speaking countries, and is using subfolders for each country version: United Kingdom: https://site.com/uk/ Canada: https://site.com/ca/ United States & other English-speaking countries: https://site.com/en/ The site displayed is dependent on where the user is located, and users can also change the country version by using a drop-down flag navigation element in the navigation bar. If a user switches versions using the flag, the first URL of the new language version includes a language parameter in the URL, like: https://site.com/uk/blog?language=en-gb In the Moz crawl diagnostics report, this site is getting dinged for lots of duplicate content because the crawler is finding both versions of each country's site, with and without the language parameter. However, the site has rel="canonical" tags set up on both URL versions and none of the URLs containing the "?language=" parameter are getting indexed. So...my questions: 1. Are the Duplicate Title and Content errors found by the Moz crawl diagnostic really an issue? 2. If they are, how can I best clean this up? Additional notes: the site currently has no sitemaps (XML or HTML), and is not yet using the hreflang tag. I intend to create sitemaps for each country version, like: .com/en/sitemap.xml .com/ca/sitemap.xml .com/uk/sitemap.xml I thought about putting a 'nofollow' tag on the flag navigation element, but since no sitemaps are in place I didn't want to accidentally cut off crawler access to alternate versions. Thanks for your help!
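Since the three folders are genuine alternates of each other, hreflang annotations are the standard way to make that explicit to crawlers. A minimal sketch for the UK blog page, with the URLs as placeholders based on the structure described above:

    <link rel="alternate" hreflang="en-gb" href="https://site.com/uk/blog" />
    <link rel="alternate" hreflang="en-ca" href="https://site.com/ca/blog" />
    <link rel="alternate" hreflang="en" href="https://site.com/en/blog" />
    <link rel="alternate" hreflang="x-default" href="https://site.com/en/blog" />

As long as the ?language= URLs keep canonicalising to the clean folder URLs and stay out of the index, the crawl tool's duplicate warnings are more of a reporting artifact than a ranking problem, but hreflang plus the planned per-country sitemaps makes the relationship unambiguous.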
Moz Pro | | Allie_Williams
-
Duplicate Errors found in my search
I have run my first site check with SEOmoz and have 4000+ errors. The "Duplicate Page Content" culprit appears to be an extended URL that keeps showing up as a duplicate. This is only a customer log-in and can be redirected back to the main customer log-in page, but is there a short way of doing it (rather than 4000 individual 301s)? The format of the URL is: http://www.????.com.au/default/customer/account/login/referer/aSR0cDovL3d3dy1234YWNiYW Thanks
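A minimal sketch of a single pattern-based redirect, assuming an Apache server (mod_alias) and guessing that the main log-in page lives at /default/customer/account/login/ (adjust the target to the real path):

    # One rule covering every /referer/... variant, instead of thousands of individual 301s
    RedirectMatch 301 ^/default/customer/account/login/referer/ /default/customer/account/login/

Blocking the /referer/ URLs in robots.txt or canonicalising them to the plain log-in page would achieve much the same from a crawl-report point of view; which is cleanest depends on the platform behind the site.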
Moz Pro | | Paul_MC
-
404 errors in SEOMoz crawl tool
I currently have several 404 errors in the latest crawls from SEOmoz. Here is an example of the error: http://dealerplatform.com/blog/2011/10/23/videos-for-auto-dealers/www.dealerplatform.com/ In all cases the error is the result of www.dealerplatform.com being appended to the real URL. Has anyone seen this before? The site is a WordPress multisite. I don't see this incorrect link showing up anywhere on the website. Any advice would be helpful. Thanks
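That URL pattern is typical of a link written without the http:// scheme, which browsers and crawlers resolve relative to the current page. A minimal, hypothetical illustration:

    <!-- Scheme-less href: resolved relative to the blog post, producing .../videos-for-auto-dealers/www.dealerplatform.com/ -->
    <a href="www.dealerplatform.com/">Dealer Platform</a>

    <!-- Absolute URL, which is probably what was intended -->
    <a href="http://www.dealerplatform.com/">Dealer Platform</a>

Searching the theme, widgets, and post content for href="www. will usually turn up the offending link.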
Moz Pro | | Chris_Gregory
-
Why am I getting 400 client errors on pages that work?
Hi, I've just done the initial crawl on my domain and I seem to have 80 "400" client errors. However, when I visit the URLs the pages load fine. Any ideas on why this is happening and how I can resolve the problem?
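A quick way to see what a crawler, rather than a browser, gets back is to request a flagged URL from the command line with a bot-like user agent. A minimal sketch (the URL and user agent string are placeholders):

    curl -s -o /dev/null -w "%{http_code}\n" -A "rogerbot" "http://www.example.com/some-flagged-page"

If this prints 400 while a browser loads the page normally, the server or a firewall in front of it is treating bot-like requests differently, which is a common cause of crawl-only 4XX reports.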
Moz Pro | | moesian
-
Fixing errors from SEOmoz diagnostic survey
I just ran a report from the SEOmoz diagnostic survey and was surprised to see errors. How do I fix these errors? Thanks in advance for your help; I have been pleasantly surprised by the thoughtfulness and responsiveness of this community. Errors: 5XX (server error), Overly Dynamic URL, 302 Temporary Redirect, Too many on-page links (how many is ideal?)
Moz Pro | | TheVolkinator
-
How come when I export an error list I can only export the first page?
I am working on fixing the 4XX errors. I have found the easiest way to do this would be to export the list, print it out, and check off the ones I've fixed. The site only lets me export the first page. We'd appreciate any help. Thanks, Ryan D. Gran (Not sure what category this question belongs in, so I selected SEOmoz Tools.)
Moz Pro | | dggusmc