Rogerbot's crawl behaviour vs. Google's spiders and other crawlers - disparate results have me confused.
-
I'm curious how accurately Rogerbot replicates Google's search bot.
I currently have a site that is reporting over 200 pages of duplicate titles/content in the Moz tools. The pages in question are all session-ID URLs and were blocked in robots.txt about three weeks ago; however, the errors are still appearing.
I've also crawled the site using Screaming Frog SEO Spider. According to Screaming Frog, the offending pages have been blocked and are not being crawled. Google Webmaster Tools is also reporting no crawl errors.
Is there something I'm missing here? Why would I receive such different results? Which one should I trust? Does Rogerbot ignore robots.txt? Any suggestions would be appreciated.
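For reference, a rough sanity check along these lines is Python's built-in robots.txt parser. The domain and session URL below are just placeholders for my own, and the parser only implements the basic robots.txt spec (no Google-style * wildcards), so it shows how a plain Disallow prefix is treated rather than exactly what Googlebot will do:

from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"               # placeholder: your domain
TEST_URL = SITE + "/page?sessionid=abc123"     # placeholder: one of the flagged session-ID URLs

parser = RobotFileParser()
parser.set_url(SITE + "/robots.txt")
parser.read()                                  # fetch and parse the live robots.txt

for user_agent in ("Googlebot", "rogerbot", "Screaming Frog SEO Spider"):
    verdict = "allowed" if parser.can_fetch(user_agent, TEST_URL) else "blocked"
    print(f"{user_agent:30} -> {verdict}")

If the session-ID URLs come back "blocked" for every user agent, the rules themselves are at least consistent, and any remaining difference is down to when each tool last recrawled.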
-
Thanks for your response. I was beginning to think this question had been left to rot.
I'm not getting any errors in WMT. What is concerning is that Roger is returning almost 300 duplicate content errors, which is obviously a problem. Screaming Frog is no longer finding the pages (they've been blocked in robots.txt). I guess what I'm trying to ask here is: how can I be sure that my duplicate content has been effectively blocked from Google's spider?
Is there any way to check?
Thanks for your help.
-
I've seen similar concerns from others; it seems Rogerbot does ignore certain things that other bots respect.
Don't worry about it; if it's not being flagged in WMT, it shouldn't be an issue.
Take Roger as a guide rather than an iron-fist bot like Googlebot.
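That said, if you want extra reassurance beyond testing the URLs against your robots.txt in Webmaster Tools, one rough check is your raw server access logs: if Googlebot has stopped requesting the session-ID URLs since you added the block, you're in good shape. A sketch along these lines, where the log path and the "sessionid" parameter name are placeholders to adjust for your setup, and which assumes a standard combined-format log with the user agent on each line:

LOG_FILE = "access.log"        # placeholder: path to the raw server access log
SESSION_PARAM = "sessionid="   # placeholder: the session parameter used in the flagged URLs

googlebot_hits = 0
with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        # In a combined-format log the requested path and the user agent sit on one line,
        # so a plain substring test is enough for a rough count.
        if "Googlebot" in line and SESSION_PARAM in line:
            googlebot_hits += 1

print(f"Googlebot requests for session-ID URLs: {googlebot_hits}")

Also bear in mind the Moz crawl only refreshes on its own weekly schedule, so errors can hang around in the report for a while after a block takes effect.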
Related Questions
-
Pages Crawled: 1 Why?
I have some campaigns where only 1 page has been crawled, while other campaigns with a very similar URL (subdomain) and the same number of keywords and pages have all their pages crawled... Why is that? I've also been waiting for a while now and so far there's been no change...
Moz Pro | BritishCouncil0
-
We're back. THANK YOU SEOMoz!
To all the Mozzers reading this, I joined SEOMoz in a panic when we were seeing the first hints of an impending ranking crisis for our company website in October 2012. We tanked in late December for our most important keywords. Simply GONE. Holy C---! It was devastating to say the least. Thankfully, we have a lot of work due to current clients and referrals, but the phone got quiet. That's a deadly sound. 15 years of comfort zone gone. I still don't know precisely why it happened; it may have been a number of factors. But I've learned a huge amount through this crisis and can honestly say that I'm glad it happened. How weird is that? But what other fire under my butt would have gained me so much curiosity, fascination, engagement and excitement with deepening my scant knowledge? A week ago, we got our first report showing us that we have now gotten back our positions on page 1 for the 10 keywords that matter. It's continuing to improve according to today's report. So now that I've gotten somewhere - and I am well aware that it may not stick and there is still more to do - it's time to shout out a great big "Thank You". Truly, I don't think I could have survived this situation without SEOMoz and all you wonderful Mozzers answering my frantic questions and lending your expertise and support to my situation. It wasn't only my own questions and the responses to those that helped me through this. I have been essentially addicted to reading a vast majority of all the posted questions because it's so dang interesting. So I've been learning bucketfuls from other people's questions and responses. My own personal SEOMoz education. It's only just begun. What an incredible culture and community we have here. I feel I can say "we" now because I am beginning to feel a part of it. And I'm not going anywhere anytime soon. I love this place. Thank you!
Moz Pro | gfiedel11
-
Set crawl frequency
The current crawl frequency is weekly; is it possible for us to set this frequency ourselves?
Moz Pro | bhanu22170
-
How to read the downloaded crawler report
I am trying to separate out the URLs with duplicate titles and descriptions, but looking at the report I can't work out how to find all the URLs that share the same title and description. Is there a video on the site that walks me through each part of the report? Thanks, Punam
Moz Pro | nonlinearcreations0
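For a question like this, one rough way to group the export by title without going line by line is a few lines of Python. The filename "crawl_export.csv" and the column headers "Page Title" and "URL" below are assumptions, so match them to the headers in the actual download:

import csv
from collections import defaultdict

urls_by_title = defaultdict(list)
with open("crawl_export.csv", newline="", encoding="utf-8") as f:   # assumed filename
    for row in csv.DictReader(f):
        urls_by_title[row["Page Title"]].append(row["URL"])         # assumed column headers

for title, urls in urls_by_title.items():
    if len(urls) > 1:                  # only titles shared by more than one URL
        print(title)
        for url in urls:
            print("   " + url)
-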
Keyword research & how it's relevant to my site.
How do I know if I can compete on a particular keyword? Say the keyword analysis tool shows the keyword difficulty is 59%. How do I know what 59% means for my site, other than checking domain and page authority relative to my site (i.e., whether sites in the top 10 have higher or lower authority than mine)? Is there a way of showing what keyword difficulty percentage is a cut-off point for my site? Thanks, Dan
Moz Pro | dcostigan0
-
Excluding parameters from the SEOmoz crawl?
I'm getting a ton of duplicate content errors because almost all of my pages feature a "print this page" link that adds the parameter "printable=Y" to the URL and displays a plain text version of the same page. Is there any way to exclude these pages from the crawl results?
Moz Pro | AmericanOutlets0
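A rough stopgap for a case like this, while the parameter is being excluded at the robots.txt or crawl-settings level, is to filter those rows out of the exported crawl CSV before reviewing it. A sketch, assuming an export named "crawl_export.csv" with a "URL" column (both are assumptions to adjust):

import csv

with open("crawl_export.csv", newline="", encoding="utf-8") as f:    # assumed export filename
    kept = [row for row in csv.DictReader(f) if "printable=Y" not in row["URL"]]   # assumed column header

print(f"{len(kept)} rows remain after dropping printable=Y URLs")
-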
SEOmoz's Crawl Diagnostics is showing an error where the title is missing on our sitemap.xml file?
Hi everyone, I'm working on our website Sky Candle and I've been running it as a campaign in SEOmoz. I've corrected a few errors we had with the site previously, but today it was recrawled and a new error was found: a missing title tag on the sitemap.xml file. Is this a little glitch in the SEOmoz system, or do I need to add a page title and meta description to my XML file? http://www.skycandle.co.uk/sitemap.xml Any help would be greatly appreciated. I didn't think I'd need to add this. Kind regards, Lewis
Moz Pro | LewisSellers0
-
Filtering OSE Results
Issue: When I export OSE Linking Pages results to .CSV, I'd like to filter only unique domains. It seems to me that there should be a way to do this with Excel, perhaps a pivot table. Anyone have a quick solution for this? Alternatively, I can use Linking Domains, which gives all unique roots, but then I lose the follow/nofollow filter. Thoughts?
Moz Pro | Gyi0
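A rough sketch for this one: read the exported Linking Pages CSV and keep the first row seen per hostname, which preserves every other column (including any follow/nofollow data). The filename and the "URL" column header are assumptions to match to the actual export, and note this keys on full hostnames, so collapsing subdomains down to registered root domains would need extra handling:

import csv
from urllib.parse import urlparse

seen_domains = set()
unique_rows = []
with open("linking_pages.csv", newline="", encoding="utf-8") as f:   # assumed export filename
    for row in csv.DictReader(f):
        # Assumes the URL column includes the scheme (http://...); keys on the full hostname.
        domain = urlparse(row["URL"]).netloc.lower()                  # assumed column header
        if domain and domain not in seen_domains:
            seen_domains.add(domain)
            unique_rows.append(row)    # first row per domain keeps all columns, incl. follow/nofollow

print(f"{len(unique_rows)} unique linking domains kept")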