What user agent is used by SEOMOZ crawler?
-
We have a pretty tight robots.txt file in place that only allows the major search engines. I do not want to block SEOMOZ.ORG from crawling the site, so I want to make sure its user agent is allowed.
-
Hi Eric,
The user agent of the SEOmoz crawler is rogerbot.
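For reference, a whitelist-style robots.txt can be sanity-checked with Python's standard `urllib.robotparser`. The file below is a hypothetical example of a "tight" robots.txt that admits only selected crawlers, not Eric's actual file:

```python
from urllib import robotparser

# Hypothetical robots.txt: allow Googlebot and rogerbot, block everyone else.
robots_txt = """\
User-agent: Googlebot
Disallow:

User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# An empty Disallow means "allow everything" for that agent.
print(rp.can_fetch("rogerbot", "https://example.com/page"))      # True
print(rp.can_fetch("SomeOtherBot", "https://example.com/page"))  # False
```

Running a check like this against your own robots.txt before deploying it is a cheap way to confirm rogerbot is not caught by the catch-all block.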
Related Questions
-
Our crawler was not able to access the robots.txt file on your site.
Good morning. Yesterday, Moz gave me an error that it wasn't able to find our robots.txt file. This is a new occurrence; we've used Moz and its crawling ability many times before, and I'm not sure why the error is happening now. I validated that the redirects and our robots page are operational and that nothing in our robots.txt disallows Roger. Any advice or guidance would be much appreciated. https://www.agrisupply.com/robots.txt Thank you for your time. -Danny
Moz Pro | Danny_Gallagher
-
No SEOmoz crawler report since May 7th!
Hi Moz team, I added a site of fewer than 200 pages to the SEOmoz crawler tool on May 7. It is now May 15 and the tool still displays "crawl in progress." Is there a problem with this tool? This is embarrassing... Thank you for your reply. David, France
Moz Pro | DavidEichholtzer
-
Why do crawlers still track meta keywords if they are not needed on my site?
I have crawled three sites already and it returns more than 5,000 errors, most of which are Missing Meta Keywords tags. The sites are on WordPress, and using my SEO plugin I can easily edit the meta keywords of each page, but I am having second thoughts. Well, should I?
Moz Pro | jernest002
-
Question #3) My last question has to do with some SEOmoz crawl diagnostics -
I recently fixed (well, I am asking to make sure that this was the right thing to do in my first question, posted a few minutes ago) a problem where all of my internal main sidebar category pages were linking using https://, which to my knowledge means secure pages. Anyway, OSE and Google seem not to be recognizing the link juice, and my rank fell for one of my main keywords by two positions about a week after I made the fix to have the pages be indexable. Making my pages properly linked can't be a bad thing, right? That's what I said. So I looked deeper, and my crawl diagnostics reports showed a massive reduction in warnings (about 3,000 301 redirects were removed by changing https:// to http://, because all the secure pages were redirecting to the regular http:// structure) and an increase in duplicate page titles and temporary redirects. Could that have been the reason the rank dropped? I think I am going to fix all the duplicate page title problems tonight, but still, I am a little confused as to why such a major fix didn't help and appeared to hurt me. I feel like it hurt the rank not because of what I did, but because what I did caused a few extra redirects and opened the doors for the search engine to discover more pages that had problems (which could have triggered an algorithm that says, hey, these people have too many duplicate problems). Any thoughts will be greatly appreciated, thumbed, thanked, and marked as best answers! Thanks in advance for your time, Tyler A.
Moz Pro | TylerAbernethy
-
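One way to audit a fix like the sidebar change described above is to scan a page for internal links that still use https://. This is a minimal sketch using Python's standard `html.parser`; the `SecureLinkFinder` class, domain, and HTML snippet are made up for illustration:

```python
from html.parser import HTMLParser

class SecureLinkFinder(HTMLParser):
    """Collect internal links that use https:// (hypothetical audit helper)."""

    def __init__(self, domain):
        super().__init__()
        self.domain = domain
        self.secure_links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        # Flag only internal links on the secure scheme.
        if href.startswith("https://" + self.domain):
            self.secure_links.append(href)

html = (
    '<a href="https://example.com/category/widgets">Widgets</a>'
    '<a href="http://example.com/about">About</a>'
)
finder = SecureLinkFinder("example.com")
finder.feed(html)
print(finder.secure_links)  # ['https://example.com/category/widgets']
```

Running a scan like this over the sidebar template makes it easy to confirm no category links were missed during the https:// to http:// cleanup.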
Is there a good SEOMoz training for a new user to learn how to use the software?
Moz Pro | IBS-DCM
-
Does SEOmoz recognize duplicated URLs blocked in robots.txt?
Hi there: Just a newbie question... I found some duplicated URLs in the SEOmoz Crawl Diagnostics reports that should not be there. They are intended to be blocked by the robots.txt file. Here is an example URL (Joomla + VirtueMart structure): http://www.domain.com/component/users/?view=registration and here is the blocking rule in the robots.txt file: User-agent: * Disallow: /components/ Question is: will this kind of duplicated URL error be removed from the error list automatically in the future? Should I keep track of which errors should not really be in the error list? What is the best way to handle this kind of error? Thanks and best regards, Franky
Moz Pro | Viada
-
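Worth noting in the question above: the example URL lives under /component/ (singular) while the rule disallows /components/ (plural), so the rule may simply not match, which would explain why the URLs still appear in the crawl report. A quick check with Python's standard `urllib.robotparser`, using the question's own URL and rule, illustrates the mismatch:

```python
from urllib import robotparser

# Rule exactly as given in the question above.
robots_txt = """\
User-agent: *
Disallow: /components/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

url = "http://www.domain.com/component/users/?view=registration"
# /component/ (singular) does not start with /components/ (plural),
# so this URL is still crawlable under the rule:
print(rp.can_fetch("rogerbot", url))  # True
```

Whether this is the actual cause on the asker's site can't be confirmed here, but path-prefix typos like this are a common reason "blocked" URLs keep showing up in crawl reports.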
SEOmoz crawler bug?
I just noticed that a few of my campaigns show a number of pages crawled of 1. Can someone tell me what this is? Out of 5 campaigns, 2 have only one page crawled, one of which is an online shop with over 2,000 products 🙂
Moz Pro | mosaicpro
-
How do I get SEOmoz to re-crawl a site?
I had a lot of duplicate content issues and have fixed all the other warnings. I want to check the site again.
Moz Pro | adamzski