Seomoz Spider/Bot Details
-
Hi All
Our website identifies a list of search engine spiders so that it does not show them session IDs when they come to crawl, which prevents the search engines from thinking there is duplicate content all over the place.
The SEOmoz crawl has brought up over 20k crawl errors on the dashboard due to session IDs. Could someone please give the user-agent details for the SEOmoz bot so that we can add it to the list on the website? That way, when it comes to crawl, it won't be shown session IDs and generate all these crawl errors.
Thanks
-
An old answer to an old question, but one which has just helped me out tremendously. A true credit to the Q&A system!
-
Thanks Alan, much appreciated.
-
The 3 dots placed before the last bracket need to be removed.
-
Mozilla/5.0 (compatible; rogerBot/1.0; http://www.seomoz.org/dp/rogerbot)
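For anyone implementing the exclusion, the check is just a case-insensitive substring match on the User-Agent header. A minimal sketch in Python (the bot list and function name here are illustrative, not Moz's or the asker's actual implementation):

```python
# Known crawler tokens to match against the User-Agent header.
# Illustrative list; "rogerbot" is the token from the UA string above.
BOT_SUBSTRINGS = ("googlebot", "bingbot", "rogerbot")

def is_known_bot(user_agent: str) -> bool:
    """Return True if the User-Agent header matches a known crawler."""
    ua = user_agent.lower()
    return any(bot in ua for bot in BOT_SUBSTRINGS)

# The site would skip appending session IDs whenever this returns True.
ua = "Mozilla/5.0 (compatible; rogerBot/1.0; http://www.seomoz.org/dp/rogerbot)"
print(is_known_bot(ua))  # True
```

Note the lowercasing: the string above spells it "rogerBot", so an exact-case comparison would miss it.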
Related Questions
-
SEOmoz API not working for Scrapebox
I want to import SEOmoz data for a list of URLs I have using Scrapebox. I added my credentials according to the API, but am getting error 401 as the status of all my links. Any idea why, and what I should be doing?
Moz Pro | theLotter0
-
Noindex/nofollow on blog comments: is it good or bad?
Hi, I changed the design of one of my WordPress websites at the beginning of the month. I also added a "Facebook SEO comments" plugin to rewrite Facebook comments as normal comments. As most of the website's comments are Facebook comments, I went from 250 noindex/nofollow comments to 950; the URLs are ?replytocom=4822 etc. My Moz campaign noticed it, and I'm asking myself: is it good to have comments in noindex/nofollow? Should I do something about this? Erwan.
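For readers hitting the same ?replytocom duplicate-URL issue, one common generic pattern (not advice given in this thread, and worth checking against current search engine guidance) is a robots.txt rule using a wildcard:

```
User-agent: *
Disallow: /*?replytocom=
```

Major search engines support the `*` wildcard in robots.txt; the meta robots noindex/nofollow the question describes is another widely used option.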
Moz Pro | johnny1220
-
Multiple Sites/Internal Pages Campaign
Can Moz do reports/ranks/campaigns separately for each of our sites, and then run separate keyword campaigns for specific internal content pages of each of the websites? This is for a law firm with both a defense site and a family law site. We have multiple pages within each site, and we will need to run separate, individual campaigns for assault, burglary, traffic tickets, etc. on our defense site. We'll need other specific campaigns for other specific content on our family law site. Will Moz be able to accommodate our needs with their $99/mo plan, or is that service only available in higher packages? (Or is it even possible in ANY of their packages???) Thanks.
Moz Pro | Wallin_Klarich0
-
Does SEOmoz have a tool to find mirror sites?
I heard from a company that is trying to win my client's SEO business that they discovered multiple sites mirroring our site's content. Does SEOmoz have a tool to find these websites? Or does Google?
Moz Pro | thomas.wittine0
-
Does SEOmoz recognize duplicated URLs blocked in robots.txt?
Hi there: Just a newbie question... I found some duplicated URLs in the "SEOmoz Crawl Diagnostics reports" that should not be there. They are intended to be blocked by the site's robots.txt file. Here is an example URL (Joomla + VirtueMart structure): http://www.domain.com/component/users/?view=registration and here is the blocking content in the robots.txt file:

User-agent: *
Disallow: /components/

Question is: will this kind of duplicated URL error be removed from the error list automatically in the future? Do I have to keep track of which errors should not really be in the error list? What is the best way to handle this kind of error? Thanks and best regards, Franky
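One thing worth checking in that example: the rule disallows /components/ (plural) while the example URL lives under /component/ (singular), so a prefix-matching parser will not block it. This can be verified with Python's standard-library robots.txt parser (domain.com stands in for the real site):

```python
from urllib import robotparser

# The rule as quoted in the question: /components/ (plural)...
rp = robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /components/"])

# ...but the duplicated URL lives under /component/ (singular),
# so the prefix does not match and crawling is allowed.
url = "http://www.domain.com/component/users/?view=registration"
print(rp.can_fetch("*", url))  # True: not blocked

# A rule matching the actual path prefix would block it.
rp2 = robotparser.RobotFileParser()
rp2.parse(["User-agent: *", "Disallow: /component/"])
print(rp2.can_fetch("*", url))  # False: disallowed
```

If the intent was to block those registration URLs, the singular/plural mismatch would be the first thing to fix.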
Moz Pro | Viada0
-
How to stop SEOmoz from crawling a subdomain without redoing the whole campaign?
I am using SEOmoz for a client to track their website's performance and fix errors and issues. A few weeks ago, they created a subdomain (sub.example.com) as a niche website for some of their specialized content. However, when SEOmoz re-crawled the main domain (example.com), it also reported the errors for the subdomain. Is there any way to stop SEOmoz from crawling the subdomain and only crawl the main domain? I know it can be done by starting a new campaign, but is there any way to work around this in an existing campaign? I'm asking because we would like to avoid setting up the campaign again and losing the historical data. Any input would be greatly appreciated. Thanks!
Moz Pro | TheNorthernOffice790
-
What causes Crawl Diagnostics processing errors in an SEOmoz campaign?
I'm getting the following error when SEOmoz tries to spider my site: "First Crawl in Progress! Processing Issues for 671 pages. Started: Apr. 23rd, 2011." Here is the robots.txt data from the site, which disallows ALL bots for image directories and JPEG files:

User-agent: *
Disallow: /stats/
Disallow: /images/
Disallow: /newspictures/
Disallow: /pdfs/
Disallow: /propbig/
Disallow: /propsmall/
Disallow: /*.jpg$

Any ideas on how to get around this would be appreciated 🙂
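Setting aside the /*.jpg$ line (wildcard matching is an extension to the original robots.txt convention, though major engines support it), the prefix rules in that file only block the listed directories, so they are unlikely to explain problems on ordinary pages. A quick sketch with Python's standard parser, using example.com as a stand-in host:

```python
from urllib import robotparser

# The prefix rules from the robots.txt above (the /*.jpg$ wildcard line is
# omitted here because urllib.robotparser only does plain prefix matching).
rules = [
    "User-agent: *",
    "Disallow: /stats/",
    "Disallow: /images/",
    "Disallow: /newspictures/",
    "Disallow: /pdfs/",
    "Disallow: /propbig/",
    "Disallow: /propsmall/",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# The image directories are blocked for every bot, including Moz's crawler...
print(rp.can_fetch("rogerbot", "http://example.com/images/logo.jpg"))  # False

# ...but ordinary pages remain crawlable.
print(rp.can_fetch("rogerbot", "http://example.com/contact.html"))     # True
```

Given that, the "First Crawl in Progress" text reads more like a status message than a robots.txt failure, but that is a guess from the quoted wording alone.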
Moz Pro | cmaddison0
-
Would an SEOmoz Q & A feed be useful?
I don't see a Q & A feed subscribe button. Am I missing it? Seems to me that being able to receive updates when new questions are posted would be useful.
Moz Pro | Gyi1