Roger Bot
-
Hi Mozzers,
I have a dev site that I want to run your crawl test on (Rogerbot), but I want to ensure the other engines don't crawl it.
What robots.txt lines do I need so that only Rogerbot can get in and not Google etc.?
Please advise
Thanks
Gareth
-
Hi Gareth, your robots.txt should look like this:
User-agent: *
Disallow: /
User-agent: rogerbot
Allow: /
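Robots.txt groups are matched by user agent, not by file order: Rogerbot will use the group addressed to it and ignore the catch-all, while every other bot falls back to the blanket Disallow. If you want to sanity-check the rules before deploying them, Python's standard-library parser approximates how most crawlers read them (a quick sketch; dev.example.com is a placeholder, and individual bots can differ in how they honour Allow):

from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /

User-agent: rogerbot
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Rogerbot matches its own user-agent group and is let in;
# everyone else falls back to the catch-all group and is blocked.
print(parser.can_fetch("rogerbot", "http://dev.example.com/page"))   # True
print(parser.can_fetch("Googlebot", "http://dev.example.com/page"))  # False

Bear in mind robots.txt is advisory: it keeps well-behaved crawlers out, but it is no substitute for password-protecting a dev site.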
Related Questions
-
How do I redeem a tweet from Roger?
Last week, I reached over 100 MozPoints and I've been waiting by my virtual mailbox for days for a tweet from Roger. The rewards table reads: Level: Aspirant, MozPoints: 100-199, Benefit: a tweet from Roger (the week you reach 100 MozPoints). The past few days have been sad. I've been moping around the office feeling terribly alone, wondering why I've received no Twitter-based recognition for my efforts. The radio has been playing 'Careless Whisper' on repeat and it hasn't stopped raining outside. Can anyone help? (I've attached an image of my sadness for reference.)
Moz Pro | seanginnaw
-
Have a campaign, but it states only 1 page has been crawled by SEOmoz bots. What needs to be done to have all the pages crawled?
We have a campaign running for a client in SEOmoz, and only 1 page has been crawled per SEOmoz's data. There are many pages on the site and a new blog with more articles posted each month, yet Moz is not crawling anything aside from maybe the home page. The odd thing is, Moz is reporting data on all the other inner pages for errors, duplicate content, etc. What should we do so all the pages get crawled by Moz? I don't want to delete and start over, as we followed all the steps properly when setting up. Thank you for any tips here.
Moz Pro | WhiteboardCreations
-
What Are Roger's Super Powers?
I ended up with a box of SEOmoz swag. (Thanks! As to how this came to pass... I shall draw a veil, as they used to say in Victorian novels.) My upstairs neighbour Max, age 5, enjoys a rich fantasy life and is very much into superheroes and costumes. Naturally, he ended up with a lot of the Roger stickers. Alas, I was unable to answer all of Max's questions. When he asked, "What does Roger do?" I replied, "Roger makes your computer work." Pretty good, I thought. But then Max asked, "What does the antenna do?" I was kind of stumped. Then it got worse. Max asked what Roger's superpowers are and if he could beat Spiderman. I tried to change the subject. Max wasn't impressed. What are the answers? Enquiring five-year-old minds want to know!
Moz Pro | DanielFreedman
-
Will SEOmoz offer URL data relating to bot visits?
Does SEOmoz plan to report on bot visits for each URL in the future, such as when they are spidered and when they appear in, for example, Google's index?
Moz Pro | NeilTompkins
-
Why is Roger crawling pages that are disallowed in my robots.txt file?
I have specified the following in my robots.txt file: Disallow: /catalog/product_compare/ Yet Roger is crawling these pages, giving 1,357 errors. Is this a bug, or am I missing something in my robots.txt file? Here's one of the URLs that Roger pulled: example.com/catalog/product_compare/add/product/19241/uenc/aHR0cDovL2ZyZXNocHJvZHVjZWNsb3RoZXMuY29tL3RvcHMvYWxsLXRvcHM_cD02/ Please let me know if my problem is in robots.txt or if Roger spaced this one. Thanks!
Moz Pro | MeltButterySpread
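As a quick check on the rule in that question: Python's standard-library parser agrees that Disallow is a prefix match and should cover that deep product_compare URL (a sketch; a real crawl would fetch the live robots.txt, and the URL is shortened here):

from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /catalog/product_compare/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Disallow matches by prefix, so every URL under /catalog/product_compare/
# is covered, including the long compare/add URLs Roger reported.
url = "http://example.com/catalog/product_compare/add/product/19241/"
print(parser.can_fetch("rogerbot", url))  # False -> the rule does block it

So if Roger is still hitting those URLs, the likely culprits are a robots.txt that isn't reachable at the site root, or a group that doesn't apply to rogerbot, rather than the Disallow pattern itself.
-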
Crawl test. Bot crawled only 200 or so links when it should have crawled thousands
Hi everyone, I just received my crawl test report and it's only given me 200 or so URLs when my site has thousands. Any thoughts?
Moz Pro | Ev84
-
SEOmoz Bot indexing JSON as content
Hello, We have a bunch of pages that contain local JSON we use to display a slideshow. This JSON has a bunch of <a> links in it. For some reason, these links in the JSON are being indexed and recognized by the SEOmoz bot, showing up as legit links for the page. One example page this is happening on is: http://www.trendhunter.com/trends/a2591-simplifies-product-logos . Searching for the string '<a' yields 1100+ results (all of which are recognized as links for that page in SEOmoz); however, ~980 of these are JSON code and not actual links on the page. This leads to a lot of invalid links for our site and a super inflated count of on-page links for the page. Is this a bug in the SEOmoz bot? And if not, does Google work the same way?
Moz Pro | trendhunter-159837
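To illustrate what that question describes (a sketch, not SEOmoz's actual crawler logic): a naive substring scan over-counts links that live inside inline JSON, while a real HTML parser treats script content as raw data:

import re
from html.parser import HTMLParser

page = """
<html><body>
<a href="/real-link">a real link</a>
<script>var slides = {"html": "<a href=\\"/json-link\\">not real</a>"};</script>
</body></html>
"""

# Naive scan: counts every '<a' substring, including the one inside the JSON.
print(len(re.findall(r"<a\s", page)))  # 2

# Proper parse: <script> content is data, so no start tag fires inside it.
class LinkCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = 0
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += 1

counter = LinkCounter()
counter.feed(page)
print(counter.links)  # 1
-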
SEOmoz Spider/Bot Details
Hi All. Our website identifies a list of search engine spiders so that it does not show them session IDs when they come to crawl, preventing the search engines from thinking there is duplicate content all over the place. The SEOmoz crawl has brought over 20k crawl errors onto the dashboard due to session IDs. Could someone please give the details for the SEOmoz bot so that we can add it to the list on the website, so when it does come to crawl it won't be shown session IDs and generate all these crawl errors. Thanks
Moz Pro | blagger
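On that last question: the usual approach is to match on the crawler's User-Agent header rather than its IP addresses. A minimal sketch, assuming (as Moz documents) that Rogerbot identifies itself with the string "rogerbot" somewhere in its User-Agent:

KNOWN_CRAWLERS = ("googlebot", "bingbot", "rogerbot")  # assumed UA substrings

def should_append_session_id(user_agent: str) -> bool:
    """Return False for known crawlers so the URLs they see stay session-free."""
    ua = (user_agent or "").lower()
    return not any(bot in ua for bot in KNOWN_CRAWLERS)

# The Rogerbot UA below is illustrative, not the exact current string.
print(should_append_session_id("rogerbot/1.2 (+http://moz.com/help)"))  # False
print(should_append_session_id("Mozilla/5.0 (Windows NT 10.0)"))        # True

A cleaner long-term fix is to keep session IDs out of URLs entirely (cookies plus canonical URLs), since any crawler not on the list will still see the duplicate-content variants.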