Does Rogerbot respect wildcards in the robots.txt file?
-
Hi All,
Our robots.txt file has wildcards in it, which Googlebot recognizes. Can anyone tell me whether Rogerbot also recognizes wildcards in the robots.txt file?
We've run a Rogerbot site crawl since updating the robots.txt file, and the pages disallowed by the wildcard rules are still showing.
BTW, Googlebot is not crawling these pages according to Webmaster Tools.
Thanks in advance,
Robert
-
Thanks! RogerBot is now working. Perhaps it had a cached copy of the old robots.txt file. All is well now.
Thank you!
-
Yes, Rogerbot follows the robots exclusion protocol - http://www.seomoz.org/dp/rogerbot
-
Roger should obey wildcards. It sounds like he's not, so could you tattle on him to the help team and they'll see why he's not following directions? http://www.seomoz.org/help Thanks!
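In robots.txt terms, '*' matches any run of characters and '$' anchors the end of the URL - that's how Google documents it, and Roger should treat it the same way. Here's a rough sketch of that matching logic in Python, using a made-up rule and made-up paths since the original robots.txt isn't posted here:

import re

def wildcard_rule_matches(rule, path):
    # Translate the robots.txt wildcard syntax into a regex:
    # '*' matches any run of characters, '$' anchors the end of the path.
    pattern = re.escape(rule).replace(r"\*", ".*").replace(r"\$", "$")
    return re.match(pattern, path) is not None

# Made-up rule and paths, purely for illustration.
rule = "/search/*?sort="
for path in ["/search/widgets?sort=price", "/search/widgets"]:
    print(path, "->", "blocked" if wildcard_rule_matches(rule, path) else "allowed")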
Related Questions
-
Problems with CSV files from OSE
Hello Support, I'm having problems with the formatting of the CSV files from OSE in Excel. I get lines that contain only "--", and these lines break up the data. It's possible to correct this manually, but that's a bit annoying when the file has 1,500+ links. I work a lot with CSV files from other tools and programs and those give me no problems. Can you help me out, please? Greetings, Rob
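A rough stopgap while waiting on support, not an official fix: strip the rows that contain nothing but dashes before opening the export in Excel. The file names below are placeholders.

import csv

# Drop rows whose cells are empty or contain only dashes, then write a clean copy.
with open("ose_export.csv", newline="", encoding="utf-8") as src, \
     open("ose_export_clean.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        if all(cell.strip().strip("-") == "" for cell in row):
            continue  # skip the "--" separator rows that break up the data
        writer.writerow(row)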
Moz Pro | FindFactory
-
Rogerbot's crawl behaviour vs. Google spiders and other crawlers - disparate results have me confused.
I'm curious as to how accurately Rogerbot replicates Googlebot. I've currently got a site which is reporting over 200 pages of duplicate titles/content in the Moz tools. The pages in question are all session IDs and have been blocked in the robots.txt (about 3 weeks ago), however the errors are still appearing. I've also crawled the site using the Screaming Frog SEO Spider. According to Screaming Frog, the offending pages have been blocked and are not being crawled. Webmaster Tools is also reporting no crawl errors. Is there something I'm missing here? Why would I receive such different results? Which ones should I trust? Does Rogerbot ignore robots.txt? Any suggestions would be appreciated.
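A quick way to see what each crawler is actually being told is to fetch the live robots.txt and print the rules per user agent. This is only a rough sketch - the site URL is a placeholder, and it ignores Allow lines and multi-agent groups:

from urllib.request import urlopen

# Fetch the live robots.txt and print the Disallow lines that apply to each bot.
# "www.example.com" is a placeholder - use the real domain.
robots = urlopen("https://www.example.com/robots.txt").read().decode("utf-8", "replace")
agent = None
for line in robots.splitlines():
    line = line.split("#", 1)[0].strip()
    if line.lower().startswith("user-agent:"):
        agent = line.split(":", 1)[1].strip().lower()
    elif line.lower().startswith("disallow:") and agent in ("*", "rogerbot", "googlebot"):
        print(agent, "->", line)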
Moz Pro | KJDMedia
-
Why am I not getting my allowance of 10,000 inbound links in the CSV download file? 370 out of 4,700??
Hi, I'm desperately trying to audit my backlinks to remove a Penguin penalty on my site, livefit.co.uk. When I run the inbound links report I'm not getting all the links in the download. I know there is a limit of 25 links from each linking site so we get the full picture of links, but: I have 4,700 links, so why does it need to limit it when we are supposed to see up to 10,000? When you check the link profile on the report there don't seem to be many sites with anything close to 25, so surely that rule is invalid as an explanation here? Should I just work off OSE? But there is less useful info there than in the CSV. I'd be very grateful for your thoughts. Thanks! James
Moz Pro | LiveFit
-
Do the SEOmoz Campaign Reports follow robots.txt?
Hello, Do the SEOmoz Campaign Reports (that track errors and warnings for a website) follow the rules I write in the robots.txt file? I've done all that I can to fix the legitimate errors with my website, as reported by the fabulous SEOmoz tools. I want to clean up my pages indexed with the search engines, so I've written a few rules to exclude content from WordPress tag URLs, for instance. Will my campaign report errors and warnings also drop as a result of this?
Moz Pro | Flexcin
-
In Open Site Explorer is it possible to use wildcards?
If I have a section on my website called lists with articles in it, can I use wildcards in Open Site Explorer to find how many backlinks all articles in that section have - and ideally which pages are most linked to? Something like www.example.com/lists/* to give the number of backlinks to all articles in that website section and which are the most highly linked to. Would be a great feature to have! Cheers, Siimon
Moz Pro | SimonCh
-
Broken CSV files?
Hi, I have been downloading files from SEOmoz for quite a while and I keep getting broken CSV files. On my Mac, I wrote a script that cleans them up, but I don't know how to do this on the Windows side. The problem is that some of the record lines get broken up, mainly in the "Anchor Text" field (maybe the anchor text in question contained a return). This of course messes up my file when I open it in Excel. Is there a way to get/open these files without having them messed up like this? P.S. I am also seeing problems with text encoding; I would like to see the anchor text as it is displayed on the referring site rather than its HTML/Unicode/UTF-8 code value. TIA, Michel
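A rough workaround sketch (the file name is a placeholder): Python's csv module keeps quoted fields together even when they contain line breaks, and html.unescape() turns entity codes back into readable anchor text.

import csv
import html

with open("seomoz_links.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        # csv keeps returns inside quoted "Anchor Text" fields intact; unescape fixes entities.
        print([html.unescape(cell) for cell in row])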
Moz Pro | analysestc
-
Robots review
Anything in this that would have caused Rogerbot to stop indexing my site? It only saw 34 of the 5,000+ pages on the last pass. It had no problems seeing the whole site before.

User-agent: Rogerbot
Disallow: /default.aspx?*
// Keep from crawling the CMS URLs (default.aspx?Tabid=234). The real home page is home.aspx
Disallow: /ctl/
// Keep from indexing the admin controls
Disallow: ArticleAdmin
// Keep from indexing the article admin page
Disallow: articleadmin
// Same in lower case
Disallow: /images/
// Keep from indexing CMS images
Disallow: captcha
// Keep from indexing the captcha image, which appears to the crawler as a page

General rules, lacking wildcards:
User-agent: *
Disallow: /default.aspx
Disallow: /images/
Disallow: /DesktopModules/DnnForge - NewsArticles/Controls/ImageChallenge.captcha.aspx

Moz Pro | sprynewmedia
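Not a statement of how Rogerbot actually parses the file - just a rough way to sanity-check which of the Rogerbot rules above a given URL would trip, treating each rule as a path prefix with '*' expanded to a wildcard (the sample paths are made up):

import re

rogerbot_rules = ["/default.aspx?*", "/ctl/", "ArticleAdmin", "articleadmin", "/images/", "captcha"]

def tripped_rules(path):
    # Prefix match against the URL path, with '*' expanded to a wildcard.
    # Note: rules without a leading slash never prefix-match paths that start with '/'.
    return [r for r in rogerbot_rules
            if re.match(re.escape(r).replace(r"\*", ".*"), path)]

for path in ["/home.aspx", "/default.aspx?Tabid=234", "/articles/some-article.aspx"]:
    print(path, "->", tripped_rules(path) or "not blocked")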