Does Rogerbot respect the robots.txt file for wildcards?
-
Hi All,
Our robots.txt file has wildcards in it, which Googlebot recognizes. Can anyone tell me whether or not Rogerbot recognizes wildcards in the robots.txt file?
We've run a Rogerbot site crawl since updating the robots.txt file, and the pages disallowed via the wildcards are still showing.
BTW, Googlebot is not crawling these pages according to Webmaster Tools.
Thanks in advance,
Robert
-
Thanks! Rogerbot is now working. Perhaps it had a cached copy of the old robots.txt file. All is well now.
Thank you!
-
Yes, Rogerbot follows the robots exclusion protocol - http://www.seomoz.org/dp/rogerbot
-
Roger should obey wildcards. It sounds like he's not, so could you tattle on him to the help team and they'll see why he's not following directions? http://www.seomoz.org/help Thanks!
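As a rough illustration of how crawlers that support the wildcard extensions (Googlebot, and per the answers above Rogerbot) generally interpret a Disallow pattern — `*` matches any run of characters and `$` anchors the end of the URL — here is a minimal sketch. The function name and example patterns are ours for illustration, not Moz's code:

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Check a URL path against a robots.txt Disallow pattern using the
    common wildcard extensions: '*' matches any character run, and a
    trailing '$' anchors the match to the end of the URL."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Translate the pattern into a regex, escaping everything except '*'.
    regex = "^" + "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    if anchored:
        regex += "$"
    return re.match(regex, path) is not None

# A rule like "Disallow: /*?sessionid=" blocks any URL containing that query key:
robots_pattern_matches("/*?sessionid=", "/products/shoes?sessionid=abc")  # True
# A rule like "Disallow: /*.pdf$" blocks URLs that *end* in .pdf:
robots_pattern_matches("/*.pdf$", "/files/guide.pdf")                     # True
robots_pattern_matches("/*.pdf$", "/files/guide.pdf?download=1")          # False
```

If pages disallowed this way still show up in a crawl, a stale cached copy of robots.txt (as the original poster found) is a common culprit.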
Related Questions
-
Meta Robots query
Hi guys, I was ranking really well on my home page for certain keywords, but those rankings have dropped pretty dramatically over the last 3-4 weeks. I think the issue dates from the configuration of the Yoast SEO WordPress plugin. In March (when my rankings were strong) my crawl test showed the top data in the attached image, and in May (now that the rankings have dropped severely) it shows the bottom data. I don't fully understand canonical and meta robots tags, so I am hoping someone can shed some light on the following points. 1. Will the change result in my loss of rankings? 2. How can I put it back to how it was in March? PS. I haven't had any Google penalties. Thanks, Joshua
Moz Pro | | RocketStats
-
Allow only Rogerbot, not googlebot nor undesired access
I'm in the middle of site development and wanted to start crawling my site with Rogerbot, while keeping Googlebot and similar crawlers out. My site is currently protected with a login (basic Joomla offline site, user and password required), so I thought a good solution would be to remove that limitation and use .htaccess to password-protect it for all users except Rogerbot. Reading here and there, it seems that practice is not recommended, as it could lead to security holes: any other user could see the allowed agents and emulate them. OK, maybe you need to be a hacker/cracker (or an experienced developer) to get that info, but I was not able to find clear information on how to proceed in a secure way. The other option was to continue using Joomla's access limitation for everyone except Rogerbot; I'm still not sure how feasible that would be. Mostly, my question is: how do you work on your site before you want it indexed by Google or similar, whether or not you use a CMS? Is there some other way to do this? I would love to have my site ready and crawled before launching it, and avoid fixing issues afterwards. Thanks in advance.
Moz Pro | | MilosMilcom
-
Do the SEOmoz Campaign Reports follow Robots.txt?
Hello, Do the SEOmoz Campaign Reports (that track errors and warnings for a website) follow rules I write in the robots.txt file? I've done all that I can to fix the legitimate errors with my website, as reported by the fabulous SEOmoz tools. I want to clean up my pages indexed with the search engines so I've written a few rules to exclude content from Wordpress tag URLs for instance. Will my campaign report errors and warnings also drop as a result of this?
Moz Pro | | Flexcin
-
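For reference, a robots.txt rule excluding WordPress tag archives like the one described above might look like the following (the `/tag/` path is the WordPress default permalink base for tags, assumed here):

```
User-agent: *
Disallow: /tag/
```

Crawlers that honor robots.txt, including Rogerbot per Moz's documentation, should then drop those URLs from subsequent crawl reports.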
In Open Site Explorer is it possible to use wildcards?
If I have a section on my website called lists, with articles in it, can I use wildcards in Open Site Explorer to find how many backlinks all articles in that section have, and ideally which pages are most linked to? Something like www.example.com/lists/* would give the number of backlinks to all articles in that website section and show which are the most highly linked to. It would be a great feature to have! Cheers, Simon
Moz Pro | | SimonCh
-
Does the SEOmoz Rogerbot crawl subdomains only by following links, or also by ID?
I'm new to SEOmoz and just set up my first campaign. After the first crawl I got quite a few 404 errors due to deleted (spammy) forum threads. I was sure there were no links to these deleted threads, so my question is whether Rogerbot crawls my subdomains only by following links, or also by IDs (the forum thread IDs are serially numbered from 1 to x). If Rogerbot does crawl serially numbered IDs, do I need to be concerned about the same 404 errors from Googlebot as well?
Moz Pro | | sauspiel
-
What is the full User Agent of Rogerbot?
What's the exact string that Rogerbot sends as its User-Agent header in the HTTP request? Does it ever differ?
Moz Pro | | rightmove
-
Is there a whitelist of the RogerBot IP Addresses?
I'm all for letting Roger crawl my site, but it's not uncommon for malicious spiders to spoof the User-Agent string. Having a whitelist of Roger's IP addresses would be immensely useful!
Moz Pro | | EricCholis
-
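Moz has not, as far as we know, published an official IP whitelist for Rogerbot, but the standard spoof-resistant alternative (the one Google documents for verifying Googlebot) is a reverse-then-forward DNS check: reverse-resolve the requesting IP, confirm the hostname belongs to the crawler's documented domain, then forward-resolve that hostname and confirm it maps back to the same IP. A minimal sketch, with injectable resolvers so the logic can be tested offline; the hostname suffix is whatever the crawler's operator documents, not a Moz-published value:

```python
import socket

def verify_crawler(ip: str, expected_suffix: str,
                   rdns=lambda ip: socket.gethostbyaddr(ip)[0],
                   fdns=lambda host: socket.gethostbyname_ex(host)[2]) -> bool:
    """Two-step crawler verification:
    1) reverse-resolve the IP and check the hostname's domain suffix,
    2) forward-resolve that hostname and confirm it maps back to the IP.
    A spoofed User-Agent fails step 1, since the attacker does not
    control the reverse DNS for their own IP."""
    try:
        hostname = rdns(ip)
    except OSError:
        return False
    return hostname.endswith(expected_suffix) and ip in fdns(hostname)
```

With real resolvers this incurs two DNS lookups per check, so results are usually cached per IP.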
The Site Explorer crawl shows errors for files/folders that do not exist.
I'm fairly certain there is ultimately something amiss on our server, but the Site Explorer report for my website (www.kpmginstitutes.com) is showing thousands of folders that do not exist. Example: for my "About Us" page (www.kpmginstitutes.com/about-us.aspx), the report shows a link: www.kpmginstitutes.com/rss/industries/404-institute/404-institute/about-us.aspx. We do have "rss", "industries", and "404-institute" folders, but they are parallel in the architecture, not sequential as indicated in the error URL. Has anyone else seen these types of errors in your Site Explorer reports?
Moz Pro | | dturkington