Robots review
-
Anything in this that would have caused Rogerbot to stop indexing my site? It only saw 34 of 5000+ pages on the last pass. It had no problems seeing the whole site before.
User-agent: Rogerbot
Disallow: /default.aspx?*
//Keep from crawling the CMS urls default.aspx?Tabid=234. Real home page is home.aspx
Disallow: /ctl/
// Keep from indexing the admin controls
Disallow: ArticleAdmin
// Keep from indexing the article admin page
Disallow: articleadmin
// Same rule in lower case
Disallow: /images/
// Keep from indexing CMS images
Disallow: captcha
// Keep from indexing the captcha image, which appears to crawlers as a page
General rules, lacking wildcards:
User-agent: *
Disallow: /default.aspx
Disallow: /images/
Disallow: /DesktopModules/DnnForge - NewsArticles/Controls/ImageChallenge.captcha.aspx
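For anyone who wants to sanity-check rules like these before the next crawl, here is a rough Python sketch that applies Googlebot-style wildcard matching (* as "match anything", $ as end-of-URL) to the Rogerbot group above. The sample paths are made up for illustration, and real crawlers may treat edge cases differently, so this is an approximation rather than a statement of what Rogerbot actually does.

import re

# Disallow patterns copied from the Rogerbot group above.
disallow_patterns = [
    "/default.aspx?*",
    "/ctl/",
    "ArticleAdmin",
    "articleadmin",
    "/images/",
    "captcha",
]

def pattern_to_regex(pattern):
    # Treat '*' as "match anything" and '$' as end-of-URL, the way
    # Googlebot documents wildcards; everything else matches literally.
    regex = ""
    for ch in pattern:
        if ch == "*":
            regex += ".*"
        elif ch == "$":
            regex += "$"
        else:
            regex += re.escape(ch)
    return re.compile(regex)

def is_blocked(path):
    # A path is blocked if any Disallow pattern matches it from the first
    # character. Patterns without a leading '/' (ArticleAdmin, captcha)
    # never match here, because request paths always start with '/';
    # individual crawlers may or may not be more lenient about that.
    return any(pattern_to_regex(p).match(path) for p in disallow_patterns)

# Illustrative paths only - substitute real ones from the site.
for path in [
    "/default.aspx?Tabid=234",
    "/home.aspx",
    "/ctl/Edit",
    "/images/logo.png",
    "/DesktopModules/DnnForge - NewsArticles/Controls/ImageChallenge.captcha.aspx",
]:
    print(path, "->", "blocked" if is_blocked(path) else "allowed")

If the ArticleAdmin and captcha rules were meant to catch those strings anywhere in the URL, patterns like /*ArticleAdmin would make that explicit for crawlers that support wildcards.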
-
Well, our crawler is supposed to respect all standard robots.txt rules, so you should be good just adding them all back in as you normally would and seeing what happens. If it doesn't go through properly, I'll ask our engineers to take a look and find out what's happening!
-
Thanks Aaron.
I will add the rules back, as I want Roger to have nearly the same experience as Google and Bing.
Is it best to add one at a time? That could take over a month to figure out what's happening. Is there an easier way to test? Perhaps something like the Google Webmaster Tools Crawler Access tool?
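One rough way to test locally, without waiting on a full crawl, is to point Python's built-in robots.txt parser at the live file and ask it about specific URLs and user agents. Keep in mind that urllib.robotparser follows the original robots.txt spec and treats * in paths literally, so it is only a reliable check for the plain prefix rules, not the wildcard ones; the domain and URLs below are placeholders.

from urllib.robotparser import RobotFileParser

# Fetch the live robots.txt and test a few URLs against it.
rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # placeholder - use the real domain
rp.read()

for url in [
    "https://www.example.com/default.aspx?Tabid=234",
    "https://www.example.com/home.aspx",
    "https://www.example.com/images/logo.png",
]:
    print(url, "->", "allowed" if rp.can_fetch("rogerbot", url) else "blocked")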
-
Hey! Sorry you didn't have a good experience with your help ticket. I talked with Chiaryn, and it sounds like there was some confusion over what you wanted removed from your crawl; the ticket mentioned that you wanted only one particular page blocked. I think she found something different in your robots.txt - the rules you outline above - so she tried to help you with that situation. Roger does honor all robots.txt parameters, so the crawl should only be limited in the way you define, though the wildcards do open you up to a lot of blockage.
It looks like you've since removed your restrictions on Roger. Chiaryn and I spoke about it, and we'll try to help with your specific site through your ticket. Hope this helps explain! If you want to re-add those parameters and then see what pages are wrongly blocked, I'd love to do that with you - just let us know when you've changed the robots.txt.
-
All URLs are rewritten to default.aspx?Tabid=123&Key=Var. None of these are publicly visible once the rewriter is active. I added the rule just to make sure the page is never accidentally exposed and indexed.
-
Could you clarify the URL structure for default.aspx and the true home page? I ask because if you add Disallow: /default.aspx?* (with the wildcard), it will disallow all pages within the /default.aspx folder structure. Just use the same rule for Rogerbot as you did in the general rules, namely Disallow: /default.aspx
Hope this helps,
Vahe
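For what it's worth, here is what the plain Disallow: /default.aspx rule does under the original robots.txt spec: matching is by prefix, so it also covers the query-string URLs, which means the wildcard version isn't strictly needed for parsers that don't support wildcards. A minimal sketch with Python's standard-library parser (the domain is a placeholder):

from urllib.robotparser import RobotFileParser

# Prefix matching: /default.aspx also blocks /default.aspx?Tabid=...
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /default.aspx",
])
print(rp.can_fetch("rogerbot", "https://www.example.com/default.aspx"))            # False (blocked)
print(rp.can_fetch("rogerbot", "https://www.example.com/default.aspx?Tabid=234"))  # False (blocked)
print(rp.can_fetch("rogerbot", "https://www.example.com/home.aspx"))               # True (allowed)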
-
Actually, I asked the help desk this question (essentially) first, and the lady said she wasn't a web developer and that I should ask the community. I was a little taken aback, frankly.
-
Can't. Default.aspx is the root of the CMS, and a redirect would take down the entire website. The rule exists only for a short period during which Google indexed the page incorrectly.
-
Hi,
If I were you, I would 301 redirect default.aspx to the real home page. Once you do that, simply remove it from the robots.txt file.
Not only would you strengthen the true home page, but you would also prevent crawl errors from occurring.
There is also a concern that people might still link to default.aspx, which might be causing search engines to index the page. This might be the reason Rogerbot has stopped crawling your site.
If that's an issue, just put a canonical tag on that URL, but still remove the robots.txt reference.
Hope this helps,
Vahe
-
Hi! If you don't get an answer from the community by Monday, send an email to help@seomoz.org and they'll look at it to see what might be the problem (they're not in on the weekends, otherwise I'd have you send them an email right away).
Thanks!
Keri
Related Questions
-
Robots.txt file issues on Shopify server
We have repeated issues with one of our ecommerce sites not being crawled. We receive the following message: Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster. Read our troubleshooting guide. Are you aware of an issue with robots.txt on the Shopify servers? It is happening at least twice a month so it is quite an issue.
Moz Pro | A_Q
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 pages with duplicate content. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same amount of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:
User-agent: dotbot
Disallow: /*numberOfStars=0
User-agent: rogerbot
Disallow: /*numberOfStars=0
My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only the pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
-
Moz campaign works around my robots.txt settings
My robots.txt file looks like this:
User-agent: *
Disallow: /*?
Disallow: /search
So, it should block (deindex) all dynamic URLs. If I check this URL in Google: site:http://www.webdesign.org/search/page-1.html?author=47, Google tells me: "A description for this result is not available because of this site's robots.txt – learn more." So far so good. Now, I ran a Moz SEO campaign and I got a bunch of duplicate page content errors. One of the links is this one: http://www.webdesign.org/search/page-1.html?author=47 (the same one I tested in Google, which told me the page is blocked by robots.txt, which is what I want). So, it makes me think that Moz campaigns check files regardless of what robots.txt says? It's my understanding that User-agent: * should forbid Rogerbot from crawling as well. Am I missing something?
Moz Pro | VinceWicks
-
Do the SEOmoz Campaign Reports follow Robots.txt?
Hello, Do the SEOmoz Campaign Reports (that track errors and warnings for a website) follow rules I write in the robots.txt file? I've done all that I can to fix the legitimate errors with my website, as reported by the fabulous SEOmoz tools. I want to clean up my pages indexed with the search engines so I've written a few rules to exclude content from Wordpress tag URLs for instance. Will my campaign report errors and warnings also drop as a result of this?
Moz Pro | Flexcin
-
Does Rogerbot respect the robots.txt file for wildcards?
Hi All, Our robots.txt file has wildcards in it, which Googlebot recognizes. Can anyone tell me whether or not Rogerbot recognizes wildcards in the robots.txt file? We've done a Rogerbot site crawl since updating the robots.txt file and the pages that are set to disallow using the wildcards are still showing. BTW, Googlebot is not crawling these pages according to Webmaster Tools. Thanks in advance, Robert
Moz Pro | AC_Pro
-
Seomoz bar: No Follow and Robots.txt
Should the Mozbar pick up "nofollow" links that are handled in robots.txt? The robots.txt blocks categories, but they still show as followed (green) links when using the Mozbar. Thanks! Holly ETA: I'm assuming that "disallow: myblog.com/category/" is comparable to the nofollow tag on category?
Moz Pro | squareplug
-
Why does SEOMoz crawler ignore robots.txt?
The SEOmoz crawler ignores robots.txt. It also "indexes" pages marked as noindex. That means it is filling up the reports with things that don't matter. Is there any way to stop it from doing that?
Moz Pro | loopyal
-
How to get rid of the message "Search Engine blocked by robots.txt"
During the Crawl Diagnostics of my website, I got the message "Search Engine blocked by robots.txt" under Most Common Errors & Warnings. Please let me know the procedure by which the SEOmoz PRO Crawler can completely crawl my website. Awaiting your reply at the earliest. Regards, Prashakth Kamath
Moz Pro | 1prashakth