Robots.txt questions...
-
All,
My site is rather complicated, but I will try to break down my question as simply as possible.
I have a robots.txt document at the root level of my site to disallow robot access to /_system/, my CMS. It looks like this:
# /robots.txt file for http://webcrawler.com/
# mail webmaster@webcrawler.com for constructive criticism
User-agent: *
Disallow: /_system/
I have another robots.txt file one level down, in my holiday database - www.mysite.com/holiday-database/ - to disallow access to /holiday-database/ControlPanel/, my database CMS. It looks like this:
User-agent: *
Disallow: /ControlPanel/
Am I correct in thinking that this file must also be at the root level, and not in the /holiday-database/ level? If so, should my new robots.txt file look like this:
# /robots.txt file for http://webcrawler.com/
# mail webmaster@webcrawler.com for constructive criticism
User-agent: *
Disallow: /_system/
Disallow: /holiday-database/ControlPanel/
Or like this:
# /robots.txt file for http://webcrawler.com/
# mail webmaster@webcrawler.com for constructive criticism
User-agent: *
Disallow: /_system/
Disallow: /ControlPanel/
Thanks in advance.
Matt
-
Good answer, Yannick.
Here are some resources:
http://www.free-seo-news.com/all-about-robots-txt.htm
http://www.robotstxt.org/robotstxt.html
Good luck
-
Cheers gents.
-
Like this:
# /robots.txt file for http://webcrawler.com/
# mail webmaster@webcrawler.com for constructive criticism
User-agent: *
Disallow: /_system/
Disallow: /holiday-database/ControlPanel/
Search engines typically only look in the root of your domain for robots.txt and sitemap.xml files.
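If you want to sanity-check the combined file before uploading it, you can parse it locally. Here is a minimal sketch using Python's standard-library urllib.robotparser - the www.mysite.com URLs are just the paths from the question:

import urllib.robotparser

# The combined root-level rules from the first option above.
rules = [
    "User-agent: *",
    "Disallow: /_system/",
    "Disallow: /holiday-database/ControlPanel/",
]

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules)

# can_fetch(user_agent, url) returns True if a crawler honouring
# these rules is allowed to fetch the URL.
print(parser.can_fetch("*", "http://www.mysite.com/_system/"))                        # False
print(parser.can_fetch("*", "http://www.mysite.com/holiday-database/"))               # True
print(parser.can_fetch("*", "http://www.mysite.com/holiday-database/ControlPanel/"))  # False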
-
Hey Matt
The first of your options is right: Google and other engines look for the robots.txt file in the site root rather than in each directory.
If you have a reason for not wanting that info in the root robots.txt file, you can always use the robots meta tag on the pages in a given directory, as in the example below.
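For reference, that tag goes in the <head> of each page you want kept out of the index; a typical example (a sketch - adjust the values to your needs) looks like:

<meta name="robots" content="noindex, nofollow">

Here noindex asks engines not to index the page and nofollow asks them not to follow the links on it.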
A few useful links:
Robots.txt
http://www.google.com/support/webmasters/bin/answer.py?answer=156449&&hl=en
Robots Meta Tag
http://www.google.com/support/webmasters/bin/answer.py?answer=93710
Marcus