Robots.txt vs. meta noindex, follow
-
Hi guys,
I wonder what your opinion is concerning exclusion via the robots.txt file.
Do you advise continuing to use this? For example:
User-agent: *
Disallow: /sale/*
Disallow: /cart/*
Disallow: /search/
Disallow: /account/
Disallow: /wishlist/*
Or do you prefer using the meta tag 'noindex, follow' instead?
I keep hearing different suggestions.
I'm just curious what your opinion / suggestion is.
Regards,
Tom Vledder
-
Hi Tom,
I agree with Martijn that it depends. For example, robots.txt is generally the first port of call for bots, as it allows them to understand where you want them to spend their finite time crawling your site. You can also give direction to all bots at once or specify a subset. It is generally the best option for blocking pages such as your /cart/ etc., where they don't need crawling.
The problem with robots.txt is that it doesn't always keep pages from being indexed, especially if there are other external sources linking to the pages in question.
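(As an aside, if you want to sanity-check which URLs a robots.txt actually blocks from crawling, here is a minimal sketch using Python's standard-library robotparser. One caveat: it follows the original robots.txt spec and treats Google's * wildcard literally, so the wildcard rules from the example above are left out.)

from urllib import robotparser

# Minimal sketch: test which paths a given set of rules blocks from crawling.
rules = """
User-agent: *
Disallow: /search/
Disallow: /account/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

for url in ("http://example.com/search/shoes",
            "http://example.com/account/",
            "http://example.com/sale/summer"):
    print(url, "crawlable:", rp.can_fetch("*", url))

(And remember this only tells you about crawling; as above, a blocked URL can still end up indexed if enough external links point at it.)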
The meta tag noindex, on the other hand, can be applied to individual pages, and it actually commands the robots NOT to index the relevant page in the SERPs. Use this option if you have pages you don't want appearing in Google (or other search engines) but the page may still be relevant for authority or able to acquire links (make sure to use 'noindex, follow'), as you still want the robots to crawl the page. Otherwise use 'noindex, nofollow'. Hope that this helps.
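(For reference, the two variants would look like this in the page head:)

<!-- kept out of the index, but its links are still followed -->
<meta name="robots" content="noindex, follow">

<!-- kept out of the index, and its links are not followed -->
<meta name="robots" content="noindex, nofollow">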
-
Hi Tom,
It depends. For /sale/ I would make an exception, to make sure that any sale pages there can still be indexed. But for the other pages, I wouldn't want a search engine to waste any crawl budget by looking at them in the first place. That's why I would go with a robots.txt implementation there instead of META robots, since with META robots they'll still visit the page just to figure out that they won't index it.
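(A minimal sketch of that split, with /sale/ left crawlable and handled with 'noindex, follow' where needed. Note that the trailing * in Tom's example is redundant for Google, since Disallow rules already match by prefix:)

User-agent: *
Disallow: /cart/
Disallow: /search/
Disallow: /account/
Disallow: /wishlist/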
Related Questions
-
Utilizing one robots.txt for two sites
I have two sites that are hosted in the same CMS. Rather than having two separate robots.txt files (one for each domain), my web agency has created one which lists the sitemaps for both sites, like this:
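(The example itself is cut off here, but a shared robots.txt of the shape described, with hypothetical domains, would presumably look something like:)

User-agent: *
Disallow:
Sitemap: https://www.site-one.example/sitemap.xml
Sitemap: https://www.site-two.example/sitemap.xml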
Technical SEO | eulabrant
-
Unfamiliar Meta Description Tags
I'm working with a client who uses a CMS which loads meta tags into their site through its backend. On-page I see this in the source:
Technical SEO | medtouch
-
Noindex Success?
Has anyone had success implementing noindex/follow on pages of a site that has been hit by a Panda penalty? Our site has a lot of duplicate content in product descriptions that we had permission to use from our distributor (who is also online). We went ahead and applied noindex/follow to those pages in the hopes that Google will focus on the products we carry that do have original descriptions (about 1/3 of our products). We didn't want to just remove those products, since they are actually beneficial to our customers. Most of the duplicated content is in the form of ingredient lists.
Technical SEO | dustyabe
-
Blocking Affiliate Links via robots.txt
Hi, I work with a client who has a large affiliate network pointing to their domain, which is a large part of their inbound marketing strategy. All of these links point to a subdomain at affiliates.example.com, which then sends them through a 301 redirect to the relevant target page for the link. These links have been showing up in Webmaster Tools as top linking domains and also in the latest downloaded links reports.
To follow guidelines and ensure that these links aren't counted by Google for either positive or negative impact on the site, we have added a block to the robots.txt of the affiliates.example.com subdomain, blocking search engines from crawling the full subdomain. The robots.txt file is the following code:
User-agent: *
Disallow: /
We have authenticated the subdomain with Google Webmaster Tools and made certain that Google can reach and read the robots.txt file. We know they are being blocked from reading the affiliates subdomain. However, we added this block to the robots.txt a few weeks ago, but links are still showing up in the latest downloads report as first being discovered after we added the block. It's been a few weeks already, and we want to make sure that the block was implemented properly and that these links aren't being used to negatively impact the site.
Any suggestions or clarification would be helpful. If the subdomain is being blocked for the search engines, why are the search engines following the links and reporting them in the www.example.com GWMT account as latest links? And if the block is implemented properly, will the total number of links pointing to our site, as reported in the 'links to your site' section, be reduced, or does this not have an impact on that figure?
From a development standpoint, it's a much easier fix for us to adjust the robots.txt file than to change the affiliate linking connection from a 301 to a 302, which is why we decided to go with this option.
Any help you can offer will be greatly appreciated.
Thanks,
Mark
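(A quick way to verify the setup described above: a minimal Python sketch, using the placeholder names from the question, that checks what the subdomain's robots.txt serves and confirms the affiliate URLs still answer with a 301.)

import http.client

# Placeholders from the question, not real endpoints.
conn = http.client.HTTPSConnection("affiliates.example.com")

# 1) Confirm the robots.txt really serves the blanket Disallow.
conn.request("GET", "/robots.txt")
resp = conn.getresponse()
print(resp.read().decode())

# 2) Confirm an affiliate URL still answers with a 301 (not a 302).
conn.request("HEAD", "/some-affiliate-link")
resp = conn.getresponse()
print(resp.status, resp.getheader("Location"))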
Technical SEO | Mark_Ginsberg
-
Staging & development areas should not be indexable (i.e. noindex/nofollow in meta robots etc.)
Hi, I take it that if there's a staging or development area on a subdomain for a site, whose content is hence usually duplicate, then it should not be indexable, i.e. noindexed & nofollowed in meta robots? That would prevent duplicate content problems, as well as stop people outside the project from seeing work in progress or finding it accidentally in search engine listings. Also, if there's no such info in the meta robots, is there any other way it may have been made non-indexable, or at least had the duplicate content problem removed, such as canonicalising the page to the equivalent page on the live site? In the case in question, I am finding it listed in the SERPs when I search for the staging/dev area URL, so I presume this needs urgent attention? Cheers, Dan
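(For the canonicalising option mentioned above, each staging page would point at its live equivalent with something like this, where www.example.com stands in for the live domain:)

<link rel="canonical" href="https://www.example.com/equivalent-live-page/">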
Technical SEO | Dan-Lawrence
-
CNAME vs 301 redirect
Hi all,
Recently I created a website for a new client and my next job is trying to get them higher in Google. I added them in OSE and noticed some strange backlinks. To my surprise the client has about 20 domain names, all automatically pointing to (showing) the same new main site now:
www.maindomain.nl
www.maindomain.be
www.maindomain.eu
www.maindomain.com
www.otherdomain.nl
www.otherdomain.com
...
Some of these domains have backlinks too (but not that many). I suggested 301 redirecting them all to the main site, just to avoid duplicate content. But now the webhoster comes into play: "It's a problem, the client has only 1 hosting account, blablabla...". They told me they could CNAME the 20 domains to the main domain, or A-record them to an IP address. This is too technical for me, so my concrete questions are: Is it smart to do anything at all, or am I just harming my client? The main site is ranking pretty well now, and some backlinks are from their copy sites (probably because everywhere the logo links to the full main site URL). Does the CNAME or A-record solution have the same effect as a 301 redirect, from an SEO perspective?
Many thanks,
Hans
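(A hedged sketch of the 301 option in Apache, assuming mod_rewrite is available and using the placeholder domains from the question. Note that a CNAME or A record only changes DNS resolution: every hostname still serves the same content rather than redirecting, which is exactly the duplicate-content situation described above.)

RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.maindomain\.nl$ [NC]
RewriteRule ^(.*)$ https://www.maindomain.nl/$1 [R=301,L]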
Technical SEO | Houdoe
-
NoIndex user generated pages?
Hi, I have a site, downorisitjustme (dot) com. It has over 30,000 pages in Google which have been generated by people searching to check whether a specific site is working or not, and then possibly adding a link on a message board to the deep link of the results page or something, which is why the pages have been picked up. Am I best to noindex the res.php page, where all the auto-generated content is showing up, and just have the main static pages as the only ones available to be indexed?
Technical SEO | Wardy
-
How do I use the Robots.txt "disallow" command properly for folders I don't want indexed?
Today's sitemap webinar made me think about the disallow feature; it seems like the opposite of sitemaps, but it also seems both are kind of ignored in varying ways by the engines. I don't need help semantically, I got that part. I just can't seem to find a contemporary answer about what should be blocked using the robots.txt file. For example, I have folders containing site comps for clients that I really don't want showing up in the SERPs. Is it better to not have these folders on the domain at all? There are also security issues I've heard of that make sense: simply look at a site's robots file to see what they are hiding, and it becomes easier to hunt for files once you know the directory they are contained in. Do I need to concern myself with this? Another example is a folder I have for my XML sitemap generator. I imagine Google isn't going to try to index this or count it as content, so do I need to add folders like this to the disallow list?
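(A hedged sketch of that trade-off, with hypothetical folder names. Keep in mind that robots.txt is publicly readable, so listing a sensitive folder also advertises where it lives; password protection, or keeping such files off the public server entirely, avoids that.)

User-agent: *
# Low-stakes housekeeping folders are fine to list:
Disallow: /sitemap-generator/
# But a line like "Disallow: /client-comps/" would tell anyone reading this
# file exactly where the hidden comps are.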
Technical SEO | SpringMountain