Using Meta Header vs Robots.txt
-
Hey Mozzers,
I am working on a site that has search-friendly parameters for its faceted navigation; however, this makes it difficult to identify the parameters in a robots.txt file. I know that using the robots.txt file is highly recommended and powerful, but I am not sure how to do this when facets use common words such as sizes.
For example, a filtered URL may look like www.website.com/category/brand/small.html, where brand and size are both facets. Brand is a great filter, and size is very relevant for shoppers, but many products also include "small" in the URL, so it is tough to isolate that filter in the robots.txt. (I hope that makes sense.)
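To illustrate (a hypothetical pattern; Google does support the * wildcard in robots.txt), a rule aimed at the size facet would be far too blunt:

User-agent: *
# This catches /category/brand/small.html, but it also blocks
# hypothetical product pages like /category/brand/small-widget.html
Disallow: /*small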
I am able to identify problematic pages and edit the meta head, so I can add a noindex meta tag on any page that is causing these duplicate issues. My question is: is this a good idea? I want bots to crawl the facets, but indexing all of the facets causes duplicate issues.
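For clarity, the tag I have in mind is the standard meta robots element, placed in the head of each problem page:

<!-- keeps the page out of the index but lets crawlers follow its links -->
<meta name="robots" content="noindex, follow">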
Thoughts?
-
"there is no penalty for have duplicates of your own content"
Alan,
I must respectfully disagree with this statement. Perhaps Google will not penalize you directly, but it is easy to self-cannibalize key terms if one has many facets that differ only slightly. I have seen this on a site that doesn't rank on the first page but has 3-4 pages on the second page of the SERPs. This is the exact issue that I am trying to resolve.
Evan
P.S. Sorry, I hit the wrong button, but you got a good answer out of it.
-
Hey Craig,
I agree with you regarding the robots.txt; however, how does one isolate parameters that are commonly used within product names, and thus appear in the product URL as well? We are using a plugin that makes the URLs more user-friendly, but it makes it tough to isolate "small" or "blue" because the parameters no longer include a "?sort=" or "color=" prefix.
This is why I am considering using the meta header to help control the duplicate content and crawl allowance issues.
Since I can edit the meta headers on a variety of pages, is it a viable option to use NOINDEX,FOLLOW?
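For reference, the same directive can also be delivered as an HTTP response header rather than a meta tag, which may be an option where page markup is hard to edit:

X-Robots-Tag: noindex, follow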
-
As mentioned initially, the CMS doesn't allow me to edit canonicals for individual pages that are created via facets. The other question I have about canonicals is that a rel=canonical is meant to help bots understand that different variations of the same page are, in fact, the same page: example.com = example.com/. But for the user (whom bots ultimately care about), example.com/sony/50 may not always be the same as example.com/sony.
Anyway, that is a little beside the point. I have looked into the option of canonicals, but I am not sure it can be done.
-
This sounds like a job for a canonical tag.
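On each thin faceted page, that would be a single line in the head pointing at the preferred URL; a minimal sketch using the example URLs from this thread:

<!-- placed on example.com/sony/50, assuming example.com/sony is the preferred page -->
<link rel="canonical" href="http://example.com/sony">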
-
Hey Craig,
Thanks for your response. This is the common answer that I have found. Here is the challenge I am having (I will use your example above):
Let's say that example.com/tv/sony is the main category page for this brand, but I only carry a few Sony TVs. Therefore, the only difference between that page and example.com/tv/sony/50 is a category description that disappears when further facets are chosen.
When I search the SERPs for "Sony TVs", rather than one of these pages ranking well, both rank moderately well, but not well enough for first-page results, and I would think it is confusing for customers as well to find two very closely related pages side by side.
So, while I agree that robots.txt is a tool I can use to keep search engines from getting dizzy in the facets by limiting them to (say) 4, is NOINDEX the best solution for controlling duplicate content issues that are not that deep and are more case-by-case?
One more thing I might add is that these issues don't happen site-wide. If I carry many products from Samsung, then example.com/tv/samsung, example.com/tv/samsung/50, and even example.com/tv/samsung/50/HD will produce very different results. The issue usually occurs where there are few products for a brand, so many facet combinations filter down to the same results.
Does that make sense? I agree with you wholeheartedly; I am just trying to figure out how to deal with the shallow duplicate issues.
Cheers,
-
"they will be linked to by internal links"
There is no penalty for having duplicates of your own content, but having links pouring away link juice is a self-imposed penalty.
-
Hi Alan, I understand that, but the problem Evan is describing seems to be related to duplicate content and crawl allowance. There's no perfect answer, but in my experience the types of pages Evan is describing aren't often linked to. Taking that into consideration, IMO robots.txt is the correct solution.
-
The problem with robots.txt is that any link pointing to a blocked page passes link juice that will never be returned; it is wasted. robots.txt is the last resort; IMO it should never be used.
-
Hi Evan, this is quite a common problem. There are a couple of things to consider when deciding whether noindex is the solution rather than robots.txt.
Unless there is a reason the pages need to be crawled (like there are pages on the site that are only linked to from those pages), I would use robots.txt. Noindex doesn't stop search engines crawling those pages, only from putting them in the index. So in theory, search engines could spend all their time crawling pages that you don't want in the index.
Here's what I'd do:
Decide on a reasonable number of facets. For example, if you're selling TVs, people might search for:
- Sony TV (Brand search)
- 50 inch sony tv (size + brand)
- Sony 50 inch HD TV (brand + size + specification)
But anything past 3 facets tends to get very little search volume (do keyword research for your own market).
In this case I'd create a rule that appends something to the URL after 3 facets that would make it easy to block in robots.txt. For example, I might make my structure:
- example.com/tv/sony/50/HD
But as soon as I add a 4th facet, for example colour, I add in the /filter/ subfolder:
- example.com/filter/tv/sony/50/HD/white
I can then easily block all these pages in robots.txt using:
Disallow: /filter/
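The complete file could be as minimal as this sketch (the /filter/ prefix is just the convention from this example):

User-agent: *
Disallow: /filter/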
I hope this helps.
-
It is a problem in the SERPs because if I run a query for the brand, I can see a faceted variation of that brand (say "brand" + "blue") ranking right below it, but neither of them is ranking on the first page. I won't NOINDEX all pages, just those that don't provide value for customers searching and those that are competing on key terms and causing the preferred page to rank lower.
It was brought to my attention through Moz Analytics, and once I began to investigate further, I found many sources mentioning that this is very common for e-commerce. Common practice is robots.txt plus a plugin, but we are using a different plugin. So, for this reason, I am trying to figure out whether NOINDEX meta headers are a good option.
Does that make sense?
-
I'm not sure you have a problem. Why not let them all get indexed?
-
Hey Alan,
Again, thank you for your feedback. Unfortunately, rel=prev/next is not relevant in this circumstance. Also, it is all unique content on my client's own site, and I know that it is a duplicate content problem because I have 2 similar pages with slightly different facets ranking 14 and 15 in the SERPs. If search engines were choosing one over the other, they would not rank them back to back.
For clarification, this is an e-commerce application with faceted navigation, not a pagination issue.
Thanks for your input.
-
I would look at canonical and rel=prev/next (examples below).
Also, I would first establish: do you have a problem?
Duplicate content is not always a problem. If it is duplicate content on your own site, then there is not a lot to worry about; Google will rank just one page. There is no penalty for DC itself. If you are screen scraping, then you may have a problem.
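For reference, rel=prev/next is a pair of link elements in the head of each page in a paginated series; a minimal sketch with hypothetical URLs:

<!-- on page 2 of a paginated series -->
<link rel="prev" href="http://example.com/tv/sony?page=1">
<link rel="next" href="http://example.com/tv/sony?page=3">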
-
Hey Alan,
Thanks for your feedback. I guess I am not sure what other solutions there are for this circumstance. The CMS doesn't allow me to use rel=canonicals for individual pages with facets, and I definitely don't think 301s are the way to go. I figured NOINDEX,FOLLOW is best because it keeps bots out of confusing duplicate content but can still take advantage of some link juice. Mind you, these are faceted pages, not top-level pages.
Thoughts?
-
robots.txt is a bad way to do things, because any link pointing to a blocked page wastes its link juice. Using noindex,follow is a better way, as it allows the links to be followed and link juice to return to your indexed pages.
But best not to noindex at all, and find another solution if possible.