Using Meta Header vs Robots.txt
-
Hey Mozzers,
I am working on a site that has search-friendly parameters for its faceted navigation; however, this makes it difficult to identify the parameters in a robots.txt file. I know that using the robots.txt file is highly recommended and powerful, but I am not sure how to do this when the facets use common words such as sizes.
For example, a filtered URL may look like www.website.com/category/brand/small.html. Brand and size are both facets. Brand is a great filter, and size is very relevant for shoppers, but many products include "small" in the URL, so it is tough to isolate that filter in the robots.txt. (I hope that makes sense.)
I am able to identify problematic pages and edit the meta head, so I can add a noindex tag on any page that is causing these duplicate issues. My question is: is this a good idea? I want bots to crawl the facets, but indexing all of the facets causes duplicate issues.
Thoughts?
-
"there is no penalty for have duplicates of your own content"
Alan,
I must respectfully disagree with this statement. Perhaps Google will not penalize you directly, but it is easy to self-cannibalize key terms if one has many facets that differ only slightly. I have seen this on a site where they don't rank on the first page, but they have 3-4 pages on the second page of the SERPs. This is the exact issue that I am trying to resolve.
Evan
P.S. Sorry, I hit the wrong button, but you got a good answer out of it.
-
Hey Craig,
I agree with you regarding the robots.txt; however, how does one isolate parameters that are commonly used within product names and thus appear in the product URL as well? We are using a plugin that makes the URLs more user friendly, but it makes it tough to isolate "small" or "blue" because the parameters no longer include a "?sort=" or "color=" prefix.
This is why I am considering using the meta header to help control the duplicate content and crawl allowance issues.
Since I can edit the meta headers on a variety of pages, is it a viable option to use NOINDEX,FOLLOW?
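To be concrete, I'd be adding something like this to the head of each problem page (just a rough sketch; the exact markup depends on how the CMS exposes the head section):

<meta name="robots" content="noindex, follow">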
-
As mentioned initially, the CMS doesn't allow me to edit canonicals for individual pages that are created via facets. The other question I had about canonicals is that rel=canonical is meant to help bots understand that different variations of the same page are, in fact, the same page: example.com = example.com/. But for the user (which is ultimately what bots care about), example.com/sony/50 may not always be the same as example.com/sony.
Anyway, that is a little beside the point. I have looked into the canonical option, but I am not sure it can be done.
-
This sounds like a job for a canonical tag.
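Something along these lines on each faceted variant, pointing back at the page you actually want to rank (a sketch only; the URL is just an illustration):

<link rel="canonical" href="http://example.com/tv/sony">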
-
Hey Craig,
Thanks for your response. This is the common answer that I have found. Here is the challenge I am having (I will use your example above):
Let's say that example.com/tv/sony is the main category page for this brand, but I only carry a few Sony TVs. Therefore, the only difference between that page and example.com/tv/sony/50 is a category description that disappears when further facets are chosen.
When I search the SERPs for "Sony TVs", rather than one of these pages ranking well, both rank moderately well but not well enough for first-page results, and I would think it is confusing for customers to find two very closely related pages side by side.
So, while I agree that robots.txt is a tool I can use to keep search engines from getting dizzy with the facets by limiting them to (say) four, is NOINDEX the best solution for controlling duplicate content issues that are not that deep and are more case-by-case?
One more thing I might add is that these issues don't happen site-wide. If I carry many products from Samsung, then example.com/tv/samsung, example.com/tv/samsung/50, and even example.com/tv/samsung/50/HD will produce very different results. The issue usually occurs where there are few products for a brand, and they filter down the same way across many facets.
Does that make sense? I agree with you wholeheartedly; I am just trying to figure out how to deal with the shallow duplicate issues.
Cheers,
-
They will be linked to by internal links.
There is no penalty for having duplicates of your own content, but having links pouring away link juice is a self-imposed penalty.
-
Hi Alan, I understand that, but the problem Evan is describing seems to be related to duplicate content and crawl allowance. There's no perfect answer, but in my experience the types of pages that Evan is describing aren't often linked to. Taking that into consideration, IMO robots.txt is the correct solution.
-
The problem with robots.txt is that any link pointing to a blocked page passes link juice that will never be returned; it is wasted. robots.txt is the last resort; IMO it should never be used.
-
Hi Evan, this is quite a common problem. There are a couple of things to consider when deciding whether noindex is the solution rather than robots.txt.
Unless there is a reason the pages need to be crawled (for example, there are pages on the site that are only linked to from those pages), I would use robots.txt. Noindex doesn't stop search engines from crawling those pages, only from putting them in the index. So, in theory, search engines could spend all their time crawling pages that you don't want in the index.
Here's what I'd do:
Decide on a reasonable number of facets. For example, if you're selling TVs, people might search for:
- Sony TV (brand search)
- 50 inch Sony TV (size + brand)
- Sony 50 inch HD TV (brand + size + specification)
But anything past three facets tends to get very little search volume (do keyword research for your own market).
In this case I'd create a rule that appends something to the URL after three facets that would make it easy to block in robots.txt. For example, I might make my structure:
- example.com/tv/sony
- example.com/tv/sony/50
- example.com/tv/sony/50/HD
But as soon as I add a 4th facet, for example 'colour', I add in the /filter/ subfolder:
- example.com/filter/tv/sony/50/HD/white
I can then easily block all these pages in robots.txt using:
Disallow: /filter/
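So the relevant part of the robots.txt file would look something like this (a sketch; add or adjust user-agent groups to suit your setup):

User-agent: *
Disallow: /filter/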
I hope this helps.
-
It is a problem in the SERPs because if I run a query for the brand, I can see faceted variations of that brand (say "brand" + "blue") ranking right below, but neither of them ranks on the first page. I won't NOINDEX all pages, just those that don't provide value for customers searching and those that compete on competitive terms and cause the preferred page to rank lower.
It was brought to my attention through Moz Analytics, and once I began to investigate further, I found many sources mentioning that this is very common for e-commerce. Common practice is robots.txt plus a plugin, but we are using a different plugin. So, for this reason, I am trying to figure out whether NOINDEX meta headers are a good option.
Does that make sense?
-
I'm not sure you have a problem; why not let them all get indexed?
-
Hey Alan,
Again, I thank you for your feedback. Unfortunately, rel prev/next is not relevant in this circumstance. Also, it is all unique content on my client's own site, and I know that it is a duplicate content problem because I have two similar pages with slightly different facets ranking 14 and 15 in the SERPs. If search engines were to choose one over the other, they would not rank them back to back.
For clarification, this is an e-commerce application with faceted navigation, not a pagination issue.
Thanks for your input.
-
I would look at canonical and rel prev/next.
Also, I would first establish: do you actually have a problem?
Duplicate content is not always a problem. If it is duplicate content on your own site, there is not a lot to worry about; Google will just rank one page. There is no penalty for DC itself. If you are screen scraping, then you may have a problem.
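For reference, rel prev/next is markup that goes in the head of a paginated series, something like this on page 2 (the URLs are only placeholders):

<link rel="prev" href="http://example.com/tv/sony?page=1">
<link rel="next" href="http://example.com/tv/sony?page=3">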
-
Hey Alan,
Thanks for your feedback. I guess I am not sure what other solutions there are for this circumstance. The CMS doesn't allow me to use rel=canonical for individual pages with facets, and I definitely don't think 301s are the way to go. I figured NOINDEX, FOLLOW is best because it keeps the confusing duplicate content out of the index but can still take advantage of some link juice. Mind you, these are faceted pages, not top-level pages.
Thoughts?
-
robots.txt is a bad way to do things, because any link pointing to a blocked page wastes its link juice. Using noindex,follow is a better way, as it allows the links to be followed and link juice to return to your indexed pages.
But it's best not to noindex at all, and to find another solution if possible.
Related Questions
-
Rel-canonical vs Href-lang use for an international website.
I have a multi-country website that uses country subfolders to separate countries. When I run a Moz scan, I get canonical-related alerts (this is probably related to some of our US content being duplicated on the other country sites). Shouldn't I be using hreflang instead, since I am telling search engines that a certain article in country B is just a copy of the same article in country A?
Intermediate & Advanced SEO | | marshdigitalmarketing0 -
Robots.txt - Googlebot - Allow... what's it for?
Hello - I just came across this in robots.txt for the first time, and was wondering why it is used. Why would you have to proactively tell Googlebot to crawl JS/CSS, and why would you want it to? Any help would be much appreciated - thanks, Luke
User-Agent: Googlebot
Allow: /*.js
Allow: /*.css
Intermediate & Advanced SEO | | McTaggart0 -
HTTPS vs HTTP Link Equity
Hi Guys, So basically we have a site which has both HTTPS and HTTP versions of each page. We want to consolidate them due to potential duplicate content issues with the search engines. Most of the HTTP pages naturally have most of the links and more authority than the HTTPS pages, since they have been around longer. E.g. the normal HTTP homepage has 50 linking root domains while the HTTPS version has 5. So we are a bit concerned about adding a rel canonical tag and telling the search engines that the preferred page is the HTTPS page, not the HTTP page (where most of the link equity and social signals are). Could there potentially be a ranking loss if we do this, and what would be best practice in this case? Thanks, Chris
Intermediate & Advanced SEO | | jayoliverwright0 -
What is better for Meta description?
Hi everybody, I noticed that a lot of websites prefer their meta description to be the first words of the content inside. I, on the other hand, thought that Google would prefer the meta description to be a peek at what's going to be inside. Can anyone explain which is better? Thanks 🙂
Intermediate & Advanced SEO | roeesa
-
Ranking in SERPs but not using terms on website.
As far as I know, it's not normally possible for a website to rank for a keyword that is not mentioned on the website. I have seen a website that ranks very well for key terms, yet they are not mentioned anywhere on the website. I have run advanced searches and checked my findings using tools, including a cloaking checker. How can this be?
Intermediate & Advanced SEO | | lee-murphy0 -
I have two sitemaps which partly duplicate each other - one is blocked by robots.txt but I can't figure out why!
Hi, I've just found two sitemaps - one of them is .php and represents part of the site structure on the website. The second is a .txt file which lists every page on the website. The .txt file is blocked via robots exclusion protocol (which doesn't appear to be very logical as it's the only full sitemap). Any ideas why a developer might have done that?
Intermediate & Advanced SEO | | McTaggart0 -
Use of Rel=Canonical
I have been pondering whether I am using this tag correctly or not. We have a custom solution which lays out products in the typical e-commerce style, with plenty of tick-box filters to further narrow down the view. When I last researched this, it seemed like a good idea to implement rel=canonical to point all sub-section pages at a 'view-all' page which returns all the products, unfiltered, for that given section. Normally pages are restricted to 9 results per page, with interface options to increase that. This, combined with all the filters we offer, creates many millions of possible page permutations and hence the need for the canonical tag. I am concerned because our view-all pages get large, returning all of that section's products in one place. If I pointed the view-all page at, say, the first page of x results, would that defeat the object of the view-all suggestion that Google made a few years back, as it would require further crawling to get at all the data? Alternatively, as these pages are just product listings, would NoIndex be a better route to go, given that it's unlikely they will get much love in Google anyway?
Intermediate & Advanced SEO | | motiv80 -
If I disallow an unfriendly URL via robots.txt, will its friendly counterpart still be indexed?
Our not-so-lovely CMS loves to render pages regardless of the URL structure, just as long as the page name itself is correct. For example, it will render the following as the same page:
- example.com/123.html
- example.com/dumb/123.html
- example.com/really/dumb/duplicative/URL/123.html
To help combat this, we are creating mod rewrites with friendly URLs, so all of the above would simply render as example.com/123. I understand robots.txt respects the wildcard (*), so I was considering adding this to our robots.txt:
Disallow: */123.html
If I move forward, will this block all of the potential permutations of the directories preceding 123.html yet not block our friendly example.com/123? Oh, and yes, we do use the canonical tag religiously - we're just mucking with the robots.txt as an added safety net.
Intermediate & Advanced SEO | | mrwestern0