Using Meta Header vs Robots.txt
-
Hey Mozzers,
I am working on a site that has search-friendly parameters for its faceted navigation; however, this makes it difficult to identify the parameters in a robots.txt file. I know that using the robots.txt file is highly recommended and powerful, but I am not sure how to do this when facets use common words such as sizes.
For example, a filtered URL may look like www.website.com/category/brand/small.html. Brand and size are both facets. Brand is a great filter, and size is very relevant for shoppers, but many products include "small" in the URL, so it is tough to isolate that filter in the robots.txt. (I hope that makes sense.)
I am able to identify problematic pages and edit the meta head, so I can add a robots meta tag on any page that is causing these duplicate issues. My question is: is this a good idea? I want bots to crawl the facets, but indexing all of the facets causes duplicate issues.
Thoughts?
-
"There is no penalty for having duplicates of your own content"
Alan,
I must respectfully disagree with this statement. Perhaps Google will not penalize you directly, but it is easy to self-cannibalize key terms if one has many facets that differ only slightly. I have seen this on a site where they don't rank on the first page, but they have 3-4 pages on the second page of the SERPs. This is the exact issue that I am trying to resolve.
Evan
P.S. Sorry I hit the wrong button, but you got a good answer out of it.
-
Hey Craig,
I agree with you regarding the robots.txt; however, how does one isolate parameters that are commonly used within product names, and thus appear in the product URL as well? We are using a plugin that makes the URLs more user-friendly, but it makes it tough to isolate "small" or "blue" because the parameters don't include a "?sort=" or "color=" prefix anymore.
This is why I am considering using the meta header to help control the duplicate content and crawl allowance issues.
Since I can edit the meta headers on a variety of pages, is it a viable option to use NOINDEX,FOLLOW?
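(For reference, the tag in question sits in the head of each affected page; a minimal sketch:)

```html
<!-- Keeps the faceted page out of the index while still letting
     crawlers follow its links, so link equity continues to flow: -->
<meta name="robots" content="noindex,follow">
```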
-
As mentioned initially, the CMS doesn't allow me to edit canonicals for individual pages that are created via facets. The other question I had about canonicals is that a rel=canonical is meant to help bots understand that different variations of the same page are, in fact, the same page: example.com = example.com/. But for the user (whom bots ultimately care about), example.com/sony/50 may not always be the same as example.com/sony.
Anyway, that is a little beside the point. I have explored the option of canonicals, but I am not sure it can be done.
-
This sounds like a job for a canonical tag.
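(For reference, a canonical tag on a faceted page pointing back to the preferred category page would look something like this; the URLs are the hypothetical ones from this thread:)

```html
<!-- In the head of example.com/tv/sony/50, declaring the main
     brand page as the preferred version (hypothetical URLs): -->
<link rel="canonical" href="https://example.com/tv/sony">
```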
-
Hey Craig,
Thanks for your response. This is the common answer that I have found. Here is the challenge I am having (I will use your example above):
Let's say that example.com/tv/sony is the main category page for this brand, but I only carry a few Sony TVs. Therefore, the only difference between that page and example.com/tv/sony/50 is a category description that disappears when further facets are chosen.
When I search the SERPs for "Sony TVs", rather than one of these pages ranking well, both rank moderately well, but not well enough for first-page results, and I would think finding two very closely related pages side by side is confusing to customers as well.
So, while I agree that robots.txt is a tool I can apply to keep search engines from getting dizzy with the facets by limiting them to (say) 4, is NOINDEX the best solution for controlling duplicate content issues that are not that deep, and are more case-by-case?
One more thing I might add is that these issues don't happen site-wide. If I carry many products from Samsung, then example.com/tv/samsung, example.com/tv/samsung/50, and even example.com/tv/samsung/50/HD will produce very different results. The issue usually occurs where there are few products for a brand, and they filter the same way across many facets.
Does that make sense? I agree with you wholeheartedly; I am just trying to figure out how to deal with the shallow duplicate issues.
Cheers,
-
"they will be linked to by internal links"
There is no penalty for having duplicates of your own content, but having links pouring away link juice is a self-imposed penalty.
-
Hi Alan, I understand that, but the problem Evan is describing seems to be related to duplicate content and crawl allowance. There's no perfect answer, but in my experience the types of pages Evan is describing aren't often linked to. Taking that into consideration, IMO robots.txt is the correct solution.
-
The problem with robots.txt is that any link pointing to a blocked page passes link juice that will never be returned; it is wasted. robots.txt is the last resort; IMO it should never be used.
-
Hi Evan, this is quite a common problem. There are a couple of things to consider when deciding whether Noindex is the solution rather than robots.txt.
Unless there is a reason the pages need to be crawled (for example, pages on the site that are only linked to from those pages), I would use robots.txt. Noindex doesn't stop search engines from crawling those pages, only from putting them in the index. So in theory, search engines could spend all their time crawling pages that you don't want in the index.
Here's what I'd do:
Decide on a reasonable number of facets. For example, if you're selling TVs, people might search for:
- Sony TV (brand search)
- 50 inch Sony TV (size + brand)
- Sony 50 inch HD TV (brand + size + specification)
Anything past 3 facets tends to get very little search volume (do keyword research for your own market).
In this case, I'd create a rule that appends something to the URL after 3 facets that would make it easy to block in robots.txt. For example, I might make my structure:
- example.com/tv/sony/50/HD
But as soon as I add a 4th facet, for example 'colour', I add in the /filter/ subfolder:
- example.com/filter/tv/sony/50/HD/white
I can then easily block all these pages in robots.txt using:
Disallow: /filter/
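The rule above can be sketched in a few lines; this is an illustrative sketch only (the `facet_url` helper and its URLs are hypothetical, not from any particular plugin or CMS):

```python
def facet_url(domain, category, facets, max_facets=3):
    """Join facet segments into a path; once the facet count exceeds
    max_facets, prefix the path with /filter/ so every deep-facet URL
    shares a single prefix that robots.txt can block."""
    path = "/".join([category] + facets)
    if len(facets) > max_facets:
        return f"{domain}/filter/{path}"
    return f"{domain}/{path}"

print(facet_url("example.com", "tv", ["sony", "50", "HD"]))
# → example.com/tv/sony/50/HD
print(facet_url("example.com", "tv", ["sony", "50", "HD", "white"]))
# → example.com/filter/tv/sony/50/HD/white
```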
I hope this helps.
-
It is a problem in the SERPs because when I run a query for the brand, I can see faceted variations of that brand (say "brand" + "blue") ranking right below, but neither of them ranks on the first page. I won't NOINDEX all pages, just those that don't provide value for customers searching, and those that are competing on competitive terms and causing the preferred page to rank lower.
It was brought to my attention through Moz Analytics, and once I began to investigate further, I found many sources mentioning that this is very common for e-commerce. Common practice is robots.txt plus a plugin, but we are using a different plugin. So, for this reason, I am trying to figure out whether NOINDEX meta headers are a good option.
Does that make sense?
-
I'm not sure you have a problem. Why not let them all get indexed?
-
Hey Alan,
Again, I thank you for your feedback. Unfortunately, rel=prev/next is not relevant in this circumstance. Also, it is all unique content on my client's own site, and I know that it is a duplicate content problem because I have two similar pages with slightly different facets ranking 14 and 15 in the SERPs. If search engines were choosing one over the other, they would not rank them back to back.
For clarification, this is an e-commerce application with faceted navigation, not a pagination issue.
Thanks for your input.
-
I would look at canonical and rel prev/next.
Also, I would first establish: do you have a problem?
Duplicate content is not always a problem. If it is duplicate content on your own site, there is not a lot to worry about; Google will rank just one page. There is no penalty for DC itself. If you are screen scraping, then you may have a problem.
-
Hey Alan,
Thanks for your feedback. I guess I am not sure what other solutions there are for this circumstance. The CMS does not allow me to use rel=canonicals for individual pages with facets, and I definitely don't think 301s are the way to go. I figured NOINDEX, FOLLOW is best because it keeps bots away from confusing duplicate content but can still take advantage of some link juice. Mind you, these are faceted pages, not top-level pages.
Thoughts?
-
robots.txt is a bad way to do things, because any link pointing to a blocked page wastes its link juice. Using noindex,follow is a better way, as it allows the links to be followed and link juice to return to your indexed pages.
But it's best not to noindex at all, and to find another solution if possible.
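(A side note for auditing: whichever way you go, it is easy to script a check of which pages actually carry a robots meta directive. A minimal standard-library sketch; the sample page HTML here is hypothetical:)

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of every <meta name="robots"> tag on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            self.directives.append((a.get("content") or "").lower())

def robots_directives(html):
    """Return the robots meta directives found in an HTML document."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.directives

page = '<html><head><meta name="robots" content="noindex,follow"></head></html>'
print(robots_directives(page))  # → ['noindex,follow']
```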