Using Meta Header vs Robots.txt
-
Hey Mozzers,
I am working on a site that has search-friendly parameters for its faceted navigation; however, this makes it difficult to identify the parameters in a robots.txt file. I know that using the robots.txt file is highly recommended and powerful, but I am not sure how to do this when facets use common words such as sizes.
For example, a filtered URL may look like www.website.com/category/brand/small.html. Brand and size are both facets. Brand is a great filter, and size is very relevant for shoppers, but many products include "small" in the URL, so it is tough to isolate that filter in the robots.txt. (I hope that makes sense.)
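To illustrate (a purely hypothetical rule and product URL, not something I would actually deploy): a wildcard pattern broad enough to catch the size facet, such as
Disallow: /*small
would also block legitimate product pages like www.website.com/category/brand/small-leather-wallet.html, which is exactly the collateral damage I'm trying to avoid.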
I am able to identify problematic pages and edit the meta head, so I can add NOINDEX,FOLLOW on any page that is causing these duplicate issues. My question is: is this a good idea? I want bots to crawl the facets, but indexing all of the facets causes duplicate issues.
Thoughts?
-
"there is no penalty for have duplicates of your own content"
Alan,
I must respectfully disagree with this statement. Perhaps Google will not penalize you directly, but it is easy to self-cannibalize key terms if one has many facets that only differ slightly. I have seen this on a site where they don't rank on the first page, but they have 3-4 pages on the second page of the SERPs. This is the exact issue that I am trying to resolve.
Evan
PS: Sorry, I hit the wrong button, but you got a good answer out of it.
-
Hey Craig,
I agree with you regarding the robots.txt; however, how does one isolate parameters that are commonly used within product names and therefore appear in the product URLs as well? We are using a plugin that makes the URLs more user-friendly, but it makes it tough to isolate "small" or "blue" because the parameters don't include a "?sort=" or "color=" prefix anymore.
This is why I am considering using the meta header to help control the duplicate content and crawl allowance issues.
Since I can edit the meta headers on a variety of pages, is it a viable option to use NOINDEX,FOLLOW?
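For reference, what I have in mind is adding something like this to the head of each problematic faceted page (exactly which pages would vary):
<meta name="robots" content="noindex, follow">
or, if editing the page templates proves awkward, sending the equivalent HTTP response header from the server instead:
X-Robots-Tag: noindex, follow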
-
As mentioned initially, the CMS doesn't allow me to edit canonicals for individual pages that are created via facets. The other concern I have about canonicals is that a rel=canonical is meant to help bots understand that different variations of the same page are, in fact, the same page: example.com = example.com/. But for the user (which is ultimately what bots care about), example.com/sony/50 may not always be the same as example.com/sony.
Anyway, that is a little beside the point. I have looked into the option of canonicals, but I am not sure it can be done here.
-
This sounds like a job for a canonical tag.
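In concrete terms (reusing the example.com/tv/sony URLs from this thread as a purely hypothetical illustration), the deeper faceted page would point at the main brand page from its head:
<link rel="canonical" href="http://example.com/tv/sony" />
placed on example.com/tv/sony/50, which asks search engines to consolidate ranking signals onto the main brand page.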
-
Hey Craig,
Thanks for your response. This is the most common answer I have found. Here is the challenge I am having (I will use your example above):
Let's say that example.com/tv/sony is the main category page for this brand, but I only carry a few Sony TVs. Therefore, the only difference between that page and this page, example.com/tv/sony/50, is a category description that disappears when further facets are chosen.
When I search the SERPs for "Sony TVs", rather than one of these pages ranking well, both rank moderately well but not well enough for first-page results, and I would think finding two very closely related pages side by side is confusing for customers as well.
So, while I agree that robots.txt is a tool I can apply to keep search engines from getting dizzy with the facets by limiting them to (say) 4, is NOINDEX the best solution for controlling duplicate content issues that are not that deep and are more case-by-case?
One more thing I might add is that these issues don't happen site-wide. If I carry many products from Samsung, then example.com/tv/samsung and example.com/tv/samsung/50 and even example.com/tv/samsung/50/HD will produce very different results. The issue usually occurs where there are few products for a brand and they filter the same way across many facets.
Does that make sense? I agree with you wholeheartedly; I am just trying to figure out how to deal with the shallow duplicate issues.
Cheers,
-
They will be linked to by internal links.
There is no penalty for having duplicates of your own content, but having links pouring away link juice is a self-imposed penalty.
-
Hi Alan, I understand that, but the problem Evan is describing seems to be related to duplicate content and crawl allowance. There's no perfect answer, but in my experience the types of pages Evan is describing aren't often linked to. Taking that into consideration, IMO robots.txt is the correct solution.
-
The problem with robots.txt is that any link pointing to a blocked page passes link juice that will never be returned; it is wasted. robots.txt is the last resort; IMO it should never be used.
-
Hi Evan, this is quite a common problem. There are a couple of things to consider when deciding whether noindex is the solution rather than robots.txt.
Unless there is a reason the pages need to be crawled (like there are pages on the site that are only linked to from those pages), I would use robots.txt. Noindex doesn't stop search engines from crawling those pages, only from putting them in the index. So in theory, search engines could spend all their time crawling pages that you don't want in the index.
Here's what I'd do:
Decide on a reasonable number of facets. For example, if you're selling TVs, people might search for:
- Sony TV (Brand search)
- 50 inch sony tv (size + brand)
- Sony 50 inch HD TV (brand + size + specification)
But anything past 3 facets tends to get very little search volume (do keyword research for your own market).
In this case I'd create a rule that appends something to the URL after 3 facets that would make it easy to block in robots.txt. For example, I might make my structure:
- example.com/tv/sony
- example.com/tv/sony/50
- example.com/tv/sony/50/HD
But as soon as I add a 4th facet, for example 'colour', I add in the /filter/ subfolder:
- example.com/filter/tv/sony/50/HD/white
I can then easily block all these pages in robots.txt using:
Disallow: /filter/
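For completeness, in a full robots.txt that directive sits under a user-agent line; a minimal sketch based on the structure above would be:
User-agent: *
Disallow: /filter/
Any URL whose path starts with /filter/, such as example.com/filter/tv/sony/50/HD/white, would then be off-limits to compliant crawlers.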
I hope this helps.
-
It is a problem in the SERPs because if I run a query for the brand, I can see a faceted variation of that brand (say "brand" + "blue") ranking right below it, but neither of them ranks on the first page. I won't NOINDEX all pages, just those that don't provide value for customers searching, and those that compete on competitive terms and cause the preferred page to rank lower.
It was brought to my attention through Moz Analytics, and once I began to investigate it further, I found many sources mentioning that this is very common for e-commerce. Common practice is robots.txt and a plugin, but we are using a different plugin. So, for this reason, I am trying to figure out if NOINDEX meta headers are a good option.
Does that make sense?
-
I'm not sure you have a problem. Why not let them all get indexed?
-
Hey Alan,
Again, I thank you for your feedback. Unfortunately, rel prev/next is not relevant in this circumstance. Also, it is all unique content on my client's own site, and I know that it is a duplicate content problem because I have 2 similar pages with slightly different facets ranking 14 and 15 in the SERPs. If search engines were to choose one over the other, they would not rank them back to back.
For clarification, this is an e-commerce application with faceted navigation. Not a pagination issue.
Thanks for your input.
-
I would look at canonical tags and rel prev/next.
Also, I would first establish whether you actually have a problem.
Duplicate content is not always a problem. If it is duplicate content on your own site, then there is not a lot to worry about; Google will rank just one page. There is no penalty for DC itself. If you are screen scraping, then you may have a problem.
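To spell out the rel prev/next part (the URLs here are made up, and this only applies if the pages really form a paginated series), page 2 of a series would carry something like this in its head:
<link rel="prev" href="http://example.com/tv/sony?page=1" />
<link rel="next" href="http://example.com/tv/sony?page=3" />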
-
Hey Alan,
Thanks for your feedback. I guess I am not sure what "other solutions" there are for this circumstance. The CMS doesn't allow me to use rel=canonicals for individual pages with facets, and I definitely don't think 301s are the way to go. I figured NOINDEX,FOLLOW is best because it keeps bots from indexing the confusing duplicate content but can still take advantage of some link juice. Mind you, these are faceted pages, not top-level pages.
Thoughts?
-
robots.txt is a bad way to do things, because any link pointing to a blocked page wastes its link juice. Using noindex,follow is a better way, as it allows the links on those pages to be followed and the link juice to return to your indexed pages.
But it's best not to noindex at all and to find another solution if possible.