Blog tags are creating excessive duplicate content... should we use rel=canonical or 301 redirects?
-
We are having an issue with our client's blog creating excessive duplicate content via blog tags. The duplicate webpages from tags offer absolutely no value (we can't even see the tags). Should we just 301 redirect the tag pages or use a rel=canonical?
-
The easiest way to resolve issues with tags is to noindex them. I wrote a post about how you can safely do this: http://www.evolvingseo.com/2012/08/10/clean-sweep-yo-tag-archives-now (you basically just double check to see if they are receiving traffic, and leave the few that receive traffic via search indexed).
But at the root level it comes down to knowing how to use tags correctly on a blogging platform in the first place - knowing how they function and what happens when you tag something.
First off, tagging any post creates a new page called a "tag archive". The only way someone can get to tag archives by default is if you allow some sort of navigation or links to them on the site itself. This is usually in the form of a "tag cloud" (sidebar or footer) or at the bottom of posts when it says "tagged in....." and links to the tags.
Then, if they are internally linked to, they will get indexed (unless you noindex them as I suggested above). They are typically low- to no-value pages, because most bloggers just tag everything and use lots of tags per post. You end up with hundreds of tag archive pages with no value.
So noindexing them is the safest way to go, except for the very rare cases where a blogger uses tags 100% perfectly (which is why I assume most people asking should just noindex, but use my post to check for traffic to any of them first).
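To make that concrete, here's a minimal sketch of what the head of a noindexed tag archive would contain; most of the popular WordPress SEO plugins (Yoast, for example) output something like this once you flip the noindex setting for tag archives:

```html
<!-- Emitted in the <head> of each tag archive once it is set to noindex -->
<!-- "follow" keeps crawlers passing link equity through to the posts listed there -->
<meta name="robots" content="noindex,follow">
```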
-
Thanks for chiming in! Just to reiterate something - canonical tags are only a suggestion, not a hard directive. Google can and does ignore them. Canonical tags can also pass noindex directives to the page you point them at. So with tag archives, if they are set to noindex and you canonical them to posts, you might deindex your posts.
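As a hypothetical illustration of that risk (the post URL below is made up), a tag archive carrying both signals would look something like this:

```html
<!-- Conflicting signals in the <head> of a noindexed tag archive -->
<meta name="robots" content="noindex,follow">
<link rel="canonical" href="https://example.com/blog/some-post/">
<!-- If Google honours the canonical, it may consolidate signals onto
     /blog/some-post/, and the noindex could end up applying to the post itself -->
```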
And finally, canonical should only be used for issues that can't be solved via indexation, crawling, or architecture solutions. In the case of tags in a blogging system (probably WordPress), the easiest and most definitive way to handle them is just to noindex them. Then you don't need to worry about canonicals or duplicate content.
Also, tags aren't harmful because of duplicate content per se; it's more that they add a lot of unneeded pages to the index.
-
You can set tags to noindex/follow. If you're using WordPress and one of the more popular SEO plugins, this can be done with a couple of clicks. But are these tags actually generating duplicate content? Usually a snippet of the tagged posts isn't considered duplicate.
Anyway, noindex should be more effective than it was in the past. And as Highland has said, setting a canonical would be a good idea as well.
If the tags aren't really helping site users out (they aren't actually using them, etc.) and the tag pages don't have any link equity, you could just 410 them. Plus you could submit the tag URLs for removal in GWT.
So check the referral traffic and backlinks for those pages, then go with either removal or noindex,follow plus a canonical.
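If removal turns out to be the right call, a rough .htaccess sketch (assuming Apache and WordPress's default /tag/ permalink base) could be as simple as:

```apache
# Return 410 Gone for every tag archive URL
# Assumes tag archives live under /tag/ - adjust to the site's permalink settings
RedirectMatch gone ^/tag/
```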
-
Canonical, hands down. This is what canonical was made for anyway: duplicate content you can't remove.
Canonical simply lets you tell Google which version of the duplicate content should "win" the indexation race, and Google will take it into consideration. I can think of many reasons why you'd have overlapping tags but would not want to remove them (which is what a 301 would do).
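For reference, the canonical hint itself is just a single link element in the head of the duplicate page, pointing at the version you want to win (the URL here is only an example):

```html
<!-- Placed on the duplicate, e.g. a tag archive or a parameterised URL -->
<link rel="canonical" href="https://example.com/blog/original-post/">
```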
-
Related Questions
-
Move to new domain using Canonical Tag
At the moment, I am moving from olddomain.com (a niche site) to newdomain.com (a multi-niche site). For various reasons, I do not want to use a 301 right now and am planning to use a canonical pointing to the new domain instead. Would Google rank the new site instead of the old site? From what I have learnt, the canonical tag lets Google know which is the main source of the content. Thank you very much!
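As a rough illustration of the cross-domain canonical described above (the path is a placeholder), each page on the old domain would carry a link element pointing at its equivalent on the new domain:

```html
<!-- In the <head> of the page on olddomain.com -->
<link rel="canonical" href="https://www.newdomain.com/equivalent-page/">
```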
-
Canonical tags for duplicate listings
Hi there, We are restructuring a website. The original website lists jobs that will have duplicate content. We have tried to ask the client not to use duplicates, but apparently that is not something they can control in their industry. The recommendation I had is to have category pages (which will have the ideal description for a group of jobs) and the job listing pages. The job listing pages will then have canonical tags pointing to the category page as the primary URL to be indexed. Another opinion came from a third party that this can be seen as if we are tricking Google and we would get penalised. **Is that even true?** Why would Google penalise this if that's their recommendation in the first place? This third party suggested using nofollow on the links to these listings, or even not indexing them altogether. What are your thoughts? Thanks Issa
-
Duplicate title tags due to lightbox use
I am looking at a site and am pulling up duplicate title tags because of their lightbox use. So they have a page: http://www.website.com/page and then a duplicate of that page: http://www.website.com/page?width=500&height=600 - on a huge number of pages (using Drupal), that kind of thing. What would be the best / cleanest solution?
-
Shall I use a 301 or 302 redirect when people leave the company?
Hello, At my company we have instances where client-facing people leave the company, so we need to remove their profile pages from the website. Rather than people receiving a 404 when they search for them, I thought it would be best to divert visitors to a generic landing page explaining that the person they are looking for has left the company, with details on how to get in touch. I'm tempted to use a 302 redirect so the person they are searching for stays in the search results longer. But longer-term, will this cause any harm? Should it eventually be turned into a 301 redirect? Or should I just use a 301 in the first instance? Thanks in advance, Stu
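As a rough sketch of the interim approach described above (the paths are hypothetical), the temporary redirect in an Apache .htaccess file could look like this, to be switched to a 301 once the decision is final:

```apache
# Temporary redirect from a departed employee's profile to a generic landing page
Redirect 302 /team/jane-doe/ /contact/former-staff/
# Change 302 to 301 later so the old URL eventually drops out of the index
```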
-
Product Syndication and duplicate content
Hi, It's a duplicate content question. We sell products (vacation rental homes) on a number of websites as well as our own. Generally, these affiliate sites have a higher domain authority and much more traffic than our site. The product content (text, images, and often availability and rates) is pulled by our affiliates into their websites daily and is exactly the same as the content on our site, not including their page structure. We receive enquiries by email and any links from their domains to ours are nofollow. For example, all of the listing text on mysite.com/listing_id is identical to my-first-affiliate-site.com/listing_id and my-second-affiliate-site.com/listing_id. Does this count as duplicate content and, if so, can anyone suggest a strategy to make the best of the situation? Thanks
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings to find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day.
We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.
We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt Advantages:
- Super easy to implement
- Conserves crawl budget for large sites
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt Disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would lead to 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're pagerank sculpting?)
Noindex Advantages:
- Does prevent vehicle details pages from being indexed
- Allows ALL pages to be crawled (advantage?)
Noindex Disadvantages:
- Difficult to implement (vehicle details pages are served using Ajax, so they have no tag; the solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex tag based on querystring variables, similar to this stackoverflow solution). This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it)
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindex pages. I say "force" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt
Hash (#) URL Advantages:
- By using hash (#) URLs for links on Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with Javascript, the crawler won't be able to follow/crawl these links. Best of both worlds: crawl budget isn't overtaxed by thousands of noindex pages, and internal links used to index robots.txt-disallowed pages are gone.
- Accomplishes the same thing as "nofollowing" these links, but without looking like pagerank sculpting (?)
- Does not require complex Apache stuff
Hash (#) URL Disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since they can't crawl/follow them?
Initially, we implemented robots.txt - the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're pagerank sculpting or something like that.
If we implement noindex on these pages (and doing so is a difficult task itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallowal in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO.
My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these.
Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
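For anyone weighing the noindex route described above, here is a hedged sketch of the X-Robots-Tag approach in Apache (the querystring parameter name is an assumption, not the plugin's actual variable):

```apache
# Requires mod_rewrite and mod_headers
RewriteEngine On
# Flag requests for vehicle details pages (parameter name is hypothetical)
RewriteCond %{QUERY_STRING} (^|&)vehicle_id= [NC]
RewriteRule ^ - [E=NOINDEX_PAGE:1]
# Send a noindex header only on flagged requests, without touching the page markup
Header set X-Robots-Tag "noindex, follow" env=NOINDEX_PAGE
```
-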
Do I need to use rel="canonical" on pages with no external links?
I know having rel="canonical" for each page on my website is not a bad practice... but how necessary is it for pages that don't have any external links pointing to them? I have my own opinions on this, to be fair - but I'd love to get a consensus before I start trying to customize which URLs have/don't have it included. Thank you.
Intermediate & Advanced SEO | | Netrepid0 -
Duplicate Title Tags & Duplicate Meta Descriptions after 301 Redirect
Today I was checking my Google Webmaster Tools and found 16,000 duplicate title tags and duplicate meta descriptions. I investigated this issue and here is what I found. I changed the URL structure for 11,000 product pages on 3rd July 2012 and set up 301 redirects from the old product pages to the new product pages. Google has started to crawl my new product pages, but de-indexing of the old URLs is quite slow. That's why I found this issue in Google Webmaster Tools. Can anyone suggest how I can speed up the de-indexing of the old URLs? Or any other suggestions? How long will Google take to de-index the old URLs from web search?