Tags, Categories, & Duplicate Content
-
Looking for some advice on a duplicate content issue that we're having that definitely isn't unique to us.
See, we are allowing all our tag and category pages, as well as our blog pagination, to be indexed and followed, but Moz is flagging all of that as duplicate content, which is unsurprising since it's the same content that appears on our blog posts.
We've decided in the past to keep these pages the way they are as it hasn't seemed to hurt us specifically and we hoped it would help our overall ranking. We haven't seen positive or negative signals either way, just the warnings from Moz.
We are wondering if we should noindex these pages and if that could cause a positive change, but we're worried it might cause a big negative change as well.
Have you confronted this issue? What did you decide and what were the results?
Thanks in advance!
-
Erica, Thank you for sticking with this and continuing to share your thoughts. It's very helpful and much appreciated!
-
Thanks Erica.
We're deindexing the tag pages for now and will see what happens. If all goes well, we might deindex the category pages as well.
Thanks!
-
EGOL is definitely correct that those pages can hold a ton of value if a brand/company has the time/resources/bandwidth to optimize them. Most don't, so it's better to noindex than have duplicate and/or thin content category pages. But if you can and will optimize, do it!
-
That makes sense. But I really want to make sure I (and others) understand because of EGOL's earlier referenced comments (June 2011).
"If I kept my category pages out of the search indexes I would be walking away from hundreds of search engine visitors per minute.
Do analytics to see how much traffic is coming into these pages from search, who is linking to them, how much revenue they earn and also consider their future traffic potential.
Its not good to follow generalized advice blindly." and (February 2012) ...
"I have two wordpress blogs and category pages are where most of my search engine traffic enters. Some bring in thousands per month. Most of my post pages bring in very little traffic.
If you are not having any problem with duplicate content at present maybe it would be a good idea to allow indexing of the main page, the post pages and the category pages. Then if you do have a duplicate content problem you can remove from the index the pages that bring in the least amount of traffic."
So is the key, then, ensuring the category pages contain unique content in addition to whatever else is on them? I would have thought that creating a unique combination of content by grouping excerpts from identically tagged posts might have been enough. That content would also get updated each time a new post is published.
I'd appreciate your thoughts on this Erica.
-
You can either choose to deindex pages one by one or deindex the whole subfolder.
Since category pages usually have the same content as, or a preview of, the content on your other pages, noindexing them doesn't affect your long-tail traffic; that traffic will go to the other pages instead. The usual problem with category pages is that their content is thin or duplicate. That said, you can create content just for category pages and keep them indexed to drive traffic to. I worked in e-commerce pre-Moz, and we wanted to rank and land people on category pages, such as women's shirts, so we made unique, solid content for those pages.
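To make the subfolder option concrete, here is a minimal sketch of path-based noindex rules, assuming WordPress-style /tag/ and /category/ prefixes (the prefixes and URLs are assumptions; match them to your own permalink setup):

```python
from urllib.parse import urlparse

# Subfolders whose pages should carry a noindex directive.
# /tag/, /category/, and /author/ are WordPress defaults, not a
# universal rule -- adjust these to your own URL structure.
NOINDEX_PREFIXES = ("/tag/", "/category/", "/author/")

def should_noindex(url: str) -> bool:
    """True if the URL falls inside a subfolder we want deindexed."""
    return urlparse(url).path.startswith(NOINDEX_PREFIXES)

urls = [
    "https://example.com/tag/seo/",
    "https://example.com/category/news/page/2/",
    "https://example.com/my-great-post/",
]
# Only the archive-style pages are flagged; the post stays indexed.
print([u for u in urls if should_noindex(u)])
```

In practice an SEO plugin applies the equivalent rule at the template level; this just shows the decision logic of deindexing by subfolder rather than page by page.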
-
This is certainly what we've heard and it's good to hear of a real case where you went from indexing to noindexing. My bet is that we would have the same result, my hope is that we would have an increase over time, and my fear is that we'll have a decrease.
-
We do update our blog twice a week and keep a pretty good spread across our categories and tags.
The hope is that we'll have the boost you mentioned, in long-tail and whatnot, but the fear is that it could hurt us (like Erica mentioned above).
Like Erica, we haven't seen any negative signals, but we wonder if we're being affected without even knowing it, and whether setting them to noindex could give us a boost. That's the dream; we just don't want the opposite to happen.
-
Erica, shouldn't the decision to noindex category pages be made on a case-by-case basis? If the blog has few posts, or if posts aren't updated frequently, then the chance of category pages being viewed as thin increases, and it would make sense to noindex them.
If, on the other hand:
- category pages have different content from that of the main blog page;
- the main blog and category pages use excerpts;
- tag, archive and author pages are noindexed;
- and the blog is updated frequently;
doesn't that make a case for indexing the category pages? They can be a rich source of long-tail keywords and therefore a good draw for new visitors to the site, as explained in this earlier Q&A post.
-
I highly recommend that you noindex category, tag, archive, and author pages in WordPress. (I assume you're using WP, though there are many similar blogging platforms out there.) The reason is that these pages come across as thin and/or duplicate content, and you risk getting hit by Panda. That doesn't always happen: my own personal blog had these pages indexed for a very long time, and I didn't have any problems. But I also didn't see any problems when I did deindex them. Then again, I don't get a ton of traffic, and I'm sure traffic, site popularity, and how competitive your niche is all factor into where you sit on Google's radar.
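For reference, the noindex being discussed is a single tag in each page's head section; most WordPress SEO plugins will add it to tag, category, archive, and author pages for you:

```html
<!-- Keeps the page out of the index but lets crawlers follow its links -->
<meta name="robots" content="noindex,follow">
```

The "follow" half matters here: the page stays out of the index, but crawlers still follow its links to your posts, so the archive keeps passing internal link equity.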
-
Hi Bradjn,
With duplicate content I would go for canonicals. Give the original page and its duplicates the same canonical URL, so search engines will know which is the original and (most of the time) won't treat the others as duplicates.
Here's some more about duplicate content and canonicals: http://moz.com/learn/seo/duplicate-content
I can't give you a definite answer on whether noindex would be a positive change. Did you check Webmaster Tools for duplicates, or only Moz?
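A small sketch of the canonical approach with hypothetical URLs: the original page and every duplicate carry the same canonical, so search engines consolidate the set instead of flagging duplication:

```python
def canonical_tag(original_url: str) -> str:
    """Build the <link rel="canonical"> element that duplicates should carry."""
    return '<link rel="canonical" href="{}">'.format(original_url)

# The original page AND each of its duplicates point at the same URL.
original = "https://example.com/blog/my-post/"
duplicates = [
    "https://example.com/tag/seo/my-post/",
    "https://example.com/blog/my-post/?utm_source=feed",
]
for page in [original] + duplicates:
    print(page, "->", canonical_tag(original))
```

The key detail, as Leonie says, is that the original also canonicals to itself; every member of the duplicate group emits the identical URL.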
Grtz, Leonie
Related Questions
-
How to handle one section of duplicate content
Hi guys, I'm wondering if I can get some best-practice advice in preparation for launching our new e-commerce website. For the new website we are creating location pages with a description and things to do, which will lead the user to hotels in the location. For each hotel page that relates to the location we will have the same 'Things to do' content. This is what the content will look like on each page:
Location page
- Location title (1-3 words)
- Location description (150-200 words)
- Things to do (200-250 words)
- Reasons to visit location (15 words)
Hotel page
- Hotel name and address (10 words)
- Short description (25 words)
- Reasons to book hotel (15 words)
- Hotel description (100-200 words)
- Friendly message why to visit (15 words)
- Hotel reviews feed from Trustpilot
- Types of break and information (100-200 words)
- Things to do (200-250 words)
My question is: how much will we be penalised for having the same 'Things to do' content on up to 10 hotel pages plus 1 location page? In an ideal world we want to develop a piece of code which tells search engines that the original content lies on the location page, but this will not be possible before we go live. I'm unsure whether we should just take the potential loss in traffic or remove the 'Things to do' section on hotel pages until we develop that piece of code.
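As a rough back-of-envelope (not an official penalty calculation), taking the midpoints of the word-count ranges listed in the question, the shared 'Things to do' block makes up a large share of each hotel page, which is why crawlers are likely to flag it:

```python
# Midpoints of the word-count ranges given for the hotel page sections
# (the reviews feed is excluded since its length isn't stated).
sections = {
    "name_and_address": 10,
    "short_description": 25,
    "reasons_to_book": 15,
    "hotel_description": 150,   # midpoint of 100-200
    "friendly_message": 15,
    "types_of_break": 150,      # midpoint of 100-200
    "things_to_do": 225,        # midpoint of 200-250, shared across pages
}

total_words = sum(sections.values())
shared_share = sections["things_to_do"] / total_words
print(f"{shared_share:.0%} of each hotel page is the shared block")  # roughly 38%
```

Nearly two fifths of each hotel page being identical text, repeated across ten-plus URLs, is well into the territory duplicate-content tools flag.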
Technical SEO | CHGLTD
-
Fullsite=true coming up as duplicate content?
Hello, I am new to the fullsite=true method of linking a mobile site to a desktop site, and have recently found that about 50 of the instances in which I added fullsite=true to links from our blog show as duplicates of the pages they point to. Could someone tell me why this would be? Do I need to add some sort of rel=canonical to the main (non-fullsite=true) page, or how should I approach this? Thanks in advance for your help! L
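One common approach, sketched here with a hypothetical URL, is to normalize away the parameter so each fullsite=true variant resolves to the page it duplicates (a rel=canonical on the variant pointing at this normalized URL does the same job):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonical_without_param(url: str, param: str = "fullsite") -> str:
    """Drop one query parameter so the URL matches the page it duplicates."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(canonical_without_param("https://example.com/page?fullsite=true"))
```

Other parameters on the URL survive; only the fullsite switch is stripped when computing the canonical target.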
Technical SEO | lfrazer
-
Duplicate Content due to CMS
The biggest offender for our website's duplicate content is an event calendar generated by our CMS. It creates a page for every day of every year, up to the year 2100. I am considering some solutions:
1. Include code that stops search engines from indexing any of the calendar pages.
2. Keep the calendar but re-route search engines to a more popular workshops page that contains better info. (The workshop page isn't duplicate content with the calendar page.)
Are these solutions possible? If so, how do they affect SEO? Are there other solutions I should consider?
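For option 1, a robots.txt rule is the usual first step (the /calendar/ path is an assumption; substitute whatever URL pattern your CMS actually generates):

```text
User-agent: *
Disallow: /calendar/
```

Note that Disallow only blocks crawling; pages already discovered can linger in the index, so a noindex meta tag on the calendar pages themselves is the more reliable way to remove them.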
Technical SEO | ycheung
-
Content relaunch without content duplication
We write great content for blogs and websites (or at least we try), especially blogs. Sometimes a few pieces may NOT get a good response or reach. It could be content that isn't interesting, or the title, or bad timing, or even the language used. My question for the discussion is: what would you do if you found content worth the audience's attention that missed it during its original launch? Is it fine to improve the text and context and relaunch it? For example:
1. Rechristen the blog post - change the title to make it attractive
2. Add images
3. Check spelling
4. Do any necessary rewriting
5. Change the timeline by adding more recent statistics and references to recent write-ups (external and internal blogs, for example); change anything that seems outdated
Also, change the title and set rel=canonical / 301 permanent redirects. Will the above make the blog post new? Any ideas and tips? Basically we'd like to refurbish :-) content that didn't succeed in the past and relaunch it to try again. If we do so, will there be any issues with Google bots? (I hope redirection would solve this, but I still want to make sure.) Thanks
Technical SEO | macronimous
-
Duplicate page content
Hi, I am getting a duplicate content error in SEOmoz on one of my websites. It shows:
http://www.exampledomain.co.uk
http://www.exampledomain.co.uk/
http://www.exampledomain.co.uk/index.html
How can I fix this? Thanks, Darren
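Those three URLs serve the same page, so a 301 redirect (or a canonical tag) should collapse them to a single form. A sketch of the normalization you want the server to enforce:

```python
def normalize(url: str) -> str:
    """Collapse the '', '/', and '/index.html' variants to one canonical form."""
    if url.endswith("/index.html"):
        url = url[: -len("index.html")]
    if not url.endswith("/"):
        url += "/"
    return url

variants = [
    "http://www.exampledomain.co.uk",
    "http://www.exampledomain.co.uk/",
    "http://www.exampledomain.co.uk/index.html",
]
print({normalize(u) for u in variants})  # all three collapse to one URL
```

On Apache this is typically done with a mod_rewrite 301 from /index.html to /; the crawler then only ever sees the trailing-slash version.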
Technical SEO | Bristolweb
-
Help With Joomla Duplicate Content
Need another set of eyes on my site from someone with Joomla experience. I'm running Joomla 2.5 (the latest version) and SEOmoz is giving me duplicate content errors on a lot of my pages. I checked my sitemap, my menus, and my links, and I can't figure out how SEOmoz is finding the alternate paths to my content. The home page is: http://www.vipfishingcharters.com/ There's only one menu at the top. Take the first link, "Dania Beach," under fishing charters, for example. This generates the SEF URL: http://www.vipfishingcharters.com/fishing-charters/broward-county/dania-beach-fishing-charters-and-fishing-boats.html Somehow SEOmoz (and presumably all other robots) is finding duplicate content at: http://www.vipfishingcharters.com/broward-county/dania-beach-fishing-charters-and-fishing-boats.html SEOmoz says the referrer is the homepage/root. The first URL is constructed using the menu aliases; the second is constructed using the Joomla category and article alias. Where is it getting this, and how can I stop it?
Technical SEO | NoahC
-
Is 100% duplicate content always duplicate?
Bit of a strange question here that I'd be keen to get others' opinions on. Let's say we have a web page which is 1000 lines long, pulling content from 5 websites (the content itself is duplicate: say, RSS headlines). Obviously any of that content on its own will be viewed by Google as duplicate and will suffer for it. However, one of the ways duplicate content is assessed is whether a page is x% the same as another page, be it on your own site or someone else's. In the case of our page, while 100% of the pulled-in content is duplicate, the page is no more than 20% identical to any other single page, so would it technically be picked up as duplicate? Hope that makes sense. My reason for asking is that I want to pull the latest tweets, news, and RSS from leading sites onto a site I am developing. The site will have its own content too, but I also want to pull in external content.
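The "x% the same" idea can be made concrete with word shingles, which is roughly how similarity tools score page pairs (the sample strings below are made up):

```python
def shingles(text: str, n: int = 3) -> set:
    """Word n-grams; the overlap between shingle sets approximates % identical."""
    words = text.lower().split()
    return {tuple(words[i : i + n]) for i in range(len(words) - n + 1)}

def overlap(a: str, b: str) -> float:
    """Jaccard similarity of the two pages' shingle sets, in [0, 1]."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / max(len(sa | sb), 1)

feed = "latest seo headlines from five sources pulled in by rss"
page = "our own commentary and analysis plus " + feed
print(f"{overlap(feed, page):.0%} shingle overlap")
```

So the question's intuition holds: the more unique text wrapped around the syndicated feed, the lower the page-level similarity score, even though the feed itself is 100% duplicate.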
Technical SEO | Grumpy_Carl
-
Duplicate content
This is just a quickie: On one of my campaigns in SEOmoz I have 151 duplicate page content issues! Ouch! On analysis the site in question has duplicated every URL with "en" e.g http://www.domainname.com/en/Fashion/Mulberry/SpringSummer-2010/ http://www.domainname.com/Fashion/Mulberry/SpringSummer-2010/ Personally my thoughts are that are rel = canonical will sort this issue, but before I ask our dev team to add this, and get various excuses why they can't I wanted to double check i am correct in my thinking? Thanks in advance for your time
Technical SEO | Yozzer