Best way to deal with over 1000 pages of duplicate content?
-
Hi
Using the Moz tools, I can see we have over 1,000 pages of duplicate content, which is a bit of an issue!
95% of the issues arise from our news section and news archive, as it's been going for some time now.
We upload around five full articles a day. Each article has a standalone page but can only be reached through a master archive. The master archive sits in a top-level section of the site and shows snippets of the articles; if a user clicks on a snippet, it takes them to the full-page article. When a new article is added, the snippets move onto the next page, and they keep moving through the pages as more articles are added.
The problem is that the standalone articles can only be reached via the snippets on the master page, and Google is flagging this as duplicate content because each snippet is a duplicate of its article.
What is the best way to solve this issue?
From what I have read, using a meta noindex tag seems to be the answer (not that I know what that is). I have also read that you can only use a canonical tag on a page-by-page basis, so that's going to take too long.
Thanks, Ben
-
Hi Guys,
Thanks for your help.
I decided that updating the robots.txt file would be the best option.
Ben
-
Technically, your URL:
http://www.capitalspreads.com/news
is really:
http://www.capitalspreads.com/news/index.php
So just add this line to robots.txt:
Disallow: /news/index.php
You won't be disallowing the pages underneath it, but you will be blocking the page that contains all the duplicate content.
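If you want to sanity-check that rule before relying on it, Python's standard-library robotparser can simulate it (a sketch: the robots.txt content below is just the single rule suggested above, and the URLs are the ones from this thread):

```python
from urllib import robotparser

# A hypothetical robots.txt containing only the rule suggested above
rules = """User-agent: *
Disallow: /news/index.php
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The archive index page is blocked from crawling...
print(rp.can_fetch("Googlebot", "http://www.capitalspreads.com/news/index.php"))  # False

# ...but the standalone articles underneath /news/ remain crawlable
print(rp.can_fetch("Googlebot", "http://www.capitalspreads.com/news/uk-economic-recovery-will-take-years"))  # True
```

One thing to note: because robots.txt matches by path prefix, /news itself (without index.php) would still be crawlable under this rule.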
Also, if you prefer to do this with a meta tag on the news page, you could use "noindex, follow" to make sure Google follows the links without indexing the page itself.
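As a concrete sketch of that alternative (the surrounding markup is illustrative, not taken from the site), the tag would sit in the head of the news archive page like so:

```html
<head>
  <!-- Keep this archive page out of the index, but let crawlers follow its links -->
  <meta name="robots" content="noindex, follow">
</head>
```

Unlike the robots.txt approach, this requires the page to stay crawlable, since Google has to fetch the page to see the tag.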
-
It may not be helpful to you in this situation. I was just saying that if your server creates multiple URLs containing the same content, then as long as those URLs all carry an identical rel=canonical directive, a single canonical version of that content will be established.
-
Hi Chris,
I've read about canonicalization, but from what I could work out, I'd have to tag each of the 400-plus pages individually to solve the issue, and I don't think that's the best use of anyone's time.
I don't understand how placing the tag on a page and pointing it back at itself will help. Can you explain a little more?
Ideally I want the full article page to be indexed, as this will be more beneficial to the user. By placing the canonical tag on the snippets page and pointing it to itself, wouldn't I be telling the spider that this is the page to index?
Here are some examples:
http://www.capitalspreads.com/news - Snippets page
http://www.capitalspreads.com/news/uk-economic-recovery-will-take-years - Full article; ideally this is the page that should be indexed.
Regards
Ben
-
Ben, you use the rel=canonical directive in the header of the page with the original source of the content (pointing to itself), and in the header of every reproduction of that page, pointing back to the original source. So it's not necessarily a page-by-page solution. Have you read through this yet? Canonicalization and the Canonical Tag - Learn SEO - Moz
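As a sketch of what that looks like, using the example URLs from this thread (the surrounding markup is illustrative):

```html
<!-- In the <head> of the full article page: a self-referencing canonical -->
<link rel="canonical" href="http://www.capitalspreads.com/news/uk-economic-recovery-will-take-years">

<!-- In the <head> of any other URL that reproduces this article's content,
     the identical tag points back to the original -->
<link rel="canonical" href="http://www.capitalspreads.com/news/uk-economic-recovery-will-take-years">
```

In practice the article templates emit this once, with the article's own URL, so the tag doesn't have to be hand-placed page by page.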