Best way to deal with over 1000 pages of duplicate content?
-
Hi
Using the Moz tools, I've found over 1,000 pages of duplicate content, which is a bit of an issue!
95% of the issues arise from our news section and news archive, as it's been running for some time now.
We upload around five full articles a day. Each article has a standalone page but can only be reached via a master archive. The master archive sits in a top-level section of the site and shows snippets of the articles; if a user clicks on a snippet, it takes them to the full-page article. When a news article is added, the snippets move onto the next page, and continue to move through the pages as new articles are added.
The problem is that the standalone articles can only be reached via the snippet on the master page, and Google is flagging this as duplicate content because the snippet is a duplicate of the article.
What is the best way to solve this issue?
From what I have read, using a meta noindex tag seems to be the answer (not that I know what that is). I've also read that you can only use a canonical tag on a page-by-page basis, so that's going to take too long.
Thanks Ben
-
Hi Guys,
Thanks for your help.
I decided that updating the robots.txt file would be the best option.
Ben
-
Technically, your URL:
http://www.capitalspreads.com/news
is really:
http://www.capitalspreads.com/news/index.php
So just add this line to robots.txt:
Disallow: /news/index.php
You won't be disallowing the pages underneath it, but you will be blocking the one page that contains all the duplicate content.
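For reference, a minimal robots.txt along these lines might look like the sketch below (assuming the site has no existing robots.txt rules to merge with) — note that a Disallow line needs to sit under a User-agent line to take effect:

```
# http://www.capitalspreads.com/robots.txt
User-agent: *
Disallow: /news/index.php
```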
Also, if you prefer to do this with a meta tag on the news page, you could always use "noindex, follow" to make sure Google still follows the links — it just won't index the page itself.
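As a sketch, that meta tag would go in the head of the snippets page (the /news archive page in Ben's case), something like:

```html
<!-- In the <head> of the /news snippets page:
     don't index this page, but do crawl the links on it -->
<meta name="robots" content="noindex, follow">
```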
-
It may not be helpful to you in this situation. I was just saying that if your server creates multiple URLs containing the same content, as long as those URLs also contain the identical rel=canonical directive, a single canonical version of that content will be established.
-
Hi Chris,
I've read about canonicalization, but from what I could work out, I'd have to tag each of the 400-plus pages individually to solve the issue, and I don't think that's the best use of anyone's time.
I don't understand how placing the tag and pointing it back at itself will help. Can you explain a little more?
Ideally I want the full article page to be indexed, as this will be more beneficial to the user. By placing the canonical tag on the snippets page and pointing it to itself, wouldn't I be telling the spider that this is the page to index?
Here are some examples:
http://www.capitalspreads.com/news - Snippets page
http://www.capitalspreads.com/news/uk-economic-recovery-will-take-years - Full article; ideally this is the page that should be indexed.
Regards
Ben
-
Ben, you place the rel=canonical directive in the header of the page containing the original source of the content (pointing to itself), and in every reproduction of that page (pointing back to the original source). So it's not necessarily a page-by-page solution. Have you read through this yet? Canonicalization and the Canonical Tag - Learn SEO - Moz
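A quick sketch using Ben's example URL — the tag goes in the head of the original article and of any page that reproduces its content, both pointing at the same URL:

```html
<!-- On the full article page itself (self-referencing canonical) -->
<link rel="canonical"
      href="http://www.capitalspreads.com/news/uk-economic-recovery-will-take-years">

<!-- On any other page that reproduces this article's content,
     the identical tag points back to the original -->
<link rel="canonical"
      href="http://www.capitalspreads.com/news/uk-economic-recovery-will-take-years">
```

Because both copies declare the same canonical URL, the engines consolidate them into a single canonical version, which is why it isn't a page-by-page decision once the tag is templated in.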