Best way to deal with over 1000 pages of duplicate content?
-
Hi
Using the Moz tools I have found over 1,000 pages of duplicate content, which is a bit of an issue!
95% of the issues arise from our news and news archive, as it's been going for some time now.
We upload around 5 full articles a day. Each article has a standalone page but can only be reached via a master archive. The master archive sits in a top-level section of the site and shows snippets of the articles; if a user clicks on one, it takes them to the full-page article. When a news article is added, the snippets move onto the next page, and keep shifting through the pages as new articles are added.
The problem is that the standalone articles can only be reached via the snippets on the master page, and Google is flagging this as duplicate content because each snippet duplicates part of its article.
What is the best way to solve this issue?
From what I have read, using a meta noindex seems to be the answer (not that I know what that is). I have also read that you can only use a canonical tag on a page-by-page basis, so that's going to take too long.
Thanks Ben
-
Hi Guys,
Thanks for your help.
I decided that updating the robots.txt file would be the best option.
Ben
-
Technically, your URL:
http://www.capitalspreads.com/news
is really:
http://www.capitalspreads.com/news/index.php
So just add this rule to robots.txt (note that it must sit under a User-agent group to be valid):
User-agent: *
Disallow: /news/index.php
You won't be disallowing the pages underneath it but you will be blocking the page that contains all dupe content.
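If you want to sanity-check that rule before deploying it, Python's built-in robots.txt parser can simulate it. A sketch, using the example URLs from this thread:

```python
# Sketch: verify that a robots.txt rule blocks only the archive page,
# not the standalone article pages underneath /news/.
from urllib.robotparser import RobotFileParser

robots_txt = """User-agent: *
Disallow: /news/index.php
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The archive page itself is blocked...
print(parser.can_fetch("*", "http://www.capitalspreads.com/news/index.php"))  # False
# ...but a standalone article page is still crawlable.
print(parser.can_fetch("*", "http://www.capitalspreads.com/news/uk-economic-recovery-will-take-years"))  # True
```

Because robots.txt rules match by path prefix, `Disallow: /news/index.php` blocks only that one page, exactly as described above.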
Also, if you prefer to do this with a meta tag on the news page, you could always do "noindex, follow" to make sure Google follows the links - they just don't index the page.
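For reference, that meta tag goes in the head of the news archive page and would look something like this (a sketch):

```html
<!-- "noindex, follow": don't index this archive page, but do follow its
     links so the standalone articles still get crawled and indexed. -->
<head>
  <meta name="robots" content="noindex, follow">
</head>
```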
-
It may not be helpful to you in this situation. I was just saying that if your server creates multiple URLs containing the same content, as long as those URLs also contain the identical rel=canonical directive, a single canonical version of that content will be established.
-
Hi Chris,
I've read about canonicalization, but from what I could work out I'd have to tag each of the 400-plus pages individually to solve the issue, and I don't think that's the best use of anyone's time.
I don't understand how placing the tag and pointing it back at itself will help. Can you explain a little more?
Ideally I want the full article page to be indexed, as this will be more beneficial to the user. By placing the canonical tag on the snippets page and pointing it to itself, wouldn't I be telling the spider that this is the page to index?
Here are some examples:
http://www.capitalspreads.com/news - Snippets page
http://www.capitalspreads.com/news/uk-economic-recovery-will-take-years - Full article; ideally this is the page that gets indexed.
Regards
Ben
-
Ben, you place the rel=canonical directive in the header of the page with the original source of the content (pointing to itself), and in every reproduction of that page, also pointing to the original source. So it's not necessarily a page-by-page solution. Have you read through this yet? Canonicalization and the Canonical Tag - Learn SEO - Moz
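As a sketch, using the article URL from your example: the same tag would sit in the head of the article page itself and of any page that reproduces its content, both pointing at the article URL.

```html
<!-- Placed in the <head> of the original article AND of any duplicate of it;
     both point at the one URL you want indexed. -->
<link rel="canonical" href="http://www.capitalspreads.com/news/uk-economic-recovery-will-take-years">
```

Note this wouldn't go on the /news snippets page pointing to itself; the tag only consolidates pages that carry the same content as the article.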