Duplicate content mess
-
One website I'm working with keeps an HTML archive of content from various magazines they publish. Some articles were repeated across different magazines, sometimes up to 5 times. These articles were also used as content elsewhere on the same website, resulting in up to 10 duplicates of the same article on one site.
Regarding the 5 duplicates that aren't contained in a magazine, I can delete (resulting in 404s) all but the highest-value version of each (most don't have any external links). There are hundreds of occurrences of this, so it seems infeasible to 301 or noindex them all.
After seeing how their system works, I can canonical the remaining duplicate that isn't contained in a magazine to the corresponding original magazine version - but I can't canonical any of the other versions inside the magazines to the original. I can't delete those other duplicates either, as they're part of the content of a particular issue of a magazine. The best thing I can think of is adding a link in the magazine duplicates to the original article, something along the lines of "This article originally appeared in...", though I get the impression the client wouldn't want to reveal that they used to share so much content across their different magazines.
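Concretely, the markup I have in mind is something like this - the URLs are just placeholders, not the client's real paths:

    <!-- On the non-magazine duplicate, pointing at the original magazine version -->
    <link rel="canonical" href="https://www.example.com/magazine-a/issue-12/original-article" />

    <!-- In each magazine duplicate I can't canonical, the attribution link instead -->
    <p>This article originally appeared in <a href="https://www.example.com/magazine-a/issue-12/original-article">Magazine A, Issue 12</a>.</p>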
The duplicate pages across the different magazines do differ slightly as a result of the different Contents menu for each magazine.
Do you think what I'm doing will at least be an improvement on how it was, or is there something further I can do? Is adding the links enough?
Thanks.
-
You're right about the 301s, and noindex would be a massive task that I'm not sure is worthwhile. Also I'm not sure if I want to list hundreds of pages in robots.txt.
By "back to back" do you mean "compare link metrics"? A lot of these pages show as "No Data Available for this URL"; some of them are quite deep within the site, so I don't know if that's why or if Mozscape can tell that they're duplicate content. The articles that are not part of the magazines usually seem to have a PA of 30+ judging by my spot checks, but even some of those duplicated from magazine articles (and sitting outside the magazines) have no data available, despite being easier to crawl than the magazine content.
-
If adding meta tags, redirects, etc. to all of the pages is too labor intensive and the SEO return from those pages is low, then perhaps you could just block search engine access to certain sections of the website via the robots.txt file.
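For example, if those sections sit under their own directories, a few lines in robots.txt would cover it (the folder names below are just placeholders for wherever the archive actually lives):

    User-agent: *
    Disallow: /magazine-archive/
    Disallow: /article-reprints/

Just bear in mind robots.txt blocks crawling rather than indexing, so URLs that are already indexed or linked externally can still appear in results.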
-
Given the way Alex describes the separate magazines, I am thinking they wouldn't like having the 301 redirects from a branding perspective. I like the idea of adding an attribution link to the original article. I have doubts about "noindex" because I think that in many cases Google completely ignores this directive, so I'm not sure it's worth going through all the trouble.
Have you tried putting the "duplicates" back to back in Open Site Explorer? I am really curious to know what that looks like.
-
Instead of deleting, you can just noindex + add a link to the original article.
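The noindex half is just a meta robots tag in the head of each duplicate, for example:

    <meta name="robots" content="noindex, follow" />

The "follow" hint keeps the links on the page (including the new link to the original article) crawlable even though the page itself drops out of the index.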
-
Instead of deleting, you can 301 redirect to the original article.
This removes all duplicate content issues.
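Alex hasn't said what the site runs on, but as an illustration, on an Apache server a one-off rule like this in .htaccess does the job (both paths are placeholders); IIS and nginx have their own equivalents:

    Redirect 301 /magazine-a/duplicate-article https://www.example.com/original-article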
-
Related Questions
-
Duplicate content question
Hi there, I work for a theater news site. We have an issue where our system creates a chunk of duplicate content in Google's eyes and we're not sure how best to solve it. When an editor produces a video, it simultaneously 1) creates a page with its own static URL (e.g. http://www.theatermania.com/video/mary-louise-parker-tommy-tune-laura-osnes-and-more_668.html); and 2) displays said video on a public index page (http://www.theatermania.com/videos/). Since the content is very similar, Google sees them as duplicates. What should we do about this? We were thinking that one solution would be to dynamically canonicalize the index page to the static page whenever a new video is posted, but would Google frown on this? Alternatively, should we simply nofollow the index page? Lastly, are there any solutions we may have missed entirely?
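For reference, the dynamic canonical being considered would just be one tag in the head of the index page, swapped each time a new video is posted (URL taken from the question):

    <link rel="canonical" href="http://www.theatermania.com/video/mary-louise-parker-tommy-tune-laura-osnes-and-more_668.html" />

Whether Google honors it is another matter - a canonical on an index page that lists many videos, not just the newest one, may simply be ignored as incorrect.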
-
Best Way to Incorporate FAQs into Every Page - Duplicate Content?
Hi Mozzers, We want to incorporate a 'Dictionary' of terms onto quite a few pages on our site, similar to an FAQ system. The 'Dictionary' has 285 terms in it, with about 1 sentence of content for each one (approximately 5,000 words total). The content is unique to our site and not keyword stuffed, but I am unsure what Google will think about us having all this shared content on these pages. I have a few ideas about how we can build this, but my higher-ups really want the entire dictionary on every page. Thoughts? Image of what we're thinking here - http://screencast.com/t/GkhOktwC4I Thanks!
-
Is a different location in page title, h1 title, and meta description enough to avoid Duplicate Content concern?
I have a dynamic website which will have location-based internal pages, each with a <title> tag, <h1> title, and meta description tag that include the subregion of a city. Each page will also have an 'info' section describing the generic product/service offered, which will also include the name of the subregion. The specific product/service content will be dynamic but in some cases will be almost identical - i.e. subregion A may sometimes have the same specific content result as subregion B. Will the difference of just the location in each of the above tags be enough for me to avoid a duplicate content concern?
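To make the setup concrete, the per-subregion variation described would amount to something like this (the subregion and service names are invented placeholders):

    <title>Widget Cleaning in Springfield North | Example Co</title>
    <meta name="description" content="Professional widget cleaning for homes and businesses in Springfield North." />
    <h1>Widget Cleaning in Springfield North</h1>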
-
HELP! How does one prevent regional pages from being counted as "duplicate content," "duplicate meta descriptions," et cetera...?
The organization I am working with has multiple versions of its website geared towards different regions: US - http://www.orionhealth.com/ CA - http://www.orionhealth.com/ca/ DE - http://www.orionhealth.com/de/ UK - http://www.orionhealth.com/uk/ AU - http://www.orionhealth.com/au/ NZ - http://www.orionhealth.com/nz/ Some of these sites have very similar pages which are registering as duplicate content, meta descriptions and titles. Two examples are http://www.orionhealth.com/terms-and-conditions and http://www.orionhealth.com/uk/terms-and-conditions. Even though the content is the same, the navigation is different since each region has different product options/services, so a redirect won't work: the navigation on the main US site differs from the navigation on the UK site. A rel=canonical seems like a viable option, but (correct me if I'm wrong) it tells search engines to only index the main page - in this case the US version - and I still want the UK site to appear to search engines. So what is the proper way of treating similar pages across different regional directories? Any insight would be GREATLY appreciated! Thank you!
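One pattern the question doesn't mention, for completeness: Google's documented mechanism for regional variants of the same page is hreflang annotations rather than canonicals or redirects, with each version listing all of its regional equivalents, e.g. (URLs taken from the question):

    <link rel="alternate" hreflang="en-us" href="http://www.orionhealth.com/terms-and-conditions" />
    <link rel="alternate" hreflang="en-gb" href="http://www.orionhealth.com/uk/terms-and-conditions" />
    <link rel="alternate" hreflang="en-au" href="http://www.orionhealth.com/au/terms-and-conditions" />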
-
How do I best handle Duplicate Content on an IIS site using 301 redirects?
The crawl report for a site indicates the existence of both www and non-www content, which I am aware is duplicate. However, only the www pages are indexed**, which is throwing me off. There are no 'noindex' tags on the non-www pages, nothing in robots.txt, and I can't find a sitemap. I believe a 301 redirect from the non-www pages is what is in order. Is this accurate? I believe the site is built using ASP.NET on IIS, as the pages end in .asp (not very familiar to me). There are multiple versions of the homepage, including 'index.html' and 'default.asp', and meta refresh tags are being used to point to 'default.asp'. What has been done: 1. I set the preferred domain to 'www' in Google's Webmaster Tools, as most links already point to www. 2. The WordPress blog, which sits in a /blog subdirectory, has been set with rel="canonical" pointing to the www version. What I have asked the programmer to do: 1. Add 301 redirects from the non-www pages to the www pages. 2. Set all versions of the homepage to redirect to www.site.org using 301 redirects instead of meta refresh tags. Have all bases been covered correctly? One more concern: I notice the canonical tags in the source code of the blog use a trailing slash - will this create a problem of inconsistency? (And why is rel="canonical" the standard for WordPress SEO plugins while 301 redirects are preferred for SEO?) Thanks a million! **To clarify regarding the indexation of non-www pages: a search for 'site:site.org -inurl:www' returns only 7 non-www pages, all blog pages without content (Code 200, not 404 - maybe deleted or moved - which is perhaps another 301 redirect issue).
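For the non-www to www piece on IIS 7+, the usual approach is a rewrite rule in web.config (this assumes the URL Rewrite module is installed; 'site.org' mirrors the placeholder domain in the question):

    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <!-- 301 any request for site.org to the www host, preserving the path -->
            <rule name="Redirect non-www to www" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^site\.org$" />
              </conditions>
              <action type="Redirect" url="http://www.site.org/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>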
-
Need help with duplicate content. Same content; different locations.
We have 2 sites that will have duplicate content (e.g., one company that sells the same products under two different brand names for legal reasons). The two companies are in different geographical areas, but the client will put the same content on each page because they're the same product. What is the best way to handle this? Thanks a lot.
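One option worth knowing about: rel=canonical is honored across domains, so if you can live with only one brand's version ranking, the other brand's pages can point at it (both domains below are placeholders):

    <!-- On brand B's product page -->
    <link rel="canonical" href="https://www.brand-a.example/products/widget" />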
-
Is this duplicate content something to be concerned about?
On the 20th of February a site I work on took a nosedive for the main terms I target. Unfortunately I can't provide the URL for this site. All links have been developed organically, so I have ruled links out as something which could have had an impact. During the past 4 months I've cleaned up all WMT errors and applied appropriate redirects wherever applicable. During this process I noticed that mydomainname.net contained identical content to the main mydomainname.com site. Upon discovering this problem I 301 redirected all .net content to the main .com site. Nothing has changed in terms of rankings since doing this about 3 months ago. I also found paragraphs of duplicate content on other sites (competitors in different countries). Although entire pages haven't been copied, there is still enough content to highlight similarities. As this content was written from scratch and Google would have seen it within its crawl and index process, I wanted to get people's thoughts on whether this is something I should be concerned about. Many thanks in advance.
-
Managing Large Regulated or Required Duplicate Content Blocks
We work with a number of pharmaceutical sites that, under FDA regulation, must include an "Important Safety Information" (ISI) content block on each page of the site. In many cases this duplicate content is not only provided on a dedicated ISI page, it is quite often longer than what would be considered the primary content of the page. At first blush a rel=canonical tag might appear to be a solution to signal search engines that there is a specific page for the ISI content and avoid being penalized, but the pages also contain original content that should be indexed, as it has user benefit beyond the information contained within the ISI. Has anyone else run into this challenge with regulated duplicate boilerplate and developed a workaround for handling duplicate content at the paragraph level rather than the page level? One clever suggestion was to treat it as a graphic; however, for a pharma site this would be a huge graphic.