Any experience regarding what % is considered duplicate?
-
Some sites (including one or two I work with) have a legitimate reason to carry duplicate content, such as product descriptions. One way to deal with it is to add other, unique content to the page.
It would be helpful to have guidelines regarding what percentage of the content on a page should be unique. For example, if you have a page with 1,000 words of duplicate content, how many words of unique content should you add for the page to be considered OK?
I realize that a) Google will never reveal this and b) it probably varies a fair bit based on the particular website. However...
Does anyone have any experience in this area?
(Example: you added 300 words of unique content to each of the 250 pages on your site, each of which previously had 100 words of duplicate content, and that improved your rankings.)
Any input would be appreciated!
Note: Just to be clear, I am NOT talking about "spinning" duplicate content to make it "unique". I am talking about adding unique content to a page that has legitimate duplicate content.
-
Check out this video: http://www.seomoz.org/blog/whiteboard-friday-dealing-with-duplicate-content
It will give you a much more thorough answer than just a percentage of uniqueness. But if you want that kind of number, the guesses I mostly hear are between 20% and 40% unique content.
Related Questions
-
Duplicate without user-selected canonical excluded
We have PDF files uploaded to the WordPress media library and used on our website. As these PDFs duplicate content from the original publishers, we have marked links to these PDF URLs as nofollow. These pages are also disallowed in robots.txt. Now, Google Search Console shows these pages as excluded: "Duplicate without user-selected canonical". As it turns out, we cannot use a canonical tag on PDF pages to point to the original PDF source. If we embed a PDF viewer on our website and fetch the PDFs by passing the URLs of the original publisher, would the PDFs still be read as text by Google and again create a duplicate content issue? Another thing: when a PDF expires and is removed, it leads to a 404 error. And if we direct our users to the third-party website, it adds to our bounce rate. What is the appropriate way to handle duplicate PDFs? Thanks
Intermediate & Advanced SEO | dailynaukri
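For what it's worth, Google does support rel=canonical for non-HTML files such as PDFs via the Link HTTP response header, though Google can only see that header if the PDFs are crawlable (a robots.txt disallow would hide it). A minimal Apache .htaccess sketch, assuming mod_headers is enabled and using a hypothetical file name and publisher URL:

  # Requires mod_headers; the file name and publisher URL below are hypothetical
  <Files "white-paper.pdf">
    Header add Link "<https://original-publisher.example.com/white-paper.pdf>; rel=canonical"
  </Files>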
Duplicate Content Question With New Domain
Hey everyone, I hope your day is going well. I have a question regarding duplicate content. Let's say that we have Website A and Website B. Website A is a directory for multiple stores & brands. Website B is a new domain that will satisfy the delivery niche for these stores & brands (users can click a "Delivery" anchor on Website A and be redirected to Website B). We want Website B to rank organically when someone types "<brand> delivery" into Google. Website B has not been created yet. The issue: Website B has to be a separate domain from Website A (no getting around this), and Website B will also pull all of its content from Website A (menus, reviews, about, etc.). Will we face any duplicate content issues on either Website A or Website B in the future? Should we rel=canonical to the main website even though we want Website B to rank organically?
Intermediate & Advanced SEO | imjonny
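If Website B does republish Website A's pages wholesale, one commonly discussed option is a cross-domain rel=canonical from each copied page back to its original, though a canonicalized page generally will not rank on its own, which works against the goal of ranking Website B organically. A minimal sketch with hypothetical URLs:

  <!-- Placed on the Website B page that reuses Website A's content (hypothetical URLs) -->
  <link rel="canonical" href="https://website-a.example.com/stores/brand-x/" />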
Duplicate currency page variations?
Hi guys, I have duplicate category pages across an ecommerce site: http://s30.postimg.org/dk9avaij5/screenshot_160.jpg For the currency-based pages, I was wondering whether it would be best (or easier) to exclude them in robots.txt or to use a rel=canonical. If using robots.txt (which would be much easier to implement than rel=canonical) to exclude the currency versions from being indexed, what would the correct exclusion be? Would it look something like: Disallow: */?currency/ Google is also indexing the currency-based pages: http://s4.postimg.org/hjgggq1tp/screenshot_161.jpg Cheers,
Chris
Intermediate & Advanced SEO | jayoliverwright
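On the robots.txt side, Google's wildcard rules are usually written from the start of the path, so a currency query parameter is typically matched with a pattern like the sketch below (assuming the variants use a ?currency= parameter, which is only a guess from the screenshots). Note that robots.txt only blocks crawling; it does not consolidate URLs that are already indexed, which is what rel=canonical does.

  # robots.txt sketch; the ?currency= parameter name is an assumption
  User-agent: *
  Disallow: /*?currency=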
Duplicate Content: Organic vs Local SEO
Does Google treat them differently? I found something interesting just now and decided to post it up http://www.daviddischler.com/is-duplicate-content-treated-differently-when-local-seo-comes-into-play/
Intermediate & Advanced SEO | daviddischler
Duplicated Pages and Forums
Does duplicate content hurt only that particular duplicated content, or the entire site? There are some parts of my site that I don't care about ranking highly in search engines. For example, I have a forum, and there are certain links that only logged-in people can see. If you aren't logged in, they take you to a page that tells you to log in. Google, obviously not logged in, interprets this as lots and lots of copies of the same page. Should I just leave it alone, since I don't care whether those pages make it into search engines? Will it hurt the entire site? For example, can my homepage's search rankings decrease? That leads to my next question: what is the best way to optimize a forum? Whenever someone posts a new reply, it seems another URL for the same forum thread is created, which is obviously duplicated. In other words, if 20 people post on a thread, I believe my site adds 20 URLs for that page. Does anyone know how to fix this?
Intermediate & Advanced SEO | waltergah
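On the forum URL question, the usual fix is a rel=canonical in the head of every URL variant of a thread, pointing at one preferred thread URL; most forum platforms have a setting or plugin for this. A minimal sketch, assuming a hypothetical thread URL structure:

  <!-- Placed in the <head> of every URL variant of the same thread (hypothetical URL) -->
  <link rel="canonical" href="https://www.example.com/forum/threads/my-topic.123/" />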
Penalized for duplication?
Hi there, In February 2012 one of my web pages (.co.uk) dropped from page 1 to page 5 for the keyword 'Menopause' and was replaced by a .PDF. In late January 2012 I had launched a duplicate version of this web page targeting .ie, due to differences in currency and legalities. I made sure in Webmaster Tools that both websites were geographically targeted correctly, and I am also using hreflang tags on both web pages. One strange thing: if I copy the first few paragraphs of the web page in question into Google.co.uk, it's the .ie web page that appears. Any help on why this has happened would be appreciated. Kind Regards
Intermediate & Advanced SEO | Paul78
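For reference, the hreflang setup mentioned in the question is normally a reciprocal pair of annotations carried by both the .co.uk and .ie pages, along the lines of this sketch with hypothetical URLs:

  <!-- Both pages carry both annotations (hypothetical URLs) -->
  <link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/menopause/" />
  <link rel="alternate" hreflang="en-ie" href="https://www.example.ie/menopause/" />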
SERP Experience After You Resubmit Your Site to Google
Hello everyone, We suddenly noticed that our keywords fell off the map and discovered that porn had been placed (via .htaccess redirects and masking) on our site. The porn links caused Google to drop us. We scrubbed our .htaccess file and asked Google to reindex our site 3 weeks ago. Does anyone have experience with reindexing? If so, how long were you down, and did your keyword positions return eventually? Thanks, Bob
Intermediate & Advanced SEO | impressem
Being Proactive About Content Duplication
So we all know that duplicate content is bad for SEO. I was just thinking: whenever I post new content to a blog, website page, etc., there should be something I can do to tell Google (in fact, all search engines) that I just created and posted this content to the web and that I am the original source, so that if anyone else copies it, they get penalised and not me. Would appreciate your answers 🙂 Regards,
Intermediate & Advanced SEO | TopGearMedia