Duplicate content
-
Hello mozzers,
I have an unusual question. I've created a page that I'm fully aware is near-100% duplicate content. It quotes the law, so the text can't be changed. The page is very linkable in my niche. Is there a way I can build quality links to it that benefit my overall website's DA (I'm not bothered about the linkable page itself ranking) without risking Panda/duplicate-content issues?
Thanks,
Peter
-
Using noindex is one way to prevent any problems,
but what about linking to your original (law) source? The page could stay indexed, and as long as you have enough good-quality content elsewhere on your website, I don't think you'd have any trouble.
CleverPHD makes a good point about writing some text of your own!
-
If you have to quote the law, why not make the page more unique by building analysis around it? Add information from related laws and legal commentary. Nothing is ever 100% original. All modern science is built on the "shoulders of giants": researchers reference previous works and expand from there, and very commonly, summarizing old data (known as a meta-analysis) is a new way of looking at it that is considered original and helpful.
Say you needed to quote the law on drunk driving in a particular city. You probably need to do more than just quote the law: answer questions like "If I get pulled over for drunk driving, what should I do?", "If I need a lawyer, what should I do?", and "How do I find a good lawyer who specializes in drunk driving?" Show stats on how many drunk-driving offenses occur in that particular city and its suburbs. If appropriate, quote the law and link back to the government page where you found it. Shoot a video with an expert talking about all these things - you get the idea.
None of the individual pieces are "original" per se, but pulling it all together is. That's not only helpful to the user (who no longer has to spend all that time researching), but it gives you a great page that covers a nice range of keywords related to drunk-driving laws. A page like the one I describe above is very linkable and shareable.
Quote the law, link to the reference, but build content around it and you can potentially rank for it.
Good luck!
-
Hi Peter,
First of all, I would ensure the page is noindexed, as this will prevent issues. As long as the page's links are followed (don't set nofollow), PageRank will still flow through it. All noindex does is prevent Google from showing the page in search results. Keep in mind this only benefits the other pages linked from it via pass-through PageRank - Google isn't going to rank a noindexed page, or one that is pure duplication.
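The noindex-but-follow setup Andy describes is a single line in the page's head. A minimal sketch (the exact way you add it depends on your CMS; some platforms set it via a checkbox or an X-Robots-Tag HTTP header instead):

```html
<!-- Placed in the <head> of the duplicate (law) page. -->
<!-- "noindex" asks Google not to show this page in search results; -->
<!-- "follow" (the default) lets link equity keep flowing through its links. -->
<meta name="robots" content="noindex, follow">
```

Note that the page must remain crawlable (not blocked in robots.txt) for Google to see the noindex directive at all.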
"Is there a way I can build quality links to it that benefit my overall website's DA?"
So to answer your question, yes.
Alternatively, try to get people to link to your homepage instead.
-Andy