BEING PROACTIVE ABOUT CONTENT DUPLICATION...
-
So we all know that duplicate content is bad for SEO.
I was just thinking: whenever I post new content to a blog, website page, etc., there should be something I can do to tell Google (in fact, all search engines) that I just created and published this content, and that I am the original source, so that if anyone else copies it, they get penalised and not me.
Would appreciate your answers.
Regards,
-
Thanks for this link, and for breaking the myth that Google accepts "first indexed is original".
-
Daylan offers some good points. If you decide to use Tynt, you may want to consider AddThis, which offers the same feature but also includes many social networking options.
-
As Ryan said, rel="author" would help.
Another thing that might help is an app called Tynt. It adds a little snippet of code so that whenever your content is copied from your site, an attribution link to your domain is attached when that copy is pasted. It's not foolproof, of course, and the URL can be removed, but I like to think of it as one more safeguard for attributing content to your site (plus it's good for link building).
One bit of advice I found useful: if you link to other content on your site in your posts, use absolute URLs to your own domain. That way, if your content is scraped, you at least get a direct link back to your site, plus added kudos for being relevant to the topic, even if the post is picked up by someone else.
Of course, none of this is foolproof, but these are a couple of added measures that might help your site benefit a little if your content is stolen.
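To illustrate the absolute-path tip: an internal link written with a full URL survives scraping intact, while a relative one ends up pointing at the scraper's domain (the domain and path here are placeholders):

```html
<!-- Relative link: points at the scraper's site if the HTML is copied verbatim -->
<a href="/guides/seo-basics">our SEO basics guide</a>

<!-- Absolute link: still points back to your domain wherever the HTML ends up -->
<a href="https://www.example.com/guides/seo-basics">our SEO basics guide</a>
```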
-
Thanks for the response, Ryan. Again, you were a great help.
-
http://www.google.com/support/webmasters/bin/answer.py?answer=1229920
Google is working on it. They implemented rel=author, which is what the link above describes. They do not presently accept the "first indexed is original" idea, but it could be taken as an indicator.
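For reference, the authorship markup the linked article describes was a link from the content to the author's Google profile; a minimal sketch (the profile URL and name are placeholders) looks like:

```html
<!-- Byline link in the article body pointing at the author's Google profile -->
Written by <a rel="author" href="https://plus.google.com/112345678901234567890">Jane Doe</a>
```

Google later dropped authorship display in results, but this was the documented form at the time of this thread.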
Related Questions
-
If I have two brands and I market one in English (BrandA.com) and one in Spanish (BrandB.com), and the websites are identical but in different languages, would that have a negative impact on SEO due to duplicate content?
I have a client who wants a website in Spanish and one in English. Typically we would use a multi-language plugin on a single site (brandA.com/en or /es), but this client markets to their Spanish-speaking constituents under a different brand. So I am wondering: if we have BrandA.com in English and the exact same content in Spanish at BrandB.com, will there be negative SEO implications, and/or will it be recognized as duplicate content by search engines?
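If both sites stay live, one common way to signal the relationship is cross-domain hreflang annotations, placed identically in the head of both versions of each page. This is a sketch assuming BrandA.com is the English site and BrandB.com the Spanish one; the paths are placeholders:

```html
<!-- In the <head> of the English page on BrandA.com, mirrored on BrandB.com -->
<link rel="alternate" hreflang="en" href="https://www.branda.com/services/" />
<link rel="alternate" hreflang="es" href="https://www.brandb.com/servicios/" />
```

Translated content is generally not treated as duplicate content in the first place; the annotations just make the pairing explicit so each audience is served the right version.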
Intermediate & Advanced SEO | | Designworks-SJ1 -
Search Causing Duplicate Content
I use OpenCart and have found that a lot of my duplicate content (mainly on products) is caused by the search function. Is there a simple way to tell Google to ignore the search function pathway? Or is this particular action not recommended? Here are two examples:
http://thespacecollective.com/index.php?route=product/search&tag=cloth
http://thespacecollective.com/index.php?route=product/search
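One simple way to keep crawlers out of a search pathway like this is a wildcard rule in robots.txt, based on the route= parameter shown in the URLs above (Google supports the * wildcard, though not every crawler does):

```text
User-agent: *
Disallow: /*route=product/search
```

Note that this only blocks crawling; pages already in the index are better handled with a meta robots noindex tag on the search results template.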
Intermediate & Advanced SEO | | moon-boots0 -
Bigcommerce & Blog Tags causing Duplicate Content?
Curious why Moz would pick up our blog tags as causing duplicate content, when each blog post has a rel=canonical tag pointing to the post itself, and the tag pages point to the blog as a whole. I kind of want to get rid of the tags altogether now, but I also feel they can add some extra value to UX later on when we have many more blog posts. Curious if anyone knows a way around this, or a best-practice solution when faced with such odd issues? I can see why the duplicate content would happen, but when grouping content into categories?
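For comparison, a tag page canonicalized to the blog root as described above would carry something like this (placeholder domain and path):

```html
<!-- In the <head> of /blog/tag/example-tag/ -->
<link rel="canonical" href="https://www.example.com/blog/" />
```

Crawlers still fetch and compare the duplicate body content before honoring the hint, which is likely why tools such as Moz flag the tag pages even with the canonical in place.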
Intermediate & Advanced SEO | | Deacyde0 -
Duplicate Page Content
We have different plans that you can sign up for - how can we rectify the duplicate page content and title issue here? Thanks.
| http://signup.directiq.com/?plan=100 | 0 | 1 | 32 | 1 | 200 |
| http://signup.directiq.com/?plan=104 | 0 | 1 | 32 | 1 | 200 |
| http://signup.directiq.com/?plan=116 | 0 | 1 | 32 | 1 | 200 |
| http://signup.directiq.com/?plan=117 | 0 | 1 | 32 | 1 | 200 |
| http://signup.directiq.com/?plan=102 | 0 | 1 | 32 | 1 | 200 |
| http://signup.directiq.com/?plan=119 | 0 | 1 | 32 | 1 | 200 |
| http://signup.directiq.com/?plan=101 | 0 | 1 | 32 | 1 | 200 |
| http://signup.directiq.com/?plan=103 | 0 | 1 | 32 | 1 | 200 |
| http://signup.directiq.com/?plan=5 |
Intermediate & Advanced SEO | | directiq0 -
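A common fix for parameter-only variants like these is to canonicalize them all to one preferred URL. This is a sketch assuming every ?plan= URL renders essentially the same signup page:

```html
<!-- In the <head> of every signup.directiq.com/?plan=... variant -->
<link rel="canonical" href="http://signup.directiq.com/" />
```

The plan links keep working for visitors, while the duplicate titles and content consolidate onto the one canonical URL.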
Subcategories within "New Arrivals" section - duplicate content?
Hi there, My client runs an e-commerce store selling shoes that features a section called "New Arrivals" with subcategories, such as "shoes," "wedges," "boots," "sandals," etc. There are already main subcategories on the site that target these terms. These are specifically pages for "New Arrivals - Boots," etc. The shoes listed on each new arrivals subcategory page are also listed in the main subcategory page. Given that there is not really any search volume for "Brand + new arrivals in boots," but lots of search volume for "Brand + boots," what is the proper way to handle these new arrivals subcategory pages? Should each subcategory have a rel=canonical tag pointing to the main subcategory? Should they be de-indexed? Should I keep them all indexed but try to make the content as unique as possible? Thank you!
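If the rel=canonical route is chosen, each new-arrivals subcategory would point to the matching main subcategory; a sketch with placeholder paths:

```html
<!-- In the <head> of /new-arrivals/boots/ -->
<link rel="canonical" href="https://www.example.com/boots/" />
```

That consolidates the shared product listings onto the main "boots" page, which is the one with real search volume.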
Intermediate & Advanced SEO | | FPD_NYC0 -
Is legacy duplicate content an issue?
I am looking for some proof, or at least evidence, as to whether or not sites are being hurt by duplicate content. The situation is that there were four content-rich newspaper/magazine-style sites that were basically just reskins of each other. [ a tactic used under a previous regime 😉 ] The least busy of the sites has since been discontinued and 301d to one of the others, but the traffic on that site was so low as to be lost in noise, so it is unclear whether the redirect was any benefit. For the last ~2 years all the sites have had unique content going up, but there are still archives of articles that appear on all 3 remaining sites. I would like to know whether to redirect, remove or rewrite that content, but it is a big decision - the number of duplicate articles? 263,114! Is there a chance this is hurting one or more of the sites? Is there any way to prove it, short of actually doing the work?
Intermediate & Advanced SEO | | Fammy0 -
Is all duplication of HTML title content bad?
In light of Hummingbird, and given that HTML titles are the main selling point in SERPs, is my approach to keyword-rich HTML titles bad? Where possible I try to include the top key phrase describing the page, and then a second top key phrase describing what the company/site as a whole is or does. For instance, an estate agent's site could have HTML titles such as:
Buy Commercial Property in Birmingham | Commercial Estate Agents
Birmingham Commercial Property Tips | Commercial Estate Agents
In order to preserve valuable characters I have also been omitting brand names other than on the home page... is this also poor form?
Intermediate & Advanced SEO | | SoundinTheory0 -
How to remove duplicate content, which is still indexed, but not linked to anymore?
Dear community,
A bug in the tool we use to create search-engine-friendly URLs (sh404sef) changed our whole URL structure overnight, and we only noticed after Google had already indexed the pages. Now we have a massive duplicate content issue, causing a harsh drop in rankings. Webmaster Tools shows over 1,000 duplicate title tags, so I don't think Google understands what is going on.
Right URL: abc.com/price/sharp-ah-l13-12000-btu.html
Wrong URL: abc.com/item/sharp-l-series-ahl13-12000-btu.html (created by mistake)
After that, we changed all URLs back to the "right URLs", and a few days later set up a 301 redirect for all "wrong URLs". Still, a massive number of pages is in the index twice. As we no longer link internally to the "wrong URLs", I am not sure if Google will re-crawl them very soon. What can we do to solve this issue and tell Google that all the "wrong URLs" now redirect to the "right URLs"? Best, David
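For a single wrong/right pair like the one quoted above, the 301 would look like this in an Apache .htaccess (sh404sef normally generates these rules itself, so this is just an illustrative sketch):

```apache
# Permanent redirect from the mistakenly generated URL to the correct one
Redirect 301 /item/sharp-l-series-ahl13-12000-btu.html /price/sharp-ah-l13-12000-btu.html
```

Since the wrong URLs are no longer linked internally, a common way to speed things up is to list them in a temporary XML sitemap: Google re-crawls the listed URLs, sees the 301s, and drops the duplicates from the index.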
Intermediate & Advanced SEO | | rmvw0