Copying Content With Permission
-
Hi, we received an email from someone who wants to copy and paste our content onto his website. He says he will keep all the links we put there and give us full credit for it. So besides keeping all the links on the page, what is the best way for him to give us credit? A link to the original article? A special meta tag? Something else?
Thank you
PS: Our site is much more authoritative than his, and we get indexed within 10 minutes of publishing a page, so I'm not worried about him outranking us with our own content.
-
Very controversial...duplicate content...
-
Syndication Source and Original Source are both generally used for the Google News algorithm at this point. For the main SERPs you would use a cross-domain rel="canonical". The problem with all of these is that they require the re-publisher to edit their HTML head on a per-article basis. That is not technologically scalable for many sites, so it could kill the deal. If they are willing to give you the rel canonical tag pointing to your domain, that is best (especially if the story includes links to your site). Otherwise, getting your site indexed first and making sure their links to your site in the copy are followable should do the trick.
Don't let them publish every single story you write though. You want readers to have a reason to come subscribe to your site if they read something on the other site.
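To illustrate the cross-domain canonical mentioned above, here is a minimal sketch of what the re-publisher would add to each copied article (both URLs are placeholders, not real addresses):

```html
<!-- In the <head> of the re-publisher's copy of the article -->
<!-- href points at the original article on the source domain -->
<link rel="canonical" href="http://www.original-site.com/original-article.html" />
```

Since the href differs for every syndicated article, this is exactly the per-article editing burden described above.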
-
Thanks Matt, that's great stuff! I always keep track of what gets indexed. And yes, choosing who to share the content with is certainly very important; I would not want a content farm related to our site in any way, especially now.
-
Hi Andres,
As long as you're getting direct followed links back to your original article, then that should be enough. A couple of other things though:
- Even though you're confident you'll be indexed before the other site, I'd still implement some embargo time on when they can publish on their site as a fallback.
- Take a look at the site itself that will be linking to you... is it something you a) want your content associated with, and b) want your link profile associated with?
Some resources you may be interested in:
[1] http://www.seomoz.org/blog/whiteboard-friday-content-technology-licensing
[2] http://googlewebmastercentral.blogspot.com/2006/12/deftly-dealing-with-duplicate-content.html (deals with syndication)
[3] http://www.mattcutts.com/blog/duplicate-content-question/
-
If this happens often you should consider using http://www.tynt.com/ and modifying your attribution settings to suit your needs.
-
I have not tested the "syndication-source" or "original-source" tags personally, but I have seen a very good case of syndication credit being used at http://www.privatecloud.com
Almost 95% of the content on this website is a word-for-word duplicate of original articles located on third-party websites. I have been tracking this site for almost 6 months now and have seen several instances of duplicate pages (with credit to the original article) indexed and ranking on Google SERPs.
Based on this example, I would agree that your technique should work fine.
-
Hi Sameer, I am not sure about using a canonical tag, since it's not our site and there may be more content on the page than just ours. He asked permission just to copy and paste, so yes, it's duplicate content, and we want it indexed for the backlinks. This is my idea:
http://googlenewsblog.blogspot.com/2010/11/credit-where-credit-is-due.html
syndication-source indicates the preferred URL for a syndicated article. If two versions of an article are exactly the same, or only very slightly modified, we're asking publishers to use syndication-source to point us to the one they would like Google News to use. For example, if Publisher X syndicates stories to Publisher Y, both should put the following metatag on those articles:
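The metatag itself appears to have been stripped from the quote above; based on that Google News announcement, it would take roughly this form (the domain and path are placeholders):

```html
<!-- Both Publisher X and Publisher Y add this to the syndicated article,
     pointing at the version they want Google News to prefer -->
<meta name="syndication-source" content="http://www.publisherX.com/wire_story_1.html">
```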
Let me know what you think.
-
Hey Andrés,
As a general rule, content is often treated as duplicate only if more than roughly 35-40% of it is copied from the original. If the person wants to copy your website word for word, here are a few ways you can avoid a duplicate content penalty:
1. Rel canonical - Add a rel canonical tag to the head section of the non-canonical page. This tells Google which page is the one that should be indexed (your pages, in this case).
2. Reduce duplication - Ask the person to modify the content and rewrite it in their own words. DupeCop is a good tool that lets you compare two pieces of content and measure the duplication percentage. (Don't use respun content; always rewrite in your own words.)
3. NoIndex meta robots tag - If they are not willing to change the page content, you can ask them to keep those pages out of the index by adding a noindex meta tag.
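For point 3, a sketch of the noindex meta tag the re-publisher would place in the head of each copied page (the "follow" directive is optional, but keeps their links to the original crawlable):

```html
<!-- Keeps this copy out of the index while still letting
     crawlers follow its links back to the original -->
<meta name="robots" content="noindex, follow">
```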
Best
Sameer
-
So the best way to get credit for the article is just the links? Is there any special tag, something like meta name="syndication-source", or is that not needed?
And yes, you are right, it's manual syndication and he will keep all the links.
Thank you, Gianluca
-
Hi...
What you describe is essentially a form of syndication of your content. A manual one, but still syndication.
I believe that when the guy says he will give you full credit for the content, he means an optimized full link to the original article.
If that is the case, I would say yes to him. If not, ask him to do it.