Letting Others Use Our Content: Risk-Free Attribution Methods
-
Hello Moz!
A massive site that you've all heard of is looking to syndicate some of our original editorial content. This content is our bread and butter, and is one of the primary reasons why people use our site.
Note that this site is not a competitor of ours - we're in different verticals.
If this massive site were to use the content straight up, I'm fairly confident that they'd begin to outrank us for related terms pretty quickly due to their monstrous domain authority.
This is complex because they'd like to use bits and pieces of the content interspersed with their own content, so they can't just implement a cross-domain canonical. It'd also be difficult to load the content in an iframe with noindex,nofollow header tags since their own content (which they want indexed) will be mixed up with ours.
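For reference, the iframe approach usually means serving the syndicated copy from our own domain with a robots directive on the framed page, roughly like this (a hypothetical sketch; domains and paths are placeholders):

```html
<!-- On their page: only the frame element sits in their indexed HTML -->
<iframe src="https://ourdomain.example/syndicated/review-123"></iframe>

<!-- On ourdomain.example/syndicated/review-123: keep the framed copy out of the index -->
<head>
  <meta name="robots" content="noindex, nofollow">
</head>
```

The reason this breaks down here is exactly what's described above: their own indexable content would be interleaved with ours in the same document, so there's no clean boundary to frame.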
They're also not open to including a link back to the product pages where the corresponding reviews live on our site.
Are there other courses of action we could propose that would protect our valuable content?
Is there any evidence that using schema.org (Review and Organization schemas) pointing back to our review page URLs would provide attribution and prevent them from outranking us for associated terms?
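For anyone exploring the schema.org route, the markup would look something like the following JSON-LD placed in the syndicating page's HTML (all names and URLs here are made-up placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Review",
  "url": "https://ourdomain.example/products/widget/review",
  "author": {
    "@type": "Organization",
    "name": "Our Site",
    "url": "https://ourdomain.example/"
  },
  "itemReviewed": {
    "@type": "Product",
    "name": "Example Widget"
  },
  "reviewBody": "Excerpt of the syndicated review text..."
}
</script>
```

Worth noting: structured data like this is a rich-results hint, not a canonicalization signal, so on its own it shouldn't be expected to prevent the stronger site from outranking the original.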
-
Logan, I found your replies very helpful. We have allowed a site to replicate some of our pages/content on their site and have the rel canonical tag in place pointing back to us. However, Google has indexed the pages on the partner's site as well. Is this common, or has something gone wrong? The partner temporarily had an original-source tag pointing to their own page as well as the canonical pointing to us. We caught this issue a few weeks ago and had the original-source tag removed. GSC sees the rel canonical tag for our site. But I am concerned our site could be getting hurt by duplicate content issues, and the partner site may outrank us as their site is much stronger. Any insight would be greatly appreciated.
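For context, the setup described above relies on a cross-domain canonical in the `<head>` of each replicated page on the partner's site, along these lines (the domain is a placeholder):

```html
<head>
  <link rel="canonical" href="https://ourdomain.example/original-page/" />
</head>
```

Keep in mind that Google treats rel=canonical as a strong hint rather than a directive, so the partner's copy can still appear in the index even when the tag is implemented correctly - which is consistent with what's being observed here.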
-
"Why did this offer come my way?"
When someone asks to use your content, that is what you should be asking yourself.
When someone asks to use my content, my answer is always a fast NO! Even if the Pope is asking, the answer will be NO.
-
This is exactly my concern. Our site is massive in its own industry, but this other site is a top player across many industries - surely we'd be impacted by such an implementation without some steps taken to ensure attribution.
Thank you for confirming my suspicions.
-
Google claims that they are good at identifying the originator of the content. I know for a fact that they are overrating their ability on this.
Publish an article first on a weak site, allow it to be crawled and remain for six months. Then, put that same article on a powerful site. The powerful site will generally outrank the other site for the primary keywords of the article or the weak site will go into the supplemental results. Others have given me articles with the request that I publish them. After I published them they regretted that they were on my site.
Take pieces of an article from a strong site and republish them verbatim on a large number of weak sites. The traffic to the article on the strong site will often drop because the weak sites outrank it for long-tail keywords. I have multiple articles that were ranking well for valuable keywords. Then hundreds of mashup sites grabbed pieces of the article and published them verbatim. My article tanked in the SERPs. A couple years later the mashups fell from the SERPs and my article moved back up to the first page.
-
But, I would not agree with their site being the one to take the damage. YOU will lose a lot of long-tail keyword traffic because now your words are on their site and their site is powerful.
Typically, the first site that's crawled will be considered the originator of the content--then if another site reuses that content, that site is the one that takes the damage (if there is any). I was under the impression that your content was indexed first and that the other site would be using your content. At least that's the way I understood it.
So, if your content hasn't already been indexed then you may lose in this.
-
This is complex because they'd like to use bits and pieces of the content interspersed with their own content, so they can't just implement a cross-domain canonical. It'd also be difficult to load the content in an iframe with noindex,nofollow header tags since their own content (which they want indexed) will be mixed up with ours.
Be careful. This is walking past the alligator ambush. I agree with Eric about the rel=canonical. But, I would not agree with their site being the one to take the damage. YOU will lose a lot of long-tail keyword traffic because now your words are on their site and their site is powerful.
They're also not open to linking back to our content.
If these guys walked into my office with their proposal they might not make it to the exit alive.
My only offer would be for them to buy me out completely. That deal would require massive severances for my employees and a great price for me.
-
You're in the driver's seat here. _You_ have the content _they_ want. If you lay down your requirements and they don't want to play, then don't give them permission to use your content. It's really that simple. You're gaining nothing here with their rules, and they gain a lot. You should both be winning in this situation.
-
Thank you for chiming in Eric!
Their pages already rank extraordinarily well: #1 for almost every related term they have products for, across the board.
They're also not open to linking back to our content.
-
In an ideal situation, the canonical tag is preferred. Since you mentioned that it's not the full content and you can't implement it, your options may be limited. We haven't seen any evidence that schema markup pointing back to your review page URLs would prevent them from outranking you--and it's not likely to. If there are links there, though, you'd get some link juice passed on.
Most likely, though, if that content is already indexed on your site then it's going to be seen as duplicate content on their site--and would only really hurt their site, in that those pages may not rank.