How does Google deal with licensed content when it appears on both the vendor's and the client's websites? Will Google penalize the client's site for this?
-
One of my clients bought licensed content from a top vendor in the health industry.
The same content appears on both the vendor's website and my client's site, but the client's site includes a link back to the vendor that makes clear to anyone that this is licensed content bought from that vendor.
My client paid for top-quality content from the best source in the industry, but at the same time that content is also published on the vendor's website. Will Google penalize my client's website for this?
The niche is health.
-
DA is Moz's estimate of a domain's importance relative to other domains. Google runs similar calculations themselves, so it's not that they "see" DA, but they have something comparable of their own.
As long as you don't expect the vendor's content to bring you organic traffic, you should be okay. You said you have a canonical in place pointing to them; as long as that is there, there should be no impact from algorithm updates. You wouldn't be penalized for this.
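For reference, a cross-domain canonical is just a single link element in the head of the licensed page, pointing at the vendor's original article. A minimal sketch, using hypothetical URLs:

```html
<!-- On the client's licensed-content page, e.g.
     https://client-site.example/health/licensed-article -->
<!-- Declares the vendor's URL as the preferred (canonical) version, so
     Google can consolidate ranking signals for the duplicate onto it -->
<link rel="canonical" href="https://health-vendor.example/original-article" />
```

Keep in mind that Google treats a cross-domain canonical as a strong hint rather than a directive, so it may still choose its own canonical, but this is the standard way to signal the original source of syndicated content.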
-
Thanks, Stramark, for your answer. From your expert advice, I gather that buying content from them is not a bad thing at all.
We have already done, and are still doing, the things you mentioned:
- A canonical tag
- A good, creative website design
- Regularly updating the website's content via the blog section
- Content bought only for a specific section of the website, so the rest of the content is original and written by us
But our DA is 55 and the vendor's DA is 94. Is this a reason to worry, given that the vendor has a much higher reputation than we do? Is DA a metric only for us, or does Google also see something like it?
-
As long as you do not overdo the content syndication (that is the term you are looking for), say 10 or more deliberate copies of the same content, you will not be in trouble.
The worst that can happen in this case is that the vendor gets the credit for the content. That can happen, but often Google cannot, or does not, determine which version was published first. Even with the link back to the original content, you might rank very high in Google (even higher than the vendor) for specific keywords.
And you did everything you could to credit the original website. Make sure you have a well-maintained website with its own design, and that not all of your content comes from this vendor (add your own articles and content).
If you want to prevent your website from getting any ranking value from the licensed pages, you could add a cross-domain canonical tag pointing at the vendor (but I would not do that).