Scraping / Duplicate Content Question
-
Hi All,
I understand that the way to protect content, such as a feature-rich article, is to establish authorship by linking to your Google+ account.
My Question
You have created a webpage that is informative but not substantial enough to be an article, hence no need to create authorship in Google+.
If a competitor comes along and steals this content word for word (or something similar) and creates their own Google+ page, can you be penalised? Is there any way to protect yourself without authorship and Google+?
Regards, Mark
-
Hi Mark
Glad I could help!
Regards, Gary
-
Hi Gary,
I was not aware of DMCA ... Thank you
You're right, Google Authorship is key.
Regards, Mark
-
Unfortunately you can be penalised, which stinks! I'd suggest running your site's URLs through Copyscape just to make sure no one is ripping off the great content you've created. Or you can set up Google Alerts.
**What happens if you find someone copying your content?**
Google Authorship is of course great for establishing who published the material and when (evidence is key).
1. Contact the site owner who copied your material and politely ask them to remove it.
2. File a DMCA takedown request (see http://www.dmca.com/FAQ/What-is-a-DMCA-Takedown) and ask them to act.
Hope this helps!
Gary
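Beyond manual checks with Copyscape or Google Alerts, a rough programmatic spot-check for word-for-word copying can be sketched with Python's standard-library difflib. This is only an illustrative sketch; the sample strings and thresholds are assumptions, not part of the answer above:

```python
import difflib

def similarity(text_a: str, text_b: str) -> float:
    """Rough 0..1 word-level similarity ratio between two blocks of text."""
    return difflib.SequenceMatcher(None, text_a.split(), text_b.split()).ratio()

# Hypothetical sample page content, for illustration only.
original = "Our feature rich article explains how to protect your content online."
word_for_word_copy = "Our feature rich article explains how to protect your content online."
unrelated = "A completely different page about safari holidays in Tanzania."

print(similarity(original, word_for_word_copy))  # identical text scores 1.0
print(similarity(original, unrelated))           # no shared words scores 0.0
```

A score close to 1.0 between your page and a suspect page is a strong signal it was lifted word for word; in practice you would fetch the suspect page's text first and strip its HTML before comparing.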
Related Questions
-
Will I be flagged for duplicate content by Google?
Hi Moz community, Had a question regarding duplicate content that I can't seem to find the answer to on Google. My agency is working on a large number of franchisee websites (over 40) for one client, a print franchise, that wants a refresh of new copy and SEO.
Each print shop has their own 'microsite', though all services and products are the same, the only difference being the location. Each microsite has its own unique domain. To avoid writing the same content over and over in 40+ variations, would all the websites be flagged by Google for duplicate content if we were to use the same base copy, with the only changes being to the store locations (i.e. where we mention Toronto print shop on one site may change to Kelowna print shop on another)?
Since the print franchise owns all the domains, I'm wondering if that would be a problem since the sites aren't really competing with one another. Any input would be greatly appreciated. Thanks again!
Intermediate & Advanced SEO | EdenPrez
-
Duplicate content but different pages?
Hi there! I'm getting LOTS of "duplicate content" pages, but the thing is, they are different pages. My website essentially is a niche video hosting site with embedded videos from YouTube. I'm working on adding personal descriptions to each video but keeping the same video title (should I re-word it from the original also?). Any help?
Intermediate & Advanced SEO | sarevme
-
Duplicate content issue
Hello! We have a lot of duplicate content issues on our website. Most of the pages with these issues are dictionary pages (about 1200 of them). They're not exact duplicates, but each contains a different word with a translation, picture and audio pronunciation (example: http://anglu24.lt/zodynas/a-suitcase-lagaminas). What's the better way of solving this? We probably shouldn't disallow dictionary pages in robots.txt, right? Thanks!
Intermediate & Advanced SEO | jpuzakov
-
Duplicating content from manufacturer for client site and using canonical reference.
We manage content for many clients in the same industry, and many of them wish to keep their customers on their individualized websites (understandably). In order to do this, we have duplicated content in part from the manufacturers' pages for several "models" on the clients' sites. We have put in a canonical reference at the start of the content directing back to the manufacturer's page where we duplicated some of the content. We have only done a handful of pages while we figure out the potential canonical reference issue. So, my questions are: Is this necessary? Does it hurt, help or not do anything SEO-wise for the ranking of the site? Thanks!
Intermediate & Advanced SEO | moz1admin
-
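The cross-domain canonical reference described in the previous question normally lives in the page's `<head>` rather than at the start of the visible content. A minimal sketch, with hypothetical URLs (assuming the manufacturer's original page is the version that should rank):

```html
<!-- On the client's duplicated "model" page, pointing back to the
     manufacturer's original. All URLs here are hypothetical examples. -->
<head>
  <link rel="canonical" href="http://www.manufacturer-example.com/models/model-x/" />
</head>
```

Note that a cross-domain canonical concedes ranking credit for that page to the manufacturer's domain, which may or may not be what the client wants.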
Noindex Valuable duplicate content?
How could duplicate content be valuable, and why question noindexing it? My new client has a clever African safari route builder that you can use to plan your safari. The result is hundreds of pages that have different routes. Each page inevitably has overlapping content / destination descriptions (see the example links). To the point: I think it is foolish to noindex something like this. But is Google's algorithm sophisticated enough to not get triggered by something like this?
http://isafari.nathab.com/routes/ultimate-tanzania-kenya-uganda-safari-july-november
http://isafari.nathab.com/routes/ultimate-tanzania-kenya-uganda-safari-december-june
Intermediate & Advanced SEO | Rich_Coffman
-
Duplicate content issue - online retail site.
Hello Mozzers, I just looked at a website and just about every product page (there are hundreds - yikes!) is duplicated like this at the end of each URL (see below). Surely this is a serious case of duplicate content? Any idea why a web developer would do this? Thanks in advance! Luke
prod=company-081
prod=company-081&cat=2
Intermediate & Advanced SEO | McTaggart
-
Removing Content 301 vs 410 question
Hello, I was hoping to get the SEOmoz community's advice on how to remove content most effectively from a large website. I just read a very thought-provoking thread in which Dr. Pete and Kerry22 answered a question about how to cut content in order to recover from Panda (http://www.seomoz.org/q/panda-recovery-what-is-the-best-way-to-shrink-your-index-and-make-google-aware). Kerry22 mentioned a process in which 410s would be totally visible to Googlebot so that it would easily recognize the removal of content. The conversation implied that it is not just important to remove the content, but also to give Google the ability to recrawl that content to indeed confirm the content was removed (as opposed to just recrawling the site and not finding the content anywhere). This really made lots of sense to me and also struck a personal chord. Our website was hit by a later Panda refresh back in March 2012, and ever since then we have been aggressive about cutting content and doing what we can to improve user experience. When we cut pages, though, we used a different approach, doing all of the below steps:
1. We cut the pages.
2. We set up permanent 301 redirects for all of them immediately.
3. And at the same time, we would always remove from our site all links pointing to these pages (to make sure users didn't stumble upon the removed pages).
When we cut the content pages, we would either delete them or unpublish them, causing them to 404 or 401, but this is probably a moot point since we gave them 301 redirects every time anyway. We thought we could signal to Google that we removed the content while avoiding generating lots of errors that way. I see that this is basically the exact opposite of Dr. Pete's advice and opposite what Kerry22 used in order to get a recovery, and meanwhile here we are still trying to help our site recover. We've been feeling that our site should no longer be under the shadow of Panda.
So here is what I'm wondering, and I'd be very appreciative of advice or answers for the following questions:
1. Is it possible that Google still thinks we have this content on our site, and we continue to suffer from Panda because of this? Could there be a residual taint caused by the way we removed it, or is it all water under the bridge at this point because Google would have figured out we removed it (albeit not in a preferred way)?
2. If there's a possibility our former cutting process has caused lasting issues and affected how Google sees us, what can we do now (if anything) to correct the damage we did?
Thank you in advance for your help,
Eric
Intermediate & Advanced SEO | Eric_R
-
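For contrast with the 301 approach described in the question above, a 410 can be returned explicitly in Apache so crawlers can confirm content was deliberately deleted rather than moved. A minimal .htaccess sketch; the paths are hypothetical examples, not taken from the question:

```apache
# Serve "410 Gone" for a single removed page (hypothetical path).
Redirect gone /old-article.html

# Or, with mod_rewrite, for a whole removed section:
RewriteEngine On
RewriteRule ^removed-section/ - [G]
```

Unlike a 301, a 410 tells Google the content is gone on purpose, with nothing to consolidate the old URL's signals into.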
Dynamic 301's causing duplicate content
Hi, I wonder if anyone can help? We have just changed our site, which was hosted on IIS with page URLs like this (example.co.uk/Default.aspx?pagename=About-Us). The new page URL is example.co.uk/About-Us/ and the site is now on Apache. The 301s our developer told us to use were in this format:
RewriteCond %{REQUEST_URI} ^/Default.aspx$
RewriteCond %{QUERY_STRING} ^pagename=About-Us$
RewriteRule ^(.*)$ http://www.domain.co.uk/About-Us/ [R=301,L]
This seemed to work from a 301 point of view; however, it also seemed to allow both of the below URLs to serve the same page!
example.co.uk/About-Us/?pagename=About-Us
example.co.uk/About-Us/
Webmaster Tools has now picked up on this and is seeing it as duplicate content. Can anyone help with why it would be doing this, please? I'm not totally clued up and our host/developer can't understand it either. Many Thanks
Intermediate & Advanced SEO | GoGroup51
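A plausible cause for the behaviour described above: by default, mod_rewrite appends the original query string to the redirect target, and Apache will happily serve /About-Us/ with any unrecognised query string attached, so both URLs resolve to the same page. A hedged sketch of one common fix (a trailing "?" on the substitution discards the incoming query string; on Apache 2.4+ the [QSD] flag does the same):

```apache
RewriteCond %{REQUEST_URI} ^/Default\.aspx$
RewriteCond %{QUERY_STRING} ^pagename=About-Us$
# The trailing "?" drops the incoming query string from the target,
# so the redirect lands on a single clean URL instead of
# /About-Us/?pagename=About-Us
RewriteRule ^(.*)$ http://www.domain.co.uk/About-Us/? [R=301,L]
```

A rel=canonical tag on /About-Us/ pointing at itself would also mop up any stray parameterised variants that remain indexed.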