Copied Content - Who Is the Winner?
-
Someone copied the content from my website just after I published the article. So who is the winner? Am I in any trouble? What should I do? Please check the image.
-
When we find our content on other sites we choose one of a few routes to take.
If the infringing content is on the site of a reputable business, it usually appears there as a result of an employee who does not realize that taking the content of others can lead to civil or criminal action, or of a dirtbag SEO or marketing service that steals content instead of writing its own. In these cases we write to an officer of the reputable business and inform them of the problem. They usually thank us for letting them know, take the content down right away, and educate the employee or fire the SEO or marketer who did this.
More often the infringer is simply a spammer. In those cases we use the DMCA dashboard of our Google Search Console account to file a complaint with Google. Google usually acts within 48 hours, often the same day. If the infringer is using AdSense, we then click the "Ad Choices" button on one of their ads and follow the route to complain about copyright infringement. When AdSense receives these complaints they often turn off all ads on the infringing page, and if lots of complaints are filed about the website, they turn off all of the ads on that site or close the AdSense account. Hitting spammers in the wallet, or putting fear into them that their AdSense account might be shut down, is effective at getting the infringer to stay away from your sites.
Before you start filing DMCAs or complaining to reputable businesses, it is important to understand fair use and the limits of your rights as a copyright holder. A consultation with an intellectual property attorney can help you understand this. They can also craft complaint letters for you to send, offer to send them on your behalf, and take over if you send an informal complaint and the company does not comply. I've found that copyright attorneys cost less than I feared and are worth more than I pay them.
-
Hi Varun
This could well cause problems for you, especially if they did it quickly. Usually, if there is a reasonable gap, say one month, then Google will assign authority to the site that published the content first. The problem comes when the second site is a large one with a higher Domain Authority: it could be that their published copy ranks higher than yours.
Whatever the case, it is simply bad to have two articles with duplicate content, so my best advice is to ask Google to take the copied version down.
This is quite a simple process; all you have to do is tell Google here:
https://support.google.com/legal/answer/3110420?hl=en-GB
Scroll to the bottom to "Submit a legal request" and follow the link.
Then choose: Web Search
Then choose: I have a legal issue that is not mentioned above (the bottom one)
Then select: I have found content that may violate my copyright
Then fill in all of the details and wait for them to come back to you.
You could then send them a legal letter telling them to remove it, and Google will remove the duplicated content from its search results.
I hope that helps.
Regards, Nigel
Related Questions
-
Hidden category content really bad?
Hi Guys, I'm working with a site which has hidden category content, see: http://i.imgur.com/Sgko2we.jpg It seems Google is still indexing these pages, but I heard Google might ignore or reduce the benefit of hidden content like this. I just want to confirm if this is the case, and if this is a really bad thing for SEO? Cheers.
Intermediate & Advanced SEO | seowork214
Heading Tags & Content Count
Hi everyone, I am looking into this page on our site: http://www.key.co.uk/en/key/sack-trucks Comparing it against competitors in SEMRush, the tool shows a word count of over 4089 words for this page, compared with http://www.wickes.co.uk/Wickes-Green-General-Purpose-Sack-Truck-200kg/p/500302 which only has 2658, even though that page has a lot more written content than ours. Where is this word count coming from? Also, looking at the same page on our site, Woorank suggests we have the word 'sack truck' in the h1 and title too many times, but it's only there once. Is this showing because it's an exact-match keyword? I'm just wondering if there is something wrong with the HTML or how the page is being crawled?
Intermediate & Advanced SEO | BeckyKey
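Part of the discrepancy is often simply what gets counted: some tools count every word in the page's HTML (navigation, accordions, template text), while others count only the main content block. As a rough illustration only (not how SEMRush actually counts), here is a stdlib Python sketch of a "count everything the parser sees" word count; the sample HTML is invented:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect all visible text, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.chunks.append(data)

def word_count(html):
    parser = TextExtractor()
    parser.feed(html)
    return len(" ".join(parser.chunks).split())

html = """<html><head><style>body {color: red}</style></head>
<body><nav>Home Products Contact</nav>
<h1>Sack Trucks</h1><p>Two hundred kilogram capacity.</p></body></html>"""

print(word_count(html))  # navigation text is counted along with the article
```

A counter like this reports 9 words for the sample above because the nav links count too, which is one way two tools can disagree wildly on the same page.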
Search console, duplicate content and Moz
Hi, Working on a site that has duplicate content in the following manner:
http://domain.com/content
http://www.domain.com/content
Question: would telling Search Console to treat one of them as the primary site also stop Moz from seeing this as duplicate content? Thanks in advance, Best, Paul.
Intermediate & Advanced SEO | paulneuteboom
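For what it's worth, the usual long-term fix for www/non-www duplication is to pick one canonical host and 301-redirect the other to it, which resolves the issue for every crawler rather than just Google. A minimal Python sketch of the normalization rule such a redirect implements; the choice of www as the canonical host (and keeping plain http) is an assumption for illustration, not from the question:

```python
from urllib.parse import urlsplit, urlunsplit

CANONICAL_HOST = "www.domain.com"  # assumption: www chosen as the canonical host

def canonicalize(url):
    """Map any www/non-www variant of the site onto one canonical URL."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if host in ("domain.com", "www.domain.com"):
        host = CANONICAL_HOST
    # drop the fragment; keep path and query as-is
    return urlunsplit(("http", host, parts.path, parts.query, ""))

print(canonicalize("http://domain.com/content"))
# -> http://www.domain.com/content
```

A server-side redirect implementing this rule means both Search Console and third-party crawlers like Moz only ever see one version.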
Faceted Navigation and Dupe Content
Hi, We have a Magento website using layered navigation, and it has created a lot of duplicate content. I did ask Google in GWT to ignore most of the query strings except "p", which is for pagination. After reading up on how to tackle this issue, I tried a combination of meta noindex, robots.txt and canonical tags, but it was still a snowball I was trying to control. In the end, I opted for using Ajax for the layered navigation: no matter what option is selected, no parameters are latched onto the URL, so no duplicate or near-duplicate URLs are created. So please correct me if I am wrong, but no new links flow to those extra URLs now, so presumably in due course Google will remove them from the index? Am I correct in thinking that? Plus these extra URLs have meta noindex on them too, yet I still have tens of thousands of pages indexed in Google. How long will it take for Google to remove them from the index? Will having meta noindex on the pages that need to be removed help? Any other way of removing thousands of URLs from GWT? Thanks again, B
Intermediate & Advanced SEO | bjs2010
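The effect of the Ajax change can be described as a canonicalization rule: every faceted variant collapses onto one URL, keeping only the "p" pagination parameter mentioned in the question. A hypothetical Python sketch of that rule (the domain and facet parameter names are illustrative):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Assumption: "p" (pagination) is the only query parameter worth keeping;
# everything else is a facet/filter that creates duplicate views.
ALLOWED_PARAMS = {"p"}

def canonical_url(url):
    """Strip facet parameters so filtered views collapse to one canonical URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k in ALLOWED_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

print(canonical_url("http://shop.example/trucks?color=red&size=xl&p=2"))
# -> http://shop.example/trucks?p=2
```

A rule like this is also what a rel=canonical tag on the faceted pages would express, which helps consolidate the already-indexed variants while Google gradually drops them.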
Optimize the category page or a content page?
Hi, We wish to start ranking for a specific keyword ("log house prices" in Italian). We have two options for which page to optimize for this keyword:
1. A long content page (1000+ words with images)
2. The log houses category page, optimized for the keyword (we have 50+ houses on this page, together with a short price summary)
I would think that we have a better chance of ranking with option no. 2, but then we can't use that page to rank for a more short-tail keyword (like "log houses"). What would you suggest? Is there maybe a third option?
Intermediate & Advanced SEO | JohanMattisson
Differentiating Content
I have a piece of content (that is similar) that legitimately shows up on two different sites. I would like both to stay live, but they seem to be "flip-flopping" in the rankings: sometimes one shows up, sometimes the other. What's the best way to differentiate a piece of content like this? Does it mean rewriting one entirely? http://www.simplifiedbuilding.com/solutions/ada-handrail/ http://simplifiedsafety.com/solutions/ada-handrail/ I want the Simplified Building one to be found first if I had a preference.
Intermediate & Advanced SEO | CPollock
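Before deciding how much rewriting is needed, it can help to measure how similar the two versions actually are. A quick sketch using Python's stdlib difflib; the sample sentences are invented, not taken from the linked pages:

```python
from difflib import SequenceMatcher

def similarity(text_a, text_b):
    """Return a 0..1 ratio of word-level overlap between two texts."""
    return SequenceMatcher(None, text_a.split(), text_b.split()).ratio()

page_a = "Our ADA handrail solutions meet federal accessibility guidelines."
page_b = "Our ADA handrail kits meet federal accessibility guidelines."

print(round(similarity(page_a, page_b), 2))  # close to 1.0: a near-duplicate
```

If the ratio stays high after light edits, the two pages are still near-duplicates in any practical sense, and one of them likely needs a genuine rewrite (or a cross-domain rel=canonical pointing at the preferred version).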
About robots.txt to resolve duplicate content
I have trouble with duplicate content and titles. I have tried many ways to resolve them, but because of the site's code I am still having problems. I have decided to use robots.txt to block the content that is duplicated.
The first question: how do I use a command in robots.txt to block all URLs like these:
http://vietnamfoodtour.com/foodcourses/Cooking-School/
http://vietnamfoodtour.com/foodcourses/Cooking-Class/
.......
User-agent: *
Disallow: /foodcourses
(Is that right?)
And the parameter URLs:
http://vietnamfoodtour.com/?mod=vietnamfood&page=2
http://vietnamfoodtour.com/?mod=vietnamfood&page=3
http://vietnamfoodtour.com/?mod=vietnamfood&page=4
User-agent: *
Disallow: /?mod=vietnamfood
(Is that right? I have a folder containing the module; could I use: Disallow: /module/* ?)
The 2nd question is: which takes priority, robots.txt or the meta robots tag? If I use robots.txt to block a URL, but in that URL my meta robots tag is "index, follow"?
Intermediate & Advanced SEO | magician
HTTP and HTTPS duplicate content?
Hello, This is a quick one or two. 🙂 Does having a page accessible on both HTTP and HTTPS count as duplicate content? And what about external links pointing to the HTTP or HTTPS version of a page on my website? Regards, Cornel
Intermediate & Advanced SEO | Cornel_Ilea