Copied Content - Who Is the Winner?
-
Someone copied the content from my website just after I published the article. So who is the winner? Am I in any trouble? What should I do? Please check the attached image.
-
When we find our content on other sites, we choose one of a few routes to take.
If the infringing content is on the site of a reputable business, it usually appears there because an employee did not realize that taking the content of others can lead to civil or criminal liability, or because a dirtbag SEO or marketing service steals content instead of writing their own. In these cases we write to an officer of the business and inform them of the problem. They usually thank us for letting them know, take the content down right away, and either educate the employee or fire the SEO or marketer who did this.
More often the infringer is simply a spammer. In those cases we use the DMCA dashboard of our Google Search Console account to file a complaint with Google. Google usually acts within 48 hours, often the same day. If the infringer is using AdSense, we then click the "AdChoices" button on one of their ads and follow the route to complain about copyright infringement. When AdSense receives these complaints they often turn off all ads on the infringing page, and if lots of complaints are filed about the website, they turn off all of the ads on that site or close the AdSense account. Hitting spammers in the wallet, or putting fear into them that their AdSense account might be shut down, is effective at getting the infringer to stay away from your sites.
Before you start filing DMCA complaints or writing to reputable businesses, it is important to understand fair use and the limits of your copyright. A consultation with an intellectual property attorney can help you understand this. They can also craft complaint letters for you to send, offer to send them on your behalf, and take over if you send an informal complaint and the company does not comply. I've found that copyright attorneys cost less than I feared and are worth more than I pay them.
-
Hi Varun,
This could well cause problems for you, especially if they copied it quickly. Usually, if there is a reasonable gap, say one month, then Google will assign authority to the site that published the content first. The problem comes when the second site is a large one with a higher Domain Authority: it could be that their published copy ranks higher than yours.
Whatever the case, it is simply bad to have two articles with duplicate content, so my best advice is to ask Google to take the copied version down.
This is quite a simple process; all you have to do is tell Google here:
https://support.google.com/legal/answer/3110420?hl=en-GB
Scroll to the bottom to "Submit a Legal Request" and follow the link.
Then choose: "Web Search".
Then choose: "I have a legal issue that is not mentioned above" (the bottom option).
Then select: "I have found content that may violate my copyright".
Then fill in all of the details and wait for them to come back to you.
You can then send them a formal legal notice telling them to remove it, and Google will remove the duplicated content from its search results.
I hope that helps.
Regards, Nigel
-
Related Questions
-
How to handle JavaScript-paginated content for SEO
On our blog listings page, we limit the number of posts that can be seen on the page to 10. However, all of the posts are loaded in the HTML of the page and page links are added to the bottom. Example page: https://tulanehealthcare.com/about/newsroom/ When a user clicks the next page, it simply filters the content on the same page for the next group of postings and displays those to the user. Nothing in the HTML or URL changes; this is all done via JavaScript.

So the question is: does Google consider this hidden content, because all listings are in the HTML but only a handful of them are shown on the page? Or is Googlebot smart enough to know that the content is being filtered by JavaScript pagination?

If this is indeed a problem, we have two possible solutions: not building the HTML for the next pages until you click on the 'next' page, or adding parameters to the URL to show the content has changed (a sketch of the latter follows below). Any other solutions that would be better for SEO?
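For what it's worth, here is a minimal sketch of the URL-parameter idea, assuming a hypothetical ?page= parameter and a renderPage() helper that is not part of the original site: keep plain <a href> links in the pagination controls so every group of postings has its own crawlable URL, and intercept clicks with history.pushState for visitors running JavaScript.

```html
<!-- Sketch only: the ?page= parameter, element IDs, and renderPage() are hypothetical -->
<nav id="pagination">
  <!-- Plain links: a crawler that doesn't execute the script still finds each page -->
  <a href="/about/newsroom/?page=2">2</a>
  <a href="/about/newsroom/?page=3">3</a>
</nav>
<script>
  document.getElementById('pagination').addEventListener('click', function (e) {
    var link = e.target.closest('a');
    if (!link) return;
    e.preventDefault();                   // intercept the click for JS users
    history.pushState({}, '', link.href); // give the new state a real URL
    renderPage(new URL(link.href).searchParams.get('page')); // hypothetical client-side filter
  });
</script>
```

Either way, each page of results becomes addressable: a distinct URL per group of postings, rather than one URL whose visible content silently changes.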
Intermediate & Advanced SEO | MJTrevens
-
Dynamically changing pages, same content
Hey there Mozzers, I have a commerce site that dynamically adds more products to the same page as you scroll down. I have added SEO content in the footer of the page. The URL changes as you scroll, to ?page-2, ?page-3 and so on, but the content stays the same even though the page is dynamically changing. Is there a way to solve that issue? Should I always use a canonical pointing to the initial page, thus solving the duplication, but indicate rel=next and rel=prev on the other pages, etc.? Thanks in advance
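A hedged sketch of that markup for a hypothetical page 2 (URLs invented): the usual pattern was a self-referencing canonical on each paginated URL plus rel=prev/next pointing at the neighbours. Canonicalizing every page back to the initial page instead tells Google the later pages are not distinct, so the products that only appear on them may never be indexed.

```html
<!-- Head of the hypothetical https://example.com/shop?page-2 -->
<link rel="canonical" href="https://example.com/shop?page-2">
<link rel="prev" href="https://example.com/shop">
<link rel="next" href="https://example.com/shop?page-3">
```

One caveat: Google has since said it no longer uses rel=next/prev as an indexing signal, so the self-referencing canonical is the part doing the real work.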
Intermediate & Advanced SEO | AngelosS
-
Copy of domain to serve a different continent
I want to duplicate the main site.com and serve Asia from a different datacenter. The content is the same, but the domain will be site.asia. How do I properly tag the pages to avoid duplicate content?
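One hedged option is hreflang annotations, sketched below with hypothetical URLs and locale codes: they tell Google the two pages are alternates of one another rather than duplicates. Note that hreflang takes language or language-country values, not continents, so you would list the specific locales the .asia site serves, and the tags must appear reciprocally on both versions of each page.

```html
<!-- On both versions of each page (locale codes hypothetical) -->
<link rel="alternate" hreflang="en-us" href="https://www.site.com/widgets/">
<link rel="alternate" hreflang="en-sg" href="https://www.site.asia/widgets/">
<link rel="alternate" hreflang="x-default" href="https://www.site.com/widgets/">
```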
Intermediate & Advanced SEO | Pandjarov
-
SEO value of article title content?
I work for an online theater news publisher. Our article page titles include various pieces of data: the title, publication date, article category, and our domain name (theatermania.com). Are all of these valuable from an SEO standpoint? My sense is that it'd be cleaner to just show the title (and nothing more) on a SERP. But we'll certainly keep whatever helps us with rankings.
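As a hedged illustration (the article title below is invented), the two patterns might look like this; the usual advice is that a concise title with an optional brand suffix is cleaner, while dates and category labels rarely help rankings on their own:

```html
<!-- Leaner pattern: article title plus brand suffix -->
<title>Review: Hypothetical Musical Opens on Broadway | TheaterMania</title>

<!-- Data-heavy pattern the question describes -->
<title>Review: Hypothetical Musical Opens on Broadway - News - 01/15/2015 - theatermania.com</title>
```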
Intermediate & Advanced SEO | TheaterMania
-
Cross-domain duplicate content...
Does anyone have any experience with this situation? We have two ecommerce websites that carry 90% of the same products, with mostly duplicate product descriptions across domains. We will be running some tests shortly.

Question 1: If we deindex a group of product pages on Site A, should we see an increase in ranking for the same products on Site B? I know nothing is certain, just curious to hear your input.

The same two domains have different niche authorities: one is healthcare products, the other is general merchandise. We've seen this play out, as different products rank higher on one domain or the other. Both sites have the same Moz Domain Authority (42, go figure). We are strongly considering cross-domain canonicals.

Question 2: Does niche authority transfer with a cross-domain canonical? In other words, for a particular product, will it rank the same on both domains regardless of which direction we canonical? Ex: Site A is healthcare products, Site B is general merchandise. I have a health product that ranks #15 on Site A and #30 on Site B. If I use rel=canonical for this product on Site B pointing at the same product on Site A, will the ranking be the same as if I use rel=canonical from Site A to Site B? Again, best guess is fine.

Question 3: These domains have similar category page structures, URLs, etc., but feature different products for a particular category. Since the pages are different, will cross-domain canonicals be honored by Google?
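For reference, a cross-domain canonical is just the ordinary tag with an absolute URL on the other domain; a minimal sketch with hypothetical URLs:

```html
<!-- In the head of the product page on Site B, pointing at Site A's version -->
<link rel="canonical" href="https://site-a.example/products/knee-brace-123">
```

One caveat worth hedging on for Question 2: the canonicalized page is typically dropped from results in favour of its target, so the two directions are unlikely to produce identical rankings; the target competes with whatever authority its own domain carries.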
Intermediate & Advanced SEO | AMHC
-
Should subdomains be used to organise content and directories?
I'm working on a site that has directories for service providers and content about those services. My idea is to organise the services into groups, e.g. Web, Graphic, and Software Development, since they are different topics. Each subdomain (hub) has its own sales pages, directory of service providers, and blog content. E.g. the Web hub has web.servicecrowd.com.au (hub home), web.servicecrowd.com.au/blog (hub blog), and http://web.servicecrowd.com.au/dir/p (hub directory). Is this overkill, or will it help in the long run when there are hundreds of services like dog grooming and DJing? It seems better to have separate subdomains and unique blogs for groups of services and content topics.
Intermediate & Advanced SEO | ServiceCrowd_AU
-
How are they avoiding duplicate content?
One of the largest soccer stores in the USA runs a number of whitelabel sites for major partners such as Fox and ESPN. However, the effect of this is that they are creating duplicate content for their products (and even the overall site structure is very similar). Take a look at: http://www.worldsoccershop.com/23147.html http://www.foxsoccershop.com/23147.html http://www.soccernetstore.com/23147.html You can see that practically everything is the same, including the product URL, product title, and product description. My question is, why is Google not classing this as duplicate content? Have they coded for it in a certain way, or is there something I'm missing that is helping them achieve rankings for all sites?
Intermediate & Advanced SEO | ukss1984
-
Subdomains - duplicate content - robots.txt
Our corporate site provides MLS data to users, with the end goal of generating leads. Each registered lead is assigned to an agent, essentially in round-robin fashion. However, we also give each agent a domain of their choosing that points to our corporate website. The domain can be whatever they want, but upon loading it is immediately redirected to a subdomain. For example, www.agentsmith.com would be redirected to agentsmith.corporatedomain.com. Finally, any leads generated from agentsmith.easystreetrealty-indy.com are always assigned to Agent Smith instead of the agent pool (by parsing the current host name). In order to avoid being penalized for duplicate content, any page that is viewed on one of the agent subdomains always has a canonical link pointing to the corporate host name (www.corporatedomain.com). The only content difference between our corporate site and an agent subdomain is the phone number and contact email address where applicable. Two questions: 1. Can/should we use robots.txt or robots meta tags to tell crawlers to ignore these subdomains, but obviously not the corporate domain? 2. If the answer to question 1 is yes, would it be better for SEO to do that, or to leave it how it is?
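One detail worth weighing before reaching for robots.txt: a Disallow rule stops crawlers from fetching the subdomain's pages at all, which means they would never see the canonical tags you already serve. A robots meta tag (or an X-Robots-Tag response header) avoids that, since the page is still fetched; a sketch, assuming you can reuse your existing host-name detection server-side:

```html
<!-- Emitted only when the request host is an agent subdomain;
     the corporate domain omits this tag entirely -->
<meta name="robots" content="noindex, follow">
```

That said, if the canonicals are already consolidating signals to www.corporatedomain.com, leaving things as they are is defensible; noindex is mostly insurance against the subdomain copies showing up in search results.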
Intermediate & Advanced SEO | EasyStreet