Duplicate Content
-
Let's say a blog (call it blog A) is publishing original content. Now let's say a second blog steals that original content via a bot and publishes it as its own. Further assume the original blog doesn't notice this for several years.
How much damage could this do to blog A in Google's results? Any opinions?
-
Removing duplicate text is absolutely essential, as it could negatively affect your business's organic SEO. Do you have any duplicated text?
-
Thanks for the response Peter re: the original post.
At this point we're quite convinced the issue isn't a technical one. We're not sure, however, whether there's an issue with that duplicate-content site we found stealing some of our articles or, as you mentioned, a quality issue. We're approaching it as a need to regroup around quality for now and monitor results over time. We've identified several areas for improvement in that regard.
This stuff is so frustrating, to be honest. I get why Google can't show their cards, but the complete lack of transparency, and of any way to get feedback from them, makes this a difficult game.
Thanks again for the response, much appreciated.
-
CYNOT: I saw the original question via email (I'll avoid details in the public answer), and unfortunately I'm not seeing any clear signs of technical issues with the original content. This looks more like an aggressive filter than a penalty, but it's really hard to tell if the filter is a sign of quality issues or if Google is treating the wrong site as a duplicate.
-
Unfortunately, a lot of it does depend on the relative authority of the sites. People scrape Moz posts all the time (including some bots, which do it almost immediately), and those copies rank, but they don't have nearly our link profile or other ranking signals, so we don't worry about it. For a smaller site with a relatively new or weak link profile, though, it is possible for a stronger site to outrank you on your own content.
Google does try to look at cache dates and other signals, but a better-funded site can often get indexed more quickly as well. It's rare for this to do serious damage, but it can happen. As Balachandar said, at that point you may have to resort to DMCA take-down requests and other legal actions. Ultimately, that becomes a cost/benefit trade-off, as legal action is going to take time and money.
There are no technical tricks (markup, etc.) to tell Google that a page is the original source, although there are certainly tactics, like maintaining good XML sitemaps, that can help Google find your new content more quickly; a minimal sitemap sketch follows below. Of course, you also want to be the site that has the stronger link profile, regardless of whether or not someone is copying you.
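For illustration only, a bare-bones XML sitemap might look like the sketch below; the URL and date are placeholders. An accurate lastmod value gives Google one more signal for finding and dating your new posts quickly.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per post; the URL and date here are hypothetical -->
  <url>
    <loc>https://www.example.com/blog/original-article/</loc>
    <lastmod>2016-05-01</lastmod>
  </url>
</urlset>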
-
It can affect your rankings if a second blog steals your content. If the blog that stole your content has a higher DA, your content may be the version that gets devalued. Google updates its algorithms to address this by analyzing which website published the content first (date analysis). A traffic drop can be one indication that a page has been duplicated by some other blog. If you have a big website with many blog posts, you can use a DMCA take-down service, which takes care of all of this. If you have any questions, feel free to ask.
Related Questions
-
Shall we add engaging and useful FAQ content to all our pages, or rather not, because of duplication and a reduction in unique content?
We are considering adding, at the end of all our 1,500 product pages, answers to the 9 most frequently asked questions. These questions and answers will be 90% identical across all our products; personalizing them further is not an option, and not really necessary, since most questions relate to the process of reserving the product. We are convinced this will increase users' engagement with the page and time on page, and it will be genuinely useful for visitors, as most will not visit the separate FAQ page. It will also add more related keywords/topics to the page. On the downside, it will reduce the percentage of unique content per page and add duplication. Any thoughts on whether, in terms of Google rankings, we should go ahead, and whether the benefits in engagement outweigh the downside of duplicated content?
Intermediate & Advanced SEO | lcourse
-
A lot of news / duplicate content: what to do?
Hi all, I have a blog with a lot of content (news and PR messages), and I want to move my blog to a new domain. What is your recommendation? 1. Keep it as-is: old article -> 301 -> same article at a different URL. 2. Remove all the duplicate content and 301 the old URLs to my homepage. 3. Keep it as-is, but add noindex meta tags to the duplicate articles. Thanks!
Intermediate & Advanced SEO | JohnPalmer
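For illustration, option 1 above (301 each old article to the same path on the new domain) might look like this in an Apache .htaccess file; both domain names are placeholders:

# Hypothetical sketch: permanently redirect every URL on the old domain
# to the same path on the new domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?old-blog\.example$ [NC]
RewriteRule ^(.*)$ https://new-blog.example/$1 [R=301,L]

-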
Duplicate Content Dilemma for Category and Brand Pages
Hi, I have an online shop with categories such as: Trousers, Shirts, Shoes, etc. But now I'm having a problem with further development. I'd like to introduce brand pages. In this case I would create new categories for Brand 1, Brand 2, etc. The text on the category and brand pages would be unique, but there would be an overlap in products. How do I deal with this from a duplicate content perspective? I appreciate your suggestions. Best, Robin
Intermediate & Advanced SEO | soralsokal
-
Will using 301 redirects to reduce duplicate content on a massive scale within a domain hurt the site?
We have a site that is suffering from a duplicate content problem. To help resolve this we intend to reduce the number of landing pages within the site. There is a HUGE number of pages. We have identified the potential to cut the pages by half, at first by combining the top-level directories, as we believe they are semantically similar enough that they no longer warrant being separated. For instance: Mobile Phones and Mobile Tablets (it's not "mobile devices"). We want to remove this directory path, 301 these pages to the others, and then rewrite the content to cover both phones and tablets on the same landing page. Question: would a massive number of 301s (over 100,000) cause any harm to the general health of the website? Would it affect the authority? We are also considering just severing them from the site, leaving them indexed but not crawlable from the site, to try and maintain a smooth transition. We don't want traffic to tank. Has anyone performed anything similar? I'd be interested to hear all opinions. Thanks!
Intermediate & Advanced SEO | Silkstream
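As a rough sketch (the directory names are hypothetical), a directory-level rule in Apache can 301 an entire section with one rewrite, rather than 100,000 individual entries:

# Hypothetical sketch: fold the old /mobile-tablets/ section into /mobile-phones/
RewriteEngine On
RewriteRule ^mobile-tablets/(.*)$ /mobile-phones/$1 [R=301,L]

-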
Opinion on Duplicate Content Scenario
So there are two pest control companies owned by the same person: Sovereign and Southern. (The two companies serve different markets.) They have two different website URLs, but the website code is actually all the same. The code is hosted in one place; it just uses an if/else structure in dynamic PHP which determines whether the user sees the Sovereign site or the Southern site. Know what I am saying? Here are the two sites: www.sovereignpestcontrol.com and www.southernpestcontrol.com. This is a duplicate content SEO nightmare, right?
Intermediate & Advanced SEO | MeridianGroup
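Purely as an illustration of the setup being described (this is not the actual code), the host-based switch might look something like:

<?php
// Hypothetical sketch: one codebase picks a brand based on the requested domain.
if (strpos($_SERVER['HTTP_HOST'], 'sovereignpestcontrol.com') !== false) {
    $brand = 'Sovereign';
} else {
    $brand = 'Southern';
}
// The same templates then render with $brand swapped in, which is why
// the two domains end up serving near-duplicate pages.

-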
Why are these pages considered duplicate content?
I have duplicate content warnings in our PRO account (several, really), but I can't figure out WHY these pages are considered duplicate content. They have different H1 headers and different sidebar links, and while a couple are relatively scant as far as content goes (so I might believe those could be seen as duplicates), the others seem to have a substantial amount of content that is different. It is a little perplexing. Can anyone help me figure this out? Here are some of the pages that are showing as duplicates:
http://www.downpour.com/catalogsearch/advanced/byNarrator/narrator/Seth+Green/?bioid=5554
http://www.downpour.com/catalogsearch/advanced/byAuthor/author/Solomon+Northup/?bioid=11758
http://www.downpour.com/catalogsearch/advanced/byNarrator/?mediatype=audio+books&bioid=3665
http://www.downpour.com/catalogsearch/advanced/byAuthor/author/Marcus+Rediker/?bioid=10145
http://www.downpour.com/catalogsearch/advanced/byNarrator/narrator/Robin+Miles/?bioid=2075
Intermediate & Advanced SEO | DownPour
-
Last Panda: removed a lot of duplicated content, but still no luck!
Hello here, my website virtualsheetmusic.com has been hit several times by Panda since its inception back in February 2011, so five weeks ago we decided to get rid of about 60,000 thin, almost-duplicate pages via noindex meta tags and canonicals (we have not physically removed those pages from our site to return a 404, because our users may search for those items on our own website). We expected this last Panda update (#25) to give us some traffic back... instead we lost an additional 10-12% of traffic from Google, and now it looks even more badly targeted. Let me say how disappointing this is after so much work! I must admit that we still have many pages that may look thin and duplicated, and we are considering removing those too (but those are actually giving us sales from Google!), but I expected to recover a little bit from this last Panda and improve our positions in the index. Instead, nothing; we have been hit again, and badly. I am pretty desperate, and I am afraid I have lost the compass here. I am particularly afraid that the removal of over 60,000 pages from the index via noindex meta tags has, for some unknown reason, been more damaging than beneficial. What do you think? Is it just a matter of time? Am I on the right path? Do we need to wait just a little bit more and keep removing duplicate content (via noindex meta tags) and improving all the rest as usual? Thank you in advance for any thoughts.
Intermediate & Advanced SEO | fablau
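For reference, the noindex and canonical tags being described go in each thin page's <head>; the URL below is a placeholder:

<!-- Hypothetical sketch: keep the page live for users, but ask search engines not to index it -->
<meta name="robots" content="noindex, follow">
<!-- Or, where a page is a near-duplicate of a preferred version, point to that version instead -->
<link rel="canonical" href="https://www.example.com/preferred-version/">

-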
Duplicate content
I run about 10 sites, and most of them seem to have fallen foul of the Penguin update, and even though I have never sought inorganic links, I had been frantically searching for a link-based answer since April. However, since asking a question here I have been pointed in another direction by one of your contributors. It seems at least six of my sites have duplicate content issues. If you search Google for "We have selected nearly 200 pictures of short haircuts and hair styles in 16 galleries", which is the first bit of text from the site short-hairstyles.com, about 30,000 results appear. I don't know where they're from, nor why anyone would want to do this. I presume it's automated, since there is so much of it.
I have decided to redo the content, so I guess (hope) at some point in the future the duplicate nature will be flushed from Google's index? But how do I prevent it happening again? It's impractical to redo the content every month or so. For example, if you search for "This facility is written in Flash® to use it you need to have Flash® installed.", from another of my sites that I coincidentally uploaded a new page to a couple of days ago, only the duplicate content shows up, not my original site. So whoever is doing this is finding new stuff on my site and getting it indexed on Google before Google even sees it on my site! Thanks, Ian
Intermediate & Advanced SEO | jwdl