Scraped content ranking above the original source content in Google.
-
I need insights on how “scraped” content (an exact copy-pasted version) ranks above the original content in Google.
Four original, in-depth articles published by my client (an online publisher) were republished by another company (which happens to be briefly mentioned in all four of those articles). We reckon the articles were republished at least a day or two after the originals (the exact gap is not known). We find that all four of the “copied” articles rank at the top of Google search results, whereas the original content, i.e. my client's website, does not show up even in the top 50 or 60 results.
We have looked at numerous factors such as Domain Authority, Page Authority, inbound links to both the original source and the URLs of the copied pages, social metrics, etc. All of the metrics, as shown by tools like Moz, are better for the source website than for the republisher. We have also compared results in different geographies to see if any geographical bias was affecting the results, since our client's website is hosted in the UK and the republisher is from another country, but we found the same results. We are also not aware of any manual actions taken against our client's website (at least based on messages in Search Console).
Are there any other factors that can explain this serious anomaly, which seems to be a disincentive for anybody creating highly relevant original content?
We recognize that our client has the option to submit a 'Scraper Content' report to Google, but we are less keen to go down that route and more keen to understand why this problem could arise in the first place.
Please suggest.
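(As an aside: before filing anything, the "exact copy" claim is easy to quantify with a quick script. A minimal sketch using Python's standard-library difflib; the sample strings below are placeholders, not the client's actual article text:)

```python
from difflib import SequenceMatcher

def similarity(original: str, republished: str) -> float:
    """Return a 0-1 ratio of how much the two texts' word sequences match."""
    return SequenceMatcher(None, original.split(), republished.split()).ratio()

original = "Four in-depth articles were published by the client last week."
republished = "Four in-depth articles were published by the client last week."
print(similarity(original, republished))  # identical texts give 1.0
```

A ratio near 1.0 is strong supporting evidence to attach to a scraper report or DMCA notice.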
-
**Everett Sizemore - Director, R&D and Special Projects at Inflow:** Use the Google Scraper Report form.
Thanks. I didn't know about this.
If that doesn't work, submit a DMCA complaint to Google.
This does work. We submit dozens of DMCAs to Google every month. We also send notices to sites that have used our content but might not understand copyright infringement.
**Everett Sizemore - Director, R&D and Special Projects at Inflow:** Until Manoj gives us the URLs so we can look into it ourselves, I'd have to say this is the best answer: Google sucks sometimes. Use the Google Scraper Report form. If that doesn't work, submit a DMCA complaint to Google.
-
Oh, that is a very good point. This is very bad for people who have clients.
-
Thanks, EGOL.
The other big challenge is to get clients to also buy into the idea that it is Google's problem!
-
**In this specific instance, the original source outscores the site where content is duplicated on almost all the common metrics that are deemed to be indicative of a site's relative authority/standing.**
Yes, this happens. It states the problem and Google's inabilities more strongly than I have stated it above.
**Any ideas/ potential solutions that you could help with ---- will be much appreciated.**
I have this identical problem myself. Actually, it's Google's problem. They have crap on their shoes but say that they can't smell it.
-
Hi,
Thanks for the response. I'd understand if the original source were indeed new, or not a 'powerful' or established site in the niche that it serves.
In this specific instance, the original source outscores the site where content is duplicated on almost all the common metrics that are deemed to be indicative of a site's relative authority/standing.
Any ideas/ potential solutions that you could help with ---- will be much appreciated.
Thanks
-
Scraped content frequently outranks the original source, especially when the original source is a new site or a site that is not powerful.
Google says that they are good at attributing content to the original publisher. They are delusional. Lots of SEOs believe Google. I'll not comment on that.
If scraped content were not making money for people, this practice would have died a long time ago. I submit that as evidence. Scrapers know what Google does not know (or refuses to admit) and what many SEOs refuse to believe.
-
No, John - we don't use the 'Fetch as Googlebot' for every post. I am intrigued by the possibility you suggest.
Yes, there are lots of unknowns, and certain results seem inexplicable --- as we feel this particular instance is. We have looked at and evaluated most of the obvious factors, including the likelihood of the re-publisher having gotten more social traction. However, the actual results are the opposite of what we'd expect.
I'm hoping that you/ some of the others in this forum could shed some light on any other factors that could be influencing the results.
Thanks.
-
Thanks for the link, Umar.
Yes, we did fetch the cached versions of both pages, but that doesn't indicate when the respective pages were first indexed; it just shows when the pages were last cached.
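One workaround we've been looking at for establishing an earliest public capture date (not Google's first-index date, but independent third-party evidence of who published first) is the Wayback Machine's CDX API. A rough sketch; the article URL and the canned response below are illustrative only:

```python
import json
# from urllib.request import urlopen  # needed for the live call below

CDX = "http://web.archive.org/cdx/search/cdx?url={url}&limit=1&output=json&fl=timestamp"

def earliest_capture(cdx_json: str):
    """CDX JSON output is a header row followed by capture rows; return the
    first capture's timestamp (YYYYMMDDhhmmss), or None if never captured."""
    rows = json.loads(cdx_json)
    return rows[1][0] if len(rows) > 1 else None

# Live call (requires network access):
# raw = urlopen(CDX.format(url="example.com/article")).read().decode()

canned = '[["timestamp"], ["20140316093212"]]'
print(earliest_capture(canned))  # 20140316093212
```

Comparing the earliest captures of both the original and the copy at least gives a dated paper trail, even if it doesn't prove what Google indexed first.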
-
No, Martijn - the articles have excerpts from representatives of the republisher; there are no links to the re-publisher's website.
-
When you say you're mentioning the re-publisher briefly in the posts themselves, does that mean you're also linking to them?
-
Hey Manoj,
That's indeed very weird. There can be multiple reasons for this. For instance, did you try to fetch the cached version of both sites to check when they got indexed? Online publication sites usually have a fast indexing rate, and it might be possible that your client shared the articles on social media before they got indexed and the other site lifted them.
Do check out this brilliant Moz post; I'm sure you will get an idea of what caused this:
https://moz.com/blog/postpanda-your-original-content-is-being-outranked-by-scrapers-amp-partners
Hope this helps!
-
Do you use Fetch as Google in Webmaster Tools with every post?
If your competitors monitor the site, harvest the content, then publish it and use Fetch as Google, that could explain why Google ranks them first - i.e. Google would likely have indexed their content first.
That said, there are so many unknown factors at play, e.g. how does social stack up? Are they using Google+, etc.?
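On the indexing-speed point, one low-effort habit is to ping Google with the sitemap immediately after each post goes live, so the original is at least crawl-eligible before any scraper republishes it. A minimal sketch, assuming the sitemap-ping GET endpoint Google documented at the time; the sitemap URL is a placeholder:

```python
from urllib.parse import urlencode

def sitemap_ping_url(sitemap_url: str) -> str:
    """Build the Google sitemap-ping URL; issue a GET to it right after publishing."""
    return "https://www.google.com/ping?" + urlencode({"sitemap": sitemap_url})

print(sitemap_ping_url("https://example.co.uk/sitemap.xml"))
# https://www.google.com/ping?sitemap=https%3A%2F%2Fexample.co.uk%2Fsitemap.xml
```

This is no substitute for Fetch as Googlebot on each post, but it can be automated in the CMS publish hook.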