How are they avoiding duplicate content?
-
One of the largest soccer stores in the USA runs a number of whitelabel sites for major partners such as Fox and ESPN. However, the effect of this is that they are creating duplicate content for their products (and even the overall site structure is very similar). Take a look at:
http://www.worldsoccershop.com/23147.html
http://www.foxsoccershop.com/23147.html
http://www.soccernetstore.com/23147.html
You can see that practically everything is the same, including:
- product URL
- product title
- product description
My question is, why is Google not classing this as duplicate content? Have they coded for it in a certain way, or is there something I'm missing that is helping them achieve rankings for all sites?
-
The answer is right in your question: "runs a number of whitelabel sites". As mentioned, this is largely down to the original publisher publishing the content first and getting indexed. From there, any time Googlebot stumbles across the same content, it will recognise that it has seen it before and attribute the ranking to the original publisher. Google themselves covered this last year here (although more specifically for news at the time).
Duplicate content unfortunately isn't just "not shown" by the search engines (imagine how "clean" the SERPs would be if that were the case!); it's simply ranked lower than the original publisher that Google is aware of. Occasionally you will get the odd page from a different domain that ranks, but that is usually down to fresh content. I have seen this myself with my own content being aggregated by a large news site: they might outrank me for a day on one or two pieces, but my original URL comes out on top in the end.
-
They rank as #1 for the relevant terms. It is very clear Google feels they are the original source of the content, and the other sites are duplicates.
I don't have a crystal ball to see the future, but based on current information, the original source site is not suffering in any manner.
-
Interesting feedback - is worldsoccershop (the original source) likely to suffer any penalties as a result of the whitelabel sites carrying the duplicate content?
-
Hey
I just did a search for a phrase I found on one of their product pages, wrapping this long query in double quotes:
"Large graffiti print on front that illustrates the club's famous players and history. The traditional blue jersey has gold details including team badge, adidas logo and sponsor design"
The results returned show worldsoccershop.com in first and second place, so they seem to be the authority on this product description.
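If you want to repeat this check for other products, the mechanics are simple: take a distinctive run of words from the description and submit it as an exact-phrase (double-quoted) query. A minimal sketch, with the snippet length chosen arbitrarily for illustration:

```python
from urllib.parse import quote_plus

description = ("Large graffiti print on front that illustrates the club's famous "
               "players and history. The traditional blue jersey has gold details "
               "including team badge, adidas logo and sponsor design")

# Take a distinctive run of words and wrap it in double quotes so the
# search engine treats it as an exact-phrase match.
words = description.split()
snippet = " ".join(words[:15])
query = f'"{snippet}"'

search_url = "https://www.google.com/search?q=" + quote_plus(query)
print(search_url)
```

Pages that carry the same description verbatim will all match the quoted phrase, and whichever result ranks first is the one Google treats as the authority on that copy.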
I have a client that is setting up a store to take on some rather big boys like notonthehighstreet.com, and in this industry, where each product has several established competitors, the big authority stores seem to rank for the generic product descriptions with no real issues.
This is ultimately difficult for the smaller stores: not only do they have fewer resources, but pages on my client's site that use these duplicate descriptions are just getting filtered out of the results. We can see this filtering in action with very specific searches like the one above, where we get the 'we have filtered out similar results' message in the search results and, lo and behold, my client's results are among those filtered.
So, to answer your original question:
They have not 'coded' anything in a specific way, and there is nothing you are missing as such. They are just an authority site and as such are 'getting away with it', which, for the smaller players, kind of sucks. That said, only the worldsoccershop pages are returned, so the other sites could well be filtered out.
Still, as I keep telling our client, see this not as a problem but as an opportunity. By creating unique content, we can hopefully piggyback on other more authoritative sites that are all returning the exact same product description, and whilst I don't expect us to take first place, we can work towards the first page and out of that filter.
Duplicate content is a massive problem: on the site we are working on, there is one product description that Copyscape tells us appears on 300 other sites. Google wants to return rich result sets (some shops, some information, some pictures, etc.), not ten copies of the same thing, so dare to be different and give Google a reason to display your page.
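For a rough first pass over your own catalogue before reaching for a tool like Copyscape, you can score how close a page's description is to the stock copy. This is only an illustrative sketch using standard-library string similarity, not a substitute for a proper duplicate-detection service:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two description strings."""
    # Normalise whitespace and case so trivial formatting differences
    # don't hide a copied description.
    def norm(s: str) -> str:
        return " ".join(s.lower().split())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

stock = ("Large graffiti print on front that illustrates the club's "
         "famous players and history.")
rewrite = ("A bold graffiti-style print celebrating the club's legendary "
           "players and rich history.")

print(similarity(stock, stock))    # identical copy scores 1.0
print(similarity(stock, rewrite))  # a genuine rewrite scores lower
```

Descriptions that score very close to 1.0 against the manufacturer's stock copy are exactly the ones most likely to get filtered, so they are the ones worth rewriting first.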
Hope it helps
Marcus -
My question is, why is Google not classing this as duplicate content?
Why do you feel this content has not been flagged as duplicate content?
The reasonable search for these pages is "Barcelona soccer jersey". Only one of the three sites appears in the top 50 for this term, and it holds the #1 and #2 results. If this were not duplicate content, you would expect to find the other two sites on the first page of Google results as well.
The perfect search for the page (very long-tail and unrealistic) is "Barcelona 11/12 home soccer jersey". For this query, worldsoccershop.com ranks #1 and #3, foxsoccershop ranks #8 (a big drop considering the content is identical), and soccernetstore.com is not in the top 50 results.
The other two sites have clearly been identified as duplicate content or are otherwise being penalized quite severely.