Duplicate product content - from a manufacturer website to retailers
-
Hi Mozzers,
We're working on a website for a manufacturer who allows retailers to reuse their product information. This, of course, raises the issue of duplicate content. The manufacturer is the content owner and originator, but retailers will copy the information onto their own sites without linking back (the manufacturer permits this); the only reference to the manufacturer will be the brand-name citation on the retailer's website.
How would you deal with the duplicate content issues this may cause, especially considering that many of the retailer websites have higher domain authority than the manufacturer's site?
Thanks!!
-
This is one of the biggest issues I see with ecommerce sites these days: automatic feeds of data (titles, descriptions, product info, etc.) straight into the store. That content never ranks all that well without massive authority.
To be honest, the more you can change pre-import the better. Add something to the titles, change them up manually with an intern - whatever it takes to create unique value. For a while I was outranking Polyvore, David Jones & Amazon for the word "skirt" just because my product and category descriptions were massively different from anything else on Google at the time (longer, more detailed, more useful).
In this type of situation, don't be afraid to experiment. You're not going to outrank eBay, Amazon or the OEM for most individual products, but if you try mixing things up you may find weaknesses that are easy enough to exploit. Whenever I imported, I'd alter the CSV files directly before importing, as in the sketch below. It made life so much better (and drastically improved rankings on long-tail searches).
(A small, small e-tailer won't do this, but a mid-size to large one always should.)
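For illustration, a minimal pre-import pass in Python might look something like this. The file names and column names (title, brand, colour, description) are placeholders, not anything from a real feed - they'd need to match whatever the actual supplier CSV uses:

```python
import csv

SUPPLIER_FEED = "supplier_feed.csv"  # hypothetical file from the manufacturer
STORE_IMPORT = "store_import.csv"    # rewritten file the store actually imports

def rewrite_row(row):
    """Differentiate the stock manufacturer copy before import."""
    # Rework the stock title so it isn't a verbatim copy of the feed.
    row["title"] = f"{row['brand']} {row['title']} | {row['colour']}"
    # Flag short descriptions for a manual rewrite (the "intern" step)
    # rather than letting thin, duplicated copy go live as-is.
    row["needs_rewrite"] = "yes" if len(row["description"]) < 300 else "no"
    return row

with open(SUPPLIER_FEED, newline="", encoding="utf-8") as src, \
     open(STORE_IMPORT, "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames + ["needs_rewrite"])
    writer.writeheader()
    for row in reader:
        writer.writerow(rewrite_row(row))
```

The specific transformation matters less than where it happens: rewriting the file before import means every product lands in the store already carrying something Google hasn't seen on every other retailer running the same feed.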