How to avoid duplicate content
-
Hi there,
Our client has an ecommerce website, and their products also appear on an aggregator website (i.e. a comparison website where multiple vendors list their products). The aggregator shows the same photos, titles and product descriptions.
Now that we are building their new website, how can we avoid such duplicate content? Or does Google even care in this case? I have read that we could show more product information on the ecommerce website and fewer details on the aggregator's website. But is there another or better solution?
Many thanks in advance for any input!
-
Yes. Since you are not changing the domain name and are keeping the same content, you should be fine, as you were the original author of that content.
-
Unfortunately we can't control the content on the aggregator website (e.g. we can't add rel="canonical" tags there).
-
Hi there,
No, we can't control what is put on the aggregator website (chrono24.com, a large website displaying watches from different dealers).
We won't be changing domain names; we're copying over all product content and just restyling, plus adding new content on the about us/services pages.
So I assume the only option is to have Google index our content first. Thanks for the video!
-
Hi there.
Can you control what is put on the aggregator website? If so, there shouldn't be any problem: just make it different. If you can't control the aggregated material, Google usually relies on the date/time of indexing pages to work out who copied from whom. So what you can do is, after creating the new product pages, go to Webmaster Tools and use "Fetch as Google" to ensure that your website is crawled first.
You said that you're building a new website. Are you changing domain names? Are you copying all content over without any changes, or are you just restyling?
Either way, the idea stays the same: either make your content different from the aggregator website or make sure your website is crawled first. Also, depending on how your content is being scraped, you can utilise canonical links (for example, if the aggregator simply copies the full page into iframes or something); see the sketch just below.
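As a minimal illustration, assuming the aggregator could be persuaded to add it to the pages it copies (the URLs below are placeholders, not the actual sites discussed):
<!-- In the <head> of the aggregator's copy of a product page, -->
<!-- pointing back to the original so Google consolidates the duplicates: -->
<link rel="canonical" href="https://www.your-ecommerce-site.com/products/example-watch" />
Of course, this only helps if the aggregator is willing to add the tag for you.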
P.S. I'm trying to find a video from Matt Cutts about websites being indexed earlier than the original content.
Here you go: https://www.youtube.com/watch?v=4LsB19wTt0Q
-
Any time you have known duplicate content, you want to use the rel="canonical" tag to signify the original content, and rel="alternate" href="http://otherDomainWithDupContent.com" for the duplicate versions.
More info in the Google Webmaster documentation for duplicate content:
http://googlewebmastercentral.blogspot.com/2010/09/unifying-content-under-multilingual.html
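For reference, a hedged sketch of the two tags this answer mentions (the paths are placeholders; note that in the linked Google post, rel="alternate" is paired with an hreflang attribute to mark language/regional variants):
<!-- On the duplicate page: point search engines at the original content. -->
<link rel="canonical" href="https://www.original-site.com/product-page" />
<!-- On the original page: declare an alternate version, per the linked post. -->
<!-- "de" is a placeholder code for a hypothetical German-language variant. -->
<link rel="alternate" hreflang="de" href="http://otherDomainWithDupContent.com/product-page" />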
Related Questions
-
Avoiding duplicate content in manufacturer's [of single product] website
Hello, I have read a lot of articles about duplicate content, keyword cannibalism, competing with yourself, and so on. But none of these articles really fit a manufacturer website that produces one product. For example, let's say I make ceramic tiles. This means:
Homepage: "Our tiles are the best tiles, we have numerous designs of tiles. We make them only from natural ceramic."
Product list: "Here is a list of our tiles: Poesia tile, white tile, textured tile, etc."
Page for each tile.
Gallery: a bunch of images trying to prove that these tiles look best 🙂
Where to buy page: a map.
From what I understand this site is already doomed: it will not do well against larger retailers who don't focus only on tiles but sell everything. It is set to have a lot of duplicate content. But I hope I am wrong. Can someone please make some suggestions on how to do SEO on such a website where all pages are about the same thing? Any help would be much appreciated! Juris
Intermediate & Advanced SEO | JurisBBB
-
Putting my content under domain.com/content, or under related categories: domain.com/bikes/content ?
Hello! This question plays on what Joe Hall talked about during this year's MozCon talk, Rethinking Information Architecture for SEO and Content Marketing.
My case: we're working out guidelines and templates for a customer (a sporting goods store) on how to publish content (articles, videos, guides) on their category pages, product pages, and other pages. At this moment I have two choices:
1. Use a URL structure/information architecture where all the content is placed in one subfolder, for example domain.com/content. Although it's placed here, there's going to be extensive internal linking from /content to the related category pages, so the content about bikes (even if it's placed under domain.com/content) will be just as visible on the pages related to bikes.
2. Place the content about bikes in a subdirectory under the bike category, for example domain.com/bikes/content.
The UX/interface for these two scenarios will be identical, but the directory hierarchy/URL structure will be different. According to Joe Hall, the latter scenario will build up more topical authority and relevance towards the category/topic, and should be the overall most ideal setup. Any thoughts on which of the two solutions is the most ideal?
PS: There is one critical caveat here: my customer uses many URL-slug subdirectories for their categories, for example domain.com/activity/summer/bikes/, which means the content in the first scenario will be four steps away from the home page. Is this going to be a problem? Looking forward to your thoughts 🙂 Sigurd, INEVO
Intermediate & Advanced SEO | Inevo
-
Complicated Duplicate Content Question...but it's fun, so please help.
Quick background: I have a page that is absolutely terrible, but it has links and it's a category page, so it ranks. I have a landing page which is significantly better (a bazillion times better), but it is omitted in the search results for the most important query we need. I'm considering switching the content of the two pages, but I have no idea what that will do. I'm not sure if it will cause duplicate content issues or what will happen. Here are the two URLs:
Terrible page that ranks (not well, but it's what comes up eventually): https://kemprugegreen.com/personal-injury/
Far better page that keeps getting omitted: https://kemprugegreen.com/location/tampa/tampa-personal-injury-attorney/
Any suggestions (other than just waiting on Google to stop omitting the page, because that's just not going to happen) would be greatly appreciated. Thanks, Ruben
Intermediate & Advanced SEO | KempRugeLawGroup
-
Duplicate Content with URL Parameters
Moz is picking up a large quantity of duplicate content, consisting mainly of URL parameters like ,pricehigh and ,pricelow (used for page sorting). Google has indexed a large number of the pages (not sure how many), and I'm not sure how many of them are ranking for search terms we need. I have added the parameters in Google Webmaster Tools and set them to "let Google decide"; however, Google still sees the pages as duplicate content. Is this a problem that we need to address? Or could trying to fix it do more harm than good? Has anyone had any experience? Thanks
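(For illustration only: the markup-level fix usually suggested for sorted URLs like these, with a hypothetical path built from the parameters mentioned above. Each sorted variant declares the unsorted page as canonical.)
<!-- In the <head> of domain.com/category,pricehigh (hypothetical sorted URL): -->
<link rel="canonical" href="https://domain.com/category" />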
Intermediate & Advanced SEO | seoman10
-
Best method for blocking a subdomain with duplicated content
Hello Moz Community, hoping somebody can assist. We have a subdomain, used by our CMS, which is being indexed by Google:
http://www.naturalworldsafaris.com/
https://admin.naturalworldsafaris.com/
The page is the same, so we can't add a noindex or nofollow. I have both set up as separate properties in Webmaster Tools. I understand the best method would be to update the robots.txt with a user disallow for the subdomain, but the robots.txt file is only accessible on the main domain: http://www.naturalworldsafaris.com/robots.txt. Will this work if we add the subdomain exclusion to that file? It means it won't be accessible at https://admin.naturalworldsafaris.com/robots.txt (where we can't create a file), and therefore won't be seen within that specific Webmaster Tools property. I've also asked the developer to add password protection to the subdomain, but this does not look possible. What approach would you recommend?
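(For context, a sketch of the disallow being described. Note that robots.txt directives only apply to the host that serves the file, so this would need to be served from the subdomain itself.)
# Hypothetical robots.txt served at https://admin.naturalworldsafaris.com/robots.txt
# This blocks all crawlers from the entire subdomain.
User-agent: *
Disallow: /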
Intermediate & Advanced SEO | KateWaite
-
Are all duplicate content issues bad? (Blog article Tags)
If so, how bad? We use tags on our blog and this causes duplicate content issues. We don't use WordPress, but with such a widely used CMS having the same issue, it seems quite plausible that Google is smart enough to deal with duplicate content caused by blog article tags and not penalise at all. It has been discussed here, and I'm ready to remove tags from our blog articles or monitor them closely to see how doing so affects our rankings. Before I do, can you give me some advice around this? Thanks, Daniel.
Intermediate & Advanced SEO | Daniel_B
-
Last Panda: removed a lot of duplicated content but still no luck!
Hello here. My website virtualsheetmusic.com has been hit several times by Panda since its inception back in February 2011, so five weeks ago we decided to get rid of about 60,000 thin, almost duplicate pages via noindex meta tags and canonicals (we have not physically removed those pages from our site to return a 404, because our users may search for those items on our own website). We expected this last Panda update (#25) to give us some traffic back... instead we lost an additional 10-12% of traffic from Google, and now it looks even more badly targeted. Let me say how disappointing this is after so much work! I must admit that we still have many pages that may look like thin and duplicate content, and we are considering removing those too (but those are actually giving us sales from Google!), but I expected to recover a little bit from this last Panda and improve our positions in the index. Instead nothing; we have been hit again, and badly. I am pretty desperate, and I am afraid I have lost the compass here. I am particularly afraid that the removal of over 60,000 pages from the index via noindex meta tags has, for some unknown reason, been more damaging than beneficial. What do you think? Is it just a matter of time? Am I on the right path? Do we need to wait just a little bit more and keep removing (via noindex meta tags) duplicate content and improving all the rest as usual? Thank you in advance for any thoughts.
Intermediate & Advanced SEO | fablau
-
Duplicate Content
http://www.pensacolarealestate.com/JAABA/jsp/HomeAdvice/answers.jsp?TopicId=Buy&SubtopicId=Affordability&Subtopicname=What%20You%20Can%20Afford
http://www.pensacolarealestate.com/content/answers.html?Topic=Buy&Subtopic=Affordability
I have no idea how the first address exists at all... I ran the SEOmoz tool and got around 600 duplicate content errors! I also have errors on content, titles, etc. How do I get rid of all the content being generated from this JAABA/JSP gibberish? Please ask questions that will help you help me. I have always been first in Google local, and I have a business that is starting to hurt very seriously from being number three 😞
Intermediate & Advanced SEO | JML1179