Handling duplicate content whilst making both versions rank well
-
Hey MOZperts,
I run a marketplace called Zibbet.com with thousands of individual stores inside it. We're about to launch a new initiative giving every seller their own stand-alone website.
URL structure:
Marketplace URL: http://www.zibbet.com/pillowlink
Stand-alone site URL: http://pillowlink.zibbet.com (doesn't work yet)
Essentially, their stand-alone website is a duplicate of their marketplace store: same items (item titles, descriptions), same seller bios, same shop introduction content etc., just with a different layout. You can scroll down and see a preview of the different pages here, if that helps you visualize what we're doing.
My questions: My desire is for both the seller's marketplace store and their stand-alone website to rank well in the SERPs.
- Is this possible?
- Do we need to add any tags (e.g. "rel=canonical") to one of these so that we're not penalized for duplicate content? If so, which one?
- Can we just change the meta data structure of the stand-alone websites to skirt around the duplicate content issue?
Keen to hear your thoughts and any suggestions for how best to handle this.
Thanks in advance!
-
No. We're actually not launching this initiative for SEO purposes. We just want to create value for our users and having their own stand-alone website is valuable to them.
I just want to make sure we're structured properly from an SEO point of view so that we don't compromise the SEO of our marketplace, or their stand-alone site.
Also, each stand-alone site's content is unique relative to other sellers' sites, but identical to that seller's own marketplace store. So every seller has a marketplace store (with items, a profile etc.) AND a stand-alone website (with the same items and same profile, just designed differently and accessible via a subdomain).
Hope that makes sense.
-
Thanks so much for your input.
I must admit, I'm not too familiar with Panda, so I'll need to do some digging there. We only launched the new version of Zibbet two months ago, with different meta data etc., so I'm not sure how that affects things.
If we don't add the rel=canonical, do you think we'll get penalized by Google?
-
If I understand correctly, you're asking how you can create a business model that fills up the search results with a bunch of sites that all have the same content. I think you're somewhat late to that party; today's Google is pretty good at preventing exactly that. And if you were thinking of linking back to your main site from all those dupes, I'd rethink that strategy as well.
-
Hi,
First of all, I would really take care of that Panda issue you have there: http://screencast.com/t/SzbL6hTFwWr
To answer your questions:
- Is this possible?
They can't both rank. You need to decide on a canonical version - the one to rule them all.
- Do we need to add any tags (e.g. "rel=canonical") to one of these so that we're not penalized for duplicate content? If so, which one?
There's no duplicate content penalty as such, but yes, it's best that you choose the version yourself rather than letting Google do it for you. Add rel=canonical.
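For example, every page of the stand-alone site would carry a canonical link element pointing at its matching marketplace page. A minimal sketch using the URLs from the question (the item path is a placeholder):

    <!-- in the <head> of http://pillowlink.zibbet.com -->
    <link rel="canonical" href="http://www.zibbet.com/pillowlink" />
    <!-- in the <head> of an item page on the subdomain -->
    <link rel="canonical" href="http://www.zibbet.com/pillowlink/some-item" />

Bear in mind that the canonicalized version (the subdomain here) will then be filtered out of the results in favour of the marketplace page.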
- Can we just change the meta data structure of the stand-alone websites to skirt around the duplicate content issue?
That won't help.
Overall, you've already been hit by Panda - you should take care with what content you push into the index, as pushing more of it is not the way to recover. Before pushing more content into the index, you should clean the site of all Panda-related issues.
Cheers.
Related Questions
-
Duplicate content issue
Hello! We have a lot of duplicate content issues on our website. Most of the pages with these issues are dictionary pages (about 1,200 of them). They're not exact duplicates, but each contains a different word with a translation, picture and audio pronunciation (example: http://anglu24.lt/zodynas/a-suitcase-lagaminas). What's the best way of solving this? We probably shouldn't disallow dictionary pages in robots.txt, right? Thanks!
Intermediate & Advanced SEO | jpuzakov
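For reference, the robots.txt disallow mentioned would look like this (a sketch assuming the dictionary lives under /zodynas/, as in the example URL). Blocking crawling this way also keeps those pages from appearing properly in search at all, which is why it's usually not the first choice:

    # keep crawlers out of the dictionary section
    User-agent: *
    Disallow: /zodynas/

-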
Potential Pagination Issue / Duplicate Content Issue
Hi All, We upgraded our framework, relaunched our site with new URL structures etc. and resubmitted our sitemap to Google last week. However, it's now come to light that the rel=next and rel=prev tags we had in place on many of our pages are missing. We are putting them back in now, but my worry is: as they were missing when we submitted the sitemap, will I have duplicate content issues, or will it resolve itself as Google re-crawls the site over time? Any advice would be greatly appreciated. Thanks, Pete
Intermediate & Advanced SEO | PeteC12
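For anyone checking their own templates, the pagination tags in question sit in the <head> and look like this (a sketch for page 2 of a series, with placeholder URLs):

    <link rel="prev" href="http://www.example.com/category?page=1" />
    <link rel="next" href="http://www.example.com/category?page=3" />

-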
How should I manage duplicate content caused by a guided navigation for my e-commerce site?
I am working with a company which uses Endeca to power the guided navigation for our e-commerce site. I am concerned that the duplicate content generated by having the same products served under numerous refinement levels is damaging the site's ability to rank well, and I was hoping the Moz community could help me understand how much of an impact this type of duplicate content could be having. I would also love to know if there are any best practices for how to manage this type of navigation. Should I nofollow all of the URLs which have more than one refinement applied to a category, or should I allow the search engines to go deeper than that to preserve the long tail? Any help would be appreciated. Thank you.
Intermediate & Advanced SEO | FireMountainGems
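The nofollow described would go on the refinement links themselves, e.g. (a sketch; the URL and parameter names are placeholders, as Endeca's actual URL structure isn't given in the question):

    <!-- link to a page with two refinements applied -->
    <a href="/category?colour=red&size=large" rel="nofollow">Red, Large</a>

-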
HELP! How does one prevent regional pages from being counted as "duplicate content," "duplicate meta descriptions," et cetera?
The organization I am working with has multiple versions of its website geared towards different regions:
US - http://www.orionhealth.com/
CA - http://www.orionhealth.com/ca/
DE - http://www.orionhealth.com/de/
UK - http://www.orionhealth.com/uk/
AU - http://www.orionhealth.com/au/
NZ - http://www.orionhealth.com/nz/
Some of these sites have very similar pages, which are registering as duplicate content, meta descriptions and titles. Two examples are:
http://www.orionhealth.com/terms-and-conditions
http://www.orionhealth.com/uk/terms-and-conditions
Even though the content is the same, the navigation is different, since each region has different product options and services, so a redirect won't work. A rel=canonical seems like a viable option, but (correct me if I'm wrong) it tells search engines to only index the main page, which in this case would be the US version, and I still want the UK site to appear to search engines. So what is the proper way of treating similar pages across different regional directories? Any insight would be GREATLY appreciated! Thank you!
Intermediate & Advanced SEO | Scratch_MM
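One standard mechanism for exactly this situation is hreflang (an annotation the question doesn't mention, but which exists for regional alternates and avoids the index-only-one-page behaviour of rel=canonical). A sketch using two of the URLs above; each regional page would list the full set of alternates, including itself:

    <link rel="alternate" hreflang="en-us" href="http://www.orionhealth.com/terms-and-conditions" />
    <link rel="alternate" hreflang="en-gb" href="http://www.orionhealth.com/uk/terms-and-conditions" />

-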
Blog Not Ranking Well at All in Search Engines, Need Help!
Hi Mozers, I need some help with a CMS I've been working with over the last year. The CMS is built by a team of guys here in Washington State. Basically, I'm having issues with clients' content on the blog system not ranking correctly at all. Here are a few problems I've noticed. Could you confirm each one and rate it as "not a problem", "a problem" or "critical, must fix"?
1. The title tag is pulling from the title of the article, which also automatically generates a URL with underscores instead of dashes. Is having a duplicate URL, title and title tag spammy looking to search engines? Are underscores confusing to Google on long URLs (i.e. http://www.ductvacnw.com/blog/archives/2013/05/20/5_reasons_to_hire_a_professional_to_clean_your_air_ducts_and_vents), where shorter ones are fine (i.e. domain/i_pad/)?
2. The CMS is resolving all URLs with a canonical instead of a 301 redirect (I've told Webmaster Tools which preferred URL should be indexed). Does using a canonical over a 301 redirect cause any confusion with Google? Is one better practice than the other?
3. The H1 tags on the blog pull from the blog category instead of the title of the blog post. Is this a problem?
4. The URLs are quite long with the added "archives/2013/05/20/5". Does this cause problems by pushing the main target keyword further away from the domain name?
5. I'm also noticing the blog post is actually not part of the breadcrumbs, where we would normally expect it to appear after the blog category name. Problem?
These are some of the things I've noticed and need clarification on. If you see anything else, please let me know.
Intermediate & Advanced SEO | Keith-Eneix
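On point 2, the two mechanisms look like this. A sketch with placeholder paths; the redirect line assumes an Apache server, which the question doesn't specify:

    <!-- canonical: both URLs stay live; Google consolidates signals onto the href target -->
    <link rel="canonical" href="http://www.example.com/blog/preferred-post-url" />

    # 301 in .htaccess: browsers and crawlers are permanently sent to the preferred URL
    Redirect 301 /blog/old-post-url http://www.example.com/blog/preferred-post-url

-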
Same content pages in different versions of Google - is it duplicate?
Here's my issue: I have the same page twice, with the same content but on a different URL for each country. For example: www.example.com/gb/page/ and www.example.com/us/page. So, one for the USA and one for Great Britain. Or it could be a subdomain, gb. or us. etc. Is it duplicate content if the US version of Google indexes the US page and the UK version indexes the UK page (same content, different URLs)? The UK search engine will only see the UK page, and the US one the US page. Is this bad for the Panda update, or does it get away with it? People suggest it is OK and good for localised search for an international website - I'm not so sure. I'd really appreciate advice.
Intermediate & Advanced SEO | pauledwards
-
Handling Similar page content on directory site
Hi All, SEOmoz is telling me I have a lot of duplicate content on my site. The pages are not duplicates, but they are very similar, because the site is a directory website with a page for each city across multiple US states. I do not want these pages being indexed and was wondering the best way to go about this. I was thinking I could add rel="nofollow" to all the links to those pages, but I'm not sure if that is the correct approach. Since the folders are deep within the site and not under one main folder, doing it through robots.txt would mean disallowing many folders. The other option I'm considering is a meta noindex, follow, but I would have to get my programmer to add a meta tag just for this section of the site. Any thoughts on the best way to achieve this, so I can eliminate these duplicate pages from my SEO report and from the search engine index? Thanks!
Intermediate & Advanced SEO | cchhita
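The meta tag described goes in the <head> of each city-page template (a sketch; "noindex, follow" keeps the page out of the index while still letting crawlers follow and pass value through its links):

    <meta name="robots" content="noindex, follow" />

-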
Duplicate Content from Article Directories
I have a small client with a website: PR 2, 268 links from 21 root domains, MozTrust 5.5, MozRank 4.5. However, whenever I check the link: operator in Google, it always returns none. My client has a blog with many articles on it. However, they have also submitted every blog article to article directories, plainly and simply creating duplicate content. Is this the reason why their link: count is coming up as none? Is there something we can do to correct the situation?
Intermediate & Advanced SEO | danielkamen