Questions created by webmethod
Duplicate Content / Canonical Conundrum on E-Commerce Website
Hi all, I'm looking for some expert advice on the use of canonicals to resolve duplicate content on an e-commerce site. I've used a generic example to explain the problem (I do not really run a candy shop).

SCENARIO
I run a candy shop website that sells candy dispensers and the candy that goes in them. I sell about 5,000 different models of candy dispensers and 10,000 different types of candy. Much of the candy fits in more than one candy dispenser, and some candy dispensers fit exactly the same types of candy as others. To make things easy for customers who need to fill up their candy dispensers, I provide a "candy finder" tool on my website which takes them through three steps:
1. Pick your candy dispenser brand (e.g. Haribo)
2. Pick your candy dispenser type (e.g. soft candy or hard candy)
3. Pick your candy dispenser model (e.g. S4000-A)

RESULT
The customer is then presented with a list of candy products that they can buy, on a URL like this:
Candy-shop.com/haribo/soft-candy/S4000-A
All of these steps are presented as HTML pages with followable/indexable links.

PROBLEM
There is a duplicate content issue with the results pages, because a lot of the candy dispensers fit exactly the same candy (e.g. S4000-A, S4000-B and S4000-C). This means the content on these pages is basically the same, since the same candy products are listed. I'll call these the "duplicate dispensers", e.g.:
Candy-shop.com/haribo/soft-candy/S4000-A
Candy-shop.com/haribo/soft-candy/S4000-B
Candy-shop.com/haribo/soft-candy/S4000-C
The page titles/headings change based on the dispenser model, but that's not enough for the pages to be deemed unique by Moz. I want to drive organic traffic from searches for dispenser-model + candy keywords, but with duplicate content like this I'm guessing it's holding any of these dispenser pages back from ranking.

SOLUTIONS
1. Write unique content for each of the duplicate dispenser pages. Manufacturers add or discontinue about 500 dispenser models each quarter and I don't have the resources to keep on top of this content. I would also question the real value of this content to a user when it's pretty obvious what the products on the page are.
2. Pick one duplicate dispenser to act as the rel=canonical target and point all its duplicates at it. This doesn't work, as dispensers get discontinued, so I run the risk of losing canonicals at random or having them change as models become unavailable.
3. Create a single page listing all of the duplicate dispensers, and canonical all of the individual duplicate pages to that page (a rough sketch of the markup is at the end of this post). E.g.:
Canonical: candy-shop.com/haribo/soft-candy/S4000-Series
Duplicates (which all point to the canonical):
candy-shop.com/haribo/soft-candy/S4000-Series?model=A
candy-shop.com/haribo/soft-candy/S4000-Series?model=B
candy-shop.com/haribo/soft-candy/S4000-Series?model=C

PROPOSED SOLUTION
Option 3. Does anyone agree/disagree, or have any other thoughts on how to solve this problem? Thanks for reading.
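For reference, here is the rough markup sketch I have in mind for Option 3, using the illustrative S4000-Series URLs from above and assuming absolute https URLs (a sketch only, not our real implementation):

    <!-- In the <head> of candy-shop.com/haribo/soft-candy/S4000-Series?model=A (and ?model=B, ?model=C) -->
    <link rel="canonical" href="https://candy-shop.com/haribo/soft-candy/S4000-Series" />

    <!-- The series page itself would carry a self-referencing canonical to the same URL -->
    <link rel="canonical" href="https://candy-shop.com/haribo/soft-candy/S4000-Series" />

The appeal of this setup is that every model variant consolidates into the single series page, so discontinuing one model never removes the canonical target.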
Intermediate & Advanced SEO | webmethod
Kill, pimp or cut loose? Ideas for a legacy ECommerce blog
Hi, I'm looking to revamp the fortunes of an ailing fashion e-commerce blog, which once had an impact on SEO for the site it links to but has now fallen by the wayside.

The blog sits at www.mydomain.com/blog and links to products and categories on the e-commerce site www.mydomain.com. The blog has about 2,000 posts written over the past 5 years, almost all of which are rewritten content about existing stories, events or embedded YouTube videos related to fashion on the web. None of the blog topics are unique, but the posts have been rewritten well and in an entertaining way, i.e. it's not just copy and paste. The blog is written on an old, proprietary platform and only has basic social sharing. You can't comment on posts, or see "most popular" posts, tag clouds etc. It is optimised for SEO though, with fashion category tags, date archives and friendly URLs.

The company badly needs a shot in the arm for its content marketing efforts, so we're looking into the creation of infographics and other types of high-quality, shareable content with an outreach effort. Ideally I want this content to be hosted on the e-commerce site, but I'm faced with a few options for how to handle the mix of the legacy content on /blog and the addition of the new, "high quality" content, and I'd appreciate the community's view:

(Pimp v1) Leave the /blog exactly as is and add the new, high-quality content as new posts to it. Invest in pimping the /blog UI so that it has features such as commenting, tag clouds etc. The blog could be migrated to WordPress but left on the same URL.

(Cut loose) Leave the /blog alone, and start afresh with a new WordPress blog for the new, high-quality content, e.g. /news or news.mydomain.com. The old blog posts probably aren't worth bothering about, but it might be risky to delete them as there are a lot, and we're probably better off with them than without.

(Pimp v2) Set up a new WordPress blog (e.g. /news or news.mydomain.com) for the new content and move the old /blog content to it, with 301s from the old /blog posts to the new location (a rough sketch of the redirect rule is at the end of this post). The depth of old content would add weight to the new content from a user's perspective; the new content would seem sparse if published on its own. Not sure why I would do this, but it's an option...

(Kill) Kill the old /blog content and start a new blog for the new, high-quality content.

Maybe there's another option I haven't considered. Thanks in advance, George
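For reference, the 301s under Pimp v2 could be something like the sketch below, assuming Apache with mod_alias and assuming the new blog keeps the same post slugs under /news/ (both of which are assumptions on my part, not our confirmed setup):

    # .htaccess (or vhost config) on www.mydomain.com:
    # permanently redirect every old /blog URL to the same path under /news
    RedirectMatch 301 ^/blog/(.*)$ /news/$1

If the post slugs change during the WordPress migration, this would need to become a one-to-one map of Redirect 301 lines (old post URL to new post URL) rather than a single pattern.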
Intermediate & Advanced SEO | webmethod
Duplicate Content Report: Duplicate URLs being crawled with "++" at the end
Hi, In our Moz report over the past few weeks I've noticed some duplicate URLs appearing, like the following:

Original (valid) URL: http://www.paperstone.co.uk/cat_553-616_Office-Pins-Clips-and-Bands.aspx?filter_colour=Green
Duplicate URL: http://www.paperstone.co.uk/cat_553-616_Office-Pins-Clips-and-Bands.aspx?filter_colour=Green++

These aren't appearing in Webmaster Tools, or in a Screaming Frog crawl of our site, so I'm wondering if this is a bug with the Moz crawler. I realise that it could be resolved using a canonical reference (a rough sketch is at the end of this post), or by performing a 301 from the duplicate to the canonical URL, but I'd like to find out what's causing it and whether anyone else is experiencing the same problem. Thanks, George
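For reference, the canonical-reference option would be a sketch like the following in the <head> of the filtered category page, assuming we decide the filtered URL should be its own canonical target (which is a separate question):

    <!-- Output on the filter_colour=Green page; as long as the href is the clean URL rather than
         the requested URL, any stray "++" variants would carry the same tag and consolidate to it -->
    <link rel="canonical" href="http://www.paperstone.co.uk/cat_553-616_Office-Pins-Clips-and-Bands.aspx?filter_colour=Green" />

I'd still rather understand where the "++" URLs are coming from before papering over them, though.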
Product Support | webmethod