Analyzing ZAPPOS.com - how do they get away with it?
-
Hi All,
The fun thing about our industry is that, unlike poker, most cards are open.
While trying to learn what the big guys are doing, I chose to focus on www.Zappos.com - one of the largest sportswear retailers (especially shoes).
I looked at how they categorize, how they interlink, and at their product pages.
I have a question about duplication, in an age when avoiding it is SO important.
If you look in their running sneakers category you'd see that they show the same item (in different colors) as two separate items - how are these pages not considered duplication?
It gets even worse - if you look inside a shoe page (a product page), in the tab "About the Brand", you'd learn that for all shoes from Nike (just an example) the about-the-brand text is exactly the same. This is about 90% of the page for hundreds of Nike shoe pages - and the same goes for all other brands.
How come they are ranked so high and not penalized in the era of Panda?
Is it as always - big brands get away with anything and everything?
Here are two example shoe pages:
Nike Dart 10 (a)
Nike Dart 10 (b)
Thanks!
-
As I said, that is what I would recommend doing. Zappos is not doing that, and it could easily be due to limitations with their eCommerce or fulfillment systems, since each color is probably a different SKU. It could just as easily be due to the ability of these pages to rank better for each color, in which case they have an advantage over most other competitors because they can get away with it, as you have noticed.
-
Thanks for the detailed answer.
If you are putting a canonical tag on them, then why not simply have one page with a drop-down for colors?
-
Hello BeytzNet,
It is not uncommon at all for ecommerce sites to have product variants like this, each with their own SKU. They are, after all, two different products. If someone ordered one color and got the other they would be upset. If someone searched Google Shopping for Gray Nike Shoes and ended up on a page for Pink Nike Shoes it would not be a good experience for them.
Yes, a better way to do this would be to have unique on-page content for each variant of this shoe, or even to have one page that allows the user to choose their color from a drop-down list (oh wait, Zappos does that too...). So the page isn't optimal, but it is unlikely that Google would see this as something worth applying a penalty for. They would more likely just decide to rank only one version. Rather than being sneaky, it is probably just a scalability problem.
With that said, I know lots of lesser-known brands and websites that have been hit hard by Panda for similar "scalability problems". The fact that big, well-known brands can get away with a lot more is something that has been going on for a long time and isn't about to change any time soon. So to answer your question "how do they get away with it" - They get away with it by being a huge, well-known brand. It sucks, but that apparently provides a better user experience for Google searchers. I don't think there is any malicious purpose to that (e.g. Adsense revenue, helping Google partner sites...), rather it has to do with the way we, as searchers, react to branding by clicking on the results we are already familiar with and buying from sites we already trust.
If I were to handle the same situation I'd probably choose a canonical version and redirect the other pages to it, since writing unique copy for each color of shoe wouldn't be scalable for a site that size. Of course you would lose some ability to rank for color-specific searches, but you could minimize that by listing the colors out in the title or on-page content while allowing the user to select the color from a drop-down.
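To make that concrete, here is a minimal sketch of the idea - pick one variant URL as canonical, map the rest to 301 targets, and emit the tag the canonical page would carry. The `~N` URLs and the "lowest suffix wins" rule are my own assumptions for illustration, not Zappos' actual setup:

```python
def consolidate(variant_urls):
    """Pick one canonical URL from a set of color-variant URLs,
    and build a 301 redirect map for every other variant."""
    canonical = sorted(variant_urls)[0]  # deterministic, arbitrary choice
    redirects = {u: canonical for u in variant_urls if u != canonical}
    tag = f'<link rel="canonical" href="{canonical}" />'
    return canonical, redirects, tag

# Hypothetical Zappos-style variant URLs for one shoe model.
variants = ["/nike-dart-10~3", "/nike-dart-10~1", "/nike-dart-10~2"]
canonical, redirects, tag = consolidate(variants)
print(canonical)  # → /nike-dart-10~1
```

The redirect map would then feed whatever the server uses for 301s (.htaccess rules, a redirect table, etc.), and the tag goes in the head of the surviving page.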
-
Cody, that is not accurate. Only one of the pages references ...10~2 as the canonical URL. The other one uses <link rel="canonical" href="/nike-dart-10~1" />.
-
It's because they utilize canonicals to specify the URL that should get all of the authority. Both of your examples have this:
<link rel="canonical" href="/nike-dart-10~2" />
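You can check this yourself by parsing each page's source for that tag. A quick sketch using only Python's standard library (the sample HTML below is fabricated for illustration, not fetched from Zappos):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of every <link rel="canonical"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonicals.append(attrs.get("href"))

def find_canonical(html_source):
    """Return the first canonical URL in the page, or None if absent."""
    parser = CanonicalFinder()
    parser.feed(html_source)
    return parser.canonicals[0] if parser.canonicals else None

page = '<head><link rel="canonical" href="/nike-dart-10~2" /></head>'
print(find_canonical(page))  # → /nike-dart-10~2
```

Running this against each variant URL's source would show which of them point at the same canonical and which (as noted above) point elsewhere.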
-
Hi, great question and find.
I recently read an article, I think it was from Distilled, on SEO myths. One of the myths was about duplicate content penalties.
It said duplicate content "has the potential to dilute link equity," but that apparently Google wasn't imposing serious penalties.
It was an interesting little piece, but I would suggest they are also using a lot of nofollow links.
As an e-commerce developer, I find product variations a hard thing to index well.
I would be interested to get a few takes on how people are doing it well.