Analyzing ZAPPOS.com - how do they get away with it?
-
Hi All,
The fun thing about our industry is that, unlike poker, most of the cards are open.
While trying to learn what the big players are doing, I chose to focus on www.Zappos.com, one of the largest sportswear (and especially shoe) retailers.
I looked at how they categorize, interlink, and build their product pages.
I have a question about duplication in an age when avoiding it is SO important.
If you look in their running sneakers category, you'll see that they show the same item (in different colors) as separate items - how are these pages not considered duplicates? It gets even worse. If you open a shoe's product page and check the "About the Brand" tab, you'll find that every Nike shoe (just an example) carries exactly the same brand copy. That copy makes up about 90% of the page across hundreds of Nike shoe pages, and the same goes for every other brand.
How come they rank so high and haven't been penalized in the era of Panda?
Is it as always - big brands get away with anything and everything?
Here are two example shoe pages:
Nike Dart 10 (a)
Nike Dart 10 (b)
Thanks!
-
As I said, that is what I would recommend doing. Zappos is not doing it, and that could easily be due to limitations in their eCommerce or fulfillment systems, since each color is probably a different SKU. It could just as easily be due to the ability of these pages to rank better for each color, in which case they have an advantage over most other competitors because they can get away with it, as you have noticed.
-
Thanks for the detailed answer.
If you are putting a canonical tag on these pages, then why not simply have one page with a drop-down for colors?
-
Hello BeytzNet,
It is not uncommon at all for ecommerce sites to have product variants like this, each with their own SKU. They are, after all, two different products. If someone ordered one color and got the other they would be upset. If someone searched Google Shopping for Gray Nike Shoes and ended up on a page for Pink Nike Shoes it would not be a good experience for them.
Yes, a better way to do this would be to have unique on-page content for each variant of this shoe, or even to have one page that allows the user to choose their color from a drop-down list (oh wait, Zappos does that too...) so the page isn't optimal, but it is unlikely that Google would see this as something worth applying a penalty for. They would more likely just decide to rank only one version. Rather than being sneaky, it is probably just a scalability problem.
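For what it's worth, the single-page setup being described here could look something like the hypothetical markup below (example.com and the color list are made up for illustration, not Zappos' actual code): one URL carries every color variant with a self-referencing canonical, so there are no separate color pages to duplicate each other.

```html
<!-- Hypothetical single product page covering all color variants -->
<head>
  <title>Nike Dart 10 - Black, Grey &amp; Pink | Example Store</title>
  <!-- One canonical URL for the product, regardless of the color chosen -->
  <link rel="canonical" href="https://www.example.com/nike-dart-10" />
</head>
<body>
  <h1>Nike Dart 10</h1>
  <!-- Color choice stays on this URL instead of linking out to ~1 / ~2 pages -->
  <select name="color">
    <option value="black">Black</option>
    <option value="grey">Grey</option>
    <option value="pink">Pink</option>
  </select>
</body>
```

Listing the colors in the title, as above, also keeps some ability to rank for color-specific searches.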
With that said, I know lots of lesser-known brands and websites that have been hit hard by Panda for similar "scalability problems". The fact that big, well-known brands can get away with a lot more is something that has been going on for a long time and isn't about to change any time soon. So to answer your question "how do they get away with it" - They get away with it by being a huge, well-known brand. It sucks, but that apparently provides a better user experience for Google searchers. I don't think there is any malicious purpose to that (e.g. Adsense revenue, helping Google partner sites...), rather it has to do with the way we, as searchers, react to branding by clicking on the results we are already familiar with and buying from sites we already trust.
If I were to handle the same situation I'd probably choose a canonical version and redirect the other pages to it, since writing unique copy for each color of shoe wouldn't be scalable for a site that size. Of course you would lose some ability to rank for color-specific searches, but you could minimize that by listing the colors in the title or on-page content while allowing the user to select a color from a drop-down.
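A quick way to audit what a site is actually doing here is to pull the rel="canonical" tag out of each variant page's HTML and see whether the variants all point at one URL. A minimal standard-library sketch (the page HTML is inlined for illustration, and the /nike-dart-10~N paths are just stand-ins for real variant URLs you would fetch):

```python
from html.parser import HTMLParser


class CanonicalParser(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag seen."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical" and self.canonical is None:
            self.canonical = attrs.get("href")


def canonical_of(page_html):
    """Return the canonical URL declared in an HTML document, or None."""
    parser = CanonicalParser()
    parser.feed(page_html)
    return parser.canonical


# Two variant pages of the same shoe, each declaring its own canonical
# (this mirrors what the thread observed on the example pages):
variants = {
    "/nike-dart-10~1": '<html><head><link rel="canonical" href="/nike-dart-10~1" /></head></html>',
    "/nike-dart-10~2": '<html><head><link rel="canonical" href="/nike-dart-10~2" /></head></html>',
}

canonicals = {url: canonical_of(doc) for url, doc in variants.items()}

# If every variant pointed at one chosen version, the set of canonical
# targets would have exactly one member; here each page self-canonicalizes.
print(canonicals)
print("consolidated:", len(set(canonicals.values())) == 1)
```

Run against real pages (fetching each URL instead of using inline strings), a result of `consolidated: False` would mean the color variants compete with each other rather than pooling their authority on one URL.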
-
Cody, that is not accurate. Only one of the pages references ...10~2 as the canonical URL. The other one uses <link rel="canonical" href="/nike-dart-10~1" />.
-
It's because they utilize canonicals to specify the url that should get all of the authority. Both of your examples have this:
<link rel="canonical" href="/nike-dart-10~2" />
-
Hi, great question and find.
I recently read an article, I think from Distilled, on SEO myths. One of the myths was about duplicate content penalties: duplicate content "has the potential to dilute link equity," but apparently Google wasn't imposing serious penalties for it.
It was an interesting little piece, but I would also suggest they are using a lot of nofollow links.
As an e-commerce developer, I find product variations a hard thing to index well.
I would be interested to hear a few takes on how people are doing it well.