Analyzing ZAPPOS.com - how do they get away with it?
-
Hi All,
The fun thing about our industry is that unlike poker - most cards are open.
While trying to learn what the big guys are doing, I chose to focus on www.Zappos.com - one of the largest online sportswear retailers (especially shoes).
I looked at how they categorize, how they interlink, and at their product pages.
I have a question about duplication in an age where it is SO important.
If you look in their running sneakers category, you'd see that they show the same item (in different colors) as separate items - how are these pages not considered duplication?

It gets even worse. If you look inside a shoe page (a product page), in the "About the Brand" tab, you'd learn that for all shoes from Nike (just an example) the about-the-brand text is exactly the same. This is about 90% of the page for hundreds of Nike shoe pages, and the same goes for all other brands.
How come they are ranked so high and not penalized in the era of Panda?
Is it as always - big brands get away with anything and everything?

Here are two example shoe pages:
Nike Dart 10 (a)
Nike Dart 10 (b)

Thanks!
-
As I said, that is what I would recommend doing. Zappos is not doing that, and it could easily be due to limitations in their eCommerce or fulfillment systems, since each color is probably a different SKU. It could just as easily be due to the ability of these pages to rank better for each color, in which case they have an advantage over most other competitors because they can get away with it, as you have noticed.
-
Thanks for the detailed answer.
If you are putting a canonical tag on anyway, then why not simply have one page with a drop-down for colors?
-
Hello BeytzNet,
It is not uncommon at all for ecommerce sites to have product variants like this, each with their own SKU. They are, after all, two different products. If someone ordered one color and got the other they would be upset. If someone searched Google Shopping for Gray Nike Shoes and ended up on a page for Pink Nike Shoes it would not be a good experience for them.
Yes, a better way to do this would be to have unique on-page content for each variant of this shoe, or even to have one page that allows the user to choose their color from a drop-down list (oh wait, Zappos does that too...) so the page isn't optimal, but it is unlikely that Google would see this as something worth applying a penalty for. They would more likely just decide to rank only one version. Rather than being sneaky, it is probably just a scalability problem.
With that said, I know lots of lesser-known brands and websites that have been hit hard by Panda for similar "scalability problems". The fact that big, well-known brands can get away with a lot more is something that has been going on for a long time and isn't about to change any time soon. So to answer your question "how do they get away with it" - They get away with it by being a huge, well-known brand. It sucks, but that apparently provides a better user experience for Google searchers. I don't think there is any malicious purpose to that (e.g. Adsense revenue, helping Google partner sites...), rather it has to do with the way we, as searchers, react to branding by clicking on the results we are already familiar with and buying from sites we already trust.
If I were to handle the same situation I'd probably choose a canonical version and redirect the other pages to it, since writing unique copy for each color of shoe wouldn't be scalable for a site that size. Of course you would lose some ability to rank for color-specific searches, but you could minimize that by listing the colors in the title or on-page content while allowing the user to select the color from a drop-down.
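To illustrate the "one canonical page, colors in the title" idea, here's a quick Python sketch. The product names and store name are made up for the example, not taken from Zappos:

```python
# Hypothetical sketch: build a <title> for a single consolidated product
# page that still mentions every color variant, so color-specific
# searches can still match the page.
def product_title(name, colors, store="Example Store"):
    """Return a title string listing the product name and all its colors."""
    return f"{name} - {', '.join(colors)} | {store}"

print(product_title("Nike Dart 10", ["Black", "White", "Anthracite"]))
# Nike Dart 10 - Black, White, Anthracite | Example Store
```

The same string could just as easily feed an H1 or meta description; the point is only that the variant information survives the consolidation.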
-
Cody, that is not accurate. Only one of the pages references ...10~2 as the canonical URL. The other one uses <link rel="canonical" href="/nike-dart-10~1" />.
-
It's because they utilize canonicals to specify the url that should get all of the authority. Both of your examples have this:
<link rel="canonical" href="/nike-dart-10~2" />
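If you want to check this yourself across variant pages, a small Python sketch using the standard library's HTML parser will do it. The sample markup below is invented to mirror the two example pages, not copied from Zappos:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical" and self.canonical is None:
            self.canonical = attrs.get("href")

def find_canonical(html):
    parser = CanonicalFinder()
    parser.feed(html)
    return parser.canonical

# Two hypothetical variant pages: link equity only consolidates
# if both declare the SAME canonical URL.
page_a = '<html><head><link rel="canonical" href="/nike-dart-10~2" /></head></html>'
page_b = '<html><head><link rel="canonical" href="/nike-dart-10~1" /></head></html>'
print(find_canonical(page_a) == find_canonical(page_b))  # False - they do not share one
```

Which is exactly the discrepancy being pointed out above: self-referencing canonicals on each variant don't consolidate anything.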
-
Hi, great question and find.
I recently read an article, I think it was from Distilled, on SEO myths. One of the myths was about duplicate content penalties: duplicate content "has the potential to dilute link equity," but apparently Google wasn't imposing serious penalties for it.
It was an interesting little piece, but I would also suggest they are using a lot of nofollow links.
As an ecommerce developer, I can say product variations are a hard thing to index well.
I would be interested to get a few takes on how people are doing it well.
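For what it's worth, one pattern I've seen (sketched here with invented SKUs and URLs, not any real system's data) is to keep a SKU per variant for fulfillment but map every variant to one parent URL for canonicalization:

```python
# Hypothetical variant catalog: each color is its own SKU for
# inventory/fulfillment, but all share one parent product slug.
variants = {
    "DART10-BLK": {"product": "nike-dart-10", "color": "black"},
    "DART10-WHT": {"product": "nike-dart-10", "color": "white"},
    "DART10-PNK": {"product": "nike-dart-10", "color": "pink"},
}

def canonical_url(sku):
    """Every variant of the same parent product shares one canonical URL."""
    return f"/{variants[sku]['product']}"

# All three color SKUs canonicalize to the same page.
print(canonical_url("DART10-BLK"))  # /nike-dart-10
```

That keeps the operational need for per-color SKUs separate from the SEO decision about which URL should rank.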