Altering Breadcrumbs Based on User Path to Product URL
-
Hi,
Our products are listed in multiple categories, and as the URLs are path-dependent (example.com/fruit/apples/granny-smith/, example.com/fruit/green-fruit/granny-smith/, and so forth) we canonicalise to the 'default' URL (in this case example.com/fruit/apples/granny-smith/).
Mainly for crawl-bandwidth reasons, I'm looking to change all product URLs to be path-neutral so there is only ever one URL per product (example.com/granny-smith/), but still list the product in multiple categories.
If a user comes directly to example.com/granny-smith/ then the breadcrumbs will use the default path "Fruit > Apples"; however, if the user navigated to the product via another category, I'd like the breadcrumbs to reflect this. I'm not worried about cloaking, as it's not based on user-agent and there's an obvious, logical reason for doing it, so I don't expect a penalty.
My question is: how would you recommend achieving this from a technical standpoint? Many sites use path-neutral product URLs (Ikea, PC World, etc.) but none seem to alter the breadcrumbs depending upon path.
Our site is mostly behind a CDN, so it has to be a client-side solution. I currently see the options as:
- Store the path to the product in a cookie and/or the browser's localStorage
- Attach the path details after a # in the URL and use JavaScript (jQuery) to alter the breadcrumbs on load
- When a user clicks through to a product from a listing page, use AJAX to pull in the product info but leave the rest of the page (including the breadcrumbs) as-is, updating the URL accordingly
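For what it's worth, option 2 could be sketched roughly like this. The `#path=` fragment format and the slug-to-label rule are assumptions for illustration, not an existing convention; the fragment never reaches the server, and search engines ignore it, so only the visitor's browser acts on it:

```javascript
// Turn a location.hash like "#path=fruit/green-fruit" into breadcrumb labels.
// "#path=" and the hyphenated-slug naming are assumed conventions.
function crumbsFromHash(hash, defaultCrumbs) {
  var match = /^#path=([\w\/-]+)/.exec(hash || "");
  if (!match) return defaultCrumbs; // direct visit: fall back to the canonical path
  return match[1].split("/").map(function (segment) {
    // "green-fruit" -> "Green Fruit"
    return segment.split("-").map(function (word) {
      return word.charAt(0).toUpperCase() + word.slice(1);
    }).join(" ");
  });
}

// On load, something like (jQuery):
//   $("#breadcrumbs").text(
//     crumbsFromHash(location.hash, ["Fruit", "Apples"]).join(" > "));
```

So a listing page in the green-fruit category would link to example.com/granny-smith/#path=fruit/green-fruit, while search engines only ever see example.com/granny-smith/.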
Do you think any of these wouldn't work? Do you have a preference for which one is best? Is there another method you'd recommend?
We also have "Next/Previous" functionality (links to the previous and next product URLs) on the page, so I suspect we'd need to attach the path after a # and make another round trip to the server on load to update the previous and next links.
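That onload round trip could be as small as one endpoint keyed on the product plus the category from the fragment. Both the `/api/prev-next` endpoint and the `#path=` fragment format here are hypothetical names, just to show the shape of it:

```javascript
// Build the URL for an onload request asking the server which products are
// previous/next within the visitor's actual category. "/api/prev-next" and
// "#path=" are assumed names for illustration only.
function prevNextEndpoint(productSlug, hash) {
  var match = /^#path=([\w\/-]+)/.exec(hash || "");
  var path = match ? match[1] : ""; // no fragment: server uses the default category
  return "/api/prev-next?product=" + encodeURIComponent(productSlug) +
    (path ? "&path=" + encodeURIComponent(path) : "");
}

// e.g. with jQuery:
//   $.getJSON(prevNextEndpoint("granny-smith", location.hash), function (data) {
//     $("#prev").attr("href", data.prev);
//     $("#next").attr("href", data.next);
//   });
```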
Finally, does anyone know of any sites that do update the breadcrumbs depending upon path?
Thanks in advance for your time
FashionLux
-
Further update to this. I ran into a problem with option 3: the solution works really well when navigating the site internally; however, a user landing on one of these URLs directly (bookmark, social share, etc.) would get a slow-loading page, because (for non-default product variations) the page loads after the first request and then a second request to the server is needed to pull in the image via AJAX.
Loading the other images, stock information, prices, copy, etc. into an array and doing the work client-side wasn't an option, as the page would get too heavy. So option 3 is ruled out.
Ultimately the goal was to reduce duplicate content on product pages, and none of the three options above does this without affecting page load times. I did look at falling back on canonical tags; however, I've just found that Facebook uses this tag, so if a user wanted to share a 'red apple' when the canonical is 'green apple', Facebook would show an image of the 'green apple'... so at the moment that is ruled out also.
I'll start a new thread on product-page duplicates and the best solution, but if anyone has any ideas then please do let me know.
Thanks
Dean
-
Thanks for the response, Dana. Option 3 did feel like the best option, and that's the one I'm choosing to go with.
Point 2 (with the hash) achieves the desired result of search engines seeing only the clean URL, as the parameters after the hash never reach them, while the browser uses those parameters to power the breadcrumbs. In the end it was a toss-up between 2 and 3, but 3 is the most maintainable and the quickest for users.
Thanks again
Dean
-
Dean,
This is a great, great question and I am eager to find out what my fellow technical SEOs think because I have faced very similar situations on one of my sites. Thanks for asking this question.
My gut instinct is to select #3 of your options. But not really being a developer, it's hard for me to articulate why I think this is the best option. I am really only thinking of it from a user standpoint, in that I want to know where in the hierarchy of the site this page lives, so that if I need to find it again, I can.
I disagree with your option #2 from an SEO standpoint, because anything after a "#" (hash) in a URL is ignored by search engines, so putting the path there isn't going to benefit your SEO in any way.
Interested to hear what others think,
Dana