Best way to handle page filters and sorts
-
Hello Mozzers, I have a question about the best way to handle filters and sorts with Googlebot.
I have a page that returns a list of widgets. There's a "root" page about widgets, plus filter and sort functionality that shows basically the same content but adds parameters to the URL. For example, if you filter the page of 10 widgets by color, the page returns the 3 red widgets on top and the 7 non-red widgets below. If you sort by size, the page shows the same 10 widgets sorted by size. We use traditional PHP URL parameters to pass the filters and sorts (something like /widgets?color=red or /widgets?sort=size), so obviously Google views each variation as a separate URL.
Right now we don't really do anything special for Google, but I've noticed in the SERPs that if I search for "Widgets," my "Widgets" page and my "Widgets - Blue" page sometimes rank close to each other, which tells me Google basically (rightly) thinks these are all just pages about widgets. Ideally, though, I'd want to rank for just my "Widgets" root page.
What is the best way to structure this setup for Googlebot? I think it's one (or several) of the following (rough markup sketches below), but I'd love any advice:
- put a rel canonical tag on all of the pages with parameters, pointing to the "root" page
- use the Google URL parameter tool and have it not crawl any URLs with my parameters
- put a meta robots noindex tag on the parameter pages
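To make options 1 and 3 concrete, here's roughly the markup I have in mind for the `<head>` of a parameter page (URLs here are just hypothetical):

```html
<!-- Option 1: point every parameter variation back at the root page -->
<link rel="canonical" href="https://www.example.com/widgets" />

<!-- Option 3: alternatively, keep the variations out of the index entirely -->
<meta name="robots" content="noindex, follow" />
```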
Thanks!
-
The only thing I might add is that, depending on the business, it might be worth building a "Red Widgets" category (as an example). However, you would treat this like a sub-category and write its own category description. You would give it its own rel canonical tag, treating it as the root of the "Red Widgets" category.
Nine times out of ten, though, sorting and filtering options don't need their own category page. The ideal option is to not change the URL at all: only re-order the items, hiding some and featuring others. Most eCommerce platforms don't have this functionality at present, however, which makes a rel canonical tag pointing to the canonical version of the page the next best option. Rel canonical was made to span that gap until they do.
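To illustrate the difference with made-up URLs, a genuine sub-category points its canonical at itself, while a plain filter variation points back at the root:

```html
<!-- /red-widgets as a true sub-category with its own description:
     self-referencing canonical -->
<link rel="canonical" href="https://www.example.com/red-widgets" />

<!-- /widgets?color=red as a mere filter view: canonical to the root -->
<link rel="canonical" href="https://www.example.com/widgets" />
```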
-
I'd definitely go with option 1: canonicalise all the parameter variations to the root page. This is a textbook example of what the canonical tag is designed for.
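Since you're already on PHP, implementation can be a one-liner in your template. A minimal sketch, assuming (hypothetically) that the canonical version of each page is simply its path with the query string stripped:

```php
<?php
// Sketch only: build the canonical URL by dropping the query string,
// so /widgets?color=red&sort=size canonicalises to /widgets.
$path = strtok($_SERVER['REQUEST_URI'], '?'); // everything before the first '?'
$canonical = 'https://www.example.com' . $path; // hypothetical domain
?>
<link rel="canonical" href="<?php echo htmlspecialchars($canonical); ?>" />
```

Done this way, every new filter or sort parameter is covered automatically, with no manual upkeep.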
In addition, because you say that many of the variations are also ranking, this will pass that ranking to the root page, instead of throwing it away as would happen if you used the GWT to ignore the parameters.
Lastly, the canonical will be understood by most engines and only needs implementing once. If you go the GWT route, you'll also have to set it up manually in Bing Webmaster Tools, and then you'll have to remember to update both each time new parameters are implemented. Even then it won't work for secondary search engines, assuming they have any importance to your site.
I always think of the Webmaster Tools solution as the method of last resort if for some technical reason I am unable to implement correct canonicalisation/redirects. Consistency and lack of manual intervention are paramount for me in these situations.
Hope that helps?
Paul
-
I'd go with the parameter option:
1) Go to Webmaster Tools > Crawl > URL Parameters > Configure URL Parameters and enter all of the sorting/filtering parameters there.
2A) If all of your items are on one page, set up a canonical URL for that page (which would ignore all sorting parameters).
2B) If your categories span multiple pages, be sure to use rel=next/prev for pagination (see the sketch below).
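For 2B, the `<head>` of a middle page would look something like this (hypothetical URLs):

```html
<!-- Hypothetical page 2 of a 3-page widgets category -->
<link rel="prev" href="https://www.example.com/widgets" />
<link rel="next" href="https://www.example.com/widgets?page=3" />
```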