What is the best way to treat URLs ending in /?s=
Hi Alex
These are query-string parameters that sit after the main URL and often include values such as 'sort' and 'page'. (They can also be created by some eCommerce platforms as 'product' parameters, but those should be dealt with via a mod_rewrite so they display as properly constructed URLs with the category name and product title.) There are a number of ways of dealing with them:
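The mod_rewrite approach mentioned above can be sketched roughly like this in an Apache .htaccess file (the parameter name, path pattern, and target URL structure here are all hypothetical; adapt them to whatever your platform actually generates):

```apache
# Sketch only: 301 a raw eCommerce parameter URL such as
# /index.php?product=123 to a clean, crawlable path.
# "product" and "/products/123/" are illustrative names.
RewriteEngine On
RewriteCond %{QUERY_STRING} ^product=([0-9]+)$
RewriteRule ^index\.php$ /products/%1/? [R=301,L]
```

The trailing ? on the target drops the original query string so the parameter doesn't reappear on the clean URL.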
1. Google Search Console - you have to be very careful messing with the rules in parameter handling, but for some parameters this is the way to go.
- Sorting ('sort') - you can tell Google that it narrows the content on the page, then either let Googlebot decide or block the URLs. I often block them, as they just create thin and duplicate content.
- Pagination ('page') - you can tell Google that the parameter paginates and then let Google decide. Look at the rel=prev/next tags on those pages as well.
- Attributes - like size and colour - I generally block those, as they just create thin duplicates of the main categories.
- Others - like 'catalog' - it depends on what platform you use, but other parameters could be created too. I block most of them, as they create useless URLs.
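For the pagination point above, the markup in question looks like this (the URLs are hypothetical examples; the tags go in the head of each paginated page):

```html
<!-- Hypothetical page 2 of a paginated category: /category?page=2 -->
<link rel="prev" href="https://www.example.com/category?page=1" />
<link rel="next" href="https://www.example.com/category?page=3" />
```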
2. Robots.txt
You can use this file to stop search bots from crawling these pages, based on the parameter. Bear in mind that robots.txt controls crawling rather than indexing, so URLs that are already indexed may linger in results. Once again, be very careful: you don't want to accidentally block useful areas of the site.
https://moz.com/learn/seo/robotstxt
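As a sketch, robots.txt rules blocking parameterised URLs might look like this (the parameter names are examples only; match them to what your platform actually generates, and test the rules before deploying):

```
User-agent: *
# Block crawling of sort and colour parameters, whether they appear
# first in the query string (?sort=) or after another parameter (&sort=)
Disallow: /*?sort=
Disallow: /*&sort=
Disallow: /*?colour=
Disallow: /*&colour=
```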
3. Canonicals
If you are able to, a great way of dealing with attributes like size and colour is to canonicalize back to the non-size-specific URL: you add a rel=canonical tag pointing to the non-parameter version. This is a great way of preserving the link equity of those URLs, which might otherwise be lost if you blocked them altogether.
https://moz.com/learn/seo/canonicalization
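A canonical tag for a colour attribute URL would look something like this (URLs are hypothetical), placed in the head of the parameterised page:

```html
<!-- On https://www.example.com/shoes?colour=red -->
<link rel="canonical" href="https://www.example.com/shoes" />
```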
4. As a last resort you can 301 redirect them but frankly, if you have dealt with them properly you shouldn't have to. It's also bad practice to have live 301 redirects in the internal link structure of a website - best to link to the correct URL in the first place.
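If you do have to fall back on redirects, a rough Apache sketch (with an illustrative parameter name) might be:

```apache
# Sketch only: 301 any URL carrying a "sort" parameter back to the
# same path with the query string stripped. Note this drops the
# ENTIRE query string, including any other parameters present.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)sort= [NC]
RewriteRule ^(.*)$ /$1? [R=301,L]
```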
There is more reading here:
https://moz.com/community/q/which-is-the-best-way-to-handle-query-parameters
https://moz.com/community/q/do-parameters-in-a-url-make-a-difference-from-an-seo-point-of-view
https://moz.com/community/q/how-do-i-deindex-url-parameters
Regards
Nigel