What is the best way to treat URLs ending in /?s=
Hi Alex
These are parameters that sit after the main URL and often include 'sort' and 'page'. (They can also be created on some eCommerce pages as 'product' parameters, but those should be dealt with via a mod_rewrite to show properly constructed URLs with the category name and product title.) There are a number of ways of dealing with them:
1. Google Search Console - you have to be very careful messing with the rules in parameter handling, but for some parameters this is the way to go.
- 'sort' - you can tell Google that it narrows the content on the page, then choose to let Googlebot decide or block the URLs. I often block them as they just create thin, duplicate content.
- Pagination - 'page' - you can tell Google that this paginates and then let Google decide. Also check the rel=prev/next tags on those pages (a markup sketch follows these bullets).
- Attributes - like size and colour - I generally block those as they just create skinny duplicates of main categories
- Others - like Catalog - it depends on what platform you use but there could be other parameters being created - I block most of them as they create useless URLs
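For the pagination point above, here's a minimal markup sketch; the example.com URLs and the 'page' parameter name are just illustrative assumptions, not taken from your site:

```html
<!-- Hypothetical page 2 of a paginated category, e.g. https://www.example.com/shoes?page=2 -->
<link rel="prev" href="https://www.example.com/shoes?page=1" />
<link rel="next" href="https://www.example.com/shoes?page=3" />
```

These tags sit in the <head> of each paginated URL and help Google treat the series as a sequence rather than as duplicates.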
2. Robots.txt
You can use this file to stop search bots from crawling these pages based on their parameters. Once again, be very careful, as you don't want to accidentally block crawling of useful areas of the site.
https://moz.com/learn/seo/robotstxt
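As a minimal robots.txt sketch (the parameter names here are examples only - adjust them to whatever your platform actually generates):

```
User-agent: *
# Block internal-search URLs such as /?s=keyword
Disallow: /*?s=
Disallow: /*&s=
# Block sort parameter URLs such as /category?sort=price
Disallow: /*?sort=
Disallow: /*&sort=
```

Bear in mind robots.txt stops compliant bots from crawling these URLs; it doesn't by itself remove anything already indexed.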
3. Canonicals
If you are able to, a great way of dealing with attributes like size and colour is to canonicalize back to the non-size-specific URL. This maintains the link juice for those URLs, which might otherwise be lost if you blocked them altogether. You add a rel=canonical tag pointing to the non-parameter version (sketch below).
https://moz.com/learn/seo/canonicalization
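A minimal sketch, assuming a hypothetical /shoes category with size and colour parameters - the tag goes in the <head> of the parameter version and points back to the clean URL:

```html
<!-- On https://www.example.com/shoes?size=9&colour=blue -->
<link rel="canonical" href="https://www.example.com/shoes" />
```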
4. As a last resort you can 301 redirect them, but frankly, if you have dealt with them properly you shouldn't have to. It's also bad practice to have live 301 redirects in the internal structure of a website - it's best to link to the correct URL in the first place. If you do need the redirect, a sketch follows below.
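Here is a hedged sketch for an Apache .htaccess, assuming the site runs Apache with mod_rewrite enabled and using the ?s= parameter from your question:

```apache
RewriteEngine On
# If the query string contains an s= parameter...
RewriteCond %{QUERY_STRING} (^|&)s= [NC]
# ...301 the request to the same path with the query string stripped
RewriteRule ^(.*)$ /$1? [R=301,L]
```

On Apache 2.4+ you could use the [QSD] flag instead of the trailing "?" to discard the query string.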
There is more reading here:
https://moz.com/community/q/which-is-the-best-way-to-handle-query-parameters
https://moz.com/community/q/do-parameters-in-a-url-make-a-difference-from-an-seo-point-of-view
https://moz.com/community/q/how-do-i-deindex-url-parameters
Regards
Nigel