Should we use URL parameters or plain URLs?
-
Hi,
The development team and I are having a heated discussion about one of the more important things in life: URL structures on our site.
Let's say we are creating an Airbnb clone, and we want to be found when people search for
apartments new york.
As we have both houses and apartments in all cities in the U.S., it would make sense for our URL to at least include these, so
clone.com/Apartments/New-York
but users are also able to filter on price and size. This isn't really relevant for Google, and we all agree that clone.com/Apartments/New-York should be canonical for all apartment/New York searches. But how should the URL look for people filtering on a max price of $300 and 100 sq ft?
clone.com/Apartments/New-York?price=30&size=100
or (we are using Node.js, so this is no problem)
clone.com/Apartments/New-York/Price/30/Size/100
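For what it's worth, here is a rough sketch of how either style might be routed in Node, assuming Express (the renderListings helper is a hypothetical stand-in, not our actual code):

const express = require('express');
const app = express();

// Shared stub; a real implementation would query listings and render a page
// whose <head> includes:
// <link rel="canonical" href="https://clone.com/Apartments/New-York">
function renderListings(city, filters) {
  return `Listings for ${city} with filters ${JSON.stringify(filters)}`;
}

// Option 1: query-string filters, e.g. /Apartments/New-York?price=30&size=100
app.get('/Apartments/:city', (req, res) => {
  const { price, size } = req.query; // undefined when no filter is applied
  res.send(renderListings(req.params.city, { price, size }));
});

// Option 2: path-based filters, e.g. /Apartments/New-York/Price/30/Size/100
app.get('/Apartments/:city/Price/:price/Size/:size', (req, res) => {
  const { city, price, size } = req.params;
  res.send(renderListings(city, { price, size }));
});

app.listen(3000);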
The developers hate URL parameters with a vengeance. They think the path-based version is preferable and the most user-readable, and they say that as long as we canonicalize everything to clone.com/Apartments/New-York, it won't matter to good old Google.
I think URL parameters are the way to go, for two reasons. One is that Google might figure out on its own that the price parameter doesn't matter (https://support.google.com/webmasters/answer/1235687?hl=en), and the other is that Webmaster Tools actually lets you tell Google not to worry about a parameter.
We have agreed to disagree on this point and to let the wisdom of Moz decide what we ought to do. What do you all think?
-
Personally, I would agree with you and opt for the following option:
clone.com/Apartments/New-York?price=30&size=100
I don't think it matters whether that section of the URL is readable to everyone. I would actually say that anyone with a technical background would find the URL above easier to change than the other one, as the /'s in the URL suggest directories rather than parameters (that's how I would generally interpret them, anyway).
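To illustrate that point: editing the query-string version is trivial with the standard built-in URL API (a quick sketch; this works the same in modern browsers and Node):

// The query-string version can be edited with the built-in URL API;
// the path version would need custom string handling.
const url = new URL('https://clone.com/Apartments/New-York?price=30&size=100');
url.searchParams.set('price', '50'); // change a filter
url.searchParams.delete('size');     // remove a filter
console.log(url.toString());         // https://clone.com/Apartments/New-York?price=50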
In the grand scheme of things, it's going to make little difference, as you don't want the additional sections to actually be indexed by the search engines. As Gary correctly pointed out, you can set up 'URL Parameters' in GWT, and I think that's your best option. There's more information about that here: http://googlewebmastercentral.blogspot.co.uk/2011/07/improved-handling-of-urls-with.html
You could also use robots.txt to block the parameters in the URL, but this depends on whether the search engine crawling your website chooses to honour it.
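If you did go the robots.txt route, it could look something like the sketch below (Google supports the * wildcard in Disallow patterns, though other crawlers may ignore both the wildcard and the file):

User-agent: *
Disallow: /*?price=
Disallow: /*&price=
Disallow: /*?size=
Disallow: /*&size=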
Hope this helps!
Lewis -
Good example of a site that does show up in the SERPs for all things related.
-
OK, not to sit on the fence here, but both are good options.
However, when it comes to "URL Parameters", there is a section in Webmaster Tools where you can tell Google to ignore certain parameters. So that's always an option.
I like to look at sites like Oodle in cases like this.
Here is an example: they spent a lot of time working out the best process, and they use the path-style URL.
However, Google has been said to prefer shorter URLs recently.
Hope my sitting on the fence did not make things worse, LOL.
-
Personally, I would just $_POST the price and size and be done with it (as opposed to $_GET, which shows the parameters in the URL). No need to overthink creating more URLs and complicating life.
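In the OP's Node.js stack, the equivalent would look something like this (a sketch assuming Express; purely illustrative):

const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: true })); // parse HTML form bodies

// Filters arrive in the POST body, so the URL the visitor sees stays
// clean at clone.com/Apartments/New-York no matter what they select.
app.post('/Apartments/:city', (req, res) => {
  const { price, size } = req.body; // never visible in the URL
  res.send(`Listings for ${req.params.city}, max price ${price}, min size ${size}`);
});

app.listen(3000);

One trade-off to weigh: a POSTed results page can't be bookmarked or shared, which may matter for a listings site.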
If anything, you can define in WMT what price and size are, but just keep it clean. Also, remember that # fragments in the URL don't get followed by Google, so clone.com/Apartments/New-York#price=30&size=100 could work too.
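Client-side, the fragment approach could be handled along these lines (a sketch; applyFilters is a hypothetical stand-in for the real filtering UI):

// Fragments never reach the server (or Google), so filtering has to
// happen in the browser.
function readFiltersFromHash() {
  const params = new URLSearchParams(window.location.hash.slice(1)); // strip '#'
  return { price: params.get('price'), size: params.get('size') };
}

function applyFilters(filters) {
  // hypothetical stand-in: the real code would re-render the listings
  console.log('Applying filters', filters);
}

window.addEventListener('hashchange', () => applyFilters(readFiltersFromHash()));
applyFilters(readFiltersFromHash()); // also run once on initial page load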