URL Optimisation Dilemma
-
First of all, I fully appreciate that I may be over-analysing this, so feel free to say so if you think I’m going overboard on this one.
I’m currently trying to optimise the URLs for a group of new pages that we recently launched. I would usually err on the side of leaving the URLs as they are, so that any incoming links are not diluted through a 301 redirect. In this case, however, there are very few links to these pages, so I don’t think that changing the URLs will harm them.
My main question is between short URLs vs. long URLs (I have already read Dr. Pete’s post on this). Note: the URLs I have listed below are not the actual URLs, but very similar examples that I have created.
The URLs currently exist in a similar format to the examples below:
http://www.company.com/products/dlm/hire-ca
My first thought was that we could put a few descriptive keywords in the URL, with something like the following:
http://www.company.com/products/debt-lifecycle-management/hire-collection-agents - I’m worried, though, that the URL will get too long for any pages sitting under it.
As a compromise, I am considering the following:
http://www.company.com/products/dlm/hire-collection-agents
My feeling is that this compromise will give the best balance between including the product keywords and ensuring a good user experience. My only concern is whether the /dlm/ category page would suffer slightly, but that page would have ‘debt-lifecycle-management’ in its title tag.
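If the change does go ahead, each old path would need a one-to-one 301 to its new counterpart. A minimal sketch of such a redirect map in Python, using the made-up example paths from this question:

```python
# Hypothetical one-to-one 301 redirect map for the restructured URLs.
# The paths mirror the illustrative examples above, not a real site.
REDIRECTS = {
    "/products/dlm/hire-ca": "/products/dlm/hire-collection-agents",
}

def resolve(path):
    """Return (status, target) for a requested path."""
    if path in REDIRECTS:
        return 301, REDIRECTS[path]  # permanent redirect to the new URL
    return 200, path                 # serve the page as-is

print(resolve("/products/dlm/hire-ca"))  # -> (301, '/products/dlm/hire-collection-agents')
```

A one-to-one map like this (rather than a blanket rule) makes it easy to confirm that no old URL redirects to the wrong new page.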
Does this sound like a good approach to people? Or do you think I’m being a little obsessive about this? Any help would be appreciated.
-
Makes sense - I understand now. Thanks for the clarification
-
Sure! What I mean is that if (for example) your domain is debtlifecyclemanagement.com, then having the dlm folder spelled out in the URL (i.e. debtlifecyclemanagement.com/products/debt-lifecycle-management/hire-collection-agents) would be redundant & appear spammy. The same could happen if your domain had 'collection agents' in it.
I'm suggesting that I would "tend towards not including a keyword in the URI if it already appears in the domain," especially if including it would only be for SEO purposes.
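That kind of domain/slug redundancy is easy to spot programmatically. A rough sketch (the domain and slug below are made-up illustrations, not real sites):

```python
import re

def redundant_tokens(domain, slug):
    """Return slug words that already appear in the domain name."""
    host = domain.lower().split(".")[0]      # e.g. 'debtlifecyclemanagement'
    words = re.split(r"[-_]+", slug.lower())
    return [w for w in words if w and w in host]

# 'debt', 'lifecycle' and 'management' are all already in the domain
print(redundant_tokens("debtlifecyclemanagement.com",
                       "debt-lifecycle-management"))
```

Anything this returns is a candidate for trimming from the folder name, per the reasoning above.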
-
Thanks for your response Sheena, it's great to hear that I'm on the right track with this!
I was wondering if you could further explain the following part of your answer:
"What I can say is that the 'better way' depends on what words might already be in the domain, as I try to not be redundant (when possible) so it doesn't appear spammy/kw stuffed."
Are you suggesting that you'd tend towards not including a keyword if it appears elsewhere on the site and so search engines have enough context? Also, what do you mean by 'redundant'?
-
I'd say you're thinking about this in a smart way. First off, the existing URL structure isn't bad. I would consider this a low priority update, unless (or until) all other possible site issues are taken care of.
You're being smart about trying to find a balance of having descriptive yet not too long of a URL structure. What I can say is that the 'better way' depends on what words might already be in the domain, as I try to not be redundant (when possible) so it doesn't appear spammy/kw stuffed.
I hope this helps!
Related Questions
-
A grade optimised posts not showing in SERPs
Hi all, I've been using Moz to research, optimise and grade a broad range of copy and blog posts over the years. After the optimisation process I've always seen a relatively quick improvement of pages/posts in SERPs.

I am currently working on a new website launched earlier in the year on a subdomain. There's a sitemap, fresh content added every month, and the site has a verified Google Analytics and Search Console account. The content is quite niche with low traffic data for related terms; however, I am finding that after three or four weeks the optimised posts aren't displaying in the top 50 results in Google. These are the posts:

https://sykeshome.europe.sykes.com/cut-the-cost-of-customer-support-use-a-work-at-home-model/ - optimised for "Cut the cost of customer support" (and also "Cut the cost of customer support: use a work-at-home model")

https://sykeshome.europe.sykes.com/quality-and-compliance-in-a-work-at-home-environment/ - optimised for "Quality and compliance" (and also "Quality and compliance in a work-at-home environment")

As a new website launched on a subdomain there aren't currently any inbound links, but I wanted to know if I am simply being impatient in expecting the above posts to rank higher (if only slightly), or if there could be a reason optimised content with a Moz A grade isn't showing in the first 50 results. Any advice or pointers would be much appreciated.

Jonathan
Intermediate & Advanced SEO | JCN-SBWD
-
How much do URLs with CAPS and URLs without CAPS existing on an IIS site matter nowadays?
I work on a couple of ecommerce sites that are on IIS. Both sites return a 200 header status for the CAPS and non-CAPS versions of the URLs. While I suppose it would be OK if the canonicals pointed to the same version of the page, in some cases they don't (i.e. /Home-Office canonicalizes to itself and /home-office canonicalizes to itself). I came across this article (http://www.searchdiscovery.com/blog/case-sensitive-urls-and-seo-case-matters/) that is a few years old, and I'm wondering how much of an issue this is and how I would determine whether it is.
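One way to gauge the scale of a problem like this is to normalise the path portion of every crawled URL to lower case and look for collisions. A quick sketch, assuming standard URLs (nothing IIS-specific; the example paths come from the question):

```python
from urllib.parse import urlsplit, urlunsplit

def lowercase_path(url):
    """Lower-case the host and path; the query string is left alone."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc.lower(),
                       parts.path.lower(), parts.query, parts.fragment))

a = lowercase_path("http://www.example.com/Home-Office")
b = lowercase_path("http://www.example.com/home-office")
print(a == b)  # the two case variants collapse to one form
```

Running a site's URL list through this and counting duplicates shows exactly how many pages have case-variant twins competing with them.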
Intermediate & Advanced SEO | OfficeFurn
-
Duplicate Content with URL Parameters
Moz is picking up a large quantity of duplicate content, consisting mainly of URL parameters like ,pricehigh and ,pricelow etc. (for page sorting). Google has indexed a large number of the pages (not sure how many), and I'm not sure how many of them are ranking for search terms we need. I have added the parameters in Google Webmaster Tools and set them to 'let Google decide'; however, Google still sees the pages as duplicate content. Is this a problem that we need to address? Or could trying to fix it do more harm than good? Has anyone had any experience with this? Thanks
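If it does need addressing, the usual first step is a canonical URL that strips the sort parameters. A rough sketch, assuming ordinary query-string parameters with made-up names (the comma-style parameters described above would need a site-specific rewrite instead):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical sort-order parameter names; adjust to the real site.
SORT_PARAMS = {"sort", "pricehigh", "pricelow"}

def canonical_url(url):
    """Drop sort-order parameters so all variants share one canonical URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k.lower() not in SORT_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), ""))

print(canonical_url("http://www.example.com/rugs?page=2&pricehigh=1"))
```

The resulting URL is what would go in each variant's rel=canonical tag, leaving meaningful parameters (like pagination) intact.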
Intermediate & Advanced SEO | seoman10
-
Canonical issue with URLs
I saw some URLs of my site showing duplicate page content and duplicate page title issues on crawl reports, so I have set a canonical URL for every URL that has duplicate content / page titles. But the SEOmoz crawl test is still showing the issue. I am giving one URL with the issue here. The checked URL below shows duplicate content and duplicate page titles with the other URLs listed under it.

Checked URL: http://www.cyrusrugs.com/bridge-traditional-area-rug-item-7635

Duplicate page content:
http://www.cyrusrugs.com/bridge-traditional-area-rug-item-7622&category_id=270&colors=Black_Tones&click=colors&ci=1
http://www.cyrusrugs.com/bridge-traditional-area-rug-item-7622

Duplicate page title:
http://www.cyrusrugs.com/bridge-traditional-area-rug-item-7636&category_id=270&sizes=12x15,12x18&click=sizes
http://www.cyrusrugs.com/bridge-traditional-area-rug-item-7636
http://www.cyrusrugs.com/bridge-traditional-area-rug-item-7622&category_id=270&colors=Black_Tones&click=colors&ci=1
http://www.cyrusrugs.com/bridge-traditional-area-rug-item-7622

But I have already set a canonical URL for all of these URLs, which is http://www.cyrusrugs.com/bridge-traditional-area-rug-item-7622. This should actually solve the problem, right? Search engines should identify the canonical URL as the original URL and only consider that. Thanks

Intermediate & Advanced SEO | trixmediainc
-
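For reference on the question above: a rel=canonical is just a link element in the head of each duplicate variant, all pointing at the preferred URL. A minimal sketch of generating it, using a URL from the question:

```python
def canonical_tag(url):
    """Build the rel=canonical link element for a page variant's <head>."""
    return f'<link rel="canonical" href="{url}" />'

print(canonical_tag("http://www.cyrusrugs.com/bridge-traditional-area-rug-item-7622"))
```

Worth noting that rel=canonical is a hint rather than a directive, so some crawl tools may continue to flag the variants as duplicates even when the tag is correctly in place.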
Will our PA be retained after URL updates?
Our web hosting company recently applied an SEO update to our site to deal with canonicalization issues and also rewrote all URLs to lower case. As a result, our PA is now 1 on all pages it affected. I took this up with them and they had this to say:

"I must confess I’m still a bit lost however can assure you our consolidation tech uses a 301 permanent redirect for transfers. This should ensure any back link equity isn’t lost. For instance this address: http://www.towelsrus.co.uk/towels-bath-sheets/aztex/egyptian-cotton-Bath-sheet_ct474bd182pd2731.htm redirects to this page: http://www.towelsrus.co.uk/towels-bath-sheets/aztex/egyptian-cotton-bath-sheet_ct474bd182pd2731.htm and the redirect returns 301 header response - as discussed in your attached forum thread extract"

Firstly, is canonicalization working, given that the number of duplicate pages shot up last week? And will we get our PA back? Thanks

Craig
Intermediate & Advanced SEO | Towelsrus
-
URL structure like Zappos?
Hi,

My site structure looks like this:

domainname.com/nl/holidayhouses/villa-costa
domainname.com/nl/apartments/apartment-caifem
etc.

I just went to Zappos to research the site and noticed that zappos.com has no directories. If I implement this, my structure looks like this:

domainname.com/nl/holidayhouse-villa-costa
domainname.com/nl/apartments-apartment-caifem

Is this a better approach?

Ciao, Remco

Intermediate & Advanced SEO | remcozwaan
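Mechanically, the flattening considered in the question above just merges the last two path segments into one hyphenated slug. A toy sketch (real slugs would presumably be hand-chosen, as with the singular 'holidayhouse' in the example):

```python
def flatten(path):
    """Collapse the last two path segments into one hyphenated slug."""
    parts = [p for p in path.split("/") if p]
    if len(parts) < 2:
        return path  # nothing to flatten
    head, a, b = parts[:-2], parts[-2], parts[-1]
    return "/" + "/".join(head + [f"{a}-{b}"])

print(flatten("/nl/holidayhouses/villa-costa"))  # '/nl/holidayhouses-villa-costa'
```

Either way, the old directory-style URLs would need 301 redirects to the flattened versions.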
How to deal with old, indexed hashbang URLs?
I inherited a site that used to be in Flash and used hashbang URLs (i.e. www.example.com/#!page-name-here). We're now off of Flash and have a "normal" URL structure that looks something like this: www.example.com/page-name-here

Here's the problem: Google still has thousands of the old hashbang (#!) URLs in its index. These URLs still work because the web server doesn't actually read anything that comes after the hash. So, when the web server sees this URL www.example.com/#!page-name-here, it basically renders this page www.example.com/# while keeping the full URL structure intact (www.example.com/#!page-name-here). Hopefully, that makes sense.

So, in Google you'll see this URL indexed (www.example.com/#!page-name-here), but if you click it you essentially are taken to our homepage content (even though the URL isn't exactly the canonical homepage URL, which s/b www.example.com/).

My big fear here is a duplicate content penalty for our homepage. Essentially, I'm afraid that Google is seeing thousands of versions of our homepage. Even though the hashbang URLs are different, the content (i.e. title, meta description, page content) is exactly the same for all of them. Obviously, this is a typical SEO no-no. And I've recently seen the homepage drop like a rock for a search of our brand name, which has ranked #1 for months. Now, admittedly, we've made a bunch of changes during this whole site migration, but this #! URL problem just bothers me. I think it could be a major cause of our homepage tanking for brand queries.

So, why not just 301 redirect all of the #! URLs? Well, the server won't accept traditional 301s for the #! URLs because the # seems to screw everything up (the server doesn't acknowledge what comes after the #). I "think" our only option here is to try and add some 301 redirects via JavaScript. Yeah, I know that spiders have a love/hate (well, mostly hate) relationship with JavaScript, but I think that's our only resort... unless someone here has a better way?

If you've dealt with hashbang URLs before, I'd LOVE to hear your advice on how to deal with this issue.

Best,
-G
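The server-side behaviour described above is exactly what URL parsing shows: the fragment (everything after the #) lives entirely client-side and is never sent in the HTTP request, which is why ordinary server-side 301 rules can't match it. A quick illustration with Python's standard library:

```python
from urllib.parse import urlsplit

url = "http://www.example.com/#!page-name-here"
parts = urlsplit(url)

# The request line a browser sends contains only the path --
# the fragment never leaves the client, so the server sees "/".
print(parts.path)      # '/'
print(parts.fragment)  # '!page-name-here'
```

This is why the only redirect that can see the `#!` portion is one running in the browser (i.e. JavaScript), as the question suggests.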
Intermediate & Advanced SEO | Celts18
-
URL with hyphen or .co?
Given a choice, for your #1 keyword, would you pick a .com with one or two hyphens (chicago-real-estate.com), or a .co with the full name as the URL (chicagorealestate.co)? Is there an accepted best practice regarding hyphenated URLs, and/or decent results regarding the effectiveness of the .co? Thank you in advance!

Intermediate & Advanced SEO | joechicago