URL Length Issue
-
Moz is telling me my URLs are too long.
I did a little research and found that URL length is not really a serious problem; in fact, some people recommend ignoring the warning altogether.
Even on Moz's own blog I found this explanation:
"Shorter URLs are generally preferable. You do not need to take this to the extreme, and if your URL is already less than 50-60 characters, do not worry about it at all. But if you have URLs pushing 100+ characters, there's probably an opportunity to rewrite them and gain value.
This is not a direct problem with Google or Bing - the search engines can process long URLs without much trouble. The issue, instead, lies with usability and user experience. Shorter URLs are easier to parse, copy and paste, share on social media, and embed, and while these may all add up to a fractional improvement in sharing or amplification, every tweet, like, share, pin, email, and link matters (either directly or, often, indirectly)."
And yet I still have these questions: why do I get this error telling me the URLs are too long, and what are the best practices for fixing it?
Thank you
-
Question: if you start redirecting the longer URLs to the shorter ones, don't you actually get dinged for excessive 301s or daisy-chained 301s?
How do you handle this for large sites with products? Canonicals?
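To illustrate what I mean by daisy-chaining, here is a hypothetical sketch of the fix I'm considering: collapsing the redirect map before shipping it, so every legacy URL 301s straight to its final destination instead of hopping through intermediates. All paths here are made up.

# Flatten a redirect map so no URL ever passes through more than one 301.
# The mapping below is hypothetical; in practice you'd load it from your
# redirect config or a CSV export.
redirects = {
    "/services/marketing/seo/local-seo": "/seo/local-seo",   # first hop of a chain
    "/seo/local-seo": "/local-seo",                          # second hop
    "/about-us-old": "/about",
}

def final_destination(path, redirects, max_hops=10):
    """Follow the map until we reach a URL that no longer redirects."""
    seen = set()
    while path in redirects and path not in seen and len(seen) < max_hops:
        seen.add(path)
        path = redirects[path]
    return path

# Rewrite every entry to point directly at its final target:
flattened = {old: final_destination(new, redirects) for old, new in redirects.items()}

for old, new in sorted(flattened.items()):
    print(f"{old} -> {new}")
# /about-us-old -> /about
# /seo/local-seo -> /local-seo
# /services/marketing/seo/local-seo -> /local-seo   (one hop instead of two)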
-
Agree with Steve above. URL length is a very minimal SEO factor. To shorten your URLs, you could analyze your site's "silo structure" and see whether you can drop unnecessary parts of the path.
For example, if you have "xyz.com/services/marketing/seo/local-seo", you could maybe cut "services" and "marketing" (and maybe even "seo") from the structure, so the URL becomes just "xyz.com/local-seo", and then 301 the old URL to the new one, of course (see the sketch after the link below).
Check out this article from Search Engine Land for more info on URL Structure for SEO- https://searchengineland.com/infographic-ultimate-guide-seo-friendly-urls-249397
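To make the 301 step concrete, here's a rough sketch (in Python, with made-up URLs) of how you might derive the shortened URLs and generate the old-to-new pairs you'd then turn into permanent redirect rules in your server config:

from urllib.parse import urlparse

# Hypothetical siloed URLs to be shortened (illustrative only).
old_urls = [
    "https://xyz.com/services/marketing/seo/local-seo",
    "https://xyz.com/services/marketing/ppc/adwords-audits",
]

def shorten(url):
    """Keep only the final path segment, dropping the silo prefix."""
    parsed = urlparse(url)
    slug = parsed.path.rstrip("/").split("/")[-1]
    return f"{parsed.scheme}://{parsed.netloc}/{slug}"

for old in old_urls:
    new = shorten(old)
    # Each pair becomes one permanent (301) redirect in your server config.
    print(f"301: {old} -> {new}")
# 301: https://xyz.com/services/marketing/seo/local-seo -> https://xyz.com/local-seo
# 301: https://xyz.com/services/marketing/ppc/adwords-audits -> https://xyz.com/adwords-audits

One caveat with this approach: once you flatten the hierarchy, every slug has to be unique across the whole site.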
-
It's a signal that may be valuable for some sites, but one that may not have a huge impact on its own.
I like to have shorter URLs because they end up being easier and friendlier to share; longer URLs can be a deterrent. I also like to pay close attention to the click depth of the page, not just the length of the URL.
I'm not sure you'll be hurt by a long URL like:
http://www.yourtld.com/blog/02/08/18/some-crazy-long-slug-nested-with-dates-that-triggers-moz-errors
I would bet you can get that page ranked if the content is valuable enough, even with a longer URL. The issues start to pop up when a site's content can only be found by browsing through nested pages; that is a larger problem.
For example:
http://yourtld.com/services/cleaning/windows/indoors
This is an example where I'm worried not only about the length but, more importantly, about the overall structure of the URL and the site. It may be difficult for users and crawlers to find this content, making it less search-friendly overall when the content sits 4-5 layers deep.
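If you want to audit this yourself instead of relying on the warning, a short script can report both the character length and the path depth of each URL. The 75-character cutoff below is just an assumption about roughly where crawl tools start complaining; Moz's own quote above suggests worrying past ~100:

from urllib.parse import urlparse

# Hypothetical URLs pulled from a crawl export.
urls = [
    "http://www.yourtld.com/blog/02/08/18/some-crazy-long-slug-nested-with-dates-that-triggers-moz-errors",
    "http://yourtld.com/services/cleaning/windows/indoors",
    "http://yourtld.com/local-seo",
]

MAX_LENGTH = 75  # assumed threshold; tune to taste

for url in urls:
    path = urlparse(url).path
    depth = len([seg for seg in path.split("/") if seg])  # how many layers deep
    flags = []
    if len(url) > MAX_LENGTH:
        flags.append(f"long ({len(url)} chars)")
    if depth >= 4:
        flags.append(f"deep ({depth} levels)")
    print(f"{url}: {', '.join(flags) or 'ok'}")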
Related Questions
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 pages with duplicate content. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:
User-agent: dotbot
Disallow: /*numberOfStars=0
User-agent: rogerbot
Disallow: /*numberOfStars=0
My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need to have an empty line between the two groups (I mean between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
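A rough way to sanity-check patterns like these offline: Python's built-in robots.txt parser doesn't understand * wildcards, so this sketch hand-translates a Google/Moz-style pattern into a regex. It approximates the de facto matching rules and isn't a guarantee of how any particular bot behaves:

import re

def robots_pattern_to_regex(pattern):
    """Translate a robots.txt path pattern ('*' wildcard, '$' end anchor)
    into a Python regex, per the de facto Google-style matching rules."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters, then restore '*' as 'match anything'.
    regex = re.escape(pattern).replace(r"\*", ".*")
    return re.compile(regex + ("$" if anchored else ""))

rule = robots_pattern_to_regex("/*numberOfStars=0")

tests = [
    "/hotels?numberOfStars=0",         # should be blocked
    "/hotels?numberOfStars=0&page=2",  # also blocked (prefix match)
    "/hotels?numberOfStars=4",         # should stay crawlable
    "/hotels",                         # no parameters: stays crawlable
]
for path in tests:
    blocked = rule.match(path) is not None
    print(f"{path}: {'blocked' if blocked else 'allowed'}")

(On the empty-line question: the convention is to separate user-agent groups with a blank line; most major parsers are lenient about it, but including the blank line is the safe choice.)
-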
Are there any free (or paid) tools available online that download Meta Tags for ALL URLs of a website?
Hi, I am looking to run an on-site audit for a website and I'm wondering if there are any tools available online that take the existing Meta Tags on ALL pages of a website and download them to a .CSV or .XLS. Would need Meta Title and Meta Description for all pages at the very least. Any suggestions are appreciated - looking for free or paid options. Thanks.
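If no off-the-shelf tool fits, a short script with requests and BeautifulSoup can pull the Title and Meta Description for a list of URLs into a CSV. A minimal sketch; the URL list is a placeholder you'd fill from, say, your sitemap:

import csv
import requests
from bs4 import BeautifulSoup

# Hypothetical page list; in practice, parse these out of sitemap.xml.
urls = [
    "https://example.com/",
    "https://example.com/about",
]

with open("meta_tags.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "title", "meta_description"])
    for url in urls:
        resp = requests.get(url, timeout=10)
        soup = BeautifulSoup(resp.text, "html.parser")
        title = soup.title.string.strip() if soup.title and soup.title.string else ""
        desc_tag = soup.find("meta", attrs={"name": "description"})
        description = desc_tag.get("content", "").strip() if desc_tag else ""
        writer.writerow([url, title, description])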
Moz Pro | SEO5Team
-
Are there tools to discover duplicate content issues with other websites?
We have issues with users copy-pasting content from other sources into our site. The only way I know to find out is to manually (!!) copy a snippet of their text into Google to see if I get results from other sites. I have been googling for tools to help automate this process, but without luck. Can you recommend any?
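One hedged way to automate the copy-a-snippet-into-Google routine is a programmatic search API. This sketch uses Google's Custom Search JSON API; the API key, engine ID, and domain are placeholders you'd replace with your own:

import requests

API_KEY = "YOUR_API_KEY"          # placeholder: Google API key
SEARCH_ENGINE_ID = "YOUR_CX_ID"   # placeholder: Custom Search engine ID
OUR_DOMAIN = "oursite.example"    # placeholder: your own site

def find_external_copies(snippet):
    """Search for an exact phrase and return matching URLs on other sites."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={
            "key": API_KEY,
            "cx": SEARCH_ENGINE_ID,
            "q": f'"{snippet}"',  # quoted for an exact-phrase match
        },
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    return [item["link"] for item in items if OUR_DOMAIN not in item["link"]]

# Example: check one distinctive sentence from a user-submitted page.
for url in find_external_copies("a distinctive sentence from the suspect page"):
    print("possible source:", url)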
Moz Pro | betternow
-
Overly Dynamic URLs
I should be able to set URL parameters in my Google Webmaster Tools account to stop my overly dynamic page URL problem. Please help me with how to do this.
Moz Pro | pinksgreens
-
I am trying to find inbound links for one of my site's URLs. My question is: is SEOmoz able to track all internal links? Open Site Explorer shows 0 internal links.
It shows 0 internal links when I am pretty sure we have multiple internal links. Should we use absolute URLs or relative URLs for internal links?
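On the absolute-vs-relative part: crawlers resolve relative hrefs against the page's URL, so both forms normalize to the same absolute link. A small illustration (the page and links are made up):

from urllib.parse import urljoin, urlparse
from bs4 import BeautifulSoup

page_url = "https://example.com/blog/post-1"
html = """
<a href="/contact">Relative link</a>
<a href="https://example.com/about">Absolute link</a>
<a href="https://other-site.com/">External link</a>
"""

soup = BeautifulSoup(html, "html.parser")
our_host = urlparse(page_url).netloc

for a in soup.find_all("a", href=True):
    absolute = urljoin(page_url, a["href"])  # relative hrefs resolve against the page
    kind = "internal" if urlparse(absolute).netloc == our_host else "external"
    print(f"{kind}: {absolute}")
# internal: https://example.com/contact
# internal: https://example.com/about
# external: https://other-site.com/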
Moz Pro | SulekhaUSLLC
-
Why don't Google+ URLs work in OSE?
Is there any reason why Google+ URLs do not work in OSE? Is it just that it is a secure URL, or is there something bigger there? Why? It would be cool to determine every website a person has been published on, especially if it is rel="author" verified. Jeff
Moz Pro | WebBizIdeas
-
How can I see the URLs affected in the SEOmoz crawl when notices increase?
Hi, when SEOmoz crawled my site, my notices increased by 255. How can I see only these affected URLs? Thanks, Sarah
Moz Pro | SarahCollins
-
Dynamic URL pages in Crawl Diagnostics
The crawl diagnostics have found errors for pages that do not exist within the site. These pages do not appear in the SERPs and are seemingly dynamic URL pages. Most of the URLs that appear are formatted http://mysite.com/keyword,%20_keyword_,%20key_word_/, which appear as dynamic URLs for potential search phrases within the site. The other popular variety among these pages has a URL format of http://mysite.com/tag/keyword/filename.xml?sort=filter, which is only generated by a filter utility on the site. These pages comprise about 90% of the 401 errors, duplicate page content/titles, overly dynamic URLs, missing meta description tags, etc. Many of the same pages appear in multiple errors/warnings/notices categories. So, why are these pages being pulled into the crawl test, and how do I stop it so I can get a better analysis of my site via SEOmoz?
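While you hunt down where the crawler is picking these up, you can at least bucket them out of the export so genuine issues stand out. A small sketch keyed to the two patterns described above; the sample rows are hypothetical:

from urllib.parse import urlparse, unquote

# Hypothetical rows from a crawl diagnostics export.
crawled = [
    "http://mysite.com/keyword,%20_keyword_,%20key_word_/",
    "http://mysite.com/tag/keyword/filename.xml?sort=filter",
    "http://mysite.com/real-page",
]

def classify(url):
    parsed = urlparse(url)
    if "sort=" in parsed.query:
        return "filter-generated"
    if "," in unquote(parsed.path):
        return "keyword-list"  # the comma-separated dynamic pages
    return "normal"

for url in crawled:
    print(f"{classify(url):>16}: {url}")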
Moz Pro | Visually