URL Length Issue
-
Moz is telling me my URLs are too long.
I did a little research and found that URL length is not really a serious problem. In fact, some people recommend simply ignoring the warning.
Even on their blog I found this explanation:
"Shorter URLs are generally preferable. You do not need to take this to the extreme, and if your URL is already less than 50-60 characters, do not worry about it at all. But if you have URLs pushing 100+ characters, there's probably an opportunity to rewrite them and gain value.
This is not a direct problem with Google or Bing - the search engines can process long URLs without much trouble. The issue, instead, lies with usability and user experience. Shorter URLs are easier to parse, copy and paste, share on social media, and embed, and while these may all add up to a fractional improvement in sharing or amplification, every tweet, like, share, pin, email, and link matters (either directly or, often, indirectly)."
And yet, I have these questions: in that case, why do I get this error telling me the URLs are too long, and what are the best practices for resolving it?
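For what it's worth, a quick way to see which URLs are tripping the warning is to scan a list of URLs from your crawl export and flag anything over a chosen length. A minimal sketch; the 75-character cutoff is an illustrative assumption, not Moz's documented limit, so substitute whatever threshold your report flags against:

```python
# List URLs longer than a chosen threshold from a crawl export.
# The 75-character cutoff is an illustrative assumption, not Moz's
# documented limit; substitute whatever your report flags against.
THRESHOLD = 75

def long_urls(urls, threshold=THRESHOLD):
    """Return (url, length) pairs for every URL longer than threshold."""
    return [(u, len(u)) for u in urls if len(u) > threshold]

urls = [
    "https://xyz.com/local-seo",
    "https://xyz.com/services/marketing/seo/local-seo-for-small-businesses-overview-page",
]
for url, length in long_urls(urls):
    print(length, url)
```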
Thank You
-
Question: if you start redirecting the longer URLs to the shorter ones, don't you actually get dinged for excessive 301s or daisy-chained 301s?
How do you handle this for large sites with products? Canonicals?
-
Agree with Steve above. URL length is a very minimal SEO factor. To shorten your URLs, you could analyze your site's "silo structure" and see if you can remove unnecessary parts of the path.
For example, if you have "xyz.com/services/marketing/seo/local-seo", you could maybe cut "services" and "marketing" (and perhaps even "seo") from the structure, so the URL becomes simply "xyz.com/local-seo", and then 301 the old URL to the new one, of course.
Check out this article from Search Engine Land for more info on URL structure for SEO: https://searchengineland.com/infographic-ultimate-guide-seo-friendly-urls-249397
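To make the flattening above concrete: the old, deeper path can be permanently redirected at the server level. A minimal sketch for an Apache .htaccess file, using the hypothetical paths from the example (nginx or your CMS's redirect manager would work equally well):

```apache
# Permanently redirect the old nested URL to the flattened one.
# Paths are the illustrative ones from the example above.
Redirect 301 /services/marketing/seo/local-seo /local-seo
```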
-
It's a signal that may be valuable for some sites, but one that may not have a huge impact on its own.
I like to have shorter URLs because they end up being easier and friendlier to share. Longer URLs can be a deterrent. I also like to pay close attention to the click depth of the page, not just the length of the URL.
I'm not sure you'll be hurt by a long URL like:
http://www.yourtld.com/blog/02/08/18/some-crazy-long-slug-nested-with-dates-that-triggers-moz-errors
I would bet you can get that page ranked if the content is valuable enough, even with a longer URL. The issues really start to pop up when a site's content can only be found by browsing the nested pages; that is a larger problem.
For example:
http://yourtld.com/services/cleaning/windows/indoors
This is an example where I'd worry about the length, but more importantly about the overall structure of the URL and the site. It may be difficult for users and crawlers to find this content, making it less search-friendly overall when the content is 4-5 layers deep.
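Path depth is a quick, if rough, proxy for the structural concern above; note it measures URL nesting, not true click depth, which depends on internal linking. A small sketch:

```python
from urllib.parse import urlparse

def path_depth(url):
    """Count non-empty path segments: /services/cleaning/windows/indoors -> 4."""
    return len([seg for seg in urlparse(url).path.split("/") if seg])

print(path_depth("http://yourtld.com/services/cleaning/windows/indoors"))  # 4
print(path_depth("http://yourtld.com/local-seo"))                          # 1
```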
Related Questions
-
/essions/essions keeps appending to 1 url on our website
Moz keeps giving us an error showing the URL is too long. When I investigate the offending URL, I get the following in the crawl. We can't work out what /essions is or why it keeps appending to the end of the URL. Is this a Moz issue or a website issue?
Moz Pro | | NickWillWright
| https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/ |
| https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/ |
| https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/ |
| https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/ |
| https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/essions/ |
| https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/essions/essions/ |
| https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/essions/essions/essions/ |
| https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/essions/essions/essions/essions/ |
| https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/essions/essions/essions/essions/essions/ |
| https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/essions/essions/essions/essions/essions/essions/ |
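One common cause of this kind of runaway appending (an assumption worth checking against the page's HTML, not a confirmed diagnosis) is a relative link such as href="essions/" somewhere in the template: each crawled page resolves that link against its own URL, adding one more segment per hop. The mechanism in miniature:

```python
from urllib.parse import urljoin

# A relative href (no leading slash) resolves against the current page's
# URL, so a template that links to "essions/" grows one segment deeper
# on every crawl hop, exactly the pattern in the crawl report above.
url = "https://www.mywebsite/singita-lebombo-lodge/"
for _ in range(3):
    url = urljoin(url, "essions/")
    print(url)
```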
Duplicate content issues with file download links (diff. versions of a downloadable application)
I'm a little unsure how canonicalisation works with this case. 🙂

We have very regular updates to the application, which is available as a download on our site. Obviously, with every update the version number of the file being downloaded changes, and along with it the URL parameter included when people click the 'Download' button on our site, e.g.:

mysite.com/download/download.php?f=myapp.1.0.1.exe
mysite.com/download/download.php?f=myapp.1.0.2.exe
mysite.com/download/download.php?f=myapp.1.0.3.exe, etc.

In the Moz Site Crawl report all of these links register as Duplicate Content. There's no content per se on these pages; all they do is trigger a download of the specified file from our servers. Two questions:

1. Are these links actually hurting our ranking/authority/etc.?
2. Would adding a canonical tag to the head of mysite.com/download/download.php solve the crawl issues? Would this catch all of the download.php URLs? i.e.

Thanks! Jon
Moz Pro | | jonmc
(not super up on PHP, btw. So if I'm saying something completely bogus here... be kind 😉)
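On question 2, a canonical tag is only possible if download.php emits an HTML head before (or instead of) triggering the file. A sketch of what it would look like, using the placeholder domain from the question and assuming an absolute https URL:

```html
<!-- Emitted in the <head> of download.php before the download is
     triggered, so every ?f=... variant points at one canonical URL.
     Domain and scheme are placeholders from the question. -->
<link rel="canonical" href="https://mysite.com/download/download.php">
```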
Recovering rankings after a botched url change
Hi there, I have for a long time had a bicycle maintenance website at madegood.org. Over the years the film branch of this business has taken off and moved in a slightly different direction, so in March I decided to move madegood.org to madegoodbikes.com and create a new website for my film business at madegood.com. I thought I did a good job of telling Google about my change of domain, but my rankings completely died, so about a month ago I moved madegoodbikes.com back to madegood.org. So far I haven't seen any sign of a recovery in my rankings; I'm getting almost no visits. I've checked all my top pages in OSE and everything seems to be in place. https://moz.com/researchtools/ose/pages?site=http%3A%2F%2Fwww.madegood.org%2F&no_redirects=0&sort=page_authority&filter=all&page=1 Is it normal to wait over a month for rankings to recover, or is there anything else I should be doing? Any tips/ideas/advice whatsoever will be of huge help!
Moz Pro | | madegood
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate-content pages. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate-content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block crawling of only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | | Blacktie
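On question 1, the rule can be sanity-checked by simulating it: a robots.txt Disallow value is matched against the start of the path, with * matching any run of characters (this mirrors the common wildcard convention used by major crawlers; Moz's bots are assumed here to follow it). A rough sketch:

```python
import re

def is_disallowed(path_and_query, rule="/*numberOfStars=0"):
    """Return True if a robots.txt Disallow rule blocks this path.

    '*' matches any run of characters; the rule is anchored at the
    start of the path, per the common wildcard convention."""
    pattern = "^" + re.escape(rule).replace(r"\*", ".*")
    return re.search(pattern, path_and_query) is not None

print(is_disallowed("/hotels?numberOfStars=0"))         # True: blocked
print(is_disallowed("/hotels?numberOfStars=0&page=2"))  # True: blocked
print(is_disallowed("/hotels?numberOfStars=4"))         # False: crawlable
```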
Are the CSV downloads malformed when a comma appears in a URL?
Howdy folks, we've been a PRO member for about 24 hours now and I have to say we're loving it! One problem I am having, however, is with a CSV exported from our crawl diagnostics summary. The CSV contains all the data fine, but I run into problems when a URL contains a comma. I am making a little tool to work with the CSVs we download, and I can't parse the file properly because URLs sometimes contain commas and aren't quoted the same way other fields, such as meta_description_tag, are. Is there something simple I'm missing, or is it something that can be fixed? Looking forward to learning more about the various tools. Thanks for the help.
Moz Pro | | Safelincs
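For reference, a well-formed CSV quotes any field containing a comma, and Python's csv module then round-trips it cleanly; if the export leaves URL fields unquoted as described, the file itself is malformed and no parser setting will fully recover it. A sketch of the correct round trip:

```python
import csv
import io

# A URL containing a comma must be quoted in a well-formed CSV,
# otherwise parsers will split it into extra columns.
rows = [["https://example.com/page?a=1,b=2", "a description, with a comma"]]

buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerows(rows)
print(buf.getvalue())  # both comma-bearing fields come out quoted

buf.seek(0)
parsed = list(csv.reader(buf))
print(parsed == rows)  # True: the round trip preserves the commas
```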
Why do pages with canonical urls show in my report as a "Duplicate Page Title"?
e.g.:

Page One
<title>Page one</title>
No canonical URL

Page Two
<title>Page one</title>

Page Two is counted as being a page with a duplicate page title. Shouldn't it be excluded?
Moz Pro | | DPSSeomonkey
How do I delete a url from a keyword campaign
I have a couple of URLs that are associated with the keywords in my campaign. They are no longer valid, so how do I remove them?
Moz Pro | | PerriCline
4xx (not found) errors seem spurious, caused by a "\" added to the URL
Hi SEOmoz folks, we're getting a lot of 404 (not found) errors in our weekly crawl. The weird thing is that the URLs in question all have the same issue: they are all valid URLs with a backslash ("\") added. In URL encoding, this is an extra %5C at the end of the URL. Even weirder, we do not have any such URLs on our (WordPress-based) website. Any insight on how to get rid of this issue? Thanks
Moz Pro | | GPN
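Until the source of the stray links is found, one offline stopgap (a sketch only, not a substitute for finding the root cause) is to map each bad URL to its clean form by decoding and stripping the trailing backslashes, e.g. to build a redirect list:

```python
from urllib.parse import unquote

def clean_url(url):
    """Decode percent-encoding and strip any trailing backslashes,
    mapping e.g. /some-page/%5C back to /some-page/."""
    return unquote(url).rstrip("\\")

bad = "https://example.com/some-page/%5C"
print(clean_url(bad))  # https://example.com/some-page/
```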