Solving URL Too Long Issues
Moz.com is reporting that many of our URLs are too long. This particularly affects product URLs, which are typically structured as https://www.domainname.com/collections/category-name/products/product-name (you guessed it, we're using Shopify).
However, we use canonicals that strip almost all of that path and are structured simply as https://www.domainname.com/product-name, so Google should be reading the canonical rather than the long-winded version.
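For reference, the canonical is just a standard link element in the page head; rendered, ours looks roughly like this (the domain and product handle are the same placeholders as above):

    <link rel="canonical" href="https://www.domainname.com/product-name" />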
However, Moz cannot seem to spot this. Does anyone else have this problem, and how did you solve it so that the Moz.com crawl engine is satisfied?
Related Questions
Unsolved Backlinks issues.
I am hoping this is the right forum to ask this question. I was looking at my backlinks and noticed that some of them come from a page that resembles my site but has porn words on it. However, it links to a place on my site, and under "more information" it shows a real link to my site. What is this? I noticed the Domain Authority was, I believe, in the 70s, but the PA was in the 40s. I am new, so can someone explain this? Thank you. Example below:
Moz Pro | Fashion220
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content issues. Most of them come from dynamically generated URLs with specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages, and to stop it I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters, because among these 380 pages there are some pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

    User-agent: dotbot
    Disallow: /*numberOfStars=0
    User-agent: rogerbot
    Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
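For comparison, here is a minimal sketch of the same file with the two groups separated by a blank line, which is how the original robots.txt convention lays out records (whether Moz's crawlers require the blank line is exactly what the question asks):

    User-agent: dotbot
    Disallow: /*numberOfStars=0

    User-agent: rogerbot
    Disallow: /*numberOfStars=0

Note that the * wildcard is an extension to the original robots.txt standard; major crawlers generally support it, but behavior can vary by robot.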
Solving 'duplicate content' for page 2 of X for 1 blog post
Hi to all SEO wizards, For my Dutch blog google-plus-marketing.nl I'm using the WordPress Genesis framework 2.0 with the News Pro 2.0 responsive theme. I love the out-of-the-box SEO friendliness and features. One of those features is that it allows a blog post or page to be divided into several pages. This results in Moz signaling duplicate titles for all pages after the first. Now I was thinking that a canonical URL set to the first page should do the trick, as I reason that the rank will go to the first page and the rest will not be seen as duplicates. Genesis does some good stuff on its own and places the following meta tags in the header for the first page. All looks well, and my question is about the same meta tags for the second page and higher, which I pasted below the ones for the first page.
[Meta tags for page 1 of X of the blog post]
[Meta tags for page 2 of X of the same blog post]
Would it not be better to point the canonical URL for pages 2 through X to always point to the first page? In this case:
Moz Pro | DanielMulderNL0
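For illustration only (the post URL below is a hypothetical placeholder, and this is a sketch of the idea rather than Genesis's actual output): the canonical on page 2 and higher would then point back at page 1, e.g.

    <link rel="canonical" href="https://www.google-plus-marketing.nl/example-post/" /> <!-- hypothetical page 1 URL -->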
5XX (Server Error) on all URLs
Hi, I created a couple of new campaigns a few days back and waited for the initial crawl to complete. I have just checked, and both are reporting 5XX (Server Error) on all the pages the crawler tried to look at (on one site I have 110 of these; on the other it only crawled the homepage). This is very odd: I have checked both sites on my local PC, on an alternative PC, and via my Windows VPS browser, which is located in the US (I am in the UK), and everything works fine. Any idea what could be the cause of this failure to crawl? I have pasted a few examples from the report:

    500 : TimeoutError | http://everythingforthegirl.co.uk/index.php/accessories.html | 500 | 1 | 0
    500 : Error | http://everythingforthegirl.co.uk/index.php/accessories/bags.html | 500 | 1 | 0
    500 : Error | http://everythingforthegirl.co.uk/index.php/accessories/gloves.html | 500 | 1 | 0
    500 : Error | http://everythingforthegirl.co.uk/index.php/accessories/purses.html | 500 | 1 | 0
    500 : TimeoutError | http://everythingforthegirl.co.uk/index.php/accessories/sunglasses.html | 500 | 1 | 0

I am extra puzzled why the messages say timeout. The server is a dedicated 8-core machine with 32 GB of RAM, and the pages respond for me in about 1.2 seconds. What is the rogerbot crawler timeout? Many thanks, Carl
Moz Pro | GrumpyCarl0
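One quick sanity check (a sketch, assuming curl is available on the command line; the user-agent string here is illustrative and may not match what rogerbot actually sends) is to request one of the failing URLs with a crawler-like user-agent and time the response:

    curl -s -o /dev/null -A "rogerbot" \
      -w "status=%{http_code} time=%{time_total}s\n" \
      "http://everythingforthegirl.co.uk/index.php/accessories.html"

If this returns a 500 or hangs while a normal browser request succeeds, the server, or a security layer in front of it, is likely treating crawler traffic differently.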
Issue in number of pages crawled
I wanted to figure out how our friend RogerBot works. On the first crawl of one of my large sites, the number of pages crawled stopped at 10,000 (due to the restriction on the Pro account). However, after a few weeks the number of pages crawled went down to about 5,500, which seemed to be a more accurate count of the pages on our site. Today it seems that RogerBot has completed another crawl, and the number is up to 10,000 again. I know there has been no downtime on our site, and the items that we fixed did not reduce or increase the number of pages we have. Just making sure there are no known issues with RogerBot before I look deeper into our site. Thanks!
Moz Pro | cchhita
How long does the SEOmoz crawl take?
It's been doing its thing for over 48 hours now and I've got fewer than 350 pages... is this normal? It's NOT the first crawl.
Moz Pro | borderbound
Open Site Explorer issues still there
Hi, I asked a question a few weeks ago about the links shown in Open Site Explorer, which was reporting crazy stuff like PSDs and odd file formats as the source of links. I was told it was being repaired. The results were almost useless and I stopped using the tool, but I have just logged back in to check again and see even more random, odd results, with sites that would have no reason to link to us. Is this tool ever going to be repaired? Is there an ETA for the fix? How come you are launching new tools when the existing ones don't work? Kind regards
Moz Pro | jbloggs
"Issue: Duplicate Page Content " in Crawl Diagnostics - but these pages are noindex
Hello guys, our site is nearly perfect according to the SEOmoz campaign overview. But it shows me 5,200 errors: more than 2,500 pages with duplicate content plus more than 2,500 duplicated page titles. All of these pages are pages for editing profiles, so I set them to "noindex, follow" with meta robots. It works pretty well: these pages aren't indexed in the search engines. But why do the SEOmoz tools list them as errors? Is there a good reason for it, or is this just a little bug in the toolset? The URLs listed as duplicated are http://www.rimondo.com/horse-edit/?id=1007 (edit the IDs to see more...) and http://www.rimondo.com/movie-edit/?id=10653 (edit the IDs to see more...). The crawl picture is still running, so maybe the errors will go away in some time...? Kind regards
Moz Pro | mdoegel0
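For context, the directive described above is the standard meta robots tag placed in the head of each edit page:

    <meta name="robots" content="noindex, follow" />

With "follow" retained, crawlers are asked to keep following the links on the page even though the page itself should stay out of the index.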