Dynamic URL pages in Crawl Diagnostics
-
The crawl diagnostics tool has found errors for pages that do not exist within the site. These pages do not appear in the SERPs and appear to be dynamically generated URLs.
Most of the URLs are formatted like http://mysite.com/keyword,%20_keyword_,%20key_word_/, which look like dynamic URLs for potential search phrases within the site.
The other common variety has a URL format of http://mysite.com/tag/keyword/filename.xml?sort=filter; these are only generated by a filter utility on the site.
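(For anyone auditing a crawl export for the same problem, the two patterns above can be matched with a quick script. Everything below is a hypothetical sketch; mysite.com and the URL list are placeholders standing in for your own crawl data.)

```python
import re

# Hypothetical helper: flag the two dynamic URL patterns described
# above in a list of crawled URLs.
KEYWORD_LIST = re.compile(r"^https?://[^/]+/[^/?]*,[^/?]*/?$")
FILTER_XML = re.compile(r"^https?://[^/]+/tag/[^/]+/[^?]+\.xml\?sort=")

def is_dynamic(url: str) -> bool:
    """Return True if the URL matches either phantom-page pattern."""
    return bool(KEYWORD_LIST.match(url) or FILTER_XML.match(url))

urls = [
    "http://mysite.com/keyword,%20_keyword_,%20key_word_/",
    "http://mysite.com/tag/keyword/filename.xml?sort=filter",
    "http://mysite.com/about/",
]
# Only the first two URLs match; a real page like /about/ is left alone.
flagged = [u for u in urls if is_dynamic(u)]
```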
These pages account for roughly 90% of my 401 errors, duplicate page content/title warnings, overly-dynamic URL warnings, missing meta description tags, and so on. Many of the same pages appear in multiple error/warning/notice categories.
So why are these pages being pulled into the crawl test, and how do I stop it so I can get a more accurate analysis of my site via SEOmoz?
-
I am having a similar issue. I am getting hit with 404 errors for pages that no longer exist or have been fixed. How do I get these to stop showing up?
-
I am having a similar issue. I am getting hit with 403 errors for pages that no longer exist or have been fixed. How do I get these to stop showing up?
-
Based on what has happened from time to time on our sites, my guess is that it is caused by a widget or plug-in on your CMS interacting with the bot in some way. You are likely being crawled on these URLs by Google as well (producing 404s); it is probably not just rogerbot picking them up. There is a lot on the GWMT forums about this, with a myriad of suggested fixes: mod_rewrite rules, returning HTTP 410 instead of 404, etc.
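As a rough sketch of the mod_rewrite / HTTP 410 approach (the patterns below are assumptions based on the URL formats described in the question, so adjust them for your own site before using anything like this):

```apache
# Hypothetical .htaccess sketch. The [G] flag makes Apache answer
# 410 Gone, which tells crawlers the page is permanently gone.
<IfModule mod_rewrite.c>
  RewriteEngine On
  # Match paths containing a comma (the dynamic keyword-list URLs)
  RewriteRule ^[^/]*,[^/]*/?$ - [G,L]
  # Also return 410 for the filter-generated ?sort= URLs under /tag/
  RewriteCond %{QUERY_STRING} ^sort=
  RewriteRule ^tag/.* - [G,L]
</IfModule>
```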
One fix used by many: if your site has relative links, you can change them to full absolute URLs. If you have a ton of pages this might be a bit of a pain. (Our clients typically have smaller sites, so it's not too much of a problem.)
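For example (with a made-up path, just to illustrate the idea), the change is simply:

```html
<!-- Relative link: the crawler resolves this against whatever base
     URL it is currently on, which can produce phantom pages -->
<a href="widgets/blue.html">Blue widgets</a>

<!-- Absolute URL: unambiguous, resolves the same from any page -->
<a href="http://mysite.com/widgets/blue.html">Blue widgets</a>
```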
If you are using WordPress (or another CMS that can use the Extra Options plugin), the forums say the 404s can be stopped as follows:
In the Extra Options plugin, I checked all of the options below; the last two do the job. Read up on the noindex/nofollow "where appropriate" settings in that plugin; this could be the answer.
Make meta descriptions from excerpts
Make home meta description from tagline
Add noindex where appropriate
Add nofollow where appropriate
Another option is to ensure you have no
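For reference, the noindex/nofollow options above boil down to the plugin emitting a robots meta tag in the head of the affected pages, something like this (a sketch; the plugin decides per-page whether to add it):

```html
<!-- Tells compliant crawlers not to index this page
     or follow its links -->
<meta name="robots" content="noindex, nofollow">
```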
There are plenty of bright coders on Moz who can pitch in here and be more eloquent.
Hope this helps.