Dynamic URL pages in Crawl Diagnostics
-
Crawl Diagnostics has found errors for pages that do not exist within the site. These pages do not appear in the SERPs and seem to be dynamically generated URLs.
Most of the URLs are formatted like http://mysite.com/keyword,%20_keyword_,%20key_word_/, which look like dynamic URLs for potential search phrases within the site.
The other popular variety among these pages has a URL format of http://mysite.com/tag/keyword/filename.xml?sort=filter, which is only generated by a filter utility on the site.
These pages account for about 90% of the 401 errors, duplicate page content/title warnings, overly-dynamic URL warnings, missing meta description tags, etc. Many of the same pages appear in multiple error/warning/notice categories.
So why are these pages being pulled into the crawl test, and how do I stop it so I can get a better analysis of my site via SEOmoz?
-
I am having a similar issue. I am getting hit with 404 errors for pages that no longer exist or have been fixed. How do I get these to stop showing up?
-
I am having a similar issue. I am getting hit with 403 errors for pages that no longer exist or have been fixed. How do I get these to stop showing up?
-
Based on what has happened from time to time on our sites, my guess is that it is caused by a widget or plugin in your CMS interacting with the bot in some way. You are likely being crawled on these URLs by Google as well (producing 404s); it is not likely that just rogerbot is picking them up. There is a lot on the GWMT forums about this, with a myriad of suggested fixes: mod_rewrite rules, serving HTTP 410 instead of 404, etc.
One fix many people use, if your site has relative links, is to switch to fully qualified URLs. If you have a ton of pages this might be a bit of a pain. (Our clients typically have smaller sites, so it's not too much of a problem.)
If you are using WordPress (or another CMS that can use the Extra Options plugin), it is stated in the forums that the 404s can be stopped as follows:
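To illustrate the "HTTP 410 instead of 404" idea mentioned above, here is a hypothetical .htaccess sketch. It assumes Apache with mod_rewrite enabled, and the pattern targets the ?sort= filter URLs described in the question; treat it as a starting point, not a tested rule for your site.

```apache
# Sketch only: answer the filter-generated URLs with 410 Gone
# so crawlers stop treating them as recoverable 404s.
# Matches e.g. /tag/keyword/filename.xml?sort=filter
RewriteEngine On
RewriteCond %{QUERY_STRING} ^sort=
RewriteRule ^tag/.+\.xml$ - [G,L]
```

The [G] flag returns 410 Gone, which tells crawlers the URL is intentionally dead rather than temporarily missing, so they should drop it instead of retrying.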
In the Extra Options plugin I checked all of the options below; the last two do the job. Read up on noindex/nofollow "where appropriate" in that plugin; this could be the answer.
Make meta descriptions from excerpts
Make home meta description from tagline
Add noindex where appropriate
Add nofollow where appropriate
Another option is to ensure you have no
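For context, the last two plugin options boil down to emitting the standard robots meta tag on the pages that should drop out of the index. Roughly (the exact markup the plugin generates may differ):

```html
<!-- What "Add noindex/nofollow where appropriate" emits, more or less -->
<meta name="robots" content="noindex, nofollow">
```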
There are plenty of bright coders on Moz who can pitch in here and be more eloquent.
Hope this helps!
Related Questions
-
Ajax4SEO and rogerbot crawling
Has anyone had any experience with seo4ajax.com and Moz? The idea is that it points a bot to an HTML version of an AJAX page (sounds good) without the need for ugly URLs. However, I don't know how this will work with rogerbot and whether Moz can crawl it. There's a section to add specific user agents, and I've added "rogerbot". Does anyone know if this will work or not? Otherwise, it's going to create some complications. I can't currently check, as the site is in development and the dev version is currently noindexed. Thanks!
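I can't speak for how seo4ajax.com implements this specifically, but the general pattern of serving a pre-rendered HTML snapshot to named crawlers looks something like the following hypothetical Apache sketch. The /snapshots/ path and the exact user-agent strings are assumptions for illustration only.

```apache
# Hypothetical: route known crawler user agents to pre-rendered HTML
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (rogerbot|googlebot) [NC]
RewriteRule ^(.*)$ /snapshots/$1 [L]
```

If the service matches on the User-Agent header like this, then adding "rogerbot" to its user-agent list should in principle give Moz's crawler the same HTML snapshot Google gets.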
-
Crawl Test - Taking too long
The last crawl test I invoked seems to have been in progress for over 24 hours. The one before that completed in a few hours. I wish there were a progress indicator or an option to cancel. The crawl (from Tool > Crawl Test) should not take this long. Any ideas or suggestions? Also, the keyword research tool (plus a few others) has been down ever since I signed up. Is this normal?
-
SEOMOZ Crawl Test
Guys, I really have an issue that I know I have but cannot see, if that makes sense. Basically, 3 months ago I did a site-wide 301 from economyleasinguk.co.uk to www.economy-car-leasing.co.uk. Everything looks good: I get all the correct header responses, all canonicals work perfectly, Google Webmaster Tools is updated, and Fetch as Googlebot shows the old site is 301'd. I tried the SEOmoz crawl test today on the old domain and got this message: "Oh no! Looks like the page you were trying to access is temporarily down." At first I thought, OK, the site is not there, so it won't crawl an old 301'd domain. However, I tried it on another domain I know has just been 301'd and got this message: "The URL http://www.site1.com/ redirects to http://site2.com/. Do you want to crawl http://site2.com/ instead?
Would you like to:
Continue with www.site1.com
Continue with site2.com"
I really do not know what to do. Either the redirect script is missing something (though it is doing what it should) or the server is the problem (but again, it is doing what it should), so why would SEOmoz not be able to crawl the old URL like the example site above? The strange thing is that Open Site Explorer does see the 301 and asks if I want to check the new URL instead. PS: the redirect is done using a PHP redirect, which I am asking to be changed to .htaccess as it is now on an Apache server, and I was wondering if this could be an issue. All pages go to the correct pages as requested. Thanks in advance!
-
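Since the asker mentions moving the PHP redirect into .htaccess on the Apache server, here is a minimal sketch of a site-wide 301 done that way. The domains are taken from the question; this is untested against their setup, so treat it as a starting point.

```apache
# 301 every request on the old domain to the same path on the new one
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?economyleasinguk\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://www.economy-car-leasing.co.uk/$1 [R=301,L]
```

Doing the redirect at the server level avoids depending on PHP executing, and makes it easier to confirm the 301 header with a simple curl check.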
The "Crawl Diagnostics Summary" tells me I have duplicate content
The "Crawl Diagnostics Summary" reports duplicate content, but the truth is that they are the same pages with and without the www. What can I do to get rid of this error?
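The usual fix for www/non-www duplication is a single 301 in .htaccess that forces one hostname. A sketch assuming Apache with mod_rewrite; this variant canonicalizes to www, but pick whichever hostname you prefer:

```apache
# Redirect bare-domain requests to the www version with a 301
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
```

Once crawlers only ever see one hostname, the duplicate-content warnings for the www/non-www pairs should clear on the next crawl.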
-
Settings to crawl entire site
Not sure what happened, but I started a third campaign yesterday and only 1 page was crawled. The other two campaigns have 472 and 10K pages respectively. What is the proper setting to choose at the beginning of campaign setup to have the entire site crawled? I'm not sure what I did differently; I must be reading the instructions incorrectly. Thanks, Don
-
Too many on-page links
One of my SEOmoz Pro campaigns has given me the warning "Too many on-page links," and the page in question is my HTML sitemap. How do I resolve this? I obviously need my sitemap, so how do I get around it?
-
20000 site errors and 10000 pages crawled.
I have recently built an e-commerce website for the company I work at. It's built on OpenCart. Say, for example, we have a chair for sale; the URL will be: www.domain.com/best-offers/cool-chair. That's fine: SEOmoz is crawling them all and reporting any errors under that URL. On each product listing we have several options and zoom options (allowing the user to zoom in on the image for a more detailed look). When a different zoom type is selected it is appended to the URL, for example: www.domain.com/best-offers/cool-chair?zoom=1, and there are 3 different zoom types. So effectively it is treating four URLs as different when in fact they are all one URL, and SEOmoz has interpreted it this way, crawled 10,000 pages (which it thinks exist because of this), and thrown up 20,000 errors. Does anyone have any idea how to solve this?
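One common fix (the general technique, not anything OpenCart-specific) is to emit a rel="canonical" link on every zoom variant pointing at the query-free URL, so all four variants consolidate into one page. A minimal sketch of that normalization in Python, with the URLs taken from the question:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url: str) -> str:
    """Drop the query string so ?zoom=N variants collapse to one URL."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))

variants = [
    "http://www.domain.com/best-offers/cool-chair",
    "http://www.domain.com/best-offers/cool-chair?zoom=1",
    "http://www.domain.com/best-offers/cool-chair?zoom=2",
    "http://www.domain.com/best-offers/cool-chair?zoom=3",
]
# All four variants normalize to the same canonical URL
print({canonical_url(u) for u in variants})
```

In the page template, the resulting URL would go into `<link rel="canonical" href="...">` in the head of each zoom variant, so crawlers count one page instead of four.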
-
Question about when new crawls start
Hi everyone, I'm currently using the trial of SEOmoz and I absolutely love what I'm seeing. However, I have 2 different websites (one has over 10,000 pages and one has about 40 pages). I've noticed that the smaller website is crawled every few days; however, the larger site hasn't been crawled in a few days. Although both campaigns state that the sites won't be crawled until next Monday, is there any way to get the crawl to start sooner on the large site? I ask because I've implemented some changes that will likely decrease the number of pages that are crawled, simply based upon the recommendations on this site. So, I'm excited to see the potential changes. Thanks, Brian