Crawl Diagnostics Update
-
I have corrected some of the errors flagged in my SEOmoz Crawl Diagnostics, but the errors are still showing. It says a crawl has happened since. Any ideas why?
-
You can do an on-demand crawl test of up to 3000 URLs at http://pro.seomoz.org/tools/crawl-test, which might help.
-
Hi Brad
Can I ask what applications you use and would you recommend them?
Thanks
Pete
-
I agree... is there really no way to trigger a crawl before the next scheduled crawl date? With other applications, I am able to do this type of thing whenever I want.
-
Thanks for the reply. It says my next crawl starts May 14th, 2012. Can I make it happen any earlier, or do I need to wait?
-
Hey Pete,
Sorry about the confusion. The crawls kick off a little earlier (about 3 days before your actual crawl date), so that may be the reason. It's hard to say without specific dates.
Best,
Related Questions
-
Website cannot be crawled
I have received the following message from Moz on a few of our websites now:

"Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster."

I have spoken with our webmaster and they have advised the following: the robots.txt file is definitely there on all of the sites, and Google is able to crawl for these files. Moz, however, is having some difficulty finding the files when a particular redirect is in place. For example, the page currently redirects from threecounties.co.uk/ to https://www.threecounties.co.uk/, and when this happens the Moz crawler cannot find the robots.txt on the first URL, which generates the reports you have been receiving. From what I understand, this is a flaw with the Moz software and not something that we could fix from our end.

Going forward, something we could do is remove these rewrite rules to www., but these are useful redirects and removing them would likely have SEO implications.

Has anyone else had this issue? Is there anything we can do to rectify it, or should we leave it as is?
Moz Pro | threecounties
-
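As a quick sanity check (a sketch, not how Moz's crawler actually works; the helper names and the NoRedirect handler are mine), you can request robots.txt on every scheme/host variant of the domain without following redirects and compare the status codes each one returns:

```python
import urllib.error
import urllib.request

def robots_variants(domain):
    """Build the robots.txt URL for every scheme/host variant of a bare domain."""
    host = domain.lower().strip("/")
    host = host.removeprefix("http://").removeprefix("https://").removeprefix("www.")
    return [f"{scheme}://{prefix}{host}/robots.txt"
            for scheme in ("http", "https")
            for prefix in ("", "www.")]

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Make urllib report 3xx responses instead of silently following them."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def check(url):
    """Return the raw HTTP status code for url, without following redirects."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        return opener.open(url, timeout=10).status
    except urllib.error.HTTPError as err:  # 3xx/4xx/5xx all land here
        return err.code

# Example (requires network):
#   for url in robots_variants("threecounties.co.uk"):
#       print(url, check(url))
```

If the non-www variants return a 301 with no robots.txt body, that would match the webmaster's description: a crawler that checks robots.txt before following redirects never sees the file.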
Meta Tag Descriptions not being found in Moz Crawls
Hey guys, I have been managing a few websites and have input them into Moz for crawl reports, etc. For a while I have noticed that we were getting a huge number of errors for missing meta tags, numbering in the 200s. The sites were in place before I got here, and on a lot of the older posts no one had even attempted to include tags, links, or anything else. As they are all WordPress sites that already had the Yoast/WordPress SEO plugin installed, I went through each post and media file one at a time and updated their meta tags via the plugin. I personally did this, so I know I added and saved each one; however, the Moz crawl reports continue to show that we are missing roughly 200 meta tags. I've seen a huge drop-off in 404 errors and such since I went through and double-checked everything on the sites, but the meta tag errors persist.

Is it the case that Moz is not recognizing the tags when it crawls because I used the Yoast plugin? Or would you say the plugin is the issue, and I should find another way to add meta tags to the pages and posts on the site? My main concern is that if Moz is having issues crawling the sites, is Google also seeing the same thing? The URLs include:

sundancevacationsblog.com
sundancevacationsnews.com
sundancevacationscharities.com

Any help would be appreciated!
Moz Pro | MOZ.info
-
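One way to rule the plugin in or out (a stdlib-only sketch; the class and function names are mine) is to fetch a post's final HTML and check for the tag directly, independently of both Yoast and Moz:

```python
from html.parser import HTMLParser

class MetaDescriptionFinder(HTMLParser):
    """Scan an HTML document for <meta name="description" content="...">."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "description":
            self.description = attrs.get("content", "")

def meta_description(html):
    """Return the meta description string, or None if the page lacks one."""
    finder = MetaDescriptionFinder()
    finder.feed(html)
    return finder.description
```

If this returns None on the live HTML of a flagged URL, the tag is genuinely missing from the page output (e.g. the plugin is not emitting it on that template); if it returns the text you entered, the issue is on the crawler's side.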
Moz crawl duplicate pages issues
Hi

According to the Moz crawl of my website, I have in the region of 800 pages which are considered internal duplicates. I'm a little puzzled by this, even more so because some of the pages it lists as duplicates of one another are not. For example, the Moz crawler considers page B to be a duplicate of page A in the URLs below. (Not sure on the live-link policy, so I've put a space in the URLs to 'unlive' them.)

Page A: http:// nuchic.co.uk/index.php/jeans/straight-jeans.html?manufacturer=3751
Page B: http:// nuchic.co.uk/index.php/catalog/category/view/s/accessories/id/92/?cat=97&manufacturer=3603

One is a filter page for Curvety Jeans and the other a filter page for Charles Clinkard Accessories. The page titles are different and the page content is different, so I've no idea why these would be considered duplicates. Thin, maybe, but not duplicate. Likewise, pages B and C are considered duplicates of page A in the following:

Page A: http:// nuchic.co.uk/index.php/bags.html?dir=desc&manufacturer=4050&order=price
Page B: http:// nuchic.co.uk/index.php/catalog/category/view/s/purses/id/98/?manufacturer=4001
Page C: http:// nuchic.co.uk/index.php/coats/waistcoats.html?manufacturer=4053

Again, these are product filter pages which the crawler would have found using the site's filtering system, but again I cannot find what makes pages B and C duplicates of A. Page A is a filtered result for Great Plains Bags (filtered from the general bags collection), page B is the filtered results for Chic Look Purses from the purses section, and page C is the filtered results for Apricot Waistcoats from the waistcoats section.

I'm keen to fix the duplicate content errors on the site before it goes properly live at the end of this month (that's why anyone kind enough to check the links will see a few design issues with the site), but in order to fix the problem I first need to work out what it is, and in this case I can't. Can anyone else see how these pages could be considered duplicates of each other, please? Checking I've not gone mad!

Thanks, Carl
Moz Pro | daedriccarl
-
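If the duplicates do come from filter parameters, one common mitigation is to canonicalize the filtered variants onto the unfiltered listing URL. A small sketch (the parameter names are assumed from the URLs above; adjust for the actual store) of stripping filter/sort parameters to compute a shared canonical URL:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Query parameters that only filter or sort a listing, rather than
# identify a distinct resource. (Assumed from the URLs in the question.)
FILTER_PARAMS = {"manufacturer", "dir", "order", "cat"}

def canonical_url(url):
    """Strip known filter/sort parameters so variants share one canonical URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in FILTER_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```

Pointing a rel=canonical tag at the result (or blocking the filter parameters from crawling) keeps thin filter pages from being compared against each other at all.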
Moz & Xenu Link Sleuth unable to crawl a website (403 error)
It could be that I am missing something really obvious, but we are getting the following error when we try to use the Moz tool on a client website. (I have read through a few posts on 403 errors, but none appear to be the same problem as this.)

Moz result:
Title: 403 : Error
Meta Description: 403 Forbidden
Meta Robots: Not present/empty
Meta Refresh: Not present/empty

Xenu Link Sleuth result:
Broken links, ordered by link: error code: 403 (forbidden request), linked from page(s):

Thanks in advance!
Moz Pro | ZaddleMarketing
-
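When a browser gets the page but two different crawlers both get a 403, the server (or a firewall in front of it) is often filtering on the User-Agent header. A hedged way to test this (the helper name is mine, and the agent strings in the comments are illustrative, not necessarily what Moz or Xenu actually send) is to request the same URL with different User-Agents and compare status codes:

```python
import urllib.error
import urllib.request

def status_with_agent(url, user_agent):
    """Fetch url with the given User-Agent and return the HTTP status code."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        return urllib.request.urlopen(req, timeout=10).status
    except urllib.error.HTTPError as err:  # 403 etc. land here
        return err.code

# Example (requires network): compare a browser-like UA with a crawler-like one.
#   status_with_agent("https://example.com/", "Mozilla/5.0")
#   status_with_agent("https://example.com/", "rogerbot")
```

If the browser-like agent returns 200 and the crawler-like one returns 403, the fix belongs in the server or firewall configuration, not in the crawling tools.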
The "Crawl Diagnostics Summary" tells me I have duplicate content
The "Crawl Diagnostics Summary" reports duplicate content, but the truth is that these are the same pages with and without the www. What can I do to clear this error?
Moz Pro | arteweb2
-
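The usual fix is to pick one hostname and 301-redirect the other to it, plus a rel=canonical tag on the preferred version. As a small pure-Python sketch (no network; the function name is mine), this checks whether two crawled URLs are really the same page apart from the www prefix, which is how you would group these duplicates before fixing the redirect:

```python
from urllib.parse import urlsplit

def same_page_modulo_www(url_a, url_b):
    """True when two URLs differ only by scheme or a leading 'www.' on the host."""
    def key(url):
        parts = urlsplit(url)
        host = (parts.hostname or "").removeprefix("www.")
        return (host, parts.path.rstrip("/"), parts.query)
    return key(url_a) == key(url_b)
```

Running the duplicate pairs from the crawl report through a check like this separates "www duplicates" (fixable with one redirect rule) from any genuinely distinct duplicate pages.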
Crawl reports, date/time error found
Hello! I need to filter out crawl errors found before a certain date/time, but the date and time listed for every error are the same; it looks more like the time the report was generated than when each error was discovered. Is there a fix?
Moz Pro | AJPro
-
Lots of site errors after last crawl....
Something interesting happened on the last update for my site in the SEOmoz Pro tools. For the last month or so the errors on my site were very low; then on the last update I had a huge spike in errors, warnings, and notices. I'm not sure whether I somehow made a change to my site (without knowing it) and caused all of these errors, or whether it just took a few months to find all the errors on my site.

My duplicate page content went from 0 to 45, my duplicate page titles went from 0 to 105, my 4xx (client error) count went from 0 to 4, and my missing-or-empty titles went from 0 to 3. In the warnings section, my missing meta description tags went from a handful to 444 (most of these look to be archive pages). Down in the notices I have over 2,000 that are blocked by meta robots, meta robots nofollow, and rel=canonical. I didn't have anywhere near this many prior to the last update of my site.

I just wanted to see what I need to do to clean this up, and to figure out whether I did something to cause all the errors. I'm assuming the red errors are the first things I need to clean up. Any help you guys can provide would be greatly appreciated. Also, if you'd like me to post any additional information, please let me know and I'd be glad to.
Moz Pro | NoahsDad
-
Crawling a website with redirects
Hi, I started a campaign for a website which uses multiple redirects before showing the real content. In the crawl report, only one page is crawled. Is there a way to let the crawler follow the redirects so I get useful reports? The website is www.cegeka.be. Thank you
Moz Pro | Cegeka
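A crawler that stops at the first hop will only ever report one page. As a way to see the whole chain before the crawler runs (a pure sketch; the redirect map stands in for real HTTP Location headers, and the example hops for cegeka.be are hypothetical), you can resolve a chain hop by hop with a loop guard:

```python
def resolve_chain(url, redirects, max_hops=10):
    """Follow url through a {source: target} redirect map.

    Returns the list of URLs visited, ending at the final destination,
    stopping early at max_hops or on a redirect loop.
    """
    chain = [url]
    while url in redirects and len(chain) <= max_hops:
        url = redirects[url]
        chain.append(url)
        if chain.count(url) > 1:  # redirect loop detected
            break
    return chain

# Hypothetical chain like the one described above:
#   resolve_chain("http://www.cegeka.be/", {
#       "http://www.cegeka.be/": "https://www.cegeka.be/",
#       "https://www.cegeka.be/": "https://www.cegeka.be/en",
#   })
```

If the final URL in the chain is the page with the real content, entering that URL as the campaign root (rather than the redirecting one) usually gives the crawler something it can actually index.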