Odd crawl test issues
-
Hi all, first post, be gentle...
Just signed up for Moz in the hope that it, and the learning materials, will help me improve my web traffic. I've already run into a bit of woe with one of the sites we have added to the tool: I cannot get the crawl test to do any actual crawling. I've tried to add the domain three times now, but the initial crawl of a few pages (the automatic one when you add a domain to Pro) will not work for me.
Instead of getting a list of problems with the site, I have a list of 18 pages where it says 'Error Code 902: Network Errors Prevented Crawler from Contacting Server'. Being a little puzzled by this, I checked the site myself: no problems. I asked several people in different locations (and countries) to have a go, and no problems for them either. I ran the same site through the Raven Tools site auditor and got results; it crawled a few thousand pages. I ran the site through Screaming Frog with the Googlebot user agent, and again no issues. I just tried Fetch as Google in Webmaster Tools and all was fine there.
I'm very puzzled, then, as to why Moz is having issues with the site when every other tool is happy with it. I know the homepage takes 7 seconds to load (caching is off at the moment while we tweak the design), but all the other pages, according to Screaming Frog, take an average of 0.72 seconds to load.
The site is a Magento one, so we have a lengthy robots.txt, but that is not causing problems for any of the other services. The robots.txt is below.
# Google Image Crawler Setup
User-agent: Googlebot-Image
Disallow:

# Crawlers Setup
User-agent: *
# Directories
Disallow: /ajax/
Disallow: /404/
Disallow: /app/
Disallow: /cgi-bin/
Disallow: /downloader/
Disallow: /errors/
Disallow: /includes/
#Disallow: /js/
#Disallow: /lib/
Disallow: /magento/
#Disallow: /media/
Disallow: /pkginfo/
Disallow: /report/
Disallow: /scripts/
Disallow: /shell/
Disallow: /skin/
Disallow: /stats/
Disallow: /var/
Disallow: /catalog/product
Disallow: /index.php/
Disallow: /catalog/product_compare/
Disallow: /catalog/category/view/
Disallow: /catalog/product/view/
Disallow: /catalogsearch/
#Disallow: /checkout/
Disallow: /control/
Disallow: /contacts/
Disallow: /customer/
Disallow: /customize/
Disallow: /newsletter/
Disallow: /poll/
Disallow: /review/
Disallow: /sendfriend/
Disallow: /tag/
Disallow: /wishlist/
Disallow: /catalog/product/gallery/

# Files
Disallow: /cron.php
Disallow: /cron.sh
Disallow: /error_log
Disallow: /install.php
Disallow: /LICENSE.html
Disallow: /LICENSE.txt
Disallow: /LICENSE_AFL.txt
Disallow: /STATUS.txt

# Paths (no clean URLs)
#Disallow: /.js$
#Disallow: /.css$
Disallow: /.php$
Disallow: /?SID=

# Pagination
Disallow: /?dir=
Disallow: /&dir=
Disallow: /?mode=
Disallow: /&mode=
Disallow: /?order=
Disallow: /&order=
Disallow: /?p=
Disallow: /&p=

If anyone has any suggestions, be it about the tool or my robots.txt, I would welcome them. As a side note, I'm aware that we are blocking the individual product pages. There are too many products on the site at the moment (250k plus) with manufacturer-default descriptions, so we have blocked them and are working on getting the category pages and guides listed. In time we will rewrite the most popular products and unblock them as we go.
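For anyone wanting to sanity-check which URLs a file like this actually blocks, Python's standard-library parser is a quick way to test (a minimal sketch using a short excerpt of the rules above; note the stdlib parser implements the original exclusion standard, so Google-style patterns in lines like `Disallow: /.php$` are treated as literal text there):

```python
from urllib.robotparser import RobotFileParser

# A short excerpt of the rules above, fed straight to the parser.
rules = """\
User-agent: *
Disallow: /app/
Disallow: /catalogsearch/
Disallow: /catalog/product/view/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# Paths under a disallowed directory are blocked; everything else passes.
print(rp.can_fetch("rogerbot", "/app/etc/config.xml"))      # False
print(rp.can_fetch("rogerbot", "/catalogsearch/?q=lamp"))   # False
print(rp.can_fetch("rogerbot", "/category/lighting.html"))  # True
```

The user-agent string "rogerbot" is my assumption of what the Moz crawler sends; swap in whatever agent you want to test.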
Many thanks
Carl
-
Thanks for the hints re: the robots.txt; I'll tidy that up.
-
Network errors can occur somewhere between us and your site, not necessarily with your server itself. Your best bet would be to check with your ISP or host for any connectivity issues to your server. Since this is the first crawl to report these issues, the next crawl may be more successful.
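A quick way to rule out basic TCP reachability when chasing this kind of network error (a minimal sketch; the hostname in the example is a placeholder for your own domain):

```python
import socket

def can_connect(host, port=80, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_connect("www.example.com", 80) -> True when port 80 is reachable
```

Bear in mind this only tests the path from the machine you run it on; the crawler's route to your server may differ.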
One thing, though: you will want to keep your user-agent directives in a single block, without blank lines between them.
So this:
# Crawlers Setup
User-agent: *

# Directories
Disallow: /ajax/
Disallow: /404/
Disallow: /app/

would need to look like:

# Crawlers Setup
User-agent: *
# Directories
Disallow: /ajax/
Disallow: /404/
Disallow: /app/
-
Many thanks for the reply. The server we use is a dedicated server which we set up ourselves, including the OS and control panel. It just seems very odd that every other tool works fine but Moz won't. I cannot see how it would need anything special compared with, say, Raven's site crawler.
I will check out those other threads, though, to see if I missed anything. Thanks for the links.
Just checked port 80 using http://www.yougetsignal.com/tools/open-ports/ (not sure if links are allowed) and there were no problems there.
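One more check worth trying, since some firewalls and CDNs filter by User-Agent rather than by IP: request the page with the crawler's UA string and compare the status code with a normal browser request (a sketch; "rogerbot" as the exact UA string Moz sends is an assumption on my part):

```python
import urllib.error
import urllib.request

def fetch_status(url, user_agent="rogerbot"):
    """Fetch `url` with a custom User-Agent and return the HTTP status code."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # the server answered, just with an error status

# e.g. compare fetch_status("http://www.example.com/", "rogerbot")
#      with    fetch_status("http://www.example.com/", "Mozilla/5.0")
```

If the two calls return different codes (say 200 for the browser string and 403 for the bot), the server or something in front of it is filtering by user agent.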
-
Related Questions
-
Moz Pro OnDemand Crawl fails on WordPress site
Hello, I just can't seem to understand why OnDemand Crawl fails on further attempts, crawling only 4 pages out of 68. I am using WordPress with the Divi theme on a LiteSpeed server. Robots.txt allows rogerbot; I just can't seem to find the issue.
Moz Bar | ChrisSanClaire
-
Different Errors Running 2 Crawls on Effectively the Same Setup
Our developers are moving away from using robots.txt files due to security risks, so we have been in the process of removing them from sites. However, we and our clients still want to run Moz crawl reports, as they can highlight useful information. The two sites in question sit on the same server with the same settings (in fact running on the same Magento install). We do not have robots.txt files present (they 404), and as per Chiaryn's response here https://moz.com/community/q/without-robots-txt-no-crawling this should work fine? However, for www.iconiclights.co.uk we got: 902 : Network errors prevented crawler from contacting server for page. While for www.valuelights.co.uk we got: 612 : Page banned by error response for robots.txt. These crawls were both run recently, and there was no robots.txt present. Not to mention, they are on the same setup/server etc. as mentioned. Now, we have just tested this by uploading a blank robots.txt file to see if it changed anything, but we get exactly the same errors. I have had a look but can't find anything that really matches this on here; help would really be appreciated! Thanks!
Moz Bar | I-COM
-
Issues with Crawl Test and SSL Certificate
So I am having issues with the Crawl Test being able to crawl my site accurately due to what the tool says is an "SSL Certificate Error" (804 : HTTPS (SSL) error encountered when requesting page). The only thing is that I have no warnings about this SSL issue in Search Console, and when I check the SSL certificate on https://www.sslshopper.com it comes back just fine. Does anybody know why this might be happening, or has anyone encountered this issue before?
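For anyone hitting the 804 error, a strict TLS handshake from Python reproduces roughly what a verifying client sees, and can surface chain problems (such as a missing intermediate certificate) that browsers sometimes paper over by fetching the intermediate themselves. A minimal sketch, with the hostname as a placeholder:

```python
import socket
import ssl

def cert_check(host, port=443):
    """Attempt a fully verified TLS handshake; return (ok, detail)."""
    ctx = ssl.create_default_context()  # verifies the chain and the hostname
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return True, tls.version()
    except OSError as err:  # ssl.SSLError is a subclass of OSError
        return False, str(err)

# e.g. cert_check("www.example.com")
```

Note this conflates an unreachable host with a bad certificate; for a quick diagnostic that's fine, since either would stop a crawler.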
Moz Bar | DRSearchEngOpt
-
Crawl Report Internal Links Count
We recently ran a crawl report on www.phase1tech.com. Some of the pages are coming back with a large number of 'internal links'. These two pages, for example, are showing 800 internal links: http://www.phase1tech.com/Upcoming-Events and http://www.phase1tech.com/Contact. At best there are approximately 70 links on the page. Where is the 800 number coming from?
Moz Bar | AISEO
-
Moz Crawl Test says pages have no internal links
Greetings, I am working on a website, https://www.nasscoinc.com, and ran a Moz Crawl Test on it. According to the crawl test, only 2 of the website's hundreds of pages are receiving internal links. When I run a similar test on the site using Screaming Frog, I see that most of the pages have at least one internal link. I'm wondering if anyone has seen this before with the crawl test, and whether there is a way to get it to see the internal links? Thanks!
Moz Bar | TopFloor
-
MOZ crawl test is not reporting on all the pages on my site.
I've run the crawl test on one of the sites I've taken over SEO for; however, it's only picking up some of the pages. For instance, it indexes all the pages under xxxxx/us but none under xxxxx/au or xxxxx/uk. The pages are being indexed, as they're ranking in Google. Thanks.
Moz Bar | ahyde
-
Crawling password-protected sites such as dev or staging areas to look at sites before going live?
Hi, I've instructed clients to password-protect dev areas so they don't get crawled and indexed, but how do we set up the Moz crawl software so we can crawl these sites for a final check of any issues before going live? Is there an option I haven't seen to add logins/passwords for the crawl software to access? Cheers, Dan
Moz Bar | Dan-Lawrence
-
Moz "Crawl Diagnostics" doesn't respect robots.txt
Hello, I've just had a new website crawled by the Moz bot. It's come back with thousands of errors saying things like: duplicate content, overly dynamic URLs, duplicate page titles. The duplicate content and URLs it found are all blocked in the robots.txt, so why am I seeing these errors? Here's an example of some of the robots.txt that blocks things like dynamic URLs and directories (which the Moz bot ignored):
Disallow: /?mode=
Disallow: /?limit=
Disallow: /?dir=
Disallow: /?p=*&
Disallow: /?SID=
Disallow: /reviews/
Disallow: /home/
Many thanks for any info on this issue.
Moz Bar | Vitalized
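I can't say how the Moz crawler parses these files, but one possible explanation: under the original robots.txt standard (which Python's standard-library parser implements), Disallow rules are plain prefix matches and `*` inside a path is a literal character, not a wildcard. So a Google-style pattern like `Disallow: /?p=*&` may simply not match the dynamic URLs it was written for. A minimal sketch:

```python
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /?mode=
Disallow: /?p=*&
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# A plain prefix rule behaves as intended:
print(rp.can_fetch("rogerbot", "/?mode=list"))    # False (blocked)
# But '*' is literal here, so this dynamic URL slips through:
print(rp.can_fetch("rogerbot", "/?p=2&dir=asc"))  # True (not blocked)
```

Google's crawler does honour `*` and `$` as wildcards, which would explain Googlebot and an original-standard parser disagreeing about the same file.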