Crawl Test is now On-Demand Crawl!
-
If you've been with Moz a while, you may have used our old Crawl Test tool. A year ago we launched an all-new, campaign-based Site Crawl (with an entirely rebuilt crawl engine), but Crawl Test fell into disrepair, and we haven't had a solid tool for crawling non-campaign domains.
I'm happy to announce that we've just launched an all-new On-Demand Crawl, built on the new Site Crawl engine, with a UI that's focused on quick insights. Moz Pro Standard-tier customers can run up to 5 crawls per month at 3,000 pages per crawl (crawls are saved for 90 days), with per-month limits increasing at higher tiers.
Most On-Demand Crawls should run in a few minutes, making the tool perfect for getting quick insights for sales meetings, vetting prospects, or analyzing competitors. We've written up a sample case study, and logged-in customers can go directly to On-Demand Crawl.
Try it out -- we'd love to hear your use cases (either here or in the blog post comments).
-
That's a big help, thank you!
-
It's been a while since I tried Moz's crawler. This new "major refresh" looks great, Dr. Pete and Team Moz. Cheers.
-
thank you
-
On-Demand Crawl, built on the new Site Crawl engine, with a UI that's focused on quick insights
That is a huge plus!
Thanks, Dr. Pete. Hope you're doing well, man.
Tom Zickell
Related Questions
-
Moz bot has trouble crawling AngularJS - I believe it's seeing the SPA (Single Page Application) before Universal. Anyone else have this issue? What is the fix?
Our user-agent detection is set up to serve the Universal (server-rendered) version to bots, but the Single Page Application (SPA) version partially loads before Universal does. Because of this, Moz (and possibly search engines) think we have massive duplicate-content issues. For example, the crawl report said a particular product page (which has about 1,000 words) has 33,000 words and duplicate content with over 300 other pages. This makes me believe it's only picking up the SPA version. Has anyone come across this, and what would be the fix?
Moz Bar | | laurengdicenso1 -
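One way to sanity-check a situation like this is to compare the visible word count of the raw server response (what a non-rendering crawler sees) against what you expect from the rendered page. A minimal sketch using only the standard library; the sample HTML below is made up, and fetching the real page (e.g. with urllib) is left out:

```python
from html.parser import HTMLParser

class VisibleTextParser(HTMLParser):
    """Collects text the way a non-rendering crawler might: raw HTML only,
    skipping <script>/<style> bodies (so an SPA's inlined JS doesn't count)."""
    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.words.extend(data.split())

def visible_word_count(html: str) -> int:
    parser = VisibleTextParser()
    parser.feed(html)
    return len(parser.words)

# An SPA shell often has almost no visible text in the raw response:
spa_shell = "<html><body><app-root></app-root><script>/* lots of JS */</script></body></html>"
print(visible_word_count(spa_shell))  # 0
```

If the raw response's count is wildly different from the ~1,000 words you see in a browser, the server really is handing crawlers the SPA shell (or, as in the 33,000-word case, the script payload is being counted as content).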
Site Crawl 1-page 301 status error but httpstatus.io says it's 403
I am trying to run a Site Crawl for my website, and Moz is only crawling one page, with a home-page URL status code of 301. However, when I run it through httpstatus.io, it gives me a 403 status error. I'm curious why Moz is saying it's a 301 while httpstatus.io says 403. Is there anything I can do in Moz first to get the site crawled before asking my developers to look into the 403 error?
Moz Bar | | JohnConover0 -
Site Crawl report shows strange duplicate pages
Beginning early in Feb, we got a big bump in duplicate pages, and the URLs of the pages are very odd. Example:

http://firstname.lastname@website.com/dir/page.php

is a duplicate of

http://website.com/dir/page.php

I checked through the site, the nginx conf files, and referral pages, and could not find what is prefixing the pages with 'http://firstname.lastname@'. Any ideas? The person whose name is 'Firstname Lastname' is stumped as well. Thanks.
Moz Bar | | Neo4j
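For what it's worth, the `user@host` form is a legitimate URL shape: anything before the `@` in the authority is userinfo, which HTTP servers ignore, so both URLs really do serve the same page. A quick way to normalize such URLs for your own duplicate checks (a sketch using only the standard library; the URLs are the examples from the question):

```python
from urllib.parse import urlsplit

def strip_userinfo(url: str) -> str:
    """Drop any 'user:password@' userinfo from the URL's authority,
    so duplicate detection treats both forms as the same page."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    if parts.port:
        host = f"{host}:{parts.port}"
    return parts._replace(netloc=host).geturl()

print(strip_userinfo("http://firstname.lastname@website.com/dir/page.php"))
# http://website.com/dir/page.php
```

The real question is still what's generating the `user@` links in the first place (a malformed `mailto:` link on a page is a common culprit), but normalizing makes the two forms comparable.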
4 days waiting for a Moz Crawl - How quick are yours?
Hi there, Could anyone say how long they have been waiting for crawl results? I requested a crawl on a 20-page website and have been waiting 4 days, since last weekend. I checked Moz Health and there have been no related issues there: http://health.moz.com/ Your response would be welcome. Thanks
Moz Bar | | SEOguy10 -
How Do I Troubleshoot 804 HTTPS Crawl Error?
In my Moz crawl report I get:

Crawl Error
Moz encountered an error on one or more pages on your site
Error Code 804: HTTPS (SSL) Error Encountered

The Moz Help section only says:

804 HTTPS (SSL) error: 804 errors result from a site with misconfigured SSL software. If Moz's crawlers cannot correctly interpret an SSL response for a home page, the crawl ends immediately.

My site is publicly accessible over HTTPS (https://www.respoke.io/), and I'm not seeing any issues with my certificate. Can anyone help me out? What steps can I take to troubleshoot this error? If SSL is misconfigured, how do I configure it properly?
Moz Bar | | digium
I'm getting "you're not using the rel="canonical" META attribute" in my crawl diagnostics
I'm running a campaign crawl through Moz on this particular page: http://www.henley.ac.uk/executive-education/leadership-and-management-programmes/ but I'm getting a notification from Moz saying, "you're not using the rel="canonical" META attribute". I don't understand what this means! Has anyone else had this problem, or can you help me understand what it means and how to fix it? Oh, and Happy Thanksgiving from the UK! Virginia
Moz Bar | | blackboxideas0 -
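For anyone hitting the same notice: it means the page's `<head>` doesn't declare a canonical URL, i.e. the page never tells crawlers which URL is the preferred version of itself. The fix is a link element like the one below (shown with the page from the question as the assumed preferred URL; confirm the exact URL, including trailing slash and http vs https, before adding it):

```html
<link rel="canonical" href="http://www.henley.ac.uk/executive-education/leadership-and-management-programmes/" />
```

Note this particular message is a notice rather than an error; self-referencing canonicals are a best practice, not a hard requirement.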
408 errors in crawl diagnostics
Best community, The Crawl Diagnostics report of Moz gave our website a lot of 408 errors like the one below:

Title: 408 : Error
Meta Description: 408 Request Time-out
Meta Robots: Not present/empty
Meta Refresh: Not present/empty

The report has diagnosed around 320 of these, even though we cannot reproduce the error ourselves. Two questions relating to this:
* Can you (the people of Moz) reproduce the errors manually?
* Is it possible that it is a bug in the spider of Moz itself (too many spiders crawling at the same time)?
Moz Bar | | arjen.koedam0 -
Moz "Crawl Diagnostics" doesn't respect robots.txt
Hello, I've just had a new website crawled by the Moz bot. It's come back with thousands of errors saying things like:

* Duplicate content
* Overly dynamic URLs
* Duplicate Page Titles

The duplicate content & URLs it's found are all blocked in the robots.txt, so why am I seeing these errors?
Here's an example of some of the robots.txt that blocks things like dynamic URLs and directories (which the Moz bot ignored):

Disallow: /?mode=
Disallow: /?limit=
Disallow: /?dir=
Disallow: /?p=*&
Disallow: /?SID=
Disallow: /reviews/
Disallow: /home/

Many thanks for any info on this issue.
Moz Bar | | Vitalized
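As a quick sanity check on rules like these, Python's standard-library robots.txt parser can confirm whether the Disallow lines actually match the URLs showing up in the report (a sketch; the example.com URLs are made up, and "rogerbot" stands in for Moz's crawler user agent — note that wildcard rules like `/?p=*&` are not part of the original robots.txt standard, so the stdlib parser is only tested on the plain-prefix rules here):

```python
import urllib.robotparser

RULES = """\
User-agent: *
Disallow: /?mode=
Disallow: /?limit=
Disallow: /reviews/
Disallow: /home/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

# Dynamic URLs and the listed directories should be blocked...
for url in ("http://example.com/?mode=grid",
            "http://example.com/reviews/widget-123",
            "http://example.com/home/"):
    assert not rp.can_fetch("rogerbot", url)

# ...while ordinary pages remain crawlable.
assert rp.can_fetch("rogerbot", "http://example.com/products/widget-123")
```

If the rules do match, the issue is on the crawler's side (or the live robots.txt differs from what you think is deployed); if they don't, the rule syntax is the place to look.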