Crawl test doesn't finish
-
Hello,
I used this crawl test on 2 websites 3 days ago, and it hasn't finished yet. I'm wondering if the crawler is in an infinite loop, or has crashed without sending back an error.
I could re-launch the test, but if it's really still crawling, I don't want to lose any work in progress.
Is there any way to check the status of a crawl?
-
Just a note that we've discontinued the old Crawl Test tool and have launched an entirely new On-Demand Crawl tool based on our upgraded Site Crawl engine (launched last year). The new tool has an enhanced UI, an entirely rebuilt back-end, full export capability, and will save your old crawls for up to 90 days.
We've written up a sample case study, or logged-in customers can go directly to On-Demand Crawl.
Related Questions
-
How to turn off automated site crawls
Hi there, is there a way to turn off the automated site crawl feature for an individual campaign? Thanks
Moz Bar | | SEONOW1230 -
Moz Crawl Test says pages have no internal links
Greetings, I am working on a website, https://www.nasscoinc.com, and ran a Moz Crawl Test on it. According to the crawl test, only 2 of the website's hundreds of pages are receiving internal links. When I run a similar test on the site using Screaming Frog, I see that most of the pages have at least one internal link. I'm wondering if anyone has seen this before with the crawl test, and whether there is a way to get it to recognize the internal links? Thanks!
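One way to sanity-check either report is to count the internal link targets in the raw HTML yourself; a common cause of this kind of discrepancy is links injected by JavaScript, which a plain HTML fetch (like the sketch below) won't see. A minimal sketch, assuming the third-party requests and beautifulsoup4 packages are installed and using the URL from the question:

```python
# Minimal sketch: count unique internal link targets in the homepage HTML.
# Assumes `requests` and `beautifulsoup4` are installed.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

START_URL = "https://www.nasscoinc.com"  # URL from the question above

resp = requests.get(START_URL, timeout=30)
soup = BeautifulSoup(resp.text, "html.parser")

site_host = urlparse(START_URL).netloc
internal_links = set()
for a in soup.find_all("a", href=True):
    target = urljoin(START_URL, a["href"])        # resolve relative hrefs
    if urlparse(target).netloc == site_host:      # keep same-host links only
        internal_links.add(target.split("#")[0])  # ignore fragments

print(f"{len(internal_links)} unique internal link targets on {START_URL}")
```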
Moz Bar | | TopFloor0 -
Rank Checker Won't Accept New gTLDs
Hi everyone, I've got some domains with the extension .solutions; however, these extensions are not yet accepted by some of the very useful, and now dearly missed, tools on this site. One of those tools is the Rank Checker (see the attached error message: TDanTx1.png).
Moz Bar | | SSsseeeooOO0 -
Where to find one-off crawl report
Hello, I don't know if I am being a bit daft, but I don't seem to be able to find the area where I can request a one-off crawl report anymore (rather than setting up a campaign). Can someone let me know where this is now? Thanks!
Moz Bar | | RikkiD220 -
Why won't my on-page report card improve?
For my page www.homedestination.com/Minneapolis-homes-for-sale.html, Moz ranks the page as an "F" for the keyword "Minneapolis home buyers". All data in the report card is from months ago, and using the "Grade my On-Page Optimization" button fails to show any updates, even after months. For example, the title is actually "Minneapolis Homes For Sale | buying a Minneapolis home | Homebuyers" while Moz says it is "Minneapolis Homes For Sale | Home Destination | Jenna Thuening". Moz says the description tag is "Realtor Jenna Theuning helping real estate buyers find Minnetonka homes for sale, owner of Home Destination. CDPE also helps with foreclosure and short sales buying." and "Minneapolis homes for sale, and Minneapolis listing, and Minneapolis upscale real estate, and Minneapolis lakeside properties provided by Home Destination" when it is actually "Minneapolis homes for sale, and Minneapolis listing, Minneapolis home buyers, and Minneapolis lakeside properties provided by Home Destination." Additionally, my weekly ranking reports seem "stuck" and continue to show many "F" ranked keywords that have been updated for some time. Any advice on how to see improvements?
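Before troubleshooting the report card itself, it can help to confirm what the live page actually serves. A minimal sketch for printing the current title and meta description, assuming requests and beautifulsoup4 are installed (the http:// scheme is added here to the URL from the question):

```python
# Minimal sketch: print the title and meta description the page currently serves,
# so they can be compared against what the on-page report card shows.
# Assumes `requests` and `beautifulsoup4` are installed.
import requests
from bs4 import BeautifulSoup

URL = "http://www.homedestination.com/Minneapolis-homes-for-sale.html"

soup = BeautifulSoup(requests.get(URL, timeout=30).text, "html.parser")

title = soup.title.get_text(strip=True) if soup.title else "(no title tag found)"
meta = soup.find("meta", attrs={"name": "description"})
description = meta.get("content", "").strip() if meta else "(no meta description found)"

print("Title:      ", title)
print("Description:", description)
```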
Moz Bar | | jessential1 -
Crawl Diagnostics: Exclude known errors and others that have been detected by mistake? New Moz Analytics feature?
I'm curious if the new Moz Analytics will have a feature (filter) to exclude known errors from the crawl diagnostics. For example, the attached screenshot shows this URL as a 404 error, but it works fine: http://en.steag.com.br/references/owners-engineering-services-gas-treatment-ogx.php To maintain a better overview of which errors can't be solved, I would like to mark them as "don't take this URL into account..." so that I don't try to fix them again next time. On the other hand, I have hundreds of errors generated by forums or by the CMS that I cannot resolve on my own. These kinds of crawl errors I would also like to filter away and categorize as "errors to look at later with a specialist". Will this come with the new Moz Analytics? And is there a list that shows which new features will still be implemented? (Screenshot: knPGBZA.png)
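When a reported 404 looks like a false positive, it's worth checking what status code the server actually returns, since some servers answer crawler user agents differently than browsers. A minimal sketch, assuming requests is installed; "rogerbot" below is shorthand, not the exact Moz user-agent string:

```python
# Minimal sketch: check the HTTP status the server returns for a URL the crawl
# report flags as a 404, using both a browser-like and a bot-like user agent.
# Assumes `requests` is installed; "rogerbot" is shorthand, not the exact UA string.
import requests

URL = "http://en.steag.com.br/references/owners-engineering-services-gas-treatment-ogx.php"

for ua in ("Mozilla/5.0", "rogerbot"):
    resp = requests.get(URL, headers={"User-Agent": ua}, allow_redirects=True, timeout=30)
    print(f"{ua:12} -> HTTP {resp.status_code} (final URL after redirects: {resp.url})")
```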
Moz Bar | | inlinear0 -
"Avoid Keyword Self-Cannibalization" - can't find the problem
Hi, I understand what this means (or at least I think I do!), but I can't find where the problem lies. The keyword is "fire warden training" and the url is http://www.tutis-fire.co.uk/fire-warden-training-courses/ If anyone could lend a helping hand, I'd appreciate it.
Moz Bar | | Gordon_Hall0 -
Blocked Production Site from Search Engines - How to get it Crawled by Moz Crawler
I have an 'under development' site hosted (an exact replica of the live site, used for adding new functionality and modules), but it's password protected, disallowed in robots.txt, and marked noindex on all pages, so that Googlebot and other search engines cannot crawl it. The development work is now about 95% complete, and I'd like to crawl the site with Moz's Rogerbot to see the errors and all the URLs it indexes. What's the best way to get the Moz bot to crawl the site while continuing to block access for search engines? I have gone through https://support.google.com/webmasters/answer/93708?hl=en, which says:
a) Save it in a password-protected directory. Googlebot and other spiders won't be able to access the content. But this way the Moz bot will also not be able to crawl the site.
b) Use a robots.txt file to control access to files and directories on your server. However, it also notes that even if you use a robots.txt file to block spiders from crawling content on your site, Google could discover it in other ways and add it to its index.
c) Use a noindex meta tag to prevent content from appearing in search results. It also says a link to the page can still appear in search results, and because Google has to crawl the page to see the noindex tag, there's a small chance Googlebot won't see and respect it.
Password protection thus seems the best way to keep blocking search engines. However, continuing with it will also block the Moz bot from crawling the site. Any suggestions? Thanks
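One pattern sometimes used for this situation (a sketch only, not official Moz guidance, so it's worth verifying against the current Rogerbot documentation) is a robots.txt that blocks all crawlers by default but gives Rogerbot its own, more specific group:

```
# Hypothetical robots.txt sketch: block compliant crawlers by default,
# but allow Moz's Rogerbot via its own user-agent group.
User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /
```

Because the most specific matching group applies, Rogerbot would be allowed while other compliant bots stay blocked. The password protection would still have to come off for Rogerbot to fetch anything, and, as the Google help page cited above notes, robots.txt alone doesn't guarantee the URLs stay out of Google's index.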
Moz Bar | | Modi0