Crawl Diagnostics | Starter crawl has taken 14 hours so far
-
We started a starter crawl 14 hours ago and it's still going. Can anyone help with why this is taking so long, when the interface says '2 hrs'?
Thanks,
Rory
-
Hi Rory. Most of our help desk is on holiday today, since it's the Fourth of July in the States. We do have a record of your ticket, and of one other person who is having a slow starter crawl, and a help desk specialist is looking into this now. Sorry for the delay.
Keri
-
I've asked but not heard back yet; I think I'll wait to hear.
Thanks for your help, appreciate it.
-
Send an email to help (at) seomoz.org for someone to have a look.
-
It's a fairly big site, but it does say:
'To get you started quickly Roger is crawling up to 250 pages on your site. You should see these results within two hours. The full crawl will complete within 7 days.'
There's no option to do anything else, like cancel, reset, etc. It just says 'Starter crawl in progress'. It's been 16 hours now, which is a bit frustrating as I needed to send this through to a client this morning. Anyone from SEOmoz around to look into this?
-
And here is how you reset the crawl:
1. On your webserver, edit the robots.txt file.
2. Block the seomoz bot from crawling the site by blocking its access to the root.
You can do so by adding the following lines:
User-agent: rogerbot
Disallow: /
This would end the crawl session.
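Before editing anything on the server, you can sanity-check that those two lines really do shut rogerbot out while leaving other crawlers alone. A minimal sketch using only Python's standard library; `example.com` stands in for your own domain:

```python
# Verify the robots.txt rules above: rogerbot should be blocked
# site-wide, while other user agents remain unaffected.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: rogerbot",
    "Disallow: /",
]

rp = RobotFileParser()
rp.parse(rules)

# rogerbot is blocked from the whole site...
print(rp.can_fetch("rogerbot", "https://example.com/any/page"))   # False
# ...while other crawlers (e.g. Googlebot) are unaffected.
print(rp.can_fetch("Googlebot", "https://example.com/any/page"))  # True
```

This is the same parsing logic a well-behaved crawler applies, so if the check passes here, rogerbot should stop requesting pages on its next robots.txt fetch.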
But before you do this, it may be a good idea to check whether your site really does have a lot of content and outgoing links.
-
Rory,
What is the sub-domain that you are crawling? It may just be that there is a lot of content to crawl.
-
How would I reset the crawl? I don't appear to have an option to.
-
Rory,
I would guess that this crawl session has hung up; it would be a good idea to start a new session. The session could have been abandoned midway due to a server-side issue on your website, or a temporary drop in the connection between the API server and your website's server.
Related Questions
-
Shopify crawl issues
Hi Moz'ers, I am a total newcomer to this level of SEO. Recently I transitioned to Shopify and I'm puzzled by why I'm getting 803 errors (incomplete crawl attempts due to the server timing out). Wouldn't this have to do with Shopify? How would I go about fixing it? I'm also getting 804 (SSL) issues, but I assume those will go away. Any advice? Thanks! Sharon
Moz Pro | Sharon2016 | www.ZeldasSong.com
-
Site Crawl Error
In the Moz crawl errors this message appears: 'Most common issues: 1. Search Engine Blocked by robots.txt (Error Code 612: Error response for robots.txt)'. I asked the help staff, but they crawled again and nothing changed. There's only a robots.XML (not TXT) in the root of my webpage; it contains:
User-agent: *
Allow: /
Allow: /sitemap.htm
Can anyone please help me? Thank you
Moz Pro | nopsts
-
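One detail worth noting about the question above: crawlers request the file `/robots.txt` by that exact name, so a file saved as robots.xml is never read, and from the crawler's point of view the site has no rules at all (which, for robots.txt, means everything is allowed). A small stdlib sketch of what the crawler effectively parses in that situation; the rule lines are taken from the question, `example.com` is a placeholder:

```python
# What the crawler sees when only robots.xml exists: it requests
# /robots.txt, finds nothing usable, and falls back to "no rules".
from urllib.robotparser import RobotFileParser

# What the site *meant* to serve (the content of the misnamed robots.xml):
intended = ["User-agent: *", "Allow: /", "Allow: /sitemap.htm"]

# What the crawler *actually* gets to parse: nothing.
seen = []

rp = RobotFileParser()
rp.parse(seen)
print(rp.can_fetch("rogerbot", "https://example.com/"))  # True (no rules)
```

So the 612 error is about the HTTP response for `/robots.txt` itself, not about the directives; renaming the file to robots.txt would be the first thing to try.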
Crawl diagnostics incorrectly reporting duplicate page titles
Hi guys, I have a question regarding the duplicate page titles being reported in my crawl diagnostics. It appears that the URL parameter "?ctm" is causing the crawler to think that duplicate pages exist. In GWT, we've specified to use the representative URL when that parameter is used. It appears to be working, since when I search site:http://www.causes.com/about?ctm=home, I am served a single search result for www.causes.com/about. That begs the question: why is the SEOmoz crawler reporting duplicate page titles when Google isn't (nothing appears under HTML Improvements for duplicate page titles)? A canonical URL is not used for this page, so I'm assuming that may be one reason why. The only other thing I can think of is that Google's crawler is simply "smarter" than the Moz crawler (no offense, you guys put out an awesome product!). Any help is greatly appreciated, and I'm looking forward to being an active participant in the Q&A community! Cheers, Brad
Moz Pro | brad_dubs
-
Where has the old seomoz crawl tool gone? I can't seem to find it
I'm looking for the (SEO)moz crawl tool - but can't find it. Where has it gone?
Moz Pro | SearchMotion
-
Crawl Errors and Notices drop to zero
Hi all, After setting up a campaign in Moz the crawl is successful and it showed the Errors and Warnings in crawl diagnostics (each one had about 40-50), but after a few days the number dropped to zero. Only the "notices" seems to stay normal, with a slight drop since the campaign set up, but not dropping to zero. I set this campaign up in a colleague's account and the same thing happened shortly after set up. I didn't find any Q&A already posted so any insight is appreciated!
Moz Pro | Vanessa12
-
Recent SEOMoz Crawl = Strange Results
Did anyone else get some really strange results in their weekly crawls this week with the campaign tool? Either my rankings skyrocketed across three different sites or the tool is busted. Something to the tune of going from 4 pages ranking in the top 30 to 15-16 pages ranking in the top 30. I'd love to find out it's just all the hard work paying off, but I'm worried it's the latter. Regards - Kyle
Moz Pro | kchandler
-
Dynamic URL pages in Crawl Diagnostics
The crawl diagnostics have found errors for pages that do not exist within the site. These pages do not appear in the SERPs and are seemingly dynamic URL pages. Most of the URLs that appear are formatted http://mysite.com/keyword,%20_keyword_,%20key_word_/ which appear as dynamic URLs for potential search phrases within the site. The other popular variety among these pages has a URL format of http://mysite.com/tag/keyword/filename.xml?sort=filter which is only generated by a filter utility on the site. These pages account for about 90% of the 401 errors, duplicate page content/titles, overly-dynamic URLs, missing meta description tags, etc. Many of the same pages appear under multiple errors/warnings/notices categories. So why are these pages ending up in the crawl test, and how do I stop it so I can get a better analysis of my site via SEOmoz?
Moz Pro | Visually
-
How long does a crawl take?
A crawl of my site started on the 8th of July and is still going. Is there something wrong?
Moz Pro | Brian_Worger