Why is my crawl STILL in progress?
-
I'm a bit new here, but we've had a few crawls done already. They are always finished by Wednesday night.
Our website is not large (by any means), but the crawl still says it's in progress now 3 days later.
What's the deal here?!?
-
Hi there!
Thanks for writing in! I checked your account and it seems that the crawl has finished. Various factors can affect the speed at which Roger checks out your site. He usually tries to be as efficient as possible and not take the scenic route, but every once in a while he gets stuck trying to resolve something on the site (a crossword puzzle, maybe), which can cause the crawl to last a little longer. We're constantly tracking how long crawls take for our customers and working to make Roger faster and more efficient!
If your next crawl takes more than a day (24 hours), I'd recommend sending us a ticket at help@seomoz.org. Thanks again for writing in!
Best,
Peter
SEOmoz Help Team
-
Cool, thanks Andrea. Glad it's not just me...
-
I've had mine take several days and Moz has told me that's not unexpected (even if it hasn't happened to you before).
If it goes much longer (say, past today), contact their help desk. It's possible something got stuck (it's happened to me), and they have a great help team who will get you taken care of. They are very responsive and approachable.
Related Questions
-
Crawl diagnostics incorrectly reporting duplicate page titles
Hi guys, I have a question regarding the duplicate page titles being reported in my crawl diagnostics. It appears that the URL parameter "?ctm" is causing the crawler to think that duplicate pages exist. In GWT, we've specified to use the representative URL when that parameter is used. It appears to be working, since when I search site:http://www.causes.com/about?ctm=home, I am served a single search result for www.causes.com/about. That raises the question: why is the SEOmoz crawler reporting duplicate page titles when Google isn't (the page doesn't appear under HTML Improvements for duplicate page titles)? A canonical URL is not used for this page, so I'm assuming that may be one reason why. The only other thing I can think of is that Google's crawler is simply "smarter" than the Moz crawler (no offense, you guys put out an awesome product!). Any help is greatly appreciated, and I'm looking forward to being an active participant in the Q&A community! Cheers, Brad
Moz Pro | | brad_dubs0 -
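Since the question mentions that no canonical URL is set, one common fix is a rel=canonical tag in the head of the page (a sketch, assuming the parameter-free URL is the preferred version — the thread doesn't confirm this site's setup). It tells crawlers like Rogerbot to treat all ?ctm variants as one page:

```html
<!-- In the <head> of /about (served for any /about?ctm=... variant).
     The href is assumed from the URLs quoted in the question. -->
<link rel="canonical" href="http://www.causes.com/about" />
```

Note that GWT's URL parameter settings only influence Google's own crawler, which would explain why Rogerbot still sees the variants as duplicates.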
Campaign Crawl
I have a site with 8,036 pages in my sitemap index, but Mozbot only crawled 2,169 pages. It's been several months, and each week it crawls roughly the same number of pages. Any idea why I'm not getting fully crawled?
Moz Pro | | JMFieldMarketing0 -
Is there an easy way to see what pages are crawled?
Hello! Like the question says... is there an easy way to see which pages are crawled? I don't mean the ones that have issues, but just the ones that have been crawled? Regards,
Moz Pro | | MattDG0 -
Crawl Diagnostics Warnings - Duplicate Content
Hi All, I am getting a lot of warnings about duplicate page content. The pages are normally 'tag' pages. I have some news stories or blog posts tagged with multiple 'tags'. Should I ask google not to index the tag pages? Does it really affect my site? Thanks
Moz Pro | | skehoe0 -
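A common approach for thin tag archives (a sketch — whether it fits depends on whether the tag pages attract search traffic) is a robots meta tag that keeps them out of the index while still letting crawlers follow the links on them:

```html
<!-- Placed in the <head> of each tag page -->
<meta name="robots" content="noindex, follow">
```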
Can Google see all the pages that an seomoz crawl picks up?
Hi there My client's site is showing around 90 pages indexed in Google. The seomoz crawl is returning 1934 pages. Many of the pages in the crawl are duplicates, but there are also pages which are behind the user login. Is it theoretically correct to say that if a seomoz crawl finds all the pages, then Google has the potential to as well, even if they choose not to index? Or would Google not see the pages behind the login? And how come seomoz can see the pages? Many thanks in anticipation! Wendy
Moz Pro | | Chammy0 -
A question about Mozbot and a recent crawl on our website.
Hi All, Rogerbot has been reporting errors on our websites for over a year now, and we correct the issues as soon as they are reported. However, I have 2 questions regarding the recent crawl report we got on the 8th.
1) Pages with a "noindex" tag are being crawled by Roger and are being reported as duplicate page content errors. I can ignore these as Google doesn't see these pages, but surely Roger should respect "noindex" instructions as well? Also, these errors won't go away in our campaign until Roger ignores the URLs.
2) What bugs me most is that resource pages that have been around for about 6 months have only just been reported as duplicate content. Our weekly crawls have never picked up these resource pages as a problem, so why now all of a sudden? (Makes me wonder how extensive each crawl is?)
Anyone else had a similar problem? Regards GREG
Moz Pro | | AndreVanKets0 -
SEOMoz's Crawl Diagnostics showing an error where the Title is missing on our Sitemap.xml file?
Hi Everyone, I'm working on our website Sky Candle and I've been running it as a campaign in SEOmoz. I've corrected a few errors we had with the site previously, but today it's recrawled and found a new error which is a missing Title tag on the sitemap.xml file. Is this a little glitch in the SEOmoz system? Or do I need to add a page title and meta description to my XML file. http://www.skycandle.co.uk/sitemap.xml Any help would be greatly appreciated. I didn't think I'd need to add this. Kind Regards Lewis
Moz Pro | | LewisSellers0 -
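For reference, the sitemaps.org protocol has no title element at all, so a missing-title warning on sitemap.xml itself is almost certainly a crawler glitch rather than something to fix in the file. A minimal valid sitemap (URL taken from the question) looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.skycandle.co.uk/</loc>
  </url>
</urlset>
```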
Why is Roger crawling pages that are disallowed in my robots.txt file?
I have specified the following in my robots.txt file: Disallow: /catalog/product_compare/ Yet Roger is crawling these pages = 1,357 errors. Is this a bug, or am I missing something in my robots.txt file? Here's one of the URLs that Roger pulled: example.com/catalog/product_compare/add/product/19241/uenc/aHR0cDovL2ZyZXNocHJvZHVjZWNsb3RoZXMuY29tL3RvcHMvYWxsLXRvcHM_cD02/ Please let me know if my problem is in robots.txt or if Roger spaced this one. Thanks!
Moz Pro | | MeltButterySpread0
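One thing worth checking (a guess — the full robots.txt isn't shown in the question) is that the Disallow line sits inside a User-agent group. A Disallow rule with no preceding User-agent record is invalid, and crawlers may ignore it:

```
# Applies to all crawlers, including Rogerbot
User-agent: *
Disallow: /catalog/product_compare/
```

Rogerbot can also be targeted specifically with its own `User-agent: rogerbot` group if the rule should apply only to Moz's crawler.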