Crawl Diagnostics only crawling 250 pages, not 10,000
-
My Crawl Diagnostics report has suddenly dropped from 10,000 pages to just 250. I've been tracking and working on an ecommerce website with 102,000 pages (www.heatingreplacementparts.co.uk), and its history was showing some great improvements. Suddenly the report today is showing only 250 pages! What has happened? Not only is this frustrating to work with, as I was chipping away at the errors and warnings, but my graphs for reporting to my client are now all screwed up. I have a Pro plan and nothing has (or should have!) changed.
-
Hey Scott,
I just checked out your campaigns and everything looks good right now. We're really sorry about any inconvenience this may have caused. Let me update you on what happened and what we've done to make sure it doesn't happen again.
Over the weekend, our server hosting provider experienced temporary power outages that lasted a few hours. Some of our databases containing user membership status went offline, and our crawlers assumed the affected campaigns had been archived; when the database servers came back online, the crawlers thought the campaigns had been unarchived.
In the past, our practice has been to kick off a 250-page starter crawl when a campaign is unarchived and to schedule the full crawl for 7 days out, much like what happens when you first create a campaign. (Your campaign would still have received a full crawl at its next scheduled time.) This isn't ideal for a few reasons, though: one is a scenario like what happened over the weekend, and two, it can skew your historical data by leaving a 250-page crawl stuck in the middle, even when the archiving was intentional.
Moving forward, we will be implementing a change so that when you unarchive a campaign your full crawl is scheduled and you won't receive a starter crawl. If you need more immediate crawl data, I recommend using our crawl test tool, which can crawl up to 3,000 pages. The only difference is that the results come as a CSV file, without the pretty web interface.
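Once you have that CSV downloaded, a few lines of Python can summarize it. Here's a minimal sketch; the file name and the "HTTP Status Code" column name are assumptions, so check your own export's header row for the real names:

```python
# A minimal sketch for summarizing a crawl test export.
# "crawl_test.csv" and the "HTTP Status Code" column name are assumptions;
# check the header row of your own download for the real column names.
import csv
from collections import Counter

status_counts = Counter()
with open("crawl_test.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        status_counts[row["HTTP Status Code"]] += 1

# Print the most common response codes first.
for status, count in status_counts.most_common():
    print(f"{status}: {count} pages")
```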
Let me know if you have any additional questions. Also, in the future, if you experience any issues with your service, go ahead and let our support team know: you can generate a help ticket quite easily at seomoz.org/help. Once you open a ticket, our Help Team will keep you up to date on any issues with your account and work with you to resolve them as quickly as possible.
Again, my sincere apologies for this issue with your crawl.
Have a great day!
Kenny
-
Many thanks, Keri.
-
Hi Scott,
We have rolled out a fix for this! I'm waiting to hear how long it will take to get through the backlog of crawls, but did want to let you know that your campaign is being worked on.
Keri
-
Thanks Keri. If you could please keep me informed, that will help me explain this to my clients.
Regards,
Scott.
-
I think we've had a bug, Scott. A couple of SEOmoz staff also got emails saying that the starter crawl had finished. We're looking into this to figure out what has happened, and we really apologize. I'm assigning this to the help desk, and they'll comment when we have some more information.
-
If you ran the crawler today, then yes: SEOmoz runs a 250-page starter crawl by default, and the crawler then takes up to 7 days to scan all of your website's pages.
Related Questions
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content issues. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. Among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of; basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | | Blacktie
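For anyone wanting to sanity-check patterns like the ones above: Python's built-in urllib.robotparser follows the original robots.txt spec and doesn't understand * wildcards, but you can translate a wildcard Disallow pattern into a regex by hand. A standalone sketch (it ignores the optional $ end-of-URL anchor some crawlers support):

```python
import re

def robots_pattern_matches(pattern: str, path_and_query: str) -> bool:
    """Translate a robots.txt Disallow pattern with * wildcards into a
    regex anchored at the start of the path, then test the URL against it.
    (A sketch only; it ignores the optional $ end-of-URL anchor.)"""
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.search(regex, path_and_query) is not None

# Pages carrying the parameter should match; clean pages should not.
print(robots_pattern_matches("/*numberOfStars=0", "/shoes?numberOfStars=0&page=2"))  # True
print(robots_pattern_matches("/*numberOfStars=0", "/shoes?color=red"))               # False
```
-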
Aren't domain.com/page and domain.com/page/ the same thing?
Hi All, A recent Moz scan has turned up quite a few duplicate content notifications, all of which have the same issue. For instance: domain.com/page and domain.com/page/ are listed as duplicates, but I was under the impression that these pages would, in fact, be the same page. Is this even something to bother fixing, or a fluke scan? If I should fix it, does anyone know of an .htaccess modification that might be used? Thanks!
Moz Pro | | G2W
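A quick way to check whether this needs fixing is to see whether one variant already 301-redirects to the other. A sketch using only the standard library (domain.com and /page are the placeholders from the question):

```python
# A sketch: request both trailing-slash variants without following
# redirects, and print the status and Location header for each.
# "domain.com" and "/page" are the placeholders from the question.
import http.client

for path in ("/page", "/page/"):
    conn = http.client.HTTPConnection("domain.com")
    conn.request("GET", path)
    resp = conn.getresponse()
    print(path, resp.status, resp.getheader("Location"))
    conn.close()
```

If both variants return 200 with no Location header, they really are two live URLs serving the same content, and a site-wide 301 redirect (often done in .htaccess) is the usual fix.
-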
Crawl Diagnostics: Next crawl date is in the past
Hi - I have quite a few crawl diagnostic errors and warnings. I have attempted to fix many of them, but noticed this note at the bottom of the crawl diagnostics chart: "Last Crawl Completed: Mar. 22nd, 2013 Next Crawl Starts: Mar. 29th, 2013". It looks like SEOMoz thinks the next crawl date is Mar 29th, 2013, which is two weeks ago. Is there any way to "force" the crawl and get it back on a regular schedule? This may have happened when my account was disabled because my credit card expired...Thoughts?
Moz Pro | | 6thirty
Duplicate page titles in SEOMoz
My on-page reports are showing a good number of duplicate title tags, but they are all because of a URL tracking parameter that tells us which link the visitor clicked on. For example, http://www.example.com/example-product.htm?ref=navside and http://www.example.com/example-product.htm are the same page, but are treated as two different URLs in SEOMoz. This inflates the number of duplicate page titles in my reports. It has not been a problem with Google, but SEOMoz treats them as duplicates and it's confusing my data. Is there a way to specify this as a URL parameter in the Moz software? Or does anybody have another suggestion? Should I specify this in GWT and BWT?
Moz Pro | | InetAll
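Until the software supports parameter handling, one workaround when post-processing an export yourself is to strip the tracking parameter before comparing URLs, so both variants collapse to one canonical form. A sketch (the ref parameter name comes from the question; TRACKING_PARAMS is a made-up name):

```python
# A sketch: remove known tracking parameters from a URL's query string
# so ?ref=... variants collapse to the same canonical URL.
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

TRACKING_PARAMS = {"ref"}  # hypothetical set; add any other tracking keys you use

def canonicalize(url: str) -> str:
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(canonicalize("http://www.example.com/example-product.htm?ref=navside"))
# -> http://www.example.com/example-product.htm
```
-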
Crawl slow again
Once again the weekly crawl on my site is very slow. I have around 441 pages in the crawl, and this has been running for over 12 hours. This last happened two weeks ago (it ran for over 48 hours). Last week's crawl was much quicker (not sure exactly how long, but I'm guessing an hour or so). Is this a known issue, and is there anything that can be done to unblock it? Weekends are the best time for me to assess and respond to changes I have made to my site, so having this (small) crawl take most of the weekend is really quite problematic. Thanks. Mark
Moz Pro | | MarkWill
On page optimisation tool issues
When viewing my campaign and looking at the on-page optimisation tool, I have a few issues. It seems to only show the keywords I want rankings for and how optimised my homepage is for those keywords. Is there any way I can get it to permanently analyse specific keywords for specific pages? My homepage isn't optimised for some of the keywords on my list, because I have optimised other pages for them, and since the tool is looking at my homepage it gets a really low grade, which looks really bad and frustrates me because I can't work this out. Any help greatly appreciated.
Moz Pro | | CompleteOffice
Crawl Diagnostics and missing meta tags on noindex blog pages
Hi Guys/Gals, We do love Crawl Diagnostics, but we find the missing meta tag warnings ("Missing Meta Description Tag" in this case) somewhat spammy. We use the "All in One SEO Pack" for our blog, and it sticks noindex,follow (as it should) on the pages that are of no use to us, such as "2008/04/page/2/" and the like. Maybe I'm wrong, but shouldn't the diagnostics tool respect the noindex tag and just skip those warnings? It should really mean that these pages are NOT included in the search index, making the other meta tags irrelevant. Any thoughts?
Moz Pro | | sfseo
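If you're filtering a crawl report yourself in the meantime, detecting the noindex directive on a page is straightforward with the standard library. A sketch:

```python
# A sketch: detect a noindex directive in a page's <meta name="robots"> tag,
# so warnings for deliberately noindexed pages can be filtered out of a report.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the directives from any <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives += [d.strip() for d in a.get("content", "").lower().split(",")]

def is_noindex(html: str) -> bool:
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives

print(is_noindex('<html><head><meta name="robots" content="noindex,follow"></head></html>'))  # True
```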