Crawl Diagnostics only crawling 250 pages, not 10,000
-
My Crawl Diagnostics report has suddenly dropped from 10,000 pages to just 250. I've been tracking and working on an ecommerce website with 102,000 pages (www.heatingreplacementparts.co.uk), and the history was showing some great improvements. Suddenly today's report shows only 250 pages! What has happened? Not only is this frustrating to work with, as I was chipping away at the errors and warnings, but my graphs for reporting to my client are now all screwed up. I have a Pro plan and nothing has (or should have!) changed.
-
Hey Scott,
I just checked your campaigns and everything looks good right now. We're really sorry for any inconvenience this may have caused. Let me explain what happened and what we've done to make sure it doesn't happen again.
Over the weekend our hosting provider experienced temporary power outages that lasted a few hours. Some of the databases that contain user membership status went offline, and our crawlers assumed the affected campaigns had been archived; when the database servers came back online, the crawlers treated those campaigns as newly unarchived.
In the past, our practice has been to kick off a 250-page starter crawl when a campaign is unarchived and to schedule the full crawl for seven days out; your campaign would still have received a full crawl on its next scheduled date. This mirrors what happens when you first create a campaign. It isn't ideal, though, for a couple of reasons: one is a scenario like what happened over the weekend, and the other is that a 250-page crawl stuck in the middle of your history skews your data, even when the archiving was intentional.
Moving forward, we're changing this so that when you unarchive a campaign, your full crawl is scheduled and you don't receive a starter crawl. If you need crawl data sooner, I recommend our crawl test tool, which can crawl up to 3,000 pages. The only difference is that the results come as a CSV file rather than through the web interface.
Let me know if you have any additional questions. Also, if you experience any issues with your service in the future, please let our support team know. You can generate a help ticket easily at seomoz.org/help; our Help Team will keep you up to date on any issues with your account and work with you to resolve them as quickly as possible.
Again, my sincere apologies for this issue with your crawl.
Have a great day!
Kenny
-
Many thanks Keri
-
Hi Scott,
We have rolled out a fix for this! I'm waiting to hear how long it will take to get through the backlog of crawls, but did want to let you know that your campaign is being worked on.
Keri
-
Thanks Keri. If you could please keep me informed that will help me to explain this to clients.
regards,
Scott.
-
I think we've had a bug, Scott. A couple of SEOmoz staff also got emails saying the starter crawl had finished. We're looking into what happened, and we really apologize. I'm assigning this to the help desk, and they'll comment when we have more information.
-
If you ran the crawler today, then yes: SEOmoz runs a 250-page starter crawl by default, and the full crawl of all your website's pages takes up to 7 days.
Related Questions
-
Blog article split into a few pages
Hi, I came across multiple websites where blog articles are split into multiple pages, and once you reach the end of a page you need to click 'Next' or '2/3' to continue reading. Is this a good practice? I understand it keeps readers clicking through multiple pages instead of reading the article on one page, but it also splits the strength of the targeted keyword between multiple pages. Can anyone advise? Thanks, Katarina
Moz Pro | Katarina-Borovska
I got an 803 error yesterday on the Moz crawl for most of my pages. The page loads normally in the browser. We are hosted on shopify
I got an 803 error yesterday on the Moz crawl for most of my pages. The pages load normally in the browser. We are hosted on Shopify; the URL is www.solester.com. Please help us out.
Moz Pro | vasishta
Duplicate Page
I just checked the crawl status and got a Duplicate Page Content error, as mentioned below:
Songs.pk | Download free mp3, Hindi Music, Indian Mp3 Songs (http://www.getmp3songspk.com)
Songs.pk | Download free mp3, Hindi Music, Indian Mp3 Songs (http://getmp3songspk.com)
I then added these lines to my .htaccess file:
RewriteBase /
RewriteCond %{HTTP_HOST} !^www.getmp3songspk.com$ [NC]
RewriteRule ^(.*)$ http://www.getmp3songspk.com/$1 [L,R=301]
But I still see that error when I run a new test crawl.
Moz Pro | Getmp3songspk
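The rewrite rules in the question are essentially the standard non-www-to-www canonicalization. As a sketch (not a guaranteed fix for this site), a slightly tightened version escapes the dots so the hostname comparison is literal; note that the Moz duplicate-content error only clears on the next scheduled crawl, so a correct redirect can still appear as an error until the report refreshes:

```apache
# Sketch: redirect the bare domain to www with a permanent 301.
# Dots are escaped so "." matches a literal dot rather than any character.
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} !^www\.getmp3songspk\.com$ [NC]
RewriteRule ^(.*)$ http://www.getmp3songspk.com/$1 [L,R=301]
```

The [L,R=301] flags stop rule processing and send a permanent redirect, which is what tells crawlers to consolidate the two hostnames.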
How to use Crawl Report for a test directory
I have a client's new site set up in a folder that is not linked to from the main site. When setting up the Crawl Report, I put in the starting url for the new folder, http://oldsite.com/new/start.php. The Crawl Report came back with a crawl of the current site instead. How do folks run the crawl report to test sites before they are public? Thanks!
Moz Pro | SWDDM
Find pages containing broken links.
Hi everyone, for each internal broken link I need to find all the pages that contain it. In the SEOmoz report there is only one referrer link per broken link, but Google Webmaster Tools indicates that the dead link is present on many pages of the site. Is there a way to get this data from SEOmoz or other software as a CSV report? Thanks
Moz Pro | wwmind
Duplicate Content being caused by home page?
Hello everyone, I am new to SEOmoz and SEO in general, and I have a quick question. When running an SEO web crawler report on my URL, I noticed that my home page (also known as my index page) was listed twice. Here is what the report showed: www.example.com/ and www.example.com/index.php. So are these two different URLs? If so, is this considered duplicate content, and should I block crawler access to index.php? Thanks in advance for the help!
Moz Pro | threebiz
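Rather than blocking the crawler, a common approach is a 301 redirect from /index.php to the root, so both visitors and crawlers only ever see one home-page URL. A minimal sketch in .htaccess (assuming Apache with mod_rewrite, and that / is internally served by index.php via DirectoryIndex):

```apache
# Sketch: 301 /index.php to / so crawlers see a single home-page URL.
# THE_REQUEST matches the raw client request line (e.g. "GET /index.php HTTP/1.1"),
# so this only fires when the visitor explicitly asked for index.php,
# avoiding a redirect loop when Apache internally rewrites / to index.php.
RewriteEngine On
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.php[?\ ] [NC]
RewriteRule ^index\.php$ / [L,R=301]
```

The THE_REQUEST condition is the key design choice here: matching REQUEST_URI alone would also match Apache's internal DirectoryIndex rewrite and loop forever.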
Fixing the Too Many On-Page Links
In our campaign I see a report that some of our pages have too many on-page links, but I think most of the links seen by MozBot relate to our images. There are a lot of images on our site, and at the same time we support 11 languages, which adds additional links. One of the pages with a lot of links is www.florahospitality.com/dining.aspx. What can you suggest to fix this? Thanks.
Moz Pro | shebinhassan