Third crawl of my sites back to 250 pages
-
Hi all,
I've been waiting a few days for the third crawl of my sites, but SEOmoz only crawled 277 pages. The following line appeared in my crawl report:
Pages Crawled: 277 | Limit: 250
My last two crawls had a limit of about 10K. Any ideas?
Kind regards, Simon.
-
Hey Simon,
I just checked out your campaigns and everything looks good right now. We are really sorry about any inconvenience this may have caused. Let me update you on what happened and what we have done to make sure it doesn't happen in the future.
Over the weekend our server hosting provider experienced temporary power outages that lasted a few hours. Some of the databases that contain user membership status went offline, so our crawlers assumed the affected campaigns had been archived; when the database servers came back online, the crawlers treated those campaigns as newly unarchived.
In the past, our practice has been to kick off a 250-page starter crawl when a campaign is unarchived and schedule the full crawl for seven days out, much like what happens when you first create a campaign. Your campaign would still have received a full crawl on its next scheduled date. This isn't ideal for a couple of reasons: a scenario like the one over the weekend can trigger it unintentionally, and even when archiving is intentional, a 250-page crawl stuck in the middle of your history skews your historical data.
Moving forward, we are changing this so that when you unarchive a campaign, your full crawl is scheduled and you don't receive a starter crawl. If you need crawl data sooner, I recommend using our crawl test tool, which can crawl up to 3,000 pages. The only difference is that the results come as a CSV file rather than through the web interface.
Let me know if you have any additional questions. Also, if you experience any issues with your service in the future, go ahead and let our support team know. At seomoz.org/help you can generate a help ticket quite easily; our Help Team will keep you up to date on any issues with your account and work with you to resolve them as quickly as possible.
Again, my sincere apologies for this issue with your crawl.
Have a great day!
Kenny
-
We have some type of bug that's doing this, and we're working on tracking it down now. Thanks for emailing help, as that will help us figure things out faster. Sorry for the error!
-
Thanks Brendan, will do.
Kind regards, Simon.
-
Hi Simon,
Your best bet would be to email the SEOmoz help team at help@seomoz.org. They usually reply pretty quickly and will get you sorted much faster than posting here.
Cheers,
Related Questions
-
Unsolved: Moz can't crawl my site
Moz is being blocked from crawling the following site - https://www.cleanchain.com. Looking at robots.txt, the following rules disallow access, but I don't know whether they are what is preventing Moz from crawling:
User-agent: *
Disallow: /adeci/
Disallow: /core/
Disallow: /connectors/
Disallow: /assets/components/
Could something else be preventing the crawl?
Moz Pro | danhart2020
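Whether rules like the ones quoted above block a particular crawler can be checked directly with Python's standard-library robots.txt parser. A minimal sketch — "rogerbot" is Moz's crawler user agent, and the rules are copied from the question:

```python
# Check whether a robots.txt actually blocks a given crawler.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /adeci/
Disallow: /core/
Disallow: /connectors/
Disallow: /assets/components/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Disallowed paths apply to every user agent, including rogerbot.
print(parser.can_fetch("rogerbot", "https://www.cleanchain.com/adeci/"))  # False
# Everything else, including the home page, remains crawlable.
print(parser.can_fetch("rogerbot", "https://www.cleanchain.com/"))        # True
```

If the root and ordinary product/content pages come back `True` here, the robots.txt is not what's blocking the crawl, and a server-side block (firewall, user-agent filtering) is the more likely culprit.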
Why did Moz crawl our development site?
In our Moz Pro account we have one campaign set up to track our main domain. This week Moz threw up around 400 new crawl errors, 99% of which were meta noindex issues. What happened was that somehow Moz found the development/staging site and decided to crawl that. I have no idea how it was able to do this - the robots.txt is set to disallow all and there is password protection on the site. It looks like Moz ignored the robots.txt, but I still don't have any idea how it was able to do a crawl - it should have received a 401 Forbidden and not gone any further. How do I a) clean this up without going through and manually ignoring each issue, and b) stop this from happening again? Thanks!
Moz Pro | MultiTimeMachine
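One way to audit a cleanup like this is to script a check for the robots noindex tag across the flagged URLs rather than clicking through each issue. A minimal stdlib sketch — the HTML strings here are hypothetical stand-ins for fetched pages, and a properly protected staging site should return a 401 before any HTML is served at all:

```python
# Detect a <meta name="robots" content="...noindex..."> tag in an HTML page.
from html.parser import HTMLParser

class NoindexFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        # Meta robots tags look like <meta name="robots" content="noindex,follow">
        if tag == "meta":
            d = {name: (value or "") for name, value in attrs}
            if d.get("name", "").lower() == "robots" and "noindex" in d.get("content", "").lower():
                self.noindex = True

def has_noindex(html: str) -> bool:
    finder = NoindexFinder()
    finder.feed(html)
    return finder.noindex

print(has_noindex('<head><meta name="robots" content="noindex,follow"></head>'))  # True
print(has_noindex('<head><meta name="robots" content="index,follow"></head>'))    # False
```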
Why is my Moz report only crawling 1 page?
Just got this week's Moz report and it states that it has only crawled: Pages Crawled: 1 | Limit: 10,000. It was over 1,000 a couple of weeks ago. We have moved servers recently, but is there anything I have done wrong here? indigocarhire.co.uk. Thanks.
Moz Pro | RGOnline
Duplicate content pages
Crawl Diagnostics Summary shows around 15,000 duplicate content errors for one of my projects. It lists each page with a count of how many duplicate pages it has, but I don't have a way of seeing what the duplicate page URLs are for a specific page without clicking on each page link and checking manually, which is going to take forever to sort. When I export the list as CSV, the duplicate_page_content column doesn't show any data. Can anyone please advise on this? Thanks.
Moz Pro | nam2
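Until the export column is fixed, one workaround is to fetch the flagged pages yourself and group their URLs by a hash of the body text, so each duplicate set is visible at a glance. A rough sketch — the URLs and bodies below are made-up stand-ins, not the real Moz export format:

```python
# Group pages by a hash of their (normalized) body text to find duplicate sets.
import hashlib
from collections import defaultdict

# Stand-in for (url, body_text) pairs pulled from your own fetch of the flagged pages.
pages = [
    ("https://example.com/a", "same body"),
    ("https://example.com/b", "same body"),
    ("https://example.com/c", "unique body"),
]

groups = defaultdict(list)
for url, body in pages:
    digest = hashlib.sha1(body.strip().lower().encode("utf-8")).hexdigest()
    groups[digest].append(url)

# Any group with more than one URL is a duplicate set.
duplicate_sets = [urls for urls in groups.values() if len(urls) > 1]
print(duplicate_sets)  # [['https://example.com/a', 'https://example.com/b']]
```

Note this only catches exact (after normalization) duplicates; a crawler's duplicate-content detection is usually fuzzier than a straight hash.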
Initiate crawl
Any way to start the crawl of a site immediately after changes have been made, or must you wait for the next scheduled crawl? Thanks.
Moz Pro | dave_whatsthebigidea.com
On-page links tool here at SEOmoz
Hi SEOmoz - first of all, thanks for the best SEO tools I have ever worked with (this is my first question in this forum, and I just subscribed as a paying customer after the 30-day trial you offer). My question: after several weeks of working on getting the number of links in our forum on www.texaspoker.dk down, we are somewhat surprised to see that we didn't succeed. For instance, the SEOmoz tool tells us that this page: http://www.texaspoker.dk/forum/aktuelle-konkurrencer/coaching-projekt-bliver-du-den-udvalgte has 239 on-page links. Can this really be true? We can't find these links, and we actually did a lot to lower the number; for instance, each forum member's picture used to be a link, and there was a "go to top" link in every post. Thanks a lot.
Moz Pro | MPO
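Tools that report on-page links generally count every anchor tag in the raw HTML, including template links and links hidden or collapsed by CSS, which is why the number can be much higher than what you see in the browser. A rough way to count them yourself, using only the standard library (the HTML snippet is illustrative):

```python
# Count <a> tags that carry an href attribute, the way most link-count tools do.
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        # Only anchors with an href are links; <a name="..."> fragments are not.
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.count += 1

html = '<a href="/1">x</a><div><a href="/2">y</a></div><a name="anchor">no href</a>'
counter = LinkCounter()
counter.feed(html)
print(counter.count)  # 2
```

Running something like this over the saved page source of the forum thread should show exactly which anchors are being counted.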
About the rankings report in the Pro Dashboard, does it track the ranking of every page on a root domain, or just the home page or whichever page you set up the campaign with?
I noticed that one of the pages on my root domain has a #5 rank for a keyword, yet the ranking report says there are no results in the top 50, so I am assuming it is only tracking the home page. That is one thing I liked about Rank Tracker: it would find any page that was ranking on a root domain. Thanks, Lara
Moz Pro | larahill
How to crawl the whole domain?
Hi, I have an e-commerce website with more than 4,600 products, and I expect SEOmoz to check all of its URLs; I don't know why this doesn't happen. The campaign name is Artigos para festa and it should scan the whole domain festaexpress.com, but it crawls only 100 pages. I even tried to create a new campaign named Festa Express - Root Domain to check if it would scan, but had the same problem: it crawled only 199 pages. Hope to have a solution. Thanks,
Eduardo
Moz Pro | EduardoCoen