How long would an SEOmoz crawl usually take for a site with around 4,000 pages?
-
We are working through optimising a site for one of our clients, and the SEOmoz crawl progress says it has been running since the 8th of February. It's now almost a week later and it still hasn't finished.
The first run took a few days. Is there any way of restarting the process?
-
Hi Mark,
Each crawl report takes approximately 1 week. The crawl commences each week on the day that you originally started your campaign. So if you set up your campaign on a Monday, each week your new crawl would also start on Monday.
(The dates in the Campaign manager can be a little deceptive. Most dates refer to the date the crawl started, not the date it ended, which can make it look like the time between crawls is longer than it actually is.)
So you should be getting your new report in a day or two. If it takes longer than that, feel free to contact the help team at help@seomoz.org.
As a PRO member, you can also try the custom crawl, which covers 3,000 pages. It usually takes about 24 hours and returns your results in spreadsheet format.
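The weekly cadence described above is easy to sketch: crawls repeat every seven days from the campaign start date. The helper below is purely illustrative (next_crawl_date is not part of any SEOmoz API) and assumes the crawl dates in the question.

```python
from datetime import date, timedelta

def next_crawl_date(campaign_start, today):
    # Crawls repeat every 7 days from the campaign start, so the next
    # crawl lands on the first weekly anniversary still in the future.
    weeks_elapsed = (today - campaign_start).days // 7
    return campaign_start + timedelta(weeks=weeks_elapsed + 1)

# Campaign first crawled on 8 February; checking a week later,
# the next crawl is due the following day.
print(next_crawl_date(date(2011, 2, 8), date(2011, 2, 14)))  # 2011-02-15
```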
-
Hi Mark,
Usually you get one crawl per week. If the first report was on the 8th, the next one will probably come tomorrow (if it is set up as a campaign).
Gr.,
Istvan
Related Questions
-
On Page Grader is not working on a specific site
Hello. When I try to use On Page Grader on a specific site, I get this error message: "Page Optimization Error. There was a problem loading this page. Please make sure the page is loading properly and that our user-agent, rogerbot, is not blocked from accessing this page."
Example: https://www.livedigm.com. The site's robots.txt settings are fine, and I don't think anything is blocking the crawler, but On Page Grader cannot crawl the site. The campaign crawler works well on the same site; only On Page Grader fails. What should I change in my server or site settings so the site can be crawled? I'm using WordPress on a Cloudways / DigitalOcean (Singapore) server. Thank you.
Moz Pro | livedigm
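One quick way to rule out the robots.txt side is to replay the site's rules against rogerbot with Python's standard urllib.robotparser; note this will not catch server-level user-agent filtering (a firewall or bot-blocking rule on the host), which is another common cause of this error. The robots.txt content below is a made-up placeholder; paste in the site's real file.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; substitute the site's actual file.
robots_txt = """\
User-agent: *
Disallow: /wp-admin/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# rogerbot falls under the "User-agent: *" group here, so the homepage
# is allowed while /wp-admin/ is disallowed.
print(rp.can_fetch("rogerbot", "https://www.livedigm.com/"))            # True
print(rp.can_fetch("rogerbot", "https://www.livedigm.com/wp-admin/x"))  # False
```

If this check says rogerbot is allowed but the tool still errors, the block is likely happening at the server or CDN level rather than in robots.txt.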
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 pages with duplicate content. Most of them come from dynamically generated URLs that have specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate-content pages, and to stop it I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters, because among these 380 pages there are other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few related topics, but there is no clear answer on how to block only pages with specific URLs. So I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
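Rules like these can be dry-run before deploying. The original robots.txt standard has no wildcard support, but major crawlers (Google's, and reportedly Moz's bots) treat "*" as "match any run of characters", with the rule anchored at the start of the URL path. The sketch below implements just that extension as a regex to illustrate what such a Disallow line would and wouldn't match; it is not an official parser, and the /hotels paths are hypothetical examples.

```python
import re

def rule_matches(rule, url_path):
    # Google-style matching: the rule is anchored at the start of the
    # path, and each '*' matches any sequence of characters.
    pattern = re.escape(rule).replace(r"\*", ".*")
    return re.match(pattern, url_path) is not None

rule = "/*numberOfStars=0"

print(rule_matches(rule, "/hotels?numberOfStars=0"))         # True  -> blocked
print(rule_matches(rule, "/hotels?numberOfStars=0&page=2"))  # True  -> blocked
print(rule_matches(rule, "/hotels?numberOfStars=3"))         # False -> crawled
print(rule_matches(rule, "/hotels"))                         # False -> crawled
```

Under this interpretation, only URLs containing numberOfStars=0 are blocked and everything else stays crawlable, which is the behaviour the question is after.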
-
Is it normal for Moz to report on nofollow pages in crawl diagnostics?
I have a dev version of my website, for example devwww.website.com. The .htaccess sends a noindex and nofollow directive, but I still got crawl issues reported for these pages in my Moz report. Does this mean I don't have the development site hidden from search like I thought I did?
Moz Pro | houstonbrooke
-
Duplicate page title
Hello, my page has this, although the SEOmoz crawl says these pages have duplicate titles. If my blog has 25 pages, then according to SEOmoz I have 25 duplicate titles. Can someone tell me if this is correct, or whether the SEOmoz crawl cannot recognize rel="next"? Is there a better way to tell Google that pages generated from the blog share the same title? Should I ignore these SEOmoz errors? Thank you,
Moz Pro | maestrosonrisas
-
How does SEOmoz pull its duplicate page title and content information?
I ask because I am getting errors based on URLs that do not even exist on our site. For example, http://www.robots.com/applications/abb/panasonic/robots does not exist on our site, but somehow it is listed in the error section of the page title duplication tool. http://www.robots.com/applications/ exists, but there is no way to get to an ABB or a Panasonic robot from this page, not to mention an ABB/Panasonic one (which certainly does not exist). We have quite a few of these out there, and I'm just wondering how to find out where the links are coming from. When we checked our URLs with Integrity, links like the one listed above (we had 29 of them listed) do not show up. Thoughts? Thanks! Janelle
Moz Pro | jwanner
-
Is SEOmoz rogerbot crawling subdomains only by links, or also by IDs?
I'm new to SEOmoz and just set up my first campaign. After the first crawl I got quite a few 404 errors due to deleted (spammy) forum threads. I was sure there are no links to these deleted threads, so my question is whether SEOmoz's rogerbot crawls my subdomains only by following links, or also by IDs (the forum thread IDs are serially numbered from 1 to x). If rogerbot does crawl serially numbered IDs, do I have to be concerned about the 404 errors on behalf of Googlebot as well?
Moz Pro | sauspiel
-
Archiving Campaigns in SEOmoz
First off, I love the campaign archive feature. Very useful for my purposes. My question is: is there a limit to how many campaigns I can archive? Thanks in advance!
Moz Pro | CollinJarman
-
Recent SEOmoz Crawl = Strange Results
Did anyone else get some really strange results in their weekly crawls this week with the campaign tool? Either my rankings skyrocketed across three different sites or the tool is busted. Something to the tune of going from 4 pages ranking in the top 30 to 15-16 pages ranking in the top 30. I'd love to find out it is just all the hard work paying off, but I am worried it is the latter. Regards, Kyle
Moz Pro | kchandler