How long is a full crawl?
-
It's now been over 3 days since the dashboard for one of our campaigns started showing "Next Crawl in Progress!".
I am not complaining about the length... but I have to admit that SEOmoz is quite addictive, and it's quite frustrating to see that message every day.
Thanks
-
Thanks for taking care of this!
-
Hey Julien,
While it can take a few days for a crawl to finish up, it does look like your site has been crawling for about 4 days, which is a little longer than usual. I'm really sorry about that! I'm going to create a ticket in our help desk under your email for the engineers to look into. You should get a confirmation email once I've submitted the ticket with a link to the ticket for your reference.
-Chiaryn
-
It all depends on the number of pages on the website. I would suspect that you will see results any day now.
Yes, SEOmoz is addictive!
Related Questions
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate content pages. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. Among these 380 pages there are some other pages with no parameters (or with different parameters) that I need to take care of, so I need to clean up this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few related topics, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0
User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | | Blacktie
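For what it's worth, a hedged sketch of how this is usually written: under the original robots.txt specification, groups are separated by a blank line, so including one between the two groups is the safer, more portable form. The * wildcard in the path is an extension rather than part of the original standard (Googlebot honors it; whether dotbot and rogerbot do is worth confirming against Moz's documentation):

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

Since both groups carry the same rule, an equivalent form lists both user-agents above a single shared rule block:

User-agent: dotbot
User-agent: rogerbot
Disallow: /*numberOfStars=0
-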
FollowerWonk: How long have I been following someone?
Is there a way within Followerwonk to find out how long you have been following someone, or indeed how long they have been following you? I have downloaded the Excel document from the "Sort Your Followers" tab, which has columns called "follows @name" and "@name follows", but these just give a last-checked date (if they do follow/are being followed) rather than the metric I want!
Moz Pro | | BBI_Brandboost
-
Is SEOmoz Crawl Diagnostics wrong here?
We've been getting a ton of critical errors (about 80,000) in SEOmoz's Crawl Diagnostics saying we have duplicate content on our client's e-commerce site. Some of the errors are correct, but a lot of the pages are variations like: www.example.com/productlist?page=1 and www.example.com/productlist?page=2. However, in our source code we have used rel="prev" and rel="next", so in my opinion we should be alright. Would love to hear from you whether we have made a mistake or whether it is an error in SEOmoz. Here's a full paste of the script:
Moz Pro | | Webdannmark
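For reference, a typical rel="prev"/rel="next" setup places link elements in the head of each paginated page; an illustrative sketch using the example URLs from the question, not the poster's actual markup:

<!-- in the <head> of www.example.com/productlist?page=2 -->
<link rel="prev" href="http://www.example.com/productlist?page=1">
<link rel="next" href="http://www.example.com/productlist?page=3">

Note that rel="prev"/"next" are hints, and not every crawler consolidates paginated variations the way Google did, which could explain why a third-party tool still flags them as duplicates.
-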
What tools can I use to crawl a site which uses a #! hashbang?
I have a site which was built in a way that uses the hashbang (#!). I am using 3 different SEO tools and they can't seem to crawl the website. What suggestions can you give me for dealing with the hashbang? Any ideas please. Thanks a lot for your help. Allan
Moz Pro | | AllanDuncan
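For context on why tools choke on these URLs: everything after the # is a fragment, which browsers never send to the server, so a plain crawler requesting the page sees only the application shell. Under Google's old AJAX crawling scheme (now deprecated), a #! URL was mapped to an _escaped_fragment_ query string that the server answered with a static snapshot; an illustrative mapping with a hypothetical URL:

http://example.com/#!/products
is requested by a scheme-aware crawler as
http://example.com/?_escaped_fragment_=/products

A tool that doesn't implement this mapping (or execute JavaScript) will not discover the hashbang pages at all.
-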
Settings to crawl entire site
Not sure what happened, but I started a third campaign yesterday and only 1 page was crawled. The other two campaigns have 472 and 10K pages respectively. What is the proper setting to choose at the beginning of campaign setup to have the entire site crawled? Not sure what I did differently; I must be reading the instructions incorrectly. Thanks, Don
Moz Pro | | NicheGuy21
-
Did anyone else see "Rel Canonical" drop to zero after their latest SEOmoz crawl?
In the Crawl Diagnostics section of the SEOmoz reports, we get errors in red, warnings in yellow, and notices in blue. After my latest crawl, I saw the "Rel Canonical" count go from about 300 down to 0. Obviously, this isn't right, so I'm wondering if this is a bug that everyone is experiencing.
Moz Pro | | UnderRugSwept
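For context, the "Rel Canonical" notice simply counts pages whose HTML head declares a canonical URL, along the lines of this illustrative snippet (hypothetical URL):

<link rel="canonical" href="https://www.example.com/some-page/">

A sudden drop to zero without a matching change in the site's templates points to a reporting problem rather than a real change on the site.
-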
Why does my crawl report show just one page result?
I just ran a crawl report on my site: http://dozoco.com. The resulting report shows just one page - the home page - and no other pages. The report doesn't indicate any errors or "do not follows", so I'm unclear on the issue, although I suspect user error - mine.
Moz Pro | | b1lyon
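One common cause of a one-page crawl (an assumption here, not a diagnosis of dozoco.com) is that the homepage's navigation is rendered by JavaScript or carries a nofollow, so a simple crawler finds no follow-able links in the raw HTML. A minimal sketch to check, assuming Python with the requests and beautifulsoup4 packages installed:

import requests
from bs4 import BeautifulSoup

# Fetch the raw HTML exactly as a simple crawler would (no JavaScript execution).
resp = requests.get("http://dozoco.com", timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

# A page-level nofollow would stop a crawler from following any links here.
meta_robots = soup.find("meta", attrs={"name": "robots"})
print("meta robots:", meta_robots.get("content") if meta_robots else "none")

# List the anchors actually present in the served HTML; if this comes back
# empty, the navigation is likely built client-side and invisible to crawlers.
links = [a.get("href") for a in soup.find_all("a", href=True)]
print(len(links), "links found")
for href in links[:20]:
    print(" ", href)
-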
The Site Explorer crawl shows errors for files/folders that do not exist.
I'm fairly certain there is ultimately something amiss on our server, but the Site Explorer report on my website (www.kpmginstitutes.com) is showing thousands of folders that do not exist. Example: for my "About Us" page (www.kpmginstitutes.com/about-us.aspx), the report shows a link: www.kpmginstitutes.com/rss/industries/404-institute/404-institute/about-us.aspx. We do have "rss", "industries", and "404-institute" folders, but they are parallel in the architecture, not nested as the error URL suggests. Has anyone else seen these types of errors in your Site Explorer reports?
Moz Pro | | dturkington
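A frequent cause of compounding paths like that (an assumption, not something confirmed for this site) is a relative href without a leading slash: it resolves against the current page's directory, so the same link served on a deeper page yields an ever deeper, bogus URL. A minimal sketch of the resolution rules using Python's standard urljoin:

from urllib.parse import urljoin

# A relative link (no leading slash) resolves against the current directory,
# reproducing the doubled "404-institute" segment from the report above.
base = "http://www.kpmginstitutes.com/rss/industries/404-institute/"
print(urljoin(base, "404-institute/about-us.aspx"))
# -> http://www.kpmginstitutes.com/rss/industries/404-institute/404-institute/about-us.aspx

# A root-relative link resolves the same way from every page on the site.
print(urljoin(base, "/about-us.aspx"))
# -> http://www.kpmginstitutes.com/about-us.aspx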