Crawl slow again
-
Once again the weekly crawl on my site is very slow. I have around 441 pages in the crawl and this has been running for over 12 hours. This last happened two weeks ago (ran for over 48 hours). Last week's crawl was much quicker (not sure exactly how long but guessing an hour or so).
Is this a known issue and is there anything that can be done to unblock it? Weekends are the best time for me to assess and respond to changes I have made to my site so having this (small) crawl take most of the weekend is really quite problematic.
Thanks.
Mark
-
Your best bet right now is to contact the SEOmoz help desk with an email to help@seomoz.org. They may be able (I'm not sure) to schedule your crawl to start on a different day of the week so it's ready for you by the weekend, and they should be able to answer your crawl-related questions in general.
-
Thank you for the response. I do not have a robots.txt file. I'm not following how the Google crawl rate affects this (but would love to learn why). I thought that setting only defines how often Google itself crawls my site and has nothing directly to do with SEOmoz. Also, the issue is not when the SEOmoz crawl starts (it starts on time each Saturday) but, rather, how long it takes to complete. For two of the last three weeks it has taken almost 48 hours from start to finish.
Thanks again.
Mark
-
Have you tried changing the crawl rate in Google Webmaster Tools? Also, make sure your robots.txt isn't specifying a crawl-delay.
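For reference, here is a minimal sketch (in Python, operating on hypothetical sample content) of what a crawl-delay directive looks like and how you could spot one in a robots.txt file:

```python
def find_crawl_delay(robots_txt: str):
    """Return the first Crawl-delay value found, or None."""
    for line in robots_txt.splitlines():
        # Directive names are case-insensitive; comments start with '#'.
        line = line.split("#", 1)[0].strip()
        if line.lower().startswith("crawl-delay:"):
            return line.split(":", 1)[1].strip()
    return None

# Hypothetical robots.txt content with a 10-second delay:
sample = """User-agent: *
Crawl-delay: 10
Disallow: /private/
"""
print(find_crawl_delay(sample))  # 10
```

A delay like the one above tells compliant bots to wait between requests, which can stretch a crawl of a few hundred pages out considerably.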
Related Questions
-
Moz Pro crawl signaling missing canonical tags which are not missing?
Hi,
I'm trying Moz Pro and considering using it. One of the tools that appeals to me is the crawl and its insights. After a quick use, I really question many of the alerts. For instance, I got a "missing canonical tag" alert on this URL: https://vintners.co/wine/grawu_gto#2020, but when I check my markup, there's clearly a canonical tag: <link rel="canonical" href="https://vintners.co/wine/grawu_gto">. Can anybody explain? I asked Moz Pro staff when being onboarded but didn't get an answer... Honestly, I'm questioning the value of these crawls, or maybe I'm missing something?
Moz Pro | rolandvintners
-
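One way to check what a crawler actually sees for the canonical question above is to parse the raw HTML response yourself. This is a minimal sketch using Python's standard html.parser; the HTML string is a stand-in for a fetched page (note that a fragment like #2020 is never sent to the server in an HTTP request):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect href values of <link rel="canonical"> tags."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonicals.append(a.get("href"))

# Stand-in for the raw HTML a crawler would fetch:
html = '''<html><head>
<link rel="canonical" href="https://vintners.co/wine/grawu_gto">
</head><body></body></html>'''

finder = CanonicalFinder()
finder.feed(html)
print(finder.canonicals)  # ['https://vintners.co/wine/grawu_gto']
```

If a check like this finds the tag in the raw response but a crawler still reports it missing, a common cause is that the tag is injected by JavaScript after page load, which many crawlers don't execute.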
Moz crawling doesn't show all of my Backlinks
Hello, I'm trying to make an SEO backlinks report on my website. When using Link Explorer, I see only a few backlinks, while I have many more backlinks pointing to this website. Does anyone have an idea how to fix this issue? How can I check and correct this? My website is www.signsny.com.
Moz Pro | signsny
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 pages with duplicate page content. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
-
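Wildcard rules like the ones in the question above can be sanity-checked in code. Whether a given bot honors Google-style `*` and `$` wildcards varies by crawler, and Python's built-in urllib.robotparser does not emulate them, so this is a small hand-rolled sketch of the Google-style matching semantics only; the example paths are hypothetical:

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Google-style robots.txt matching: '*' matches any run of
    characters, a trailing '$' anchors the end; otherwise the
    pattern is a prefix match against the URL path."""
    regex = "".join(
        ".*" if ch == "*"
        else "$" if ch == "$" and i == len(pattern) - 1
        else re.escape(ch)
        for i, ch in enumerate(pattern)
    )
    return re.match(regex, path) is not None

# The Disallow rule from the question:
rule = "/*numberOfStars=0"

# Pages with the parameter should be blocked...
print(robots_pattern_matches(rule, "/products?numberOfStars=0"))  # True
# ...while pages without it (or with other values) are left intact.
print(robots_pattern_matches(rule, "/products?numberOfStars=5"))  # False
print(robots_pattern_matches(rule, "/products"))                  # False
```

On question 2, a blank line between groups is the conventional layout, but most modern parsers start a new group at each User-agent line regardless, so it should not change behavior.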
Can someone kindly explain what 'Crawl Issue Found: No rel="canonical" Tags' means? Is this a critical error and how can it be rectified?
Moz Pro | JoshMcLean
-
Why might Google be crawling via old sitemap, when the new one has been submitted and verified?
We have recently relaunched Scoutzie.com and re-submitted our new sitemap to Google. When I look in Webmaster Tools, our new sitemap has been submitted just fine, but at the same time, Google is finding a lot of 404s when crawling the site. My understanding is that it is still crawling the old links, which no longer exist. How can I tell Google to refresh its index and stop looking at all the old links?
Moz Pro | scoutzie
-
Why is my crawl STILL in progress?
I'm a bit new here, but we've had a few crawls done already. They are always finished by Wednesday night. Our website is not large (by any means), but the crawl still says it's in progress now 3 days later. What's the deal here?!?
Moz Pro | Kibin
-
"Issue: Duplicate Page Content " in Crawl Diagnostics - but sample pages are not related to page indicated with duplicate content
In the Crawl Diagnostics for my campaign, the duplicate content warnings have been increasing, but when I look at the sample pages that SEOmoz says have duplicate content, they are completely different pages from the page identified. They have different titles, meta descriptions, and HTML content, and are often different types of pages, e.g. a product page flagged as duplicating a category page. Does anyone know what could be causing this?
Moz Pro | EBCeller
-
Has anyone else not had an SEOmoz crawl since Dec 22?
Before the holidays, I completed a website redesign on an eCommerce website. One of the reasons for this was duplicate content. The new design has removed all duplicate content. (Product descriptions appeared on 2 pages.) I took a look at my Crawl Diagnostics Summary this morning and this is what I saw: Last Crawl Completed: Dec. 15th, 2011. Next Crawl Starts: Dec. 22nd, 2011. I'm thinking it might have something to do with the holidays, although I would like to see this data as soon as possible. Is there a way I can request a crawl from SEOmoz? Thanks, John Parker
Moz Pro | JohnParker2792