Crawl rate
-
How often does Moz crawl my website?
(I have a number of issues I believe I have fixed, and wondered if there is a way to manually request a re-crawl?)
Thanks.
Austin.
-
Many thanks
-
Perfect, thanks.
-
Thank you, Auston. I was unsure of the number of pages; thanks for clearing that up.
-
Hi Austin
All campaign updates occur once a week, on the same day each week, based on when the campaign was first created. The crawl test Thomas suggested is helpful as well for instant crawls; our Crawl Test in the research tools is limited to 3,000 pages.
Cheers!
-
You asked "how often Moz crawls a site" I'm assuming you are asking about a campaign that would be once a week per a campaign & depending on the size/ number of pages of the site
See FAQ
https://moz.com/help/guides/moz-pro-overview/crawl-diagnostics
There are a couple of options: you could create another campaign, but my advice would be to simply use Moz's Crawl Test, which lets you crawl your site for issues immediately:
https://moz.com/researchtools/crawl-test
That should take care of it. If for whatever reason it does not, some tools are free: if your site has fewer than 500 pages you can use https://www.screamingfrog.co.uk/seo-spider/, although to get everything you want you will most likely have to spend money.
It sounds like Crawl Test will do what you want; if it does not, the site will still need to be crawled with another tool. Third-party tools like Screaming Frog SEO Spider Pro and deepcrawl.com are instant and very helpful additions to the fantastic tools Moz already offers.
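If you are curious what these crawlers are doing under the hood, here is a minimal sketch of a DIY crawl, assuming Python 3 with the requests and beautifulsoup4 packages installed (the start URL is a placeholder for your own site). It only walks same-domain links and flags missing or duplicate title tags; it is an illustration of the idea, not a replacement for Crawl Test or the tools above.

# Minimal illustrative crawler: walks same-domain links and flags
# missing or duplicate <title> tags.
# Assumes: pip install requests beautifulsoup4
from collections import defaultdict
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START = "https://www.example.com/"   # placeholder: your own site
LIMIT = 100                          # stop after this many pages

seen = set()
queue = [START]
titles = defaultdict(list)

while queue and len(seen) < LIMIT:
    url = queue.pop(0)
    if url in seen:
        continue
    seen.add(url)
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException as exc:
        print(f"FETCH ERROR {url}: {exc}")
        continue
    if "text/html" not in resp.headers.get("Content-Type", ""):
        continue
    soup = BeautifulSoup(resp.text, "html.parser")
    title = soup.title.string.strip() if soup.title and soup.title.string else ""
    if not title:
        print(f"MISSING TITLE: {url}")
    else:
        titles[title].append(url)
    # queue same-domain links only
    for a in soup.find_all("a", href=True):
        link = urljoin(url, a["href"]).split("#")[0]
        if urlparse(link).netloc == urlparse(START).netloc:
            queue.append(link)

for title, urls in titles.items():
    if len(urls) > 1:
        print(f"DUPLICATE TITLE ({len(urls)} pages): {title!r}")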
I hope this helps,
Tom
-
Related Questions
-
Changing the Moz Crawl Date
Hello, I am wondering where I can change the date of the crawl by Moz. I would like to change this crawl period from one week to two or even three weeks for Moz to crawl my website. Hope to hear from anyone soon. Kind regards, Koen.
Getting Started | Koenniiee
-
Moz unable to crawl my Zenfolio website
Hey guys, I am attempting to optimize a website for my wife's business, but Moz is unable to crawl it. Zenfolio is the web hosting service (she is a photographer). The error message is: "Moz was unable to crawl your site on Apr 1, 2019. Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster. Read our troubleshooting guide." I did read the troubleshooting guide but nothing worked. My robots.txt file disallows a few bots, but not rogerbot. Anyone have any idea what is going on? Or do I need to request server logs from Zenfolio? Thanks
Getting Started | bpenn11
-
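For an error like the one in the question above, a quick first check is simply whether the robots.txt file responds at all. Here is a minimal sketch, assuming Python 3 with the requests package installed; the URL is a placeholder since the actual site isn't named in the question, and the user-agent is a simplified stand-in rather than Moz's real crawler string.

import requests

# Placeholder URL: swap in the real site's robots.txt address.
robots_url = "https://www.example-zenfolio-site.com/robots.txt"

try:
    # The User-Agent is a simplified stand-in; some servers respond
    # differently to crawler-like user-agents than to browsers.
    resp = requests.get(robots_url, timeout=10, headers={"User-Agent": "rogerbot"})
    print(resp.status_code)   # 200 is fine; a 5xx server error is the kind of problem Moz reports
    print(resp.text[:500])    # first few rules, if the file is actually served
except requests.RequestException as exc:
    print(f"robots.txt not reachable: {exc}")
-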
Crawling issue
Hi, I have to set up a campaign for a webshop. This webshop is a subdomain itself. First question: the two subfolders I need to track are /nl_BE and /fr_BE. What is the best way to handle this? Shall I set up two different campaigns for each subfolder, or shall I just make one campaign and add tags to keywords? Second question: it seems like Moz can't crawl enough pages. There are no disallows in the robots.txt. Should I try putting the following at the top of my robots.txt?
User-agent: rogerbot
Disallow:
Or is it because I want to crawl only a subdomain that it doesn't work? Thanks
Getting Started | Mat_C
-
Why do ignored crawl issues still count as issues?
I use Cloudflare, so I can't avoid the crawl error for "Pages with no Meta Noindex" because of the way Cloudflare protects email addresses from harvesting (it creates a new page that has no meta noindex values). I marked this issue as "ignore" because there's nothing I can do about it, and it doesn't really affect my site's performance from an SEO standpoint. But even marked as ignore, it is still included in my site crawl issues count. Of course, I want to see that issues count drop to zero, but that can't happen if the ignored issues are counted. I don't want to mark it fixed, because technically it's not fixed.
Getting Started | troy.brophy
-
Standard Syntax in robots.txt doesn't prevent Moz bot from crawling
A client is getting many false positive site crawl errors for things like duplicate titles and duplicate content on pages that include /tag/ in the URL. An example is https://needquest.com/place_tag/autism-spectrum-disorder/page/4/ To resolve this we have set up a disallow statement in the robots.txt file that says
Disallow: /page/
For some reason this appears not to work, as the site crawl errors continue to list pages like this. Does anyone understand why that would be and what we need to do to properly disallow crawling these pages?
Getting Started | btreloar
-
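The rule quoted in the question above can be sanity-checked before touching the live robots.txt. Here is a small sketch using only Python's standard library (the URLs are the ones quoted in the question). It also shows why the rule doesn't match: a Disallow value is a path prefix, so /page/ only blocks URLs whose path begins with /page/, and the standard-library parser, like the original robots.txt spec, does plain prefix matching with no wildcards.

from urllib.robotparser import RobotFileParser

# The rule from the question, parsed as a plain robots.txt (prefix matching only).
rules = """
User-agent: *
Disallow: /page/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The URL quoted in the question: its path starts with /place_tag/, not /page/,
# so the rule does not apply and the URL stays crawlable.
print(parser.can_fetch("rogerbot", "https://needquest.com/place_tag/autism-spectrum-disorder/page/4/"))  # True

# A URL whose path really does start with /page/ is blocked as expected.
print(parser.can_fetch("rogerbot", "https://needquest.com/page/2/"))  # False
-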
Scheduled update - Re-Crawl - recrawl
Can I not perform a manual update? I set up a campaign without GA as I did not have access. Once I got access, I added the GA account to the campaign, but no data is showing, as I think it requires an update, and I have to wait 7 days? Is that right? Thanks
Getting Started | SJMDT
-
Why won't rogerbot crawl my page?
How can I find out why rogerbot won't crawl an individual page I give it to crawl for page-grader? Google, Bing, and Yahoo all crawl my pages just fine, but I put one of the internal pages into page-grader to check for keywords and it gave me an F: it can't be crawling the page, because the keyword IS in the title but the report says it isn't. How do I diagnose the problem?
Getting Started | friendoffood
-
Crawl Diagnostics
Hello experts, today I analysed one of my websites with Moz and got an issue overview with 212 total issues, 37 of them high priority, all pointing to the same URL (http://blogname.blogspot.com/search?updated-max=2013-10-30T17:59:00%2B05:30&max-results=4&reverse-paginate=true). Can anyone help me find this URL and remove all the high-priority errors? Also, even though the page gets an A grade on-page, why is it not performing well in search engines?
Getting Started | JulieWhite