Set crawl frequency
-
The current crawl frequency is weekly. Is it possible for us to set this frequency ourselves?
-
This post is five years old, and I haven't worked as a Moz employee in over two years. If you email help@moz.com, they should be able to help you.
-
Can you please provide the right URL? This URL is outdated and there is no redirect... isn't that bad SEO?
-
If you go to the URL in my first response, the instructions there will explain how it works. Have you tried visiting that URL?
-
Can you please tell me the steps for that?
Thank you.
-
You can run the crawl on demand for up to 3,000 URLs at a time. That should help you out between regular crawls.
-
I've now made some corrections based on the crawl report, but I have to wait a week for the next crawl to confirm that the errors are gone.
Is there a way for me to verify the corrections or regenerate the error report sooner?
-
The crawl frequency is weekly and cannot be changed. What you can do is run a crawl test from http://pro.seomoz.org/tools/crawl-test if you need a crawl on demand.
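While waiting for the next weekly crawl, you can also spot-check your fixes yourself. A minimal sketch using only Python's standard library, assuming the crawl errors were broken or misbehaving URLs (`check_urls` is a hypothetical helper, not part of any Moz tool):

```python
# Spot-check a list of URLs you have fixed, between weekly crawls.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def check_urls(urls, timeout=10):
    """Return a dict mapping each URL to its HTTP status code,
    or to an error string if the request could not complete."""
    results = {}
    for url in urls:
        # HEAD keeps the check lightweight; some servers may reject it.
        req = Request(url, method="HEAD",
                      headers={"User-Agent": "fix-checker/0.1"})
        try:
            with urlopen(req, timeout=timeout) as resp:
                results[url] = resp.status   # e.g. 200 if the fix worked
        except HTTPError as e:
            results[url] = e.code            # e.g. 404 if still broken
        except URLError as e:
            results[url] = str(e.reason)     # DNS or connection problems
    return results
```

Running it over the URLs flagged in the crawl report gives you a quick pass/fail list without waiting for the scheduled crawl, though it won't catch everything the full crawler checks (duplicate titles, redirect chains, etc.).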