Schedule crawls for 2 subdomains every 24 hours
-
I saw the following at this link:
http://pro.seomoz.org/tools/crawl-test
"As a PRO member, you can schedule crawls for 2 subdomains every 24 hours, and you'll get up to 3,000 pages crawled per subdomain."
However I am having trouble finding where to schedule this 24 hour crawl in my Pro Dashboard. I did not see the option for this setting in the crawl diagnostics tab or in the campaign settings section from the dashboard home page. Can you help?
thanks!
Michael
-
Generally, I correct the error and then wait until the next crawl if I'm confident that I've fixed the problem.
If I'm unsure, I may run a manual crawl using the crawling tool; however, for me the format just isn't as easy to read as the weekly crawl diagnostics.
-
Interesting, thank you both for the feedback. I am new to SEOmoz and am just curious how most users use the crawl tool when they fix errors: do you wait until the next week's crawl to see if the error is resolved, or do you immediately use the crawl test tool to check whether your changes corrected the errors (saving time)? There's no right or wrong answer here; I'm just curious how most SEOmoz PRO users use the tool. Any tips or tricks from other users on how they use the tool, no matter how big or small, would be greatly appreciated! Thanks
-
Shelly is right on here. Send an email to help@seomoz.org and they'll answer your question for you.
-
You can use the crawl test tool to run 2 crawls per day of up to 3,000 pages each; however, as far as I'm aware, the SEO tools are moving more towards campaign setups where your sites are crawled once per week (or within 24 hours for a new campaign).
It might be worth sending a request to SEOmoz Help to get the team to clarify.
Related Questions
-
Why is the Moz crawl returning URLs with variable results showing Missing Meta Desc? Example: http://nw-naturals.net/?page_number_0=47
Can you help me dig down into my website's guts to find out why the Moz crawl is returning URLs with variable results, and flagging a missing description when it's not really a page? Example: http://nw-naturals.net/?page_number_0=47. I've asked Moz, but it's a web development issue so they can't help me with it. Has anyone had an issue with this on their website? (A quick check is sketched below.) Thank you!
Moz Pro | lewisdesign0
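A quick way to see what a crawler is actually served on one of these parameterised URLs is to fetch it and check whether a title, meta description and canonical come back at all. The following is only a rough sketch, assuming the third-party requests and beautifulsoup4 packages are installed and using the example URL quoted in the question:

# Rough check: fetch the parameterised URL and see whether it returns a full
# page with its own meta description and canonical, or a thin variant that a
# crawler would flag as missing a description.
# Assumes the third-party packages `requests` and `beautifulsoup4` are installed.
import requests
from bs4 import BeautifulSoup

url = "http://nw-naturals.net/?page_number_0=47"  # example URL from the question

response = requests.get(url, timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

title = soup.find("title")
description = soup.find("meta", attrs={"name": "description"})
canonical = soup.select_one('link[rel="canonical"]')

print("Status code:", response.status_code)
print("Title:      ", title.get_text(strip=True) if title else "MISSING")
print("Meta desc:  ", description.get("content") if description else "MISSING")
print("Canonical:  ", canonical.get("href") if canonical else "MISSING")

If the parameterised URL returns a 200 with no meta description and no canonical, the crawler is reporting exactly what the site serves it.
-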
Why Can Only Our Homepage Be Crawled, Showing a Redirect Message as the Meta Title?
Hello everyone, recently when we checked our domain using a Moz Crawl Test and Screaming Frog, only the homepage came up, and its meta title says “You are being redirected to…”. We have several pages that used to come up, and when submitting them to GSC no issues are reported. The robots.txt file looks fine as well. We thought this might be server related, but it's a little out of our field of expertise, so we wanted to find out whether anyone has experience with this (ideas of reasons, how to check, etc.) or any potential suggestions (a rough user-agent comparison is sketched below). Any extra insight would be really appreciated. Please let us know if there is anything we could provide further details on that might help. Looking forward to hearing from all of you! Thanks in advance. Best,
Moz Pro | Ben-R0
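One thing worth ruling out in a case like this is a user-agent based redirect at the server or CDN level, where browsers get the page but crawler user agents are bounced through an interstitial. The sketch below is a rough way to compare the two; it assumes the third-party requests package is installed, and the domain and crawler user-agent string are placeholders rather than exact values (substitute the real rogerbot or Screaming Frog user agent to test those specifically):

# Compare what the server returns to a browser-like user agent versus a
# crawler-like one. If the crawler variant picks up extra redirect hops or a
# "You are being redirected..." title, the cause is likely server/CDN side.
# Assumes the third-party `requests` package; URL and crawler UA are placeholders.
import re

import requests

URL = "https://www.example.com/"  # placeholder -- substitute the affected domain

USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "crawler": "Mozilla/5.0 (compatible; ExampleCrawler/1.0)",  # stand-in crawler UA
}

for label, agent in USER_AGENTS.items():
    response = requests.get(URL, headers={"User-Agent": agent},
                            allow_redirects=True, timeout=10)
    hops = [r.status_code for r in response.history] + [response.status_code]
    title = re.search(r"<title>(.*?)</title>", response.text, re.I | re.S)
    print(f"{label}: redirect chain {hops}, final URL {response.url}")
    print(f"{label}: title {title.group(1).strip() if title else 'not found'}")

If the two user agents get different redirect chains or titles, the fix belongs in the server/CDN configuration rather than in robots.txt or the pages themselves.
-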
Only one page crawled... Need help
I have run a website through SEOmoz which has many URLs, but the SEOmoz report is showing Pages Crawled: 1. Why is this happening when my campaign limit is OK? Please tell me what to do to get all pages crawled in the SEOmoz report.
Moz Pro | lucidsoftech0
-
1 week has passed: Crawled pages still N/A
Roughly one week ago I went PRO, and then I created a campaign for the smallish webshop that I'm employed at; however, it doesn't seem to crawl. I've checked our visitor log, and while we find other bots such as Google, Bing, Yandex and so forth, the SEOmoz bot hasn't been visible. Perhaps I'm just looking for a normal user agent; oh well, onwards. While I thought it might take time, as a small test I added a domain that I've owned for some time but don't really use; that target site is only 17 pages. That site was crawled almost within the hour, and I realise that our ~5,000 pages on the main campaign would take some time, but wouldn't the initial 250 pages be crawled by now? I should add that I didn't add http:// to the original campaign, but I did for the one that got crawled, and I cannot seem to change this myself in order to check whether that's the problem or not. Does anyone have any ideas? Should I just wait, or is there something I can actively do to force it to start rolling?
Moz Pro | Hultin0
-
Is it possible to re-run a crawl in a SEOmoz PRO campaign?
Hello, for my campaign designzzz.com the crawl ran today, and the next crawl is in a week. Unfortunately, while the crawl was in progress I had a little issue on the site which added lots of 500 server errors and other notifications. So is it possible to re-run the crawl?
Moz Pro | wickedsunny10
-
OSE Domains & Subdomains
Is there a compelling reason that OSE treats subdomains as part of a parent domain, rather than as a separate site? Or is this just a technical limitation? I ask because it's my understanding that Google treats subdomains more like separate domains than like parts of the parent domain. OSE treating them more like folders creates some frustrating situations when researching niches that are heavily filled with blogspot and wordpress.com blogs. First of all, the domain authority in these situations is not at all indicative of the strength of the site. It also makes it hard to evaluate linking root domains at a glance, since all blogspot blogs count as one domain. So to see all blogspot sites linking you have to go to the full link list -- where each site may be listed hundreds of times -- and you can't group them because they're all considered the same domain. To be sure, when researching these niches you can just throw out domain authority as a metric and export every report to Excel, where you can sort things in a way that makes it easier to separate sites. But if there isn't a compelling SEO reason to have OSE function this way, I'd love to see those subdomains treated as separate sites so I can have access to all the easy-to-use SEOmoz metrics and layouts without the extra work. And of course, if there is a compelling SEO reason for subdomains to be treated as domains, I'd love to be educated! : )
Moz Pro | Ecreativeworks0
-
How to resolve Duplicate Content crawl errors for Magento Login Page
I am using the Magento shopping cart, and 99% of my duplicate content errors come from the login page. The URL looks like: http://www.site.com/customer/account/login/referer/aHR0cDovL3d3dy5tbW1zcGVjaW9zYS5jb20vcmV2aWV3L3Byb2R1Y3QvbGlzdC9pZC8xOTYvY2F0ZWdvcnkvNC8jcmV2aWV3LWZvcm0%2C/ or the same URL but with a different long string from the one above. This link is available at the top of every page on my site, but I have made sure to add "rel=nofollow" as an attribute to the link in every case (it is done easily by modifying the header links template). Is there something else I should be doing? Do I need to try to add a canonical to the login page? If so, does anyone know how to do it using XML? (A verification sketch follows below.)
Moz Pro | kdl01
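Whichever safeguards end up being used (nofollow on the links, a canonical, a meta robots noindex, or a robots.txt disallow on /customer/account/login/), it is easy to confirm what the login URL actually exposes to a crawler. This is only a rough sketch, assuming the third-party requests and beautifulsoup4 packages and the URL pattern quoted in the question:

# Check which duplicate-content safeguards the Magento login URL currently
# exposes to a crawler: a canonical link, a meta robots directive, and/or an
# X-Robots-Tag response header.
# Assumes `requests` and `beautifulsoup4`; the URL follows the question's pattern.
import requests
from bs4 import BeautifulSoup

login_url = "http://www.site.com/customer/account/login/"  # pattern from the question

response = requests.get(login_url, timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

canonical = soup.select_one('link[rel="canonical"]')
meta_robots = soup.find("meta", attrs={"name": "robots"})

print("Status code: ", response.status_code)
print("Canonical:   ", canonical.get("href") if canonical else "none")
print("Meta robots: ", meta_robots.get("content") if meta_robots else "none")
print("X-Robots-Tag:", response.headers.get("X-Robots-Tag", "none"))

If none of those show up, a meta robots noindex,follow on the login page or a robots.txt Disallow on /customer/account/login/ is the approach commonly suggested for keeping the token-suffixed login URLs out of duplicate-content reports.
-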
Crawl Diagnostics finding pages that don't exist. Will rel=canonical help?
I have recently set up a campaign for www.completeoffice.co.uk; I'm the in-house developer there. When the crawl diagnostics completed, I went to check the results and, to my surprise, it reported well over 100 missing or empty title tags. I then clicked through to see which pages, and nearly all the pages it says have missing or empty title tags DO NOT EXIST. This has really confused me, and I need help figuring out how to solve it. Can anyone help? The attached image is a screenshot of some of the links it showed me in crawl diagnostics; nearly all of these do not exist. Will a rel=canonical tag in the head section of the actual pages help? For example, the actual page that exists is www.completeoffice.co.uk/Products.php, whereas when crawled it actually showed www.completeoffice.co.uk/Products/Products.php. Will having the rel=canonical tag in the head of the real Products.php solve this? (Two quick checks are sketched below.)
Moz Pro | CompleteOffice0
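Phantom URLs like /Products/Products.php are often produced when path-relative links (href="Products.php" with no leading slash) are resolved against an unexpected base path, for example when the same template also answers under a trailing-slash URL. The sketch below runs two rough checks, assuming the third-party requests and beautifulsoup4 packages and the URLs quoted in the question:

# Two quick checks for phantom URLs such as /Products/Products.php:
# 1) does the server actually answer that URL (a 200 means the crawler is being
#    served a real duplicate page rather than inventing one), and
# 2) which links on the real page are path-relative (no leading slash), since
#    those are the usual source of doubled path segments.
# Assumes `requests` and `beautifulsoup4` are installed.
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

real_page = "http://www.completeoffice.co.uk/Products.php"
phantom = "http://www.completeoffice.co.uk/Products/Products.php"

print("Phantom URL status:", requests.get(phantom, timeout=10).status_code)

response = requests.get(real_page, timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

for anchor in soup.find_all("a", href=True):
    href = anchor["href"]
    parsed = urlparse(href)
    if not parsed.scheme and not parsed.netloc and not href.startswith(("/", "#", "?")):
        print("Path-relative link a crawler may mis-resolve:", href)

If the phantom URL returns 200, making those links root-relative or absolute removes the cause; a rel=canonical on the real page only mitigates the duplicates rather than stopping them from being crawled.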