When will the 250-page crawl limit be eliminated?
-
Hi,
I signed up for a SEOmoz Pro account yesterday and would like to know, please: when will the 250-page crawl limit be eliminated?
Thanks in advance for your help!
-
Hi Andarilho,
The 250-page crawl is just a quick initial crawl that lets you get started on your campaigns, because a full site crawl generally takes up to 7 days to complete. So you can start working on the first 250 pages in the meantime.
Hope that helps,
Sha
-
Hello Sha,
Understood.
Thanks a lot for your kindness.
-
Happy to help anytime.
Looking forward to catching up with you around SEOmoz as you settle into the community.
Sha
Related Questions
-
When Is the Page Optimization Section Going to Be Fixed?
In the last few weekly reports, the Page Optimization section has not been working correctly. When is it going to be fixed? Please let us know. Thank you.
Moz Pro | Videogamefan0
-
How to remove 404 pages in WordPress
I used the crawl tool and it returned a 404 error for several pages that I no longer have published in WordPress. They must still be on the server somewhere? Do you know how to remove them? I think they are not files on the server like HTML files, since WordPress uses databases. I figure that getting rid of the 404 errors will improve SEO; is this correct? Thanks, David
Moz Pro | DJDavid0
-
Crawl Errors from URL Parameter
Hello, I am having this issue within SEOmoz's Crawl Diagnostics report. There are a lot of crawl errors happening with pages associated with /login. I will see site.com/login?r=http://.... and have several duplicate content issues associated with those URLs. Seeing this, I checked WMT to see if the Google crawler was showing this error as well. It wasn't. So what I ended up doing was going to the robots.txt and disallowing rogerbot. It looks like this: User-agent: rogerbot Disallow: /login However, SEOmoz has crawled again and it is still picking up on those URLs. Any ideas on how to fix this? Thanks!
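For reference, here is a minimal sketch of the robots.txt record described above, with each directive on its own line (the user-agent and path are taken from the question; whether this alone stops the URLs from reappearing in the crawl is not confirmed here):
# Block only Moz's crawler (rogerbot) from any URL whose path starts with /login
User-agent: rogerbot
Disallow: /login
Crawlers often cache robots.txt, so a change like this may not take effect until the next crawl cycle.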
Moz Pro | WrightIMC0
-
Inbound Links To Deleted Pages
Hi, I recently deleted some pages from my website and believe that there will be external inbound links pointing to these pages. I would like to find them and put redirects in place. Can anybody tell me how to use SEOmoz to find where external links are pointing to moved/deleted pages? Thanks
Moz Pro | stayin1
-
SEOmoz Crawling Only 1 Page
I entered a new site into my dashboard 2 days ago and everything looked kosher: there were a few hundred pages crawled and a whole bunch of errors. I came back this morning to start work on the site, and SEOmoz has crawled the site again, this time returning only 1 page and 0 errors. I haven't even logged in to the site since the first crawl, so I couldn't have broken anything. Has anyone seen this before?
Moz Pro | Junction0
-
On Page Analysis and Grading
I received an email that the on-page analysis for my campaigns was completed, but when I click on the link there are no grades there. What does that mean? Another question on this topic: when your campaign is graded, are pages graded on all the keywords in the campaign, or is each keyword graded individually? Thanks!
Moz Pro | Confections0
-
Crawl Errors Confusing Me
The SEOmoz crawl tool is telling me that I have a slew of crawl errors on the blog of one domain. All are related to MSNbot, and to trackbacks (which we do want to block, right?) and attachments (it makes sense to block those, too). Any idea why these are crawl issues with MSNbot and not Google? My robots.txt is here: http://www.wevegotthekeys.com/robots.txt. Thanks, MJ
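For reference, a hedged sketch of the kind of robots.txt rules often used to keep a crawler away from WordPress trackback and attachment URLs (the exact paths, and whether msnbot applies wildcard patterns on this particular site, are assumptions; the actual file at the URL above may differ):
# Rules for Bing's crawler (msnbot); use "User-agent: *" to cover all crawlers instead
User-agent: msnbot
# WordPress trackback URLs typically end in /trackback/
Disallow: /*/trackback
# Attachment pages are often reached via ?attachment_id= query strings
Disallow: /*?attachment_id=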
Moz Pro | mjtaylor0