Crawl Test limitation - ways to handle large sites?
-
Hello
I have a large site (120,000+ pages) and the Crawl Test is limited to 3,000 pages.
Is there a way to crawl a site of this size? Can I use a regular expression, for example?
Thanks!
-
Hi there. Kristina from Moz's Help Team here.
The Crawl Test tool is limited to 3,000 pages by design; the full Site Crawl that runs within your Campaign(s) is a much larger crawl that takes place weekly.
All Moz Pro Standard and Medium subscription levels (you currently have a Standard account) cap Site Crawls within a Campaign at a maximum of 50,000 pages. If you're interested in crawling more than 50,000 pages within a single Campaign, you would need to upgrade to a higher plan.
You can view all of our subscription plans on our pricing page here: https://moz.com/products/pro/pricing
As always, you can reach out to our team with product questions like this one in the future, either by emailing help@moz.com or by clicking the blue chat box in the lower right-hand corner of your screen while in the product.
I hope this helps but please let me know if there's anything else I can assist with!
Thank you,
-Kristina
Related Questions
-
Why won't my site crawl?
The error in my dashboard reads: **Moz was unable to crawl your site on Jul 23, 2020.** Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag. Update these tags to allow your page and the rest of your site to be crawled. If this error is found on any page on your site, it prevents our crawler (and some search engines) from crawling the rest of your site. Typically errors like this should be investigated and fixed by the site webmaster. I think I need to edit robots.txt - how do I fix that?
Feature Requests | | alixxf0 -
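For anyone hitting the same error: Moz's Site Crawl uses the rogerbot crawler, and a robots.txt that explicitly allows it might look like the sketch below. The paths shown are illustrative assumptions - your actual rules for other crawlers should be kept as they are, and this is not a drop-in fix.

```
# Allow Moz's crawler (rogerbot) everywhere; an empty Disallow permits all paths
User-agent: rogerbot
Disallow:

# Other crawlers keep whatever rules you already have, e.g.:
User-agent: *
Disallow: /admin/
```

If the block comes from an X-Robots-Tag header or a meta robots tag rather than robots.txt, those have to be changed on the server or in the page markup instead.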
Why do my site's DA and PA not show accurately in the MOZ Extension?
My site has a minor problem with DA and PA: sometimes they do not show accurately. My site is yts subtitle. Please tell me something about it.
Feature Requests | | darkwebvampire0 -
Any way to programmatically retrieve a campaign's Search Visibility score?
I would like to retrieve a campaign's Search Visibility score, and preferably historical too, in an automated fashion. However, as far as I know the API has no access to this. Is there something I'm missing? If not, is there a formula I can use to calculate this score?
Feature Requests | | MaddenMediaSEO1 -
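As a rough starting point for the "is there a formula" part: Moz describes Search Visibility as a CTR-weighted share of your tracked keywords' rankings. A hedged Python sketch follows - the CTR weights below are illustrative assumptions for the example, not Moz's actual values.

```python
# Illustrative click-through-rate weights by SERP position.
# These are assumptions for the sketch, NOT Moz's published values.
CTR_WEIGHTS = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
               6: 0.04, 7: 0.03, 8: 0.03, 9: 0.02, 10: 0.02}

def search_visibility(positions):
    """Estimate a visibility score (0-100) for a set of tracked keywords.

    positions: one SERP position per tracked keyword, or None if the
    site does not rank in the top 10 for that keyword.
    """
    if not positions:
        return 0.0
    earned = sum(CTR_WEIGHTS.get(p, 0.0) for p in positions if p is not None)
    # Best case: ranking #1 for every tracked keyword.
    possible = CTR_WEIGHTS[1] * len(positions)
    return round(100 * earned / possible, 1)
```

Ranking #1 for every tracked keyword yields 100; with these assumed weights, `search_visibility([1, 2, None])` comes out to 50.0. Feeding in exported ranking data per week would give a historical series.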
What is the best way to display historical ranking data?
I have utilized MOZ for years but have always struggled with a nice graph that illustrates historical ranking data for all tracked keywords. Can someone help me find the best solution for this in MOZ?
Feature Requests | | WebMarkets0 -
Does Moz offer a site auditor tool to embed on your website?
Similar to the service at http://mysiteauditor.com. I just want to embed a tool on our website that allows visitors to enter their URL and have a report emailed to them.
Feature Requests | | WebMarkets1 -
Migrate campaign when site changes from http:// to https://
We moved our site from http:// to https://, redirecting all traffic to the new https connection. However, in our campaign all scores have dropped (only 1 page crawled). Is there something we can do to migrate the data, or update the existing campaign to reflect this minor change to our existing site?
Feature Requests | | BentoPres0 -
Crawl diagnostic errors due to query string
I'm seeing a large amount of duplicate page titles, duplicate content, missing meta descriptions, etc. in my Crawl Diagnostics Report due to URLs' query strings. These pages already have canonical tags, but I know canonical tags aren't considered in MOZ's crawl diagnostic reports and therefore won't reduce the number of reported errors. Is there any way to configure MOZ to not consider query string variants as unique URLs? It's difficult to find a legitimate error among hundreds of these non-errors.
Feature Requests | | jmorehouse0
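Since the crawl report apparently can't be configured to ignore query strings, one workaround is to post-process an exported URL list yourself, grouping query-string variants under one canonical URL so the real errors stand out. A hedged Python sketch (the example URLs are made up for illustration):

```python
from collections import defaultdict
from urllib.parse import urlsplit, urlunsplit

def strip_query(url):
    """Return the URL without its query string or fragment."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

def group_variants(urls):
    """Group query-string variants of the same page under one key."""
    groups = defaultdict(list)
    for url in urls:
        groups[strip_query(url)].append(url)
    return dict(groups)
```

Any group with more than one member is a query-string variant cluster that a canonical tag already covers, which makes it easier to skim the remaining singletons for legitimate errors.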