What to do with a site of >50,000 pages vs. crawl limit?
-
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages?
Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?
I'm tracking a few large government websites to see how they are faring in rankings and SEO. They are not my own websites; I'm an academic studying science communication, and I want to see how these agencies are doing relative to what the public searches for on the technical topics and social issues the agencies manage. I'm in the process of re-setting up my campaigns to get better data than I have been getting -- I'm a newbie to SEO, and the campaigns I slapped together a few months ago need a better setup: starting them all on the same day, making sure the www vs. non-www setting matches what actually ranks, refining my keywords, etc.
I'm stumped about what to do with these really huge agency websites, and what my options are for getting good data in light of the 50,000-page crawl limit. Here is an example of what I mean:
To see how EPA is doing in searches related to air quality, ideally I'd track all of EPA's web presence.
www.epa.gov has 560,000 pages -- if I enter www.epa.gov for a campaign, what happens given that the site has so many more pages than the 50,000-page crawl limit? What do I miss out on? Can I "trust" what I get?
www.epa.gov/air has only 1,450 pages, so if I choose this for a campaign, the crawl will cover that sub-folder completely and I'll get a complete picture of this air-focused sub-folder ... but (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I'd be leaving most of the 50,000-page crawl budget unused. (Maybe that's not quite true -- I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air-quality sites, and maybe those competitors count towards the 50,000-page crawl limit and would get me up to it? How do the competitors you choose figure into the crawl limit?)
Any opinions on what I should do in general in this kind of situation? The small sub-folder, the full humongous site, or some other approach I'm not thinking of?
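(In case it's useful: the way I've been estimating a sub-folder's page count is by counting the URLs in the site's XML sitemap by path. Below is a rough Python sketch of that idea -- it assumes the site publishes a standard sitemap.xml at the root, possibly as a sitemap index, and the epa.gov paths are just my example. Sitemap counts won't exactly match what any crawler finds.)

```python
import urllib.request
import xml.etree.ElementTree as ET

NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def fetch_xml(url):
    """Download and parse an XML document."""
    with urllib.request.urlopen(url) as resp:
        return ET.fromstring(resp.read())

def sitemap_urls(sitemap_url):
    """Yield every page URL in a sitemap, recursing into sitemap-index entries."""
    root = fetch_xml(sitemap_url)
    if root.tag == NS + "sitemapindex":
        for entry in root.iter(NS + "sitemap"):
            yield from sitemap_urls(entry.find(NS + "loc").text.strip())
    else:  # a plain <urlset>
        for entry in root.iter(NS + "url"):
            yield entry.find(NS + "loc").text.strip()

# Compare the whole site to one sub-folder (paths are illustrative).
urls = list(sitemap_urls("https://www.epa.gov/sitemap.xml"))
air_urls = [u for u in urls if "/air" in u]
print(f"whole site: {len(urls)} URLs; /air sub-folder: {len(air_urls)} URLs")
```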
-
Hi Sean -- Can you clarify for me how competitors in a campaign figure into the 50,000-page limit? Does the main site in the campaign get thoroughly crawled first, and then competitors are crawled up to the limit?
Some examples:
If the main site is 100 pages, and I pick two competitors of 100 and 1,000 pages plus a third, gargantuan competitor of 300,000 pages, what happens? Does the order in which I enter the competitors determine whether the 100-page and 1,000-page competitors get crawled, or whether the limit maxes out on the 300K competitor before the smaller ones are reached?
If the main site is 300,000 pages, do any competitors in the campaign just not get crawled at all because the 50,000 limit gets all used up on the main site?
What if the main site is 20,000 pages and a competitor is 45,000 pages? Thorough crawl of main site and then partial crawl of competitor?
Based on our previous discussion, I feel like I have a direction to go in for the main site in the campaign, but I'm still a little stumped about how competitors operate within the crawl limit.
-
Hi there,
Thanks for writing in. This is a tricky one, because it's difficult to say whether there is an objectively right answer. In this case, your best bet would be to create a campaign for a sub-folder that is under the standard subscription crawl limit and attempt to pick up what you miss using our other research tools. Although the research tools are predominantly designed for one-off interactions, you could probably use them to capture information that falls outside the campaign's purview. Here is a link to our research tools for your reference: moz.com/researchtools/ose/
If you do decide to enter a website that far surpasses the crawl limit, what gets cut off is determined by the existing site structure. Our crawler starts from the link provided and follows the site's link structure, crawling until it reaches the page limit or runs into a dead end.
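Conceptually, it behaves like a capped, link-following crawl: start at a seed URL, queue every new link you discover, and stop once the page limit is hit. Here's a minimal sketch of that behavior in Python -- an illustration of the idea only, not our actual crawler, and `get_links` is a hypothetical helper that fetches a page and returns the links on it:

```python
from collections import deque

def crawl(seed_url, get_links, page_limit=50_000):
    """Breadth-first crawl from seed_url, following discovered links
    until page_limit pages are visited or no new links remain."""
    seen = {seed_url}
    queue = deque([seed_url])
    visited = []
    while queue and len(visited) < page_limit:
        url = queue.popleft()
        visited.append(url)
        for link in get_links(url):  # hypothetical: fetch page, return its links
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited  # pages beyond the cap are simply never reached
```

The practical upshot is that pages many links away from your campaign's starting URL are the ones most likely to fall outside the 50,000-page cap.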
Both approaches may present issues, so it's more of a judgement call. One thing I will say is that we have a much easier time crawling fewer pages, so that may be something to keep in mind.
Hope this helps, and if you have any questions for me, please let me know.
Have a fantastic day!
-
Thanks, Patrick, for the tip about ScreamingFrog! I checked out the link you shared, and it looks like a powerful tool. I'm going to add it to my list of tools to start using.
In the meantime, though, I still need a strategy for what to do in Moz. Any opinions on whether I should set my Moz campaigns to the smaller sub-folders of a few thousand pages vs. the humongous full sites of 100,000+ pages? I guess I'm leaning towards setting them to the smaller sub-folders. Or maybe I should do a small sub-folder for one of the huge sites and do the full site for another campaign, and see what kind of results I get.
-
Hi there
I would look into ScreamingFrog -- you can crawl 500 URLs for free; with a license, you can crawl as many pages as you'd like.
Let me know if this helps! Good luck!