What is the best tool to crawl a site with millions of pages?
-
I want to crawl a site that has so many pages that Xenu and Screaming Frog keep crashing at some point after 200,000 pages.
What tools will allow me to crawl a site with millions of pages without crashing?
-
Don't forget to exclude pages that don't contain the information you are looking for - for example, URLs whose query parameters just produce duplicate content, system files, and so on. That may help bring the page count down.
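As a rough sketch of that kind of filtering in Python (the extension list and rules here are assumptions - adjust them to what the site actually serves):

    from urllib.parse import urlparse, urlunparse

    # Assumed "system file" extensions with nothing useful for an SEO crawl
    SKIP_EXTENSIONS = (".css", ".js", ".ico", ".pdf", ".jpg", ".png", ".gif")

    def keep_url(url):
        """Return True if the URL looks like a content page worth crawling."""
        return not urlparse(url).path.lower().endswith(SKIP_EXTENSIONS)

    def normalize(url):
        """Strip query strings and fragments that usually just create duplicates."""
        return urlunparse(urlparse(url)._replace(query="", fragment=""))

Deduplicating on the normalized form of each URL before queuing it is often what shrinks a multi-million-URL crawl to a manageable size.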
-
Only basic stuff: URL, Title, Description, and a few HTML elements.
I am aware that building a crawler would be fairly easy, but is there one out there that already does it without consuming too many resources?
-
For what purpose do you want to crawl the site?
A web crawler isn't really hard to write; you can probably put one together in about 100 lines of code. The question, of course, is: what do you want out of the crawl?
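As a rough illustration of that claim, here is a minimal breadth-first crawler in Python that collects URL, title, and meta description for a single host (a sketch that assumes the requests and beautifulsoup4 packages; a real run would also need politeness delays, robots.txt handling, and error logging):

    from collections import deque
    from urllib.parse import urljoin, urlparse

    import requests
    from bs4 import BeautifulSoup

    def crawl(start_url, max_pages=100):
        """Breadth-first crawl of one host, collecting (URL, title, description)."""
        host = urlparse(start_url).netloc
        seen, queue, results = {start_url}, deque([start_url]), []
        while queue and len(results) < max_pages:
            url = queue.popleft()
            try:
                resp = requests.get(url, timeout=10)
            except requests.RequestException:
                continue  # skip pages that time out or error out
            soup = BeautifulSoup(resp.text, "html.parser")
            title = soup.title.get_text(strip=True) if soup.title else ""
            desc_tag = soup.find("meta", attrs={"name": "description"})
            desc = desc_tag.get("content", "") if desc_tag else ""
            results.append((url, title, desc))
            for a in soup.find_all("a", href=True):
                link = urljoin(url, a["href"]).split("#")[0]
                if urlparse(link).netloc == host and link not in seen:
                    seen.add(link)
                    queue.append(link)
        return results

At millions of pages, the in-memory seen set and queue are exactly what makes desktop crawlers fall over; tools built for that scale spool their URL frontier to disk or a database instead.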
Related Questions
-
Need references to a company that can transition our 1,000-page website from HTTP to HTTPS without breaking our SEO backlinks and site structure
Hi y'all, I'm looking for a company or independent contractor who can transition our website from HTTP to HTTPS. I want to make sure they know what they're doing with a WordPress website. More importantly, I want to make sure they don't lose any SEO juice from external sources and that nothing gets broken internally. Anyone have any good recommendations? You can reply back or DM me. Best, Shawn
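For what it's worth, the technical core of such a migration is a site-wide 301 redirect from every HTTP URL to its HTTPS twin. On an Apache host (typical for WordPress) that is often a rule along these lines in .htaccess - a sketch, assuming mod_rewrite is enabled:

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Whoever does the work should pair the redirect with updated internal links and canonical tags, a fresh XML sitemap, and the HTTPS property added in Search Console.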
-
Links to my site still showing in Webmaster Tools from a non-existent site
We owned two sites, with the pages on Site A all linking over to similar pages on Site B. We wanted to remove the links from Site A to Site B, so we redirected all the links on Site A to the homepage of Site A and took Site A down completely. Unfortunately, we are still seeing the links from Site A coming through in Google Webmaster Tools for Site B. Does anybody know what else we can do to remove these links?
-
Best way to start a fresh site from a penalized one
Dear all, I was dealing with a penalized domain (Penguin, Panda): hundreds of spammy links (disavowed with no success), thin content partly resolved, and so on. I think the best way forward is to start a fresh new domain, but we want to reuse some of the well-written content from the old (penalized) site. To do this, I will mark the source (penalized) pages as NOINDEX and move their content to the fresh new domain. Question: do you think this is a safe approach, or do you know a better strategy? I'll appreciate your point of view. Thank you
-
Best link analysis tool to use before moving to disavow links
I want to use the Google Disavow tool. I have two questions about it: 1. How do I use the Google Disavow tool? 2. How do I use Moz or OSE for deep backlink mining? If Moz isn't the best fit for this, I'd need an alternative.
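For reference on the first question, the disavow tool itself just takes a plain text file uploaded through Search Console, one entry per line; a small example (the domains here are placeholders):

    # individual pages to disavow
    http://spam.example.com/paid-links.html
    # entire domains to disavow
    domain:link-farm.example.net

Lines starting with # are comments, and the domain: prefix disavows every link from that domain.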
-
Could you use a robots.txt file to disallow a duplicate content page from being crawled?
A website has duplicate content pages to make it easier for users to find the information from a couple of spots in the site navigation. The site owner would like to keep it this way without hurting SEO. I've thought of using the robots.txt file to disallow search engines from crawling one of the pages. Do you think this is a workable/acceptable solution?
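For reference, the robots.txt rule in question would look something like this (the path is hypothetical):

    User-agent: *
    Disallow: /alternate-path/duplicate-page/

Keep in mind that Disallow only blocks crawling, not indexing - a blocked URL can still appear in results via external links - so a rel="canonical" tag pointing at the preferred version is the more usual fix for deliberate duplicates.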
-
Link anchor text: only useful for pages linked to directly or distributed across site?
As an SEO I understand that link anchor text for the focus keyword on the page linked to is very important, but I have a question I cannot find the answer to in any books or blogs, namely: does inbound anchor text 'carry over' to other pages on your site, like link juice? For instance, say I have a homepage focusing on keyword X and a subpage (with internal links to it) focusing on keyword Y. Does it then help to link to the homepage with keyword Y anchor texts? Will this keyword thematically 'flow through' the internal link structure and help the subpage's ranking? In a broader sense: will a diverse link anchor text profile for your homepage help all other pages on your domain rank thematically? Or is link anchor text only useful for the page that is linked to directly? All views and experiences are welcome! Kind regards, Joost van Vught
-
Best way to stop pages being indexed and keeping PageRank
On a discussion forum, for example, what would be the best way to stop pages such as the posting page (where a user posts a topic or message) from being indexed without diluting PageRank? If we added them to the Disallow rules in robots.txt, would PageRank still flow through the links to those blocked pages, or would it stay concentrated on the linking page? Your ideas and suggestions will be greatly appreciated.
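For context, the mechanism usually weighed against robots.txt here is the meta robots tag, which removes a page from the index while still letting crawlers follow its links (a sketch, not a recommendation for any particular forum):

    <meta name="robots" content="noindex, follow">

A page blocked in robots.txt, by contrast, is never fetched at all, so nothing on it - links included - can be seen or followed.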
-
Working out exactly how Google is crawling my site if I have loooots of pages
I am trying to work out exactly how Google is crawling my site, including its entry points and the paths it takes from there. The site has millions of pages and hundreds of thousands indexed. I have simple log files with a timestamp and the URL Googlebot was on. Unfortunately there are hundreds of thousands of entries even for a single day, and as it is a massive site I am finding it hard to work out the spider's paths. Is there any way, using the log files and Excel or other tools, to work this out simply? Also, I was expecting the bot to go through each level almost instantaneously, e.g. main page --> category page --> subcategory page (with roughly the same timestamp), but this does not appear to be the case. Does the bot follow a path right through to the deepest level it can reach (or is allowed to) on that crawl and then return to the higher-level category pages at a later time? Any help would be appreciated. Cheers
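As a first pass, a short Python sketch can reduce the raw log to Googlebot hits plus a URL-depth histogram, which already hints at whether the bot sweeps level by level or dives deep and comes back later (the file name and combined-log regex are assumptions about your log format):

    import re
    from collections import Counter

    # Assumed combined log format: ip - - [timestamp] "GET /path HTTP/1.1" ...
    LINE = re.compile(r'\S+ \S+ \S+ \[([^\]]+)\] "GET ([^ "]+)')

    depth_counts = Counter()  # hits per URL depth
    hits = []                 # (timestamp, url) pairs in log order

    with open("access.log") as f:  # hypothetical log file name
        for line in f:
            if "Googlebot" not in line:
                continue  # keep only Googlebot entries
            match = LINE.match(line)
            if not match:
                continue
            timestamp, url = match.groups()
            hits.append((timestamp, url))
            depth_counts[url.strip("/").count("/")] += 1

    for depth, count in sorted(depth_counts.items()):
        print("depth", depth, ":", count, "hits")

Sorting the hits list by timestamp and eyeballing runs of consecutive URLs (or pivoting them by path prefix in Excel) is usually enough to see whether the crawl is breadth-first or depth-first in practice.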