A suggestion to help with Linkscape crawling and data processing
-
Since you guys are understandably struggling with crawling and processing the sheer number of URLs and links, I came up with this idea:
In a similar way to how SETI@home works (is that still a thing? Google says yes: http://setiathome.ssl.berkeley.edu/), could SEOmoz use distributed computing amongst its users to help with the data processing? Would people be happy to offer up their idle processor time and (optionally) internet connections to help produce more accurate, broader data?
Are there enough users of the data to make distributed computing worthwhile?
Perhaps those who crunched the most data each month could receive Moz points or a free month of Pro.
I have submitted this as a suggestion here:
http://seomoz.zendesk.com/entries/20458998-crowd-source-linkscape-data-processing-and-crawling-in-a-similar-way-to-seti-home
-
Sean - I share Rand's sentiments; thanks so much for the suggestion!
We have considered distributed crawling in the past (and even distributed rank checking, since results would then come from each user's locale), but it brings a whole different set of challenges. For example, you have to handle all the edge cases: what if a user's computer isn't on, or loses connectivity? What if we crawl too fast and the user gets blocked from a site? How do we write all that data back securely? (A toy sketch of one approach to the offline-machine problem appears at the end of this reply.)
Of course, all of these concerns can be overcome, but right now we feel like we have a good handle on the problems, and it will be much faster for us to simply fix what we have.
That said, I know all of us are so appreciative of the ideas and support, and we will have something really great soon!
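To make the offline-machine edge case concrete: a common pattern for this in distributed systems is a lease-based work queue, where a URL handed to a volunteer machine is automatically reclaimed and reassigned if no result comes back before the lease expires. Below is a minimal, hypothetical sketch - an in-memory toy, not SEOmoz's actual design, and every name in it is invented:

    import time

    class CrawlCoordinator:
        """Toy lease-based work queue. URLs are leased to volunteer
        workers and silently requeued when a lease expires, which covers
        machines that are switched off or have lost connectivity."""

        def __init__(self, lease_seconds=300):
            self.lease_seconds = lease_seconds
            self.pending = []   # URLs nobody is currently crawling
            self.leases = {}    # url -> (worker_id, lease_expiry)

        def add_url(self, url):
            self.pending.append(url)

        def checkout(self, worker_id):
            """Reclaim expired leases, then hand one URL to the worker."""
            now = time.time()
            for url in [u for u, (_, exp) in self.leases.items() if exp < now]:
                del self.leases[url]      # holder presumed offline
                self.pending.append(url)  # another worker will retry it
            if not self.pending:
                return None
            url = self.pending.pop()
            self.leases[url] = (worker_id, now + self.lease_seconds)
            return url

        def submit(self, worker_id, url, payload):
            """Accept a result only from the current leaseholder, so a
            stale worker that reappears can't overwrite newer data."""
            lease = self.leases.get(url)
            if lease is None or lease[0] != worker_id:
                return False
            del self.leases[url]
            # A real system would also verify or cross-check `payload`
            # before trusting data from an untrusted volunteer machine.
            return True

Per-site politeness - making sure no volunteer crawls one domain fast enough to get blocked - would sit on top of this, for example by leasing from per-domain queues with rate limits, and results would need to be verified or cross-checked before being trusted.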
-
Thanks a ton, Sean! We have considered distributed computing as a way to help crawl, index, process, etc. It's so flattering and humbling to hear that you'd be willing to help out, and that the community would, too.
For now, we believe we can reach the index size, quality, and freshness we're after using our hosted system, but the engineering team will certainly be encouraged to hear that folks in our community might contribute to this. Distributed systems present their own challenges, and we'd have to write that code from scratch, but if we find we can't do what we want with our existing infrastructure, we might reach out.
BTW - I wanted to let folks know that the team here feels very confident that come December/January, we'll be producing indices that reach exceptional quality bars. The problems we face are largely known, and we now have the team and the solutions to tackle them, so we're pretty excited.
Related Questions
-
Why did Moz crawl our development site?
In our Moz Pro account we have one campaign set up to track our main domain. This week Moz threw up around 400 new crawl errors, 99% of which were meta noindex issues. Somehow Moz found our development/staging site and crawled that. I have no idea how it was able to do this - the robots.txt is set to disallow all and the site is password protected. It looks like Moz ignored the robots.txt, but I still don't understand how it completed a crawl - it should have received a 401 Unauthorized and gone no further. How do I a) clean this up without manually ignoring each issue, and b) stop it from happening again (a quick verification sketch follows below)? Thanks!
Moz Pro | MultiTimeMachine
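As a quick sanity check on the question above, you can confirm that the staging host really refuses anonymous requests and serves a blocking robots.txt. This is a sketch, not official Moz guidance, and staging.example.com is a placeholder for your own dev hostname:

    import urllib.request
    import urllib.error

    STAGING = "https://staging.example.com"  # placeholder for your dev host

    def check_staging_blocked():
        # An anonymous page request should be refused with 401.
        try:
            urllib.request.urlopen(STAGING + "/", timeout=10)
            print("WARNING: homepage served without credentials")
        except urllib.error.HTTPError as e:
            print("homepage status:", e.code, "(401 expected)")
        except urllib.error.URLError as e:
            print("could not reach host:", e.reason)

        # robots.txt should be reachable *without* auth and disallow all;
        # if robots.txt itself returns a 4xx, some crawlers treat the
        # whole site as crawlable.
        try:
            body = urllib.request.urlopen(STAGING + "/robots.txt", timeout=10).read()
            print("Disallow: / present:", b"Disallow: /" in body)
        except urllib.error.HTTPError as e:
            print("robots.txt status:", e.code)

    check_staging_blocked()

Serving robots.txt unauthenticated while password-protecting everything else covers both bases: well-behaved crawlers stop at the disallow, and everything else hits the 401.
-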
Site Crawl 4xx Errors?
Hello! When I check our website's critical crawler issues in Moz Site Crawl, I'm seeing over 1,000 pages with a 4xx error. All of the affected pages appear to be the brand and product pages on our website, but with /URL appended to each permalink. For example, we have a page for a brand called Davinci at https://kannakart.com/davinci/. In the site crawler, I'm seeing the 4xx for this URL: https://kannakart.com/davinci/URL. Could a plugin on our site be generating these URLs (a short demonstration of how such URLs arise is sketched below)? If they're going to be an issue, I'd like to remove them, but I'm not sure where to begin. Thanks in advance for the help, -Andrew
Moz Pro | mostcg
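One hedged guess at the mechanism in the question above: if a template or plugin emits a literal placeholder such as <a href="URL">, crawlers resolve that relative link against each page they find it on, which produces exactly this /URL suffix pattern:

    from urllib.parse import urljoin

    page = "https://kannakart.com/davinci/"
    print(urljoin(page, "URL"))  # -> https://kannakart.com/davinci/URL

Searching the rendered page source (or the theme/plugin templates) for href="URL" would confirm or rule this out.
-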
Moz crawl duplicate page issues
Hi. According to the Moz crawl of my website, I have in the region of 800 pages that are considered internal duplicates. I'm a little puzzled by this, even more so because some of the pages it lists as duplicates of another are not. For example, the Moz crawler considers page B to be a duplicate of page A in the URLs below (I'm not sure of the live-link policy, so I've put a space in the URLs to 'unlive' them):

Page A: http:// nuchic.co.uk/index.php/jeans/straight-jeans.html?manufacturer=3751
Page B: http:// nuchic.co.uk/index.php/catalog/category/view/s/accessories/id/92/?cat=97&manufacturer=3603

One is a filter page for Curvety Jeans and the other a filter page for Charles Clinkard Accessories. The page titles are different and the page content is different, so I've no idea why these would be considered duplicates. Thin, maybe, but not duplicate. Likewise, pages B and C are considered duplicates of page A in the following:

Page A: http:// nuchic.co.uk/index.php/bags.html?dir=desc&manufacturer=4050&order=price
Page B: http:// nuchic.co.uk/index.php/catalog/category/view/s/purses/id/98/?manufacturer=4001
Page C: http:// nuchic.co.uk/index.php/coats/waistcoats.html?manufacturer=4053

Again, these are product filter pages that the crawler would have found through the site's filtering system, but I cannot find what makes pages B and C duplicates of A. Page A is a filtered result for Great Plains Bags (filtered from the general bags collection), page B is the filtered results for Chic Look Purses from the purses section, and page C is the filtered results for Apricot Waistcoats from the waistcoat section. I'm keen to fix the duplicate content errors before the site goes properly live at the end of this month - that's why anyone kind enough to check the links will see a few design issues - but to fix the problem I first need to work out what it is, and I can't in this case. Can anyone else see how these pages could be considered duplicates of each other (one possible remedy is sketched below)? Checking I've not gone mad! Thanks, Carl
Moz Pro | daedriccarl
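Duplicate flagging like this is likely based on overall content similarity, so thin filter pages that share most of their template markup can cross the threshold even with different titles and products. One common remedy is to point every filtered variant at its base category page via rel="canonical". The sketch below assumes those query keys are the filter parameters; it is an illustration, not official Moz guidance:

    from urllib.parse import urlsplit, urlunsplit

    FILTER_PARAMS = {"manufacturer", "dir", "order", "cat"}  # assumed filter keys

    def canonical_url(url):
        """Drop filter parameters so every filtered variant maps to its
        base category URL, suitable for a <link rel="canonical"> tag."""
        parts = urlsplit(url)
        kept = [p for p in parts.query.split("&")
                if p and p.split("=", 1)[0] not in FILTER_PARAMS]
        return urlunsplit((parts.scheme, parts.netloc, parts.path, "&".join(kept), ""))

    print(canonical_url("http://nuchic.co.uk/index.php/bags.html?dir=desc&manufacturer=4050&order=price"))
    # -> http://nuchic.co.uk/index.php/bags.html
-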
Moz only crawled one page of my site
I recently moved my shopping cart from one provider to another, and today Moz only crawled one page. Could this be because Google has not indexed the new site yet, or should I be concerned? If it helps, I pointed the DNS at the new cart on Monday night; I would have expected it to be indexed by now.
Moz Pro | SmartVapes
-
Discrepancy in OSE data
When I look up a domain in OSE, it shows that there are 3K+ links from 349 domains. However, when I download the CSV, I only see around 1,500 links. Why the discrepancy?
Moz Pro | inhouseseo
-
Crawl Diagnostics Summary
Sorry if I'm not asking in the right place. The Crawl Diagnostics Summary says this, right? "To get you started quickly Roger is crawling up to 250 pages on your site. You should see these results within two hours. The full crawl will complete within 7 days." It's been over a day and it still doesn't show anything - it just says "Processing Crawl Data for 358 pages". How long should I wait?
Moz Pro | Dussk
-
How do I see the pages crawled by SEOmoz?
My SEOmoz campaign says that "n" pages were crawled. How do I get access to the list of pages SEOmoz crawled?
Moz Pro | IM_Learner