Getting an error most of the time in Linkscape
-
Hi,
I am getting this error most of the time in Linkscape, and have been for the past month:
"Sorry dude, no inlinks found matching this criteria."
Please advise: is this a bug? The sites I am checking in Linkscape had a lot of pages crawled by SEOmoz earlier.
Thanks,
Preet
-
Hey Preet,
That's a good question with a lot of information involved, actually! I'm so sorry that you still haven't been able to see your links in Linkscape. Most new sites and links will be indexed by our spiders and available in Linkscape and Open Site Explorer within 60 days, but some take even longer for a number of reasons, including the crawlability of sites, the number of inbound links pointing to them, and the depth of pages in subdirectories. Just so you know, here's how we build our index: we take the last index, select the 10 billion URLs with the highest mozRank (with a fixed limit on some of the larger domains), and crawl from the top down until we've crawled 40 billion pages (roughly a quarter of the size of Google's index). Therefore, if a site is not linked to by one of these seed URLs (or by one of the URLs they link to in the next update), it won't show up in our index.
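To make the seed-and-crawl idea above concrete, here is a toy sketch in Python. It is purely illustrative, not Moz's actual code; the function names and the tiny numbers are mine, standing in for the 10-billion-seed, 40-billion-page budget described above.

```python
# Toy sketch of a rank-seeded, breadth-first web crawl (illustrative only).
# Seeds come from the previous index, ordered by rank; crawling stops when
# the page budget is exhausted.
from collections import deque

def build_index(previous_index, link_graph, seed_count=10, page_budget=40):
    # previous_index: list of (url, rank) pairs from the last index
    # link_graph: dict mapping url -> list of URLs it links out to
    seeds = [url for url, rank in
             sorted(previous_index, key=lambda p: p[1], reverse=True)[:seed_count]]
    crawled, queue = set(), deque(seeds)
    while queue and len(crawled) < page_budget:
        url = queue.popleft()
        if url in crawled:
            continue
        crawled.add(url)
        queue.extend(link_graph.get(url, []))  # follow outlinks breadth-first
    return crawled
```

A page that no seed (or page reachable from a seed) links to never enters `crawled`, which is exactly why a new site with no inbound links can stay out of the index for several updates.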
We update our Linkscape index every 3 to 5 weeks. Crawling the whole internet to look for links takes 2-3 weeks, and then we've got 1-2 weeks of processing to do on those links to determine which are the most important, and so on. You can see how often we update, along with planned updates, here: http://seomoz.zendesk.com/entries/345964-linkscape-update-schedule
Linkscape focuses on a breadth-first approach, and thus we nearly always have content from the homepages of websites, externally linked-to pages, and pages higher up in a site's information hierarchy. However, deep pages that are buried beneath many layers of navigation are sometimes missed, and it may be several index updates before we catch all of these.
If our crawlers or data sources are blocked from reaching those URLs, they may not be included in our index (though links that point to those pages will still be available). Finally, the URLs seen by Linkscape must be linked to by other documents on the web, or our index will not include them.
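If you want to check for yourself whether a crawler would be blocked, Python's standard library can parse a robots.txt file. A minimal sketch (the robots.txt content and URLs here are just examples, and "rogerbot" is used as a stand-in for a crawler's user agent):

```python
# Check whether a crawler is allowed to fetch a URL, using the
# standard-library robots.txt parser (robots.txt shown inline for the example).
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
"""
parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("rogerbot", "http://example.com/page"))       # allowed
print(parser.can_fetch("rogerbot", "http://example.com/private/x"))  # blocked
```

In practice you would point `RobotFileParser` at your live `/robots.txt` to confirm nothing important is disallowed.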
For now, the best thing you can do to help your domain become indexed is to work on building links from sites with high mozRank. If you need help with that, you may want to ask the PRO Q&A community here!
I hope this information helps! While the site and links may not be indexed yet, give it some time - maybe we'll see it in OSE next month.
Best of luck,
Aaron
-
Can you share the site you are getting that error for?
You might also want to email help@seomoz.org
Related Questions
-
Unsolved: Getting keywords to rank on new landing pages
I've built new landing pages for a website and have loaded them with researched keywords in the content, alt image attributes, metas, etc. But after a number of crawls, the keywords are currently being matched to other existing pages on the website. Does anyone have any advice on 'unlatching' these keywords from those pages and instead getting them to match the pages that have been optimised for them? Many thanks!
Moz Pro | Darkstarr6660 -
What's the best way to eliminate "429 : Received HTTP status 429" errors?
My company website is built on WordPress. It receives very few crawl errors, but it does regularly receive a few (typically 1-2 per crawl) "429 : Received HTTP status 429" errors through Moz. Based on my research, my understanding is that my server is essentially telling Moz to cool it with the requests. That means it could be doing the same for search engines' bots and even visitors, right? This creates two questions for me, which I would greatly appreciate your help with:
1. Are "429 : Received HTTP status 429" errors harmful for my SEO? I imagine the answer is "yes" because Moz flags them as high-priority issues in my crawl report.
2. What can I do to eliminate "429 : Received HTTP status 429" errors?
Any insight you can offer is greatly appreciated! Thanks,
Ryan
Moz Pro | ryanjcormier -
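HTTP 429 means the server is rate-limiting the client, so a well-behaved crawler typically retries with a delay, honouring the server's Retry-After header when one is sent. A rough sketch of that client-side delay logic (the helper name and defaults are mine, not part of Moz or WordPress):

```python
# Sketch of retry-delay logic for HTTP 429 responses: honour the server's
# Retry-After header when present, otherwise back off exponentially with a cap.
def retry_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    if retry_after is not None:
        return float(retry_after)           # server told us exactly how long to wait
    return min(cap, base * (2 ** attempt))  # 1s, 2s, 4s, 8s, ... up to the cap
```

On the server side, the usual fix is the reverse: raise or exempt the rate limit for legitimate crawlers (e.g. Moz's user agent) in whatever throttling plugin or host-level rule is returning the 429s.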
404 Error
When I get a 404 error report like the one below, I can't find a reference to the specific link on the page that is in error. Can you help me?
404 : Error http://www.boxtheorygold.com/blog/www.boxtheorygold.com/blog/bid/23385/Measuring-Your-Business-Processes-Pays-Big-Dividends
Thanks, Ron Carroll
Moz Pro | Rong0 -
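One clue in that report: the URL contains the site's own hostname twice, which usually means a link was written as `href="www.example.com/..."` without `http://`, so browsers resolved it as a relative path. A quick sketch for spotting such URLs (the helper name is mine, purely for illustration):

```python
# Detect URLs where the site's own hostname reappears inside the path --
# the typical fingerprint of an href written without a scheme ("http://"),
# which the browser then resolved as a relative link.
from urllib.parse import urlparse

def looks_like_missing_scheme(url):
    parsed = urlparse(url)
    return parsed.hostname is not None and parsed.hostname in parsed.path

print(looks_like_missing_scheme(
    "http://www.boxtheorygold.com/blog/www.boxtheorygold.com/blog/bid/23385"))  # True
```

Running a crawl export through a check like this can narrow down which pages contain the malformed link.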
Does SEOmoz give a way to know which link on which page produces the 404 errors that SEOmoz is telling me I have?
SEOmoz gives me a report of 404 errors on my site. Does it give a way to know which link on which page produces each error?
Moz Pro | MeridianGroup0 -
Is there a quick and easy way to fix 8776 errors, 19131 warnings and 164 notices on a campaign?
My account dashboard shows several types of errors, warnings and notices. I am just asking if there is a quick way to fix this.
Moz Pro | Jchapman0 -
Should I worry about duplicate content errors caused by trailing slashes?
Frequently we get red-flagged for duplicate content in the Moz Pro Crawl Diagnostics for URLs with and without a trailing slash. For example, www.example.com/ gets flagged as a duplicate of www.example.com. I assume that we could rel=canonical this if needed, but our assumption has been that Google is clever enough to discount this as a genuine crawl error. Can anyone confirm or deny that? Thanks.
Moz Pro | MackenzieFogelson0 -
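The trailing-slash pair above is a pure URL-normalization issue: the two forms name the same page and differ only in the final character. A minimal sketch of the comparison a crawler (or your own dedupe script) might apply before flagging duplicates (the helper name is mine):

```python
# Treat URLs that differ only by a trailing slash as the same page
# by stripping trailing slashes before comparing.
def same_page(url_a, url_b):
    return url_a.rstrip("/") == url_b.rstrip("/")

print(same_page("http://www.example.com/", "http://www.example.com"))  # True
```

Serving a rel=canonical pointing at whichever form you prefer (and 301-redirecting the other) resolves the ambiguity for every crawler at once, rather than relying on each one to normalize.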
4xx (not found) errors seem spurious, caused by a "\" added to the URL
Hi SEOmoz folks, We're getting a lot of 404 (not found) errors in our weekly crawl. However, the weird thing is that the URLs in question all have the same issue: they are all valid URLs with a backslash ("\") added. In URL encoding, this is an extra %5C at the end of the URL. Even weirder, we do not have any such URLs on our (WordPress-based) website. Any insight on how to get rid of this issue? Thanks
Moz Pro | GPN0 -
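A common server-side mitigation for stray `%5C` URLs like those above is a redirect rule that strips the trailing backslash before serving the page. A sketch of the cleanup step in Python (the helper name is mine; on WordPress this would more likely live in an .htaccess rewrite rule):

```python
# Strip stray trailing backslashes from a URL. They often arrive
# URL-encoded as %5C, so decode before stripping.
from urllib.parse import unquote

def clean_url(url):
    return unquote(url).rstrip("\\")

print(clean_url("http://example.com/page%5C"))  # http://example.com/page
```

Finding where the `\` is being appended (often a template or a third-party widget emitting `href="...\"`) is still the real fix; the redirect just stops the 404s in the meantime.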
Solving duplicate content errors for what is effectively the same page.
Hello,
I am trying out SEOmoz and I quite like it. I've managed to remove most of the errors on my site; however, I'm not sure how to get around this last one. If you look at my errors you will see most of them revolve around pairs like this:
http://www.containerpadlocks.co.uk/categories/32/dead-locks
http://www.containerpadlocks.co.uk/categories/32/dead-locks?PageSize=9999
These are essentially the same page, because the Dead Locks category does not contain enough products to span more than one page, so clicking 'View all products' on my webpage returns the same results. This functionality works correctly for categories with more than the 20-per-page limit. My question is, should I be:
Removing the link to 'show all products' (which adds the PageSize query string value) when no more products will be shown?
Or putting a noindex meta tag on the page?
Or some other action entirely?
Looking forward to your reply, and to you showing how effective Pro is. Many thanks,
James Carter
Moz Pro | jcarter0 -
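For the PageSize case above, a third option alongside removing the link or adding noindex is to canonicalize: point rel=canonical at the URL with the pagination parameter stripped. A sketch of computing that canonical form (the function name is mine, purely illustrative):

```python
# Build the canonical form of a URL by removing a pagination query
# parameter, leaving any other parameters intact.
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def drop_param(url, param="PageSize"):
    parts = urlparse(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunparse(parts._replace(query=urlencode(query)))

print(drop_param(
    "http://www.containerpadlocks.co.uk/categories/32/dead-locks?PageSize=9999"))
```

The two URLs then resolve to one canonical address, so duplicate-content flags disappear without hiding the 'view all' link from users.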