Why is blocking the SEOmoz crawler considered a red "error"?
-
-
I think it's because that section is labeled "Crawl Errors," so an area blocked from crawling gets counted as an error. I can see where you're coming from, but think of it as an error encountered while attempting to crawl, not necessarily an error in the site itself.
-
So,
About 4xx errors, read this article: http://webdesign.about.com/cs/http/p/http4xx.htm
For "SEOmoz crawler blocked by robots.txt": in that file you have added two disallowed links, so you are blocking search engine robots from crawling/indexing those pages (see the example below).
About this error, please read here: http://www.google.com/support/webmasters/bin/answer.py?answer=156449
Hope this helps,
thanks
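For reference, a blocking rule in robots.txt usually looks something like this (the directory name here is just a placeholder, not taken from your site):

User-agent: *
Disallow: /old-directory/

Any URL under a disallowed path will show up in the crawl report as blocked, simply because the crawler is told not to fetch it at all.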
-
It seems to me that it should be a "Notice," not an "Error." I am intentionally blocking bots from a defunct directory. Keeping SEOmoz out of an old directory should not (does not?) affect SEO, you know?
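If it helps, here is a quick sanity check you can run to confirm the directory really is blocked for the bots you care about. This is just a sketch: the domain, paths, and user-agent string are placeholders (I believe the SEOmoz/Moz crawler identifies itself as rogerbot, but double-check that against their documentation).

from urllib.robotparser import RobotFileParser

# Point the parser at your live robots.txt (placeholder domain)
rp = RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()

# Check what a given crawler is allowed to fetch (placeholder paths)
print(rp.can_fetch("rogerbot", "http://www.example.com/old-directory/page.html"))  # expect False if the block works
print(rp.can_fetch("rogerbot", "http://www.example.com/"))  # expect True for pages you haven't disallowed

If the first check prints False, the block is doing exactly what you intended, and the entry in the crawl report is just the tool telling you it couldn't fetch those URLs, not that anything is broken.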
-
Sorry about that. I uploaded it 3 times and finally noticed the "Update" button after uploading on the 3rd attempt.
-
Hi, I can't see the attached image. Upload it to ImageShack or a similar service and share the URL here, and I will try to help you.
If the SEOmoz bot finds errors while crawling, it means your site has problems in its code; it fails "search engine friendly" optimisation.
Send me the image and I will try to help you.
-
Where's the attached image? It's only an error because then they can't crawl and build data, but that's just a guess.