Why is blocking the SEOmoz crawler considered a red "error"?
-
I think it's because that section is labeled "crawl errors": an area blocked from crawling gets counted as an error. I can see where you're coming from, but think of it as an error encountered while attempting to crawl, not necessarily an error found in the site itself.
-
So,
for 4xx errors, read this article: http://webdesign.about.com/cs/http/p/http4xx.htm
As for the "SEOmoz crawler blocked by robots.txt" warning: you have added two entries to that file, and they block robots from crawling/indexing those pages.
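If the blocking is intentional, you can scope the rule to a single crawler rather than all robots. A minimal robots.txt sketch, assuming a hypothetical defunct directory at /old-directory/ and Moz's crawler user-agent, rogerbot:

    # keep only Moz's crawler out of the retired directory
    User-agent: rogerbot
    Disallow: /old-directory/    # placeholder path; substitute your own directory

Crawlers not named by any record fall back to the * record (or, if there is none, are allowed everywhere), so under this sketch search engine bots would still be free to crawl the directory.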
For more on this error, please read here: http://www.google.com/support/webmasters/bin/answer.py?answer=156449
Hope this helps,
thanks
-
It seems to me that it should be a "Notice," not an "Error." I am intentionally blocking bots from a defunct directory. Keeping SEOmoz out of an old directory should not (does not?) affect SEO, you know?
-
Sorry about that. I uploaded it 3 times and finally noticed the "Update" button after uploading on the 3rd attempt.
-
Hi, I can't see the attached image. Upload it to ImageShack or something similar, share the URL here, and I will try to help you.
If the SEOmoz bot finds errors while crawling, it means your site has problems in its code; it fails "search engine friendly" optimisation.
Send me the image and I will try to help you.
-
Where's the attached image? It's only an error because then they can't crawl and build data, but that's just a guess.