Why is blocking the SEOmoz crawler considered a red "error"?
-
-
I think because that section is labeled "Crawl Errors," an area blocked from crawling is counted as an error. I can see where you're coming from, but think of it as an error encountered while attempting to crawl, not necessarily an error in the site itself.
-
So:
For 4xx errors, read this article: http://webdesign.about.com/cs/http/p/http4xx.htm
For "SEOmoz crawler blocked by robots.txt": in that file you have added two rules, and they are blocking search engine robots from crawling/indexing those pages for their databases.
For more about this error, please read here: http://www.google.com/support/webmasters/bin/answer.py?answer=156449
Hope this helps,
thanks
-
It seems to me that it should be a "Notice" not an "Error." I am intentionally blocking bots from a defunct directory. Keeping SEOmoz out of an old directory should not (does not?) affect SEO, you know?
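For anyone following along, here is a minimal sketch of a robots.txt rule that intentionally blocks a defunct directory, verified with Python's standard-library robotparser (the directory and domain names are hypothetical, not from the original post):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block every crawler from an old, defunct directory.
rules = [
    "User-agent: *",
    "Disallow: /old-directory/",
]

rp = RobotFileParser()
rp.parse(rules)

# URLs under the blocked directory are excluded; everything else stays crawlable.
print(rp.can_fetch("rogerbot", "http://example.com/old-directory/page.html"))  # False
print(rp.can_fetch("rogerbot", "http://example.com/current/page.html"))        # True
```

A crawler hitting the first URL would correctly report it as "blocked by robots.txt" even though nothing is wrong with the site itself.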
-
Sorry about that. I uploaded it 3 times and finally noticed the "Update" button after uploading on the 3rd attempt.
-
Hi, I can't see the attached image; upload it to ImageShack or a similar host, share the URL here, and I will try to help you.
If the SEOmoz bot finds errors while crawling, it means your site has flaws in its code; it fails "search engine friendly" optimization.
Send me the image and I will try to help you.
-
Where's the attached image? It's only an error because then they can't crawl and build data, but that's just a guess.
Related Questions
-
404 Errors generating in WP
Our crawl reports are generating back several 404 errors for pages with URLs that look like: /category/consulting/page/5/ The tag changes, the page number changes, but the result is always the same: a big glaring 404. Our sites are built on WordPress Multisite, and I am fairly certain this issue is on the WP end, but I can't figure out why it is generating pages out to infinity, essentially, from the tags and categories. It is worse on some sites than others, but is happening across the board (my initial concern was that it might be a theme issue, but that does not seem to be the case). If anyone has run into this issue and knows a fix, your insight would be greatly appreciated. Thanks!
Moz Pro | SIXSEO
When do the "just discovered" links on Open Site Explorer count?
I have been working hard to get follow backlinks but they have all been in the Just Discovered part of Open Site Explorer for a long time. So they don't count in my stats for Domain Authority and such. When do they move OUT of Just Discovered?
Moz Pro | dealblogger
Duplicate page titles in SEOMoz
My on-page reports are showing a good number of duplicate title tags, but they are all because of a URL tracking parameter that tells us which link the visitor clicked on. For example, http://www.example.com/example-product.htm?ref=navside and http://www.example.com/example-product.htm are the same page, but are treated as two different URLs in SEOMoz. This is inflating the number of duplicate page titles in my reports. This has not been a problem with Google, but SEOMoz is treating it like this and it's confusing my data. Is there a way to specify this as a URL parameter in the Moz software? Or does anybody have another suggestion? Should I specify this in GWT and BWT?
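Until the crawler supports parameter handling, one way to sanity-check how many duplicates are really just tracking variants is to normalize the URLs by stripping the tracking parameter before comparing. A minimal sketch (the `ref` parameter name comes from the example above; treating it as the only tracking parameter is an assumption):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Assumption: "ref" is a pure tracking parameter that never changes page content.
TRACKING_PARAMS = {"ref"}

def normalize(url: str) -> str:
    """Drop known tracking parameters so duplicate URLs compare equal."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

a = normalize("http://www.example.com/example-product.htm?ref=navside")
b = normalize("http://www.example.com/example-product.htm")
print(a == b)  # True
```

Running the crawl export through a filter like this shows which "duplicates" collapse to a single canonical URL; a rel=canonical on the page accomplishes the same thing for crawlers.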
Moz Pro | InetAll
Slowing down SEOmoz Crawl Rate
Is there a way to slow down the SEOmoz crawl rate? My site is pretty huge and I'm getting 10k pages crawled every week, which is great. However, I sometimes get multiple page requests in one second, which slows down my site a bit. If this feature exists, I couldn't find it; if it doesn't, it would be a great one to have, similar to how Googlebot does it. Thanks.
Moz Pro | corwin
SEOMoz says I have errors but Google Webmaster doesn't show them - which one is right?
I have about 350 websites, all created on the FarCry 4.0 CMS platform. When I do a site crawl using any SEO tool (SEOmoz, Raven, Screaming Frog), it comes back telling me I have duplicate titles, descriptions, and content for a bunch of my pages. The pages are the same page; it's just that the crawl is showing both the object ID URL and the friendly URL, which is auto-created in the CMS, as different pages. EXAMPLE: these are the same page but are recognised as different in the SEOmoz crawl test and therefore flagged as having duplicate title tags and content:
www.westendautos.com.au/go/latest-news-and-specials
www.westendautos.com.au/index.cfm?objectid=9CF82BBD-9B98-B545-33BC644C0FA74C8E
Google Webmaster, however, does not show me these errors. It shows no errors at all. Now I believe I can fix this by chucking in a rel=canonical at the top of each page (a big job over 350 sites). But even so, my problem is that the website developers are telling me that SEOmoz and all the other tools are wrong - that Google will see these the way it should, and that the object IDs would not get indexed (although I have seen at least one object ID show up in the SERPs). Do I believe the developers and trust that Google has it sorted, or go through the process of hassling the developers to get a rel=canonical added to all the pages? (The issue sees my homepage as about four different pages: www.domain.com/, www.domain.com/home, /index, and the object ID.)
Moz Pro | cassi
How to get rid of the message "Search Engine blocked by robots.txt"
During the Crawl Diagnostics of my website, I got the message "Search Engine blocked by robots.txt" under Most Common Errors & Warnings. Please let me know the procedure by which the SEOmoz PRO crawler can completely crawl my website. Awaiting your reply at the earliest. Regards, Prashakth Kamath
Moz Pro | 1prashakth
SEOmoz Q&A down last few days
It seems the Q&A section has had some issues since the 25th. Users could post new questions, but they were not visible to most users. Roger was caught slacking! The issue appears to be resolved at this time. I just wanted to share that anyone who asked a question in the past few days and did not receive a response may wish to repost it, as many readers will not go back and check questions from prior days.
Moz Pro | RyanKent