How can I fix this home page crawl error?
-
My website shows this crawl error:
"612: Home page banned by error response for robots.txt."
I also did not get any page data in my account for this website.
I did get keyword rankings and traffic data, which I'm guessing came from the analytics account.
URL: www.mississaugakids.com
I'm not really sure what to do with this!
Any help is greatly appreciated.
-
Hi there,
Is this still happening, or does it seem to have been taken care of?
Cheers,
Jane
-
Or just noindex it for now? Seems worth a try unless someone more technical has a better suggestion.
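For reference, noindexing it would just mean serving a robots meta tag in the page's head, something like this sketch:

```html
<!-- Robots meta tag in the <head>; asks search engines not to index the page -->
<meta name="robots" content="noindex">
```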
Are you seeing the error only when Moz crawls the site? Google Webmaster Tools isn't signaling any problems?
D
-
Thank you Donna,
That's pretty much what I had found, but I didn't think that would cause the home page error I'm seeing. I know that events calendar is a problem for speed. Maybe I'll move the calendar off the domain and re-scan. If the crawl is fine, maybe I'll move the calendar to a separate domain permanently.
-
Hi.
According to https://publib.boulder.ibm.com/infocenter/discover/v8r4/index.jsp?topic=/com.ibm.discovery.es.ad.doc/monitoring/iiysawhttp.htm, it's an error that occurs when the crawler attempts to connect to your web server, and a slow site or network might be the cause of the problem.
Your robots.txt is set correctly and your site is accessible (I just tried), perhaps your best bet is to wait and see if the problem recurs.
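For anyone comparing, a minimal permissive robots.txt (one that bans nothing) is just:

```
# Allow every crawler to fetch everything; an empty Disallow bans nothing.
User-agent: *
Disallow:
```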
I also ran a scan of your site using Screaming Frog and got close to 2,000 internal server errors (response code 500), primarily from pages in this directory: http://mississaugakids.com/mississauga-events-calendar/. The pages are loading very slowly. That might be contributing to your problem.
Maybe start there and then circle back to see if the 612 error is recurring? I'm not very technical, but perhaps they're somehow related?
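If you want to spot-check those 500s yourself without a full crawler, a minimal Python sketch (standard library only; the example URLs are just the ones mentioned above) might look like:

```python
# Minimal sketch: report the HTTP status code for a handful of URLs.
from urllib.request import Request, urlopen
from urllib.error import HTTPError


def is_server_error(code):
    """True for 5xx responses like the 500s Screaming Frog reported."""
    return 500 <= code <= 599


def fetch_status(url, timeout=10):
    """Return the HTTP status code for url (an HTTPError still carries a code)."""
    req = Request(url, headers={"User-Agent": "status-check/1.0"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except HTTPError as e:
        return e.code


if __name__ == "__main__":
    # These URLs are illustrative; substitute pages from the crawl report.
    for url in ["http://mississaugakids.com/",
                "http://mississaugakids.com/mississauga-events-calendar/"]:
        print(url, fetch_status(url))
```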
-
Yeah, your robots.txt seems fine, but the answer sounded like the error code could be misleading, so you may be looking in the wrong area for the root of the problem. Wish I could be of more help.
-
Hello William, thanks for the heads-up on that thread. I did see it; however, my robots.txt file should be correct, so the answer in that thread did not help in my case. Looking for other options that could be the problem here. Cheers!
-
This was brought up a little while ago, hopefully Chiaryn's answer here can help: http://moz.com/community/q/without-robots-txt-no-crawling
-
Related Questions
-
Help Center/Knowledgebase effects on SEO: Is it worth my time fixing technical issues on no-indexed subdomain pages?
We're a SaaS company and have a pretty extensive help center resource on a subdomain (help.domain.com). This has been set up and managed over a few years by someone with no knowledge of SEO, meaning technical things like 404 links, bad redirects and http/https mixes have not been paid attention to. Every page on this subdomain is set to NOT be indexed in search engines, but we do sometimes link to help pages from indexable posts on the main domain. After spending time fixing problems on our main website, our site audits now flag almost solely errors and issues on these non-indexable help center pages every week. So my question is: is it worth my time fixing technical issues on a help center subdomain that has all its pages non-indexable in search engines? I don't manage this section of the site, and so getting fixes done is a laborious process that requires going through someone else - something I'd rather only do if necessary.
Technical SEO | mglover19880
-
Googlebot crawl error Javascript method is not defined
Hi All, I have this problem, that has been a pain in the ****. I get tons of crawl errors from "Googlebot" in my logs saying a specific Javascript method does not exist. I then go to the affected page and test in a web browser, and the page works without any Javascript errors. Can someone help with resolving this issue? Thanks in advance.
Technical SEO | FreddyKgapza0
-
Google sees 2 home pages while I only have 1
How do I solve the problem of Google seeing both domain.com and domain.com/index.htm when I only have one file? Will a canonical tag work? If so, which one? Or are there other solutions for a novice? I learned from previous blogs that it needs to be done by the hosting service, but Yahoo has no solution.
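For what it's worth, the usual fix, sketched here with a placeholder domain, is a canonical link element served at both URLs (a host-level 301 from /index.htm to / achieves the same consolidation if your host can configure it):

```html
<!-- Served in the <head> of both domain.com/ and domain.com/index.htm;
     domain.com is a placeholder for the real site. -->
<link rel="canonical" href="http://domain.com/">
```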
Technical SEO | Kurtyj0
-
Odd URL errors upon crawl
Hi, I see this in Google Webmasters, and am now also seeing it here...when a crawl is performed on my site, I get many 500 server error codes for URLs that I don't believe exist. It's as if it sees a normal URL but adds this to it: %3Cdiv%20id= It's like this for hundreds of URLs. Good URL that actually exists http://www.ffr-dsi.com/food-retailing/supplies/ URL that causes error and I have no idea why http://www.ffr-dsi.com/food-retailing/supplies/%3Cdiv%20id= Thanks!
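As an aside, that suffix URL-decodes to a fragment of raw HTML, which usually points to a malformed href in the page template rather than a real link on the site; a quick Python check:

```python
# %3C and %20 are percent-encoded '<' and ' ', so the junk suffix is
# actually a piece of HTML markup that leaked into a link somewhere.
from urllib.parse import unquote

suffix = "%3Cdiv%20id="
print(unquote(suffix))  # -> <div id=
```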
Technical SEO | Matt10
-
Can SEOMoz crawl a single page as opposed to an entire subfolder?
I would like the following page to be crawled: http://www.ob.org/_programs/water/water_index.asp Instead, SEOMoz changes the page to the following subfolder, which is an invalid URL: http://www.ob.org/_programs/water/
Technical SEO | OBIAnalytics0
-
2 links on home page to each category page... is PageRank being watered down?
I am working on a site whose home page contains two links to each category page: one text link and one image link. I think I'm right in thinking that Google will only pay attention to the anchor text/alt text of the first link it spiders, with the anchor text/alt text of the second being ignored. That is not my question, however. My question is about the PageRank passed to each category page. Because of the double links on the home page, my reckoning is that PR is being divided up twice as many times as necessary. Am I also right in thinking that if Google ignores the second identical link on a page, only one share of this divided-up PR will be passed to each category page rather than two, hence horribly watering down the link juice being passed to each category page? Please help me win this argument with a developer and improve the ranking potential of the category pages on the site 🙂
Technical SEO | QubaSEO0
-
What to do about Google Crawl Error due to Faceted Navigation?
We are getting many crawl errors listed in Google Webmaster Tools. We use faceted navigation with several variables, and Google sees these URLs as "500" response codes. It looks like Google is truncating the URL. Can we tell Google not to crawl these search results using part of the URL ("sort=", for example)? Is there a better way to solve this?
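One option, sketched on the assumption the sort parameter appears in the query string: Google honors wildcard patterns in robots.txt, so you can keep crawlers out of the faceted variants. Note this only hides the 500s from Google; it doesn't fix whatever causes them.

```
# Block crawling of any URL containing "sort=" (the parameter named in
# the question; adjust to the real faceted-navigation parameters).
User-agent: *
Disallow: /*sort=
```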
Technical SEO | EugeneF0
-
301 lots of old pages to home page
Will it hurt me if I redirect a few hundred old pages to my home page? I currently have a mess on my hands, with many 404s showing up after moving my site to a new ecommerce server. We have been on the new server for 2 years but still have 337 404s showing up in Google Webmaster Tools. I don't think it would affect users, as very few people would find those old links, but I don't want to mess with Google. Also, how much are those 404s hurting my rank?
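If those old URLs share a common prefix, a bulk rule is less fragile than hundreds of one-off redirects. An Apache .htaccess sketch, where the /old-shop/ prefix and the domain are hypothetical stand-ins for your real paths:

```
# 301 anything under the hypothetical old path to the home page.
RedirectMatch 301 ^/old-shop/ http://www.example.com/
```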
Technical SEO | bhsiao1