Oh no, Googlebot cannot access my robots.txt file
-
I just received an error message from Google Webmaster Tools.
I wonder if it has something to do with the Yoast plugin.
Could somebody help me with troubleshooting this?
Here's the original message:
Over the last 24 hours, Googlebot encountered 189 errors while attempting to access your robots.txt. To ensure that we didn't crawl any pages listed in that file, we postponed our crawl. Your site's overall robots.txt error rate is 100.0%.
Recommended action
If the site error rate is 100%:
- Using a web browser, attempt to access http://www.soobumimphotography.com//robots.txt. If you are able to access it from your browser, then your site may be configured to deny access to googlebot. Check the configuration of your firewall and site to ensure that you are not denying access to googlebot.
- If your robots.txt is a static page, verify that your web service has proper permissions to access the file.
- If your robots.txt is dynamically generated, verify that the scripts that generate the robots.txt are properly configured and have permission to run. Check the logs for your website to see if your scripts are failing, and if so attempt to diagnose the cause of the failure.
If the site error rate is less than 100%:
- Using Webmaster Tools, find a day with a high error rate and examine the logs for your web server for that day. Look for errors accessing robots.txt in the logs for that day and fix the causes of those errors.
- The most likely explanation is that your site is overloaded. Contact your hosting provider and discuss reconfiguring your web server or adding more resources to your website.
After you think you've fixed the problem, use Fetch as Google to fetch http://www.soobumimphotography.com//robots.txt to verify that Googlebot can properly access your site.
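As an aside, Google's "use a web browser" check can be approximated from the command line, assuming curl is available, by requesting the file with Googlebot's published user-agent string:
curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://www.soobumimphotography.com/robots.txt
A 200 response here while Webmaster Tools still reports errors would suggest the host is blocking Google's IP ranges rather than the user-agent.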
-
I can open the text file, but GoDaddy told me the robots.txt file is not on my server (at the root level).
They also told me that my site is not being crawled because the robots.txt file is not there.
Basically, all of this might have resulted from a plugin I was using (Term Optimizer).
Based on what GoDaddy told me, my .htaccess file got corrupted because of that and had to be recreated. So now the .htaccess file is good.
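For comparison, a freshly recreated WordPress .htaccess normally contains just the stock rewrite block below (a sketch, assuming default permalink settings and no plugin customizations):
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Anything beyond that block would have come from a plugin or manual edits.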
Now what I have to figure out is why my site is not accessible to Googlebot.
Let me know, Keith, if this is a quick fix or if it needs some time to troubleshoot. You can send me a message to discuss fees if necessary.
Thanks again
-
Hi,
You have a robots.txt file here: http://www.soobumimphotography.com/robots.txt
Can you write this again in English so it makes sense?
"I called Godaddy and told me if I used any plug ins etc. Godaddy fixed .htaccss file and my site was up and runningjust fine."
Yes, Google XML Sitemaps will add the location of your sitemap to the robots.txt file - but there is nothing wrong with your robots.txt file.
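For reference, a WordPress robots.txt with the sitemap entry the plugin adds typically looks something like this (the Disallow line and sitemap URL are illustrative and vary by site):
User-agent: *
Disallow: /wp-admin/
Sitemap: http://www.soobumimphotography.com/sitemap.xml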
-
I just called GoDaddy and they told me that I don't have a robots.txt file. Can anyone help with this issue?
So here's what happened:
I purchased Joost de Valk's Term Optimizer to consolidate tags etc.
As soon as I installed & opened it, my site crashed.
I called GoDaddy and they asked if I had used any plugins, etc. GoDaddy fixed the .htaccess file and my site was up and running just fine.
Doesn't a plugin like Google XML Sitemaps automatically generate a robots.txt file?
-
Yes, my site was down.
-
I had a .htaccess issue in the past 24 hours with a plugin, and GoDaddy fixed it for me.
I think this caused the problem.
I just fetched again and I'm still getting an unreachable page. I wonder if I have a bad .htaccess file.
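If you have FTP or shell access, one quick way to rule the .htaccess file in or out is to rename it temporarily and re-fetch (a sketch; permalinks will break while it is renamed, so do it during a quiet moment):
mv .htaccess .htaccess.bak
curl -I http://www.soobumimphotography.com/robots.txt
mv .htaccess.bak .htaccess
If the fetch only succeeds while the file is renamed, a bad rule inside it is the culprit.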
-
Was your site down during this period?
I would recommend setting up Pingdom.com (free site monitoring); it will email you if your site goes down. I suspect this is a hosting-related issue.
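If you would rather roll your own than use a hosted monitor, a minimal cron check looks something like this (a sketch; it assumes a machine with cron, curl, and a working mail command, and the email address is a placeholder):
*/5 * * * * curl -fsS -o /dev/null http://www.soobumimphotography.com/ || echo "Site down at $(date)" | mail -s "Site down" you@example.com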
FYI, I can access your robots.txt fine from here.
-
Hi Bistoss, you should log into Google Webmaster Tools to check the day the problem occurred. It is not uncommon for hosts to have problems that temporarily cause access errors. In some rare cases Google itself could be having problems. For example, in July we had one day with an 11% failure rate; it was the host, and since then there have been no problems. If your problems are persistent, then you may have an issue like this one, caused by old Analytics code: http://blog.jitbit.com/2012/08/fixing-googlebot-cant-access-your-site.html Another thing to look at is any recent change, specifically anything that had to do with .htaccess. Be sure to use Fetch as Google after any changes to verify that Google can now crawl your site. Hope this helps.
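To dig into the logs for robots.txt specifically, something along these lines works on a typical Apache setup (a sketch: the log path varies by host, and the status code is field 9 only in the common combined log format):
grep "robots.txt" /var/log/apache2/access.log | awk '$9 != 200 {print $1, $4, $9}'
Any 403 or 5xx entries from Google's crawl range on the problem day would confirm where the failures came from.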
-
I also use the Robots Meta Configuration plugin.
Related Questions
-
Is there a limit to how many URLs you can put in a robots.txt file?
We have a site that has way too many URLs caused by our crawlable faceted navigation. We are trying to purge 90% of our URLs from the indexes. We put noindex tags on the URL combinations that we do not want indexed anymore, but it is taking Google way too long to find the noindex tags. Meanwhile we are getting excessive URL warnings and have been hit by Panda. Would it help speed the process of purging URLs if we added the URLs to the robots.txt file? Could this cause any issues for us? Could it have the opposite effect and block the crawler from finding the URLs, but not purge them from the index? The list could be in excess of 100MM URLs.
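For reference, faceted parameters are usually blocked with wildcard patterns along these lines (a sketch; the parameter names are hypothetical stand-ins for the real facets):
User-agent: *
Disallow: /*?color=
Disallow: /*?size=
Disallow: /*&page=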
Technical SEO | kcb81780
-
One server, two domains - robots.txt allow for one domain but not other?
Hello, I would like to create a single server with two domains pointing to it. Ex: domain1.com -> myserver.com/ and domain2.com -> myserver.com/subfolder. The goal is to create two separate sites on one server. I would like the second domain (/subfolder) to be fully indexed / SEO friendly, with a robots.txt file that allows search bots to crawl. However, I would like the first domain (server root) to stay non-indexed, with its robots.txt file disallowing any bots / indexing. Does anyone have any suggestions for the best way to tackle this one? Thanks!
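One common way to serve a different robots.txt per hostname from the same server is a rewrite rule; a sketch assuming Apache with mod_rewrite, placed in the root .htaccess (the file names are illustrative):
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?domain1\.com$ [NC]
RewriteRule ^robots\.txt$ /robots-disallow.txt [L]
where robots-disallow.txt contains:
User-agent: *
Disallow: /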
Technical SEO | Dave1000
-
Can the hosting location of image files have a negative effect if 'off-site', such as on the dev's own media server?
Hi, can the hosting location of image files have a negative effect if 'off-site', such as if they are on the developer's own media server, as opposed to on the actual website's server or file structure? In the case I'm looking at, the image files are hosted on a totally separate server (a media subdomain of the developer's site server) from the subject site's dedicated server. Will engines still attribute the properties of files hosted in this manner to the main website (such as file name, alt attributes, etc.)? Or should they really be in the subject site's own media folder? Cheers, Dan
Technical SEO | Dan-Lawrence0
-
Can I remove 301 redirects after some time?
Hello, we have a very large number of 301 redirects on our site and would like to find a way to remove some of them. Is there a time frame after which Google does not need a 301 any more? For example, if A is 301 redirected to B, does Google know after a while not to serve A any more, and replace any requests for A with B? How about any links that go to A? Or is the only option to have all links that pointed to A point to B, after which the 301 can be removed? Thank you for your help!
Technical SEO | Veva0
-
600+ Visitors a day after 6 months, can you do it?
So since the Penguin update, the clients of the company I work for have gradually been losing traffic and money. No one (except for me) has noticed this yet and connected the dots. Yesterday we all got called in for a bollocking, and the manager asked the head of our department if he would be confident of being able to get 600+ visitors a day to an 'average website' that has just started up, to which he replied 'yes'. Since I started here back in February there has not been a single new client that has been able to gain that many visitors (many have not gained even 25% of that figure), which in the post-Panda and post-Penguin world, I find completely understandable. When I first started here they were using SEO 'tactics' which people used to employ 5+ years ago and didn't even use exact match keyword data. I have had a few talks with them about how SEO has changed over the last few years, and they still don't seem to understand that it is now significantly more difficult to gain traffic using SEO than it once was. If you were asked the same question, thinking about the 'average' client you might get, would you be confident enough to guarantee that at the 6 month mark they would be getting 600+ visitors a day?
Technical SEO | Kinsel0
-
Location Based Content / Googlebot
Our website has local content specialized to specific cities and states. The URL structure of this content is as follows: www.root.com/seattle www.root.com/washington When a user comes to a page, we are auto-detecting their IP and sending them directly to the relevant location-based page - much the way that Yelp does. Unfortunately, what appears to be occurring is that Google comes in to our site from one of its data centers, such as San Jose, and is being routed to the San Jose page. When users do a search for relevant keywords, in the SERPs they are being sent to the location pages that the bots appear to be coming in from. If we turn off the auto geo-detection, we think that Google might crawl our site better, but users would then be shown less relevant content on landing. What's the win/win situation here? Also - we appear to have some odd location/destination pages ranking high in the SERPs. In other words, locations that don't appear to be from one of Google's data centers. No idea why this might be happening. Suggestions?
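One common pattern is to only auto-redirect visitors who don't identify as crawlers, so bots can reach every location URL directly. A minimal sketch in Python/Flask (the helper names and routes are hypothetical, and the stub stands in for a real GeoIP lookup):
from flask import Flask, redirect, request

app = Flask(__name__)

CRAWLER_FRAGMENTS = ("googlebot", "bingbot", "slurp")

def looks_like_crawler(user_agent):
    # Cheap user-agent sniff; production code would verify via reverse DNS.
    ua = user_agent.lower()
    return any(fragment in ua for fragment in CRAWLER_FRAGMENTS)

def lookup_city(ip):
    # Stub standing in for a real GeoIP lookup (e.g. a MaxMind database).
    return None

@app.route("/")
def home():
    ua = request.headers.get("User-Agent", "")
    if not looks_like_crawler(ua):
        city = lookup_city(request.remote_addr)
        if city:
            # 302, not 301: the geo suggestion is per-visitor, not permanent.
            return redirect("/" + city, code=302)
    # Crawlers, and visitors we cannot locate, get the generic page with
    # plain links to every location page, so everything stays crawlable.
    return "Generic homepage linking to /seattle, /washington, ..."

@app.route("/<location>")
def location_page(location):
    return "Content for " + location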
Technical SEO | Allstar0
-
Trying to reduce pages crawled to within 10K limit via robots.txt
Our site has far too many pages for our 10K page PRO account which are not SEO worthy. In fact, only about 2000 pages qualify for SEO value. Limitations of the store software only permit me to use robots.txt to sculpt the rogerbot site crawl. However, I am having trouble getting this to work. Our biggest problem is the 35K individual product pages and the related shopping cart links (at least another 35K); these aren't needed as they duplicate the SEO-worthy content in the product category pages. The signature of a product page is that it is contained within a folder ending in -p. So I made the following addition to robots.txt:
User-agent: rogerbot
Disallow: /-p/
However, the latest crawl results show the 10K limit is still being exceeded. I went to Crawl Diagnostics and clicked on Export Latest Crawl to CSV. To my dismay I saw the report was overflowing with product page links, e.g. www.aspenfasteners.com/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm The value for the column "Search Engine blocked by robots.txt" = FALSE; does this mean blocked for all search engines? Then it's correct. If it means blocked for rogerbot, then it shouldn't even be in the report, as the report seems to only contain 10K pages. Any thoughts or hints on trying to attain my goal would REALLY be appreciated, I've been trying for weeks now. Honestly - virtual beers for everyone! Carlo
Technical SEO | AspenFasteners0