Robots.txt error message in Google Webmaster from a later date than the page was cached, how is that?
-
I have error messages in Google Webmaster that state that Googlebot encountered errors while attempting to access the robots.txt. The last date that this was reported was on December 25, 2012 (Merry Christmas), but the last cache date was November 16, 2012 (http://webcache.googleusercontent.com/search?q=cache%3Awww.etundra.com/robots.txt&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a).
How could I get this error if the page hasn't been cached since November 16, 2012?
-
That's what our next move is. I'll let you all know what comes of it. Thanks for the response!
-
I've noticed several discrepancies in Google's cache system. Many of its records lag and are not updated immediately; it could simply be an error in cross-data population. If you really want to know the last time Googlebot visited your website, check your server logs.
If your server logs don't show a visit from Google on the 25th, then we really do have to wonder. My guess is that Webmaster Tools reflects the correct date. Either way, I'd check for the error they are reporting.
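Checking the logs can be scripted. Here is a minimal sketch, assuming an Apache-style combined log format (the sample lines, IPs, and timestamps below are hypothetical):

```python
import re

# Hypothetical Apache combined-log sample lines; your server's format may differ.
LOG_LINES = [
    '66.249.66.1 - - [25/Dec/2012:03:14:07 +0000] "GET /robots.txt HTTP/1.1" 503 0 '
    '"-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '10.0.0.5 - - [25/Dec/2012:04:00:00 +0000] "GET / HTTP/1.1" 200 5120 "-" "Mozilla/5.0"',
]

def googlebot_hits(lines, date_fragment):
    """Return (path, status) pairs for requests whose user-agent mentions
    Googlebot on the given date fragment, e.g. '25/Dec/2012'."""
    hits = []
    for line in lines:
        if "Googlebot" not in line or date_fragment not in line:
            continue
        # Pull the request path and the HTTP status code out of the log line.
        m = re.search(r'"(?:GET|HEAD|POST) (\S+)[^"]*" (\d{3})', line)
        if m:
            hits.append((m.group(1), int(m.group(2))))
    return hits

print(googlebot_hits(LOG_LINES, "25/Dec/2012"))  # -> [('/robots.txt', 503)]
```

Also verify that hits claiming to be Googlebot really come from Google (e.g. via reverse DNS), since the user-agent string can be spoofed.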
Related Questions
-
Stuck on the 2nd page of Google! Help
I run a McAfee technical support website. It has been 2 to 3 months since I started practicing SEO on it. Things went smoothly until it reached the second page of Google, but now it doesn't rank any higher; it's frozen. Can I get any advice and suggestions for my website to break out of the second-page cage? My website: **mcafee.com/activate**
Intermediate & Advanced SEO | six_figures0
Blocking Dynamic Search Result Pages From Google
Hi Mozzers, I have a quick question that probably won't have just one solution. Most of the pages that Moz flagged for duplicate content were dynamic search result pages on my site. Could this be a simple fix of just blocking these pages from Google altogether? Or would Moz then report these pages as critical crawl errors instead of content errors? Ultimately, I considered whether I wanted to rank for these pages, but I don't think it's worth it, considering I have multiple product pages that rank well. In my case, it's probably best to leave out these search pages, since they have more of a negative impact on my site, producing more content errors than I would like. So would blocking these pages from the search engines and Moz be a good idea? Maybe a second opinion would help: what do you think I should do? Is there another way to go about this, and would blocking these pages reduce the number of content errors on my site? I appreciate any feedback! Thanks! Andrew
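For reference, blocking internal search result pages is typically done with path or parameter rules in robots.txt. A sketch, with hypothetical URL patterns (your site's search URLs may differ):

```
User-agent: *
# Block the search results directory (hypothetical path):
Disallow: /search/
# Block any URL carrying a search query parameter (hypothetical parameter name):
Disallow: /*?q=
```

Note that robots.txt stops crawling but not necessarily indexing of URLs discovered through links; a noindex meta tag is the stronger signal if these pages are already indexed.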
Intermediate & Advanced SEO | drewstorys0
Google shows date in the SERPS for the homepage
Hi SEOs, we've built a site and are now trying to rank it, but it won't go up despite regular new unique content and a higher DA than most competitors. All tools like Moz and Yoast SEO show green lights, so we're kind of out of ideas right now. Then we noticed the date in the SERPs next to the meta description. This gives us the idea that Google sees it as a post and not as a page, which might explain the low ranking. However, there are no technical causes we can think of for Google to show the date. Any ideas on this matter? Could Yoast SEO be causing this even though we tell it not to show dates? Love to hear from you!
Intermediate & Advanced SEO | Heers0
If robots.txt has blocked an image URL but a page that can be indexed uses this image, how is the image treated?
Hi Mozzers, this is probably a dumb question, but I have a case where robots.txt blocks an image URL, yet that image is used on a page (let's call it Page A) that can be indexed. If the image on Page A has an alt tag, how is this information digested by crawlers? A) Would Google totally ignore the image and the alt tag information, or B) would Google consider the alt tag information? I'm asking because all the images on the website are blocked by robots.txt at the moment, but I would really like crawlers to pick up the alt tag information. Chances are that I will ask the webmaster to allow indexing of images too, but I would like to understand what's happening currently. Looking forward to all your responses 🙂 Malika
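If the webmaster does later allow image crawling, the change in robots.txt is small. A sketch, assuming the images live under a hypothetical /images/ directory:

```
User-agent: *
# Before: the image directory was blocked from crawling
# Disallow: /images/
# After: an explicit Allow (or simply removing the Disallow line) lets Googlebot fetch the image files
Allow: /images/
```

The alt text itself lives in Page A's HTML, not in the image file, so it sits on whichever page carries it.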
Intermediate & Advanced SEO | Malika11
Help with Robots.txt On a Shared Root
Hi, I posted a similar question last week asking about subdomains, but a couple of complications have arisen. Two different websites I am looking after share the same root domain, which means they will have to share the same robots.txt. Does anybody have suggestions for separating the two in the same file without complications? It's a tricky one. Thank you in advance.
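Since crawlers read one robots.txt per hostname, a common approach when the two sites live under directories of the same root is to scope each site's rules by path prefix. A sketch, with hypothetical directory names:

```
User-agent: *
# Rules for the first site, assumed to live under /site-a/:
Disallow: /site-a/admin/
# Rules for the second site, assumed to live under /site-b/:
Disallow: /site-b/search/
```

If the sites are actually on different subdomains, each subdomain serves its own robots.txt and no sharing is needed.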
Intermediate & Advanced SEO | Whittie0
Should I care about this Webmaster Tools Message
Here is the message: "Googlebot found an extremely high number of URLs on your site: http://www.uncommongoods.com/" Should I try to do anything about this? We are not having any indexation issues, so we think Google is still crawling our whole site. What could be the repercussions of ignoring this? Thanks Mozzers! -Zack
Intermediate & Advanced SEO | znotes0
Recovering from robots.txt error
Hello, A client of mine is going through a bit of a crisis. A developer (at their end) added Disallow: / to the robots.txt file. Luckily the SEOmoz crawl ran a couple of days after this happened and alerted me to the error. The robots.txt file was quickly updated, but the client has found that the vast majority of their rankings have gone. It took a further 5 days for GWMT to register that the robots.txt file had been updated, and since then we have "Fetched as Google" and "Submitted URL and linked pages" in GWMT. GWMT is still showing that the vast majority of pages are blocked in the "Blocked URLs" section, although the robots.txt file below it is now fine. I guess what I want to ask is: What else can we do to recover these rankings quickly? What time scales can we expect for recovery? More importantly, has anyone had experience with this sort of situation, and is full recovery normal? Thanks in advance!
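For anyone hitting the same issue, the difference is a single line. The accidental file versus a corrected one:

```
# The accidental file: this blocks crawling of the entire site
User-agent: *
Disallow: /

# The corrected file: an empty Disallow permits everything
User-agent: *
Disallow:
```

Google caches robots.txt between fetches, which is why the fix can take days to be picked up even after the file is corrected on the server.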
Intermediate & Advanced SEO | RikkiD220
Why is noindex more effective than robots.txt?
In this post, http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo, it mentions that the noindex tag is more effective than using robots.txt for keeping URLs out of the index. Why is this?
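In short, robots.txt only blocks crawling, so a URL that Google discovers through links can still appear in the index, whereas a noindex directive lets Google fetch the page and then explicitly drop it from the index. The tag goes in the page's HTML head:

```html
<!-- The page can still be crawled, but will be removed from the index -->
<meta name="robots" content="noindex">
```

The equivalent HTTP response header is "X-Robots-Tag: noindex". Note that the page must remain crawlable (not blocked by robots.txt) for Google to see either signal.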
Intermediate & Advanced SEO | nicole.healthline0