Google Unable to Access Robots.txt
-
We haven't made any changes to the robots.txt file, and suddenly Google claims it can no longer access the file.
The site has been up and active for well over a year now. What are my next steps?
I have included a screenshot of the top half of the file. See anything wrong?
-
Would you mind sharing the solution, if it wasn't related to domain intricacies?
Just curious to know, purely for knowledge purposes.
Many thanks
-
Thanks Phillip. I think we're on the way to discovering the issue.
I certainly appreciate the feedback!
-
I think you should allow general access before beginning to disallow various sections below:
Allow: /
Though semantically, I think your robots.txt is in order. But if Google says it cannot access robots.txt, it sounds like it cannot physically reach the file itself at yourdomain.com/robots.txt, not that it is blocked from crawling a page by the rules inside your robots.txt.
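For illustration, a minimal robots.txt laid out that way might look like the sketch below - the Disallow paths here are placeholders, since only the top half of your file is visible in the screenshot:
User-agent: *
Allow: /
# hypothetical sections you don't want crawled
Disallow: /admin/
Disallow: /tmp/
Either way, a quick sanity check is to open yourdomain.com/robots.txt in a browser yourself; if it doesn't load normally (for example, it times out or returns an error status), that would explain the fetch error Google is reporting.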
Related Questions
-
How long does Google take to index the website after an algorithm update?
I've noticed that the page rankings for some queries improved unexpectedly, without any action from my side. Is it possible that this improvement is connected with the Google algorithm update of Sep 4-5?
-
Do omitted results shown by Google always mean that a website has duplicate content?
A page was appearing in the top 10 Google search results for a particular query, but now it only appears after clicking on "show omitted results". My website lists different businesses in a particular locality, and sometimes the results for different localities are the same because we show results from a nearby area if the number of businesses in the locality searched by users is fewer than 15. Will this be considered "duplicate content"? If yes, then what steps can be taken to resolve this issue?
-
Fetch as Google - removes start words from Meta Title?? Help!
Hi all, I'm experiencing some strange behaviour with Google Webmaster Tools. I noticed that some of our pages from our ecom site were missing start keywords - I created a template for meta titles that uses Manufacturer - Ref Number - Product Name - Online Shop, all trimmed under 65 chars just in case. To give you an idea, an example meta title looks like:
Weber 522053 - Electric Barbecue Q 140 Grey - Online Shop
The strange behaviour is that if I do a "Fetch as Google" in GWT, no problem - I can see it pulls the variables and it's ok. So I click submit to index. Then I do a google site:URL search, to see what it has indexed, and I see the meta description has changed (so I know it's working), but the meta title has been cut so it looks like this:
Electric Barbecue Q 140 Grey - Online Shop
So I am confused - why would Google cut off some words at the start of the meta title? Even after the Fetch as Googlebot looks perfectly ok? I should point out that this method works perfectly on our other pages, which are many hundreds - but it's not working on some pages for some weird reason... Any ideas?
-
Google indexing my website's Search Results pages. Should I block this?
After running the SEOmoz crawl test, I have a spreadsheet of 11,000 URLs, of which 6,381 are search results pages from our website that have been indexed. I know I've read that /search should be blocked from the engines, but can't seem to find that information at this point. Does anyone have facts behind why they should be blocked? Or not blocked?
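For what it's worth, if those search results pages all sit under a /search path, the usual robots.txt rule to block them is a one-liner along these lines (assuming /search really is the path on your site):
User-agent: *
Disallow: /search
Bear in mind that robots.txt only stops further crawling - pages that are already indexed may hang around in the index until they drop out or are removed.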
-
When did Google include display results per page into their ranking algorithm?
It looks like the change took place approx. 1-2 weeks ago. Example: for a search for "business credit cards" with search settings at "never show instant results" and "50 results per page", the SERP has a total of 5 different domains in the top 10 (4 domains have multiple results). With the slider set at "10 results per page", there are 9 different domains, with only 1 having multiple results. I haven't seen any mention of this change - did I just miss it? Are they becoming that blatant about forcing as many page views as possible for the sake of serving more ads?
-
Rankings changing every couple of MINUTES in Google?
We've been experiencing some unusual behaviour in the Google.co.uk SERPs recently... Basically, the ranking of some of our websites for certain keywords appears to be changing by the minute. For example, doing a search for "our keyword" might show us at #20. Then a few minutes later, doing the same search shows us at #14, and then the same search a few minutes later shows us at #26, and then sometimes we're not ranked at all, etc etc. I know the algorithm changes a lot, but does it really change every couple of minutes? Has anyone else experienced this kind of behaviour in the SERPs? What could be causing it to happen?
-
What is the critical size to reach for a content farm to come under Google's spotlight?
We're looking at building a content farm as an igniter for another site, so there will be some duplicate content. Is it a good or a bad strategy in terms of SEO?
-
Working in the world of Google Farmer Update
So I have now seen how my websites have taken a nose dive from the Google Farmer update, most likely with traffic significantly hit. An example site is callcatalog.com. What recommendations are there to deal with the new world order? How can we look at optimizing, changing, and modifying our process to improve rankings and traffic?