Robots.txt gone wild
-
Hi guys, a site we manage, http://hhhhappy.com, received an alert through Google Webmaster Tools yesterday saying it can't be crawled. No changes were made to the site.
I don't know a huge amount about robots.txt configuration, except that Yoast by default sets it to disallow crawling of the wp-admin folder and nothing else. I checked this against all our other sites and the settings are the same. And yet, 12 hours after the issue, the site is still not being crawled and its meta data is not showing in search results. Any ideas what may have triggered this?
-
Hi Radi!
Have Matt and/or Martijn answered your question? If so, please mark one or both of their responses "Good Answer."
Otherwise, what's still tripping you up?
-
Have you checked whether the site has had any downtime recently? Sometimes Google can't reach your robots.txt file, and when that happens it will temporarily stop crawling your site.
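To make that concrete: Google's crawler documentation describes (roughly) how Googlebot reacts to the HTTP status it gets when fetching robots.txt itself. This is a simplified sketch of that behavior as documented around this time, ignoring redirects and caching, so treat the exact wording as illustrative rather than authoritative:

```python
def googlebot_crawl_policy(status: int) -> str:
    """Rough sketch of how Googlebot reacts to the HTTP status
    of a robots.txt fetch (simplified; redirects and robots.txt
    caching are ignored here)."""
    if 200 <= status < 300:
        # File fetched successfully: obey whatever rules it contains.
        return "parse rules and crawl accordingly"
    if status in (404, 410):
        # No robots.txt at all is treated as "no restrictions".
        return "assume no restrictions and crawl everything"
    if 500 <= status < 600:
        # Server errors are the dangerous case: Google plays it safe
        # and temporarily stops crawling the whole site.
        return "treat the site as disallowed until robots.txt is reachable again"
    return "undefined; generally treated conservatively"

print(googlebot_crawl_policy(503))
```

So even a perfectly valid robots.txt file can stall crawling if the server briefly returns 5xx errors when Googlebot asks for it.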
-
Are you getting the message in Search Console that there were errors crawling your page?
This typically means that your host was temporarily down when Google landed on your page. These types of things happen all the time and are no big deal.
Your homepage cache shows a crawl date of today so I'm assuming things are working properly ... if you really want to find out, try doing a "Fetch" of your site in Search Console.
Crawl > Fetch as Google > Fetch (big red button)
You should get a status of "Complete." If you get anything else there should be an error message with it. If so, paste that here.
I have checked the site headers, cache, crawlability with Screaming Frog, and everything is fine. This seems like one of those temporary messages but if the problem persists definitely let us know!
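If you want to double-check how a crawler would interpret your robots.txt rules locally, you can feed the file's contents to Python's standard-library parser. A minimal sketch, using the default Yoast-style rules described above (the URLs are just examples):

```python
from urllib.robotparser import RobotFileParser

# Parse the robots.txt contents directly, without fetching anything.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wp-admin/",
])

# The homepage is allowed; only the admin area is blocked.
print(rp.can_fetch("Googlebot/2.1", "http://hhhhappy.com/"))           # True
print(rp.can_fetch("Googlebot/2.1", "http://hhhhappy.com/wp-admin/"))  # False
```

If this says the homepage is crawlable but Search Console still reports it blocked, the problem is almost certainly with reaching the file (downtime, server errors), not with the rules themselves.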
-
Our host has just offered this response, which doesn't get me any closer:
Hi Radi,
It looks like your site has its own robots.txt file, which is not blocking any user agents. The only thing it's doing is blocking bots from indexing your admin area:
<code>User-agent: *
Disallow: /wp-admin/</code>
This is a standard robots.txt file, and you shouldn't be having any issues with Google indexing your site from a hosting standpoint. To test this, I curled the site as Googlebot and received a 200 OK response:
<code>curl -A "Googlebot/2.1" -IL http://hhhhappy.com
HTTP/1.1 200 OK
Date: Sat, 05 Mar 2016 22:17:26 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Set-Cookie: __cfduid=d3177a1baa04623fb2573870f1d4b4bac1457216246; expires=Sun, 05-Mar-17 22:17:26 GMT; path=/; domain=.hhhhappy.com; HttpOnly
X-Cacheable: bot
Cache-Control: max-age=10800, must-revalidate
X-Cache: HIT: 17
X-Cache-Group: bot
X-Pingback: http://hhhhappy.com/xmlrpc.php
Link: &lt;http://hhhhappy.com/&gt;; rel=shortlink
Expires: Thu, 19 Nov 1981 08:52:00 GMT
X-Type: default
X-Pass-Why:
Set-Cookie: X-Mapping-fjhppofk=2C42B261F74DA203D392B5EC5BF07833; path=/
Server: cloudflare-nginx
CF-RAY: 27f0f02445920f09-IAD</code>
I didn't see any plugins on your site that looked like they would overwrite robots.txt, but I urge you to take another look at them, and then dive into your site's settings for the meta value that Googlebot would pick up. Everything on our end seems to be giving the green light.
Please let us know if you have any other questions or issues in the meantime.
Cheers,
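One way to check "the meta value that Googlebot would pick up," as the host suggests, is to scan the homepage HTML for robots meta tags. A minimal sketch using Python's standard-library parser; the HTML snippet fed in here is hypothetical, so swap in your real page source:

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collects the content of any robots/googlebot meta tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() in ("robots", "googlebot"):
                self.directives.append(a.get("content", ""))

finder = RobotsMetaFinder()
# Hypothetical page source; replace with the actual homepage HTML.
finder.feed('<html><head><meta name="robots" content="noindex,nofollow"></head></html>')
print(finder.directives)  # ['noindex,nofollow']
```

A stray `noindex` here (some SEO plugins can set it site-wide) would keep meta data out of search results even when robots.txt and the server response are both fine.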