Robots.txt error
-
Moz Crawler is not able to access the robots.txt file due to a server error. Please advise on how to tackle the server error.
-
Hello Shanidel,
Jo from the Moz help team here.
I've had a look at your site and I've not been able to access your robots.txt file. This is what I'm seeing in the browser:
https://screencast.com/t/JjQI1WTH3ni
I'm also seeing this error when I check your robots.txt file through a third-party tool:
https://screencast.com/t/pxsP9pL5
So it looks to me like there may be some intermittent issues with your robots.txt file. I would advise reaching out to your web developer to see if they can check your robots.txt file and make sure it's accessible.
If you're still having trouble, please let us know at help@moz.com.
Best of luck!
Jo
-
Hi,
I'm still having this problem. Moz is unable to crawl the site, saying there is a problem with the robots.txt file.
Sorry.
-
Happy to have been useful!
-
Below is the exact message that I received:
**Moz was unable to crawl your site on Aug 29, 2017.** Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster.
-
yoursite.com/robots.txt -> this is where your robots.txt file should be, so first I recommend testing your robots.txt file to see if everything is OK. If it isn't, below is an explanation of how to create a robots.txt file.
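One way to test the rules in a robots.txt file without waiting on a crawler is Python's built-in `urllib.robotparser`. This is just a sketch: the robots.txt content, the `rogerbot` user-agent (Moz's crawler), and the example URLs are assumptions to substitute with your own.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content -- paste your own file's text here.
sample = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(sample.splitlines())

# "rogerbot" is Moz's crawler user-agent; it falls under "User-agent: *" here.
print(rp.can_fetch("rogerbot", "https://example.com/"))           # allowed
print(rp.can_fetch("rogerbot", "https://example.com/private/x"))  # blocked
```

Note this only validates the rules themselves; it won't reproduce a server error. For that, check that the live robots.txt URL returns an HTTP 200 status.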
How to create a /robots.txt file
Where to put it
The short answer: in the top-level directory of your web server.
The longer answer:
When a robot looks for the "/robots.txt" file for a URL, it strips the path component from the URL (everything from the first single slash) and puts "/robots.txt" in its place.
For example, for "http://www.example.com/shop/index.html", it will remove "/shop/index.html", replace it with "/robots.txt", and end up with "http://www.example.com/robots.txt".
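That URL-stripping step can be sketched with Python's standard `urllib.parse` module. The `robots_url` helper name is my own, not a standard API:

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    # Strip the path, query, and fragment from the page URL and
    # replace them with /robots.txt, the way crawlers locate the file.
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("http://www.example.com/shop/index.html"))
# http://www.example.com/robots.txt
```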
So, as a web site owner you need to put it in the right place on your web server for that resulting URL to work. Usually that is the same place where you put your web site's main "index.html" welcome page. Where exactly that is, and how to put the file there, depends on your web server software.
Remember to use all lower case for the filename: "robots.txt", not "Robots.TXT".
-
Hi,
Can you please share the message you're receiving? Also, did you check your Google Search Console to see if Google can access your website? Knowing the type of error is key to advising you.