Robots.txt error
-
The Moz crawler is not able to access the robots.txt file due to a server error. Please advise on how to tackle the server error.
-
Hello Shanidel,
Jo from the Moz help team here.
I've had a look at your site and I've not been able to access your robots.txt file. This is what I'm seeing in the browser:
https://screencast.com/t/JjQI1WTH3ni
I'm also seeing this error when I check your robots.txt file through a third-party tool:
https://screencast.com/t/pxsP9pL5
So it looks to me like there may be some intermittent issues with your robots.txt file. I would advise reaching out to your web developer to see if they can check your robots.txt file and make sure it's accessible.
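If you'd like to check this yourself, here's a minimal sketch (assuming Python 3; www.example.com stands in for your own domain) that requests the file and reports the HTTP status your server returns:

```python
# Minimal robots.txt accessibility check, assuming Python 3.
# Replace www.example.com with your own domain.
import urllib.request
import urllib.error

url = "https://www.example.com/robots.txt"

try:
    with urllib.request.urlopen(url, timeout=10) as response:
        print(f"HTTP {response.status} - robots.txt is reachable")
        print(response.read().decode("utf-8", errors="replace")[:500])
except urllib.error.HTTPError as err:
    # A 5xx code here is the kind of "server error" a crawler reports.
    print(f"HTTP {err.code} - the server returned an error for robots.txt")
except urllib.error.URLError as err:
    # DNS failures, timeouts, or refused connections also block crawlers.
    print(f"Could not connect: {err.reason}")
```

Since the problem looks intermittent, it can be worth running a check like this a few times over the day to see whether the error comes and goes.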
If you're still having trouble, please let us know at help@moz.com.
Best of luck!
Jo
-
Hi,
I'm still having this problem. Moz is unable to crawl the site, saying there is a problem with the robots.txt file.
Sorry.
-
Happy to have been useful!
-
Below is the exact message that I received:
**Moz was unable to crawl your site on Aug 29, 2017.** Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster.
-
yoursite.com/robots.txt -----> this is where your robots.txt file should live, so first I recommend you test your robots.txt file to see if everything is OK. If it isn't, here is an explanation of how to create a robots.txt file:
How to create a /robots.txt file
Where to put it
The short answer: in the top-level directory of your web server.
The longer answer:
When a robot looks for the "/robots.txt" file for a URL, it strips the path component from the URL (everything from the first single slash) and puts "/robots.txt" in its place.
For example, for "http://www.example.com/shop/index.html", it will remove the "/shop/index.html", replace it with "/robots.txt", and end up with "http://www.example.com/robots.txt".
So, as a web site owner you need to put it in the right place on your web server for that resulting URL to work. Usually that is the same place where you put your web site's main "index.html" welcome page. Where exactly that is, and how to put the file there, depends on your web server software.
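As a quick illustration of that lookup (a sketch only, using Python's standard library and the example URL above):

```python
# How a crawler derives the robots.txt URL from a page URL, as described above.
from urllib.parse import urlsplit, urlunsplit

page_url = "http://www.example.com/shop/index.html"

parts = urlsplit(page_url)
# Drop the path ("/shop/index.html") and put "/robots.txt" in its place.
robots_url = urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url)  # http://www.example.com/robots.txt
```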
Remember to use all lower case for the filename: "robots.txt", not "Robots.TXT".
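Once the file is in place, you can double-check how a crawler would read it with Python's built-in parser (a sketch; www.example.com again stands in for your domain, and rogerbot is Moz's crawler user agent):

```python
# Sanity-check a live robots.txt with the standard-library parser.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the file

# Ask whether a given crawler is allowed to fetch a given page.
print(rp.can_fetch("rogerbot", "https://www.example.com/shop/index.html"))
```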
-
Hi,
Can you please share the message you're receiving? Also, did you check your Google Search Console to see if Google can access your website? Knowing the type of error is key to advising you.
Related Questions
-
Error after scanning with browseo.net
Good day! I have done a scan on my site with browseo.net (and a few other similar scanners) and got the mess seen in the screenshot. I've tried deleting all the files in the website folder and replacing them with a single image file, but it still shows the same error. What could this mean and should I be worried? P.S. I found my answer after contacting the helpful support of browseo.net: "It took me some time to figure out what was going on, but it seems as if you are mixing content types. Browsers are quite smart when it comes to interpreting the contents, so they are much more forgiving than we are. Browseo crawls your website and detects that you are setting utf-8 as part of the meta information. By doing so, it converts the content into a different character encoding than what it is supposed to be. In a quick test, I tried to fetch the content type based on the response object, but without any success. So I suspect that in reality your content is not utf-8 encoded when you parse it into Joomla. The wrong character type is then carried over for the body (which explains why we can still read the header information). All of this explains the error. In order for it to work in browseo, you'd have to set the content type correctly, or convert your own content into utf-8 before parsing. It may be that you are either storing this incorrectly in the database (check your db settings for a content type other than utf-8) or that other settings are a bit messed up. The good news is that Google is probably interpreting your website correctly, so you won't be punished for this, but it's perhaps something to look into." From Paul Piper
Technical SEO | AlexElks
-
?_escaped_fragment_= Duplicate error in Webmaster
Hi, I am not sure where this came from... ?_escaped_fragment_= But in Webmaster we are seeing hundreds of pages with this, and thus Webmaster is saying that we have pages with duplicate title tags. How do I fix this, or remove it? Regards, T
Technical SEO | Taiger
-
Robots.txt & Mobile Site
Background – Our mobile site is on the same domain as our main site. We use a folder approach for our mobile site: abc.com/m/home.html. We are redirecting traffic to our mobile site via device detection, and redirection exists for a handful of pages of our site, i.e. most of our pages do not redirect the user to a mobile equivalent page.
Issue – Our mobile pages are being indexed in desktop Google searches.
Input Required – How should we modify our robots.txt so that the desktop Google index does not index our mobile pages/URLs?
User-agent: Googlebot-Mobile
Disallow: /m
User-agent: `YahooSeeker/M1A1-R2D2`
Disallow: /m
User-agent: `MSNBOT_Mobile`
Disallow: /m
Many thanks
Technical SEO | CeeC-Blogger
-
Htaccess - multiple matches by error
Hi all, I stumbled upon an issue on my site. We have a video section: www.holdnyt.dk/video
htaccess rule:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^video index.php?area=video [L,QSA]
Problem is that these URLs give the same content:
www.holdnyt.dk/anystring/video
www.holdnyt.dk/whatsoever/video
Anyone with a take on what's wrong with the htaccess line? -Rasmus
Technical SEO | rasmusbang
-
150 Duplicate page error
I am told that I have 150 duplicate page content errors. It seems that it is the login link on each of my pages. Is this an error? Is it something I have to change? Thanks. Login/Register at http://irishdancingdress.com/wp-login.php?redirect_to=http%3A%2F%2Firishdancingdress.com%2Fdress
Technical SEO | ukkpower
-
Duplicate content error - same URL
Hi, One of my sites is reporting a duplicate content and page title error. But it is the same page, and the home page at that. The only difference in the error report is a trailing slash:
www.{mysite}.co.uk
www.{mysite}.co.uk/
Is this an easy htaccess fix? Many thanks, TT
Technical SEO | TheTub
-
How can I exclude display ads from robots.txt?
Google has stated that you can do this to get spiders to content only, and faster. Our IT guy is saying it's impossible.
Do you know how to exclude display ads from robots.txt? Any help would be much appreciated.
Technical SEO | GregBeddor
-
Google webmasters shows 37K not found errors
Hello, we are using Joomla as our CMS. Months ago we used a component to create friendly URLs, and lots of them got indexed by Google. Testing the component, we created three different types of URL. The problem now is that all of these tests are showing in Google Webmasters as 404 errors: 37,309 not found pages, and this number is increasing every day. What do you suggest to fix this? Regards.
Technical SEO | Zertuxte