Two Robots.txt files
-
Hi there
Can somebody please help me? One of my client's sites has two robots.txt files (please see below). One file blocks a few folders, and the other one blocks all search engines completely. Our tech team tells me that, for technical reasons, they are using the second one, which is placed inside the server where search engines can't see it.
www.example.co.uk/robots.txt - blocks a few folders
www.example.co.uk/Robots.txt - blocks all search engines
I hope someone can give me the help I need on this one.
Thanks in advance!
Cheers,
Satla -
-
Hi Satla,
You mentioned that one robots.txt file is placed inside the server where search engines can't see it. If search engines can't see a robots.txt file, then what is the use of that robots.txt file?
AFAIK it must be placed in the root directory, and there is no way to keep two files with the same name in the same folder. So you should have only one robots.txt, and it should be placed in the root directory.
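For reference, a single robots.txt in the site root that blocks a few folders might look like this (the folder names here are just placeholders, not the client's actual folders):

```text
User-agent: *
Disallow: /private/
Disallow: /tmp/
```

Everything not listed under a Disallow rule stays crawlable.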
Hope this helps.
Thanks
-
Hi Satla,
You're going to need to get rid of that second version ASAP. The official standard for a robots.txt file is an all-lowercase file name, so the lowercase file is most likely the one bots are seeing. But to err on the side of caution, I'd remove any possibility of a "Disallow: /" being read and delete that Robots.txt version.
Some servers are case-sensitive, so you could run into issues here as well.
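To see why the "Disallow: /" version is so dangerous, here is a small sketch using Python's standard urllib.robotparser. The two rule sets below are made-up stand-ins for the two files described in the question, not their actual contents:

```python
from urllib.robotparser import RobotFileParser

# Stand-in for the lowercase file: blocks only a few folders.
partial_block = """User-agent: *
Disallow: /private/
Disallow: /tmp/
"""

# Stand-in for the Robots.txt version: blocks the entire site.
full_block = """User-agent: *
Disallow: /
"""

def can_crawl(robots_txt: str, path: str) -> bool:
    """Parse robots.txt rules and report whether a generic bot may fetch path."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch("*", path)

print(can_crawl(partial_block, "/index.html"))   # allowed: only listed folders are blocked
print(can_crawl(partial_block, "/private/x"))    # blocked by the folder rule
print(can_crawl(full_block, "/index.html"))      # blocked: "Disallow: /" bars everything
```

If crawlers ever pick up the "Disallow: /" file, nothing on the site is crawlable, which is why removing it outright is the safe move.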
-
Hi,
I can't find any good reason for there to be two files. There should be just one, where you specify everything you'd like done.
If the tech team doesn't want to fix it and leave just one file, maybe it's because they're lazy, or maybe there's some other issue, like the whole site breaking if they delete one file. Here are two Moz articles about the robots.txt file:
What is Robots.txt? - Moz Learn
Learn About Robots.txt with Interactive Examples - Moz Blog
Take into account that the name of the file must be in lower case. I've never seen it done differently, and servers are usually case-sensitive with file names.
Hope it's helpful.
GR