Two Robots.txt files
-
Hi there
Can somebody please help me? One of my client's sites has two robots.txt files (please see below). One file blocks a few folders, and the other blocks all search engines completely. Our tech team says that for technical reasons they are using the second one, which is placed inside the server where search engines can't see it.
www.example.co.uk/robots.txt - blocks a few folders
www.example.co.uk/Robots.txt - blocks all search engines
I hope someone can give me the help I need on this one.
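For illustration, the two files look something like this (the blocked folder names below are made up, not the client's real paths):

```
# www.example.co.uk/robots.txt — blocks a few folders
User-agent: *
Disallow: /admin/
Disallow: /tmp/

# www.example.co.uk/Robots.txt — blocks everything
User-agent: *
Disallow: /
```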
Thanks in advance!
Cheers,
Satla -
-
Hi Satla,
You mentioned that one robots.txt file is placed inside the server where search engines can't see it. If search engines can't see a robots.txt file, then what is the use of that file?
AFAIK it must be placed in the root directory, and there is no way to keep two files with the same name there. So you should have only one robots.txt, and it should be placed in the root directory.
Hope this helps.
Thanks
-
Hi Satla,
You're going to need to get rid of that second version ASAP. The official standard for a robots.txt file is an all-lowercase file name, so that's most likely the file bots are reading. But to err on the side of caution, I'd remove any possibility of a "Disallow: /" and delete that Robots.txt version.
Some servers are case sensitive, so you could run into issues here as well.
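If you want to confirm what a crawler would actually do with the lowercase file's rules, Python's standard-library `urllib.robotparser` can parse them directly. A minimal sketch (the `/private/` folder is a made-up example, not from the original post):

```python
from urllib.robotparser import RobotFileParser

# Parse the rules a crawler would see at the lowercase /robots.txt
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# The disallowed folder is blocked; everything else is crawlable
print(rp.can_fetch("*", "https://www.example.co.uk/private/page.html"))  # False
print(rp.can_fetch("*", "https://www.example.co.uk/index.html"))         # True
```

Crawlers only ever request the exact path `/robots.txt`, which is why the capitalized copy is invisible to them on a case-sensitive server.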
-
Hi,
I don't see any good reason for there to be two files. There should be just one, where you specify everything you'd like done.
If the tech team doesn't want to fix it and leave just one file, maybe it's because they're lazy, or maybe there's some other issue where deleting one file would break the whole site.
Here are two Moz articles about the robots.txt file:
What is Robots.txt? - Moz Learn
Learn About Robots.txt with Interactive Examples - Moz Blog
Take into account that the name of the file must be in lower case. I've never seen it any different, and servers are usually case sensitive with file names.
Hope it's helpful.
GR