Two Robots.txt files
-
Hi there,
Can somebody please help me? One of my client's sites has two robots.txt files (please see below). One file blocks a few folders, and the other blocks all search engines completely. Our tech team says that for technical reasons they are using the second one, which is placed inside the server where search engines are unable to see it.
www.example.co.uk/robots.txt - blocks a few folders
www.example.co.uk/Robots.txt - blocks all search engines
I hope someone can give me the help I need on this one.
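For illustration, the two files look roughly like this (the folder names here are placeholders, not the real directives on the site):

```
# www.example.co.uk/robots.txt - blocks a few folders
User-agent: *
Disallow: /admin/
Disallow: /checkout/

# www.example.co.uk/Robots.txt - blocks all search engines
User-agent: *
Disallow: /
```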
Thanks in advance!
Cheers,
Satla -
Hi Satla,
You mentioned that one robots.txt file is placed inside the server where search engines are unable to see it. If search engines can't see a robots.txt file, then what is the use of that file?
AFAIK it must be placed in the root directory, and there is no way to keep two files with the same name in one directory. So you should have only one robots.txt, and it should be placed in the root directory.
Hope this helps.
Thanks
-
Hi Satla,
You're going to need to get rid of that second version ASAP. The official standard for the robots.txt file name is all lower case, so that's most likely the file bots are seeing. But to err on the side of caution, I'd remove any possibility of a "Disallow: /" by removing that Robots.txt version.
Some servers are case sensitive, so you could run into issues here as well.
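You can see the difference concretely with Python's standard-library robots.txt parser. This is just a sketch; the folder name below is an example, since the question doesn't show the real directives:

```python
from urllib import robotparser

# Rules like the lowercase robots.txt: block only a few folders
partial = robotparser.RobotFileParser()
partial.parse([
    "User-agent: *",
    "Disallow: /private/",   # example folder, not from the actual site
])

# Rules like the capitalized Robots.txt: block everything
full = robotparser.RobotFileParser()
full.parse([
    "User-agent: *",
    "Disallow: /",
])

print(partial.can_fetch("*", "https://www.example.co.uk/"))           # homepage still crawlable
print(partial.can_fetch("*", "https://www.example.co.uk/private/x"))  # blocked folder
print(full.can_fetch("*", "https://www.example.co.uk/"))              # everything blocked
```

If bots ever pick up the "Disallow: /" version, the whole site drops out of the crawl, which is why removing it is the safe move.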
-
Hi,
I don't see any good reason why there are two files. There should be just one, where you specify everything you'd like done.
If the tech team doesn't want to correct it and leave just one file, maybe it's because they're lazy, or maybe there's some other issue, such as the whole site blowing up if they delete one file. Here, I leave you two Moz articles about the robots.txt file:
What is Robots.txt? - Moz Learn
Learn About Robots.txt with Interactive Examples - Moz Blog
Take into account that the name of the file must be in lower case. I've never seen it any different, and servers are usually case sensitive with filenames.
Hope it's helpful.
GR
Related Questions
-
Will Google Count Links Loaded from JavaScript Files After the Page Loads?
Hi, I have a simple question. If I want to put an image with a link to another site, like a banner ad, on my page but do not want it counted by Google, can I simply load the link and banner using jQuery onload from a separate .js file? The ideal result would be for Google to index a script tag instead of a link.
On-Page Optimization | CopBlaster.com -
How do you implement an SEO site structure with content that falls under two silos?
We primarily produce two different types of content: concise fact sheets on topics and video briefings + transcripts of topics. Often these two content types cover the same topic area and since we're currently siloing by content type, these pages end up competing against each other for rankings. Advice on a site structure that'd avoid these issues?
On-Page Optimization | jay_elsie -
I have more pages in my sitemap being blocked by the robots file than I have being allowed to be crawled. Is Google going to hate me for this?
I'm using some rules to block all pages which start with "copy-of" on my website, because people have a bad habit of duplicating new product listings to create our refurbished, surplus, etc. listings for those products. To avoid Google seeing these as duplicate pages I've blocked them in the robots.txt file, but of course they are still automatically generated in our sitemap. How bad is this?
On-Page Optimization | absoauto -
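One way to keep the blocked pages out of the sitemap is to filter the generated URL list through the same robots rules before writing the sitemap file. A minimal sketch with Python's standard-library parser (the URLs and rule are made up for illustration):

```python
from urllib import robotparser

rules = robotparser.RobotFileParser()
rules.parse([
    "User-agent: *",
    "Disallow: /copy-of",   # prefix match: blocks any path starting with /copy-of
])

generated_urls = [
    "https://example.com/widget-2000",
    "https://example.com/copy-of-widget-2000",   # duplicated listing
    "https://example.com/gadget-x",
]

# Keep only URLs that crawlers are allowed to fetch
sitemap_urls = [u for u in generated_urls if rules.can_fetch("*", u)]
print(sitemap_urls)
```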
Login webpage blocked by robots
Hi, the SEOmoz crawl diagnostics show that this page: www.tarifakitesurfcamp.com/wp-login.php is blocked (noindex, nofollow). Is there any problem with that?
On-Page Optimization | juanmiguelcr -
How to optimize for a product by two names
So let us assume I am selling an item on my website and it comes in a large array of varieties. Let us also assume that this item is commonly referred to by two different names (i.e. Cover & Case, Car & Automobile, Notepad & Notebook). Both of these names have, for the sake of argument here, exactly the same search volume, and I want to make sure that I rank for both terms. In my title tags I am currently thinking about the following approach: "GE Motors Super Fast and Awesome Car / Automobile", "Ghostwriter Kids Notebook / Notepad", "Super Soft Pillow Cover / Case". Notice I have a space on either side of the /, but my question is whether this is necessary or not. What is Google's policy on how they view that /? Can I drop the spaces and still have Google see it as two different words? "GE Motors Super Fast and Awesome Car/Automobile", "Ghostwriter Kids Notebook/Notepad", "Super Soft Pillow Cover/Case". Apologies if this is a fairly basic question, but I cannot seem to find this information.
On-Page Optimization | DRSearchEngOpt -
Yahoo small business host has two web sites and confuses the spiders
Both another consultant and my first SEOmoz test stated that I have two websites and that confuses the spiders. One is www.mystore.com (example) and the other is mystore.com. Yahoo states that's how they roll and I cannot delete one. I can do a 301 redirect from one site to the other, but they do not have a recommendation as to which one to redirect to. Which one should I redirect?
On-Page Optimization | Wales -
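On an Apache host, a 301 from the non-www to the www hostname can be done in .htaccess roughly like this (assuming mod_rewrite is available, which a shared host like Yahoo Small Business may or may not expose; example.com stands in for the real domain):

```
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

Which hostname you pick matters less than consistency: redirect to whichever version you've already been promoting and linking to.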
Does Google respect User-agent rules in robots.txt?
We want to use an inline linking tool (LinkSmart) to cross link between a few key content types on our online news site. LinkSmart uses a bot to establish the linking. The issue: There are millions of pages on our site that we don't want LinkSmart to spider and process for cross linking. LinkSmart suggested setting a noindex tag on the pages we don't want them to process, and that we target the rule to their specific user agent. I have concerns. We don't want to inadvertently block search engine access to those millions of pages. I've seen googlebot ignore nofollow rules set at the page level. Does it ever arbitrarily obey rules that it's been directed to ignore? Can you quantify the level of risk in setting user-agent-specific nofollow tags on pages we want search engines to crawl, but that we want LinkSmart to ignore?
On-Page Optimization | lzhao -
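If the rules end up in robots.txt rather than page-level tags, the user-agent targeting can be sanity-checked with Python's standard-library parser. This sketch assumes the vendor's bot honors a `User-agent: LinkSmart` section; the exact token is an assumption, so confirm it with the vendor:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: LinkSmart",   # hypothetical user-agent token for the vendor bot
    "Disallow: /archive/",
    "",
    "User-agent: *",
    "Disallow:",               # empty Disallow: everyone else may crawl everything
])

print(rp.can_fetch("LinkSmart", "https://example.com/archive/old-story"))  # vendor bot blocked
print(rp.can_fetch("Googlebot", "https://example.com/archive/old-story"))  # Google still allowed
```

Well-behaved crawlers, Googlebot included, only apply the most specific user-agent group that matches them, so a group aimed at another bot shouldn't affect search engine access.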
Image Optimization - File Name Important?
I am currently working on a site with 100+ recipes that all have image file names that are relevant, but not optimized for keyword purposes. I'm wondering - from an SEO perspective - would it be worth my time to go back through all of the images and rename them with keywords in mind? On my own site I have always done this as a "best practice" but I'm curious - does it make a difference to search engines? Does anyone have any recent research/experiences that they would like to share? Thanks!
On-Page Optimization | EssEEmily