What are your thoughts on security of placing CMS-related folders in a robots.txt file?
-
So I was just about to add a whole heap of CMS-related folders to my robots.txt file to exclude them from search, and thought "hey, I'm publicly telling people where my admin folders are"...surely that's not right?!
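For illustration, the kind of entries I mean would look something like this (the folder names here are made up):

```
# Hypothetical robots.txt entries that keep CMS folders out of search,
# but also publicly announce where those folders live
User-agent: *
Disallow: /admin/
Disallow: /cms/login/
Disallow: /includes/
```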
Should I leave them out of the robots.txt file, and hope for the best that they never get indexed? Should I use noindex meta data on every page?
What are people's thoughts?
Thanks,
James
PS. I know this is similar to lots of other discussions around meta noindex vs. robots.txt, but I'm after specific thoughts around the security aspect of listing your admin folders in a robots.txt file...
-
Surely your admin folders are secured? If they are, it wouldn't matter whether someone knows where they are.
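For example, on Apache you could password-protect the admin directory with an .htaccess file placed in that folder (the .htpasswd path below is a placeholder):

```
# .htaccess inside the admin folder: require a username/password before anything is served
AuthType Basic
AuthName "Restricted admin area"
AuthUserFile /path/to/.htpasswd
Require valid-user
```

With that in place, knowing the folder's URL only gets a visitor a login prompt.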
-
As a rule, you want to avoid relying on robots.txt whenever possible. It does not consistently keep crawlers out, and when it does block them it kills any PageRank on those pages.
If you can keep those pages out of the index with a noindex tag instead, that is the preferable solution.
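A noindex tag is just one line in the page's head; a minimal example:

```
<!-- Tells compliant crawlers not to list this page in search results -->
<meta name="robots" content="noindex, nofollow">
```

If you can't edit the admin templates, the same signal can be sent as an HTTP header instead. As a sketch, assuming Apache with mod_headers enabled, an .htaccess file in the admin folder could contain:

```
# Sends an X-Robots-Tag header for everything served from this folder
Header set X-Robots-Tag "noindex, nofollow"
```

Either way, the pages must remain crawlable (i.e. not blocked in robots.txt) for the noindex to be seen.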
With respect to security for a CMS site, it really needs to be a comprehensive effort. Many site owners take a couple of steps and then have a false sense of security. Here are a few thoughts on how easily the CMS in use can be identified:
-
Try the site address with /administrator after it to reach the admin login on Joomla and some other CMSs.
-
Try the site or blog address with /wp-admin/ after it to reach the admin login on WordPress sites.
-
Make up a page URL and try accessing it to view the site's 404 page, which often gives away the CMS or theme in use.
-
Right-click on a page and choose View Page Source. Often you will see the name of the CMS clearly listed in a generator meta tag. Other times you will see clear clues such as /wp/ in folder names, or unique extensions such as Yoast SEO that give you an idea of the CMS.
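For example, a rough command-line version of that view-source check (the URL is a placeholder, and this assumes curl and grep are available):

```
# Look for a generator meta tag or obvious WordPress paths in the homepage HTML
curl -s https://www.example.com/ | grep -iE 'generator|wp-'
```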
Once a bad guy knows which CMS is in use, they know the default folder structure and more. The point is, it takes a lot more effort than most people realize to hide the CMS in use. I applaud your effort, but be very thorough about it. There is a lot more involved than simply cleaning up your robots.txt file.
-
I found three options for you: http://www.techiecorner.com/106/how-to-disable-directory-browsing-using-htaccess-apache-web-server/
I think if you do it with .htaccess, which is a folder-specific file, then nobody will be able to detect where the admin content is located.
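For reference, the usual way to disable directory browsing on Apache is a one-line directive in the folder's .htaccess (a sketch, assuming the server allows overrides):

```
# Disable directory listings so the folder's contents can't be browsed
Options -Indexes
```

Note this only stops directory listings; the folder itself still needs its own access controls.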