What is considered best practice today for blocking admin pages from getting indexed
-
What is considered best practice today for blocking pages, for instance xyz.com/admin pages, from getting indexed by the search engines or being easily found? Do you still recommend disallowing them in the robots.txt file, or is robots.txt not the best place to list your /admin location because it advertises it to hackers? Is it better to hide the /admin behind an obscure name, use the noindex tag on the page, and not list it in the robots.txt file?
-
Agreed with the above two answers. Use an obscure URL and use meta tags to noindex/nofollow the pages.
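For reference, that meta tag sits in the page's head section; a minimal example (standard robots meta tag syntax, nothing specific to any one platform):

```html
<head>
  <!-- Tells compliant crawlers not to index this page or follow its links -->
  <meta name="robots" content="noindex,nofollow" />
</head>
```

Note that crawlers must be able to fetch the page to see this tag, so it only works on pages that are not also blocked from crawling.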
I wouldn't worry too much about people finding your admin pages. You should already have security measures in place that prevent people from hacking your site or "guessing" your admin credentials. If you don't have these types of measures in place then I would recommend concentrating on these.
Some ideas of things to look at:
- Ensure pages do not allow SQL injection attacks
- Use complex usernames and passwords
- Stop people from entering the wrong username and password more than x times within y minutes (e.g. lock out the account either permanently or for a temporary time restriction)
- If someone repeatedly enters incorrect credentials within a given period of time, prompt them with a CAPTCHA check to ensure no bots are trying to access the site
- Ensure passwords are changed regularly
- Set up an alerting system should incorrect credentials be entered
- Plus there are LOADS more things you should do
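The lockout idea from the list above ("wrong password more than x times within y minutes") can be sketched in a few lines. This is a minimal illustration, not production code: the thresholds, function names, and in-memory store are all placeholder assumptions (a real site would persist attempts and typically throttle by IP as well):

```python
import time
from collections import defaultdict

# Placeholder policy values: "x times within y minutes"
MAX_ATTEMPTS = 5        # x
WINDOW_SECONDS = 300    # y minutes, as seconds

# username -> list of failure timestamps (in-memory only, for illustration)
failed_attempts = defaultdict(list)

def record_failure(username, now=None):
    """Record one failed login, discarding failures outside the window."""
    now = time.time() if now is None else now
    attempts = failed_attempts[username]
    attempts[:] = [t for t in attempts if now - t < WINDOW_SECONDS]
    attempts.append(now)

def is_locked_out(username, now=None):
    """True if the account has hit the failure threshold inside the window."""
    now = time.time() if now is None else now
    recent = [t for t in failed_attempts[username] if now - t < WINDOW_SECONDS]
    return len(recent) >= MAX_ATTEMPTS
```

The sliding window means the lockout expires on its own; a permanent lockout would simply skip the timestamp filtering.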
-
I agree with Nick: use robots.txt, meta tags, and an obscure page URL.
-
Add <meta name='robots' content='noindex,nofollow' /> before the closing </head> tag and mix that with an obscure page URL. It'll never get found.
What you could do with the robots.txt is disallow a directory like /admin/ but then have the login page @ domain.com/admin/obscure-login-url. If you do all of that then you're pretty damn safe in the knowledge that no one will ever find your login URL.
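That robots.txt setup could look something like this (the /admin/ directory name is just the example from above; the obscure login URL itself is never written in the file):

```
User-agent: *
Disallow: /admin/
```

Because robots.txt is publicly readable, it should only ever name the directory, never the actual login page.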
-
One of my customers just has a page that is hidden from public view (www.url.co.uk/adminpage), noindexed, and isn't in the robots file, and in 10 years there has never been a hack attempt.