How do you disallow HTTPS?
-
I currently have a site (startuploans.org) that runs everything as HTTP. Recently we decided to launch an online application to process loan apps, and for that one section we configured SSL (https://www.startuploans.org/secure/).
If I go to the HTTPS URL for any of my other pages, they show up too... I was going to just 301 everything from HTTPS, but because the secure section is in a subdirectory I can't...
Also, canonical URLs won't work because it's a totally different system and the pages are generated in an odd manner.
It's really just 1 page that needs to be disallowed...
Is there any way to disallow all HTTPS requests in robots.txt while keeping all the HTTP requests working as normal?
-
Hi Rick,
Your first thought was correct. If you apply the noindex meta tag to every page in the secure part of the site, then all of those pages will be de-indexed and you will have no duplicate content problem.
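The tag itself is just one line in the <head> of each page you want dropped from the index (a minimal example):
<meta name="robots" content="noindex">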
For WordPress, you just need to install a plugin that lets you edit and apply page elements and meta tags. My preference is Yoast SEO; if you do a plugin search from your dashboard you will find it.
Hope that helps,
Sha
-
Perfect. This is the answer I was looking for...I will just use the meta tag globally in HTTPS....BUT...what about the fact that my entire site is duplicated in HTTPS?
It's all good for the /secure/ part, but what about my WordPress install... how do I handle that? Maybe my best option is to just load 2 different robots.txt files...
-
Hi Rick,
If you wish to use the robots.txt method to disallow all or part of your site's https protocol, you simply need to load two separate robots.txt files.
The http and https protocols are basically viewed by bots as if they were two completely separate root domains (which I guess you already know as you have mentioned the fact that port 443 is used for the secure protocol).
Google's advice is that to use this method, you should have a separate robots.txt file for each protocol with code as follows:
For your http protocol (http://www.startuploans.org/robots.txt):
User-agent: *
Allow: /
For the https protocol (https://www.startuploans.org/robots.txt):
User-agent: *
Disallow: /
However, blocking crawlers with robots.txt is not the most reliable method for excluding pages from search engines. The reason for this is that the page will continue to be indexed if it happens to be found via a link from another page. Basically, the robots.txt is the sign on the front door that says "Please stay out of our house", but it is never seen by the people who enter via the rear exit or climb in a window!
The most reliable method of excluding pages is to add the noindex meta tag, as suggested by MagentoWebDeveloper and Alan. When a bot encounters the noindex meta tag, it signals the search engine to de-index the page, and there is no further problem.
I would generally use noindex, follow rather than noindex, nofollow as the nofollow tag will stop the flow of link value through your site. In most cases, as long as the noindex is in place, there is no reason to be worried about the links on the pages being followed.
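To make the difference concrete, the two variants look like this (both go in the page <head>):
<!-- Preferred: de-index the page, but still let bots follow its links -->
<meta name="robots" content="noindex, follow">
<!-- Also stops link value flowing through the page's links -->
<meta name="robots" content="noindex, nofollow">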
You should NEVER use both methods (robots.txt blocking and the noindex tag) at the same time: if robots.txt blocks a page, the crawler can never fetch it, so it will never see the noindex tag.
Hope that helps,
Sha
-
I agree. Best practices dictate that the proper answer is to block the entire folder from indexing.
-
Why not just NO INDEX / NO FOLLOW the page? What is the reason behind this? Do you want Google not to index your https page? Duplicate content? All checkouts have https.
-
I should have added that the rewrite code I posted goes in the .htaccess... it serves two different robots.txt files: robots_ssl.txt if the request comes in on port 443 (secure), and the normal robots.txt on any other port.
Is there any easier way? I feel like one misstep on this and I could block bots from my site.
-
Nope... thanks though. Code is no problem for us... it's just a technical question. Here is what I want:
I want to restrict robots from the HTTPS version (secure) of my site while leaving the HTTP version (unsecure) perfectly normal and accessible by bots.
Basically what I am asking is... is this the best way (below)? Is there a simpler way? To my knowledge robots.txt doesn't support protocols, so something like Disallow: https://... yada yada won't work.
RewriteEngine on
RewriteCond %{SERVER_PORT} ^443$
RewriteRule ^robots.txt$ robots_ssl.txt [L]
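For reference, robots_ssl.txt here would just carry the blanket block from Sha's answer:
User-agent: *
Disallow: /
And if you'd rather not match on the port number, I believe the same rule can key off Apache's HTTPS server variable instead (assuming your environment exposes it to mod_rewrite; it should be equivalent):
RewriteEngine on
RewriteCond %{HTTPS} =on
RewriteRule ^robots\.txt$ robots_ssl.txt [L]
-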
Hello Rick,
First caveat: I am not sure what you want to accomplish. You want it so that once the app is done, the person is no longer on https://? If that is it, then while I am not sure I will be able to help, I want to clarify the issue.
Currently, you have one page that is HTTPS: your loan app page at https://startuploans.org/secure/site/step1 (I did not get a step two in my test, but the next page was https://startuploans.org/secure/step3). You want a person to finish the app and then not be on HTTPS when they return to the site?
I am not a coder per se, but I am wondering: if you change the target on the menu link to the secure pages so they open in a new window, there would be no option to go back. Once finished, page 3 could offer an option like "close to secure my information". Then they are left at the page they were on before going to the application.
Now, if none of this was what you wanted, I owe you a beer.