Hi Shashank,
If I understand you correctly, once a user logs in they switch to HTTPS.
I have 2 answers for you, depending on your current configuration.
If you can view an HTTP version of your site without logging in (meaning you don't have to log in to view any important content on the site), then you would simply block crawlers from indexing the HTTPS version and allow them to index the HTTP version. This is done with a robots.txt file and .htaccess.
A common way is to check whether the request came in on port 443 (the HTTPS port) and, if so, rewrite the request for robots.txt to an HTTPS-specific robots.txt file.
"Then add the following lines to your .htaccess (in the root of your webhosting).
RewriteCond %{SERVER_PORT} ^443$
RewriteRule ^robots.txt$ robots_ssl.txt
If you don't have an .htaccess file, create a new one - be sure to put these 2 lines at the top of it:
Options +FollowSymLinks
RewriteEngine on "
REF: VN7.com Forums
USER WorldWideTrading
POST #2
LINK..
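For reference, the HTTPS-specific robots.txt (the robots_ssl.txt file named in the rewrite rule above) would typically disallow all crawling. A minimal sketch, assuming you want to block every crawler from the entire HTTPS version:

```
# robots_ssl.txt - served only for requests on port 443
User-agent: *
Disallow: /
```

Your regular robots.txt for the HTTP version can then stay as permissive as you like. You can verify the setup by fetching robots.txt over both protocols and confirming you get different files back.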
If your site forces a person to log in before they can view any content, then you have an issue. The problem is that a crawler has no way to log in to your site, so it cannot see (crawl) your content in order to index it.
It is normal for websites to have both HTTP and HTTPS versions. For example, you can view our e-commerce site over either protocol, and we require HTTPS for anything account-related. That said, we also block bots from viewing any content over HTTPS because we want them to index only the HTTP version.
One last thing.. Your site may have pages that are only HTTPS. Such pages are generally used to display or record personal information (a person's phone, email, or address) or to conduct financial transactions. These types of pages serve no value in being crawled, so there is no need to try to get them indexed.
Hope that helps and makes sense,
Don