Can I block HTTPS URLs using the Host directive in robots.txt?
-
Hello Moz Community,
Recently, I have found that Google's bots have started crawling the HTTPS URLs of my website, which is increasing the number of duplicate pages on our site.
Instead of creating a separate robots.txt file for the HTTPS version of my website, can I use the Host directive in robots.txt to tell Googlebot which version of the website is the original?
Host: http://www.example.com
I was wondering if this method will work and signal to Googlebot that the HTTPS URLs are a mirror of this website.
Thanks for all of the great responses!
Regards,
Ramendra -
Hi Ramendra,
To my knowledge, you can only provide directives in a robots.txt file for the domain on which it lives. This goes for both the HTTP/HTTPS and www/non-www versions of a domain. That's why it's important to handle your preferred domain format with redirects that point to your canonical version. So if you want http://www to be indexed, all other versions should redirect to it.
There might be a workaround of some sort, but honestly, redirecting to your preferred version as described above is the direction you should take. Then you can manage one robots.txt file, and your indexing will align better with what you want.
-
Thanks Logan,
I have read somewhere that by using the Host directive in the robots.txt file we can tell Googlebot which version of the website is the original when there are a number of mirror sites. So I was wondering if we can prevent indexing/crawling of HTTPS URLs by using the Host directive in the robots.txt of the HTTP site.
We are using an ecommerce SaaS platform for our website, where we have only one robots.txt file, which we can use for the HTTP site.
Is there any other way to prevent indexing/crawling of HTTPS URLs?
Regards,
Ramendra -
Hi Ramendra,
Based on what you said, it sounds like both versions of your site exist and are indexed, and you want to mitigate your duplicate content risk. If that's accurate, here are my recommendations:
- Robots.txt cannot be used on an HTTP site to prevent indexing/crawling of HTTPS URLs.
- Google crawls HTTPS by default, so if your site is fully secure, you need to redirect HTTP URLs to their HTTPS twins. This can be done with a blanket redirect rule in .htaccess; you don't need one-to-one redirects (see the sketch after this list).
- In addition to your HTTP-to-HTTPS redirects, you should also use canonical tags to signal your preferred version to search engines.
- Your HTTPS site should have its own robots.txt file.
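For reference, a minimal sketch of that blanket redirect rule, assuming an Apache server with mod_rewrite enabled (your SaaS platform may expose redirects differently, or not at all):
# .htaccess: send every HTTP request to its HTTPS twin with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
Each HTTPS page would then carry a self-referencing canonical tag, for example:
<link rel="canonical" href="https://www.example.com/some-page/" />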
Related Questions
-
Utilizing one robots.txt for two sites
I have two sites that are hosted in the same CMS. Rather than having two separate robots.txt files (one for each domain), my web agency has created one which lists the sitemaps for both sites, like this:
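Presumably the shared file lists a Sitemap line per domain, something like this hypothetical sketch (both domains are placeholders):
# one robots.txt referencing both sites' sitemaps
Sitemap: https://www.site-one.com/sitemap.xml
Sitemap: https://www.site-two.com/sitemap.xml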
-
Submitted URL has crawl issue - Submitted URL seems to be a Soft 404 - but all looks fine
Google Search Console is showing some pages as "Submitted URL has crawl issue", but they look fine to me. I have marked them as fixed, but after a month they were finally re-crawled and Google states the issue persists. Examples are:
https://www.rscpp.co.uk/counselling/175809/psychology-alcester-lanes-end.html
https://www.rscpp.co.uk/browse/location-index/889/index-of-therapy-in-hanger-lane.html
https://www.rscpp.co.uk/counselling/274646/psychology-waltham-forest-sexual-problems.html
There are also some marked "Submitted URL seems to be a Soft 404":
https://www.rscpp.co.uk/counselling/112585/counselling-moseley-depression.html
I also have more which are "pending", but again I couldn't see a problem with them in the first place. I'm at a bit of a loss as to what to do next. Any advice? Thanks in advance.
-
Is it a good idea to direct a new URL to a subfolder?
Hello everyone, I have a client who is in public relations. He wants to integrate SEO into his business and is hiring me to help. He purchased a domain that is separate from his existing website, but wants to keep the services and existing blog (which will include SEO) on his current site. For example, he has www.example.com and wants to add an SEO folder which will contain our services: www.example.com/prseo. The domain he purchased (as another example) is www.prseo.com, but he wants to direct it to the folder he has set up at example.com/prseo. Can anyone offer advice? Is this a good idea?
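A minimal sketch of that redirect, assuming prseo.com sits on an Apache server with mod_rewrite and using the example domains from the question:
RewriteEngine On
# send all requests for prseo.com (with or without www) to the subfolder on the main site
RewriteCond %{HTTP_HOST} ^(www\.)?prseo\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/prseo/$1 [L,R=301]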
-
WP URL issue - Concatenated URLs (LOTS of them)
WP is doing this somehow, and creating URLs for hundreds of pages that don't exist. HOW is this happening, and how do I stop it? I have many, many URLs like this: https://www.atouchofrust.com/terms-of-use/atouchofrust.com/vendor-news. Of note, atouchofrust.com/terms-of-use and atouchofrust.com/vendor-news are both legit pages on the site. Why they are being concatenated is beyond my limited understanding of WP. Please, somebody, help. Cori
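One common cause of concatenated URLs like this (an assumption here, not confirmed from the post) is an internal link whose href is missing its scheme, which browsers and crawlers then resolve relative to the current page:
<!-- resolved relative to /terms-of-use/, producing the concatenated URL -->
<a href="atouchofrust.com/vendor-news">Vendor News</a>
<!-- an absolute URL avoids the problem -->
<a href="https://www.atouchofrust.com/vendor-news">Vendor News</a>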
-
Robots.txt
Hello, my client has a robots.txt file which says this:
User-agent: *
Crawl-delay: 2
I put it through a robots checker which said that it must have a disallow command. So should it say this:
User-agent: *
Disallow:
Crawl-delay: 2
What effect (if any) would not having a disallow command make? Thanks
-
Do I have my robots.txt file set up properly
Hi, just doing some SEO on my site and I am not sure if I have my robots file set correctly. I use Joomla and my website is www.in2town.co.uk. Here is my robots file; does this look correct to you?
User-agent: *
Disallow: /administrator/
Disallow: /cache/
Disallow: /components/
Disallow: /includes/
Disallow: /installation/
Disallow: /language/
Disallow: /libraries/
Disallow: /media/
Disallow: /modules/
Disallow: /plugins/
Disallow: /templates/
Disallow: /tmp/
Disallow: /xmlrpc/
Many thanks
-
Blocking URLs with specific parameters from Googlebot
Hi, I've discovered that Googlebots are voting on products listed on our website, and as a result are creating negative ratings by placing votes from 1 to 5 for every product. The voting function is handled using Javascript, as shown below, and the script prevents multiple votes, so most products end up with a vote of 1, which translates to "poor". How do I go about using robots.txt to block a URL with specific parameters only? I'm worried that I might end up blocking the whole product listing, which would result in de-listing from Google and the loss of many highly ranked pages.
DON'T want to block: http://www.mysite.com/product.php?productid=1234
WANT to block: http://www.mysite.com/product.php?mode=vote&productid=1234&vote=2
Javascript button code: onclick="javascript: document.voteform.submit();"
Thanks in advance for any advice given. Regards,
Asim
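A hedged robots.txt sketch for this, assuming mode=vote appears only on voting URLs (Googlebot supports * wildcards in Disallow patterns; worth verifying with a robots.txt tester before deploying):
User-agent: *
# block any product.php URL that carries the vote parameter
Disallow: /*mode=vote
This leaves http://www.mysite.com/product.php?productid=1234 crawlable, since that URL never contains mode=vote.
-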
Subdomain Robots.txt
I have a subdomain (a blog) whose tag and category pages are being indexed when they should not be, because they are creating duplicate content. Can I block them using a robots.txt file? Do I need to have a separate robots file for my subdomain? If so, how would I format it? Do I need to specify that it is a subdomain robots file, or will the search engines automatically pick this up? Thanks!
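A robots.txt file only applies to the host it is served from, so the blog subdomain needs its own file at its root (e.g. blog.example.com/robots.txt, a placeholder name). A minimal sketch, assuming the tag and category archives live under /tag/ and /category/ (adjust to the actual URL structure):
User-agent: *
Disallow: /tag/
Disallow: /category/
No special subdomain declaration is needed; search engines automatically read the robots.txt of whichever host they are crawling.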