Negative impact on crawling after uploading a robots.txt file on HTTPS pages
-
I experienced a negative impact on crawling after uploading a robots.txt file on our HTTPS pages. You can find both URLs below:
Robots.txt File for HTTP: http://www.vistastores.com/robots.txt
Robots.txt File for HTTPS: https://www.vistastores.com/robots.txt
I have disallowed all crawlers for HTTPS pages with the following syntax:
User-agent: *
Disallow: /

Does that matter here? If I have done anything wrong, please give me more guidance on how to fix this issue.
-
Hi CP,
If you wish to use robots.txt to block crawlers, then your two robots.txt files should be as follows:
For your http protocol (http://vistastores.com/robots.txt):

User-agent: *
Allow: /

For the https protocol (https://vistastores.com/robots.txt):

User-agent: *
Disallow: /

Personally, I prefer to use the noindex meta tag for page blocking, because it is a more reliable way of ensuring that the pages are not indexed. (Never try to use both at once.) This link explains the difference between the two: [Google Webmaster Tools Help.](http://www.google.com/support/webmasters/bin/answer.py?answer=35302 "Robots blocking crawlers")

Hope that helps,

Sha

```
You can use a robots.txt file to request that search engines remove your site and prevent robots from crawling it in the future. (It's important to note that if a robot discovers your site by other means - for example, by following a link to your URL from another site - your content may still appear in our index and our search results. To entirely prevent a page from being added to the Google index even if other sites link to it, use a [noindex meta tag](http://www.google.com/support/webmasters/bin/answer.py?answer=61050).)
```
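As a quick spot-check, you can test what each of those rule sets blocks with Python's standard-library robots.txt parser. This is only a sketch: `urllib.robotparser` uses simple prefix matching rather than Google's exact longest-match logic, and the sample URL path is made up.

```python
from urllib.robotparser import RobotFileParser

# Rules for the http site: allow everything
http_parser = RobotFileParser()
http_parser.parse(["User-agent: *", "Allow: /"])
http_parser.modified()  # ensure the parser treats the rules as loaded

# Rules for the https site: block everything
https_parser = RobotFileParser()
https_parser.parse(["User-agent: *", "Disallow: /"])
https_parser.modified()

print(http_parser.can_fetch("*", "http://www.vistastores.com/any-page"))    # True
print(https_parser.can_fetch("*", "https://www.vistastores.com/any-page"))  # False
```

Keep in mind this only checks *crawling*; as the quoted help text explains, a page blocked by robots.txt can still end up indexed if other sites link to it.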
Related Questions
-
From http to https
Hi Guys, I have mixed http and https content on my ecommerce store. My server people are telling me to force everything to https, as that is better for the ssl certificate. All versions of the site are declared in Search Console. Can forcing https and removing the mixed content impact my site badly? Thanks.
Intermediate & Advanced SEO | Kepass0
-
Will disallowing URLs in the robots.txt file stop those URLs from being indexed by Google?
I found a lot of duplicate title tags showing in Google Webmaster Tools. When I visited the URLs that these duplicates belonged to, I found that they were just images from a gallery that we didn't particularly want Google to index. There is no benefit to the end user in these image pages being indexed in Google. Our developer has told us that these urls are created by a module and are not "real" pages in the CMS. They would like to add the following to our robots.txt file: Disallow: /catalog/product/gallery/ QUESTION: If these pages are already indexed by Google, will this adjustment to the robots.txt file help to remove them from the index? We don't want these pages to be found.
Intermediate & Advanced SEO | andyheath0
-
SSL and robots.txt question - confused by Google guidelines
I noticed "Don’t block your HTTPS site from crawling using robots.txt" here: http://googlewebmastercentral.blogspot.co.uk/2014/08/https-as-ranking-signal.html Does this mean you can't use robots.txt anywhere on the site - even parts of a site you want to noindex, for example?
Intermediate & Advanced SEO | McTaggart0
-
How to associate content on one page to another page
Hi all, I would like to associate content on "Page A" with "Page B". The content is not the same, but we want to tell Google it should be associated. Is there an easy way to do this?
Intermediate & Advanced SEO | Viewpoints1
-
HTTPS certificate expired. Website's https urls still in the index.
Hi Guys This week the security certificate of our website expired, and we now have to wait until next Tuesday for it to be re-instated. So our website is now indexed with the https urls, and we had to drop https from the site so that people would not be faced with the security-risk screen most browsers show, asking if you are sure you want to visit the site, because it is seen as untrusted. So we are now sitting with only the www urls. My question: what should we do to prevent Google from penalizing us, since if googlebot comes to crawl the https urls, there will be nothing there? I did re-submit the site to Google for crawling, but I guess it will take time before Google picks up that we now only want the www urls in the index. Can somebody please give me some advice on this? Thanks Dave
Intermediate & Advanced SEO | daveza0
-
Robots.txt: Syntax URL to disallow
Did someone ever experience "collateral damage" when disallowing some URLs? Some old URLs are still present on our website and, while we are cleaning them off the site (which takes time), I would like to avoid their indexation through the robots.txt file. The old URL syntax is "/brand//13" while the new one is "/brand/samsung/13" (note that there are two slashes in the old URL after the word "brand"). Do I risk erasing the new, good URLs from the SERPs if I add the line "Disallow: /brand//" to the robots.txt file? I don't think so, but thank you to everyone who can help me clear this up 🙂
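A robots.txt `Disallow` rule works by path-prefix matching, so `Disallow: /brand//` should only catch paths that literally contain the double slash. A sketch of checking this with Python's standard-library parser (the domain is a placeholder, and this parser uses plain prefix matching, which matches Google's behaviour for simple rules like this one):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /brand//"])
rp.modified()  # ensure the parser treats the rules as loaded

# The old-style URL with the double slash is blocked...
print(rp.can_fetch("*", "http://example.com/brand//13"))         # False
# ...while the new single-slash URLs remain crawlable.
print(rp.can_fetch("*", "http://example.com/brand/samsung/13"))  # True
```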
Intermediate & Advanced SEO | Kuantokusta0
-
Optimize the category page or a content page?
Hi, We wish to start ranking on a specific keyword ("log house prices" in Italian). We have two options for which page to optimize for this keyword: a long content page (1000+ words with images), or the log houses category page, optimized for the keyword (we have 50+ houses on this page, together with a short price summary). I would think that we have better chances of ranking with option no. 2, but then we can't use that page for ranking on a shorter-tail keyword (like "log houses"). What would you suggest? Is there maybe a third option?
Intermediate & Advanced SEO | JohanMattisson0
-
Block all but one URL in a directory using robots.txt?
Is it possible to block all but one URL with robots.txt? For example domain.com/subfolder/example.html: if we block the /subfolder/ directory, we want all URLs except for the exact match url domain.com/subfolder to be blocked.
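One common pattern (a sketch, not from the thread) is an `Allow` rule for the single URL combined with a `Disallow` for the directory; Google honours `Allow` and picks the most specific matching rule. Python's standard-library parser can spot-check the rules, with the caveat that it applies rules in file order (first match wins), so the `Allow` line must come first for it:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Allow: /subfolder/example.html",  # the one URL to keep crawlable
    "Disallow: /subfolder/",           # everything else in the directory
])
rp.modified()  # ensure the parser treats the rules as loaded

print(rp.can_fetch("*", "http://domain.com/subfolder/example.html"))  # True
print(rp.can_fetch("*", "http://domain.com/subfolder/other.html"))    # False
```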
Intermediate & Advanced SEO | nicole.healthline0