Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Robots.txt on http vs. https
-
We recently changed our domain from http to https. When a user enters any URL on http, there is a global 301 redirect to the same page on https.
I cannot find instructions about what to do with robots.txt. Now that https is the canonical version, should I block the http version with robots.txt?
Strangely, I cannot find a single resource about this...
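For reference, a global redirect like the one described is typically a single server rule; a minimal Apache sketch, assuming mod_rewrite (illustrative, not the poster's actual config):

```
RewriteEngine On
# Permanently (301) redirect every http request to the same path on https
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```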
-
Glad to be of help. Check out this Google link to confirm how the 180-day site-move crawl window works:
https://support.google.com/webmasters/answer/83106?hl=en
The second URL is helpful as well:
http://blog.raventools.com/moving-site-from-http-to-ssl/
All the best,
Tom
-
Good point with the backlinks! Currently, both robots.txt files are open, and Google does not seem to have canonicalization problems so far. So it makes sense to leave it this way anyway... Thanks, Thomas!
-
"Now that https is the canonical version, should I block the http version with robots.txt?"
Absolutely not. GWT will handle all of it. Think about the backlinks pointing to both the https:// and http:// URLs - you wouldn't want to cut off that flow of link juice by blocking them.
Remake robots.txt with
http://www.internetmarketingninjas.com/seo-tools/robots-txt-generator/
But use https:// for the XML sitemap.
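A minimal sketch of what such a robots.txt might look like, with the sitemap line pointing at the https version (the domain is a placeholder):

```
User-agent: *
Disallow:

# Reference the sitemap on the canonical https version
Sitemap: https://www.example.com/sitemap.xml
```
-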
Related Questions
-
Is there a limit to how many URLs you can put in a robots.txt file?
We have a site that has way too many URLs caused by our crawlable faceted navigation. We are trying to purge 90% of our URLs from the indexes. We put noindex tags on the URL combinations that we do not want indexed anymore, but it is taking Google way too long to find the noindex tags. Meanwhile, we are getting hit with excessive-URL warnings and have been hit by Panda. Would it help speed up the process of purging URLs if we added them to the robots.txt file? Could this cause any issues for us? Could it have the opposite effect and block the crawler from finding the URLs, but not purge them from the index? The list could be in excess of 100MM URLs.
Technical SEO | kcb8178
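A side note for illustration: robots.txt blocking of this kind stops the crawler from ever seeing the noindex tags, so already-indexed URLs can linger rather than being purged. A minimal sketch of the pattern blocking being considered - the facet parameter names are hypothetical:

```
User-agent: *
# Hypothetical facet parameters - substitute the site's real ones
Disallow: /*?color=
Disallow: /*?size=
Disallow: /*&sort=
```
-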
Redirecting HTTP to HTTPS - How long does it take Google to re-index the site?
Hello Moz, We know that this year Moz changed its domain from www.seomoz.org to moz.com; however, when you type "site:seomoz.org" you can still find old URLs indexed on Google (on page 7 and beyond). We also changed our site from http://www.example.com to https://www.example.com, and Google is indexing both sites even though we did proper 301 redirection via htaccess. How long would it take Google to refresh the index? Should we just not worry about it? Say we redirected our entire site - what is going to happen to those websites that copied and pasted our content? We have already DMCAed their webpages, but making our site https would mean that their website now looks more original than ours, and thus Google assumes that we have copied their site? (Google is very slow in responding to our DMCA complaint.) Thank you in advance for your reply.
Technical SEO | joony
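As a quick sanity check on a migration like this, the 301 can be verified from the command line (www.example.com is the placeholder domain from the question):

```
# Confirm the http version answers with a 301 and an https Location header
curl -sI http://www.example.com/ | grep -iE "^(HTTP|Location)"
```
-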
Should I block Map pages with robots.txt?
Hello, I have a website that was started in 1999. On the website I have map pages for each of the offices listed on my site, of which there are about 120. Each of the 120 maps is on a whole separate HTML page, with no content on the page other than the map. I know all of the offices love having the map pages, so I don't want to remove them. My question is: would these pages with no real content be hurting the rankings of the other pages on our site? And if so, should I block the pages with my robots.txt? Would I also have to remove these pages (in Webmaster Tools?) from Google for blocking by robots.txt to really work? I appreciate your feedback, thanks!
Technical SEO | imaginex
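If blocking were the chosen route, a minimal robots.txt sketch, assuming the map pages live under a hypothetical /maps/ directory:

```
User-agent: *
# Hypothetical directory holding the ~120 standalone map pages
Disallow: /maps/
```

Note that blocking only stops crawling; pages already in the index may also need a removal request (or a noindex tag applied before blocking) to actually drop out.
-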
Direct link vs 302 redirect
So we have recently relaunched a site that we manage. As part of this we have changed the domain. The web design agency that built the new site has implemented a direct link from the old domain to the new domain. What is best practice: a direct link or a 302 redirect? Thanks
Technical SEO | cbarron
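For a permanent domain change the standard signal is a 301, rather than a plain link or a 302; a minimal Apache sketch with a hypothetical new domain:

```
# mod_alias: permanently (301) redirect every path on the old domain
Redirect permanent / https://www.new-domain.example/
```
-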
Meta Description VS Rich Snippets
Hello everyone, I have one question: is there a way to tell Google to take the meta description for the search results instead of the rich snippets? I have already read some posts here on Moz, but found no answer. One post said that if you have keywords in the meta description, Google may take that information instead, but that's not the case here, as I do have keywords in the meta tags. The fact is that, this way, the descriptions are not compelling at all, as they were intended to be. Even if it's not worth anything for ranking, why does Google not at least allow websites to have their own descriptions in the search results? I understand that spam issues may be an answer, but this also penalizes non-spammy websites that might convert better with a much more compelling description than the snippet. What do you think? And is there any way to fix this problem? Thanks!
Technical SEO | socialengaged
Eugenio
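For reference, the description itself is just a meta tag, and Google has since introduced a data-nosnippet attribute (which postdates this thread) for keeping specific on-page text out of snippets - a sketch:

```html
<!-- The hand-written description Google may choose to display -->
<meta name="description" content="A compelling, hand-written summary of the page.">

<!-- Text inside data-nosnippet is excluded from search snippets, which can
     nudge Google toward the meta description instead -->
<div data-nosnippet>Boilerplate text not meant to appear in search results.</div>
```
-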
Root directory vs. subdirectories
Hello. How much more important does Google consider pages in the root directory relative to pages in a subdirectory? Is it best to keep the most important pages of a site in the root directory? Thanks!
Technical SEO | nyc-seo
-
Allow or Disallow First in Robots.txt
If I want to override a Disallow directive in robots.txt with an Allow command, do I put the Allow command before or after the Disallow command? Example:
Allow: /models/ford/*/*/page*
Disallow: /models/*/*/*/page*
Technical SEO | irvingw
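For Googlebot specifically, order does not matter: the most specific (longest) matching rule wins. A minimal sketch using the patterns from the question:

```
User-agent: *
# Order is irrelevant to Googlebot - the longer, more specific Allow rule
# overrides the broader Disallow either way.
Disallow: /models/*/*/*/page*
Allow: /models/ford/*/*/page*
```
-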
Robots.txt file getting a 500 error - is this a problem?
Hello all! While doing some routine health checks on a few of our client sites, I spotted that a new client of ours - whose website was not designed or built by us - is returning a 500 internal server error when I try to look at the robots.txt file. As we don't host or maintain their site, I would have to go through their head office to get this changed, which isn't a problem, but I just wanted to check whether this error will actually be having a negative effect on their site, and whether there's a benefit to getting it changed? Thanks in advance!
Technical SEO | themegroup
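For what it's worth, Google generally treats a robots.txt that returns a 5xx error as if crawling were fully disallowed, so it is worth chasing. A quick command-line check, with a placeholder domain:

```
# Print only the HTTP status code returned for the robots.txt file
curl -s -o /dev/null -w "%{http_code}\n" https://www.client-site.example/robots.txt
```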