Blocking https from being crawled
-
I have an ecommerce site where the HTTPS versions of some pages are being crawled. I'm wondering if the solution below will fix the issue.
Let's say www.example.com is my domain.
In the nav there is a login page, www.example.com/login, which redirects to https://www.example.com/login.
If I just disallowed /login in robots.txt, wouldn't that stop Google from following the redirect and indexing those HTTPS pages?
The redirect part is what I am questioning.
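For reference, the rule I have in mind is just a standard disallow in robots.txt (the /login path reflects my setup):
User-agent: *
Disallow: /login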
-
Correct. Once /login gets redirected to https://www.example.com/login, all nav links, etc., are HTTPS.
What I ended up doing was blocking /login in robots.txt, adding canonicals on the HTTPS pages, and nofollowing the /login link in the nav that triggers the redirect.
Will see what happens now.
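The nav link now looks roughly like this (markup simplified to the relevant attribute):
<a href="/login" rel="nofollow">Login</a>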
-
So, the "/login" page gets redirected to https: and then every link on that page goes secure and Google crawls them all? I think blocking the "/login" page is a perfectly good way to go here - cut the crawl path, and you'll cut most of the problem.
You could request removal of "/login" in Google Webmaster Tools, too. Sometimes, I find that robots.txt isn't great at removing pages that are already indexed. I would definitely add the canonical as well, if it's feasible. Cutting the path may not cut the pages that have already been indexed under https:.
Sorry, I'd actually reverse that:
(1) Add the canonicals, and let Google sweep up the duplicates
(2) A few weeks later, block the "/login" page
Sounds counter-intuitive, but if you block the crawl path to the https: pages first, then Google won't crawl the canonical tags on those versions. Use canonical to clean up the index, and then block the page to prevent future problems.
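For clarity, the canonical sits in the <head> of each https: page and points back at the http: version, along these lines (the URL is just an example):
<link rel="canonical" href="http://www.example.com/category/page/" />
Once Google has re-crawled those URLs and folded the https: duplicates into the http: versions, the robots.txt block keeps the problem from coming back.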
-
Gotcha. Yeah, I commented above that I was going to add a canonical as well as a meta noindex, but I was curious how it would handle the redirect that was happening.
Thanks for your help.
-
Yeah, I was going to nofollow the link in the nav and add a meta tag, but I was curious how robots.txt would handle this since the URL is a redirect.
Thanks for your input.
-
Are the pages being crawled under HTTPS also available under HTTP? If yes, can you just add a canonical tag on those pages pointing to the HTTP version? That should fix it. And if your login page is the entry point, your fix will help as well. But then, as Rebekah said, what if somebody is linking to your HTTPS pages? I would suggest looking into a canonical tag on these pages pointing to HTTP, if that makes sense and is doable.
-
You can disallow the HTTPS portion in robots.txt, but remember robots.txt isn't always a surefire way of keeping an area of your site from being crawled. If there is other important content to crawl from the secured page, be careful you are not blocking robots from it.
If the page is linked to from other places on the web, and those links don't include nofollow, search engines may still crawl it. Can you change the link in your navigation to nofollow as well? I would also add a meta noindex tag to the page itself, and a canonical tag on the HTTPS version pointing to the HTTP URL.
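Keep in mind that crawlers fetch robots.txt separately for HTTP and HTTPS, so to disallow only the secure side you would typically serve a different robots.txt to HTTPS requests. A minimal sketch, assuming an Apache server (the file name is just an example):
RewriteEngine On
RewriteCond %{HTTPS} =on
RewriteRule ^robots\.txt$ /robots_https.txt [L]
where robots_https.txt contains:
User-agent: *
Disallow: /
The meta noindex on the page itself is simply <meta name="robots" content="noindex"> in the <head>.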