Blocking https from being crawled
-
I have an ecommerce site where the https versions of some pages are being crawled. I'm wondering if the solution below will fix the issue.
Let's say www.example.com is my domain.
In the nav there is a login page, www.example.com/login, which redirects to https://www.example.com/login.
If I just disallowed /login in the robots.txt file, wouldn't crawlers stop following that redirect and indexing that stuff?
The redirect part is what I am questioning.
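For reference, a minimal sketch of the robots.txt rule being discussed here (the /login path is the one from the question; a real file may need additional rules for other areas of the site):

User-agent: *
Disallow: /login

Because a disallowed URL is never fetched, the crawler never sees the redirect that /login returns, which is what cuts off that crawl path to the https versions.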
-
Correct. Once /login gets redirected to https://www.example.com/login, all the nav links etc. are https.
What I ended up doing was blocking /login in robots.txt, adding canonicals on the https pages, and nofollowing the /login link in the nav that triggers the redirect.
Will see what happens now.
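For illustration, the nofollowed nav link might look something like this (the anchor text and exact markup are assumptions, not the site's actual code):

<a href="/login" rel="nofollow">Log in</a>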
-
So, the "/login" page redirects to https:, and then every link on that page goes secure and Google crawls them all? I think blocking the "/login" page is a perfectly good way to go here - cut the crawl path, and you'll cut most of the problem.
You could request removal of "/login" in Google Webmaster Tools, too. Sometimes I find that robots.txt isn't great at removing pages that are already indexed. I would definitely add the canonical as well, if it's feasible. Cutting the path may not cut the pages that have already been indexed with https:.
Sorry, I'd actually reverse that:
(1) Add the canonicals, and let Google sweep up the duplicates
(2) A few weeks later, block the "/login" page
Sounds counter-intuitive, but if you block the crawl path to the https: pages first, then Google won't crawl the canonical tags on those versions. Use canonical to clean up the index, and then block the page to prevent future problems.
-
Gotcha. Yeah, I commented above that I was going to add a canonical as well as a noindex meta tag, but I was curious how it would handle the redirect that was happening.
Thanks for your help.
-
Yeah, I was going to nofollow the link in the nav and add a meta tag, but I was curious how the robots.txt file would handle this, since the URL is a redirect.
Thanks for your input.
-
Are the pages being crawled under https the same pages that are available under http as well? If so, can you just add a canonical tag on those pages pointing to the http version? That should fix it. And if your login page is the entry point, your fix will help as well. But then, as Rebekah said, what if somebody is linking to your https pages? I would suggest you look into adding a canonical tag on these pages pointing to http, if that makes sense and is doable.
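As a rough sketch, each https page would carry a canonical link element pointing at its http counterpart (the /some-page/ path below is hypothetical):

<link rel="canonical" href="http://www.example.com/some-page/" />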
-
You can disallow the https portion in robots.txt, but remember that robots.txt isn't always a sure-fire way to keep an area of your site from being crawled. If you have other important content that should be crawled from the secured page, be careful you aren't blocking robots from it.
If this page is linked to from other places on the web, and those links don't include nofollow, search engines may still crawl the page. Can you change the link in your navigation to nofollow as well? I would also add a meta noindex tag to the page itself, and a canonical tag on the https version.
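A minimal sketch of that meta noindex tag, which would go in the head of the login page:

<meta name="robots" content="noindex" />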