Google Indexed the HTTPS version of an e-commerce site
-
Hi, I am working with a new e-commerce site. The way they are set up is that once you add an item to the cart, you'll be put onto secure HTTPS versions of the pages as you continue to browse.
Well, somehow this translated to Google indexing the whole site as HTTPS, even the home page. A couple of questions:
1. I assume that is bad or could hurt rankings, or at a minimum is not the best practice for SEO, right?
2. Assuming it is something we don't want, how would we go about getting the HTTP versions of pages indexed instead of the HTTPS ones? Do we need a rel=canonical on each page pointing to the HTTP version? Anything else that would help?
Thanks!
-
Let the people redirect to the non-HTTPS versions. What is the problem here? They won't lose items from their basket when redirected from the HTTPS to the HTTP version. And when they are checking out, the connection will remain secure via SSL, as the pages that need to be HTTPS won't redirect to non-HTTPS.
-
Hi Irving, thanks for your reply. That all makes sense to me except "noindex meta tag them". They are the same product pages whether they are HTTPS or HTTP, so I can't put 'noindex' exclusively on the HTTPS pages...
Or are you suggesting that I figure out some conditional code so that if the HTTPS version is called, it inserts a 'noindex'? (A rough sketch of that idea is below.)
Is there a reason nobody is suggesting rel=canonical to the HTTP version?
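For what it's worth, the "conditional code" idea boils down to checking the request scheme and only emitting the directive on HTTPS responses. Here is a minimal sketch of that logic, written as Python WSGI middleware purely for illustration; the actual cart platform almost certainly isn't Python, but the same check works in any templating or server layer, and Google treats an X-Robots-Tag response header the same way as a meta robots noindex tag.

```python
# Illustrative sketch only: the "conditional noindex" logic is the same on any
# platform, inspect the request scheme and add the directive only for HTTPS.

class NoindexOnHttps:
    """WSGI middleware that adds 'X-Robots-Tag: noindex' to HTTPS responses."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        is_https = environ.get("wsgi.url_scheme") == "https"

        def wrapped_start_response(status, headers, exc_info=None):
            if is_https:
                # X-Robots-Tag works like a <meta name="robots"> tag, but it can
                # be added at the server layer without touching page templates.
                headers = list(headers) + [("X-Robots-Tag", "noindex")]
            return start_response(status, headers, exc_info)

        return self.app(environ, wrapped_start_response)
```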
-
Block the HTTPS pages in robots.txt and noindex meta tag them.
Then make sure that all of the links coming off of the HTTPS pages are absolute HTTP links.
Your problem is probably relative links on the HTTPS pages getting spidered and staying HTTPS when they lead off the secure pages onto what should be the HTTP pages, if that makes sense.
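One quick way to sanity-check the "absolute links" point: fetch a secure page and flag any hrefs that carry no scheme of their own, since those inherit HTTPS when followed from an HTTPS page. A rough standard-library Python sketch, with a made-up placeholder URL:

```python
# Rough check for links on an HTTPS page that would keep a crawler on the
# secure version. Any href without an explicit scheme (including protocol-
# relative "//host/path" links) inherits https:// from the current page.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen


class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)


def scheme_inheriting_links(page_url):
    """Return hrefs that have no scheme of their own and so stay on HTTPS."""
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    return [h for h in collector.hrefs
            if not urlparse(h).scheme and not h.startswith("#")]


if __name__ == "__main__":
    # Hypothetical URL, purely for illustration.
    for href in scheme_inheriting_links("https://www.example-store.com/some-product"):
        print("inherits https:", href)
```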
-
301'ing the HTTPS versions would not work, because people who belong on the HTTPS versions (because they have something in their cart) would be force-redirected to the non-HTTPS version.
I'm thinking that rel=canonical to the HTTP version, along with the robots.txt rules you've suggested, may be the way to go.
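If it helps to picture it, the rel=canonical route means every page, whether it is served over HTTP or HTTPS, declares the HTTP URL as the canonical one. A tiny illustrative sketch of that mapping (the product URL is made up):

```python
# Sketch of the mapping a rel="canonical" approach implies: both protocol
# versions of a page declare the http:// URL as the canonical one.
from urllib.parse import urlsplit, urlunsplit


def canonical_target(url):
    """Return the http:// form of a URL, dropping any fragment."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    return urlunsplit(("http", netloc, path, query, ""))


# Hypothetical product URL, purely for illustration.
assert canonical_target("https://www.example-store.com/widget?colour=red") == \
    "http://www.example-store.com/widget?colour=red"
```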
-
1. It can create duplicate content issues and is not good SEO practice.
2. You can 301 redirect all the HTTPS versions to the HTTP versions and apply a meta robots "noindex, follow" to the handful of pages that need to stay HTTPS.
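A rough sketch of the redirect rule being described: HTTPS requests get a 301 back to the HTTP URL unless the path is one that genuinely needs to stay secure. The path prefixes below are made-up examples; a real cart platform would have its own.

```python
# Sketch of the suggested rule: 301 HTTPS requests back to HTTP, except for
# the handful of paths that must stay secure (those keep SSL and carry a
# meta robots "noindex, follow" instead). Prefixes are illustrative only.
SECURE_PREFIXES = ("/checkout", "/cart", "/customer/account")


def https_redirect_target(scheme, host, path, query=""):
    """Return the HTTP URL to 301 to, or None if the page should stay on HTTPS."""
    if scheme != "https":
        return None  # already on HTTP, nothing to do
    if any(path.startswith(prefix) for prefix in SECURE_PREFIXES):
        return None  # secure pages are left alone
    return f"http://{host}{path}" + (f"?{query}" if query else "")


# Example: a product page requested over HTTPS gets sent back to HTTP.
assert https_redirect_target("https", "www.example-store.com", "/widget") == \
    "http://www.example-store.com/widget"
assert https_redirect_target("https", "www.example-store.com", "/checkout/payment") is None
```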
-
Related Questions
-
Unknown index.html links coming to my site.
I'm getting a lot of domain/index.html URLs on my site which I didn't create initially. We recently transferred to a new site, so those links could come from the old site. Does anyone know how to get a comprehensive list of all the URLs that lead to a 404?
Intermediate & Advanced SEO | greenshinenewenergy0
-
Should I use noindex or robots to remove pages from the Google index?
I have a Magento site and just realized we have about 800 review pages indexed. The /review directory is disallowed in robots.txt, but the pages are still indexed. From my understanding, the robots.txt rule means the pages will not be crawled, but they can still be indexed if they are linked to from somewhere else. I can add the noindex tag to the review pages, but they won't be crawled. https://www.seroundtable.com/google-do-not-use-noindex-in-robots-txt-20873.html Should I remove the robots.txt rule and add the noindex, or just add the noindex to what I already have?
Intermediate & Advanced SEO | Tylerj0
-
Indexed Pages Different when I perform a "site:Google.com" site search - why?
My client has an ecommerce website with approx. 300,000 URLs (a lot of these are parameter URLs blocked from the spiders through the meta robots tag). There are 9,000 "true" URLs being submitted to Google Search Console, and Google says it is indexing 8,000 of them. Here's the weird part: when I do a "site:" search for the website in Google, it says Google is indexing 2.2 million pages for the URL, but I am unable to view past page 14 of the SERPs. It just stops showing results, and I don't even get a "the next results are duplicate results" message. What is happening? Why does Google say it is indexing 2.2 million URLs, but then won't show me more than 140 pages it is indexing? Thank you so much for your help. I tried looking for the answer and I know this is the best place to ask!
Intermediate & Advanced SEO | accpar0
-
2 Duplicate E-commerce sites Risk vs Reward
Hi guys, I have a duplicate content question I was hoping someone might be able to give me some advice on. I work for a small company in the UK, and in our niche we have a huge product range and an excellent website providing the customer with a very good experience. We're also backed up by a bespoke warehouse/logistics management system, further enhancing the quality of our product. We get most traffic through PPC, are not one of the biggest brands in the industry, and have to fight for market share. Recently we were approached by another company in our industry that has built up a huge and engaged audience over decades but can't logistically tap into their following to sell the products, so they have suggested a partnership. They are huge fans of what we do and basically want a copy of our site to be rebranded and hosted on a subdomain of their website, and we would then pay them a commission on all the sales the new site received. So two identical sites with different branding would exist. Based on tests they have carried out, we could potentially double our sales in weeks, and the potential is huge, so we are excited about the possibility. But how would we handle the duplicate content? Would we be penalised? Would just one of the sites be penalised? Or, if sales increase as much as we think they might, would it be worth a penalty, as our current rankings aren't great? Any advice would be great. Cheers, Richard
Intermediate & Advanced SEO | Rich_9950
-
Google Index Constantly Decreases Week over Week (for over 1 year now)
Hi, I recently started working with two products (one is community-driven content, the other is editorial content), but I've seen a strange pattern in both of them: the Google index constantly decreases week over week, and has done so for at least a year. Yes, the decrease increased 🙂 when the new mobile version of Google came out, but it was still declining before that. Has this ever happened to you? How did you find out what was wrong? How did you solve it? What I want to do is take the sitemap and look for its URLs in the index, to first determine which links are missing. The problem, though, is that the sitemap is huge (6M pages). Have you found a way to deal with such big index changes? Cheers, Andrei
Intermediate & Advanced SEO | andreib0
-
302 Redirect of www. version of a site - Pros/Cons
Hi, I am considering changing the 301 redirect on the domain to a temporary 302 redirect. Currently, if a user lands on the "www" version of the site, they are redirected to the non-"www" version. But after the change, they will land on an external webpage (so if a user lands on the "www" version, they are redirected to a different website, not related to my domain). The reason I'm considering this is that I have received a large number of spammy backlinks on the "www" version of the site (negative SEO), so I'm hoping that the temporary redirect will help me recover. Your thoughts on this: 1. Is this the best way to do a 302 redirect (redirecting the www version to an external domain)? 2. Will the redirect help the main domain recover, considering all the spammy backlinks point to the www version? 3. What are the pros/cons, if any? Thanks in advance for your thoughts! Howard
Intermediate & Advanced SEO | howardd0
-
Our login pages are being indexed by Google - How do you remove them?
Each of our login pages shows up under a different subdomain of our website. Currently these are accessible by Google, which is a huge advantage for competitors looking for our client list. We've done a few things to try to rectify the problem: added noindex/noarchive to each login page; added robots.txt to all subdomains to block search engines; and gone into Webmaster Tools, added the subdomain of one of our bigger clients, then requested to remove it from Google (this would be great to do for every subdomain, but we have a LOT of clients and it would require tons of backend work to make happen). Other than the last option, is there something we can do that will remove the subdomains from being shown by search engines? We know the robots.txt rules are working, since the message on the search results says: "A description for this result is not available because of this site's robots.txt – learn more." But we'd like the whole link to disappear. Any suggestions?
Intermediate & Advanced SEO | desmond.liang1
-
Translated site on same server as English version
Hi, we are in the process of getting our .com (English) website translated into Chinese. My question is: what are the pitfalls if the Chinese site is hosted on the SAME server as the English version, so that the server hosts both the .com and .com.cn versions? Thoughts? Thanks, Neil
Intermediate & Advanced SEO | NeilTompkins0