Google Indexed the HTTPS version of an e-commerce site
-
Hi, I am working with a new e-commerce site. The way it is set up, once you add an item to the cart you are put onto secure HTTPS versions of the pages as you continue to browse.
Well, somehow this translated into Google indexing the whole site as HTTPS, even the home page. A couple of questions:
1. I assume that is bad or could hurt rankings, or at a minimum is not best practice for SEO, right?
2. Assuming it is something we don't want, how would we go about getting the http versions of the pages indexed instead of the https ones? Do we need a rel=canonical on each page pointing to the http version? Anything else that would help?
Thanks!
-
Let people be redirected to the non-https versions; what is the problem here? They won't lose items from their basket when redirected from the https to the http version. And when they are checking out, the connection will remain secure via SSL, since the pages that need to be https won't redirect to non-https.
-
Hi Irving, thanks for your reply. That all makes sense to me except "noindex meta tag them". They are the same product pages whether they are https or http, so I can't put 'noindex' on just the https version...
Or are you suggesting that I figure out some conditional code so that if the https version is requested, it inserts a 'noindex'?
Is there a reason nobody is suggesting rel=canonical to the http version?
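If it came to that, I imagine the conditional version wouldn't have to touch every template. Here's a minimal sketch assuming a Node/Express stack (purely illustrative, since ours may differ): the X-Robots-Tag response header is treated by Google like a noindex meta tag, so setting it only on HTTPS requests would noindex just the secure copies.

```typescript
import express from "express";

const app = express();
// Assumption: a TLS-terminating proxy sits in front, so Express must read
// X-Forwarded-Proto for req.secure to be accurate.
app.set("trust proxy", true);

// Send a noindex signal only when the page was served over HTTPS.
// The X-Robots-Tag header behaves like a noindex meta tag, so the
// templates themselves never change.
app.use((req, res, next) => {
  if (req.secure) {
    res.setHeader("X-Robots-Tag", "noindex, follow");
  }
  next();
});
// (route handlers and server/TLS setup omitted)
```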
-
Block the https pages in robots.txt and noindex meta tag them.
Then make sure that all of the links coming off of the https pages are absolute http links.
Your problem is probably relative links on the https pages getting spidered and staying https when the crawler moves off the secure pages onto what should be http pages, if that makes sense.
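One detail that helps here: crawlers fetch robots.txt separately per protocol, so https://example.com/robots.txt is a different file from http://example.com/robots.txt, and you can serve a blocking version on the secure side only. A rough sketch, again assuming an Express-style server (illustrative only):

```typescript
import express from "express";

const app = express();
app.set("trust proxy", true); // assumption: proxy sets X-Forwarded-Proto

// Crawlers request robots.txt separately for http:// and https://, so the
// secure host can block all crawling while the plain host allows it.
app.get("/robots.txt", (req, res) => {
  res.type("text/plain");
  if (req.secure) {
    res.send("User-agent: *\nDisallow: /\n"); // block the https duplicates
  } else {
    res.send("User-agent: *\nDisallow:\n"); // allow everything on http
  }
});
```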
-
301'ing the https versions would not work across the board, because people who belong on the HTTPS versions (because they have something in their cart) would be force-redirected to the non-https version.
I'm thinking that rel=canonical to the http version, along with the robots.txt rules you've suggested, may be the way to go.
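The tag itself would be simple to generate so that it always points at the http copy, whichever protocol served the page. A hypothetical helper (the names are invented for the example):

```typescript
// Hypothetical template helper: build a canonical tag that always uses
// http://, regardless of the protocol the page was actually served over.
function canonicalTag(host: string, path: string): string {
  return `<link rel="canonical" href="http://${host}${path}" />`;
}

// Usage in a page template (Express-style values shown for illustration):
//   canonicalTag(req.hostname, req.path)
//   -> <link rel="canonical" href="http://www.example.com/product-page" />
```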
-
1. It can create duplicate content issues and is not good SEO practice.
2. You can 301 redirect all the https versions to the http versions and apply a meta robots "noindex, follow" to the handful of pages that need to remain https.
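As a sketch of what that selective 301 could look like, assuming an Express-style server and made-up path names, since every platform differs:

```typescript
import express from "express";

const app = express();
app.set("trust proxy", true); // assumption: proxy sets X-Forwarded-Proto

// Paths that must stay on SSL; the names are invented for illustration.
const SECURE_PATHS = ["/cart", "/checkout", "/account"];

// 301 https -> http for everything that does not need to be secure.
app.use((req, res, next) => {
  const needsSsl = SECURE_PATHS.some((p) => req.path.startsWith(p));
  if (req.secure && !needsSsl) {
    return res.redirect(301, `http://${req.hostname}${req.originalUrl}`);
  }
  next();
});
// If visitors with a full cart must stay on https while browsing, the
// middleware could also skip the redirect when a cart/session cookie is
// present (the cookie name depends entirely on the platform).
```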