Can you have an SSL cert but still have http?
-
I was under the impression that if you got an SSL cert for your site, the site would change to https. I ran this site, http://thekinigroup.com/, through an SSL checker and it said the site had a cert... but the site still loads over http.
1. Why didn't it change to https? Is there an extra step that needs to be taken?
2. Is there a reason someone would choose to get an SSL cert, but not have https?
Thanks,
Ruben
-
Absolutely! Yeah I'm just not sold on...
A) The amount of work it would take to transition each site
B) The risk of something going wrong
C) The actual ranking benefit/advantage it would produce
D) The value of it for our users
And for that reason... I'm out! lol
-
Oh yeah, I have seen that before. Okay, that makes sense. Thanks so much!
- Ruben
-
Hi Bryan,
It seems like a lot of knowledgeable people are not jumping on the SSL bandwagon just yet, so you're in good company. I appreciate the help.
Thanks,
Ruben
-
Hey Ruben!
We have several eCommerce sites that only have the SSL on pages that "need" to be secured, like the checkout pages. I am not totally buying into all the SEO value of HTTPS just yet, and honestly doubt I ever will.
IMHO, I don't feel that all pages of a site "NEED" to be secured. To me it just doesn't make sense, and we have plenty of sites ranking #1 for competitive terms that only use the SSL on checkout. Hope this helps!!!
-
They could have the SSL just for transactions. It was very common a few years ago (and many sites still do this) to serve the site over http and then, once you start into the transaction funnel (such as a purchase), switch over to https.
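For anyone wondering about the "extra step" in question 1: the cert on its own changes nothing. The server decides which pages redirect to https, so both the checkout-only setup described above and a full-site switch are just redirect rules. A minimal .htaccess sketch of the checkout-only pattern, assuming Apache and hypothetical path names:

# Force HTTPS on the transaction funnel only (the path names here are hypothetical)
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(checkout|cart)(/.*)?$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
# Send everything else back to plain http
RewriteCond %{HTTPS} on
RewriteCond %{REQUEST_URI} !^/(checkout|cart)
RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This is a sketch, not a drop-in config: redirecting https back to http was a deliberate choice some sites made at the time, not a recommendation.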
Related Questions
-
Can bad HTML code stop your website from ranking?
Hello, for example if I search for "Bike Tours in France", I am looking for a page with a list of tours in France. Does it mean that if my HTML doesn't have list markup (<ul>/<li>) in the code, but only markup that apparently doesn't have any semantic meaning for a search engine, my page won't rank because of that? Example on this page: https://bit.ly/2C6hGUn According to W3Schools: "A semantic element clearly describes its meaning to both the browser and the developer. Examples of non-semantic elements: <div> and <span> - tells nothing about its content. Examples of semantic elements: <form>, <table>, and <article> - clearly defines its content." Has anyone any experience with something similar? Thank you
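To make the contrast concrete, a minimal sketch with invented tour names (not taken from the linked page):

<!-- Non-semantic markup: generic containers say nothing about the content -->
<div>Loire Valley bike tour</div>
<div>Provence bike tour</div>

<!-- Semantic markup: the same tours as an explicit list -->
<ul>
  <li>Loire Valley bike tour</li>
  <li>Provence bike tour</li>
</ul>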
Technical SEO | seoanalytics
-
Google Search Console Site Map Anomalies (HTTP vs HTTPS)
Hi, I've just done my usual Monday morning review of a client's Google Search Console (previously Webmaster Tools) dashboard and was disturbed to see that for one client the Site Map section is reporting 95 pages submitted yet only 2 indexed (last time I looked, last week, it was reporting an expected level of indexed pages). It says the sitemap was submitted on the 10th March and processed yesterday. However, the 'Index Status' is showing a graph of growing indexed pages up to and including yesterday, where they numbered 112 (so it looks like all pages are indexed after all). Also, the 'Crawl Stats' section is showing 186 pages crawled on the 26th.
Then it's listing sub-sitemaps, all of which are non-HTTPS (http), which seems very strange since the site is HTTPS and has been for a few months now, and the main sitemap index URL is HTTPS: https://www.domain.com/sitemap_index.xml
The sub-sitemaps are:
http://www.domain.com/marketing-sitemap.xml
http://www.domain.com/page-sitemap.xml
http://www.domain.com/post-sitemap.xml
There are no 'Sitemap Errors' reported, but there are 'Index Error' warnings for the above post-sitemap, copied below:
"When we tested a sample of the URLs from your Sitemap, we found that some of the URLs were unreachable. Please check your webserver for possible misconfiguration, as these errors may be caused by a server error (such as a 5xx error) or a network error between Googlebot and your server. All reachable URLs will still be submitted."
Also, for the below sitemap URLs: "Some URLs listed in this Sitemap have a high response time. This may indicate a problem with your server or with the content of the page" for:
http://domain.com/en/post-sitemap.xml
https://www.domain.com/page-sitemap.xml
https://www.domain.com/post-sitemap.xml
I take it from all the above that the HTTPS sitemap is mainly fine, that despite the reported 0 pages indexed in the GSC sitemap section the pages are in fact indexed (as per the main 'Index Status' graph), and that somehow some HTTP sitemap elements have been accidentally attached to the main HTTPS sitemap and are causing these problems.
What's the best way forward to clean up this mess? Resubmitting the HTTPS sitemap sounds like the right option, but seeing as the master URL indexed is an https URL, I can't see it making any difference until the http aspects are deleted/removed. But how do you do that, or even check that that's what's needed? Or should Google just sort this out eventually? I see the graph in 'Crawl > Sitemaps > WebPages' is showing a consistent blue line of submitted pages, but the red line of indexed pages drops to 0 for 3-5 days every 5 days or so. So fully indexed pages are reported for 5-day stretches, then zero for a few days, then indexed for another 5 days, and so on!?
Many thanks, Dan
Technical SEO | Dan-Lawrence
-
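On the cleanup question, one common first step (alongside resubmitting the https sitemap) is to make sure every http URL, sub-sitemaps included, 301s to its https counterpart, so Google stops finding http versions at all. A minimal .htaccess sketch, assuming Apache and using the question's own placeholder domain:

# Redirect all http requests (pages and sitemap files alike) to https
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://www.domain.com%{REQUEST_URI} [R=301,L]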
Can someone interpret this entry in my htaccess file into plain English so that I can understand it?
There are a number of entries in my htaccess and I'd like to understand what they are doing so that I can tell whether they need to be there or not. So, can someone tell me what this says... in plain English?

# If the requested host is legacytravel.com (no www)...
RewriteCond %{HTTP_HOST} ^legacytravel.com$ [OR]
# ...or www.legacytravel.com...
RewriteCond %{HTTP_HOST} ^www.legacytravel.com$
# ...then 301-redirect requests for /carrollton-travel-agent to that page on www.legacytravel.com
RewriteRule ^carrollton-travel-agent$ "http://www.legacytravel.com/carrollton-travel-agent" [R=301,L]

Thank you a million times in advance.
Technical SEO | cathibanks
-
Cached pages still showing on Google
We noticed our QA site showing up on Google, so we blocked it in our robots.txt file. We still had an issue with Google crawling it, so we blocked the site from the public. Now Google is still showing a cached version from the first week in March. Do we just have to wait until they try to re-crawl the site to clear this out, or is there a better way to try to get these pages removed from results?
Technical SEO | aspenchicago
-
Can backlinks from advertising cause a traffic drop?
Hi, I recently noticed that our organic traffic has started to drop, and maybe coincidentally our AdWords traffic has increased. I was asked to investigate the drop. I know from the Google update that unnatural backlinks are penalized, so I thought the cause might be the backlinks from a site that we advertise on, because of the sheer number we have acquired from them in the last month. Would you think that would be the cause? If not, what could it be? And if it is, how do I go about correcting it as fast as possible? Any help with this would be greatly appreciated. Many thanks, Colin
Technical SEO | digital.moretogether.com
-
HTTPS pages still in the SERPs
Hi all, my problem is the following: our CMS (self-developed) produces https versions of our "normal" web pages, which means duplicate content. Our IT department put the noindex,nofollow meta robots tag on the https pages; that was like 6 weeks ago. I check the number of indexed pages once a week and still see a lot of these https pages in the Google index. I know that I may hit different data centers and that these numbers aren't 100% valid, but still... sometimes the number of indexed https pages even moves up. Any ideas/suggestions? Wait for a longer time? Or take the time and go to Webmaster Tools to kick them out of the index? Another question: for a nice query, one https page ranks No. 1. If I kick the page out of the index, do you think the http page will replace it at No. 1? Or will the ranking be lost? (It sends some nice traffic :-))... thanx in advance 😉
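For reference, the tag described above is presumably the standard meta robots tag, <meta name="robots" content="noindex, nofollow">, in the <head> of each https page. An equivalent server-level alternative, sketched here as an assumption about the setup (Apache 2.4 with mod_headers enabled), is to send the X-Robots-Tag header only over https:

# Send noindex,nofollow to crawlers only when the page is served over https
<If "%{HTTPS} == 'on'">
    Header set X-Robots-Tag "noindex, nofollow"
</If>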
Technical SEO | accessKellyOCG
-
How can I see if my website was penalized by Google?
Hello, I have a website, http://digitaldiscovery.eu, that I have been working on for 7 months. Everything is alright in the index of search engines like Google, Bing and Yahoo. I also have like 1000 visits a month, which is not bad for the topic I'm targeting in my country. However, my PageRank insists on staying at 0, and I really don't understand why. Some of my competitors that started at the same time already have a PageRank of 3, and they do not get the visitors that I do. In Alexa's ranking system I'm climbing very fast, and the visits to my website are growing. So why doesn't the PageRank climb as well?! Thanks in advance, Pedro M Pereira
Technical SEO | PedroM
-
301ed Pages Still Showing as Duplicate Content in GWMT
I thank anyone reading this for their consideration and time. We are a large site with millions of URLs for our product pages. We are also a textbook company, so by nature our products have two separate ISBNs: a 10 digit and a 13 digit form. Thus, every one of our books has at least two pages (a 10 digit and a 13 digit ISBN page). My issue is that we have established a 301 for all the 10 digit URLs so they automatically redirect to the 13 digit page. This fix has been in place for months. However, Google still reports that they are detecting thousands of pages with duplicate title and meta tags, and Google is referring to page URLs that I already 301ed to the canonical version many months ago! Is there anything that I can do to fix this issue? I don't understand what I am doing wrong. Example:
http://www.bookbyte.com/product.aspx?isbn=9780321676672
http://www.bookbyte.com/product.aspx?isbn=032167667X
As you can see, the 10 digit ISBN page 301s to the 13 digit canonical version. Google reports that they have detected duplicate title and meta tags between the two pages, and there are thousands of these duplicate pages listed. To add some further context: the ISBN is just a parameter that allows us to provide content when someone searches for a product with the 10 or 13 digit ISBN. The 13 digit version of the page is the only physical page that exists; the 10 digit is only a part of the virtual URL structure of the website. This is why I cannot simply change the title and meta tags of the 10 digit pages: they only exist in the sense that the URL redirects to the 13 digit version. Also, we submit a sitemap every day of all the 13 digit pages, so Google knows exactly what our physical URL structure is. I have submitted this question to the GWMT forums and received no replies.
Technical SEO | dfinn
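As a footnote on mechanics: an ISBN-10 cannot be rewritten into an ISBN-13 by pattern matching alone (a 978 prefix is added and the check digit has to be recomputed), so this kind of 301 usually needs either application code or a lookup table. A purely illustrative Apache sketch, assuming server-config access (RewriteMap is not allowed in .htaccess) and a hypothetical map file; on an ASP.NET site like this one, the same mapping would more likely live in the application or the IIS URL Rewrite module:

# httpd.conf / VirtualHost context, mod_rewrite enabled
RewriteEngine On
# isbn10-to-13.map is a hypothetical file holding lines like: 032167667X 9780321676672
RewriteMap isbnmap txt:/etc/apache2/isbn10-to-13.map
# If the query string is a 10-character ISBN, 301 to the 13 digit URL
# (a real config would also want a fallback for ISBNs missing from the map)
RewriteCond %{QUERY_STRING} ^isbn=([0-9]{9}[0-9Xx])$
RewriteRule ^/product\.aspx$ http://www.bookbyte.com/product.aspx?isbn=${isbnmap:%1} [R=301,L]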