Still Not Secure in Chrome
-
Hi
We migrated to HTTPS in November, but we still aren't showing as Secure.
I thought it was due to an insecure SHA-1 signature in the SSL certificate, so I am waiting to get this fixed.
We had a few outstanding HTTP links, which have since been updated, but we're still getting the issue.
Does anyone have an idea of what it could be? https://www.key.co.uk/en/key/
-
I'm surprised to say... that SSL certificate you have is very poor quality and has a number of pretty significant security issues, in addition to the SHA-1 signature problem.
To answer your specific question, there's nothing you or your devs can do about the SHA-1 signature problem, as it exists on one of the certificates in the chain that is owned and controlled by Thawte (the cert issuer, or "Certificate Authority"), not on your own certificate. It is up to them to fix it.
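If you want to verify which hash algorithm signed your own cert, here's a minimal sketch (assuming Python with the third-party `cryptography` package installed; the hostname is just this thread's example). Note it only inspects the leaf certificate; the SHA-1 issue described above is on an intermediate in the chain, which you can dump with `openssl s_client -connect www.key.co.uk:443 -showcerts` and inspect the same way.

```python
# Sketch: report the signature hash algorithm of a site's leaf certificate.
# Assumes the third-party "cryptography" package is installed.
import ssl

from cryptography import x509


def leaf_signature_hash(host: str, port: int = 443) -> str:
    # Fetch the server's leaf certificate as PEM text.
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
    # For a healthy modern RSA cert this should be "sha256", not "sha1".
    return cert.signature_hash_algorithm.name


if __name__ == "__main__":
    print(leaf_signature_hash("www.key.co.uk"))
```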
As you can see from the cert security scan, there are a number of other issues with the certificate that are unacceptable, especially in a paid certificate. [Edited for clarity: some of those warnings are likely server-specific, meaning the server is being allowed to communicate using the certificate in less-than-optimal ways.]
https://www.ssllabs.com/ssltest/analyze.html?d=www.key.co.uk
It's unlikely that the SHA-1 signature problem is what's giving the "not secure" warning on the site at the moment (although it will become a major issue later in February), so you'll need to keep looking for resources called over HTTP if you're still getting warnings.
When I had a quick look at the home page, I didn't see any more warnings, as it appears you've fixed the image call that Andrew mentioned. You can use Chrome or Firefox Dev Tools to inspect any pages that are not secure to see exactly which element is causing the failure. It often comes down to hardcoded images (including CSS background images) or hardcoded scripts. For example, your Quotations page is calling a script from Microsoft to validate the form, but it's failing because it's called over HTTP.
Knowing this, you'd want to check any other pages using such form validation. A thorough Screaming Frog crawl to look for any other wayward HTTP calls can also help dig out the remaining random culprits.
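If a full crawl is overkill for a quick check, something like this rough sketch (assuming Python with the `requests` and `beautifulsoup4` packages installed; the URL list is illustrative) can flag markup-level http:// references on a handful of pages. It won't catch CSS background images or script-injected calls, which is where Dev Tools still earns its keep.

```python
# Sketch: flag src/href attributes still pointing at http:// on given pages.
# Assumes the "requests" and "beautifulsoup4" packages are installed.
import requests
from bs4 import BeautifulSoup


def insecure_references(url: str) -> list[str]:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    found = []
    # Check the tag types that most commonly carry hardcoded resource URLs.
    for tag in soup.find_all(["img", "script", "link", "iframe", "source"]):
        ref = tag.get("src") or tag.get("href")
        if ref and ref.startswith("http://"):
            found.append(ref)
    return found


for page in ["https://www.key.co.uk/en/key/"]:  # add your other pages here
    for ref in insecure_references(page):
        print(page, "->", ref)
```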
Hope that helps!
Paul
Sidenote: Your certificate authority is Thawte, which is connected with Symantec. Symantec has done such a bad job of securing its certificates that Chrome and other browsers no longer trust them, and in the near future they are going to be officially distrusted and ignored. Symantec has in fact given up its Certificate Authority status and is transferring that business to a new company that does have a trusted infrastructure for issuing certificates. So you're going to need to deal with a new certificate in the not-too-distant future anyway.
Given the poor security of your existing cert, and the upcoming issues, if it were me, I'd be asking for a refund of my current cert, and replacing it with one from a more reliable issuer. I know that can mean a lot of extra work, but as these existing problematic certs go through the distrust process over the next 8 months, sites that haven't dealt with the issue are going to break.
It's possible that Thawte will build out a reliable process for migrating. At the very least, you need to have a strong conversation with your issuer about how to ensure you are getting the security and long-term reliability you've paid for. Sorry to be the bearer of bad news; this is a much bigger issue than a single browser warning. You can read up about it more here:
https://security.googleblog.com/2017/09/chromes-plan-to-distrust-symantec.html
-
Thank you.
Also, does anyone know whether we need to rekey the certificate to replace the SHA-1 signature algorithm, what we should rekey it with, or whether my dev team should know this?
-
I also got this report from https://www.whynopadlock.com
Soft Failure: An image with an insecure URL of "http://www.key.co.uk/img/W/KEY/F7/IC/F7-112H204-1-LX.jpg" was loaded on line 1 of https://www.key.co.uk/en/key.
Errors that are reported on line 1 are generally not part of the source code. This error may be caused by an external JavaScript file which is writing to the page; however, we are unable to reliably detect these scripts in our automated test.
Please contact us using the "Need Help?" link below if you need assistance with resolving this error.
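Since the report above suggests the image is injected by an external JavaScript file rather than present in the page source, a static HTML scan won't find it. One way to catch it is to load the page in a real browser and log every request it actually makes; here's a sketch (assuming Python with Playwright installed via `pip install playwright` followed by `playwright install chromium`).

```python
# Sketch: load a page in headless Chromium and log any plain-HTTP requests,
# including those triggered by scripts after the HTML is parsed.
# Assumes the "playwright" package and its Chromium build are installed.
from playwright.sync_api import sync_playwright


def insecure_requests(url: str) -> list[str]:
    found: list[str] = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # Record every outgoing request that is still plain HTTP.
        page.on(
            "request",
            lambda req: found.append(req.url)
            if req.url.startswith("http://")
            else None,
        )
        # Wait for the network to go idle so script-injected calls fire too.
        page.goto(url, wait_until="networkidle")
        browser.close()
    return found


if __name__ == "__main__":
    for url in insecure_requests("https://www.key.co.uk/en/key/"):
        print(url)
```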