The Domain Authority of one of my friend's websites is decreasing. What could be the reason?
-
Hello Guys,
The Domain Authority of one of my friend's websites has been decreasing since he moved the domain from HTTP to HTTPS.
There is another problem: his blog is in a subfolder that is still served over HTTP.
So, can you guys please tell me how to fix this issue? The site is also losing some rankings, dropping 2-5 positions.
Here is the website URL: myfitfuel.in/
Here is the blog URL: myfitfuel.in/mffblog/
First, check your redirect chains with a redirect checker, for example:
- http://www.redirect-checker.org/index.php
- http://www.contentforest.com/seo-tools/redirect-checker

See this screenshot: http://i.imgur.com/mIqqCla.png
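If you'd rather verify the chain from a script than a web tool, here is a minimal sketch; it assumes you have Python with the third-party requests package available, and the URL is just the domain from the question. Every HTTP URL should reach its HTTPS counterpart through a single 301 hop.

# A minimal redirect-chain checker. Assumes the third-party "requests" package
# (pip install requests); the URL is the domain from the question.
import requests

def show_redirect_chain(url):
    response = requests.get(url, allow_redirects=True, timeout=10)
    for hop in response.history:  # every intermediate redirect response
        print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
    print(response.status_code, response.url)  # final destination

show_redirect_chain("http://myfitfuel.in/")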
Redirecting all traffic to the www SSL domain

You can force all of your traffic to go to the www domain, and to use SSL, even if they did not request it initially.

# ensure www.
RewriteCond %{HTTP_HOST} !^www. [NC]
RewriteRule ^ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# ensure https
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Redirecting all traffic to the bare SSL domain
Sites with dedicated load balancers, or that have purchased a slot on the UCC certificate on shared load balancers, have the option of redirecting all traffic to the bare domain using the HTTPS protocol:
# Redirecting http://www.domain.com and https://www.domain.com to https://domain.com
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^(.*)$ https://%1%{REQUEST_URI} [L,R=301]
Redirecting http://domain.com to https://domain.com
RewriteCond %{HTTPS} off
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

An example of how the requests work
The preceding examples of how and when you would use a rewrite are complex; here's a breakdown of the scenarios, which may help you determine what your website really needs.
A security warning will occur on a bare domain only if the request specifically includes the https protocol, like https://mysite.com, and there's no SSL certificate on the load balancer that covers the bare domain. A request for http://mysite.com using the http protocol, however, will not produce a security warning, because a secure connection to the bare domain has not been requested.

| Domain | DNS record type | IP/Hostname |
| www.mysite.com | CNAME | dc-2459-906772057.us-east-1.elb.amazonaws.com |
| mysite.com | A | 123.45.67.89 |

For AWS ELB, www.mysite.com has a CNAME record that points to the hostname of the elastic load balancer (ELB), because that's where the SSL certificate is installed when it's uploaded using the self-service UI. But bare domains/non-FQDNs like mysite.com can't have CNAME records without something like Route 53, so the bare domain must point to the elastic IP address of the balancer pair behind the ELB.

If there's a redirect in the .htaccess file that takes all requests for the bare domain and redirects them to www, then, due to how the DNS records are set up, this is what happens if you request http://mysite.com:

1. The request for http://mysite.com hits the load balancers behind the ELB.
2. The .htaccess rule 301-redirects the request to https://www.mysite.com.
3. A new request for https://www.mysite.com hits the ELB where the certificate lives, and everything is happy, secure, and green.
But if a specific request is sent to https://mysite.com with the https protocol, here's what happens:

1. A request for https://mysite.com hits the load balancers behind the ELB.
2. Your browser displays the normal security warning.
3. You examine the certificate and decide to move ahead.
4. The .htaccess rule 301-redirects the request to https://www.mysite.com.
5. A new request for https://www.mysite.com hits the ELB where the cert lives, and everything is happy, secure, and green.
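If you want to sanity-check which record type each hostname actually resolves with (mirroring the table above), a quick standard-library Python sketch might look like the following; the hostnames are placeholders, not anything from a real setup.

# Quick check of how each hostname resolves, using only the standard library.
# The hostnames are placeholders. If the canonical name returned differs from
# the name queried, a CNAME (or CNAME chain) is in play; otherwise it is a
# plain A record.
import socket

for name in ("www.mysite.com", "mysite.com"):
    try:
        canonical, aliases, addresses = socket.gethostbyname_ex(name)
        record = "CNAME -> %s" % canonical if canonical != name else "A record"
        print(name, record, addresses)
    except socket.gaierror as error:
        print(name, "lookup failed:", error)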
Redirecting all HTTP traffic to HTTPS
In the following example, the server variable HTTP_X_FORWARDED_PROTO is set to https if you're accessing the website using HTTPS, so the following code will work with your site.

Redirect HTTP to HTTPS:

RewriteCond %{HTTPS} off
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Redirecting all HTTPS traffic to HTTP
In addition, if visitors to a customer's website are receiving insecure content warnings due to Google indexing documents using the HTTPS protocol, traffic may need to be redirected from HTTPS to HTTP.
The rule is basically the same as the preceding example, but without the first Rewrite condition. If no SSL certificate is installed, the value of %{HTTPS} is always set to off, even when you are accessing the website using HTTPS. Use the following rule set in this case.

Redirect HTTPS to HTTP:

RewriteCond %{HTTP:X-Forwarded-Proto} =https
RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Redirecting from a bare domain to the www subdomain
SSL certificates cannot cover the bare domain for websites unless you are using Route 53 or a similar provider. This is because the SSL certificates for Acquia Cloud Professional websites are placed on an Elastic Load Balancer (ELB). While ELBs require CNAME records for domain name resolution, bare domains require an IP address in an A record for the DNS configuration and cannot have CNAME records. Therefore, it's not possible to terminate traffic to bare domains on the ELB where your SSL certificate is located without Route 53.
Even if all requests for the bare domain are redirected to www, visitors to ELB websites who explicitly request the bare domain using the HTTPS protocol, like https://mysite.com, will always receive a security warning in their browser before being redirected to https://www.mysite.com. For a more detailed explanation of why this happens, refer to the "An example of how the requests work" section above.

Redirect http://domain.com to http://www.domain.com:

RewriteCond %{HTTP_HOST} !^www. [NC]
RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Redirecting all traffic to the www SSL domain (you want this!)
You can force all of your traffic to go to the www domain, and to use SSL, even if they did not request it initially.

# ensure www.
RewriteCond %{HTTP_HOST} !^www. [NC]
RewriteRule ^ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# ensure https
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Redirecting all traffic to the bare SSL domain
Sites on AWS dedicated load balancers, or that have purchased a slot on the UCC certificate on shared load balancers, have the option of redirecting all traffic to the bare domain using the HTTPS protocol:
Redirecting http://www.domain.com and https://www.domain.com to https://domain.com

RewriteCond %{HTTP_HOST} ^www.(.+)$ [NC]
RewriteRule ^(.*)$ https://%1%{REQUEST_URI} [L,R=301]

Redirecting http://domain.com to https://domain.com

RewriteCond %{HTTPS} off
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

As an example, if you wanted to ensure that all the domains were redirected to https://www. except for Acquia domains (acquia-sites.com), you would use something like this:

# ensure www.
RewriteCond %{HTTP_HOST} !prod.acquia-sites.com [NC] # exclude Acquia domains
RewriteCond %{HTTP_HOST} !^www. [NC]
RewriteRule ^ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# ensure https
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

elb 2.2.15 | intermediate profile | OpenSSL 1.0.1e | link
Oldest compatible clients : Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8, Android 2.3, Java 7
This Amazon Web Services CloudFormation template will create an Elastic Load Balancer which terminates HTTPS connections using the Mozilla recommended ciphersuites and protocols.
{ "AWSTemplateFormatVersion": "2010-09-09", "Description": "Example ELB with Mozilla recommended ciphersuite", "Parameters": { "SSLCertificateId": { "Description": "The ARN of the SSL certificate to use", "Type": "String", "AllowedPattern": "^arn:[^:]*:[^:]*:[^:]*:[^:]*:.*$", "ConstraintDescription": "SSL Certificate ID must be a valid ARN. http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-arns" } }, "Resources": { "ExampleELB": { "Type": "AWS::ElasticLoadBalancing::LoadBalancer", "Properties": { "Listeners": [ { "LoadBalancerPort": "443", "InstancePort": "80", "PolicyNames": [ "Mozilla-intermediate-2015-03" ], "SSLCertificateId": { "Ref": "SSLCertificateId" }, "Protocol": "HTTPS" } ], "AvailabilityZones": { "Fn::GetAZs": "" }, "Policies": [ { "PolicyName": "Mozilla-intermediate-2015-03", "PolicyType": "SSLNegotiationPolicyType", "Attributes": [ { "Name": "Protocol-TLSv1", "Value": true }, { "Name": "Protocol-TLSv1.1", "Value": true }, { "Name": "Protocol-TLSv1.2", "Value": true }, { "Name": "Server-Defined-Cipher-Order", "Value": true }, { "Name": "ECDHE-ECDSA-CHACHA20-POLY1305", "Value": true }, { "Name": "ECDHE-RSA-CHACHA20-POLY1305", "Value": true }, { "Name": "ECDHE-ECDSA-AES128-GCM-SHA256", "Value": true }, { "Name": "ECDHE-RSA-AES128-GCM-SHA256", "Value": true }, { "Name": "ECDHE-ECDSA-AES256-GCM-SHA384", "Value": true }, { "Name": "ECDHE-RSA-AES256-GCM-SHA384", "Value": true }, { "Name": "DHE-RSA-AES128-GCM-SHA256", "Value": true }, { "Name": "DHE-RSA-AES256-GCM-SHA384", "Value": true }, { "Name": "ECDHE-ECDSA-AES128-SHA256", "Value": true }, { "Name": "ECDHE-RSA-AES128-SHA256", "Value": true }, { "Name": "ECDHE-ECDSA-AES128-SHA", "Value": true }, { "Name": "ECDHE-RSA-AES256-SHA384", "Value": true }, { "Name": "ECDHE-RSA-AES128-SHA", "Value": true }, { "Name": "ECDHE-ECDSA-AES256-SHA384", "Value": true }, { "Name": "ECDHE-ECDSA-AES256-SHA", "Value": true }, { "Name": "ECDHE-RSA-AES256-SHA", "Value": true }, { "Name": "DHE-RSA-AES128-SHA256", "Value": true }, { "Name": "DHE-RSA-AES128-SHA", "Value": true }, { "Name": "DHE-RSA-AES256-SHA256", "Value": true }, { "Name": "DHE-RSA-AES256-SHA", "Value": true }, { "Name": "ECDHE-ECDSA-DES-CBC3-SHA", "Value": true }, { "Name": "ECDHE-RSA-DES-CBC3-SHA", "Value": true }, { "Name": "EDH-RSA-DES-CBC3-SHA", "Value": true }, { "Name": "AES128-GCM-SHA256", "Value": true }, { "Name": "AES256-GCM-SHA384", "Value": true }, { "Name": "AES128-SHA256", "Value": true }, { "Name": "AES256-SHA256", "Value": true }, { "Name": "AES128-SHA", "Value": true }, { "Name": "AES256-SHA", "Value": true }, { "Name": "DES-CBC3-SHA", "Value": true } ] } ] } } }, "Outputs": { "ELBDNSName": { "Description": "DNS entry point to the stack (all ELBs)", "Value": { "Fn::GetAtt": [ "ExampleELB", "DNSName" ] } } } }
- You can get managed Magento hosting here.
- https://www.armor.com/security-solutions/armor-complete/
- https://www.mgt-commerce.com/
- https://www.rackspace.com/en-us/digital/magento
- https://www.cogecopeer1.com/en/services/managed-it/ecommerce/magento/
- https://www.cogecopeer1.com/en/services/cloud/mission-critical/
- https://www.engineyard.com/magento
- https://www.cloudways.com/en/magento-managed-cloud-hosting.php
- https://www.rochen.com/magento-hosting/
- http://www.tenzing.com/ecommerce-hosting-2/magento-optimized-hosting-on-aws/
- https://www.siteground.com/dedicated-hosting.htm#tab-3
- https://www.siteground.com/cloud-hosting.htm#tab-2
- https://www.siteground.com/speed
-
May I ask: did your friend modify any of the site structure aside from adding HTTPS?
Make sure you have followed all the steps in Google's list (linked below) and the checklist further down. There are more resources if needed. Read what Google's John Mueller has to say on the subject of redirects.

Official Google moving-to-HTTPS how-to:
https://support.google.com/webmasters/answer/6033049
**Tools you can use**
- https://www.screamingfrog.co.uk/log-file-analyser/
- https://www.deepcrawl.com
- https://www.screamingfrog.co.uk/seo-spider/
**A very important checklist: make sure you work through the list below.**
SEO checklist to preserve your rankings
- Make sure every element of your website uses HTTPS, including widgets, JavaScript, CSS files, images, and your content delivery network.
- Use 301 redirects to point all HTTP URLs to HTTPS. This is a no-brainer to most SEOs, but you'd be surprised how often a 302 (temporary) redirect finds its way to the homepage by accident (see the spot-check sketch after this list).
- Make sure all canonical tags point to the HTTPS version of the URL.
- Use relative URLs whenever possible.
- Rewrite hard-coded internal links (as many as possible) to point to HTTPS. This is superior to pointing to the HTTP version and relying on 301 redirects.
- Register the HTTPS version in both Google and Bing Webmaster Tools.
- Use the Fetch and Render function in Webmaster Tools to ensure Google can properly crawl and render your site.
- Update your sitemaps to reflect the new URLs. Submit the new sitemaps to Webmaster Tools. Leave your old (HTTP) sitemaps in place for 30 days so search engines can crawl and "process" your 301 redirects.
- Update your robots.txt file. Add your new sitemaps to the file. Make sure your robots.txt doesn't block any important pages.
- If necessary, update your analytics tracking code. Most modern Google Analytics tracking snippets already handle HTTPS, but older code may need a second look.
- Implement HTTP Strict Transport Security (HSTS). This response header tells user agents to only access HTTPS pages, even when directed to an HTTP page. This eliminates redirects, speeds up response time, and provides extra security.
- If you have a disavow file, be sure to transfer over any disavowed URLs into a duplicate file in your new Webmaster Tools profile.
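As a rough companion to the list above (not an official tool, just a sketch assuming Python with the requests package), you can spot-check that HTTP URLs answer with a single 301 and that canonical tags point at HTTPS. The regex is a crude heuristic rather than a real HTML parser, and the URLs are the ones from the question.

# A rough spot-check for two items on the list above: that HTTP URLs answer with
# a single 301 (not a 302) pointing at HTTPS, and that the canonical tag points
# at the HTTPS version. Assumes the third-party "requests" package.
import re
import requests

def spot_check(url):
    first = requests.get(url, allow_redirects=False, timeout=10)
    location = first.headers.get("Location")
    print(url, "->", first.status_code, location)
    if first.status_code != 301 or not (location or "").startswith("https://"):
        print("  warning: expected a single 301 redirect to HTTPS")

    final = requests.get(url, timeout=10)  # follow redirects to the final page
    match = re.search(r'rel="canonical"[^>]*href="([^"]+)"', final.text)
    if match and not match.group(1).startswith("https://"):
        print("  warning: canonical tag does not point to HTTPS:", match.group(1))

for url in ("http://www.myfitfuel.in/", "http://www.myfitfuel.in/mffblog/"):
    spot_check(url)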
NGINX

Add the following to your Nginx config:

server {
    listen 80;
    server_name domain.com www.domain.com;
    return 301 https://domain.com$request_uri;
}

Apache

Add the following to your .htaccess file:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
**Here are some more extremely helpful resources**

- https://www.seroundtable.com/google-seo-http-to-https-migration-checklist-19268.html
It is not abnormal for a site to see a dip in rankings or search visibility after migration or a change of structure. I have a very regimented list that I stick to and have not seen anything dip for more than three days, but all sites are unique, and Google indexes all sites differently.
Depending on your domain authority, you may or may not have a higher crawl budget, and whether or not you tell Google you are making these changes will make an enormous difference in how quickly your site recovers or whether it sees a dip in traffic.
I hope this is helpful, and remember that Google has to reindex everything.
Thomas
-
It makes no sense that you would have your blog in a subfolder that was non-encrypted. Why did you choose to do this? I like a site to be 100% encrypted.

Please read the second post first.

http://www.myfitfuel.in/mffblog/ should be https://www.myfitfuel.in/mffblog/

Why not HTTPS?

If your hosting provider does not allow you to use HTTP/2, I suggest adding a WAF; for as little as $20 a month you can run your site on HTTP/2.

Now, the cost of Akamai might scare people just from hearing the name, but I can assure you there are very good pricing options now that companies are competing against them in the same area. One thing that, in my opinion, no other CDN/WAF company has is the number of points of presence (PoPs): Akamai exceeds 250.
https://community.akamai.com/community/web-performance/blog/2015/01/26/enabling-http2-h2-in-akamai
https://www.cloudflare.com/http2/
https://www.incapsula.com/cdn-guide/cdn-and-ssl-tls.html
When you switch your entire site over to HTTPS, you can then use the Google change of address tool and migrate your site to HTTPS.

The blog should be encrypted too; you don't need another certificate, and ideally you want to encrypt the entire site. Add it to Google Webmaster Tools four times:

- http://www.myfitfuel.in/
- http://myfitfuel.in/
- https://www.myfitfuel.in/
- https://myfitfuel.in/

Then choose the canonical version in Webmaster Tools: the site you want traffic to go to.
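Once the blog is being served over HTTPS, a rough way to catch leftover hard-coded http:// references (the usual cause of mixed-content warnings) is a sketch like the one below. It assumes Python with the third-party requests package, and the URL is the blog address recommended above.

# A rough mixed-content check: fetch a page over HTTPS and list any hard-coded
# http:// references in its src/href attributes. Assumes the third-party
# "requests" package; the regex is a simple heuristic, not a full HTML parser.
import re
import requests

page = requests.get("https://www.myfitfuel.in/mffblog/", timeout=10)
insecure = re.findall(r'(?:src|href)="(http://[^"]+)"', page.text)
for reference in sorted(set(insecure)):
    print("insecure reference:", reference)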
https://support.google.com/webmasters/topic/6029673?hl=en&ref_topic=6001951
https://www.deepcrawl.com/knowledge/best-practice/the-zen-guide-to-https-configuration/
https://www.deepcrawl.com/knowledge/best-practice/hsts-a-tool-for-http-to-https-migration/
**Here are some fantastic resources from https://mozilla.github.io/server-side-tls/ssl-config-generator/ for setting up your server. These things need to be put in place:**
Nginx 1.10.1 | intermediate profile | OpenSSL 1.0.1e | link
Oldest compatible clients : Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8, Android 2.3, Java 7
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    ssl_certificate /path/to/signed_cert_plus_intermediates;
    ssl_certificate_key /path/to/private_key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    ssl_dhparam /path/to/dhparam.pem;

    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;

    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;

    ## verify chain of trust of OCSP response using Root CA and Intermediate certs
    ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates;

    resolver <IP DNS resolver>;

    ....
}
Apache 2.4.18 | intermediate profile | OpenSSL 1.0.1e | link
Oldest compatible clients : Firefox 27, Chrome 30, IE 11 on Windows 7, Edge, Opera 17, Safari 9, Android 5.0, and Java 8
<VirtualHost *:443>
    ...
    SSLEngine on
    SSLCertificateFile /path/to/signed_certificate_followed_by_intermediate_certs
    SSLCertificateKeyFile /path/to/private/key

    # Uncomment the following directive when using client certificate authentication
    #SSLCACertificateFile /path/to/ca_certs_for_client_authentication

    # HSTS (mod_headers is required) (15768000 seconds = 6 months)
    Header always set Strict-Transport-Security "max-age=15768000"
    ...
</VirtualHost>

# intermediate configuration, tweak to your needs
SSLProtocol all -SSLv3
SSLCipherSuite ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
SSLHonorCipherOrder on
SSLCompression off
SSLSessionTickets off

# OCSP Stapling, only in httpd 2.3.3 and later
SSLUseStapling on
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off
SSLStaplingCache shmcb:/var/run/ocsp(128000)
After you change the architecture of any website, rankings normally take a bit of a dive. John Mueller stated that Google would not be punishing people for redirecting to encrypted sites, so while that may be true, it doesn't mean Google has figured out what is going on yet.
I think you need to get Google crawling your site and have it in Webmaster Tools, with all of the pages redirected to HTTPS, and add things like HSTS and HTTP/2 to speed up your site.
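If you want to confirm that HSTS and HTTP/2 are actually live once the changes are in place, here is a small sketch; it assumes Python with the third-party httpx package installed with HTTP/2 support (pip install "httpx[http2]"), and the URL is a placeholder for the migrated site.

# Quick check that HSTS and HTTP/2 are live. Assumes the third-party "httpx"
# package installed with HTTP/2 support; the URL is a placeholder.
import httpx

with httpx.Client(http2=True) as client:
    response = client.get("https://www.myfitfuel.in/")
    print("negotiated protocol:", response.http_version)
    hsts = response.headers.get("strict-transport-security")
    print("HSTS header:", hsts if hsts else "missing")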
Hope this helps,
Tom