Why are http and https pages showing different domain/page authorities?
-
My website www.aquatell.com was recently moved to the Shopify platform. We chose to stay on the http domain, because we didn't want to change too much, too quickly by moving to https; only our shopping cart uses the https protocol. We noticed, however, that https versions of our non-cart pages were being indexed, so we created canonical tags to point the https version of each page to the http version (a sketch of what such a tag looks like follows below). What's got me puzzled, though, is that when I use Open Site Explorer to look at domain/page authority values, I get different scores for the http vs. https version, and the https version is always better. Example: http://www.aquatell.com DA = 21 and https://www.aquatell.com DA = 27. Can somebody please help me make sense of this? Thanks,
-
Laurie
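For reference, a canonical tag of the kind described above is a single line in the <head> of the https copy of a page, pointing at the absolute http URL of that same page. A minimal sketch (the page path here is a hypothetical example, not an actual Aquatell URL):

<link rel="canonical" href="http://www.aquatell.com/pages/example-page" />

Note that the href should be the full absolute URL, including the http protocol; a relative href, or one that keeps the https protocol, would defeat the purpose.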
It should be clarified that Moz's Domain Authority, while a really solid metric, is not a metric Google calculates or uses, and Domain Authority can have a few artificial quirks. So I would not be alarmed at all.
That said, can you explain where you are seeing the two different numbers? I see a Page Authority of 39 for both http and https, and I see a Domain Authority of 27 for both http and https.
Now, even if Moz did show two different numbers for http and https, again, this is not what Google is doing; it's just an approximation.
Setting a canonical from https to http is just a band-aid, and I would not recommend that approach. I would recommend a site-wide 301 redirect, so that if a user lands on the https version of a URL they are redirected to the same page on http, or vice versa, whichever version you are prioritizing (a rough sketch follows below).
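Shopify doesn't expose server configuration, so the following is purely to illustrate the idea. On an Apache host, a site-wide 301 forcing everything onto http could look roughly like this .htaccess sketch (the specific rules are an assumption to adapt to your own stack, not something Shopify itself accepts):

# Force all traffic onto http with a single permanent redirect
RewriteEngine On
# If the request arrived over HTTPS...
RewriteCond %{HTTPS} on
# ...301-redirect to the same path on the http version of the site
RewriteRule ^(.*)$ http://www.aquatell.com/$1 [R=301,L]

To prioritize https instead, flip the condition to %{HTTPS} off and point the rule at https://www.aquatell.com/$1.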
I have to respectfully disagree with Dmytro and Robert: as mentioned, Moz's metrics are not Google's metrics, and the best action here is always to prioritize http or https with redirects.
-
Hi Laurie,
Absolutely - the answer is actually simpler than you might think. Google has explicitly stated that HTTPS is a ranking factor, one that benefits a domain over its HTTP equivalent. Check out this post on the Moz Blog by Cyrus Shepard:
https://moz.com/blog/ranking-factors-2015
I will quote from the blog itself:
"While page length, hreflang use, and total number of links all show moderate association with Google rankings, we found that using HTTPS has a very low positive correlation. This could indicate it's the "tie-breaker" Google claims."
Here's a link to the actual survey data:
https://moz.com/search-ranking-factors
Basically, HTTPS works as a tie-breaker: if all else is equal, a site using HTTPS will beat out a site using HTTP in the SERPs.
That is exactly the pattern your numbers show.
Hope this helps to make sense of the situation and let me know if you need anything else,
Rob
-
Hi,
It seems like Google might be treating http and https as two different websites. Have you specified the preferred website version in your Search Console?
Related Questions
-
Category pages, should I noindex them?
Hi there, I have a question about my blog that I hope you guys can answer. Should I noindex the category and tag pages of my blog? I understand they are considered duplicate content, but what if I try to work in the keyword of that category? What would you do? I am looking forward to reading your answers 🙂
On-Page Optimization | lucywrites0
-
Which is better? One dynamically optimised page, or lots of optimised pages?
For the purpose of simplicity, we have 5 main categories in the site - let's call them A, B, C, D, E. Each of these categories has sub-category pages, e.g. A1, A2, A3. The main area of the site consists of these category and sub-category pages. But as each product comes in different woods, it's useful for customers to see all the products that come in a particular wood, e.g. walnut. So many years ago we created 'woods' pages. These pages replicate the categories and sub-categories but only show what is available in that particular wood - and of course, they're optimised much better for that wood. All well and good, until recently, when these specialist pages seem to have dropped through the floor in Google. It could be temporary, I don't know, and it's only been a fortnight - but I'm worried. Now, because the site is dynamic, we could do things differently. We could still have landing pages for each wood, but instead of spinning off to their own specifically optimised wood sub-category pages, they could link to the primary sub-category page with a ?search filter in the URL. This way, the customer still gets to see what they want. Which is better: one page per sub-category, dynamically filtered by search, or lots of specific sub-category pages? I guess at the heart of this question is: does having lots of specific sub-category pages lead to a large overlap of duplicate content, and is it better to keep that authority juice on a single page, even if the URL changes (with a query in the URL) to enable whatever filtering we need to do?
On-Page Optimization | pulcinella2uk0
-
Redirect www.domain to domain
How do I redirect www.entrepreneurhandbook.co.uk to entrepreneurhandbook.co.uk? The latter is an existing site. Thanks, James
On-Page Optimization | entrepreneurhandbook1
-
Ranked page is not desired page
I have a question about a problem I am currently faced with. There is a certain keyword that my employer wants to rank for. The good news is that it sometimes ranks in the top 5 pages of Google (it drops in and out). The bad news is that it is ranking a page that we need to keep, but which is not the ideal place for people searching on that keyword to land. I was wondering if anyone has had experience with this type of situation and what tactic they used to get people to the better page.
On-Page Optimization | trumpfinc1
-
The SEOmoz on-page keyword analysis tool is not showing title or keyword in document
The SEOmoz on-page analysis tool is not showing the title or keyword for any page in one of my sites. It says there are no title elements on my page, but there are; I checked the source code myself, and my title and keywords are in there and show up fine in Firefox and Internet Explorer, even after I refresh. Why would this tool show them as missing in one of my sites but not others? I'm worried that Google's spider might not see them if the on-page analyzer doesn't see them, and my rankings might drop. They showed up fine in the SEOmoz on-page analyzer the other day, and I haven't changed anything. Thanks mozzers!
On-Page Optimization | Ron100
-
Best practice for Meta-Robots tag in categories and author pages?
For some of our sites we use WordPress, which we really like working with. The question I have is about the category and author pages (and similar pages), i.e. pages like http://www.domain.com/authors/. Should you or should you not use follow, noindex for meta-robots? We have a lot of categories/tags/authors, which generates a lot of pages. I'm a bit worried that Google won't like this, so I'm leaning towards adding follow, noindex. But the more I read about it, the more I see people disagree. What does the SEOmoz community think?
On-Page Optimization | Lobtec0
-
Link juice not passing from root domain to an internal page
Hi, I'm working on an SEO project for the site fagorarrasate.com. It has good root domain ranking, but it seems like the root domain isn't passing any of its link juice to other internal pages of the site, although there are links pointing to these sub-pages. For example, if I search in Google via http://www.google.com/search?aq=f&gcx=c&sourceid=chrome&ie=UTF-8&q=Stamping+System+and+Presses the first page that appears shows no page ranking, so it seems that the root domain is not passing on any of its ranking. Why could this be happening? Thanks, Jordi
On-Page Optimization | joralles0
-
Similar Keywords/Different Pages
My question is about my content strategy regarding keywords and page creation. For this example I will use the following keywords: "widget financing" "widget leasing" "widget loans" "thingy financing" "thingy leasing" "thingy loans" "whatchamacallit financing" "whatchamacallit leasing" "whatchamacallit loans" You get the idea. Now, I have created a separate page for each of these keywords - about 70 keywords and their respective pages. Although all of these describe the same thing, I have re-written each page; in other words, I didn't just use the same content and substitute the keywords. Each page is roughly 200 to 500 words. I do rank very well for most of these keywords. I would post some of the pages from my site here, but I didn't know if that is frowned upon. My fear/concern is whether I will get in trouble in a "post-Panda" world. Again, the pages rank very well; I just want to stay in Google's good graces going forward.
On-Page Optimization | leaseman0