Google indexing HTTPS sites by default now, where's the Moz blog about it?
-
Hello and good morning / happy Friday!
Last night an article from, of all places, VentureBeat, titled "Google Search starts indexing and letting users stream Android apps without matching web content," was sent to me. As I read it I got a bit giddy, since we had just implemented a full sitewide HTTPS certificate rather than a cart-only SSL.
I then quickly searched for other sources to see if this was indeed true, and the writing on the wall seems to indicate so.
Google Webmaster Central Blog: http://googlewebmastercentral.blogspot.in/2015/12/indexing-https-pages-by-default.html
http://www.searchenginejournal.com/google-to-prioritize-the-indexing-of-https-pages/147179/
http://www.tomshardware.com/news/google-indexing-https-by-default,30781.html
https://hacked.com/google-will-begin-indexing-httpsencrypted-pages-default/
https://www.seroundtable.com/google-app-indexing-documentation-updated-21345.html
I found it a bit ironic to read about this on mostly unsecured sites.
I wanted to share the eight key rules that Google will factor in when ranking/indexing HTTPS pages from now on, and see what you all think about this.
Google will now begin to index HTTPS equivalents of HTTP web pages, even when the former don't have any links pointing to them. However, Google will only index an HTTPS URL if it meets these conditions:
- It doesn’t contain insecure dependencies.
- It isn’t blocked from crawling by robots.txt.
- It doesn’t redirect users to or through an insecure HTTP page.
- It doesn’t have a rel="canonical" link to the HTTP page.
- It doesn’t contain a noindex robots meta tag.
- It doesn’t have on-host outlinks to HTTP URLs.
- The sitemap lists the HTTPS URL, or doesn't list the HTTP version of the URL.
- The server has a valid TLS certificate.
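If you want to sanity-check a page against the on-page conditions in that list, a rough stdlib-Python sketch (the example markup and hostnames are invented, and a real crawl would need to check much more) might look like this:

```python
from html.parser import HTMLParser

class HttpsReadinessParser(HTMLParser):
    """Flags the on-page issues from the checklist above: insecure
    dependencies, a rel=canonical pointing at HTTP, and a noindex
    robots meta tag."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        a = {name: (value or "") for name, value in attrs}
        # Insecure dependencies: scripts and images loaded over plain HTTP
        if tag in ("script", "img") and a.get("src", "").startswith("http://"):
            self.issues.append("insecure dependency: " + a["src"])
        if tag == "link":
            href = a.get("href", "")
            if a.get("rel") == "stylesheet" and href.startswith("http://"):
                self.issues.append("insecure dependency: " + href)
            # rel=canonical pointing back at the HTTP version
            if a.get("rel") == "canonical" and href.startswith("http://"):
                self.issues.append("canonical points to HTTP: " + href)
        # A noindex robots meta tag
        if tag == "meta" and a.get("name", "").lower() == "robots" \
                and "noindex" in a.get("content", "").lower():
            self.issues.append("noindex robots meta tag")

# Invented example page exhibiting two of the problems:
parser = HttpsReadinessParser()
parser.feed('<link rel="canonical" href="http://shop.example/">'
            '<meta name="robots" content="noindex">')
print(parser.issues)
```

Note that `startswith("http://")` won't match `https://` URLs, so only genuinely insecure references get flagged.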
One rule that confuses me a bit is:
- **It doesn't redirect users to or through an insecure HTTP page.**
Does this mean that if you just moved over to HTTPS from HTTP, your site won't pick up the HTTPS boost, since most sites have HTTP-to-HTTPS redirects in place?
Thank you!
-
Can you please give a concrete example of a keyword for which you don't rank well? Please also specify what, in your opinion, should appear in the search results, and the goal for the nextgenapk blog.
-
Thanks for your response, Peter! As I said, I could be totally wrong. Glad I asked this question!
Cheers!
-
_"Or you can leave them but change their links to pass through some URL shortener (bit.ly or t.co) until they come out with an HTTPS version."_
Looking at it from a technical standpoint, these shorteners are also not HTTPS when crawled. Wouldn't they have the same effect as any other non-HTTPS links?
Sorry, I could be totally wrong about this, and the question may not make sense at all.
-
Touché, good sir, these are certainly some great ways to go about this, especially number 3.
Thanks!
I wonder how long we've got until HTTP/2 implementation...
-
Or you can leave them but change their links to pass through some URL shortener (bit.ly or t.co) until they come out with an HTTPS version.
Or you can make a "partners" page where you link only to the HTTP external sites.
Or you can make an internal redirector page to the HTTP site, like HTTPS -> HTTPS (the internal redirector, a dummy page) -> HTTP. In this case the redirector won't be indexed, and that's why it's a dummy.
And these are just three ideas I came up with in a minute. My favorite is probably #3, but that's IMHO.
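A minimal sketch of what that dummy redirector page could look like, generated here with Python (the partner URL is invented). The noindex,nofollow meta keeps the dummy page itself out of the index:

```python
def dummy_redirector_page(target_url, delay=0):
    """Idea #3 sketched: an internal HTTPS "dummy" page that forwards to an
    external HTTP site via a meta refresh. Because the dummy carries
    noindex,nofollow, the HTTPS page linking to it has no direct HTTP
    outlink, and the dummy itself stays out of the index."""
    return (
        "<!DOCTYPE html><html><head>"
        '<meta name="robots" content="noindex,nofollow">'
        f'<meta http-equiv="refresh" content="{delay};url={target_url}">'
        "</head><body>"
        f'<p>Redirecting to <a href="{target_url}" rel="nofollow">{target_url}</a>&hellip;</p>'
        "</body></html>"
    )

print(dummy_redirector_page("http://old-partner-site.example/"))
```

A server-side redirect endpoint blocked in robots.txt would achieve the same thing; this is just the simplest static version of the idea.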
-
So if my manufacturers don't have HTTPS sites, should I remove the links to them, since they're going to hinder indexing?
Thanks for the response about HTTP redirecting to HTTPS.
-
Some sites come with redirectors or "beacons" for detecting user presence. For example: I'm on site X, page A, and I click a link to go to page B. But thanks to the marketing department, the click passes through an HTTP redirector or a plain HTTP URL (with a 301 redirect to HTTPS). Page B may then not get indexed.
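To illustrate the difference (hostnames invented): the rule forbids a crawled HTTPS URL sending users to or *through* an insecure hop, not the normal HTTP-to-HTTPS migration redirect. A tiny sketch of how I understand it:

```python
def chain_is_safe(chain):
    """`chain` is the ordered list of URLs a visitor passes through,
    starting with the URL Google crawls. The rule only bites when that
    first URL is HTTPS and a later hop drops back to plain HTTP."""
    if not chain[0].startswith("https://"):
        return True  # the rule is about HTTPS URLs being indexed
    # Every subsequent hop must stay on HTTPS
    return all(url.startswith("https://") for url in chain[1:])

# The normal migration redirect (HTTP -> HTTPS) is fine:
print(chain_is_safe(["http://example.com/", "https://example.com/"]))  # True
# An HTTPS page bouncing through an HTTP redirector is not:
print(chain_is_safe(["https://example.com/a", "http://tracker.example/r",
                     "https://example.com/b"]))  # False
```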
This means that once you set a sitewide 301 redirect to the encrypted connection, you must take a few more steps:
- you must check that all resources pass via the encrypted channel: images, CSS, JS, just everything.
- you must check that the canonical is set to HTTPS.
- you must check that links between pages are also HTTPS.
- you must check any 3rd-party tools for an encrypted connection: analytics software, "tracking pixels", heat maps, ads.
- you must check whether outgoing links from your site can go via sites with encryption, like Wikipedia, Moz, or Google. Since everything there is already encrypted, you will skip the frustrating HTTPS -> HTTP -> HTTPS jump too.
Only then can your site be indexed in HTTPS. It's a tricky procedure with many traps.
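For the first three checks, here is a quick-and-dirty sketch of rewriting same-host HTTP references (the hostname is invented; a real migration would also need to handle protocol-relative URLs, subdomains, and URLs hiding in CSS or JS):

```python
import re

def upgrade_internal_urls(html, host):
    """Rewrite same-host http:// references (resources, canonicals,
    internal links) to https://. Third-party URLs are left untouched;
    those need the manual checks in the last two steps above."""
    return re.sub(r"http://" + re.escape(host), "https://" + host, html)

page = '<a href="http://shop.example/cart"><img src="http://shop.example/logo.png"></a>'
print(upgrade_internal_urls(page, "shop.example"))
# <a href="https://shop.example/cart"><img src="https://shop.example/logo.png"></a>
```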