Google is indexing HTTPS sites by default now - where's the Moz blog about it?
-
Hello and good morning / happy Friday!
Last night someone sent me an article from, of all places, VentureBeat, titled "Google Search starts indexing and letting users stream Android apps without matching web content". As I read it I got a bit giddy, since we had just implemented a full sitewide HTTPS certificate rather than a cart-only SSL.
I then quickly searched for other sources to see if this was indeed true, and the writing on the wall seems to indicate so.
Google's own Webmaster Central Blog - http://googlewebmastercentral.blogspot.in/2015/12/indexing-https-pages-by-default.html
http://www.searchenginejournal.com/google-to-prioritize-the-indexing-of-https-pages/147179/
http://www.tomshardware.com/news/google-indexing-https-by-default,30781.html
https://hacked.com/google-will-begin-indexing-httpsencrypted-pages-default/
https://www.seroundtable.com/google-app-indexing-documentation-updated-21345.html
I found it a bit ironic to read about this on mostly unsecured sites.
I wanted to hear about the eight key rules that Google will factor in when ranking/indexing HTTPS pages from now on, and see what you all feel about this.
Google will now begin to index HTTPS equivalents of HTTP web pages, even when the former don’t have any links pointing to them. However, Google will only index an HTTPS URL if it meets the following conditions (a rough self-check script follows the list):
- It doesn’t contain insecure dependencies.
- It isn’t blocked from crawling by robots.txt.
- It doesn’t redirect users to or through an insecure HTTP page.
- It doesn’t have a rel="canonical" link to the HTTP page.
- It doesn’t contain a noindex robots meta tag.
- It doesn’t have on-host outlinks to HTTP URLs.
- The sitemap lists the HTTPS URL, or doesn’t list the HTTP version of the URL.
- The server has a valid TLS certificate.
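For anyone who wants to sanity-check their own pages against a few of these, here's a rough Python sketch I put together; the URL is a placeholder and the string-matching heuristics are deliberately naive, so treat it as a starting point rather than anything official:

```python
import requests

def spot_check(url):
    """Naively spot-check an HTTPS URL against a few of the eight conditions."""
    problems = []
    try:
        # requests verifies the TLS certificate by default
        # (condition: the server has a valid TLS certificate).
        resp = requests.get(url, allow_redirects=False, timeout=10)
    except requests.exceptions.SSLError as exc:
        return ["invalid TLS certificate: %s" % exc]

    # Condition: doesn't redirect users to or through an insecure HTTP page.
    if resp.is_redirect and resp.headers.get("Location", "").startswith("http://"):
        problems.append("redirects to an insecure HTTP page")

    html = resp.text.lower()
    # Condition: no noindex robots meta tag (crude substring test, fine for a sketch).
    if 'name="robots"' in html and "noindex" in html:
        problems.append("may carry a noindex robots meta tag")
    # Conditions: no rel=canonical pointing at HTTP, and no insecure
    # dependencies (scripts, styles, images loaded over plain HTTP).
    if 'src="http://' in html or 'href="http://' in html:
        problems.append("plain http:// references found in the markup")
    return problems

print(spot_check("https://www.example.com/"))  # hypothetical URL
```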
One rule that confuses me a bit is:
- **It doesn’t redirect users to or through an insecure HTTP page.**
Does this mean that if you just moved over to HTTPS from HTTP, your site won't pick up the HTTPS boost? After all, most sites have HTTP redirects to HTTPS.
Thank you!
-
Can you please give a concrete example of a keyword for which you don't rank well? Please also specify what, in your opinion, needs to appear in the search results, and the objective for the nextgenapk blog.
-
Thanks for your response, Peter! As I said, I could be totally wrong; glad I asked this question.
Cheers!
-
_"Or you can leave but change their links to pass some URL shortener - bit.ly or t.co until they comes with HTTPS version." _
looking at it from technical standpoint, these shortners are also not https (when crawling. Would they not have the same effect as other non https links?
Sorry, I could be going totally wrong about this and this question doesnt make sense at all.
-
Touché, good sir, these are certainly some great ways to go about this, especially number 3.
Thanks!
Wonder how long we've got until HTTP/2 implementation...
-
Or you can leave them but change their links to pass through some URL shortener - bit.ly or t.co - until they come out with an HTTPS version.
Or you can make a "partners" page where you link only to the HTTP external sites.
Or you can make an internal redirector page to the HTTP site, like HTTPS -> HTTPS (an internal redirector/dummy page) -> HTTP. In this case the redirector won't be indexed, which is why it's a dummy page. (A rough sketch of such a redirector follows after this list of ideas.)
And those are just three ideas I came up with in a minute. My favorite is probably #3, but that's IMHO.
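For what it's worth, here's a minimal sketch of what that dummy redirector (idea #3) could look like; Flask, the /out route, and the whitelist of partner URLs are all assumptions of mine, not anything prescribed by Google:

```python
from flask import Flask, abort, redirect

app = Flask(__name__)

# Hypothetical whitelist of the HTTP destinations you still need to link to.
ALLOWED_TARGETS = {
    "partner-a": "http://www.partner-a-example.com/",
    "partner-b": "http://www.partner-b-example.com/",
}

@app.route("/out/<slug>")
def out(slug):
    target = ALLOWED_TARGETS.get(slug)
    if target is None:
        abort(404)
    resp = redirect(target, code=302)
    # Tell crawlers not to index the redirector itself; pair this with a
    # "Disallow: /out/" rule in robots.txt so it stays a dummy page.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp
```

Internal links would then point at /out/partner-a instead of the HTTP URL directly, so the HTTPS pages themselves never carry plain HTTP outlinks.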
-
So if my manufacturers don't have HTTPS sites, I should remove the links to them, since they're going to hinder indexing?
Thanks for the response about HTTP redirecting to HTTPS.
-
Some sites come with redirectors or "beacons" for detecting user presence. For example: I'm on site X, page A, and I click a link to go to page B. But thanks to the marketing department, the click passes through an HTTP redirector or a pure HTTP URL (which then 301-redirects to HTTPS). As a result, page B may not get indexed.
This means that once you set up a sitewide 301 redirect to the encrypted connection, you must take a few more steps:
- you must check that all resources pass via this encrypted channel - images, CSS, JS, just everything (a quick script for this check is sketched after this list).
- you must check that the canonical is set to HTTPS.
- you must check that links between pages are also HTTPS.
- you must make sure any 3rd-party tools use an encrypted connection. This can be analytics software, "tracking pixels", heat maps, or ads.
- you must check whether outgoing links from your site can go to other sites via encryption - Wikipedia, Moz, Google, for example. Since everything there is already encrypted, you will skip the frustrating HTTPS -> HTTP -> HTTPS jump too.
Then your site can be indexed in HTTPS. It's a tricky procedure with many traps.
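A minimal sketch of how the first check in that list could be automated - the page URL is a placeholder and the tag/attribute coverage is deliberately partial, so treat it as a starting point rather than a complete mixed-content audit:

```python
from html.parser import HTMLParser
import requests

class MixedContentFinder(HTMLParser):
    """Collects resources and links still referenced over plain HTTP."""
    # Attributes that pull in sub-resources or link out: img/script src,
    # link/a href, object data, video poster. Deliberately partial coverage.
    RESOURCE_ATTRS = {"src", "href", "data", "poster"}

    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in self.RESOURCE_ATTRS and value and value.startswith("http://"):
                self.insecure.append((tag, value))

page = requests.get("https://www.example.com/", timeout=10)  # placeholder URL
finder = MixedContentFinder()
finder.feed(page.text)
for tag, url in finder.insecure:
    print("still plain HTTP: <%s> %s" % (tag, url))
```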