Google Indexing Of Pages As HTTPS vs HTTP
-
We recently updated our site to be mobile-optimized. As part of the update, we had also planned on adding SSL security to the site. However, a lot of our site pages use an iframe from a third-party vendor for real estate listings, and that iframe is not SSL-friendly (the vendor doesn't have a solution for that yet), so those iframes weren't displaying their content.
As a result, we had to shift gears and go back to plain HTTP rather than the HTTPS we were hoping for.
However, Google seems to have indexed a lot of our pages as HTTPS, and visitors who click those results get a security error. The new site launched about a week ago, and there was code in the .htaccess file that was pushing everything to www and HTTPS. I have since fixed the .htaccess file so it no longer forces HTTPS.
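For reference, the remaining rule now only forces www, roughly like the sketch below (Apache mod_rewrite; a simplified illustration rather than our exact file):

# Simplified, illustrative .htaccess sketch (Apache mod_rewrite) - not the exact production file.
# Force www, but no longer force HTTPS.
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]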
My question is: will Google "reindex" the site as HTTP once it recognizes the new .htaccess rules in the next couple of weeks?
-
That's not going to solve your problem, vikasnwu. Your immediate issue is that you have HTTPS URLs in the index, and searchers who click on them won't reach your site because of the security error warnings. The only way to fix that quickly is to get the SSL certificate and the redirect back to HTTP in place.
You've sent the search engines a number of very conflicting signals. Waiting while they try to work out which URLs they're supposed to use, and then waiting while they reindex them, is likely to cause significant traffic issues and ongoing ranking harm before the SEs figure it out for themselves. The whole point of what I recommended is that it doesn't depend on the SEs figuring anything out - you will have provided directives that force them to do what you need.
Paul
-
Remember that you can request indexing through Google Search Console.
-
Nice answer!
But you forgot to mention:
- Updating the sitemap files with the correct (HTTP) URLs - a minimal sketch follows this list
- Submitting them in Google Search Console
- You can even request indexing of specific URLs in Google Search Console
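Purely as an illustration (example.com is just a placeholder domain), a minimal sitemap listing the HTTP versions of the URLs might look like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- List the HTTP version of every URL; example.com is a placeholder -->
  <url>
    <loc>http://www.example.com/</loc>
  </url>
  <url>
    <loc>http://www.example.com/listings/</loc>
  </url>
</urlset>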
Thanks,
Roberto
-
Paul,
I just provided a solution to de-index the HTTPS version. I understood that to be what was wanted, as they need their client to fix their end. And of course there is no way to noindex by protocol. I do agree with what you are saying.
Thanks a lot for explaining further and providing other ways to help solve the issue. I'm inspired by users like you who help others and make this a great community.
GR.
-
I'm first going to see what happens if I just submit a sitemap with HTTP URLs, since there wasn't a sitemap in Webmaster Tools before. Will give you an update then.
-
Great! I'd really like to hear how it goes when you get the switch back in.
P.
-
Paul, that does make sense - I'll add the SSL certificate back, and then redirect from HTTPS to HTTP via the .htaccess file.
-
You can't noindex a URL by protocol, Gaston - adding noindex would eliminate the page from being returned as a search result regardless of whether it's requested over HTTP or HTTPS, essentially making those important pages invisible and wasting whatever link equity they may have. (You can't block by protocol in robots.txt either, in my experience.)
-
There's a very simple solution to this issue - and no, you absolutely do NOT want to artificially force removal of those HTTPS pages from the index.
You need to make sure the SSL certificate is still in place, then re-add the 301-redirect in the site's .htaccess file, but this time redirecting all HTTPS URLs back to their HTTP equivalents.
You don't want to forcibly "remove" those URLs from the SERPs, because they are what Google now understands to be the correct pages. If you remove them, you'll have to wait however long it takes for Google and other search engines to completely re-understand the conflicting signals you've sent them about your site. And traffic will inevitably suffer in that process. Instead, you need to provide standard directives that the search engines don't have to interpret and can't ignore. Once the search engines have seen the new redirects for long enough, they'll start reverting the SERP listings back to the HTTP URLs naturally.
The key here is that the SSL cert must stay in place. As it stands now, a visitor clicking a page in the search engine is trying to make an HTTPS connection to your site. If there is no certificate in place, they will get the harmful security warning. BUT you can't just put a 301-redirect in place without the certificate. The reason is that the initial connection from the SERP is coming in over the "secure channel". That connection must be negotiated securely first, before the redirect can even be read. If that first connection isn't secure, the browser will return the security warning without ever trying to read the redirect.
Having the SSL cert in place even though you're not running all pages under HTTPS means that first connection can still be made securely, then the redirect can be read back to the HTTP URL, and the visitor will get to the page they expect in a seamless manner. And search engines will be able to understand and apply authority without misunderstandings/confusion.
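As a rough illustration (assuming an Apache server with mod_rewrite - adapt it to your actual configuration), that redirect might look something like this:

# Illustrative sketch only (Apache mod_rewrite) - adjust for your own server setup.
# With the SSL certificate still installed, 301-redirect any HTTPS request back to HTTP.
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]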
Hope that all makes sense?
Paul
-
Nope, robots.txt works at the site level (protocol plus hostname). This means there has to be one file for the HTTP site and another for the HTTPS site.
And there is no need to wait until the whole site is indexed. Just to clarify, robots.txt itself does not remove pages that are already indexed; it only blocks bots from crawling a website and/or specific pages within it.
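Just to illustrate that they really are separate files (example.com is a placeholder, and whether you would actually want to block crawling is a separate question - see the other answers): the HTTPS site and the HTTP site each serve their own robots.txt, for example:

# Served at https://www.example.com/robots.txt - applies to the HTTPS site only (illustrative)
User-agent: *
Disallow: /

# Served at http://www.example.com/robots.txt - applies to the HTTP site only (illustrative)
User-agent: *
Disallow: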
-
GR - thanks for the response.
Given our site is just 65 pages, would it make sense to just list all of the site's "https" URLs in the robots.txt file as "noindex" now, rather than waiting for all the pages to get indexed as "https" and then removing them?
And then submit a sitemap to Webmaster Tools with the URLs as "http://"?
VW
-
Hello vikasnwu,
As what you are looking for is to remove those pages from the index, follow these steps:
- Allow the whole website to be crawlable in robots.txt
- Add the robots meta tag with the "noindex,follow" parameters (a sketch follows this list)
- Wait several weeks; 6 to 8 weeks is a fairly good timeframe, or just follow up on those pages
- When you get the result you want (all the desired pages de-indexed), re-block those pages with robots.txt
- DO NOT remove the meta robots tag.
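Purely as an illustration, the meta robots tag goes in the <head> of every HTTPS page you want dropped from the index:

<!-- Illustrative snippet: place inside the <head> of each page to be de-indexed -->
<meta name="robots" content="noindex,follow">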
Remember that http://site.com and https://site.com are different websites to Google.
When your client's website is fixed for HTTPS, follow these steps:
- Allow the whole website (or the parts you want indexed) to be crawlable in robots.txt
- Remove the robots meta tag
- 301-redirect HTTP to HTTPS (a sketch follows this list)
- Sit and wait.
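As a rough sketch (again assuming Apache with mod_rewrite - adapt as needed), that final redirect is simply the reverse of the HTTPS-to-HTTP one discussed above:

# Illustrative sketch (Apache mod_rewrite): once HTTPS is fully working,
# 301-redirect any plain-HTTP request to its HTTPS equivalent.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]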
Information about the redirection to HTTPS and a cool checklist:
The Big List of SEO Tips and Tricks for Using HTTPS on Your Website - Moz Blog
The HTTP to HTTPs Migration Checklist in Google Docs to Share, Copy & Download - AleydaSolis
Google SEO HTTPS Migration Checklist - SERoundtable
Hope I'm helpful.
Best luck.
GR.
-
Related Questions
-
How to get a large number of urls out of Google's Index when there are no pages to noindex tag?
Hi, I'm working with a site that has created a large group of URLs (150,000) that have crept into Google's index. If these URLs actually existed as pages, which they don't, I'd just noindex tag them and over time the number would drift down. The thing is, they created them through a complicated internal linking arrangement that adds affiliate code to the links and forwards them to the affiliate. GoogleBot would crawl a link that looks like it's to the client's own domain and wind up on Amazon or somewhere else with some affiliate code. GoogleBot would then grab the original link on the client's domain and index it... even though the page served is on Amazon or somewhere else. Ergo, I don't have a page to noindex tag. I have to get this 150K block of cruft out of Google's index, but without actual pages to noindex tag, it's a bit of a puzzler. Any ideas? Thanks! Best... Michael P.S. All 150K URLs seem to share the same URL pattern... exmpledomain.com/item/... so /item/ is common to all of them, if that helps.
Intermediate & Advanced SEO
-
Google not Indexing images on CDN.
My URL is: http://bit.ly/1H2TArH We have set up a CDN on our own domain: http://bit.ly/292GkZC We have an image sitemap: http://bit.ly/29ca5s3 The image sitemap uses the CDN URLs. We verified the CDN subdomain in GWT. The robots.txt does not restrict any of the photos: http://bit.ly/29eNSXv. We used to have a disallow to /thumb/ which had a 301 redirect to our CDN, but we removed both the disallow in the robots.txt as well as the 301. Yet, GWT still reports that none of our images on the CDN are indexed. The above screenshot is from the GWT of our main domain. The GWT from the CDN subdomain just shows 0. We did not submit a sitemap to the verified subdomain property because we already have a sitemap submitted to the property on the main domain name. While making a search of images indexed from our CDN, nothing comes up: http://bit.ly/293ZbC1. While checking the GWT of the CDN subdomain, I have been getting crawling errors, mainly 500-level errors - not that many in comparison to the number of images and traffic that we get on our website. Google is crawling, but it seems like it just doesn't index the pictures!? Can anyone help? I have followed all the information that I was able to find on the web, but our images on the CDN still can't seem to get indexed.
Intermediate & Advanced SEO
-
Google indexing pages from Chrome history?
We have pages that are not linked from the site, yet they are indexed in Google. It would be possible if Google got these pages from the browser. Does Google take data from Chrome?
Intermediate & Advanced SEO
-
How to Get Google to Recognize Your Pages Are Gone
Here's a quick background of the site and issue. A site lost half of its traffic over 18 months ago, and it's believed to be a Panda penalty. Many, many items were already taken care of and crossed off the list, but here's something that was recently brought up. There are 30,000 pages indexed in Google, but there are about 12,000 active products. Many of these pages in their index are out-of-stock items. A site visitor cannot find them by browsing the site unless he/she had bookmarked an item before, was given the link by a friend, read about it, etc. If they get to an old product because they had a link to it, they will see an out-of-stock graphic and not be allowed to make the purchase. So, efforts were made about 1 month ago to 301 old products to something similar, if possible, or 410 them. Google has not been removing them from the index. My question is how to make sure Google sees that these pages are no longer there and removes them from the index? Some of the items have links to them and this will help Google see them, but what about the items which have 0 external / internal links? Thanks in advance for your assistance. In working on a site which has about 10,000 items available for sale. Looking in G
Intermediate & Advanced SEO
-
Google + pages and SEO results...
Hi, Can anyone give me insight into how people are getting away with naming their business after the SEO search term, creating a BS Google+ page, and then having that page rank high in the search results? I am speaking specifically about the results you get when you Google: "Los Angeles DUI Lawyer". As you can see from my attached screenshot (I'm doing the search in Los Angeles), the FIRST listing is a Google+ business. Strangely, the phone number listed doesn't actually take you to a DUI attorney, but rather to some marketing group that never answers the phone. Can anyone give me insight into why Google even allows this? I just find it odd that Google cares so much about the user experience, but lets the first result be something completely misleading. I know it sounds like I'm just jealous (which I am, a little), but I find it disheartening that we work so hard on SEO, and someone takes the top spot with an obvious BS page.
Intermediate & Advanced SEO
-
Google Not Indexing XML Sitemap Images
Hi Mozzers, We are having an issue with our XML sitemap images not being indexed. The site has over 39,000 pages and 17,500 images submitted in GWT. If you take a look at the attached screenshot, 'GWT Images - Not Indexed', you can see that the majority of the pages are being indexed - but none of the images are. The first thing you should know about the images is that they are hosted on a content delivery network (CDN), rather than on the site itself. However, Google's advice suggests hosting on a CDN is fine - see the second screenshot, 'Google CDN Advice'. That advice says to either (i) ensure the hosting site is verified in GWT or (ii) submit in robots.txt. As we can't verify the hosting site in GWT, we had opted to submit via robots.txt. There are 3 sitemap indexes: 1) http://www.greenplantswap.co.uk/sitemap_index.xml, 2) http://www.greenplantswap.co.uk/sitemap/plant_genera/listings.xml and 3) http://www.greenplantswap.co.uk/sitemap/plant_genera/plants.xml. Each sitemap index is split up into often hundreds or thousands of smaller XML sitemaps. This is necessary due to the size of the site and how we have decided to pull URLs in. Essentially, if we did it another way, it may have involved some of the sitemaps being massive and thus taking upwards of a minute to load. To give you an idea of what is being submitted to Google in one of the sitemaps, please see view-source:http://www.greenplantswap.co.uk/sitemap/plant_genera/4/listings.xml?page=1. Originally, the images were SSL, so we decided to revert to non-SSL URLs as that was an easy change. But over a week later, that seems to have had no impact. The image URLs are ugly... but should this prevent them from being indexed? The strange thing is that a very small number of images have been indexed - see http://goo.gl/P8GMn. I don't know if this is an anomaly or whether it suggests there is no issue with how the images have been set up - thus, there may be another issue. Sorry for the long message, but I would be extremely grateful for any insight into this. I have tried to offer as much information as I can; however, please do let me know if this is not enough. Thank you for taking the time to read and help. Regards, Mark
Intermediate & Advanced SEO
-
Should I prevent Google from indexing blog tag and category pages?
I am working on a website that has a regularly updated WordPress blog and am unsure whether or not the category and tag pages should be indexable. The blog posts are often outranked by the tag and category pages, and they are ultimately leaving me with a duplicate content issue. With this in mind, I assumed that the best thing to do would be to remove the tag and category pages from the index, but after speaking to someone else about the issue, I am no longer sure. I have tried researching online, but there isn't anything that provides any further information. Please can anyone with experience of dealing with issues like this, or with any knowledge of the topic, help me to resolve this annoying issue? Any input will be greatly appreciated. Thanks Paul
Intermediate & Advanced SEO