Push for site-wide https, but all pages in index are http. Should I fight the tide?
-
Hi there,
This is my first Q&A question.
So I understand the problems caused by having a few secure pages on a site: a few links to the https version of a page and you have duplicate content issues.
While there are several posts here at SEOmoz about the different ways of dealing with secure pages, most of that content assumes the SEO's goal is to keep duplicate https pages out of the index.
The posts also suggest that https should only be used on login pages, contact forms, shopping carts, etc.
That's the root of my problem. I'm facing the prospect of switching to https across an entire site. In light of the other https-related content I've read, this might seem unnecessary or overkill, but there's a valid reason behind it.
I work for a certificate authority: a company that issues SSL certificates, the cryptographic files that make the https protocol work. So there's an obvious need for our site to "appear" protected, even if no sensitive data is being moved through the pages. The stronger push, however, stems from our membership in the Online Trust Alliance. https://otalliance.org/
Essentially, in the parts of the internet that deal with SSL and security, there's a push for all sites to use HSTS headers and force sitewide https. PayPal and Bank of America are leading the way in this initiative, and other large retailers, banks, etc. will no doubt follow suit. Regardless of how you feel about all that, the reality is that we're looking at a future that involves more privacy protection, more SSL, and more https.
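(For anyone who hasn't run into HSTS before: it's just a response header, Strict-Transport-Security, that tells browsers to refuse plain http for a site. Below is a minimal sketch, using only the Python standard library, that checks whether a site already sends it. The two URLs are simply the companies mentioned above and are illustrative placeholders; any https site can be substituted.)

```python
# A quick way to see an HSTS header in the wild, using only the Python
# standard library. The URLs are illustrative placeholders.
import urllib.request
import urllib.error

def hsts_header(url):
    """Fetch `url` and return its Strict-Transport-Security header, or None if absent."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.getheader("Strict-Transport-Security")
    except urllib.error.URLError as exc:
        return "request failed: {}".format(exc)

for site in ("https://www.paypal.com", "https://www.bankofamerica.com"):
    print(site, "->", hsts_header(site))
```

A site that enforces HSTS will typically return something like "max-age=31536000; includeSubDomains" here; a site that hasn't adopted it returns nothing.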
The bottom line for me: I have a site of ~800 pages that I will need to switch to https.
I'm finding it difficult to map the tips and tricks for keeping the odd pesky https page out of the index onto what amounts to a sitewide migration.
So, here are a few general questions.
- What are the major considerations for such a switch?
- Are there any less obvious pitfalls lurking?
- Should I even consider trying to maintain an index of http pages, or should I start work on replacing (or have Googlebot replace) the old pages with https versions?
- Is that something that can be done with canonicalization, or would something at the server level be necessary?
- How is that going to affect my page authority in general?
- What obvious questions am I not asking?
Sorry to be so long-winded, but this is a tricky one for me, and I want to be sure I'm giving as much pertinent information as possible.
Any input will be very much appreciated.
Thanks,
Dennis
-
Hi Dennis Lees,
I had to deal with something similar in the past: the website was about online donations and wanted to look secure.
All pages were 301 redirected to the https version and it didn't seem to affect their rankings.
If you are going to force sitewide https, I suggest 301 redirecting all http pages to their https versions; search engine spiders will do their job of crawling the new urls and replacing them in the search results.
Don't expect this to happen overnight! It will take some time, and you might see some rankings fluctuate quite a bit, but things should get back to normal, and it's definitely better than having duplicate content all over the place.
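For illustration only, here is a minimal sketch of the two moving parts described above: the blanket 301 to https, plus the HSTS header from the original question. It is written as a Flask app purely to show the logic; in practice both steps usually live in the Apache/nginx or load-balancer config rather than application code, and the max-age value and includeSubDomains flag are assumptions you would tune for your own rollout.

```python
# A minimal Flask sketch of the approach described above: 301 every
# plain-http request to its https twin, and send an HSTS header once
# the whole site is served over https. This is an illustration, not a
# drop-in config; most sites do both steps in the web server itself.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Any request that arrived over plain http gets a permanent (301)
    # redirect to the same path and query string on https.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    # HSTS tells browsers to skip http entirely on future visits.
    # max-age is in seconds (one year here); start smaller while testing.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def home():
    return "Served over https"
```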
Best regards,
Guillaume Voyer.
Related Questions
-
Should I use https schema markup after http-https migration?
Dear Moz community, I've noticed that several groups of websites, after an HTTP -> HTTPS migration, update their schema markup. For example:
{
  "@context": "http://schema.org",
  "@type": "WebSite",
  "name": "Your WebSite Name",
  "alternateName": "An alternative name for your WebSite",
  "url": "http://www.your-site.com"
}
becomes
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Your WebSite Name",
  "alternateName": "An alternative name for your WebSite",
  "url": "https://www.example.com"
}
Interesting to know, because the Moz website is on the https protocol but uses the http version of the markup. Looking forward to answers 🙂
Intermediate & Advanced SEO | admiral99
-
Mobile First Index: What Could Happen To Sites w Large Desktop but Small Mobile Sites?
I have a question about how Mobile First could affect websites with separate (and smaller) mobile vs desktop sites. Referencing this SE Roundtable article (seorountable dot com /google-mobile-first-index-22953.html): "If you have less content on your mobile version than on your desktop version - Google will probably see the less content mobile version. Google said they are indexing the mobile version first." But Google/Gary Illyes are also on the record stating the switch to mobile-first should be minimally disruptive. Does "Mobile First" mean that they'll consider desktop URLs "second", or will they actually just completely discount the desktop site in favor of the mobile one? In other words: will content on your desktop site that does not appear on mobile count in desktop searches? I can't find a clear answer anywhere (see also: /jlh-marketing dot com/mobile-first-unanswered-questions/). Obviously the writing is on the wall (and has been for years) that responsive is the way to go moving forward - but I'm just looking for any other viewpoints/feedback here, since it can be really expensive for some people to upgrade. I'm basically torn between "okay, we gotta upgrade to responsive now" and "well, this may not be as critical as it seems". Sigh... Thanks in advance for any feedback and thoughts. LOL - I selected "there may not be a right answer to this question" when submitting this to the Moz community. 🙂
Intermediate & Advanced SEO | mirabile
-
Linking to one of my own sites, from my site
Hi experts, I own a site for casting jobs (Site1) and a site for selling paintings (Site2). For a long time, I've had a link at the bottom of Site1 linking to Site2 (basically: Partnerlink: Link site 2). Site1 is for me the only important site, since it's where I'm making my monthly revenue. I added the link like 5 years ago or so, to try to boost Site2. My questions are:
1. Is it somehow bad for SEO for Site1, since the two sites have nothing to do with each other and are basically just both owned by me?
2. Would it make sense to link from Site2 to Site1 instead?
Intermediate & Advanced SEO | KasperGJ
-
My blog is indexing only the archive and category pages
Hi there Moz community. I am new to the Q&A and have a question. I have a blog. It's been live for months, but I cannot get the posts to rank in the SERPs. Oddly, only the categories rank. The posts are crawled, it seems, but seen as less important for a reason I don't understand. Can anyone here help with this? See here for what I mean. I have had several WP sites rank well in the SERPs, and the posts do much better than the categories or archives - super odd. Thanks to all for help!
Intermediate & Advanced SEO | walletapp
-
Big discrepancies between pages in Google's index and pages in sitemap
Hi, I'm noticing a huge difference between the number of pages in Google's index (using a 'site:' search) and the number of pages indexed by Google in Webmaster Tools (i.e. 20,600 in the 'site:' search vs 5,100 submitted via the dynamic sitemap). Does anyone know possible causes for this and how I can fix it? It's an ecommerce site, but I can't see any issues with duplicate content - they employ a very good canonical tag strategy. Could it be that Google has decided to ignore the canonical tag? Any help appreciated, Karen
Intermediate & Advanced SEO | Digirank
-
How can a page be indexed without being crawled?
Hey Moz fans,
In the Google getting started guide it says: "Note: Pages may be indexed despite never having been crawled: the two processes are independent of each other. If enough information is available about a page, and the page is deemed relevant to users, search engine algorithms may decide to include it in the search results despite never having had access to the content directly. That said, there are simple mechanisms such as robots meta tags to make sure that pages are not indexed."
How can that happen? I don't really get the point.
Thank you
Intermediate & Advanced SEO | atakala
-
Should all pages on a site be included in either your sitemap or robots.txt?
I don't have any specific scenario here, but I'm just curious, as I fairly often come across sites that have, for example, 20,000 pages but only 1,000 in their sitemap. If they only think 1,000 of their URLs are ones that they want included in their sitemap and indexed, should the others be excluded using robots.txt or a page-level exclusion? Is there a point to having pages that are included in neither and leaving it up to Google to decide?
Intermediate & Advanced SEO | RossFruin
-
Will thousands of redirected pages have a negative impact on the site?
A client site has thousands of pages with unoptimized URLs. I want to change the URL structure to make them a little more search-friendly. Many of the pages I want to update have backlinks to them and good PR, so I don't want to delete them entirely. If I change the URLs on thousands of pages, that means a lot of 301 redirects. Will thousands of redirected pages have a negative impact on the site? Thanks, Dino
Intermediate & Advanced SEO | Dino64