HTTPS in Rel Canonical
-
Hi,
Should I, or do I need to, use HTTPS (note the "S") in my canonical tags?
Thanks
Andrew
-
Thanks Alan, all done. So far so good. Thanks for your help.
-
Yeah, definitely agree - the how/why of using HTTPS in general is a much broader and more difficult question.
You said the first link was HTTP (not secure), but it looks like it redirects to a secure page. I'm not seeing any crawl issues, although I wonder if the combination of a footer link and the page looking like a lead-gen page is causing Google to ignore it. Honestly, though, it feels more like a technical issue; I'm not seeing any obvious red flags.
-
In the IIS control panel, find the folder "Secure", select SSL Settings from the main window, and tick "Require SSL"; visitors will now be forced to use HTTPS for that folder.
Next, if you haven't already, install URL Rewrite in IIS using the Web Platform Installer; it's best to grab the SEO Toolkit while you are there. Restart the IIS control panel after the install.
Select the site, then go to URL Rewrite,
click Add Rule,
select Blank Rule,
and fill it in as per the screenshots here (a sketch of the finished rule is below):
http://screencast.com/t/6qUxduZ7UxWz
http://screencast.com/t/cvivbdFsm
If you have any problems, get back to me; I did this without testing.
If you also installed the SEO Toolkit, you will see there are some ready-built rules at the bottom; see the tutorials here if needed: http://thatsit.com.au/seo/tutorials
Note: with the append/remove trailing slash rule, I always select remove, as when people type out your URL they never put a slash on the end.
When you're done, select the site again and have a play with the SEO Toolkit; do a scan on your site.
Let me know how you went.
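For reference, here is a rough sketch of what the finished rule might look like in web.config once URL Rewrite saves it. This is untested, and the rule name and the "Secure" folder path are assumptions taken from this thread, so adjust them to your site:

    <system.webServer>
      <rewrite>
        <rules>
          <!-- Sketch: 301 any plain-HTTP request under /Secure to its HTTPS equivalent -->
          <rule name="Force HTTPS for Secure folder" stopProcessing="true">
            <match url="^Secure/(.*)" ignoreCase="true" />
            <conditions>
              <!-- {HTTPS} is OFF when the request arrived over plain HTTP -->
              <add input="{HTTPS}" pattern="^OFF$" />
            </conditions>
            <action type="Redirect" url="https://{HTTP_HOST}/Secure/{R:1}" redirectType="Permanent" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>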
-
Hi Alan,
Thanks, we are using IIS. Could you please explain further how to do this? Do you think this may be the cause of Google not seeing and indexing the HTTPS page?
Thanks
Andrew
-
On a Microsoft IIS server you can require users to use HTTPS on a folder-by-folder basis. You seem to want to force pages to not use HTTPS; this can be done by writing a URL Rewrite rule (a sketch of such a rule is below).
If your site does not use HTTPS at all, then just remove the binding for SSL. If you have some HTTPS pages and some without, then you need to do the above.
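As a rough, untested illustration (assuming the /Secure folder discussed later in this thread is the only HTTPS section of the site), a URL Rewrite rule in web.config that forces everything outside that folder back to plain HTTP could look like this:

    <!-- Sketch: 301 HTTPS requests outside /Secure back to plain HTTP -->
    <rule name="Force HTTP outside Secure folder" stopProcessing="true">
      <match url="^(?!Secure/).*" ignoreCase="true" />
      <conditions>
        <!-- {HTTPS} is ON when the request arrived over SSL -->
        <add input="{HTTPS}" pattern="^ON$" />
      </conditions>
      <action type="Redirect" url="http://{HTTP_HOST}/{R:0}" redirectType="Permanent" />
    </rule>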
If you are using a Linux-type server then you will have to look it up; if you are using IIS, I can show you how to do this.
-
Hi
Thank you both for your responses. Alan, your point is very interesting. The main reason for asking the question is that we are desperately trying to find a solution to why our HTTPS page is not being indexed by Google six weeks after going live. There are two other SEOmoz posts by us that have not been able to answer this mystery:
www.seomoz.org/q/why-isn-t-google-indexing-our-site
www.seomoz.org/q/why-is-our-page-will-not-being-found-by-google
The HTTPS page in question, HTTPS://www.invoicestudio.com/Secure/invoiceTemplate, is in fact referenced via a link at the bottom of HTTP://www.invoicestudio.com (note: no "S").
Alan, could you please explain your answer further? I do not fully understand what you are saying, but it sounds like the HTTP link to the HTTPS page may be causing the issue, and we would like to explore that further to solve this long-standing issue that is very important to us.
Thanks
Andrew.
-
Dr Pete, as usual, is correct here, but I would ask a further question: is your page accessed via both HTTP and HTTPS? If so, I would make the page "HTTPS required" so that it is not, and use a 301 if you already have links to the HTTP version.
I work on Microsoft IIS servers, where this is very easy to do; I'm not sure how you do it on Linux.
-
If the canonical version of your URLs is secure (HTTPS), then yes - you should use absolute paths with "https://" in them for your canonical tags.
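For example, using the page discussed in this thread (and assuming the HTTPS URL is indeed the version you want indexed), a minimal canonical tag in the page's <head> would be:

    <!-- Canonical tag pointing at the HTTPS version of the page -->
    <link rel="canonical" href="https://www.invoicestudio.com/Secure/invoiceTemplate" />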
Related Questions
-
Domain Level Redirects - HTTP and HTTPS
About 2 years ago (well before I started with the company), we did an http=>https migration. It was not done correctly: the http=>https redirect was never inserted into the .htaccess file, so in essence we have 2 websites. According to Google Search Console, we have 19,000 HTTP URLs indexed and 9,500 HTTPS URLs indexed. I've done a larger-scale http=>https migration before (60,000 SKUs), and our rankings dropped significantly for 6-8 weeks even though we did it the right way, using sitemaps and both http and https GSC properties. Google came out recently and said that this type of rankings drop is normal for large sites. I need to set the appropriate expectations for management. Questions:
1. How badly is the domain split affecting our rankings, if at all? Our rankings aren't bad, but I believe we are underperforming our backlink profile.
2. Can we expect a net rankings gain when the smoke clears? There are a number of other technical SEO issues going on as well.
3. How badly will our rankings drop (temporarily), and for how long, when we add the redirect to the .htaccess file?
4. Is there a way to mitigate the rankings impact? For example, only submitting partial sitemaps to our GSC http property?
5. Has anyone gone through this before?
Intermediate & Advanced SEO | Satans_Apprentice
-
Canonical Query
If Google decides to ignore your canonical and indexes numerous versions, does that count as duplicate content? We've got a large number of canonicals being ignored by Google, so I'm just trying to gauge whether it's an issue or not.
Intermediate & Advanced SEO | ThomasHarvey
-
Moving to https: Double Redirects
We're migrating our site to https and I have the following question: we have some old URLs that we are 301ing to new ones. If we switch over to https, then we will be forced to do a double redirect for these URLs. Will this have a negative SEO impact? If so, is there anything that we can do about it?
Intermediate & Advanced SEO | YairSpolter
-
Cross Domain Rel Canonical tags vs. Rel Canonical Tags for internal webpages
Today I noticed that one of my colleagues was pointing rel canonical tags to a third-party domain on a few specific pages of a client's website. This was a standard rel canonical tag. Up to this point I haven't seen too many webmasters point a rel canonical to a third-party domain. However, after doing some reading on the Google Webmaster Tools blog, I realized that cross-domain rel canonicals are indeed a viable strategy to avoid duplicate content. My question is this: should rel canonical tags be written the same way when dealing with internal duplicate content vs. external duplicate content? Would a rel=author tag be more appropriate when addressing 3rd-party website duplicate content issues? Any feedback would be appreciated.
Intermediate & Advanced SEO | VanguardCommunications
-
HTTPS Certificate Expired. Website with https urls now still in index issue.
Hi Guys. This week the security certificate of our website expired, and basically we now have to wait until next Tuesday for it to be reinstated. So now, obviously, our website is indexed with the https URLs, and we had to drop the https from our site so that people will not be faced with a security-risk screen, which most browsers show to ask if you are sure you want to visit the site, because it's being seen as untrusted. So now we are basically sitting with the site URLs only being www... My question is: what should we do in order to prevent Google from penalizing us, since obviously if Googlebot comes to crawl these URLs, there will be nothing there? I did, however, resubmit the site to Google to crawl, but I guess it's going to take time before Google picks up that we now only want the www URLs in the index. Can somebody please give me some advice on this? Thanks, Dave
Intermediate & Advanced SEO | daveza
-
HTTPS moz.org untrusted - invalid cert
https://www.moz.com/ has an invalid cert, guys. This Connection is Untrusted: "You have asked Firefox to connect securely to www.moz.com, but we can't confirm that your connection is secure. Normally, when you try to connect securely, sites will present trusted identification to prove that you are going to the right place. However, this site's identity can't be verified. What Should I Do? If you usually connect to this site without problems, this error could mean that someone is trying to impersonate the site, and you shouldn't continue. www.moz.com uses an invalid security certificate. The certificate is only valid for moz.com (Error code: ssl_error_bad_cert_domain) If you understand what's going on, you can tell Firefox to start trusting this site's identification. Even if you trust the site, this error could mean that someone is tampering with your connection. Don't add an exception unless you know there's a good reason why this site doesn't use trusted identification."
Intermediate & Advanced SEO | irvingw
-
How to Remove Joomla Canonical and Duplicate Page Content
I've attempted to follow advice from the Q&A section. Currently on the site www.cherrycreekspine.com, I've edited the .htaccess file to help with 301s - all pages redirect to www.cherrycreekspine.com. Secondly, I've added the canonical statement in the header of the web pages. I have cut the duplicate page content in half... now I have a remaining 40 pages to fix up. This is my practice site to try to understand what SEOmoz can do for me. I've looked at some of your videos on YouTube... I feel like I'm scrambling around the Q&A and the internet to understand this product. I'm reading the beginners guide... any other resources would be helpful.
Intermediate & Advanced SEO | deskstudio
-
How do Rel=Prev & Rel=Next work for me?
I have implemented rel=prev & rel=next tags on my website. Here are example URLs to show what I mean: http://www.vistapatioumbrellas.com/market-umbrellas?limit=40&p=3 http://www.vistapatioumbrellas.com/market-umbrellas?limit=40&p=4 http://www.vistapatioumbrellas.com/market-umbrellas?limit=40&p=5 Previously, I had blocked the paginated pages in robots.txt with the following rule: Disallow: /*?p= I have now removed that disallow syntax from robots.txt for the paginated pages. But I'm confused about duplicate page titles: if you check all 3 pages, you will find the same page title across all of them, and I know that duplicate page titles are harmful for SEO. Will Google crawl and index all the paginated pages? If yes, which page will get the maximum benefit in organic rankings? Is there a specific approach that would help me solve this issue?
Intermediate & Advanced SEO | CommercePundit