HTTPS in Rel Canonical
-
Hi,
Should I, or do I need to, use HTTPS (note the "S") in my canonical tags?
Thanks
Andrew
-
Thanks Alan, all done. So far so good, thanks for your help.
-
Yeah, definitely agree - the how/why of using https in general is a much broader and more difficult question.
You said the first link was HTTP (not secure), but it looks like it redirects to a secure page? I'm not seeing any crawl issues, although I wonder if the combination of a footer link and the page looking like a lead-gen page is causing Google to ignore it. Honestly, though, it feels more like a technical issue, and I'm not seeing any red flags.
-
In the IIS control panel, find the Secure folder, select "SSL Settings" from the main window, and tick "Require SSL"; visitors will now be forced to use HTTPS for that folder.
Next, if you haven't already, use the Web Platform Installer to install URL Rewrite in IIS; it's best to grab the SEO Toolkit while you are there. Restart the IIS control panel after the install.
Select the site, then go to URL Rewrite.
Click "Add Rule(s)".
Select "Blank rule".
Fill it in as per the screenshots here:
http://screencast.com/t/6qUxduZ7UxWz
http://screencast.com/t/cvivbdFsm
If any problems get back to me. I did this without testing.
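For reference, the rule you build in the URL Rewrite UI ends up as XML in the site's web.config. A minimal sketch of a standard force-HTTPS redirect rule (the rule name is illustrative, and like the screenshots above this is untested against your specific site):

```xml
<!-- Sketch of a force-HTTPS rule as URL Rewrite would write it to web.config.
     Redirects any HTTP request to the same path on HTTPS with a 301. -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Force HTTPS" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <!-- Only fire when the request did NOT arrive over SSL -->
          <add input="{HTTPS}" pattern="^OFF$" />
        </conditions>
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

The `redirectType="Permanent"` gives you the 301, which is what you want for search engines rather than the default 302.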
If you installed the SEO Toolkit as well, you will see some ready-built rules at the bottom; see the tutorials here if needed: http://thatsit.com.au/seo/tutorials
Note: with the "append or remove trailing slash" rule, I always select "remove", as when people type out your URL they never put a slash on the end.
When you're done, select the site again and have a play with the SEO Toolkit; run a scan on your site.
Let me know how you went.
-
Hi Alan,
Thanks, we are using IIS. Could you please explain further how to do this? Do you think this may be the cause of Google not seeing and indexing the HTTPS page?
Thanks
Andrew
-
On a Microsoft IIS server you can require users to use HTTPS on a per-folder basis. You seem to want to force some pages not to use HTTPS; this can be done by writing a URL Rewrite rule.
If your site does not use HTTPS at all, then just remove the binding for SSL. If you have some HTTPS pages and some without, then you need to do the above.
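As a sketch of the kind of rule described above (hypothetical, assuming the secure content lives under a /Secure folder as in this thread): a web.config URL Rewrite rule that 301s HTTPS requests back to HTTP everywhere except the secure folder.

```xml
<!-- Hypothetical sketch only: push non-secure pages back to HTTP,
     leaving anything under /Secure/ on HTTPS. Folder name taken from
     this thread; adjust to your own structure. -->
<rule name="Force HTTP outside Secure folder" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <!-- Request arrived over SSL... -->
    <add input="{HTTPS}" pattern="^ON$" />
    <!-- ...but is not for the secure folder -->
    <add input="{URL}" pattern="^/Secure/" negate="true" />
  </conditions>
  <action type="Redirect" url="http://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
</rule>
```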
If you are using a Linux-type server then you will have to look it up; if you are using IIS, I can show you how to do this.
-
Hi
Thank you both for your responses. Alan, your point is very interesting. The main reason for asking the question is that we are desperately trying to find out why our HTTPS page is not being indexed by Google six weeks after going live. There are two other SEOmoz posts by us that have not been able to answer this "mystery":
www.seomoz.org/q/why-isn-t-google-indexing-our-site
www.seomoz.org/q/why-is-our-page-will-not-being-found-by-google
The HTTPS page in question, HTTPS://www.invoicestudio.com/Secure/invoiceTemplate, is in fact referenced via a link at the bottom of HTTP://www.invoicestudio.com (note: no "S").
Alan, could you please explain your answer further? I do not fully understand what you are saying, but it sounds like the HTTP link to the HTTPS page may be causing the issue, and we would like to explore this further to solve a long-standing problem that is very important to us.
Thanks
Andrew.
-
Dr Pete, as usual, is correct here, but I would ask a further question: is your page accessible from both HTTP and HTTPS? If so, I would make the page "Require SSL" so that it is not, and use a 301 if you already have links to the HTTP version.
I work on Microsoft IIS servers, where this is very easy to do; not sure how you do it on Linux.
-
If the canonical version of your URLs is secure (HTTPS), then yes: you should use absolute URLs with "https://" in them for your canonical tags.
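For example, using the page discussed in this thread, the tag in the page's head would look like:

```html
<!-- Absolute HTTPS URL in the canonical tag; URL taken from this thread -->
<link rel="canonical" href="https://www.invoicestudio.com/Secure/invoiceTemplate" />
```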