Drupal, http/https, canonicals and Google Search Console
-
I’m fairly new to an in-house role and am currently rooting around our Drupal website to improve it as a whole. Right now on my radar are our use of http/https, our canonicals, and our use of Google Search Console. Initial issues I've noticed:
- We serve http and https versions of all our pages
- Our canonical tags just refer back to the URL they sit on (apparently a default Drupal thing, which is not much use)
- We don’t actually have https properties added in Search Console/GA
I’ve spoken with the IT agency that migrated our old site to the current one, and they’ve recommended forcing all pages to https and pointing the canonicals at the https pages. That's fine in theory, but I don’t think it’s as simple as that, right? An old Moz post I found talked about running into issues with images/CSS/JavaScript referencing http – is there anything else to consider, especially from an SEO perspective?
I’m assuming that the appropriate certificates are in place, as the secure version of the site works perfectly well.
And on the last point – am I safe to assume we have just never tracked any traffic for the secure version of the site?
Thanks
John
-
OK, I gotcha now. You can submit the sitemap in all versions of Search Console; it won't hurt anything to have it referenced in multiple profiles of SC.
Another thing you can do to make sure crawlers find your XML is add this line to your robots.txt file:
Sitemap: http://yoursite.com/sitemap.xml
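As a rough illustration of where that directive sits (yoursite.com and the Disallow rule are just placeholders), a minimal robots.txt might look like the sketch below. The Sitemap line is independent of any User-agent group, and once you've gone all-HTTPS it should point at the secure URL:

```
User-agent: *
Disallow: /admin/

# Independent of the User-agent group above; update to https:// after the migration
Sitemap: http://yoursite.com/sitemap.xml
```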
-
Thanks so much, this is so helpful!
About the search console question, I may have confused you. This is what I mean: I have a www and non-www property of the website in Search Console (from before my time), which looks like this:
| Property | Sitemap |
| --- | --- |
| www | http://www.mysite.com/sitemap.xml |
| non-www | NO SITEMAP LINKED |
So we have a sitemap linked to the www property and nothing on the non-www property. The sitemap is actually located on the non-www version of the site, so I was wondering whether the above scenario essentially means we've had no sitemap submissions to date. (That said, the sitemap appears to be pulling through despite being submitted at the "wrong" address, so I can only think there are either two separate sitemap files, or the redirect we have from www to non-www is having an effect?)
-
Hi John, always glad to help!
For your Search Console question: When you get the redirects set up and have committed to your site being all HTTPS, you'll want to move the location of your XML sitemap to https://yoursite.com/sitemap.xml. As Cyrus mentions in that article, don't update the URLs inside the sitemap yet; let search engines keep hitting them as non-secure for a while (I think he recommends 30 days) to give them a chance to learn your new protocol and to hit your redirects multiple times.
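As a rough sketch of that transition period (yoursite.com and the page path are placeholders): the sitemap file itself moves to the HTTPS address, while its <loc> entries stay on HTTP until search engines have digested the redirects:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Served from https://yoursite.com/sitemap.xml -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- URLs left as http:// temporarily so crawlers keep requesting
       them and get walked through the 301s to the https versions -->
  <url>
    <loc>http://yoursite.com/example-page/</loc>
  </url>
</urlset>
```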
For your www question: There's no difference in SEO value whether you choose www or non-www; it's simply a preference. The only thing that matters here is that you pick one and stick with it.
For your GA question: That is correct, you are seeing traffic from both in GA. GA will collect and report on any page/URL/website that your UA-ID is on. If someone scraped your site and took the GA script with it, you'd start seeing their traffic in your reporting view (that's why appending the hostname is always a good idea). You can specify your protocol in the View Settings of GA.
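For reference, the hostname-appending trick is a custom Advanced filter on the reporting view; a sketch of the commonly documented setup (the filter name is up to you) looks roughly like this:

```
Filter Type:              Custom > Advanced
Field A -> Extract A:     Hostname      (.*)
Field B -> Extract B:     Request URI   (.*)
Output To -> Constructor: Request URI   $A1$B1
Field A Required:         Yes
Field B Required:         Yes
Override Output Field:    Yes
```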
-
Hi Logan,
Thanks for your quick response, that’s very helpful and the article you provided is great.
I hadn’t thought of the purpose of self-referring canonicals, thanks for clarifying.
Re: Search Console: I’ve just noticed we only have a sitemap linked for the http://www property. Currently, all www traffic is redirected to the non-www version of any given page (forgetting https for a second). Is this an issue in terms of PageRank?
And my last question, I promise! If our UA tag is firing on both the http and https versions of the site, should we be seeing traffic from both in GA if the property/view default URL is set to http://? By my understanding, that setting is just a vanity thing for reporting purposes, but I'm not sure where, if anywhere, I need to specify in a particular view that http:// and https:// traffic should be treated as the same thing.
-
Hi John,
For the most part, your IT partner is correct: two of the most important things are to 301 all HTTP requests to HTTPS and to update your canonicals. I often refer people with questions about HTTPS to this post written by Cyrus Shepard; he covers all the bases needed for an SEO-friendly secure migration: https://moz.com/blog/seo-tips-https-ssl.
Regarding your specific comments:
- We serve http and https versions of all our pages - A 301 redirect rule will correct this (see the sketch after this list).
- Our canonical tags just refer back to the URL they sit on (apparently a default Drupal thing, which is not much use) - Self-referring canonicals like this serve plenty of purpose; they just need to match your preferred version (www/non-www, http/https, etc.). Self-referring canonicals help prevent duplicates caused by parameters, case-sensitive URLs, and the aforementioned HTTP/HTTPS and www/non-www variations.
- We don’t actually have https properties added in Search Console/GA - You should add another profile for HTTPS; verification should be simple since you've already proven you're the site owner. You want both profiles in GSC so you can monitor the shift of indexed URLs from HTTP to HTTPS. It's also good for troubleshooting should you see an issue with HTTP indexing down the road for some reason.
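To make the first two points concrete, here's a rough sketch of the redirect rules, assuming Apache with mod_rewrite (e.g. in the .htaccess at the Drupal docroot) and assuming you standardize on the https non-www version; flip the second rule if you prefer www:

```apache
RewriteEngine On

# Force HTTPS: 301 any http:// request to its https:// equivalent
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# Force non-www: 301 www requests to the bare domain
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^ https://%1%{REQUEST_URI} [L,R=301]
```

Once that's in place, each page's self-referring canonical should resolve to that same preferred URL, along the lines of (placeholder URL):

```html
<link rel="canonical" href="https://yoursite.com/example-page/" />
```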