Canonical Meta Tag Best Practices
-
I've noticed that some website owners use canonical tags even when there may be no duplicate issues. For example, www.examplesite.com has a canonical tag:
<link rel="canonical" href="http://www.examplesite.com/" />
...and www.examplesite.com/bluewidget has a canonical tag:
<link rel="canonical" href="http://www.examplesite.com/bluewidget/" />
Is it recommended or helpful to do this?
-
I prefer to think of it as "index control", since PR sculpting has a history of being abused, but you've covered the big ones. Obviously, good site architecture is the first step. If the tag existed in 2012, I pretty much covered it in this article:
http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
-
Sorry about not clarifying that. I mean tools or tags used to channel spidering and indexing and to circulate PageRank: the robots.txt file, pagination with rel="next" and rel="prev", the x-robots-tag, and so on (rough examples below).
I just read an article on PageRank sculpting in Visibility Magazine that inspired my question.
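For anyone unfamiliar with those mechanisms, here is roughly what each one looks like. These are generic sketches, and the URLs and paths are made up for illustration:

# robots.txt – keep all crawlers out of a directory
User-agent: *
Disallow: /private/

<!-- pagination hints in the <head> of page 2 of a paginated series -->
<link rel="prev" href="http://www.examplesite.com/widgets?page=1" />
<link rel="next" href="http://www.examplesite.com/widgets?page=3" />

# X-Robots-Tag sent as an HTTP response header (handy for non-HTML files like PDFs)
X-Robots-Tag: noindex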
-
Sorry, not sure what you mean. Site-wide tags, or tags that perform canonicalization?
-
Thanks for the post, Peter!
In addition to the canonical tag, are there any others that you've heard of people having success with?
-
I'd generally agree with (and thumbed up) Adam - it's harmless and can sometimes help sweep up any stray URLs. I find it especially useful for the home page, which naturally has a lot of variants.
I'd only add that you often see this in place not so much because it's strategic but because it's easier to implement, especially in a CMS. Telling the system to add a canonical to every version except the canonical URL itself is a lot more of a pain, so most people don't do it. Originally, Google and Bing suggested tagging only the duplicate versions was their preferred method, but it was so immediately obvious that it's easier to put the tag on all versions that I think they completely reversed that.
I've never seen it cause any harm, and I've seen it help a bit more than once.
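To illustrate, a self-referencing canonical served unchanged on every variant of the home page funnels them all to one indexed URL. This sketch reuses examplesite.com from the question; the variant URLs are hypothetical:

<!-- served on http://www.examplesite.com/, /index.html, /?sessionid=abc123, etc. -->
<link rel="canonical" href="http://www.examplesite.com/" />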
-
You're welcome.
It's important to note that canonicals and redirects are not intended for directing PageRank. They are primarily used to point users and search engines to the most appropriate page and to avoid duplicate content issues.
-
Thanks, Adam, for posting a response. Very helpful. I read an article about PageRank sculpting and it got me thinking about the best use of canonical tags, robots.txt files, and so on.
My site currently does not have canonical tags or any of the other tags used to channel PageRank. I have been told that the proper use of certain tags can possibly help with rankings by directing PageRank to the more important pages.
-
I'll add this to what Crimson said:
It doesn't hurt to have canonical tags on all pages.
-
Hi Nathan,
Personally, I think it is good practice to use canonical tags on all pages (even those without duplicates).
Although you may not have duplicates of these pages on your own site, other sites may try to scrape your content. If the canonical tag is already in your markup, any scraper that copies the page wholesale will also copy the canonical tag pointing back to the page on your site, so the tag works as a preventative measure too.
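For example, on the blue-widget page from the question it would be something like this. Note the absolute URL; that's what makes a scraped copy keep pointing back at the original:

<link rel="canonical" href="http://www.examplesite.com/bluewidget/" />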
Hope that helps,
Adam.
Related Questions
-
Noindex detected in robots meta tag (GSC issue), help please
Hi everyone, we just did a site migration (URL structure change, site redesign, CMS change). During the migration, the dev team messed up badly on a few things, including SEO. The old site had pages canonicalized and self-canonicalized; the new site doesn't have anything (a CMS dev error), so we are working retroactively to add a canonicalization mechanism. The legacy site had URLs ending with a trailing slash "/"; the new site redirects to a set of URLs without the slash. On the new site, all robots are allowed and a new sitemap has been submitted to Google Search Console. So here is my problem (it's been a long 24-hour night for me 🙂):
1. When I look at the homepage URL in GSC, it says the old page is self-canonicalized and currently in the index (the old page with a trailing slash at the end of the URL).
2. When I try to perform a live URL test, I get the message "No: 'noindex' detected in 'robots' meta tag", so indexing can't be done. I have no idea where the noindex is coming from.
3. Robots.txt in Search Console is still showing the old file (no noindex there). I tried to submit the new file but the old one still comes up. When I click on "See live robots.txt" I get the current robots.
4. I see that the old page is still canonicalized, and attempting to index the redirected old page might be confusing Google.
Hope someone can help get the new page indexed! I really need it 🙂 Please ping me if you need more clarification. Thank you!
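One thing worth checking in a situation like this: a stray noindex can live in the HTTP headers (X-Robots-Tag) rather than in the HTML, so it pays to look at both. A quick sketch, with www.example.com standing in for the real site:

# look for an X-Robots-Tag in the HTTP response headers
curl -I https://www.example.com/

# look for a robots meta tag in the served HTML
curl -s https://www.example.com/ | grep -i "robots"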
Intermediate & Advanced SEO | bgvsiteadmin1
-
Splitting One Site Into Two Sites: Best Practices Needed
Okay, working with a large site that, for business reasons beyond organic search, wants to split an existing site in two. So the old domain name stays, and a new one is born with some of the content from the old site, along with some new content of its own. The general idea, for more than just search reasons, is that it makes both the old and new sites more purely about their respective subject matter. The existing content on the old site that is becoming part of the new site will be 301'd to the new site's domain. So the old site will have a lot of 301s and links to the new site; no links coming back from the new site to the old site are anticipated at this time.
Would like any and all insights into any potential pitfalls and best practices for this to come off as well as it can under the circumstances. For instance, should all those links from the old site to the new site be nofollowed, kind of like a non-editorial link to an affiliate or advertiser? Is there weirdness for Google in 301ing some, but not all, of the old site's content to a new domain? Would you individually submit removal requests for the hundreds and hundreds of old-site pages moving to the new site, or just figure that the 301s will eventually take care of that? Is there substantial organic search risk of any kind to the old site, beyond the obvious of just not having those pages any more? Anything else? Any ideas about how long the new site can expect to wander the wilderness of no organic search traffic? The old site has a domain authority of 45. Thanks!
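For reference, the section-level 301s described above might look something like this in an Apache .htaccess file on the old domain. The paths and the newsite.com domain are hypothetical:

RewriteEngine On
# send the moved section, and everything under it, to the new domain
RewriteRule ^widgets/(.*)$ https://www.newsite.com/widgets/$1 [R=301,L]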
Intermediate & Advanced SEO | 945010
-
URL Rewriting Best Practices
Hey Moz! I'm getting ready to implement URL rewrites on my website to improve site structure and URL readability. More specifically, I want to improve our website structure by removing redundant directories, replace underscores with dashes, and remove file extensions from our URLs. Please see my example below:
Old structure: http://www.widgets.com/widgets/commercial-widgets/small_blue_widget.htm
New structure: https://www.widgets.com/commercial-widgets/small-blue-widget
I've read several URL-rewriting guides online, all of which seem to provide similar but ultimately different methods. I'm looking for what's considered best practice for implementing these rewrites. From what I understand, the most common method is to implement the rewrites in our .htaccess file using mod_rewrite (which will match the old URLs and rewrite them according to the rules I add).
One question I can't seem to find a definitive answer to: when I implement the rewrite to remove file extensions and replace underscores with dashes, do the webpage file names need to be edited to the new format? From what I understand, the file names must remain the same for the rewrites in the .htaccess to work; however, our internal links (including canonical links) must be changed to the new URL format. Can anyone shed light on this?
Also, I'm aware that implementing URL rewriting improperly could negatively affect our SERP rankings. If I redirect our old directory structure to the new structure using these rewrites, are my bases covered in regards to having the proper 301 redirects in place? Please offer any advice or reliable guides to handle this properly. Thanks in advance!
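As a sketch of the mod_rewrite approach described above, using the question's own example URLs: the usual pattern is an external 301 from the old URL plus an internal rewrite that maps the clean URL back to the file on disk, which is why the file names themselves don't have to change. Treat this as illustrative rather than definitive, since the exact rules depend on the server setup:

RewriteEngine On
# 301 the old URL to the new format, but only when the client actually
# requested the old URL (the THE_REQUEST check prevents a redirect loop)
RewriteCond %{THE_REQUEST} \s/widgets/commercial-widgets/small_blue_widget\.htm[\s?]
RewriteRule ^widgets/commercial-widgets/small_blue_widget\.htm$ /commercial-widgets/small-blue-widget [R=301,L]
# internally serve the clean URL from the existing file (no redirect, invisible to users)
RewriteRule ^commercial-widgets/small-blue-widget$ /widgets/commercial-widgets/small_blue_widget.htm [L]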
Intermediate & Advanced SEO | TheDude0
-
H2 Tags: Can you have more than one H2 tag?
Hi All, Screaming Frog has identified that we have a few H2 tags on our pages, although we only have one H1 tag. We have numerous H3s, H4s, etc. I am wondering: is it good SEO to have only one H2 tag, as with the H1 tag, or can you have more? Thanks, Peter
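For what it's worth, multiple H2s are perfectly valid HTML. A typical outline with one H1 and several H2s looks like this (a generic sketch):

<h1>Blue Widgets</h1>
<h2>Commercial Blue Widgets</h2>
<h3>Small Commercial Widgets</h3>
<h2>Residential Blue Widgets</h2>
<h3>Small Residential Widgets</h3>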
Intermediate & Advanced SEO | PeteC120
-
How to add Geo Meta Tags, Dublin Core, and Microformats to a WordPress website?
Please let me know how to add geo meta tags, Dublin Core, and microformats to a WordPress site, and what to include in each.
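For reference, geo meta tags and Dublin Core tags are plain <meta> elements in the page's <head>; in WordPress they can be added via the theme's header template or a plugin. The values below are made-up examples (microformats, by contrast, are class-based markup in the page body rather than meta tags):

<!-- geo meta tags -->
<meta name="geo.region" content="US-NY" />
<meta name="geo.placename" content="New York" />
<meta name="geo.position" content="40.7128;-74.0060" />
<!-- Dublin Core -->
<meta name="DC.title" content="Blue Widgets" />
<meta name="DC.description" content="Hand-made blue widgets" />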
Intermediate & Advanced SEO | Dan_Brown10
-
What is the best practice to eliminate my IP-address content from showing in SERPs?
Our eCommerce platform provider has our site load-balanced across a few data centers. Our site has two of our own exclusive IP addresses associated with it (one in each data center). The problem is that Google is showing our IP addresses in the SERPs with what I would assume is bad duplicate content (our own, at that). I brought this to the attention of our provider and they say they must keep the IP addresses open to allow their site-monitoring software to work. Their solution was to add robots.txt files for both IP addresses with site-wide/root disallows. As a side note, we just added canonical tags so the pages indexed under the IP addresses ultimately show the correct (non-IP) URL via the canonical. So here are my questions: Is there a better way? If not, is there anything else we need to do to get Google to drop the several hundred thousand pages indexed at the IP-address level? Or do we sit back and wait now?
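For clarity, the provider's workaround amounts to serving a different robots.txt when the site is requested by IP address than when it's requested by hostname, roughly (a sketch):

# robots.txt served only on the raw IP hosts
User-agent: *
Disallow: /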
Intermediate & Advanced SEO | ovenbird0
-
SEO Best Practices for Video Sites
What are the SEO best practices for video sites? Is there a guideline for this on SEOmoz? Thanks in advance!
Intermediate & Advanced SEO | merkal20050
-
Multilingual sites: Canonical and Alternate tag implementation question
Hello, I would like some clarification about the correct implementation of the rel="alternate" tag and the canonical tag. The example given at http://support.google.com/webmasters/bin/answer.py?hl=en&answer=189077 recommends implementing the canonical tag on all region-specific subdomains and having it point to the www version of the website. My question is the following: would this technique also apply if I have region-specific sites on local TLDs? In other words, if I have www.example.com, www.example.co.uk, and www.example.ca, all with the same content in English but with prices and delivery options tailored for US, UK, and Canada residents, should I go ahead and implement the canonical tag and alternate tag the same way? I am a bit concerned about canonicalizing an entire local TLD to the .com site.
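The tag markup didn't survive in the question above, but the pattern being described, per the Google article, is a canonical pointing at the www .com version plus rel="alternate" hreflang annotations for each regional URL. A sketch using the question's domains (the hreflang values are illustrative):

<!-- in the <head> of http://www.example.co.uk/ -->
<link rel="canonical" href="http://www.example.com/" />
<link rel="alternate" hreflang="en-US" href="http://www.example.com/" />
<link rel="alternate" hreflang="en-GB" href="http://www.example.co.uk/" />
<link rel="alternate" hreflang="en-CA" href="http://www.example.ca/" />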
Intermediate & Advanced SEO | Amiee0