Canonical vs 301 - Web Development
-
So I'm having a conversation with the development team at my work, and I'm a little tired today, so I thought I would ask for other opinions. Currently the site duplicates its entire content by returning a 200 both with and without a trailing slash. I have asked for a 301 redirect to the trailing-slash version. They countered with setting all the rel=canonical tags to the trailing-slash URLs, which I know is acceptable. My issue is that while a rel=canonical is acceptable, since my site is in a very competitive space with a very aggressive link building strategy, I believe it may be beneficial to have the 301 redirect. BUT, I may be wrong. When we're talking hundreds of thousands of links, I would love to have them all point directly to one URL instead of possibly splitting them between a duplicate page that carries a correct canonical. I'm curious what everyone thinks, though.
-
+1 for Egol here. A canonical is just a request to Google - a 301 is a directive Google has to respect. I don't really understand why your technical team is making such a fuss about it - enforcing the trailing slash (or not) is just one or two lines in your .htaccess file. Check Stack Overflow.
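Something along these lines usually does it (a minimal sketch, assuming Apache with mod_rewrite enabled; the file check keeps real files like CSS and images out of the redirect):

    RewriteEngine On
    # Skip real files so /style.css does not become /style.css/
    RewriteCond %{REQUEST_FILENAME} !-f
    # 301 any URL that does not already end in a slash to its slashed version
    RewriteRule ^(.*[^/])$ /$1/ [L,R=301]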
Dirk
-
Going straight to the root of the problem: there are two versions, with and without the slash, because someone started using both. So the first thing to do is decide which one is dominant today and go with it. Immediately thereafter, the development team, bloggers, everyone should be informed of the new form of your URLs and expected to use it. Clean the old ones up and get them off the site. It's time to stop being sloppy, and people who don't follow the company's method need to be reminded.
You will find disagreements about whether you should use a 301 or rel=canonical.
The advantage of a 301 is that it takes control and forces the URL that you want onto both the browser and the bot. In contrast, rel=canonical is a "hint" to Google. We know for a fact that Google changes its mind about how it handles these things, and it can keep unwanted URL variants around for an awfully long time. The same problem exists with parameters. Google provides parameter controls in Search Console; however, if you have experience with them, you will know that they are highly unreliable, slow to be picked up, and only partially obeyed. So you can take control with a 301, or use rel=canonical in combination with prayer.
I use 301s because I don't trust Google to do things my way, and because once you start using 301s your problems immediately shrink: the versions of the URLs that you don't want to see are permanently eliminated from the browser's address bar. I am also pretty lucky that the staff here knows how the URLs on our websites are standardized.
-
When it comes to the trailing slash on website URLs, the proper fix is a 301 permanent redirect. In the meantime, you can minimize the problem by fixing all of the internal links on the site so that you always link to the version you prefer.
-
In some cases, a carelessly written redirect rule can match its own destination and cause an infinite loop, in which case your homepage would not be accessible at all, so I can understand your dev team's reluctance.
A canonical tag and a 301 redirect pass essentially the same amount of link authority, so in this case they serve the same purpose and provide the same benefit. I'd stick with the canonical tag and pick a different, more valuable battle to fight.
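(For context, the loop only happens when a rule matches its own output. A rough sketch, assuming Apache:)

    # Unsafe: the pattern also matches "foo/", so each redirect yields "foo//", "foo///", and so on
    # RewriteRule ^(.*)$ /$1/ [L,R=301]
    # Safe: the conditions stop the rule from firing once the URL already ends in a slash
    RewriteCond %{REQUEST_URI} !/$
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^(.*)$ /$1/ [L,R=301]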
-
301 redirects are primarily designed for permanent, more complicated jobs:
- Expired content
- Multiple versions of a homepage
- A move to a new site or domain
Canonical tags are a better way of telling Google that a query string or trailing slash is serving exactly the same page content and is just a variation of the URL. Done correctly, neither will have a negative effect on SEO, but the canonical tag is far easier and perfectly appropriate here.
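For the move-to-a-new-site case above, the 301 typically lives on the old domain. A minimal sketch, assuming Apache, with olddomain.com and newdomain.com as placeholders:

    RewriteEngine On
    # Send every request on the old host to the same path on the new one
    RewriteCond %{HTTP_HOST} ^(www\.)?olddomain\.com$ [NC]
    RewriteRule ^(.*)$ https://www.newdomain.com/$1 [L,R=301]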
https://moz.com/blog/301-redirect-or-relcanonical-which-one-should-you-use
Related Questions
-
Two websites vs each other owned by same company
My client owns a brand and came to me with two ecommerce websites. One website sells his specific branded product and the other sells general products in his niche (including his branded product). The question is that my client wants to rank each website for basically the same set of keywords. We have two choices I'd like feedback on. Choice 1 is to rank both websites for the same keyword groupings, so even if they are both on page 1 of the SERPs they take up more real estate and share of voice; are there any negative possibilities here? Choice 2 is to recommend a shift in the positioning of the general industry website to bring it further away from the industry niche by focusing on different keywords, so they don't compete with each other in the SERPs. I'm for choice 1, what about you?
Intermediate & Advanced SEO | Rich_Coffman
-
Cross domain canonical and hreflang
Hi Guys, So we are close to launching our new site and just need to be sure that our canonical and duplicate-content issues are sorted before launch. Here is our current situation: the current site is on trespass.co.uk, and the new site will be on trespass.com. The new launch is global and we will have three stores within Magento, all in English:
Trespass.com for the UK
Trespass.com/US for the US
Trespass.com/ROW for all other countries
On trespass.com we have the following: On trespass.com/US we have the following: On trespass.com/ROW we have the following: This is how the Magento developers/design company have set it up, but am I right in saying the canonical tag for each store (/ROW and /US) should point to Trespass.com, as the only difference is in the pricing (£, $, and euros)? Thanks for your help
Intermediate & Advanced SEO | Trespass
-
301 redirect to a temporary URL
Hi there, What would happen if I redirected a set of URLs to a temporary URL structure, and then a few weeks later redirected the original URLs and temporary URLs to the final permanent URLs? So for example:
A -> B for a few weeks,
then: A -> C and B -> C, where:
C is the final destination URL
B is the temporary destination
A is the original URL
The reason we are doing this is that the naming of the URLs and pages is different, and we wish to transition our customers carefully from old to new. I am looking for a purely technical response. Would we lose link juice? Does Google care if we permanently redirect to a set of 'temporary' URLs, and then permanently redirect to a set of what we think are permanent URLs? Cheers, Simon
Intermediate & Advanced SEO | sichristie
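In .htaccess terms, the two phases might look like this. This is only a sketch with hypothetical paths; the key detail is that phase 2 replaces the phase 1 rule, so A reaches C in a single hop rather than through a chain:

    # Phase 1 (a few weeks): A -> B
    Redirect 301 /original-page /temporary-page

    # Phase 2: delete the rule above, then point both old and temporary URLs at C
    Redirect 301 /original-page /final-page
    Redirect 301 /temporary-page /final-page
-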
Htaccess 301 regex question
I need some help with a regex for htaccess. I want to 301 redirect this: http://olddomain.com/oldsubdir/fruit.aspx to this: https://www.newdomain.com/newsubdir/FRUIT The changes:
- different protocol (http -> https)
- add 'www.'
- different domain (olddomain and newdomain are constants)
- different subdirectory (oldsubdir and newsubdir are constants)
- 'fruit' is a variable (which will contain only letters [a-zA-Z])
- is it possible to make 'fruit' UPPER case on the redirect (so 'fruit' -> 'FRUIT')?
- remove '.aspx'
I think it's something like this (placed in the .htaccess file in the root directory of olddomain): RedirectMatch 301 /oldsubdir/(.*).aspx https://www.newdomain.com/newsubdir/$1 Thanks.
Intermediate & Advanced SEO | scanlin
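One possible answer, sketched and untested: the dot needs escaping and the capture group can be limited to letters, and the uppercase step needs a RewriteMap, which Apache only accepts in the server or vhost config, not in .htaccess itself:

    # In httpd.conf or the vhost for olddomain (RewriteMap cannot live in .htaccess):
    RewriteMap toupper int:toupper

    # In the .htaccess at olddomain's root:
    RewriteEngine On
    RewriteRule ^oldsubdir/([a-zA-Z]+)\.aspx$ https://www.newdomain.com/newsubdir/${toupper:$1} [L,R=301]
-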
301 Redirecting an Entire Site
I have a question which has had me thinking for hours. If SITE A is ranking well on a number of search phrases and you 301 that site to another (SITE B), the listings in the Google SERPs will change to the site you've redirected to, in this case SITE B. But how do you maintain the rankings of SITE A? Do you keep the rankings of SITE A forever, or will your rankings of SITE A (now SITE B) gradually slip as other sites rank higher? As you can no longer edit SITE A, does Google now only consider the content on SITE B and no longer take anything that SITE A had to offer into consideration? Has SITE B simply replaced it in the SERPs? Please can anybody help? Thanks,
Intermediate & Advanced SEO | karl62
-
Broken sitemaps vs no sitemaps at all?
The site I am working on is enormous. We have 71 sitemap files, all linked to from a sitemap index file. The sitemaps are not up to par with "best practices" yet, and realistically it may be another month or so until we get them cleaned up. I'm wondering if, for the time being, we should just remove the sitemaps from Webmaster Tools altogether. They are currently "broken", and I know that sitemaps are not mandatory. Perhaps they're doing more harm than good at this point? According to Webmaster Tools, there are 8,398,082 "warnings" associated with the sitemap, many of which seem to be related to URLs being linked to that are blocked by robots.txt. I was thinking that I could remove them and then keep a close eye on the crawl errors/index status to see if anything changes. Is there any reason why I shouldn't remove these from Webmaster Tools until we get the sitemaps up to par with best practices?
Intermediate & Advanced SEO | edmundsseo
-
301 redirect
Hi there, I have some good links pointing to one of my web pages at the moment, but we are just about to launch a new design with a new URL structure, and I am clear that I need to 301 redirect the old URL to the new URL. However, do I keep the old URL live forever, or can I remove it after a while? Kind Regards
Intermediate & Advanced SEO | Paul78
-
Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
I run a quality vertical search engine. About 6 months ago we had a problem with our sitemaps, which resulted in most of our pages getting tossed out of Google's index. As part of the response, we put a bunch of robots.txt restrictions in place in our search results to prevent Google from crawling through pagination links and other parameter-based variants of our results (sort order, etc). The idea was to 'preserve crawl budget' in order to speed the rate at which Google could get our millions of pages back in the index by focusing attention/resources on the right pages. The pages are back in the index now (and have been for a while), and the restrictions have stayed in place since that time. But, in doing a little SEOMoz reading this morning, I came to wonder whether that approach may now be harming us:
http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Specifically, I'm concerned that a) we're blocking the flow of link juice and that b) by preventing Google from crawling the full depth of our search results (i.e. pages >1), we may be making our site wrongfully look 'thin'. With respect to b), we've been hit by Panda and have been implementing plenty of changes to improve engagement, eliminate inadvertently low quality pages, etc, but we have yet to find 'the fix'... Thoughts? Kurus
Intermediate & Advanced SEO | kurus