Canonical Tags and Syndicated Content
-
Good point. If a new domain is able to rank as well as the old site before the 301 redirects are put in place, that's very compelling evidence.
-
I agree with Kurt - absent de-listing or redirects, rel=canonical is about your only option. It's possible it won't be enough, but it's the best you've got by a long shot, given the restrictions.
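For what it's worth, if you do go the rel=canonical route, it's worth verifying the partner actually deployed the tag. Here's a minimal stdlib sketch (all URLs hypothetical) that extracts whatever canonical URL a page declares:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of the first <link rel="canonical"> in a page."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.canonical is None:
            a = dict(attrs)
            if (a.get("rel") or "").lower() == "canonical":
                self.canonical = a.get("href")

def find_canonical(html_text):
    """Return the declared canonical URL, or None if the page has none."""
    parser = CanonicalFinder()
    parser.feed(html_text)
    return parser.canonical

# A syndicated copy pointing back at the original article (hypothetical URLs):
page = """<html><head>
<link rel="canonical" href="https://original-site.example/article-slug"/>
</head><body>Syndicated copy of the article.</body></html>"""

print(find_canonical(page))
```

Running this against the syndicating page should print the original article's URL; if it prints None, the cross-domain canonical never made it into the page.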
-
I haven't seen all the numbers, but I know people at major newspapers using cross-domain canonical, and they'd drop it in a heartbeat if it didn't pass the majority of link equity.
I think the domain-move case is more compelling, because now you've got a completely new domain that you can show ranking in place of the old, stronger domain, without redirects in place. At that point, it's unlikely to be just a fluke.
-
Cool. I hadn't heard of using canonical tags to move sites. That's quite helpful.
I'm curious about the idea that the canonical tag passes link authority or PageRank. Is it possible that the tests people have done just look like that's what's happening? Here's what I mean. Let's say I write an article that gets reproduced on another site, and Google is ranking the other site in the top ten for some keyword. Then I get the other site to put a canonical tag on their page, and in a few days my site is ranking for that keyword. Does that indicate that any link authority was passed, or does it indicate that Google would have ranked either site in the top ten for that keyword, but had to pick one or the other because they're duplicates? In that case the canonical tag just caused Google to change its mind about which site to rank. In other words, could it be that both pages are authoritative enough to rank, and the canonical tag is just telling Google which of the two should rank?
Has anyone done a test where one site had content for a while that didn't rank, then a more authoritative site re-published the content and ranked for it, and then the authoritative site put a canonical tag pointing to the original, after which the original site was able to rank well for the keyword? To be conclusive, the authoritative site would have to avoid linking to the original content, using only the canonical.
-
Dave,
What you're describing is exactly what the canonical tag is for: reproducing content on other pages, but giving credit to the original. Anyway, if 301s wouldn't work, what else would you do?
-
She essentially said that using canonicals to move a site was one of the intended uses. In her talk she gave the example of having an exercise blog and taking over Matt Cutts' exercise blog, and how in that instance canonicals are a good way to notify the search engines that you would like your main site to start ranking in the instances where the secondary site would come up (plus the user-experience benefits). Then you would canonical all relevant pages as necessary, move any content that you would like to appear on the main site, and put up a message on the secondary site with a link stating you're moving to the new URL. Then after a while you would 301 everything over.
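A rough sketch of that phased move, with hypothetical hostnames: the same old-URL-to-new-URL mapping can drive the phase-one canonical tags and, later, the phase-two 301s, so the two phases can't drift apart.

```python
from urllib.parse import urlsplit, urlunsplit

OLD_HOST = "exercise-blog.example"   # hypothetical secondary site being absorbed
NEW_HOST = "main-site.example"       # hypothetical main site

def new_url(old_url):
    """Map a URL on the old host to its counterpart on the new host,
    keeping scheme, path, and query intact."""
    parts = urlsplit(old_url)
    return urlunsplit((parts.scheme, NEW_HOST, parts.path,
                       parts.query, parts.fragment))

def canonical_tag(old_url):
    """Phase 1: the tag to drop into each old page's <head>."""
    return '<link rel="canonical" href="%s"/>' % new_url(old_url)

def redirect_response(old_url):
    """Phase 2 (later): the same mapping drives the permanent redirects."""
    return ("301 Moved Permanently", "Location: " + new_url(old_url))

print(canonical_tag("https://exercise-blog.example/posts/stretching"))
```

This is only a sketch of the bookkeeping, not a definitive implementation; in practice phase 2 would live in your server config rather than application code.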
I have actually given that advice to people regularly and (so far) no one has come back screaming at me that I ruined their site.
-
That actually makes much more sense than the way I've had people try to explain it to me. I didn't realize a Googler had actually condoned it (although sometimes I find Maile's messages a bit mixed).
-
I have done these and I agree completely.
Also, the bit about Canonicals to move a site and then 301 later was actually talked about at SMX by Maile Ohye of Google as a legitimate and good use for situations such as buying or taking over someone else's site as a means to pass link equity while also giving users a better experience by letting them know you are transitioning... giving them time to change their bookmarks instead of potentially causing them to bounce by sending them somewhere they didn't intend to go.
(though don't quote me on her saying anything about "link juice" or "link equity" specifically... it was about a year ago and it's been ages since I've listened to my personal recordings of the session [and actually, I'm not sure I was even allowed to record while Google and Bing reps were speaking... but oh well])
-
So, I can tell you from conversations with SEOs that some have used rel=canonical successfully to pass link-juice. In some cases, I even know people who use it to move sites, and then 301 later, and claim success with this method. Unfortunately, almost none of those case studies are published.
Generally speaking, I still don't think it's a great way to move a resource, and tend toward 301s for that purpose, but all the data I've seen suggests that rel=canonical tends to consolidate link juice. There are exceptions, of course, such as when Google doesn't honor the tag (they don't see it as a duplicate, for example, and think you're trying to game the system), but that's true of 301s as well.
Rand did a Whiteboard Friday a couple of years ago talking about link-equity and cross-domain canonical:
http://moz.com/blog/cross-domain-canonical-the-new-301-whiteboard-friday
I know he's actually a big believer that rel=canonical passes link equity, as strongly as (or in some cases more strongly than) a 301 redirect (again, it's pretty situational).
-
My understanding is that the canonical tag only establishes the original location of content. It has nothing to do with PageRank. I've not seen anything from Google that would indicate that adding a canonical tag to a page will pass all its authority to the canonical URL.
-
Hiya, I wouldn't look at it as a link-juice argument, as it's really aimed at telling the search engine which page is the original (which can be helpful if, e.g., you have multiple products, etc.). What it can do is help build you up as an authority. As for author credit, it depends on whether they used the rel="author" tag (telling Google who the author is).
Look at it another way: you would use the tag for duplicate content. Do you think a search engine would rank duplicate content highly? It would list one copy in the relevant results, and you can use the tag to tell it "this is the original content" (i.e., the most relevant).
You may find the following helpful : https://support.google.com/webmasters/answer/139394
A similar topic was also posted only an hour ago: http://moz.com/community/q/canonical-tag-refers-to-itself
I hope this has helped a bit for your question, good luck!