Multilingual -> hreflang, canonical, and duplicated title content
-
Hi all!
We are implementing multilingual support on our site, eurasmus.com.
English and Spanish are already available, and we mainly use hreflang to manage the different language areas.
First question: when a page is not yet translated but is still visible in both languages under /en and /es, is hreflang enough, or should we add a canonical as well? Currently we apply hreflang everywhere and add canonicals only to pages that are duplicated within the same language.
Second question: for some untranslated pages, like http://eurasmus.com/en/info/find-intern-placement-austria and http://eurasmus.com/es/info/find-intern-placement-austria,
we have set up hreflang, but Moz still flags duplicate titles and meta descriptions (though not duplicate page content).
What do you suggest we should do? Let me know, and thank you beforehand for your help!
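For reference, a typical reciprocal hreflang pair for the two Austria URLs mentioned above would look something like this (a sketch; the exact placement in your `<head>` may differ, and both pages must carry the same pair of annotations for them to be valid):

```html
<!-- Placed in the <head> of BOTH
     http://eurasmus.com/en/info/find-intern-placement-austria
     and http://eurasmus.com/es/info/find-intern-placement-austria -->
<link rel="alternate" hreflang="en" href="http://eurasmus.com/en/info/find-intern-placement-austria" />
<link rel="alternate" hreflang="es" href="http://eurasmus.com/es/info/find-intern-placement-austria" />
```

If either page omits its half of the pair, the annotations are non-reciprocal and search engines may ignore them.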
-
What I know is that for almost a year now, Google has been able to deal with duplicated content in a multilingual or multi-country environment, provided the hreflang is well implemented.
Moreover, if you used rel="canonical" here, you would practically deny your Spanish home page (in this specific case) any possibility of even being present in the index, because you would be telling Google:
"Don't consider this URL, just the canonical one."
This is one of the reasons why Google removed all mention of rel="canonical" from the hreflang help pages.
-
I am not so sure about using a canonical, even though this case is multilingual and not multi-country.
Maybe this is due to Google's well-known inability to communicate clearly, but in this case it is quite explicit with its example:
"Some example scenarios where rel="alternate" hreflang="x" is recommended:
You keep the main content in a single language and translate only the template, such as the navigation and footer. Pages that feature user-generated content, like forums, typically do this."
This is exactly the scenario described in this Q&A, so I personally would not suggest canonicalization, but I would use hreflang. And, obviously, my main priority would be to localize all of the page's content, because without a complete translation the chances of ranking in Google.es are essentially zero.
-
I can confirm that the Moz crawler does not detect or consider hreflang (in fact, no tab or report in Moz Analytics is dedicated to it).
The only tools I know of that consider it by default are DeepCrawl and OnPage.org.
-
Google is not great at writing its own explanations for international SEO. What it means above is that if you have geo-targeted correctly, you do not have to use a canonical between two pages that are the same; Google will figure it out on its own.
You aren't geo-targeting, though, so I still think the canonical would be needed.
-
Hi there Kate!
Thanks for your time. That is what logic tells me.
But "God" Google says, confusing me:
Specifying language and location
We've expanded our support of the rel="alternate" hreflang link element to handle content that is translated or provided for multiple geographic regions. The hreflang attribute can specify the language, optionally the country, and URLs of equivalent content. By specifying these alternate URLs, our goal is to be able to consolidate signals for these pages, and to serve the appropriate URL to users in search. Alternative URLs can be on the same site or on another domain.
Annotating pages as substantially similar content
Optionally, for pages that have substantially the same content in the same language and are targeted at multiple countries, you may use the rel="canonical" link element to specify your preferred version. We’ll use that signal to focus on that version in search, while showing the local URLs to users where appropriate. For example, you could use this if you have the same product page in German, but want to target it separately to users searching on the Google properties for Germany, Austria, and Switzerland.
"Update: to simplify implementation, we no longer recommend using rel=canonical."
So I guess the canonical is no longer needed?
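To illustrate the multi-country scenario Google describes above, and its current hreflang-only recommendation: the same German product page targeted at three countries would carry only reciprocal hreflang annotations, with no cross-country canonical. (The URLs below are hypothetical, standing in for Google's German/Austrian/Swiss example.)

```html
<!-- Hypothetical German product page, targeted at DE, AT, and CH.
     The same three annotations go in the <head> of all three URLs. -->
<link rel="alternate" hreflang="de-DE" href="http://example.com/de-de/product" />
<link rel="alternate" hreflang="de-AT" href="http://example.com/de-at/product" />
<link rel="alternate" hreflang="de-CH" href="http://example.com/de-ch/product" />
```

Per the quoted update, the earlier advice to additionally canonicalize these near-identical pages to one preferred version was withdrawn; hreflang alone consolidates the signals.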
-
HREFLANG is all you need to note the change in language between two pages. However, if a page has not been translated and is available under both language subfolders, make sure it does NOT have an HREFLANG annotation and DOES have a canonical. When the pages are identical and have two URLs, use a canonical and NOT HREFLANG.
I am not sure whether Moz detects HREFLANG. If you know it's set up correctly, just ignore the warnings in Moz. And if you can, translate the title and meta description as well. That'll help get rid of the warnings.
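Applied to the untranslated Austria page from the question, this answer's approach would be a sketch like the following, with the Spanish URL pointing its canonical at the English original (note that other answers in this thread recommend hreflang instead for this case):

```html
<!-- In the <head> of http://eurasmus.com/es/info/find-intern-placement-austria,
     whose body content is still in English -->
<link rel="canonical" href="http://eurasmus.com/en/info/find-intern-placement-austria" />
```

With this in place, the /es URL is consolidated into the /en version, which also makes the duplicate title/meta warnings moot for that pair.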
-
Geo-tagging is not necessary if the content is just translated.
-
Did you assign the geography in Webmaster Tools? This is advised and should prevent some of these problems before they arise (I think it should be OK).
Using a canonical is always a good way of consolidating link value into one specific version.
You could test whether there is a problem by running your English keywords against the local version of Google.