Should you use a canonical tag on translated content in a multi-language country?
-
A customer of ours has a website in Belgium. There are two main languages in Belgium: Dutch and French.
At first there was only a Dutch version on the .be domain. They are now implementing the French-language Belgian version at website.be/fr. All of the content and comments will be translated, and the URLs will change from Dutch to French, so you end up with two URLs carrying the same content in different languages. Question: should you use a canonical tag on translated content in a multi-language country? I think Google will understand this is simply done for the usability of a multi-language country. What do you guys think?
-
Hi Aleyda,
Thanks for your answer and thanks for the links. As written in the description, everything will be translated, including the titles, descriptions, comments, etc. So we don't have to worry about anything: "everything is gonna be alright" (Bob Marley) :-).
In addition, the hreflang annotations are a good way to tell Google exactly which version is which.

Thanks!
Best regards, Wesley
-
Hi Wesley,
If you launch a new language version that is fully optimized in the other language (from the URLs to the titles, descriptions, text content, comments, etc.), there shouldn't be any problem. If you want to help Google identify that this is your French version (in this case also specifically targeted to French-speaking users in Belgium), you can use the hreflang tag specifying the language and country, as explained here, in your pages' HTML head section. Additionally, you can add the hreflang annotations to your XML sitemap as described here. You can also use this tool to facilitate the process.
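For example, a minimal sketch of what those head annotations could look like for this Dutch/French Belgian setup (the page URLs below are hypothetical placeholders, not the client's real ones):

<!-- In the <head> of the Dutch page, e.g. https://www.website.be/voorbeeldpagina/ -->
<link rel="alternate" hreflang="nl-BE" href="https://www.website.be/voorbeeldpagina/" />
<link rel="alternate" hreflang="fr-BE" href="https://www.website.be/fr/page-exemple/" />
<!-- The French page at https://www.website.be/fr/page-exemple/ carries the same two annotations, so both language versions reference each other. -->

The XML sitemap route works the same way: each URL entry lists one alternate link per language version.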
Best regards,
Aleyda
-
Hi Mike,
Thanks for your reply and the link.

Just as I thought: we don't have to worry about it, as long as we're optimizing usability for the visitor. That's Google's way of thinking in all cases.
In addition, I want to make a crazy, skeptical statement:
After listening to Matt, we can conclude that:
It's perfectly fine by Google for a Dutch website (website.nl) to republish hand-translated content from a foreign website! I don't think it really works like that. What do you think? (I know... this is a little bit of a different subject.)
This is just a question that came to mind right now. Your link tells me enough for my main question. Thanks for that!

-
You shouldn't have to worry about it.
I would reference this article, where Matt Cutts explains that if you are professionally translating content for usability... you are good; however, if you use Google Translate to spam your content out in a bunch of languages... that is bad.
Hope this helps and answers your question.
Mike
Related Questions
-
Hreflang: mixing with/without country code for the same language
Hello, I would like to display 3 different English versions of my website: 1 for the UK, 1 for CA and 1 for other English-speaking users. It would look like this for a page:
<link rel="alternate" href="https://xxx.com/en-gb/" hreflang="en-GB" /> (English content with £ prices)
<link rel="alternate" href="https://xxx.com/en-ca/" hreflang="en-CA" /> (English content with $CA prices)
<link rel="alternate" href="https://xxx.com/en/" hreflang="en" /> (English content without currency)
I wonder if I can mix this hreflang without a country code with the hreflangs with country codes for the 2 other specific versions... or if the version without a country code will be shown whatever the country, even for the countries I specified. In other terms, does hreflang="en" take precedence over hreflang="en-CA" and hreflang="en-GB" if they are tagged together on the same page? Thank you
Intermediate & Advanced SEO | | AlexisH0 -
Translating meta tags using WPML and AIO SEO
Having a heck of a time finding info on this one... We're working on a multilingual website which uses WPML. I've used the All in One SEO plugin to customize meta data (title, description, etc). These strings do not appear in the list of translations in WPML. Does anyone have any experience with this setup? How do you enable WPML to translate meta data set via the AIO plugin? Thanks!
Intermediate & Advanced SEO | | jonmc0 -
Fast/Easy Way to Implement Canonical tags in Bulk in Magento CMS?
Hello Amazing SEO Community! Quick Q for a client with a TON of duplicate content. (Yikes!) My client is currently undertaking a large SEO project around canonical tagging for their thousands of duplicate pages. Currently, one product sits on multiple URLs and they are being indexed as different pages (with the same content). The issue is found across all products and other pages, and across their international sites as well. One core challenge they face is a lack of time/resources on the developer side. The solution we see to the duplicate content is to manually add a canonical tag to each of our tens of thousands of pages. Their content management system is Magento. Has anyone ever tackled canonicalization for a large site that uses Magento? Any solution more efficient than manual tagging would be ideal. Thanks in advance for your input. -Bonnie
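For illustration, a minimal sketch of the tag being discussed (the URLs are hypothetical placeholders): each duplicate product URL declares the preferred URL as its canonical. In practice this would be output by the page template or, depending on the Magento version, by its built-in catalog SEO canonical settings, rather than by hand-editing tens of thousands of pages.

<!-- In the <head> of a duplicate URL such as https://www.example.com/category-a/widget.html -->
<link rel="canonical" href="https://www.example.com/widget.html" />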
Intermediate & Advanced SEO | | accpar0 -
H3 Tags - Should I link to my content articles? And do I have too many H3 tags/links as it is?
Hello all, On my ecommerce landing pages I currently have links to my products as H3 tags. I also have useful guides displayed on the page, with links to useful articles we have written (they currently go to my news section). I am wondering whether I should mark those article links up as additional H3 tags as well for added SEO benefit, or whether I have too many tags as it is. A link to the landing page I am talking about: http://goo.gl/h838RW Screenshot of my H1-H6 tags: http://imgur.com/hLtX0n7 I have enclosed a screenshot of my guides and also of my H1-H6 tags. Any advice would be greatly appreciated. Thanks, Peter
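To illustrate the markup in question (the names and URLs here are invented, not taken from the page linked above): the product links are already wrapped in H3s, and the guide links could either stay as plain links or be wrapped the same way.

<!-- Existing pattern: product link marked up as an H3 heading -->
<h3><a href="/products/example-product/">Example Product</a></h3>
<!-- Guide link as it is now: a plain link to the news section -->
<a href="/news/example-buying-guide/">Example buying guide</a>
<!-- The alternative being considered: the same guide link as an additional H3 heading -->
<h3><a href="/news/example-buying-guide/">Example buying guide</a></h3>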
Intermediate & Advanced SEO | | PeteC120 -
Is it okay to copy and paste on-page content into the meta description tag?
I have heard conflicting answers to this. I always figured that it was okay to selectively copy and paste on-page content into the meta description tag... especially if the on-page content is well written. How can it be duplicate content if it's pulling from the exact same page? Does anybody have any feedback from a credible source about this? Thanks.
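As a hypothetical illustration of the practice being asked about (the page copy is invented), the meta description simply reuses the page's own opening sentence:

<!-- Opening sentence of the on-page copy -->
<p>Our handmade oak dining tables are built to order and finished by hand in our own workshop.</p>
<!-- The same sentence reused as the meta description -->
<meta name="description" content="Our handmade oak dining tables are built to order and finished by hand in our own workshop." />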
Intermediate & Advanced SEO | | VanguardCommunications1 -
[E-commerce] Duplicate content due to color variations (canonical/indexing)
Hello, We currently have a lot of color variations on multiple products with almost the same content. Even with our canonicals set, Moz's crawling tool seems to flag them as duplicate content. What we have done so far:
- Choosing the best-selling color variation (our "master product")
- Adding a rel="canonical" to every variation (with our "master product" as the canonical URL)
In my opinion, that should be enough to address the issue. However, given that it's still flagged as duplicate by Moz, I was wondering if there is something else we should do. Should we add "noindex,follow" to our child products and "index,follow" to our master product? (It sounds like such a heavy change to me.) Thank you in advance
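For reference, a minimal hypothetical sketch of the setup described (product names and URLs are invented): each colour variation points its canonical at the master product, which in turn references itself.

<!-- In the <head> of a variation page, e.g. https://www.example.com/sofa-red/ -->
<link rel="canonical" href="https://www.example.com/sofa/" />
<!-- In the <head> of the master product page, https://www.example.com/sofa/ (self-referencing) -->
<link rel="canonical" href="https://www.example.com/sofa/" />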
Intermediate & Advanced SEO | | EasyLounge0 -
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and ranking. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day. We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.
We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages:
- Super easy to implement.
- Conserves crawl budget for large sites.
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would leave 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
Noindex advantages:
- Does prevent Vehicle Details pages from being indexed.
- Allows ALL pages to be crawled (advantage?).
Noindex disadvantages:
- Difficult to implement (Vehicle Details pages are served using Ajax, so they have no tag; the solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex tag based on querystring variables, similar to this stackoverflow solution). This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "force" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt.
Hash (#) URL advantages:
- By using hash (#) URLs for links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links used to index the robots.txt-disallowed pages are gone.
- Accomplishes the same thing as "nofollowing" these links, but without looking like PageRank sculpting (?).
- Does not require complex Apache stuff.
Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt -- the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be as if these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that. If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow, in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO. My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and it conserves crawl budget while keeping Vehicle Details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these (). Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | | browndoginteractive0 -
How does the use of dynamic meta tags affect SEO?
I'm evaluating a new client site which was built by another design firm. My question is that they are dynamically creating meta tags, and I'm concerned that it is hurting their SEO. When I view the page source, this is what I see:
<meta name="keywords" id="keywordsGoHere" content="" />
<meta name="description" id="descriptionGoesHere" content="" />
<title id="titleGoesHere"></title>
To me it looks like the tags are not being added to the page; however, the title does show when you view it in a browser, and if I use a spider-view tool it sees the title too. I'm guessing it is being pulled from a DB. So I'm a little concerned that the search engines are not really seeing the title and description. I'm not worried about the keywords tag. Can anyone shed some light on how this might work, why it might not be showing the text for the description in the page source, and whether that will hurt SEO? Thanks for the help!
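To make the contrast concrete, a hypothetical sketch (the text values are invented): the first pair is what the raw source currently exposes to crawlers, and the second pair is what the same tags would look like if the server populated them before delivering the page.

<!-- What the raw page source currently shows: empty placeholders -->
<title id="titleGoesHere"></title>
<meta name="description" id="descriptionGoesHere" content="" />
<!-- What crawlers would see if the values were filled in server-side (example values invented) -->
<title>Example Widgets | Example Client Co.</title>
<meta name="description" content="A hypothetical description written into the markup before the page is delivered, so search engines can read it without executing scripts." />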
Intermediate & Advanced SEO | | BbeS0