Best practice for URL - Language/country
-
Hi,
We are planning on having our website localized into more languages. We already have an English and German version. The German version is currently a sub-domain:
www.example.com --> English version
de.example.com --> German version
Is this recommended? Or is it always better to have URLs with language prefixes, such as www.example.com/de/?
Which is the better practice in terms of SEO?
-
Hi Peter,
Both answers above are really good, but maybe I can point you a little further in the right direction. Perhaps you could answer the questions below, and I can give you my personal opinion on which method would be best:
- will you be putting an equal amount of marketing (content, PR, etc.) into the Spanish version, for example, compared with English?
- are you able to offer a fully localised service, e.g. Spanish customer service, a Spanish sales team, etc.?
- is your company well-known globally?
It's also important not to forget that another option is using ccTLDs (e.g. .co.uk, .com.au). These give the strongest signal to search engines about the country being targeted and, just as importantly, make you look more "local", which can do wonders for conversion rates in countries where your company is not well-known.
-
I think that Tom gave you one of the best answers possible.
I hope this helps too: your site structure should be very similar to the ones described in the two URLs below.
If I may, I'd like to add a little bit of information that I thought was helpful:
- https://support.google.com/webmasters/answer/189077?hl=en
- https://www.deepcrawl.com/knowledge/best-practice/hreflang-101-how-to-avoid-international-duplication/
WHERE TO ADD YOUR HREFLANG TAGS
You can add hreflang tags to your sitemaps, in the HTTP response headers, or on the page itself.
IN YOUR SITEMAPS
The best place to add hreflang is in your sitemaps, as including the annotations in the headers or on the page itself adds weight to every single page request.
The following example tells Google, from within the sitemap entry for the German version of the website, where the English version lives:
<url>
  <loc>http://www.example.com/deutsch/</loc>
  <xhtml:link rel="alternate" hreflang="en" href="http://www.example.com/english/" />
  <xhtml:link rel="alternate" hreflang="de" href="http://www.example.com/deutsch/" />
</url>
This method would need to be repeated in full for every page on the site and for all the international websites.
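Because every page needs its full set of alternates, this is usually scripted rather than written by hand. Here's a minimal sketch in Python of how the entries could be generated; the page list and URL layout are assumptions for illustration, not your real site:

# Sketch: build sitemap <url> entries with hreflang alternates for a
# bilingual site. LOCALES and PAGES are hypothetical placeholders.
LOCALES = {
    "en": "http://www.example.com/english",
    "de": "http://www.example.com/deutsch",
}
PAGES = ["/", "/products/", "/contact/"]

def entries_for(path):
    # Every language version of a page must carry the SAME full set
    # of alternates, including a self-referencing one.
    alternates = "".join(
        f'    <xhtml:link rel="alternate" hreflang="{lang}" '
        f'href="{base}{path}" />\n'
        for lang, base in LOCALES.items()
    )
    return "".join(
        f"  <url>\n    <loc>{base}{path}</loc>\n{alternates}  </url>\n"
        for base in LOCALES.values()
    )

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"\n'
    '        xmlns:xhtml="http://www.w3.org/1999/xhtml">\n'
    + "".join(entries_for(p) for p in PAGES)
    + "</urlset>"
)
print(sitemap)

The detail the script enforces is that every language version of every page lists all the alternates, itself included; missing return tags are the most common hreflang mistake.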
IN YOUR HEADERS AND HTML
Hreflang tags can also be added to the HTTP header:
Link: <http://www.example.com/english/>; rel="alternate"; hreflang="en"
Link: <http://www.example.com/deutsch/>; rel="alternate"; hreflang="de"
Or in the <head> tag of the HTML:
<link rel="alternate" hreflang="en" href="http://www.example.com/english/" />
<link rel="alternate" hreflang="de" href="http://www.example.com/deutsch/" />
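For the header method, if your pages come out of an application rather than static files, the Link header can be attached in code. Here's a minimal sketch using Flask; the route-to-alternate mapping is a made-up stand-in for your own lookup:

# Sketch: attach hreflang Link headers to every response in a Flask app.
# ALTERNATES is a hypothetical mapping; a real site would derive it
# from its routing or database.
from flask import Flask, request

app = Flask(__name__)

ALTERNATES = {
    "/english/": {"en": "http://www.example.com/english/",
                  "de": "http://www.example.com/deutsch/"},
    "/deutsch/": {"en": "http://www.example.com/english/",
                  "de": "http://www.example.com/deutsch/"},
}

@app.after_request
def add_hreflang_headers(response):
    alternates = ALTERNATES.get(request.path)
    if alternates:
        # A response may carry several Link values, comma-separated.
        response.headers["Link"] = ", ".join(
            f'<{url}>; rel="alternate"; hreflang="{lang}"'
            for lang, url in alternates.items()
        )
    return response

@app.route("/english/")
def english():
    return "English version"

@app.route("/deutsch/")
def deutsch():
    return "German version"

The header variant is mainly useful for non-HTML documents such as PDFs, where there is no <head> to put the tags in.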
And because you will be creating a new site, this guide to website migration may help:
https://www.candidsky.com/blog/the-seo-2015-guide-to-website-migration/
Beyond that, it would come down to your backlink profile. If it were me, I would use Moz Open Site Explorer, Majestic, Ahrefs, and Google Webmaster Tools to determine whether I would receive enough backlinks to support a subdomain or a separate TLD; otherwise I would use a subfolder, plus an extremely fast hosting method (Fastly is excellent, and there are many other great options as well).
Hope this helps,
Tom
PS: use http://hreflang.ninja/ to check your hreflang implementation.
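And if you'd rather script a rough check yourself, here's a minimal standard-library sketch that verifies each alternate links back; the regex extraction is a simplification that assumes the attribute order shown, so treat it as illustrative rather than robust:

# Sketch: verify that every hreflang alternate on a page links back.
# The regex assumes rel/hreflang/href appear in that order; a real
# checker should use an HTML parser instead.
import re
import urllib.request

LINK_RE = re.compile(
    r'<link[^>]*rel="alternate"[^>]*hreflang="([^"]+)"[^>]*href="([^"]+)"',
    re.IGNORECASE,
)

def hreflang_links(url):
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return dict(LINK_RE.findall(html))  # {hreflang: href}

def check_return_tags(url):
    for lang, alt_url in hreflang_links(url).items():
        if alt_url == url:
            continue  # self-reference: nothing to fetch
        back = hreflang_links(alt_url).values()
        status = "OK" if url in back else "MISSING return tag"
        print(f"{lang}: {alt_url} -> {status}")

check_return_tags("http://www.example.com/english/")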
-
Hi Peter
Both are viable options.
I'd highly recommend going through Aleyda Solis' international SEO posts here on the Moz blog. They can teach you how to prepare for international SEO, how to approach site structure, and how to generate the relevant code and hreflang tags.
Here is her international SEO checklist
Here is her Hreflang blog post and generator tool
And 40 tools to help advance your international SEO
They're great reading, and there's nothing I'd be able to add to them, so I hope this helps!