Canonical vs Alternate for country-based subdomain dupe content?
-
What's the correct method for tagging duplicate content between country-based subdomains?
We have:
mydomain.com // default, en-us
www.mydomain.com // en-us
uk.mydomain.com // uk, en-gb
au.mydomain.com // australia, en-au
eu.mydomain.com // europe, en-eu
In the head of each we currently have rel="alternate" hreflang tags, but we're still getting duplicate-content warnings in Moz for the "WWW" subdomain.
Question 1) Are we headed in the right direction with using alternate? Or would it be better to use canonical, since the languages are technically all English, just different regions? The content is pretty much the same minus currency and localization differences.
Question 2) How can we solve the duplicate content between WWW and the base domain, since the above isn't working?
Thanks so much
-
Yes.
-
Thanks.
So am I safe including all of these on every subdomain?
I have a common header where the above is exactly the same for every subdomain (all 4 are always included), which I assume is the correct way?
Also: Why doesn't Moz look at the hreflang tag? I'm very worried about just "ignoring" what the tool says... why is the top SEO tool in the world not capable of correctly detecting dupe content? I'm not sure I'm comfortable with just ignoring the check engine light, so to speak.
-
In cases like yours, hreflang is the correct way to handle the duplicate-content issue, for exactly the reasons you cite: currency and localization may be tiny differences in terms of "content," but they are huge in terms of usability, and they make one product page genuinely different from another.
Remember that if you canonicalize all the "duplicates" to a single canonical URL, the canonicalized URLs won't be shown in the countries you're targeting with them, which wrecks the international SEO strategy completely. So each URL must declare itself as its own canonical (a self-referential canonical), apart from the obvious canonicalization rules that still apply (e.g., a URL with parameters canonicalized to the URL without parameters).
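To make those two rules concrete, here's a sketch of what the head of uk.mydomain.com could contain under this approach, built from the subdomains listed in the question (the x-default line is an optional extra I'm adding for illustration, not something from the thread):

```html
<!-- Hypothetical sketch for https://uk.mydomain.com/, based on the subdomains in the question -->

<!-- Self-referential canonical: each regional URL declares itself as canonical -->
<link rel="canonical" href="https://uk.mydomain.com/" />

<!-- The same hreflang set is repeated verbatim in the head of every subdomain -->
<link rel="alternate" hreflang="en-us" href="https://www.mydomain.com/" />
<link rel="alternate" hreflang="en-gb" href="https://uk.mydomain.com/" />
<link rel="alternate" hreflang="en-au" href="https://au.mydomain.com/" />
<link rel="alternate" hreflang="en-eu" href="https://eu.mydomain.com/" />

<!-- Optional addition (not discussed above): fallback for users matching no listed region -->
<link rel="alternate" hreflang="x-default" href="https://www.mydomain.com/" />
```

One caveat on the "en-eu" value: hreflang region codes must be ISO 3166-1 alpha-2 country codes, and there is no code for "Europe" as a region, so that annotation will likely be ignored; it's shown here only because the question lists it.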
If a URL is canonicalized elsewhere for whatever reason, remember to use the canonical URLs in the href of the hreflang annotations; otherwise Google will start reporting "no return tag" errors.
Regarding the Moz Pro crawler: don't pay attention to it here, because it doesn't take hreflang annotations into account, so it will keep saying that those pages are duplicates.