Duplicate content pages on different domains, best practice?
-
Hi,
We run directory sites on different domains for different countries (each site has the country name in its domain name) and we have the same static page on each one. Actually we have several such pages, but I'll use one static page as an example for the sake of simplicity.
So we have http://firstcountry.com/faq.html, http://secondcountry.com/faq.html and so on for 6-7 sites, and the faq.html pages from one country to the next are about 94% similar when checked for duplicate content. We would like an alternative to a canonical, because the content doesn't belong to only one of these sites; it belongs to all of them. A second option would be to deindex all but one country's page. It's syndicated content, but we cannot link back to the source because there is none.
Thanks for taking the time in reading this.
-
Using a canonical IS NOT the solution, because if you use it, the FAQ pages on the canonicalized websites are going to be deindexed.
So, just do it if you really don't care about the traffic those answers can generate for your sites (as you can imagine, this is an ironic suggestion...).
Instead, just use hreflang. Over the last few months Google has become quite smart about understanding that hreflang means you consider those pages relevant for their geo-targeted audiences, so it won't filter them out even if they are substantially identical between country versions.
That said, try to differentiate the FAQ pages: localize the language better (UK English is slightly different from American English, for example), or even offer a local phone number for inquiries and a localized email address for questions.
In general, using a cross-domain canonical is not a good idea in international SEO; it should be reserved for exceptional cases.
-
To make things easier, you can implement hreflang via your sitemap.xml using this tool by Mediaflow: http://www.themediaflow.com/tool_hreflang.php.
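For reference, each URL entry in an hreflang sitemap ends up looking roughly like this (the hreflang values below are placeholders; substitute your real country domains and full set of alternates):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://firstcountry.com/faq.html</loc>
    <!-- one xhtml:link per country version, including a self-referencing one -->
    <xhtml:link rel="alternate" hreflang="en-us" href="http://firstcountry.com/faq.html" />
    <xhtml:link rel="alternate" hreflang="en-au" href="http://secondcountry.com/faq.html" />
  </url>
  <!-- repeat a <url> block like this for every page on the site -->
</urlset>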
-
If your site is template-based, so you can easily drop code into the header (WordPress, Joomla, most CMSs, anything but page-by-page static HTML), you can insert the tags directly by echoing the current page's path into each one, something like this (the domains here are placeholders for your own sites):

<link rel="alternate" href="http://www.example.com<?php echo $_SERVER['REQUEST_URI']; ?>" hreflang="x-default" />
<link rel="alternate" href="http://www.example.com.au<?php echo $_SERVER['REQUEST_URI']; ?>" hreflang="en-au" />
<link rel="alternate" href="http://www.example.com<?php echo $_SERVER['REQUEST_URI']; ?>" hreflang="en-us" />
<link rel="alternate" href="http://www.example.co.nz<?php echo $_SERVER['REQUEST_URI']; ?>" hreflang="en-nz" />

This works on Apache servers running PHP: each tag starts with the domain, then REQUEST_URI pulls in the path of the page you're on (/about, /faq, and so on), so the appropriate hreflang tag is generated for every page automatically.
Also, when you're done implementing hreflang, test it using Flang.
-
As the other users have pointed out, the rel="alternate" hreflang tag is the ideal approach here. I'm in a pickle myself with a very similar issue.
Note that the alternate tag is applied at the page level, so every page should point to the corresponding URL of its copy on each of the other country domains.
So your homepage (.com) could have the following alternate tags:
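For example, with three English-language country sites (placeholder domains):

<link rel="alternate" hreflang="en-us" href="http://example.com/" />
<link rel="alternate" hreflang="en-au" href="http://example.com.au/" />
<link rel="alternate" hreflang="en-nz" href="http://example.co.nz/" />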
But on your FAQ page, the alternates would be:
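The same pattern, but pointing at the FAQ URL on each domain:

<link rel="alternate" hreflang="en-us" href="http://example.com/faq.html" />
<link rel="alternate" hreflang="en-au" href="http://example.com.au/faq.html" />
<link rel="alternate" hreflang="en-nz" href="http://example.co.nz/faq.html" />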
You'll have to rinse and repeat on all 3 sites and for every single page.
Tedious if you ask me! Does anyone know an easier way to add alternate tags across 3 or 4 sites without doing it manually?
The advantage of implementing those, however, is that you are not canonicalising to one domain, which means all your domains stand a chance of performing well in their regions (e.g. a search on Google Australia will show the .com.au website).
Again, does anyone have a better approach to this or seen / heard of one? Apart from canonical of course.
-
Hreflang tags are great; I would highly suggest implementing them. Something that confused me when I first started using them is that the full set of tags should appear on every domain, including a self-referencing tag for the page's own domain.
For example, firstcountry.com/faq.html should have tags like these (the hreflang values are illustrative):
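<!-- note the self-referencing first tag for the page's own domain -->
<link rel="alternate" hreflang="en-us" href="http://firstcountry.com/faq.html" />
<link rel="alternate" hreflang="en-au" href="http://secondcountry.com/faq.html" />
<link rel="alternate" hreflang="en-nz" href="http://thirdcountry.com/faq.html" />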
and so on.
You can check that these have been implemented correctly in Google Webmaster Tools under "Search Traffic" -> "International Targeting".
-
I would start by implementing hreflang tags:
https://support.google.com/webmasters/answer/189077?hl=en
Hreflang should take care of these types of issues, as Google will associate the right country domain with the content. You may see some overlap for a while; we've seen hreflang take a bit longer than we'd like to fully take effect, but once it does, it usually works well.
Short of that, you have three options: 1) change the content on all sites to be (somewhat) unique; 2) deindex all but one, as you said; 3) use a canonical, as you said.
All three have problems, which is why I would start with hreflang.