SEO with duplicate content for 3 geographies
-
The client would like us to do SEO for these 3 sites:
http://www.solutionselectrical.com.au
http://www.calculatecablesizes.co.uk/
The sites have to be targeted to the US, Australia, and UK respectively. All the above sites have identical content. Will Google penalise the sites?
Shall we change the content completely? How do we approach this issue?
-
So, shall I accept the project considering I am safe at SEOmoz?
That is entirely your decision. I would not recommend depending on SEOmoz for assistance. The Q&A is a fantastic resource for asking an occasional question, but some questions go unanswered and the quality of answers can vary.
You are being paid for your expertise on a subject. Only accept the job if you are confident you can offer a solid benefit to the client. I was very candid with my first clients about my experience. I offered to work hard, work extra and work for less money but I did not ever hide my lack of experience. You may wish to do the same.
Any tool to know the different terms used by Australian and UK people?
None that I am aware of. I would recommend locating someone from each country.
-
So, shall I accept the project considering I am safe at SEOmoz? Any tool to know the different terms used by Australian and UK people, as the content needs to be tailored to each geography?
-
Please clarify this.
Atul, the clarification is the 5 bullet points immediately following that statement.
What does it signify?
It signifies the language used on the page.
Is this necessary?
It depends what you mean by necessary.
It is a step towards solid SEO. Most solid SEO involves multiple layers. The idea is it would require multiple failures to cause a problem. I would recommend this step on any site which targets multiple languages or countries.
One could argue it is unnecessary because the proper setting in Google WMT alone should resolve the matter. But then again, the same setting would need to be made for any search engines for which you wish the site to rank.
What is the language code for the UK and Australia?
Alex offered a good response to this question.
-
http://en.wikipedia.org/wiki/Language_localisation#Language_tags_and_codes
Australia isn't listed there but it's en-AU. It's necessary if you want to help Google recognise the sites are targeted to different countries; as Ryan mentions, language and spelling differ slightly in various English-speaking countries.
-
Since all three sites are in the same language, be sure each site is properly directed to its respective country.
Please clarify this.
Use the proper language code meta tag, such as en-US for the .com.
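As an illustration only (this markup is not quoted from the thread), the language/region code can be declared directly in the page markup, for example on the US site:

```html
<!-- Declare US English for the .com site's pages -->
<html lang="en-US">
<head>
  <!-- Older-style declaration some sites also include -->
  <meta http-equiv="content-language" content="en-US">
</head>
```

The .co.uk site would use en-GB and the .com.au site en-AU in the same way.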
Is this necessary? What does it signify? What is the language code for the UK and Australia?
-
The sites have to be targeted to the US, Australia, and UK respectively. All the above sites have identical content. Will Google penalize the sites?
No. Google does not penalize sites for duplicate content if each site targets a different country.
Since all three sites are in the same language, be sure each site is properly directed to its respective country. A few steps to take:
-
Use the proper language code meta tag, such as en-US for the .com.
-
You can set the country targets in Google WMT.
-
Use the proper form of English for each country. For example, US English should show "penalize" where UK and Australian English would show the same word as "penalise" (I think).
-
Use the proper currency and measurement systems for each country.
-
Use the appropriate cultural references for each site.
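The steps above can also be reinforced in markup. The thread doesn't mention it, but hreflang annotations are now Google's standard mechanism for telling it which country each duplicate serves. A hedged sketch follows; the US domain is a placeholder, since that site's URL isn't given above:

```html
<!-- On every page of each site, list all three country variants.
     The en-US URL below is hypothetical, not taken from the thread. -->
<link rel="alternate" hreflang="en-AU" href="http://www.solutionselectrical.com.au/" />
<link rel="alternate" hreflang="en-GB" href="http://www.calculatecablesizes.co.uk/" />
<link rel="alternate" hreflang="en-US" href="http://www.example.com/" />
```

With these in place, identical content across the three domains should be treated as country-targeted alternates rather than competing duplicates.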
-