US and UK Websites of Same Business with Same Content
-
Hello Community,
I need your help to understand whether I can use the US website's content on my UK website.
US Website's domain: https://www.fortresssecuritystore.com
UK Website's domain: https://www.fortresssecuritystore.co.uk
Both websites have the same content on all pages, including testimonials/reviews.
I am trying to gain business from AdWords and organic SEO marketing.
Thanks.
-
Yup, but it doesn't matter. Hreflang works for this situation whether cross-domain or on a subdirectory/subdomain basis (and in fact it's even more effective cross-domain, as you also get the benefit of the geo-located ccTLD).
P.
-
Hi Paul,
If I understood correctly, we are talking about two different websites, not one website with subdomains.
Hreflang can be used for other languages and countries, although not for masking 100% duplicated content, as I stated above.
Site A: https://www.fortresssecuritystore.com
Site B: https://www.fortresssecuritystore.co.uk
The recommendations that Google gives are for the purpose of having pages crawled and indexed, not for succeeding with 100% duplicate content, which does not serve a good UX, leads to a high bounce rate, and drags the overall SEO down.
Mª Verónica
-
Unfortunately, your information is incorrect, Veronica.
Hreflang is specifically designed for exactly this situation. As Google engineer Maile Ohye clearly states, one of the primary uses of hreflang markup is:
- Your content has small regional variations with **similar content in a single language**. For example, you might have English-language content targeted to the US, GB, and Ireland.
(https://support.google.com/webmasters/answer/189077?hl=en)
There's no question differentiating similar content in the same language for different regions/countries is more of a challenge than for totally different languages, but it can absolutely be done, and in fact is a very common requirement for tens of thousands of companies.
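To illustrate, a minimal cross-domain hreflang setup for the two domains in question could look like this. This is a sketch for the homepages only; every paired URL needs its own set of annotations, and both sides must carry the reciprocal tags, including a self-reference, for the markup to be honoured:

```html
<!-- In the <head> of https://www.fortresssecuritystore.com/ (US version) -->
<link rel="alternate" hreflang="en-us" href="https://www.fortresssecuritystore.com/" />
<link rel="alternate" hreflang="en-gb" href="https://www.fortresssecuritystore.co.uk/" />

<!-- In the <head> of https://www.fortresssecuritystore.co.uk/ (UK version) -->
<link rel="alternate" hreflang="en-us" href="https://www.fortresssecuritystore.com/" />
<link rel="alternate" hreflang="en-gb" href="https://www.fortresssecuritystore.co.uk/" />
```

Note the two blocks are identical: each page lists all language/region versions of itself, itself included.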
Paul
-
Hi CommercePundit,
Sadly, there is no painless way to say it.
You cannot gain business from AdWords and organic SEO marketing with 100% duplicated content. The options, canonical and hreflang, would not work in this case.
The only option is language "localization", meaning rewriting the whole content with a local writer.
Canonical can be used for up to 10% duplication, not for the whole 100%. Hreflang can be used for other languages and countries, although not for masking 100% duplicated content.
Sorry to deliver the bad news. Good luck!
Mª Verónica
-
The more you can differentiate these two sites, the better they will each perform in their own specific markets, CP.
First requirement will be a careful, full implementation of hreflang tags for each site.
Next, you'll need to do what you can to regionalise the content: for example, changing to UK spelling for the UK site content, making sure prices are referenced in pounds instead of dollars, and changing up the language to use British idioms and locations as examples where possible. It'll also be critical to work towards having the reviews/testimonials on each site come from that site's own country, rather than being generic. This will help dramatically from a marketing standpoint and also help differentiate the sites for the search engines, so it's a double win.
And finally, you'll want to make certain you've set up each site in its own Google Search Console property and used geographic targeting on the .com site to specify the US as its target. (You won't need to target the UK site, as the .co.uk is already geo-targeted, so you won't get that option in GSC.) If you have an actual physical address/phone in the UK, it would also help to set up a separate Google My Business profile for the UK branch.
Bottom line is - you'll need to put in significant work to differentiate the sites and provide as many signals as possible for which site is for which country in order to help the search engines understand which to return in search results.
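As an aside, the hreflang annotations mentioned above can also be declared in the XML sitemap instead of page-level link tags, which is often easier to maintain across two separate domains. A minimal sketch for the two homepages (the same pattern repeats for every paired URL):

```xml
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://www.fortresssecuritystore.com/</loc>
    <xhtml:link rel="alternate" hreflang="en-us" href="https://www.fortresssecuritystore.com/" />
    <xhtml:link rel="alternate" hreflang="en-gb" href="https://www.fortresssecuritystore.co.uk/" />
  </url>
</urlset>
```

The .co.uk sitemap would carry the mirror-image entry for its own URLs.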
Hope that all makes sense?
Paul
-
Hi!
Yes, you can target the UK market with the US site version. Keep in mind, though, that you might not perform as well as in the main market (US).
Also, before making any decision and/or implementing anything, take a look at these articles:
Multi-regional and multilingual sites - Google Search Console
International checklist - Moz Blog
Using the correct hreflang tag - Moz Blog
Guide to international website expansion - Moz Blog
Tool for checking hreflang annotations - Moz Blog
Hope it helps.
Best of luck.
GR.
Related Questions
-
Duplicate Content
I am trying to get a handle on how to fix and control a large amount of duplicate content I keep getting on my Moz Reports. The main area where this comes up is duplicate page content and duplicate title tags ... thousands of them. I partially understand the source of the problem. My site mixes free content with content that requires a login. I think if I were to change my crawl settings to eliminate the login and index the paid content, it would lower the quantity of duplicate pages and help me identify the true duplicate pages, because a large number of duplicates occur at the site login. Unfortunately, it's not simple in my case, because last year I encountered a problem when migrating my archives into a new CMS. The app in the CMS that migrated the data caused a large amount of data truncation, which means that I am piecing together my archives of approximately 5,000 articles. It also means that much of the piecing-together process requires me to keep the former app that manages the articles, to find where certain articles were truncated and to copy the text that followed the truncation and complete the articles. So far, I have restored about half of the archives, which is time-consuming, tedious work. My question is whether anyone knows a more efficient way of identifying and editing duplicate pages and title tags?
Technical SEO | | Prop650 -
Set Canonical for Paginated Content
Hi Guys, This is a follow-up on this thread: http://moz.com/community/q/dynamic-url-parameters-woocommerce-create-404-errors# I would like to know how I can set a canonical link in WordPress/WooCommerce which points to "View All" on category pages in our webshop.
Technical SEO | | jeeyer
The categories on my website can be viewed as 24/48 or All products, but because the quantity constantly changes, viewing 24 or 48 products isn't always possible. To point Google in the right direction I want to let them know that "View All" is the best way to go.
I've read that Google's crawler tries to do this automatically, but I'm not sure if this is the case on my website. Here is some more info on the issue: https://support.google.com/webmasters/answer/1663744?hl=en
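For illustration, the setup described would put a canonical on each paginated category URL pointing at its "View All" version. A hedged sketch (the URLs below are hypothetical placeholders, not the asker's actual shop):

```html
<!-- In the <head> of a paginated category page such as /category/?per_page=24 -->
<link rel="canonical" href="https://example.com/category/?per_page=all" />
```

The same canonical would appear on the 24-product and 48-product views alike, consolidating all variants onto the "View All" page.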
Thanks for the help! Joost0 -
Duplicate Content Mystery
Hi Moz community! I have an ongoing duplicate content mystery going on here and I'm hoping someone can answer my question. We have an ecommerce site that has a variety of product pages and category pages. There are rel canonicals in place, along with parameters in GWT, and there are also URL rewrites. Here are some scenarios; maybe you can give insight as to what exactly is going on and how to fix it. All the duplicates look to be coming from category pages specifically. For example:
Technical SEO | | Ecom-Team-Access
This link rewrites:
http://www.incipio.com/cases/tablet-cases/amazon-kindle-cases-sleeves.html?cat=407&color=152&price=20-
To: http://www.incipio.com/cases/tablet-cases/amazon-kindle-cases-sleeves.html
The rel canonical tag looks like this: <link rel="canonical" href="http://www.incipio.com/cases/tablet-cases/amazon-kindle-cases-sleeves.html" />
The CONTENT is different, but the URLs are the same. It thinks that the product category view is the same as the all products view, even though there is a canonical in there telling it which one is the original. Some of them don't have anything to do with each other. Take a look:
Link identified as duplicate: http://www.incipio.com/cases/smartphone-cases/htc-smartphone-cases/htc-windows-phone-8x-cases.html?color=27&price=20-
Link this is a duplicate of: http://www.incipio.com/cases/macbook-cases/macbook-pro-13in-cases.html
Any idea as to what could be happening here?
Development Website Duplicate Content Issue
Hi, We launched a client's website around 7th January 2013 (http://rollerbannerscheap.co.uk). We originally built the website on a development domain (http://dev.rollerbannerscheap.co.uk), which was active for around 6-8 months (the dev site was unblocked from search engines for the first 3-4 months, but then blocked again) before we migrated dev --> live. In late Jan 2013 we changed the robots.txt file to allow search engines to index the website. A week later I accidentally logged into the DEV website and also changed its robots.txt file to allow the search engines to index it. This obviously caused a duplicate content issue as both sites were identical. I realised what I had done a couple of days later and blocked the dev site from the search engines with the robots.txt file. Most of the pages from the dev site had been de-indexed from Google apart from 3: the home page (dev.rollerbannerscheap.co.uk) and two blog pages. The live site has 184 pages indexed in Google, so I thought the last 3 dev pages would disappear after a few weeks. I checked back in late February and the 3 dev site pages were still indexed in Google. I decided to 301 redirect the dev site to the live site to tell Google to rank the live site and to ignore the dev site content. I also checked the robots.txt file on the dev site and this was blocking search engines too. But still the dev site is being found in Google wherever the live site should be found.
When I do find the dev site in Google it displays this: "Roller Banners Cheap » admin - dev.rollerbannerscheap.co.uk - A description for this result is not available because of this site's robots.txt - learn more." This is really affecting our client's SEO plan and we can't seem to remove the dev site or get the live site ranking in Google. Please can anyone help?
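For anyone in a similar spot, a blanket dev-to-live 301 of the kind described is usually done at the server level; a hedged Apache sketch, assuming an .htaccess on the dev host. One caveat worth noting: if robots.txt on the dev site blocks crawling, Googlebot can never fetch the dev URLs to see the 301s, which can leave stale listings lingering:

```apache
# .htaccess on dev.rollerbannerscheap.co.uk - send every request to the live site
RewriteEngine On
RewriteCond %{HTTP_HOST} ^dev\.rollerbannerscheap\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://rollerbannerscheap.co.uk/$1 [R=301,L]
```

With this in place, the robots.txt block on the dev domain would need to be lifted long enough for the redirects to be discovered.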
Technical SEO | | SO_UK0 -
Website redesign launch
Hello everyone, I am in the process of having my consulting website redesigned and have a question about how this may impact SEO. I will be using the same URL as I did before, just simply replacing an old website with a new website. Obviously the URL structure will change slightly since I am changing navigation names. Page titles will also change. Do I need to do anything special to ensure that all of the pages from the old website are redirected to the new website? For example, should I do a page level redirect for each page that remains the same? So that the old "services" page is pointed to the new "services" page? Or can I simply do a redirect at the index page level? Thank you in advance for any advice! Best, Linda
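For illustration, page-level 301s of the kind described are straightforward with Apache's mod_alias; a sketch with hypothetical paths, not the actual site's URLs:

```apache
# Map each old URL to its nearest equivalent on the new site (paths are hypothetical)
Redirect 301 /old-services.html /services/
Redirect 301 /old-about.html /about/
```

A one-line redirect to the homepage would lose the page-to-page relevance, so a per-page map like this is generally preferable when the old and new pages correspond.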
Technical SEO | | LindaSchumacher0 -
Duplicate page content
Hi, I am getting a duplicate content error in SEOmoz on one of my websites. It shows:
http://www.exampledomain.co.uk
http://www.exampledomain.co.uk/
http://www.exampledomain.co.uk/index.html
How can I fix this? Thanks, Darren
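One common fix for this pattern (a sketch assuming an Apache server with mod_rewrite enabled) is to 301 the index.html variant to the root, so only one version of the homepage can be indexed:

```apache
# .htaccess: collapse /index.html onto / with a permanent redirect
RewriteEngine On
RewriteRule ^index\.html$ / [R=301,L]
```

A rel=canonical on the homepage pointing at the bare root URL is an alternative when server-level redirects aren't available.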
Technical SEO | | Bristolweb0 -
Is this dangerous (a content question)
Hi I am building a new shop with unique products, but I also want to offer tips and articles on the same topic as the products (fishing). I think if I was to add the articles and advice one piece at a time it would look very empty and give little reason to come back very often. The plan, therefore, is to launch the site pulling articles from a number of article websites - with those sites' permission. Obviously this would be 100% duplicate content, but it would make the user experience much better and offer added value to my site, as people are likely to keep returning even when not in the mood to purchase anything; it also offers the potential for people to email links to friends etc. Note: over time we will be adding more unique content and slowly turning off the pulled articles. Anyway, from an SEO point of view I know the duplicate content would harm the site, but if I was to tell Google not to index the directory and block it from even crawling the directory, would it still know there is duplicate content on the site and apply the penalty to the non-duplicate pages? I'm guessing no, but it's always worth a second opinion. Thanks Carl
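For reference, blocking the directory from crawling would be a one-line robots.txt rule; the /articles/ path below is a hypothetical placeholder. Note that robots.txt prevents crawling, not indexing: a blocked URL can still appear in results if linked externally, and a meta noindex tag only takes effect if the page can be crawled:

```text
# robots.txt - keep crawlers out of the pulled-article directory (hypothetical path)
User-agent: *
Disallow: /articles/
```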
Technical SEO | | Grumpy_Carl0 -
Duplicate content
Greetings! I have inherited a problem that I am not sure how to fix. The website I am working on had a 302 redirect from its original home URL (with all the link juice) to a newly designed page (with no real link juice). When the 302 redirect was removed, a duplicate content problem remained, since the new page had already been indexed by Google. What is the best way to handle duplicate content? Thanks!
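For illustration, one common way to consolidate two indexed versions like this (a hedged sketch; the URL is a placeholder) is a canonical on the duplicate pointing at the preferred page - or, better, reinstating the redirect as a permanent 301 rather than a 302:

```html
<!-- In the <head> of the duplicate page, pointing at the version you want indexed -->
<link rel="canonical" href="https://example.com/preferred-home-page/" />
```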
Technical SEO | | shedontdiet0