HELP! How does one prevent regional pages from being counted as "duplicate content," "duplicate meta descriptions," et cetera?
-
The organization I am working with has multiple versions of its website geared toward different regions:
- US - http://www.orionhealth.com/
- CA - http://www.orionhealth.com/ca/
- DE - http://www.orionhealth.com/de/
- UK - http://www.orionhealth.com/uk/
- AU - http://www.orionhealth.com/au/
- NZ - http://www.orionhealth.com/nz/
Some of these sites have very similar pages, which are being flagged for duplicate content, meta descriptions and titles. Two examples are:
Now, even though the content is the same, the navigation is different, since each region has different product options and services. A redirect won't work, because the navigation on the main US site is different from the navigation on the UK site. A rel=canonical seems like a viable option, but (correct me if I'm wrong) it tells search engines to index only the main page - in this case the US version - and I still want the UK site to appear to search engines. So what is the proper way of treating similar pages across different regional directories?
Any insight would be GREATLY appreciated! Thank you!
-
Are they showing as duplicate content within Moz Analytics? As long as the search engines don't see it as duplicate content, you shouldn't have a problem. Here are some things to check for:
- The content should be localised as well as possible (e.g. account for the differences between US, Canadian and British English).
- I'm not sure it's possible with your sites, but things like contact details and local currency can help send the right signals to Google etc. - for example, a local address and phone number if you have an HQ or office in each country.
- Each site should be registered separately with Bing and Google Webmaster Tools and geotargeted to the relevant location.
- Use the hreflang attribute if you're not already (triple-check to make sure it is set up properly): https://support.google.com/webmasters/answer/189077?hl=en - you can set Canadian English (en-ca) and Canadian French (fr-ca); the language code always comes before the country code. A sample set of annotations follows this list.
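To make that concrete, here is a minimal sketch of what the hreflang annotations could look like in the head of each regional page, using the URLs from your question. The exact language-country pairings and the x-default entry are assumptions on my part - adjust them to match how each site is actually localised.

```html
<!-- Sketch of hreflang annotations for the <head> of each regional page.
     URLs are taken from the question; the language-country pairings are
     assumptions and should match the actual localisation of each site.
     Note that the UK code is "en-gb", not "en-uk". -->
<link rel="alternate" hreflang="en-us" href="http://www.orionhealth.com/" />
<link rel="alternate" hreflang="en-ca" href="http://www.orionhealth.com/ca/" />
<link rel="alternate" hreflang="de-de" href="http://www.orionhealth.com/de/" />
<link rel="alternate" hreflang="en-gb" href="http://www.orionhealth.com/uk/" />
<link rel="alternate" hreflang="en-au" href="http://www.orionhealth.com/au/" />
<link rel="alternate" hreflang="en-nz" href="http://www.orionhealth.com/nz/" />
<link rel="alternate" hreflang="x-default" href="http://www.orionhealth.com/" />
```

The same set of annotations needs to appear on every page in the group (each page lists all of its alternates, including itself), and on deeper pages the URLs should point to the equivalent regional page rather than the homepage.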
Since you mentioned your terms and conditions: I noticed a lot of companies are using very similar wording to yours, so that could be a case of duplicate content appearing across completely different websites. You could rewrite the Ts and Cs, though I wouldn't see any harm in noindexing those particular pages either.
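If you do go the noindex route for the terms and conditions pages, a robots meta tag in the page head is the usual way to do it - a minimal sketch:

```html
<!-- Keeps the page out of the index while still letting crawlers follow its links -->
<meta name="robots" content="noindex, follow" />
```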
Related Questions
-
Defining duplicate content
If you have the same sentences or paragraphs on multiple pages of your website, is this considered duplicate content and will it hurt SEO?
Intermediate & Advanced SEO
-
Product Syndication and duplicate content
Hi, It's a duplicate content question. We sell products (vacation rental homes) on a number of websites as well as our own. Generally, these affiliate sites have a higher domain authority and much more traffic than our site. The product content (text, images, and often availability and rates) is pulled by our affiliates into their websites daily and is exactly the same as the content on our site, not including their page structure. We receive enquiries by email and any links from their domains to ours are nofollow. For example, all of the listing text on mysite.com/listing_id is identical to my-first-affiliate-site.com/listing_id and my-second-affiliate-site.com/listing_id. Does this count as duplicate content and, if so, can anyone suggest a strategy to make the best of the situation? Thanks
Intermediate & Advanced SEO
-
Brackets vs Encoded URLs: The "Same" in Google's eyes, or dup content?
Hello, this is the first time I've asked a question here, but I would really appreciate the advice of the community - thank you, thank you! Scenario: internal linking is pointing to two different versions of a URL, one with brackets [] and the other with the brackets encoded as %5B%5D.
Version 1: http://www.site.com/test?hello[]=all&howdy[]=all&ciao[]=all
Version 2: http://www.site.com/test?hello%5B%5D=all&howdy%5B%5D=all&ciao%5B%5D=all
Question: Will search engines view these as duplicate content? Technically there is a difference in characters, but it's only because one version encodes the brackets and the other does not (see: http://www.w3schools.com/tags/ref_urlencode.asp). We are asking the developer to encode ALL URLs because this seems cleaner, but they are telling us that Google will see zero difference. We aren't sure if this is true, since engines can get so hung up on even one single difference in character. We don't want to unnecessarily fracture the internal link structure of the site, so again - any feedback is welcome, thank you. 🙂
Intermediate & Advanced SEO
-
Joomla Duplicate Page content fix for mailto component?
Hi, I am currently working on my site and have the following duplicate page content issues: My Uni Essays http://www.myuniessays.co.uk/component/mailto/?tmpl=component&template=it_university&link=2631849e33 My Uni Essays http://www.myuniessays.co.uk/component/mailto/?tmpl=component&template=it_university&link=2edd30f8c6 This happens 15 times Any ideas on how to fix this please? Thank you
Intermediate & Advanced SEO
-
Fetch as GoogleBot "Unreachable Page"
Hi, We are suddenly having an error "Unreachable Page" when any page of our site is accessed as Googlebot from webmaster tools. There are no DNS errors shown in "Crawl Errors". We have two web servers named web1 and web2 which are controlled by a software load balancer HAProxy. The same network configuration has been working for over a year now and never had any GoogleBot errors before 21st of this month. We tried to check if there could be any error in sitemap, .htaccess or robots.txt by excluding the loadbalancer and pointing DNS to web1 and web2 directly and googlebot was able to access the pages properly and there was no error. But when loadbalancer was made active again by pointing the DNS to it, the "unreachable page" started appearing again. This very same configuration has been working properly for over a year till 21st of this month. Website is properly accessible from browser and there are no DNS errors either as shown by "Crawl Errors". Can you guide me about how to diagnose the issue. I've tried all sorts of combinations, even removed the firewall but no success. Is there any way to get more details about error instead of just "Unreachable Page" error ? Regards, shaz
Intermediate & Advanced SEO
-
How much (%) of the content of a page is considered too much duplication?
Google is not fond of duplication, I have been very kindly told. So how much would you suggest is too much?
Intermediate & Advanced SEO
-
Should we deindex duplicate pages?
I work on an education website. We offer programs that are offered up to 6 times per year. At the moment, we have a webpage for each instance of the program, but that's causing duplicate content issues. We're reworking the pages so the majority of the content will be on one page, but we'll still have to keep the application details as separate pages. 90% of the time, application details are going to be nearly identical, so I'm worried that these pages will still be seen as duplicate content. My question is, should we deindex these pages? We don't particularly want people landing on our application page without seeing the other details of the program anyway. But, is there problem with deindexing such a large chunk of your site that I'm not thinking of? Thanks, everyone!
Intermediate & Advanced SEO
-
Should I do something about this duplicate content? If so, what?
On our real estate site we have our office listings displayed. The listings are generated from a scraping script that I wrote. As such, all of our listings have the exact same description snippet as every other agent in our office. The rest of the page consists of site-wide sidebars and a contact form. The title of the page is the address of the house and so is the H1 tag. Manually changing the descriptions is not an option. Do you think it would help to have some randomly generated stuff on the page such as "similar listings"? Any other ideas? Thanks!
Intermediate & Advanced SEO
-