Thanks Corey, that's very helpful and makes sense. I'd assume too that each site would have separate GA tracking codes and Webmaster Tools profiles. Hosting them on separate servers in their respective countries, with localized content, is another key factor.
Latest posts made by gregelwell
-
RE: SEO for .com vs. .com.au websites
-
SEO for .com vs. .com.au websites
I have a new client from Australia who has a website on a .com.au domain. He has the same domain name registered for .com.
Example: exampledomain.com.au, and exampledomain.com
He started with the .com.au site for a product he offers in Australia. He's bringing the same product (a medical device) to the U.S. and wants us to build a site for it and point it to the .com.
Right now, he has what appears to be the same site showing on the .com as on the .com.au. Both domains point to the same host, but there are separate sections or directories within the hosting account for each website - and the content is exactly the same.
Would this be viewed as duplicate content by Google?
What's the best way to structure or build the new site on the .com to get the best SEO in the USA, maintain the .au version and not have the websites compete or be viewed as having duplicate content?
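For context, the standard way to signal intentionally localized near-duplicates across ccTLDs is hreflang annotations in each page's head. A minimal sketch, using the example domains from the question (the exact URLs are illustrative):

```html
<!-- Include the same set of tags on both exampledomain.com and exampledomain.com.au -->
<link rel="alternate" hreflang="en-us" href="https://exampledomain.com/" />
<link rel="alternate" hreflang="en-au" href="https://exampledomain.com.au/" />
<link rel="alternate" hreflang="x-default" href="https://exampledomain.com/" />
```

Each annotated page should carry the full set of alternates, including a reference to itself, or search engines may ignore the annotations.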
Thanks,
Greg
-
RE: Could you use a robots.txt file to disallow a duplicate content page from being crawled?
Peter, Thanks for the clarification.
-
RE: Could you use a robots.txt file to disallow a duplicate content page from being crawled?
Next time I'll read the reference links more carefully.
Thank you!
-
RE: Could you use a robots.txt file to disallow a duplicate content page from being crawled?
Thanks Kyle. Anthony had a similar view on using the rel canonical tag. I'm just curious: should it be added to the original page, the duplicate page, or both?
Thanks,
Greg
-
RE: Could you use a robots.txt file to disallow a duplicate content page from being crawled?
Anthony, thanks for your response. See Kyle's answer - he also felt using the rel canonical tag was the best thing to do. However, he seemed to think you'd put it on the original page - the one you want to rank for - while you're suggesting putting it on the duplicate page. Should it be added to both, while specifying which page is the 'original'?
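For reference, the canonical tag belongs in the head of the duplicate page and points at the page you want to rank. A minimal sketch, with a hypothetical URL:

```html
<!-- On the duplicate page, pointing to the preferred version -->
<link rel="canonical" href="https://www.example.com/original-page/" />
```

A self-referencing canonical on the original page is also common and harmless, so in practice both pages can carry the tag as long as each one points at the preferred URL.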
Thanks!
Greg
-
Could you use a robots.txt file to disallow a duplicate content page from being crawled?
A website has duplicate content pages to make it easier for users to find the information from a couple of spots in the site navigation. The site owner would like to keep it this way without hurting SEO.
I've thought of using the robots.txt file to disallow search engines from crawling one of the pages. Do you think this is a workable/acceptable solution?
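For reference, the directive under discussion would look like this - a sketch assuming the duplicate lives at a hypothetical path:

```
# robots.txt at the site root
User-agent: *
Disallow: /duplicate-page/
```

One caveat worth noting: Disallow only blocks crawling. A blocked URL can still appear in the index if it's linked to, and it won't pass its link signals to the preferred page the way a rel canonical tag does.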