Duplicate Contact Information
-
My client has had a website for many years, and his business for decades. He has always had a second domain which is basically a shopping module for obtaining information, comparisons, and quotes for tires. This tire module had no informational pages or contact info; until recently, we pulled that information in through iframes.
Now, however, the tire module is too complex to bring in through iframes, and because of the way the module (or its website framework) is configured, we are told we cannot place it in a subdirectory of the main site.
So now the tire module resides on another domain (although similar to the client's "main site" domain) with some duplicate informational pages (I am working through those with the client). Mainly, though, I am concerned about the duplicate contact info -- address and phone.
Should I worry that this other tire website has duplicated the client's phone and address, same as their main website?
And would having a subdomain (tires.example.com) work better for Google and SEO considering the duplicate contact info?
Any help is much appreciated.
ccee bar
(Also, the client is directing AdWords campaigns to this tire website while, under the same AdWords account, directing other campaigns to the main site. I have advised an entirely separate AdWords account for the tire domain. BTW, the client does NOT have separate social media accounts for each site -- all social media efforts and links are for the main site.)
-
Laura,
Thank you for your reply; this helps greatly.
Right now, because the client lacks a good organic SEO strategy, AdWords generates most of their traffic. I hope to leverage this with a better organic approach and help create a better AdWords strategy.
But all that said, I just wasn't sure about the contact info and address... now I can move on. Thanks again!
-
First of all, links from AdWords campaigns do not help your organic search rankings at all.
Secondly, this kind of duplicate content issue may not be as big a problem as you think. If Google detects that two pages have the same or very similar content, it will choose the best one to display in search results and filter out the other. So you may not need to do anything.
On the other hand, if you are particular about which website should appear in search results for that content, you'll want to use the rel="canonical" tag to let Google know which page you prefer. You'll find more info about the canonical tag at the two links below.
- https://support.google.com/webmasters/answer/139066?hl=en
- https://moz.com/learn/seo/canonicalization
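For reference, a canonical designation is just a link element in the page's head. A minimal sketch, using a placeholder URL (your client's actual preferred URL would go here):

```html
<!-- Placed in the <head> of the duplicate page, pointing Google to the
     version you prefer in search results. example.com is a placeholder. -->
<link rel="canonical" href="https://www.example.com/preferred-page/" />
```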
I hope that helps!
-
Laura,
Thanks so much for your response.
I guess what I was thinking is that duplicate info in online directories would be expected.
But if duplicate content, and the same business name, appeared on two different websites (each set up as a service or consulting business), would it look like the two sites were trying to capitalize on search results -- especially if some inbound links (like AdWords) were pointing to one site (tires, say) and also to the "main site" (brakes, and some tires)?
Do you still think this is OK?
ccee bar
-
Having the same phone number and address on two websites is not a duplicate content issue. It's very common because of business directories all over the web. If that's the only duplicate content you're worried about, then you're fine.
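Going a step beyond what was asked: if anything, you can make the shared contact details work for you by marking them up consistently on both sites. A hedged sketch using schema.org LocalBusiness structured data (the business name, phone, and address below are all placeholders):

```html
<!-- Identical NAP (name, address, phone) markup on both sites signals
     one business, not two competing ones. All values are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Tire & Brake Co.",
  "telephone": "+1-555-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Anytown",
    "addressRegion": "CA",
    "postalCode": "90210"
  }
}
</script>
```

Structured data is optional here, but keeping the contact details character-for-character identical across both sites (and any directories) is good practice regardless.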
-
A subdomain is better than a separate domain if you cannot have a subdirectory.
As for the duplicate content: regardless of whether it's on a separate domain, a subdomain, or a subdirectory, I would canonical any duplicate pages to the authoritative content. If that's the main site, then I would canonical the other domain's pages to it. I'm not sure why you would prefer the other domain as the authoritative source, but if that is the case, you would canonical the main site to the other domain.
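Cross-domain canonicals look exactly like same-site ones; only the href crosses domains. A minimal sketch with hypothetical domains (main-site.com and tires-site.com are placeholders for your client's actual domains):

```html
<!-- In the <head> of the duplicate page on the tire domain
     (e.g. tires-site.com/contact/), pointing to the authoritative
     page on the main site. Both domains are placeholders. -->
<link rel="canonical" href="https://www.main-site.com/contact/" />
```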