Duplicate Content caused by www.www
-
In trying to knock down the most common errors on our site, we've noticed we have an issue with duplicate content; however, most of the duplicate content errors are due to our site being indexed with www.www instead of just www.
I am perplexed as to how this is happening. Searching through IIS, I see nothing that would cause it, and we have no hostname records set up as www.www.
Does anyone know of anything else that might cause this, and how we can go about remedying it?
-
Just a thanks again! I modded ISAPI Rewrite and resolved the problem thanks to your tip.
FYI in case anyone else needs it, here is the mod I did:
# Redirect www.www.domainname.com to www.domainname.com (also handles the .net and .org versions)
RewriteCond Host: www\.www\.domainname\.(com|net|org)
RewriteRule (.*) http://www.domainname.$1$2 [R]
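Worth noting: in ISAPI_Rewrite 2 the [R] flag issues a temporary (302) redirect, while [RP] issues a permanent 301. Since the goal is consolidating duplicate content, the 301 variant is generally the one you want search engines to see.
-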
Thanks Keri - yes I have ISAPI Rewrite installed.
I will go that route and give it a shot.
-
I haven't done this in a long time, but I used ISAPI Rewrite, and did what is talked about in this thread at http://www.webmasterworld.com/microsoft_asp_net/3951672.htm. An alternate method is discussed at http://www.stepforth.com/resources/web-marketing-knowledgebase/non-www-redirect/#microsoft. Hope this helps you get started!
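In case those links ever go stale, the approach they describe amounts to a host-header rule in ISAPI_Rewrite's httpd.ini. A rough sketch in ISAPI_Rewrite 2 syntax, with domainname.com as a placeholder:

[ISAPI_Rewrite]
# Send requests for the bare domain to the www host with a permanent redirect
RewriteCond Host: ^domainname\.com$
RewriteRule (.*) http://www.domainname.com$1 [I,RP]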
-
Hi Keri - it is IIS6
-
Are you running IIS6 or IIS7? Things are a little different for each of them. If you let us know what version of IIS you're running, we'll look up the proper configuration for you.
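To illustrate the difference: on IIS7 this kind of fix normally lives in web.config using Microsoft's URL Rewrite module rather than a third-party filter. A minimal sketch, assuming that module is installed and with domain.com as a placeholder:

<system.webServer>
  <rewrite>
    <rules>
      <rule name="Collapse www.www" stopProcessing="true">
        <!-- Match any path; the real test is on the Host header below -->
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^www\.www\.(.+)$" />
        </conditions>
        <!-- {C:1} is the captured hostname remainder, {R:1} the original path -->
        <action type="Redirect" url="http://www.{C:1}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>

On IIS6 there is no built-in equivalent, which is why a filter like ISAPI Rewrite comes up in the answers above.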
-
Hi Sanket,
The person asking the question is on IIS, so an .htaccess file won't help.
-
Thanks Sanket, your reply makes total sense, and I would apply it if I could figure out how the extra www (www.www) is even happening. Wouldn't I have to know that first in order to implement the redirect?
In IIS I don't see where the extra www.www is set up (or in what directory).
If I could find it, it seems I could just create a .htaccess file with the code below:
Options +FollowSymlinks
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.www\.domain\.com [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,NC]
Would that be right?
I'm still digging on where the extra forward to www.www is happening.
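A frequent cause of a phantom host like this is a wildcard DNS record (*.domain.com) pointed at the server, combined with an IIS site that answers on any host header; in that case there is nothing inside IIS to find. A quick way to test, with domain.com as a placeholder:

nslookup www.www.domain.com

If that resolves to your server's IP, wildcard DNS is the likely source, and a host-header redirect like the ones discussed here is the standard remedy.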
-
Hi,
If your site opens both with and without www, Google considers them different sites and sees duplicate content. This can be addressed by setting rel="canonical", but a 301 redirect is the best option because it transfers your backlinks, PageRank, and link juice, so it is the safe way to redirect a URL. Uploading an .htaccess file to your server via FTP is one way to put this in place.
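For completeness, the rel="canonical" hint mentioned above is a single link element placed in the head of every variant of a page; a sketch with a placeholder URL:

<link rel="canonical" href="http://www.domain.com/page.html" />

Keep in mind it is only a hint to search engines, while a 301 redirect actually moves visitors and consolidates link equity, so for a hostname problem like www.www the redirect remains the stronger fix.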