Root domain not resolving to www. Duplicate content?
-
Hi,
I'm working with a domain that stays on the root domain if the www is not included.
But if the www is included, it stays with the www.
Like this:
example.com
or
www.example.com
Of course, they are identical and both go to the same IP.
Do search engines consider that to be duplicate content?
thanks,
michael -
Sorry, I forgot to remove the .au
-
Hey thanks Gyorgy. I'm going to try your fix.
I'm wondering about the .com.au/$1 segment.
Should that .au remain there? Is that for Australia?
Much obliged,
michael -
Hey thanks everyone. Good info all around.
My client is hosting the site with a datarooms host, so I don't have access to the server settings. Tech support there seems clueless and "doesn't see a problem." I tried the DNS settings to no avail.
Is the only way to resolve this to get the dataroom host to modify the .htaccess?
thanks,
michael -
Sometimes the webhosting company has a feature for this. I have this as an option in dreamhost.com when I am setting up hosting for a specific domain.
Leave it alone: Both http://www.compatibletonercartridge.net/ and http://compatibletonercartridge.net/ will work.
Add WWW: Make http://compatibletonercartridge.net/ redirect to http://www.compatibletonercartridge.net/
Remove WWW: Make http://www.compatibletonercartridge.net/ redirect to http://compatibletonercartridge.net/ -
Using GWT is easy and a great help, but it doesn't affect the site itself when people come to the site from something other than Google, nor does it help the other search engines.
-
The most important and easiest step is to use Google Webmaster Tools. Go to Site Configuration, then Settings and set the Preferred Domain to either www or non-www.
Google:
"The preferred domain is the one that you would like used to index your site's pages. If you specify your preferred domain as http://www.example.com and we find a link to your site that is formatted as http://example.com, we'll treat that link as if it was http://www.example.com."If you have access to .htaccess and not afraid to use it, then the following will help:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^yourdomain\.com [NC]
RewriteRule ^(.*)$ http://www.yourdomain.com/$1 [L,R=301]
RewriteCond %{THE_REQUEST} ^.*/index\.html
RewriteRule ^(.*)index\.html$ http://www.yourdomain.com.au/$1 [R=301,L]
This redirects the non-www domain to the www version, and also redirects index.html requests to the www version. Edit accordingly.
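If you decide to standardize on the non-www version instead, the reverse works the same way. A sketch along the same lines (yourdomain.com is a placeholder, so adjust and test before going live):
RewriteEngine on
# Send www.yourdomain.com to yourdomain.com with a permanent (301) redirect
RewriteCond %{HTTP_HOST} ^www\.yourdomain\.com [NC]
RewriteRule ^(.*)$ http://yourdomain.com/$1 [L,R=301]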
-
Hi Michael,
Let us know if you have questions about how to implement this. You will want to choose one version of your site, either www or non-www, and stick with that. Using Open Site Explorer to see whether www or non-www has the most backlinks can help with this decision.
Usually you do use an htaccess file to set this up, but sometimes you're on IIS or you don't have access to that file and need to work around it. If this is your case, let us know and we'll give you some help.
-
Yes, be careful with this.
Modify the .htaccess and redirect the example.com page to the www.example.com page.
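For reference, that rule looks something like this (a minimal sketch; swap in the real domain and test it before relying on it):
RewriteEngine on
# Send example.com to www.example.com with a permanent (301) redirect
RewriteCond %{HTTP_HOST} ^example\.com [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]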
More information: http://www.seomoz.org/learn-seo/redirection
Bye!
Related Questions
-
URL slash creating duplicate content
Hi All, I currently have an issue whereby my domain name (just the homepage) has: mydomain.com and: mydomain.com/ The Moz crawler flags this up as duplicate content - does anyone know of a way I can fix this? Thanks! Jack
Technical SEO | | Jack11660 -
Setting up addon domains properly (bonus duplicate content issue inside)
A new client of mine is using 1and1 hosting from back in the dark ages. Turns out, her primary domain and her main website (a different domain) are exactly the same. She likes to have the domain names of her books, but her intention is to have them redirect to her main site. Unfortunately, 1and1's control panel is light years behind cPanel, so when she set up her new domains it just pointed everything to the same directory. I just want to make sure I don't mess this up, so please correct me if I'm wrong about something. I'm assuming this is a major duplicate content deal, so I plan to create a new directory for each add-on domain. Since her main site is an add-on itself, I'll have to move all the files into its new home directory. Then I'll create an .htaccess file for each domain and redirect it to her main site. Right so far? My major concern is with the duplicate content. She's had two sites being exactly the same for years. Will there be any issues left over after I set everything up properly? Is there anything else I need to do? Thanks for the help guys! I'm fairly new to this community and love the opportunity to learn from the best!
Technical SEO | | Mattymar0 -
Content and url duplication?
One of the campaign tools flags one of my client's sites as having lots of duplicates. This is true in the sense that the content is sort of boilerplate but with the wording changed for different countries. It is the same with the URLs, but they differ in that a couple of words have changed in each. So it's not a case of a CMS or server issue as SEOmoz advises, and it doesn't need 301s! The thing is, in the niche (freight, transport operators, shipping) I can see many other sites doing the same thing, and those sites have lots of similar pages ranking very well. In fact one site has over 300 keywords ranked on pages 1-2, but it is a large site with a 12-year-old domain, which clearly helps. Of course having every page's content unique is important; however, I suppose it is better than copy-and-paste from other sites, so it's unique in that sense. I'm hoping to convince the site owner to change the content over time for every country. A long process. My biggest problem with understanding duplication issues is that every tabloid or broadsheet media website would be canned from Google, as quite often they scrape Reuters or re-publish standard press releases on their sites as newsworthy content. So I have great doubt that there is a penalty for it. You only have to look and you can see media site duplication everywhere, every day, but they get ranked. I just think that Google doesn't rank the worst cases of spammy duplication. They still index them, though, I notice. So considering the business niche has very much the same replicated content layout, which ranks well, is this duplicate flag such a great worry? Many businesses sell the same service to many locations, and it's virtually impossible to rewrite the services in a dozen or so different ways.
Technical SEO | | xtopher660 -
Ways of Helping Reducing Duplicate Content.
Hi, I am looking to know of any way there is to help reduce duplicate content on a website without breaking links or affecting Google rankings.
Technical SEO | | Feily0 -
How to get rid of duplicate content
I have duplicate content that looks like http://deceptionbytes.com/component/mailto/?tmpl=component&link=932fea0640143bf08fe157d3570792a56dcc1284 - however I have 50 of these, all with different numbers on the end. Does this affect search engine optimization, and how can I disallow these in my robots.txt file?
Technical SEO | | Mishelm1 -
404's and duplicate content.
I have real estate based websites that add new pages when new listings come on the market and then delete pages when a property is sold. My concern is that a significant number of 404s are created, and the listing pages that are added are going to be the same as others in my market who use the same IDX provider. I could go with a different IDX provider that uses an iframe, which doesn't create new pages, but I used an iframe before and my time on site was 3 min with 2.5 pages per visit, and now it's 7.5 pages/visit with 6+ min on the site. The new pages create new content daily, so which is better: fresh content and better on-site metrics (with the 404s), or fewer 404s, no duplicate content, and weaker on-site metrics? Any thoughts on this issue? Any advice would be appreciated.
Technical SEO | | AnthonyLasVegas0 -
Forget Duplicate Content, What to do With Very Similar Content?
All, I operate a Wordpress blog site that focuses on one specific area of the law. Our contributors are attorneys from across the country who write about our niche topic. I've done away with syndicated posts, but we still have numerous articles addressing many of the same issues/topics. In some cases 15 posts might address the same issue. The content isn't duplicate but it is very similar, outlining the same rules of law etc. I've had an SEO I trust tell me I should 301 some of the similar posts to one authoritative post on the subject. Is this a good idea? Would I be better served implementing canonical tags pointing to the "best of breed" on each subject? Or would I be better off being grateful that I receive original content on my niche topic and not doing anything? Would really appreciate some feedback. John
Technical SEO | | JSOC0 -
The Bible and Duplicate Content
We have our complete set of scriptures online, including the Bible at http://lds.org/scriptures. Users can browse to any of the volumes of scriptures. We've improved the user experience by allowing users to link to specific verses in context which will scroll to and highlight the linked verse. However, this creates a significant amount of duplicate content. For example, these links: http://lds.org/scriptures/nt/james/1.5 http://lds.org/scriptures/nt/james/1.5-10 http://lds.org/scriptures/nt/james/1 All of those will link to the same chapter in the book of James, yet the first two will highlight the verse 5 and verses 5-10 respectively. This is a good user experience because in other sections of our site and on blogs throughout the world webmasters link to specific verses so the reader can see the verse in context of the rest of the chapter. Another bible site has separate html pages for each verse individually and tends to outrank us because of this (and possibly some other reasons) for long tail chapter/verse queries. However, our tests indicated that the current version is preferred by users. We have a sitemap ready to publish which includes a URL for every chapter/verse. We hope this will improve indexing of some of the more popular verses. However, Googlebot is going to see some duplicate content as it crawls that sitemap! So the question is: is the sitemap a good idea realizing that we can't revert back to including each chapter/verse on its own unique page? We are also going to recommend that we create unique titles for each of the verses and pass a portion of the text from the verse into the meta description. Will this perhaps be enough to satisfy Googlebot that the pages are in fact unique? They certainly are from a user perspective. Thanks all for taking the time!
Technical SEO | | LDS-SEO0