What's my best strategy for Duplicate Content if only www pages are indexed?
-
The Moz crawl report for my site shows duplicate content, with both www and non-www pages on the site. (Only the www pages are indexed by Google, however.) Do I still need a 301 redirect even though the non-www pages are not indexed? And is rel=canonical still the less preferable option, as usual?
Facts:
- the site is built using asp.net
- the homepage has multiple versions which use 'meta refresh' tags to point to 'default.asp'.
- most links already point to www
Current Strategy:
- set the preferred domain to 'www' in Google's Webmaster Tools.
- set the WordPress blog (which sits in a /blog subdirectory) with rel="canonical" to point to the www version.
- Ask programmer to add 301 redirects from the non-www pages to the www pages.
- Ask programmer to use 301 redirects as opposed to meta refresh tags & point all homepage versions to www.site.org.
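For reference, the host-level 301 being requested of the programmer could look something like this on IIS. This is only a minimal sketch, assuming the IIS URL Rewrite module is installed and using www.site.org (the placeholder domain from this thread):

```xml
<!-- web.config sketch: 301 every non-www request to the www host.
     Assumes the IIS URL Rewrite module; "site.org" is a placeholder. -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Redirect non-www to www" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <!-- Match only the bare host, so requests already on www
                 never trigger the rule - this is what prevents the
                 infinite-loop problem mentioned later in the thread. -->
            <add input="{HTTP_HOST}" pattern="^site\.org$" />
          </conditions>
          <action type="Redirect" url="http://www.site.org/{R:1}"
                  redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

Because the rule's condition only matches the bare host, the redirect target can never re-match it, so one rule covers every non-www URL without looping.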
Does this strategy make the most sense? (Especially considering the non-indexed but existent non-www pages.)
Thanks!!
-
Very informative - thank you! It seems that whenever I think I have a relatively firm grip on SEO, I stumble upon something new - like the dangerous potential for an infinite loop in a 301 redirect in IIS. (That was Greek to me two days ago ;))
Your response solved my concerns - hopefully it will help somebody else when they face the same problem.
-
Well, the reason Google has picked the www version as its preferred version automatically is most likely because of all the links you mentioned that were already pointing to that iteration of your domain. Google can figure this out on its own. That said, it still sees the two sites (non-www and www) as duplicates of each other. Best practice is to 301 one to the other.
I've waged this war with a programmer before so I know how it goes. The one I dealt with didn't think there was any reason and told me all websites work that way. So I asked him to go to http://google.com and tell me how it resolves. Repeat that step with every major brand you can think of until he/she gets the point and that might help you.
They should be able to 301 this one time, no matter whether they're using an Apache or IIS server. This should be a quick fix. If they're unsure how to do it, have them Google "IIS 301 redirects" if it's a Windows server or "htaccess 301" if it's a Linux/Apache server.
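On the Apache side, the fix they would find is typically just a few lines of .htaccess. A minimal sketch, assuming mod_rewrite is enabled and using the placeholder domain site.org:

```apache
# .htaccess sketch: 301 the bare domain to www.
# Assumes mod_rewrite is enabled; "site.org" is a placeholder.
RewriteEngine On
# Only fire for the bare host, so www requests are never re-redirected.
RewriteCond %{HTTP_HOST} ^site\.org$ [NC]
RewriteRule ^(.*)$ http://www.site.org/$1 [R=301,L]
```

The `R=301` flag makes it a permanent redirect and `L` stops further rule processing for that request.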
-
Thanks for easing my mind, Jesse! One thing still confuses me - the fact that the non-www pages are not indexed. They are not disallowed in robots.txt, there are no rel=canonical tags (except for pages in the blog subdirectory), they are not meta-refreshed, and obviously not 301 redirected. Could it be the doing of a sitemap (though I can't find one)? Or did Google simply decide all the www pages were more relevant? Am I missing something here? I don't want to ask the programmer to add a ton of 301 redirects (which I did) only to get a 'DUH!' response ;)
FYI - the site is asp.net. Not sure if that matters, except when they redirect the homepage to avoid creating an infinite loop. (Right?)
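On the homepage/meta-refresh point: a redirect rule scoped so it cannot match its own target can't loop. A hypothetical sketch of the rule that would replace the meta refresh tags (the alternate-homepage filenames here are assumed, not from this thread, and it again presumes the IIS URL Rewrite module):

```xml
<!-- Sketch: 301 assumed alternate homepage files to the canonical
     default.asp, replacing the meta refresh tags. Because the match
     pattern can never equal the target (default.asp), no loop occurs. -->
<rule name="Homepage variants to default.asp" stopProcessing="true">
  <match url="^(index\.html|index\.asp|home\.asp)$" />
  <action type="Redirect" url="http://www.site.org/default.asp"
          redirectType="Permanent" />
</rule>
```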
Thanks again!
-
This is an incredibly easy topic to address because you've already laid out exactly what needs to happen.
In other words, yes! That strategy is exactly the way you should go.
Good job and good luck!