What is the best method to solve duplicate page content?
-
The issue I am having is that an overwhelming number of pages on cafecartel.com are flagged as having duplicate page content.
But when I check the errors in SEOmoz, it shows that the duplicate content is coming from www.cafecartel.com, not cafecartel.com.
So first of all, does this mean there are two sites? And is this a problem I can fix easily (i.e., by redirecting the URL and deleting the extra pages)?
Is this going to make all my other SEO work useless, given that nearly every page shows duplicate page content?
Or am I just reading the data completely wrong?
-
WordPress has a setting under Settings → General for choosing the www or non-www version of your URL.
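The same preference can also be pinned in wp-config.php. A minimal sketch (the hostname and path here are illustrative; adjust them to your own install):

```php
// Force WordPress to use the www hostname everywhere.
// These constants override the Settings → General values.
define( 'WP_HOME',    'http://www.cafecartel.com/ccsnews' );
define( 'WP_SITEURL', 'http://www.cafecartel.com/ccsnews' );
```

Hard-coding the constants is useful when a plugin or migration keeps overwriting the database setting.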
-
I had the .htaccess redirect in place, but ccsnews is a WordPress blog. With that redirect active, the blog complained of too many redirects; I've seen this happen before, even on seomoz.
So I'm using a Joomla redirect plugin instead. I believe WordPress has a redirect plugin as well; I just haven't installed it yet.
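A host-based rewrite rule only fires when the request is actually on the bare domain, which is the usual way to avoid the redirect loop described above. A minimal .htaccess sketch, assuming Apache with mod_rewrite enabled:

```apache
RewriteEngine On
# Redirect only when the host is NOT already www.cafecartel.com,
# so requests that are already correct pass through untouched (no loop).
RewriteCond %{HTTP_HOST} ^cafecartel\.com$ [NC]
RewriteRule ^(.*)$ http://www.cafecartel.com/$1 [R=301,L]
```

If the loop persists with a rule like this, it usually means WordPress itself is redirecting back (e.g., its Site Address is still set to the non-www URL), so both layers need to agree on one canonical host.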
-
The internal crawl report from SEOmoz is based on your internal links, not external inbound links, so if there are errors, they are within your site.
At a quick glance, I see that you have set up the 301 to www, but if you click into the blog (news) at http://cafecartel.com/ccsnews/, you aren't on the www host anymore. (If it's WordPress, that's just a simple settings change.)
Run a crawl test on it (http://pro.seomoz.org/tools/crawl-test) and keep plugging away at each issue until there are none left.
Also make sure you use rel=canonical tags; this will help with the duplicate content as well: http://www.seomoz.org/learn-seo/canonicalization
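A canonical tag goes in the head of each page and points at the one URL you want indexed. A sketch (the URL is illustrative):

```html
<!-- In the <head> of every variant of the page, including the non-www version -->
<link rel="canonical" href="http://www.cafecartel.com/ccsnews/" />
```

That way, even if a crawler reaches the page under a second hostname, it knows which URL is the original.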
-
Thank you, Brent and Mark.
Taking your advice, this is what happened:
At the tail end of last week, we implemented a 301 redirect to www.cafecartel.com by adjusting the .htaccess file. It worked in the sense that we now always land on www.cafecartel.com, BUT the errors didn't change after the next crawl.
I fear that the existing links to cafecartel.com and www.cafecartel.com may need to be redirected manually for each page.
The pages showing the most errors are the blog article pages, quote request pages, and free download pages. These same pages have links going between pages on www.cafecartel.com and other blog sites, which we set up as an organic SEO tactic. Could this be causing the errors?
Thank you all for your advice!
-
You need to set up your site's canonicalization so that you don't have the duplicates. SEOmoz has a great article here: http://www.seomoz.org/learn-seo/canonicalization
Since you are hosted on an Apache server, you will need to modify the .htaccess file in your root directory to take care of these.
Make sure you also set the www or non-www preference in GWT (Google Webmaster Tools).
-
You are reading the data correctly. You should be redirecting the pages to cafecartel.com/...; this will eliminate the duplicate content issues. You might also be able to spot the issue in the sitemap: if the website was converted from another site, the old pages might still be attached.
Another option, less favorable for SEO but one that will eliminate the duplicate content, is figuring out where the duplicate pages are and adding robots noindex/nofollow tags to them.
This will help your SEO, not hurt it; right now you are being penalized for the duplicate content.
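If you go that route, the usual mechanism is a meta robots tag in the head of each duplicate page; note that it is "noindex" (not "nofollow") that actually keeps a page out of the index. A sketch:

```html
<!-- In the <head> of each duplicate page you want excluded from the index -->
<meta name="robots" content="noindex, follow" />
```

The "follow" part lets crawlers still pass link equity through the page even though the page itself is not indexed.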
Hope this helps....