Error msg 'Duplicate Page Content', how to fix?
-
Hey guys,
I'm new to SEO and I'm seeing the error message 'Duplicate Page Content'. I know what it means, but my question is: how do you delete the old pages that have the duplicate content?
I used to run my website on Joomla! but have since moved to Shopify. I can see that the duplicated content is still from the old Joomla! site, and I would like to learn how to delete this content (or the best practice in this situation).
Any advice would be very helpful!
Cheers,
Peter
-
I don't think the problem is https vs. http here.
Maybe you could share one of the errors with me?
Fredrik
-
Hi Fredrik,
Thanks for the follow-up! I got the error message from my SEOmoz dashboard. I only set it up a couple of days ago and the first crawl has just completed. I can also see a similar error in Google Webmaster Tools. The old pages are on an older SSL-secured site, but the domain is still the same.
For example, my old site was https://www.kellanapparel.com. I no longer use the SSL certificate, and the website is hosted on a new platform (Shopify) but with the same domain minus the 's' (so now it's http://www.kellanapparel.com). Maybe the old SSL certificate is still valid, and because the old content is associated with it, it's still triggering the error? Would this make a difference?
Cheers,
Peter
-
Hi Peter,
When you say you got the message 'Duplicate Page Content', is this from Google Webmaster Tools?
You mention an old page. Is that still active under a different domain?
To avoid the duplicate content error you need to make sure that there is only ONE URL for each piece of content. There are many ways of achieving this; here are a few:
Canonical tags
Try to use canonical tags wherever you can. These can be implemented on both versions; the original version should have a tag pointing to itself. More info on exactly how to do this here:
http://www.seomoz.org/learn-seo/canonicalization
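As a quick sketch, a canonical tag is a single line in the <head> of the page. Assuming the http version is the one Peter wants to keep (I'm using his domain from this thread purely as an illustration), both the http and https copies of the homepage would carry the same tag:

```html
<!-- Placed in the <head> of BOTH the http and https versions of the page. -->
<!-- Both point to the one preferred (canonical) URL, so search engines -->
<!-- consolidate the duplicates into that single address. -->
<link rel="canonical" href="http://www.kellanapparel.com/" />
```

Each page on the site needs its own canonical tag pointing to that page's preferred URL, not the homepage.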
Remove old site from index
Make sure the old system is no longer indexed. Do this using either robots.txt or a noindex, nofollow meta tag.
More on robots.txt here:
http://www.seomoz.org/learn-seo/robotstxt
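For illustration, a minimal robots.txt blocking crawlers from an old section might look like this (the /old-joomla-pages/ path is a made-up example, not Peter's actual structure):

```
# robots.txt — lives at the root of the domain
# Applies to all crawlers
User-agent: *
# Stop crawling of the leftover Joomla! section
Disallow: /old-joomla-pages/
```

The alternative mentioned above is a `<meta name="robots" content="noindex, nofollow" />` tag in the <head> of each old page. One caveat worth knowing: for the meta tag to work, the crawler has to be able to reach the page, so don't block a URL in robots.txt and rely on its noindex tag at the same time.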
301 redirects
Try to use 301 redirects pointing every old page to the corresponding new page. You can also use this to make non-www pages always redirect to www, or the other way around. If you have old pages on a different system, make sure these are no longer online; if they are still active on a different domain, set up 301 redirects to the new pages.
More on this here:
http://www.seomoz.org/learn-seo/redirection
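On an Apache server the www/non-www redirect can be done in the .htaccess file. Note that Shopify doesn't expose .htaccess (it has its own URL Redirects tool in the admin), so the snippet below is just a generic sketch of forcing one canonical host, again using Peter's domain as an example:

```apache
# .htaccess — send all non-www requests to the www version with a 301
RewriteEngine On
# Match requests whose host is the bare domain (no www)
RewriteCond %{HTTP_HOST} ^kellanapparel\.com$ [NC]
# Permanently redirect to the same path on the www host
RewriteRule ^(.*)$ http://www.kellanapparel.com/$1 [R=301,L]
```

The R=301 flag is what makes it a permanent redirect, which tells search engines to transfer the old URL's ranking signals to the new one.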
You can find more info on how to avoid this error on this post:
http://www.seomoz.org/learn-seo/duplicate-content
Good luck and hope these tips can help you get started.
Fredrik