How to delete/redirect duplicate content
-
Hello,
Our site thewealthymind(dot)com has a lot of duplicate content. How do you clean up duplicate content when there's a lot of it?
The owners redid the site several times and didn't update the URLs.
Thank you.
-
Sanket,
Thanks for the good tools. I'll use them. Actually, the duplicate content is all on our own server. We upgraded our site a couple of times and didn't redirect old pages to new.
I'm using Google and the command site:thewealthymind(dot)com to find duplicate content. Will that find it all?
-
Hi BobGW,
If you want to check whether your page content is duplicated anywhere, http://www.copyscape.com/ is the best tool; I have used it to find copied content from my website. I hope this helps clear up your confusion. Other tools I would suggest are:
Duplichecker
Plagiarisma
Plagium
-
The canonical issue will be fixed. Thank you.
I'm still not clear how to find the duplicate content.
Thanks again.
-
Hello,
Your site opens both with and without www, and also at index.php, so first of all you should set up 301 redirects in your .htaccess file. It is a major problem and you have to solve it. For more information about duplicate content, read this URL; it is the best source for getting the right idea about it:
http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
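For what it's worth, here is a minimal sketch of the kind of .htaccess rules being described, assuming an Apache server with mod_rewrite enabled and that the www version of the hostname from the question is the one you want to keep (adjust everything to your own setup before using it):

```apache
# Hypothetical .htaccess sketch -- assumes Apache with mod_rewrite and that
# www.thewealthymind.com is the preferred (canonical) hostname.
RewriteEngine On

# 301 redirect the bare (non-www) hostname to the www version
RewriteCond %{HTTP_HOST} ^thewealthymind\.com$ [NC]
RewriteRule ^(.*)$ http://www.thewealthymind.com/$1 [R=301,L]

# 301 redirect direct requests for /index.php onto the root URL,
# so only one homepage URL resolves
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.php [NC]
RewriteRule ^index\.php$ http://www.thewealthymind.com/ [R=301,L]
```

The %{THE_REQUEST} condition matches the original browser request line, so the second rule won't loop if the CMS internally rewrites clean URLs through index.php.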
-
Since your site uses Joomla as its CMS, this extension might be useful. I suggest reading through the comments/reviews to determine whether it will work for you.
Related Questions
-
Redirection
Hi, I used to have an online shop at https://shop.domain.co.uk. I have since built a new website and want to divert all traffic to https://domain.co.uk/shoppage. The old shop was on a subdomain, but the new site has a shop on the main domain. In Moz I am getting a lot of errors for missing descriptions and URLs that are too long. For example, one of the URLs looks like this: https://shop.domain.co.uk/product-category/great-merchandise/?product_order=desc. I would like to redirect them all to the new shop page. The missing descriptions are all similar. Is there a way I can redirect all these issues without doing them manually? Thanks
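A hedged sketch of how this could be done in bulk, assuming the old shop subdomain is served by Apache 2.4 with mod_rewrite and that every old URL should land on the single new shop page (the domain names here mirror the placeholders in the question):

```apache
# Hypothetical .htaccess for the shop.domain.co.uk vhost. Sends every old
# shop URL, including ones with query strings such as ?product_order=desc,
# to the new shop page with a permanent (301) redirect.
# The QSD flag (query string discard) requires Apache 2.4+.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^shop\.domain\.co\.uk$ [NC]
RewriteRule ^ https://domain.co.uk/shoppage [R=301,L,QSD]
```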
Moz Pro | | Paul_YAS0 -
Duplicate Content/Missing Meta Description | Pages DO NOT EXIST!
Hello all, For the last few months, Moz has been showing us that our site has roughly 2,000 duplicate content errors. Pages that were actually duplicate content, I took care of accordingly using best practices (301 redirects, canonicalization, etc.). Still remaining after these fixes were errors showing for pages that we have never created. Our homepage is www.primepay.com. An example of the pages being shown as duplicate content is http://primepay.com/blog/%5BLink%20to%20-%20http:/www.primepay.com/en/payrollservices/payroll/payroll/payroll/online-payroll with a referring page of http://primepay.com/blog/%5BLink%20to%20-%20http:/www.primepay.com/en/payrollservices/payroll/payroll/online-payroll. Some of these are even now showing up as 403 and 404 errors. The only real pages on our site within that URL strand are primepay.com/payroll and primepay.com/payroll/online-payroll, so I am not sure where Moz is getting these pages from. Another issue we are having in relation to duplicate content is that Moz is showing old campaign URLs tacked onto our blog page, i.e. http://primepay.com/blog?title=&page=2&utm_source=blog&utm_medium=blogCTA&utm_campaign=IRSblogpost&qt-blog_tabs=1. As of this morning, our duplicate content went from 2,000 to 18,000. I exported all of our crawl diagnostics data and looked to see what the referring pages were, and even they are not pages that we have created. When you click on these links, they take you to a random point in time on our blog's homepage, some dating back to 2010. I checked our crawl stats in both Google's and Bing's webmaster tools, and there are no duplicate content or 400-level errors being reported from their crawls. My team is truly at a loss with trying to resolve this issue, and any help with this matter would be greatly appreciated.
Moz Pro | | PrimePay0 -
How can I deal with tag page duplicate issues
The Moz crawler reported some duplicate content issues. Many of them have to do with tags. Each tag has a link, and as some articles are under several tags, these come up as duplicate content. I read Dr. Peter's piece on canonical tags, but it's not clear to me if any of those are the solution. Perhaps the solution lies somewhere else? Maybe I need to block robots from these URLs (but that seems counter-SEO-productive). Thanks
Moz Pro | | IamKovacs
Kovacs0 -
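One common way to handle tag archives like the ones in the question above, sketched below as a hypothetical template snippet (it is not the only valid answer; depending on the site, the canonical-tag approach may fit better), is a robots meta tag that keeps tag pages out of the index while still letting crawlers follow their links:

```html
<!-- Hypothetical snippet for the tag-archive page template only,
     not for article pages. "noindex" keeps the near-duplicate tag page
     out of the index; "follow" still lets crawlers reach, and pass link
     value to, the articles the tag page lists. -->
<head>
  <meta name="robots" content="noindex, follow">
</head>
```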
Has SEOMoz considered making it possible to follow a specific Q & A thread via Twitter and/or Email?
I think it would be great if there was a way to follow a particular Q & A thread without having to post a comment. There are so many great topics and discussions in here and sometimes I lose track of some that I'd like to continue following as people post, without having to post in the thread myself. Have you guys (Mozzers) ever considered adding a "Follow this thread on Twitter" or "Follow this thread by email" function? Or is there already a way to do this (without posting), that I just haven't found?
Moz Pro | | danatanseo1 -
Crawl Diagnostics Warnings - Duplicate Content
Hi All, I am getting a lot of warnings about duplicate page content. The pages are normally 'tag' pages. I have some news stories or blog posts tagged with multiple 'tags'. Should I ask Google not to index the tag pages? Does it really affect my site? Thanks
Moz Pro | | skehoe0 -
What is the best method to solve duplicate page content?
The issue I am having is that an overwhelmingly large number of pages on cafecartel.com show duplicate page content. But when I check the errors on SEOmoz, it shows that the duplicate content is from www.cafecartel.com, not cafecartel.com. So first of all, does this mean that there are two sites? And is this a problem I can fix easily (i.e., by redirecting the URL and deleting the extra pages)? Is this going to make all other SEO useless, given that it shows nearly every page has duplicate page content? Or am I just completely reading the data wrong?
Moz Pro | | MarkP_0 -
I need a tool/tools to extract keywords from, say, 50 sites in one niche and then check the rank for those sites
This is for telemarketing of SEO services. I want to have some insight into an industry before I call them. Could this be done with the AdWords Keyword Tool API and then exported to Excel? It would also be nice to have data on backlinks, say from SEOmoz Open Site Explorer. It's just that the research you do before you even call a potential client is so time consuming, and you can never really check how they are ranking for their main keywords manually. We are trying to automate as much of this initial research as possible. Any ideas? Thanks
Moz Pro | | duncan2740 -
Duplicate page error from SEOmoz
SEOmoz's Crawl Diagnostics is complaining about a duplicate page error. I'm trying to use a rel=canonical but maybe I'm not doing it right. This page is the original, definitive version of the content: https://www.borntosell.com/covered-call-newsletter/sent-2011-10-01 This page is an alias that points to it (each month the alias is changed to point to the then current issue): https://www.borntosell.com/covered-call-newsletter/latest-issue The alias page above contains this tag (which is also updated each month when a new issue comes out) in the section: Is that not correct? Is the https (vs http) messing something up? Thanks!
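For comparison, a well-formed canonical tag on the alias page would look something like the sketch below, built from the URLs in the question (it belongs in the head section of the /latest-issue page, and the href should use the same https scheme as the definitive URL):

```html
<!-- Sketch: placed on
     https://www.borntosell.com/covered-call-newsletter/latest-issue
     to point search engines at the definitive monthly issue. -->
<link rel="canonical"
      href="https://www.borntosell.com/covered-call-newsletter/sent-2011-10-01">
```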
Moz Pro | | scanlin0