How to delete/redirect duplicate content
-
Hello,
Our site thewealthymind(dot)com has a lot of duplicate content. How do you clean up duplicate content when there's a lot of it?
The owners redid the site several times and didn't update the URLs.
Thank you.
-
Sanket,
Thanks for the good tools. I'll use them. Actually, the duplicate content is all on our own server. We upgraded our site a couple of times and didn't redirect old pages to new.
I'm using Google and the command site:thewealthymind(dot)com to find duplicate content. Will that find it all?
-
Hi BobGW,
If you want to check whether your page content is duplicated elsewhere, http://www.copyscape.com/ is the best tool I have used for finding copied content from my website. I hope this helps clear up your confusion. Other tools I would suggest are:
Duplichecker
Plagiarisma
Plagium
-
The canonical issue will be fixed. Thank you.
I'm still not clear how to find the duplicate content.
Thanks again.
-
Hello,
Your site opens both with and without www, and also via index.php, so first of all you should set up 301 redirects in your .htaccess file. This is a major problem and you need to solve it. For more information about duplicate content, read this URL; it is a good source for getting the right idea about duplicate content:
http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
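As a rough illustration of that advice, the rules below are a minimal sketch of what the .htaccess could look like, assuming an Apache server with mod_rewrite enabled. The domain is the one from the question, and whether you canonicalize to www or non-www is your choice (these rules pick www):

```apache
# Minimal sketch - assumes Apache with mod_rewrite enabled
RewriteEngine On

# 301 the bare domain to the www host so only one hostname gets indexed
RewriteCond %{HTTP_HOST} ^thewealthymind\.com$ [NC]
RewriteRule ^(.*)$ http://www.thewealthymind.com/$1 [R=301,L]

# Collapse direct requests for /index.php onto the root URL
RewriteCond %{THE_REQUEST} \s/index\.php[\s?] [NC]
RewriteRule ^index\.php$ http://www.thewealthymind.com/ [R=301,L]
```

Test after deploying that both http://thewealthymind.com/ and http://www.thewealthymind.com/index.php return a 301 to http://www.thewealthymind.com/ (e.g. with `curl -I`).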
-
Since your site uses Joomla as its CMS, this extension might be useful. I suggest reading through the comments/reviews to determine whether it will work for you.
Related Questions
-
I am having a duplicate title / pagination issue. Need help!
In WordPress I added %%page%%, saved changes, and ran a test in Moz and Screaming Frog; neither shows any duplicates. But when I open the site in a new browser the duplicate titles are still there. No access to the PHP file due to the theme the client chose, either. Any suggestions, anyone?
Moz Pro | Strateguyz -
Why Only Our Homepage Can Be Crawled Showing a Redirect Message as the Meta Title
Hello Everyone, So recently when we checked our domain using a Moz Crawl Test and Screaming Frog, only the homepage comes up, and the meta title says “You are being redirected to…”. We have several pages that used to come up, and when submitting them to GSC no issues appear. The robots.txt file looks fine as well. We thought this might be server-related, but it's a little outside our field of expertise, so we wanted to find out if anyone has experience with this (possible causes, how to check, etc.) or any suggestions. Any extra insight would be really appreciated. Please let us know if there is anything we could provide further details on that might help. Looking forward to hearing from all of you! Thanks in advance. Best,
Moz Pro | Ben-R -
What are the restrictions/limitations to running SEO/Adwords in these countries?
What are the limitations or restrictions to running SEO/Adwords campaigns in countries such as China, South Korea, Japan, Brazil, Portugal, Spain, and Mexico?
Moz Pro | ThomasCenterInc -
I did a redirect and now I'm getting duplication errors.
I was told by SEOmoz to set up a redirect so that our website wouldn't be crawled both with and without the www in front of the address. I did, and now I'm getting duplicate page and title errors because the crawler is seeing www.oursitename.com and its underpages as well as oursitename.com and its underpages, giving me duplicate page content errors and duplicate page title errors. Makes sense, but how do I make it stop? Anyone else have this problem?
Moz Pro | THMCC -
Why does SEOMoz think I have duplicate content?
The SEOmoz crawl report shows me a large number of duplicate content pages. Our site is built on a CMS that creates the link we want it to be, but also automatically creates its own longer version of the link (e.g. http://www.federalnational.com/About/tabid/82/Default.aspx and http://www.federalnational.com/about.aspx). We set the site up so that there are automatic redirects. Google Webmaster Tools does not see these pages as duplicates. Why does SEOmoz consider them duplicate content? Is there a way to weed this out so that the crawl report becomes more meaningful? Thanks!
Moz Pro | jsillay -
How many of my linked pages should I redirect (301s)?
I'm moving my store to a new site that will have a much friendlier but completely different URL structure. I used Open Site Explorer to find inbound links to 513 pages and have done about half so far. The remaining pages have one link each, at a page authority of 27 or less, but there are still 250+ of them. I have to manually view each old page, search for the product on the new site, and enter the redirect, as there is no way to translate old URLs to new ones automatically. How important is it for rankings to redirect the remaining 250 or so pages?
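For the pages that do get redirected, a one-to-one mapping like this is the usual shape on Apache, using mod_alias's `Redirect` directive; the paths below are hypothetical, purely for illustration, since the real old and new URLs aren't given:

```apache
# Hypothetical old-to-new mappings - one line per moved page
Redirect 301 /store/product_info.php/blue-widget-p-42 /products/blue-widget
Redirect 301 /store/product_info.php/red-widget-p-43  /products/red-widget
```

Note that `Redirect` matches URL paths only, not query strings; URLs that differ only by query parameters need mod_rewrite rules instead.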
Moz Pro | agirlandamac -
How to remove duplicate content due to URL parameters from SEOmoz Crawl Diagnostics
Hello all, I'm currently getting back over 8,000 crawl errors for duplicate content pages. It's a Joomla site with VirtueMart, and 95% of the errors are for parameters in the URL that the customer can use to filter products. Google is handling them fine under Webmaster Tools parameter settings, but it's pretty hard to find the other duplicate content issues in SEOmoz with all of these in the way. All of the problem parameters start with ?product_type_. Should I try to use robots.txt to stop them from being crawled, and if so, what would be the best way to include them in robots.txt? Any help greatly appreciated.
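One possible shape for such robots.txt rules, assuming the goal is only to keep these filter URLs out of the Moz crawl while Google keeps handling them via its parameter settings, is a wildcard Disallow scoped to Moz's crawler (rogerbot); check Moz's current documentation to confirm wildcard support before relying on this:

```text
# Keep faceted-filter URLs out of the Moz crawl only;
# other crawlers are unaffected by this user-agent block
User-agent: rogerbot
Disallow: /*?product_type_
```

Dropping the `User-agent: rogerbot` line in favor of `User-agent: *` would block these URLs for all compliant crawlers, including Googlebot.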
Moz Pro | dfeg -
SEOmoz Bot indexing JSON as content
Hello, We have a bunch of pages that contain local JSON we use to display a slideshow. This JSON has a bunch of `<a>` links in it. For some reason, the links inside the JSON are being indexed and recognized by the SEOmoz bot, showing up as legit links for the page. One example page where this is happening: http://www.trendhunter.com/trends/a2591-simplifies-product-logos . Searching that page for the string '<a' yields 1100+ results (all of which are recognized as links for that page in SEOmoz); however, ~980 of these are in JSON code and are not actual links on the page. This leads to a lot of invalid links for our site, and a super inflated on-page link count for the page. Is this a bug in the SEOmoz bot? And if not, does Google work the same way?
Moz Pro | trendhunter-159837