Masses of duplicate content (5,168 issues found)
-
Hi Mozzers,
I have a site that has returned 5,168 issues with duplicate content.
Where would you start?
I started by sorting on Page Authority, highest first, from 28 all the way down to 1. I did want to use the rel=canonical tag, as the site already has many redirects.
The duplicates are caused by various category and cross-category pages and search results such as ....page/1?show=2&sort=rand.
I was thinking of going down the route of a URL rewrite and changing the search anyway. Is it worth redirecting everything, in terms of results versus the effort of fixing all 5,168 issues?
Thanks
sm
-
Hi Guys,
Thanks for the responses. I'm going to have a look at the issue again with your suggestions in mind, and I'll keep you posted. Thanks again.
-
Don't look at individual URLs - at the scale of 5K plus, look at your site architecture and what kind of variants you're creating. For example, if you know that the show= and sort= parameters are a possible issue, you could go to Google and enter something like:
site:example.com inurl:show=
(warning: it will return pages with the word "show" in the URL, like "example.com/show-times" - not usually an issue, but it can be on rare occasion).
That'll give you a sense of how many cases that one parameter is creating. Odds are, you'll find a couple that are causing 500+ of the 5K duplicates, so start with those.
Search pagination is very tricky - you could canonicalize to "View All" as Chris Hill said, you could NOINDEX pages 2+, or you could try Google's new (but very complicated) way:
http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
Problem is, that doesn't work on Bing and it's pretty easy to mess up.
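If you do go the rel=next/prev route, a minimal sketch of the markup on page 2 of a paginated series might look like this (the domain and paths are placeholders, not your actual URLs):

```html
<!-- Hypothetical <head> markup for page 2 of a paginated series (placeholder domain and paths) -->
<link rel="prev" href="http://www.example.com/category/page/1" />
<link rel="next" href="http://www.example.com/category/page/3" />
```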
The rel-canonical tag can scoop up sorts pretty well. You can also tell Google in Google Webmaster Tools what those parameters do, and whether to index them, but I've had mixed luck with that. If you're not having any serious problems, GWT is easy and worth a shot.
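As a rough sketch of those two simpler options, using placeholder URLs and assuming the clean category page is the version you want indexed: a sorted/filtered variant can point its canonical at the clean page, or pages 2+ can carry a noindex,follow robots meta tag.

```html
<!-- On a parameter variant like /category/page/1?show=2&sort=rand,
     point the canonical at the clean version (placeholder URL): -->
<link rel="canonical" href="http://www.example.com/category/page/1" />

<!-- Or, if you NOINDEX pages 2+ instead, keep "follow" so crawlers still pass through the links: -->
<meta name="robots" content="noindex, follow" />
```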
-
Have a look at your pagination too. If you've not got a 'show all' link it might be worth putting one in and making that the canonical. Should eliminate some of your duplicate content issues.
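A minimal sketch of that approach, assuming a hypothetical /category/view-all page that loads quickly enough to serve as the canonical target: each paginated URL would reference it in its head.

```html
<!-- Placed on /category/page/2, /category/page/3, etc. (placeholder paths) -->
<link rel="canonical" href="http://www.example.com/category/view-all" />
```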
-
The last time I came across such an issue, I mostly started with the 'easy' changes that reduced the number the most.
In that case, it was implementing a 301 to the www version of the site (cutting the errors in half) and putting a canonical on one search page.
This got the number down to the point where it was easier to make decisions like 'Is it worth making friendlier URLs?' and to discover more interesting places where duplicate content was being generated.
It's one of those things where I would always aim for zero where I can. It usually means that the URL or site structure can be improved significantly, or it's such an easy fix that it's hard to justify not doing it.
-
If it really is a URL issue then you should just be able to easily canonical the root pages and the rest should sort itself out. Start there and let the next spidering tell you where you stand.
Related Questions
-
Does duplicate content not concern Rand?
Hello all, I'm a new SEOer and I'm currently trying to navigate the layman's minefield that is trying to understand duplicate content issues as best I can. I'm working on a website at the moment where there's a duplicate content issue with blog archives/categories/tags etc. I was planning to beat this by implementing a noindex meta tag on those pages where there are duplicate content issues. Before I go ahead with this I thought: "Hey, these Moz guys seem to know what they're doing! What would Rand do?" Blogs on the website in question appear in full and in date order relating to the tag/category/what-have-you creating the duplicate content problem. Much like Rand's blog here at Moz - I thought I'd have a look at the source code to see how it was dealt with. My amateur eyes could find nothing to help answer this question: e.g. both the following URLs appear in SERPs (using site:moz.com and very targeted keywords, but they're there): https://moz.com/rand/does-making-a-website-mobile-friendly-have-a-universally-positive-impact-on-mobile-traffic/ https://moz.com/rand/category/moz/ Both pages have a rel="canonical" pointing to themselves. I can understand why he wouldn't be fussed about the category not ranking, but the blog? Is this not having a negative effect? I'm just a little confused as there are so many conflicting "best practice" tips out there - and now after digging around in the source code on Rand's blog I'm more confused than ever! Any help much appreciated, Thanks
Technical SEO | | sbridle1 -
150+ Pages of URL Parameters - Mass Duplicate Content Issue?
Hi, we run a large e-commerce site and while doing some checking through GWT we came across these URL parameters and are now wondering if we have a duplicate content issue. If so, what is the best way to fix them - is this a task for GWT or a rel=canonical task? Many of the URLs are driven by the filters on our category pages and are coming up like this: page04%3Fpage04%3Fpage04%3Fpage04%3F (see the image for more). Does anyone know if these links are duplicate content and, if so, how should we handle them? Richard
Technical SEO | | Richard-Kitmondo0 -
Database driven content producing false duplicate content errors
How do I stop the Moz crawler from creating false duplicate content errors? I have yet to submit my website to the Google crawler because I am waiting to fix all my site optimization issues. Example: contactus.aspx?propid=200, contactus.aspx?propid=201.... these are the same pages but with some old URL parameters stuck on them. How do I get Moz and Google not to consider these duplicates? I have looked at http://moz.com/learn/seo/duplicate-content with respect to rel="canonical" and I think I am just confused. Nick
Technical SEO | | nickcargill0 -
Duplicate content - font size and themes
Hi, How do we sort duplicate content issues with: http://www.ourwebsite.co.uk/ being the same as http://www.ourwebsite.co.uk/StyleType=SmallFont&StyleClass=FontSize or http://www.ourwebsite.co.uk/?StyleType=LargeFont&StyleClass=FontSize and http://www.ourwebsite.co.uk/legal_notices.aspx being the same as http://www.ourwebsite.co.uk/legal_notices.aspx?theme=default
Technical SEO | | Houses0 -
Duplicate Content Issue with
Hello fellow Moz'rs! I'll get straight to the point here - the issue, which is shown in the attached image, is that every URL ending in /blog/category/name has a duplicate page at /blog/category/name/?p=contactus. Also, it's worth noting that the ?p=contactus pages are not in the SERPs but were crawled by SEOmoz, and they are live and duplicate. We are using Pinnacle Cart. Is there a way to just stop the crawlers from hitting ?p=contactus? Thank you all and happy rankings, James
Technical SEO | | JamesPiper0 -
Canonical usage and duplicate content
Hi, we have a lot of pages about areas, e.g. "Mallorca" (domain.com/Spain/Mallorca), with tabbed pages like "excursion" (domain.com/spain/Mallorca/excursions) and "car rental" (domain.com/Spain/Mallorca/car-rental) etc. The text on, for example, the "car rental" page is very similar for Mallorca and Rhodos, and SEOmoz marks these as duplicate content. This happens on "car rental", "map", "weather" etc., which don't have a lot of text but have images and Google Maps embedded. Could I use rel=next/prev/canonical to gather the information from the tabbed pages? That could show Google that the Rhodos map page is related to Rhodos and not Mallorca. Is that all wrong, and/or is there a better way to do this? Thanks, Alsvik
Technical SEO | | alsvik0 -
How much to change to avoid duplicate content?
Working on a site for a dentist. They have a long list of services that they want us to flesh out with text. They provided a bullet list of services, we're trying to get 1 to 2 paragraphs of text for each. Obviously, we're not going to write this off the top of our heads. We're pulling text from other sources and trying to rework. The question is, how much rephrasing do we have to do to avoid a duplicate content penalty? Do we make sure there are changes per paragraph, sentence, or phrase? Thanks! Eric
Technical SEO | | ericmccarty0 -
Tags causing Duplicate page content?
I was looking through the 'Duplicate Page Content' and 'Too Many On-Page Links' errors and they all seem to be linked to the 'Tags' on my blog pages. Is this really a problem and, if so, how should I be using tags properly to get the best SEO rewards?
Technical SEO | | zapprabbit1