Re: Duplicate Content
-
Hello,
I am a Pro member, and in my campaign it says duplicate content for a few URLs, which I'm not able to understand: both URLs are the same, so why is it showing under duplicate content? Here are the URLs as an example.
-
Hi Justin,
Ow... I just had a quick look and if I am not mistaken SEOmoz might actually have gotten better at picking up the duplicates in my case. I just manually went through the list and I can't really fault the results... in hindsight.
Note to self: manually check the data next time before replying to any posts!
Cheers
Greg
-
That's interesting, I've not noticed anything on any of my campaigns.
Out of curiosity, does the list of duplicate URLs give any clues as to what may be causing the duplicate pages error? Could Moz have updated their crawler to be case sensitive, or something along those lines?
-
I am also seeing sudden spikes in Duplicate Content for several campaigns, and I keep seeing questions about this in the Q&A. This leads me to assume that there is an issue on SEOmoz's side...
There is no reason that I would suddenly see an increase without having made any changes.
Just checked the data manually, and in my case the Duplicate warnings are actually valid...
-
Agree with Tom's comments.
If you want to tidy up this error, go through the site and make the links consistent (i.e. change http://domain.com/page to http://domain.com/page/ throughout), and that should solve the problem.
This is particularly worthwhile if you share the reports with your customers, as customers hate seeing errors.
Hope that helps
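If you want a quick way to audit this before handing it to a developer, you can scan a page's HTML for internal links that are missing the trailing slash. Here's a minimal sketch in Python using only the standard library — the domain name and sample HTML are placeholders, and a real audit would also skip file-like paths such as .css or .jpg:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collects every href found in anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def links_missing_trailing_slash(html, domain="domain.com"):
    """Return internal links whose path lacks a trailing slash."""
    parser = LinkCollector()
    parser.feed(html)
    flagged = []
    for link in parser.links:
        parts = urlparse(link)
        internal = parts.netloc in ("", domain, "www." + domain)
        if internal and parts.path and not parts.path.endswith("/"):
            flagged.append(link)
    return flagged

page = '<a href="http://domain.com/page">x</a> <a href="/about/">y</a>'
print(links_missing_trailing_slash(page))  # → ['http://domain.com/page']
```

Run something like this over your templates and fix whatever it flags, and the crawler should stop seeing two versions of each page.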
-
I wouldn't worry about this error in the campaign.
It may be that the Moz crawler has tried to index the second URL (with the slash) because a site might be linking to it, or it could be in your sitemap; it's worth checking whether this is the case.
But I can see that you have a canonical tag set up on your website that will prevent Google from indexing the second URL and potentially seeing duplicate content. Indeed, after having searched for the second URL in Google it does not appear to be indexed, so there should be no duplicate content issue.
Check no one is linking to your site in that way, or if it's in your sitemap. If not, chalk it up to a Moz crawler error and don't worry about it!
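If you want to double-check the canonical setup yourself, you can pull the rel=canonical value out of a page's HTML and compare it to the URL the crawler fetched. A rough sketch in Python (the example URLs are made up):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Grabs the href of the first <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.canonical is None:
            d = dict(attrs)
            if d.get("rel") == "canonical":
                self.canonical = d.get("href")

def is_canonicalised(html, crawled_url):
    """True if the page declares a canonical URL different from the one
    crawled, i.e. the duplicate should collapse into the canonical."""
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical is not None and finder.canonical != crawled_url

page = '<head><link rel="canonical" href="http://example.com/page/"></head>'
print(is_canonicalised(page, "http://example.com/page"))   # → True
print(is_canonicalised(page, "http://example.com/page/"))  # → False
```

If the non-slash version returns True, Google will consolidate the pair and the warning is cosmetic.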
-
Related Questions
-
How do you create tracking URLs in Wordpress without creating duplicate pages?
I use WordPress as my CMS, but I want to track click activity to my RFQ page from different products and services on my site. The easiest way to do this is by adding a string to the end of a URL (e.g. http://www.netrepid.com/request-for-quote/?=colocation). The downside, of course, is that when Moz does its crawl diagnostics every week, I get notified that I have multiple pages with the same page title and duplicate content. I'm not a programming expert, but I'm pretty handy with WordPress and know a thing or two about 'href-fing' (yeah, that's a thing). Can someone who tracks click activity in WP with URL variables please enlighten me on how to do this without creating duplicate pages? Appreciate your expertise. Thanks!
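One common approach is to keep the tracking query string but have every variant declare the bare URL as its canonical, so crawlers collapse them into one page. A minimal sketch of how that canonical target would be computed (in practice a WordPress theme or SEO plugin would emit this as a rel=canonical tag in the page header):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_target(url):
    """Drop the query string so every tracking variant of a page
    maps back to one canonical URL."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))

print(canonical_target("http://www.netrepid.com/request-for-quote/?=colocation"))
# → http://www.netrepid.com/request-for-quote/
```

With the canonical in place, the tracking variants still work for analytics but stop counting as separate pages.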
-
What's my best strategy for Duplicate Content if only www pages are indexed?
The Moz crawl report for my site shows duplicate content with both www and non-www pages on the site. (Only the www pages are indexed by Google, however.) Do I still need to use a 301 redirect, even if the non-www pages are not indexed? Is rel=canonical less preferable, as usual?
Facts:
- the site is built using asp.net
- the homepage has multiple versions which use 'meta refresh' tags to point to 'default.asp'
- most links already point to www
Current strategy:
- set the preferred domain to 'www' in Google's Webmaster Tools
- set the WordPress blog (which sits in a /blog subdirectory) with rel="canonical" pointing to the www version
- ask the programmer to add 301 redirects from the non-www pages to the www pages
- ask the programmer to use 301 redirects instead of meta refresh tags and point all homepage versions to www.site.org
Does this strategy make the most sense? (Especially considering the non-indexed but existent non-www pages.) Thanks!!
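The redirect half of that strategy boils down to one rule: any request on the non-preferred host gets a 301 to the same path on the preferred host. A sketch of that rule in Python — the hostname is a placeholder, and on an asp.net site the programmer would implement the equivalent server-side (e.g. with IIS URL Rewrite):

```python
from urllib.parse import urlsplit, urlunsplit

def www_redirect(url, preferred_host="www.site.org"):
    """Return the 301 target for a URL on a non-preferred host,
    or None if the URL is already on the preferred host."""
    parts = urlsplit(url)
    if parts.netloc == preferred_host:
        return None
    return urlunsplit((parts.scheme, preferred_host, parts.path,
                       parts.query, parts.fragment))

print(www_redirect("http://site.org/blog/post?id=7"))
# → http://www.site.org/blog/post?id=7
print(www_redirect("http://www.site.org/blog/"))
# → None
```

Note the path and query string are preserved, so non-www deep links keep their destination and pass their link equity to the www version.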
-
Forward slash on URL on Duplicate Content Report
Hi, I'm new to this whole Moz thing, so I'm needing help from some kind people! I've just looked at my Duplicate Page Content report and there are loads of URLs in there which are the same but are differentiated only by a / added at the end of the URL, e.g.
http://youngepilepsy.org.uk/news-and-events/events
http://youngepilepsy.org.uk/news-and-events/events/
Is this a canonical issue? I can't understand why though, as these aren't at the root. However, when we add inline text links within the page HTML, there are some URLs with / and some without; could that be the reason? Thanks for your help! Jackie
-
Duplicate page report
We ran a CSV spreadsheet of our crawl diagnostics related to duplicate URLs after waiting 5 days with no response on how Rogerbot can be made to filter. My IT lead tells me the label on the spreadsheet is showing "duplicate URLs", and that is, literally, what the spreadsheet is showing: it thinks that a database ID number is the only valid part of a URL. To replicate, just filter the spreadsheet for any number that you see on the page. For example, filtering for 1793 gives us the following results:
http://truthbook.com/faq/dsp_viewFAQ.cfm?faqID=1793
http://truthbook.com/index.cfm?linkID=1793
http://truthbook.com/index.cfm?linkID=1793&pf=true
http://www.truthbook.com/blogs/dsp_viewBlogEntry.cfm?blogentryID=1793
http://www.truthbook.com/index.cfm?linkID=1793
There are a couple of problems with the above:
1. It gives the www result as well as the non-www result.
2. It sees the print version (&pf=true) as a duplicate, but these are blocked from Google via the noindex header tag.
3. It thinks that different sections of the website with the same ID number are the same thing (faq / blogs / pages).
In short, this particular report tells us nothing at all. I am trying to get a perspective from someone at SEOmoz to determine whether he is reading the result correctly or there is something he is missing. Please help. Jim
-
Blog Page URLs Showing Duplicate Content
On the SEOMoz Crawl Diagnostics, we are receiving information that we have duplicate page content for the blog page URLs. For example:
blog/page/33/
blog/page/34/
blog/page/35/
blog/page/36/
These are older posts in our blog. Moz is saying that these are duplicate content. What is the best way to fix the URL structure of these pages?
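For paginated archives like these, one approach from Google's pagination guidance of that era was to mark each archive page with rel="prev" and rel="next" link tags so the series is understood as one sequence rather than near-duplicates. A sketch of how those URLs would be computed for a /blog/page/N/ structure — the base URL is a placeholder, and the path pattern should be adjusted to your permalink settings:

```python
def pagination_links(base, page, last_page):
    """Build the rel=prev/next URLs for page N of a paginated
    /blog/page/N/ archive (page 1 lives at the blog root)."""
    links = {}
    if page > 1:
        prev_page = page - 1
        links["prev"] = base + "/" if prev_page == 1 else f"{base}/page/{prev_page}/"
    if page < last_page:
        links["next"] = f"{base}/page/{page + 1}/"
    return links

print(pagination_links("http://example.com/blog", 34, 36))
# → {'prev': 'http://example.com/blog/page/33/', 'next': 'http://example.com/blog/page/35/'}
```

Giving each archive page a distinct, numbered title ("Blog – page 34") also stops the duplicate-title warnings.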
-
Duplicate Content and Titles in SEOMoz reports
I've had to rename some of the pages on my site and also move them to different locations. I placed a rel="canonical" on each old page pointing to the new one. The reports on my PRO Dashboard are telling me that I have Duplicate Content and Page Title errors. Do the SEOmoz automated reports take the rel="canonical" link into consideration, or do I need to remove these pages and do a 301 redirect from the old page to the new one?
-
Should I worry about duplicate content errors caused by trailing slashes?
Frequently we get red-flagged for duplicate content in the MozPro Crawl Diagnostics for URLs with and without a trailing slash at the end. For example, www.example.com/ gets flagged as being a duplicate of www.example.com. I assume that we could rel=canonical this if needed, but our assumption has been that Google is clever enough not to count this as a genuine crawl error. Can anyone confirm or deny that? Thanks.
-
Crawl Diagnostics bringing 20k+ errors as duplicate content due to session ids
I signed up to the trial version of SEOmoz today just to check it out, as I have decided I'm going to do my own SEO rather than outsource it (been let down a few times!). So far I like the look of things and have a feeling I am going to learn a lot and get results. However, I have just stumbled on something. After SEOmoz does its crawl diagnostics run on the site (www.deviltronics.com) it is showing 20,000+ errors. From what I can see, almost 99% of these are errors for duplicate content due to session IDs, so I am not sure what to do! I have done a "site:www.deviltronics.com" search on Google and this certainly doesn't pick up the session IDs/duplicate content. So could this just be an issue with the SEOmoz bot? If so, how can I get SEOmoz to ignore these on the crawl? Can I get my developer to add some code somewhere? Help will be much appreciated. Asif
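The usual fix for session-ID duplicates is to make every session variant of a URL collapse to one canonical URL, via a rel=canonical tag or a parameter-handling rule. A sketch of that normalisation in Python — the parameter names here are assumptions, so substitute whatever your cart software actually appends:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical session parameter names; check your own URLs for the real ones.
SESSION_PARAMS = {"sessionid", "sid", "phpsessid"}

def strip_session_params(url):
    """Remove session-id query parameters so every session variant
    of a page collapses to a single URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in SESSION_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

print(strip_session_params("http://www.deviltronics.com/product?id=9&sid=abc123"))
# → http://www.deviltronics.com/product?id=9
```

If the developer emits the stripped URL as the page's rel=canonical, crawlers should stop counting each session variant as a separate page.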