Why does Crawl Diagnostics report this as duplicate content?
-
Hi guys,
we've been working through a duplicate content problem on our site over the past few weeks. We've gradually implemented rel=canonical tags across various parts of our ecommerce store and have been watching the effects by tracking changes in both SEOmoz and Webmaster Tools.
Although our duplicate content errors are definitely decreasing, I can't help but wonder why some URLs are still being flagged for duplicate content by the SEOmoz crawler.
Here's an example, taken directly from our Crawl Diagnostics Report:
URL with 4 duplicate content errors:
/safety-lights.html

Duplicate content URLs:
/safety-lights.html?cat=78&price=-100
/safety-lights.html?cat=78&dir=desc&order=position
/safety-lights.html?cat=78
/safety-lights.html?manufacturer=514

What I don't understand is that all of the URLs with URL parameters have a rel=canonical tag pointing to the 'real' URL, /safety-lights.html (see the snippet below).

So why is the SEOmoz crawler still flagging this as duplicate content?
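For reference, each of those parameterized URLs carries a canonical tag like this in its head (just a sketch; example.com stands in for our actual domain):

<!-- In the <head> of /safety-lights.html?cat=78&price=-100 and the other filtered variants -->
<link rel="canonical" href="https://www.example.com/safety-lights.html" />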
-
So glad I could help get this figured out! Sometimes it just takes another set of eyes.
-Chiaryn
-
Good catch Chiaryn! Totally didn't see this.
Essentially, two URLs end up displaying the same content: one is the URL that Google picks up from our XML sitemap, and the other is a dynamic URL with filtering parameters built on the category URL one level higher.
The canonical tags were set up to point to each page's base category URL, which in this case differs between the two pages even though the content is the same.
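In other words, both variants need to reference the same canonical URL. Something like this on every filtered variant should resolve it (just a sketch, assuming the sitemap URL is the version we want indexed and using example.com in place of our real domain):

<!-- On each filtered/parameterized variant of the safety lights page -->
<link rel="canonical" href="https://www.example.com/accessories/lights/safety-lights.html" />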
We will address this.
Thanks!
-
Hi there,
I looked into your campaign and it seems that this is happening because of where your canonical tags are pointing. These pages are considered duplicates because their canonical tags point to different URLs. For example, accessories/lights.html?cat=78&price=-100 is considered a duplicate of accessories/lights/safety-lights.html?manufacturer=514 because the canonical tag for the first page is accessories/lights.html while the canonical for the second URL is accessories/lights/safety-lights.html.
Since the canonical tags point to different pages, our system assumes that accessories/lights.html and accessories/lights/safety-lights.html are likely to be duplicates of each other as well.
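To illustrate, the head of each page contains something like this (a sketch; example.com is just a placeholder for your domain):

<!-- In the <head> of accessories/lights.html?cat=78&price=-100 -->
<link rel="canonical" href="https://www.example.com/accessories/lights.html" />

<!-- In the <head> of accessories/lights/safety-lights.html?manufacturer=514 -->
<link rel="canonical" href="https://www.example.com/accessories/lights/safety-lights.html" />

<!-- The two canonical targets differ, so the pages are grouped as duplicates of each other. -->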
Here is how our system interprets duplicate content vs. rel canonical:
Assuming A, B, C, and D are all duplicates,
- If A references B as the canonical, then they are not considered duplicates
- If A and B both reference C as canonical, A and B are not considered duplicates of each other
- If A references C as the canonical and B has no canonical, A and B are considered duplicates
- If A references C as the canonical and B references D as the canonical, A and B are considered duplicates
The examples you've provided fall into the fourth case listed above.
I hope this clears things up. Please let me know if you have any other questions.
-Chiaryn
-
Does seem a little odd. Could you post the domain so we can have a more detailed look?
Thanks
Iain - Reload Media