'Duplicate Page Content' for dissimilar pages
-
I'm using Moz's Crawl Diagnostics to try to clean up some SEO priorities for our website (http://www.craftcompany.co.uk). However, virtually all of the pages being categorised as duplicate content are not the same, or even similar.
For instance, these three pages have been flagged as duplicates:
- http://www.craftcompany.co.uk/pme-rose-leaf-veined-plunger.html
- http://www.craftcompany.co.uk/double-faced-satin-ribbon-black-25mm-wide.html
- http://www.craftcompany.co.uk/double-faced-satin-maroon-10mm-wide-25mt.html
Can anyone give me an insight into why this is?
Many Thanks!
-
Hi there
It may have to do with your canonical tags. For instance...
http://www.craftcompany.co.uk/pme-rose-leaf-veined-plunger.html has a canonical tag pointing to itself (http://www.craftcompany.co.uk/pme-rose-leaf-veined-plunger.html).
But if you load the non-www version, http://craftcompany.co.uk/pme-rose-leaf-veined-plunger.html, its canonical tag points to http://www.craftcompany.co.uk/pme-rose-leaf-veined-plunger.html?SID=1c501bb25ab64ab687f30b714dee9969.
In other words, your non-www URLs seem to have a SID (session ID) parameter attached to their canonical URLs.
I would check out Google's resource on duplicate content; it links to guidance on cleaning these up. I'm not certain that's the reason, but I would definitely clean those session parameters out of your canonical tags.
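If it helps, here is a rough Python sketch of how you could audit this yourself (it assumes the `requests` and `beautifulsoup4` packages are installed; the product paths are just the three examples from your question). It fetches the www and non-www versions of each URL and prints the canonical each one declares, so you can see exactly where the SID parameter creeps in. Treat it as a quick diagnostic, not a fix.

```python
import requests
from bs4 import BeautifulSoup

# Example product paths from the question; swap in any pages you want to audit.
PATHS = [
    "/pme-rose-leaf-veined-plunger.html",
    "/double-faced-satin-ribbon-black-25mm-wide.html",
    "/double-faced-satin-maroon-10mm-wide-25mt.html",
]

HOSTS = ["http://www.craftcompany.co.uk", "http://craftcompany.co.uk"]

def get_canonical(url):
    """Fetch a page and return the href of its rel=canonical link, if any."""
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    link = soup.find("link", rel="canonical")
    return link["href"] if link and link.has_attr("href") else None

for path in PATHS:
    for host in HOSTS:
        url = host + path
        print(f"{url}\n  -> canonical: {get_canonical(url)}\n")
```

If the non-www pages consistently report SID-tagged canonicals, that is where I would start cleaning up.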
Hope this helps! Good luck!
Related Questions
-
Large site with content silos - best practice for deep indexing silo content
Thanks in advance for any advice, links, or discussion. This honestly might be a scenario where we need to do some A/B testing.

We have a massive (5 million page) content silo that is the basis for our long-tail search strategy. Organic search traffic hits our individual "product" pages, and we've divided our silo with a parent category and then secondarily with a field (so we can cross-link to other content silos using the same parent/field categorizations). We don't anticipate or expect top-level category pages to receive organic traffic - most people are searching for the individual/specific product (long tail). We're not trying to rank or get traffic for searches of all products in "category X"; others are competing and spending a lot in that area (head). The purpose of the site structure/taxonomy is to make it easier for bots/crawlers to get deeper into our content silos. We've built the pages for humans, but included the link structure/taxonomy to assist crawlers.

So here's my question on best practices: how to handle categories with 1,000+ pages of pagination. In our most popular product categories, there might be hundreds of thousands of products in one category. My top-level hub page for a category looks like www.mysite/categoryA, and the page shows 50 products and then pagination from 1 to 1,000+. Currently we're using rel=next for pagination, and for pages like www.mysite/categoryA?page=6 we make the page reference itself as canonical (not the first/top page, www.mysite/categoryA). Our goal is deep crawl/indexation of our silo.

I use Screaming Frog and the SEOmoz campaign crawl to sample (the site takes a week+ to fully crawl), and with each of these tools it "looks" like crawlers have gotten a bit "bogged down" in large categories with tons of pagination. For example, rather than crawl multiple categories or fields to reach multiple product pages, some bots will hit all 1,000 (rel=next) pages of a single category. I don't want to waste crawl budget going through 1,000 pages of a single category versus discovering/crawling more categories. I can't seem to find a consensus on how to approach the issue. I can't have a page that lists "all" - there's just too much, so we're going to need pagination. I'm not worried about category pagination pages cannibalizing traffic, as I don't expect any (should I make pages 2-1,000 noindex and canonically reference the main/first page in the category?). Should I worry about crawlers going deep into pagination within one category versus getting to more top-level categories? Thanks!
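As an illustration only, here is a minimal Python sketch of the pagination markup described above: each paginated category page gets a self-referencing canonical plus rel="prev"/rel="next" links (the question mentions rel=next; rel=prev is shown alongside it for completeness). The helper name and the full https://www.mysite.com domain are made up for the example.

```python
from urllib.parse import urlencode

def build_pagination_head(base_url, page, last_page):
    """Return the <link> elements for one paginated category page:
    a self-referencing canonical plus rel=prev/next pointers."""
    def page_url(n):
        # Page 1 is the bare category URL; deeper pages carry ?page=N.
        return base_url if n == 1 else f"{base_url}?{urlencode({'page': n})}"

    links = [f'<link rel="canonical" href="{page_url(page)}">']
    if page > 1:
        links.append(f'<link rel="prev" href="{page_url(page - 1)}">')
    if page < last_page:
        links.append(f'<link rel="next" href="{page_url(page + 1)}">')
    return "\n".join(links)

# Example: page 6 of a category with 1,000+ pages, as in the question.
print(build_pagination_head("https://www.mysite.com/categoryA", 6, 1000))
```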
-
Should I set blog category/tag pages as "noindex"? If so, how do I prevent "meta noindex" Moz crawl errors for those pages?
From what I can tell, SEO experts recommend setting blog category and tag pages (e.g. "http://site.com/blog/tag/some-product") as "noindex, follow" in order to keep the overall quality of indexable pages high. However, I just received a slew of critical crawl warnings from Moz for having these pages set to "noindex." Should the pages be indexed? If not, why am I receiving critical crawl warnings from Moz, and how do I prevent this?
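As a side note, a quick way to see which tag or category pages actually carry a robots meta tag is a small script along these lines (a sketch assuming the `requests` and `beautifulsoup4` packages; the URLs below are just the placeholder ones from the question):

```python
import requests
from bs4 import BeautifulSoup

# Placeholder tag/category URLs; replace with your own blog's pages.
URLS = [
    "http://site.com/blog/tag/some-product",
    "http://site.com/blog/category/some-category",
]

for url in URLS:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    meta = soup.find("meta", attrs={"name": "robots"})
    directives = meta["content"] if meta and meta.has_attr("content") else "(none)"
    print(f"{url}: robots meta = {directives}")
```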
-
Crawl Diagnostics says a page is linking but I can't find the link on the page.
Hi, I have just got my first Crawl Diagnostics report and I have a question. It says that this page: http://goo.gl/8py9wj links to http://goo.gl/Uc7qKq, which is a 404. I can't recognise the 404 URL anywhere on the page, and when searching the code I can't find the %7Blink%7D fragment from the URL that is causing the problem. I hope you can help me understand what triggers it 🙂
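One clue: %7Blink%7D is simply the URL-encoded form of {link}, which usually points to a template placeholder that was never replaced somewhere in the page or its scripts. A quick check with Python's standard library shows the decoding:

```python
from urllib.parse import unquote

# %7B and %7D are the percent-encoded curly braces, so the broken URL most
# likely comes from a {link} template variable that was never filled in.
print(unquote("%7Blink%7D"))  # prints: {link}
```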
-
Duplicated content generated by keywords
Hello! I am fairly new to SEO and Moz, so I really need your help to understand why some of my keywords generate duplicate content. In my blog posts I use various SEO keywords, and my Moz crawl analysis lists these keywords as duplicates: two or three different keywords point to the same articles and are considered duplicates. I really don't understand how that is possible. Has this happened to you as well? I'd highly appreciate any help. Thank you
-
How to find websites that are using our content
I'm trying to figure out how, using SEOmoz, I can find all websites that are using our content.
-
Crawl diagnostics incorrectly reporting duplicate page titles
Hi guys, I have a question regarding the duplicate page titles being reported in my crawl diagnostics. It appears that the URL parameter "?ctm" is causing the crawler to think that duplicate pages exist. In GWT, we've specified that the representative URL should be used when that parameter is present. It appears to be working, since when I search site:http://www.causes.com/about?ctm=home, I am served a single search result for www.causes.com/about. That begs the question: why is the SEOmoz crawler reporting duplicate page titles when Google isn't (it doesn't appear under the HTML improvements for duplicate page titles)? A canonical URL is not used for this page, so I'm assuming that may be one reason why. The only other thing I can think of is that Google's crawler is simply "smarter" than the Moz crawler (no offense, you guys put out an awesome product!). Any help is greatly appreciated, and I'm looking forward to being an active participant in the Q&A community! Cheers, Brad
-
SEOmoz crawling filtered pages
Hi, I just checked an SEO campaign we started last week, so I opened SEOmoz to see the crawl diagnostics. Lots of duplicate content and duplicate titles are showing up, but that's because Rogerbot is crawling all of the filtered pages as well. How do I exclude these pages from being crawled?
/product/brand-x/3969?order=brand&sortorder=ASC
/product/brand-x/3969?order=popular&sortorder=ASC
/product/brand-x/3969?order=popular&sortorder=DESC&page=10
/product/brand-x/3969?order=popular&sortorder=DESC&page=11
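Not an answer to the crawl-exclusion question itself, but here is a small standard-library Python sketch (the parameter names are taken from the URLs above; the example.com domain is made up) of how one might group these filtered URLs by their underlying page before deciding what to block or canonicalize:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Filter/sort parameters taken from the example URLs in the question.
FILTER_PARAMS = {"order", "sortorder", "page"}

def strip_filters(url):
    """Return the URL with the known filter/sort parameters removed."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in FILTER_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

examples = [
    "https://example.com/product/brand-x/3969?order=brand&sortorder=ASC",
    "https://example.com/product/brand-x/3969?order=popular&sortorder=DESC&page=10",
]
for url in examples:
    print(f"{url} -> {strip_filters(url)}")
```
-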
How to Fix the Errors with Duplicate Title or Content?
The latest Crawl Diagnostics run has found 160 errors on my site. The error is that the same content or title is used on two different pages: both my root domain (han-mark.com) and the www subdomain. Does it matter (with or without www)? How serious is that error? Do I need to fix all the errors (and the hundreds of warnings too)? What's the best practice? Is there any guide on how to do it, or tools for doing it the fast way? Viggo Joergensen
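A minimal sketch of how one could check whether the two hostnames already collapse to a single version (assuming the `requests` package, and assuming www is meant to be the preferred host). If both return 200 rather than one issuing a 301 to the other, that is the duplication Moz is flagging, and a sitewide redirect to the preferred host plus matching canonical tags is the usual fix:

```python
import requests

# Both hostnames mentioned in the question; adjust the preferred host as needed.
urls = ["http://han-mark.com/", "http://www.han-mark.com/"]

for url in urls:
    resp = requests.get(url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "(no redirect)")
    print(f"{url} -> {resp.status_code} {location}")
```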