Duplicate page content showing up despite proper use of the canonical tag
-
Hi,
In the Crawl Diagnostics reports, I'm getting lots of duplicate-content warnings, e.g. duplicate page titles. In most cases these are tracking URLs, and each page has a canonical tag pointing to the original page.
It would be helpful if the crawl analysis reports could separate these out from ones that are of genuine concern.
It can also happen when there's a noindex tag on a page.
Thanks,
Leigh
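For reference, here's roughly what those two signals look like in a page's <head> (a hypothetical sketch using a placeholder domain, not the actual site's markup):

    <!-- On a tracking-URL variant, pointing back to the original page -->
    <link rel="canonical" href="http://www.site.com/" />

    <!-- Or, on a page that shouldn't be indexed at all -->
    <meta name="robots" content="noindex" />

Either signal tells a crawler how to handle the apparent duplicate.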
-
Hi Cyrus,
I don't see any issues with the canonical tag.
I'll contact the help team.
Thanks,
-
Hi Leigh,
The SEOmoz PRO platform is designed to detect canonicals and disregard these kinds of errors when proper canonicals are in place.
That said, there have been bugs before that prevented this from working correctly, but most of those have been fixed. If you are still seeing problems, I encourage you to contact the help team (help@seomoz.org) to make sure everything is working correctly in your campaign, and to verify this is actually a bug and not something wrong with your canonical tags.
Best of luck!
-
Yes, but the non-www version 301 redirects to the www version.
-
Let me ask you this: does your site exist at both the www and non-www versions? Example: http://www.site.com/ and http://site.com/
-
Hi,
I'm not sure I want to list the domain here, but here's an example of what I mean. We create Google tracking links (via the Google URL Builder) for use in a newsletter. The homepage looks like this:
http://www.site.com/
and one of the links in the newsletter might look like this:
http://www.site.com/?utm_source=newsletter&utm_medium=email&utm_content=offer&utm_campaign=1
When you look at the source code for both URLs, they both have the canonical tag set to http://www.site.com/.
So Google knows there's no duplicate content issue there. It would be good if the diagnostics tool could recognise that too.
Thanks,
Leigh
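To illustrate with the hypothetical domain above, both URLs would serve the same tag in their <head>:

    <!-- Identical on http://www.site.com/ and on the tagged newsletter URL -->
    <link rel="canonical" href="http://www.site.com/" />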
-
Can you please list the domain?
Related Questions
-
Large site with content silos - best practice for deep indexing of silo content
Thanks in advance for any advice/links/discussion. This honestly might be a scenario where we need to do some A/B testing. We have a massive (5 million) content silo that is the basis for our long-tail search strategy. Organic search traffic hits our individual "product" pages, and we've divided our silo with a parent category and then secondarily with a field (so we can cross-link to other content silos using the same parent/field categorizations). We don't anticipate, nor expect, that top-level category pages will receive organic traffic - most people are searching for the individual/specific product (long tail). We're not trying to rank or get traffic for searches of all products in "category X"; others are competing and spending a lot in that area (head). The intent/purpose of the site structure/taxonomy is to more easily enable bots/crawlers to get deeper into our content silos. We've built the pages for humans, but included link structure/taxonomy to assist crawlers. So here's my question on best practices: how to handle categories with 1,000+ pages of pagination. With our most popular product categories, there might be hundreds of thousands of products in one category. My top-level hub page for a category looks like www.mysite/categoryA, and the page build shows 50 products and then pagination from 1-1000+. Currently we're using rel=next for pagination, and for pages like www.mysite/categoryA?page=6 we make each page reference itself as canonical (not the first/top page www.mysite/categoryA). Our goal is deep crawl/indexation of our silo. I use ScreamingFrog and the SEOmoz campaign crawl to sample (the site takes a week+ to fully crawl), and with each of these tools it "looks" like crawlers have gotten a bit "bogged down" in large categories with tons of pagination. For example, rather than crawl multiple categories or fields to get to multiple product pages, some bots will hit all 1,000 (rel=next) pages of a single category. I don't want to waste crawl budget going through 1,000 pages of a single category versus discovering/crawling more categories. I can't seem to find a consensus on how to approach the issue. I can't have a page that lists "all" - there's just too much, so we're going to need pagination. I'm not worried about category pagination pages cannibalizing traffic, as I don't expect any (should I make pages 2-1,000 noindex and canonically reference the main/first page in the category?). Should I worry about crawlers going deep into pagination within one category versus getting to more top-level categories? Thanks!
Moz Pro | | DrewProZ1 -
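A minimal sketch of the pagination markup this question describes, using its hypothetical category URLs (rel=prev is the standard companion to the rel=next the question mentions):

    <!-- In the <head> of www.mysite/categoryA?page=6 -->
    <link rel="canonical" href="http://www.mysite/categoryA?page=6" />
    <link rel="prev" href="http://www.mysite/categoryA?page=5" />
    <link rel="next" href="http://www.mysite/categoryA?page=7" />

With a self-referencing canonical plus rel=prev/next, each paginated page stays individually eligible for crawling and indexing, which matches the stated goal of deep indexation.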
Concerned About Individual Pages
Okay. I've set up a campaign for www.site.com and given a list of keywords. So after the initial crawl we'll have some results. What I'm looking for, though, is how individual pages on my site rank for the list of keywords given, and then to be able to go to a screen in SEOmoz with data for that particular page, with recommendations and so on. Is this what's going to happen, or do I need to create a campaign for each URL I want to track? If it all works as I'd like in the example above, should I then add the second list of keywords that some other pages should rank for? Will it get to be a big mess, or can I relate the keywords to pages in some way? It seems like what I'm looking for is what this program should be... Thanks!
Moz Pro | | martJ0 -
Page authority questions?
I've been analyzing some IT communities in order to check how relevant page authority is vs. PageRank. I found one main site which is organized by "communities", and every community is a sub-domain. The root domain has an authority of 90/100, which should be great, so the sub-domains "inherit" part of this authority. Up to this point everything seems perfect. However, I went deeper and picked one of these communities. Analyzing the "Linking Root Domains" I discovered it has only 5 root domains pointing to its home page. Those 5 root domains have generated more than 134k links. That doesn't seem "natural". Checking those 5 root domains, I discovered that they were registered by the same owner as the main site. Ex: Main domain: Domain.com Community1.domain.com Community2.domain.com... Linking Root Domains: DomainXY.com DomainABC.com DomainRST.com DomainFGH.com DomainOPQ.com It seems to me that it is easy to cheat the domain authority score, just by creating other sites covering the same topic and generating backlinks to your main domain.
Moz Pro | | SherWeb0 -
I've got quite a few "Duplicate Page Title" Errors in my Crawl Diagnostics for my WordPress Blog
Title says it all. Is this an issue? The pages seem to be set up properly with rel=canonical, so should I just ignore the duplicate page title errors in my Crawl Diagnostics dashboard? Thanks
Moz Pro | | SheffieldMarketing0 -
Duplicate Page Content
I'm getting crawl errors for duplicate page titles and duplicate content for the same page. www.breeze-air.com www.breeze-air.com/ www.breeze-air.com/index-html What am I doing wrong? Please help. Thank you
Moz Pro | | eoberlender0 -
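One common fix, sketched here with an assumed preferred URL (only the site owner can confirm which version to standardize on): give every variant a canonical tag pointing at the one version you want indexed, e.g.

    <!-- In the <head> of the homepage, however it is reached -->
    <link rel="canonical" href="http://www.breeze-air.com/" />

A server-side 301 redirect from the duplicate URLs to the preferred one accomplishes the same consolidation.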
Excel tips or tricks for duplicate content madness?
Dearest SEO Friends, I'm working on a site that has over 2,400 instances of duplicate content (yikes!). I'm hoping somebody could offer some Excel tips or tricks for managing my SEOmoz crawl diagnostics summary data file in a meaningful way, because right now this spreadsheet is not really helpful. Here's a hypothetical situation to describe why. Say we had three columns of duplicate content. The data is displayed thusly:

Column A | Column B | Column C
URL A    | URL B    | URL C

In a perfect world, this is easy to understand: I want URL A to be the canonical. But unfortunately, the way my spreadsheet is populated, this ends up happening:

Column A | Column B | Column C
URL A    | URL B    | URL C
URL B    | URL A    | URL C
URL C    | URL A    | URL B

Essentially all of these URLs would end up being called a canonical, thus rendering the tag ineffective. On a site with small errors, this has never been a problem, because I can just spot-check my steps. But the site I'm working on has thousands of instances, making it really hard to identify or even scale these patterns accurately. This is particularly problematic as some of these URLs are identified as duplicates 50+ times! So my spreadsheet has well over 100K cells!!! Madness!!! Obviously, I can't go through it manually. It would take me years to ensure the accuracy, and I'm assuming that's not really a scalable goal. Here's what I would love, but I'm not getting my hopes up. Does anyone know of a formulaic way that Excel could identify row matches and think - "oh! these are all the same rows of data, just mismatched. I'll kill off duplicate rows, so only one truly unique row of data exists for this particular set"? Or some other workaround that could help me with my duplicate content madness? Much appreciated, you Excel Gurus you!
Moz Pro | | FMLLC0 -
Only 1 page has been crawled. Why?
I set up a new profile a fortnight ago. Last week SEOmoz crawled the entire site (10k pages), and this week it has only crawled 1 page. Nothing's changed on the site that I'm aware of, so what's happened?
Moz Pro | | tompollard0 -
One page per campaign?
I'm not quite sure if I read this correctly, but is it true that one campaign tracks only one page of my site? So if I wanted to track something like a services page, this would require a second campaign?
Moz Pro | | GroundFloorSEO0