Why do I see duplicate content errors when the rel="canonical" tag is present?
-
I was reviewing my first Moz crawler report and noticed the crawler returned a bunch of duplicate page content errors. The recommendations to correct this issue are to either put a 301 redirect on the duplicate URL or use the rel="canonical" tag so Google knows which URL I view as the most important and the one that should appear in the search results. However, after poking around the source code I noticed all of the pages that are returning duplicate content in the eyes of the Moz crawler already have the rel="canonical" tag.
Does the Moz crawler simply not catch whether that tag is being used? If I have that tag in place, is there anything else I need to do in order to get that error to stop showing up in the Moz crawler report?
-
We're seeing the same issue. Multiple pages are flagged as "duplicate content," but each contains a single rel="canonical" tag pointing to the same URL.
-
Hey Webtraders,
I'm also looking at this issue. Any chance you got to the bottom of it?
-
We have the same problem with duplicate page titles in the Moz crawl errors. Has anyone found a solution for this yet?
-
I had pages with badly configured rel canonical tags, and the Moz crawl did not detect them as duplicate content; the rel canonical information showed up under notices instead.
That said, if you see duplicate content in the Moz crawl and you have rel canonical installed, it doesn't always mean there is a real problem.
I have a lot of blog pages with the same title or description, and the Moz crawl flags them as duplicate metas, although I don't think it's bad for Google, since they see the canonical tag in this case.
-
Is the rel canonical pointing to the right page, or are they all just pointing to themselves?
A lot of times WordPress or similar creation tools will drop a canonical tag on each page that points to itself. What you need to do is ensure that the duplicated page is pointing to the one you want indexed...
If you cut and paste an example in here perhaps we can be more helpful.
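To illustrate the difference described above, here is a minimal sketch using hypothetical URLs (example.com and the paths are placeholders, not from any poster's site). On a duplicate page, the canonical should name the preferred URL, not the page itself:

```html
<!-- In the <head> of https://example.com/red-widgets?sort=price (the duplicate page) -->
<!-- Wrong: a self-referencing canonical tells search engines this variant is the preferred URL -->
<!-- <link rel="canonical" href="https://example.com/red-widgets?sort=price" /> -->

<!-- Right: point at the clean URL you actually want indexed -->
<link rel="canonical" href="https://example.com/red-widgets" />
```

A self-referencing canonical on every variant is what CMS defaults often produce, and it leaves the duplication unresolved, which is consistent with the crawler still reporting it.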
Related Questions
-
Large site with content silos - best practice for deep indexing silo content
Thanks in advance for any advice/links/discussion. This honestly might be a scenario where we need to do some A/B testing.
We have a massive (5 million page) content silo that is the basis for our long-tail search strategy. Organic search traffic hits our individual "product" pages, and we've divided our silo with a parent category and then secondarily with a field (so we can cross-link to other content silos using the same parent/field categorizations). We don't anticipate, nor expect, top-level category pages to receive organic traffic - most people are searching for the individual/specific product (long tail). We're not trying to rank or get traffic for searches of all products in "category X"; others are competing and spending a lot in that area (head). The intent/purpose of the site structure/taxonomy is to more easily enable bots/crawlers to get deeper into our content silos. We've built the pages for humans, but included link structure/taxonomy to assist crawlers.
So here's my question on best practices: how to handle categories with 1,000+ pages of pagination. With our most popular product categories, there might be hundreds of thousands of products in one category. My top-level hub page for a category looks like www.mysite/categoryA, and the page shows 50 products and then pagination from 1-1,000+. Currently we're using rel=next for pagination, and for pages like www.mysite/categoryA?page=6 we make each page reference itself as canonical (not the first/top page www.mysite/categoryA). Our goal is deep crawl/indexation of our silo.
I use ScreamingFrog and the SEOmoz campaign crawl to sample (the site takes a week+ to fully crawl), and with each of these tools it "looks" like crawlers have gotten a bit "bogged down" in large categories with tons of pagination. For example, rather than crawl multiple categories or fields to get to multiple product pages, some bots will hit all 1,000 (rel=next) pages of a single category.
I don't want to waste crawl budget going through 1,000 pages of a single category versus discovering/crawling more categories. I can't seem to find a consensus on how to approach the issue. I can't have a page that lists "all" - there's just too much, so we're going to need pagination. I'm not worried about category pagination pages cannibalizing traffic, as I don't expect any (should I make pages 2-1,000 noindex and canonically reference the main/first page in the category?). Should I worry about crawlers going deep into pagination within one category versus getting to more top-level categories? Thanks!
Moz Pro | DrewProZ
-
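For reference, the setup the question above describes (self-canonical paginated pages with rel=next) would look roughly like this in each page's head. This is a hedged sketch: the domain is the question's own placeholder with ".com" assumed, and the exact URL scheme is Magento/Screaming Frog-agnostic:

```html
<!-- In the <head> of www.mysite.com/categoryA?page=6, as described in the question -->
<link rel="canonical" href="https://www.mysite.com/categoryA?page=6" />
<link rel="prev" href="https://www.mysite.com/categoryA?page=5" />
<link rel="next" href="https://www.mysite.com/categoryA?page=7" />
```

The alternative the poster floats (noindex on pages 2+ with a canonical to page 1) trades deep indexation of products linked only from deep pages for crawl budget, so the two setups serve opposite goals.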
Error Code 902 & 403
Several thousand of these popped up in my crawl report, and the links appear to be searches, e.g. below. 902: http://thespacecollective.com/index.php?route=product/search&tag=nasa+ma-1+jacket%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F 403: http://thespacecollective.com/index.php?route=product/search&tag=periodic+table+tshirt%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F I don't want Moz, let alone Google, finding this kind of nonsensical link, but I don't know exactly what the problem is or how to fix it. Am I right in thinking these are pages people have searched for? Can anyone shed light on this, please?
Moz Pro | moon-boots
-
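One common approach for internal search URLs like the ones in the question above (an assumption, not an answer from this thread - verify the path matches the store's actual search route first) is to keep crawlers out of the search route entirely via robots.txt:

```text
# Hypothetical robots.txt rules for the search route shown in the question.
# Disallow matching is prefix-based, so this covers every tag= variation.
User-agent: *
Disallow: /index.php?route=product/search
```

Blocking crawling stops crawl budget being spent on these URLs, though URLs that are already linked externally can still appear in the index.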
H1 tag question
I am currently going through the process of optimizing my pages for my given keywords. Most of my pages are receiving an A grade from the Moz page checker, with keywords being found in all elements except for the H1 tag. For certain pages I have not used an H1 tag; the page's title has been incorporated into the image at the top of the page. This is difficult to explain without showing you, so I will use one of my pages as an example: http://www.ecobode.co.uk/garden-uses-3/garden-gym/. The keyword for this page is "garden gym"; it is found multiple times in the content, URL and other on-page elements, except for the H1 tag, as I don't have one. The title resides in the image. I know how important H1 tags are, but I don't know how I can incorporate one into this page. Does anyone have any ideas how I can incorporate the H1 tag into this page? Kind regards, Tom
Moz Pro | Tmgale
-
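One common pattern for the situation above (a sketch, not taken from the site's actual markup - the class name and image path are hypothetical) is to keep a real, crawlable H1 and render the decorative title as a CSS background image, moving the text off-screen:

```html
<!-- Hypothetical markup: the H1 text is crawlable, the image is what visitors see -->
<h1 class="page-title">Garden Gym</h1>

<style>
  .page-title {
    background: url("/images/garden-gym-title.png") no-repeat; /* hypothetical path */
    width: 600px;          /* match the image dimensions */
    height: 120px;
    text-indent: -9999px;  /* push the text off-screen; the background image stays visible */
    overflow: hidden;
  }
</style>
```

This keeps the visual design unchanged while giving crawlers a keyworded H1, which is what the page checker is looking for.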
URL parameters and duplicate content
Hello, I have a two-fold question. Crawl Diagnostics is picking up a lot of Duplicate Page Title errors, and as far as I can tell, all of them are caused by URL parameters trailing the URL. We use a Magento store, and all filtering attributes, categories, product pages etc. are tagged on as URL parameters. Example:
Main URL: /accessories.html
Duplicate Page Title URLs:
/accessories.html?dir=asc&order=position
/accessories.html?mode=list
/accessories.html?mode=grid
...and many others
How can I make Crawl Diagnostics not identify these as errors? Now, from an SEO point of view, all these URL parameters are being picked up by Google and are listed in Webmaster Tools -> URL Parameters. All URL parameters are set to "Let Google decide". I remember having read that Google was smart enough here to make the right decision, and we shouldn't have to worry about it. Is this true, or is there a larger issue at hand here? Thanks!
Moz Pro | yacpro13
-
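One way to consolidate the parameterized URLs listed in the question above (hedged - the domain is a placeholder, and Magento also ships a "Use Canonical Link Meta Tag" setting in its Catalog SEO configuration that can emit this automatically) is for every filtered/sorted variant to declare the clean category URL as canonical:

```html
<!-- In the <head> of /accessories.html?mode=list, ?mode=grid,
     ?dir=asc&order=position, and every other parameterized variant -->
<link rel="canonical" href="https://www.example.com/accessories.html" />
```

With that in place, the duplicate page title warnings point at URLs search engines will already consolidate, although the Moz crawler may still list them.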
Why does my crawl diagnostics report show duplicate content?
My crawl diagnostics show duplicate content at mysite.com and mysite.com/index.html, which are essentially the same file.
Moz Pro | MSSBConsulting
-
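Since both URLs in the question above serve the same homepage file, one fix (a sketch using the question's own placeholder domain) is a canonical in the homepage head naming a single preferred URL, so both addresses consolidate to it:

```html
<!-- In the <head> of the homepage, which is served at both
     mysite.com/ and mysite.com/index.html -->
<link rel="canonical" href="https://mysite.com/" />
```

A 301 redirect from /index.html to / accomplishes the same consolidation at the server level and also removes the duplicate URL from crawl reports entirely.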
In OSE, does "Followed Linking Root Domains" = "links from homepages"?
In OSE, "Followed Linking Root Domains" is defined as "The number of root domains that have at least one followed link to a page or domain." Does this mean that if one of my competitors has, let's say, 1,000 followed linking root domains, they have a link pointing to them on the homepage of 1,000 other sites? Thanks for your help!
Moz Pro | gerardoH
-
Broken Links and Duplicate Content Errors?
Hello everybody, I'm new to SEOmoz and I have a few quick questions regarding my error reports:
1. In the past, I have used IIS as a tool to uncover broken links, and it has revealed a large number of varying types of "broken links" on our sites. For example, some of them were links on my site that went to external sites that were no longer available; others were missing images in my CSS and JS files. According to my campaign in SEOmoz, however, my site has zero broken links (4XX). Can anyone tell me why the IIS errors don't show up in my SEOmoz report, and which of these two reports I should really be concerned about (for SEO purposes)?
2. Also in the "errors" section, I have many duplicate page title and duplicate page content errors. Many of these "duplicate" content reports are actually showing the same page more than once. For example, the report says that "http://www.cylc.org/" has the same content as "http://www.cylc.org/index.cfm" and that, of course, is because they are the same page. What is the best practice for handling these duplicate errors - can anyone recommend an easy fix for this?
Moz Pro | EnvisionEMI
-
Duplicate content and what to say to my webmaster
Hi, SEOmoz is telling me I have a duplicate content issue between kansascityrealestate dot com and kansascityrealestate dot com/Real-Estate-Homes-Kansas-City.asp. My webmaster says Google should figure out it is the same page. Suggestions on what to do and how to explain it to the webmaster? Thank you.
Moz Pro | Ken_Jansen