Duplicate content in crawl despite canonical
-
Hi! I've had a bunch of duplicate content issues come up in a crawl, but a lot of them seem to have canonical tags implemented correctly. For example:
http://www.alwayshobbies.com/brands/aztec-imports/-catg=Fireplaces
http://www.alwayshobbies.com/brands/aztec-imports/-catg=Nursery
http://www.alwayshobbies.com/brands/aztec-imports/-catg=Turntables
http://www.alwayshobbies.com/brands/aztec-imports/-catg=Turntables?page=0
http://www.alwayshobbies.com/brands/aztec-imports/-catg=Turntables?page=1
Any ideas on what's happening here?
-
Thanks for that.
-
Hi!
I found the following info in the "help" section:
**"I use the rel="canonical" tag correctly. Why did I get a notice?"** If the canonical tag points to a different URL, engines will not count this page as the reference resource and thus it won't have an opportunity to rank. We include the notice so you can make sure you're targeting the right page.
Is there any way to ignore or exclude certain errors or issues?
Currently there isn't a way to do this, but we plan to add this in a future version of the application.

Looks like it is safe to ignore this, as long as you have your rel="canonical" set up correctly. (I have the same issue, by the way...)
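If you'd rather verify the tags yourself than trust the report, here's a minimal sketch using only the Python standard library. The helper name `is_canonicalized_duplicate` and the sample HTML are made up for illustration; it just checks whether a page declares a canonical pointing somewhere other than its own URL, which is the case where the notice is safe to ignore.

```python
from html.parser import HTMLParser


class CanonicalFinder(HTMLParser):
    """Collects the href of any <link rel="canonical"> tag in the page."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            self.canonical = attrs.get("href")


def is_canonicalized_duplicate(page_url, html):
    """True when the page declares a canonical pointing elsewhere,
    i.e. the duplicate warning is safe to ignore for this URL."""
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical is not None and finder.canonical != page_url


html = '''<html><head>
<link rel="canonical" href="http://www.alwayshobbies.com/brands/aztec-imports/-catg=Turntables">
</head><body></body></html>'''

# The ?page=1 variant canonicalizes to the main category URL:
print(is_canonicalized_duplicate(
    "http://www.alwayshobbies.com/brands/aztec-imports/-catg=Turntables?page=1",
    html))  # True
```

Run this against the flagged URLs from your crawl export; any page where it returns True is a canonicalized duplicate the crawler is simply notifying you about.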
Best regards,
Anders
Related Questions
-
Crawler crawls weird long URLs
I did a crawl start for the first time and I got many errors, but the weird fact is that the crawler tracks long, non-existent duplicate URLs. For example (to be clear): there is a page www.website.com/dogs/dog.html, but then it continues crawling:
www.website.com/dogs/dog.html
www.website.com/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dogs/dogs/dog.html
What can I do about this? Screaming Frog gave me the same issue, so I know it's something with my website.
Moz Pro | r.nijkamp
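A common cause of this pattern (an assumption, without seeing the site) is a relative link such as `href="dogs/dog.html"` on a page that already lives under /dogs/, combined with a server that returns 200 instead of 404 for the bogus deeper paths. A quick standard-library sketch shows how each crawl step nests one level deeper:

```python
from urllib.parse import urljoin

# A relative href ("dogs/dog.html") resolves against the directory of the
# current page, so every hop adds another /dogs/ segment.
page = "http://www.website.com/dogs/dog.html"
for _ in range(3):
    page = urljoin(page, "dogs/dog.html")
    print(page)
# http://www.website.com/dogs/dogs/dog.html
# http://www.website.com/dogs/dogs/dogs/dog.html
# http://www.website.com/dogs/dogs/dogs/dogs/dog.html
```

The usual fix is to make the links root-relative (`/dogs/dog.html`) or absolute, and to make sure the server returns 404 for paths that don't exist.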
When I did my first crawl, I was given some errors.
Do I then need to re-crawl to make sure the errors were fixed accordingly?
Moz Pro | immortalgamer
How does SEOmoz pull its duplicate page title and content information?
I ask because I am getting errors based on URLs that do not even exist on our site. For example: http://www.robots.com/applications/abb/panasonic/robots does not exist for our site, but somehow it is listed in the error section of the page title duplication tool. http://www.robots.com/applications/ exists, but there is no way to get to an ABB or a Panasonic robot from this page, not to mention an ABB/Panasonic combination (which certainly does not exist). We have quite a few of these out there and are just wondering how to find out where the link is coming from. When we checked our URLs through Integrity, links like the one listed above (we had 29 of them listed) do not show up. Thoughts? Thanks! Janelle
Moz Pro | jwanner
Does crawling help in optimisation?
The website is as it was last week; no optimisation from my side for 10 days now. I was ranked 5 for my keyword (not much competition there). However, 2 days ago I registered at SEOmoz and created a campaign for my website with the keywords that were ranked 5 in search. Today I see that my rank has gone up to 2. I have not done any optimisation, nor have I created any backlinks, so how and why did I climb up? I just created a campaign and let SEOmoz crawl my website for 2 days. Am I to assume the SEOmoz crawl optimises websites? If that is the case, then can I create a campaign, crawl pages, climb up in searches, delete the campaign after a week, create it again, crawl pages, climb up, and so on? Please advise.
Moz Pro | wahin1
Is there a way to specify what SEOmoz classes as duplicate content?
Hi all, I'm currently working through the laundry list of errors and warnings on our company's 24 websites. Due to the ridiculous number of on-page links and the sheer volume of products on our sites, much of the descriptive text is similar, following a strict pattern to best mention our USPs and the like. Of course we use a CMS, which means that all the pages look the same and draw this information from the style sheet. Anyway, to the problem at hand: I have been tasked with reducing the "error" count on the SEOmoz admin panel, the problem being that SEOmoz is reporting duplicate page content when the pages are different but similar products, for example 35, 45 and 55 litre refrigeration units. Is there a way in which I can specify what classes as duplicate content, or make the duplicate content report more restrictive, so that everything HAS to be the same for this error to show? Any help is much appreciated; thanks in advance.
Moz Pro | cmuknbb
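While the tool's threshold can't be tuned, you can get a rough sense of how alike two product descriptions are with Python's standard-library difflib. The example texts below are invented for illustration; the point is that pages differing only in a capacity figure score as near-identical:

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Rough 0..1 similarity of two page texts."""
    return SequenceMatcher(None, a, b).ratio()


desc_35 = ("Our 35 litre refrigeration unit offers quiet operation, "
           "low power draw, and a two year warranty.")
desc_45 = ("Our 45 litre refrigeration unit offers quiet operation, "
           "low power draw, and a two year warranty.")

score = similarity(desc_35, desc_45)
print(round(score, 2))  # high score -- only the capacity differs
```

Pages scoring this close will trip most duplicate-content checks, so the practical fix is usually to vary the descriptive text per product rather than wait for a configurable report.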
How do I find the corresponding duplicate content pages from my SEOmoz report?
Once I have run my report and the duplicate content pages come up, is there a way to find out which pages have the duplicate content on them? I have one URL, but where can I find the duplicate content that corresponds to it? Thanks, Barry
Moz Pro | MrBarrytg
How to remove duplicate content due to URL parameters from SEOmoz Crawl Diagnostics
Hello all, I'm currently getting back over 8,000 crawl errors for duplicate content pages. It's a Joomla site with VirtueMart, and 95% of the errors are for parameters in the URL that the customer can use to filter products. Google is handling them fine under Webmaster Tools parameters, but it's pretty hard to find the other duplicate content issues in SEOmoz with all of these in the way. All of the problem parameters start with ?product_type_ Should I try to use robots.txt to stop them from being crawled, and if so, what would be the best way to include them in robots.txt? Any help greatly appreciated.
Moz Pro | dfeg
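Until exclusions are supported in the report itself, one workaround is to filter the exported error URLs yourself. This standard-library sketch assumes you have the errors as a plain list of URL strings; the example URLs are invented:

```python
from urllib.parse import urlparse, parse_qs


def is_filter_duplicate(url: str) -> bool:
    """True when any query parameter name starts with 'product_type_'."""
    query = parse_qs(urlparse(url).query)
    return any(name.startswith("product_type_") for name in query)


urls = [
    "http://example.com/shop/fridges?product_type_capacity=35",
    "http://example.com/shop/fridges",
]
# Keep only the errors that are NOT caused by the filter parameters:
real_issues = [u for u in urls if not is_filter_duplicate(u)]
print(real_issues)  # ['http://example.com/shop/fridges']
```

If you also want crawlers to skip these URLs at the source, a Google-style wildcard rule such as `Disallow: /*?product_type_` in robots.txt is the usual approach, though wildcard support varies between crawlers, so check each bot's documentation.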
Only 1 page is being crawled by SEOmoz for the last 2 crawls
I would like to ask about a possible problem, and a solution, for one of our campaigns: only 1 page has been crawled by SEOmoz for the last 2 crawls. Before those two crawls, SEOmoz crawled numerous pages, and we can't think of a possible reason for this error. For this particular campaign there are no data at all: no errors, warnings, or notices. Thanks!
Moz Pro | TheNorthernOffice79