Crawl diagnostics incorrectly reporting duplicate page titles
-
Hi guys,
I have a question regarding the duplicate page titles being reported in my crawl diagnostics. It appears that the URL parameter "?ctm" is causing the crawler to think duplicate pages exist. In GWT, we've specified to use the representative URL when that parameter is present, and it appears to be working: when I search site:http://www.causes.com/about?ctm=home, I'm served a single result for www.causes.com/about. So why is the SEOmoz crawler reporting duplicate page titles when Google isn't (the page doesn't appear under HTML improvements for duplicate page titles)? A canonical URL is not used for this page, so I'm assuming that may be one reason. The only other thing I can think of is that Google's crawler is simply "smarter" than the Moz crawler (no offense; you guys put out an awesome product!).
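To illustrate what I mean, the parameter handling I set up in GWT effectively collapses the variants into one representative URL, something like this (a rough Python sketch; the `normalize` function and `TRACKING_PARAMS` set are made up for illustration, only the "?ctm" parameter is real):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# "ctm" is the real parameter; the set and function name are hypothetical
TRACKING_PARAMS = {"ctm"}

def normalize(url):
    """Strip known tracking parameters so URL variants collapse to one representative URL."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(normalize("http://www.causes.com/about?ctm=home"))
# → http://www.causes.com/about
```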
Any help is greatly appreciated and I'm looking forward to being an active participant in the Q&A community!
Cheers,
Brad
-
Glad I could help, Bradley. Let us know if you need help with anything else.
-
Thanks for the thorough response, Chiaryn! I figured as much but wanted to make sure I wasn't overlooking anything.
-
Hey Bradley,
You're right; Google's crawler is far more sophisticated than ours because they have far more resources, both engineering and financial, to pour into it. We think our crawl provides tremendous value and is an excellent way to discover and understand the architecture of your site at scale, but it's not surprising that it doesn't line up exactly with what a site: search reveals. We also don't always know how Google (or any other search engine bot) will treat a set of pages, so we'd rather be safe than sorry with the data we provide.
Since the page http://www.causes.com/about?ctm=home is linked from another page on your site (www.causes.com) and resolves with a 200 status, our crawler sees it as an individual page and won't associate it with the main /about page. Instead, it simply compares the code and content with the other pages we've crawled and reports back when it finds duplicates.
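Conceptually, the duplicate-title check works something like this (a simplified sketch, not our actual code; the "About Causes" title is just an example):

```python
from collections import defaultdict

def find_duplicate_titles(pages):
    """Group crawled URLs by page title; any title shared by 2+ URLs gets flagged."""
    by_title = defaultdict(list)
    for url, title in pages:
        by_title[title].append(url)
    return {title: urls for title, urls in by_title.items() if len(urls) > 1}

# Both URLs returned 200, so the crawler treats them as two separate pages
crawled = [
    ("http://www.causes.com/about", "About Causes"),
    ("http://www.causes.com/about?ctm=home", "About Causes"),
]
print(find_duplicate_titles(crawled))
# → {'About Causes': ['http://www.causes.com/about', 'http://www.causes.com/about?ctm=home']}
```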
I hope this helps clear things up. Please let me know if you have any other questions.
Chiaryn
Help Team Ninja
Related Questions
-
Why is my site not being crawled?
Hi all, I allow rogerbot in my robots.txt, but rogerbot can't crawl my site. My site: toska-co.ir
-
How do you create tracking URLs in Wordpress without creating duplicate pages?
I use WordPress as my CMS, but I want to track click activity to my RFQ page from different products and services on my site. The easiest way to do this is by adding a string to the end of a URL (à la http://www.netrepid.com/request-for-quote/?=colocation). The downside, of course, is that when Moz runs its crawl diagnostics every week, I get notified that I have multiple pages with the same page title and duplicate content. I'm not a programming expert, but I'm pretty handy with WordPress and know a thing or two about 'href-fing' (yeah, that's a thing). Can someone who tracks click activity in WP with URL variables please enlighten me on how to do this without creating duplicate pages? Appreciate your expertise. Thanks!
-
Is there a tool to crawl website meta tags to locate any in an incorrect language?
Example: I'm interested in crawling a regional site for Brazil to find any meta tags that are still in English with the goal being to fix any localization issues.
-
Crawl diagnostics up to date after Magento ecommerce site crawl?
Howdy Mozzers, I have a Magento ecommerce website and I was wondering whether the data (errors/warnings) in the Crawl Diagnostics is up to date. My Magento website has 2,439 errors, mainly 1,325 duplicate page content and 1,111 duplicate page title issues.

I already implemented the Yoast meta data plugin that should fix these issues, but I still see the errors appearing in the crawl diagnostics. When I go to a URL mentioned in the crawl diagnostics, e.g. http://domain.com/babyroom/productname.html?dir=desc&targetaudience=64&order=name, and search the source code for 'canonical', I do see <link rel="canonical" href="http://domain.com/babyroom/productname.html" />. I even checked the Google SERP for that URL and couldn't find it indexed, so the Yoast meta plugin actually worked. So I'm wondering why these errors are still counted in the crawl diagnostics. My goal is to bring all the errors down to zero.

I'm also still struggling with the "overly-dynamic URL" (1,025) and "too many on-page links" (9,000+) warnings. I want to measure whether I can bring the warnings down after implementing an AJAX-based layered navigation, but if the crawl diagnostics aren't updating, I have no way to measure the success of eliminating the warnings. Thanks for reading, and hopefully you can all give me some feedback.
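For reference, the manual view-source check I did amounts to this (a quick Python sketch; the class name and HTML snippet are just examples, not Yoast's actual output):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Extract the rel=canonical href from a page's HTML, mirroring a manual view-source check."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

# Example head section like the one the canonical plugin outputs on the product page
html = '<head><link rel="canonical" href="http://domain.com/babyroom/productname.html" /></head>'
finder = CanonicalFinder()
finder.feed(html)
print(finder.canonical)
# → http://domain.com/babyroom/productname.html
```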
-
Campaign Crawl
I have a site with 8036 pages in my sitemap index, but MozBot has only crawled 2169 pages. It's been several months, and each week it crawls roughly the same number of pages. Any idea why I'm not getting fully crawled?
-
What is the difference between the Rank Tracker and the On-page Optimization page?
Both of them track keywords. In the Rank Tracker, you add each keyword manually and associate it with a URL. For the On-page Optimization page, are the URLs generated automatically based on searches and traffic?
-
Crawl Diagnostics | Starter Crawl has taken 14 hrs... so far
We started a starter crawl 14 hours ago and it's still going. Can anyone explain why this is taking so long when the interface says '2 hrs'? Thanks, Rory
-
Why am I getting duplicate content errors on same page?
In the SEOmoz tools I am getting multiple errors for duplicate page content and duplicate page titles in one section of my site. When I check which page has the duplicate title/content, the URL listed is exactly the same as the original. All sections are set up the same, so any ideas on why I would be getting duplication errors in just this one section, and why they would say the errors are on the same page (when I only have one copy uploaded on the server)?