Crawl Diagnostics - 350 Critical errors? But I used rel-canonical links
-
Hello Mozzers,
We launched a new website on Monday and had our first Moz crawl on 01/07/15, which came back with 350+ critical errors.
The majority of these were for duplicate content. We had a situation like this for each gym class:
GLOBAL YOGA CLASS (canonical link / master record)
- YOGA CLASS BROMLEY
- YOGA CLASS OXFORD
- YOGA CLASS GLASGOW, etc.
All of these local yoga pages had the canonical link deployed. So why is this regarded as an error by Moz?
Should I have added a robots noindex instead? Would that help?
Very scared our rankings are gonna get affected.
Ben
-
Hi Patrick,
That is super useful, thank you. I have read up as suggested on Moz and Webmaster Tools, and you are bang on: absolute rather than relative URLs should be used in canonicals.
I will ask our developers to fix it.
I wonder whether Moz will ignore this as an error in its crawl diagnostics thereafter?
Ben
-
Hi there
I would first read the duplicate content resources from both Google and Moz so you can spot-check your pages.
Also, I would read the following resource from Google, which states:

"Avoid errors: use absolute paths rather than relative paths with the rel="canonical" link element.

Use this structure:
https://www.example.com/dresses/green/greendress.html

Not this structure:
/dresses/green/greendress.html"

You are currently using the second structure in your canonical tags. Try switching to an absolute URL structure and see how that works.

Hope this helps! Good luck!
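A minimal sketch of what the developers' fix amounts to, assuming the template can see the page's own URL (the example.com URLs below are placeholders, not Ben's real site): Python's standard `urljoin` resolves a relative canonical href against the page URL, producing the absolute form Google recommends, and leaves an already-absolute href unchanged.

```python
from urllib.parse import urljoin

def absolute_canonical(page_url: str, canonical_href: str) -> str:
    """Resolve a (possibly relative) canonical href against the page's own URL."""
    return urljoin(page_url, canonical_href)

# A relative path like "/classes/yoga" becomes a fully qualified URL:
print(absolute_canonical("https://www.example.com/bromley/yoga", "/classes/yoga"))
# https://www.example.com/classes/yoga

# An already-absolute href passes through unchanged:
print(absolute_canonical("https://www.example.com/bromley/yoga",
                         "https://www.example.com/classes/yoga"))
# https://www.example.com/classes/yoga
```

The same check can be run over a crawl export to find every page whose canonical tag still emits a bare path.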