"link_count" column in Crawl Diagnostics report
-
On the Crawl Diagnostics report, does "link_count" represent external links (links pointing to this URL), internal links, both, or something else?
-
Rock and roll!
Glad you got it all figured out, Glenn.
Mike
-
OK, I think I get it.
For the URL in question, the "link_count", Title, and Meta Description exactly match the custom 404 page, so it looks like there is no real page at this URL. The reason it was picked up in the crawl is that a link to it exists on the "referrer" page.
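A minimal sketch of that comparison in Python (requests + BeautifulSoup); both URLs below are made-up placeholders rather than paths confirmed in this thread:

```python
# Fetch the suspect URL and a deliberately nonexistent URL, then compare their
# <title> tags. If they match, the suspect URL is almost certainly just the
# custom 404 page. Both URLs are illustrative placeholders.
import requests
from bs4 import BeautifulSoup

def page_title(url):
    """Return the <title> text of a page, or an empty string if it has none."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return soup.title.get_text(strip=True) if soup.title else ""

suspect = page_title("http://www.teamflexo.com/home/some-suspect-page.asp")
known_missing = page_title("http://www.teamflexo.com/definitely-not-a-real-page")
print("Matches the custom 404 page" if suspect == known_missing else "Looks like a distinct page")
```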
If I get them to correct the referrer page, this should be good.
(First day using SEOmoz Pro)
Thanks much for your help!
-
The site is returning a custom 404 page with a 200 status code. That is why SEOmoz and Screaming Frog are reporting a 200.
You need to configure that page to return a 404 status code, or fix the page.
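One quick way to confirm this is to request a path that definitely should not exist and look at the status code the server sends back. A minimal sketch in Python (the path below is a made-up example, used only to trigger the site's error page):

```python
# Check what status code the server actually sends for a page that should not
# exist. A "soft 404" shows 404-style content but answers with 200, which is
# why SEOmoz and Screaming Frog report the page as fine.
import requests

resp = requests.get(
    "http://www.teamflexo.com/this-page-should-not-exist",  # hypothetical missing path
    allow_redirects=True,
    timeout=10,
)
print(resp.status_code)  # 200 = soft 404; once the error page is fixed, this should print 404
```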
This article will hopefully shed some light on your situation.
Mike
-
Could you give an example of a URL that goes to a 404?
Edit: Never mind, I see it above.
-
Mike - Yep, Screaming Frog also shows a 200 status code, so I have to assume the page exists, although I'm not sure why I'm directed to a 404 page...
So basically, I think you and George answered my original question: "link_count" represents the links on a page pointing to other internal and external pages.
I would still appreciate any thoughts on why I'm ending up on a 404 page, though...
-
No, Mike. This is a client's site. An example of these URLs is http://www.teamflexo.com/home/contact_us.asp, which shows a link count of 43.
Good thought, though; I'll take a look at this in Screaming Frog.
-
Are we talking about your gfwebsoft website that you have listed in your profile?
Using Screaming Frog, the only 404 status codes I am seeing come from links on the homepage, contact, costs, about, testimonials, and services pages that point to your Facebook page.
Do you have specific URLs you can share that are 404ing?
Mike
-
If these are on-page links, then I have another question...
I had originally assumed that if a page showed up in Crawl Diagnostics, it must actually exist (as opposed to being a URL in a backlink somewhere). But there are several URLs showing a "link_count" of 40+ that go straight to a 404 page when you visit them, even though the "http_status_code" in the diagnostics report shows 200.
Any theories that could help me understand this?
Tx, Glenn
-
It refers to the number of followed links on the page pointing to other pages on your site or other sites.
Source: Using MozBar to compare numbers.
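For anyone who wants to sanity-check that definition, here is a rough sketch in Python (requests + BeautifulSoup) that counts followed links on a page and splits them into internal and external. It is only an approximation, not Moz's actual counting logic, and uses the contact_us.asp URL mentioned earlier in the thread as an example:

```python
# Rough approximation of "link_count": count followed <a href> links on a page
# and split them into internal vs. external.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

page_url = "http://www.teamflexo.com/home/contact_us.asp"  # example URL from this thread
soup = BeautifulSoup(requests.get(page_url, timeout=10).text, "html.parser")
page_host = urlparse(page_url).netloc

internal = external = 0
for a in soup.find_all("a", href=True):
    if "nofollow" in (a.get("rel") or []):
        continue  # skip nofollowed links; link_count appears to count followed links only
    target = urljoin(page_url, a["href"])
    parsed = urlparse(target)
    if parsed.scheme not in ("http", "https"):
        continue  # ignore mailto:, javascript:, etc.
    if parsed.netloc == page_host:
        internal += 1
    else:
        external += 1

print(f"internal: {internal}, external: {external}, total: {internal + external}")
```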
-
Hi Glenn,
It looks like those numbers represent the number of hyperlinks (internal and external) on that specific page.
I was able to validate this by looking at pages with a link_count of 100+ and verifying the same numbers in the Too Many On-Page Links report in SEOmoz Crawl Diagnostics.
Hope this helps.
Mike