Internal Link Counts in SEOMoz Report?
-
Hi,
We ran a site diagnostic and it came back with thousands of pages that have more than 100 internal links on a page; however, the actual number of links on those pages seems to be far lower than what was reported. Any ideas?
Thanks!
Phil
UPDATE:
So we've looked at the source code and realized that for each product we link to the product page in multiple ways: from the product image, the product title, and the price. So we have three internal links to the same page from each product listing, which the SEOMoz crawler counts as hundreds of links on each page.
But in terms of Googlebot, is this as egregious as having hundreds of links to different pages, or does it not matter as much?
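For anyone who wants to sanity-check a report like this themselves, here's a rough sketch (Python with the requests and beautifulsoup4 packages, and a placeholder URL rather than our real site) that compares the total number of internal links on a page with the number of unique pages they actually point to:

```python
# Rough sketch: compare total internal links vs. unique internal targets on one page.
# Assumes requests and beautifulsoup4 are installed; the URL below is a placeholder.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

page_url = "https://www.example.com/category-page"  # placeholder, not a real URL
html = requests.get(page_url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

site_host = urlparse(page_url).netloc
internal_links = []
for a in soup.find_all("a", href=True):
    target = urljoin(page_url, a["href"])      # resolve relative hrefs
    if urlparse(target).netloc == site_host:   # keep only same-site links
        internal_links.append(target)

print("Total internal links on the page:", len(internal_links))
print("Unique internal pages linked to: ", len(set(internal_links)))
```

If the first number is several times the second, the crawler is simply counting every anchor tag (image, title, and price) rather than every distinct page.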
-
Hi Vinnie,
Google only counts the first link with regard to anchor text. There appear to be ways around this if you care to pursue them, but for the average person, yes, only the first link counts:
http://www.seomoz.org/ugc/3-ways-to-avoid-the-first-link-counts-rule
-
I asked this same question a few weeks back, and the answer I got is that Google and other engines should only be counting the first link as far as the number of links on a page goes. That makes me curious, though, about anchor text and which link gets counted first. So you may link to a page 5 times on a single page, but only the first link is looked at. Can anyone back this up? I couldn't find any official stance on it, but it makes sense for the exact reason above. Lots of sites are redundant with image linking and text linking, as well as side-nav linking.
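To see which anchor text would presumably count under that rule, one quick check is to walk the page in document order and keep only the first anchor found for each target URL. A minimal sketch, assuming the same placeholder URL and libraries as above:

```python
# Rough sketch: for each linked URL, record the anchor text of the first link to it
# in document order, which is what the "first link counts" idea suggests matters.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

page_url = "https://www.example.com/category-page"  # placeholder URL
soup = BeautifulSoup(requests.get(page_url, timeout=10).text, "html.parser")

first_anchor_text = {}  # target URL -> anchor text of the first link to it
for a in soup.find_all("a", href=True):
    target = urljoin(page_url, a["href"])
    if target not in first_anchor_text:
        first_anchor_text[target] = a.get_text(strip=True) or "[image or empty anchor]"

for target, text in sorted(first_anchor_text.items()):
    print(f"{text!r} -> {target}")
```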
-
Are you counting links in drop-down menus and side navigation? I think they are counted in the Site Diagnostic.
-
Have you viewed the source of those pages? Looked for rogue links, or URLs embedded in scripts you thought were hidden from search bots? There are all sorts of possible reasons for the reading, and they become clear when you examine the source (viewed as Googlebot sees it).
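If it helps, here's one way to do that check, a rough sketch (again Python with requests and BeautifulSoup, and a placeholder URL) that fetches the page with a Googlebot-style User-Agent and lists both the anchor tags in the raw source and any URL-like strings sitting inside inline scripts:

```python
# Rough sketch: fetch the page the way a bot would and look for links you may not
# have expected, including URLs embedded in inline <script> blocks.
import re
import requests
from bs4 import BeautifulSoup

page_url = "https://www.example.com/category-page"  # placeholder URL
headers = {
    # Googlebot's published User-Agent string; the server may serve different HTML to bots.
    "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
}
html = requests.get(page_url, headers=headers, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

anchors = [a["href"] for a in soup.find_all("a", href=True)]
print(f"{len(anchors)} <a href> links in the raw source")

script_urls = []
for script in soup.find_all("script"):
    script_urls += re.findall(r"""https?://[^\s"'<>]+""", script.get_text())
print(f"{len(script_urls)} URL-like strings inside inline scripts")
```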