Why Doesn't Open Site Explorer Show Domain Links from Twitter, WordPress, or FeedBurner?
-
I commented on a few blog posts and wanted to see the domain authority of those sites. When I look at our domain links, the links from those blogs are not there (which I now understand takes time to show up). But I also noticed that domains that link to our website regularly were not showing up either, i.e., Twitter, WordPress, and FeedBurner. I know we have links in these locations, especially Twitter.
Why would these domains not show up?
Many thanks!
-
I don't know the reason myself, but I will ask when I'm back in the office on Monday. You know that you can filter for just followed links, right?
-
Hey Keri,
Quick question on the topic of OSE (feel free to move this to a more appropriate thread): why does OSE include nofollow links in its top inbound links?
Thanks!
-Cameron
-
Another reason is that we simply don't have a server farm the size of Google's or Bing's. We could crawl all of Twitter and nothing else, or we could crawl some of Twitter and some of the rest of the web. We aren't able to crawl the entire web, and we release a new index about once a month, which is why you don't see all of your links, or don't see them right away.
However, what we do offer that is different from Google and Bing is that we show you links for sites that are not your own, and we add metrics such as the trust and authority of each page.
-
1. How long ago were the blog comments made?
2. Has the OSE index updated since then? (The last two updates were 9/12 and 10/8.)
3. The sites on which you commented: are those pages set (via the robots meta tag) to noindex, or do they nofollow all links?
The only one I'd expect to be "dofollow" is FeedBurner, by the way.
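If you want to check the nofollow question from point 3 yourself, here is a minimal sketch, assuming Python with the requests and beautifulsoup4 packages installed (the URLs below are placeholders, not real pages):

```python
# Minimal sketch: report whether links to your domain on a page are nofollowed.
import requests
from bs4 import BeautifulSoup

def check_links(page_url, target_domain):
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.find_all("a", href=True):
        if target_domain in a["href"]:
            # BeautifulSoup returns rel as a list of tokens (or None if absent).
            rel = a.get("rel") or []
            status = "nofollow" if "nofollow" in rel else "followed"
            print(f"{status}: {a['href']}")

# Hypothetical example: a blog post you commented on, and your own domain.
check_links("https://example-blog.com/some-post/", "yourdomain.com")
```

If the comment links come back nofollowed, OSE may still report them, but they won't pass link equity.
-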
Related Questions
-
Spammy inbound links: Don't Fix It If It's Not Broken?
Hi Moz community,

Our website is nearing the end of a big redesign to be mobile-responsive. We decided to delay any major changes to text content so that if we do suffer a rankings drop upon launch, we'll have some ability to isolate the cause. In the meantime I'm analyzing our current SEO strengths and weaknesses.

There is a huge discrepancy between our rankings and our inbound link profile. Specifically, we do great on most of our targeted keywords and in fact had a decent surge in recent months. But Link Profiler turned up hundreds of pages of inbound links from spammy domains, many of which don't even display a webpage when I click there (shown in uploaded image). "Don't fix it if it's not broken" is conflicting with my natural repulsion to these sorts of referrals.

Assuming we don't suffer a rankings drop from the redesign, how much of a priority should this be? There are too many, and most are too spammy to contact the webmasters, so we'll need to do it through a disavow. I couldn't even open the one at the top of the list because our business web proxy identified it as adult content. A common conception seems to be that if Google hasn't penalized us for these links yet, they eventually will.

1. Are we talking about the algorithm just stumbling upon these links and hurting us, or would this be something we would find in Manual Actions? (Or both?)
2. How long after the launch should we wait before attacking these bad links?
3. Is there a certain spam score you'd consider the threshold for "Yes, definitely get rid of it"?
4. When we do, should we disavow one domain at a time to monitor any potential drops, or all at once? (This seems kind of obvious, but if the spam score and domain authority alone are enough of a signal that it won't hurt us, we'd rather get it done ASAP.)
5. How important is this compared to creating fresh new content on all the product pages? Each one will have new images as well as product reviews, but the product descriptions will be the same ones we've had up for years. I have new content written, but it's delayed pending any fallout from the redesign.

Thanks for any help with this!
Moz Pro | jcorbo
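For what it's worth, Google's disavow file is plain text: one domain: directive per line, with # for comments. A minimal sketch of generating one from an exported link audit, assuming Python (the input rows and the spam-score cutoff are illustrative assumptions, not actual tool output):

```python
# Sketch: build a Google disavow file from an exported link audit.
# The row structure and threshold are assumptions; the "domain:" and "#"
# syntax is Google's documented disavow-file format.
flagged = [
    {"domain": "spammy-example-1.xyz", "spam_score": 14},
    {"domain": "spammy-example-2.biz", "spam_score": 11},
    {"domain": "borderline-example.com", "spam_score": 6},
]

THRESHOLD = 8  # arbitrary cutoff for this sketch

with open("disavow.txt", "w") as f:
    f.write("# Disavow list generated from link audit\n")
    for row in flagged:
        if row["spam_score"] >= THRESHOLD:
            f.write(f"domain:{row['domain']}\n")
```

Disavowing at the domain level rather than URL by URL is usually the safer call for sites this spammy.
-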
Large site with content silos - best practice for deep indexing silo content
Thanks in advance for any advice/links/discussion. This honestly might be a scenario where we need to do some A/B testing.

We have a massive (5 million page) content silo that is the basis for our long-tail search strategy. Organic search traffic hits our individual "product" pages, and we've divided our silo with a parent category and then secondarily with a field (so we can cross-link to other content silos using the same parent/field categorizations). We don't anticipate, nor expect, top-level category pages to receive organic traffic; most people are searching for the individual/specific product (long tail). We're not trying to rank or get traffic for searches of all products in "category X," where others are competing and spending a lot (head). The intent/purpose of the site structure/taxonomy is to more easily enable bots/crawlers to get deeper into our content silos. We've built the pages for humans, but included link structure/taxonomy to assist crawlers.

So here's my question on best practices: how to handle categories with 1,000+ pages of pagination. Our most popular product categories might have hundreds of thousands of products each. My top-level hub page for a category looks like www.mysite/categoryA, and the page shows 50 products with pagination from 1 to 1,000+. Currently we're using rel=next for pagination, and pages like www.mysite/categoryA?page=6 reference themselves as canonical (not the first/top page www.mysite/categoryA).

Our goal is deep crawl/indexation of our silo. I use ScreamingFrog and the SEOmoz campaign crawl to sample (the site takes a week+ to fully crawl), and with each of these tools it "looks" like crawlers have gotten a bit "bogged down" in large categories with tons of pagination. For example, rather than crawling multiple categories or fields to reach multiple product pages, some bots will hit all 1,000 (rel=next) pages of a single category. I don't want to waste crawl budget going through 1,000 pages of a single category versus discovering/crawling more categories.

I can't seem to find a consensus on how to approach the issue. I can't have a page that lists "all"; there's just too much, so we're going to need pagination. I'm not worried about category pagination pages cannibalizing traffic, as I don't expect any (should I make pages 2-1,000 noindex and canonically reference the main/first page in the category?). Should I worry about crawlers going deep into pagination within one category versus getting to more top-level categories? Thanks!
Moz Pro | DrewProZ
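One way to see how far a crawler can get pulled into a single category's rel=next chain is to walk it yourself. A rough sketch, assuming Python with requests and beautifulsoup4 (the start URL and the hop cap are placeholders):

```python
# Sketch: follow a rel="next" pagination chain and count the hops.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def pagination_depth(start_url, max_hops=50):
    url, hops = start_url, 0
    while url and hops < max_hops:
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        hops += 1
        # Look for <link rel="next" href="..."> in the head.
        nxt = next(
            (tag for tag in soup.find_all("link", href=True)
             if "next" in (tag.get("rel") or [])),
            None,
        )
        url = urljoin(url, nxt["href"]) if nxt else None
    return hops

print(pagination_depth("https://www.mysite.example/categoryA"))
```

Every hop counted here is a fetch a bot spends inside one category instead of discovering another silo.
-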
My site's domain authority is 1. Why is that?
Hi guys, my website's domain authority is 1 no matter whether I try www or non-www. Why is that? Can you guys please help? Thanks a lot in advance.
http://www.opensiteexplorer.org/links?site=autoproject.com.au
http://www.opensiteexplorer.org/links?site=www.autoproject.com.au
Moz Pro | JazzJack
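One quick thing to rule out when the www and non-www hostnames score differently is whether they resolve to a single canonical host. A minimal sketch, assuming Python with requests (the domain is taken from the question):

```python
# Sketch: check whether www and non-www redirect to one canonical host.
import requests

for host in ("http://autoproject.com.au", "http://www.autoproject.com.au"):
    r = requests.get(host, timeout=10, allow_redirects=True)
    print(f"{host} -> {r.url} (HTTP {r.status_code})")
```

If the two versions don't 301 to one host, link metrics can end up split between them.
-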
Site Explorer shows links as followable but they have nofollow tags
Hello, I am looking at Site Explorer and the sites linking to my site moneyfact.co.uk. I've got thousands of links showing as "followable", but when I check them they have rel="nofollow" tags, e.g.: http://www.dianomioffers.co.uk/partner/moneyfacts.co.uk/brochures.epl?partner=93&partner_id=93&partner_variant_id=33 Why would they show as followable when the links are nofollowed? Thanks, Steve
Moz Pro | SteveBrumpton
-
Does Open Site Explorer purposefully not crawl some sites?
I use both SEOmoz's Open Site Explorer and Webmaster Tools to find backlinks when conducting link audits. WMT always finds more links than OSE; I understand Google's database is bigger. But what is interesting to me is that a large percentage of the links WMT finds that OSE does not are really crappy links that I don't want. That makes me wonder whether SEOmoz decides not to crawl certain low-quality sites. Just curious.
Moz Pro | ILM_Marketing
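If you export the link lists from both tools, a set difference shows exactly what WMT reports that OSE doesn't. A minimal sketch, assuming Python and that each export is a CSV with one linking URL or domain in the first column (the filenames are placeholders):

```python
# Sketch: diff two exported backlink lists.
import csv

def load_links(path):
    with open(path, newline="") as f:
        return {row[0].strip().lower() for row in csv.reader(f) if row}

wmt = load_links("wmt_links.csv")
ose = load_links("ose_links.csv")

only_in_wmt = sorted(wmt - ose)
print(f"{len(only_in_wmt)} links in WMT but not in OSE:")
for link in only_in_wmt:
    print(link)
```

Normalize both exports the same way (strip protocols and trailing slashes) before diffing, or the overlap will look smaller than it really is.
-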
In OSE "Followed Linking Root Domains" = "links from homepages"
In OSE's, "Followed Linking Root Domains" are defined as "The number of root domains that have at least one followed link to a page or domain." Does this mean that if one of my competitors has, let's say, 1000 followed linking root domains, they have a link pointing to them on the homepage of 1000 other sites? Thanks for your help!
Moz Pro | gerardoH
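Reading the definition literally, a followed link from any page on a domain counts, not just the homepage, and all pages on that domain collapse to a single root domain. A minimal sketch of that dedupe, assuming Python with the third-party tldextract package (pip install tldextract; the sample URLs are placeholders):

```python
# Sketch: count unique linking root domains from a list of linking URLs.
import tldextract  # handles multi-part TLDs like .co.uk correctly

linking_urls = [
    "https://blog.example.com/post/123",   # an inner page, not a homepage
    "https://www.example.com/about",       # same root domain as above
    "https://another-site.co.uk/resources/",
]

root_domains = {tldextract.extract(u).registered_domain for u in linking_urls}
print(len(root_domains), "linking root domains:", sorted(root_domains))
# -> 2 linking root domains: ['another-site.co.uk', 'example.com']
```
-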
How accurate is Open Site Explorer?
I noticed that for one of my sites, a large number of links do not show up in Open Site Explorer, including some of my stronger links. That being the case, how much weight can I put on using these tools to compare sites? I'm not trying to bash here, I really like the tools, but if my PA is 29 and my competitor's is 34, how much weight can I put on those numbers? Or if it says my site has 50 links from 25 domains and my competitor has 60 links from 30 domains, and those numbers obviously aren't very accurate, how much weight can I put on the comparisons? Are there better tools?
Moz Pro | MattMaresca
-
It won't let me print the second or third pages of site errors.
When I save the site errors page in PDF format, it won't let me print the second page or any of the other pages containing content about my site. Does anyone know why?
Moz Pro | ibex