What's the difference between Open Site Explorer's Linking Root Domains and the Basic SERP Report's Linking Root Domains?
-
Why do Open Site Explorer and the Basic SERP Report show different Linking Root Domain counts for the same website? For any given URL, Open Site Explorer reports one number of linking root domains and the Basic SERP Report shows another. Which one is correct, and why do they differ?
-
Hi there!
Thanks for reaching out! My name is Erin, and I'm on the Help Team here at Moz. In order to help you troubleshoot this, I'm going to need a little more information. Can you email us at help@moz.com and let us know the domain you're referring to? Once we have that, we'll be able to look into this further!
Cheers,
Erin
Got a burning SEO question?
Subscribe to Moz Pro to gain full access to Q&A, answer questions, and ask your own.
Browse Questions
Explore more categories
-
Moz Tools
Chat with the community about the Moz tools.
-
SEO Tactics
Discuss the SEO process with fellow marketers
-
Community
Discuss industry events, jobs, and news!
-
Digital Marketing
Chat about tactics outside of SEO
-
Research & Trends
Dive into research and trends in the search industry.
-
Support
Connect on product support and feature requests.
Related Questions
-
Large site with content silos - best practice for deep indexing silo content
Thanks in advance for any advice/links/discussion. This honestly might be a scenario where we need to do some A/B testing. We have a massive (5 million page) content silo that is the basis for our long-tail search strategy. Organic search traffic hits our individual "product" pages, and we've divided our silo with a parent category and then secondarily with a field (so we can cross-link to other content silos using the same parent/field categorizations). We don't anticipate, nor expect, top-level category pages to receive organic traffic - most people are searching for the individual/specific product (long tail). We're not trying to rank or get traffic for searches of all products in "category X"; others are competing and spending a lot in that area (head). The intent of the site structure/taxonomy is to make it easier for bots/crawlers to get deeper into our content silos. We've built the pages for humans, but included the link structure/taxonomy to assist crawlers.
So here's my question on best practices: how to handle categories with 1,000+ pages of pagination. In our most popular product categories, there might be hundreds of thousands of products in a single category. My top-level hub page for a category looks like www.mysite/categoryA, and that page shows 50 products with pagination from 1 to 1,000+. Currently we're using rel=next for pagination, and pages like www.mysite/categoryA?page=6 reference themselves as canonical (not the first/top page, www.mysite/categoryA). Our goal is deep crawling/indexation of our silo. I use Screaming Frog and the SEOmoz campaign crawl to sample (the site takes a week-plus to fully crawl), and with each of these tools it "looks" like crawlers have gotten a bit "bogged down" in large categories with tons of pagination. For example, rather than crawling multiple categories or fields to reach multiple product pages, some bots will hit all 1,000 (rel=next) pages of a single category.
I don't want to waste crawl budget going through 1,000 pages of a single category instead of discovering/crawling more categories. I can't seem to find a consensus on how to approach the issue. I can't have a page that lists "all" - there's just too much - so we're going to need pagination. I'm not worried about category pagination pages cannibalizing traffic, as I don't expect any (should I make pages 2-1,000 noindex and canonically reference the main/first page in the category?). Should I worry about crawlers going deep into the pagination of one category versus getting to more top-level categories? Thanks!
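As a sketch of the pagination setup the question describes - rel=next/prev links between paginated pages, with each page referencing itself as canonical - the `<head>` of a middle page might look like this (the URLs and page numbers are hypothetical, taken from the example in the question):

```html
<!-- Hypothetical <head> markup for www.mysite/categoryA?page=6,
     per the setup described in the question: the paginated page
     is its own canonical, and rel=prev/next point at its neighbors. -->
<link rel="canonical" href="https://www.mysite/categoryA?page=6">
<link rel="prev" href="https://www.mysite/categoryA?page=5">
<link rel="next" href="https://www.mysite/categoryA?page=7">
```

The alternative the asker floats - noindex on pages 2-1,000 with a canonical pointing at www.mysite/categoryA - would swap the `canonical` href for the first page and add a `<meta name="robots" content="noindex">` tag, at the cost of the deep-crawl paths those pages provide.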
Moz Pro | | DrewProZ1 -
'Too many on-page links': doesn't add up...
Hi All, I'm fairly new to the site. I've had a full site crawl and am somewhat confused by the large number of pages (82) reported as having 'too many on-page links'. Mine is a new e-commerce site selling a project management methodology. I have about 80 pages that describe individual templates and contain internal links to additional, related content on the site. The crawl warnings define 'too many on-page links' as roughly 100 "on any given page", yet almost none of the flagged pages contain more than 20-30 links. Any thoughts and ideas about (a) the degree to which this will affect my ranking in practice and (b) how to resolve it gratefully received! Thanks in advance, Felix
Moz Pro | | RomanCat0 -
What's the future of SERP Tracking? And... Is SEOMoz's SERP Rank Tracking in compliance with Google Adwords API Terms of Service?
My question is: is SEOMoz's SERP Rank Tracking in compliance with the Google AdWords API Terms of Service? Background: the reason I ask is that Raven Tools is now removing their SERP reporting tool because it uses scraped Google position data. So it looks like SEOs will either have to find a new rank tracking tool or find new ways to track the effects that rankings have on a website's traffic volumes. For instance, there is a way to get the position a search result was in on Google when it was clicked. We could create a secondary profile in Google Analytics for each client and use a custom filter to record the position the keyword was in when the search result was clicked ( http://www.seomoz.org/blog/show-keyword-position-using-filters-and-advanced-segments ). Or perhaps we'll have to use Google Webmaster Tools' SEO report to get the data somehow ( http://support.google.com/analytics/bin/answer.py?hl=en&answer=1308626 ). What are your thoughts on this? As you know, ranking data is still a great way to show clients whether they are gaining or losing visibility in the search engines, and it helps SEOs report how effective their efforts have been. Because other ranking software companies use AdWords API data to show the keyword search volume and advertiser competition for a keyword, they cannot (or eventually will not be able to) use scraped ranking data any more. But if another rank tracking tool doesn't need to comply with the AdWords API TOS because it doesn't use that API to show search volume and advertiser competition, it can still technically provide its ranking data without violating any TOS, right? I'm just trying to understand the best way to continue reporting the impact of organic keyword rankings on a website. Does the SEOMoz SERP Tracker comply with the AdWords API TOS? Is there another rank tracking tool out there that is already using Average Position data from the GWT SEO report?
Should we all just stop reporting rankings to clients altogether? Scott
Moz Pro | | OrionGroup2 -
Can't find backlinks shown in report.
I ran an advanced report to show me all the backlinks pointing to a domain. When I go to many of the domains listed, I can't find the link. I've searched the pages by anchor text in the browser and nothing comes up. Anyone know why this would be?
Moz Pro | | PatioLifeStyle0 -
Here's a hard one! Why isn't my profile picture displaying as an avatar in forums and blogs?
My profile pic is updated and displays on my profile but I still have the default avatar beside all forum and blog posts... This is life or death... I must know the answer. 😉 Thanks guys.
Moz Pro | | Anthony_NorthSEO0 -
BOTW links not recognized by Open Site Explorer
Hi there, I'm considering buying a submission to the Best of the Web directory (while waiting for the new directory list promised by the SEOMoz team 🙂 ). But when I went to the BOTW category that fits my website, I took some links already listed there and put them into Open Site Explorer to see their value, and to my surprise they weren't even recognized... So I'm still wondering whether it's worth it... voilà, if anybody knows whether this directory still has value...
Moz Pro | | thuraminho750 -
How are our competitors getting these inbound linking domains?
I'm currently managing SEO for my company's website, and I'm getting into link building for the first time. As part of the process, I'm using Open Site Explorer to see who's linking into our competitor sites, to get a better sense of what's available to us in our particular avenue of e-commerce. However, I'm finding that our competitors are getting inbound links from high-authority sites pretty far afield from selling jewelry - census.gov, parallels.com, warnerbros.com, and others. I try clicking through to these links, but each link starts a download of a file. I've seen .f4v, .7z, and .apk files listed as inbound links to our competitor. How is this happening? Again, I'm new to link building, so there may be a simple answer here, and if so I apologize for asking. However, this seems really strange to me, and a difficult situation to confront.
Moz Pro | | jozaksut0 -
Is having the company's address in the footer (or header) of each webpage important for SEO?
Is having the company/office's address in the footer (or header) of each webpage important for SEO? Your page for the Geotarget tool says that having the address in this element helps search engines find your location. My question is, how important or relevant is this to SEO? How does knowing the address influence SEO? Is it best SEO practice to put the address in the footer of every webpage? http://www.seomoz.org/geotarget
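The "element" the Geotarget page refers to is simply the address appearing in the page's HTML, typically in the site-wide footer. As a minimal sketch of what that looks like (the company details below are hypothetical placeholders, not from the question), HTML's own `<address>` element is a natural fit:

```html
<!-- Hypothetical site-wide footer; the business details are placeholders. -->
<footer>
  <address>
    Example Co. &middot; 123 Main Street, Springfield, ST 12345 &middot; (555) 010-0000
  </address>
</footer>
```

Because a footer like this repeats on every page, a crawler sees a consistent physical location for the site; the question of how much ranking weight that carries is exactly what the asker is trying to pin down.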
Moz Pro | | richardstrange0