Why does Moz only index some of the links?
-
Hello everyone,
I've been using Moz Pro for a while and have found a lot of backlink opportunities while checking my competitors' backlink profiles.
I'm building links the same way my competitors do, but Moz doesn't see and index most of them, maybe only 10%. My backlinks commonly come from sites with 80+ and 90+ DA, like GitHub, Pinterest, Tripadvisor, and so on. The strange thing is that the 10% it does index are almost all from .edu sites with high DA. I go to .edu sites and leave a comment, and in many cases Moz indexes it in just 2-3 days! With maybe just 10 links like this, my DA increased from 15 to 19 in less than a month!
So, how does this "SEO tool" work? Is there any way to force it to crawl a page?
-
Why Does Moz Only Index Some Links?
From my experience, Moz might only index some links on a site due to several factors such as crawl budget, site structure, or content quality. I recently dealt with a similar issue on themepcobill.com. Here’s how I approached and solved the problem:
Crawl Budget Optimization: Moz, like other search engines, allocates a specific crawl budget to each site. Ensuring that the most important pages are easily accessible and linked from the homepage helps in better indexing. For my site, I audited the internal linking structure to ensure that important pages weren't buried deep within the site.
Fixing Technical Issues: I used tools like Google Search Console and Moz’s Site Crawl to identify and fix technical issues such as broken links, duplicate content, and slow loading pages. Ensuring that the site's XML sitemap was up-to-date and submitted to Moz also helped improve crawl efficiency.
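As an aside, keeping the sitemap current is easy to automate. Here's a minimal sketch using only Python's standard library; the URLs are placeholders, not real pages from this thread:

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Build a minimal sitemap.xml string from a list of page URLs."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for page in urls:
        url_el = ET.SubElement(urlset, "url")
        # <loc> is the only required child element per the sitemaps.org protocol
        ET.SubElement(url_el, "loc").text = page
    return ET.tostring(urlset, encoding="unicode")

# Placeholder URLs for illustration
print(build_sitemap(["https://example.com/", "https://example.com/about"]))
```

Write the result to /sitemap.xml on your server and reference it from robots.txt so crawlers can find it.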
Quality Content: High-quality, unique content tends to get indexed more reliably. I reviewed the content on Mepco Bill Page to ensure it was engaging and provided value to visitors. Updating old content and adding new, relevant information also made a significant difference.
Backlink Profile: Having a strong backlink profile helps in better indexing. I worked on acquiring high-quality backlinks, which in turn improved its overall visibility and indexing rates on Moz.
By focusing on these key areas, I was able to significantly improve the number of pages Moz indexed from my site. Regular monitoring and adjustments are crucial to maintaining and further improving the indexing status.
If you’re facing similar issues, I recommend starting with a comprehensive audit of your site’s structure, content, and backlinks. Addressing these areas can greatly enhance Moz’s ability to index your site effectively.
-
I also see the same thing on the USA Blogger Book website. Our site has good reference-source links, but they are not shown when exploring or checking total links in Moz.
It may be a crawl delay, or the data may not have been updated recently. For now we're just waiting for a recrawl, since there is no other option.
-
@seogod123234 Moz Pro, like other SEO tools, has its own methods and algorithms for crawling and indexing backlinks. Here's an explanation of how Moz's backlink indexing works and why you might be seeing the discrepancies you mentioned:
How Moz Pro Crawls and Indexes Backlinks
-
Crawling Frequency and Depth:
- Moz does not crawl the entire web as frequently or as deeply as search engines like Google. This means that it might not discover all the backlinks that exist.
- High-authority sites (like those with DA 80+ you mentioned) may still not be crawled frequently if the specific pages where your backlinks exist are not high on Moz's priority list.
-
Link Discovery:
- Moz prioritizes discovering links that are on pages it already knows about and considers important. EDU sites often fall into this category due to their high trust and authority, which is why your links from these sites are indexed more quickly.
-
Indexing Priorities:
- Moz’s index might prioritize certain types of links. EDU sites generally have high trust, and links from these sites are considered high-quality, which could explain why Moz indexes them more reliably and quickly.
-
Link Verification and Quality:
- Moz might have algorithms to verify the quality of links before indexing them. Links from user-generated content on high DA sites (like comments on GitHub or Pinterest) might be seen as less valuable compared to editorial links or contextual backlinks.
Reasons for Low Indexing Rate of High DA Site Links
- User-Generated Content: Links in user-generated content (e.g., comments) on high DA sites might be indexed less frequently.
- Nofollow Links: Many high DA sites apply a rel="nofollow" attribute to user-generated links, which may cause Moz to deemphasize or not index these links.
- Page Priority: Specific pages on high DA sites might not be considered important by Moz’s crawlers if they are deep in the site’s structure or receive low traffic.
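You can verify the nofollow point yourself before counting a link as a real opportunity. A quick sketch using only Python's standard-library HTML parser (the sample HTML is invented for illustration; in practice you'd feed it the fetched page source):

```python
from html.parser import HTMLParser

class NofollowChecker(HTMLParser):
    """Collect (href, is_nofollow) for every <a> tag fed to the parser."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        # rel may hold several space-separated tokens, e.g. "nofollow ugc"
        rel = (attrs.get("rel") or "").lower().split()
        self.links.append((attrs.get("href"), "nofollow" in rel))

# Stand-in for a fetched page
html = '<a href="/editorial">story</a> <a rel="nofollow ugc" href="/comment">me</a>'
checker = NofollowChecker()
checker.feed(html)
print(checker.links)  # [('/editorial', False), ('/comment', True)]
```

Links that come back flagged as nofollow are the ones most likely to be deemphasized or skipped by third-party indexes.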
How to Improve Backlink Indexing in Moz Pro
-
Submit URL for Crawling:
- While Moz does not offer a direct way to submit URLs for crawling, you can use the Moz Link Explorer tool to analyze a URL, which might prompt Moz’s crawlers to check it.
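- Moz also exposes a Links API for programmatic lookups. The sketch below only assembles the request body and auth header for a URL-metrics call; the endpoint path, field names, and credentials are assumptions based on memory of the public v2 docs and should be verified before use:

```python
import base64
import json

# Assumed endpoint; confirm against the current Moz Links API documentation
API_ENDPOINT = "https://lsapi.seomoz.com/v2/url_metrics"

def build_request(targets, access_id, secret_key):
    """Assemble the JSON body and Basic-auth header for a Moz metrics lookup."""
    token = base64.b64encode(f"{access_id}:{secret_key}".encode()).decode()
    headers = {"Authorization": f"Basic {token}",
               "Content-Type": "application/json"}
    body = json.dumps({"targets": targets})
    return headers, body

# Placeholder credentials; send with any HTTP client,
# e.g. requests.post(API_ENDPOINT, headers=headers, data=body)
headers, body = build_request(["moz.com"], "mozscape-XXXX", "SECRET")
print(body)  # {"targets": ["moz.com"]}
```
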
-
Quality and Context of Backlinks:
- Focus on acquiring backlinks that are within the main content of pages rather than in comments or user-generated sections. These are more likely to be indexed and valued by Moz.
-
Use Moz’s Link Intersect Tool:
- This tool helps find sites that link to your competitors but not to you. Acquiring backlinks from these sites might increase the likelihood of them being indexed by Moz.
-
Diversify Backlink Sources:
- Try to get backlinks from a variety of high-authority sites rather than relying heavily on user-generated content on a few high DA sites.
Understanding SEO Tool Limitations
- Third-Party Tools: Remember that Moz, Ahrefs, SEMrush, etc., are third-party tools with their own limitations and might not reflect real-time data as accurately as Google Search Console.
- Google Search Console: For the most accurate representation of your backlinks, use Google Search Console, as it shows the backlinks Google has discovered and indexed.
While there is no guaranteed way to force Moz to crawl specific pages, focusing on high-quality, diverse backlinks and utilizing Moz’s tools effectively can help improve the indexing rate of your backlinks.