Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Why does Moz only index some of the links?
-
Hello everyone,
I've been using Moz Pro for a while and found a lot of backlink opportunities by checking my competitors' backlink profiles.
I'm building links the same way my competitors do, but Moz doesn't see and index most of them - maybe it indexes just 10% of them. My backlinks commonly come from sites with 80+ and 90+ DA, like GitHub, Pinterest and Tripadvisor. The strange point is that the 10% Moz does index are almost all from EDU sites with high DA. I go to EDU sites and place a comment, and in lots of cases Moz indexes them in just 2-3 days! With maybe just 10 links like this, my DA increased from 15 to 19 in less than one month!
So, how does this "SEO TOOL" work? Is there any way to force it to crawl a page?
-
Why Does Moz Only Index Some Links?
From my experience, Moz might only index some links on a site due to several factors, such as crawl budget, site structure, or content quality. I recently dealt with a similar issue on themepcobill.com. Here's how I approached and solved the problem:
Crawl Budget Optimization: Moz, like other search engines, allocates a specific crawl budget to each site. Ensuring that the most important pages are easily accessible and linked from the homepage helps in better indexing. For my site, I audited the internal linking structure to ensure that important pages weren't buried deep within the site.
Fixing Technical Issues: I used tools like Google Search Console and Moz’s Site Crawl to identify and fix technical issues such as broken links, duplicate content, and slow loading pages. Ensuring that the site's XML sitemap was up-to-date and submitted to Moz also helped improve crawl efficiency.
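For reference, a minimal XML sitemap following the sitemaps.org protocol looks like the sketch below; the URL and date are placeholders, not from my site:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want crawled -->
  <url>
    <loc>https://www.example.com/important-page/</loc>
    <lastmod>2024-06-01</lastmod>
  </url>
</urlset>
```

Most CMSs and SEO plugins generate this automatically; the main thing to verify is that every page you care about is listed and that the file is referenced from robots.txt or submitted in Search Console.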
Quality Content: High-quality, unique content tends to get indexed more reliably. I reviewed the content on Mepco Bill Page to ensure it was engaging and provided value to visitors. Updating old content and adding new, relevant information also made a significant difference.
Backlink Profile: Having a strong backlink profile helps in better indexing. I worked on acquiring high-quality backlinks, which in turn improved its overall visibility and indexing rates on Moz.
By focusing on these key areas, I was able to significantly improve the number of pages Moz indexed from my site. Regular monitoring and adjustments are crucial to maintaining and further improving the indexing status.
If you’re facing similar issues, I recommend starting with a comprehensive audit of your site’s structure, content, and backlinks. Addressing these areas can greatly enhance Moz’s ability to index your site effectively.
-
I also see the same thing with the USA Blogger Book website. Our website has good reference source links, but they are not shown when exploring or checking total links in Moz.
Maybe there is a delay in crawling, or the data has not been updated recently. For now we are just waiting for a recrawl, because there is no other option.
-
@seogod123234 Moz Pro, like other SEO tools, has its own methods and algorithms for crawling and indexing backlinks. Here's an explanation of how Moz's backlink indexing works and why you might be seeing the discrepancies you mentioned:
How Moz Pro Crawls and Indexes Backlinks
-
Crawling Frequency and Depth:
- Moz does not crawl the entire web as frequently or as deeply as search engines like Google. This means that it might not discover all the backlinks that exist.
- High-authority sites (like those with DA 80+ you mentioned) may still not be crawled frequently if the specific pages where your backlinks exist are not high on Moz's priority list.
-
Link Discovery:
- Moz prioritizes discovering links on pages it already knows about and considers important. EDU sites often fall into this category due to their high trust and authority, which is why your links from these sites are indexed more quickly.
-
Indexing Priorities:
- Moz’s index might prioritize certain types of links. EDU sites generally have high trust, and links from these sites are considered high-quality, which could explain why Moz indexes them more reliably and quickly.
-
Link Verification and Quality:
- Moz might have algorithms to verify the quality of links before indexing them. Links from user-generated content on high DA sites (like comments on GitHub or Pinterest) might be seen as less valuable compared to editorial links or contextual backlinks.
Reasons for Low Indexing Rate of High DA Site Links
- User-Generated Content: Links in user-generated content (e.g., comments) on high DA sites might be indexed less frequently.
- No-Follow Links: Many high DA sites apply a no-follow attribute to user-generated links, which may cause Moz to deemphasize or not index these links.
- Page Priority: Specific pages on high DA sites might not be considered important by Moz’s crawlers if they are deep in the site’s structure or receive low traffic.
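You can check for yourself whether the links you are placing carry a nofollow (or ugc) attribute, since that is often the difference between a comment link that gets counted and one that doesn't. A small sketch using only the Python standard library; the HTML snippet is a made-up example, not a real page:

```python
from html.parser import HTMLParser

class NofollowChecker(HTMLParser):
    """Collects each <a> link and whether it carries rel="nofollow" or rel="ugc"."""
    def __init__(self):
        super().__init__()
        self.links = []  # list of (href, is_nofollow) tuples

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href", "")
        rel = (attrs.get("rel") or "").lower().split()
        self.links.append((href, "nofollow" in rel or "ugc" in rel))

# Hypothetical page fragment: one editorial link, one user-generated comment link
html = '''
<p>Editorial link: <a href="https://example.com/guide">guide</a></p>
<p>Comment link: <a href="https://example.com/spam" rel="nofollow ugc">me</a></p>
'''
checker = NofollowChecker()
checker.feed(html)
for href, nofollow in checker.links:
    print(href, "-> nofollow/ugc" if nofollow else "-> followed")
```

In practice you would fetch the page where your comment sits and run its HTML through something like this; if the link comes back as nofollow/ugc, a link index deprioritising it is expected behaviour.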
How to Improve Backlink Indexing in Moz Pro
-
Submit URL for Crawling:
- While Moz does not offer a direct way to submit URLs for crawling, you can use the Moz Link Explorer tool to analyze a URL, which might prompt Moz’s crawlers to check it.
-
Quality and Context of Backlinks:
- Focus on acquiring backlinks that are within the main content of pages rather than in comments or user-generated sections. These are more likely to be indexed and valued by Moz.
-
Use Moz’s Link Intersect Tool:
- This tool helps find sites that link to your competitors but not to you. Acquiring backlinks from these sites might increase the likelihood of them being indexed by Moz.
-
Diversify Backlink Sources:
- Try to get backlinks from a variety of high-authority sites rather than relying heavily on user-generated content on a few high DA sites.
Understanding SEO Tool Limitations
- Third-Party Tools: Remember that Moz, Ahrefs, SEMrush, etc., are third-party tools with their own limitations and might not reflect real-time data as accurately as Google Search Console.
- Google Search Console: For the most accurate representation of your backlinks, use Google Search Console, as it shows the backlinks Google has discovered and indexed.
While there is no guaranteed way to force Moz to crawl specific pages, focusing on high-quality, diverse backlinks and utilizing Moz’s tools effectively can help improve the indexing rate of your backlinks.
-
Related Questions
-
The particular page cannot be indexed by Google
Hello, Smart People!
On-Page Optimization | | Viktoriia1805
We need help solving the problem with Google indexing.
All other pages of our website are crawled and indexed. Every page, including the one mentioned, meets Google's requirements and can be indexed. However, only this page is still not indexed.
Robots.txt is not blocking it.
We do not have a "nofollow" tag.
We have it in the sitemap file.
We have internal links for this page from indexed pages.
We requested indexing many times, and it is still grey.
The page was established one year ago.
We are open to any suggestions or guidance you may have. What else can we do to expedite the indexing process?
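One quick sanity check that can be scripted is whether robots.txt really allows the page for the crawler in question. A minimal sketch using Python's standard `urllib.robotparser`; the rules and URLs below are placeholders, swap in your real robots.txt and page URL:

```python
from urllib import robotparser

# Hypothetical robots.txt content; in practice, fetch your site's actual file.
robots_txt = """
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# can_fetch(useragent, url) -> True if the URL is crawlable for that agent
print(rp.can_fetch("Googlebot", "https://www.example.com/some-page/"))  # allowed
print(rp.can_fetch("Googlebot", "https://www.example.com/private/x"))   # blocked
```

If this comes back True for the stuck URL, robots.txt can be ruled out and the problem is more likely on the quality/priority side of Google's indexing pipeline.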
Page Indexing without content
Hello. I have a problem with a page indexing without content. I have a website in 3 different languages; 2 of the language versions are indexing just fine, but one language version (the most important one) is indexing without content. When searching using site:, the page comes up, but when searching for unique keywords for which I should rank 100%, nothing comes up. This page was indexing just fine, and the problem arose a couple of days ago after the Google update finished. Looking further, the problem is language-related: every page in the given language that is newly indexed has this problem, while pages that were last crawled around one week ago are just fine. Has anyone run into this type of problem?
Technical SEO | | AtuliSulava
GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2
The whole website moved to the https://www. HTTP/2 version 3 years ago. When we review log files, it is clear that, for the home page, GoogleBot continues to access only via the HTTP/1.1 protocol.
- Robots file is correct (simply allowing all and referring to the https://www. sitemap)
- Sitemap is referencing https://www. pages, including the homepage
- Hosting provider has confirmed the server is correctly configured to support HTTP/2 and provided evidence of access via HTTP/2 working
- 301 redirects are set up for the non-secure and non-www versions of the website, all to the https://www. version
- Not using a CDN or proxy
- GSC reports the home page as correctly indexed (with the https://www. version canonicalised) but does still have the non-secure version of the website as the referring page in the Discovery section
- GSC also reports the homepage as being crawled every day or so

We totally understand it can take time to update the index, but we are at a complete loss to understand why GoogleBot continues to go through HTTP/1.1 only, not HTTP/2. A possibly related issue - and of course what is causing concern - is that new pages of the site seem to index and perform well in the SERPs... except the home page. This never makes it to page 1 (other than for the brand name) despite rating multiple times higher in terms of content, speed etc. than other pages, which still get indexed in preference to the home page. Any thoughts, further tests, ideas, direction or anything will be much appreciated!
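Since the log files are the primary evidence here, one way to quantify the issue is to count which protocol versions Googlebot requests are using. A rough sketch in Python; the sample log lines are invented, and the regex assumes common/combined log format, so adapt both to your actual logs:

```python
import re
from collections import Counter

# Invented sample access-log lines; in practice, read your real log file.
log_lines = [
    '66.249.66.1 - - [01/Jun/2024:10:00:00 +0000] "GET / HTTP/1.1" 200 5123 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [01/Jun/2024:10:05:00 +0000] "GET /page HTTP/2.0" 200 4821 "-" "Googlebot/2.1"',
    '203.0.113.9 - - [01/Jun/2024:10:06:00 +0000] "GET / HTTP/2.0" 200 5123 "-" "Mozilla/5.0"',
]

protocols = Counter()
for line in log_lines:
    if "Googlebot" not in line:   # only count Googlebot hits
        continue
    m = re.search(r'"[A-Z]+ \S+ (HTTP/[\d.]+)"', line)
    if m:
        protocols[m.group(1)] += 1

print(protocols)
```

Running this over a few weeks of logs (and splitting by URL path) would show whether HTTP/1.1 is really home-page-specific or affects other pages too. Note also that Googlebot choosing HTTP/1.1 is not known to affect rankings, only crawl efficiency, so the home-page ranking issue may well be unrelated.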
Technical SEO | | AKCAC
How to index e-commerce marketplace product pages
Hello! We are an online marketplace that submitted our sitemap through Google Search Console 2 weeks ago. Although the sitemap was submitted successfully, out of ~10,000 links (we have ~10,000 product pages), only 25 have been indexed. I've attached images of the reasons given for not indexing (attachments: gsc-dashboard-1, gsc-dashboard-2). How would we go about fixing this?
Technical SEO | | fbcosta
Dynamic Canonical Tag for Search Results Filtering Page
Hi everyone, I run a website in the travel industry where most users land on a location page (e.g. domain.com/product/location) before performing a search by selecting dates and times. This then takes them to a pre-filtered dynamic search results page with options for their selected location on a separate URL (e.g. /book/results). The /book/results page can only be accessed on our website by performing a search, and URLs with search parameters from this page have never been indexed in the past.

We work with some large partners who use our booking engine and who have recently started linking to these pre-filtered search results pages. This is not being done on a large scale, and at present we only have a couple of hundred of these search results pages indexed. I could easily add a noindex or self-referencing canonical tag to the /book/results page to remove them; however, it's been suggested that adding a dynamic canonical tag to our pre-filtered results pages pointing to the location page (based on the location information in the query string) could be beneficial for the SEO of our location pages. This makes sense, as the partner websites that link to our /book/results page are very high authority, and any way this could be passed to our location pages (which are our most important in terms of rankings) sounds good. However, I have a couple of concerns:

- Is using a dynamic canonical tag in this way considered spammy / manipulative?
- Whilst all the content that appears on the pre-filtered /book/results page is present on the static location page where the search initiates (and which the canonical tag would point to), it is presented differently, and there is a lot more content on the static location page that isn't present on the /book/results page. Is this likely to see the canonical tag being ignored / link equity not being passed as hoped, and are there greater risks to this that I should be worried about?

I can't find many examples of other sites where this has been implemented, but the closest would probably be booking.com: https://www.booking.com/searchresults.it.html?label=gen173nr-1FCAEoggI46AdIM1gEaFCIAQGYARS4ARfIAQzYAQHoAQH4AQuIAgGoAgO4ArajrpcGwAIB0gIkYmUxYjNlZWMtYWQzMi00NWJmLTk5NTItNzY1MzljZTVhOTk02AIG4AIB&sid=d4030ebf4f04bb7ddcb2b04d1bade521&dest_id=-2601889&dest_type=city& where the canonical points to https://www.booking.com/city/gb/london.it.html. In our scenario, however, there is a greater difference between the content on both pages (and booking.com have a load of search results pages indexed, which is not what we're looking for). Would be great to get any feedback on this before I rule it out. Thanks!
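For what it's worth, the server-side logic being described could be sketched roughly as below. The parameter name `location` and the URL patterns are hypothetical stand-ins for whatever the real query string contains; the derived URL would then be emitted in a `<link rel="canonical">` tag on the results page:

```python
from urllib.parse import urlparse, parse_qs

def canonical_for_results(url):
    """Derive the static location-page URL to use as the canonical
    for a pre-filtered /book/results URL. Returns None when no
    location is present, so the page can fall back to a
    self-referencing canonical instead of guessing."""
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    location = params.get("location", [None])[0]
    if not location:
        return None
    return f"https://{parsed.netloc}/product/{location}"

print(canonical_for_results(
    "https://domain.com/book/results?location=london&date=2024-06-01"))
# -> https://domain.com/product/london
```

The `None` fallback matters: a dynamic canonical that points at the wrong page when a parameter is missing or malformed is worse than no canonical at all.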
Technical SEO | | GAnalytics
Disavowing Links
I need some advice... I've noticed our link profile has grown with many comment links - something I certainly have not pursued manually. I'm new to disavowing links. Before I go ahead and disavow them, I'd like to ask how harmful these links are, and whether this is something I can do myself (as a relative SEO novice) or whether you'd recommend someone who could do it for a reasonable cost. In one instance, the link from a comment thread has the anchor text "porn"... certainly not something we want to rank for, haha! I look forward to your advice.
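For reference, if you do decide to disavow, the file Google accepts is a plain text list, one entry per line; a minimal example (the domains and URLs below are placeholders):

```text
# Spammy comment links (one full URL per line)
https://spam.example/forum/thread?comment=12
# Or disavow every link from an entire domain
domain:shadyexample.com
```

The file is uploaded via Google's disavow links tool in Search Console. That said, disavowing is generally only worth doing for links you believe are actively harmful (e.g. part of a manual action or an obvious spam attack), not just low-value ones.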
Link Building | | LukeBTDT
Footer Links And Link Juice
I'm starting to learn about link juice and notice in GWMT > Traffic > Internal Links that the list is ordered by the number of links counted on each page. Some are in the footer and some are in the header, with some being more important than others commercially, i.e. /register, /privacy, /terms, /search, /sitemap, /disclaimer, /blog. So I am wondering if I should add a 'nofollow' attribute to the footer links (i.e. privacy, terms, disclaimer) and leave the others as they are? Does this help retain link juice on each page where the links appear? Or am I missing the point altogether? This is my website: http://goo.gl/CN0e5
Link Building | | Ubique