How to Diagnose "Crawled - Currently Not Indexed" in Google Search Console
-
The new Google Search Console gives a ton of information about which pages were excluded and why, but one status I'm struggling with is "Crawled - currently not indexed". Some of my clients have fallen into this pit, and I've identified one reason why it's occurring for some of them - they have multiple websites covering the same information (local businesses) - but for others I'm completely flummoxed.
Does anyone have any experience figuring this one out?
-
@intellect did you find a solution to that?
-
-
@dalerio-consulting what can we do with the Excluded section then? Let's say a page of my website is flagged under a duplicate/canonical status in the Excluded section. Should I leave it alone if it's not very serious, or should I request indexing? How serious are these excluded-page issues?
-
Hey Brett!
Basically, we believe this status is Google saying "I can crawl and access the URL, but I don't believe this page belongs in the index". The key is to figure out why Google might not consider the page eligible for indexation. We analyzed a good number of Index Coverage reports across all of our different clients.
Here are the most common reasons URLs get reported as "Crawled - currently not indexed":
- False positives
- RSS Feed URLs
- Paginated URLs
- Expired products
- 301 redirects
- Thin content
- Duplicate content
- Private-facing content
You can find a breakdown of each reason on the post we wrote here: https://moz.com/blog/crawled-currently-not-indexed-coverage-status
However, there are likely many more reasons why Google doesn't think a page is eligible for indexation.
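If you export the report, a quick first pass is to bucket the URLs by the patterns in the list above before reviewing anything by hand. A minimal sketch, assuming a CSV export with a URL column; the matching patterns and sample URLs are hypothetical and should be adapted to your own site's conventions:

```python
import csv
import io
from collections import Counter

# Heuristic patterns for common "Crawled - currently not indexed" causes.
# These are illustrative guesses; adjust them to your own URL structure
# before trusting the buckets.
PATTERNS = [
    ("RSS feed URL", lambda u: u.rstrip("/").endswith("/feed")),
    ("Paginated URL", lambda u: "?page=" in u or "/page/" in u),
]

def bucket_url(url):
    """Assign a URL to the first matching cause bucket, else flag it for review."""
    for label, match in PATTERNS:
        if match(url):
            return label
    return "Needs manual review"

def summarize(urls):
    """Count how many exported URLs fall into each heuristic bucket."""
    return Counter(bucket_url(u) for u in urls)

# In practice you would export the "Crawled - currently not indexed" table
# from Search Console as a CSV and read that file; a small inline sample
# stands in for the export here.
sample_export = (
    "URL\n"
    "https://example.com/blog/feed/\n"
    "https://example.com/blog/?page=2\n"
    "https://example.com/old-post/\n"
)
urls = [row["URL"] for row in csv.DictReader(io.StringIO(sample_export))]
for label, count in summarize(urls).most_common():
    print(f"{label}: {count}")
```

Anything left in the "Needs manual review" bucket is where the human analysis of thin or duplicate content starts.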
-
Crawled - Currently not indexed is the most common reason for pages or posts on your site not to be indexed. It is also the most difficult one to pinpoint, because it can happen for a multitude of reasons.
Google needs computing power to analyze each website, so it assigns a certain crawl budget to each site, and that crawl budget limits how many of your pages get crawled and considered for indexing. Google will generally index your strongest pages first, so the excluded pages tend to be the weaker ones rank-wise.
Every website has pages that are not indexed, and the healthy ratio of non-indexed pages will depend on the niche of the website.
There are, however, two ways for you to get your pages out of the "Crawled - currently not indexed" pit:
- Decrease the number of pages/posts. It's a matter of quality over quantity, so put more attention into internally linking every new post so that it gets indexed quickly. Don't forget to use robots.txt to block crawling of pages that aren't useful to the site, so the crawl budget can be assigned to the other posts.
- Increase the crawl budget. You can do that by raising the quality of your pages/posts. Build more internal links and external backlinks to your posts and homepage, make sure the articles are unique and keyword-optimized, and work so that each article can rank on the first page.
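The robots.txt suggestion above can look like the following; the blocked paths are hypothetical examples, so only block sections you are sure add no search value:

```
# Hypothetical example: keep crawlers out of low-value sections
# so crawl budget is spent on content pages instead.
User-agent: *
Disallow: /tag/        # thin tag archives (hypothetical path)
Disallow: /search/     # internal search results (hypothetical path)
Disallow: /*?sort=     # sorted duplicate listings (hypothetical parameter)
```

Keep in mind that robots.txt blocks crawling, not indexing: a URL that is already indexed may need a noindex tag on a crawlable page to actually drop out of the index.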
SEO is a tough business, but if managed carefully, over time it will pay off.
Daniel Rika - Dalerio Consulting
https://dalerioconsulting.com
info@dalerioconsulting.com
-
Our "Crawled - currently not indexed" list includes the sitemap and robots.txt.
We have searched and tried to understand this issue, but we haven't reached a final answer.
If anyone has fixed this issue, please share your suggestions as soon as possible.
-
Hi There,
Google has been struggling to eliminate spam pages and content and to order them structurally; this is an inherent problem, especially with badly structured e-commerce websites.
You might be aware that "Crawled - Currently Not Indexed" means that your page(s) has been found by Google but is not currently indexed. This is not necessarily an error; your pages may simply be in a queue. That might be due to the following reasons:
- There are a lot of pages to index, so it's going to take Google some time to get through them and mark them as either indexed or not.
- There might be duplicate pages or canonical issues on the site. If Google is seeing a lot of duplicate pages without canonical tags, then to improve the number of pages indexed you need to either improve those pages so they are no longer duplicates, or add canonical tags to help Google attribute them to the correct page.
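The canonical fix mentioned above is a single tag in the page's head; the URL here is a hypothetical placeholder for your preferred version of the page:

```html
<!-- On the duplicate page (e.g. a sorted or parameterized variant),
     point Google at the preferred version. The URL is hypothetical. -->
<head>
  <link rel="canonical" href="https://www.example.com/products/blue-widget/" />
</head>
```

Note that rel=canonical is a hint rather than a directive, so Google may still pick a different canonical if other signals (internal links, sitemaps, redirects) disagree with it.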
You need to justify each and every page on its merits, and then let Google decide whether it should be available in search, and against which keywords at which rank. To summarise: help Google Search by structuring your data right, and it might reward you by ranking your pages in the right places for the right keywords.
Thanks and Regards,
Vijay
-
Search Console > Status > Index Coverage > Crawled - currently not indexed
Yes, I had the same issue last month. In my case it took the crawler 6 weeks to update the Index Coverage report, and apparently there are not many things you can do about it.
Regards