Search Console validation taking a long time?
-
Hello! I did something dumb back at the beginning of September. I updated Yoast and somehow noindexed a whole set of custom taxonomy pages on my site. I fixed this and then asked Google to validate the fixes on September 20. Since then they have gotten through only 5 of the 64 URLs... is this normal? Just want to make sure I'm not missing something that I should be doing.
Thank you! ^_^
-
You're welcome.
We as a community are here to help. If your issue is now fixed, you could mark this question as answered.
Best of luck.
GR -
Cool! Thanks Gaston! I'm glad I asked about this! ^_^
-
What sometimes happens is that when URLs are marked noindex, Googlebot reduces its crawl frequency for them, interpreting that you really don't want those pages indexed and that they therefore have little value.
What you just did was tell Googlebot to crawl that specific page and "force" it to analyze and render it. So Googlebot now understands that the noindex is no longer set and that the page should be indexed.
I'd wait a few days so that Googlebot naturally recrawls your whole site and eventually indexes every page that deserves to be indexed. If that doesn't happen within about two weeks, there is a tool in the old Search Console where you can tell Googlebot to crawl a single page and its links. It's under Crawl -> Fetch as Google. Request a URL to be fetched; after a few minutes a _Request indexing_ button will appear, and there you'll have the option to "Crawl this URL and its direct links". This image might come in handy: https://imgur.com/a/y5DbUVw
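As a quick illustration, here's a minimal Python sketch for confirming that the live page no longer serves a noindex directive (in either an X-Robots-Tag header or a meta robots tag) before you request indexing. The URL is just a placeholder, and it assumes the third-party requests library is installed:

```python
# Minimal sketch, assuming the third-party "requests" package is installed
# (pip install requests). The URL below is a hypothetical placeholder.
import re
import requests

URL = "https://example.com/custom-taxonomy-term/"  # placeholder: one of the affected URLs

def robots_directives(url):
    """Collect robots directives from the X-Robots-Tag header and meta robots tags."""
    resp = requests.get(url, timeout=10, headers={"User-Agent": "noindex-check/1.0"})
    found = []

    # Header form: X-Robots-Tag: noindex, nofollow
    header = resp.headers.get("X-Robots-Tag")
    if header:
        found.append(("X-Robots-Tag header", header))

    # Meta tag form: <meta name="robots" content="noindex,follow">
    # (simple regex; assumes name="robots" appears before the content attribute)
    for match in re.finditer(
        r'<meta[^>]+name=["\']robots["\'][^>]*content=["\']([^"\']+)["\']',
        resp.text,
        flags=re.IGNORECASE,
    ):
        found.append(("meta robots tag", match.group(1)))

    return found

if __name__ == "__main__":
    directives = robots_directives(URL)
    if any("noindex" in value.lower() for _, value in directives):
        print("Live page still serves noindex:", directives)
    else:
        print("No noindex on the live page:", directives or "no robots directives found")
```

If this still reports noindex even though Yoast is set to allow indexing, a page cache or CDN serving stale HTML is a common culprit.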
I'm glad it helped before, and I hope this last tip helps you even more.
Best of luck.
GR -
Whoooooooaaaaahhhhhh! That fixed it! What's the deal!? lol. Why is this method instantaneous while the other method Google pointed me to is taking months? ...Do I have to do this with each individual URL?
-
...or maybe that's what it found the last time it was crawled? I clicked the "Request indexing" button... we'll see what happens.
-
Hmmm, it says:
Indexing allowed? No: 'noindex' detected in 'robots' meta tag ...but I have the settings in Yoast set up to allow indexing. Do you think maybe changes to the Yoast settings aren't applying retroactively?
-
Sorry to hear that.
It's possible that Googlebot still hasn't noticed that you've changed the noindex tag.
Would you mind checking what the _Inspect URL_ tool reports?
To find it, go to the new version of Search Console and enter one of the URLs that should be indexed in the textbox.
Then click the "Test live URL" button. This image could be helpful: https://imgur.com/a/CPvfwif There you might get a hint of what is going on.
-
They're in Google Search Console, but I have tried searching for a couple of them and they don't appear to be indexed :-(. I tried the method you suggested and that didn't bring up anything either.
-
Hi Angela,
Those 5 out of 64 URLs... is that from a report in Search Console, or do only 5 URLs appear when searching in Google?
Search Console usually takes a little while longer to update its index status reports. Have you tried a site: search? Also try the _inurl:_ operator.
For example: site:domain.com inurl:/category-noindexed/ (there's a rough sketch below for generating these queries in bulk). Hope it helps.
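A minimal Python sketch, using placeholder domain and slug values, that prints a site:/inurl: query and a ready-made search link for each affected slug:

```python
# Minimal sketch with placeholder values: build the site:/inurl: queries for each
# affected taxonomy slug so they can be checked in Google one by one.
from urllib.parse import quote_plus

DOMAIN = "example.com"  # placeholder: your domain
SLUGS = [               # placeholder: the taxonomy slugs that were noindexed
    "/category-noindexed/",
    "/another-taxonomy-term/",
]

for slug in SLUGS:
    query = f"site:{DOMAIN} inurl:{slug}"
    # Print the query and a link you can open directly in a browser
    print(query)
    print("  https://www.google.com/search?q=" + quote_plus(query))
```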
Best of luck.
GR