How long does it take for a page to show up in Google results after removing noindex?
-
Hi folks,
A client of mine created a new page and used a meta robots noindex tag to keep it out of the results until they were ready to launch it. The problem is that Google crawled the page anyway, and now, after removing the noindex tag, the page still does not show up in the results.
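For reference, the tag in question is the standard robots meta tag, placed in the page's <head>:

```html
<!-- Keeps the page out of the index while it is unfinished -->
<meta name="robots" content="noindex">

<!-- Stricter variant: also asks crawlers not to follow links on the page -->
<meta name="robots" content="noindex, nofollow">
```

Removing the tag alone doesn't re-index the page; Google has to recrawl it first.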
We've fetched it using Fetch as Googlebot and then submitted it using the button that appears. We've also included the page in sitemap.xml and used the old Google submit-a-URL form at https://www.google.com/webmasters/tools/submit-url
Does anyone know how long it will take for Google to show the page AFTER the meta robots noindex is removed? Any reliable reference for this? I couldn't find any Google video or post about it.
I know it will appear within a few days, but I'd like to have a good reference for the future.
Thanks.
-
Just to let you know: the page was indexed in less than 24 hours. We didn't use Tony's tip (sharing on G+), but we did all of the following:
- Used the GWT Fetch as Googlebot tool
- Submitted the URL using the button that appears after fetching as Googlebot
- Added some sitewide links to the page
- Included the page in our sitemap.xml
Thanks to all the folks who added insights and tips!
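For anyone repeating these steps, a minimal sitemap.xml entry for the new page looks like this (the URL and date below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page; <lastmod> is optional but signals freshness -->
  <url>
    <loc>http://www.example.com/new-page/</loc>
    <lastmod>2014-05-01</lastmod>
  </url>
</urlset>
```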
-
Thanks for the tip, Tony! We haven't tried that yet.
-
It depends on the site. If the site is Microsoft.com with a link from the home page, you can expect it to appear the same day.
If it's on boringoldsite.com, it could take a week or more.
But mostly a few days.
-
You can do two things in Google Webmaster Tools to gauge how long it will take for a page to be indexed, or even to speed up re-indexation:
1) Use Google's crawl rate and indexation reports
2) Use Fetch as Googlebot
-
Hi Fabio,
Share the page in question on G+. Indexation of G+ posts (including links) can be as quick as half an hour. Also make sure the website is linked to from the client's main G+ profile as a custom link.
-
We had a subdomain website (very small, four or five pages) that was blocked via the robots.txt file for two or three years. When we decided to have it indexed, I did just what you did: fetched via GWT and clicked the button to add it to the index. This worked, and then the next day, or maybe two days later, it was gone. I did this a couple of times.
It didn't hit the index and stick for two weeks. But since then everything has been just fine.
-
One of my competitors had a designer put a new look on their website. As soon as they uploaded it we went to the site to sniff the code. We saw that the developer left the "noindex" on all of the files. We laughed and laughed about that. Within a few days their entire site dropped out of search and it took them a couple weeks to figure out what happened while we enjoyed a big increase in sales. But, when they uploaded the site with the noindex removed, within a few days the pages were mostly back in search and two weeks later they were back to normal.
The amount of time required is influenced by the amount of spider action received by the site. If your site has low PageRank and does not receive a lot of spider action you can go much longer without being reindexed. Deep pages on a site without much spider action can take weeks to come back. The site in the example above is a PR6 site with mostly PR3 and PR4 pages.
Related Questions
-
Does Google understand misspellings in terms of what keywords I should optimize a page for?
Hey there! This is sort of an oddball question. We do a lot of hospital websites. One client of ours spells "Orthopedics" as "Orthopaedics", which is an alternative spelling. When I did the initial keyword research, the volume for "Orthopedics" was, as I expected, much higher. However, when I do a test search for "Orthopaedics" I get the same results, and Google highlights "orthopaedics" in the content even though my search query was "orthopedics". What I'm wondering is: is it the same thing to optimize for "orthopaedics", or should I recommend that the client change to "orthopedics"? Thanks!
Intermediate & Advanced SEO | CentreTEK
-
Fetch as Google -- Does not result in pages getting indexed
I run an exotic pet website which currently covers several species of reptiles. It has done well in the SERPs for the first couple of types of reptiles, but I keep adding new species, and each one brings the task of getting ranked, so I need to figure out the best process.
We just released our 4th species, reticulated pythons, about 2 weeks ago. I made the pages public and, in Webmaster Tools, did a "Fetch as Google" with the option to index the page and its child pages for this page: http://www.morphmarket.com/c/reptiles/pythons/reticulated-pythons/index While Google immediately indexed the index page, it did not really index the couple of dozen pages linked from it, despite me checking the option to crawl child pages. I know this in two ways: first, in Google Webmaster Tools, if I look at Search Analytics with Pages filtered by "retic", only 2 are listed. This at least tells me Google is not showing these pages to users. More directly, if I search Google for "site:morphmarket.com/c/reptiles/pythons/reticulated-pythons", only 7 pages are indexed.
More details: I've tested at least one of these URLs with the robots checker and they are not blocked. The canonical values look right. I have not monkeyed with the Crawl URL Parameters. I do NOT have these pages listed in my sitemap, but in my experience Google doesn't care much about that; I previously had about 100 pages there and Google didn't index some of them for more than a year. Google has indexed "105k" pages from my site, so it is very happy to index, apparently just not the pages I want (this large number is due to permutations of search parameters, something I think I've since improved with canonical, robots, etc.). I may have some nofollow links to the same URLs, but NOT on this page, so assuming nofollow has only local effects, this shouldn't matter. Any advice on what could be going wrong here?
I really want Google to index the top couple of links on this page (home, index, stores, calculator) as well as the couple dozen gene/tag links below.
Intermediate & Advanced SEO | jplehmann
-
Mass Removal Request from Google Index
Hi, I am trying to cleanse a news website. When this website was first made, the people that set it up copied over all kinds of material they had as a newspaper, including tests, internal communication, and drafts. The site has lots of junk, but all of it dates to the initial backup, i.e. before 1st-June-2012. So, by removing all mixed content prior to that date, we have pure articles starting 1st-June-2012. Therefore:
- My dynamic sitemap now contains only articles with a release date between 1st-June-2012 and now.
- Any article with a release date prior to 1st-June-2012 returns a custom 404 page with a "noindex" meta tag, instead of the actual content of the article.
The question is how I can remove all this junk from the Google index as fast as possible; it is no longer on the site, but it still appears in Google results. I know that for individual URLs I can request removal via https://www.google.com/webmasters/tools/removals. The problem is doing this in bulk, as there are tens of thousands of URLs I want to remove.
Should I put the articles back in the sitemap so the search engines crawl it and see all the 404s? I believe this is very wrong. As far as I know it will cause problems, because search engines will try to access non-existent content that the sitemap declares as existent, and will report errors in Webmaster Tools.
Should I submit a deleted-items sitemap using the <expires> tag (https://developers.google.com/custom-search/docs/indexing#on-demand-indexing)? I think this is for Custom Search Engines only, and not for the generic Google search engine.
The site unfortunately doesn't use any kind of "folder" hierarchy in its URLs, but instead the ugly GET params, so a folder-based removal pattern is impossible, since all articles (removed junk and actual articles alike) are of the form:
http://www.example.com/docid=123456
So, how can I bulk remove all this junk from the Google index relatively fast?
Intermediate & Advanced SEO | ioannisa
-
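The custom-404-plus-noindex setup described in the question might look like this (a sketch only; the real article template isn't shown in the thread):

```html
<!-- Served with an HTTP 404 (or 410 Gone) status for any docid prior to 1st-June-2012 -->
<!DOCTYPE html>
<html>
<head>
  <meta name="robots" content="noindex">
  <title>Article not found</title>
</head>
<body>
  <p>This article has been removed.</p>
</body>
</html>
```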
Should we include a canonical or noindex on our m. (mobile) pages?
According to https://developers.google.com/webmasters/smartphone-sites/details, we should include a canonical back to the desktop version of the URL, but what if that desktop URL is noindexed? Should the m. version be noindexed as well? Or is it fine to leave the canonical as-is?
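For reference, the annotation pattern that Google's smartphone-sites page describes for separate m. URLs is bidirectional (URLs below are placeholders):

```html
<!-- On the desktop page, www.example.com/page -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="http://m.example.com/page">

<!-- On the mobile page, m.example.com/page -->
<link rel="canonical" href="http://www.example.com/page">
```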
Intermediate & Advanced SEO | nicole.healthline
-
Same content pages in different versions of Google - is it duplicate?
Here's my issue: I have the same page twice, with the same content but on a different URL for each country, for example www.example.com/gb/page/ and www.example.com/us/page. So one for the USA and one for Great Britain. Or it could be a subdomain, gb. or us., etc. Now, is it duplicate content if the US version of Google indexes one page and the UK version indexes the other (same content, different URLs)? The UK search engine will only see the UK page and the US one the US page. Is this bad for the Panda update, or does it get away with it? People suggest it is OK, and good for localised search on an international website, but I'm not so sure. Really appreciate advice.
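One standard way to signal localized variants like these to Google is the hreflang annotation; a sketch using the URLs from the question:

```html
<!-- Placed on both country versions of the page -->
<link rel="alternate" hreflang="en-GB" href="http://www.example.com/gb/page/">
<link rel="alternate" hreflang="en-US" href="http://www.example.com/us/page/">
```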
Intermediate & Advanced SEO | pauledwards
-
How long does it take before URLs are removed from Google?
Hello, I recently changed our website's URL structure, removing the .html at the end. I had about 55 301s set up from the old URLs to the new. Within a day all the new URLs were listed in Google, but the old .html ones still have not been removed a week later. Is there something I am missing, or will it just take time for them to get de-indexed? As well, so far the Page Authority hasn't transferred from the old pages to the new. Is this typical? Thanks!
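The .html-stripping redirects described above are often implemented with a rule like this (an Apache mod_rewrite sketch; adjust for your server):

```apache
# .htaccess: 301-redirect /page.html to /page (assumes Apache with mod_rewrite)
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)\.html$ /$1 [R=301,L]
```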
Intermediate & Advanced SEO | SeanConroy
-
How long till pages drop out of the index
In your experience how long does it normally take for 301-redirected pages to drop out of Google's index?
Intermediate & Advanced SEO | bjalc2011
-
Should I prevent Google from indexing blog tag and category pages?
I am working on a website that has a regularly updated WordPress blog and am unsure whether or not the category and tag pages should be indexable. The blog posts are often outranked by the tag and category pages, which are ultimately leaving me with a duplicate content issue. With this in mind, I assumed that the best thing to do would be to remove the tag and category pages from the index, but after speaking to someone else about the issue, I am no longer sure. I have tried researching online, but couldn't find anything that provided further information. Can anyone with experience of issues like this, or knowledge of the topic, help me resolve this annoying issue? Any input will be greatly appreciated. Thanks Paul
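One manual way to noindex category and tag archives in WordPress is a small functions.php hook (SEO plugins expose this as a checkbox; the function name below is made up for illustration):

```php
<?php
// Output a noindex meta tag on category and tag archive pages only.
function myprefix_noindex_archives() {
    if ( is_category() || is_tag() ) {
        echo '<meta name="robots" content="noindex, follow">' . "\n";
    }
}
add_action( 'wp_head', 'myprefix_noindex_archives' );
```

"noindex, follow" keeps the archives out of the index while still letting crawlers follow the links to the posts themselves.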
Intermediate & Advanced SEO | PaulRogers