How long does it take for a page to show up in Google results after removing noindex from the page?
-
Hi folks,
A client of mine created a new page and used a meta robots noindex tag to keep it out of search results until they were ready to launch it. The problem is that Google crawled the page while the tag was in place, and now, after removing the noindex tag, the page still does not show up in the results.
We've fetched it with Fetch as Googlebot and then submitted it using the button that appears. We've also included the page in sitemap.xml and used the old Google URL submission form: https://www.google.com/webmasters/tools/submit-url
Does anyone know how long it will take for Google to show the page AFTER the meta robots noindex is removed? Is there a reliable reference for this? I couldn't find any Google video/post about it.
I know it will appear within a few days, but I'd like to have a good reference for the future.
Thanks.
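(Not something raised in the thread, but before resubmitting it can save a round trip to confirm that no noindex signal is still being served — either in a meta robots tag or in an X-Robots-Tag response header, since either one will keep the page out of the index. A minimal stdlib sketch; the function name is my own, and you'd feed it the HTML and response headers fetched from the live page:)

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "robots":
                self.directives.append(a.get("content", "").lower())


def is_noindexed(html, headers=None):
    """True if the page is blocked from indexing via a meta robots
    tag or an X-Robots-Tag response header."""
    headers = headers or {}
    # The header can carry noindex even when the HTML has no meta tag
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)
```

If this returns True on the live page, no amount of fetching and resubmitting will get it indexed — the tag or header has to go first.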
-
Just to let you know: the page was indexed in less than 24 hours. We didn't use Tony's tip (sharing on G+), but we did all of the following:
- Used the GWT Fetch as Googlebot tool
- Submitted the URL using the button that appears after fetching as Googlebot
- Added some sitewide links to the page
- Included the page in our sitemap.xml
Thanks to everyone who added insights and tips!
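(For reference, and not something spelled out in the thread: a minimal sitemap.xml entry for a single page looks like this — the URL and date below are placeholders, not the poster's real page:)

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/new-page</loc>
    <lastmod>2014-01-15</lastmod>
  </url>
</urlset>
```

Resubmitting the sitemap in GWT after adding the entry gives Google an explicit fresh signal for the URL.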
-
Thanks for the tip, Tony! We haven't tried this yet.
-
It depends on the site. If the site is Microsoft.com with a link from the home page, you can expect it to appear the same day.
If it's on boringoldsite.com, then it could take a week or more.
But mostly it's a few days.
-
You can do two things in Google Webmaster Tools to estimate how long a page will take to index, or even speed up re-indexation:
- Check Google's crawl rate and indexation reports
- Use the Fetch as Googlebot tool
-
Hi Fabio,
Share the page in question on G+. Indexation of G+ posts (including links) can be as quick as half an hour. Also make sure the website is linked from the client's main G+ profile as a custom link.
-
We had a subdomain website (very small... four or five pages) that was blocked via the robots.txt file for two or three years. When we decided to have it indexed, I did just what you did: fetched via GWT and clicked the button to add it to the index. This worked, and then the next day... or maybe two days later, it was gone. I did this a couple of times...
It didn't hit the index and stick for two weeks. But since then everything has been just fine.
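(For context, a full robots.txt block like the one described above looks roughly like this — the rules are illustrative, not the actual subdomain's file. Worth noting that robots.txt only blocks crawling, not indexing, which is a different mechanism from the meta robots noindex discussed in the question:)

```
# Blocks all crawlers from the entire site (what the subdomain had for years):
User-agent: *
Disallow: /

# To allow crawling again, use an empty Disallow (or remove the rule entirely):
# User-agent: *
# Disallow:
```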
-
One of my competitors had a designer put a new look on their website. As soon as they uploaded it, we went to the site to sniff the code. We saw that the developer had left the "noindex" on all of the files. We laughed and laughed about that. Within a few days their entire site dropped out of search, and it took them a couple of weeks to figure out what had happened while we enjoyed a big increase in sales. But once they re-uploaded the site with the noindex removed, the pages were mostly back in search within a few days, and two weeks later they were back to normal.
The amount of time required is influenced by the amount of spider action received by the site. If your site has low PageRank and does not receive a lot of spider action you can go much longer without being reindexed. Deep pages on a site without much spider action can take weeks to come back. The site in the example above is a PR6 site with mostly PR3 and PR4 pages.