Why is Google not deindexing pages with the meta noindex tag?
-
On our website, www.keystonepetplace.com, we added the meta noindex tag to the category pages that are created by the sorting function.
Google no longer seems to be adding more of these pages to the index, but the pages that were already indexed still show up when I check via site:keystonepetplace.com.
Here is an example page: http://www.keystonepetplace.com/dog/dog-food?limit=50
How long should it take for these pages to disappear from the index?
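A quick way to double-check that the tag is actually being served on one of these parameterized URLs is to fetch the page and look for the robots meta tag. This is only a minimal sketch using the Python standard library; the URL is the example above, and the exact directive (e.g. "noindex, follow") may differ on your pages.
```
from urllib.request import Request, urlopen
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of every <meta name="robots"> tag it sees."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.directives.append(attrs.get("content") or "")

# Example URL from the question; swap in any of the sorted category pages.
url = "http://www.keystonepetplace.com/dog/dog-food?limit=50"
req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
html = urlopen(req).read().decode("utf-8", errors="replace")

parser = RobotsMetaParser()
parser.feed(html)
print(parser.directives)  # expect something like ['noindex, follow'] if the tag is being served
```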
-
Google might have already crawled the pages but not yet updated the index. Be patient; if you have enough links coming in and the pages are fewer than three levels deep, they will all be recrawled and processed in no time.
-
I guess it depends on the urgency of your situation. If you were just trying to clean things up, then it's okay to wait for Google to re-crawl and solve the problem. But if you have been affected by Panda and your site is not ranking, then I personally would consider that an urgent enough need to use the removal tool.
-
This link almost makes it seem like I shouldn't use the Webmaster Tools removal tool:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1269119
-
The crawlers have billions of webpages to get to. We have more than 50,000 on our site; there are about 8,000 that they check more regularly than the others - some are just really deep in the site and hard to get to.
-
You can remove entire category directories from the index in one request using the tool, but the URLs won't be removed from the cache, just the index. To remove them from the cache you'll need to enter each URL individually. I think that if you are trying to clean things up for Panda reasons, removing them from the index is enough; however, I'm still trying to decide whether removing them from the cache as well would speed things up.
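If you do end up submitting cache removals one URL at a time, scripting the list-building step can at least save some typing. This is only a rough sketch: it assumes you already have a flat file of URLs (urls.txt, exported from a crawl or from server logs - a hypothetical filename) and that the sort-generated pages are identifiable by their query parameters.
```
from urllib.parse import urlparse, parse_qs

# Query parameters added by the sorting/pagination function; adjust to match your site.
SORT_PARAMS = {"limit", "sort", "order", "dir"}

def is_sorted_category_url(url):
    """True if the URL carries any of the sorting/pagination parameters."""
    query = parse_qs(urlparse(url).query)
    return any(param in query for param in SORT_PARAMS)

# urls.txt: one URL per line, from a crawl or server logs; matches go to urls_to_remove.txt.
with open("urls.txt") as infile, open("urls_to_remove.txt", "w") as outfile:
    for line in infile:
        url = line.strip()
        if url and is_sorted_category_url(url):
            outfile.write(url + "\n")
```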
-
OK, that makes sense. I wonder why it takes so long. I'll start the long process of manual removal.
-
Streamline Metrics has got it right.
I've seen pages take MONTHS to drop out of the index after being noindexed. It's best to use the URL removal tool in WMT (not to be confused with the disavow tool) to tell Google not only to deindex the pages but also to remove them from the cache. I have found that when you do this, the pages are gone within 12 hours.
-
In your experience, how long does this normally take?
-
Yes, it was around December 2nd or 3rd that we added the noindex tags. It just seemed like Google wasn't removing any pages from the index yet. It did stop Google from adding more of these pages, though.
-
It all depends on how long it takes Google to re-crawl those pages with the noindex tag on them.
If you are in a hurry, I would also do this, along with the steps you have already taken, to help speed the process up:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1663419
-
Do you know when you added the noindex tags? Google will need to recrawl the pages and see the noindex tag before removing them. I just looked at one of your category pages, and it looks like it was cached by Google on December 1st with no noindex tag on that page. How big your site is and how often it is crawled will determine when the pages are removed from the index. Here's Google's official explanation:
"When we see the noindex meta tag on a page, Google will completely drop the page from our search results, even if other pages link to it. Other search engines, however, may interpret this directive differently. As a result, a link to the page can still appear in their search results.
Note that because we have to crawl your page in order to see the noindex meta tag, there's a small chance that Googlebot won't see and respect the noindex meta tag. If your page is still appearing in results, it's probably because we haven't crawled your site since you added the tag. (Also, if you've used your robots.txt file to block this page, we won't be able to see the tag either.)
If the content is currently in our index, we will remove it after the next time we crawl it. To expedite removal, use the URL removal request tool in Google Webmaster Tools."
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=93710
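The robots.txt caveat in that quote is worth double-checking: if the same URLs are disallowed in robots.txt, Googlebot can never fetch them and so never sees the noindex tag. Here is a minimal sketch with Python's built-in robotparser; the URL is just the example from the original question.
```
from urllib.robotparser import RobotFileParser

robots = RobotFileParser("http://www.keystonepetplace.com/robots.txt")
robots.read()

url = "http://www.keystonepetplace.com/dog/dog-food?limit=50"
# False would mean Googlebot is blocked and will never see the noindex tag on this page.
print(robots.can_fetch("Googlebot", url))
```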
-
Or with a canonical tag, or via robots.txt.
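If you go the canonical route, the same kind of sanity check as the earlier sketch applies: fetch one of the parameterized URLs and confirm its rel="canonical" link actually points at the clean category page. Again just a standard-library sketch against the example URL; the "correct" canonical target shown in the comment is an assumption.
```
from urllib.request import Request, urlopen
from html.parser import HTMLParser

class CanonicalParser(HTMLParser):
    """Collects the href of every <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            self.canonicals.append(attrs.get("href") or "")

url = "http://www.keystonepetplace.com/dog/dog-food?limit=50"
req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
html = urlopen(req).read().decode("utf-8", errors="replace")

parser = CanonicalParser()
parser.feed(html)
# Ideally this prints the clean category URL, e.g. ['http://www.keystonepetplace.com/dog/dog-food']
print(parser.canonicals)
```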
Related Questions
-
Google Search Console Showing 404 errors for product pages not in sitemap?
We have some products with URL changes over the past several months. Google is showing these as having 404 errors even though they are not in the sitemap (the sitemap shows the correct NEW URL). Is this expected? Will these errors eventually go away or stop being monitored by Google?
Technical SEO | woshea
-
Can you force Google to use meta description?
Is it possible to force Google to use only the meta description put in place for a page and not gather additional text from the page?
Technical SEO | A_Q
-
Meta descriptions and H1 tags during a 301 redirect
My employer is shifting to a new domain and I am in the midst of doing URL mapping. I realize that many of the meta descriptions and H1 tags are different on the new pages - is this a problem? Thank you.
Technical SEO | ptapley
-
Canonical tag on site with multiple URL links but only one set of pages
We have a site, www.mezfloor.com, which has a number of URLs pointing at one site. As the URLs have been in use for many years, there are links from many sources, including good old-fashioned hard-copy advertising. We have now decided that it would be better to start porting all sources to the .co.uk version and get that listed as the prime/master site. A couple of days ago I went through and used canonical tags on all the pages, thinking that would set the priority and also strengthen the pages in terms of trust due to the reduced duplication. However, when I scanned the site in Moz, a warning came up that the pages redirect, and I am beginning to think that I need to remove all these canonical tags so that search engines do not get into a confused spiral where we lose the little PageRank we have. Is there a way I can redirect everything except the target URL without setting up a separate master site just for all the other pages to point at?
Technical SEO | Eff-Commerce
-
Are duplicate page titles fixed by the canonical tag?
Google Webmaster Tools is saying that some of my pages have duplicate page titles because of pagination. However, I have implemented the canonical tag on the paginated pages, which I thought would keep my site from being penalized for duplicate page titles. Is this correct? Or does the canonical tag only relate to duplicate content issues?
Technical SEO | Santaur
-
I have 15,000 pages. How do I have Googlebot crawl all the pages?
I have 15,000 pages. How do I have Googlebot crawl all the pages? My site is 7 years old, but only about 3,500 pages are being crawled.
Technical SEO | Ishimoto
-
Meta tags - better NOT to have?
OK, ok... the SEOmoz report card told me it's actually better NOT to have meta keywords on my page, because my competitors can then look at my page to see what words I am trying to target... That makes sense, but it is also painfully counterintuitive. I thought I would just double-check and make sure: no meta keywords tag? And if so, what (if anything) should I have in the meta tags?
Technical SEO | damon1212
-
Mask links with JS that point to noindexed pages
Hi, in an effort to prepare our site for Panda, we dramatically reduced the number of pages that can be indexed (from 100k down to 4k). All the remaining pages are being equipped with unique and valuable content. We still have the other pages around, since they represent searches with filter combinations which we deem less interesting to the majority of users (hence they are not indexed). So I am wondering if we should mask links to these non-indexed pages with JS, so that link juice doesn't get lost to them. Currently the targeted pages are de-indexed via "noindex, follow" - we might de-index them with robots.txt, though, if the "site:" query doesn't show improvements. Thanks, Sebastian
Technical SEO | derderko