Why is Google not deindexing pages with the meta noindex tag?
-
On our website www.keystonepetplace.com we added the meta noindex tag to category pages that were created by the sorting function.
Google no longer seems to be adding more of these pages to the index, but the pages that were already indexed still show up when I check via site:keystonepetplace.com.
Here is an example page: http://www.keystonepetplace.com/dog/dog-food?limit=50
How long should it take for these pages to disappear from the index?
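(For reference, the tag in question takes this standard form; the exact attribute values used on the site are an assumption here, and "noindex, follow" is a common variant that still lets link equity flow:)

    <!-- In the <head> of sorted category pages like /dog/dog-food?limit=50 -->
    <meta name="robots" content="noindex">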
-
Google might have already crawled the pages but not yet updated its index. Be patient: if you have enough links coming in and the pages are fewer than three levels deep, they will all be recrawled and processed in no time.
-
I guess it depends on the urgency of your situation. If you were just trying to clean things up, then it's okay to wait for Google to re-crawl and solve the problem. But if you have been affected by Panda and your site is not ranking, then I would personally consider that an urgent enough need to use the tool.
-
This link almost makes it seem like I shouldn't use the Webmaster Tools removal tool.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1269119
-
The crawlers have so many billions of webpages to get to. We have more than 50,000 on our site; there are about 8,000 that they check more regularly than the others - some are just really deep on the site and hard to get to.
-
You can remove entire category directories from the index in one command using the tool. But the URLs won't be removed from the cache, just the index. To remove them from the cache you'll need to enter each URL individually. I think that if you are trying to clear things up for Panda reasons, just removing from the index is enough. However, I'm currently trying to decide whether it will speed things up to remove from the cache as well.
-
Ok. That makes sense. I wonder why it takes so long? I'll start the long process of the manual removal.
-
Streamline Metrics has got it right.
I've seen pages take MONTHS to drop out of the index after being noindexed. It's best to use the URL removal tool in WMT (not to be confused with the disavow tool) to tell Google to not only deindex the pages but to remove them from the cache as well. I have found that when you do this the pages are gone within 12 hours.
-
In your experience, how long does this normally take?
-
Yes, it was around December 2nd or 3rd that we added the noindex tags. It just seemed like Google wasn't removing any pages from the index yet. It did stop Google from adding more of these pages, though.
-
It all depends on how long it takes Google to re-crawl those pages with the noindex tag on them.
If you are in a hurry, I would do this along with the steps you have already taken in order to help speed the process up:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1663419
-
Do you know when you added the noindex tags? Google will need to recrawl the pages to see the noindex tags before removing them. I just looked at one of your category pages, and it looks like it was cached by Google on December 1st, and there was no noindex tag on that page. How big your site is and how often it is crawled will determine when the pages are removed from the index. Here's Google's official explanation -
"When we see the noindex meta tag on a page, Google will completely drop the page from our search results, even if other pages link to it. Other search engines, however, may interpret this directive differently. As a result, a link to the page can still appear in their search results.
Note that because we have to crawl your page in order to see the noindex meta tag, there's a small chance that Googlebot won't see and respect the noindex meta tag. If your page is still appearing in results, it's probably because we haven't crawled your site since you added the tag. (Also, if you've used your robots.txt file to block this page, we won't be able to see the tag either.)
If the content is currently in our index, we will remove it after the next time we crawl it. To expedite removal, use the URL removal request tool in Google Webmaster Tools."
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=93710
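(To make that last caveat concrete, a robots.txt rule like this hypothetical one would stop Googlebot from fetching the sorted category pages at all, so it would never see the noindex tag on them:)

    # Illustrative only - this PREVENTS Google from seeing a noindex tag
    User-agent: *
    Disallow: /*?limit=

If the goal is deindexing via noindex, the pages have to stay crawlable until Google has re-crawled them and dropped them.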
-
Or with a canonical tag, or by robots.txt.
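(For the sorted-category pages in the original question, the canonical route would look something like this sketch, assuming /dog/dog-food is the version that should rank - the URL comes from the example above:)

    <!-- In the <head> of http://www.keystonepetplace.com/dog/dog-food?limit=50 -->
    <link rel="canonical" href="http://www.keystonepetplace.com/dog/dog-food">

Note that a canonical is a hint Google can ignore, and it consolidates rather than removes; noindex is the stronger signal of the two.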
Related Questions
-
Blog archive pages are meta noindexed but still flagged as duplicate
Hi all. I know there are several threads related to noindexing blog archives and category pages, so if this has already been answered, please direct me to that post. My blog archive pages have preview text from the posts. Each time I post a blog, the last post on any given archive page shifts to the first spot on the next archive page. Moz seems to report these as new duplicate content issues each week. I have my archive pages set to meta noindex, so can I feel good about continuing to ignore these duplicate content issues, or is there something else I should be doing to prevent penalties? TIA!
Technical SEO | mkupfer
-
Stuck trying to deindex pages from Google
Hi there, We had developers put a lot of spammy markup in one of our websites. We tried many ways to deindex it by fixing the markup and requesting recrawls... However, some of the URLs that had this spammy markup were incorrect URLs that redirected to the right version (e.g. the same URL with or without a / at the end), so while all the regular URLs are now updated and clean, the redirected URLs can't be found in crawls, so they weren't updated and we couldn't get the spam removed. They still show up in the SERPs. I tried deindexing those spammed pages by making them no-index in the robots.txt file. This seemed to be working for about a week, but now they show up in the SERPs again. Can you help us get rid of these spammy URLs? edit?usp=sharing
Technical SEO | Ruchy
-
Google Bot Noindex
If a site has the noindex tag, can it still be flagged for duplicate content?
Technical SEO | MayflyInternet
-
Canonical Tags on Parameter Pages With Hreflang
Hey everyone: We are currently implementing hreflang tags on our site, and we have many parameter pages with hreflang tags; however, I am afraid these may be counted as duplicate content without canonical tags. For example, example.com/?utm_source=tpi carries:

    <link href="http://example.com/de" hreflang="de" rel="alternate" />
    <link href="http://example.com/nl" hreflang="nl" rel="alternate" />
    <link href="http://example.com/fr" hreflang="fr" rel="alternate" />
    <link href="http://example.com/it" hreflang="it" rel="alternate" />

I have two questions: 1. Do I need a canonical tag pointing to example.com? 2. On the homepage without the parameter, should I add self-referencing hreflang tags? (href="http://example.com/" hreflang="es") Thanks so much for your help! Kyle
Technical SEO | TeespringMoz
-
Is there any code to prevent duplicate meta descriptions on blog pages?
I use rel canonical on blog pages, and to prevent duplicate titles I use the code %%page%% in the category page title. Is there any similar code for the description?
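(A sketch of one possible approach, on the assumption that the %%page%% variable mentioned above comes from a Yoast-style template system where the same snippet variables also work in the meta description field; the description text here is made up:)

    <!-- Rendered output if %%page%% is used in the description template -->
    <meta name="description" content="Blog category archive - Page 2 of 7 - Example Site">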
Technical SEO | maestrosonrisas
-
Two META Robots tags on a page - which will win?
Hi, does anybody know which meta-robots tag will "win" if there is more than one on a page? The situation: our CMS is not very flexible, so we have segments of META tags on the page that originate from templates, and any author can add any meta tag from within his article editor. The logic delivering the pages does not care if there might be more than one meta-robots tag present (one from the template, one from within the article). Now we could end up with something like this:
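(Reconstructed example - hypothetical values, one tag from the template and one from the article editor:)

    <!-- From the page template -->
    <meta name="robots" content="index, follow">
    <!-- Added by the author in the article editor -->
    <meta name="robots" content="noindex">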
Which one will be regarded by Google & co?
First?
Last?
None?
Thanks a lot, Jan
Technical SEO | jmueller
-
Will a drop in indexed pages significantly affect Google rankings?
I am doing some research into why we were bumped from Google's first page onto the third, fourth, and fifth pages in June of 2010. I always suspected Caffeine, but I just came across some data that indicates a drop in indexed pages from 510 in January of that year to 133 by June. I'm not sure what happened, but I believe our blog pages were somehow de-indexed. What I want to know is: could that significant drop in indexed pages have had an effect on our rankings at that time? We are back up to over 500 indexed pages, but have not fully recovered our first-page positions.
Technical SEO | rdreich49
-
How can I get unimportant pages out of Google?
Hi guys, I have a (newbie) question. Until recently I didn't have my robots.txt written properly, so Google indexed around 1,900 pages of my site, but only 380 pages are real pages; the rest are all /tag/ or /comment/ pages from my blog. I have now set up the sitemap and the robots.txt properly, but how can I get the other pages out of Google? Is there a trick, or will it just take a little time for Google to take the pages out? Thanks! Ramon
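(For what it's worth, a robots.txt along these lines - the paths are assumed from the /tag/ and /comment/ URLs mentioned above - keeps those sections from being crawled, though as the main thread notes, already-indexed pages only drop out over time, and a page blocked from crawling can't show Google a noindex tag:)

    # Illustrative sketch for the tag and comment pages described
    User-agent: *
    Disallow: /tag/
    Disallow: /comment/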
Technical SEO | DennisForte