Why are pages still showing in SERPs, despite being NOINDEXed for months?
-
We have thousands of pages that we've been trying to get de-indexed from Google for months now. They've all got the robots meta tag with content="none". But they simply will not go away in the SERPs.
Here is just one example....
http://bitly.com/VutCFi
If you search this URL in Google, you will see that it is indexed, yet it has had the robots meta tag with content="none" for many months. This is just one example of thousands of pages that will not get de-indexed. Am I missing something here? Does it have to do with using content="none" instead of content="noindex, follow"?
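For what it's worth, here is a minimal sketch (Python 3 standard library; the URL is a placeholder, not the real page) that can be used to confirm which robots meta tag a page is actually serving:

```python
# Minimal sketch: print the content of every <meta name="robots"> tag a page serves.
from html.parser import HTMLParser
from urllib.request import urlopen


class RobotsMetaFinder(HTMLParser):
    """Collect the content attribute of every <meta name="robots"> tag."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "robots":
                self.directives.append(attrs.get("content", ""))


url = "http://www.example.com/some-page.html"  # placeholder, not the real page
html = urlopen(url).read().decode("utf-8", errors="replace")

finder = RobotsMetaFinder()
finder.feed(html)
print(finder.directives)  # e.g. ['none'] or ['noindex, follow']
```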
Any help is very much appreciated.
-
Thanks for your reply,
Let me know if you are able to deindex those pages - I will wait. Also, please share what you have implemented to deindex them.
-
A page can have a link to it, and still not be indexed, so I disagree with you on that.
But thanks for using the domain name. That will teach me to use a URL shortener...
-
Hm, that is interesting. So you're saying that it will get crawled, and thus will eventually become deindexed (as noindex is part of the content="none" directive), but since it's a dead end page, it just takes an extra long time for that particular page to get crawled?
-
Just to add to the other answers, you can also remove the URLs (or entire directory if necessary) via the URL removal tool in Webmaster Tools, although Google prefers you to use it for emergencies of sorts (I've had no problems with it).
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=164734
-
No, nofollow will only tell the bot that the page is a dead end - that the bot should not follow any links on the page. That means any links from those pages won't be visited by the bot, which slows the overall crawling process for those pages.
If you block a page in robots.txt and the page is already in the index, it will remain in the index: the noindex (or content="none") tag won't be seen by the bot, so the page won't be removed from the index - it just won't be visited anymore.
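If you want to double-check that, a quick sketch like this (Python 3 standard library; the URLs are placeholders) will tell you whether a given URL is blocked by your robots.txt at all:

```python
# Minimal sketch: check whether Googlebot is allowed to fetch a URL per robots.txt.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("http://www.example.com/robots.txt")  # placeholder
robots.read()

url = "http://www.example.com/some-page.html"  # placeholder
# False here means Googlebot can't re-crawl the page, so it will never
# see the noindex / content="none" tag and the page stays in the index.
print(robots.can_fetch("Googlebot", url))
```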
-
Ok, so, nofollow is stopping the page from being read at all? I thought that nofollow just means the links on the page will not be followed. Is meta nofollow essentially the same as blocking a page in robots.txt?
-
Hi Howard,
The page is in Google's index because you are still linking to that page from your website. Here is the page that links to it:
http://www.2mcctv.com/product_print-productinfoVeiluxVS70CDNRDhtml.html
Because you are still linking to that page, Google keeps indexing it - it indexed the page before it came across the "noindex" tag.
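If you're not sure which of your own pages still link to the URL you're trying to deindex, a rough sketch like this (Python 3 standard library; the URL and the fragment are placeholders) will list the matching links on any given page:

```python
# Rough sketch: list the links on a page that point at the URL you want deindexed.
from html.parser import HTMLParser
from urllib.request import urlopen


class LinkCollector(HTMLParser):
    """Collect every href on the page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)


linking_page = "http://www.example.com/some-product-page.html"  # placeholder
fragment = "page-you-want-deindexed"  # placeholder fragment of the noindexed URL

collector = LinkCollector()
collector.feed(urlopen(linking_page).read().decode("utf-8", errors="replace"))
print([link for link in collector.links if fragment in link])
```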
Lindsay has written an awesome post about it here:
http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts
After reading the blog post above, all my doubts about noindex, follow, and robots.txt were cleared up.
Thanks Lindsay
-
We always use the noindex directive in our robots.txt file.
-
Hi,
In order to deindex, you should use "noindex, follow", as content="none" also means nofollow. You do need "follow" so the bot can reach all the other pages, see the noindex tag, and remove them from the index.
Once you have all of them out of the index, you can set "none" back on.
This is the main reason the "none" attribute is not very widely used - it's easy to shoot yourself in the foot with it.
On the other hand, you need to see if Googlebot is actually reaching those pages:
-
check first that you don't have any robots.txt restrictions
-
see when Googlebot last had a hit on any of those pages - that will give you a good idea and let you make a prediction (see the sketch after this list).
If those pages are in the supplemental index, you may need to wait some time for Googlebot to revisit them.
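For checking that last Googlebot hit, a rough sketch like this (Python 3; it assumes a standard combined access-log format, and the log path and URL fragment are placeholders) will pull out the most recent Googlebot request for those pages:

```python
# Rough sketch: find the most recent Googlebot hit on the pages in question.
import re

log_path = "/var/log/nginx/access.log"  # placeholder
fragment = "/product_print-"            # placeholder: part of the URLs in question

last_hit = None
with open(log_path, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "Googlebot" in line and fragment in line:
            match = re.search(r"\[(.*?)\]", line)  # e.g. [12/Oct/2012:06:25:24 +0000]
            if match:
                last_hit = match.group(1)

print(last_hit or "no Googlebot hits found for those pages")
```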
One last note: build XML sitemaps with all of those pages and submit them via Webmaster Tools - that will definitely help get them in front of the firing squad, and it will also let you monitor them better.
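If it helps, a minimal sketch like this (Python 3 standard library; the URLs and filename are placeholders) will build that kind of sitemap file for you to submit:

```python
# Minimal sketch: write an XML sitemap listing the pages you want recrawled.
from xml.sax.saxutils import escape

urls = [
    "http://www.example.com/page-to-deindex-1.html",  # placeholders
    "http://www.example.com/page-to-deindex-2.html",
]

lines = ['<?xml version="1.0" encoding="UTF-8"?>',
         '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
for url in urls:
    lines.append("  <url><loc>%s</loc></url>" % escape(url))
lines.append("</urlset>")

with open("deindex-sitemap.xml", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")
```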
Hope it helps.