Why are pages still showing in SERPs, despite being NOINDEXed for months?
-
We've been trying to get thousands of pages de-indexed from Google for months now. They've all got the `<meta name="robots" content="none">` tag. But they simply will not go away in the SERPs.
Here is just one example....
http://bitly.com/VutCFi
If you search this URL in Google, you will see that it is indexed, yet it's had the tag for many months. This is just one example of thousands of pages that will not get de-indexed. Am I missing something here? Does it have to do with using content="none" instead of content="noindex, follow"?
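For clarity, here are the two variants I'm asking about (as I understand them, "none" is shorthand for both restrictions):

```html
<!-- "none" is equivalent to "noindex, nofollow": drop the page from
     the index AND don't follow any of its links -->
<meta name="robots" content="none">

<!-- "noindex, follow": drop the page from the index, but still
     follow and crawl its links -->
<meta name="robots" content="noindex, follow">
```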
Any help is very much appreciated.
-
Thanks for your reply,
Let me know if you are able to deindex those pages. I will wait. Also please share what you have implemented to deindex those pages.
-
A page can have a link to it, and still not be indexed, so I disagree with you on that.
But thanks for using the domain name. That will teach me to use a URL shortener...
-
Hm, that is interesting. So you're saying that it will get crawled, and thus will eventually become deindexed (as noindex is part of the content="none" directive), but since it's a dead end page, it just takes an extra long time for that particular page to get crawled?
-
Just to add to the other answers, you can also remove the URLs (or entire directory if necessary) via the URL removal tool in Webmaster Tools, although Google prefers you to use it for emergencies of sorts (I've had no problems with it).
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=164734
-
No, nofollow only tells the bot that the page is a dead end - that it should not follow any links on the page. That means any links from those pages won't be visited by the bot, which slows the overall crawling process for those pages.
If you block a page in robots.txt and the page is already in the index, it will remain in the index: the noindex or content="none" directive won't be seen by the bot, so the page won't be removed - it just won't be visited anymore.
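To illustrate the trap (paths here are hypothetical):

```
# robots.txt
# Blocking crawl does NOT remove already-indexed pages.
User-agent: *
Disallow: /old-products/   # the bot never fetches these pages again,
                           # so it never sees their noindex tag
```

The pages have to stay crawlable until the bot has seen the noindex and dropped them; only then should you consider blocking them.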
-
Ok, so, nofollow is stopping the page from being read at all? I thought that nofollow just means the links on the page will not be followed. Is meta nofollow essentially the same as blocking a page in robots.txt?
-
Hi Howard,
The page is in Google's index because you are still linking to it from your website. Here is the page that links to it:
http://www.2mcctv.com/product_print-productinfoVeiluxVS70CDNRDhtml.html
Because you are still linking to that page, Google keeps indexing it. Google had already indexed the page before it came to know about the "noindex" tag. Sorry for my English.
Lindsay has written awesome post about it here:
http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts
After reading the above blog post, all my doubts about noindex, follow, and robots.txt were cleared up.
Thanks Lindsay
-
We always use the noindex directive in our robots.txt file.
-
Hi,
In order to deindex, you should use noindex, as content="none" also implies nofollow. You need follow for now so the bot can reach all the other pages, see the noindex tag, and remove them from the index.
When they are all out of the index, you can switch back to "none".
This is the main reason "none" is not widely used as a value - it's easy to "shoot yourself in the foot" with it.
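To make the equivalence concrete, here is a small sketch (my own illustration, not Google's actual parser) of how a crawler might interpret the directive:

```python
def parse_robots_meta(content):
    """Interpret a robots meta content string as (index, follow) flags.

    'none' is shorthand for 'noindex, nofollow'; anything not
    explicitly restricted defaults to allowed.
    """
    tokens = {t.strip().lower() for t in content.split(",")}
    if "none" in tokens:
        tokens |= {"noindex", "nofollow"}
    index = "noindex" not in tokens
    follow = "nofollow" not in tokens
    return index, follow

print(parse_robots_meta("none"))             # (False, False)
print(parse_robots_meta("noindex, follow"))  # (False, True)
```

With "none", the bot stops following links as well, so it reaches (and deindexes) your other tagged pages much more slowly.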
On the other hand, you need to check whether Googlebot is actually reaching those pages:
-
first, check that you don't have any robots.txt restrictions
-
check when Googlebot last hit any of the pages - that will give you a good idea and let you make a prediction.
If those pages are in the supplemental index, you may need to wait some time for Googlebot to revisit.
One last note: build XML sitemaps with all of those pages and submit them via WMT - that will definitely help get them in front of the firing squad and also let you monitor them better.
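A bare-bones sitemap for the pages you want revisited might look like this (URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- list every page carrying the noindex tag so the bot recrawls it -->
  <url>
    <loc>http://www.example.com/page-to-deindex-1.html</loc>
  </url>
  <url>
    <loc>http://www.example.com/page-to-deindex-2.html</loc>
  </url>
</urlset>
```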
Hope it helps.
-