NOINDEX content still showing in SERPs after 2 months
-
I have a website that was likely hit by Panda or another algorithm change. The hit occurred in September 2011. In December my developer added the following meta tag to all pages that do not have unique content:
<meta name="robots" content="noindex" />
It's been 2 months now and I feel I've been patient, but Google is still showing 10,000+ pages when I do a search for site:http://www.mydomain.com
I am looking for a quicker solution. Adding this many pages to the robots.txt does not seem like a sound option. The pages have been removed from the sitemap (for about a month now). I am trying to determine the best of the following options or find better options.
- 301 all the pages I want out of the index to a single URL based on the page type (location and product). The 301 worries me a bit because I'd have about 10,000 or so pages all 301ing to one or two URLs. However, I'd get some link juice to that page, right?
- Issue an HTTP 404 code on all the pages I want out of the index. The 404 seems like the safest bet, but I wonder whether Google suddenly seeing 10,000+ 404 errors will have a negative impact on my site.
- Issue an HTTP 410 code on all the pages I want out of the index. I've never used a 410, and while most of those pages are never coming back, I will eventually bring a small percentage back online as I add fresh content. This one scares me the most, but I'm interested to hear from anyone who has used a 410.
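For reference, each of these options can be implemented in a couple of lines of Apache config. This is only a sketch - the URL patterns below are hypothetical placeholders, not the site's actual location/product paths:

```apache
# Hypothetical .htaccess sketch; adjust the regexes to your real URL structure.

# Option 1: 301 every location page to a single hub URL
RedirectMatch 301 ^/locations/.+$ http://www.mydomain.com/locations/

# Option 2 (404) needs no config; deleted pages will 404 on their own.

# Option 3: answer 410 Gone for retired product pages
RedirectMatch 410 ^/products/.+$
```

Note that RedirectMatch with a 410 status takes no target URL; the pattern alone is enough.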
Please advise and thanks for reading.
-
Just wanted to let you know that putting all the URLs I wanted removed into an XML sitemap worked. I submitted that sitemap to Webmaster Tools and listed it in robots.txt. Running the query site:domain.com, indexed pages went from 20k+ down to 700 in a matter of days.
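For anyone trying the same approach: the "removal" sitemap is just an ordinary XML sitemap that lists only the noindexed URLs, so Googlebot recrawls them and sees the noindex sooner. A minimal sketch (the file name and page URL are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per noindexed page you want recrawled -->
  <url>
    <loc>http://www.mydomain.com/locations/some-thin-page.html</loc>
  </url>
</urlset>
```

The robots.txt reference is then a single line, e.g. Sitemap: http://www.mydomain.com/removal-sitemap.xml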
-
I could link to them then, but what about creating a custom sitemap for just content that I want removed? Would that have the same effect?
-
If they are not linked to, then spiders will not find the noindex tag. They could linger in the SERPs for months and months.
-
If all these pages are under a common directory, then you have the option to remove the complete directory with the URL removal tool. See if that is feasible in your case.
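For context, the directory-level removal in Webmaster Tools is normally backed by a robots.txt block on the same prefix, so the removal request applies to everything under it. A sketch, with the directory name as a placeholder:

```
User-agent: *
Disallow: /locations/
```

The removal then covers every URL beginning with that path, rather than one URL at a time.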
-
I suppose I'll wait longer. According to Webmaster Tools, crawl rate over the last 90 days has a high of 3,285 pages, an average of 550, and a low of 3.
-
Yeah the pages are low PR and are not linked to at all from the site. I've never heard of removing a page via webmaster tools. How do I do that? I also have to remove several thousand.
*edit: It looks like I have to remove them one at a time which is not feasible in my case. Is there a faster way?
-
If you want a page out of the index fast the best way is to do it through webmaster tools. It's easy and lasts for about six months. Then, if they find your page again it will register the noindex and you should be fine.
As EGOL said, if it's a page that isn't crawled very often then it could be a LONG time before it gets deindexed.
-
I removed some pages from the index and used the same line of code...
<meta name="robots" content="noindex" />
My pages dropped from the index within 2 or 3 days - but this is a site that has very heavy spider activity.
If your site is not crawled very much, or these are low-PR pages (such as PR1 or PR2), it could take Google a while to revisit and act on your noindex instructions - but two months seems a bit long.
Is your site being crawled vigorously? Look in webmaster tools to see if crawling declined abruptly when your rankings fell. Check there also for crawl problems.
If I owned your site and the PR of these pages is low, I would wait a while longer before doing anything. If my patience was wearing thin, I would do the 301 redirect, because that will transfer the link juice from those pages to the target URL of the redirect - however, you might wait quite a while to see the redirect take effect. That's why my first choice would be to wait longer.