NOINDEX content still showing in SERPS after 2 months
-
I have a website that was likely hit by Panda or some other algorithm change; the hit occurred in September 2011. In December my developer set the following meta tag on all pages that do not have unique content:
<meta name="robots" content="noindex" />
It's been 2 months now and I feel I've been patient, but Google is still showing 10,000+ pages when I do a site:www.mydomain.com search.
I am looking for a quicker solution. Adding this many pages to the robots.txt does not seem like a sound option. The pages have been removed from the sitemap (for about a month now). I am trying to determine the best of the following options or find better options.
- 301 all the pages I want out of the index to a single URL based on the page type (location and product). The 301 worries me a bit because I'd have about 10,000 or so pages all 301ing to one or two URLs. However, I'd get some link juice to that page, right?
- Issue an HTTP 404 code on all the pages I want out of the index. The 404 seems like the safest bet, but I am wondering if it will have a negative impact on my site with Google seeing 10,000+ 404 errors all of a sudden.
- Issue an HTTP 410 code on all the pages I want out of the index. I've never used the 410 code, and while most of those pages are never coming back, eventually I will bring a small percentage back online as I add fresh new content. This one scares me the most, but I am interested to hear if anyone has ever used a 410.
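The three options above boil down to one routing decision per removed URL. Here is a minimal sketch of that decision, framework-agnostic; all paths and redirect targets are hypothetical placeholders, not anything from the actual site:

```python
# Sketch of the three options as a single routing decision (hypothetical
# paths and hub targets; adapt to your server or framework). Each removed
# URL gets either a 301 to a hub page by page type, a 404, or a 410.

REMOVED = {
    "/location/old-town":  "location",  # hypothetical thin location page
    "/product/old-widget": "product",   # hypothetical thin product page
}
HUBS = {"location": "/locations", "product": "/products"}  # hypothetical 301 targets

def respond(path, mode="410"):
    """Return (status, redirect_target_or_None) for a requested path."""
    page_type = REMOVED.get(path)
    if page_type is None:
        return (200, None)             # not a removed page; serve normally
    if mode == "301":
        return (301, HUBS[page_type])  # consolidates link juice on the hub
    if mode == "404":
        return (404, None)             # Not Found: no permanence signal
    return (410, None)                 # Gone: explicit, permanent removal
```

The `mode` switch is only there to make the trade-off visible: 301 preserves link juice, 404 says nothing about permanence, and 410 is the strongest "this is gone on purpose" signal.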
Please advise and thanks for reading.
-
Just wanted to let you know that submitting all the URLs I wanted removed in an XML sitemap worked. I then submitted that sitemap in Webmaster Tools and listed it in robots.txt. When doing the query "site:domain.com", indexed pages went from 20k+ down to 700 in a matter of days.
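For anyone wanting to try the same thing, here is a rough sketch of generating that removal-only sitemap; the URLs are hypothetical placeholders, and the output still needs to be uploaded and submitted in Webmaster Tools:

```python
# Sketch: build an XML sitemap listing only the URLs you want deindexed,
# then submit it in Webmaster Tools and reference it from robots.txt.
from xml.sax.saxutils import escape

def build_removal_sitemap(urls):
    """Return a sitemaps.org-format XML document listing the given URLs."""
    lines = [
        '<?xml version="1.0" encoding="UTF-8"?>',
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
    ]
    for url in urls:
        lines.append("  <url><loc>%s</loc></url>" % escape(url))
    lines.append("</urlset>")
    return "\n".join(lines)

sitemap = build_removal_sitemap([
    "http://www.mydomain.com/old-page-1",  # hypothetical URLs to deindex
    "http://www.mydomain.com/old-page-2",
])
```

Listing the pages this way just gets them recrawled quickly so the noindex tag is actually seen; the sitemap itself doesn't remove anything.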
-
I could link to them then, but what about creating a custom sitemap for just content that I want removed? Would that have the same effect?
-
If they are not linked to then spiders will not find the noindex code. They could suffer in the SERPs for months and months.
-
If all these pages are under a directory structure, then you have the option to remove a complete directory via the URL removal tool. See if that is feasible in your case.
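If the thin pages do all live under a couple of folders, a directory-level removal request pairs with a matching robots.txt rule. A sketch, assuming hypothetical /locations/ and /products/ directories:

```
User-agent: *
Disallow: /locations/
Disallow: /products/
```

Note this only works if the pages really share a common path prefix; it does nothing for thin pages scattered across the site.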
-
I suppose I'll wait longer. Crawl rate over the last 90 days shows a high of 3,285, an average of 550, and a low of 3, according to Webmaster Tools.
-
Yeah the pages are low PR and are not linked to at all from the site. I've never heard of removing a page via webmaster tools. How do I do that? I also have to remove several thousand.
*edit: It looks like I have to remove them one at a time which is not feasible in my case. Is there a faster way?
-
If you want a page out of the index fast the best way is to do it through webmaster tools. It's easy and lasts for about six months. Then, if they find your page again it will register the noindex and you should be fine.
As EGOL said, if it's a page that isn't crawled very often then it could be a LONG time before it gets deindexed.
-
I removed some pages from the index and used the same line of code...
<meta name="robots" content="noindex" />
My pages dropped from the index within 2 or 3 days - but this is a site that has very heavy spider activity.
If your site is not crawled very much, or these are low-PR pages (such as PR1 or PR2), it could take Google a while to revisit and act upon your noindex instructions - but two months seems a bit long.
Is your site being crawled vigorously? Look in webmaster tools to see if crawling declined abruptly when your rankings fell. Check there also for crawl problems.
If I owned your site and the PR of these pages is low, I would wait a while longer before doing anything. If my patience was wearing thin, I would do the 301 redirect, because that will transfer the link juice from those pages to the target URL of the redirect - however, you might wait quite a while to see the redirect take effect. That's why my first choice would be to wait longer.