Noindexing Thin Content Pages: Good or Bad?
-
If you have a massive number of pages with super thin content (such as pagination pages) and you noindex them, then once they are removed from Google's index (and assuming these pages aren't viewable to users and don't get any traffic), is it smart to remove them completely (404?), or is there any valid reason to keep them?
If you noindex them, should you keep all of the URLs in the sitemap so that Google will recrawl them and notice the noindex tag?
If you noindex them and then remove the sitemap, can Google still recrawl the pages and recognize the noindex tag on its own?
-
Sometimes you need to leave the crawl path open to Googlebot so it can get around the site. A specific example that may be relevant to you is pagination. If you have 100 products and are only showing 10 on the first page, Google will not be able to reach the other 90 product pages as easily if you block paginated pages in robots.txt. Better options in such a case might be a robots noindex,follow meta tag, rel next/prev tags, or a "view all" canonical page.
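To make those options concrete, here is a sketch of the corresponding markup in the <head> of a paginated page (the example.com URLs and page parameters are hypothetical, not from the question):

```html
<!-- Option 1: keep the paginated page out of the index, but let
     Googlebot follow its links through to the deeper product pages. -->
<meta name="robots" content="noindex, follow">

<!-- Option 2: declare the page's position in the paginated series,
     e.g. on page 2 of the product listing. -->
<link rel="prev" href="https://www.example.com/products?page=1">
<link rel="next" href="https://www.example.com/products?page=3">

<!-- Option 3: point every paginated page at a single "view all" page. -->
<link rel="canonical" href="https://www.example.com/products/view-all">
```

These are alternatives, not a set: you would normally pick whichever one fits how your pagination actually works.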
If these pages aren't important to the crawlability of the site, such as internal search results, you could block them in the robots.txt file with little or no issue, and it would help to get them out of the index. If they aren't useful for spiders, users, or anything else, then yes, you can and probably should let them 404 rather than blocking them.
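A minimal robots.txt sketch of that kind of block, assuming the internal search results live under a /search/ path (the path is a placeholder; adjust it to your own URL structure):

```
# Block all crawlers from internal search result pages.
User-agent: *
Disallow: /search/
```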
Yes, I do like to leave the blocked or removed URLs in the sitemap for just a little while to ensure Googlebot revisits them and sees the noindex tag, 404 error code, 301 redirect, or whatever else it needs to see in order to update the index. It will get there on its own eventually, but I find it faster to send it to the pages myself. Once Googlebot visits these URLs and updates the index, you should remove them from your sitemaps.
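As a rough sketch, a temporary sitemap entry for one of these URLs might look like the following (the URL and date are placeholders). Once Googlebot has revisited the page and the index is updated, the entry comes back out:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Temporarily listed so Googlebot revisits the URL and sees the
       noindex tag / 404 / 301; remove the entry once reprocessed. -->
  <url>
    <loc>https://www.example.com/products?page=2</loc>
    <lastmod>2015-06-01</lastmod>
  </url>
</urlset>
```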
-
If you want to noindex any of your pages, there is no way Google or any other search engine will think something fishy is going on. It's up to the webmaster to decide what does and does not get indexed from their website. If you implement a page-level noindex, link juice will still flow to the page; but if you also add nofollow along with the noindex, the link juice will still flow to the page yet will be contained on the page itself and will not be passed on through the links that point out of that page.
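In markup terms, the difference described above comes down to the content of the page-level robots meta tag (a sketch, not taken from the thread):

```html
<!-- noindex alone: the page stays out of the index, but its outbound
     links can still pass link juice. -->
<meta name="robots" content="noindex, follow">

<!-- noindex plus nofollow: the page stays out of the index AND its
     outbound links pass nothing on. -->
<meta name="robots" content="noindex, nofollow">
```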
To conclude: there is nothing wrong with making these pages non-indexable.
Here is an interesting discussion related to this on Moz:
http://moz.com/community/q/noindex-follow-is-a-waste-of-link-juice
Hope it helps.
Best,
Devanur Rafi
-
Devanur,
What I am asking is whether the robots/Google will view it as a negative thing to noindex pages that are still trying to pass link juice, even though the pages aren't even viewable to the front-end user.
-
If you don't wish to show these pages even to the front-end user, you can simply block them using the page-level robots meta tag, so that these pages will never be indexed by the search engines either.
Best,
Devanur Rafi
-
Yes, but what if these pages aren't even viewable to the front-end user?
-
Hi there, it is a very good idea to block any and all pages that do not provide useful content to visitors, especially when they are very thin content-wise. The idea is to keep low-quality content that does the visitor no good off the Internet; search engines would love every webmaster for doing so.
However, sometimes, no matter how thin the content on some pages is, they still provide good information to visitors and serve the purpose of the visit. In this case, you can provide contextual links to those pages and add the nofollow attribute to the links. Of course, you should ideally also be implementing page-level blocking using the robots meta tag on those pages. I do not think you should return a 404 on these pages, as there is no need to do so. When page-level blocking is implemented, Google will not index the blocked content even if it finds a third-party reference to it elsewhere on the Internet.
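A small sketch of that pattern, using a hypothetical thin-but-useful /size-chart page: the contextual link carries the nofollow attribute, and the target page itself carries the page-level robots meta tag:

```html
<!-- On the linking page: a contextual link with the nofollow attribute. -->
<a href="https://www.example.com/size-chart" rel="nofollow">View the size chart</a>

<!-- In the <head> of /size-chart itself: page-level blocking so the page
     is not indexed even if referenced from elsewhere. -->
<meta name="robots" content="noindex">
```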
If you have implemented the page-level noindex using the robots meta tag, there is no need for a sitemap containing these URLs. With noindex in place, as I mentioned above, Google will not index the content even if it discovers the page through a reference from anywhere on the Internet.
Hope it helps, my friend.
Best,
Devanur Rafi