Noindexing Thin Content Pages: Good or Bad?
-
If you have a massive number of pages with super thin content (such as pagination pages) and you noindex them, once they are removed from Google's index (and if these pages aren't viewable to the user and/or don't get any traffic), is it smart to remove them completely (404?), or is there any valid reason to keep them?
If you noindex them, should you keep all of the URLs in the sitemap so that Google will recrawl them and notice the noindex tag?
If you noindex them and then remove them from the sitemap, can Google still recrawl the pages and recognize the noindex tag on its own?
-
Sometimes you need to leave the crawl path open to Googlebot so it can get around the site. A specific example that may be relevant to you is pagination. If you have 100 products and are only showing 10 on the first page, Google will not be able to reach the other 90 product pages as easily if you block paginated pages in robots.txt. Better options in such a case might be a robots noindex,follow meta tag, rel="next"/"prev" tags, or a "view all" canonical page.
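For example, the head of a paginated page might carry tags along these lines (a minimal sketch; the URLs are placeholders, not taken from the original question):

```html
<!-- Page 2 of a hypothetical paginated product listing -->
<meta name="robots" content="noindex,follow">
<link rel="prev" href="https://www.example.com/products/page/1">
<link rel="next" href="https://www.example.com/products/page/3">

<!-- Alternative approach: point each paginated page at a "view all" version instead -->
<link rel="canonical" href="https://www.example.com/products/view-all">
```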
If these pages aren't important to the crawlability of the site, such as internal search results, you could block them in the robots.txt file with little or no issue, and it would help to get them out of the index. If they aren't useful for spiders or users, or anything else, then yes, you can and probably should let them 404 rather than blocking them.
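A robots.txt rule for that kind of internal search page might look like this (just a sketch; the /search/ path is an assumed example, not from the question):

```
# robots.txt at the site root -- /search/ is a hypothetical internal search path
User-agent: *
Disallow: /search/
```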
Yes, I do like to leave the blocked or removed URLs in the sitemap for just a little while to ensure Googlebot revisits them and sees the noindex tag, 404 error code, 301 redirect, or whatever it is they need to see in order to update their index. They'll get there on their own eventually, but I find it faster to send them to the pages myself. Once Googlebot visits these URLs and updates their index, you should remove them from your sitemaps.
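As a sketch of what those temporary sitemap entries look like (the URLs below are hypothetical placeholders for the noindexed pages):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Hypothetical noindexed pagination URLs left in the sitemap temporarily
       so Googlebot recrawls them and sees the noindex tag -->
  <url><loc>https://www.example.com/products/page/2</loc></url>
  <url><loc>https://www.example.com/products/page/3</loc></url>
</urlset>
```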
-
If you want to noindex any of your pages, there is no way that Google or any other search engine will think something is fishy. It's up to the webmaster to decide what does and does not get indexed from his website. If you implement a page-level noindex, link juice will still flow to the page; but if you also add nofollow along with noindex, the link juice will still flow to the page, yet it will be contained on the page itself and will not be passed on to the links that point out of that page.
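To illustrate the difference described above (a sketch only, showing the standard robots meta tag values rather than a recommendation for any particular page):

```html
<!-- Keeps the page out of the index but lets its outgoing links pass link juice -->
<meta name="robots" content="noindex,follow">

<!-- Keeps the page out of the index and stops its outgoing links from passing link juice -->
<meta name="robots" content="noindex,nofollow">
```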
I'll conclude by saying there is nothing wrong with making these pages non-indexable.
Here is an interesting discussion related to this on Moz:
http://moz.com/community/q/noindex-follow-is-a-waste-of-link-juice
Hope it helps.
Best,
Devanur Rafi
-
Devanur,
What I am asking is whether Google will view it as a negative thing to noindex these pages while still trying to pass link juice through them, even though the pages aren't even viewable to the front-end user.
-
If you do not wish to show these pages even to the front-end user, you can simply block them using a page-level robots meta tag so that they will never be indexed by the search engines either.
Best,
Devanur Rafi
-
Yes, but what if these pages aren't even viewable to the front-end user?
-
Hi there, it is a very good idea to block any and all pages that do not provide useful content to visitors, especially when they are very thin content-wise. The idea is to keep low-quality content that does the visitor no good off the Internet. Search engines would love every webmaster for doing so.
However, sometimes, no matter how thin the content on some pages is, they still provide good information to visitors and serve the purpose of the visit. In this case, you can provide contextual links to those pages and add the nofollow attribute to the links. Of course, you should ideally implement page-level blocking using the robots meta tag on those pages as well. I do not think you should return a 404 on these pages, as there is no need to do so. When page-level blocking is implemented, Google will not index the blocked content even if it finds a third-party reference to it elsewhere on the Internet.
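A nofollowed contextual link of that kind might look like this (a sketch; the URL and anchor text are made-up placeholders):

```html
<!-- Hypothetical contextual link; rel="nofollow" asks search engines not to pass link juice through it -->
<a href="https://www.example.com/size-guide" rel="nofollow">See the full size guide</a>
```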
If you have implemented the page-level noindex using the robots meta tag, there is no need for a sitemap containing these URLs. With noindex in place, as I mentioned above, Google will not index the content even if it discovers the page through a reference from anywhere else on the Internet.
Hope it helps, my friend.
Best,
Devanur Rafi