Which pages to "noindex"
-
I have read through many articles on the meta noindex tag, but what I haven't been able to find is a clear explanation of when, why, or on which pages to use it.
I'm thinking that it would be appropriate to use it on:
legal pages such as privacy policy and terms of use
search results page
blog archive and category pages
Thanks for any insight on this.
-
Here are two posts that may be helpful, both in explaining how to set up a robots.txt for WordPress and in the thinking behind which parts to exclude.
http://www.cogentos.com/bloggers-guide-to-using-robotstxt-and-robots-meta-tags-to-optimise-indexing/
http://codex.wordpress.org/Search_Engine_Optimization_for_WordPress#Robots.txt_Optimization
The WordPress page (the second link) also links to several other resources.
-
Yes, I'm using WordPress.
-
You also want to block any admin directories, plugin directories, etc. Are you using WordPress or another specific CMS? There are often best-practice posts on robots.txt files for specific platforms.
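For illustration, a minimal robots.txt sketch along those lines (the paths assume a default WordPress install; adjust them to your own layout):

```
# Keep crawlers out of the admin, includes, and plugin areas (default WordPress paths)
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
```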
-
Yes, generally you would noindex your About Us, Contact Us, privacy, and terms pages, since these are rarely searched for and are in fact so heavily linked to internally that they would rank well if left indexed.
All internal search results pages should be noindexed; Google wants to do the searching itself.

Definitely NOT blog/category pages - these are your gold content!
I also noindex any URLs accessed over HTTPS.
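For anyone wondering what the tag itself looks like, it goes in the page's head section; "follow" is the default behaviour anyway, but spelling it out makes the intent clear:

```html
<!-- Keeps this page out of the index while still letting spiders follow its links -->
<meta name="robots" content="noindex,follow">
```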
-
As well as pagination pages, I have read (though haven't done it myself) that you should consider using it on low-value pages that you wouldn't want to rank above other pages on the site (hopefully they wouldn't anyway), and also on sitemaps, as you don't necessarily want them to appear in the index but definitely want them followed.
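One caveat: if the sitemap in question is an XML file, you can't put a meta tag in it; the usual workaround is an X-Robots-Tag HTTP header. A sketch, assuming an Apache server with mod_headers enabled and a sitemap named sitemap.xml:

```apache
# Serve a noindex header with the XML sitemap (requires mod_headers)
<Files "sitemap.xml">
  Header set X-Robots-Tag "noindex"
</Files>
```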
-
Noindexed pages are pages that you want your link juice flowing through, but that you don't want ranking as individual entries in the search engines.
-
I think your legal pages should rank as individual pages. If I wanted to find your privacy policy and searched for 'privacy policy company name', I'd expect to find an entry I can click to reach your privacy policy.
-
Your internal search results pages are great candidates for a noindex attribute. If a search engine robot happens to stumble upon one (via a link from somebody else, for example), you'd still want the spider to crawl on from there and spread link juice over your site. Under most circumstances, though, you don't want the results page itself to rank in the search engines, as it usually offers thin value to your visitors.
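Since this thread is about WordPress, one common way to apply that is a conditional in the theme's header.php (a sketch; is_search() is a standard WordPress conditional, and the snippet assumes it sits inside the head section of the template):

```php
<?php if ( is_search() ) : ?>
  <!-- Keep internal search results out of the index, but let the spider follow links -->
  <meta name="robots" content="noindex,follow">
<?php endif; ?>
```

The same pattern would cover the paginated archives mentioned below, using the is_paged() conditional instead.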
-
Blog archive and category pages are useful to visitors, and I personally wouldn't noindex them.
Bonus: your paginated results ('page 2+' in a result set that spans multiple pages) are great candidates for noindex. It'll keep the juices flowing without putting all these pretty much meaningless (and highly dynamic) pages in the search index.