Which pages to "noindex"
-
I have read through many articles regarding the use of meta noindex, but what I haven't been able to find is a clear explanation of when, why, or on what to use it.
I'm thinking that it would be appropriate to use it on:
legal pages such as privacy policy and terms of use
search results page
blog archive and category pages
Thanks for any insight on this.
-
Here are two posts that may be helpful, both in explaining how to set up a robots.txt for WordPress and in the thinking behind which parts to exclude.
http://www.cogentos.com/bloggers-guide-to-using-robotstxt-and-robots-meta-tags-to-optimise-indexing/
http://codex.wordpress.org/Search_Engine_Optimization_for_WordPress#Robots.txt_Optimization
The WordPress link (the second one) also links to several other resources.
-
Yes, I'm using WordPress.
-
You also want to block any admin directory, plugin directory, etc. Are you using WordPress or another specific CMS? There are often best-practice posts for robots.txt files for specific platforms.
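For what it's worth, a minimal robots.txt sketch along these lines for a WordPress install might look like the following. The paths are the usual WordPress defaults rather than anything from this thread, so treat them as assumptions and adjust them to your own setup (and be aware that blocking plugin or include directories can also block CSS/JS files that crawlers may need):

```
User-agent: *
# Keep crawlers out of the admin area
Disallow: /wp-admin/
# Core and plugin directories (common advice at the time; check whether
# blocking these hides CSS/JS your pages rely on)
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
# Many setups still allow the AJAX endpoint
Allow: /wp-admin/admin-ajax.php
```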
-
Yes, generally you would noindex your About Us, Contact Us, Privacy, and Terms pages, since these are rarely searched for and are in fact so heavily linked to internally that they would rank well if indexed.
All internal search results pages should be noindexed; Google wants to do the searching itself.
Definitely NOT blog/category pages; these are your gold content!
I also noindex any URL accessed over HTTPS.
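For reference, the noindex itself is just a meta robots tag in the page's head; the snippet below is a generic illustration rather than anything specific to this site. Using "noindex,follow" keeps the page out of the index while still letting crawlers follow (and pass equity through) its links:

```html
<!-- Placed in the <head> of a page you want kept out of the index -->
<!-- "follow" tells crawlers to still follow the links on the page -->
<meta name="robots" content="noindex,follow">
```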
-
As well as pagination pages, I have read (though I haven't done it myself) that you should consider using it on low-value pages that you wouldn't want to rank above other pages on the site (hopefully they wouldn't anyway), and also on sitemap pages, since you don't necessarily want them to appear in the index but definitely want them followed.
-
Noindexed pages are pages that you want your link juice flowing through, but that you don't want to rank as individual entries in the search engines.
-
I think your legal pages should rank as individual pages. If I wanted to find your privacy policy and searched for 'privacy policy company name', I'd expect to find an entry I can click to reach your privacy policy.
-
Your search results pages (the internal ones) are great candidates for a noindex attribute. If a search engine robot happens to stumble upon one (via a link from somebody else, for example), you'd still want the spider to crawl onward from there and spread link juice over your site. However, under most circumstances you don't want the results page itself to rank in the search engines, as it usually offers thin value to your visitors.
-
Blog archive and category pages are useful to visitors, and I personally wouldn't noindex these.
Bonus: your paginated results ('page 2+ in a result set that has multiple pages') are great candidates for noindex. It'll keep the juices running without having all these pretty much meaningless (and highly dynamic) pages in the search index.
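If you wanted to apply that pagination (and internal search) advice in WordPress without an SEO plugin, a minimal sketch for a theme's functions.php might look like the one below. The function name is made up for illustration; is_search(), is_paged(), and the wp_head hook are standard WordPress APIs, and the sketch assumes nothing else is already printing a robots meta tag:

```php
<?php
// Illustrative sketch: add noindex,follow to internal search results
// and to paginated pages (page 2+), leaving everything else indexable.
function my_noindex_thin_pages() { // hypothetical name
    if ( is_search() || is_paged() ) {
        echo '<meta name="robots" content="noindex,follow">' . "\n";
    }
}
add_action( 'wp_head', 'my_noindex_thin_pages' );
```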
-