Which pages to "noindex"
-
I have read through the many articles regarding the use of meta noindex, but what I haven't been able to find is a clear explanation of when, why, or where to use it.
I'm thinking that it would be appropriate to use it on:
legal pages such as privacy policy and terms of use
search results page
blog archive and category pages
Thanks for any insight on this.
-
Here are two posts that may be helpful, both in explaining how to set up a robots.txt for WordPress and in the thinking behind which parts to exclude.
http://www.cogentos.com/bloggers-guide-to-using-robotstxt-and-robots-meta-tags-to-optimise-indexing/
http://codex.wordpress.org/Search_Engine_Optimization_for_WordPress#Robots.txt_Optimization
The WordPress Codex article (second link) links to several other resources as well.
-
Yes, I'm using WordPress.
-
You also want to block any admin directory, plugin directory, etc. Are you using WordPress or another specific CMS? There are often best-practice posts on robots.txt files for specific platforms.
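As a rough sketch (assuming a default WordPress install; the exact paths and search URL pattern may differ on your setup), a robots.txt along these lines blocks the admin and plugin directories:

```text
# Hypothetical robots.txt for a default WordPress install
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /?s=
```

Note that robots.txt only blocks crawling, not indexing: a blocked URL can still appear in the index if other sites link to it. For pages you want dropped from the index entirely, the noindex meta tag is the right tool.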
-
Yes, generally you would noindex your About Us, Contact Us, privacy, and terms pages, since these are rarely searched for and are in fact so heavily linked to internally that they would rank well if indexed.
All search results pages should be noindexed; Google wants to do the searching itself.
Definitely do NOT noindex blog/category pages; these are your gold content!
I also noindex any URL accessed over HTTPS.
-
As well as pagination pages, I have read (but not done it myself) that you should consider using it on low-value pages that you wouldn't want to rank above other pages on the site (hopefully they wouldn't anyway), and also on HTML sitemaps, since you don't necessarily want them to appear in the index but definitely want their links followed.
-
Noindexed pages are pages that you want your link juice flowing through, but that you don't want ranking as individual entries in the search engines.
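For a concrete illustration, a noindexed-but-followed page carries a tag like this in its head:

```html
<head>
  <!-- Keep this page out of the index, but let link juice
       flow through the links on it -->
  <meta name="robots" content="noindex, follow">
</head>
```

The `follow` part is what keeps the juice flowing: the page itself won't rank, but the links on it still pass value.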
-
I think your legal pages should rank as individual pages. If I wanted to find your privacy policy and searched for 'privacy policy company name', I'd expect to find an entry where I can click and find your privacy policy.
-
Your search results pages (the internal ones) are great candidates for a noindex attribute. If a search engine robot happens to stumble upon one (via a link from somebody else, for example), you'd want the spider to start crawling pages from there and spreading link juice over your site. However, under most circumstances you don't want the result page itself to rank in the search engines, as it usually offers thin value to your visitors.
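One way to apply noindex to search results without touching templates is an X-Robots-Tag HTTP header. This is a hypothetical sketch assuming Apache 2.4+ with mod_headers enabled and WordPress-style `/?s=` search URLs; adjust the pattern for your own setup:

```apache
# Hypothetical .htaccess fragment: send a noindex header for
# internal search result pages (requires Apache 2.4+ and mod_headers)
<If "%{QUERY_STRING} =~ /(^|&)s=/">
    Header set X-Robots-Tag "noindex, follow"
</If>
```

Search engines treat the header the same way as the equivalent meta tag, which is handy for responses that have no HTML head at all.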
-
Blog archive and category pages are useful to visitors, and I personally wouldn't noindex these.
Bonus: your paginated results ('page 2+ in a result set that has multiple pages') are great candidates for noindex. It'll keep the juices running, without having all these pretty much meaningless (and highly dynamic) pages in the search index.
-