Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Paging: is it better to use noindex, follow?
-
Is it better to use the robots meta noindex, follow tag for paging (page 2, page 3) of category pages that list the items within each category,
or just to let Google index these pages?
Before Panda I was not using noindex, because I figured that if page 2 is in Google's index, then the items on page 2 are more likely to be in Google's index too. Each item also gets an internal link that way.
After I got hit by Panda, I started thinking: page 2 has no unique content, only a list of links with a short excerpt from each item, and those excerpts can also be found on each item's own page, so it isn't unique content. Maybe that contributed to the Panda penalty. So I placed the meta tag noindex, follow on every page 2, 3, etc. of each category. Page 1 of each category has a short introduction, which I hope is enough to make it "thick" content (is that a word? :-)). My visitors don't want long introductions; they hurt bounce rate and time on site.
Now I'm wondering whether that is common practice, and whether items on page 2 are less likely to be indexed since they have no internal links from an indexed page.
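For clarity, this is roughly the tag I added to the head of every page 2, 3, etc. of each category (the URL is just an example, not my real site):
<!-- e.g. on https://www.example.com/widgets?page=2 -->
<meta name="robots" content="noindex, follow">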
Thanks!
-
Hi Theo, this is an old post you commented on, but I wanted to expand on the question and ask your thoughts.
I have a real estate website where I show MLS listings (properties for sale shared by Realtors), which means these MLS listings also exist on 100+ other real estate sites. For my various MLS result pages I use rel=prev / next on the paginated pages.
Now, here is the question: should I also add "noindex, follow" to these paginated pages? According to a Google blog post, there is no need to use it when rel=prev / next is in place. However, in my case these pages are very similar to pages elsewhere on the web and are not original content. Yes, I know I could make them more unique by adding content, but that is not what my users want; I need a simple, clean look with minimal words.
So, if I have a result set with 10 pages, would noindex, follow on 9 of those pages make sense to reduce the duplicate content on my website? Or is the issue that my result pages will look "thin" compared to competitors, and that will impact my ranking negatively?
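For illustration, here is roughly what the head of page 2 in a 10-page result set would look like if I combined the two approaches (the URLs are placeholders, not my actual site):
<!-- page 2 of a 10-page MLS result set -->
<link rel="prev" href="https://www.example.com/mls-results?page=1">
<link rel="next" href="https://www.example.com/mls-results?page=3">
<!-- optional: keep pages 2-10 out of the index while still letting crawlers follow the listing links -->
<meta name="robots" content="noindex, follow">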
-
Google just announced some tags to better support pagination. They say that if you have a view-all option that doesn't take too long to load, searchers generally prefer it, so you can rel=canonical to that page. However, if you don't have a view-all page, you can put these nifty rel="next" and rel="prev" tags in to let Google know your page is part of a paginated series, and where the next and previous pages are.
View all: http://googlewebmastercentral.blogspot.com/2011/09/view-all-in-search-results.html
next/prev: http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
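Roughly, the two options look like this (placeholder URLs; see the posts above for the details):
<!-- Option 1: a fast-loading view-all page exists, so each paginated page canonicals to it -->
<link rel="canonical" href="https://www.example.com/category/view-all">
<!-- Option 2: no view-all page, so each page declares its place in the series; e.g. on page 2 -->
<link rel="prev" href="https://www.example.com/category?page=1">
<link rel="next" href="https://www.example.com/category?page=3">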
-
I was talking about the same concept you're describing when I mentioned category listings. The next/previous and related-item links sound exactly like the kind of thing I would recommend to get links to the items on pages > 1! Lastly, yes, the canonical URL should be the page we're actually viewing, and not always page 1.
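As a quick sketch with a placeholder URL, page 2 would reference itself:
<!-- on https://www.example.com/category?page=2 -->
<link rel="canonical" href="https://www.example.com/category?page=2">
<!-- not: <link rel="canonical" href="https://www.example.com/category?page=1"> -->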
-
What do you mean by category listings? I'm talking about category pages where each item in the category is listed.
I do link from product or item pages to each other using next, previous and related items.
Also, I'm pretty sure about this but just asking: should rel=canonical for pages 2 and 3 point to that page itself and not to page 1?
-
You're welcome! It is a link from one page of your website to another, thus an internal link; I don't see how noindex, follow would change that. Yes, they will receive link juice: because of the follow in the robots tag, the pages (even though they aren't indexed) still pass link juice. Like I said in my original post, though, it is best to have other pages (such as category listings, for example) link to these items as well.
-
Thanks for the answer.
Does a link from a page with noindex,follow count as an internal link? Will the items on page 2 receive any link juice, if their only internal link is from a noindexed page?
What do you think?
-
From what I've read on the internet, it is best to "noindex, follow" all pages > 1. This issue had bugged me for quite some time as well, and I struggled to find good resources explaining why their solution was the best. Now that I've actually given the subject some thought, and finally managed to read some quality material on the matter, it all makes sense.
It's basically a checklist. Do you want search engines to:
- index your paginated result pages: yes / no
- reach the items that are listed in your paginated result pages: yes / no
In most cases you don't want your paginated result pages to be indexed. With or without Panda, visitors get little value from landing on 'page 7' of your results; that page on its own provides little or no value to them. However, you DO want the items listed on those paginated pages to be crawled, especially when you don't have any other pages linking to them (which you should have, by the way). This boils down to:
- Don't nofollow your paginated links (because you want search engine spiders to reach them).
- Put "noindex, follow" in the meta robots tag of all pages > 1 (thus page 2 and greater), so the engines will not index these paginated results but will still crawl through to the pages behind the listings (see the sketch below).
Good luck!