Why use noindex, follow vs rel next/prev
-
Look at what www.shutterstock.com/cat-26p3-Abstract.html does with page 3 of their search results for 'Abstract' - the same applies to pages 2-N in the paginated series:

<meta name="robots" content="noindex, follow">

Why is this a better alternative than using rel=next/prev, per Google's official statement on pagination? http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1663744
That page doesn't even mention this as an option. Any ideas? Does this improve the odds of the first page in the paginated series ranking for the target term? There can't be a 'view all' page because there are simply too many items.
- Jeff
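To make the pattern concrete, here's a minimal sketch (a hypothetical template helper, not Shutterstock's actual code) of emitting that robots meta tag only on pages 2-N of a paginated series, while leaving page 1 indexable:

```python
def robots_meta(page_number):
    """Return the robots meta tag for a page in a paginated series.

    Page 1 is the only page we want in the index; pages 2-N still pass
    link equity onward (follow) but stay out of the index (noindex).
    """
    if page_number == 1:
        return '<meta name="robots" content="index, follow">'
    return '<meta name="robots" content="noindex, follow">'
```

A template would call this with the current page number when rendering the `<head>`.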
-
Hmmm - good thought. I wonder if Google is giving out deliberately bad advice for dealing with paginated sets, in that they never mention noindex, follow as a viable alternative to rel=next/prev.
If each paginated page is all unique assets (photos), why would it be dupe?
J
-
I don't think they're "gaming" Googlebot - I think they're trying to help the bots crawl the site properly and index the relevant content, without creating hundreds of thousands of thin pages that would dilute their index and lower the overall value of the site in the search engine's eyes. In other words, they're trying to keep the Panda hungry rather than feed its appetite for low-quality content.
This is why they are noindexing the pages - not to game the system, but to actually play by the system's rules.
-
Thanks Mark - if you disable JavaScript or impersonate Googlebot using a browser extension, then click one of the main categories in the homepage's bottom nav, you arrive here:
http://www.shutterstock.com/cat-5-Education.html
Click next and you get a URL like this: http://www.shutterstock.com/cat-5p2-Education.html
which is noindex, follow.
If I arrive at the site without impersonating Googlebot, clicking next gives me:
http://www.shutterstock.com/cat-5-Education.html#page=2
with a canonical back to http://www.shutterstock.com/cat-5-Education.html
So it seems they are literally trying to game Google - is there any evidence this works?
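One way to verify user-agent-dependent behavior like this yourself is to fetch the same URL with two different User-Agent headers and compare the robots directives. A sketch (the user-agent strings and helper names are illustrative, and the live site may behave differently today):

```python
import re
import urllib.request

GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
BROWSER_UA = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36"

def robots_directives(html):
    """Pull the content of a robots meta tag out of an HTML page, if any."""
    match = re.search(
        r'<meta\s+name=["\']robots["\']\s+content=["\']([^"\']+)["\']',
        html, re.IGNORECASE)
    return match.group(1).lower() if match else None

def fetch_as(url, user_agent):
    """Fetch a URL while presenting a specific User-Agent header."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

Comparing `robots_directives(fetch_as(url, GOOGLEBOT_UA))` against the same call with `BROWSER_UA` would show whether the bot and a regular visitor get different directives on the same URL.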
-
It seems like they noindexed that page because it may be part of an antiquated version of the site navigation/structure, or something the CMS generates that they don't want to promote. Not sure how you got there, but when you reach the primary version of a category and click through to the next page, the items shown change via AJAX and the URL stays the same, apart from a fragment indicating that this is the second set of items.
With the URL staying the same on their primary navigation path, I don't think rel=prev/next would be relevant. My best guess is that these other pages are created by the CMS but not easily accessible, so they've noindexed them.
-
There's more than one way to skin a cat. While rel=next/prev is an option, you could also dump everything onto one page, OR you could noindex your search page and let your sitemap do the work of notifying Google of your pages. I don't know that it's better (I would guess not, but that's just a guess), but you could do it that way without hurting yourself.
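The sitemap route mentioned above can be sketched as follows - a minimal generator that lists only the canonical first page of each category (names and URLs are illustrative), leaving the noindexed pages 2-N to be discovered by crawling:

```python
def sitemap_for_categories(category_urls):
    """Build a minimal XML sitemap listing only the canonical first page
    of each category; paginated pages 2-N are deliberately omitted since
    they are noindex, follow and reachable by crawling anyway."""
    entries = "\n".join(
        "  <url><loc>{}</loc></url>".format(url) for url in category_urls)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            + entries + "\n</urlset>")
```

This way Google is told explicitly about the pages you do want ranked, without the sitemap contradicting the noindex directives on the deeper pages.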