Wow, yes - sorry about that. I've updated it. Google's original write-up actually covers this case, too (it's toward the end):
http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
This gets tricky fast. Google currently wants rel=prev/next to include whatever parameters are in use (like sorts) on the page you're on, and then wants you to rel-canonical to the non-parameterized version. So, if the URL is:
http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html?cp=3&lpg=40
...then the tags should be...
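Something along these lines - just a sketch, assuming cp is the page number and lpg is a display setting (results per page), and reading "non-parameterized" as dropping that extra display parameter rather than the page number itself (otherwise every page in the series would effectively canonical to page 1):

<link rel="prev" href="http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html?cp=2&lpg=40" />
<link rel="next" href="http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html?cp=4&lpg=40" />
<link rel="canonical" href="http://www.virtualsheetmusic.com/downloads/Indici/Guitar.html?cp=3" />

Adjust to however your parameters actually behave - the key is that prev/next carry the parameters currently in play, while the canonical points at the cleaner version of that same page.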
Yeah, it's a bit strange. They have suggested that it's ok to rel-canonical to a "View All" page, but with the kind of product volume you have, that's generally a bad idea (for users and search). They have specifically recommended against setting rel-canonical to Page 1 of search results, especially if you use rel=prev/next.
Rel=prev/next will still leave the pages in the index, but I've found it to work pretty well. The other option is the more classic approach of simply adding META NOINDEX, FOLLOW to pages 2+. That can still be effective, but it's getting less common.
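If you go the NOINDEX, FOLLOW route, it's just the standard robots meta tag in the <head> of page 2 and beyond:

<meta name="robots" content="noindex, follow">

Google will keep crawling the links on those pages but drop the pages themselves from the index.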
Adam Audette generally writes strong posts about this topic - here's a good, recent one:
http://searchengineland.com/the-latest-greatest-on-seo-pagination-114284
I have to disagree on this one. If Google honors a canonical tag, the non-canonical page will generally disappear from the index, at least inasmuch as we can measure it (with "site:", getting it to rank, etc.). It's a strong signal in many cases.
This is part of the reason Google introduced rel=prev/next for paginated content. With canonical, pages in the series aren't usually able to rank. Rel=prev/next allows them to rank without clogging up the index (theoretically). For search pagination, it's generally a better solution.
If your paginated content is still showing up in large quantities in the index, Google may not be honoring the canonical tag properly, and those pages could be causing duplicate content issues. It depends on the implementation, but their current recommendation is not to canonical to the first page of search results - Google may choose to ignore the tag in some cases.
Just to add to the consensus (although credit goes to multiple people on the thread) - PR-sculpting with nofollow on internal links no longer works, and it can be counter-productive. If these links are needed for users, don't worry about them, and don't disrupt PR flow through your site. Ultimately, you're only talking about a few pages, and @sprynewmedia is right - Google probably discounts footer links even internally (although we have no good way to measure this).
Be careful with links like "register", though, because sometimes they spin off URL variations, and you don't want those all indexed. In that case, you'd probably want to NOINDEX the target page - it just doesn't have any search value. I'm not seeing that link in your footer, though, so I'm not clear on what it does. I see this a lot with "login" links.
100% agreed - 403 isn't really an appropriate alternative to 404. I know SEOs who claim that 410s are stronger/faster, but I haven't seen great evidence in the past couple of years. It's harmless to try 410s, but I wouldn't expect miracles.
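If you do want to test 410s and you happen to be on Apache, one simple way is a mod_alias rule in .htaccess - /discontinued-page here is just a placeholder for the removed URL:

Redirect gone /discontinued-page

A CMS setting or a server-side check works just as well; all that matters is that the response code actually comes back as 410.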
Let me jump in and clarify one small detail. If you delete a page, which would naturally result in a 404, but then 301-redirect that page/URL, there is no 404. I understand the confusion, but ultimately you can only have one HTTP status code. So, if the page properly 301s, it will never return a 404, even if it's technically deleted.
If the page 301s to a page that looks like a "not found" sort of page (content-wise), Google could consider that a "soft 404". Typically, though, once the 301 is in place, the 404 is moot.
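To make that concrete, here's roughly what a crawler sees once the redirect is in place (hypothetical URLs):

GET /deleted-page
HTTP/1.1 301 Moved Permanently
Location: /replacement-page

GET /replacement-page
HTTP/1.1 200 OK

Each request gets exactly one status code - the old URL answers 301, the destination answers 200, and a 404 never enters the picture unless the destination itself is missing.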
For any change in status, removing the crawl paths could slow down Google's re-processing of those pages. Even if you delete a page, Google has to re-crawl it to see the 404. Now, if it's a high-authority page or has inbound (external) links, it could get re-crawled even if you cut the internal links. If it's a deep, low-value page, though, it may take Google a long time to get back and see those new signals. So, sometimes we recommend keeping the paths open.
There are other ways to kick Google into re-crawling, such as keeping an XML sitemap live with those pages in it (even after you remove the internal links). These signals aren't as powerful, but they can help the process along.
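For reference, the sitemap entries themselves are dead simple - just list the URLs you want Google to come back and look at (the example.com URLs are obviously placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/removed-page-1</loc>
  </url>
  <url>
    <loc>http://www.example.com/redirected-page-2</loc>
  </url>
</urlset>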
As to your specific questions:
(1) It's very tricky in practice, especially at large scale. I think step 1 is to dig into your index/cache (slice and dice with the site: operator - a couple of example queries are below) and see if Google has actually removed these pages. There are cases where massive 301s, etc. can look fishy to Google, but usually, once a page is gone, it's gone. If Google has processed the redirects/removals and you're still penalized, then you may be fixing the wrong problem, or possibly haven't gone far enough.
(2) It really depends on the issue. If you cut too deep and somehow cut off crawl paths or stranded inbound links, then you may need to re-establish some links/pages. If you 301'ed a lot of low-value content (and possibly bad links), you may actually need to cut some of those 301s and let those pages die off. I agree with @mememax that sometimes a healthy combination of 301s/404s is a better bet - pages go away, and 404s are normal if there's really no good alternative to the page that's gone.
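On the slicing and dicing in (1), these are the kinds of queries I mean - example.com and the folder/title are just placeholders for your own domain and URL patterns:

site:example.com
site:example.com inurl:/old-folder/
site:example.com intitle:"some boilerplate title"

Compare the counts and the actual listed URLs against what you expect to be gone; big discrepancies usually point to pages Google hasn't re-processed yet.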