What's the best way to noindex pages but still keep backlink equity?
-
Hello everyone,
Maybe it's a stupid question, but I'll ask the experts... What's the best way to noindex pages but still keep the backlink equity from those noindexed pages?
For example, let's say I have many pages that look similar to a "main" page, which is the only one I want to appear on Google. So I want to noindex all pages except that "main" page... but what if I also want to transfer any link equity present on the noindexed pages to the main page?
The only solution I have thought of is to add a canonical tag pointing to the main page on those noindexed pages... but will that work, or will it wreak havoc in some way?
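For what it's worth, here is roughly the markup I have in mind for the head of each secondary page (the example.com URLs are just placeholders, not my real site):

```html
<!-- On a secondary page, e.g. http://www.example.com/category.html?cp=2 -->
<!-- Keep it out of the index, but let Google follow its links -->
<meta name="robots" content="noindex, follow">
<!-- Canonical pointing at the one "main" page I want indexed -->
<link rel="canonical" href="http://www.example.com/category.html">
```

Is this combination safe, or do the two tags send Google conflicting signals?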
-
Thank you, Chris, for your in-depth answer; you just confirmed what I suspected.
To clarify, though: what I am trying to save by noindexing those subsequent pages is "indexing budget", not "crawl budget". You know the famous "indexing cap"? I am also trying to tackle possible "duplicate" and "thin" content issues with such "similar but different" pages. The fact is, our website has been hit by Panda several times. We recovered several times as well, but we were hit again by the latest quality update last June, and we are trying to find a way out of it once and for all. Hence my attempt to reduce the number of similar indexed pages as much as we can.
I have just opened a discussion on this "Panda-non-sense" issue, and I'd like to know your opinion about it:
https://moz.com/community/q/panda-rankings-and-other-non-sense-issues
Thank you again.
-
Hi Fabrizo,
That's a tricky one given the sheer volume of pages/music on the site. Typically the cleanest way to handle all of this is to offer up a View All page and canonical back to that, but in your case a View All page would scroll on forever!
Canonical is not the answer here. It's made for handling duplicate pages like this:
www.website.com/product1.html
www.website.com/product1.html&sid=12432
In this instance, both pages are 100% identical, so the canonical tag tells Google that any variation of product1.html is actually just that page and should be counted as such. What you've got here is pagination, so while the pages are mostly the same, they're not identical.
Instead, this is exactly what rel=prev/next is for which you've already looked into. It's very hard to find recent information on this topic but the traditional advice from Google has been to implement prev/next and they will infer the most important page (typically page one) from the fact that it's the only page that has a rel=next but no rel=prev (because there is no previous page). Apologies if you already knew all of this; just making sure I didn't skim over anything here. Google also says these pages will essentially be seen as a single unit from that point and so all link equity will be consolidated toward that block of pages.
Canonical and rel=next/prev do act separately so by all means if you have search filters or anything else that may alter the URL, a canonical tag can be used as well but each page here would just point back to itself, not back to page 1.
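As a quick sketch (placeholder URLs, not your site), page 2 of a paginated series would carry markup along these lines:

```html
<!-- In the <head> of page 2 of the series -->
<link rel="prev" href="http://www.example.com/category.html">
<link rel="next" href="http://www.example.com/category.html?cp=3">
<!-- Self-referencing canonical: each page points at itself, not at page 1 -->
<link rel="canonical" href="http://www.example.com/category.html?cp=2">
```

Page 1 would carry only a rel="next", and the final page only a rel="prev", which is how Google infers where the series starts and ends.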
This clip from Google's Maile Ohye is quite old but the advice in here clears a few things up and is still very relevant today.
With that said, the other point you raised is very valid - what to do about crawl budget. Google also suggests just leaving them as-is since you're only linking to the first 5 pages and any links beyond that are buried so deep in the hierarchy they're seen as a low priority and will barely be looked at.
From my understanding (though I'm a little hesitant on this one), noindexed pages do retain their link equity. Noindex doesn't say 'don't crawl me' (which also means it won't help your crawl budget; that would have to be done through robots.txt), it says 'don't include me in your index'. By this logic it would make sense that links pointing to a noindexed page would still be counted.
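To make that distinction concrete, the meta tag version looks like this (a generic sketch, not code from your site):

```html
<!-- Crawled as normal, links followed, but the page is kept out of the index -->
<meta name="robots" content="noindex, follow">
```

A robots.txt Disallow rule, by contrast, stops crawling of those URLs entirely, so Google would never even see the noindex tag or the links on those pages.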
-
You are right, it's hard to give advice without the specific context.
Well, here is the problem I am facing: we have an e-commerce website and each category has several hundred if not thousands of pages. Now, I want just the first page of each category to appear in the index, in order not to waste the index cap and to avoid possible duplicate issues. Therefore I want to noindex all subsequent pages and index just the first page (which is also the richest).
Here is an example from our website, our piano sheet music category page:
http://www.virtualsheetmusic.com/downloads/Indici/Piano.html
I want that first page to be in the index, but not the subsequent ones:
http://www.virtualsheetmusic.com/downloads/Indici/Piano.html?cp=2
http://www.virtualsheetmusic.com/downloads/Indici/Piano.html?cp=3
etc...
After playing with canonicals and rel=next, I have realized that Google still keeps those useless pages in the index, whereas removing them could help with both index cap issues and possible Panda penalties (too many similar, not very useful pages). But is there any way to keep any possible link equity of those subsequent pages while noindexing them? Or is that link equity preserved on those pages, and on the overall domain, anyway? And better yet, is there a way to move all that possible link equity to the first page somehow?
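For context, here is a simplified version of the kind of markup I experimented with on the subsequent pages:

```html
<!-- On http://www.virtualsheetmusic.com/downloads/Indici/Piano.html?cp=2 -->
<link rel="prev" href="http://www.virtualsheetmusic.com/downloads/Indici/Piano.html">
<link rel="next" href="http://www.virtualsheetmusic.com/downloads/Indici/Piano.html?cp=3">
```

Even with that in place, those ?cp=2, ?cp=3, etc. pages keep showing up in the index.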
I hope this makes sense. Thank you for your help!
-
Apologies for the indirect answer, but I would have to ask: why?
If these pages are almost identical and you only want one of them to be indexed, in most situations the users would probably benefit from there only being that one main page. Cutting down on redundant pages is great for UX, crawl budget and general site quality.
Maybe there is a genuine reason for it, but without knowing the context it's hard to give accurate info on the best way to handle it.