What's the best way to noindex pages but still keep backlinks equity?
-
Hello everyone,
Maybe it is a stupid question, but I'll ask the experts... What's the best way to noindex pages but still keep backlink equity from those noindexed pages?
For example, let's say I have many pages that look similar to a "main" page, and that main page is the only one I want to appear on Google. So I want to noindex every page except that "main" page... but what if I also want to transfer any possible link equity present on the noindexed pages to the main page?
The only solution I have thought of is to add a canonical tag pointing to the main page on those noindexed pages... but will that work, or will it wreak havoc in some way?
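Just to make it concrete, I mean putting something like this in the head of every "similar" page (the URL here is only a placeholder, not our real one):
<meta name="robots" content="noindex, follow">
<link rel="canonical" href="https://www.example.com/main-page.html">
That's the combination I am unsure about.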
-
Thank you Chris for your in-depth answer, you just confirmed what I suspected.
To clarify though, what I am trying to save here by noindexing those subsequent pages is "indexing budget", not "crawl budget". You know the famous "indexing cap"? I am also trying to tackle possible "duplicate" or "thin" content issues with such "similar but different" pages... The fact is, our website has been hit by Panda several times, and we recovered several times as well, but we have been hit again with the latest quality update last June, and we are trying to find a way to get out of it once and for all. Hence my attempt to reduce the number of similar indexed pages as much as possible.
I have just opened a discussion on this "Panda-non-sense" issue, and I'd like to know your opinion about it:
https://moz.com/community/q/panda-rankings-and-other-non-sense-issues
Thank you again.
-
Hi Fabrizio,
That's a tricky one given the sheer volume of pages/music on the site. Typically the cleanest way to handle all of this is to offer up a View All page and canonical back to that, but in your case a View All page would scroll on forever!
Canonical is not the answer here. It's made for handling duplicate pages like this:
www.website.com/product1.html
www.website.com/product1.html&sid=12432
In this instance, both pages are 100% identical, so the canonical tag tells Google that any variation of product1.html is actually just that page and should be counted as such. What you've got here is pagination, so while the pages are mostly the same, they're not identical.
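In that scenario the parameterised URL would simply carry a canonical pointing at the clean version, something along these lines (using the example URL above):
<link rel="canonical" href="http://www.website.com/product1.html">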
Instead, this is exactly what rel=prev/next is for which you've already looked into. It's very hard to find recent information on this topic but the traditional advice from Google has been to implement prev/next and they will infer the most important page (typically page one) from the fact that it's the only page that has a rel=next but no rel=prev (because there is no previous page). Apologies if you already knew all of this; just making sure I didn't skim over anything here. Google also says these pages will essentially be seen as a single unit from that point and so all link equity will be consolidated toward that block of pages.
Canonical and rel=next/prev do act separately, so by all means, if you have search filters or anything else that may alter the URL, a canonical tag can be used as well, but each page here would just point back to itself, not back to page 1.
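So on a hypothetical page 2 of a paginated series, the head might look something like this (the URLs are placeholders, not your real ones):
<link rel="prev" href="http://www.example.com/category.html?cp=1">
<link rel="next" href="http://www.example.com/category.html?cp=3">
<link rel="canonical" href="http://www.example.com/category.html?cp=2">
Note that the canonical references the page itself, not page 1.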
This clip from Google's Maile Ohye is quite old but the advice in here clears a few things up and is still very relevant today.
With that said, the other point you raised is very valid - what to do about crawl budget. Google also suggests just leaving them as-is since you're only linking to the first 5 pages and any links beyond that are buried so deep in the hierarchy they're seen as a low priority and will barely be looked at.
My understanding (though I'm a little hesitant on this one) is that noindexed pages do retain their link equity. Noindex doesn't say 'don't crawl me' (which also means it won't help your crawl budget; that would have to be done through robots.txt), it says 'don't include me in your index'. So by that logic it would make sense that links pointing to a noindexed page would still be counted.
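To illustrate the difference with placeholder paths, blocking crawling is done in robots.txt, for example:
User-agent: *
Disallow: /some-section/
...whereas noindex goes in the head of the page itself, which Google still crawls but keeps out of its index:
<meta name="robots" content="noindex, follow">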
-
You are right, it's hard to give advice without the specific context.
Well, here is the problem that I am facing: we have an e-commerce website and each category has several hundred if not thousands of pages... Now, I want just the first page of each category to appear in the index, in order not to waste the index cap and to avoid possible duplicate issues; therefore I want to noindex all the subsequent pages and index just the first page (which is also the richest).
Here is an example from our website, our piano sheet music category page:
http://www.virtualsheetmusic.com/downloads/Indici/Piano.html
I want that first page to be in the index, but not the subsequent ones:
http://www.virtualsheetmusic.com/downloads/Indici/Piano.html?cp=2
http://www.virtualsheetmusic.com/downloads/Indici/Piano.html?cp=3
etc...
After playing with canonicals and rel=next, I have realized that Google still keeps those not-very-useful pages in the index, whereas removing them could help with both index cap issues and possible Panda penalties (too many similar, not very useful pages). But is there any way to keep any possible link equity of those subsequent pages while noindexing them? Or is the link equity preserved on those pages, and on the overall domain, anyway? And, better still, is there a way to move all that possible link equity to the first page in some way?
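For example, on http://www.virtualsheetmusic.com/downloads/Indici/Piano.html?cp=2 I am thinking of adding something like this (the canonical pointing back to the first page is the part I am unsure about):
<meta name="robots" content="noindex, follow">
<link rel="canonical" href="http://www.virtualsheetmusic.com/downloads/Indici/Piano.html">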
I hope this makes sense. Thank you for your help!
-
Apologies for the indirect answer but I would have to ask "why"?
If these pages are almost identical and you only want one of them to be indexed, in most situations the users would probably benefit from there only being that one main page. Cutting down on redundant pages is great for UX, crawl budget and general site quality.
Maybe there is a genuine reason for it, but without knowing the context it's hard to give accurate info on the best way to handle it.
Related Questions
-
New Subdomain & Best Way To Index
We have an ecommerce site, we'll say at https://example.com. We have created a series of brand new landing pages, mainly for PPC and Social, at https://sub.example.com, but would also like these to get indexed. These are built on Unbounce, so there is an easy option to simply uncheck the box that says "block page from search engines"; however, I am trying to speed up this process but also do this the best/correct way. I've read a lot about how we should build landing pages in a sub-directory, but one of the main issues we are dealing with is long page load time on https://example.com, so I wanted a kind of fresh start. I was thinking a potential solution to index these quickly/correctly was to make a redirect such as https://example.com/forward-1 -> https://sub.example.com/forward-1, then submit https://example.com/forward-1 to Search Console, but I am not sure that will even work. Another possible solution was to put some of the subdomain links on the root domain, say right on the pages or in the navigation. Also, will I definitely be hurt by 'starting over' with a new website, even though MozBar shows my subdomain https://sub.example.com has the same domain authority (DA) as the root domain https://example.com? Recommendations and steps to be taken are welcome!
Intermediate & Advanced SEO | Markbwc
-
Using the same content on different TLDs
Hi everyone, We have clients we are going to work with in different countries, but sometimes with the same language. For example, we might have a client in a competitive niche working in Germany, Austria and Switzerland (Swiss German), i.e. we're potentially going to rewrite our website three times in German. We're thinking of using Google's hreflang tags and using pretty much the same content - is this a safe option? Has anyone actually tried this, successfully or otherwise? All answers appreciated. Cheers, Mel.
Intermediate & Advanced SEO | dancape
-
Best way to block a sub-domain from being indexed
Hello, The search engines have indexed a sub-domain I did not want indexed - it's on old.domain.com and dev.domain.com. I was going to password-protect them, but is there a best-practice way to block them? My main domain's default robots.txt says:
Sitemap: http://www.domain.com/sitemap.xml
# global
User-agent: *
Disallow: /cgi-bin/
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/cache/
Disallow: /wp-content/themes/
Disallow: /trackback/
Disallow: /feed/
Disallow: /comments/
Disallow: /category/*/*
Disallow: */trackback/
Disallow: */feed/
Disallow: */comments/
Disallow: /*?
Intermediate & Advanced SEO | JohnW-UK
-
Can too many "noindex" pages compared to "index" pages be a problem?
Hello, I have a question for you: our website virtualsheetmusic.com includes thousands of product pages, and due to Panda penalties in the past, we have noindexed most of the product pages hoping for a sort of recovery (not yet seen, though!). So, currently we have about 4,000 "index" pages compared to about 80,000 "noindex" pages. Now, we plan to add an additional 100,000 new product pages from a new publisher to offer our customers more music choice, and these new pages will still be marked as "noindex, follow". At the end of the integration process, we will end up having something like 180,000 "noindex, follow" pages compared to about 4,000 "index, follow" pages. Here is my question: can this huge discrepancy between 180,000 "noindex" pages and 4,000 "index" pages be a problem? Can this kind of scenario have or cause any negative effect on our current natural search engine profile? Or is this something that doesn't actually matter? Any thoughts on this issue are very welcome. Thank you! Fabrizio
Intermediate & Advanced SEO | fablau
-
What is the best way to optimize/setup a teaser "coming soon" page for a new product launch?
Within the context of a physical product launch, what are some ideas around creating a /coming-soon page that "teases" the launch? Ideally I'd like to optimize a page around the product, but the client wants to try to build consumer anticipation without giving too many details away. Any thoughts?
Intermediate & Advanced SEO | GSI
-
Is there any negative SEO effect of having commas in URLs?
Hello, I have a client who has a large ecommerce website. Some category names have been created with commas in them, which means their software has automatically generated URLs with commas in them for every page that comes beneath the category in the site hierarchy. E.g. 1: http://shop.deliaonline.com/store/music,-dvd-and-games/dvds-and-blu_rays/ E.g. 2: http://shop.deliaonline.com/store/music,-dvd-and-games/dvds-and-blu_rays/action-and-adventure/ etc... I know that URLs with commas in them look a bit ugly! But is there any SEO reason why URLs with commas in them are any less effective? Kind regards, RB
Intermediate & Advanced SEO | RichBestSEO
-
Disallowed Pages Still Showing Up in Google Index. What do we do?
We recently disallowed a wide variety of pages for www.udemy.com which we do not want Google indexing (e.g., /tags or /lectures). Basically we don't want to spread our link juice around to all these pages that are never going to rank. We want to keep it focused on our core pages, which are for our courses. We've added them as disallows in robots.txt, but after 2-3 weeks Google is still showing them in its index. When we look up "site:udemy.com", for example, Google currently shows ~650,000 pages indexed... when really it should only be showing ~5,000 pages indexed. As another example, if you search for "site:udemy.com/tag", Google shows 129,000 results. We've definitely added "/tag" into our robots.txt properly, so this should not be happening... Google should be showing 0 results. Any ideas re: how we get Google to pay attention and re-index our site properly?
Intermediate & Advanced SEO | udemy
-
Tool to calculate the number of pages in Google's index?
When working with a very large site, are there any tools that will help you calculate the number of pages in the Google index? I know you can use site:www.domain.com to see all the pages indexed for a particular URL. But what if you want to see the number of pages indexed for 100 different subdirectories (i.e. www.domain.com/a, www.domain.com/b)? Is there a tool to help automate the process of finding the number of pages from each subdirectory in Google's index?
Intermediate & Advanced SEO | nicole.healthline