What's the best way to noindex pages but still keep backlink equity?
-
Hello everyone,
Hello everyone,
Maybe this is a stupid question, but I'll ask the experts: what's the best way to noindex pages but still keep the backlink equity from those noindexed pages?
For example, let's say I have many pages that look similar to a "main" page, which is the only one I want to appear on Google. I want to noindex every page except that "main" page... but what if I also want to transfer any link equity from the noindexed pages to the main page?
The only solution I have thought of is to add a canonical tag pointing to the main page on those noindexed pages... but will that work, or will it wreak havoc in some way?
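For example (the URLs here are hypothetical, just to illustrate the combination I am asking about), each secondary page would carry something like this in its `<head>`:

```html
<!-- In the <head> of a secondary page, e.g. example.com/topic/variant-2.html -->
<!-- (hypothetical URLs; this is the combination being asked about, not a proven pattern) -->
<meta name="robots" content="noindex">
<link rel="canonical" href="https://example.com/topic/main.html">
```

I ask because I've read that noindex and a canonical pointing elsewhere can send Google conflicting signals, so I'm unsure this is safe.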
-
Thank you, Chris, for your in-depth answer; you just confirmed what I suspected.
To clarify, though, what I am trying to save by noindexing those subsequent pages is "indexing budget," not "crawl budget" (you know the famous "indexing cap"). I am also trying to tackle possible "duplicate" or "thin" content issues with such "similar but different" pages. The fact is, our website has been hit by Panda several times. We recovered several times as well, but we were hit again by the latest quality update last June, and we are trying to find a way out of it once and for all. Hence my attempt to reduce the number of similar indexed pages as much as we can.
I have just opened a discussion on this "Panda-non-sense" issue, and I'd like to know your opinion about it:
https://moz.com/community/q/panda-rankings-and-other-non-sense-issues
Thank you again.
-
Hi Fabrizio,
That's a tricky one given the sheer volume of pages/music on the site. Typically the cleanest way to handle all of this is to offer a View All page and canonical back to that, but in your case a View All page would scroll on forever!
Canonical is not the answer here. It's made for handling duplicate pages like this:
www.website.com/product1.html
www.website.com/product1.html?sid=12432
In this instance, both pages are 100% identical, so the canonical tag tells Google that any variation of product1.html is actually just that page and should be counted as such. What you've got here is pagination, so while the pages are mostly the same, they're not identical.
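As a sketch (same hypothetical URLs), the duplicate-handling case would look like this:

```html
<!-- In the <head> of www.website.com/product1.html?sid=12432 -->
<!-- (hypothetical URLs; the sid parameter just illustrates a session/tracking variant) -->
<link rel="canonical" href="https://www.website.com/product1.html">
```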
Instead, this is exactly what rel=prev/next is for, which you've already looked into. It's very hard to find recent information on this topic, but the traditional advice from Google has been to implement prev/next, and they will infer the most important page (typically page one) from the fact that it's the only page that has a rel=next but no rel=prev (because there is no previous page). Apologies if you already knew all of this; just making sure I didn't skim over anything here. Google also says these pages will essentially be seen as a single unit from that point on, so all link equity will be consolidated toward that block of pages.
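To make that concrete, using your piano category URLs, the markup would look something like this:

```html
<!-- Page 1: /downloads/Indici/Piano.html — rel="next" only -->
<link rel="next" href="http://www.virtualsheetmusic.com/downloads/Indici/Piano.html?cp=2">

<!-- Page 2: /downloads/Indici/Piano.html?cp=2 — both directions -->
<link rel="prev" href="http://www.virtualsheetmusic.com/downloads/Indici/Piano.html">
<link rel="next" href="http://www.virtualsheetmusic.com/downloads/Indici/Piano.html?cp=3">

<!-- Last page: rel="prev" only, no rel="next" -->
```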
Canonical and rel=next/prev do act independently, so by all means, if you have search filters or anything else that may alter the URL, a canonical tag can be used as well. But each page here would just point back to itself, not back to page 1.
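A minimal sketch of that combination on page 2, assuming a hypothetical filter parameter like `sort=title`:

```html
<!-- On .../Indici/Piano.html?cp=2&sort=title (the sort parameter is hypothetical) -->
<!-- The canonical points to the clean version of *this* page, not to page 1 -->
<link rel="canonical" href="http://www.virtualsheetmusic.com/downloads/Indici/Piano.html?cp=2">
<link rel="prev" href="http://www.virtualsheetmusic.com/downloads/Indici/Piano.html">
<link rel="next" href="http://www.virtualsheetmusic.com/downloads/Indici/Piano.html?cp=3">
```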
This clip from Google's Maile Ohye is quite old, but the advice in it clears a few things up and is still very relevant today.
With that said, the other point you raised is very valid: what to do about crawl budget. Google suggests just leaving these pages as-is, since you're only linking to the first 5 pages and any links beyond that are buried so deep in the hierarchy that they're seen as a low priority and will barely be looked at.
From my understanding (though I'm a little hesitant on this one), noindexed pages do retain their link equity. Noindex doesn't say 'don't crawl me' (which also means it won't help your crawl budget; that would have to be done through robots.txt), it says 'don't include me in your index'. By that logic, links pointing to a noindexed page should still be counted.
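To illustrate the distinction, the page-level directive looks like this:

```html
<!-- Page-level noindex: the page can still be crawled and its links followed, -->
<!-- so Google still sees the links on (and to) the page -->
<meta name="robots" content="noindex, follow">
```

Blocking crawling itself would instead be a `Disallow:` rule in robots.txt, but then Google never fetches the page at all, so nothing on it (including its links) can be evaluated.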
-
You are right, it's hard to give advice without the specific context.
Well, here is the problem I am facing: we have an e-commerce website, and each category has several hundred if not thousands of pages. I want just the first page of each category to appear in the index, in order not to waste the index cap and to avoid possible duplicate issues. Therefore I want to noindex all the subsequent pages and index just the first page (which is also the richest).
Here is an example from our website, our piano sheet music category page:
http://www.virtualsheetmusic.com/downloads/Indici/Piano.html
I want that first page to be in the index, but not the subsequent ones:
http://www.virtualsheetmusic.com/downloads/Indici/Piano.html?cp=2
http://www.virtualsheetmusic.com/downloads/Indici/Piano.html?cp=3
etc...
After playing with canonicals and rel=next, I have realized that Google still keeps those not-so-useful pages in the index, whereas removing them could help with both index cap issues and possible Panda penalties (too many similar, not useful pages). But is there any way to keep the possible link equity of those subsequent pages while noindexing them? Or is that link equity preserved on those pages, and on the overall domain, anyway? And better still, is there a way to move all that possible link equity to the first page somehow?
I hope this makes sense. Thank you for your help!
-
Apologies for the indirect answer, but I would have to ask: why?
If these pages are almost identical and you only want one of them to be indexed, in most situations users would probably benefit from there being only that one main page. Cutting down on redundant pages is great for UX, crawl budget, and general site quality.
Maybe there is a genuine reason for it, but without knowing the context it's hard to give accurate advice on the best way to handle it.