Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Can noindexed pages accrue page authority?
-
My company's site has a large set of pages (tens of thousands) that have very thin or no content. They typically target a single low-competition keyword (and typically rank very well), but the pages have a very high bounce rate and are definitely hurting our domain's overall rankings via Panda (quality ranking).
I'm planning on recommending we noindex these pages temporarily, then reindex each page as resources allow us to fill in content.
My question is whether an individual page will be able to accrue any page authority for that target term while noindexed. We DO want to rank for all those terms, just not until we have the content to back it up. However, we're in a pretty competitive space up against domains that have been around a lot longer and have higher domain authorities. Like I said, these pages rank well right now, even with thin content. The worry is if we noindex them while we slowly build out content, will our competitors get the edge on those terms (with their subpar but continually available content)? Do you think Google will give us any credit for having had the page all along, just not always indexed?
-
Yes, Google will give you credit for adding value to pages, but you should get them recrawled by Googlebot as soon as the noindex tag is removed.
Noindexing thin content could potentially save you from a penalty. Alternatively, if you have a better page on the same topic, you could 301-redirect the thin page to it, which would pass its PageRank along.
Bear in mind that if you noindex a page, you will lose whatever traffic it currently gets from ranking for that keyword, or at least most of it, until the page is fixed.
You will also have more trouble ranking for that keyword once the page is out of Google's index. That said, if your content really is that thin, I would recommend noindexing those pages, provided you are going to fix them, and are willing to fix them very soon. A page cannot rank organically for a term while it is noindexed, so only noindex pages whose current traffic you can afford to lose.
A noindex tag is an instruction to the search engines that you don't want a page to be kept within their search results. You should use it when you believe you have a page that search engines might consider to be of poor quality.
What does a noindex tag do?
- It is a directive, not a suggestion. I.e., Google will obey it, and not index the page.
- The page can still be crawled by Google.
- The page can still accumulate PageRank.
- The page can still pass PageRank via any links on the page.
(In reality, many other signals besides PageRank are potentially passed through any link, so it is more accurate to say "signals passed" than "PageRank passed.")
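For reference, the noindex directive discussed above is just a `<meta name="robots">` tag in the page's `<head>` (or an equivalent `X-Robots-Tag` HTTP header). As a rough illustration, here is a small Python sketch, using only the standard library, that checks an HTML document for a robots noindex directive; the function names and sample markup are my own, not from any SEO tool:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags on the page."""
    def __init__(self):
        super().__init__()
        self.robots_directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.robots_directives.append(attrs.get("content", "").lower())

def has_noindex(html: str) -> bool:
    """Return True if the page carries a robots noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in content for content in parser.robots_directives)

page = '<html><head><meta name="robots" content="noindex, follow"></head><body></body></html>'
print(has_noindex(page))  # True: the page asks not to be indexed, but its links still pass signals
```

Tools like Screaming Frog do essentially this (plus header checks) when they report a page as noindexed.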
Crawl frequency of a noindex page will decline over time.
Crawl frequency refers to how often Google returns to a page to check whether the page still exists, has any changes, and has accumulated or lost signals.
Typically crawl frequency will decline for any page that Google cannot index, for whatever reason. Google will try to recrawl a few times to check if the noindex, error, or whatever was blocking the crawl, is gone or fixed.
If the noindex instruction remains, Google will slowly lengthen the time until the next crawl attempt, eventually settling on a check every two to three months to see if the noindex tag is still there.
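Google has not published exact numbers here, but the recrawl behaviour described above resembles a capped exponential backoff. The figures in this toy sketch (a doubling interval capped at 90 days) are purely illustrative assumptions, not documented Googlebot behaviour:

```python
def recrawl_intervals(initial_days=1, factor=2, cap_days=90, attempts=10):
    """Toy model: the gap between recrawl attempts grows while a page
    stays noindexed, until it settles at a roughly two-to-three month check."""
    interval = initial_days
    schedule = []
    for _ in range(attempts):
        schedule.append(interval)
        interval = min(interval * factor, cap_days)
    return schedule

print(recrawl_intervals())  # [1, 2, 4, 8, 16, 32, 64, 90, 90, 90]
```

The practical takeaway is the shape of the curve, not the numbers: the longer a page stays noindexed, the longer Google will take to notice once you remove the tag.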
The noindexed page will be excluded from Google's search index, so it will not help you rank for that term. The exception is if you have other pages cannibalizing it, also trying to rank for that term; if so, 301-redirect the poor-content page to the right content page.
As for your question on PageRank and noindex: yes, PageRank can still accrue, because Google will still read the page. They will also derive some information from the anchor text of the links on it.
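A couple of the answers here suggest 301-redirecting a thin page to a stronger page on the same topic. How you issue the 301 depends on your server or CMS (Apache, nginx, etc.), but mechanically a permanent redirect is nothing more than a 301 status plus a `Location` header. A minimal sketch using only Python's standard library, with hypothetical paths:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping from retired thin-content URLs to their replacements.
REDIRECTS = {
    "/old-thin-page": "/rebuilt-page",
}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = REDIRECTS.get(self.path)
        if target:
            # 301 = moved permanently: crawlers consolidate most link
            # signals from the old URL onto the Location target.
            self.send_response(301)
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

# To try it locally:
# HTTPServer(("127.0.0.1", 8000), RedirectHandler).serve_forever()
```

In practice you would configure this in your web server or CMS rather than hand-rolling a handler; the sketch is only meant to show what a 301 actually is on the wire.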
Before you remove content
The following are some guidelines you can use:
- Make an educated (non-biased) judgement: Is your content’s quality “worse” than this content?
- Do you cover the topic in enough length and sufficiently in-depth?
- Which aspects of this content is your page not covering completely?
- Which “user intent” queries is your content not answering?
- How can you make your content better?
- Can you use any great imagery or diagrams to supplement your content?
- Are there any YouTube or other videos which could add value to your content?
Iterate and do the above for all of the pages which are outranking yours. The first few are going to be the hardest — it’s likely that the rest will follow a similar pattern.
There are no shortcuts. You'll have to review all the pages which are outranking you to ensure you leave no gaps.
Update Your Content To Fully Answer The User Search Query
Once you’ve seen what you are up against, you need to update your content.
To put it simply, your content needs to be better than the competition. It also needs to fully answer the user search intent which we have identified previously.
Make it the BEST content out there.
Given that you’ve already analyzed your competitors’ content, you should have a pretty good idea of what your content is missing.
Supplement your existing content with that additional content, but
- Don’t rewrite it completely. You’ll likely lose the precious content that Google was ranking you for.
- Don’t write a new post with the hope that this will rank better. It’s a much longer and harder journey than pushing up your already existing content.
- Of course, don’t change the URL.
As discovered in this 468% traffic increase case study, Google will reward you for your efforts.
Use the judgment calls from your competitive research to plan what needs to be added or updated.
Enhance it with any missing content
While looking at the organic keywords which you are ranking for you might come across user search intent keywords for which you have no content.
Let’s say, for example, your content discusses enabling Joomla SEF URLs.
If in your research you find that you are ranking for “disabling Joomla SEF URLs,” make sure that your refreshed content answers that query also.
These queries are pure gold — make sure you are answering them
Reference
- http://www.hobo-web.co.uk/duplicate-content-problems/#thin-content-classifier
- https://www.stonetemple.com/gary-illyes-what-is-noindex-and-what-does-it-do/
- https://www.mattcutts.com/blog/pagerank-sculpting/
**When rebuilding:**
- https://moz.com/learn/seo
- https://ahrefs.com/blog/link-building/
- https://moz.com/beginners-guide-to-link-building
- http://www.bruceclay.com/blog/what-is-pagerank/
These are relevant here because they address turning pages off and turning them back on.
I hope this helps,
Tom
-
From Google's perspective, if you noindex a page, sooner or later it will be removed from the index, and hence you will lose your rankings for that search term.
If you have no particular need to remove the pages, consider instead creating new pages with the new content (Google will like that anyway); almost certainly you will find that, in time, some of those new pages outrank the thin-content pages.
In due course you could then 301 the old URL to the new one, which in theory will pass on most of the authority to the new page.