Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
Can noindexed pages accrue page authority?
-
My company's site has a large set of pages (tens of thousands) that have very thin or no content. They typically target a single low-competition keyword (and typically rank very well), but the pages have a very high bounce rate and are definitely hurting our domain's overall rankings via Panda (quality ranking).
I'm planning on recommending we noindex these pages temporarily, and reindex each page as resources allow us to fill in content.
My question is whether an individual page will be able to accrue any page authority for that target term while noindexed. We DO want to rank for all those terms, just not until we have the content to back it up. However, we're in a pretty competitive space up against domains that have been around a lot longer and have higher domain authorities. Like I said, these pages rank well right now, even with thin content. The worry is if we noindex them while we slowly build out content, will our competitors get the edge on those terms (with their subpar but continually available content)? Do you think Google will give us any credit for having had the page all along, just not always indexed?
-
Yes, Google will give you credit for adding value to pages, but make sure they get recrawled by Googlebot promptly after the noindex tag is removed.
Noindexing the thin content could potentially save you from a penalty, and a noindexed page can still pass PageRank. If you have a better page on the same topic, consider 301-redirecting the thin page to it instead.
If you noindex a page, you will lose the traffic it currently receives for that keyword, or at least most of it, until the page is fixed and reindexed.
You will have more trouble ranking for that keyword once the page is removed from Google's index. That said, if the content really is that thin, noindexing is a reasonable step, provided you are committed to fixing the pages very soon. A page cannot rank organically for a term while it is noindexed, so every week it stays out of the index costs you.
A noindex tag is an instruction to the search engines that you don't want a page kept within their search results. You should use it when you believe a page is one that search engines might consider to be of poor quality.
What does a noindex tag do?
- It is a directive, not a suggestion. I.e., Google will obey it, and not index the page.
- The page can still be crawled by Google.
- The page can still accumulate PageRank.
- The page can still pass PageRank via any links on the page.
(In reality, PageRank is only one of many signals potentially passed through a link, so it's more accurate to say "signals passed" than "PageRank passed.")
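To make the two delivery mechanisms concrete, here is a minimal sketch of how you might check whether a page carries a noindex directive. A noindex can arrive either as a `<meta name="robots">` tag in the HTML or as an `X-Robots-Tag` HTTP response header; the function name and structure below are illustrative, not from any SEO library.

```python
# Sketch: detect a noindex directive in either of the two forms Google
# recognizes -- a robots meta tag in the HTML, or an X-Robots-Tag header.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects whether any <meta name="robots"> tag contains 'noindex'."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if (attrs.get("name") or "").lower() == "robots":
            directives = (attrs.get("content") or "").lower()
            if "noindex" in directives:
                self.noindex = True

def is_noindexed(html: str, headers: dict) -> bool:
    # Header form: X-Robots-Tag: noindex
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    # Meta-tag form: <meta name="robots" content="noindex, follow">
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.noindex
```

A check like this is handy when auditing tens of thousands of pages, since it is easy to accidentally leave a noindex in place after the content has been rebuilt.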
Crawl frequency of a noindex page will decline over time.
Crawl frequency refers to how often Google returns to a page to check whether the page still exists, has any changes, and has accumulated or lost signals.
Typically crawl frequency will decline for any page that Google cannot index, for whatever reason. Google will try to recrawl a few times to check if the noindex, error, or whatever was blocking the crawl, is gone or fixed.
If the noindex instruction remains, Google will slowly lengthen the time until the next crawl attempt, eventually settling on a check roughly every two to three months to see if the noindex tag is still there.
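The decline described above behaves roughly like an exponential back-off. The sketch below is a toy illustration of that shape only, not Google's actual scheduler; the doubling factor and the two-to-three-month cap are assumptions taken from the description above.

```python
# Toy model of recrawl back-off: each recheck that still finds the noindex
# doubles the wait until the next attempt, capped at roughly 90 days.
# This is an illustration of the pattern, NOT Google's real algorithm.
def next_crawl_interval(days: float, cap: float = 90.0) -> float:
    """Double the interval after each recheck, up to the cap."""
    return min(days * 2, cap)

def recrawl_schedule(start: float = 1.0, rechecks: int = 8) -> list:
    """Return the sequence of wait intervals (in days) between rechecks."""
    intervals, current = [], start
    for _ in range(rechecks):
        current = next_crawl_interval(current)
        intervals.append(current)
    return intervals
```

Starting from a one-day interval, the waits grow quickly and then flatten at the cap, which matches the observation that a long-noindexed page ends up checked only every few months.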
A noindexed page is excluded from Google's search index, so it will not help you rank for that term. If other pages are cannibalizing it by targeting the same term, 301-redirect the poor-content page to the stronger one.
As for your question on PageRank and noindex: yes, PageRank can still accrue, because Google will still crawl the page and derive some information from its links and anchor text.
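To illustrate the 301 consolidation suggested above, here is a minimal sketch of a redirect map from thin pages to the stronger page that should absorb their signals. The URLs are hypothetical, and in practice this mapping usually lives in server configuration (nginx or Apache rewrite rules) rather than application code.

```python
# Sketch: consolidate thin pages into a stronger page via 301 redirects.
# The paths below are made-up examples, not from the original question.
REDIRECTS = {
    "/thin-page-about-topic": "/in-depth-guide-to-topic",
    "/old-duplicate-page": "/in-depth-guide-to-topic",
}

def resolve(path: str):
    """Return (status_code, location): 301 for mapped paths, 200 otherwise."""
    if path in REDIRECTS:
        return 301, REDIRECTS[path]
    return 200, path
```

The point of the 301 (rather than a 302) is that it signals a permanent move, which is what allows most of the old page's accumulated signals to be attributed to the destination.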
Before you remove content
The following are some guidelines you can use:
- Make an educated, unbiased judgment: is your content lower quality than the competing content?
- Do you cover the topic in enough length and sufficiently in-depth?
- Which aspects of this content is your page not covering completely?
- Which “user intent” queries is your content not answering?
- How can you make your content better?
- Can you use any great imagery or diagrams to supplement your content?
- Are there any YouTube or other videos which can add value to your content?
Iterate and do the above for all of the pages which are outranking yours. The first few are going to be the hardest — it’s likely that the rest will follow a similar pattern.
There are no shortcuts. You'll have to review all the pages which are outranking you to ensure you leave no gaps.
Update Your Content To Fully Answer The User Search Query
Once you’ve seen what you are up against, you need to update your content.
To put it simply, your content needs to be better than the competition. It also needs to fully answer the user search intent which we have identified previously.
Make it the BEST content out there.
Given that you’ve already analyzed your competitors’ content, you should have a pretty good idea of what your content is missing.
Supplement your existing content with that additional content, but:
- Don’t rewrite it completely. You’ll likely lose the precious content that Google was ranking you for.
- Don’t write a new post with the hope that this will rank better. It’s a much longer and harder journey than pushing up your already existing content.
- Of course, don’t change the URL.
As discovered in this 468% traffic increase case study, Google will reward you for your efforts.
Use the judgment calls from your competitive research to plan what needs to be added or updated.
Enhance it with any missing content
While looking at the organic keywords which you are ranking for you might come across user search intent keywords for which you have no content.
Let's say, for example, your content discusses enabling Joomla SEF URLs.
If in your research you find that you are ranking for “disabling Joomla SEF URLs,” make sure that your refreshed content answers that query also.
These queries are pure gold, so make sure you are answering them.
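The coverage check described above can be sketched as a simple audit: flag any ranking query whose terms never appear in the refreshed content. Real intent matching needs far more than word-presence checks, so treat this as an illustration of the workflow only.

```python
# Naive sketch: find ranking queries the refreshed content never addresses.
# A query counts as "covered" only if every one of its words appears
# somewhere in the content -- a deliberately crude proxy for intent coverage.
def uncovered_queries(content: str, queries: list) -> list:
    text = content.lower()
    missing = []
    for query in queries:
        if not all(word in text for word in query.lower().split()):
            missing.append(query)
    return missing
```

Running this against the queries you already rank for surfaces exactly the "pure gold" gaps mentioned above: terms sending you impressions that your page never actually answers.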
Reference
- http://www.hobo-web.co.uk/duplicate-content-problems/#thin-content-classifier
- https://www.stonetemple.com/gary-illyes-what-is-noindex-and-what-does-it-do/
- https://www.mattcutts.com/blog/pagerank-sculpting/
**When rebuilding:**
- https://moz.com/learn/seo
- https://ahrefs.com/blog/link-building/
- https://moz.com/beginners-guide-to-link-building
- http://www.bruceclay.com/blog/what-is-pagerank/
(This last reference is included because it addresses turning pages off and turning them back on.)
I hope this helps,
Tom
-
From a Google perspective, if you noindex a page it will sooner or later be removed from the index, and you will lose your rankings for that search term.
If you have no particular need to remove the pages, create new pages with the new content (Google will like that anyway); almost certainly you will find that, in time, some of those new pages will outrank the thin-content pages.
In due course you could then 301 the old URL, which in theory will pass most of the authority to the new page.