Noindex
-
I have been reading a lot of conflicting information on the Link Juice ramifications of using "NoIndex". Can I get some advice for the following situation?
1. I have pages that I do not want indexed on my site. They are lead conversion pages. Just about every page on my site has links to them. If I just apply a standard link, those pages will get a ton of Link Juice that I'd like to allocate to other pages.
2. If I use "nofollow", the pages won't rank, and the link juice evaporates rather than being redistributed. I get that. I won't use "nofollow".
3. I have read that "noindex, follow" will keep the pages out of the SERPs but will still pass Link Juice to them. I don't think that I want this either. If I "dead end" the lead form with no navigation or links, will the juice be locked up on the page?
4. I assume that I should block the pages in robots.txt (see the sketch below).
In order to keep the pages out of the SERPs, and conserve Link Juice, what should I do? Can someone please give me a step by step process with the reasoning for what I should do here?
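For reference, the robots.txt block from point 4 is a couple of Disallow rules; the lead-page paths below are hypothetical. Note that robots.txt blocks crawling, not indexing.

```
# Hypothetical paths for the lead conversion pages
User-agent: *
Disallow: /lead-form/
Disallow: /free-quote/
```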
-
I have a private/login site where all pages are noindex, nofollow. Can I still monitor external site links with Google Analytics?
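For context: robots meta directives affect search engine crawlers, not the Google Analytics JavaScript, so on-page tracking still fires for your logged-in visitors. Below is a minimal sketch of outbound-link click tracking, assuming gtag.js is already loaded on the page; the event name and parameter are made up.

```html
<script>
// Report clicks on links pointing off-site as a custom GA event
document.addEventListener('click', function (e) {
  var link = e.target.closest('a');
  if (link && link.hostname !== window.location.hostname) {
    gtag('event', 'outbound_click', { link_url: link.href });
  }
});
</script>
```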
-
Yes, there is a way to keep them out of the SERPs and restrict them from getting link juice: using noindex + nofollow. But bear in mind you'll be losing that link juice and impairing its flow throughout your site, besides signaling to Google that you don't "trust" those pages.
A workaround would be consolidating those links.
-
So what you are saying is that there is no way to keep the pages out of the SERPs and restrict them from getting link juice?
This is nuts. My conversion pages will be getting huge amounts of link juice - there are links to them on every page.
I'm not happy about this. Any workarounds?
-
Using robots.txt won't ensure that your pages are kept out of the SERPs, since any external link to those pages could get them indexed. If you need to make sure, the best approach is the noindex meta tag.
Now, in order not to lose your link juice, make sure to use "noindex, follow" in your meta tag. That way you're still preventing the pages from being indexed, but you are allowing the juice to flow through them.
If you want to pass as little juice as possible to those pages, link to them as little as possible, or consolidate those links on fewer pages throughout your site.
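Concretely, the "noindex, follow" directive is a meta tag in each page's <head>; a minimal sketch:

```html
<head>
  <!-- Keep this page out of the index, but let juice flow through its links -->
  <meta name="robots" content="noindex, follow">
</head>
```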
Here's some useful information on the subject:
Google Says: Yes, You Can Still Sculpt PageRank. No You Can't Do It With Nofollow
Link Consolidation: The New PageRank Sculpting
Related Questions
-
Should I use noindex or robots to remove pages from the Google index?
I have a Magento site and just realized we have about 800 review pages indexed. The /review directory is disallowed in robots.txt, but the pages are still indexed. From my understanding, robots.txt means Google will not crawl the pages, BUT the pages can still be indexed if they are linked from somewhere else. I can add the noindex tag to the review pages, but they won't be crawled, so Google won't see it. https://www.seroundtable.com/google-do-not-use-noindex-in-robots-txt-20873.html Should I remove the robots.txt disallow and add the noindex? Or just add the noindex to what I already have?
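A sketch of the two-step version, assuming Googlebot must be able to recrawl /review to see the tag (exact paths are hypothetical):

```
# robots.txt: remove (or comment out) the disallow so the pages can be recrawled
# Disallow: /review/
```

```html
<!-- In each review page's <head> -->
<meta name="robots" content="noindex, follow">
```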
Intermediate & Advanced SEO | Tylerj
-
Wordpress Tag Pages - NoIndex?
Hi there. I am using the Yoast WordPress plugin. I just wonder if any tests have been done on the effects of Index vs. Noindex for tag pages (like when tagging a word relevant to an article)? Thanks 🙂 Martin
Intermediate & Advanced SEO | s_EOgi_Bear
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi guys, we have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similar to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings to find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day. We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.
We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages: Super easy to implement. Conserves crawl budget for large sites. Ensures the crawler doesn't get stuck; after all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages: Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would put 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
Noindex advantages: Does prevent vehicle details pages from being indexed. Allows ALL pages to be crawled (advantage?).
Noindex disadvantages: Difficult to implement: vehicle details pages are served via Ajax, so they have no <head> of their own. The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex header based on querystring variables, similar to this stackoverflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it). Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindex pages; I say "force" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed. Cannot be used in conjunction with robots.txt; after all, the crawler never reads the noindex meta tag if blocked by robots.txt.
Hash (#) URL advantages: By using hash (#) URLs for links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links. Best of both worlds: crawl budget isn't overtaxed by thousands of noindex pages, and the internal links used to index robots.txt-disallowed pages are gone. Accomplishes the same thing as "nofollowing" these links, but without looking like PageRank sculpting (?). Does not require complex Apache stuff.
Hash (#) URL disadvantages: Is Google suspicious of sites with (some) internal links structured like this, since they can't crawl/follow them?
Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that.
If we implement noindex on these pages (and doing so is a difficult task itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed; it could easily get stuck/lost, and it seems like a waste of resources and in some shadowy way bad for SEO.
My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and it conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links structured like these. Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
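For the X-Robots-Tag route described under the noindex disadvantages, here's a minimal Apache sketch; the querystring parameter name is hypothetical:

```apache
<IfModule mod_rewrite.c>
  RewriteEngine On
  # Flag requests whose querystring marks an Ajax-served vehicle details view
  RewriteCond %{QUERY_STRING} (^|&)view=vehicle_details(&|$)
  RewriteRule .* - [E=NOINDEX_PAGE:1]
</IfModule>
<IfModule mod_headers.c>
  # Send the directive as an HTTP header, since these responses have no <head> of their own
  Header set X-Robots-Tag "noindex, follow" env=NOINDEX_PAGE
</IfModule>
```
Intermediate & Advanced SEO | browndoginteractive
-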
Meta NOINDEX... how long before Google drops dupe pages?
Hi, I have a lot of near-dupe content caused by URL params, so I have applied a meta noindex to those pages. How long will it take for this to take effect? It's been over a week now. I have done some removal with the GWT removal tool, but still no major drop in indexed pages. Any ideas? Thanks, Ben
Intermediate & Advanced SEO | bjs2010
-
Duplicate blog content and NOINDEX
Suppose the "Home" page of your blog at www.example.com/domain/ displays your 10 most recent posts. Each post has its own permalink page (where you have comments/discussion, etc.). This obviously means that the last 10 posts show up as duplicates on your site. Is it good practice to use NOINDEX, FOLLOW on the blog root page (blog/) so that only one copy gets indexed? Thanks, Akira
Intermediate & Advanced SEO | ahirai
-
If we add noindex to a subdomain, will the traffic to that subdomain still generate domain authority for the primary domain?
We are trying to decide whether a password-protected site, which we will noindex, should be set up as a subdomain or as its own domain. The determining factor here is whether that noindexed subdomain will still increase domain authority for the primary domain, given that it's noindexed. Any ideas???
Intermediate & Advanced SEO | grayloon
-
Robots.txt & url removal vs. noindex, follow?
When de-indexing pages from Google, what are the pros & cons of each of the below two options:
1. Block via robots.txt & request URL removal from Google Webmaster Tools
2. Use the noindex, follow meta tag on all doctor profile pages, keep the URLs in the sitemap file so that Google will recrawl them and find the noindex meta tag, and make sure that they're not disallowed by the robots.txt file
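For option 2, keeping the de-indexed URLs listed in the sitemap is just a standard sitemap entry; a minimal sketch with a hypothetical profile URL:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Listing the URL prompts Googlebot to recrawl it and see the noindex tag -->
  <url><loc>https://www.example.com/doctors/jane-doe</loc></url>
</urlset>
```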
Intermediate & Advanced SEO | nicole.healthline
-
Noindex junk pages with inbound links?
I recently came across what is to me a new SEO problem. A site I consult with has some thin pages with a handful of ads at the top, some relevant local content sourced from a third party beneath that... and a bunch of inbound links to said pages. Not just any links, but links from powerful news sites. My impression is that said links are paid (sidebar links, anchor text... a nice number of footprints). Short version: they may be getting juice from these links. A preliminary lookup for one page's keywords in the title finds it top 100 on Google. I don't want to lose that juice, but I do think the thin pages they link to can incur Panda's filter. They've got the same blurb for lots of [topic x] in [city y], plus the sourced content (not original...). So I'm thinking about noindexing said pages to avoid Panda filters. Also, as a future pre-emptive measure, I'm considering figuring out what they did to get these links and aiming to have them removed if they were really paid for. If it was a biz dev deal, I'm open to leaving them up, but that possibility seems unlikely. What would you do? One of the options I laid out above, or something else? Why? p.s. I'm asking this on my blog (seoroi.com/blog/) too, so if you're up for me to quote you (and link to your site), do say so. You aren't guaranteed to be quoted if you answer here, but it's one of the easier ways you'll get a good quality link. p.p.s. Related note: I'm looking for intermediate to advanced guest posts for my blog, which has 2000+ RSS subs. Email me at gab@ my site if you're interested. You can also PM me here on SEOmoz, though I don't log in as frequently.
Intermediate & Advanced SEO | Gab-Goldenberg