Noindex junk pages with inbound links?
-
I recently came across what is to me a new SEO problem.
A site I consult with has some thin pages with a handful of ads at the top, some relevant local content sourced from a third party beneath that...
and a bunch of inbound links to those pages. Not just any links, but links from powerful news sites. My impression is that the links are paid (sidebar links, anchor text... a nice number of footprints).
Short version: the pages may be getting juice from these links. A preliminary search for one page's title keywords finds it in Google's top 100. I don't want to lose that juice, but I do think the thin pages the links point to could trigger Panda's filter. They use the same blurb for lots of [topic x] in [city y] pages, plus the sourced (not original) content.
So I'm thinking about noindexing those pages to avoid the Panda filter.
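For clarity, here's a minimal sketch of the directive I have in mind for each thin page's head section, assuming we keep follow so their internal links can still pass equity:
    <meta name="robots" content="noindex, follow">
If editing the templates isn't practical, the same instruction can be sent as an HTTP response header instead (X-Robots-Tag: noindex, follow).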
Also, as a pre-emptive measure, I'm considering figuring out how they got these links and having them removed if they really were paid for. If it was a biz dev deal, I'm open to leaving them up, but that seems unlikely.
What would you do? One of the options I laid out above, or something else? Why?
P.S. I'm asking this on my blog (seoroi.com/blog/) too, so if you're up for me to quote you (and link to your site), do say so. You aren't guaranteed to be quoted if you answer here, but it's one of the easier ways to get a good-quality link.
P.P.S. On a related note: I'm looking for intermediate-to-advanced guest posts for my blog, which has 2,000+ RSS subscribers. Email me at gab@ my site if you're interested. You can also PM me here on SEOmoz, though I don't log in as frequently.
-
These links likely aren't bringing much, if any, traffic, so it's a moot point here, imho.
-
Sorry if I was unclear. My thinking was that a high bounce rate probably indicates that many visitors don't find the content relevant. If the inbound links you mentioned are bringing lots of traffic to your pages but people are just bouncing right off the site, the value of those links is greatly diminished. If this is the case, I don't think the pages are worth keeping. If people are actually staying on the site after landing on the page, then I would focus on improving those pages and not worry as much about how they find the pages.
-
I don't see the connection to bounce rate. Do you mean click traffic or search traffic?
-
I would also be interested to know what people think about this. We have a similar issue: a few years ago, an SEO firm produced a few dozen "articles" for our site that consisted entirely of keyword-stuffed junk with lots of hidden internal links to other relevant parts of the site. Each page has thousands of links pointing to it from link farms and junk directories.
I suspect there are actually many legitimate, reputable websites out there that suffer from this problem. Any website with many thousands of pages might easily conceal the remnants of old, poorly executed SEO efforts for years, particularly if the people making the SEO decisions are unaware of the difference between black-hat and white-hat practices. With the release of the Farmer (Panda) update, this could be a big problem.
For our situation, we wrestled with whether we should noindex the pages, remove them and 301 redirect the URLs to something more relevant, or just leave them as they are. For now we have left the junk pages alone; only a couple of them rank within the first 50 results for their targeted keyword, and they receive very little traffic. However, if the pages you are talking about get a lot of traffic with a very high bounce rate, I would probably try something else.
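If we ever do retire ours, a 301 along these lines is roughly what I'd use (an Apache .htaccess sketch with hypothetical paths), so whatever equity remains points somewhere relevant instead of a 404:
    # send an old junk article to the closest relevant category page
    Redirect 301 /articles/old-keyword-stuffed-page.html /relevant-category/
The trade-off is basically noindex (keep the URL live but out of the index) versus 301 (retire the URL and pass along whatever value is left).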