How to noindex lots of content properly: bluntly or progressively?
-
Hello Mozers!
I'm in doubt, so I thought: why not ask for help?
Here's my problem: I need to improve the SEO of a website that consists of a lot (1 million+ pages) of poor content. Basically it's like a catalog: you select a brand, then a product series, then the product, and finally fill out a form to make a request (sorry for the cryptic description, I can't be more precise).
Besides the classic SEO work, part of what (I think) I need to do is noindex some useless pages and rewrite important ones with great content, but for the noindexing part I'm quite hesitant about the how.
There are around 200,000 pages with no visits in over a year, so I'd say they're pretty much useless junk that would be better off noindexed. But the webmaster is afraid that noindexing that many pages will hurt the site's long tail (in case of future visits), so he wants to check the SERP position of every one of them and only eliminate those already in the top 3 (for those, he thinks, there's no hope of further improvement). I think that would waste a lot of time and resources for nothing, and I'd advise noindexing them regardless of position.
The problem is that I lack the experience to be sure of it, or of how to do it: is it wise to bluntly noindex 200,000 pages all at once (isn't that a bad signal for Google?), or should we do it progressively over a few months?
Thanks a lot for your help!
Johann.
-
Sorry you're stuck in that spot. I really would be worried that this "fix" would make life worse for everyone, but it's tough to come up with solutions that don't seem like band-aids. Best you may be able to do is get more aggressive about the de-indexation, focus on improving some core content, and maybe re-work the internal linking to focus more on key pages (and spread internal PR a bit less thinly).
-
Yeah, I get what you're saying and totally agree, since a radical overhaul is what I recommended from the start, but I only got a no-can-do response... until now. But their "yes" is more like:
-
Ok, rebuild our website entirely, just don't touch our website.
-
Errr, what?
Anyway, so a similar domain name and brand was indeed a bad idea.
Thanks a lot for your input (and your awesome Moz posts!)
Cheers,
Johann.
-
-
Given their history, two domains with overlapping content and a similar name seems like a terrible idea to me, to be blunt. If this really is a Panda issue, then you're potentially going to aggravate the situation and send out even more low quality signals.
It's hard to speculate, but I've seen a few situations where what seemed like Panda turned out to be something deeper. Directory clients have been hit hard, for example, as Google just seems to be devaluing the entire space (along with price comparison sites, many types of affiliates, etc.). I'm not talking about spammy sites, even, but the ones that provide some original value. It's just that Google doesn't see them as the end-supplier, and so they're getting discounted.
An end-run to a new domain isn't going to fix this. I strongly suspect that you've got something deeper going on that may take a radical overhaul of the main site and even the business/brand. I think it's better to accept that now than continue a gradual decline over the next couple of years.
-
Hi everyone,
Some news on this story that may (or may not) be of interest to some (even if I can't give out the domain name), plus a new question (I may also start another discussion for that one):
-
The website has lost a significant amount of traffic over the past year, even after the massive noindexing of the 200,000 pages (I finally convinced him to do it, but it clearly wasn't enough): about a 40% loss, gradual, with the drops coinciding nicely with Panda updates.
-
We've worked hard to offer a new section of interesting content (not a blog, but nearly) presenting original statistics on the niche with visual presentations, plus a bunch of related content, about a hundred pages total. It's a drop in the ocean, but it gained a bit of popularity, some nice links and good branding. I think it's probably the reason the website is still standing; it even took a few top positions on important new keywords.
-
Last but not least, we've improved the user experience and bumped up our conversion rates, so the loss in traffic is partly (though not completely) compensated by the gains in conversion.
The site still drags along nearly a million pages of thin content, and still takes a little hit with every Panda roll-out... So no recovery, but a controlled descent; it's still alive.
Now I've got the green light for a complete do-over: a rebuild with a completely new (lighter) structure and a new design. We're pumped full of ideas for great content and user experience, so it's going to be a fresh start. BUT (there's always a but), the webmaster wants to keep the old website running as long as it's still alive, and I wonder if we can take a similar domain name to capitalize on the brand's popularity, like www.brand-domain.com instead of www.branddomain.com (in case that's not clear: the same domain name with a dash in it, so the brand stays recognizable). Is it going to look manipulative to Google to have two websites with nearly the same domain name, the exact same brand, and the same service (so the same keywords targeted)? Any other caveats?
(I know they'll compete with each other, but they'll have different content, and it would be temporary: as soon as the new site reaches the old one's popularity, we'll prepare a proper redirect. Could be a month, could be a year later.) Thanks for any input! I'll hold off on starting a new discussion to avoid clutter ^^
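When that cut-over day does come, the redirect itself is mechanical: every old-domain URL 301s to the identical path on the new domain. A minimal sketch of the mapping, using the hypothetical hostnames from the question above:

```python
# Sketch of the eventual 301 mapping: a request to the old domain is sent
# to the same path, query and fragment on the new, dashed domain.
# Hostnames are the hypothetical ones from the question.
from urllib.parse import urlsplit, urlunsplit

OLD_HOST = "www.branddomain.com"
NEW_HOST = "www.brand-domain.com"

def redirect_target(old_url: str) -> str:
    """Return the Location header value for a 301 from the old domain."""
    parts = urlsplit(old_url)
    if parts.netloc != OLD_HOST:
        return old_url  # not our domain; leave untouched
    return urlunsplit((parts.scheme, NEW_HOST, parts.path, parts.query, parts.fragment))

print(redirect_target("https://www.branddomain.com/brand/series/product?id=42"))
# → https://www.brand-domain.com/brand/series/product?id=42
```

Keeping the path structure identical on both sites makes this a one-rule redirect instead of a 200,000-row mapping table, which is worth planning for during the rebuild.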
Johann
-
-
Thanks a lot for your insight, Dr. Pete.
I'll sell him on the larger cut sooner or later. It's either that or I use a time machine to show him his future stats when Google releases the next Panda tweaks ^^
Option 1 is easier, after all!
-
I wish I could convince people that more DOES NOT EQUAL better when it comes to index size. You'd think Panda would've been the nail in that coffin, but too many webmasters are still operating in 2005.
-
I've never seen an issue where a large-scale META NOINDEX caused Google to get suspicious. It's possible to NOINDEX the wrong pages and lose traffic, but Google generally doesn't get jumpy about it like they would a large scale 301-redirect (where you might be PR-sculpting).
If these are really duplicates, canonical tags might be a better bet. Honestly, while I agree with Stephen 99.9%, if there's no glaring current issue, you could ease into it. Start with the worst culprits - obvious, 100% duplicates. That should be an easier sell, too. If you can't sell the larger cut, it's not going to matter.
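For the near-duplicate case, the canonical route might look like this sketch (the URL pattern is invented; the assumption is that variants differ only by query parameters, so each one can point a rel=canonical at its base product page):

```python
# Hedged sketch: map each variant URL (assumed to differ only by query
# string) to its base product URL and emit the canonical link tag for it.
from urllib.parse import urlsplit

def canonical_for(variant_url: str) -> str:
    """Strip the query string so every variant maps to its base product URL."""
    parts = urlsplit(variant_url)
    return f"{parts.scheme}://{parts.netloc}{parts.path}"

def canonical_tag(variant_url: str) -> str:
    """Build the <link rel=canonical> tag to place in the variant's <head>."""
    return f'<link rel="canonical" href="{canonical_for(variant_url)}">'

print(canonical_tag("https://example.com/brand/series/product?color=red&ref=promo"))
# → <link rel="canonical" href="https://example.com/brand/series/product">
```

Unlike noindex, this consolidates the duplicates' signals onto the primary page rather than dropping them, which is why it can be the better bet when the pages really are duplicates.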
-
Damn. Even after arguing that pages which don't generate traffic now won't generate much more in the future, and giving an educated estimate of 0.05% potential future gains from keeping them versus the boatload of progress noindexing them could mean for the website, I couldn't convince the webmaster to cut them out of the index...
Anyway thanks for your help everyone !
-
noindex asap
thumbs up for this
it's not going to suddenly appear out of nowhere
ha ha... for sure!
-
Can you change the structure of the site and perhaps see this as an opportunity
(granted, lots of work would be required)
to add another level of sub-categories that separates the content further and allows better indexing?
-
If you use robots.txt, the engine won't be able to read the follow directive. What I was suggesting is: don't use robots.txt, but use the meta tag "noindex, follow" to allow link juice to flow even though the pages aren't indexed.
Search engines can still follow the links on pages that aren't indexed, but robots.txt tells them they aren't allowed to crawl the page at all.
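To sanity-check a rollout like that across thousands of pages, something along these lines could verify which pages actually carry the meta tag (standard library only; the sample markup is invented):

```python
# Sketch of a noindex audit: parse a page's HTML and report whether its
# robots meta tag contains "noindex". The sample page below is invented.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content attribute of any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.robots_content = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "robots":
                self.robots_content = a.get("content", "")

def is_noindexed(html: str) -> bool:
    parser = RobotsMetaParser()
    parser.feed(html)
    content = (parser.robots_content or "").lower()
    return "noindex" in content

page = '<html><head><meta name="robots" content="noindex, follow"></head><body></body></html>'
print(is_noindexed(page))  # → True: noindexed, but links are still followed
```

Spot-checking a sample of the 200,000 URLs after each deployment catches the classic mistake of the tag being applied to the wrong template.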
-
Thanks for your replies.
Well, I'm not asking whether I should noindex those pages; I'm pretty sure I have to.
It's just that brutally noindexing a fifth of a website in one go seems like it could look suspect to the search engines... So I wonder whether I should very carefully choose which ones to noindex and which to keep indexed even among the unvisited pages, as the webmaster suggests, or do it slowly over a long period of time.
It's a big decision, and I'm appealing to your professional experience to keep me from making a potential mistake.
@AWCthreads: for an e-commerce website your suggestion would seem reasonable, because a robots.txt won't keep pages out of the index if there are links to them, but it would reduce the quantity of duplicate content. In my case, though, it wouldn't be enough, so the noindex meta tag seems to be my only option.
@Stephen: you're right, traffic can't appear out of thin air for these pages. Even if some of them did start to see visits, they would still add up to a negligible share, I believe. But I don't have the experience to back that up, or the numbers to prove it.
@Alan Mosley: I'll be sure to keep the follow directive on these pages even once they're no longer indexed; it'll still be valuable. And I guess it might keep things from looking too suspicious to the engines, wouldn't it?
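If the progressive route wins out, the mechanics are simple enough. A sketch (the batch size and weekly cadence are invented for illustration, not a recommendation):

```python
# Sketch of a phased rollout: split the noindex candidates into fixed-size
# batches (sizes invented) and apply one batch per week instead of flipping
# 200,000 pages at once.
from itertools import islice

def weekly_batches(urls, batch_size=25_000):
    """Yield successive batches of URLs to noindex, one batch per week."""
    it = iter(urls)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

urls = [f"https://example.com/page/{i}" for i in range(200_000)]
batches = list(weekly_batches(urls))
print(len(batches), len(batches[0]))  # → 8 25000, i.e. about two months
```

Ordering the input list worst-first (pure duplicates, then thin variants) would also let each batch double as a test: if traffic dips after a batch, you can pause before the next one.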
-
First, remember that all pages in the index have PageRank, and you should use that link juice to your advantage:
http://perthseocompany.com.au/seo/tutorials/a-simple-explanation-of-pagerank
Blocking in robots.txt is clumsy: you'll have links pointing at pages that aren't in the index, pouring link juice into nowhere. You can instead add a meta "noindex, follow" tag, which allows link juice to flow in and out of those pages. If the pages are duplicates, I would remove them entirely and fix the broken links that causes.
-
Remove them from the sitemap and noindex ASAP. He has no long tail from those pages; it's not going to suddenly appear out of nowhere.
-
Hi Johann. Excellent question, and a source of dispute for some people. I've not done it myself, but many people who want to no-index a large volume of pages will create a directory, put those files in it, and then block that directory with robots.txt.
Some people would ask why you'd want to hide a bunch of pages (product pages on an e-commerce site), as they'll no longer be seen/shared/sold, etc. My response would be: to prevent juice dilution across pages of little SEO value and to keep the juice directed at the 20-30% of products that are making you the most money.
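For reference, the directory-blocking approach described here would be a robots.txt along these lines (directory name invented); as others in this thread point out, disallowed URLs can still show up in the index if external links point at them:

```text
# Hypothetical robots.txt blocking a whole directory of low-value pages
User-agent: *
Disallow: /low-value-products/
```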
I'm curious what others have to say about this and hope people weigh in on it.
-
Yeah, they're mostly duplicates (only about 10% of the text differs between variations)...
But nearly 80% of the pages are indexed, probably because the website has strong authority and a lot of visits: these pages are useful to people, just not useful to read ^^. That's why I'm so hesitant to noindex that much content, even though the website HAS to improve its ratio of quality content if it wants to last for the long run.
Maybe I'll start by testing your sitemap idea. Thanks for the suggestion.
-
Are the pages mostly duplicate content? Do you know how many have been indexed?
If it's a lot, then yes, noindexing them will make it look like your site has dropped a ton of content. But if it's duplicate content, I'd go for it anyway, as it will probably help things.
Alternatively, how about removing them from the sitemap instead? They may still get found, but at least you're giving Google a clue that those pages don't matter to you.
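Trimming a sitemap like that can be automated with the standard library. A rough sketch (the URLs are invented):

```python
# Sketch: drop the zero-visit URLs from an XML sitemap so Google stops
# being told they matter. URLs below are invented for illustration.
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def filter_sitemap(xml_text: str, urls_to_drop: set) -> str:
    """Return the sitemap XML with every <url> in urls_to_drop removed."""
    ET.register_namespace("", NS)  # keep the default namespace on output
    root = ET.fromstring(xml_text)
    for url_el in list(root):
        loc = url_el.find(f"{{{NS}}}loc")
        if loc is not None and loc.text.strip() in urls_to_drop:
            root.remove(url_el)
    return ET.tostring(root, encoding="unicode")

sitemap = f"""<urlset xmlns="{NS}">
  <url><loc>https://example.com/keep</loc></url>
  <url><loc>https://example.com/drop</loc></url>
</urlset>"""
print(filter_sitemap(sitemap, {"https://example.com/drop"}))
```

The drop-set would come from analytics (the 200,000 no-visit URLs), so the same list can drive both the sitemap trim and the noindex rollout.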