How to noindex lots of content properly: bluntly or progressively?
-
Hello Mozers!
I'm quite in doubt, so I thought, why not ask for help?
Here's my problem: I need to improve the SEO of a website made up of a lot of poor content (1 million+ pages). Basically it's like a catalog, where you select a brand, then a product series, then the product, and finally fill out a form for a request (sorry for the cryptic description, I can't be more precise).
Besides the classic SEO work, part of what (I think) I need to do is noindex some useless pages and rewrite the important ones with great content, but for the noindexing part I'm quite hesitant about the how.
There are about 200,000 pages with no visits in over a year, so I guess they're pretty much useless junk that would be better off noindexed. But the webmaster is afraid that noindexing that many pages will hurt the site's long tail (in case of future visits), so he wants to check the SERP position of every one of them and only eliminate those that are in the top 3 (for those, he thinks, there's no hope of improvement). I think that would waste a lot of time and resources for nothing, and I'd advise noindexing them regardless of their position.
The problem is I lack the experience to be sure of it, or of how to do it: is it wise to bluntly noindex 200,000 pages all at once (isn't that a bad signal for Google?), or should we do it progressively over a few months?
Thanks a lot for your help!
Johann.
-
Sorry you're stuck in that spot. I really would be worried that this "fix" would make life worse for everyone, but it's tough to come up with solutions that don't seem like band-aids. Best you may be able to do is get more aggressive about the de-indexation, focus on improving some core content, and maybe re-work the internal linking to focus more on key pages (and spread internal PR a bit less thinly).
-
Yeah, I get what you're saying and totally agree, since a radical overhaul is what I recommended from the start, but I only got a no-can-do response... until now. But their "yes" is more like:
-
Ok, rebuild our website entirely, just don't touch our website.
-
Errr, what?
Anyway, so a similar domain name and brand was in fact a bad idea.
Thanks a lot for your input (and your awesome Moz posts!)
Cheers,
Johann.
-
-
Given their history, two domains with overlapping content and a similar name seems like a terrible idea to me, to be blunt. If this really is a Panda issue, then you're potentially going to aggravate the situation and send out even more low quality signals.
It's hard to speculate, but I've seen a few situations where what seemed like Panda turned out to be something deeper. Directory clients have been hit hard, for example, as Google just seems to be devaluing the entire space (along with price comparison sites, many types of affiliates, etc.). I'm not talking about spammy sites, even, but the ones that provide some original value. It's just that Google doesn't see them as the end-supplier, and so they're getting discounted.
An end-run to a new domain isn't going to fix this. I strongly suspect that you've got something deeper going on that may take a radical overhaul of the main site and even the business/brand. I think it's better to accept that now than continue a gradual decline over the next couple of years.
-
Hi everyone,
Some news on this story that may (or may not) be of interest to some (even if I can't give the domain name), and a new question (I may also start another discussion for that one):
-
The website has lost a significant amount of traffic over the past year, even with the massive noindexing of 200,000 pages (I finally convinced him to do it, but it clearly wasn't enough). About a 40% loss, gradually, coinciding with several Panda updates (the dates line up nicely).
-
We've worked hard to offer a new section of interesting content (not a blog, but nearly) presenting original statistics on the niche with visual presentations, plus a bunch of related content, about a hundred pages total. It's a drop in the ocean, but it gained a bit of popularity, some nice links and good branding. I think it's probably the reason the website is still standing; it even earned a few top positions on important new keywords.
-
Last but not least, we've improved the user experience and bumped up our conversion rates, so the loss in traffic is partly compensated by the gains in conversion (not completely, though).
The site still drags along nearly a million pages of thin content, and still takes a little hit with every Panda roll-out... So no recovery, but a controlled descent; it's still alive.
Now I've got the green light for a complete do-over: a rebuild with a completely new (lighter) structure and a new design. We're pumped full of ideas for great content and user experience, so it's going to be a fresh new start. BUT (there's always a but), the webmaster wants to keep the old website while it's still alive, and I wonder if we can take a similar domain name to capitalize on the brand popularity. Like www.brand-domain.com instead of www.branddomain.com (in case it's not clear, we'd take the same domain name with a dash in it, so the brand stays recognizable). Is it going to look manipulative to Google to have two websites with nearly the same domain name, the exact same brand, and the same service (so the same keywords targeted)? Any other caveats?
(I know they are going to compete with each other, but they'll have different content, and it would be temporary: as soon as the new one reaches the first one's popularity, we'll prepare a proper redirect - could be a month, could be a year later.) Thanks for any input! I'll wait before starting a new discussion to avoid any clutter^^
Johann
-
-
Thanks a lot for your insight, Dr. Pete!
I'll sell the large cut sooner or later by convincing him. It's either that or I use a time machine to show him his future stats when Google releases the next Panda tweaks ^^
Option 1 is easier after all!
-
I wish I could convince people that more DOES NOT EQUAL better when it comes to index size. You'd think Panda would've been the nail in that coffin, but too many webmasters are still operating in 2005.
-
I've never seen an issue where a large-scale META NOINDEX caused Google to get suspicious. It's possible to NOINDEX the wrong pages and lose traffic, but Google generally doesn't get jumpy about it like they would a large scale 301-redirect (where you might be PR-sculpting).
If these are really duplicates, canonical tags might be a better bet. Honestly, while I agree with Stephen 99.9%, if there's no glaring current issue, you could ease into it. Start with the worst culprits - obvious, 100% duplicates. That should be an easier sell, too. If you can't sell the larger cut, it's not going to matter.
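To make the canonical option concrete, here's a minimal sketch of what the tag would look like in the <head> of one of the near-duplicate pages (the URL is hypothetical):

```html
<!-- Points search engines at the preferred version of a near-duplicate page;
     Google treats this as a strong hint rather than a directive -->
<link rel="canonical" href="https://www.example.com/brand/series/product" />
```

Unlike noindex, this consolidates the duplicates' signals onto the canonical URL instead of dropping the pages outright.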
-
Damn. Even by pointing out that pages that don't generate traffic now won't generate much more in the future, and by giving an educated estimate of 0.05% potential future gains from keeping them versus the boatload of progress noindexing them could mean for the website, I couldn't convince the webmaster to cut them out of the index...
Anyway, thanks for your help everyone!
-
noindex asap
thumbs up for this
it's not going to suddenly appear out of nowhere
ha ha... for sure!
-
Can you change the structure of the site and perhaps see this as an opportunity...
(granted, lots of work required)
Adding another level of subcategories to separate the content further and allow better indexing?
-
If you use robots.txt, the crawler will not be able to read the follow tag. What I was suggesting is: don't use robots.txt, but use the meta tag "noindex, follow" to allow link juice to flow even though the pages are not indexed.
Search engines can still follow links on pages that aren't indexed, but robots.txt tells them they are not allowed to crawl the page at all.
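For reference, that tag goes in the <head> of each page you want out of the index:

```html
<!-- Keeps the page out of the index while still letting crawlers
     follow (and pass link juice through) its links -->
<meta name="robots" content="noindex, follow" />
```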
-
Thanks for your replies.
Well, I'm not asking whether I should noindex those pages; I'm pretty sure I have to.
It's just that brutally noindexing one fifth of a website in one go seems potentially suspect to the search engines... So I wonder if I should very carefully choose which ones to noindex and which ones to keep indexed, even among unvisited pages, as the webmaster suggests, or do it slowly over a long period of time.
It's a big decision, so I'm appealing to your professional experience to keep me from making a potential mistake.
@AWCthreads: For an e-commerce website, your suggestion would seem reasonable, because a robots.txt won't keep pages out of the index if there are links to them, but it would reduce the amount of duplicate content crawled. In my case, though, it would not be enough, so the noindex meta tag seems to be my only option.
@Stephen: You're right, traffic can't appear out of thin air for these pages. Even if some of them should begin to see visits, they would still add up to a negligible share, I believe. But I don't have the experience to support that, or the numbers to prove it.
@Alan Mosley: I'll be sure to add the follow tag on these pages even if they're not indexed any more; it'll still be valuable. And I guess it might keep the whole thing from looking too suspicious to the engines, wouldn't it?
-
First, remember that all pages in the index have PageRank, and you should use that link juice to your advantage:
http://perthseocompany.com.au/seo/tutorials/a-simple-explanation-of-pagerank
Blocking in robots.txt is clumsy: you will have links pointing to pages that are not in the index, pouring link juice into nowhere. You can instead add a meta "noindex, follow" tag, which allows link juice to flow in and out of the pages. If the pages are duplicates, then I would remove them and fix the broken links that causes.
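If editing hundreds of thousands of page templates isn't practical, the same signal can also be sent as an HTTP header. A sketch for Apache (the /catalog/ path is hypothetical, and mod_headers must be enabled):

```apache
# Send "noindex, follow" for every URL under a hypothetical /catalog/ section,
# without touching the HTML of the pages themselves
<LocationMatch "^/catalog/">
    Header set X-Robots-Tag "noindex, follow"
</LocationMatch>
```

Google honors X-Robots-Tag the same way it honors the equivalent meta tag, so this is handy when the thin pages all live under a common URL pattern.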
-
Remove them from the sitemap and noindex asap. He has no long tail from those pages; it's not going to suddenly appear out of nowhere.
-
Hi Johann. Excellent question, and a source of dispute for some people. I've not done it myself, but many people who want to no-index a large volume of pages will move those files into a directory and then block that directory in robots.txt.
Some people would ask why you would want to hide a bunch of pages (product pages on an e-commerce site) when, as a result, they will not be seen/shared/sold, etc. Well, my response to that would be: to prevent juice dilution on pages of little SEO value and help keep the juice directed at the 20-30% of the products that are making you the most money.
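For reference, the robots.txt part of that approach would look something like this (the directory name is hypothetical; note that this blocks crawling rather than removing pages already in the index):

```txt
User-agent: *
Disallow: /low-value-products/
```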
I'm curious what others have to say about this and hope people weigh in on it.
-
Yeah, they are mostly duplicates (only about 10% difference in text, with variations)...
But nearly 80% of the pages are indexed, probably because the website has strong authority and a lot of visits: these pages are useful to people, just not useful to read^^. That's why I'm so hesitant to noindex that much content, even though the website HAS to improve its quality-content ratio if it wants to last for the long run.
Maybe I'll start by testing your sitemap idea. Thanks for the suggestion.
-
Are the pages mostly duplicate content? Do you know how many have been indexed?
If it's a lot, then yes, noindexing them will make it look like your site has dropped a ton of content. But if it's duplicate then I'd go for it anyway as it will probably help things.
Alternatively, how about removing them from the sitemap instead? They may still get found but at least you're giving them a clue that those pages don't matter to you.
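Trimming those pages out of the sitemap is easy to script. A minimal sketch, assuming you've already exported the zero-traffic URLs from your analytics (the function name and structure here are my own invention):

```python
# Sketch: drop zero-traffic URLs from a sitemap so search engines get
# a clear signal about which pages you actually care about.
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"


def filter_sitemap(sitemap_xml: str, dead_urls: set) -> str:
    """Return the sitemap XML with every <url> entry in dead_urls removed."""
    # Keep the default sitemap namespace un-prefixed when re-serializing
    ET.register_namespace("", SITEMAP_NS)
    root = ET.fromstring(sitemap_xml)
    for url in list(root):  # copy the list so we can remove while iterating
        loc = url.find(f"{{{SITEMAP_NS}}}loc")
        if loc is not None and loc.text.strip() in dead_urls:
            root.remove(url)
    return ET.tostring(root, encoding="unicode")
```

You'd feed it the current sitemap plus the exported list of dead URLs, then upload the filtered result.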