Can too many "noindex" pages compared to "index" pages be a problem?
-
Hello,
I have a question for you: our website virtualsheetmusic.com includes thousands of product pages, and due to Panda penalties in the past, we have no-indexed most of the product pages hoping for some sort of recovery (not yet seen, though!). So, currently we have about 4,000 "index" pages compared to about 80,000 "noindex" pages.
Now, we plan to add an additional 100,000 new product pages from a new publisher to offer our customers more music choices, and these new pages will also be marked as "noindex, follow".
At the end of the integration process, we will end up having something like 180,000 "noindex, follow" pages compared to about 4,000 "index, follow" pages.
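For context, by "noindex, follow" I mean the standard meta robots directive each of these pages would carry in its head (the "index, follow" value is actually the default, so the indexed pages could simply omit the tag):

```html
<!-- On each of the ~180,000 no-indexed product pages -->
<meta name="robots" content="noindex, follow">

<!-- On the ~4,000 pages we want to rank (or omit the tag entirely) -->
<meta name="robots" content="index, follow">
```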
Here is my question: can this huge discrepancy between 180,000 "noindex" pages and 4,000 "index" pages be a problem? Can this kind of scenario cause any negative effect on our current organic search profile, or is it something that doesn't actually matter?
Any thoughts on this issue are very welcome.
Thank you!
Fabrizio
-
Julian, we sell digital sheet music, and the additional 100,000 are products from the Alfred Music Publishing company. Of course they will not be "high quality pages", but they are product pages, each one offering a piece of music. We are an e-commerce website; how can we avoid having product pages?! But of course, as Wesley said above, we can improve each product page's content quality by giving more custom information for each product, increasing user reviews, etc.
Other suggestions?
-
Thank you Wesley, yes, I think you are right. Our business is really suffering too much without the traffic from the "noindex" pages, and after many months we still don't see a recovery. The best approach would probably be to keep the pages in the index and differentiate them as much as we can.
Thank you!
-
Panda is probably the worst penalty to have. Very few sites ever recover, even though site owners have spent a lot of time, effort and money trying to solve it. e.g. http://searchengineland.com/google-panda-two-years-later-losers-still-losing-one-real-recovery-149491
In this video, at about 12:43, Matt Cutts is clear: if you think it's low quality, 404 it; in other words, delete it.
May I ask why you want to keep these 180,000 pages live? And why are you planning to add another 100,000 pages? Surely they can't be high quality pages?
-
Fabrizio, as far as I know Google Panda is now part of the standard Google algorithm, and it won't be a periodic event anymore. Penguin still is, though.
If your product pages count as duplicate content according to Google, try to see if you can do something about that instead of no-indexing them. Is there no way you can update the products so they display a more prominent description? I understand that doing it manually isn't a possibility, because there are way too many products for that to be an option.
I did notice that on a lot of your product pages you have a standard text: "This item includes: PDF (digital sheet music to print), Scorch files (for online playing, transposition and printing), Videos, MIDI and Mp3 audio files (including Mp3 music accompaniment files)*
Genre: classical
Skill Level: medium"
Since this is basically the only text on a lot of pages, I think it's a big part of the problem. Maybe you can change this text so it looks different for every product?
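With that many products, varying the text by hand isn't realistic, but it could be done programmatically. A minimal sketch (the template wording, field names and product data here are hypothetical, just to show the idea of rotating phrasings per product):

```python
import hashlib

# Hypothetical description templates; in practice you would write many more
# variations so neighbouring product pages don't share the same boilerplate.
TEMPLATES = [
    "{title} for {instrument}, {level} level, in the {genre} style.",
    "A {genre} piece for {instrument}: {title} ({level} difficulty).",
    "{title}: {level}-level {genre} sheet music arranged for {instrument}.",
]

def varied_description(product):
    """Pick a template deterministically per product, so each page's text
    differs from its neighbours but stays stable across crawls."""
    key = int(hashlib.md5(product["title"].encode()).hexdigest(), 16)
    template = TEMPLATES[key % len(TEMPLATES)]
    return template.format(**product)

print(varied_description({
    "title": "Ave Maria",
    "instrument": "violin and piano",
    "genre": "classical",
    "level": "medium",
}))
```

Rotating a handful of templates won't make the pages "unique content" on its own, but combined with per-product details and reviews it at least breaks up the identical boilerplate.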
Try tools like http://www.plagspotter.com/ to find the duplicate content and see which solution is best for your specific problem.
I hope I helped, and if you need more help, let me know.
-
I understand what you mean and I agree with you in general, but specifically regarding our own website, I have no idea who put that link on that page, which is, by the way, a "nofollow" link. We never built links; all our incoming links are either natural or from our own affiliates. I don't see much of "that stuff" in our back-link profile... am I in error?
Anyhow, yes, we are aware the situation is quite complex. Thank you again.
-
I actually looked at the competitors ranking #3 and #4 for the phrase "download sheet music", since you're ranking 5th. Either way, it's not a matter of too many or too few. It's how much of the link profile is authentic vs. how much is made up of stuff like this...
http://www.dionneco.com/2011/02/love-is-a-parallax/
that's what I meant by fake links.
I think what you may be missing is how complex the situation really is. There's a lot more to be considered than a number in Open Site Explorer - which is actually only a portion of what's really out there.
You may also want to look at changes you can make on-site. I'm a firm believer that proper HTML, accessibility, UX and all that really matter.
-
Thank you Takeshi, I think you got the problem right. The "crawling" side of the issue is something I was thinking about too!
We are actually working on every aspect of our website to improve its content, because we have suffered from Panda a lot in the past two years, so here is the strategy we have been following since March:
1. "Noindexing" most of our thin or almost-duplicate content to get it removed from the index.
2. Improving our best content and differentiating it as much as we can with compelling content (this takes a long time!).
3. Consolidating similar pages with the use of canonical tags.
In order to tackle the "slower crawling" problem you have highlighted here, do you think it would be better for us to stop engines from crawling those pages altogether via robots.txt once they have been removed from the index? Would that solve the crawl issue? I could do that at least with these 100,000 new product pages we plan to add!
Thank you!
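To illustrate what I have in mind, assuming the new publisher's pages were to live under a common URL prefix (the `/alfred/` path here is hypothetical), the robots.txt rule would be something like:

```text
# Hypothetical robots.txt entry blocking crawl of the new product pages
User-agent: *
Disallow: /alfred/
```

One concern I have read about: if a page is blocked in robots.txt, crawlers can no longer fetch it to see its "noindex" meta tag, so the blocking should probably only happen after the pages have actually dropped out of the index.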
-
Wesley, that's because we were penalized by Panda several times in the past... so we are trying the "clean-up" strategy in the hope of being "de-penalized" by Panda at the next related algorithm update. It looks like we had too many "thin" or "almost duplicate" pages... that's why we removed so many pages from the index! But if we don't see improvements in the coming 1-2 months, I guess we'll put the product pages back in the index, because our business is suffering a great deal!
-
Colin, what do you mean by "fake links" exactly? Our link profile actually looks in better shape than our main competitors':
virtualsheetmusic.com (our site): links: 614,013 root domains: 2,233
sheetmusicplus.com (competitor): links: 5,322,596 root domains: 6,149 (worse than our profile!)
musicnotes.com (competitor): links: 6,527,429 root domains: 2,914 (much worse than our profile!)
Am I missing anything?
-
The discrepancy between noindexed and indexed pages is not in itself a problem. However, having all those pages will present a challenge to Google in terms of crawling. Even though the pages won't be indexed, Google will need to spend some of your limited crawl budget crawling all of them.
Also, to recover from Panda it's necessary not only to noindex duplicate content, but also to improve your indexed content. That means things like consolidating similar pages into one page, writing unique content for your pages, and getting unique user-generated content such as reviews.
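Consolidating similar pages means picking one version as the main URL and pointing the near-duplicates at it with a canonical tag, roughly like this (the URLs are made up for illustration):

```html
<!-- On each near-duplicate arrangement page, in the <head>,
     pointing at the one version you want indexed -->
<link rel="canonical" href="https://www.virtualsheetmusic.com/ave-maria-violin/">
```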
-
Why would you want to no-index your product pages? They seem like exactly the kind of pages you want to get found on.
There shouldn't be a problem with the ratio of indexed vs. no-indexed pages, except that you won't get found via the no-indexed ones. Product pages tend to be the kind of pages that you REALLY want to get found on.
I think you should rethink your strategy for recovering from the penalties.
Try to find out where exactly the penalties came from, and fix the errors in that area of your website.
-
Can't say I've been in that situation, but search engines seem to interpret that tag as an on/off switch. And I think you probably know that your problems aren't related to, or able to be solved by, robots meta tags.
You need fewer fake links. OSE finds well over half a million links from 3K root domains to your site. Look at your competitors - a few thousand links from a handful of domains.
It's a shame, because it seems like the internet wanted to make you the authority naturally - you've got a handful of really solid links coming in. If you could shed the spam somehow, you'd be doing a lot better.
So yeah, stating the obvious, I know. Best of luck to you, and I hope the site recovers!