Could a large number of "not selected" pages cause a penalty?
-
My site was penalized for specific pages in the UK on July 28 (coinciding with a Panda update).
I cleaned up my website and wrote to Google, and they responded that "no manual spam actions had been taken".
The only other thing I can think of is that we suffered an automatic penalty.
I am also having problems with my sitemap: it includes many error pages, empty pages, etc. According to our index status, we have 2,679,794 "not selected" pages and 36,168 total indexed.
Could this be what caused the penalty?
(If you have any articles to back up your answers, that would be greatly appreciated.)
Thanks!
-
Canonical tags pointing to what? Themselves, or the pages they should point to? Are these pages unique only by some URL variables? If so, you can instruct Google to ignore specific GET variables to resolve this issue, but you would also want to fix your sitemap woes: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1235687
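For illustration only (example.com and the parameter names here are made up), this is the difference it makes in the <head> of a duplicate URL:

    <!-- On a duplicate variant such as http://www.example.com/widgets?sessionid=abc123 -->

    <!-- Canonical to itself: tells Google this duplicate is its own
         canonical page, so nothing gets consolidated: -->
    <link rel="canonical" href="http://www.example.com/widgets?sessionid=abc123" />

    <!-- Canonical to the clean page: consolidates the variants into one URL: -->
    <link rel="canonical" href="http://www.example.com/widgets" />

(You would use one or the other, never both; conflicting canonical declarations on one page will generally cause Google to ignore them all.)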
This is where it gets sticky: these pages are certainly not helping, and Google Webmaster Tools shows us they are not being indexed. But if you have this problem, how many other technical problems could the site have?
We can be almost certain you have some kind of Panda filter, but to diagnose it further we would need a link and access to analytics to determine what has gone wrong and to provide more detailed guidance on resolving the issues.
This could be a red herring and your problem could lie elsewhere, but with no examples we can only give very general responses. If this were my site, I would identify the most likely issues and work through them in a pragmatic way, eliminating each possibility before looking at other potential causes.
My advice would be to have the site analysed by someone with specific experience of Panda penalties who can give you concrete feedback on the problems and guidance on resolving them.
If the URL is sensitive and can't be shared here, I can offer this service and am in the UK. I am sure several other users at SEOmoz can also help: I know Marie Haynes offers this service, and I am sure Ryan Kent could help as well.
Shout if you have any questions or can provide more details (or a URL).
-
Hi,
Thanks for the detailed answer.
We have many duplicate pages, but they all have canonical tags on them... shouldn't that be solving the problem? Would pages with canonical tags still be showing up here?
-
Yes, this can definitely cause problems. In fact, this is a common footprint on sites hit by the Panda updates.
It sounds like you have some sort of canonical issue on the site: multiple copies of each page are being crawled. Google is finding lots of copies of the same thing, crawling them, but deciding that they are not sufficiently unique or useful to keep in the index. I've been working on a number of sites hit with the same issue, and clean-up can be a real pain.
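As a rough sketch of the pattern you want instead (the URLs are invented for illustration), every crawlable copy of a page should declare one and the same canonical:

    <!-- All of these crawled copies:
           http://www.example.com/product/123
           http://www.example.com/product/123?ref=home
           http://www.example.com/product/123?sessionid=xyz
         carry the same tag in their <head>: -->
    <link rel="canonical" href="http://www.example.com/product/123" />

If the tags are in place but Google is still reporting millions of "not selected" pages, check that they actually render on every variant and point where you think they do.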
The best starting point for reading is probably this article here on SEOmoz: http://www.seomoz.org/learn-seo/duplicate-content. That article includes some useful links on how to diagnose and solve the issues as well, so be sure to check out all the linked resources.
-
Hey Sarah
There are always a lot of moving parts when it comes to penalties, but the very fact that you lost traffic on a known Panda date really points towards this being a Panda-style penalty. Panda is an algorithmic penalty, so you will not receive any kind of notification in Webmaster Tools, and likewise a reconsideration request will not help; you have to fix the problem to resolve the issues.
The "not selected" pages are likely a big part of your problem. Google classes "not selected" pages as follows:
"Not selected: Pages that are not indexed because they are substantially similar to other pages, or that have been redirected to another URL. More information."
If you have the best part of 3 million of these pages that are 'substantially similar' to other pages, then there is every chance that this is a very big part of your problem.
Obviously, there are a lot of moving parts to this, but it sounds highly likely that this is part of your problem. Just think how this looks to Google: 2.6 million pages that are duplicated. It is a low-quality signal, a possible attempt at manipulation, or who knows what else; what we do know is that those pages are unlikely to be a strong result for any search user, so they have been dropped.
What to do?
Well, firstly, fix your sitemap and sort out these duplication problems. It's hard to give specifics without a link to the site in question, but just sort this out: apply the noindex tag dynamically if need be (see the sketch below), remove the duplicates from the sitemap, or, heck, remove the sitemap altogether for a while until it is fixed. Just sort out these issues one way or another.
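To make "apply the noindex tag dynamically" concrete, here is a minimal sketch; which pages trigger it depends entirely on your platform, so the conditions in the comment are hypothetical:

    <!-- Emitted in the <head> only on pages you want dropped from the index,
         e.g. error pages, empty pages, or duplicates you can't canonicalise: -->
    <meta name="robots" content="noindex, follow" />

The "follow" part lets Google keep crawling through the page's links while the page itself is excluded from the index.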
Happy to give more help here if I can, but I would need a link or some such to advise better.
Resources
You asked for some articles, and I am not completely sure what to provide without a link to your site, but let me have a shot at some general points:
1. Good General Panda Overview from Dr. Pete
http://www.seomoz.org/blog/fat-pandas-and-thin-content
2. An overview of canonicalisation from Google
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=139066
3. A way to diagnose and hopefully recover from Panda, from John Doherty at Distilled.
http://www.distilled.net/blog/seo/beating-the-panda-diagnosing-and-rescuing-a-clients-traffic/
4. Index Status Overview from Google
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=2642366
Summary
You have a serious problem here, but hopefully one that can be resolved. Panda is primarily focused on on-page issues, and this is an absolute doozy of an on-page issue, so sort it out and you should see a recovery. Keep in mind that you currently have roughly 75 times more problem pages than actual content pages in the index; this may be the biggest case I have ever seen, so I would be very keen to see how you get on and what happens when you resolve these issues, as I am sure would the wider SEOmoz community.
Hope this helps & please fire over any questions.
Marcus