Anyone managed to decrease the "not selected" graph in WMT?
-
Hi Mozzers.
I am working with a very large e-commerce site that has a big issue with duplicate and near-duplicate content. The site actually received a message in WMT listing out pages that Google deemed it should not be crawling. Many of these were the usual pagination and category-sorting parameter URLs, etc.
We have since fixed the issue with a combination of site changes, robots.txt rules, parameter handling and URL removals; however, I was expecting the "not selected" graph in WMT to start dropping, and it hasn't.
The number of pages blocked by robots.txt has increased by around 1 million (which was expected), and the number of indexed pages has actually increased despite our removing hundreds of thousands of pages. I assume this is because we have freed up some crawl bandwidth for more important pages, such as products.
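For context, the robots.txt rules we added were roughly along these lines (a simplified sketch - the paths and parameter names here are illustrative, not our exact rules):
User-agent: *
# Illustrative only: faceted sorting and ordering parameters
Disallow: /*?sort=
Disallow: /*&sort=
Disallow: /*?order=
# Illustrative only: deep pagination on category listings
Disallow: /*/page/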
I guess my question is two-fold:
1. Is the "not selected" graph cumulative? That would explain why it isn't dropping.
2. Has anyone managed to get this figure to drop significantly? Should I even care? I am relating this to Panda, by the way.
It's important to note that the changes were made around three weeks ago, and I am aware that not everything will have been re-crawled yet.
Thanks,
Chris
-
Very interesting. I'm also convinced the "not selected" graph is a big clue towards a Panda penalty. I guess I will have to wait another couple of weeks to see whether our changes have affected the graph. Maybe this time lag is why it can take upwards of six months to recover from Panda!
-
Hi Chris
Here is some useful information about the "Not Selected" data in WMT. I hope this article helps you understand the Not Selected graph better: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=2642366
-
The "Not Selected" isn't cumulative. The "Ever Crawled" is though.
I have a large Wordpress content site. It was hit by Panda on a very same day that my "not selected" multiplied by 8. I don't think it was a coincidence, and I didn't make any large changes to the site besides the regular addition of about 10 posts per week.
I've been able to effect a downward movement on the not selected count by removing/redirecting things like "replytocom" variable URLs in the comments section;reworking print and email versions of each article, etc. It very slow though, only reducing by an average of 100 per week.
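For anyone dealing with the same replytocom issue, the sort of redirect that cleans these URLs up looks roughly like this - an Apache mod_rewrite sketch assuming a standard WordPress install, so treat it as illustrative rather than copy-paste ready:
# Sketch only: 301 any URL carrying a replytocom parameter back to the clean post URL
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)replytocom= [NC]
RewriteRule ^(.*)$ /$1? [R=301,L]
The trailing ? on the substitution strips the query string, so the redirect lands on the plain article URL.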
Needless to say, I think the not selected metric means quite a lot.
Related Questions
-
Filter Content By State Selection and SEO Considerations
I have an insurance client that is represented in three states. They need to present different information to users by state identification. They prefer to have one page with all the information and then present the information relevant to the state based on the user's selection from a pop-up window. Spiders will be able to index all the content; users will only see the content based on their selection. So, I wanted to ask the Moz community: what SEO implications could this have? The information available on the web about this situation is very thin, so I'd really appreciate any guidance that can be given... thanks,
Intermediate & Advanced SEO | Liamis
-
Disallowed "Search" results with robots.txt and Sessions dropped
Hi
Intermediate & Advanced SEO | | Frankie-BTDublin
I've started working on our website and I've found millions of "Search" URL's which I don't think should be getting crawled & indexed (e.g. .../search/?q=brown&prefn1=brand&prefv1=C.P. COMPANY|AERIN|NIKE|Vintage Playing Cards|BIALETTI|EMMA PAKE|QUILTS OF DENMARK|JOHN ATKINSON|STANCE|ISABEL MARANT ÉTOILE|AMIRI|CLOON KEEN|SAMSONITE|MCQ|DANSE LENTE|GAYNOR|EZCARAY|ARGOSY|BIANCA|CRAFTHOUSE|ETON). I tried to disallow them on the Robots.txt file, but our Sessions dropped about 10% and our Average Position on Search Console dropped 4-5 positions over 1 week. Looks like over 50 Million URL's have been blocked, and all of them look like all of them are like the example above and aren't getting any traffic to the site. I've allowed them again, and we're starting to recover. We've been fixing problems with getting the site crawled properly (Sitemaps weren't added correctly, products blocked from spiders on Categories pages, canonical pages being blocked from Crawlers in robots.txt) and I'm thinking Google were doing us a favour and using these pages to crawl the product pages as it was the best/only way of accessing them. Should I be blocking these "Search" URL's, or is there a better way about going about it??? I can't see any value from these pages except Google using them to crawl the site.0 -
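For reference, keeping those internal search results out of the crawl via robots.txt would look something like this sketch (paths illustrative) - though, as described above, it is only safe once the product pages are reachable through crawlable category links and sitemaps rather than through the search URLs:
User-agent: *
# Illustrative only: internal search result pages
Disallow: /search/
Disallow: /*?q=
-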
URL Parameter Settings in WMT/Search Console
On a large ecommerce site, the main navigation links to URLs that include a legacy parameter. The parameter doesn't actually seem to do anything to change content - it doesn't narrow or specify content, nor does it currently track sessions. We've set the canonical for these URLs to be the version without the parameter. (We did this when we started seeing that Google was stripping out the parameter in the majority of SERP results.) We're trying to work out how best to set the parameter handling in WMT (Search Console). Our options are: 1. 'No: Doesn't affect page content' - the Crawl field in WMT is then auto-set to 'Representative URL'. (Note that it's unclear how 'Representative URL' is defined. Google's documentation suggests that a representative URL is a canonical URL, and we've specifically set canonicals to be without the parameter, so does this contradict that?) OR 2. 'Yes: Changes, reorders, or narrows page content', and then it's a question of how to instruct Googlebot to crawl these pages: 'Let Googlebot decide' OR 'No URLs'. The fundamental issue is whether the parameter settings are an index signal or a crawl signal. Google documents them as crawl signals, but if we instruct Google not to crawl our navigation, how will it find and pass equity to the canonical URLs? Thoughts? Posted by Susan Schwartz, Kahena Digital staff member
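For illustration, the canonical handling described here would look something like this on one of the parameterised navigation URLs (the parameter name and URLs are hypothetical):
<!-- Hypothetical: served on https://www.example.com/mens-shoes?navsource=topnav -->
<link rel="canonical" href="https://www.example.com/mens-shoes" />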
Intermediate & Advanced SEO | AriNahmani
-
Why do Local "5 pack" results vary between showing Google+, Google+ and website address
I had a client ask me a good question. When they pull up a search result they show up at the top but only with a link to their G+ page. Other competitors show their web address and G+ page. Why are these results different in the same search group? Is there a way to ensure the web address shows up?
Intermediate & Advanced SEO | | Ron_McCabe0 -
Should pages with rel="canonical" be put in a sitemap?
I am working on an ecommerce site and I am going to add different views to the category pages. The views will all have different URLs, so I would like to add the rel="canonical" tag to them. Should I still add these pages to the sitemap?
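For illustration, one common pattern would be for each alternate view URL to declare the main category page as its canonical - a hypothetical sketch with invented URLs:
<!-- Hypothetical: on /category/shoes?view=grid and /category/shoes?view=list -->
<link rel="canonical" href="https://www.example.com/category/shoes" />
The sitemap would then typically list only that canonical URL (again hypothetical):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/category/shoes</loc>
  </url>
</urlset>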
Intermediate & Advanced SEO | EcommerceSite
-
Does anyone have a clue about my search problem?
After three years of destruction, my site still has a problem - or maybe more than one. OK, I understand I had - and probably still have - a Panda problem. The question is: does anyone know how to fix it without destroying everything? If I had money, I'd gladly give it up to fix this, but all I have is me, a small dedicated promotions team, 120,000+ visitors per month and the ability to write, edit and proofread. This is not an easy problem to fix. After completing more than 100 projects, I still haven't got it right; in fact, what I've done over the past two months has only made things worse - and I never thought I could do that. Everything has been measured, so as not to destroy our remaining ability to generate income, because without that, it's the end of the line. If you can help me fix this, I will do anything for you in return - as long as it is legal, ethical and won't destroy my reputation or hurt others. Unless you are a master jedi guru, and I hope you are, this will NOT be easy, but it will prove that you really are a master, jedi, guru and time lord, and I will tell the world and generate leads for you. I've been doing website and SEO work since 1996 and I've always been able to solve problems and fix anything I needed to work on. This has me beaten. So my question is: is there anyone here willing to take a shot at helping me fix this, without the usual responses of "change domains", "delete everything and start over" or "you're screwed"? Of course, it is possible that there is a different problem, nothing to do with algorithms - a hard-coded bias or some penalizing setting that I don't know about, a single needle in a haystack. This problem results in a few visible things: 1. Some pages are buried in supplemental results. 2. Search bots pick up new stories within minutes, but they show up in search results many hours later. Here is the site: http://shar.es/EGaAC On request, I can provide a list of all the things we've done or tried (actually, I still have to finish writing it). Some notes: There is no manual spam penalty. All outgoing links are nofollow, and have been for two years. We never paid for incoming links. We did sell text advertising links 3-4 years ago, using text-link-ads.com, but removed them all 2 1/2 years ago. We did receive payment for some stories, 3-4 years ago, but all have been removed. One more thing: I don't write much - I'm a better editor than a writer - but I wrote a story that had 1 million readers. The massive percentage of 0.0016% came from you-know-who. Yes, 16 visitors. And this was an exclusive, unique story. And there was a similar story, with half a million readers. Same result. Seems like there might be a problem!
Intermediate & Advanced SEO | loopyal
-
How would you handle 12,000 "tag" pages on a WordPress site?
We have a WordPress site where /tag/ pages were not set to "noindex", and they are driving 25% of the site's traffic (roughly 100,000 visits year to date). We can't simply "noindex" them all now, or we'll lose a massive amount of traffic. We can't possibly write unique descriptions for all of them. We can't just do nothing, or a Panda update will come by and ding us for duplicate content one day (surprised it hasn't already). What would you do?
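For context, the "noindex" under discussion is the robots meta tag on each /tag/ archive - hypothetical markup shown below; on a WordPress site it would normally be output by an SEO plugin or the theme rather than added by hand:
<!-- Hypothetical: emitted in the <head> of a /tag/ archive page -->
<meta name="robots" content="noindex, follow" />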
Intermediate & Advanced SEO | M_D_Golden_Peak
-
This is a bit of a melon-twister - Anyone have any ideas?
As a novice I created my main site (waspkilluk.co.uk) and it was geo-targeted for my local city (Bristol). As I got better, I created sub-domains (nailsea.waspkilluk.co.uk) for the smaller towns. Now I am suffering, because I need to create a sub-domain for Bristol, allowing the main site to be free from geo-targeting and thus rank more cleanly for specific topics, e.g. flea control, wasp control, etc. My question is simply this: how do I avoid or limit damage to my existing rankings when I swap the site content over to the sub-domain and remove the keyword Bristol from the root domain's pages? Not a short question, but any thoughts would be lovely to hear.
Intermediate & Advanced SEO | simonberenyi