Anyone managed to decrease the "not selected" graph in WMT?
-
Hi Mozzers.
I am working with a very large e-commerce site that has a big issue with duplicate or near-duplicate content. The site actually received a message in WMT listing pages that Google deemed it should not be crawling. Many of these were the usual culprits: pagination, category sorting parameters, and similar URL variations.
We have since fixed the issue with a combination of site changes, robots.txt, parameter handling, and URL removals; however, I was expecting the "not selected" graph in WMT to start dropping.
The number of robots.txt-blocked pages has increased by around 1 million (which was expected), and the number of indexed pages has actually increased despite our removing hundreds of thousands of pages. I assume this is due to freeing up some crawl bandwidth for more important pages, like products.
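For anyone tackling the same cleanup, the robots.txt part can be sketched roughly as below. This is only an illustration: the parameter names ("sort", "view") are hypothetical stand-ins for whatever sorting/filtering parameters your site actually uses, and Googlebot supports the `*` wildcard shown here.

```
# Hypothetical sketch: keep crawlers out of sort/filter parameter URLs.
# Replace "sort" and "view" with the site's real parameter names.
User-agent: *
Disallow: /*?sort=
Disallow: /*&sort=
Disallow: /*?view=
Disallow: /*&view=
```

Note that blocking in robots.txt stops crawling, not indexing; pair it with parameter handling in WMT as described above.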
I guess my question is two-fold;
1. Is the "not selected" graph cumulative, as this would explain why it isn't dropping?
2. Has anyone managed to get this figure to significantly drop? Should I even care? I am relating this to Panda by the way.
Important to note that the changes were made around 3 weeks ago and I am aware not everything will be re-crawled yet.
Thanks,
Chris -
Very interesting. I'm also convinced the "not selected" graph is a big clue towards a Panda penalty. I guess I will have to wait another couple of weeks to see if our changes have affected the graph. Maybe this time lag is why it can take upwards of 6 months to recover from Panda!
-
Hi Chris
Here is some helpful information about the "Not Selected" data in WMT. I hope this post helps you better understand the Not Selected graph: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=2642366
-
The "Not Selected" graph isn't cumulative. The "Ever crawled" count is, though.
I have a large WordPress content site. It was hit by Panda on the very same day that my "not selected" count multiplied by 8. I don't think it was a coincidence, and I hadn't made any large changes to the site besides the regular addition of about 10 posts per week.
I've been able to effect a downward movement in the not selected count by removing/redirecting things like "replytocom" variable URLs in the comments section, reworking the print and email versions of each article, etc. It's very slow, though, only reducing by an average of 100 per week.
Needless to say, I think the not selected metric means quite a lot.
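For the "replytocom" cleanup specifically, here is a minimal sketch of the kind of redirect rule involved, assuming an Apache server with mod_rewrite (WordPress's usual setup); adapt the pattern to your own environment.

```apacheconf
# Hypothetical .htaccess sketch (assumes Apache + mod_rewrite):
# 301 any URL carrying a replytocom query parameter back to the clean URL.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)replytocom= [NC]
RewriteRule ^ %{REQUEST_URI}? [R=301,L]
</IfModule>
```

The trailing `?` on the substitution strips the query string, so `/post/?replytocom=123` redirects to `/post/`.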
Related Questions
-
Does anyone have any experience with DakWak?
We are looking at using it as a translation service, but I have SEO concerns. Does anyone have any experience with it, or see any potential problems?
Intermediate & Advanced SEO | | EcommerceSite1 -
Rel=next/prev for paginated pages then no need for "no index, follow"?
I have a real estate website and use rel=next/prev on paginated real estate result pages. I understand that "noindex, follow" is not needed on paginated pages when rel=next/prev is in place. However, my case is a bit unique: this is a real estate site whose listings also appear on competitors' sites. So I thought that if I "noindex, follow" the paginated pages, it would reduce the amount of duplicate content on my site and ultimately help it rank well. Again, I understand "noindex, follow" is not normally needed alongside rel=next/prev, but since my content will probably be considered fairly duplicative, I wonder if I should add it anyway.
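For reference, combining the two would look roughly like this in the head of a paginated page (URLs here are placeholders, not the asker's real scheme):

```html
<!-- Hypothetical <head> for page 2 of a paginated listing series. -->
<link rel="prev" href="https://example.com/listings/page/1/">
<link rel="next" href="https://example.com/listings/page/3/">
<meta name="robots" content="noindex, follow">
```

One caution: noindexing members of a rel=next/prev series may prevent Google from treating the set as a consolidated sequence, so test carefully before rolling this out site-wide.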
Intermediate & Advanced SEO | | khi50 -
Fixed "lower-case/mixed-case" Internal Links causing duplicate- Now What?
Hi, So after a site re-launch, Moz crawled it and reported over 150 duplicate content errors. It was determined that this was because of inconsistent capitalization in internal links. Using Screaming Frog, I found all (500+) internal links and fixed them to match the actual URLs. Now the site is 100% consistent across the board, as best I can tell. I am unsure what to do next, though. We launched the site with all the internal link errors, and now many of the pages that are indexed and ranked use the incorrect URL form. Some have said to use a canonical tag. But how can I use a canonical tag on a page that doesn't even exist? Same thing with a 301. Can I redirect /examplepage to /ExamplePage if only /ExamplePage actually exists? I would really appreciate some advice on what to do. After I fixed the internal links, I waited a week; Moz crawled the site again and reported all the same errors, and then even more, all capitalization-related. It seems like a mess. After I did another Screaming Frog crawl, it showed no duplicates, so I know I was successful in fixing the internal links. Help!!
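On the 301 question: yes, URLs are case-sensitive, so /examplepage and /ExamplePage are distinct addresses and one can be redirected to the other even if only one "exists" as a page. A minimal sketch, assuming Apache with mod_rewrite (the path is the asker's own example, the rule pattern is illustrative):

```apacheconf
# Hypothetical .htaccess sketch (assumes Apache + mod_rewrite).
# RewriteRule patterns are case-sensitive by default, so this rule
# only fires for the all-lowercase variant and 301s it to the real URL.
RewriteEngine On
RewriteRule ^examplepage$ /ExamplePage [R=301,L]
```

For hundreds of URLs, generating one rule per page, or serving a self-referencing canonical tag on every page so mis-cased variants consolidate, scales better than hand-written rules.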
Intermediate & Advanced SEO | | yogitrout10 -
Received "Googlebot found an extremely high number of URLs on your site:" but most of the example URLs are noindexed.
An example URL can be found here: http://symptom.healthline.com/symptomsearch?addterm=Neck%20pain&addterm=Face&addterm=Fatigue&addterm=Shortness%20Of%20Breath A couple of questions: Why is Google reporting an issue with these URLs if they are marked as noindex? What is the best way to fix the issue? Thanks in advance.
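One thing worth noting: Google has to crawl a URL before it can see a noindex tag, so the "extremely high number of URLs" warning is about crawl volume, not indexing, and noindex alone won't silence it. A quick sanity check that a URL template really does serve noindex can be scripted; this is a small illustrative Python sketch (the function names are mine, not from any particular tool), fed with fetched HTML:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "robots":
                self.directives.append((a.get("content") or "").lower())

def is_noindexed(html):
    """Return True if any robots meta tag on the page contains 'noindex'."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)

# Example: a head similar to the symptom-search pages above.
page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(is_noindexed(page))  # True
```

Run this against a sample of the reported URLs; if they all return True, the warning is purely a crawl-budget signal and parameter handling or robots.txt is the lever to pull.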
Intermediate & Advanced SEO | | nicole.healthline0 -
Keywords in WMT
Hello, In Google Webmaster Tools, under "content keywords," two of my major keywords are missing. My site used to rank well for the keyphrase "short hairstyles" but now gets very little traffic from Google at all, about 1% of what it did before April 2012. Someone did a negative SEO number on us by pointing 10k+ spammy links at us from message boards; this, and the timing of the traffic loss, leads me to suspect the Penguin update. I am removing them as best I can, but no increase in traffic has resulted, so I'm looking for any and all issues, and the missing keywords seem like an oddity. The missing keywords include "short," which is pretty fundamental. The word is in the domain and appears plenty of times in the content. Any ideas?
Intermediate & Advanced SEO | | jwdl0 -
Rank keeps decreasing - Is my site penalized
Hello, I have run into a bit of a predicament. All of my search terms keep dropping on a monthly basis, even though I am adding quality guest posts every month. Even when I get a handful of articles on semi-popular sites, my rankings still drop. I am wondering: is my site penalized? My metrics also far exceed my rankings, using both the Moz metrics and PageRank. I have a PR of 5 and a domain rank of 50+, and I am still getting outranked on every term by people with lower metrics (PR 2 and DR of 30). In the past I have done mostly article syndication through sites like ezinearticles and isnare, but that was about 5 years ago. I have also done a couple of the "pay $50 for 100 directory submissions" deals once, but that was also about 5 years ago. Has anyone experienced anything like this? Anyone have any advice? As you can probably tell, I am getting really frustrated. P.S. - This is happening for all pages on my website, not just particular pages. Is it possible to get a site-wide penalty, and if so, what can be done about it?
Intermediate & Advanced SEO | | Mjstout0 -
Rel="prev" and rel="next" implementation
Hi there, Since I started using SEOmoz I have had a problem with duplicate content, so I implemented rel="prev" and rel="next" on all the pages with pagination in order to reduce the number of errors, but I've done something wrong and now I can't figure out what it is. The main page URL is: alegesanatos.ro/ingrediente/ and for the other pages: alegesanatos.ro/ingrediente/p2/ - for page 2, alegesanatos.ro/ingrediente/p3/ - for page 3, and so on. We've implemented rel="prev" and rel="next" according to the Google Webmaster guidelines, without adding a canonical tag or base link in the header section, and we still get duplicate meta title error messages for these pages. Do you think there is a problem because we create a separate URL for each page instead of adding parameters (?page=2 or ?page=3) to the main URL (alegesanatos.ro/ingrediente?page=2)? Thanks
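Under the URL scheme described above, each page in the series needs its own prev/next pair pointing at the adjacent pages, and page 2's rel="prev" must point back to the base URL (there is no /p1/). A sketch of what page 2's head might look like; note also that rel=next/prev does not fix duplicate title warnings from a crawler, so identical <title> tags across /p2/, /p3/, etc. should be differentiated (the page-number suffix here is an illustrative suggestion, not the site's actual markup):

```html
<!-- Sketch for alegesanatos.ro/ingrediente/p2/ (page 2 of the series). -->
<title>Ingrediente - Pagina 2</title>
<link rel="prev" href="http://alegesanatos.ro/ingrediente/">
<link rel="next" href="http://alegesanatos.ro/ingrediente/p3/">
```

Distinct /p2/-style URLs are fine; parameters are not required for rel=next/prev to work.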
Intermediate & Advanced SEO | | dan_panait0 -
Need advice re: selecting an Ecommerce platform
Hi. I am responsible for choosing an ecommerce platform and overseeing the implementation of a large ecommerce site. The site will have tens of thousands of products and will be fairly complex. Eventually the site will integrate with the supplier's back-end inventory/order management system, which is some sort of custom Windows/.NET system. (I'm not very technical, if you haven't noticed...) Primarily I want a platform that is SEO-friendly, and I have to be sure that the site is developed properly from an SEO and usability perspective. I thought I would go with an ASP.NET solution (AspDotNetStorefront, specifically) to facilitate the future integration, but I am questioning this choice after reading some of the comments I have found here at SEOmoz. So is ASP.NET really a bad choice SEO-wise? I almost considered Magento but was having trouble finding a good solution provider to work with. I also worried about integration issues down the road. I would appreciate any advice or input anyone may have. Thank you!
Intermediate & Advanced SEO | | sbg7810