Can anyone help me understand why Google is marking a large number of my webpages as "Not Selected" when it crawls my site?
-
When looking through my Google Webmaster Tools, I clicked into the advanced settings under Index Status and was surprised to see that Google has marked around 90% of the pages on my site as "Not Selected." Please take a look and offer any suggestions.
-
Thank you. Thank you. Thank you. That makes so much sense. This is also the issue I am having with my communities and cities pages, pointing at my http://luxuryhomehunt.com/homes-for-sale page.
Does that make sense?
-
Thanks for the response. The popup runs in JavaScript, and from what I have been told, search engines can still crawl pages as long as the opt-in is handled in JavaScript. Typically a visitor hits one of our landing pages, such as http://luxuryhomehunt.com/homes-for-sale/Longwood/alaqua-lakes.html, where they can find information about the specific community they are searching for; then, if they click on a listing, they are prompted to opt in.
Do you think there may be any correlation with me using or not using canonical tags? Another thing I was wondering is whether it has anything to do with my handling of pages 2, 3, 4, 5, etc. of a city or community with more than ten listings.
I am not sure why your connection would have been refused. I am currently running an XML sitemap generator, and maybe that had something to do with it. Either way, I am super grateful for your help and for you taking a look at this. I am very new to SEO and trying to learn my way through as much as possible.
-
Hmm, I just tried to click on a listing from Google, but I was served a popup that required me to enter my contact information before I could access the page http://luxuryhomehunt.com/view-property/40096215. Did you just add this popup? Since there is no way for users to opt out of entering contact information to view a listing, it may be that the search engines are being blocked as well.
I also tried crawling the site with Screaming Frog SEO Spider and Xenu, but my connection was refused. I'm not sure whether my IP was blocked or the site is blocking crawlers, but my guess is that the search engines may be having trouble accessing all of the pages on your site.
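If it helps, here is a rough way to check whether the block is user-agent based - a minimal Python sketch (the listing URL and user-agent strings are just examples, not anything pulled from your actual setup) that requests the same page as a regular browser and as a Googlebot-style crawler and compares the responses:

```python
# Sketch: compare how a page responds to a browser-like request vs. a crawler-like one.
# The URL and user-agent strings below are only examples.
import urllib.request

URL = "http://luxuryhomehunt.com/view-property/40096215"
USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/20100101 Firefox/17.0",
    "crawler": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

for name, agent in USER_AGENTS.items():
    request = urllib.request.Request(URL, headers={"User-Agent": agent})
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            body = response.read()
            print(f"{name}: HTTP {response.status}, {len(body)} bytes")
    except Exception as error:  # connection refused, 403, etc.
        print(f"{name}: request failed ({error})")
```

If both requests fail the same way, the refusal is probably happening at the network or IP level rather than being aimed at crawlers specifically.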
At the very least, I'd recommend removing that popup since it's bad for user experience and may be causing problems with the search engines.
EDIT - I did some more digging and looked at the Google cache for one of your listings - http://webcache.googleusercontent.com/search?q=cache:L6LzTqj9gQUJ:luxuryhomehunt.com/view-property/40445850+&cd=6&hl=en&ct=clnk&gl=us&client=firefox-a. On that page, you have the rel="canonical" tag set to http://luxuryhomehunt.com/view-property, which tells the search engines that all of your property listing pages should use that single canonical URL - and that explains why most of your pages are "Not Selected" per Google:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=2642366
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=139066
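If you want to confirm the fix once the canonical tags are corrected, here is a rough sketch (the listing URLs are just examples from your site, and the regex assumes the rel attribute appears before href, which is typical) that prints the canonical URL each page declares - every listing should reference its own URL rather than http://luxuryhomehunt.com/view-property:

```python
# Sketch: print the rel="canonical" URL declared by each page.
# The listing URLs are only examples; each page should canonicalize to itself.
import re
import urllib.request

PAGES = [
    "http://luxuryhomehunt.com/view-property/40445850",
    "http://luxuryhomehunt.com/view-property/40096215",
]
# Assumes rel="canonical" appears before href inside the <link> tag.
CANONICAL_PATTERN = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']', re.IGNORECASE
)

for page in PAGES:
    with urllib.request.urlopen(page, timeout=10) as response:
        html = response.read().decode("utf-8", "ignore")
    match = CANONICAL_PATTERN.search(html)
    canonical = match.group(1) if match else "(no canonical tag found)"
    print(f"{page}\n  -> canonical: {canonical}")
```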
Related Questions
-
When I type site:jamalon.com to discover the number of pages indexed, it gives me a different result from Google Webmaster Tools
Technical SEO | | Jamalon0 -
Help with Google News application URL question
Hi, I am going to be applying to have our site included in Google News, but I have come across the requirement below and am not sure how we meet it. I use Joomla, and our site is www.in2town.co.uk; the page we are including is http://www.in2town.co.uk/latest-news-headlines. "Article URLs. To make sure that we only crawl new articles, please make sure your URLs are unique with at least 3 digits, and are permanent." Can anyone please let me know how I do this with the URL?
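As I read that guideline, it just means each article URL must carry a permanent, unique identifier containing at least three digits (Joomla's article IDs usually satisfy this once they are included in the URL). A quick hedged sketch - the example URLs are made up - for checking whether URLs would pass the three-digit rule:

```python
# Sketch: check whether article URLs contain at least three consecutive digits,
# which is one reading of the Google News uniqueness rule. Example URLs are hypothetical.
import re

THREE_DIGITS = re.compile(r"\d{3,}")

urls = [
    "http://www.in2town.co.uk/latest-news-headlines/1234-sample-article-title",
    "http://www.in2town.co.uk/latest-news-headlines/sample-article-title",
]

for url in urls:
    status = "OK" if THREE_DIGITS.search(url) else "MISSING 3-digit ID"
    print(f"{status}: {url}")
```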
Technical SEO | | ClaireH-1848860 -
Implementation of rel="next" & rel="prev"
Hi All, I'm looking to implement rel="next" & rel="prev", so I've been looking for examples. I looked at the source code for the Moz.com forum - if anyone is going to do it properly, Moz is. I noticed that the rel="next" & rel="prev" attributes have been implemented on the a href tags that link to the previous and next pages, rather than in the head. I'm assuming this is fine with Google, but in their documentation they state to put the tags in the <head>. Does it matter? Neil.
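For what it's worth, here is a small sketch (the category URL and ?page= parameter are hypothetical) that prints the <link rel="prev"> / <link rel="next"> elements you would place in the <head> for a given page of a paginated series:

```python
# Sketch: print the <link rel="prev"/"next"> head elements for page N of a paginated series.
# The base URL and ?page= parameter are hypothetical; page 1 is assumed to live at the base URL.
def pagination_links(base_url, page, last_page):
    links = []
    if page > 1:
        prev_url = base_url if page == 2 else f"{base_url}?page={page - 1}"
        links.append(f'<link rel="prev" href="{prev_url}">')
    if page < last_page:
        links.append(f'<link rel="next" href="{base_url}?page={page + 1}">')
    return links

# Example: page 3 of 5 gets both a rel="prev" and a rel="next" element for its <head>.
for tag in pagination_links("http://www.example.com/category", page=3, last_page=5):
    print(tag)
```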
Technical SEO | | NDAY0 -
Should i do "Article Marketing" for my quotes site?
Hello members, should I do article marketing for my quotes site to build quality backlinks? Will it improve my rankings?
Technical SEO | | rimon56930 -
Crawling issues in Google
Hi everyone, I think I have crawling issues with one of my sites. It has vanished from the Google rankings - it used to rank for all the services I offer, but it hasn't since September 29th. I have resubmitted the site to Google twice, and both times they came back with the same answer:

"We reviewed your site and found no manual actions by the web spam team that might affect your site's ranking in Google. There's no need to file a reconsideration request for your site, because any ranking issues you may be experiencing are not related to a manual action taken by the webspam team. Of course, there may be other issues with your site that affect your site's ranking. Google's computers determine the order of our search results using a series of formulas known as algorithms. We make hundreds of changes to our search algorithms each year, and we employ more than 200 different signals when ranking pages. As our algorithms change and as the web (including your site) changes, some fluctuation in ranking can happen as we make updates to present the best results to our users. If you've experienced a change in ranking which you suspect may be more than a simple algorithm change, there are other things you may want to investigate as possible causes, such as a major change to your site's content, content management system, or server architecture. For example, a site may not rank well if your server stops serving pages to Googlebot, or if you've changed the URLs for a large portion of your site's pages. This article has a list of other potential reasons your site may not be doing well in search."

The way I detected that it may be a crawling issue is that two weeks ago I changed my meta tags - they are very slow to update in the index, and for some of my pages they never did. Do you know any good tools to check for bad code that could slow down crawling? I really don't know where else to look. I validated the website with the W3C validator, ran Xenu, and cleaned those issues up, but my website is still down. Any ideas are appreciated.
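As a very rough first check on whether pages are responding slowly or failing outright for a crawler, here is a hedged sketch (the URLs are placeholders for pages on the affected site) that times a plain GET against a few pages:

```python
# Sketch: report status code and response time for a few pages, as a rough
# crawlability check. The URLs are placeholders for pages on the affected site.
import time
import urllib.request

pages = [
    "http://www.example.com/",
    "http://www.example.com/services",
]

for page in pages:
    start = time.time()
    try:
        with urllib.request.urlopen(page, timeout=15) as response:
            elapsed = time.time() - start
            print(f"{page}: HTTP {response.status} in {elapsed:.2f}s")
    except Exception as error:
        print(f"{page}: failed after {time.time() - start:.2f}s ({error})")
```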
Technical SEO | | CMTM0 -
Do we need to manually submit a sitemap every time, or can we host it on our site as /sitemap and Google will see & crawl it?
I realized we don't have a sitemap in place, so we're going to get one built. Once we do, I'll submit it manually to Google via Webmaster Tools. However, we have a very dynamic site with content constantly being added. Will I need to keep manually re-submitting the sitemap to Google? Or could we have the continually updating sitemap live on our site at /sitemap and the crawlers will just pick it up from there? I noticed this is what SEOmoz does at http://www.seomoz.org/sitemap.
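You generally shouldn't need to re-submit it by hand: once the sitemap URL is registered in Webmaster Tools (or referenced from a Sitemap: line in robots.txt), Google re-fetches it periodically on its own. If you want to nudge Google after a large batch of new content, there is also a simple "ping" URL; a hedged sketch (your sitemap location is a placeholder, and the endpoint is as documented at the time of writing and may change):

```python
# Sketch: notify Google that a sitemap has been updated via the sitemap "ping" URL.
# The sitemap location is a placeholder; the ping endpoint may change over time.
import urllib.parse
import urllib.request

SITEMAP_URL = "http://www.example.com/sitemap.xml"
ping_url = "http://www.google.com/ping?sitemap=" + urllib.parse.quote_plus(SITEMAP_URL)

with urllib.request.urlopen(ping_url, timeout=10) as response:
    print(f"Ping returned HTTP {response.status}")
```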
Technical SEO | | askotzko0 -
How can I tell Google that a page has not changed?
Hello, we have a website with many thousands of pages. Some of them change frequently, some never. Our problem is that Googlebot is generating way too much traffic - half of our page views come from Googlebot. We would like to tell Googlebot to stop crawling pages that never change. This one, for instance: http://www.prinz.de/party/partybilder/bilder-party-pics,412598,9545978-1,VnPartypics.html. As you can see, there is almost no content on the page and the picture will never change, so I am wondering if it makes sense to tell Google that there is no need to come back. The following header fields might be relevant. Currently our webserver answers with these headers:

Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0, public
Pragma: no-cache
Expires: Thu, 19 Nov 1981 08:52:00 GMT

Does Google honor these fields? Should we remove no-cache, must-revalidate, and Pragma: no-cache, and set Expires to, e.g., 30 days in the future? I also read that a webpage that has not changed should answer with 304 instead of 200. Does it make sense to implement that? Unfortunately, that would be quite hard for us. Maybe Google would also spend more time on pages that actually changed, instead of wasting it on unchanged pages. Do you have any other suggestions for how we can reduce Googlebot's traffic on irrelevant pages? Thanks for your help, Cord
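On the 304 idea in the question above: the usual pattern is to send a Last-Modified (or ETag) header and answer conditional requests with 304 Not Modified when nothing has changed. A minimal sketch using Python's standard http.server, with a fixed last-modified date purely for illustration:

```python
# Sketch: answer conditional GETs with 304 Not Modified when the page is unchanged.
# The last-modified date and page body are fixed here purely for illustration.
from email.utils import formatdate, parsedate_to_datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

LAST_MODIFIED = formatdate(1325376000, usegmt=True)  # 2012-01-01 00:00:00 GMT
BODY = b"<html><body>This page never changes.</body></html>"

class UnchangedPageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        since = self.headers.get("If-Modified-Since")
        if since:
            try:
                unchanged = parsedate_to_datetime(since) >= parsedate_to_datetime(LAST_MODIFIED)
            except (TypeError, ValueError):
                unchanged = False
            if unchanged:
                self.send_response(304)  # client's copy is still current
                self.end_headers()
                return
        self.send_response(200)
        self.send_header("Last-Modified", LAST_MODIFIED)
        self.send_header("Cache-Control", "public, max-age=2592000")  # 30 days
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), UnchangedPageHandler).serve_forever()
```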
Technical SEO | | bimp0 -
How do ping services help your site?
Hi, I am trying to understand how services such as pingler.com help your site. I think I understand the Google ping service, which tells Google that you have updated a page, but how does Pingler work? Pingler claims that it sends traffic to your site, but I do not understand this. Any help would be great.
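For what it's worth, most ping services speak the old weblogUpdates XML-RPC protocol: the service sends a tiny "this site was updated" notice to an aggregator, which other services and crawlers can then poll - it doesn't send visitors to you directly. A hedged sketch of that call (the endpoint shown is Ping-O-Matic's public one, used only as an example, and the site details are placeholders):

```python
# Sketch: send a weblogUpdates.ping ("this site has new content") notice to a ping aggregator.
# The endpoint and site details are examples only.
import xmlrpc.client

endpoint = xmlrpc.client.ServerProxy("http://rpc.pingomatic.com/")
result = endpoint.weblogUpdates.ping(
    "In2town Lifestyle Magazine",   # site name
    "http://www.in2town.co.uk/",    # site URL
)
print(result)  # typically a struct like {'flerror': False, 'message': '...'}
```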
Technical SEO | | ClaireH-1848861