Google refuses to index our domain. Any suggestions?
-
A very similar question was asked previously. (http://www.seomoz.org/q/why-google-did-not-index-our-domain) We've done everything in that post (and comments) and then some.
The domain is http://www.miwaterstewardship.org/ and, so far, we have:
- put "User-agent: * Allow: /" in the robots.txt (We recently removed the "allow" line and included a Sitemap: directive instead.)
- built a few hundred links from various pages including multiple links from .gov domains
- properly set up everything in Webmaster Tools
- submitted site maps (multiple times)
- checked the "fetch as googlebot" display in Webmaster Tools (everything looks fine)
- submitted a "request re-consideration" note to Google asking why we're not being indexed
Webmaster Tools tells us that it's crawling the site normally and indexing everything correctly. Yahoo! and Bing have both indexed the site with no problems and are returning results. Additionally, many of the pages on the site have PR0, which is unusual for a non-indexed site; typically we've seen those sites have no PR at all.
If anyone has any ideas about what we could do I'm all ears. We've been working on this for about a month and cannot figure this thing out.
Thanks in advance for your advice.
-
You make excellent points. I'll escalate this to "the pros" and see if they're able to bring their guru powers to bear on the trouble.
Thanks again Ryan for all your advice. It is greatly appreciated.
-
Looking at the site I can confirm the following:
- the home page is tagged "index, follow"
- the status code for the home page is 200, an OK response
- the robots.txt file is valid and clear
- your crawl reports look fine to me
- you stated your sitemap is received and 73 of 75 pages are indexed
- your site is clearly not in Google's index, as a site:miwaterstewardship.org search shows nothing
- I looked at your sitemap. I am not familiar with .aspx sitemaps, but it does contain valid HTML links, which apparently is enough for Google to utilize
- you stated your site is not under penalty, per Google
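Incidentally, the index/follow check above is easy to script. A minimal standard-library sketch (the sample HTML below is illustrative, not the live page):

```python
# Extract the robots meta tag from a page's HTML using only the
# standard library. The sample markup is illustrative, not fetched live.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.robots_content = None

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs with names lowercased
        if tag == "meta":
            d = dict(attrs)
            if d.get("name", "").lower() == "robots":
                self.robots_content = d.get("content", "")

sample = '<html><head><meta name="robots" content="index, follow"></head></html>'
p = RobotsMetaParser()
p.feed(sample)
print(p.robots_content)  # index, follow
```

A page with no robots meta tag at all is also fine; the default is index, follow.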
The possibilities are:
- one of the pieces of information we are depending on is incorrect
- we are overlooking a key piece of information
- our understanding of SEO in this case is not complete
- there is an issue with Google which is preventing your site from being indexed
At this point I would suggest using your 1 PRO question on this issue, and reference this Q&A thread. While I don't believe we missed anything we should get the team to look at this issue and rule out every last possibility.
-
-
Thanks for the suggestions, Ryan. All of the previous changes were made before Google did its last crawl. Here's the other info...
URLs Submitted = 78 / URLs in Web Index = 75 / The sitemap status is a green check mark, and it was downloaded today (June 14).
Geographic target has not been selected. Google has always been able to determine the crawl rate. I just changed the preferred domain to www.miwaterstewardship.org. That's the only change made recently.
This is another piece that baffles me. Check out the crawl stats for the site here: http://netvantagemarketing.com/temp/miws-crawl-stats.png
The bot is crawling an average of 63 pages per day and there's crawling activity since the middle of March. STILL, though...the domain absolutely will not appear in the index.
We're working with the client right now to see if we can get the site changed to a new IP address. The thought is that perhaps Google has somehow historically blocked the IP that it lives on now and changing to a new IP might get us out of jail. SUPER long shot...but those are the kinds of things we're trying now.
Thanks again for the help.
-
I was able to verify your robots.txt file is correctly set up. Did you make the changes before Google crawled the site on Saturday? You are correct that your site is not in the index. I would guess the robots.txt file was not modified until after the crawl. If it was adjusted pre-crawl, below are the next steps you can take.
Let's take a fresh look at your site:
- you have a valid robots.txt file
- your home page has a valid "index, follow" tag (not necessary, but it doesn't hurt either)
- you have checked WMT and confirmed your site was crawled
In WMT, under Site Configuration > Sitemaps, there is a "URLs submitted" field and a "URLs in web index" field. What are those numbers, please?
It's a bit far-reaching, but while you are there, please go over to your Settings tab in WMT. Geographic Target should not be checked, or if it is, then Target Users in US should be selected. Preferred domain should be "www.miwaterstewardship.org", and I would recommend the "Let Google determine my crawl rate" option unless you have a specific reason for doing otherwise.
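If you want to double-check the "URLs submitted" count independently, you can count the loc entries in a sitemap yourself. A short standard-library sketch (the two-URL sitemap below is a made-up example, not the live file):

```python
# Count the <loc> entries in a sitemap file using the standard library.
# The sample XML is a minimal made-up example, not the live sitemap.
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

sample_sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.miwaterstewardship.org/</loc></url>
  <url><loc>http://www.miwaterstewardship.org/about</loc></url>
</urlset>"""

root = ET.fromstring(sample_sitemap)
urls = [loc.text for loc in root.findall("sm:url/sm:loc", NS)]
print(len(urls))  # 2
```

If the count here doesn't match WMT's "URLs submitted" figure, the sitemap Google fetched isn't the one you think it is.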
-
-
Well...we're still in the virtual doghouse so-to-speak...
I made the changes that Ryan suggested on Friday. Webmaster Tools reports that the GoogleBot crawled the site on Saturday and that everything is OK in its eyes. Google responded to our reconsideration request over the weekend and stated very specifically that no actions were taken by the Google Spam team which would affect the rankings of our site or domain.
Still, if you search info:miwaterstewardship.org in The Ol' Goog, it continues to report that the site does not exist in the Google database.
Are there any other ideas of what we might be able to try?
This is a Dot Net Nuke site. Is there a DNN setting somewhere which might indicate to Google that it should not report our domain in search results?
Thanks for the looks and the help.
-
I agree with Ryan: your backlinks look great, the website is structured well, and it has a great-looking design. No signs of you breaking any of Google's TOS.
-
Thank you EGOL. I consider myself a student of SEO, definitely not a master.
The Q&A forums here have given me a great opportunity to learn about SEO. I have been spending all my time this past month un-learning all the bad information I gathered from the internet, and learning SEO the right way.
The Matt Cutts videos, SEOmoz webinars, blogs and pro Q&A have all been immensely helpful. Those resources, along with the replies you and other mozzers offer have provided me an incredibly rich learning experience.
Thanks for noticing.
-
Thanks, Ryan. I noticed this as well but figured I'd leave it alone since Webmaster Tools was telling me "all is well" with the robots.txt file. I'll add the code like you suggest and we'll see what happens.
Thanks again.
-
Ryan, you have been giving some really valuable answers. Nice. Keep up the great work!
-
Fix your robots.txt. It is not set up as you suggested. Your file at http://www.miwaterstewardship.org/robots.txt currently reads:

User-agent: *
Sitemap: http://www.miwaterstewardship.org/SiteMap.aspx

I am unsure what action a search engine would take upon encountering this code, but based on your post it seems that it blocks all agents. The correct code would be:

User-agent: *
Disallow:
Sitemap: http://www.miwaterstewardship.org/SiteMap.aspx

For more information about robots.txt you can take a look at: http://www.robotstxt.org/robotstxt.html
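The distinction matters because an empty Disallow value and a bare slash mean opposite things. A quick demonstration with Python's standard-library parser (URL taken from this thread):

```python
# Contrast an empty Disallow (disallow nothing, i.e. allow everything)
# with "Disallow: /" (block everything), using only the standard library.
from urllib.robotparser import RobotFileParser

def allowed(lines, agent="Googlebot",
            url="http://www.miwaterstewardship.org/"):
    rp = RobotFileParser()
    rp.parse(lines)
    return rp.can_fetch(agent, url)

print(allowed(["User-agent: *", "Disallow:"]))    # True: nothing is blocked
print(allowed(["User-agent: *", "Disallow: /"]))  # False: everything is blocked
```

So the corrected file above, with its empty Disallow line, explicitly permits crawling of the whole site.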