Google refuses to index our domain. Any suggestions?
-
A very similar question was asked previously. (http://www.seomoz.org/q/why-google-did-not-index-our-domain) We've done everything in that post (and comments) and then some.
The domain is http://www.miwaterstewardship.org/ and, so far, we have:
- put "User-agent: * Allow: /" in the robots.txt (We recently removed the "allow" line and included a Sitemap: directive instead.)
- built a few hundred links from various pages including multiple links from .gov domains
- properly set up everything in Webmaster Tools
- submitted site maps (multiple times)
- checked the "fetch as googlebot" display in Webmaster Tools (everything looks fine)
- submitted a "request re-consideration" note to Google asking why we're not being indexed
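Since the sitemap has been submitted several times, it may be worth a quick structural sanity check of what it actually serves. A minimal sketch using only the Python standard library; the sitemap content below is an illustrative sitemaps.org-style example (the About.aspx URL is hypothetical), not the site's actual SiteMap.aspx output:

```python
# Parse a sitemaps.org-style XML sitemap and list the URLs it declares,
# using only the standard library.
import xml.etree.ElementTree as ET

SITEMAP_XML = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.miwaterstewardship.org/</loc></url>
  <url><loc>http://www.miwaterstewardship.org/About.aspx</loc></url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(SITEMAP_XML)
urls = [loc.text for loc in root.findall("sm:url/sm:loc", NS)]
print(len(urls))  # 2
print(urls[0])    # http://www.miwaterstewardship.org/
```

If the URL count here doesn't match what Webmaster Tools reports as submitted, the sitemap itself is the first suspect.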
Webmaster Tools tells us that it's crawling the site normally and indexing everything correctly. Yahoo! and Bing have both indexed the site with no problems and are returning results. Additionally, many of the pages on the site have a PageRank of 0, which is unusual for a non-indexed site; typically, we've seen those sites have no PR at all.
If anyone has any ideas about what we could do, I'm all ears. We've been working on this for about a month and cannot figure this thing out.
Thanks in advance for your advice.
-
You make excellent points. I'll escalate this to "the pros" and see if they're able to bring their guru powers to bear on the trouble.
Thanks again, Ryan, for all your advice. It is greatly appreciated.
-
Looking at the site I can confirm the following:
- the home page is tagged "index, follow"
- the status code for the home page is 200, an OK response
- the robots.txt file is valid and clear
- your crawl reports look fine to me
- you stated your sitemap was received and 73 of 75 pages are indexed
- your site is clearly not in Google's index, as a site:miwaterstewardship.org search shows nothing
- I looked at your sitemap. I am not familiar with .aspx sitemaps, but it does contain valid HTML links, which apparently is enough for Google to utilize
- you stated your site is not under penalty, per Google
The possibilities are:
- one of the pieces of information we are depending on is incorrect
- we are overlooking a key piece of information
- our understanding of SEO in this case is not complete
- there is an issue with Google which is preventing your site from being indexed
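One more thing worth ruling out, since the checks so far only cover page-level signals: an X-Robots-Tag: noindex HTTP response header can keep a site out of the index even when the HTML and robots.txt both look fine. A minimal sketch of such a check; the header values shown are hypothetical examples, not what the site actually serves:

```python
# Sketch of a header-level indexability check.
def blocked_by_header(headers):
    """Return True if any X-Robots-Tag header value contains 'noindex'."""
    for name, value in headers.items():
        if name.lower() == "x-robots-tag" and "noindex" in value.lower():
            return True
    return False

print(blocked_by_header({"Content-Type": "text/html"}))          # False
print(blocked_by_header({"X-Robots-Tag": "noindex, nofollow"}))  # True
```

You would feed this the response headers for the home page (e.g. from a HEAD request); any hit here would explain a crawled-but-not-indexed site.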
At this point I would suggest using your 1 PRO question on this issue, and reference this Q&A thread. While I don't believe we missed anything we should get the team to look at this issue and rule out every last possibility.
-
-
Thanks for the suggestions, Ryan. All of the previous changes were made before Google did its last crawl. Here's the other info...
URLs submitted = 78 / URLs in web index = 75 / The sitemap status shows a green check mark, and it was downloaded today, June 14.
Geographic target has not been selected. Google has always been able to determine the crawl rate. I just changed the preferred domain to www.miwaterstewardship.org. That's the only change made recently.
This is another piece that baffles me. Check out the crawl stats for the site here: http://netvantagemarketing.com/temp/miws-crawl-stats.png
The bot is crawling an average of 63 pages per day and there's crawling activity since the middle of March. STILL, though...the domain absolutely will not appear in the index.
We're working with the client right now to see if we can get the site changed to a new IP address. The thought is that perhaps Google has somehow historically blocked the IP that it lives on now and changing to a new IP might get us out of jail. SUPER long shot...but those are the kinds of things we're trying now.
Thanks again for the help.
-
I was able to verify your robots.txt file is correctly set up. Did you make the changes before Google crawled the site on Saturday? You are correct that your site is not in the index. I would guess the robots.txt file was not modified until after the crawl. If it was adjusted pre-crawl, below are the next steps you can take.
Let's take a fresh look at your site:
-
you have a valid robots.txt file
-
your home page has a valid "index, follow" tag (not necessary, but it doesn't hurt either)
-
you have checked WMT and confirmed your site was crawled
In WMT, under Site Configuration > Sitemaps, there is a "URLs submitted" field and a "URLs in web index" field. What are those numbers, please?
- It's a bit far-reaching, but while you are there, please go over to your Settings tab in WMT. Geographic Target should not be checked, or if it is, then "Target users in US" should be selected. Preferred domain should be "www.miwaterstewardship.org", and I would recommend the "Let Google determine my crawl rate" option unless you have a specific reason for doing otherwise.
-
-
Well...we're still in the virtual doghouse, so to speak...
I made the changes that Ryan suggested on Friday. Webmaster Tools reports that the GoogleBot crawled the site on Saturday and that everything is OK in its eyes. Google responded to our reconsideration request over the weekend and stated very specifically that no actions were taken by the Google Spam team which would affect the rankings of our site or domain.
Still, if you search info:miwaterstewardship.org in The Ol' Goog, it continues to report that the site does not exist in the Google database.
Are there any other ideas of what we might be able to try?
This is a DotNetNuke (DNN) site. Is there a DNN setting somewhere which might indicate to Google that it should not report our domain in search results?
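A DNN setting like that would most likely show up as a robots meta tag in the generated page source, so one quick way to rule it out is to scan the rendered HTML directly. A minimal sketch using only the Python standard library; the sample markup below is hypothetical, not the site's actual output:

```python
# Scan HTML for a <meta name="robots"> tag and record its content value.
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Record the content attribute of the first <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.content = None

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if tag == "meta" and attr.get("name", "").lower() == "robots" \
                and self.content is None:
            self.content = attr.get("content", "")

finder = RobotsMetaFinder()
finder.feed('<html><head><meta name="robots" content="noindex"></head></html>')
print(finder.content)  # noindex
```

Feeding it the real home-page source would show at a glance whether DNN is injecting a noindex directive.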
Thanks for the looks and the help.
-
I agree with Ryan: your backlinks look great, the website is structured well, and it has a great-looking design. No signs of you breaking any of Google's TOS.
-
Thank you EGOL. I consider myself a student of SEO, definitely not a master.
The Q&A forums here have given me a great opportunity to learn about SEO. I have been spending all my time this past month un-learning all the bad information I gathered from the internet, and learning SEO the right way.
The Matt Cutts videos, SEOmoz webinars, blogs and pro Q&A have all been immensely helpful. Those resources, along with the replies you and other mozzers offer have provided me an incredibly rich learning experience.
Thanks for noticing.
-
Thanks, Ryan. I noticed this as well but figured I'd leave it alone since Webmaster Tools was telling me "all is well" with the robots.txt file. I'll add the code like you suggest and we'll see what happens.
Thanks again.
-
Ryan, you have been giving some really valuable answers. Nice. Keep up the great work!
-
Fix your robots.txt. It is not set up as you suggested. http://www.miwaterstewardship.org/robots.txt currently contains:

User-agent: *
Sitemap: http://www.miwaterstewardship.org/SiteMap.aspx

I am unsure what action a search engine would take upon encountering this code, but based on your post it seems that it blocks all agents. The correct code would be:

User-agent: *
Disallow:
Sitemap: http://www.miwaterstewardship.org/SiteMap.aspx

For more information about robots.txt you can take a look at: http://www.robotstxt.org/robotstxt.html
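As a sanity check, Python's standard-library robots.txt parser can confirm that the corrected rules permit crawling. The file contents are supplied inline here (the corrected version) rather than fetched from the live site:

```python
# Verify that a "Disallow:" line with an empty value allows all crawling.
from urllib.robotparser import RobotFileParser

corrected = [
    "User-agent: *",
    "Disallow:",
    "Sitemap: http://www.miwaterstewardship.org/SiteMap.aspx",
]

parser = RobotFileParser()
parser.parse(corrected)

print(parser.can_fetch("Googlebot", "http://www.miwaterstewardship.org/"))  # True
```

An empty Disallow value means "disallow nothing," which is the explicit allow-everything form the fix above relies on.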