Google refuses to index our domain. Any suggestions?
-
A very similar question was asked previously. (http://www.seomoz.org/q/why-google-did-not-index-our-domain) We've done everything in that post (and comments) and then some.
The domain is http://www.miwaterstewardship.org/ and, so far, we have:
- put "User-agent: * Allow: /" in the robots.txt (We recently removed the "allow" line and included a Sitemap: directive instead.)
- built a few hundred links from various pages including multiple links from .gov domains
- properly set up everything in Webmaster Tools
- submitted site maps (multiple times)
- checked the "fetch as googlebot" display in Webmaster Tools (everything looks fine)
- submitted a "request re-consideration" note to Google asking why we're not being indexed
Webmaster Tools tells us that it's crawling the site normally and is indexing everything correctly. Yahoo! and Bing have both indexed the site with no problems and are returning results. Additionally, many of the pages on the site have PR0 which is unusual for a non-indexed site. Typically we've seen those sites have no PR at all.
If anyone has any ideas about what we could do I'm all ears. We've been working on this for about a month and cannot figure this thing out.
Thanks in advance for your advice.
-
You make excellent points. I'll escalate this to "the pros" and see if they're able to bring their guru powers to bear on the trouble.
Thanks again Ryan for all your advice. It is greatly appreciated.
-
Looking at the site I can confirm the following:
-
the home page is tagged "index, follow"
-
the status code for the home page is 200, an OK response
-
the robots.txt file is valid and clear
-
your crawl reports look fine to me
-
you stated your sitemap is received and 73 of 75 pages are indexed
-
your site is clearly not in Google's index as a site:miwaterstewardship.org search shows nothing.
-
I looked at your sitemap. I am not familiar with .aspx sitemaps, but yours contains valid HTML links, which is apparently enough for Google to work with
-
you stated your site is not under penalty, as per Google
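One more thing worth ruling out, since everything above checks out: a page can return a 200, have a clean robots.txt, and carry a valid "index, follow" meta tag, yet still be excluded by a noindex in the X-Robots-Tag HTTP response header, which WMT does not always surface. Below is a minimal Python sketch of that check; the helper name is hypothetical, and in practice you would fetch the header with any HTTP client.

```python
def header_blocks_indexing(x_robots_tag):
    """True if an X-Robots-Tag header value would keep a page out of the index."""
    if not x_robots_tag:
        return False
    # Header values are comma-separated directives, e.g. "noindex, nofollow"
    directives = {d.strip().lower() for d in x_robots_tag.split(",")}
    return "noindex" in directives or "none" in directives

# In practice, fetch the header with any HTTP client, e.g.:
#   resp = urllib.request.urlopen("http://www.miwaterstewardship.org/")
#   header_blocks_indexing(resp.headers.get("X-Robots-Tag"))
print(header_blocks_indexing("noindex, nofollow"))  # True
print(header_blocks_indexing(None))                 # False
```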
The possibilities are:
-
one of the pieces of information we are depending on is incorrect
-
we are overlooking a key piece of information
-
our understanding of SEO in this case is not complete
-
there is an issue with Google which is preventing your site from being indexed.
At this point I would suggest using your 1 PRO question on this issue, and reference this Q&A thread. While I don't believe we missed anything we should get the team to look at this issue and rule out every last possibility.
-
-
Thanks for the suggestions, Ryan. All of the previous changes were made before Google did its last crawl. Here's the other info...
URLs Submitted = 78 / URLs in Web Index = 75 / The sitemap status is a green check mark, and it was downloaded today (June 14).
Geographic target has not been selected. Google has always been able to determine the crawl rate. I just changed the preferred domain to www.miwaterstewardship.org. That's the only change made recently.
This is another piece that baffles me. Check out the crawl stats for the site here: http://netvantagemarketing.com/temp/miws-crawl-stats.png
The bot is crawling an average of 63 pages per day and there's crawling activity since the middle of March. STILL, though...the domain absolutely will not appear in the index.
We're working with the client right now to see if we can get the site changed to a new IP address. The thought is that perhaps Google has somehow historically blocked the IP that it lives on now and changing to a new IP might get us out of jail. SUPER long shot...but those are the kinds of things we're trying now.
Thanks again for the help.
-
I was able to verify your robots.txt file is correctly set up. Did you make the changes before Google crawled the site on Saturday? You are correct that your site is not in the index. I would guess the robots.txt file was not modified until after the crawl. If it was adjusted pre-crawl, below are the next steps you can take.
Let's take a fresh look at your site:
-
you have a valid robots.txt file
-
your home page has a valid "index, follow" tag (not necessary, but it doesn't hurt either)
-
you have checked WMT and confirmed your site was crawled
In WMT, under Site Configuration > Sitemaps there is a "URLs submitted" field and a "URLs in web index" field. What are those numbers, please?
- It's a bit of a long shot, but while you are there, please go over to your Settings tab in WMT. Geographic Target should not be checked, or if it is, then "Target users in US" should be selected. Preferred domain should be "www.miwaterstewardship.org", and I would recommend the "Let Google determine my crawl rate" option unless you have a specific reason for doing otherwise.
-
-
Well...we're still in the virtual doghouse, so to speak...
I made the changes that Ryan suggested on Friday. Webmaster Tools reports that the GoogleBot crawled the site on Saturday and that everything is OK in its eyes. Google responded to our reconsideration request over the weekend and stated very specifically that no actions were taken by the Google Spam team which would affect the rankings of our site or domain.
Still, if you search info:miwaterstewardship.org in The Ol' Goog, it continues to report that the site does not exist in the Google database.
Are there any other ideas of what we might be able to try?
This is a Dot Net Nuke site. Is there a DNN setting somewhere which might indicate to Google that it should not report our domain in search results?
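One quick way to rule out a stray robots meta tag that the CMS might be injecting is to fetch a page and scan its head for any meta robots directives. A small sketch using only the Python standard library (the sample HTML here is illustrative, not taken from the live site):

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collect the content of any <meta name="robots"> tags in a page."""
    def __init__(self):
        super().__init__()
        self.robots = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)  # HTMLParser lowercases attribute names
            if a.get("name", "").lower() == "robots":
                self.robots.append(a.get("content", ""))

# Feed it the raw HTML of any page (fetched with any HTTP client).
finder = RobotsMetaFinder()
finder.feed('<html><head><meta name="robots" content="noindex,nofollow">'
            '</head><body></body></html>')
print(finder.robots)  # ['noindex,nofollow']
```

If the list comes back containing "noindex" on pages the CMS renders, that would explain the symptom regardless of what robots.txt says.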
Thanks for the looks and the help.
-
I agree with Ryan: your backlinks look great, the website is well structured, and it has a great-looking design. No signs of you breaking any of Google's TOS.
-
Thank you EGOL. I consider myself a student of SEO, definitely not a master.
The Q&A forums here have given me a great opportunity to learn about SEO. I have been spending all my time this past month un-learning all the bad information I gathered from the internet, and learning SEO the right way.
The Matt Cutts videos, SEOmoz webinars, blogs and pro Q&A have all been immensely helpful. Those resources, along with the replies you and other mozzers offer have provided me an incredibly rich learning experience.
Thanks for noticing.
-
Thanks, Ryan. I noticed this as well but figured I'd leave it alone since Webmaster Tools was telling me "all is well" with the robots.txt file. I'll add the code like you suggest and we'll see what happens.
Thanks again.
-
Ryan, you have been giving some really valuable answers. Nice. Keep up the great work!
-
Fix your robots.txt. It is not set up as you suggested.
http://www.miwaterstewardship.org/robots.txt
Your file currently reads:

```
User-agent: *
Sitemap: http://www.miwaterstewardship.org/SiteMap.aspx
```

I am unsure of what action a search engine would take upon encountering your code, but based on your post it seems that it blocks all agents. The correct code would be:

```
User-agent: *
Disallow:
Sitemap: http://www.miwaterstewardship.org/SiteMap.aspx
```

For more information about robots.txt you can take a look at: [http://www.robotstxt.org/robotstxt.html](http://www.robotstxt.org/robotstxt.html)
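As a sanity check, the corrected directives can be validated with Python's built-in robots.txt parser; a quick sketch, stdlib only:

```python
from urllib.robotparser import RobotFileParser

# The corrected robots.txt, as a list of lines.
corrected = [
    "User-agent: *",
    "Disallow:",
    "Sitemap: http://www.miwaterstewardship.org/SiteMap.aspx",
]

rp = RobotFileParser()
rp.parse(corrected)
print(rp.can_fetch("Googlebot", "http://www.miwaterstewardship.org/"))  # True
```

An empty Disallow line means "disallow nothing", so every agent (Googlebot included) is allowed to fetch every URL.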