Why isn't our new site being indexed?
-
We built a new website for a client recently.
Site: https://www.woofadvisor.com/
It's been live for three weeks. Robots.txt isn't blocking Googlebot or anything.
Submitted a sitemap.xml through Webmasters but we still aren't being indexed.
Anyone have any ideas?
-
Hey Dirk,
No worries - I visited the question for the first time today and considered it unanswered, as the site is perfectly accessible from California. I like to confirm what Search Console says, as that is straight from the horse's mouth.
Thanks for confirming that the IP redirect has changed - that is interesting. It is impossible for us to know exactly when that happened; I would have expected things to get indexed quite fast once it changed.
With the extra info I'm happy to mark this as answered, but would be good to hear from the OP.
Best,
-Tom
-
Hi Tom,
I am not questioning your knowledge - I re-ran the test on webpagetest.org and I see that the site is now accessible from a Californian IP (http://www.webpagetest.org/result/150911_6V_14J6/), which wasn't the case a few days ago (check the result at http://www.webpagetest.org/result/150907_G1_TE9/) - so there has been a change in the IP redirection. I also checked from Belgium - the site is now accessible from here as well.
I also notice that if I now do a site:woofadvisor.com search in Google, I get 19 pages indexed rather than the 2 I got a few days ago.
Apparently removing the IP redirection solved (or is solving) the indexation issue - but still this question remains marked as "unanswered".
rgds,
Dirk
-
I am in California right now and can access the website just fine, which is why I didn't mark the question as answered - I don't think we have enough info yet. I think 'Fetch as Googlebot' will help us resolve that.
You are correct that if there is no robots.txt then Google assumes the site is open, but my concern is that the developers on the team say that there IS a robots.txt file there and it has some contents. I have, on at least two occasions, come across a team that was serving a robots.txt that was only accessible to search bots (once they were doing that 'for security', another time because they misunderstood how it worked). That is why I suggested that Search Console be checked to see what shows up for robots.txt.
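Google's documented handling of a robots.txt fetch can be sketched as a small decision function. This is an illustrative sketch, not Google's actual code: a 404 means the whole site is crawlable, a 5xx is temporarily treated as fully disallowed, and only an actual Disallow rule blocks crawling (user-agent grouping is deliberately simplified here).

```python
def crawl_allowed(status_code: int, body: str, path: str) -> bool:
    """Return True if a crawler may fetch `path`, given the robots.txt response."""
    if status_code == 404:
        return True   # no robots.txt -> Google treats the whole site as crawlable
    if status_code >= 500:
        return False  # server error -> Google temporarily treats the site as disallowed
    # Minimal parse: honour only a blanket "Disallow: /" rule
    disallow_all = False
    for line in body.splitlines():
        line = line.split("#")[0].strip()
        if line.lower() == "disallow: /":
            disallow_all = True
    return not disallow_all
```

This is why a 404ing robots.txt cannot by itself explain a non-indexed site - but a robots.txt served only to search bots (as described above) could, which is exactly what Search Console's robots.txt report would reveal.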
-
To be very honest - I am quite surprised that this question is still marked as "Unanswered".
The owners of the site decided to block access for all non-UK/Ireland addresses. The main Googlebot crawls from a Californian IP address. Hence, the only page Googlebot can see is https://www.woofadvisor.com/holding-page.php, which has no links to the other parts of the site (this is confirmed by the webpagetest.org test with a Californian IP address).
As Google indicates, Googlebot can also use other IP addresses to crawl the site ("With geo-distributed crawling, Googlebot can now use IP addresses that appear to come from other countries, such as Australia.") - however, it's very likely that these bots do not crawl with the same frequency/depth as the main bot (the article clearly indicates "Google might not crawl, index, or rank all of your locale-adaptive content. This is because the default IP addresses of the Googlebot crawler appear to be based in the USA").
This can easily be solved by adding a link on /holding-page.php to the Irish/UK version, which contains the full content (accessible from all IP addresses) and can be followed to index the full site (so: only put the IP detection on the homepage, not on the other pages).
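The suggested fix can be sketched as routing logic - this is a minimal illustration of the idea, not the site's actual code, with the geo-IP lookup stubbed out as a country-code parameter. Only the homepage is geo-gated, so a crawler arriving from a US IP still gets a 200 on every deep page and can follow a link out of the holding page:

```python
# Hypothetical sketch: geo-gate only the homepage, leave deep pages open.
HOLDING_PAGE = "/holding-page.php"
TARGET_COUNTRIES = {"GB", "IE"}  # UK / Ireland

def route(path: str, visitor_country: str):
    """Return (status, location) for a request, applying the 302 only on '/'."""
    if path == "/" and visitor_country not in TARGET_COUNTRIES:
        return 302, HOLDING_PAGE  # non-UK/IE visitors see the holding page
    return 200, path              # deep pages stay crawlable from any IP

# Googlebot crawling from a US IP:
# route("/", "US")                -> (302, "/holding-page.php")
# route("/travel-tips.php", "US") -> (200, "/travel-tips.php")
```

Under the original setup, the second call would also have returned the holding page, which is why only one page could be indexed.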
The fact that the robots.txt gives a 404 is not relevant: if no robots.txt is found, Google assumes that the site can be indexed (check this link) - quote: "You only need a robots.txt file if your site includes content that you don't want Google or other search engines to index."
-
I'd be concerned about the 404ing robots.txt file.
You should check in Search Console:
-
What does Search Console show in the robots.txt section?
-
What happens if you fetch a page that is not indexed (e.g. https://www.woofadvisor.com/travel-tips.php) with the 'Fetch as Googlebot' tool?
I checked and do not see any obvious indicators of why the pages are not being indexed - we need more info.
-
-
I just did a quick check on your site with webpagetest.org using a Californian IP address (http://www.webpagetest.org/result/150907_G1_TE9/) - as you can see there, these IPs also go to the holding page, which is logically the only page that can be indexed, as it's the only one Googlebot can access.
rgds,
Dirk
-
Hi,
I can't access your site from Belgium - I guess you are redirecting your users based on IP address. If, like me, they are not located in your target country, they are 302-redirected to https://www.woofadvisor.com/holding-page.php, and that is the only page that gets indexed.
Not sure which country you are actually targeting - but could it be that you're accidentally redirecting Googlebot as well?
Check also this article from Google on IP-based targeting.
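One hedged way to check for this from the server side is to look at what status codes Googlebot's requests actually received in the access log. The snippet below assumes a standard combined log format and illustrative log lines (the IPs and paths are made up for the example); it filters for Googlebot's user agent and reports any 30x responses:

```python
import re

# Match the request path and status code in a combined-format log line.
LOG_LINE = re.compile(r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def googlebot_redirects(log_lines):
    """Yield (path, status) for Googlebot requests that were 30x-redirected."""
    for line in log_lines:
        if "Googlebot" not in line:
            continue
        m = LOG_LINE.search(line)
        if m and m.group("status").startswith("3"):
            yield m.group("path"), int(m.group("status"))

sample = [
    '66.249.66.1 - - [07/Sep/2015] "GET /travel-tips.php HTTP/1.1" 302 0 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '81.2.69.1 - - [07/Sep/2015] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]
# list(googlebot_redirects(sample)) -> [("/travel-tips.php", 302)]
```

If deep pages show up here with 302s to the holding page, the geo-redirect is catching Googlebot too.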
rgds
Dirk
-
Strangely, there are two pages indexed on Google Search.
The homepage and one other
-
I noticed the robots.txt file returned a 404 and asked the developers to take a look and they said the content of it is fine.
Sometimes developers say this stuff. If you are getting a 404, demonstrate it to them.
-
I noticed the robots.txt file returned a 404 and asked the developers to take a look and they said the content of it is fine.
But yes, I'll double-check the WordPress settings now.
-
Your sitemap all looked good, but when I tried to view the robots.txt file in your root, it returned a 404, so I was unable to determine if there was an issue. Could any of the settings in your WordPress installation also be causing it to trip up?