Why isn't our new site being indexed?
-
We built a new website for a client recently.
Site: https://www.woofadvisor.com/
It's been live for three weeks. Robots.txt isn't blocking Googlebot or anything.
We submitted a sitemap.xml through Google Webmaster Tools, but we still aren't being indexed.
Anyone have any ideas?
-
Hey Dirk,
No worries - I visited the question for the first time today and considered it unanswered because the site is perfectly accessible from California. I like to confirm what Search Console says, as that is straight from the horse's mouth.
Thanks for confirming that the IP redirect has changed - that is interesting. It is impossible for us to know exactly when that happened; I would have expected things to get indexed quite quickly once it changed.
With the extra info I'm happy to mark this as answered, but it would be good to hear from the OP.
Best,
-Tom
-
Hi Tom,
I am not questioning your knowledge - I re-ran the test on webpagetest.org and see that the site is now accessible from a Californian IP (http://www.webpagetest.org/result/150911_6V_14J6/), which wasn't the case a few days ago (compare the result at http://www.webpagetest.org/result/150907_G1_TE9/), so the IP redirection has changed. I also checked from Belgium - the site is now accessible from here as well.
I also notice that if I now do a site:woofadvisor.com search in Google I get 19 pages indexed rather than the 2 I got a few days ago.
Apparently removing the IP redirection solved (or is solving) the indexation issue - yet this question remains marked as "unanswered".
rgds,
Dirk
-
I am in California right now and can access the website just fine, which is why I didn't mark the question as answered - I don't think we have enough info yet. I think 'Fetch as Googlebot' will help us resolve that.
You are correct that if there is no robots.txt then Google assumes the site is open to crawling, but my concern is that the developers on the team say there IS a robots.txt file there and that it has some contents. I have, on at least two occasions, come across a team serving a robots.txt that was only accessible to search bots (once they were doing that 'for security', another time because they misunderstood how it worked). That is why I suggested checking Search Console to see what shows up for robots.txt.
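If you want a quick sanity check for that scenario, here is a rough Python sketch (stdlib only; the URL is the site from this thread) that fetches robots.txt with a normal browser User-Agent and with Googlebot's User-Agent and compares the responses. Bear in mind some setups key off IP rather than User-Agent, so this is only a partial test:

```python
# Rough check for a robots.txt that is served differently to search bots.
# Only tests User-Agent-based differences; IP-based serving won't show up here.
import urllib.error
import urllib.request

URL = "https://www.woofadvisor.com/robots.txt"
AGENTS = {
    "browser":   "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

for name, ua in AGENTS.items():
    request = urllib.request.Request(URL, headers={"User-Agent": ua})
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            body = response.read()
            print(f"{name}: HTTP {response.status}, {len(body)} bytes")
    except urllib.error.HTTPError as err:
        print(f"{name}: HTTP {err.code}")
```

If the two responses differ, that's the conditional serving I described; if both 404, the developers' claim doesn't hold up.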
-
To be very honest - I am quite surprised that this question is still marked as "Unanswered".
The owners of the site decided to block access for all non-UK/Ireland addresses. The main Googlebot uses a Californian IP address to visit the site. Hence, the only page Googlebot can see is https://www.woofadvisor.com/holding-page.php, which has no links to the other parts of the site (this is confirmed by the webpagetest.org test with a Californian IP address).
As Google indicates, Googlebot can also use other IP addresses to crawl the site ("With geo-distributed crawling, Googlebot can now use IP addresses that appear to come from other countries, such as Australia."). However, it is very likely that these bots do not crawl with the same frequency/depth as the main bot (the article clearly states: "Google might not crawl, index, or rank all of your locale-adaptive content. This is because the default IP addresses of the Googlebot crawler appear to be based in the USA").
This can easily be solved by adding a link on /holding-page.php to the Irish/UK version containing the full content (accessible from all IP addresses), which Googlebot can then follow to index the full site. In other words, only put the IP detection on the homepage, not on the other pages.
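To illustrate, here is a minimal Python-style sketch of that logic - the country codes, page names and country lookup are my assumptions, as I obviously don't know how your redirect is actually implemented:

```python
# Minimal sketch: geo-gate only the homepage so Googlebot (which crawls mostly
# from US IPs) can still reach the deep pages via links from the holding page.
# Country codes, page names and the country lookup are illustrative assumptions.

HOLDING_PAGE = "/holding-page.php"
TARGET_COUNTRIES = {"GB", "IE"}  # UK and Ireland

def redirect_target(path: str, visitor_country: str) -> str | None:
    """Return a URL to 302-redirect to, or None to serve the page normally."""
    if path == "/" and visitor_country not in TARGET_COUNTRIES:
        return HOLDING_PAGE
    return None

if __name__ == "__main__":
    print(redirect_target("/", "US"))                 # -> /holding-page.php
    print(redirect_target("/travel-tips.php", "US"))  # -> None (full content served)
```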
The fact that the robots.txt gives a 404 is not relevant: if no robots.txt is found, Google assumes that the site can be indexed (check this link) - quote: "You only need a robots.txt file if your site includes content that you don't want Google or other search engines to index."
-
I'd be concerned about the 404ing robots.txt file.
You should check in Search Console:
- What does Search Console show in the robots.txt section?
- What happens if you fetch a page that is not indexed (e.g. https://www.woofadvisor.com/travel-tips.php) with the 'Fetch as Googlebot' tool?
I checked and do not see any obvious indicators of why the pages are not being indexed - we need more info.
-
I just did a quick check on your site with webpagetest.org using a California IP address (http://www.webpagetest.org/result/150907_G1_TE9/) - as you can see, these IPs also go to the holding page, which is logically the only page that can be indexed, as it's the only one Googlebot can access.
rgds,
Dirk
-
Hi,
I can't access your site from Belgium - I guess you are redirecting your users based on IP address. If, like me, they are not located in your target country, they are 302-redirected to https://www.woofadvisor.com/holding-page.php, so only that one page is available to be indexed.
Not sure which country you are actually targeting - but could it be that you're accidentally redirecting Googlebot as well?
Also check this article from Google on IP-based targeting.
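If you want to verify the redirect yourself, a quick sketch like this shows the status code and Location header for a deep page (it assumes the third-party requests package, and it tests from wherever you run it, not from a specific country):

```python
# Request a deep page without following redirects and inspect the response.
# A 302 with Location pointing at /holding-page.php means that visitor is
# being geo-blocked; US-based Googlebot is most likely treated the same way.
import requests  # third-party: pip install requests

response = requests.get(
    "https://www.woofadvisor.com/travel-tips.php",
    allow_redirects=False,
    timeout=10,
)
print(response.status_code)              # e.g. 302 for geo-blocked visitors
print(response.headers.get("Location"))  # e.g. /holding-page.php
```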
rgds
Dirk
-
Strangely, there are two pages indexed in Google Search: the homepage and one other.
-
I noticed the robots.txt file returned a 404 and asked the developers to take a look and they said the content of it is fine.
Sometimes developers say this stuff. If you are getting a 404, demonstrate it to them.
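For example, something as simple as this stdlib-only Python sketch prints exactly what a plain request to robots.txt returns - handy evidence to put in front of them:

```python
# Print the HTTP status that a plain request to robots.txt gets back.
import urllib.error
import urllib.request

url = "https://www.woofadvisor.com/robots.txt"
try:
    with urllib.request.urlopen(url, timeout=10) as response:
        print(response.status, response.read(200).decode(errors="replace"))
except urllib.error.HTTPError as err:
    print(f"robots.txt returned HTTP {err.code}")
```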
-
I noticed the robots.txt file returned a 404 and asked the developers to take a look and they said the content of it is fine.
But yes, I'll double-check the WordPress settings now.
-
Your sitemap looked good, but when I tried to view the robots.txt file in your root it returned a 404, so I was unable to determine whether there was an issue there. Could any of the settings in your WordPress installation be causing it to trip up?