WMT - Googlebot can't access your site
-
Hi
On our new website, which is just a few weeks old, I get the following message when logging into Webmaster Tools:
Googlebot can't access your site - The overall error rate for DNS queries is 50%.
What do I need to do to resolve this? I have never had this problem before with any of my sites. The domains are with Fasthosts (UK) and hosting is with DreamHost. What is the recommended course of action? Google mentions contacting your host (in my case DreamHost), but what do you need to ask them in a support ticket? When doing a fetch in WMT, the fetch status is a success.
-
Yes, it has to be a configuration entry in the DNS records held by your website hosting company.
-
Your hosting company: DreamHost. (If a WMT fetch is OK, that means the DNS settings are OK and something might be wrong at the hosting level.)
-
Which one should I contact?
The domain company - Fasthosts
or
The hosting company - DreamHost
-
Yes, that's right. They can check what's wrong...
-
Should I raise a ticket with the host, or with the company that the domain is registered and managed with?
-
Google or a particular server could have the DNS cached, or there may be multiple DNS servers in play, which would explain why it appears to work with no issues some of the time.
If you have another domain with no similar issues, it could be a minor mis-configuration within the DNS entry for that domain only. Raise a ticket with your host, include a screenshot of the issue from Google, and their techs should know where to look.
I hope this helps.
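If you want to sanity-check the DNS yourself before (or while) raising the ticket, here is a minimal sketch that queries the domain's A record against a few public resolvers and reports which ones fail. It assumes Python 3 with the dnspython package installed, and example.com is just a placeholder for your actual domain.

```python
# Minimal sketch: query the A record from several public resolvers and
# report any failures. Assumes `pip install dnspython`; replace
# "example.com" with the domain reported in WMT.
import dns.exception
import dns.resolver

DOMAIN = "example.com"  # placeholder for your domain
RESOLVERS = {
    "Google": "8.8.8.8",
    "Cloudflare": "1.1.1.1",
    "Quad9": "9.9.9.9",
}

for name, ip in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ip]
    resolver.lifetime = 5  # seconds before giving up
    try:
        # On dnspython < 2.0 this method is called query() instead of resolve()
        answer = resolver.resolve(DOMAIN, "A")
        ips = ", ".join(record.to_text() for record in answer)
        print(f"{name} ({ip}): OK -> {ips}")
    except dns.exception.DNSException as exc:
        # Timeouts, SERVFAIL, NXDOMAIN and the like all land here
        print(f"{name} ({ip}): FAILED -> {exc!r}")
```

If one resolver consistently fails or times out while the others answer fine, that points at a flaky authoritative nameserver or zone record, which is exactly the detail worth quoting in the DreamHost ticket.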
-
Hi
I have other sites hosted with DreamHost and there is no mention of this message in WMT for them.
Could it be something within the actual DNS of the domain itself? It has domain privacy set on the domain, but surely it wouldn't be that causing it.
It's also odd that when I do a fetch in Google WMT it always comes back with a success message.
-
Do you have any other sites hosted with the same DNS? It looks like a DNS issue for sure. If I were you, I would move my hosting. Maybe I am being paranoid, but I don't like DNS issues. It's not an SEO issue; it's a site uptime issue. If bots can't access my site 50% of the time, could it be that a lot of my users can't access my website either? 50% is a very big number. If possible, I would transfer my domain name to a different registrar and a different hosting company, just to be safe.
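If you want to put a rough number on that yourself before deciding to move, a quick sketch like the one below (standard-library Python only; example.com is a placeholder for the affected domain) repeats the lookup and reports the failure rate. Bear in mind the operating system's DNS cache can mask intermittent failures, so treat the result as indicative rather than definitive.

```python
# Rough availability check: repeat a DNS lookup and report the error rate.
# Standard library only; replace "example.com" with the affected domain.
import socket
import time

DOMAIN = "example.com"  # placeholder for your domain
ATTEMPTS = 50
PAUSE_SECONDS = 1.0

failures = 0
for attempt in range(1, ATTEMPTS + 1):
    try:
        socket.getaddrinfo(DOMAIN, 80)
    except socket.gaierror as exc:
        failures += 1
        print(f"attempt {attempt}: lookup failed ({exc})")
    time.sleep(PAUSE_SECONDS)

rate = 100.0 * failures / ATTEMPTS
print(f"{failures}/{ATTEMPTS} lookups failed ({rate:.0f}% error rate)")
```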
Related Questions
-
Why don't sites using Drupal have keywords?
Why don't the vast majority of sites using Drupal list keywords in the head section? Is there another convention used in Drupal that serves the same purpose for SEO? I noticed most of the Drupal info pages about keywords seem to drop off around 2010.
Technical SEO | fxarechiga0
-
Why Can't Googlebot Fetch Its Own Map on Our Site?
I created a custom map using Google Maps creator and embedded it on our site. However, when I ran fetch and render through Search Console, it said the map was blocked by our robots.txt file. I read in the Search Console Help section that: "For resources blocked by robots.txt files that you don't own, reach out to the resource site owners and ask them to unblock those resources to Googlebot." I did not set up our robots.txt file; however, I can't imagine it would be set up to block Google from crawling a map. I will look into that, but before I go messing with it (since I'm not familiar with it), does Google automatically block their maps from their own Googlebot? Has anyone encountered this before? Here is what the robots.txt file says in Search Console (there's a quick way to test these rules sketched after this question):
User-agent: *
Allow: /maps/api/js?
Allow: /maps/api/js/DirectionsService.Route
Allow: /maps/api/js/DistanceMatrixService.GetDistanceMatrix
Allow: /maps/api/js/ElevationService.GetElevationForLine
Allow: /maps/api/js/GeocodeService.Search
Allow: /maps/api/js/KmlOverlayService.GetFeature
Allow: /maps/api/js/KmlOverlayService.GetOverlays
Allow: /maps/api/js/LayersService.GetFeature
Disallow: /
Any assistance would be greatly appreciated. Thanks, Ruben
Technical SEO | KempRugeLawGroup1
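For the robots.txt question above, Python's standard-library robot parser is a quick way to confirm what those quoted rules actually allow. The sketch below feeds in the rules exactly as quoted and tests two URLs; both test URLs are placeholders, so substitute the embedded map resource that Search Console reported as blocked.

```python
# Quick check of the robots.txt rules quoted above using the standard
# library parser. The test URLs are placeholders; substitute the map
# resource that Search Console reported as blocked.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: *
Allow: /maps/api/js?
Allow: /maps/api/js/DirectionsService.Route
Allow: /maps/api/js/DistanceMatrixService.GetDistanceMatrix
Allow: /maps/api/js/ElevationService.GetElevationForLine
Allow: /maps/api/js/GeocodeService.Search
Allow: /maps/api/js/KmlOverlayService.GetFeature
Allow: /maps/api/js/KmlOverlayService.GetOverlays
Allow: /maps/api/js/LayersService.GetFeature
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Placeholder URLs: the JS API path is explicitly allowed, while anything
# else falls under "Disallow: /".
print(parser.can_fetch("Googlebot", "https://www.google.com/maps/api/js?v=3"))
print(parser.can_fetch("Googlebot", "https://www.google.com/maps/d/embed?mid=EXAMPLE"))
```

Since anything outside the explicitly allowed /maps/api/js paths falls under Disallow: /, a result like this is consistent with the question's own suspicion that the quoted rules belong to the map resource's domain rather than to the site's robots.txt, in which case it isn't something you can change on your end.
-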
Help! How to Remove Error Code 901: DNS Errors (But to a URL that doesn't exist!)
I have 2 urgent errors saying there are 2 x error code 909s detected. These don't link to any page, but I can tell there is a mistake somewhere - I just don't know what needs changing:
http://www.justkeyrings.co.ukhttp/www.justkeyrings.co.uk/printed-promotional-keyrings
http://www.justkeyrings.co.ukhttp/www.justkeyrings.co.uk/blank-unassembled-keyrings
Could someone help please? screen-shot-2015-08-11-at-13.18.17.png?t=1439292942
Technical SEO | FullSteamBusiness0
-
I'm thinking I might need to canonicalize back to the home site and combine some content, what do you think?
I have a site that is mostly just podcasts with transcripts, and it has both audio and video versions of the podcasts. I also have a blog that I contribute to that links back to the video/transcript page of these podcasts. So this blog I contribute to has the exact same content (the podcast; both audio and video but no transcript) and then an audio and video version of this podcast. Each post of the podcast has different content on it that is technically unique but I'm not sure it's unique enough. So my question is, should I canonicalize the posts on this blog back to the original video/transcript page of the podcast and then combine the video with the audio posts. Thanks!
Technical SEO | ThridHour0
-
404s in WMT are old pages, and the referrer links no longer link to them.
Within the last 6 days, Google Webmaster Tools has shown a jump in 404s - around 7,000. The 404 pages are from the old browse section of a previous platform; we no longer use them or link to them. I don't know how Google is finding these pages. When I check the referrer links, they are either 404s themselves, or the page exists but the link to the 404 in question is not on the page or in the source code. The sitemap is also often referenced as a referrer, but these links are definitely not in our sitemap and haven't been for some time. So it looks to me like the referrer data is outdated. Is that possible? But somehow these pages are still being found. Any ideas on how I can diagnose the problem and find out how Google is finding them? (There's a quick sitemap check sketched after this question.)
Technical SEO | rock220
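For the 404 question above, one quick way to rule the sitemap in or out as a live referrer is to fetch it and check whether any of the reported 404 URLs actually appear in it. The sketch below is standard-library Python; the sitemap location and the 404 URLs are placeholders, and it assumes a flat sitemap.xml rather than a sitemap index.

```python
# Check whether URLs that WMT reports as 404s still appear in the live
# sitemap. Standard library only; the sitemap URL and the reported 404
# URLs below are placeholders. Assumes a flat sitemap, not a sitemap index.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap.xml"   # placeholder
REPORTED_404S = {
    "https://www.example.com/old-browse/page-1",      # placeholder URLs
    "https://www.example.com/old-browse/page-2",
}

with urllib.request.urlopen(SITEMAP_URL, timeout=10) as response:
    tree = ET.parse(response)

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
sitemap_urls = {loc.text.strip() for loc in tree.findall(".//sm:loc", NS) if loc.text}

overlap = REPORTED_404S & sitemap_urls
if overlap:
    print("Still referenced in the current sitemap:")
    for url in sorted(overlap):
        print("  ", url)
else:
    print("None of the reported 404s appear in the current sitemap;")
    print("the referrer data Google is showing is most likely outdated.")
```
-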
Creating in-text links with 'target=_blank' - helping or hurting SEO?!
Good Morning Mozzers, I have a question regarding a new linking strategy I'm trying to implement at my organization. We publish 'digital news magazines' that oftentimes have in-text links pointing to external sites. More recently, the editorial department and I (SEO) conferred on some ways to reduce our bounce rate and increase time on page. One of the suggestions I offered was to add the 'target=_blank' attribute to all the links so that site visitors don't necessarily have to leave the site in order to view the link. It has, however, come to my attention that this can have some very negative effects on my SEO program, most notably (fake or inaccurate) time on page. Is this an advisable way to create in-text links? Are there any other negative effects I can expect from implementing such a strategy?
Technical SEO | NiallSmith0
-
Different version of site for "users" who don't accept cookies considered cloaking?
Hi, I've got a client with lots of content that is hidden behind a registration form - if you don't fill it out, you cannot proceed to the content. As a result it is not being indexed. No surprises there. They are only doing this because they feel it is the best way of capturing email addresses, not because they need to "protect" the content. Currently, visitors arriving on the site are redirected to the form if they have not previously had a "this user is registered" cookie set. If the cookie is set, they aren't redirected and get to see the content. I am considering changing this logic to only redirect users to the form if they accept cookies but haven't got the "this user is registered" cookie (there's a sketch of that rule after this question). The idea is that search engines would then not be redirected and would index the full site, not the dead-end form. From the client's perspective this would mean only very few non-registered visitors would "avoid" the form, yet search engines are arguably not being treated as a special case. So my question is: would this be considered cloaking or put the site at risk in any way? (They would prefer not to go down the First Click Free route as this would lower their email sign-ups.) Thank you!
Technical SEO | TimBarlow0
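For the registration-form question above, the proposed change boils down to a small decision rule: only redirect visitors who have demonstrably accepted cookies (for example, a test cookie set on an earlier response came back) but who lack the registered cookie. A framework-agnostic sketch of that rule, with hypothetical cookie names, might look like the code below; it only illustrates the mechanics, not whether the approach counts as cloaking, which is the open question in the thread.

```python
# Sketch of the proposed redirect rule: only send visitors to the
# registration form if they demonstrably accept cookies (a test cookie
# set on an earlier response came back) but are not yet registered.
# Both cookie names are hypothetical placeholders.
TEST_COOKIE = "cookies_ok"        # set on the first response we serve
REGISTERED_COOKIE = "registered"  # set after the form is completed


def should_redirect_to_form(request_cookies: dict) -> bool:
    accepts_cookies = TEST_COOKIE in request_cookies
    is_registered = request_cookies.get(REGISTERED_COOKIE) == "1"

    if not accepts_cookies:
        # Search engine bots and cookie-refusing visitors land here:
        # they see the full content instead of the form.
        return False
    return not is_registered


# The three visitor types described in the question:
print(should_redirect_to_form({}))                                      # bot / no cookies
print(should_redirect_to_form({"cookies_ok": "1"}))                     # accepts cookies, unregistered
print(should_redirect_to_form({"cookies_ok": "1", "registered": "1"}))  # registered visitor
```
-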
From your perspective, what's wrong with this site such that it has a Panda Penalty?
www.duhaime.org
For more background, please see:
http://www.seomoz.org/q/advice-regarding-panda
http://www.seomoz.org/q/when-panda-s-attack
(hoping the third time's the charm here)
Technical SEO | sprynewmedia0