WordPress error
-
On Google Webmaster Tools I'm getting a Severe Health Warning regarding our robots.txt file, which reads:
User-agent: *
Crawl-delay: 20
User-agent: 008
Disallow: /
I'm wondering how I can fix this and stop it happening again.
The site was hacked about 4 months ago but I thought we'd managed to clear things up.
Colin
-
This will be my first post on SEOmoz so bear with me
The way I understand it is that robots read the robots.txt file from top to bottom, and once they find a rule that applies to them they stop reading and begin crawling. So basically, a robots.txt written as:
User-agent: *
Disallow:
Crawl-delay: 20
User-agent: 008
Disallow: /
would not have the desired result as user-agent 008 would first read the top guideline:
User-agent: *
Disallow:
Crawl-delay: 20
and then begin crawling your site, since it is first told that all user-agents are disallowed from crawling nothing, i.e. everything is allowed.
The corrected way to write this would be:
User-agent: 008
Disallow: /
User-agent: *
Disallow:
Crawl-delay: 20
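If you want to double-check a corrected file before uploading it, Python's built-in robots.txt parser gives a rough local sanity check (it won't match Google's parser in every edge case, so the GWT tester is still the authoritative word). A minimal sketch, assuming the corrected rules above and the domain mentioned later in this thread:
```python
from urllib import robotparser

# The corrected rules from above (paste your real file contents here instead).
ROBOTS_TXT = """\
User-agent: 008
Disallow: /

User-agent: *
Disallow:
Crawl-delay: 20
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The "008" crawler should be blocked from everything...
print(parser.can_fetch("008", "http://nile-cruises-4u.co.uk/"))        # False
# ...while Googlebot (and any other agent) should still be allowed.
print(parser.can_fetch("Googlebot", "http://nile-cruises-4u.co.uk/"))  # True
# Crawl delay picked up from the catch-all group (available in Python 3.6+).
print(parser.crawl_delay("Googlebot"))                                 # 20
```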
-
Hi Peter,
I've tested the robots.txt file in Webmaster Tools and it now seems to be working as it should, and Google is seeing the same file as I have on the server.
I'm afraid this side of things isn't my area of expertise, so it's been a bit of a minefield.
I've taken out a subscription with sucuri.net and taken various other steps that hopefully will help with security. But who knows?
Thanks,
Colin
-
Google is seeing the same Robots.txt content (in GWT) that you show in the physical file, right? I just want to make sure that, when the site was hacked, no changes were made that are showing different versions of files to Google. It sounds like that's not the case here, but it definitely can happen.
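If it helps, one rough way to spot user-agent-based cloaking is to request robots.txt once as a normal browser and once pretending to be Googlebot, then compare. This is only a first pass; hacks that cloak by IP address won't show up this way, so "Fetch as Google" in GWT remains the more reliable check. A quick sketch (the domain is assumed from this thread):
```python
import urllib.request

URL = "http://nile-cruises-4u.co.uk/robots.txt"  # domain assumed from this thread

def fetch(user_agent):
    # Fetch robots.txt while presenting the given User-Agent header.
    req = urllib.request.Request(URL, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")

as_browser = fetch("Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36")
as_googlebot = fetch("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")

print("Same content" if as_browser == as_googlebot else "DIFFERENT content: investigate!")
```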
-
The blog isn't showing now, and my hosts say that the index.php file is missing from the directory, but I can see it.
Strange.
Have contacted them again to see what the problem can be.
Bit of a wasted Saturday!
-
Thanks Keith. Just contacting our hosts.
Nightmare!
-
Looks like a 403 permissions problem; that's a server-side error... Make sure you have the correct permissions set on the blog folder in IIS. Personally, I always host on Linux...
-
Mind you, the whole blog is now showing an error message and can't be viewed, so it looks like an afternoon of trial and error!
-
Thanks very much Keith. I've just edited the file as suggested.
I see the error but, as I am the web guy, I can't figure out how to get rid of it.
I think it might be a plugin that's causing it, so I'm going to disable them and re-enable them one at a time.
I've just PM'd you by the way.
Thanks for your help Keith.
Colin
-
Use this:
User-agent: *
Disallow: /blog/wp-admin/
Disallow: /blog/wp-includes/
Sitemap: http://nile-cruises-4u.co.uk/sitemap.xml
And FYI, you have the following error on your blog:
Warning: is_readable() [function.is-readable]: open_basedir restriction in effect. File(D:\home\nile-cruises-4u.co.uk\wwwroot\blog/wp-content/plugins/D:\home\nile-cruises-4u.co.uk\wwwroot\blog\wp-content\plugins\websitedefender-wordpress-security/languages/WSDWP_SECURITY-en_US.mo) is not within the allowed path(s): (D:\home\nile-cruises-4u.co.uk\wwwroot) in D:\home\nile-cruises-4u.co.uk\wwwroot\blog\wp-includes\l10n.php on line 339
Get your web guy to look at that, it appears at the top of every blog page for me...
Hope that helps,
Keith
-
Thanks Keith.
Only part of our site is WordPress-based. Would it be a problem to use the example you kindly suggested?
-
I gave you an example of a basic robots.txt file that I use on one of my WordPress sites above; I would suggest using that for now.
I would not bother messing around with crawl delay in robots.txt; as Peter said above, there are better ways to achieve this... Plus, I doubt you need it anyway.
Google caches the robots.txt info for about 24hrs normally in my experience... So it's possible the old cached version is still being used by Google.
-
Hi Guys,
Thanks so much for your help. As you say Troy, that's definitely not what I want.
I assumed when we were hacked (twice in 8 months) that it might have been a competitor, as we are in a very competitive niche. I might be very wrong there, but we have certainly lost our top ranking on Google.co.uk for our main key phrases and are now at about position 7 for the same key phrases after about 3 years at number 1.
So when I saw on Google Webmaster Tools yesterday that we had a severe health warning and that Googlebot was being prevented from crawling our site, I thought it might be the after-effects of the hack.
Today, even though I changed the robots.txt file yesterday, GWT is showing 1,000 pages with errors (285 Access Denied and 719 Not Found) and this message: Googlebot is blocked from http://nile-cruises-4u.co.uk/
I've just tested the robots.txt via GWT and now get this message:
Allowed
Detected as a directory; specific files may have different restrictions
So maybe the pages will be accessible to Googlebot shortly and the Access Denied message will disappear.
I've changed the robots.txt file to:
User-agent: *
Crawl-delay: 20
But should I change it to a better version? Sorry guys, I'm an online travel agent and not great on coding and really techie stuff. Although I'm learning pretty quickly about the bad stuff!
I seem to have a few problems getting this sorted and wonder if this is part of why our page position is dropping?
-
I would simplify your robots.txt to read something like:
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Sitemap: http://www.your-domain.com/sitemap.xml
-
That's odd: "008" appears to be the user agent for "80legs", a custom crawler platform. I'm seeing it in other Robots.txt files.
-
I'm not 100% sure what he's seeing, but when I plug his robots.txt into the robots analysis tool, I get this back:
Googlebot blocked by line 5: Disallow: /
Detected as a directory; specific files may have different restrictions
However, when I gave the top "User-agent: *" group an explicit "Disallow:" line, it seemed to fix the problem. Like, it didn't understand that the **Disallow: /** was meant only for the 008 user-agent?
-
Not honestly sure what User-agent "008" is, but that seems harmless. Why the crawl delay? There are better ways to handle that than Robots.txt, if a crawler is giving you trouble.
Was there a specific message/error in GWT?
-
I think, if you have a robots.txt that reads as you show above:
User-agent: *
Crawl-delay: 20
User-agent: 008
Disallow: /
That basically says, "Don't crawl my site at all." (The "Disallow: /" means: I'm not allowing anything to be crawled by any search engine that pays attention to robots.txt.)
So...I'm guessing that's not what you want?
(Bah..ignore. "User-agent". I'm a fool)
Actually, this seems to have solved your issue...make sure you explicitly tell all other User-agents that they are allowed:
User-agent: *
Disallow:
Crawl-delay: 20
User-agent: 008
Disallow: /
The extra "Disallow:" under User-agent: * says "I'm not going to disallow anything to most user-agents." Then the Disallow under user-agent 008 seems to only apply to them.