WordPress error
-
In Google Webmaster Tools I'm getting a Severe Health Warning regarding our robots.txt file, which reads:
User-agent: *
Crawl-delay: 20
User-agent: 008
Disallow: /
I'm wondering how I can fix this and stop it happening again.
The site was hacked about 4 months ago but I thought we'd managed to clear things up.
Colin
-
This will be my first post on SEOmoz, so bear with me.
The way I understand it is that robots read the robots.txt file from top to bottom, and once they find a rule that applies to them they stop reading and begin crawling. So basically the robots.txt written as:
User-agent: *
Disallow:
Crawl-delay: 20
User-agent: 008
Disallow: /
would not have the desired result as user-agent 008 would first read the top guideline:
User-agent: *
Disallow:
Crawl-delay: 20
and then begin crawling your site, as the first group tells all user-agents that no pages or directories are disallowed.
The corrected way to write this would be:
User-agent: 008
Disallow: /
User-agent: *
Disallow:
Crawl-delay: 20
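If you want to sanity-check that ordering and grouping yourself, Python's built-in robots.txt parser is a quick way to do it (a rough sketch only, using a made-up example.com domain; Google's own tester in Webmaster Tools is still the final word on how Googlebot reads it):

```python
import urllib.robotparser

# Corrected robots.txt from above, pasted in as a string
# (example.com is a made-up placeholder domain)
robots_txt = """
User-agent: 008
Disallow: /

User-agent: *
Disallow:
Crawl-delay: 20
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# 008 matches its own group, so it is blocked from everything
print(parser.can_fetch("008", "http://www.example.com/"))        # False
# Any other bot falls through to the * group, where nothing is disallowed
print(parser.can_fetch("Googlebot", "http://www.example.com/"))  # True
# The crawl delay only sits in the * group
print(parser.crawl_delay("Googlebot"))                           # 20
```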
-
Hi Peter,
I've tested the robots.txt file in Webmaster Tools and it now seems to be working as it should, and Google is seeing the same file as I have on the server.
I'm afraid this side of things isn't my area of expertise so it's been a bit of a minefield.
I've taken out a subscription with sucuri.net and taken various other steps that hopefully will help with security. But who knows?
Thanks,
Colin
-
Google is seeing the same Robots.txt content (in GWT) that you show in the physical file, right? I just want to make sure that, when the site was hacked, no changes were made that are showing different versions of files to Google. It sounds like that's not the case here, but it definitely can happen.
-
The blog isn't showing now and my hosts say that the index.php file is missing from the directory, but I can see it.
Strange.
Have contacted them again to see what the problem can be.
Bit of a wasted Saturday!
-
Thanks Keith. Just contacting our hosts.
Nightmare!
-
Looks like a 403 permissions problem, which is a server-side error... Make sure you have the correct permissions set on the blog folder in IIS. Personally, I always host on Linux...
-
Mind you, the whole blog is now showing an error message and can't be viewed, so it looks like an afternoon of trial and error!
-
Thanks very much Keith. I've just edited the file as suggested.
I see the error but, as I am the web guy, I can't figure out how to get rid of it.
I think it might be a plugin that's causing it, so I'm going to disable them and re-enable them one at a time.
I've just PM'd you by the way.
Thanks for your help Keith.
Colin
-
Use this:
User-agent: *
Disallow: /blog/wp-admin/
Disallow: /blog/wp-includes/
Sitemap: http://nile-cruises-4u.co.uk/sitemap.xml
Also, FYI, you have the following error on your blog:
Warning: is_readable() [function.is-readable]: open_basedir restriction in effect. File(D:\home\nile-cruises-4u.co.uk\wwwroot\blog/wp-content/plugins/D:\home\nile-cruises-4u.co.uk\wwwroot\blog\wp-content\plugins\websitedefender-wordpress-security/languages/WSDWP_SECURITY-en_US.mo) is not within the allowed path(s): (D:\home\nile-cruises-4u.co.uk\wwwroot) in D:\home\nile-cruises-4u.co.uk\wwwroot\blog\wp-includes\l10n.php on line 339
Get your web guy to look at that, it appears at the top of every blog page for me...
Hope that helps,
Keith
-
Thanks Keith.
Only part of our site is WP-based. Would it be a problem to use the example you kindly suggested?
-
I gave you an example of a basic robots.txt file that I use on one of my WordPress sites above; I would suggest using that for now.
I would not bother messing around with crawl delay in robots.txt; as Peter said above, there are better ways to achieve this... Plus I doubt you need it anyway.
Google caches the robots.txt info for about 24hrs normally in my experience... So it's possible the old cached version is still being used by Google.
-
Hi Guys,
Thanks so much for your help. As you say Troy, that's definitely not what I want.
I assumed when we were hacked (twice in 8 months) that it might have been a competitor, as we are in a very competitive niche. I might be very wrong there, but we have certainly lost our top ranking on Google.co.uk for our main key phrases and are now at about position 7 for the same key phrases after about 3 years at number 1.
So when I saw on Google Webmaster Tools yesterday that we had a severe health warning and that Googlebot was being prevented from crawling our site, I thought it might be the after-effects of the hack.
Today, even though I changed the robots.txt file yesterday, GWT is showing 1000 pages with errors, 285 Access Denied and 719 Not Found, and this message: Googlebot is blocked from http://nile-cruises-4u.co.uk/
I've just tested the robots.txt via GWT and now get this message:
Allowed
Detected as a directory; specific files may have different restrictions
So maybe the pages will be accessible to Googlebot shortly and the Access Denied message will disappear. I've changed the robots.txt file to:
User-agent: *
Crawl-delay: 20
But should I change it to a better version? Sorry guys, I'm an online travel agent and not great on coding and really techie stuff. Although I'm learning pretty quickly about the bad stuff!
I seem to have a few problems getting this sorted and wonder if this is part of why our page position is dropping?
-
I would simplify your robots.txt to read something like:
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Sitemap: http://www.your-domain.com/sitemap.xml
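Once it's uploaded, you can also point a quick Python check at the live file to confirm Googlebot can reach your normal pages but not the excluded folders (a sketch only; the domain below is just the placeholder from the example above, so swap in your own):

```python
import urllib.robotparser

# Placeholder domain from the example above; swap in the real one
rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://www.your-domain.com/robots.txt")
rp.read()  # fetches and parses the live file

# Normal pages should come back True, the excluded folders False
print(rp.can_fetch("Googlebot", "http://www.your-domain.com/"))
print(rp.can_fetch("Googlebot", "http://www.your-domain.com/wp-admin/"))
```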
-
That's odd: "008" appears to be the user agent for "80legs", a custom crawler platform. I'm seeing it in other Robots.txt files.
-
I'm not 100% sure what he's seeing, but when I plug his robots.txt into the robots analysis tool, I get this back:
Googlebot blocked by line 5: Disallow: /
Detected as a directory; specific files may have different restrictions
However, when I gave the top "User-agent: *" the "Disallow:" it seemed to fix the problem. Like, it didn't understand that the "Disallow: /" was meant only for the 008 user-agent?
-
Not honestly sure what User-agent "008" is, but that seems harmless. Why the crawl delay? There are better ways to handle that than Robots.txt, if a crawler is giving you trouble.
Was there a specific message/error in GWT?
-
I think, if you have a robots.txt reading what you show above:
User-agent: *
Crawl-delay: 20
User-agent: 008
Disallow: /
That just basically says, "Don't crawl my site at all" (The "Disallow: /" means, I'm not allowing anything to be crawled by any search engine that pays attention to robots.txt at all)
So...I'm guessing that's not what you want?
(Bah..ignore. "User-agent". I'm a fool)
Actually, this seems to have solved your issue...make sure you explicitly tell all other User-agents that they are allowed:
User-agent: *
Disallow:
Crawl-delay: 20
User-agent: 008
Disallow: /
The extra "Disallow:" under User-agent: * says "I'm not going to disallow anything to most user-agents." Then the Disallow under user-agent 008 seems to only apply to them.