Wordpress error
-
In Google Webmaster Tools I'm getting a Severe Health Warning regarding our robots.txt file, which reads:
User-agent: *
Crawl-delay: 20

User-agent: 008
Disallow: /

I'm wondering how I can fix this and stop it happening again.
The site was hacked about 4 months ago but I thought we'd managed to clear things up.
Colin
-
This will be my first post on SEOmoz, so bear with me.
The way I understand it is that robots read the robots.txt file from top to bottom, and once they find a rule that applies to them they stop reading and begin crawling. So basically the robots.txt written as:
User-agent: *
Disallow:
Crawl-delay: 20
User-agent: 008
Disallow: /
would not have the desired result, as user-agent 008 would first read the top group:
User-agent: *
Disallow:
Crawl-delay: 20
and then begin crawling your site, because the first group tells all user-agents that nothing is disallowed, i.e. that they are free to crawl every page and directory.
The corrected way to write this would be:
User-agent: 008
Disallow: /
User-agent: *
Disallow:
Crawl-delay: 20
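(My addition, not part of the original answer: one quick way to sanity-check the corrected ordering offline is Python's standard-library `urllib.robotparser`; the example.com URLs are just placeholders.)

```python
import urllib.robotparser

# Peter's corrected robots.txt: the specific 008 group first,
# then the catch-all group for every other crawler.
ROBOTS = """\
User-agent: 008
Disallow: /

User-agent: *
Disallow:
Crawl-delay: 20
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS.splitlines())

# 008 is blocked site-wide; everyone else may crawl.
print(rp.can_fetch("008", "http://example.com/page"))        # False
print(rp.can_fetch("Googlebot", "http://example.com/page"))  # True
```

Worth noting that well-behaved crawlers are supposed to obey the most specific matching user-agent group regardless of order, so it pays to test the file in GWT either way.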
-
Hi Peter,
I've tested the robots.txt file in Webmaster Tools and it now seems to be working as it should, and it seems Google is seeing the same file as I have on the server.
I'm afraid this side of things isn't my area of expertise, so it's been a bit of a minefield.
I've taken out a subscription with sucuri.net and taken various other steps that will hopefully help with security. But who knows?
Thanks,
Colin
-
Google is seeing the same Robots.txt content (in GWT) that you show in the physical file, right? I just want to make sure that, when the site was hacked, no changes were made that are showing different versions of files to Google. It sounds like that's not the case here, but it definitely can happen.
-
The blog isn't showing now, and my hosts say that the index.php file is missing from the directory, but I can see it.
Strange.
Have contacted them again to see what the problem can be.
Bit of a wasted Saturday!
-
Thanks Keith. Just contacting our hosts.
Nightmare!
-
Looks like a 403 permissions problem; that's a server-side error... Make sure you have the correct permissions set on the blog folder in IIS. Personally, I always host on Linux...
-
Mind you, the whole blog is now showing an error message and can't be viewed, so it looks like an afternoon of trial and error!
-
Thanks very much Keith. I've just edited the file as suggested.
I see the error, but as I am the web guy, I can't figure out how to get rid of it.
I think it might be a plugin that's causing it, so I'm going to disable the plugins and re-enable them one at a time.
I've just PM'd you by the way.
Thanks for your help Keith.
Colin
-
Use this:
User-agent: *
Disallow: /blog/wp-admin/
Disallow: /blog/wp-includes/
Sitemap: http://nile-cruises-4u.co.uk/sitemap.xml
Also, FYI, you have the following error on your blog:
Warning: is_readable() [function.is-readable]: open_basedir restriction in effect. File(D:\home\nile-cruises-4u.co.uk\wwwroot\blog/wp-content/plugins/D:\home\nile-cruises-4u.co.uk\wwwroot\blog\wp-content\plugins\websitedefender-wordpress-security/languages/WSDWP_SECURITY-en_US.mo) is not within the allowed path(s): (D:\home\nile-cruises-4u.co.uk\wwwroot) in D:\home\nile-cruises-4u.co.uk\wwwroot\blog\wp-includes\l10n.php on line 339
Get your web guy to look at that, it appears at the top of every blog page for me...
Hope that helps,
Keith
-
Thanks Keith.
Only part of our site is WP-based. Would it be a problem to use the example you kindly suggested?
-
I gave you an example above of a basic robots.txt file that I use on one of my Wordpress sites; I would suggest using that for now.
I would not bother messing around with crawl-delay in robots.txt; as Peter said above, there are better ways to achieve this... Plus I doubt you need it anyway.
Google caches the robots.txt info for about 24hrs normally in my experience... So it's possible the old cached version is still being used by Google.
-
Hi Guys,
Thanks so much for your help. As you say Troy, that's definitely not what I want.
I assumed when we were hacked (twice in 8 months) that it might have been a competitor, as we are in a very competitive niche. I might be very wrong there, but we have certainly lost our top ranking on Google.co.uk for our main key phrases and are now at about position 7 for the same key phrases after about 3 years at number 1.
So when I saw on Google Webmaster Tools yesterday that we had a severe health warning and that Googlebot was being prevented from crawling our site, I thought it might be the after-effects of the hack.
Today, even though I changed the robots.txt file yesterday, GWT is showing 1,000 pages with errors (285 Access Denied and 719 Not Found) and this message: Googlebot is blocked from http://nile-cruises-4u.co.uk/
I've just tested the robots.txt via GWT and now get this message:
Allowed. Detected as a directory; specific files may have different restrictions.
So maybe the pages will be accessible to Googlebot shortly and the Access Denied message will disappear.
I've changed the robots.txt file to:
User-agent: *
Crawl-delay: 20
But should I change it to a better version? Sorry guys, I'm an online travel agent and not great at coding and really techie stuff, although I'm learning pretty quickly about the bad stuff! I seem to have a few problems getting this sorted and wonder if this is part of why our page position is dropping?
-
I would simplify your robots.txt to read something like:
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Sitemap: http://www.your-domain.com/sitemap.xml
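(A quick sketch of my own, not from the thread: before uploading a candidate file, you can check what it permits with Python's standard-library `urllib.robotparser`. The domain and paths below are placeholders, and the Sitemap line is left out because it doesn't affect crawl permissions.)

```python
import urllib.robotparser

# The simplified Wordpress robots.txt suggested above.
ROBOTS = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS.splitlines())

# Admin and includes paths are blocked; ordinary content is crawlable.
print(rp.can_fetch("Googlebot", "http://www.your-domain.com/wp-admin/options.php"))  # False
print(rp.can_fetch("Googlebot", "http://www.your-domain.com/a-blog-post/"))          # True
```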
-
That's odd: "008" appears to be the user agent for "80legs", a custom crawler platform. I'm seeing it in other robots.txt files.
-
I'm not 100% sure what he's seeing, but when I plug his robots.txt into the robots analysis tool, I get this back:
Googlebot blocked by line 5: Disallow: /
Detected as a directory; specific files may have different restrictions
However, when I gave the top "**User-agent: ***" group an explicit "**Disallow:**" line, that seemed to fix the problem. It's as if the tool didn't understand that the "**Disallow: /**" was meant only for the 008 user-agent?
-
Not honestly sure what User-agent "008" is, but that seems harmless. Why the crawl delay? There are better ways to handle that than Robots.txt, if a crawler is giving you trouble.
Was there a specific message/error in GWT?
-
I think, if you have a robots.txt reading what you show above:
User-agent: *
Crawl-delay: 20

User-agent: 008
Disallow: /
That basically says, "Don't crawl my site at all." (The "Disallow: /" means I'm not allowing anything to be crawled by any search engine that pays attention to robots.txt.)
So...I'm guessing that's not what you want?
(Bah..ignore. "User-agent". I'm a fool)
Actually, this seems to have solved your issue...make sure you explicitly tell all other User-agents that they are allowed:
User-agent: *
Disallow:
Crawl-delay: 20

User-agent: 008
Disallow: /
The extra "Disallow:" under User-agent: * says, "I'm not disallowing anything for all other user-agents." Then the Disallow: / under User-agent: 008 applies only to that crawler.
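(For what it's worth, my own check, not from the thread: the fixed file behaves as described under Python's standard-library `urllib.robotparser` (Python 3.6+ for `crawl_delay`); example.com is a placeholder.)

```python
import urllib.robotparser

# The fixed file: an explicit empty Disallow for everyone,
# plus a site-wide block for user-agent 008.
ROBOTS = """\
User-agent: *
Disallow:
Crawl-delay: 20

User-agent: 008
Disallow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS.splitlines())

print(rp.can_fetch("Googlebot", "http://example.com/"))  # True
print(rp.can_fetch("008", "http://example.com/"))        # False
print(rp.crawl_delay("Googlebot"))                       # 20
```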