First Crawl Report
-
Just joined SEOMoz today and am slightly overwhelmed, but excited about learning loads from it.
I've just received my Crawl Report and there is a
404 : UserPreemptionError:
http://www.iainmoran.com/comments/feed/
This is a WordPress site and I've no idea of the best course of action to take. I've done some searching on Google and a couple of sites suggest removing that URL via the robots.txt file. I'm using the Yoast plugin, which apparently creates a robots.txt file, but I can't see any way to edit it. Is there another solution for resolving the 404 error?
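For reference, the rule those sites suggest looks something like this (just a sketch of what I found, not something I've applied; as I understand it, robots.txt only hides the URL from crawlers rather than fixing the 404 itself):

```
User-agent: *
Disallow: /comments/feed/
```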
Many thanks,
Iain.
-
John, thank you so much for your help with this. It's very much appreciated.
That does explain why I couldn't find the robots.txt file.
Is there another way to resolve the 404 error warning that SEOMoz is telling me about?
404 : UserPreemptionError:
http://www.iainmoran.com/comments/feed/
I understand that if I disable comments on my WP site, that would fix the issue, but I don't really want to do that if it can be avoided.
Thanks again,
Iain.
-
Actually, on further investigation, blocking the includes folder does not affect ranking, as the content is loaded server-side (PHP).
I don't imagine that the robots.txt file is your issue, as it's just stopping people accessing your includes and admin folders, which you want.
-
Iain
I don't use WP, but I just Googled this:
"I believe WordPress itself creates the robots.txt file and it is a virtual file, meaning there won't be a hard copy on your server for you to edit. You could create one yourself and upload it to your server and I think search engines will use that one instead, or you can use a plugin like this one, that will let you edit your robots.txt file from the WordPress admin."
Explains why you can't find it!
Basically, that suggests creating one yourself and uploading it to the server.
Create a Notepad file, add your content, and name the file robots.txt (the extension must be .txt), and that should work.
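As a sketch, the file contents might look something like this (the Disallow lines here are just common WordPress examples, not necessarily right for your site):

```
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
```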
-
Iain
Just to be sure, it's actually called robots.txt (not robots.text).
I just checked, and you do have one in your root directory:
http://www.iainmoran.com/robots.txt
Bizarrely, it's blocking Google from indexing your includes, which is a problem because most of your links will be in some sort of nav include, not to mention most of your website content.
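If you want to double-check what a robots.txt actually blocks, Python's standard-library robotparser can test rules offline. This is just a sketch; the rules below are what I'd guess a typical WP robots.txt contains, not necessarily what's in yours:

```python
from urllib.robotparser import RobotFileParser

# Assumed rules mirroring a typical WordPress robots.txt;
# replace with the actual contents of your file.
rules = """\
User-agent: *
Disallow: /wp-includes/
Disallow: /wp-admin/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Check whether specific URLs would be blocked for all crawlers ("*")
for url in [
    "http://www.iainmoran.com/wp-includes/js/jquery.js",
    "http://www.iainmoran.com/comments/feed/",
]:
    allowed = parser.can_fetch("*", url)
    print(f"{url} -> {'allowed' if allowed else 'blocked'}")
```

With those assumed rules, the includes URL comes back blocked while the comments feed is still allowed, which is why blocking in robots.txt and the 404 are really two separate issues.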
It's definitely there, so maybe check your public folder and www folder.
Let me know how you get on
John
-
Thanks for your quick response, John.
I've just looked at my root directory via both cPanel and FTP, but there is no robots.txt file there.
I don't understand why this is, as in the Yoast section on each of my pages there are some that I have set to "nofollow".
Also, within the Yoast control panel there is a section titled "Robots.txt" which says the following:
"If you had a robots.txt file and it was editable, you could edit it from here."
Iain.
-
Iain
Your robots.txt file can be found within your root directory.
Wherever your website is hosted, log in there (you more than likely have cPanel) and navigate to your public folder. The robots.txt file should reside there; download it and edit it in any software really, WordPad etc.
If you don't have cPanel, you can add your FTP credentials to an FTP client and navigate the same way.
When you've edited it, upload it back to your server.
Regards
J