404 Error on Spider Emulators
-
I recently began working at a company called Uncommon Goods. I ran a few different spider emulators on our homepage (uncommongoods.com) and I saw a 404 Error on SEO-browser.com as well as URL errors on Summit Media's emulator and SEOMoz's crawler. It seems there is a serious problem here. How is this affecting our site from an SEO standpoint? What are the repercussions?
Also, I know we have a lot of JavaScript on our homepage... is this causing the 404? Any advice would be much appreciated.
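For reference, the status code those emulators report comes back in the HTTP headers before any JavaScript on the page runs, so it can be reproduced with a plain request. A rough sketch (the user-agent string is only an illustration):

import http.client

# Fetch the homepage and print only the status line the server returns.
# No JavaScript is executed here, so the result is independent of client-side code.
conn = http.client.HTTPConnection("www.uncommongoods.com", 80, timeout=10)
conn.request("GET", "/", headers={"User-Agent": "Mozilla/5.0 (compatible; ExampleBot/1.0)"})
resp = conn.getresponse()
print(resp.status, resp.reason)  # e.g. "200 OK", "404 Not Found" or "500 Internal Server Error"
conn.close()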
Thanks!
-Zack
-
Hey Zack,
It seems your website is now returning a 200, so you apparently managed to fix the problem.
Was the problem coming from the server configuration as I suggested?
Best regards,
Guillaume Voyer. -
Hi Zack,
Yes, having the home page return a 404 error is a HUGE problem. It tells the engines that the page doesn't exist, so they will stop crawling it and eventually drop it from their index, even if it returns content.
You should solve this problem ASAP!
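To see the distinction in practice: the page body and the status code are separate things, and a crawler trusts the status code. A small sketch that prints both (assuming the homepage still serves HTML alongside the error status):

import http.client

# The page can serve perfectly normal-looking HTML and still tell crawlers it doesn't exist.
conn = http.client.HTTPConnection("www.uncommongoods.com", 80, timeout=10)
conn.request("GET", "/")
resp = conn.getresponse()
body = resp.read()
print("status:", resp.status, resp.reason)
print("bytes of content returned:", len(body))
conn.close()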
Best regards,
Guillaume Voyer. -
Hi Guillaume,
Your comments about JavaScript on the client side make complete sense to me now, and I will examine our Resin config w/ my IT team. Thanks for explaining. Also, as per Beneeb's advice above, I'm going to try making some changes to robots.txt.
From a bigger-picture perspective though, do you think this 404 error is even that big of a deal? Are we likely to be penalized for this in terms of PageRank, Domain Authority, etc.?
Thanks for your help!
-Zack
-
Hi Zack,
The 404 error has nothing to do with the robots.txt file; it has to do with your server configuration, as I said in my answers below.
About the robots.txt file, I would remove the Disallow: line if you don't need to block anything.
Best regards,
Guillaume Voyer. -
Hi Beneeb,
That tool is awesome! It definitely helps, thanks! I'm going to show that report to my IT guys today. I think your guess is a very good one. Hopefully I can persuade them to make the changes and we'll see if it resolves the error.
Best Regards,
-Zack
-
Hi Zack,
To be honest with you, it was just a guess. I used a robots.txt syntax checker and saw several issues. You can check out that same tool here & run your current robots.txt file through it:
http://tool.motoricerca.info/robots-checker.phtml
I hope that gets you pushed in the right direction. I'm very new to SEO, but I've worked in the technical support world forever. So, my suggestion is only worth what you paid for it.
-
Hi Beneeb,
Thank you for your insight. I think this makes sense, as I see there is some redundancy in robots.txt as it is now. I'm curious, however: why do you think that changing robots.txt will resolve the 404 error?
Best Regards,
-Zack
-
Hi Zack,
Quick follow-up: your website always returns a 500 to HTTP/1.0 requests. With HTTP/1.1, the homepage returns a 404 and subpages return a 200.
I saw the website is running on a Resin server rather than an Apache server, so you might want to look into your Resin server's configuration.
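If you want to reproduce that comparison, a raw request makes the protocol version explicit. A rough sketch (port 80, plain HTTP, as discussed in this thread):

import socket

def status_line(host, path, version):
    # Build the request by hand so the HTTP version is exactly what we send.
    request = f"GET {path} {version}\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    with socket.create_connection((host, 80), timeout=10) as sock:
        sock.sendall(request.encode("ascii"))
        first_line = sock.makefile("rb").readline()
    return first_line.decode("iso-8859-1").strip()

for version in ("HTTP/1.0", "HTTP/1.1"):
    print(version, "->", status_line("www.uncommongoods.com", "/", version))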
Best regards,
Guillaume Voyer. -
Hi Zack,
Actually, when I use this HTTP header tool and input http://www.uncommongoods.com/, I see that the header returned is in fact a 500 Internal Server Error.
The HTTP header is returned by the server before the browser can even know that there is JavaScript on the page, so it has nothing to do with JavaScript.
You'll have to look at the server side, since the Internal Server Error and the HTTP header are returned by the server, unlike the JavaScript, which is executed client-side.
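If it helps, the same check can be done without a browser at all; a small sketch that asks the server for the headers only (note that some servers answer HEAD slightly differently than GET):

import http.client

# A HEAD request returns just the status line and headers -- everything printed
# below was produced by the server, before any page content or JavaScript is involved.
conn = http.client.HTTPConnection("www.uncommongoods.com", 80, timeout=10)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)
for name, value in resp.getheaders():
    print(f"{name}: {value}")
conn.close()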
Best regards,
Guillaume Voyer. -
Hi Zack,
Looking at your robots.txt file, you have several errors. I would replace your current robots.txt file with the following:
User-Agent: *
Disallow:
Sitemap: http://www.uncommongoods.com/sitemap.xml
(not sure why the message truncated your sitemap file, but you get the picture)
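Once the file is updated, one way to sanity-check it is to parse it the same way well-behaved crawlers do. A minimal sketch (the second sample path is hypothetical, just for illustration):

import urllib.robotparser

# Parse the live robots.txt and confirm that nothing important is blocked.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://www.uncommongoods.com/robots.txt")
rp.read()

for path in ("/", "/gifts/example-product"):  # "/gifts/example-product" is a made-up path
    url = "http://www.uncommongoods.com" + path
    print(path, "crawlable:", rp.can_fetch("*", url))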
Related Questions
-
Does anyone know whether the linking of hashtags on Wix sites negatively or positively impacts SEO? It is coming up as an error in site crawls ('Pages with 404 errors'). Anyone got any experience, please?
For example, at the bottom of this blog post https://www.poppyandperle.com/post/face-painting-a-global-language the hashtags are linked, but they don't go to a page; they go to search results of all other blogs using that hashtag. Seems a bit of a strange approach to me.
Technical SEO | Mediaholix
-
Yoast SEO: 404 error pages after setup
Hello all, Something strange happened with my blog site. I recently signed up for Moz tools. Initially everything was fine, but during my last crawl I got loads of 404 pages. A few days ago I was tweaking some settings in the SEO plugin according to this post: https://moz.com/blog/setup-wordpress-for-seo-success What I noticed was that the 404 pages were coming from my blog posts, but for some reason the category was missing in those posts. For example, this link is a 404: https://a-fotografy.co.uk/inchcolm-island-wedding-photography-bailie The one with the category is https://a-fotografy.co.uk/wedding-pictures/inchcolm-island-wedding-photography-bailie/ So basically, for some reason, the category was missing. Please let me know how I can fix this instead of doing hundreds of redirects now. Thank you, Regards, Armands
Technical SEO | A_Fotografy
-
Webmaster Crawl errors caused by Joomla menu structure.
Webmaster Tools is reporting crawl errors for pages that do not exist due to how my Joomla menu system works. For example, I have a menu item named "Service Area" that stores 3 sub-items but no actual page for Service Area. This results in a URL like domainDOTcom/service-area/service-page.html. Because the Service Area menu item is constructed in a way that shows the bot it is a link, I am getting a 404 error saying it can't find domainDOTcom/service-area/ (the link is to "javascript:;"). Note, the error doesn't say domainDOTcom/service-area/javascript:; it just says /service-area/. What is the best way to handle this? Can I do something in robots.txt to tell the bot that /service-area/ should be ignored but any page after /service-area/ is good to go? Should I just mark them as fixed, as it's really not a 404 a human will encounter, or is it best to somehow explain this to the bot? I was advised on Google forums to try this, but I'm nervous about it:
Disallow: /service-area/*
Allow: /service-area/summerlin-pool-service
Allow: /service-area/north-las-vegas
Allow: /service-area/centennial-hills-pool-service
I tried a 301 redirect of /service-area to the home page, but then it pulls that out of the URL and my landing pages become 404s. http://www.lvpoolcleaners.com/ Thanks for any advice! Derrick
Technical SEO | dwallner
-
To avoid errors in our Moz crawl, we removed subdomains from our host. (First we tried 301 redirects, also listed as errors.) Now we have backlinks all over the web that are broken. How bad is this, from a pagerank standpoint?
Our MOZ crawl kept telling us we had duplicate page content even though our subdomains were redirected to our main site. (Pages from Wineracks.vigilantinc.com were 301 redirected to vigilantinc.com/wineracks.) Now, to solve that problem, we have removed the wineracks.vigilantinc.com subdomain. The error report is better, but now we have broken backlinks - thousands of them. Is this hurting us worse than the duplicate content problem?
Technical SEO | KristyFord
-
GWT Error for RSS Feed
Hello there! I have a new RSS feed that I submitted to GWT. The feed validates no problemo on http://validator.w3.org/feed/ and when I test the feed in GWT it comes back a-ok, finding all the content with "No errors found". I recently got an issue with GWT not being able to read the RSS feed, an error on line 697: "We were unable to read your Sitemap. It may contain an entry we are unable to recognize. Please validate your Sitemap before resubmitting." I am assuming this is an intermittent issue; possibly we had a server issue on the site last night, etc. I am checking with my developer this morning. Wanted to see if anyone else had this issue, if it resolved itself, etc. Thanks!
Technical SEO | CleverPhD
-
Are W3C Validators too strict? Do errors create SEO problems?
I ran an HTML markup validation tool (http://validator.w3.org) on a website. There were 140+ errors and 40+ warnings. IT says "W3C Validators are overly strict and would deny many modern constructs that browsers and search engines understand." What a browser can understand and display to visitors is one thing, but what search engines can read has everything to do with the code. I ask this: if the search engine crawler is reading through the code and comes upon an error like this:
…ext/javascript" src="javaScript/mainNavMenuTime-ios.js"> </script>');}
The element named above was found in a context where it is not allowed. This could mean that you have incorrectly nested elements -- such as a "style" element in the "body" section instead of inside "head" -- or two elements that overlap (which is not allowed). One common cause for this error is the use of XHTML syntax in HTML documents. Due to HTML's rules of implicitly closed elements, this error can create cascading effects. For instance, using XHTML's "self-closing" tags for "meta" and "link" in the "head" section of an HTML document may cause the parser to infer the end of the "head" section and the beginning of the "body" section (where "link" and "meta" are not allowed; hence the reported error).
and this one, which triggers the same "element not allowed" explanation:
…t("?");document.write('>');}
Does this mean that the crawlers don't know where the code ends and the body text begins, and what they should be focusing on and not?
Technical SEO | INCart
-
Error 404: WordPress adds the domain automatically to the end of the pages. WHY?
Hello guys, I'm using WordPress and Yoast to help me improve my SEO. Everything went well except for today, because Moz found 404 errors when crawling the website, showing the domain of my website at the end of 12 URLs. For example:
www.domain.com/service-1/www.domain.com
www.domain.com/contact-page/www.domain.com
Do you have any idea where that comes from? Thanks, Alex
Technical SEO | abonnisseau
-
Having a massive amount of duplicate crawl errors
I'm having over 400 crawl errors for duplicate content, looking like this:
http://www.mydomain.com/index.php?task=login&prevpage=http%3A%2F%2Fwww.mydomain.com%2Ftag%2Fmahjon
http://www.mydomain.com/index.php?task=login&prevpage=http%3A%2F%2Fwww.mydomain.com%2Findex.php%3F
etc., etc. So there seems to be something with my login script that is not working. Does anyone know how to fix this? Thanks
Technical SEO | stanken