404 Error on Spider Emulators
-
I recently began working at a company called Uncommon Goods. I ran a few different spider emulators on our homepage (uncommongoods.com) and I saw a 404 Error on SEO-browser.com as well as URL errors on Summit Media's emulator and SEOMoz's crawler. It seems there is a serious problem here. How is this affecting our site from an SEO standpoint? What are the repercussions?
Also, I know we have a lot of JavaScript on our homepage. Is this causing the 404? Any advice would be much appreciated.
Thanks!
-Zack
-
Hey Zack,
It seems your website is now returning a 200, so you apparently managed to fix the problem.
Was the problem coming from the server configuration as I suggested?
Best regards,
Guillaume Voyer. -
Hi Zack,
Yes, having the home page return a 404 error is a HUGE problem. It tells the engines that the page doesn't exist, so they will stop crawling it and eventually drop it from their index, even if it returns content.
You should solve this problem ASAP!
Best regards,
Guillaume Voyer. -
Hi Guillaume,
Your comments about JavaScript on the client side make complete sense to me now, and I will examine our Resin config with my IT team. Thanks for explaining. Also, as per Beneeb's advice above, I'm going to try making some changes to robots.txt.
From a bigger-picture perspective, though, do you think this 404 error is even that big of a deal? Are we likely to be penalized for this in terms of PageRank, Domain Authority, etc.?
Thanks for your help!
-Zack
-
Hi Zack,
The 404 error has nothing to do with the robots.txt file; it has to do with your server configuration, as I said in my answers below.
About the robots.txt file, I would remove the Disallow: line if you don't need to block anything.
Best regards,
Guillaume Voyer. -
Hi Beneeb,
That tool is awesome! It definitely helps, thanks! I'm going to show that report to my IT guys today. I think your guess is a very good one. Hopefully I can persuade them to make the changes and we'll see if it resolves the error.
Best Regards,
-Zack
-
Hi Zack,
To be honest with you, it was just a guess. I used a robots.txt syntax checker and saw several issues. You can check out that same tool here & run your current robots.txt file through it:
http://tool.motoricerca.info/robots-checker.phtml
I hope that gets you pushed in the right direction. I'm very new to SEO, but I've worked in the technical support world forever. So, my suggestion is only worth what you paid for it.
-
Hi Beneeb,
Thank you for your insight. I think this makes sense, as I see there is some redundancy in robots.txt as it is now. I'm curious, however: why do you think that changing robots.txt will resolve the 404 error?
Best Regards,
-Zack
-
Hi Zack,
Quick follow-up: your website always returns a 500 to HTTP/1.0 requests. With HTTP/1.1, the homepage returns a 404 and subpages return a 200.
I saw the website is running on a Resin server rather than an Apache server, so you might want to look into your Resin server's configuration.
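If it helps your IT team reproduce this, here is a quick Python sketch of the kind of check I ran: it sends a raw GET request over a socket so you can control the HTTP version, then reads back the server's status line. (A minimal sketch using only the standard library; the host is your domain, and the exact status codes you see will depend on your Resin configuration.)

```python
import socket

def build_request(host, version="1.0"):
    """Build a minimal raw GET request for the given HTTP version."""
    return (
        f"GET / HTTP/{version}\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def status_line(host, version="1.0", port=80):
    """Send the request and return the server's status line, e.g. 'HTTP/1.1 200 OK'."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(build_request(host, version))
        data = sock.recv(4096)
    return data.split(b"\r\n", 1)[0].decode("ascii", "replace")

if __name__ == "__main__":
    # Compare how the server answers the two protocol versions.
    for version in ("1.0", "1.1"):
        print("HTTP/" + version, "->", status_line("www.uncommongoods.com", version))
```

If the two versions come back with different status codes, that points at the server (or a proxy in front of it) handling them differently, not at anything in the page itself.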
Best regards,
Guillaume Voyer. -
Hi Zack,
Actually, when I use this HTTP header tool and input http://www.uncommongoods.com/, I see that the header returned is in fact a 500 Internal Server Error.
The HTTP header is returned by the server before the browser can even know that there is JavaScript on the page, so it has nothing to do with JavaScript.
You'll have to look at the server side: the Internal Server Error and the HTTP header are returned by the server, as opposed to the JavaScript, which is executed client-side.
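To make that concrete, here is a small self-contained sketch (standard library only, with a toy local server standing in for your real server) showing that the status code is decided entirely server-side, before any HTML or JavaScript ever reaches the client:

```python
import http.client
import http.server
import threading

class StatusHandler(http.server.BaseHTTPRequestHandler):
    """Toy server: answers 500 on / and 200 elsewhere. The status is fixed
    here, server-side, before the body (and any JavaScript in it) is sent."""
    def do_GET(self):
        code = 500 if self.path == "/" else 200
        self.send_response(code)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<script>/* client-side JS cannot change the status */</script>")
    def log_message(self, *args):
        pass  # keep the demo quiet

def check_status(host, port, path="/"):
    """Return just the HTTP status code for a URL -- what a crawler sees first."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("GET", path)
    status = conn.getresponse().status
    conn.close()
    return status

server = http.server.HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
print(check_status("127.0.0.1", port, "/"))       # 500
print(check_status("127.0.0.1", port, "/other"))  # 200
server.shutdown()
```

The script embedded in the response body never runs on the server, so nothing in it can turn that 500 into a 200. The same logic applies to your homepage: the fix has to happen in the server configuration.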
Best regards,
Guillaume Voyer. -
Hi Zack,
Looking at your robots.txt file, you have several errors. I would replace your current robots.txt file with the following:
User-Agent: *
Disallow:
Sitemap: http://www.uncommongoods.com/sitemap.xml
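You can also sanity-check the cleaned-up file offline with Python's standard-library robots.txt parser before deploying it (a quick sketch; the sitemap URL is the one from the post above):

```python
from urllib.robotparser import RobotFileParser

# The corrected robots.txt, as plain text.
ROBOTS_TXT = """\
User-Agent: *
Disallow:

Sitemap: http://www.uncommongoods.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# An empty Disallow: blocks nothing, so every URL should be fetchable.
print(parser.can_fetch("Googlebot", "http://www.uncommongoods.com/"))        # True
print(parser.can_fetch("Googlebot", "http://www.uncommongoods.com/gifts/"))  # True
```

If `can_fetch` ever comes back `False` for a page you want indexed, the file is blocking more than you intended.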