HTML and CSS errors - what do SE spiders do if they come across coding errors? Do they stop crawling the rest of the code below the error?
-
I have a client who uses a template to build their websites (no problem with that). When I ran the site through the W3C validator it threw up a number of errors, most of which were minor, e.g. missing close tags, and I suggested they fix them before I start their off-site SEO campaigns.
When I spoke to their web designer about the issues, I was told that some of the errors were "just how it's done." So if that's the case, but the validator still registers the errors, do the SE spiders ignore them and move on, or do the errors penalize the site in some way?
-
Ryan, thanks so much for taking the time to answer, and so comprehensively too. I really appreciate it.
My client came around after I suggested that getting quality backlinks to a website full of coding errors was like hanging a crystal chandelier in a toilet! And that they were tying one of my hands behind my back by not sorting it out. Perhaps not the most expert answer, but they got the point.
Thanks for some great information and a great answer all round.
-
**When I spoke to their web designer about the issues, I was told that some of the errors were "just how it's done"**
Are you OK with that response? If your client asked you why you took a course of action on their site, would you expect the client to accept "it's just how things are done"?
Generally speaking, sites should use valid code. The W3C is the international body that establishes coding standards. It is made up of a group of people including representatives from Microsoft (IE), Google (Chrome), Mozilla (Firefox), Apple (Safari), etc. Valid code should appear correctly in all browsers.
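For reference, here is a minimal, hypothetical page (sketched for this answer) that the W3C validator at validator.w3.org accepts with no errors; strip out much more than this, such as the doctype or the title, and the errors begin:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>A minimal valid page</title>
</head>
<body>
  <p>This document passes the W3C validator with no errors.</p>
</body>
</html>
```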
Generally speaking again, a developer who writes valid code is following best coding practices. The code can be more easily reviewed by other developers. When invalid code is used, it is often due to sloppy coding practices such as not closing tags, using deprecated tags, not being familiar with the particular encoding of the language in use, and so on. When I ask a developer why the code is not valid and the response is "it's just how things are done," the translation is often "I lack the knowledge / training / experience to write valid code."
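For illustration, here is a hypothetical fragment (invented for this answer) with two of the sloppy-coding mistakes described above, both of which the validator flags:

```html
<!-- Deprecated tag: <font> was dropped from the standard in favor of CSS -->
<font color="red">Limited time offer</font>

<!-- Improperly nested tags: the close tags are out of order -->
<p>This is <b><i>important</b></i> text.</p>

<!-- Corrected versions of both lines -->
<span style="color: red;">Limited time offer</span>
<p>This is <b><i>important</i></b> text.</p>
```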
OK, now that I've angered many developers, let me take the flip side of the coin. Google.com does not validate. What's up with that? Well, you know the development team at Google is among the best in the world. Their project leaders likely hold doctorates or at least master's degrees. Many of them are authors of books on best coding practices. These guys clearly understand all the rules and are able to go past them to achieve better results in a given area, such as the speed optimization Google treasures.
In summary, leading companies can often employ the upper echelon of developers, people who thoroughly understand the rules and can break them to their benefit. Unfortunately, that does not trickle down to everyday developers. Most of them do not have the knowledge / training / experience to make those calls and are either using sloppy coding practices or not taking the time to research alternatives. They have deadlines, and they jump on whatever works.
**What do SE spiders do if they come across coding errors? Do they stop crawling the rest of the code below the error?**
The results vary based on the Search Engine and the type of error. Here are some examples:
1. There are some errors due to a bare "&" being used instead of the entity `&amp;`. The & character often has another purpose (in HTML it marks the start of a character entity, and in many languages it is an operator), so an interpreter may try to perform an operation such as concatenation rather than simply reading the & as a character. See the fragment after this list.
2. In HTML, `<br>` is a perfectly valid tag. In XHTML, there is a rule that any tag which is not used as part of a pair must end in `/>`. In other words, the correct form of the tag in XHTML is `<br />`. If you have an XHTML document which generates 20 errors, and all of those errors are due to the developer using `<br>` instead of `<br />`, then a crawler should handle that issue very well. The crawler recognizes and understands the `<br>` tag even though it is technically invalid code.
3. An open div tag can cause a variety of issues. It all depends on what the div is doing. It could be a very minor or a major issue.
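To make the three cases above concrete, here is a short hypothetical fragment (the URLs and class names are invented for illustration) showing each error, with a fix where one exists:

```html
<!-- 1. Bare ampersand: & starts a character entity, so it must be escaped -->
<a href="results.php?q=shoes&page=2">Invalid: bare &</a>
<a href="results.php?q=shoes&amp;page=2">Valid: the & is written as &amp;</a>

<!-- 2. Self-closing form: <br> is valid HTML but fails XHTML validation -->
<p>Line one<br>Line two</p>      <!-- valid HTML, invalid XHTML -->
<p>Line one<br />Line two</p>    <!-- correct XHTML form -->

<!-- 3. Unclosed div: severity depends on what the div controls -->
<div class="sidebar">
  <p>Navigation links</p>
<!-- missing </div>; a browser or crawler has to guess where the block ends -->
```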
Google does a great job of handling invalid code. Bing seems less tolerant of coding errors and much more selective.
A video you will likely enjoy: http://www.youtube.com/watch?v=FPBACTS-tyg
Summary
You should strive for valid code on your site. Coding errors can cause a variety of issues: making it harder for other developers to work on the site, causing the site to appear incorrectly in various browsers or devices, negatively impacting page load times, and impeding search engine crawlers. Whether a particular error will hurt you is impossible to say without reviewing the specific error. While I do not develop websites, I do project-manage the development of many sites. When a site is complete, the goal is to have no validation errors. If a handful of errors exist, I ask the developer to try to eliminate them. If they cannot, I request an error-by-error explanation of why each error exists and why it cannot be eliminated. The result is a site which appears correctly in all browsers, is correctly crawled and interpreted by search engines, and is easily maintained by various developers.
A final note: just because a page validates does not mean it is developed well, and the reverse is true also. That said, with the exception of the top 1% of sites, which are developed by teams of very well trained and experienced web professionals, sites which validate are likely better designed and maintained than sites which do not.