jQuery, MooTools, Prototype, YUI and Dojo all have at least one thing in common: they're all partial abstractions of JavaScript, the "official" client-side programming language. Each of these frameworks enables further abstraction, whether through "plugins", "extensions" or "classes". So where does the abstraction end? Or rather, where should it end?
The point of abstraction is to make something simpler by factoring out the details. Take jQuery, for example: it abstracts away cross-browser pains; it essentially covers up the difficulties. This can make for faster development, but it can sometimes result in an abstraction leak.
The Law of Leaky Abstractions, a term coined by Joel Spolsky, states that "All non-trivial abstractions, to some degree, are leaky". A "leak" occurs when an abstraction breaks down and you have to resort to a lower-level abstraction to solve the problem. So, if jQuery fails to provide cross-browser sanity in a particular situation, you're left to battle it via "pure" JavaScript (a lower-level abstraction).
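To make that concrete, here's a small, hypothetical sketch (the video element and its id are invented for illustration): jQuery doesn't wrap every DOM API, so the moment you need something it doesn't cover, you unwrap the raw element and carry on in plain JavaScript.

// Hypothetical leak: jQuery has no wrapper for the HTML5 video API.
// Assumes a <video id="clip"> element exists on the page.
var $clip = $('#clip');   // high-level abstraction: a jQuery object

// $clip.play();          // no such method; the abstraction stops here
$clip.get(0).play();      // drop down to the raw DOM node and plain JavaScript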
This highlights a common controversy: is it necessary to know JavaScript at all? Are you comfortable operating on a wafer-thin high-level abstraction like jQuery, MooTools, Prototype or any other framework? What’s your course of action when your beloved abstraction leaks?
How low can you go?
So, given that all abstractions are, to some extent, leaky, how low should you go? I think this can be answered with another question: how low can you go given the constraints of the platform? With our jQuery example the platform is the browser; this piece of software offers nothing lower than JavaScript, so surely that's as far as you should go!
Similarly, with CSS frameworks like BlueprintCSS and 960gs you should go as low as you can, which, in this situation, is CSS itself.
Using frameworks is absolutely fine as long as you know the technology on which the abstraction was formed. If you want to use jQuery, learn JavaScript; if you want to use a CSS framework then learn CSS!
This gets more complicated in the area of software engineering because the platform allows you to go very low. Should all .NET developers know how to write C++? Should all C++ programmers have a strong grasp of Assembly? I could go on forever…
It’s a compromise!
If we define “control” as a reasonable level of supervision over a piece of functionality (whether it’s a form validator or an image cropper) then it has a negative correlation with abstraction; in other words, the higher the level of abstraction, the less control you have.
Recently I discussed the benefits of creating a jQuery plugin, but I failed to point out the obvious flaw: by creating a plugin or extension for any framework you're raising the level of abstraction even further (covering up more of the details), thereby decreasing the level of control someone has over it. Fortunately, because it's JavaScript, we never really lose control; we can change the plugin source at any time. But the end user of your plugin probably won't know JavaScript to the same extent as you. The level of control they have is dictated by the level of customisation you offer with the plugin; but the more customisation you offer, the bigger the plugin will be. The entire process is therefore a compromise between control, simplicity and, most importantly, speed!
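To illustrate the trade-off, here is a made-up sketch (not any particular plugin): every default the user can override is a little more control handed back to them, and a little more code shipped inside the plugin.

// A minimal, hypothetical jQuery plugin.
(function ($) {
    $.fn.crop = function (options) {
        // Each overridable default is a "control knob" for the user
        // and extra bytes in the plugin.
        var settings = $.extend({
            width: 100,
            height: 100,
            cssClass: 'cropped'
        }, options);

        return this.each(function () {
            $(this).addClass(settings.cssClass).css({
                width: settings.width + 'px',
                height: settings.height + 'px',
                overflow: 'hidden'
            });
        });
    };
}(jQuery));

// Usage (assuming elements with class "thumb" exist):
// accept the defaults, or take more control through the options.
$('.thumb').crop({ width: 150, height: 80 });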
The higher the level of abstraction, the slower it will be. Higher abstractions may speed up development time, but the processing time will usually be longer: document.getElementById('elem') will always be faster than $('#elem')!
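If you want to see the gap for yourself, a crude timing sketch looks something like this (it assumes an element with id "elem" exists; absolute numbers vary enormously between browsers, so treat them as illustrative only):

var i, start, iterations = 100000;

start = new Date().getTime();
for (i = 0; i < iterations; i++) {
    document.getElementById('elem');   // direct DOM access
}
console.log('getElementById: ' + (new Date().getTime() - start) + 'ms');

start = new Date().getTime();
for (i = 0; i < iterations; i++) {
    $('#elem');                        // parse the selector, build and return a jQuery object
}
console.log('jQuery $():     ' + (new Date().getTime() - start) + 'ms');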
Food for thought…
Thanks for reading! Please share your thoughts with me on Twitter. Have a great day!
Nice article!
The new frameworks let us forget about complex programming; we just "imagine" what we want our project to be and design it in a few direct and easy steps.
Great tips thanks James 🙂
Congratulations on the Nettuts screencast competition
Thanks for highlighting this.
This is why I use my own library of functions rather than deal with the abstractions used in major libraries.
Nice article… definitely something to think about!
I think that MooTools is harder than jQuery, because with MooTools you need to know how JavaScript works to use it. JavaScript is an object-oriented language, and in jQuery it's all about functions.
@IRM, I agree, using frameworks sometimes can make us forget. But for most people they’re not forgetting anything because they never knew it in the first place. Most adopters of jQuery don’t know JavaScript to begin with…
@Ibrahim, Thanks mate! 🙂
@Pete, I tend to do that too, on smaller projects, but sometimes it becomes pointless – there’s no point in reinventing the wheel. But I can definitely see the benefit in using your own library; when an abstraction leaks you know exactly how to fix it because you created it.
@Brenelz, Indeed! I don’t think knowing the above will really change what I do but it is interesting nonetheless. 🙂
@Santiago, I wouldn't say MooTools is harder than jQuery (the initial learning curve may be steeper), but I do agree that you need to know a little more about JavaScript to use MooTools. As well as being an object-oriented language, JavaScript is also a prototypal and functional one; jQuery is simply venturing on the other side of the road.
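A tiny, made-up example of what I mean by the two sides of the road (it assumes an element with id "elem"):

// Plain JavaScript, prototypal / object-oriented style:
function Tooltip(text) {
    this.text = text;
}
Tooltip.prototype.attachTo = function (el) {
    el.title = this.text;
};
new Tooltip('Hello').attachTo(document.getElementById('elem'));

// jQuery style: everything funnels through functions and chaining:
$('#elem').attr('title', 'Hello');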
I’d say there are two occasions when using an abstraction such as jQuery is by far the best route:
– a novice JavaScripter just wants to get something done. They don’t want to learn the language in-depth, they just want their pages to work.
– a team of developers need to work together on a project. Using an established, well-documented library means everyone is working from the same sheet, as it were.
I tend to avoid using established libraries on the majority of my projects. This is because I work on my own library; while there's no way it can compete with the likes of jQuery or MooTools, using & developing it has its own benefits:
- when I do come across a leaky abstraction, it's easier to fix it for next time
– I’m far more knowledgeable about how the library works and how to approach the project in the best way
- it's simply the best way to become a better JavaScripter – while it may feel like you are reinventing the wheel, part of the challenge is finding ways to make things work faster, improve stability, etc.
Your article makes a very valid point and is something everyone should take into account when using any framework – especially a JavaScript one, which can easily sap the user's CPU cycles if implemented incorrectly.
Luckily the majority of these frameworks are now so finely tuned that we no longer have to worry too much about JS bogging down a user's experience of the web. Reducing these abstraction layers is only really needed in web applications these days, and even then should only be required when making a large number of DOM changes at any one time.
I've recently been doing some profiling work at my day job with the aim of speeding up our file manager's directory listing. The main change was replacing multiple uses of PrototypeJS's .insert() with a single assignment to innerHTML, which drastically improved performance (490ms down to 7ms rendering 100 complex HTML elements).
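Roughly speaking, that kind of change looks like this (the element id and the "rows" data are invented for the sketch; $ here is Prototype's):

// Before: one DOM insertion per row via Prototype's Element#insert (slow).
for (var i = 0; i < rows.length; i++) {
    $('listing').insert('<li>' + rows[i].name + '</li>');
}

// After: build the markup as one string and touch the DOM once (fast).
var html = [];
for (i = 0; i < rows.length; i++) {
    html.push('<li>' + rows[i].name + '</li>');
}
$('listing').innerHTML = html.join('');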