Of late, I’ve been both reviewing eLearning and designing processes & templates. As I’ve said before, the nuances between well-designed and well-produced eLearning are subtle, but important. Reading a forthcoming book that outlines the future but recounts the past, it occurs to me that it may be worthwhile to look at a continuum of possibilities.
For the sake of argument, let’s assume that the work is well-produced, and explore some levels of differentiation in quality of the learning design. So let’s talk about a lack of worthwhile objectives, lack of models, insufficient examples, insufficient practice, and lack of emotional connection. These combine into several levels of quality.
The first level is where there aren’t any learning objectives, or there aren’t good ones. Here we’re talking about waffly objectives like ‘understand’, ‘know’, etc. Look, I’m not a behaviorist, but I think *when* you have formal learning goals (and that’s not as often as we deliver them), you bloody well ought to have some pretty meaningful description around them. Instead, what we see is the all-too-frequently observed knowledge dump and knowledge test.
Which, by the way, is a colossal waste of time and money. Seriously, you are, er, throwing away money if that’s your learning solution. Rote knowledge dump and test reliably lead to no meaningful behavior change. We even have a label for it in cognitive science: “inert knowledge”.
So let’s go beyond meaningless objectives, and say we are focused on outcomes that will make a difference. We’re ok from here, right? Er, no. Turns out there are several different ways we can go wrong. The first is to focus on rote procedures. You may want execution, but increasingly the situation is such that the decisions are too complex to trust a completely prescribed response. If it’s totally predictable, you automate it!
Otherwise, you have two options. One is to provide sufficient practice, as they do with airline pilots and heart surgeons. If lives aren’t on the line and failure isn’t as expensive as training, the other is to focus on model-based instruction, where you develop the performer’s understanding of what underlies the decisions about how to respond. The latter gives you a basis for reconstructing an appropriate response even if you forget the rote approach. I recommend it in general, of course.
Which brings up another way learning designs go wrong. Sufficient practice, as mentioned above, would suggest repeating until you can’t get it wrong. What we tend to see, however, is practice until you get it right. And that isn’t sufficient. Of course, I’m talking real practice, not a knowledge test à la multiple-choice questions. Learners need to perform!
We don’t see sufficient examples, either. While we don’t want to overwhelm our learners, we do need sufficient contexts to abstract across. And it doesn’t have to occur in just one day; indeed, it shouldn’t! We need to space the learning out for anything more than the most trivial of learning. Yet the ‘event’ model of learning, crammed into one session, is much of what we see.
The final way many designs fail is to ignore the emotional side of the equation. This manifests itself in several ways, including introductions, examples, and practice. Too often, introductions let you know what you’re about to endure, without considering why you should care. If you’re not communicating the value to the learner, why should they care? I reckon that if you don’t convey the WIIFM, you’d better not expect any meaningful outcomes. There are more nuances here (e.g. activating relevant knowledge, etc.), but this is the most egregious.
In examples and practice, too, the learner should see the relevance of what is being covered to what they know is important and what they care about. These are two important and separate things. What they see should be real situations where the knowledge being addressed plays a real role. Then they should also care about the examples personally.
It’s hard to address all the elements, but aligning them is critical to achieving well-designed, not just well-produced, learning. Are you really making the necessary distinctions?