Of late, I’ve been both reviewing eLearning and designing processes & templates. As I’ve said before, the differences between well-designed and well-produced eLearning are subtle, but important. Reading a forthcoming book that outlines the future but recounts the past, it occurred to me that it may be worthwhile to look at a continuum of possibilities.
For the sake of argument, let’s assume the work is well-produced, and explore some levels of differentiation in the quality of the learning design. Specifically, let’s talk about a lack of worthwhile objectives, a lack of models, insufficient examples, insufficient practice, and a lack of emotional connection. These combine into several levels of quality.
The first level is where there aren’t any learning objectives, or at least no good ones. Here we’re talking about waffly objectives like ‘understand’, ‘know’, etc. Look, I’m not a behaviorist, but I think *when* you have formal learning goals (and that’s not as often as we deliver them), you bloody well ought to have some pretty meaningful description around them. Instead, what we see is the all-too-frequent knowledge dump and knowledge test.
Which, by the way, is a colossal waste of time and money. Seriously, you are, er, throwing away money if that’s your learning solution. Rote knowledge dump and test reliably lead to no meaningful behavior change. We even have a label for it in cognitive science: “inert knowledge”.
So let’s go beyond meaningless objectives, and say we are focused on outcomes that will make a difference. We’re OK from here, right? Er, no. It turns out there are several different ways we can go wrong. The first is to focus on rote procedures. You may want reliable execution, but increasingly the decisions are too complex to trust to a completely prescribed response. If it’s totally predictable, you automate it!
Otherwise, you have two options. You can provide sufficient practice, as they do with airline pilots and heart surgeons. Or, if lives aren’t on the line and failure isn’t as expensive as the training, you should focus on model-based instruction, where you develop the performer’s understanding of what underlies the decisions about how to respond. The latter gives you a basis for reconstructing an appropriate response even if you forget the rote approach. I recommend it in general, of course.
Which brings up another way learning designs go wrong. Sufficient practice, as mentioned above, means repeating until you can’t get it wrong. What we tend to see, however, is practice until you get it right. And that isn’t sufficient. Of course, I’m talking about real practice, not knowledge tests à la multiple-choice questions. Learners need to perform!
We don’t see sufficient examples, either. While we don’t want to overwhelm our learners, we do need enough contexts to abstract across. And it doesn’t have to occur in just one day; indeed, it shouldn’t! We need to space learning out for anything beyond the most trivial of outcomes. Yet the ‘event’ model of learning, crammed into one session, is much of what we see.
The final way many designs fail is to ignore the emotional side of the equation. This manifests itself in several ways, including introductions, examples, and practice. Too often, introductions let you know what you’re about to endure without considering why you should care. If you’re not communicating the value to the learner, why should they care? I reckon that if you don’t convey the WIIFM (what’s in it for me), you’d better not expect any meaningful outcomes. There are more nuances here (e.g. activating relevant knowledge), but this is the most egregious.
In examples and practice, too, the learner should see the relevance of what is being covered to what they know is important and to what they care about. Those are two distinct things. What they see should be real situations in which the knowledge being addressed plays a real role, and they should also care about the examples personally.
It’s hard to address all these elements, but aligning them is critical to achieving well-designed, not just well-produced, learning. Are you really making the necessary distinctions?
Julie Dirksen says
LOVE. I think developing some heuristics around this is a really useful notion – particularly around sufficient practice.
Julie Dirksen says
Also, I’m wondering if there’s a way to make some of this concrete for people in terms of guidelines, without being rigidly prescriptive?
Clark says
Julie, I think making guidelines is not only a good idea, but practically necessary. There’re lots of good materials (e.g. your book), but perhaps a checklist?
Steve says
I dig it. I get a small sense that this could be like measuring the quality of a symphony by the note or the instrument, but I think there’s definitely value in consistent assessment of quality. It could at least get us to collectively examine the turds by the same standard. We’re notoriously poor at accurately evaluating our own work in this field. Anything that gets folks to reflect and improve is a welcome addition :)
Maybe a rubric for evaluating quality facets across varied categories? Major categories might include overall execution, technical, instructional, and communication. For example, under instructional: something like methods and strategy, focus and relevance, clear connection to business outcomes, and clear connection to audience.
I think there’s value in having a consistent measurement device, maybe even a grading mark or seal.
Brandon Carson says
Really great post, and to the point. One of the issues I see is less about a good ID designing effective instruction and more about the wrong person being in the role of either designing the instruction or deciding how it’s designed and implemented. I’ve consulted and worked in large and small environments where I see this all too often. I also think the broad dissemination of “rapid” tools and processes has diminished the due diligence in analyzing what intervention is needed to appropriately affect the situation. We’re always in a hurry, it seems, to produce “something” that can be checked off a list. However, when the stars align and you have the right person in the right place trying to do the right thing, it can still be an effort to navigate the waters of corporate chaos to uncover the true “reason” for the intervention. My mantra is less is more, and too many times I’m asked to create more without the proper foundation. I know it sounds tired, but some of us in the corporate space need to grab our pitchforks and march on the C-suite and argue and fight to be heard: we are here to support performance, so get out of the way and let those of us who know what we’re doing just do it. And if you have on staff those who don’t know or are making wrong decisions, get rid of them.
Clark says
Steve, I’m largely focused on getting the design right (“if you get the design right, there are lots of ways to implement it; if you don’t get the design right, it doesn’t matter how you implement it” is one of my mantras), so that’s the core focus of any checklist I’d be concerned with, but broader rubrics would likely be useful as well. I was just reading an article about how a well-regarded piece of educational software, upon testing, was found to be riddled with serious usability issues (!). I used to write articles about how ed tech could benefit from the usability field, and I suspect we’re still not doing basic usability testing!
Brandon, I’d agree that many people are responsible for designing without having the necessary background, and that folks are more concerned about costs than about effectiveness. The amount of money that I reckon is wasted on ‘spray and pray’ (aka ‘show up and throw up’), whether training or elearning, is mind-boggling. And a lot of us keep railing about it, with little change that I can see. Sigh.
Steve says
Gotcha. I do break design down into three categories:
– Instructional – these are the instruction-specific dimensions, including: Is a performance requirement defined? Are specific tasks clearly mapped to the performance requirement? Are skills and subtasks (overt and covert) identified? Are practice opportunities highlighted? Are measurement opportunities highlighted? (http://androidgogy.com/?attachment_id=602 << graphic from a presentation last year)
– Technical – this includes dimensions like technology selection, integration design, etc.
– Communication – this includes dimensions like visual design, theme selection / consistency, written copy, etc.
Part of the problem with many products is that folks try to "all-in-one" it in areas where they might be weakest. For our contract product evaluations, we broke things down this way. We still see weaknesses in some areas come through, even when the eval rubric is exposed ahead of time. I'm convinced that breaking things down into verticals is a good way to approach this type of evaluation :)
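For what it’s worth, here is a minimal, purely illustrative sketch of how the checklist/rubric idea from this thread might be captured as data plus a simple scoring pass. The categories and criteria paraphrase the post and Steve’s breakdown; the names (RUBRIC, score_design), the 0–4 scale, and the equal weighting are all assumptions for illustration, not anything actually proposed above.

```python
# Hypothetical sketch of the checklist/rubric idea discussed in the comments.
# Categories and criteria paraphrase the post and Steve's breakdown; the
# names, the 0-4 scale, and the equal weighting are invented for illustration.

RUBRIC = {
    "Instructional": [
        "Are meaningful (non-'understand/know') performance objectives defined?",
        "Is there a model underlying the procedure, not just rote steps?",
        "Are there sufficient, varied examples to abstract across?",
        "Is practice repeated (and spaced) until learners can't get it wrong?",
        "Is the WIIFM conveyed in the intro, examples, and practice?",
    ],
    "Technical": [
        "Are technology selection and integration design appropriate?",
    ],
    "Communication": [
        "Are visual design, theme consistency, and written copy sound?",
    ],
}

def score_design(ratings):
    """Average 0-4 ratings per category; return overall and per-category scores."""
    per_category = {}
    for category, criteria in RUBRIC.items():
        values = [ratings.get(criterion, 0) for criterion in criteria]
        per_category[category] = sum(values) / len(values)
    overall = sum(per_category.values()) / len(per_category)
    return overall, per_category

if __name__ == "__main__":
    # Example: rate one (imaginary) course at 2 out of 4 on every criterion.
    sample_ratings = {c: 2 for criteria in RUBRIC.values() for c in criteria}
    overall, breakdown = score_design(sample_ratings)
    print(f"Overall: {overall:.1f}")
    for category, value in breakdown.items():
        print(f"  {category}: {value:.1f}")
```

Keeping the criteria as plain questions in a small data structure keeps the rubric easy to revise and extend, which fits Julie’s concern about offering guidelines without being rigidly prescriptive.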