A few months back, the esteemed Dr. Will Thalheimer encouraged me to join him in a blog dialog, and we posted the first one on who L&D has responsibility to. And while we took the content seriously, I can't say our approach was similarly solemn. We decided to continue, and here's the second in the series, this time trying to look at what might be hindering the opportunity for design to get better. And again, a serious convo leavened with a somewhat demented touch:
Will, we've suffered Fear and Loathing on the Exhibition Floor over the state of the elearning industry before, but I think it's worth looking at some causes and maybe even some remedies. What is the root cause of our suffering? I'll suggest it's not massive consumption of heinous chemicals, but instead think that we might want to look to our tools and methods.
For instance, rapid elearning tools make it easy to take PPTs and PDFs, add a quiz, and toss the resulting knowledge test and dump over to the LMS to lead to no impact on the organization. Oh, the horror! On the other hand, processes like ADDIE make it easy to take a waterfall approach to elearning, mistakenly trusting that 'if you include the elements, it is good' without understanding the nuances of what makes the elements work. Where do you see the devil in the details?
Clark my friend, you ask tough questions! This one gives me Panic, creeping up my spine like the first rising vibes of an acid frenzy. First, just to be precise—because that's what we research pedants do—if this fear and loathing stayed in Vegas, it might be okay, but as we've commiserated before, it's also in Orlando, San Francisco, Chicago, Boston, San Antonio, Alexandria, and Saratoga Springs. What are the causes of our debauchery? I once made a list—all the leverage points that prompt us to do what we do in the workplace learning-and-performance field.
First, before I harp on the points of darkness, let me twist my head 360 and defend ADDIE. To me, ADDIE is just a project-management tool. It's an empty baseball dugout. We can add high-schoolers, Poughkeepsie State freshmen, or the 2014 Red Sox and we'd create terrible results. Alternatively, we could add World Series champions to the dugout and create something beautiful and effective. Yes, we often use ADDIE stupidly, as a linear checklist, without truly doing good E-valuation, without really insisting on effectiveness, but this recklessness, I don't think, is hardwired into the ADDIE framework—except maybe the linear, non-iterative connotation that only a minor-leaguer would value. I'm open to being wrong—iterate me!
Your defense of ADDIE is admirable, but is the fact that it's misused perhaps reason enough to dismiss it? If your tool makes it easy to lead you astray, like the alluring temptation of a forgetful haze, is it perhaps better to toss it in a bowl and torch it rather than fight it? Wouldn't the Successive Approximation Model be a better formulation to guide design?
Certainly the user experience field, which parallels ours in many ways and leads in some, has moved to iterative approaches specifically to help align efforts to demonstrably successful approaches. Similarly, I get 'the fear' and worry about our tools. Like the demon rum, the temptations to do what is easy with certain tools may serve as a barrier to a more effective application of the inherent capability. While you can do good things with bad tools (and vice versa), perhaps it's the garden path we too easily tread and end up on the rocks. Not that I have a clear idea (and no, it's not the ether) of how tools would be configured to more closely support meaningful processing and application, but it's arguably a collection worth assembling. Like the bats that have suddenly appeared…
I'm in complete agreement that we need to avoid models that send the wrong messages. One thing most people don't understand about human behavior is that we humans are almost all reactive—only proactive in bits and spurts. For this discussion, this has meaning because many of our models, many of our tools, and many of our traditions generate cues that trigger the wrong thinking and the wrong actions in us workplace learning-and-performance professionals. Let's get ADDIE out of the way so we can talk about these other treacherous triggers. I will stipulate that ADDIE does tend to send the message that instructional design should take a linear, non-iterative approach. But what's more salient about ADDIE than linearity and non-iteration is that we ought to engage in Analysis, Design, Development, Implementation, and Evaluation. Those aren't bad messages to send. It's worth an empirical test to determine whether ADDIE, if well taught, would automatically trigger linear non-iteration. It just might. Yet, even if it did, would the cost of this poor messaging overshadow the benefit of the beneficial ADDIE triggers? It's a good debate. And I commend those folks—like our comrade Michael Allen—for pointing out the potential for danger with ADDIE. Clark, I'll let you expound on rapid authoring tools, but I'm sure we're in agreement there. They seem to push us to think wrongly about instructional design.
I spent a lot of time looking at design methods across different areas – software engineering, architecture, industrial design, graphic design, the list goes on – as a way to look for the best in design (just as I've looked across engagement disciplines, learning approaches, and more; I can be kinda, er, obsessive). I found that some folks have 3-step models, some 4, some 5. There's nothing magic about ADDIE as 'the' five steps (though having *a* structure is of course a good idea). I also looked at interface design, which has arguably the most alignment with what elearning design is about, and they've avoided some serious side effects by focusing on models that put the important elements up front, so they talk about participatory design, and situated design, and iterative design as the focus, not the content of the steps. They have steps, but the focus is on an evaluative design process. I'd argue that's your empirical design (that or the fumes are getting to me). So I think the way you present the model does influence the implementation. If advertising has moved from fear motivation to aspirational motivation (cf. Sachs's Winning the Story Wars), so too might we want to focus on the inspirations.
Yes, let's get back to tools. Here's a pet peeve of mine. None of our authoring tools—as far as I can tell—prompt instructional designers to utilize the spacing effect or subscription learning. Indeed, most of them encourage—through subconscious triggering—a learning-as-an-event mindset.
For our readers who haven't heard of the spacing effect, it is one of the most robust findings in learning research. It shows that repetitions that are spaced more widely in time support learners in remembering. Subscription learning is the idea that we can provide learners with learning events of very short duration (less than 5 or 10 minutes), and thread those events over time, preferably utilizing the spacing effect.
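To make the idea concrete, here's a minimal sketch (my own illustration, not taken from any authoring tool, and the intervals are purely for show) of how a subscription-learning scheduler might space repetitions, with the gap widening after each event:

```python
from datetime import date, timedelta

def spaced_schedule(start, repetitions, first_gap_days=1, factor=2.0):
    """Return dates for a series of short learning events, with each
    gap wider than the last -- a crude nod to the spacing effect."""
    dates = []
    gap = first_gap_days
    current = start
    for _ in range(repetitions):
        dates.append(current)
        current = current + timedelta(days=round(gap))
        gap *= factor  # widen the spacing before the next repetition
    return dates

# Five sub-10-minute events, threaded over roughly two weeks
for d in spaced_schedule(date(2024, 1, 1), 5):
    print(d.isoformat())
```

The doubling factor is arbitrary; real spaced-practice systems tune intervals to the learner and the material, but the shape is the same: short events, stretching apart over time.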
Do you see the same thing with these tools—that they push us to see learning as a longer-than-necessary bong hit, when tiny puffs might work better?
Now we’re into some good stuff! Yes, absolutely; our tools have largely focused on the event model, and made it easy to do simple assessments. Not simple good assessments, just simple ones. It’s as if they think designers don’t know what they need. And, as the success of our colleague Cammy Bean’s book The Accidental Instructional Designer shows, they may be right. Yet I’d rather have a power tool that’s incrementally explorable but scaffolds good learning than one that ceilings out just when we’re getting to somewhere interesting. Where are the templates for spaced learning, as you aptly point out? Where are the tools to make two-step assessments (first tell us which is right, then why it’s right, as Tom Reeves has pointed us to)? Where are more branching scenario tools? They tend to hover at the top end of some tools, unused. I guess what I’m saying is that the tools aren’t helping us lift our game, and while we shouldn’t blame the tools, tools that pointed the right way would help. And we need it (and a drink!).
Should we blame the toolmakers then? Or how about blaming ourselves as thought leaders? Perhaps we've failed to persuade! Now we're on to fear and self-loathing…Help me Clark! Or, here's another idea. How about you and I raise $5 million in venture capital and we'll build our own tool? Seriously, it's a sad sign about the state of the workplace learning market that no one has filled the need. Says to me that either (1) the vast cadre of professionals don't really understand the value, or (2) the capitalists who might fund such a venture don't think the vast cadre really understand the value, or (3) the vast cadre are so unsuccessful in persuading their own stakeholders that truth about effectiveness doesn't really matter. When we get our tool built, how about we call it Vastcadre? Help me Clark! Kent you help me Clark? Please get this discussion back on track…What else have you seen that keeps us ineffective?
Gotta hand it to Michael Allen, putting his money where his mouth is, and building ZebraZapps. Whether that's the answer is a topic for another day. Or night. Or… so what else keeps us ineffective? I'll suggest that we're focusing on the wrong things. In addition to our design processes, and our tools, we're not measuring the right things. If we're focused on how much it costs per bum in seat per hour, we're missing the point. We should be measuring the impact of our learning. It's about whether we're decreasing sales times, increasing sales success, solving problems faster, raising customer satisfaction. If we look at what we're trying to impact, then we're going to check to see if our approaches are working, and we'll get to more effective methods. We've got to cut through the haze and smoke (open up that window, sucker, and let some air into this room), and start focusing with heightened awareness on moving some needles.
So there you have it. Should we continue our wayward ways?