I recently wrote about elearning garbage, and in case I doubted my assessment, today’s task made the problem quite clear. I was asked to be one of the judges for an elearning contest. Seven courses were identified as ‘finalists’, and my task was to review each and assign points in several categories. Only one was worthy of release, and only one other even made a passing grade. This is a problem.
Let me get the good news out of the way first. The winner (in my mind; the overall findings haven’t been tabulated yet) did a good job of immediately placing the learner in a context with a meaningful task. It was very compelling stuff, with very real examples and meaningful decisions. Real-world resources were to be used to accomplish the task (I cheated; I did it just from the information in the scenarios), and mistakes were guided toward the correct answer. There was enough variety in the situations faced to cover the real range of possibilities. If I were to start putting this information into practice in the real world, it might stick around.
On the other hand, there were the six other projects. When I look at my notes, there were some common problems. Not every problem showed up in every one, but all were seen again and again. Importantly, it could easily be argued that several were appropriately instructionally designed, in that they had clear objectives, and presented information and assessment on that information. Yet they were still unlikely to achieve any meaningfully different abilities. There’s more to instructional design than stipulating objectives and then doing a knowledge dump with an immediate test against those objectives.
The first problem is that most of the objectives were information objectives. There was no clear focus on doing anything meaningful, but instead on the ability to ‘know’ something. And while in some cases the learner might be able to pass the test (either because they could keep trying ’til they got it right, or the alternatives to the right answer were mind-numbingly dumb; both leading to meaningless assessment), this information wasn’t going to stick. So we’ve really got two initial problems here: bad objectives and bad assessment.
In too many cases, also, there was no context for the information; no indication of how it connected to the real world. It was just “here’s this information”. And, of course, there was one pass over a fairly large quantity of it, with the unreasonable and unrealistic expectation that it would stick. Again, two problems: lack of context and lack of chunking. And, of course, tests for random factoids that there was no particular reason to remember.
But wait, there’s more! In no case was there a conceptual model to tie the information to. Instead of an organizing framework, information was presented as essentially random collections. Not a good basis for any ability to regenerate the information. It’s as if they didn’t really care if the information actually stuck around after the learning experience.
Then there were a myriad of individual little problems: bad audio in two, dull and dry writing pretty much across the board, even timing problems that of course meant you were either waiting on the program, or it wasn’t waiting for you. The graphics were largely amateurish.
And these were finalists! Some with important outcomes. We can’t let this continue, as people are frankly throwing money on the ground. This is a big indictment of our field, as it continues to be widespread. What will it take?
Mary Gutwein says
Clark – thanks for this post! You hit the nail right on the head, so to speak! I am a developer, and “click ‘n read” training has been very common in our area. Many times, we are not given the time to create something that would be meaningful (read: something that makes the learning “stick”). This year, however, interactivity will be the rule, not the exception. Excellent post!
Lisa Chamberlin says
Clark,
You’re speaking my language. This has been my argument regarding much of the slide-deck information dumps passing as elearning these days. Without some context and authenticity to the tasks, the learning outcome is really an improved ability to click (due to the practice) and print (if certificate is offered at the end for proof of completion). Great post.
Lisa
JC Kinnamon says
Clark,
I remember being aghast once when looking at a demo of some award-winning e-learning programs with some colleagues. We were doing it as part of our own internal professional development–see what is out there; how do we raise our own level, that kind of thing. Meaningless interactivity and beautiful 3-D graphics do not make an award-winning program–but you wouldn’t have known that judging by the winners. When another “learning” colleague and I had finished trashing the “winners” I noticed some people in our group were quiet. The art director said, “I kind of liked them.” “Me too,” chimed in one of our sr. developers. It was quite a revelation. Ours is a unique and complex industry in which many talented people have to contribute in order to achieve success. Though what constitutes a winner may be subjective, I agree that the learning/performance improvement goals have to weigh heavily in the judging.
David Glow says
Even beyond the standard fare of “bore and score”, I find that the more developed elearning today clearly puts effort into the sizzle, not the steak.
I hope as an industry we can start to refocus on meaning more than media.
brainysmurf says
Thanks, Clark, this is eerily familiar. Of the finalists you reviewed, was there any compelling reason given as to why e-learning was the chosen methodology rather than some kind of on-the-job intervention?
Karen Hicks says
Unfortunately, eLearning needs a paradigm shift. Typically, the people making decisions about what eLearning should be aren’t the developers, but management teams who haven’t done any research on where eLearning is going. So we keep getting the “information dump” eLearning programs. I know in my company, I did a presentation on where eLearning is going, and I took some of our information-dump eLearning and changed it into an activity-based module. No one liked it or bought into it. So, instead of having interactive eLearning, we have information dumping. Very sad indeed that people can’t embrace change and see something different.
Rob Stevens says
Clark, I’m curious if there was any information on the backstory of these courses? Budget? Timeline? Decision making process on why eLearning was chosen? Development process? I’m a little slower to throw stones. As a custom developer, we are sometimes put in a position where our recommendations are not accepted and we follow the requirements of our client. Our hope is that over time, we can influence our clients to make better learning decisions. Of course, we could simply put a stake in the ground and refuse to follow the requirements but that seems like a certain way to end the relationship.
Clark says
Thanks for the feedback. BrainySmurf, no, we weren’t told the reason it was elearning, but to your and Rob’s point, these were NGOs, clearly not-for-profits serving the under-developed world. As a consequence, they probably were running on a budget, and trying to reach vastly dispersed populations. All the info we, as judges, had in addition to the courses was a separate list of objectives. The objectives varied but often were just informational (a fail, right there). And even the winner wasn’t high production value, just high design. And that’s the point: if you get the design right, there are lots of ways to implement it, including on the cheap. And if you don’t get the design right, it doesn’t matter how you implement it. Most of them failed at the design stage: let’s just run a bunch of facts and figures by them and then test. That’s the form of ‘instructional design’ I’m going to rail against. I believe that you could’ve done more meaningful things with the same budget. And if information/knowledge *was* what you were trying to convey, you still couldn’t achieve it with one large knowledge dump and test.
sandra says
Excellent points all around, on budgets and less than ideal constraints placed on the developers. I especially like Clark’s last comment as it matches my own thoughts – it’s all about the content. If it is garbage, no amount of interactivity, swirling 3D imaging, or enticing visuals will change that.
So – how do we get the focus back on that design level? How do we evangelize that core principle and learn to get that part right?
Mark Jones says
The eLearning Network (UK) is waging a campaign to wipe out ineffective, inefficient, flabby elearning. We want great elearning and we want it now! http://www.elearningnetwork.org/content/campaign-effective-elearning
hashtag #c4ee
Millie Vilaplana says
Karen’s and Rob’s comments especially resonate with my experience. Time and time again I have had to re-do relevant, better-chunked content into knowledge dumps, either because my client said his leader wouldn’t understand the interactivity and would like to see all their website content and terminology in one fell swoop, or because my boss’s manager insisted on more text and bullet points on each slide (!!!)
I believe we have no choice but to educate, educate, educate the general public, T&D directors, business executives, etc. Let’s unite, show some meaningful metrics, and make the case to them!… In the meantime, I’ll be joining the UK eLearning Network.
Moira de Roche says
As my good friend John Ambrose would say, Content is King, but CONTEXT is the Kingdom. It’s not clear whether the courses in question were internally developed or generic off-the-shelf. If the former, then I’m not that surprised. Many people end up developing courses (both e-learning and classroom) with absolutely no understanding of instructional design. You can get away with this in the classroom, but not with e-learning.