I recently wrote about elearning garbage, and in case I had any doubts about my assessment, today’s task made the problem quite clear. I was asked to be one of the judges for an elearning contest. Seven courses were identified as ‘finalists’, and my task was to review each and assign points in several categories. Only one was worthy of release, and only one other even earned a passing grade. This is a problem.
Let me get the good news out of the way first. The winner (in my mind; the overall findings haven’t been tabulated yet) did a good job of immediately placing the learner in a context with a meaningful task. It was compelling stuff, with very real examples and meaningful decisions. Real-world resources were to be used to accomplish the task (I cheated; I got by on just the information in the scenarios), and mistakes were guided back toward the correct answer. There was enough variety in the situations faced to cover the real range of possibilities. If I had to put this information into practice in the real world, it might well stick around.
On the other hand, there were the six other projects. When I look at my notes, some common problems emerge. Not every problem showed up in every one, but each was seen again and again. Importantly, it could easily be argued that several were appropriately instructionally designed, in that they had clear objectives and presented information and an assessment on that information. Yet they were still unlikely to produce any meaningful change in ability. There’s more to instructional design than stipulating objectives, dumping knowledge, and then immediately testing against those objectives.
The first problem is that most of the objectives were information objectives. There was no clear focus on doing anything meaningful, only on the ability to ‘know’ something. And while in some cases the learner might be able to pass the test (either because they could keep trying ’til they got it right, or because the alternatives to the right answer were mind-numbingly dumb; both leading to meaningless assessment), this information wasn’t going to stick. So we’ve really got two initial problems here: bad objectives and bad assessment.
In too many cases, too, there was no context for the information; no indication of how it connected to the real world. It was simply “here’s this information”. And, of course, a single pass over a fairly large quantity of it, with the unreasonable and unrealistic expectation that it would stick. Again, two problems: lack of context and lack of chunking. And, of course, tests on random factoids that there was no particular reason to remember.
But wait, there’s more! In no case was there a conceptual model to tie the information to. Instead of an organizing framework, the information was presented as an essentially random collection. That’s not a good basis for any ability to regenerate the information later. It’s as if they didn’t really care whether the information actually stuck around after the learning experience.
Then there was a myriad of individual little problems: bad audio in two, dull and dry writing pretty much across the board, and fixed timing that of course meant you were either waiting on the program or it wasn’t waiting for you. The graphics were largely amateurish.
And these were the finalists! Some addressed important outcomes. We can’t let this continue; people are frankly throwing money away. It’s a big indictment of our field that this sort of thing continues to be widespread. What will it take?