This is one in a series of thoughts on some broken areas of ID that I'm posting for Mondays. I intend to provide insight into the many ways much of instructional design fails, and some pointers for avoiding the problems. The point is not to say 'bad designer', but instead to point out how to do good design.
Really, the key to learning is practice. Learners have to apply knowledge, in the form of skills, to really internalize and 'own' the learning. Knowledge recitation, in the absence of application, leads to what cognitive science calls 'inert knowledge': knowledge that can be recited back, but isn't activated in appropriate contexts.
What we see, unfortunately, is too much knowledge testing, and not enough meaningful application. We see meaningless questions checking whether people can recite memorized facts, with no application of those facts to solve problems. We see alternatives to the right answer that are so obviously wrong that we can pass the test without learning anything! And we see feedback that isn't specific to the deficit. In short, we waste our time and the learner's.
What we want is appropriate challenge, contextualized performance, meaningful tasks, specific feedback, and more.
First, we should have picked meaningful objectives that indicate what learners should be able to do, in what context, to what level, and now we design the practice to determine whether they can do it. Of course, we may need some intermediate tasks to develop their skills at an appropriate pace, providing scaffolding to simplify the task until it's mastered.
We can scaffold in a variety of ways. We can provide tasks with simplified data first, uncomplicated by other factors. We can provide problems with parts already worked, so learners can accomplish the component skills separately and then combine them. We can provide support tools, such as checklists or flowcharts, to assist, and gradually remove them until the learner is capable.
We do need to balance the level of challenge, so that the task gets difficult at the right rate for the learner: too easy, and the learner is bored; too hard, and the learner is frustrated. Don't make it too easy! If it matters, ensure they know it (and if it doesn't, why are you bothering?).
The trick is not only in the inherent nature of the task, but often in the alternatives to the right answer. Learners don't (generally) make random mistakes; they make patterned mistakes that represent inappropriate models they perceive as appropriate. We should choose alternatives to the right answer or choice that represent these misconceptions.
Consequently, we need to provide specific feedback for that particular misconception. That's why any quiz tool that only has one response for all the wrong answers should be tossed out; it's worthless.
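To make that concrete, here's a minimal sketch (in Python, with invented names and content, not any particular quiz tool's API) of a question item where each distractor is tied to the patterned mistake it represents and to feedback that addresses that specific misconception:

```python
from dataclasses import dataclass

@dataclass
class Option:
    text: str
    correct: bool = False
    misconception: str = ""   # the flawed model this distractor represents
    feedback: str = ""        # feedback addressing that specific misconception

# Hypothetical item: each wrong answer maps to a known patterned mistake.
item = {
    "stem": "A customer reports the device won't power on after a firmware update. What do you check first?",
    "options": [
        Option("Confirm the power source and battery charge", correct=True,
               feedback="Right: rule out the simplest cause before assuming the update failed."),
        Option("Immediately reflash the firmware",
               misconception="assumes the most recent change must be the cause",
               feedback="Reflashing before ruling out power risks harming a device that's simply uncharged."),
        Option("Escalate to engineering",
               misconception="skips diagnosis when uncertain",
               feedback="Escalating without basic triage wastes engineering time and delays the customer."),
    ],
}

def respond(item, choice_index):
    """Return feedback specific to the chosen option, not a generic 'wrong, try again'."""
    return item["options"][choice_index].feedback

print(respond(item, 1))
```

The point of the structure is simply that feedback lives on each option, so a single canned "incorrect" response isn't even possible.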
We need to ensure that the setting for the task is of interest to the learner. The contexts we choose should set up problems that the learner viscerally understands are important, and that they are interested in.
We also need to remember, as mentioned with examples, that the contexts seen across both examples and practice determine the space of transfer, so the spread of contexts needs to be kept in mind.
The elements listed here are the elements that make effective practice, but also those that make engaging experiences (hence, the book). That is, games. While the best practice is individually mentored real performance, that doesn't scale well, and the consequences can be costly. The next best practice, I argue, is simulated performance, tuned into a game (not turned, tuned). While model-driven simulations are ideal for a variety of reasons (essentially infinite replay, novelty, adaptive challenge), the approach can be simplified to branching or linear scenarios. If nothing else, just write better multiple choice questions!
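As an illustration of the simpler end of that spectrum, here's a minimal sketch of a branching scenario as plain data, again with invented names and content rather than any particular authoring tool's format: each node is a decision in context, and each choice carries its own consequence and leads to a next node.

```python
# A tiny branching scenario: nodes are decision points; choices carry
# consequences (feedback delivered in the world of the story) and a next node.
scenario = {
    "start": {
        "situation": "The client calls: the report you delivered has numbers that don't match theirs.",
        "choices": [
            {"text": "Apologize and promise a corrected report by tomorrow",
             "consequence": "You've committed before diagnosing; the deadline may be unrealistic.",
             "next": "overcommitted"},
            {"text": "Ask which figures differ and walk through the source data together",
             "consequence": "You narrow the discrepancy to one data feed.",
             "next": "diagnosed"},
        ],
    },
    "overcommitted": {"situation": "It's 6pm and the root cause still isn't clear...", "choices": []},
    "diagnosed": {"situation": "With the feed identified, you agree on a realistic fix date.", "choices": []},
}

def play(scenario, node_id="start"):
    """Walk the scenario in a console; it ends when a node has no choices."""
    node = scenario[node_id]
    print(node["situation"])
    while node["choices"]:
        for i, choice in enumerate(node["choices"]):
            print(f"  {i}. {choice['text']}")
        picked = node["choices"][int(input("Choose: "))]
        print(picked["consequence"])
        node = scenario[picked["next"]]
        print(node["situation"])

# play(scenario)  # uncomment to run interactively
```

Even this bare structure embodies the earlier points: choices represent plausible misconceptions, and consequences are feedback specific to the choice made.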
Note that, here, practice encompasses both formative and summative assessment. In either case, the learner is performing; it's just a matter of whether you evaluate and record that performance to determine what the learner is capable of. I reckon assessment should always be formative, helping the learner understand what they know. And summative assessment, in my mind, has to be tied back to the learning objectives, seeing if they can now do what they need to be able to do; that's the difference.
If you create meaningful, challenging, contextualized performance, you make effective practice. And that's key to behavior change, and to learning. So practice making perfect practice, because practice makes perfect.
Dave Ferguson says
“Patterned mistakes” is a good phrase; I remember referring to “expected incorrect answers” (because that was the term our CBT system used). These were valuable sources of information for the designer/instructor; by examining them, especially a group of them, you could figure out why people tended to make that answer. Maybe the explanation wasn’t clear; maybe the examples were misleading; maybe the exercise exercised the wrong things.
I heard Jim Fuller speak once; he said that he had been a self-taught golfer, and by the time he took lessons, his bad habits were deeply ingrained. His phrase was that practice doesn’t make perfect, it makes permanent. His point was the same as the one you make about feedback. Sometimes the ‘aha’ moment doesn’t come, or doesn’t come before a person’s patience runs out.
One benefit of objectives, scaffolding, and feedback working together is that the person comes to expect that he can succeed, and that the situation itself will help him learn how.