For reasons, I’ve been looking at multiple-choice questions (MCQs). Of course, to write them well, you should look to Patti Shank’s book Write Better Multiple-Choice Questions. And there’s clearly a need! Why? Because when it comes to writing meaningful MCQs, I want to move us from knowledge to performance. And the vast majority of the questions I found didn’t do that.
To start, I’ll point, as I often do, to Pooja Agarwal’s research (plays to my bias ;). She found that asking high-level questions (e.g., application questions, or mini-scenarios as I like to term them) leads to the ability to answer high-level questions (i.e., to do). She tested low-level questions alone, high-level alone, and low + high. What she found was that to pass high-level tests, you needed high-level questions. Further, low-level questions didn’t add anything; they simply weren’t necessary. I’ll also suggest that what our learners and our organizations need is the ability to apply knowledge in high-level ways.
Yet, when I look at what’s out there, I continually see knowledge questions. They violate, btw, many principles of good multiple-choice questions (hence Patti’s book ;). These questions often have silly or obvious alternatives to the right answer. They have alternatives of inconsistent length, and too many of them (three is usually ideal, including the right answer). We also see a lack of feedback: just ‘right’ or ‘wrong’, not anything meaningful. We also see too many questions, or incomplete coverage, and arbitrary passing criteria (why 80%?). Then, too, there are the absolutes (never/always, etc.), which aren’t the way to go. Perhaps worst, they don’t always focus on anything meaningful, but instead query random information that was in no way signaled as important.
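To make those principles concrete, here’s a minimal sketch (in Python, purely illustrative; the structure and field names are my own assumptions, not any particular authoring tool’s format) of what a mini-scenario question with three plausible alternatives and per-alternative feedback might look like:

```python
# A minimal, hypothetical structure for a mini-scenario question.
# Names and fields are illustrative, not from any particular tool.

from dataclasses import dataclass

@dataclass
class Alternative:
    text: str         # a plausible action, similar in length to the others
    is_correct: bool
    feedback: str     # meaningful feedback tied to the underlying (mis)conception

scenario_question = {
    "stem": (
        "A client reports that learners pass the course quiz but still "
        "make the same errors on the job. What do you recommend first?"
    ),
    # Three alternatives total; the distractors are drawn from
    # likely misconceptions, not silly or obvious throwaways.
    "alternatives": [
        Alternative(
            "Replace knowledge checks with mini-scenarios that mirror job decisions",
            True,
            "Right: practicing the decision is what transfers to performance.",
        ),
        Alternative(
            "Raise the passing score from 80% to 90%",
            False,
            "A higher cutoff on knowledge questions still only predicts "
            "knowledge recall, not application.",
        ),
        Alternative(
            "Add more knowledge questions to increase coverage",
            False,
            "More low-level questions don't add to high-level ability "
            "(per Agarwal's findings).",
        ),
    ],
}
```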
Now, I suppose I can’t say that knowledge questions should always be avoided. There can be diagnostic value in checking that the knowledge is there (e.g., to see why learners are getting something wrong). I’d suggest, however, that they’re way overused. Moreover, we can do better. It’s even relatively easy (though not effortless).
What we have learners do is what’s critical for their effective learning. If we care (and we should), that means we need to make sure that what they do leads to the outcomes our organizations need. Which means that we need lots of practice: deliberate practice, with desirable difficulty, spaced out over time. We need reactivation, for sure. But what we do to reactivate dictates what we’ll be able to do. If we ask people knowledge questions, they’ll be able to answer knowledge questions. But that has been shown not to lead to the ability to apply that knowledge to make decisions: solve problems, design solutions, generate better practices.
So, we can do better. We must do better. That is, if we want to actually assist our organizations. If we’re talking skilling (up-, re-, etc.), we’re talking high-level questions. On the way, perhaps (and recommended), to more rigorous assessment (branching scenarios, sims, mentored practice, coaching, etc.). Regardless, we want what we have learners do to be meaningful. When we’re moving from knowledge to performance, it’s critical. And that’s what I believe we should be doing.
(BTW, technology’s an asset, but not a solution. As I like to say:
If you get the design right, there are lots of ways to implement it; if you don’t get the design right, it doesn’t matter how you implement it.)