I’ve talked in the past about mini-scenarios. By this, I mean rewriting multiple-choice questions (MCQs) so that they become situations requiring decisions, with the choices representing those decisions. I evangelize this regularly. I’ve also talked about what you need from subject matter experts (SMEs). What I haven’t really done is talk about how you map that information to mini-scenarios. So it’s time to remedy that.
So, first, let’s talk about the structure of a mini-scenario. I’ve suggested that it’s an initial context or story, in which a situation precipitates the need for a decision. There’s the right decision, and then alternatives. Not random or silly ones, but ones that represent ways in which learners reliably go wrong. There’s also feedback, which is best delivered as story-based consequences first, then actual conceptual feedback.
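To make the pieces concrete, here’s a minimal sketch of that structure as data. This is my own illustration, not the schema of any particular authoring tool; the class and field names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    """One selectable decision: the right one, or a misconception-based alternative."""
    text: str
    is_correct: bool
    consequence: str   # story-based consequence, shown first
    explanation: str   # conceptual feedback, tied to the underlying model

@dataclass
class MiniScenario:
    context: str    # the setting/story ("Pat had been recently promoted...")
    situation: str  # what precipitates the decision
    options: list[Option] = field(default_factory=list)
```

The point of the sketch is simply that every element discussed below (context, precipitating situation, correct answer, misconception-based distractors, per-option feedback) has a slot.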
So what’s the mapping? One of the things we (should) get from SMEs is the contexts in which these decisions come into play. Thus, the setting for the mini-scenario is one of these contexts. It may be made fantastic in story, but the necessary contextual elements have to exist. (“Pat had been recently promoted to line supervisor…”)
Then, we have the decisions the learners need to be able to make. These often come in the form of performance objectives. They form the basis for choosing a situation that precipitates the decision, and the decision itself. (“The errors in manufacturing were higher than the production agreement stipulated. Pat:”) They also give you, at a minimum, the correct answer. (* worked backward through the process.)
The wrong answers come from other information we need from SMEs: misconceptions. These are the ways that individuals go wrong when performing. I’ve advocated before that you may want different types of SMEs; it may be that supervisors of the performers have more insight here than content experts. Regardless, you want to make these alternatives available as possible responses. You’ll want to tune the difficulty of discriminating between alternatives as a way to manipulate the challenge of the task; it should be appropriate to the learners’ level. (* asked team members what they thought the problem was; * exhorted the team to pay more attention to quality.)
The feedback starts with the consequences, which you should also get from SMEs. What happens when you get it right? What happens with each wrong answer? These may come from stories about wins and losses that you also want to collect. (“Pat’s team did not like the implicit claim that they weren’t working hard enough.”)
Finally, there are the models that are the basis for good performance, and consequently also the basis for the feedback. These you should also collect, because you use them to explain why a choice is good or bad. You don’t want to just say right or wrong; learners need to understand the underlying reason to reinforce their understanding. (Which may mean they also need to see their answer alongside the feedback, so they remember what they chose.) Importantly, they need specific feedback for each wrong answer, so your implementation tool needs to support that! (“When investigating errors, don’t start with the team. We always look at the process first, as system flaws need to be eliminated first.”)
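That requirement — a distinct consequence and conceptual explanation per option, with the learner’s choice echoed back — can be sketched as a simple lookup. The structure and function name are my own; the option and feedback texts are drawn or paraphrased from the Pat example, and the correct-answer consequence is an illustrative placeholder:

```python
# Each option stores its own story consequence and conceptual feedback,
# so the tool must support per-distractor feedback, not one generic message.
options = {
    "worked backward through the process": {
        # Illustrative placeholder consequence for the correct answer:
        "consequence": "Pat traced the errors to a specific step in the process.",
        "explanation": ("When investigating errors, don't start with the team. "
                        "Look at the process first, as system flaws need to be "
                        "eliminated first."),
    },
    "exhorted the team to pay more attention to quality": {
        "consequence": ("Pat's team did not like the implicit claim that they "
                        "weren't working hard enough."),
        "explanation": ("Exhortation assumes a people problem; system flaws "
                        "need to be ruled out before looking at the team."),
    },
}

def feedback_for(choice: str) -> str:
    """Echo the choice, then the story consequence, then the conceptual feedback."""
    entry = options[choice]
    return "\n".join([f"You chose: {choice}",
                      entry["consequence"],
                      entry["explanation"]])
```

If your tool can only attach one feedback string to “wrong,” this mapping is exactly what you lose.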
Pretty much everything you need from SMEs plays a role in providing practice. Mini-scenarios aren’t necessarily the best practice, but they’re typically available in your authoring environment. Writing them isn’t as easy as generating typical recognition questions, but they more closely mimic the actual task, and therefore lead to better transfer. Plus, you’ll get better as you practice. So know the mapping of information to mini-scenarios, practice your mini-scenario writing, and put it into play!