As I've been developing online workshops, I've been thinking more about the type of assessment I want. Previously, I made the case for gated submissions. Now I've found another type of interaction I'd like to have. So here's the case for model answers (and a rubric).
As context, many moons ago we developed a course on speaking to the media. This was based upon the excellent work of the principals of Media Skills, and became a case study in my Engaging Learning book. They had been running a face-to-face course, and rather than write a book, they wondered whether something else could be done. I was part of a new media consortium, and was partnered with an experienced CD-ROM developer to create an asynchronous elearning course.
Their workshop culminated in a live interview with a journalist. We couldn't do that, but we wanted to prepare people to succeed at it as an optional next step. Given that talking to the press is something people really fear (apocryphally, more than death), we needed a good approximation. Along with a steady series of exercises, going from recognizing a good media quote to composing one, we wanted learners to have to respond live. How could we do this?
Fortunately, our tech guy came up with the idea of a programmable answering machine. Through a series of menus, you would drill down to someone asking you a question, and then record an answer. We had two levels: in the first, you knew the questions in advance; in the final test, you had a story and its details, but had to respond to unanticipated questions.
This was good practice, but how to provide feedback? Ultimately, we had learners record their answers, then listen to their answer alongside a model answer. What I'd add now is a rubric for comparing your answer to the model answer, to support self-evaluation. (And, of course, we'd now do it digitally within the environment, with no need for the answering machine.)
So that's what I'm looking for again. I don't need verbal answers, but I do want free-form responses, not multiple choice. I want learners to be able to generate their own thoughts. That's hard to auto-evaluate. Yes, we could use whatever the modern equivalent of Latent Semantic Analysis is, and train up a system to analyze and respond to learners' remarks. However, a) I'm doing this on my own, and b) we underestimate, and underuse, the power of learners to self-evaluate.
Thus, I'm positing a two-stage experience. First, there's a question that learners respond to, ideally at paragraph size, though their response is likely to be longer than the model one; I tend to write densely (it's just how I am). Then they see their answer, a model answer, and a self-evaluation rubric.
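To make that concrete, here's a minimal sketch of the interaction I have in mind. The content, field names, and structure are purely illustrative, not any particular authoring tool's API:

```javascript
// Minimal sketch of the two-stage interaction (illustrative only; no
// particular authoring tool's API is assumed).

const item = {
  question: "A reporter calls about the product recall. What's your key message?",
  modelAnswer: "We caught the issue early, no injuries have been reported, " +
    "and replacements ship free this week.",
  rubric: [
    "Leads with the key message rather than background",
    "States a concrete action the organization is taking",
    "Avoids jargon and speculation",
  ],
};

// Stage 1: capture the learner's free-form response.
function stageOne(item) {
  return { prompt: item.question, responseField: "" };
}

// Stage 2: show the learner's answer beside the model answer, with the
// rubric presented as a self-evaluation checklist the learner ticks off.
function stageTwo(item, learnerResponse) {
  return {
    yourAnswer: learnerResponse,
    modelAnswer: item.modelAnswer,
    checklist: item.rubric.map(criterion => ({ criterion, met: null })),
  };
}
```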
I'll suggest that there's a particular benefit to learners self-evaluating. In the process (particularly with specific support in the form of a mnemonic or graphic model), learners can internalize the framework that guides their performance. Further, by internalizing the habit of applying the framework and monitoring that application, they can become self-improving learners.
This is on top of providing the ability to respond in richer ways than picking an option out of those provided. It requires a free-form response, closer to what will likely be required after the learning experience. That's similar to what I'm looking for from gated submissions, but there peers and/or instructors are expected to weigh in with feedback, whereas here the learner is responsible for the evaluating. That's a more complex task, but also a very worthwhile one if carefully scaffolded.
Of course, it'd also be ideal if an instructor were monitoring the responses to look for patterns, but that's outside the learner's part of the experience. So that's the case for model answers. So, what say you? And is this supported anywhere, or in any way, that you know of?
Matt says
Coupled with some form of semantic evaluation, you could see the delta between the self-assessment and the semantic eval.
The bigger the delta, the more important the follow-up. This would be great for customer service development programs. Hope the team at Sonders see this post.
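Roughly something like this, where a plain bag-of-words cosine stands in for whatever real semantic model you'd actually use, and the learner's self-rating is assumed to be a 0–1 value (e.g., the fraction of rubric items they ticked):

```javascript
// Sketch of the "delta" idea: compare the learner's self-rating against a
// crude semantic similarity to the model answer. The bag-of-words cosine is
// only a stand-in for a real semantic model.

function bagOfWords(text) {
  const counts = {};
  for (const w of text.toLowerCase().match(/[a-z']+/g) || []) {
    counts[w] = (counts[w] || 0) + 1;
  }
  return counts;
}

function cosine(a, b) {
  const words = new Set([...Object.keys(a), ...Object.keys(b)]);
  let dot = 0, magA = 0, magB = 0;
  for (const w of words) {
    dot += (a[w] || 0) * (b[w] || 0);
    magA += (a[w] || 0) ** 2;
    magB += (b[w] || 0) ** 2;
  }
  return magA && magB ? dot / Math.sqrt(magA * magB) : 0;
}

// selfRating assumed to be 0..1; a large delta flags the learner for follow-up.
function assessmentDelta(learnerResponse, modelAnswer, selfRating) {
  const semantic = cosine(bagOfWords(learnerResponse), bagOfWords(modelAnswer));
  return Math.abs(selfRating - semantic);
}
```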
Christopher Riesbeck says
Yes, a free-form response followed by comparison to a model answer is a great tool.
It’s also worth exploring an extra stage, by repeating that approach on the rubric part as well.
Enter free-form response. See model answer. Enter free-form comparisons. Then see and compare using the rubric.
That way the learner gets to compare what they were paying attention to versus what the rubric emphasizes. The internalization of the framework is more explicit. Learners can give themselves “credit” for analytic dimensions that they think are important, even if the rubric doesn’t mention them. Those additional dimensions can be a great topic for future discussion.
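Building on the illustrative sketch earlier in the post, that extra stage might slot in between the two existing stages along these lines (again, the names are hypothetical):

```javascript
// Extra stage: before the rubric appears, the learner writes a free-form
// comparison of their answer against the model answer.
function stageTwoA(item, learnerResponse) {
  return {
    yourAnswer: learnerResponse,
    modelAnswer: item.modelAnswer,
    comparisonField: "", // "What differs between your answer and the model?"
  };
}

// Final stage: only now reveal the rubric, so the learner can see which
// dimensions they noticed on their own and which the rubric adds.
function stageThree(item, learnerComparison) {
  return {
    yourComparison: learnerComparison,
    checklist: item.rubric.map(criterion => ({ criterion, met: null })),
  };
}
```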
Dave says
I love it, and have been doing something similar, though on a smaller scale, in a recent elearning design. There are limitations, but it's a usable and practical way to get the learner to formulate something rather than falling back on the typical objective questions.
I think a potential way to take this to the next level is to get the material out of the elearning and into the real world. Storyline, for example, can execute JavaScript that will open an email in Outlook, prefilled with whatever information you want: whatever the learner typed, the model answer, the rubric, anything. Send that to the manager/supervisor, who can use it for a coaching discussion. Just musing.
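Something along these lines, roughly. The variable names are made up, though GetPlayer()/GetVar() is Storyline's standard JavaScript hook, and a mailto: link simply hands off to whatever mail client (e.g., Outlook) is the default:

```javascript
// Sketch of a Storyline "Execute JavaScript" trigger that opens a prefilled
// email for a coaching conversation. The Storyline variables (LearnerAnswer,
// ModelAnswer, ManagerEmail) are hypothetical.
var player = GetPlayer();
var learnerAnswer = player.GetVar("LearnerAnswer");
var modelAnswer = player.GetVar("ModelAnswer");
var managerEmail = player.GetVar("ManagerEmail");

var subject = "Workshop response for coaching discussion";
var body =
  "My answer:\n" + learnerAnswer + "\n\n" +
  "Model answer:\n" + modelAnswer;

// mailto: opens the default mail client with the fields prefilled.
window.location.href =
  "mailto:" + encodeURIComponent(managerEmail) +
  "?subject=" + encodeURIComponent(subject) +
  "&body=" + encodeURIComponent(body);
```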
Clark says
Great advice, folks. Thanks, Chris, for that extension. Dave, nice to know about that feature. And Matt, interesting idea.