As I've been developing online workshops, I've been thinking more about the type of assessment I want. Previously, I made the case for gated submissions. Now I find there's another type of interaction I'd like to have. So here's the case for model answers (and a rubric).
As context, many moons ago we developed a course on speaking to the media. This was based upon the excellent work of the principals of Media Skills, and was a case study in my Engaging Learning book. They had been running a face-to-face course, and rather than write a book, they wondered if something else could be done. I was part of a new media consortium, and was partnered with an experienced CD-ROM developer to create an asynchronous elearning course.
Their workshop culminated in a live interview with a journalist. We couldn't do that, but we wanted to prepare people to succeed at that interview as an optional next step. Given that this is something people really fear (apocryphally more than death), we needed a good approximation. Along with a steady series of exercises progressing from recognizing a good media quote to composing one, we wanted learners to have to respond live. How could we do this?
Fortunately, our tech guy came up with the idea of a programmable answering machine. Through a series of menus, you would drill down to someone asking you a question, and then record an answer. We had two levels: one where you knew the questions in advance, and a final test where you had a story and details but had to respond to unanticipated questions.
This was good practice, but how to provide feedback? Ultimately, we allowed learners to record their answers, then listen to their answers alongside a model answer. What I'd add now is a rubric for comparing your answer to the model answer, to support self-evaluation. (And, of course, we'd now do it digitally in the environment, with no need for the answering machine.)
So that's what I'm looking for again. I don't need spoken answers, but I do want free-form responses, not multiple-choice. I want learners to be able to generate their own thoughts. That's hard to auto-evaluate. Yes, we could use whatever the modern equivalent of Latent Semantic Analysis is, and train up a system to analyze and respond to their remarks. However, a) I'm doing this on my own, and b) we underestimate, and underuse, the power of learners to self-evaluate.
Thus, I'm positing a two-stage experience. First, there's a question that learners respond to. Ideally the answer is paragraph-sized, though their response is likely to be longer than the model one; I tend to write densely (because I am). Then, they see their answer, a model answer, and a self-evaluation rubric.
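To make that concrete, here's a minimal sketch of how that two-stage interaction might be wired up on a web page. This is just one way to do it, not a prescription: the element IDs (#response, #submit, #reveal, #learner-answer, #rubric) and the rubric items are hypothetical placeholders, and the model answer is assumed to be pre-authored inside the hidden container.

```typescript
// Sketch of the two-stage "respond, then self-evaluate" interaction.
// Assumes an HTML page with: a textarea #response, a button #submit, and a
// hidden container #reveal that holds the pre-authored model answer plus
// empty #learner-answer and #rubric elements. All names are placeholders.

const rubric: string[] = [
  "Did you lead with your key message?",
  "Did you back it up with a concrete example?",
  "Did you avoid jargon your audience wouldn't know?",
];

function showModelAnswerAndRubric(learnerText: string): void {
  const reveal = document.getElementById("reveal")!;
  const learnerOut = document.getElementById("learner-answer")!;
  const rubricList = document.getElementById("rubric")!;

  // Stage 2: show the learner's own answer next to the model answer,
  // with a checklist to scaffold the comparison.
  learnerOut.textContent = learnerText;
  rubricList.innerHTML = rubric
    .map((item) => `<li><label><input type="checkbox"> ${item}</label></li>`)
    .join("");
  reveal.hidden = false; // the model answer lives inside this container
}

document.getElementById("submit")!.addEventListener("click", () => {
  const textarea = document.getElementById("response") as HTMLTextAreaElement;
  const text = textarea.value.trim();
  if (text.length === 0) return; // stage 1: require a free-form attempt first
  textarea.readOnly = true;      // lock the response once the model answer appears
  showModelAnswerAndRubric(text);
});
```

The key design point is simply that the model answer and rubric stay hidden until the learner has committed to an answer of their own.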
I'll suggest that there's a particular benefit to learners self-evaluating. In the process (particularly with specific support, such as a mnemonic or graphic model), learners can internalize the framework to guide their performance. Further, by internalizing the habit of using the framework and monitoring their application of it, they can become self-improving learners.
This is on top of providing the ability to respond in richer ways than picking an option out of those provided. It requires a free-form response, closer to what will likely be required after the learning experience. That's similar to what I'm looking for from the gated submission, but the latter expects peers and/or instructors to weigh in with feedback, whereas here the learner is responsible for the evaluation. That's a more complex task, but also very worthwhile if carefully scaffolded.
Of course, it'd also be ideal if an instructor were monitoring the responses to look for patterns, but that's separate from the learners' own work. So that's the case for model answers. What say you? And is this supported anywhere, or in any way, that you know of?