Many years ago, I led the learning design of an online course on speaking to the media. It was way ahead of its time in a business sense; people weren’t yet paying for online learning. Still, there were some clever design factors in it. I’ve lifted one for new purposes, but I also have a thought about how it could be improved. So here are some thoughts on learners as learning evaluators.
The challenge arises from two conflicting demands. For one, we want to support free-form answers on the part of learners, for situations where there’s more than one way to respond: for example, a code solution or a proposed social response. The other is the desire for auto-marking, that is, for independent, asynchronous learning. While it’s ideal to have an instructor in the loop to provide feedback, the asynchronous part makes that hard to arrange. We could try to have an intelligently programmed response (cf. artificial intelligence), but those can be difficult and costly to develop. Is there another solution?
One alternative, occasionally seen, is to have learners evaluate their own responses. There are benefits to this, as it helps learners become self-evaluators. One mechanism to support this is to provide a model answer for learners to compare against their own response. We did this in that long-ago project: learners could speak their response to a question, then listen to both their own and a model response.
There are some constraints on doing this; learners have to be able to see (or hear) their response in conjunction with the model response. I’ve seen circumstances where learners respond to complex questions and get the answer, but they don’t have a basis for comparison. That is, they don’t get to see their own response, and the response was complex enough that they can’t completely remember it. One particular instance of this is multiple-response questions, where you select a subset of the options.
I want to go further, however. I don’t assume that learners will be able to effectively compare their response to the model response, at least initially. As they gain expertise, they should, but early on they may not have the requisite skills. You can annotate the model answer with the underlying thinking, but there’s another option.
I’m considering the value of having an extra rubric that states what you should notice about the model answer and prompts you to see if you have all the elements. I’m suggesting that this extra support, while it might add some cognitive load to the process, also reduces the load by supporting attention to the important aspects. Also, this is scaffolding that can be gradually removed, allowing learners to internalize the thinking.
I think we can have learners as learning evaluators, if we support the process appropriately. We shouldn’t assume that ability, at least initially, but we can support it. I’m not aware of research on this, though I certainly don’t doubt it. If you do know of some, please do point me to it! If you don’t, please conduct it! :D Seriously, I welcome your thoughts, comments, issues, etc.
Avnish Srivastava says
Interesting… I’ve always been of the opinion that the objective is to facilitate learners to learn new things in new ways, not to assess them.
Ashley Green says
Absolutely love this approach, Clark! I was just now (quite literally) posting an essay question in an e-learning module where the correct answer (a paragraph or two in length) can be revealed side by side with the learner’s own response. These are healthcare providers applying a treatment framework to a patient scenario. What’s missing for me is the rubric that helps them evaluate the quality of their answer (as opposed to just making a simple comparison). Do you have any more thoughts on a general framework for a self-evaluation rubric?
Love your posts!
Avnish, I sympathize, but I do believe learners need assessment, for several reasons. The aim is not to assess them; it’s to develop them. Yet assessment has to be part of that process. For one, it lets them know that they are improving and developing their ability, building their confidence. Also, it lets us ascertain when they’ve mastered the outcome sufficiently to be given further opportunities. The hoary old cliché comes to mind: do you want your pilots and surgeons not to have been assessed?
Ashley, I don’t have any great frameworks, but on principle I’d suggest that you need to cover everything that could and should be part of the response (“Did you include…”), and address any misconceptions. Of course, derive them from the model. Also, include any meta-suggestions about process (“Did you remember to start with…”). This is off the top of my head, but it’s something I’ve thought about a bit; hope this helps!
Rob Moser says
I feel like this is the kind of thing that a fairly general-purpose AI could be good at. Instead of using something like ChatGPT to generate generic, meaningless text, use it to compare two essay responses for common threads. You write a model response yourself, to store with the question, and the AI leads the learner through a comparison of your model response to theirs. The trick would be to keep it from becoming a buzzword search engine; the AI would need to be deep enough to map connections between concepts – like one of your mind maps – to make sure the learner is understanding, not just parroting.
The other danger would be if the learner thinks of something genuinely relevant that you didn’t include in your model response. You never want to punish someone for being smarter than you! But you could have the AI flag unknown concepts for the expert to review and mark as relevant or not. A little machine learning and a few iterations, and it’ll improve itself.
Interesting idea. It sounds similar to some of Tak-wai Chan’s learning companion systems. It’d have to have, I think, some knowledge of how to guide people through the comparison (e.g., pedagogical knowledge): “you got this,” “you seem to have missed this…”, where the latter would ideally point to the underlying model. That latter bit gets more complex, I think. Still, thought-provoking; thanks for sharing!