In a previous post, I argued for different types and ratios of worthwhile learning activities. I’ve been thinking about this (and working on it) quite a bit lately. I know there are other resources I should know about (pointers welcome), but I’m currently wrestling with several types of situations and wanted to share my thinking. This is aside from scenarios/simulations (e.g. games), which are of course the first, best learning practice you can engage in. What I’m looking for are ways to get learners to process material in ways that will assist their ability to do. This isn’t recitation, but application.
So one situation is where the learner has to execute the right procedure. This seems easy, but the catch is that while they’re liable to get it right in practice, they can still get it wrong in real situations. An idea I had heard of before, but which was reiterated through Socratic Arts (Roger Schank & cohorts), was to have learners observe (e.g. via video) someone performing the procedure and identify whether it was done right or not. For many routine but important tasks (e.g. sanitation), this is a more challenging task than just doing it right. It has learners monitor the process, and then they can turn that on themselves to become self-monitoring. If the selection of mistakes is broad enough, they’ll have experience that will transfer to their whole performance.
Another task I faced earlier was the situation where people had to interpret guidelines to make a decision. Typically, the extreme cases are obvious, and instructors argue that they all are, but in reality there are many ambiguous situations. Here, as I’ve argued before, the thing to do is have folks work in groups on increasingly ambiguous situations. What emerges from the discussion is usually a rich unpacking of the elements. This processing of the rules in context exposes the underlying issues in important ways.
Another type of task is helping people understand how to apply models to make decisions. Rather than present them with the models, I’m again looking for more meaningful processing. Eventually I’ll expect learners to make decisions with them, but as a scaffolding step, I’m asking them to interpret the models in terms of their recommendations for use. So before I have them engage in scenarios, I’ll ask them to use the models to create, say, a guide to how to use that information: to diagnose, to remedy, to put initial protections in place. At other times, I’ll have them derive subsequent processes from the theoretical model.
One other example I recall came from a paper Tom Reeves wrote (which I can’t find) where he had learners pick from a number of options that indicated problems or actions to take. The interesting difference was that there was then a followup question about why. Every choice had two stages: decision, then rationale. This is a very clever way to see whether they’re not just getting the right answer but also understand why it’s right. I wonder if any of the authoring tools on the market right now include such a template!
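To make the idea concrete, here’s a minimal sketch of what such a two-stage template could look like as a data structure with a simple scoring rule. This isn’t drawn from Reeves’ paper or any particular authoring tool; the item content, option wording, and the four-way scoring classification are all hypothetical, just to illustrate the decision-then-rationale pattern.

```python
from dataclasses import dataclass


@dataclass
class TwoStageItem:
    """A decision question paired with a follow-up 'why' question (hypothetical template)."""
    prompt: str
    decision_options: list[str]
    correct_decision: int          # index of the right decision
    rationale_options: list[str]
    correct_rationale: int         # index of the right reason

    def score(self, decision: int, rationale: int) -> str:
        """Classify a response by whether the answer and the reasoning both match."""
        if decision == self.correct_decision and rationale == self.correct_rationale:
            return "right answer, right reason"
        if decision == self.correct_decision:
            return "right answer, wrong reason"   # lucky guess or shaky mental model
        if rationale == self.correct_rationale:
            return "wrong answer, right reason"   # slip, or misread the situation
        return "wrong answer, wrong reason"


# Hypothetical example item (content invented for illustration)
item = TwoStageItem(
    prompt="The patient reports dizziness after the new dosage. What do you do?",
    decision_options=["Continue as prescribed", "Hold the dose and consult", "Double the dose"],
    correct_decision=1,
    rationale_options=[
        "Dizziness always resolves on its own",
        "It may signal an adverse reaction that needs review",
        "Higher doses work faster",
    ],
    correct_rationale=1,
)

print(item.score(decision=1, rationale=1))   # right answer, right reason
print(item.score(decision=1, rationale=0))   # right answer, wrong reason
```

The point of the second stage is exactly what the four-way classification surfaces: a “right answer, wrong reason” result tells you the learner may pass the question yet still fail in practice.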
I know there are more categories of learning and associated tasks that require useful processing (towards do, not know, mind you ;), but here are a couple that are ‘top of mind’ right now. Thoughts?
Ite Smit says
Dear Mr. Quinn,
Thank you for this post. I’m starting with a group of super-users to teach their colleagues a new IT application. They’ll use Captivate as one of the learning/transfer tools. The idea of not only showing the right way is a great one; I’ll use it.
Ite Smit