Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

8 April 2009

Model learning

Clark @ 10:20 AM

On Monday, a hearty Twitter exchange emerged when Jane Bozarth quoted Roger Schank: “Why do we assume that theories of things must be taught to practitioners of those things?”  I stood up for theory, Cammy Bean and Dave Ferguson chimed in, and the next thing you know, we were having a lively discussion in 140 characters.  With all the names to include, Dave pointed out, we had even less space!

One side was stoutly defending that what SMEs thought was important wasn’t necessarily what practitioners needed.  The other side (that would be me) wanted to argue that it’s been demonstrated that having an underlying model is important in being able to deal with complex problems.

So, of course, the issue really was what we mean by theory.  It’s easy (and correct) to bash conceptual knowledge frameworks that have no applicability to the problem at hand; Dave revived the great quote: “In theory, there’s no difference between theory and practice. But in practice, there is.” He also cited Van Merrienboer & Kirschner as saying that teaching theory to successful practitioners can be detrimental. (BTW, see Dave’s great series of posts ‘translating’ their work.) On the other hand, having models has clearly been shown to be valuable in adapting to complexity and ambiguity.  What’s a designer to do?

So, let me be clear.  If there’s a rote procedure to be followed, there’s no need for a theory.  In fact, there’s no need for training, since you ought to automate it!  Our brains are good at pattern matching, bad at rote repetition, and it seems to me to be sad if not criminal to have people do rote stuff that could be done better by machine; save the interesting and challenging tasks for us!

It’s when tasks are complex, ill-structured, and/or ambiguous with lots of decisions, that we need theories.  Or, rather, models.  Which, I think, is part of the confusion (and I may be to blame! :).

I’m talking about an understanding of the underlying model that guides performance.  Any approach to a problem has (or should have) a rationale behind it: a reason you do it this way and not that way.  It’s based upon some theory, but it should be distilled into a model with just enough richness to help you decide when to do X and when to do Y. As I said many years ago:

I see mental models as dynamic.  That is, they’re causal explanations of system behaviour.  They are used to explain observed outcomes and to predict the effects of perturbations.

It’s the explanation and prediction capabilities that are important.  The problem is, if the situation’s complex enough (and most are, whether it’s controlling a production line, or dealing with a customer, or…), you can’t train on all the situations that a learner might face.  So then you need to provide guidance.  Yes, we’ll use examples and practice contexts to support transfer, but we should refer back to a model that guides our performance. And that’s useful and necessary.

Cammy noted that it’s extra work to develop that model, and I acknowledge that.  I’ve said that good instructional design requires more work and knowledge on the part of the designer than we typically expect, which is why I don’t think you can do good ID without knowing some learning theory. (BTW, my Broken ID series addresses a lot of the above.)

So, let me be clear: in any reasonably complex domain (and you shouldn’t be training for simple issues: just give a job aid or automate or…), you should present the learner with a model that you reinforce in examples and practice.  It should not be an abstract academic theory, but a practical guide to why things are done this way and what governs the adaptation to circumstances.  As that model is acquired through examples and practice, you provide the basis for self-improving performance.

That’s my model for designing effective learning.  What’s yours?

On a side note, what I recall of the various tweets and what Twitter shows from each person don’t correlate perfectly.  While I acknowledge that my memory fails more frequently (just age, not dementia or Alzheimer’s, I *think*), I’m pretty sure that Twitter dropped some of those messages from the record (around the same time they acknowledged having trouble with dropping avatar images).  Tweeter beware!


  1. Clark, apart from the 140-character limit, it was like one of those hallway conversations where the right people just happen to show up.

    As you know, van Merrienboer and Kirschner would agree with your post that an underlying model is important–probably essential–for dealing with complex problems. The two groups they suggest make a lot of sense to me: a conceptual map of the domain involved (as in, what does the world of “manufacturing maintenance” or “management-labor relations” look like?), and cognitive strategies used by skilled practitioners in that field (how a technician might diagnose start-up errors; how a manager interviews individuals involved in a dispute).

    My colleague John Howe has an informal test for cognitive strategies: would an acknowledged skilled practitioner agree that this was an acceptable way to approach the problem? (This is the “I wouldn’t do it that way, but it definitely would work” criterion.)

    You’re also right that this kind of information can help people learn; they use the model (or examples of the model in action) to build their own models.

    I’m not sure that it really is extra work to develop (or make explicit) the model. If you don’t, how do you know how to train the complex skill? It’s like doing task analysis: you have to do it so you know enough to decide whether to use job aids; you have to do it so you know what to train people in when you can’t use job aids.

    The only quibble I have with your points here is that some relatively routine skills probably should be trained or even overtrained so that they’re automatic. Reading’s an example (and many people learn to read a second language as adults). Likewise typing, driving, drawing blood.

    Comment by Dave Ferguson — 8 April 2009 @ 11:29 AM

  2. Fabulous improvement on our Twitter ping-pong ball session (as Dave called it)! I enjoyed it heartily. And I agree with you about models. But as you said, models are different from theory. Instead, they’re a practical guide. I can get behind that 100%.

    When the conversation started, I was thinking about learning theory vs. instructional design practice. Academic jargon and theory vs. what works in eLearning. Maybe this is where the novice vs. the expert comes into play? A novice might just need to know what works. An expert (or one on his or her way to becoming an expert) might want to circle back and start discovering the underlying theory (why something works). Certainly, that’s where I feel like I am as a professional at this moment in time. Starting to circle back to understand the why of things I’ve been doing (or not doing) all these years…

    Comment by Cammy Bean — 8 April 2009 @ 6:19 PM

  3. Cammy, you’ve got me thinking in at least two directions now, and it’s all Clark’s fault.

    One direction is the novice/expert distinction. Cognitive psych and just careful observation tell us that experts see problems differently from the way novices see them. By definition, novices pay attention to the surface resemblances–so, for example, noise from the front of the car means “engine problem.” The expert knows that the sound of certain classes of engine problems will vary with the speed of the engine, and will know that putting the car in neutral while revving will change the speed of the engine but not the car.

    The other direction is a comment from the Ten Steps book. When your learners already have a high level of expertise, presenting them with a new model / theory can interfere, since you risk conflicting with the mental models they’ve already built up. (This is my interpretation.)

    …You know, I’m pretty sure I’ll finish Ten Steps despite the turgid language, but I’ve never been able to make myself read all of Dick & Carey.

    Comment by Dave Ferguson — 9 April 2009 @ 3:05 AM

  4. I’ll happily take the blame if it’s getting people thinking ;).

    I typically don’t think of giving experts models; they develop their own. Unless they’ve got bad ones. Rand Spiro had to present a sequence of models to help learners grasp the complexities of a particular system (muscles).

    You typically don’t develop full courses for experts, or even practitioners; it’s novices who need the full-court press of intro/concept/example/practice/etc. Even if they’re experts elsewhere, by definition they’re novices if it’s a full skill-set shift, no? And you want to move novices to practitioners through the course.

    I’d trust experts to be relatively effective at self-directed learning, and exploring nuances of what they’re doing (even co-creating new models that better explain). Which is why I want to support them with social media! And practitioners with streamlined support, not full courses.

    Thanks, Cammy & Dave, for continuing the dialog!

    Comment by Clark — 9 April 2009 @ 6:10 AM
