I think supporting performance is important, and that we don’t do enough with models in formal learning. To me, another interesting opportunity that’s being missed is the intersection of the two.
Gloria Gery’s original vision of electronic performance support systems was that not only would they help you perform but they’d also develop your understanding so you’d need them less and less. I’ve never seen that in practice, sad to say.
Now it might get in the way of absolute optimal performance, but I believe we can, and should, develop learner understanding of the performance. If the performance support is just providing rote information so that the learner doesn’t have to look it up, that’s OK. But if, instead, the performance support is interactive decision support, the system could, and should, provide the model that’s guiding the decisions as well as the recommendations.
This needn’t be much, just a thin veneer over the system. So instead of simply recommending Z after asking X and Y, it would say “because of A and B, we’ve eliminated C and recommend Z,” or some such.
It could also mean making the underlying model visible through the system. Show how the answers to the questions influence the competing alternatives, for instance.
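To make the idea concrete, here’s a minimal sketch of what that “thin veneer” might look like in code. Everything here is hypothetical (the rule descriptions, options, and the `recommend` function are invented for illustration): each elimination rule both narrows the options and records why, so the recommendation can be delivered with its rationale attached rather than as a bare answer.

```python
# A sketch of decision support that surfaces its model:
# each rule narrows the options AND records the reason,
# so the system can say "because of A, we eliminated C"
# instead of just announcing the recommendation.

def recommend(answers, options, rules):
    """Apply elimination rules; return surviving options plus reasons.

    answers: dict of question -> answer gathered from the performer
    rules:   list of (description, predicate) pairs, where
             predicate(answers, option) is True if the option
             should be eliminated
    """
    remaining = list(options)
    reasons = []
    for description, eliminates in rules:
        dropped = [o for o in remaining if eliminates(answers, o)]
        if dropped:
            remaining = [o for o in remaining if o not in dropped]
            reasons.append(
                f"because {description}, we eliminated {', '.join(dropped)}"
            )
    return remaining, reasons


# Hypothetical example: choosing a communication channel.
rules = [
    ("the issue is urgent", lambda a, o: a["urgent"] and o == "email"),
    ("the audience is remote",
     lambda a, o: a["remote"] and o == "in-person meeting"),
]
options = ["email", "phone call", "in-person meeting"]
remaining, reasons = recommend(
    {"urgent": True, "remote": True}, options, rules
)
# remaining -> ["phone call"], and reasons explains each elimination
```

The point isn’t the code itself but the design choice: the reasons list is the learner-facing veneer, exposing the model behind the recommendation so the performer can start internalizing it.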
All in all, I believe it’s better that performers understand what’s behind recommendations, because then they can internalize those models, both to reduce their need for the system and to infer when to go beyond it.
Helping people understand and use models is a powerful form of meta-learning, to me, and a 21st century skill folks will be needing. Why are we missing the opportunity to help develop those skills?