In a post last week, I mentioned how Gloria Gery’s original vision of performance support was not only supposed to help you in the moment, it was also – at least in principle – supposed to develop you over time. And yet I have yet to see that happen. So what am I talking about?
Let’s use an example. I think of the typical GPS as one of the purest models of performance support: it knows where you’re trying to go (since you tell it), and it helps you every step of the way. It can even adapt if you make a mistake. It will get you there.
However, the GPS will tell you nothing about the rationale it’s using to choose your route, which can differ from the one you might have chosen on your own. Even if it offers you alternatives, or you specify preferences like ‘no toll roads’, the underlying reasoning isn’t made clear. Yet this could be an opportunity for navigational learning (e.g. “this route has more lights, so we prefer the slightly longer one with fewer opportunities for stopping”).
Nor does it help you learn anything along the way: geography, political boundaries, even geology, although it could do any of these with only a thin veneer of extra work: “as we cross the river, we are also crossing the boundary between X county and Y; in 1643 the pressure between the two cities of X1 and Y1 jockeying for power led to this settlement that shared the water resource.”
It could go further, using this as an example of a broader phenomenon: “geographic features often serve as political boundaries, including mountains and rivers as well as oceans”. This latter would, in a sensible approach, only be used a few times, as the message, once known, could become annoying. And, ideally, you could choose what you wanted to learn about.
This isn’t limited to GPS; it could be used in any instance of guided performance. Sometimes you might not care (e.g. I suspect most users of Turbo Tax don’t want to know about the nuances of the tax code, they just want it done!). But if you want people to understand the reasoning as a boost to more expert performance – so they can start using that model to infer how to deal with things that fall outside the range of the performance support – this is a missed opportunity.
The point is to have even our programs ‘thinking out loud’, both to help us learn and to serve as a check on validity. Sure, it should be possible to shut off or customize, but the processing going on provides an opportunity for learning to happen in new and meaningful ways. The more we can couple the concept to the context, the more we can create learning that will really stick. And that is, or should be, the real goal.
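To make the idea concrete, here’s a rough sketch of what ‘thinking out loud’ guidance might look like in code. Everything here – the names, the structure, the fields – is my own invention for illustration, not any real GPS or performance-support API: each step can carry an optional rationale and a broader concept, the user opts into topics, each concept is only voiced a couple of times, and the whole thing can be switched off.

```python
# Hypothetical sketch: guidance steps that can "think out loud".
# All names and structure are invented for illustration.
from collections import Counter
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Step:
    instruction: str                    # what the performer should do now
    rationale: Optional[str] = None     # why the system chose this step
    concept: Optional[str] = None       # broader principle it illustrates
    topic: Optional[str] = None         # e.g. "geography", "history"

@dataclass
class Narrator:
    topics: set = field(default_factory=set)  # topics the user opted into
    max_repeats: int = 2                      # voice a concept at most this often
    explain: bool = True                      # master on/off switch for rationale
    _seen: Counter = field(default_factory=Counter)

    def narrate(self, step: Step) -> str:
        parts = [step.instruction]
        # Share the reasoning, if the user hasn't shut it off
        if self.explain and step.rationale:
            parts.append(f"(why: {step.rationale})")
        # Attach the broader concept only for opted-in topics,
        # and only until it risks becoming annoying
        if (step.concept and step.topic in self.topics
                and self._seen[step.concept] < self.max_repeats):
            self._seen[step.concept] += 1
            parts.append(f"[note: {step.concept}]")
        return " ".join(parts)
```

For instance, a narrator created with `Narrator(topics={"geography"})` would voice the river-as-boundary note on the first couple of crossings, then quietly drop it while still giving the turn-by-turn rationale.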