We frequently hear that ‘perfection is the enemy of the good’. And that may well be true. However, I want to suggest that there’s another enemy that plagues us as learning experience designers. We may be trying to do good, but there are barriers. These are worthy of explicit discussion.
You also hear about the holy trinity of engineering: cheap, fast, or good; pick two. We face real-world pressures that push us to do things efficiently. For instance, there are lots of claims that generative AI will allow us to generate more learning faster. Thus, we can do more with less. Which isn’t a bad thing…if what we produce is good enough. If we’re doing good, I’ll suggest, then we can worry about fast and cheap. But doing bad faster and cheaper isn’t a good thing! Which brings us to the second issue.
What is our definition of ‘good’? It appears that, too often, good is whether people ‘like’ it. Which isn’t a bad thing; it’s even the first level in the Kirkpatrick-Katzell model: asking what people think of the experience. One small problem: the correlation between what people think of an experience and its actual impact is .09 (Salas et al., 2012). That’s zero with a rounding error! In other words, people’s evaluation of the experience and its actual impact aren’t correlated at all. It could be highly rated and not effective, or highly rated and effective. At core, you can’t tell by the rating.
What should ‘good’ be? The general intent of a learning intervention (or any intervention, really) is to have an impact! If we’re providing learning, it should yield a new ability to ‘do’. There are multiple problems here. For one, we don’t evaluate performance, so how would we know if our intervention is having an impact? Have learners acquired new abilities that persist in the workplace and lead to the necessary organizational change? Who knows? For another, folks don’t have realistic expectations about what it takes to have an impact. We’ve devolved to a state where if we build it, it must be good. Which isn’t a sound basis for determining outcomes.
There is, of course, a perfectly good reason to evaluate people’s affective experience of the learning. If we’re designing experiences, having them be ‘hard fun’ means we’ve optimized the engagement. That’s fine, but only after we’ve established efficacy. If we’re not having a learning impact in terms of new abilities to perform, what people think about it isn’t of use.
Look, I’d prefer us to be in the situation where perfection is the enemy of the good! That’d mean we’re actually doing good. Yet, in our industry, too often we don’t have any idea whether we are or not. We’re not measuring ‘good’, so we’re not designing for it. If we measured impact first, then experience, we might risk getting overly focused on perfection. That’d be a good problem to have, I reckon. Right now, however, we’re only focused on fast and cheap. We won’t get ‘good’ until we insist upon it from and for ourselves. So, let’s, shall we?