In a recent post, Jane Bozarth takes ‘best practices’ to task, and I want to elaborate on her point. In the post, she talks about how best practices are contextualized, so that they may work well here but not there. She’s got a cute and apt metaphor with marriage, and she’s absolutely right.
However, I want to go further. Let me set the stage: years ago, as a grad student, our lab was approached with the task of developing an expert system for a particular task. It certainly was something we could have done. Eventually, we asked what the description was for the ideal performance, and were told that the best source was the person who’d been doing it the longest. Now, people are fabulous pattern matchers, and performing something for a long time with some reflection on improvement could likely get you some really good performance. However, there are some barriers: experts no longer have access to their own performance; without an external frame of reference, they can get trapped in local maxima; and other phenomena of our cognitive architecture interfere with optimal performance (e.g. set effects, functional fixedness). I’ve riffed on this often: expertise is compiled, and experts tell stories about what they do that have little correlation to what they actually do. We ended up not taking the opportunity. So that person’s performance may be the best out there, but is it the best that can be?
And that’s the problem. Why are we only looking at the best that anyone’s doing? Why not abstract across that and other performances, looking for emergent principles, and trying to infer what would in principle be the best? That is, if it hasn’t already been documented in theory and made available (academics do that sort of thing as a career, and in between the obfuscation there are often good thoughts and answers). The same holds with benchmarking: it’s the relative best, not the absolute best.
I’ve largely made a career out of trying to find the principled best approaches, interpreting cognitive science research and looking broadly across relevant fields (including HCI/UI, software engineering, entertainment, and others) to find emergent principles that can guide the design of solutions. And, reliably, I find that there are ideas, concepts, models, etc. that can guide efforts as broadly dispersed as virtual worlds, mobile, adaptive systems, content models, organizational implementation, and more. Models emerge that serve as checklists, principles, and frameworks for design, allowing us to examine tradeoffs and arrive at the principled best solution. I regularly capture these models and share them (e.g. my models page, and more recent ones regularly appear in this blog).
I’m not saying it’s easy, but look across our field and you’ll recognize there are those doing good work in either translating research into practice or finding emergent patterns that resonate with theoretical principles. It’s time to stop looking at what other organizations are doing in their context as a guide, and start drawing upon what’s known, customizing it to your context, and then running a cycle of continual tuning. With the increasing pressures to be competitive, I’d suggest that just being good enough isn’t. Being the best you can be is the only sustainable advantage.
Let’s see: copy your best competitor, and keep equal; or shoot for the principled best that can be in the category, and have an unassailable position of leadership? The answer seems obvious to me. How about you?
Nick Kearney says
Are you familiar with the concept of valorization? It is quite common in European Commission literature. The idea relates to facilitating the appropriate adoption of innovation from successful projects in other contexts, something that in a multicultural context like the EU is vital. This might be a third way between the options you present. I see it more as akin to the part of benchmarking that tends to get left out: the reflection on one’s own organization and how the practices identified in the benchmarking process may (or may not) be feasible in the new context.