Of late, there have been a number of articles talking about thinking and mental models (e.g. this one). One of the takeaways is that we carry around a lot of stories about how the world works. Some of them are accurate. Others, not. Pondering this when I should’ve been sleeping, I realized that our misinterpretations are likely to cause problems. It made me think that maybe transparency isn’t enough. What does that mean?
We build models, period. We create explanations of how the world works, and they may not be right. If we aren’t given good ones up front, it’s likely they won’t be. They also seem to be built from previous models we’ve seen. (And diagrams. ;)
Now, it’s easy to misattribute an outcome to the wrong model if we don’t have a better explanation. And this comes into play when we’re trying to figure out what happened, or why it happened. That includes decisions made by others that may affect us, or that simply lead to outcomes such as product designs, policies, and more.
Where I’m going is this: if we don’t see the thinking that explains how we got there, not just the process that was followed, we can draw the wrong inferences about why it happened. And this matters in the ‘show your work’ sense.
I’m a fan of transparency. I like it when political and other decisions are scrutable: we can see who made the decision, what influences they had, and what steps they took to get there. That’s not enough, however, particularly when you disagree or have a problem. Take LinkedIn, for example: when I connect to someone using the app on the iPad, I can then send them a message, but when I do it through the web interface on my computer, it wants to use one of those precious ‘InMails’. It’s inconsistent (read: frustrating). Is there a rationale?
So I’m going to suggest that transparency is necessary, but not sufficient. You can’t just show your work; you need to show your thinking. We need to see the rationale! Two reasons: you can learn more when you see the associated cogitation, and you can provide better feedback as well. In short, we want to see why they believe this is the right solution. Otherwise, we may question their decision because we misattribute the reasoning behind it.
Transparency is great, but if you can’t see the thinking behind a decision, you can make the wrong inferences. It’s better when you can see both the thinking and the result. Am I being transparent enough on both?