At a recent event, the speakers were talking about AI (artificial intelligence) and DI (decision intelligence). And, of course, I didn’t know what the latter was, so it was of interest. The description mentioned visualizations, so I was prepared to ask about the limits, but the talk ended up being more about decisions (a topic I am interested in) and values. Which was an intriguing twist. And this, not surprisingly, led me back to wisdom.
The initial discussion talked about using technology to assist decisions (cf. AI), but I didn’t really comprehend the discussion around decision intelligence. A presentation on DA (decision analysis), however, piqued my interest. In it, a presenter who’d done his PhD thesis on decision making explained that when you evaluate the outputs of decisions, to determine whether the outcome was good, you need values.
Now this, to me, ties very closely back to the Sternberg model of wisdom. There, you evaluate both short- and long-term implications, not just for yourself and those close to you but more broadly, and with an explicit consideration of values.
A conversation after the event formally concluded cleared up the DI issue. It apparently is not about training up one big machine learning network to make a decision, but instead about modeling the disparate components of the decision separately and linking them together conceptually. In short, DI is about knowing what makes a good decision and using that knowledge. That is, being very clear on the decision-making framework to optimize the likelihood that the outcome is right.
And, of course, you analyze the decision afterward to evaluate the outcomes. You do the best you can with DI, and then determine whether it was right with DA. OK, I can go with that.
What intrigues me, of course, is how we might use technology here. We can provide guidelines about good decisions, provide support through the process, etc. And, if we want to move from smart to wise decisions, we bring in values explicitly, as well as long-term and broad impacts. (There was an interesting diagram where the short-term result was good but the long-term result wasn’t: the ‘lobster claw’.)
What would be the outcome of wiser decisions? I reckon that, in the long term, we’d do better for all of us. Transparency helps: seeing the values is a start, but we’d like to see the rationale too. I’ll suggest we can, and should, be building in support for making wiser decisions. Does that sound wise to you?