Harold Jarche reviews Marina Gorbis’ new book The Nature of the Future, finding value in it. One comment seemed particularly relevant to organizations; it has to do with the nature of thinking.
This quote struck a nerve: “Gorbis identifies unique human skills”. The list intrigued me:
- Sensemaking
- Social and emotional intelligence
- Novel and adaptive thinking
- Moral and ethical reasoning
While all are intriguing and important, the first and third stood out for me. When I talk about digital technology (which I do a lot :), I mention how perfectly it augments our cognitive architecture. Our brains are pattern-matchers and meaning extractors: they’re really good at generating insights, and really bad at rote memory and complex calculations.
Digital technology is exactly the reverse: it’s great at remembering rote information and at doing complex calculations, yet it’s extremely hard to get computers to do good pattern-matching or meaning-making.
For the purposes of achieving meaningful outcomes, then, coupling our capabilities with digital technology makes a lot of sense. That’s why mobile is so compelling: it decouples that complementary capability from the desktop and untethers our outboard brain.
From an organizational point of view, you want to be empowering your people with digital augmentation. From a societal point of view, you want people doing meaningful tasks that tap into human capability, not rote tasks they’re going to be bad at! And, you can infer, you’re also going to want education to focus on problem-solving and on using digital technology as an augment, not on rote procedures and memorization. Ahem.
Rob Moser says
I’m really enjoying Nate Silver’s The Signal and the Noise at the moment; I’m currently on the chapter about how computer chess programs eventually beat Kasparov. He talks a lot about the different approaches to the same problem – the computer relying on its vast store of rote information (board-position databases) and its calculation speed, while the human uses his vastly superior pattern recognition and meaning extraction to see longer-range strategic effects. In that case the two were in conflict, but he also notes that a fairly recent “freestyle” chess tournament (which allowed computer assistance) was won neither by a grandmaster nor by a program (both were present), but by a team of two (no doubt talented) amateurs advised by three different programs.
On the flip side, however, he points out again and again through the book how computers – and the vast wealth of rote information they can supply us with – have a tendency to actually overwhelm our pattern-matching skills, causing us to see patterns where none really exist. Apophenia, basically; or pareidolia – I’ve always been a bit unclear on the distinction. Faces on Mars / Jesus on toast / Lenin in my shower curtain type of thing. In realms with a huge volume of data – predicting weather, for instance – you have to design your technological augmentation to spend a little of its much-vaunted computing power paring down the raw data dump: maybe give you a statistical measure of the significance of the data, or run multiple simulations with varying assumptions and feed you outcome probabilities, as in the sketch below. The human pattern-matching machine is still unmatched, but it can be swamped.
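To make that last idea concrete, here’s a toy sketch in Python of the ensemble approach; every function, threshold, and number in it is made up for illustration, not drawn from Silver’s book or any real forecasting system:

```python
import random

def simulate_rainfall(base_humidity, rng):
    """One hypothetical simulation run: perturb the assumed humidity
    a little and return whether this run predicts rain."""
    humidity = base_humidity + rng.gauss(0, 0.1)  # vary the assumption
    return humidity > 0.7  # toy threshold for "rain"

def ensemble_probability(base_humidity, runs=10_000, seed=42):
    """Run many perturbed simulations and return the fraction that
    predict rain: a single pared-down number instead of a raw data
    dump of thousands of trajectories."""
    rng = random.Random(seed)
    rainy = sum(simulate_rainfall(base_humidity, rng) for _ in range(runs))
    return rainy / runs

if __name__ == "__main__":
    # The human sees one outcome probability, not ten thousand runs.
    print(f"Chance of rain: {ensemble_probability(0.65):.0%}")
```

The point isn’t the model (which is deliberately silly); it’s that the machine spends its cycles summarizing, so the human pattern-matcher gets one meaningful number rather than a swamp of raw output.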
Anyway, I’m not entirely certain how relevant it is to what you’re discussing, but it seemed related and it’s quite an interesting book, so I thought I’d point it out.
Ara Ohanian says
Clark, what a great, succinct post: treating people like machines and expecting them to think like machines not only demeans them but also fails to play to their strengths. As natural makers of patterns and meaning, we should, as you say, be building those skills, with technology supporting us by doing what it does best.