A recent Donald Clark post generated an extension from Stephen Downes. I respect both of these folks as people and as intellects, but while I largely agreed with one, I had a challenge with the other. So here’s a response, in defense of cognitive psychology. The caveat is that my Ph.D. is in Cognitive Psychology, so I may be defensive and biased, but I’ll try to present scrutable evidence.
Donald Clark’s post unpacks the controversies that surround efforts to measure the complicated concept of ‘intelligence’. He starts with the original Binet measure, and talks about how it’s been misused and has underlying problems. He goes through multiple intelligences, and emotional intelligence as well, similarly unpacking the problems and misuses. I’m reminded of Todd Rose’s End of Average, which did a nice job of pointing out the problems of trying to compress complex phenomena into single measures.
He goes on to argue that it may be silly to talk about intelligence at all, cataloging the many ways computers can perform impressive computational tasks under the rubric of artificial intelligence (AI). While I laud those advances, my focus remains on IA (intelligence augmentation): using computers in conjunction with our own capabilities rather than relying purely on AI.
Stephen Downes responded to Donald’s article with a short piece. In it, he takes up the story of intelligence and argues that education and cognitive psychology have layered ‘cruft’ (“extraneous matter”) on top of the neural underpinnings. And I have a small problem with that. In short, I think the theories that have arisen provide useful guidance for designing systems and learning experiences that wouldn’t have emerged from strictly neural explanations.
Take, for example, cognitive load. John Sweller’s theory posits that our mental resources are limited, so extraneous material can interfere with our ability to process what’s necessary. This has led to important results on things like the value of worked examples and the design of useful diagrams.
We can also look to principles like Bjork’s desirable difficulty. Here, the type of practice matters (as also embodied in Ericsson’s deliberate practice); it needs to be at the right level of challenge. This might be more easily derivable from neural net models, but it has still provided a useful basis for design.
I could go on: the value of mental models, what makes examples work, the value of creating a motivating introduction, and so on. I’d suggest that these aren’t obvious (yet) from neural models. And even if they are, they’re likely more comprehensible from a cognitive perspective than a neural one. Others have argued eloquently that the neural level is the wrong level of analysis for designing learning.
I will suggest, in defense of cognitive psychology, that the phenomena observed provide useful frameworks. These frameworks give us hooks for developing learning experiences that would be far harder to derive from neural models. As I’ve said, the human brain is arguably the most complex thing in the known universe. Eventually, our neural models may well advance enough to provide more granular and accurate accounts, but right now there’s still a lot unknown.
So I’m not ready to abandon useful guidance, even if some of it is problematic. Separating out what’s useful from what’s been overhyped may be an ongoing need, but throwing it all away seems premature. That’s my take, what’s yours?