A recent Donald Clark post generated an extension from Stephen Downes. I respect both of these folks as people and as intellects, but while I largely agreed with one, I had a challenge with the other. So here’s a response, in defense of cognitive psychology. The caveat is that my Ph.D. is in Cognitive Psychology, so I may be defensive and biased, but I’ll try to present scrutable evidence.
Donald Clark’s post unpacks the controversies that surround efforts to measure the complicated concept of ‘intelligence’. He starts with the original Binet measure, and talks about how it’s been misused and has underlying problems. He goes through multiple intelligences, and emotional intelligence as well, similarly unpacking the problems and misuses. I’m reminded of Todd Rose’s End of Average, which did a nice job of pointing out the problems of trying to compress complex phenomena into single measures.
He goes on to suggest that it may be silly to talk about intelligence at all, citing the many impressive computational tasks computers can now perform under the rubric of artificial intelligence (AI). While I laud the advances, my focus remains on IA (intelligence augmentation): using computers in conjunction with our own capabilities rather than pursuing AI alone.
Stephen Downes responded to Donald’s article with a short piece. In it, he takes up the story of intelligence and argues that education and cognitive psychology have layered ‘cruft’ (“extraneous matter”) on top of the neural underpinnings. And I have a small problem with that. In short, I think the theories that have arisen provide useful guidance for designing systems and learning experiences that wouldn’t have emerged from strictly neural explanations.
Take, for example, cognitive load. John Sweller’s theory posits that there are limits to our mental resources. Thus, having extraneous material can interfere with the ability to process what’s necessary. And it’s led to some important results on things like the importance of worked examples, and making useful diagrams.
We can also look to principles like Bjork’s desirable difficulty. Here, the type of practice matters (as also embodied in Ericsson’s deliberate practice), and it needs to be at the right level of challenge. This might be more easily derivable from neural net models, but it has still provided a useful basis for design.
I could go on: the value of mental models, what makes examples work, the value of creating a motivating introduction, and so on. I’d suggest that these aren’t obvious (yet) from neural models. And even if they are, they are likely more comprehensible from a cognitive perspective than a neural. Others have argued eloquently that neural is the wrong level of analysis for designing learning.
I will suggest, in defense of cognitive psychology, that the phenomena observed provide useful frameworks. These frameworks give us hooks for developing learning experiences that are more complicated to derive from neural models. As I’ve said, the human brain is arguably the most complex thing in the known universe. Eventually, our neural models may well advance enough to provide more granular and accurate models, but right now there’s still a lot unknown.
So I’m not ready to abandon useful guidance, even if some of it is problematic. Separating what’s useful from what’s been overhyped may be an ongoing need, but throwing it all away seems premature. That’s my take; what’s yours?
Matthias Melcher says
Thanks for asking. My take is that there IS some “cruft” in Cognitive Load Theory that does more harm than good: the outdated doctrine of the “Split Attention Effect”, a holdover from the paper age, blocks possibly useful developments. See my post here https://x28newblog.wordpress.com/2018/06/23/cmaps-and-the-split-attention-effect/ or my thoughts for Intelligent Textbooks 2020 here https://x28newblog.wordpress.com/2020/06/25/intelligent-textbooks-rejected/
Clark says
Matthias, I’m a fan of concept maps; worked with Kathy Fisher on SemNet while a grad student. And, collaborating on maps can be a valuable knowledge negotiation approach. There’s empirical work on it. I think of IBIS, for instance. And have you looked at Plectica? I don’t see why concept mapping’s a problem for Cognitive Load. I agree that the tools I’d like to see don’t seem to exist yet (think: a collaborative version of the tool Trapeze that used to run on the Mac).
Matthias Melcher says
Thank you for the pointers to Plectica and Trapeze. I followed them, but I am not sure whether these tools solve the problem of space that I addressed in the 2018 post linked above.
If annotations must sit close to the item they describe, the concept map will eventually grow too large, or become annoying with context-disrupting pop-ups. By contrast, annotations in a fixed corner will, after a short period of habituation, be effortlessly reachable.
I have now assembled another demo here in H5P https://mmelcher.org/wp/uncategorized/trying-out-h5p/ showing the basic idea independently of Cmaps.
Stephen Downes says
Hiya Clark… I address your comments here: https://www.downes.ca/post/71405