While the connectionist folks were working on neural nets, a similar approach was emerging: genetic algorithms. Both worked in a different way than the previous formal approaches to AI, and the distinction between the two camps became known as symbolic vs sub-symbolic. It's useful to review why, particularly in the current climate of increasing interest in AI and cognitive science. An interesting outcome is that the sub-symbolic work exposed the contextualized nature of our reasoning. So there's a link between sub-symbolic and situated cognition.
The prevailing model, starting with the cognitive revolution which arguably began in 1956 (an auspicious year ;), was a formal logical one. Whether in 'production' rules of IF-THEN form, or other formal mechanisms, the notion was to operate on semantic objects like numbers and concepts. This reflected the belief, at the time, that we're formal logical thinkers.
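To make that concrete, here's a minimal sketch of a forward-chaining production system in Python. The facts and rules are hypothetical, just to show the IF-THEN flavor of operating on discrete semantic objects:

```python
# A minimal sketch of a production system, in the spirit of the
# IF-THEN rule architectures of the era (rules here are hypothetical).
working_memory = {"has_fever", "has_cough"}

# Each production: IF all conditions hold, THEN assert the conclusion.
productions = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

# Forward-chain: keep firing rules until nothing new is added.
changed = True
while changed:
    changed = False
    for conditions, conclusion in productions:
        if conditions <= working_memory and conclusion not in working_memory:
            working_memory.add(conclusion)
            changed = True

print(working_memory)
# {'has_fever', 'has_cough', 'possible_flu', 'recommend_rest'}
```

Note that everything here is a named symbol that a person could read and reason about; that transparency was part of the appeal.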
As cognitive research continued, there was a growing recognition that our behaviors didn't match particularly well with formal logic (cf. Kahneman & Tversky's work, summed up in Thinking, Fast and Slow). Several cognitive scientists separately came up with structures that more aptly described some of the properties we saw: Roger Schank called them scripts (he was focused on episodic thinking, not semantic), Marvin Minsky called them frames, and Dave Rumelhart called them schemas (after Bartlett).
What Rumelhart subsequently saw was that the properties he was trying to capture were very hard to represent in formal logic. He went on, with his colleague Jay McClelland and their collaborators, to develop what they called Parallel Distributed Processing (PDP). These are now known as neural nets (NNs) and are the basis for much of machine learning.
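As a rough illustration (not the PDP group's actual models), here's a tiny feed-forward computation in Python; the weights are random placeholders. The point is that in this style of system, 'knowledge' lives in continuous connection weights rather than in discrete, readable symbols:

```python
import math, random

# A minimal PDP-flavored sketch: a tiny feed-forward layer where "knowledge"
# lives in continuous connection weights, not discrete symbols.
# Weights below are random placeholders, not a trained model.
random.seed(0)
inputs = [1.0, 0.0, 1.0]                       # an input pattern of activations
weights = [[random.uniform(-1, 1) for _ in inputs] for _ in range(2)]

def unit(acts, w):
    # Each unit sums weighted activations and squashes the result (sigmoid).
    return 1 / (1 + math.exp(-sum(a * wi for a, wi in zip(acts, w))))

outputs = [unit(inputs, w) for w in weights]
print(outputs)  # a distributed pattern of activation; no one unit "means" anything
```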
I was in the lab at the time Dave and Jay were working on neural nets, but detoured down a different path. Following work on analogical reasoning (my Ph.D. thesis topic), I became aware of the work Holland, Holyoak, Nisbett, & Thagard were doing on induction. Their framework was genetic algorithms (GAs). Both GAs and NNs take input strings and produce output strings, but internally they represent things differently.
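To show the contrast, here's a minimal GA sketch; the 'one-max' fitness function (count the 1-bits) and all the parameters are toy assumptions, not from Holland et al.'s framework. Where the NN's internal state is a set of weights, the GA's is a population of candidate strings reshaped by selection, crossover, and mutation:

```python
import random

# A toy genetic algorithm: evolve bitstrings toward all 1s ("one-max").
# Fitness function and parameters are illustrative assumptions only.
def fitness(bits):
    return sum(bits)  # count of 1-bits

POP, LEN, GENS = 20, 16, 50
population = [[random.randint(0, 1) for _ in range(LEN)] for _ in range(POP)]

for _ in range(GENS):
    # Selection: pick parents with probability weighted by fitness.
    parents = random.choices(
        population, weights=[fitness(b) + 1 for b in population], k=POP
    )
    next_gen = []
    for i in range(0, POP, 2):
        a, b = parents[i], parents[i + 1]
        point = random.randrange(1, LEN)      # single-point crossover
        for child in (a[:point] + b[point:], b[:point] + a[point:]):
            if random.random() < 0.1:          # occasional mutation
                j = random.randrange(LEN)
                child[j] ^= 1                  # flip one bit
            next_gen.append(child)
    population = next_gen

print(max(fitness(b) for b in population))  # approaches 16 over generations
```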
After so much work on symbolic reasoning, here were mechanisms operating beneath the symbolic level, yet attempting to create symbolic behavior. NNs, obviously, more closely resemble our cognitive architecture (though GAs are still used in some areas like program generation). So, our conscious thinking is symbolic, but our actual cognition happens below our conscious thinking. Hence things like illusions, fallacies, myths, and more.
What emerged from this realization is that our cognition isn’t just sub-symbolic, but situated. That is, what is conscious is a combination of what comes in from our senses, and what we know. In fact, with the limited attention we have, much of what we think we’re perceiving, we’re actually generating!
This also accounts for why we're bad at doing things by rote; we're liable to confound steps and contexts. This ends up being important because it means we have to work harder for any learning interventions to work effectively across contexts. The relationship between sub-symbolic and situated is, at least to me, an interesting story in the development of cognitive science.
Yet it still means that our learning works most effectively at the conscious level of symbols, because that can accelerate learning over having to deal with everything through practice and feedback. (And it explains why programs talking about the 'neural' level really aren't working there.) We still need practice and feedback, but conscious models can provide a framework for becoming self-improving over time. So don't forget to provide the models, and sufficient practice, and feedback.