Learnlets


Clark Quinn’s Learnings about Learning

Sub-symbolic and Situated

13 August 2019 by Clark

Around the time the connectionist folks were working on neural nets, a similar approach emerged: genetic algorithms. Both worked in a different way than the previous formal approaches to AI. The distinction between the two became known as symbolic vs sub-symbolic. And it’s useful to review why, particularly in the current climate of increasing interest in AI and cognitive science. An interesting outcome is that the sub-symbolic work exposed the contextualized nature of our reasoning. So there’s a link between sub-symbolic and situated cognition.

The prevailing model, starting with the cognitive revolution that arguably began in 1956 (an auspicious year ;), was a formal logical one. Whether in ‘production’ rules of IF-THEN form or other formal mechanisms, the notion was to operate on semantic objects like numbers and concepts. This reflected the belief, at the time, that we’re formal logical thinkers.
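
To make that concrete, here’s a minimal sketch of a forward-chaining production system in Python. It’s a hypothetical toy of my own, not any particular historical system, but it shows the key property: IF-THEN rules firing over explicitly represented facts, with every step inspectable at the level of symbols.

```python
# Minimal forward-chaining production system (a hypothetical toy).
# Facts and rules are explicit symbols; reasoning is rule firing.

facts = {"bird(tweety)"}

# Each rule: IF condition fact THEN conclusion fact.
rules = [
    ("bird(tweety)", "has_wings(tweety)"),
    ("has_wings(tweety)", "can_fly(tweety)"),
]

# Keep firing rules until no new facts are produced.
changed = True
while changed:
    changed = False
    for condition, conclusion in rules:
        if condition in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # every inference step is readable as symbols
```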

As cognitive research continued, there was a growing recognition that our behaviors didn’t match particularly well with formal logic (cf. Kahneman & Tversky’s work, summed up in Thinking, Fast and Slow). Several cognitive scientists separately came up with structures that more aptly described some of the properties we saw: Roger Schank called them scripts (he was focused on episodic thinking, not semantic), Marvin Minsky called them frames, and Dave Rumelhart called them schemas (after Bartlett).

What Rumelhart subsequently saw was that the properties he was trying to capture were very hard to represent in formal logic. He went on, with his colleague Jay McClelland (and their collaborators), to develop what they called Parallel Distributed Processing (PDP). These are now known as neural nets (NNs) and are the basis for much of machine learning.
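
For contrast, here’s a minimal sketch of a PDP-style network in Python with numpy. It’s a toy under my own assumptions (XOR as the task, a tiny two-layer net trained by backpropagation), not Rumelhart & McClelland’s actual models. The point is where the knowledge lives: smeared across continuous weights rather than in any readable symbol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (an assumption for illustration): learn XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))  # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # network output
    # Backpropagate squared error; plain gradient descent.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(0, keepdims=True)

# The mapping should approach [[0], [1], [1], [0]], but the "why"
# is distributed across the weights -- sub-symbolic, not a rule.
print(out.round(2))
```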

I was in the lab at the time Dave and Jay were working on neural nets, but detoured down a different path. Following work on analogical reasoning (my Ph.D. thesis topic), I became aware of the work Holland, Holyoak, Nisbett, & Thagard were doing with induction. Their framework was genetic algorithms (GAs). Both GAs and NNs use input strings and output strings to work, but internally they represent things differently.
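
And here’s an equally minimal genetic algorithm sketch. The fitness function (maximize the count of 1s in a bit string) is a stand-in task I’ve assumed for illustration, not Holland et al.’s classifier systems, but the string-in, string-out character shows: candidate solutions are bit strings that get selected, recombined, and mutated across generations.

```python
import random

random.seed(1)
LENGTH, POP, GENS = 20, 30, 60

def fitness(bits):
    # Toy objective (an assumption): count of 1s in the string.
    return sum(bits)

# Random initial population of bit strings.
pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]            # truncation selection
    children = []
    while len(survivors) + len(children) < POP:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, LENGTH)  # one-point crossover
        child = a[:cut] + b[cut:]
        i = random.randrange(LENGTH)       # point mutation
        child[i] ^= 1
        children.append(child)
    pop = survivors + children

best = max(pop, key=fitness)
print(fitness(best), "".join(map(str, best)))  # climbs toward LENGTH
```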

After so much work on symbolic reasoning, here were mechanisms operating beneath the symbolic level. Yet they were attempting to create symbolic behavior. NNs, obviously, more closely resemble our cognitive architecture (though GAs are still used in some areas like program generation). So, our conscious thinking is symbolic, but our actual cognition is happening below our conscious thinking. Hence things like illusions, fallacies, myths, and more.

What emerged from this realization is that our cognition isn’t just sub-symbolic, but situated. That is, what is conscious is a combination of what comes in from our senses, and what we know. In fact, with the limited attention we have, much of what we think we’re perceiving, we’re actually generating!

This also accounts for why we’re bad at doing things by rote; we’re liable to confound steps and contexts. This ends up being important because it means we have to work harder for any learning interventions to work effectively across contexts. The relationship between sub-symbolic and situated is, at least to me, an interesting story of the development of cognitive science.

Yet it still means that our learning works most effectively at the conscious level of symbols, because that can accelerate learning over having to deal with everything through practice and feedback. (And it explains why programs that talk about working at the ‘neural’ level really aren’t working there.) We still need practice and feedback, but conscious models can provide a framework to become self-improving over time. So don’t forget to provide the models, and sufficient practice, and feedback.
