One of the things I’ve recognized is that we don’t pay enough attention to context. It turns out to be a really important factor in cognition: our long-term memory interacts with the current context to determine our interpretation, which makes our interpretations very ‘emergent’. Thus, our training needs to ensure that we’re liable to make the right interpretation and so choose the right action. Do we do this well? And can artificial intelligence (AI), specifically generative AI (GenAI), help? Here are some thoughts on context and models.
So, we’ve gone from symbolic models to sub-symbolic ones as we’ve moved to a ‘post-cognitive’ interpretation of our thinking. What’s been realized is that we’re not the formal logical reasoning beings we’d like to think we are. Instead, we’re very much assembling our understanding on the fly, as an interaction between context and memory. In fact, our emergent memory can be altered by the context, as Elizabeth Loftus’ research demonstrated. Which means that, if we want specific interpretations and reactions (e.g. making decisions under uncertainty), we should be careful to provide training across a suitable suite of contexts.
Now, active inference models of cognition suggest that we’re actively building models of how the world works. We’re abstracting across experiences to generate ever-more accurate explanations. Research on mental models suggests that they’re incomplete, not completely accurate, and, arguably most importantly, hard to get rid of if they’re wrong. Thus, providing good models beforehand is important, and work by John Sweller further suggests that examples showing models in context benefit learning. You can present the model, but ultimately the learner must ‘own’ it. So, it’s important to know the models and their range of applicability to facilitate that abstraction.
What is important to know, however, is that GenAI doesn’t build models of the world. This was an important (and, sadly, not self-generated) realization for me. The implication, however, is clear. I have maintained that GenAI can’t understand context, and thus can’t generate suitable practice environments. Which, of course, is to the good for designers, since it leaves them a role ;). Importantly, this framing suggests that GenAI also can’t choose an appropriate suite of contexts for practice, since it doesn’t understand models and when they’re applicable (and when not). (Another designer role!)
I am all for using technology to complement our own cognition. However, that entails knowing what the true affordances of the technology are, and also what it can’t do. So, GenAI can help us think of great settings for practice, along with a person (an expert, actually) to vet the suggestions, of course. It can suggest things we might forget, or ones we haven’t thought of yet. It can, of course, also create ones that aren’t realistic. There are potentially great opportunities here, but we have to know what matters, and what doesn’t. Context and models matter. GenAI can’t understand them. You can take it from there.