I read a post today about Artificial Intelligence and education, and it prompted some thoughts, including making me rethink a post I made earlier! It’s all about using AI to assist with instruction, and it’s probably worth sharing my thinking to see what you all think. So here are some thoughts on rethinking AI and education.
The post Carl Hendrick wrote talked (at length ;) about how AI is being used to support education. We know that using AI to create answers keeps you from doing the hard work of learning. So, for instance, the trend of students using AI to write assignments, etc., undermines learning. However, it becomes an ‘us against them’ battle to try to create assignments that require students to think. So far, pretty much the best approach I’ve heard ultimately means spending some time talking to students one on one. Which doesn’t scale well.
Carl was getting philosophical about trends and what teaching vs learning means. And it’s an important point: learning is a process, and teaching is an intervention, ideally to facilitate that process. If we take Geary’s evolutionary learning model, we pretty much need instruction for certain topics. But it led me to question the whole premise.
The conversation largely rode on ChatGPT (and, implicitly, other Large Language Models). These models are created to generate plausible language. Not correct answers, note. And they do it so well that there’s been a revolution in the hype (not the substance, mind you). What concerns me is that LLMs aren’t really able to ‘know’ anything. In my previous post, I posited that we could perhaps combine ‘agents’ (in some way that’d be secure) to create a tutoring model. But I wonder if that’s the right way.
I’m thinking about efforts to generate models that, instead of generating plausible language, do ‘knowing’. That is, modeling the predictive coding models of the brain. It might be hard to get them to the right level, but at least they’d understand. If you think back to the Intelligent Tutoring Systems of the past, they built deep models of expertise in the domain. Could systems learn this, instead of interpreting language ‘about’ this? Coupling such a system with a teaching engine (maybe one that learns what instruction really is) might yield a real tutor.
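For concreteness, here’s a minimal, purely hypothetical sketch of that classic ITS-style decomposition: a domain model that ‘knows’, a learner model that tracks understanding, and a pedagogy engine that picks the next instructional move rather than just handing over answers. All the names (DomainModel, LearnerModel, PedagogyEngine) and the numbers are illustrative, not from any real system.

```python
# Hypothetical sketch: separating "knowing" from "teaching".
# None of this is a real implementation; it just shows the division of labor.

from dataclasses import dataclass, field


@dataclass
class DomainModel:
    """Stands in for a system that actually models the domain (not language about it)."""
    facts: dict  # concept -> canonical statement (toy stand-in for a deep model)

    def evaluate(self, concept: str, student_answer: str) -> bool:
        # A real system would reason over its model; this just checks a canned fact.
        return self.facts.get(concept, "").lower() in student_answer.strip().lower()


@dataclass
class LearnerModel:
    """Tracks a rough estimate of the learner's grasp of each concept."""
    mastery: dict = field(default_factory=dict)  # concept -> 0.0..1.0

    def update(self, concept: str, correct: bool) -> None:
        prior = self.mastery.get(concept, 0.3)
        # Crude update: nudge the estimate toward 1 or 0 depending on performance.
        self.mastery[concept] = prior + 0.2 * ((1.0 if correct else 0.0) - prior)


class PedagogyEngine:
    """Chooses the next move -- a hint, probe, or example -- not the answer."""

    def next_move(self, concept: str, learner: LearnerModel) -> str:
        mastery = learner.mastery.get(concept, 0.3)
        if mastery < 0.4:
            return f"Give a worked example for {concept}"
        if mastery < 0.8:
            return f"Ask a probing question about {concept}"
        return f"Offer a transfer problem applying {concept}"


# The tutor loop couples the three, keeping knowing and teaching as separate concerns.
domain = DomainModel(facts={"projectile_motion": "keeps horizontal velocity"})
learner = LearnerModel()
correct = domain.evaluate("projectile_motion", "it keeps horizontal velocity")
learner.update("projectile_motion", correct)
print(PedagogyEngine().next_move("projectile_motion", learner))
```

The point of the sketch is only the separation of concerns: a learned domain model could replace the canned facts, and a learned teaching policy could replace the hard-coded thresholds, without either one collapsing into plausible-sounding answer generation.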
Carl’s point about the nature of teaching is that it’s much more than providing answers. In the experiment he was citing, they carefully built a tutor that they had to tune to do real teaching, fighting the natural predilection of such systems to provide answers that merely sound correct. That, ultimately, sounds wrong.
(I’m not going into the curriculum and assessment, by the way; I don’t know that what they’re teaching is actually useful in this day and age. Was it knowledge about physics, or actual ability to use it? There are robust results showing that students who learn formal physics still make bad predictions, such as, after a semester, still thinking a ball dropped from a plane lands directly under where it was dropped!)
My point is that trying to make LLMs be teachers may be using AI in the wrong way. Sure, ITSs don’t scale well, but could we build an engine that learns a domain (rather than handcrafting it), and one that learns to teach, and then scale that? My argument is that it’s not language fluency that matters, it’s pedagogical fluency. That’s how I’m rethinking AI and education. Your thoughts?