As this is my place to ‘think out loud’, here’s yet another thought that occurred to me: is ‘average’ good enough? And just what am I talking about? Well, LLMs are, by and large, trained on vast corpora. Essentially, they’re averaging what is known. They’re creating summaries of what’s out there, based upon what’s out there. (Which, BTW, suggests that they’re going to get worse as they process their own summaries! ;) But should we be looking to the ‘average’?
In certain instances, I think that’s right. If you’re below average in understanding, learning from the average is likely to lift you up. You can move from below average to, well, average. Can you go further? If you’re in well-defined spaces, like mathematics, or even programming, what LLMs know may well be better than average. Not as good as a real expert, but you can raise your game. Er, that is, if you really know how to learn.
Using these systems seems to become a mental crutch if you don’t actually do the thinking. While above-average people seem to be able to use the systems well, those below average don’t seem to learn. If you used them to provide knowledge, then put that knowledge into practice and got feedback (so, for instance, by experimenting), you could fine-tune your performance (not as effectively as having someone provide feedback, but perhaps sufficiently). However, this requires knowing how to learn, and the evidence here is also that we don’t do that well.
So, generative AI models give you average answers. Except, not always. They hallucinate (and, as far as anyone can tell, always will). For instance, they’ll happily support learning styles, because that’s a zombie idea that’s wrong but won’t die. They can even make stuff up, and they don’t know it and can’t admit to it. If you call them on it, they’ll go back and try again, and maybe get it right. Still, you really should have an ‘expert’ in the loop. Which may be you, of course.
Look, I get that they can facilitate speed. Though that would just seem to lead your employer to expect more from you. Would that be accompanied by more money? Ok, I’m getting a bit out of my lane here, but I’m not inclined to think so. But is faster better?
Also, ‘average’ worries me. As I’ve written, Todd Rose wrote a book called The End of Average that is truly insightful. Indeed, it’s one of those books that makes you see the world in a different way, and that’s high praise. The point being that averaging removes the quality: the nuances, the details, just as summarization does. If learning is social (as Mark Britz likes to point out), ideally you should be learning from the best, not the average.
Sure, it can know the average of top thoughts, but what’s better is having those top thinkers. If they’re disagreeing, that makes for better dialog, but it doesn’t summarize well. In truth, I’d rather learn from a Wikipedia page put together by people than a GenAI summary, because I don’t think we can trust GenAI summaries as much as socially constructed understanding. And they’re not the same thing.
So, I’ll suggest ‘average’ isn’t nearly good enough in most cases. We want people who know, and can do. I don’t mind if folks find GenAI useful, but I want them to use it as support, not as a solution. Hey, there’s a lot that can be done with regular AI in many instances, and Retrieval-Augmented Generation (RAG) systems offer some promise of improvement for GenAI (a rough sketch of the idea below), but still not perfect outcomes. And, still, all the other problems (IP, business models, and…). So, where’ve I gone wrong?
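An aside, since I mentioned RAG: the rough idea is to look up relevant source material first and have the model generate from that, rather than from its averaged training data alone. Here’s a minimal, purely illustrative sketch, assuming a toy word-overlap retriever and a stand-in generate() function (no real LLM or vector database involved), just to show the shape of the approach.

```python
# A toy sketch of the Retrieval-Augmented Generation (RAG) idea.
# Everything here is illustrative: "retrieval" is simple word overlap,
# and generate() is a stand-in for whatever generative model you'd call.

def retrieve(query, documents, top_k=2):
    """Rank documents by crude word overlap with the query; return the best few."""
    query_words = set(query.lower().split())
    scored = []
    for doc in documents:
        overlap = len(query_words & set(doc.lower().split()))
        scored.append((overlap, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def generate(prompt):
    """Placeholder for a call to a generative model; here it just echoes the prompt."""
    return f"[model response would be generated from:]\n{prompt}"

def rag_answer(query, documents):
    """Retrieve supporting passages, then ground the generation in them."""
    passages = retrieve(query, documents)
    context = "\n".join(f"- {p}" for p in passages)
    prompt = (
        "Answer the question using only the passages below.\n"
        f"Passages:\n{context}\n"
        f"Question: {query}\n"
    )
    return generate(prompt)

if __name__ == "__main__":
    corpus = [
        "Learning styles have little empirical support as a basis for instruction.",
        "Retrieval practice and feedback improve long-term retention.",
        "Averages can hide the variation that matters for individuals.",
    ]
    print(rag_answer("Do learning styles improve instruction?", corpus))
```

The reason this helps is that the answer gets grounded in specific passages you chose, which can reduce, though not eliminate, the made-up stuff.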
Note, I should be putting references in here, but I’ve read a lot lately and not done a good job of saving the links. Mea culpa. Guess you’ll just have to trust me, or not.