Something triggered, for me, an analogy. I was thinking of aphasias, and thought one might be a good analogy for LLMs: both for understanding them, and as guidance for using them. So here are some initial thoughts on an aphasia analogy for LLMs.
First, while aphasia is complex, two types reliably correlate with damage to specific areas of the brain: Broca's and Wernicke's. Interestingly, they're partners: both deal with knowledge and language, that is, what we know and how we're able to communicate it. Each comes from damage to a specific part of the brain, but they have opposite effects.
Broca's aphasia is reasonably clear. The evidence suggests that folks retain their knowledge but struggle to communicate it: what comes out is broken and ungrammatical, yet meaningful. People generally have no trouble thinking, just talking about it. There is, of course, a corresponding region of the brain: Broca's area.
The companion is Wernicke's aphasia. Here, the language is fluent, even eloquent, but essentially nonsensical. People may have thoughts, but what they say has no internal cohesion; there can even be made-up words! The corresponding region of the brain is Wernicke's area.
You can probably see where this is going: LLMs learn from large corpora of text to produce accurate language. Not accurate answers, but accurate language, an important distinction! They're not damaged, so if most of the training set is accurate, what they say will mostly be accurate too. If it isn't, the output can be fluently wrong. And, of course, they can simply say things that sound right but aren't, such as making up books, court cases, and more. They're essentially Wernicke systems!
What does this mean? It means you can probably have a good idea-generation session with an LLM, or give it a language task like summarizing or generating text. What it also means is that there's no way you should trust what comes out to be accurate: you need an expert in the loop. And given that they've demonstrably been corruptible, besides not always being correct, they shouldn't be trusted to act on your behalf. I worry a wee bit about them being good enough that most of the time what comes out is OK. That could, of course, lull you into complacency! Hopefully folks will always feel the professional obligation to ensure that what comes out is correct.
So, understand what LLMs do, if not how they do it. Then, act accordingly. (Feel free to also investigate issues like IP, environment, biz model, and more.) Caveat emptor!