A recent post on LinkedIn tagged me in. In it, the author was decrying a post by our platform host, which mentioned Learning Styles. That post, as with several others, asks experts to weigh in. Which, I’ll suggest, is a broken model. Here’s my take on why I say don’t use AI unsupervised.
To begin with, learning styles aren’t a thing. We do have instruments, but they don’t stand up to psychometric scrutiny. Further, reliable research evaluating whether adapting to learning styles has a measurable impact comes up saying ‘no’. So, despite fervent (and misguided) support, folks shouldn’t promote learning styles as a basis to adapt to. Yet that’s exactly what the article was suggesting!
So, as I’ve mentioned previously, you can’t trust the output of an LLM. They’re designed to string together sentences by predicting the most probable thing to say next. Further, they’ve been trained, essentially, on the internet. Which entails all the guff as well as the good stuff. So what comes out of its ‘mouth’ has a problematically high likelihood of being utter bugwash (technical term).
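To make that concrete, here’s a rough sketch of the underlying idea. This isn’t any real model’s API; the vocabulary and probabilities are invented, and real LLMs work over tokens with vastly larger models. The point is that the system picks what’s *probable*, not what’s *true*.

```python
import random

# Toy "learned" distribution: given the words so far, how likely is each next word?
# These numbers are made up purely for illustration.
next_word_probs = {
    ("learning",): {"styles": 0.6, "science": 0.3, "curve": 0.1},
    ("learning", "styles"): {"matter": 0.5, "are": 0.4, "help": 0.1},
}

def generate(prompt, steps=2):
    words = list(prompt)
    for _ in range(steps):
        probs = next_word_probs.get(tuple(words))
        if not probs:
            break
        # Sample the next word in proportion to its probability -- the model has
        # no notion of whether the resulting claim is actually accurate.
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["learning"]))  # e.g. "learning styles matter" -- plausible, not vetted
```

Plausible-sounding continuations fall out of the statistics; vetting them doesn’t.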
In this case, LinkedIn (shamefully) is having AI write articles, and then circulating them for expert feedback. To me that’s wrong for two reasons. Each is bad enough in its own right, but together they’re really inexcusable.
The first reason is that these articles have a problematically high likelihood of saying something that’s utter bugwash! That gets out there, without scrutiny, obviously. Which, to me, doesn’t reflect well on LinkedIn: they’re willing to publicly demonstrate that they don’t review what they provide. Their unwillingness to interfere with obvious scams is bad enough, but this really seems expedient at best.
Worse, they’re asking so-called ‘experts’ to comment on it. I’ve had several requests to comment, and when I review the articles, they aren’t suitable for comment. Moreover, asking folks to do this, for free, on their generated content, is really asking for free work. Sure, we comment on each other’s posts. That’s part of community, helping everyone learn. And folks are contributing (mostly) their best thoughts, willing, also, to get corrections and learn. (Ok, there’s blatant marketing and scams, but what keeps us there is community.) But when the hosting platform generates its own posts, in ways that aren’t scrutable, and then invites people to improve them, it’s not community, it’s exploitation.
Simply, you can’t trust the output of LLMs. In general, you shouldn’t trust the output of anything, including other people, without some vetting. Some folks have earned the right to be trusted for what they say, including my own personal list of research translators. Further, you shouldn’t ask people to comment on unscrutinized work. Even your own, unless it’s the product of legitimate thought! (For instance, I reread my posts, but it’s hopefully also clear they’re just me thinking out loud.)
So, please don’t use AI unsupervised, or at least not until you’ve done testing. For instance, you might put policies and procedures into a system, but then test the answers across a suite of potential questions. You probably can’t anticipate them all, but you can do a representative sample. Similarly, don’t trust content or questions generated by AI. Maybe we’ll solve the problem of veracity and clarity, but we haven’t yet. We can do one or the other, but not both. So, don’t use AI unsupervised!
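Here’s a minimal sketch of the kind of supervision I mean, assuming a hypothetical ask() function standing in for whatever AI system holds your policies and procedures. The questions and expected phrases are invented placeholders; a human still reviews anything flagged.

```python
def ask(question: str) -> str:
    """Stand-in for a call to your policy/procedure chatbot; replace with the real thing."""
    return "placeholder answer from the AI system"

# A representative sample of questions, each with phrases a correct answer must contain.
test_suite = [
    ("How many days of annual leave do new hires get?", ["20 days"]),
    ("Who approves expense claims over $500?", ["department head"]),
    ("Can contractors access the staff parking garage?", ["no"]),
]

def run_suite():
    failures = []
    for question, must_contain in test_suite:
        answer = ask(question).lower()
        if not all(phrase.lower() in answer for phrase in must_contain):
            failures.append((question, answer))
    # Flag misses for human review -- passing checks still isn't proof of correctness.
    for question, answer in failures:
        print(f"REVIEW NEEDED: {question!r} -> {answer!r}")
    print(f"{len(test_suite) - len(failures)}/{len(test_suite)} answers passed the checks")

if __name__ == "__main__":
    run_suite()
```

You can’t anticipate every question, but a suite like this at least catches obvious misses before the answers reach your audience.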