A couple of recent occurrences have prodded me to think. (Dangerous, I know!) In this case, generative AI continues to generate ;) hype and concern in close to equal measure. That means it dominates conversations, including one I had recently with Markus Bernhardt. Then there was a post by Simon Terry that said something related, but that doesn't completely align. So, here are some thoughts arguing for having an expert in the loop.
First, as a neighbor as well as an AI strategist of renown, I'm grateful Markus and I can regularly converse. (And usually about AI!) His depth and practical experience in guiding organizations complement my long-standing fascination with AI. One item in particular was of note. We were discussing how you need a person to vet what comes out of generative AI, and it became clear that it can't just be anybody. It takes someone with expertise in the area to determine whether what's said is true.
That might suggest the AI is redundant. However, there are limitations to our cognition. As I've recounted numerous times, technology does well what we don't, and vice versa. So, we use tools. One of the things we do is unconsciously forget aspects of solutions that we could benefit from; hence, for instance, checklists. In this case, generative AI can be a thinking partner in that it can spin up a lot of ideas. (Ignoring, for the moment, issues like intellectual property and environmental costs, of course.) They may not all be good, or even accurate, but they may be things we hadn't recalled or even thought of. That would be a nice complement to our thinking. It requires our expertise, but it's a plausible role.
Now, Simon was talking about how 'human in the loop' perpetuates a view of humans as cogs in a machine. And I get it. I, too, worry about having people merely riding herd on AI: for instance, AI doing the creative work, and humans taking responsibility for it. That's broken. But having AI as a thinking partner, with a human generating ideas alongside the AI and taking responsibility for the accuracy as well as the creativity, doesn't seem problematic. (And I may be wrong; these are preliminary thoughts!)
Still, I think that just a 'human in the loop' could be wrong. Having an expert in the loop, as Markus suggested, may be more appropriate. He pointed out a couple of ways generative AIs can introduce errors, and it's a known problem. We have to have a person in the loop, but who? As I recounted recently, are we just training the AI? Still, I can see a case being made that this is the right way to use AI: not as an agent (acting on its own, *shudder*), but as a partner. Thoughts?