Recently, there’s been a lot of excitement about Generative Artificial Intelligence (Generative AI). The excitement is somewhat justified, in that this technology brings two major new capabilities: it’s built upon a large knowledge base, and it can generate plausible versions of output in whatever medium you like: text, visuals, or audio. However, there are two directions we can go. We can use this tool to produce more of the same more efficiently, or to do what we’re doing more effectively. The question is what we want as outcomes: quality or quantity?
There are a lot of pressures to be more efficient. When our competitors are producing X at cost Y, there’s pressure to do it for less, or to produce more X’s per unit time. Doing more with less drives productivity increases, which shareholders generally think are good. There are always pushes to do things with less cost or time. Which makes sense, under one constraint: that what we’re doing is good enough.
If we’re doing bad things faster, or cheaper, is that good? Should we be increasing our ability to produce planet-threatening outputs? Should we be decreasing the costs of things that are actually bad for us? In general, we write policies to support things we believe in, and to reduce the likelihood of undesirable things occurring (see: tax policy). Thus, it would seem that if things are good, go for efficiency. If things aren’t good, go for quality, right?
So, what’s the state of L&D? I don’t know about you, but after literally decades of talking about good design, I still see way too many bad practices: knowledge dump masquerading as learning, tarted-up drill-and-kill instead of skill practice, high production values instead of meaningful design, etc. I argue that window-dressing on bad design is still bad design. You can use the latest shiny technology, compelling graphics, stunning video, and all, yet still be wasting money because there’s no learning design underneath it. To put it another way, get the learning design right first, then worry about how technology can advance what you’re doing.
Which isn’t what I’m seeing with Generative AI (only the latest case of ‘shiny object’ syndrome; we’ve seen it before with AR/VR, mobile, virtual worlds, etc.). I’m hearing people ask “how can I use this to work faster?” or “how can I put out more content per unit time?”, instead of “how can we use this to make our learning more impactful?” Right now, we’re not designing to ensure meaningful changes, nor measuring enough of whether our interventions are having an impact. I’ll suggest our practices aren’t yet worth accelerating; they still need improving! More bad learning, faster, isn’t my idea of where we should be.
The flaws in the technology provide plenty of fodder for worry. These models don’t know the truth, and will confidently spout nonsense. Generative AIs don’t ‘understand’ anything, let alone learning design. They are knowledge engines, and can’t create impactful practice that truly embeds the core decisions in compelling and relevant settings. They can aid this work, but only in knowledgeable hands. There are ways to use such technology well, but they start from the aim of actually achieving an outcome beyond meeting schedule and budget.
I think we need to push much harder for effectiveness in our industry before we push for efficiency. We can do both, but it takes a deeper understanding of what matters. My answer to the question of quality or quantity is that we have to get quality right first, before we address quantity. When we do, we can improve our organizations and their bottom lines. Otherwise, we risk having a negative impact on both. Where do you sit?