Recently, there’s been a lot of excitement about Generative Artificial Intelligence (Generative AI). The excitement is somewhat justified, in that this technology brings two major new capabilities: it’s built upon a large knowledge base, and it can generate plausible versions of output. That output can be in whatever media: text, visuals, or audio. However, there are two directions we can go. We can use this tool to produce more of the same more efficiently, or to do what we’re doing more effectively. The question is what we want as outcomes: quality or quantity?
There are a lot of pressures to be more efficient. When our competitors are producing X at cost Y, there’s pressure to do it for less cost, or to produce more X’s per unit time. Doing more with less drives productivity increases, which shareholders generally think are good. There are always pushes to do things with less cost or time. Which makes sense, under one constraint: that what we’re doing is good enough.
If we’re doing bad things faster, or cheaper, is that good? Should we be increasing our ability to produce planet-threatening outputs? Should we be decreasing the costs on things that are actually bad for us? In general, we tend to write policies to support things that we believe in, and reduce the likelihood of undesirable things occurring (see: tax policy). Thus, it would seem that if things are good, go for efficiency. If things aren’t good, go for quality, right?
So, what’s the state of L&D? I don’t know about you, but after literally decades talking about good design, I still see way too many bad practices: knowledge dump masquerading as learning, tarted up drill-and-kill instead of skill practice, high production values instead of meaningful design, etc. I argue that window-dressing on bad design is still bad design. You can use the latest shiny technology, compelling graphics, stunning video, and all, but still be wasting money because there’s no learning design underneath it. To put it another way, get the learning design right first, then worry about how technology can advance what you’re doing.
Which isn’t what I’m seeing with Generative AI (only the latest instance of ‘shiny object’ syndrome; we’ve seen it before with AR/VR, mobile, virtual worlds, etc.). I am hearing people say “how can I use this to work faster?” or “how can I put out more content per unit time?”, instead of “how can we use this to make our learning more impactful?” Right now, we’re not designing to ensure meaningful changes, nor measuring enough to know whether our interventions are having an impact. I’ll suggest our practices aren’t yet worth accelerating; they still need improving! More bad learning faster isn’t my idea of where we should be.
The flaws in the technology provide plenty of fodder for worrying. These systems don’t know the truth, and will confidently spout nonsense. Generative AIs don’t ‘understand’ anything, let alone learning design. They are also knowledge engines, and can’t create impactful practice that truly embeds the core decisions in compelling and relevant settings. They can aid this, but only with knowledgeable use. There are ways to use such technology well, but they start from the point of actually achieving an outcome beyond having met schedule and budget.
I think we need to push much harder for effectiveness in our industry before we push for efficiency. We can do both, but it takes a deeper understanding of what matters. My answer to the question of quality or quantity is that we have to do quality first, before we address quantity. When we do, we can improve our organizations and their bottom lines. Otherwise, we can be having a negative impact on both. Where do you sit?
Neil Von Heupt says
When parenting, I was often asked whether it was more important to give children quality time or quantity. I always responded ‘both!’. For me, it’s the same here. Continue to work on quality (as the priority, I agree), then produce more of it. I think both can be aided by AI, when used well.
Ray says
I forget which movie this was from, but there’s a scene I recall of characters sitting in a restaurant. One of them complains about the poor quality of the food. The other piles on with “And such small portions!”
Our industry is largely like that restaurant that serves sub-standard food. Serving more of it will not be an improvement.
Many IDs still don’t know how to design effective performance-oriented learning experiences. But even some who DO know how generally don’t, because of tight time constraints and low expectations from their management. The institutions they work for don’t recognize that info-dump training is bad. If the ID wants to change the minds of the higher-ups, the ID will have to do so within the extremely limited time and budget that’s allotted to create info-dump “courses.” It’s in these situations where efficiency gains from AI can be usefully deployed to try to demonstrate the value of better-designed training, without running afoul of institutional time and budget expectations.
Another area where AI efficiency gains can be helpful is for institutions that already produce high-quality training. Certain regulatory requirements, such as making an interactive e-learning course accessible to differently-abled people (e.g., those who use screen readers, or who navigate without the use of a mouse), are excruciatingly time-consuming. My time would be much better spent designing the next quality training intervention than spending hours on tasks like setting focus order for keyboard navigation, creating audio descriptions of the visual content of all my videos, etc. These are tasks I want AI to do for me. AI can do this without messing up my learning design.
I am very skeptical of letting AI create the whole course. The training set would inevitably be filled with tons of garbage designs, so in all likelihood, the AI would just become highly efficient at churning out more garbage. What I want from AI at this stage is targeted assistance that works within the framework of my performance-oriented design. In other words, I want PRODUCTION help, not DESIGN help.
Now, if a training set could be assembled consisting of enough high-quality, performance-oriented designs, then there would be HUGE value in letting the well-trained AI generate instructional designs. But I have yet to see anything like this to date.
Until then, let good IDs create the design, and let AI provide targeted production assistance. In other words, in the short term at least, I think the best we can hope for is efficiency gains.
Robert Spence says
Perhaps we should be considering professionalism that extends well beyond utilising design models and techniques. The ability to argue the need for quality over quantity, starting with becoming a proactive solution provider rather than a reactive order taker and justifying the cost based on a value proposition, is often lacking. Over the years I have observed that organisational business units are usually good at business analysis but not so good at performance analysis. The problem can be that typical organisational learning functions are not so hot on performance analysis either. Think of performance outcomes, have those ratified and “owned” by the business unit, and then frame your design around them – maybe by starting with the design of the assessment. Questions to the business unit like “How will you know when this (intervention) is successful?” help with clarification and with determining the necessary balance between quality and quantity. Sharing an evaluation model with the business unit – such as Reaction, Learning, Application, Impact, ROI, Sustainability and Sharing the Benefit (of the learning) – can often help.
Mohammad says
“I am hearing people say “how can I use this to work faster?” or “how can I put out more content per unit time?”, instead of “how can we use this to make our learning more impactful?”” I like this section so much. Great points.