I get it: when you have a hammer, the whole world looks like a nail. Moreover, there’s money on the table, and it’d be a shame not to grab it. Still, there’s also integrity. And, frankly, I fear that we’re going down the wrong path. So I’ll rail again, asking “where’s quality?”
So, a colleague recently provided a link to a report by a well-known analyst. In the report, they call for an AI revolution for L&D. And, yes, I do believe L&D needs a revolution; I wrote a whole book about it. However, I fear that the direction being advised focuses on the wrong thing. Here’s what the initial post summarized about the report:
* Despite significant investment, many companies are utilizing outdated learning models that do not deliver substantial business impact.
* Learning needs to be dynamic, personalized, and focused on enablement.
* Chief Learning Officers (CLOs) should re-establish themselves as leaders within the enterprise, focusing not just on learning but on employee enablement.
* Artificial intelligence (AI) offers the potential to speed up content creation, lower costs, and improve operational efficiency, which allows Learning and Development (L&D) to adopt a wider and more strategic role.
Do you see anything wrong with this? I actually agree with the first point, and probably the third. However, I think we can make a strong case that the second is not the primary issue. And, very clearly, the fourth point (at least before its last phrase) exposes what’s wrong with the second.
So, first, when we invoke learning, we should be very careful to do it right. There are claims that up to 90% of our investment in training goes to waste. However, it’s not because our learning designs aren’t ‘dynamic, personalized, and focused on enablement’; it’s because our learning isn’t designed according to what research says works. Now, our learning needs do change as our abilities improve: increasingly, we know what we need and why. There are also times when performance support can be more effective than courses. Courses can still be valid, if they’re done well.
That’s the point I continue to make: I maintain that we’ll save more money and have more impact if we focus on good learning design before we invest in fancy technology. That includes AI. We want meaningful practice (which I suggest is still a role for designers, as AI doesn’t understand context), not information dump. Knowledge ≠ ability to perform. What we need is practice in doing, at least for novices. Beyond that, only effective self-learners can truly leverage information on their own. Even social learning gets better when we understand learning.
So, learning needs to be evidence-informed, first. Then, and only then, can it be dynamic, personalized, etc. Even knowing when and how to use AI as performance support counts (a more valid role, though there needs to be scrutiny of the advice somehow, as AIs can give bad advice). Sure, CLOs do need to be leaders in the enterprise, but that comes from understanding cognition and learning, and then using those to better enable innovation as well as to optimize performance. Enablement’s fine as a premise, but it’s got to come from understanding. For instance, you can’t get employees contributing just because you put in AI; you need to create a learning culture. (Putting AI into a Miranda organization isn’t going to magically fix the problem.)
Let me be clear: my argument is not Gen AI bad vs Gen AI good. No, it’s learning science involved versus not. I am fine if we start using AI, Gen or otherwise, but only after we’ve made sure we’re doing the right things. Let me pose a hypothetical: for $30K, would you rather have 3 courses or 10? What if those 3 courses were designed to actually have an impact, versus 10 that are pretty and full of information, but won’t move a single meaningful needle for the organization? Sure, I’ve made up the numbers, but the reality is that we’re talking about achieving real outcomes versus making folks feel good; I’ll suggest “it’s pretty and people like it” is no substitute for improving the outcome.
This makes the last bullet above more problematic: we don’t need to speed up content creation. Content dump ≠ learning. Lowering costs and improving efficiency are all well and good, but only after you’ve ensured adequate effectiveness. And no one seems to be talking about that. That’s why I’m asking “where’s quality?” It’s not being discussed, because AI is the next shiny object: “there’s plenty of money to be made”. Anyone else sensing a bubble? And that’s without even considering IP ethics, environmental impact, security, and VC funding. The business model is still up in the air. Hence, my question. Your thoughts?
As an aside, there’s a quote in the report that illustrates their lack of deep understanding: “As our attention spans shorten”. Ahem. While there’s a credible argument made by Gloria Mark, I still suggest it’s not a change in our cognitive architecture, but instead a matter of availability and familiarity. We can still disappear for hours into a novel, movie, or game. It’s a fallacious basis for an argument.
Truth in advertising: I was tempted to title this “WTAH”, but…I decided that might be too incendiary ;). Hence, “Where’s quality?” Still, you can imagine my mood while reading and then writing this.
Hi Clark, long-time reader, first-time poster here.
Completely agree with you.
As an elearning designer, I find the time we have to spend on evidence-informed design is limited. The majority of the time I’m given for a project goes to writing the piece after a design has been agreed.
I’m hopeful that GenAI can speed up the writing/development of a course. This would enable me to spend more time developing a more thoughtful and evidence-informed design.
What do you think?
Thanks for writing this.
In a recent eLearning design challenge, I noticed a clear trend. The submissions that got the most attention were highly polished: clean visuals with animated transitions. They looked great, but when I looked closer, many of them weren’t solving a real learning problem. Some didn’t even include a clear objective or any kind of practice activity. They were just fancy slides.
What I didn’t see much of: performance-based thinking. Problem analysis. Real-world decisions. Opportunities to reflect or apply.
It started to feel like the challenge was rewarding decoration over design. The more visual flair, the better your chances. That sends a dangerous signal. If our professional community starts valuing how something looks more than how it works, we’re heading in the wrong direction.
Appreciate you raising the flag. We need more of these conversations.
Dan, thanks for weighing in! I find it sad that you’re given the project after the design, which could be tragic if the design isn’t any good. Which, all too sadly, is likely an information dump. I’m afraid I don’t see how GenAI speeding up the writing of a course can give you space to do a better design, as it appears the design’s already been decided? Happy to find I’ve missed something. I do think there are opportunities for what I call ‘stealth’ design, where you sneak in better learning despite the design constraints, e.g. writing mini-scenarios instead of knowledge-test questions, and giving models, not just content. Forgiveness is easier than permission, and all that ;). Of course, you’ll perhaps have to spend a few more cycles to get it right, at least while you internalize the approach, but in the long term you’ll be doing better for the org and your learners. Not sure GenAI can help there, but again, happy to hear otherwise.
Trustin, I’m reminded of a design award ceremony I attended on behalf of a client. I got to wander around the displayed winners, and I have to agree. There was one that was visually great, a carnival with games and all. At its core, however, it was tarted-up drill-and-kill, with no meaningful link between setting and content. What Cammy Bean rightly calls “clicky clicky bling-bling”. What a waste! And, yes, I fear that we’re undermining our credibility by doing so. Hence my rant!