As part of the Learning Development Conference that’s running for the next five weeks (it’s not too late to join in!), there have already been several events. Given that the focus is on evidence-based approaches, a group set up a separate discussion room for learning science. Interestingly, though perhaps not surprisingly, our discussion ended up including barriers. One of those barriers, as has appeared in several guises across recent conversations, is the set of expectations placed on L&D. Some of them are our own, and some are others’, but they all hamper our ability to do our best. So I thought I’d discuss some of these misaligned expectations.
One of the most prominent expectations is around the timeframes for L&D work. My take is that after 9/11, a lot of folks didn’t want to travel, so training moved online wholesale. Unfortunately (as with the lingering pandemic), there was little focus on rethinking the design, and instead a mad rush to get things online. Which meant that a lot of content-based training ended up as content-based elearning. That rush to take content and put it onscreen drove some of the excitement around ‘rapid elearning’.
The continuing focus on efficiency – taking content, adding a quiz, and putting it online – was pushed to the extreme. It’s now an expectation that, with an authoring tool and content, a designer can put up a course in 1-2 weeks. That might satisfy some box-checking, but it isn’t going to lead to any meaningful change in outcomes. Really, we need slow learning! Yet there’s another barrier here.
Too often, we have our own expectation that “if we build it, it is good”. That is, we take an order for a course, we build it, and we assume all is well. There’s no measurement to see whether the problem is fixed, let alone tuning to ensure it is. We don’t hold the expectation that we need to be measuring our impact! Sure, it’s hard; we have to talk to the business owners about measurement, and get data. Yet, like other areas of the organization, we should be looking for our initiatives to lead to measurable change. One of these days, someone’s going to ask us to justify our expenditures in terms of impact, and we’ll struggle if we haven’t changed.
Of course, another of our misaligned expectations is that our learning design approaches are effective. We still see, too often, courses that are content dumps, not serious solutions. This is, of course, why we’re talking about learning science, but while some of us have support to be evidence-based, others still do not. We face a populace – stakeholders and audiences alike – who have all been to school. Therefore, the expectation is that if it looks like school, it must be learning. We have to fight this.
It doesn’t help that well-designed (and well-produced) elearning is subtly different from merely well-produced elearning. We can’t expect our stakeholders to know the difference (and, frankly, many vendors get by on this), but we must know it, and we must fight for its importance. I laud the orgs that expect their learning group to be as evidence-based as the rest of the business, and whose groups can back that up with data, but they’re sadly not as prevalent as we need.
There are more, but these are some major expectations that interfere with our ability to do our best. The solution? That’s a good question. I think we need to do a lot more education of our stakeholders (as well as ourselves). We need to (gently, carefully) generate an understanding that learning requires practice and feedback, and extends beyond the event. We don’t need everyone to understand the nuances (just as we don’t need to know the details of sales or operations or…unless we’re improving performance on it), but we do need them to think in terms of reasonable amounts of time to develop effective learning, to recognize that this requires data, and to accept that not every problem has a training solution. If we can adjust these misaligned expectations, we just might be able to do our jobs properly, and help our organizations. Which, really, is what we want to be about anyway.