I end up seeing a lot of different elearning. And, I have to say, despite my frequent disparagement, it’s usually well-written; the problem seems to be in the starting objectives. But compared to learning that really has an impact (medical, flight, or military training, for instance), it seems woefully under-practiced.
So, I’d roughly (and generously) estimate that the ratio is around 80:20 for content:practice. And, in the context of moving from ‘getting it right’ to ‘not getting it wrong’, that seems woefully inadequate. So, two questions: do we just need more practice, or do we also have too much content? I’ll put my money on the latter, that is: both.
To start, in most of the elearning I see (even stuff I’ve had a role in, for reasons out of my control), the practice isn’t enough. Of course, it’s largely wrong, being focused on reciting knowledge as opposed to making decisions, but there just isn’t enough. That’s ok if you know they’ll be applying it right away, but that usually isn’t the case. We really don’t scaffold the learner from their initial capability, through more and more complex scenarios, until they’re at the level of ability we want: performing the decisions they need to be making in the workplace with enough flexibility and confidence, and with sufficient retention until it’s actually needed. Of course, it shouldn’t be the event model, and that practice should be spaced over time. Yes, designing practice is harder than just delivering content, but it’s not that much harder to develop more of it than just to develop some.
However, I’ll argue we’re also delivering too much content. I’ve suggested in the past that I can rewrite most content to be 40%–60% shorter than it starts (including my own; it takes me two passes). Learners appreciate it. We want a concise model, and some streamlined examples, but then we should get them practicing. And then let the practice drive them to the content. You don’t have to prepackage it as much, either; you can give them some source materials that they’ll be motivated to use, and even some guidance (read: job aids) on how to perform.
And, yes, this is a tradeoff: how do we find a balance that yields the outcomes we need without blowing out the budget? It’s an issue, but I suggest that, once you get in the habit, it’s not that much more costly. And it’s much more justifiable when you get to the point of actually measuring your impact. Which many orgs aren’t doing yet. And, of course, we should.
The point is that I think our ratio should really be 50:50, if not 20:80, for content:practice. That’s if it matters, but if it doesn’t, why are you bothering? And if it does, shouldn’t it be done right? What ratios do you see? And what ratios do you think make sense?
Alexis Kingsbury says
Great article, thanks Clark.
I think you are right – eLearning (and probably training more generally) would benefit from a greater focus on ‘putting it into practice’. However, I think course developers tend to worry that exercises aren’t ‘meaty’ enough, and that people will feel short-changed if they attended a 5-hour workshop and found only 1 hour was content.
In reality, that model would be likely to help attendees really understand the content, and make them more likely to put it into practice rather than ‘come away with 1000 ideas and no clue what to do with them!’.
I’m running a webinar on 25th June, I promise I’ll do my best to include more practice and less content…
All the best,
Alexis
David Gutiérrez says
I feel insecure when I think about a 20:80 ratio, but that’s probably just habit talking: I’m too used to having concepts presented to me instead of working on them. I know that action is the key to learning and, when designing training, I try to overcome the irrational fear of too little explaining.
However, I find that content delivered through practice, that is, embedded in it, is a quite natural way of achieving that ratio. Convincing clients that it’s the right thing to do is the really difficult battle here (if *I* feel insecure, no wonder they are plain anxious about it).
tyelmene says
The argument to shift focus from learning content to acquiring and mastering skill has always been important to make, but in an age of unlimited content access, skill becomes almost the only thing. Knowledge workers today must commit to information processing, data processing/visualization, decision making, writing, presentation, productivity, and professional development as the minimum ‘core competency’ skills to be fostered and honed. Very soon, knowing a “thing,” no matter how valuable it once was, will buy you nothing. Whereas knowing how to do even the most routine function, as long as that functional skill has some value to a prospective stakeholder, will be the difference that makes all the difference.
The problem is the medium. Any form of ‘programmatic instruction’ is absolutely ill-suited to truly developing skill. That’s what the real world of trial and error is for! We gain real-world expertise by acting and self-directing our own learning in the real situational settings we encounter. I’m sure e-learning can progress, especially as more and better simulation-based training comes of age and is technologically enabled by augmented/virtual reality techs, but the real world has tremendous advantages.
I say: currently 15:85 is the appropriate content-to-skill ratio, and in the near future it will be more like 10:90.
John Laskaris says
To me it’s highly dependent on the course subject. There are issues requiring more practice, and there are rather theoretical issues that don’t require that much practice. That’s why defining a constant ratio isn’t actually an easy task. It would seem reasonable to me to establish some average ratio around which we’d balance according to the particular course requirements.
Clark says
Great feedback. Yes, it is challenging, and I appreciate your personal insight, Alexis and David. I reckon it’s hard as an individual designer, and I think the more external support we put in the world (including reflections about ratios ;)), the clearer it can become. Also look to models like the military, medicine, etc.
And interesting contextualization, Tyelmene. I agree that it’s going to be more important, and blended and beyond the course are part of it. But our courses can also stand to have more practice. I was going gentle on the ratio, but I am trying to shift mindsets first!
It will vary by content, John, but I believe that a course that’s purely theoretical, without practice, will only work for those who are already engaged in practice, for whom it serves as a reflection opportunity (much like keynotes aren’t practice but can lead to learning). I am arguing this for those courses that try to be a full learning experience. Even theory needs application to be retained and transferred.
David Glow says
To borrow from your colleagues (70:20:10): even they admit the ratio isn’t the important thing. The thinking that snaps us out of our habits (our current content addiction) is what truly matters.
The focus on the “DO” first, and on what content supports it (as a supporting character in the theater of performance), is what matters. Sometimes content will play a limited role (perhaps pivotal, but limited).
If designers start with performance and back into what practice is needed and then what content is required to support that process (Cathy Moore’s Action Mapping is spot on), we should be on the right track.
Currently, my experience is that most design processes first huddle SMEs to tell everything they know about a subject and “what folks *need* (cough) to know”. After that exhaustive process, with the limited remaining time, they slap some sort of “skills check” at the end (I am being generous with that statement, because it is almost always a multiple-choice recall check vs. a skill check – classic bore-and-score design… college lecture/book and exam, anyone?).
I always try to start with “design the do” and hold off ALL content discussions until after a valid, true skills check is designed. I don’t always win. Habits die hard, and many who pride themselves on contributing via the standard design methodology (both SMEs and designers), but who are less equipped to design the practice/feedback loops that users actually need, can be very threatened by the performance-centered approach, as it exposes an ability gap.
Clark says
Nice comment, David, and I think you’ve nailed the problem: listening to the SME tell you what’s needed (when we know they don’t have access to what they do, but they do have access to what they know). Of course, sympathies to those who actually have certification restrictions that enforce listening to the SME (as well as time constraints). And it’s not about the numbers, for sure; I’m just trying to provide ways to shift the focus to, as you say, “design the do”. We need better SME processes, I reckon.