We colloquially tout the Learning Development Accelerator as a society for ‘evidence-based’ practice. Or, more accurately, ‘evidence-informed’, as Mirjam Neelen & Paul Kirschner advise us in their tome. But what does ‘evidence-informed’ mean in practice? Does everything you do have to align with what research tells us? What’s the practical interpretation? So, I have an admission to make.
To start, if you go to the LDA site (I just did), it says: “Explores and encourages research-aligned practices”. That is a noble goal, to be sure. Let’s be clear, however: research doesn’t cover all our particular situations. In fact, it’s unlikely to cover any of our specific situations. Much of the research we use is done on psychology undergraduates, and frequently for education purposes, e.g., K-12 or higher ed. Which means it’s indicative of our general cognitive processing, but not tailored to our specific contexts.
There is research on organizational learning, to be sure. It’s not always conducted under pristine laboratory conditions, as it may well be meeting real-world needs. Of course, we do see some A/B-type studies. Still, while legitimate, they’re not likely to match our particular situation. That is, our particular audience, our specific learning objectives, our timeline, our urgency, etc.
So what does one do? We must abstract the underlying principles and reinstantiate them for our circumstances. There are good overall principles, such as the benefit of generative activities and spaced retrieval practice. The nature of these, of course, such as choosing the right activities (Thiagi & Matt have a whole book on this!) and the right parameters for retrieval (we’re asking for that at Elevator9), means that we have to customize. Which means we have to test and tune, as sketched below. We can’t expect to get it right the first time. (Though we’ll get better over time.)
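To make ‘test and tune’ a bit more concrete, here’s a minimal Python sketch of an A/B-style comparison of two retrieval-practice schedules. To be clear, this is a toy simulation under invented assumptions: the decay and boost numbers are illustrative, not a validated memory model, and nothing here comes from an actual LDA or Elevator9 specification.

```python
import random
import statistics

def simulate_retention(intervals_days, n_learners=500):
    """Toy model: recall strength decays between sessions and gets a
    boost from each successful retrieval. All numbers are illustrative."""
    scores = []
    for _ in range(n_learners):
        strength = 0.5  # assumed starting probability of recall
        for gap in intervals_days:
            strength *= 0.95 ** gap                  # decay over the gap
            if random.random() < strength:           # retrieval succeeds...
                strength = min(1.0, strength + 0.2)  # ...and boosts strength
        scores.append(strength)
    return statistics.mean(scores)

# Two candidate schedules (days between practice sessions) to compare.
schedule_a = [1, 2, 3, 4]    # fixed-ish spacing
schedule_b = [1, 3, 7, 14]   # expanding spacing

print(f"Schedule A mean strength: {simulate_retention(schedule_a):.3f}")
print(f"Schedule B mean strength: {simulate_retention(schedule_b):.3f}")
```

The point isn’t these particular numbers; it’s that once you instantiate a principle as parameters, you can run the comparison with real learners and keep the variant that wins.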
There will be times when we’re doing something far enough from the research base that we’re kind of making it up as we go along. (An area I love, as it requires considering all the models I’ve mentally collected over the years.) Then, we may find good examples to use as guidance. Someone’s tried something, and it worked for them. If you look at the LDA Research Checklist, for instance, you’ll see that replicated research is desirable. Well, that’s the ideal. We live in the real world, however. BTW, this is a good reason to share what you learn (you may have to anonymize it, for sure): so others benefit.
So, and this is where I make my admission, there will be times when we don’t have adequate guardrails. There are times when we have only some examples, or basically we’re wading into new areas. Then we are free, with a caveat: we can’t do what’s been shown to be wrong. For instance, learning styles. Or the attention span of a goldfish. Or any of the other myths. My take, and I require this for LDA Press as well, is that we ask for the evidence base, but we require that submissions not violate what’s known.
So, evidence-based, research-aligned, etc., at least means avoiding what has been shown not to work. It starts with using the best evidence available to guide design, and then testing (which research also tells us to do!). Why? Because we get better outcomes. We do know that designs which ignore the research are unlikely to have an impact. Learning design is, at core, a probabilistic game. Increasing the likelihood of a real impact should be what we’re about. Doing so on the basis of research is a faster and more reliable path to get there. Ultimately, the answer to the question “what does ‘evidence-informed’ mean?” is better outcomes. Who doesn’t want that?