It’s June, and June is Learning Styles month for the Debunker’s Club. Now, I’ve gone off on Learning Styles before (here, here, here, and here), but it’s been a while, and they refuse to die. They’re like zombies, coming to eat your brain!
Let’s be clear: it’s patently obvious that learners differ. They differ in how they work, what they pay attention to, how they like to interact, and more. Surely, then, it makes sense to adapt the learning to their style, so that we’re optimizing their outcomes, right?
Er, no. There is no consistent evidence that adapting to learning styles works. Hal Pashler and colleagues, in a study commissioned for Psychological Science in the Public Interest (read: non-partisan, unbiased, truly independent work), found (PDF) that there was no evidence that adapting to learning styles worked. They reviewed the research out there and reached this conclusion with statistical rigor. That is, some studies showed positive effects and some showed negative, but across the body of studies rigorous enough to be worth evaluating, there was no evidence that adapting learning to learner characteristics had a reliable impact.
At least part of the problem is that the instruments people use to characterize learning styles are flawed. Surely, if learners differ, we can identify how? Not with psychometric validity (that is, tests that stand up to statistical analysis). A commissioned study in the UK (like the one above: independent, etc.) led by Coffield evaluated a representative sample of instruments (including the ubiquitous MBTI, Kolb, and more), and found (PDF) only one that met all four standards of psychometric validity. And that one was a simple instrument measuring a single dimension.
So, what’s a learning designer to do? Several things. First, design for what is being learned: use the best learning design to accomplish the goal, and then, if the learner has trouble with that approach, provide help. Second, do use a variety of ways of supporting comprehension. Variety is good, even if the evidence for basing it on learning styles isn’t. (So, for example, 4MAT isn’t bad, it’s just not based upon sound science, and why you’d want to pay to use a heuristic approach when you can do that for free is beyond me.)
Learners do differ, and we want them to succeed. The best way to do that is good learning experience design. We do have evidence that problem-based and emotionally aware learning design helps. We know we need to start with meaningful objectives, create deep practice, ground in good models, and support with rich examples, while addressing motivation, confidence, and anxiety. And using different media maintains attention and increases the likelihood of comprehension. Do good learning design, and please don’t feed the zombie.
James Tyer says
A good summary of good design vs. folk tales, Clark. I feel it’s time to leave the learning styles debate behind, as it seems to be wasted effort on many. I’ve been looking for testing of psychometric validity – so thank you for the link. So many myths…so little time. We keep putting people in boxes in the workplace, which are hard to leave once your “box” becomes “you” in the culture. The funny thing is, most of what is “good” design comes down to simple communication and social interaction ideas. The industry tends to make everything more and more complicated in order to sell the next big thing.
John Laskaris says
Well, we can’t satisfy everyone when creating eLearning content and turning it into a course. People differ, and that’s why, to me, it’s fine when the course is decently prepared, meaning everyone (or almost everyone) is able to understand the subject and gain new knowledge – this is the thing we should keep in mind while making the course.
Clark says
James, I’d welcome leaving the debate behind, if it didn’t keep reappearing and eating people’s brains! Though I might challenge the communication and social interaction; I think the real key is ‘application’. Practice, practice, practice. And yes, that can (and often should) be social.
And yes, John, indeed it’s about designing so that the material is comprehended (and applied).
Cathe says
“(So, for example, 4MAT isn’t bad, it’s just not based upon sound science, and why you’d want to pay to use a heuristic approach when you can do that for free is beyond me.)” I learned about 4MAT in a couple of doctoral courses in the early 1990s and have used it informally ever since. Each of us in the first course took the assessment, and most of us felt that our result mirrored our own perceived strengths and weaknesses pretty well. (Sorry that I didn’t keep the notes from that class; I’d love to be able to review for which students it seemed wrong and why…) Dr. Bernice McCarthy gave a guest lecture after we had scored each other’s assessments and answered questions. She was consistent in what she said and appeared sincere about her motivations. At that point in my career I had already worked as both a research psychometrician and as a school psychologist with elementary students. Her information “felt right” to me based on my testing experience with hundreds of children… I have used the broad concepts of 4MAT as a checklist when I create instructional programs to help me target my training to the widest possible audience. People unfamiliar with 4MAT can learn more so they can evaluate the approach – although not the science – for themselves: http://www.aboutlearning.com/what-is-4mat
Clark says
Cathe, that’s just the problem: “felt right”. That’s not science. And we’ve got to get to science in our practices, because we see too much that’s bad precisely because it’s based on intuition rather than science. Like I said, it’s not bad, it’s just not science. There are better paths to the type of design that works for learning.