Someone pointed me to a microlearning post, wondering if I agreed with their somewhat skeptical take on it. I did. Further, the article referenced another site with even worse claims, and I think it's instructive to take both apart. They're emblematic of the type of thing we see too often, and it's worth digging in. We need to stop this sort of malarkey. (And I don't mean microlearning as a whole, that's another issue; it's articles like this one that I'm complaining about.)
The article starts out defining microlearning as small bite-sized chunks. Specifically: “learning that has been designed from the bottom up to be consumed in shorter modules.” Well, yes, that’s one of the definitions. To be clear, that’s the ‘spaced learning’ definition of microlearning. Why not just call it ‘spaced learning’?
It goes on to say that "each chunk lasts no more than five-then minutes" (I assume they mean ten). Why? Because attention. Um, er, no. I like JD Dillon's explanation: it needs to be as long as it needs to be, and no longer.
That attention explanation? It went right to the 'span of a goldfish'. Sorry, that's debunked (for instance, here ;). That data wasn't from Microsoft; it came from a secondary service that got it from a study of time spent on web pages. Shorter viewing times could be due to faster pages, greater user experience, or other explanations, but not a change in our attention (evolution doesn't happen that fast, and attention is too complex for such a simple assessment). In short, the original study has been misinterpreted. So, no, this isn't a good basis for anything having to do with learning. (And I challenge you to find a study determining the actual attention span of a goldfish.)
But wait, there's more! There's an example using the 'YouTube' explanation of microlearning. OK, but that's the 'performance support' definition of microlearning, not the 'spaced learning' one. They're two different things! Again, we should be clear about which one we're talking about, and then be clear about the constraints that make it valid. Here? Not happening.
The article goes on to cite a bunch of facts attributed to the Journal of Applied Psychology. That's a legitimate source. But they're not pulling all the stats from the journal; they're citing a secondary site (see above), and it's full of, er, malarkey. Let's see…
That secondary site pulls together statistics in ways that are thoroughly dubious. It starts by citing the journal for one piece of data, and that's a reasonable effect (a 17% improvement for chunking). But then it goes awry. For one, it claims that playing to learner preferences is a good idea, but the evidence is that learners don't have good insight into their own learning. There's a claim of a 50% improvement in engagement, but that's a misreading of the data: 50% of people saying they'd prefer shorter courses doesn't mean you'll get a 50% improvement. The site also makes a different claim about appropriate length than the one above (3-7 minutes), but that argument is unsound too; it sounds quantitative, but it's misleading. They throw in the millennial myth, too, just for good measure.
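To make that 50% confusion concrete, here's a minimal sketch with entirely made-up numbers (none of these figures come from the article or the journal), showing that a preference percentage and an improvement percentage are two different quantities:

```python
# Illustrative only: made-up numbers showing that "50% of learners
# prefer shorter courses" says nothing about engagement improvement.

# A (hypothetical) preference survey: share of respondents who say
# they'd like shorter courses.
prefer_shorter = 0.50  # 50% of respondents

# A (hypothetical) engagement metric measured before and after
# switching to shorter modules, e.g. completion rate.
engagement_before = 0.62
engagement_after = 0.68

relative_improvement = (engagement_after - engagement_before) / engagement_before

print(f"Respondents preferring shorter courses: {prefer_shorter:.0%}")
print(f"Measured engagement improvement:        {relative_improvement:.0%}")
# ~10% here, not 50% -- the survey share and the effect size are
# different things, and one can't be read off the other.
```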
Back to the original article: it cites a figure that isn't on the secondary site, but is listed in the same bullet list: "One minute of video content was found to be equal to about 1.8 million written words." WHAT? That's just ridiculous. 1.8 MILLION?!?!? Found by whom? Of course, there's no reference. And the mistakes go on: the other two bullet points aren't from that secondary site either, and they also don't have citations. The placement of the reference, however, could mislead you into believing that the rest of the statistics also came from the journal!
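For a sense of scale, here's a rough back-of-envelope check. The speaking and reading rates below are my own ballpark assumptions, not figures from the article:

```python
# Back-of-envelope check on the "1 minute of video = 1.8 million words" claim.
# The rates below are rough, commonly cited ballpark figures (my assumptions).

spoken_words_per_minute = 150        # typical narration pace
reading_words_per_minute = 250       # typical silent reading pace

claimed_word_equivalent = 1_800_000  # the article's figure for one minute of video

# How long would it take just to read 1.8 million words?
hours_to_read = claimed_word_equivalent / reading_words_per_minute / 60
print(f"Reading 1.8M words would take roughly {hours_to_read:.0f} hours.")  # ~120 hours

# Versus what a minute of narrated video can actually convey in words:
print(f"A minute of narration carries about {spoken_words_per_minute} words.")
```

However you slice it, equating one minute of video with 120 hours' worth of reading doesn't pass the sniff test.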
Overall, I'm grateful to the correspondent who pointed me to the article. Hype like both of these misleads our field, undermines our credibility, and wastes our resources. It also makes it hard for those trying to sell legitimate services within the boundaries of science. It's important to call this sort of manipulation out. Let's stop the malarkey, and get smart about what we're doing and why.