In my inbox was an announcement about a new white paper with the intriguing title Human Learning is About to Change Forever. So naturally I gave up my personal details to download a copy. There are nine claims in the paper, from the obvious to the ridiculous. So I thought I’d have some fun.
First, let’s get clear. Our learning runs on our brain, our wetware. And that’s not changing in any fundamental way in the near future. As a famous article once had it: phenotypic plasticity triumphs over genotypic plasticity (in short, our human advantage has been gained via our ability to adapt individually and learn from each other, not through species evolution). The latter takes a long time!
And as a starting premise, the “about to” bit implies these things are around the corner, so that’s going to be a bit of my critique. But nowhere near all of it. So here’s a digest of the nine claims and my comments:
- Enhanced reality tools will transform the learning environment. Well, these tools will certainly augment the learning environment (pun intended :). There’s evidence that VR leads to better learning outcomes, and I have high hopes for AR, too. Though is that really a fundamental transition? We’ve had VR and virtual worlds for over a decade at least. And is VR an evolutionary or a revolutionary change from simulations? Then they go on to talk about performance support. Is that transforming learning? I’m on record saying contextualized learning (e.g. AR) is the real opportunity to do something interesting, and I’ll buy it, but we’re a long way away. I’m all for AR and VR, but saying that it puts learning in the hands of the students is a design issue, not a technology issue.
- People will learn collaboratively, no matter where they are. Um, yes, and…? They’re already doing this, and we’ve been social learners for as long as we’ve existed. I still think the possibility of collaboratively creating in 3D in virtual worlds is potentially cool, but even as the technology limitations come down, the cognitive limitations remain. I’m big on social learning, but mediating it through technology strikes me as just a natural step, not transformation.
- AI will banish intellectual tedium. Everything is awesome. Now we’re getting a wee bit hypish. The fact that software can parse text and create questions is pretty impressive. But questions about semantic knowledge aren’t going to transform education. Whether the questions are developed by hand or by machine, they aren’t likely on their own to lead to new abilities to do. And AI is not yet at the level (nor will it be soon) where it can take content and create compelling activities that will drive learners to apply knowledge and make it meaningful.
- We will maximize our mental potential with wearables and neural implants. Ok, now we’re getting confused and a wee bit silly. Wearables are cool, and the fact that they can sense things about you and the world means they can start doing some very interesting AR. But transformative? This still seems like a push. And neural implants? I don’t like surgery, and messing with my nervous system when you still don’t really understand it? No thanks. There’s a lot more to it than managing to adjust firing to control limbs. The issue is again about the semantics: if we’re not getting meaning, it’s not really fundamental. And given that our conscious representations are scattered across our cortex in rich patterns, this just isn’t happening soon (nor do I want that much connection; I don’t trust them not to ‘muck about’).
- Learning will be radically personalized. Don’t you just love the use of superlatives? This is in the realm of plausible, but as I mentioned before, it’s not worth it until we’re doing it on top of good design. Again, putting together wearables (read: context sensing) and personalization will lead to the ability to do transformative AR, but we’ll need a new design approach, more advanced sensors, and a lot more backend architecture and semantic work than we’re yet ready to apply.
- Grades and brand-name schools won’t matter for employment. Sure, that MIT degree is worthless! Ok, so there’s some movement this way. That would actually be a nice state of affairs. It’d be good if we started focusing on competencies, and built new brand names around real enablement. I’m not optimistic about the prospects, however. Look at how hard it is to change K12 education (the gap between what’s known and what’s practiced hasn’t significantly diminished in the past decades). Market forces may change it, but the brand names will adapt too, once it becomes an economic necessity.
- Supplements will improve our mental performance. Drink this and you’ll fly! Yeah, or crash. There are ways I want to play with my brain chemistry, and ways I don’t. As an adult! I really don’t want us playing with children, risking potential long-term damage, until we have a solid basis. We’ve had chemicals supporting performance for a while (see military use), but this is still in its infancy, and here I’m not sure our experiments with neurochemicals can surpass what evolution has given us, at least not without some pretty solid understanding. This seems like long-term research, not near-term plausibility.
- Gene editing will give us better brains. It’s alive! Yes, Frankenstein’s monster comes to mind here. I do believe it’s possible that we’ll be able to outdo evolution eventually, but I reckon there’s still much we don’t know about the human genome or the human brain. This similarly strikes me as a valuable long-term research area, but in the short term there are so many gene interactions we don’t yet understand that I’d hate to risk the possible side-effects.
- We won’t have to learn: we’ll upload and download knowledge. Yeah, it’ll be great! See my comments above on neural implants: this isn’t yet ready for primetime. More importantly, this is supremely dangerous. Do I trust what you say you’re making available for download? Certainly not; we can’t trust many things now, including advertisements. Think about downloading to your computer: not just spam ads, but viruses and malware. No thank you! Not that I think it’s close, but I’m not convinced we can ‘upgrade our operating system’ anyway. Given the way that our knowledge is distributed, the notion of changing it with anything less than practice seems implausible.
Overall, this reads more like a sci-fi fan’s dreams than a realistic assessment of what we should be preparing for. No, human learning isn’t going to change forever. The ways we learn (e.g. the tools we learn with) are changing, and we’re rediscovering how we really learn.
There are better guides available to what’s coming in the near term that we should prepare for. Again, we need to focus on good learning design, and on leveraging technology in ways that align with how our brains work, not trying to meld the two. So, those are my opinions; I welcome yours.