Kaku #DevLearn Keynote Mindmap
Michio Kaku opened the second day of DevLearn with a keynote on the future of the mind. He extrapolated from current research to some speculative ideas of what our future could hold. He talked about research from physics (?!?) on MRI, AI, and more providing new capabilities.
Bethune #DevLearn Keynote Mindmap
Kevin Bethune kicked off the 2022 DevLearn conference with a personal story about his path to delivering strategic innovation. Talking about interdisciplinary work that has an impact, he ended up laying out the leadership factors that support innovation. (Apologies, I had to take a brief break, so I missed a small bit.)
Feast or Famine
This week is a wee bit of a hectic one, capping a few of the same. There’s an old saying about feast or famine, and I’m living it. It’s better than the alternative, regardless. A taste:
So, as context, I’ve been doing a few things (as previously noted). In addition to working on a STEM project, advising a startup, co-directing the LDA, and continuing Quinnovation work, I’m now also serving as Chief Learning Strategist for Upside Learning. This latter role is exciting for me, as they’re one of many custom elearning solution providers, but the first I’ve seen really commit to learning science. They want to lift their game; truly refreshing! It’s also a chance to really practice what I preach, and of course learn what works and what I’m wrong about. Good folks, too, as I’ve found since beginning to work with them.
All of this has manifested in some commitments, including being in the middle of the LDConference. As part of it, I’m running four weeks on learning science, and will be starting three weeks on learning technology. In addition, two weeks ago I opened the People Matters L&D Conference in India, and ran a master class the same day. I was able to visit Upside in person before flying back, which was an unexpected bonus. Not a complete surprise, I came back with a raging cough (testing negative and with no fever, fortunately).
This week, as a topper, is DevLearn. I like DevLearn, as the Guild runs a good event, and as such it attracts many of my collegial friends. It’s a chance to hang out with some of my favorite folks! My schedule, of course, is a wee bit frenetic. Monday I run a Make It Meaningful workshop. Tuesday I’m a facilitator for the Learning Leaders forum (on short notice); I’ll have to take a break to run my learning science event for the LDC! Wed and Thurs morn I’ll be spending time in the booth with Upside, culminating in a book giveaway and signing. Thurs afternoon I’m on a Guild Master panel before running my own session on some work I’ve been doing. Back-to-back busyness…
Finally, on Friday, I can actually attend sessions before I fly home. After that, it’s just LDC (and LDA) until mid-Nov, and then life gets sorta kinda back to normal. I think! It’s definitely feast or famine. 2020 and 2021 were too much famine; I prefer feast. Busy is better than the alternative, though I’m looking forward to catching my breath. Sorry for less reflection than normal, but this is front of mind for now. Next week hopefully we’ll be back to normal here as well ;).
Fewer myths, please
I had the pleasure of being the opening keynote at the People Matters L&D conference in Mumbai this past week, with a theme of ‘disruption’. In it, I talked about some particular myths and their relation to our understanding of our own brains. Following mine, I sat through some other presentations, and heard at least one other myth being used to flog solutions. So, fewer myths, please.
My presentation focused on the evidence that we’re still operating under the assumption that we’re logical reasoners (which, I pointed out, isn’t apt). I mentioned annual reviews, bullet-point presos, unilateral decisions, and more. I also cited evidence that L&D isn’t doing well, which is a worry. Pointing to post-cognitive frameworks like predictive coding, situated & distributed cognition, and more, I argued that we need to update our practices. I closed by urging two major disruptions: measurement, and implementing a learning culture in L&D before taking it out to the broader org.
In a subsequent presentation, however, the presenter (from a sponsoring org) was touting how leadership needed to accommodate millennials. I’m sorry, but there’s considerable evidence that ‘generational differences’ are a myth. The boundaries are arbitrary, there are no significant differences in workplace values, and every effect is attributable to age and experience, not generation. (Wish I could find a link to the ‘eulogy for millennials myth’ two academics wrote.)
Another talk presented a lot of data, but ultimately seemed to be about supporting user preferences. Sorry, but user preferences, particularly novices’, aren’t a good guide. There was also a pitch for an ‘all-singing, all-dancing’ solution. Which could be appealing, if you’re willing to live with the tradeoffs: for instance, locking into whatever features your provider is willing to develop, and living without best-of-breed for all components.
Yes, it’s marketing hype. However, marketing hype should be based on reality, not myths. I can accept promising a bit more than you can deliver, and focusing on the features you’re strong on. I can’t see telling people things that aren’t true. My first step in dealing with the post-cognitive brain is to know the cognitive and learning sciences, so you’ll know what’s plausible and what’s not. Not to PhD depth, but to have a working knowledge. That’s the jumping-off point for the necessary disruption, the revolution, that L&D needs to have. And fewer myths, please!
Misusing affordances?
‘Affordances’ is a complex term. Originally coined by Gibson, and popularized by Norman, it’s largely been used in terms of designing interfaces. Yet it’s easy to misinterpret. I may have been guilty myself! In the past, I used it as a way to characterize technologies, which isn’t really the intent, as it’s about sensory perception and action. So maybe I should explain what I mean, so you don’t think I’m misusing affordances.
To be clear, in interface design it’s about the affordances you can perceive. If something looks like it can slide (e.g. a scrollbar), it lets you know you might be able to move the contents of a related window or field. Similarly, a button affords pushing. One of the complaints about touch screens arises as people work to overload more functions onto gestures: there might be affordances you can’t perceive. Does a two-fingered swipe do anything different than a single-finger swipe?
In my case, I’m talking more about what a technology supports. In my analysis of virtual worlds and mobile devices, I was looking to see what their core capabilities are, and so what we might naturally do with them. Similarly with media, what are their core natures?
So, for instance, an LMS’s core affordance is managing courses. Video captures dynamic context. You might be able to do course management with a spreadsheet and some elbow grease, or you can mimic video with a series of static shots (think: Ken Burns) and narration, but the purpose-designed tool is likely going to be better. There are tradeoffs. You can graft capabilities onto a core, but an LMS still won’t naturally serve as a resource repository or social media platform.
It’s an analytical tool, in my mind. You should end up asking: what’s the DNA? For example, you can match the time affordance of different mobile devices to the task. You can determine whether you need a virtual world or VR based upon whether you truly need visual or sensory immersion, action, and social interaction (versus the tradeoffs of cost and cognitive overhead).
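To make that concrete, here’s a minimal sketch of how one might operationalize this kind of affordance matching: technologies mapped to their core capabilities, and candidates ranked by how many of a task’s needs their core (not grafted-on) affordances cover. The technology list, capability labels, and scoring rule are my illustrative assumptions, not a definitive model.

```python
# A minimal sketch of affordance-based technology matching.
# The technology names, capability labels, and scoring rule are
# illustrative assumptions, not established mappings.

TECH_AFFORDANCES = {
    "LMS":           {"course management", "tracking"},
    "LXP":           {"content aggregation", "recommendation"},
    "mobile":        {"portability", "context sensing", "quick access"},
    "virtual world": {"visual immersion", "action", "social interaction"},
    "VR":            {"sensory immersion", "action"},
    "AR":            {"world annotation", "performance support"},
}

def match_technologies(needs):
    """Rank technologies by how many needs their *core* affordances cover."""
    scores = [
        (tech, len(needs & affordances))
        for tech, affordances in TECH_AFFORDANCES.items()
    ]
    # Zero-coverage technologies would require grafting capabilities
    # onto the core, with the attendant tradeoffs, so we drop them.
    return sorted((s for s in scores if s[1] > 0),
                  key=lambda s: s[1], reverse=True)

# Do we truly need immersion, action, and social interaction?
print(match_technologies({"visual immersion", "action", "social interaction"}))
# -> [('virtual world', 3), ('VR', 1)]
```

Running it with a needs set of immersion, action, and social interaction surfaces the virtual world over VR, mirroring the kind of inference described next.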
With an affordance perspective, you can make inferences about technologies. For instance, LXPs are really (sometimes smart) portals. The best application of AI (artificial intelligence) is IA (intelligence augmentation). AR’s natural niche, like mobile’s, is performance support. This isn’t to say that each can’t be repurposed in useful ways: AR has the potential to annotate the world, LXPs can be learning guides for those beyond the novice stage, AI can serve in particular ways like auto-content parsing (more an automation than an augmentation), etc.
My intent is that this way of thinking helps us short-circuit the age-old problem of using new technologies first in ways that mimic old technologies (the old cliché of TV starting out by broadcasting radio shows). It’s a way to generate your own hype curve for technologies: over-enthusiasm leading to overuse, disappointment, and rebirth leveraging the core affordances. Maybe there’s a better word, and I’ve been misusing affordances, but I think the concept is useful. I welcome your thoughts.
Prompted by prep for the advanced seminar on instructional tech for the upcoming Learning & Development Conference.
Myth Persistence
It’s been more than a decade (and probably several) that folks have been busting the myths that permeate our industry. Yet they persist. The latest evidence came in a recent chat I was in. I didn’t call them out at the time; this was a group I don’t really know, and I didn’t want to make any particular person defensive or look foolish. Sometimes I will, if it’s a deliberate attempt at misleading folks, but here I believe it’s safe to infer that it was just a lack of understanding. I’ll keep calling them out here, though. Still, the myth persistence is troubling.
One of the myths was learning preferences. The claim was something like: with personalization, we could support people’s preferences for learning. This is, really, the learning styles myth. There’s no evidence that adapting to learners’ preferred or identified styles makes a difference, and learners’ intuitions about what works aren’t well correlated with outcomes. So this wasn’t a sensible statement.
There were several comments on unlearning. There is some controversy on this, with some people saying it’s necessary for organizations if not individuals. I still think it’s a misconception, at least. That is, your learning doesn’t go away to be replaced by something else; you have to actively practice the new behavior in response to the same context to learn a new way of doing things. We’re people, after all, and that’s how our cognitive architecture works!
Gamification also got a mention. Again, this is perhaps more misconception than myth; that is, it matters how you define it. We had Karl Kapp on the LDA’s You Oughta Know session, talking about gamification (and microlearning). He talks about understanding that it’s more than just points and leaderboards. Yes, it is. However, the term leads people quickly to that mindset, hence my resistance to it. Moreover, the chat seemed to suggest that gamification, in combination with something else (memory fails), was a panacea. There are no panaceas, and gamification isn’t a part of any major advance. It’s a ‘tuning’ tool, at best.
A final one was really about tech excitement: with all the new tools, we’ll usher in a new era of productivity. Well, no. The transformation really isn’t digital. That is, if we use tech to augment our existing approaches, we’re liable to be stuck in the same old approaches, most of which are predicated on broken models of human behavior. The transformation should be humane, reflecting how we really think, work, and learn. Without that, digitization isn’t going to accomplish as much as it could.
So, there’s significant myth persistence. I realize change can be hard and take time. Sometimes that’s frustrating, but we have to be similarly persistent in busting them. I’ll keep doing my part. How about you?