Michio Kaku opened the second day of DevLearn with a keynote on the future of the mind. He extrapolated from current research to some speculative ideas of what our future could hold. He talked about research from physics (?!?) on MRI, AI, and more providing new capabilities.
Bethune #DevLearn Keynote Mindmap
Kevin Bethune kicked off the 2022 DevLearn conference with a personal story about his path to delivering strategic innovation. Talking about interdisciplinary work that has an impact, he ended up laying out the leadership factors that support innovation. (Apologies, I had to take a brief break, so I missed a small bit. Sorry.)
Feast or Famine
This week is a wee bit of a hectic one, capping a few of the same. There’s an old saying about feast or famine, and I’m living it. It’s better than the alternative, regardless. A taste:
So, as context, I’ve been doing a few things (as previously noted). In addition to working on a STEM project and advising a startup, co-directing LDA, and continuing Quinnovation work, I’m now also serving as Chief Learning Strategist for Upside Learning. This latter role is exciting for me, as they’re one of many custom elearning solution providers, but the first I’ve seen really committing to learning science. They want to lift their game; truly refreshing! Also a chance to really practice what I preach, and of course learn what works and what I’m wrong about. Good folks, too, as I’ve begun working with them.
This all together has manifested in some commitments, including being in the middle of the LDConference. As part of it, I’m running four weeks around learning science, and will be starting three weeks on learning technology. In addition, I was asked to open the L&D Conference of People Matters two weeks ago, in India. I also ran a master class the same day. I was able to visit Upside in person before flying back, which was an unexpected bonus. Not completely a surprise, I came back with a raging cough (testing negative and no fever, fortunately).
This week, as a topper, is DevLearn. I like DevLearn, as the Guild runs a good event, and as such it attracts many of my collegial friends. It’s a chance to hang out with some of my favorite folks! My schedule, of course, is a wee bit frenetic. Monday I run a Make It Meaningful workshop. Tuesday I’m a facilitator for the Learning Leaders forum (on short notice). I’ll have to take a break to run my learning science event for the LDC! Wed and Thurs morn I’ll be spending time in the booth with Upside, culminating in a book giveaway and signing. Thurs afternoon I’m on a Guild Master panel before running my own session on some work I’ve been doing. Back to back busyness…
Finally, Friday, I can actually attend sessions before I fly home. After that, it’s just LDC (and LDA) until mid Nov, and then life gets sorta kinda back to normal. I think! It’s definitely feast or famine. 2020 and 2021 were too much famine. I prefer feast. Busy is better than the alternative, though I’m looking forward to catching my breath. Sorry for less reflection than normal, but this is front of mind for now. Next week hopefully we’ll be back to normal here as well ;).
Fewer myths, please
I had the pleasure of being the opening keynote at the People Matters L&D conference in Mumbai this past week, with a theme of ‘disruption’. In it, I talked about some particular myths and their relation to our understanding of our own brains. Following my presentation, I sat through some other presentations. And heard at least one other myth being used to flog solutions. So, fewer myths, please.
My presentation focused on the evidence that we’re still operating under the assumption that we’re logical reasoners (which, I pointed out, isn’t apt). I mentioned annual reviews, bullet-point presos, unilateral decisions, and more. I also cited evidence that L&D isn’t doing well, so it is a worry. Pointing to post-cognitive frameworks like predictive coding, situated & distributed cognition, and more, I argued that we need to update our practices. I closed by urging two major disruptions: measurement, and implementing a learning culture in L&D before taking it out to the broader org.
In a subsequent presentation, however, the presenter (from a sponsoring org) was touting how leadership needed to accommodate millennials. I’m sorry, but there’s considerable evidence that ‘generation differences’ are a myth. The boundaries are arbitrary, there’re no significant differences in workplace values, and every effect is attributable to age and experience, not generation. (Wish I could find a link to the ‘eulogy for millennials myth’ two academics wrote.)
Another talk presented a lot of data, but ultimately seemed to be about supporting user preferences. Sorry, but user preferences, particularly for novices, aren’t a good guide. There was also a pitch for an ‘all-singing, all-dancing’ solution. Which could be appealing, if you’re willing to live with the tradeoffs. For instance, locking into whatever features your provider is willing to develop, and living without best-of-breed for all components.
Yes, it’s marketing hype. However, marketing hype should be based on reality, not myths. I can accept promising a bit more than you can deliver, and focusing on features you’re strong on. I can’t see telling people things that aren’t true. My first step in dealing with the post-cognitive brain is to know the cognitive and learning sciences, so you’ll know what’s plausible and what’s not. Not to PhD depth, but to have a working knowledge. That’s the jumping-off point to the necessary disruption, even revolution, that L&D needs to have. And fewer myths, please!
Misusing affordances?
Affordances is a complex term. Originally coined by Gibson, and popularized by Norman, it’s been largely used in terms of designing interfaces. Yet, it’s easy to misinterpret. I may have been guilty myself! In the past, I used it as a way to characterize technologies. Which isn’t really the intent, as it’s about sensory perception and action. So maybe I should explain what I mean, so you don’t think I’m misusing affordances.
To be clear, in interface design, it’s about the affordances you can perceive. If something looks like it can slide (e.g. a scrollbar), it lets you know you might be able to move the contents of the related window or field. Similarly, a button affords pushing. One of the complaints about touch screens arises as people work to overload more functions onto gestures. There might be affordances you can’t perceive: does a two-fingered swipe do anything different than a single-finger swipe?
In my case, I’m talking more about what a technology supports. In my analysis of virtual worlds and mobile devices, I was looking to see what their core capabilities are, and so what we might naturally do with them. Similarly with media, what are their core natures?
So, for instance, an LMS’s core affordance is managing courses. Video captures dynamic context. You might be able to do course management with a spreadsheet and some elbow grease, or you can mimic video with a series of static shots (think: Ken Burns) and narration, but the purpose-designed tool is likely going to be better. There are tradeoffs. You can graft capabilities onto a core, but an LMS still won’t naturally serve as a resource repository or social media platform.
It’s an analytical tool, in my mind. You should end up asking: what’s the DNA? For example, you can match the time affordance of different mobile devices to the task. You can determine whether you need a virtual world or VR based upon whether you truly need visual or sensory immersion, action, and social (versus the tradeoffs of cost and cognitive overhead).
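As a rough illustration of this analytical lens, the virtual-world-versus-VR question above can be sketched as a simple decision aid. This is purely a hypothetical sketch of my thinking (the function name, inputs, and outputs are my own illustrative choices, not a formal rule):

```python
# Hypothetical sketch: weigh which affordances a learning task truly needs
# (immersion, action, social presence) against the cost/cognitive-overhead
# tradeoff of richer technologies. Illustrative only.

def recommend_technology(immersion: bool, action: bool, social: bool) -> str:
    """Return a rough technology suggestion from the needed affordances."""
    if immersion and action and social:
        return "virtual world"    # shared immersive space, avatars, presence
    if immersion and action:
        return "VR"               # individual sensory immersion plus action
    if social:
        return "social platform"  # conversation without immersion overhead
    return "simpler media"        # video, documents, etc. likely suffice

recommend_technology(immersion=True, action=True, social=False)  # "VR"
```

The point isn’t the code, of course; it’s that starting from core affordances, rather than from the shiniest technology, forces the “do we truly need this?” question before committing to the cost.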
With an affordance perspective, you can make inferences about technologies. For instance, LXPs are really (sometimes smart) portals. AI (artificial intelligence)’s best application is IA (intelligence augmentation). AR’s natural niche, like mobile, is performance support. This isn’t to say that each can’t be repurposed in useful ways. AR has the potential to annotate the world. LXPs can be learning guides for those beyond novice stage. AI can serve in particular ways like auto-content parsing (more an automation than an augmentation). Etc.
My intent is that this way of thinking helps us short-circuit that age-old problem that we use new technologies first in ways that mimic old technologies (the old cliche of tv starting out by broadcasting radio shows). It’s a way to generate your own hype curve for technologies: over-enthusiasm leading to overuse, disappointment, and rebirth leveraging the core affordances. Maybe there’s a better word, and I’ve been misusing affordances, but I think the concept is useful. I welcome your thoughts.
Prompted by prep for the advanced seminar on instructional tech for the upcoming Learning & Development Conference.
Myth Persistence
It’s been more than a decade (and probably several), that folks have been busting myths that permeate our industry. Yet, they persist. The latest evidence was in a recent chat I was in. I didn’t call them out at the time; this was a group I don’t really know, and I didn’t want to make any particular person defensive or look foolish. Sometimes I will, if it’s a deliberate attempt at misleading folks, but here I believe it’s safe to infer that it was just a lack of understanding. I’ll keep calling them out here, though. However, the myth persistence is troubling.
One of the myths was learning preferences. The claim was something like that with personalization we could support people’s preferences for learning. This is, really, the learning styles myth. There’s no evidence that adapting to learners’ preferred or identified styles makes a difference. Learner intuitions about what works aren’t well correlated with outcomes. So this wasn’t a sensible statement.
There were several comments on unlearning. There is some controversy on this, some people saying that it’s necessary for organizations if not individuals. I still think it’s a misconception, at least. That is, your learning doesn’t go away and get replaced by something else; you have to actively practice the new behavior in response to the same context to learn a new way of doing things. It’s people, after all, and that’s how our cognitive architecture works!
Gamification also got a mention. Again, this is more misconception perhaps. That is, it matters how you define it. We had Karl Kapp on the LDA’s You Oughta Know session, talking about gamification (and micro learning). He talks about understanding that it’s more than just points and leaderboards. Yes, it is. However, that term leads people quickly to that mindset, hence my resistance to the term. However, the chat seemed to suggest that gamification, in combination with something else (memory fails), was a panacea. There are no panaceas, and gamification isn’t a part of any major advance. It’s a ‘tuning’ tool, at best.
A final one was really about tech excitement; with all the new tools, we’ll usher in a new era of productivity. Well, no. The transformation really is not digital. That is, if we use tech to augment our existing approaches, we’re liable to be stuck in the same old approaches. Most of which are predicated on broken models of human behavior. The transformation should be humane, reflecting how we really think, work, and learn. Without that, digitization isn’t going to accomplish as much as it could.
So, there’s significant myth persistence. I realize change can be hard and take time. Sometimes that’s frustrating, but we have to be similarly persistent in busting them. I’ll keep doing my part. How about you?
The power of emotion
Increasingly, we’re seeing that emotion matters. Scientific evidence supports what we intuitively know. Yet, in many cases, our actions don’t support that understanding. At least, in nuance. In particular, our learning designs suffer from trivialized ‘like’ as opposed to useful and effective approaches. We can and should do better to tap into the power of emotion.
Again, I’m using the term ‘emotion’ loosely here. While we do care about emotions like joy and grief (though our picture is changing), what we really need to be caring about are non-cognitive elements like motivation, anxiety, and confidence. It’s about designing to appropriately address them: develop motivation, keep a lid on anxiety, and build confidence. Each has its elements.
Motivation improves learning outcomes, but requires understanding what makes us interested. We’re driven by a desire to understand the world (cf. ‘predictive coding’). Curiosity can assist in developing an interest. Certainly, self-interest plays a role as well, and helping people tune into the positive consequences of a learning experience (or the negative outcomes of not having the requisite understanding) is also useful. Self-Determination Theory (cf. Deci and Ryan) talks about competence, autonomy, and relatedness. We can use this to help people connect with others (instructors/peers/experts; relatedness), give them tasks (autonomy), and support them to succeed (competence).
Anxiety interferes if it’s too much. While a small amount helps, it quickly becomes overwhelming. Given that learning can be intrinsically anxiety-inducing, keeping anxiety to a minimum is important. Making it safe to fail is an important component of this. Psychological safety is an important element in organizational operation, and in learning as well. We can avoid attaching consequences to practice, certainly at first. We can also have the instructor model making mistakes.
Building confidence is an adjunct here. As people master the skills, at greater and greater levels of challenge (an important component of successful learning experience design), they build confidence. That reduces anxiety, and maintains motivation. We don’t want false confidence, but we can steadily build confidence as we go. Ultimately, we want learners to have sufficient confidence to try out the skills (and succeed) after the learning experience.
There’s lots more that goes into making an experience effective and engaging, but understanding these elements, and how to enact them, is an important component. The power of emotion, properly harnessed, improves learning outcomes (which is what we should be about ;). I’ll be addressing these and more in my workshop Make It Meaningful at the upcoming DevLearn conference in Las Vegas on Oct 24. I’d love to see you there, as we talk about the complement to learning science that combines to achieve those experience goals.
Better RFPs, Please
I regularly rant about the quality of the learning designs we see. Knowledge dump and information test, I rail, is not going to lead to meaningful outcomes. Consequently, I work to promote more learning science in what we do. However, I have to acknowledge that frequently, the problem isn’t with the designer, but with the requester. Too often, there are RFPs (emblematic, as they’re equivalent to the internal request for ‘a course on X’) that are asking for designers to take content and essentially put it up on the screen with a quiz (and window dressing). So we need better RFPs, please.
Ideally, RFPs would be expecting a good process. That includes a number of steps, from analysis through to delivery. For instance, to expect due diligence in analysis, with either clear metrics of success, or expectations of an appropriate process. The latter would include having appropriate individuals (experts, supervisors, performers) work with the team to identify ideal performance, gaps, and the causes.
Similarly in design, there’d be an expectation of iterative development and review, with testing. Where’s the expectation of meaningful practice, where the lowest level is mini-scenarios (better-written multiple-choice questions), up through full scenarios, to even serious games? We need identification of misconceptions and specific feedback as well.
Yet, the RFPs that come out often focus on cost, visual design, and an expectation that PPTs and PDFs are a sufficient basis to build a course. I recently suffered through a droning presentation of bullet points and unclear diagrams, followed by quiz questions that a) focused on random knowledge that wasn’t emphasized during the presentation and b) provided as feedback only ‘right’ or ‘wrong’. Let me assure you that little meaningful learning came from that experience.
While we need to push ourselves to be better, we also need to educate our clients (internal or external). They need to educate themselves, too. Orgs will get the courses they ask for. However, will the ask have any impact? Too often, unfortunately, the answer is no. The article The Great Training Robbery cites estimates suggesting only 10% of the multi-billions spent on training has any impact. That’s a staggering loss. While there are many contributors, it behooves us to try to address them all. For one, can we have better RFPs, please?
Designing a conference
When I agreed to join as co-director of the Learning & Development Accelerator, I’d already attended their first two conferences. Those had been designed to reflect the circumstances at the time, e.g. the pandemic. In addition, there was a desire on the part of Matt Richter & Will Thalheimer (the original directors) to reflect certain values. Matt and I are running the event again, but times have changed. That means we have to rethink what’s being done. So here’s my thinking about designing a conference.
First, the values Matt and Will started with included being as global as possible, and being virtual. The former was reflected in having presentations given twice, once early in the US day, and then again later. That supported everything from Europe, Africa, and the Mideast to Asia and Australia. The virtual was, at least partly, a reaction to the lack of desire to travel and meet face to face, but also to provide options for those who might struggle.
We’re definitely still focusing on being virtual. Folks who would find it challenging to arrange travel for whatever reason can attend this event. There’s also the environmental considerations. Yes, technology requires resources, but not as much as collective travel. While there’s also a desire to meet different time needs, we’ve found less demand for multiple times. However, we will be recording sessions that are synchronous, so they can be viewed at convenient times. We also are spreading it over six weeks, so that there’s time to consume as much as you want. Further, faculty can choose when they’re offering ;).
The original design was focused on evidence-based L&D (which remains a key guiding principle for the LDA). Matt & Will solicited their presenters based upon their reputation, but the agenda was largely what those folks wanted to present. Which, in many ways, reflects what other conferences do. In this new era, we wondered what would make a compelling proposition when you can travel to F2F events. We decided that we wanted to step away from ‘what we get’, and focus on ‘what the audience needs’.
This event, then, has a curriculum, across two tracks, designed to address specific needs. There’s also a different pedagogy than most conferences. We also have specific faculty, rather than presenters based upon submissions. Of course, there are tradeoffs. At least we can share our thinking.
The faculty are folks we know and trust to present evidence-based content. You won’t hear promotion for snake oil, like learning styles. We have a pretty impressive lineup, frankly, of people we think are world-class. This includes folks like Ruth Clark, Mirjam Neelen & Paul Kirschner, Karl Kapp, Julie Dirksen, Kat Koppett, Stella Lee, Nigel Paine, Will Thalheimer, and Thiagi. On top of, of course, Matt and myself. Reality means that a few folks we would’ve liked to have couldn’t commit, but this is a broad and reputable group.
The tracks are basics and advanced. We want to be able to serve multiple audiences. The intent is that the basic track has the core knowledge an L&D person should know. As best we can, as we negotiate with the faculty, of course. Then, the advanced topics are things that are emergent and need addressing. Of course, there’s no commitment that you have to stay in one or another. As with other conferences, you can pick and choose what to view.
We’re also not just having presentations; we’ve asked the faculty to provide development. That is, we’re intending several rounds of content, activity, and feedback, spread out over several days or weeks. We don’t want people to hear good ideas, and maybe take them back. We want folks to take action! We’re also designing in the opportunity for mentoring.
Of course, there’ll be some social events, and other ways to not only hear content and apply it, but to mingle with faculty and other attendees. We want to foster some community. Also, we’re intending to somewhat front load stuff so that we can adapt. If we hear that we need to do something we haven’t planned, we’re looking to have leeway to address it. The nice thing about being small is the ability to be flexible!
None of this is saying you don’t get much of the same from conferences (except, perhaps, the design). I’ve been on conference program committees, and know conference organizers as well. They typically get more proposals than they can accept, so they can choose a suite that reflects various ranges of experience and covers important topics. They may not, however, know all the submitters, and take chances on a few. I laud that, actually, because we can’t know if a new approach or person is worthwhile without experimentation. Still, there is the chance for gaps, and for bad presentations/presenters. They’re also, except for the pre-conference workshops (e.g. my Make It Meaningful one at the upcoming DevLearn), one-off events.
We’re taking a chance on our format, too. We haven’t done it before. It may not work, though we have good reasons to believe it will. So, we hope to see you at the Learning & Development Conference, Oct 10 – Nov 18, if the above thinking about designing a conference makes sense. We think it does; we hope you do, too.
Projects That Didn’t Fly
I’ve had the pleasure of leading the design of a number of projects that have had some impact. These include a mobile app a company could point to. Also a game that helped real kids. Even a context-sensitive performance support system that was worth a patent. Then, of course, are the projects that didn’t, for whatever reason, see the light of day. So here are some reflections on a few projects that didn’t fly.
Back in the mid-90s, I was part of a government-sponsored initiative in online learning, and we were looking for a meaningful project. We made a connection to two folks with a small company that taught about communicating to the press. They could’ve come out with a book, but they wanted to do something more interesting. We collaborated on an online course on speaking to the media. I partnered with an experienced digital producer, and backstopped with a university-based media team. We had a comic skit writer, and cartoonists, to augment our resources. The result was technically sophisticated, educationally sound, and engaging both visually and in prose. It never flew, however, as we didn’t partner it with a viable business model. Which was reflective of the times.
Then, at the end of the ’90s, I was asked to lead a team developing an adaptive learning system. The charge was to help learners understand themselves as learners. I had a stellar team: software engineer, AI expert, psychometrician, learning science guru, visual designer, and an interface designer. The model was to do an initial profile, then present you with learning elements (concepts, examples, practice, etc) and update your model based on your performance. There was even a machine learning component to improve the models as we went along. We actually got a first draft up and running (10 elements in the student model), before ego and greed undermined and killed it. The lessons learned, of course, have continued to inform me, including, for instance, my calls for content systems.
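The adaptive loop described above (profile, present an element, update the model on performance) can be sketched in a much simplified form. To be clear, everything here is a hypothetical illustration: the names, the neutral 0.5 starting estimate, and the exponential-style update rule are my assumptions for exposition, not the original system’s algorithm.

```python
# Simplified, hypothetical sketch of an adaptive learning loop: keep a
# learner model (mastery estimate per element), pick the next element to
# present, and nudge the estimate based on observed performance.
# All names and the update rule are illustrative assumptions.

LEARNING_RATE = 0.3  # how strongly one observation shifts the estimate

def initial_profile(elements):
    """Start every element's mastery estimate at a neutral 0.5."""
    return {element: 0.5 for element in elements}

def next_element(model):
    """Present the element with the lowest estimated mastery."""
    return min(model, key=model.get)

def update(model, element, score):
    """Move the mastery estimate partway toward the observed score (0..1)."""
    current = model[element]
    model[element] = current + LEARNING_RATE * (score - current)
    return model

model = initial_profile(["concepts", "examples", "practice"])
weakest = next_element(model)  # all tied at 0.5 initially
update(model, weakest, 1.0)    # learner succeeded on that element
```

The real system was far richer (ten elements in the student model, plus a machine learning component refining the models across learners), but even this toy version shows the core cycle: observe, update, and let the updated model drive what comes next.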
Then, around the mid-2000s, I was given the task to devise a content model for a publisher. They wanted to develop once and populate a variety of business products. Drawing on previous experience, I developed a robust model, which started from individual elements and supplemented and aggregated them in a systematic way. This also ended sadly. In this case, the software side never reached fruition.
There are lots of reasons good intentions can go awry. In my case, it wasn’t going to be due to a lack in the learning design ;). What I’ve learned, however, is that learning design isn’t the only element that matters. There’s vision, and execution, and partners, and more. All are ways in which things can go wrong. Yet, that doesn’t mean we shouldn’t try. It just means that we should, to the extent of our abilities, also try to ensure the success of the other components. It’s worth exploring projects that didn’t fly so as to see how future ones might.