design Archives - Learnlets
Clark Quinn's learnings about learning
https://blog.learnlets.com/category/design/

Analogy and models (Tue, 04 Mar 2025)
https://blog.learnlets.com/2025/03/analogy-and-models/

I’ve gone on a bit about the value of mental models in instruction (and performance). (I guess this cements my position as a representationalist!) My interest isn’t surprising, given my background. But someone recently pointed out to me an aspect that I hadn’t really commented on. And, I should! So here are some thoughts on analogy and models.

The initial callout was me talking about models, and communicating them. In particular, I’ve mentioned a number of times the value of diagrams. Yet, someone else pointed out that another useful mechanism is analogy. And this rocked me, because of course! Yet, I’ve neglected to mention it.

As context, I’ve been a fan of mental models for thinking since I got the gift of a book on them from my work colleagues as I headed off to grad school. Moreover, I did my PhD thesis on analogy! I broke down analogical processing in a unique way, and looked at performance, finding some processes could be improved. Then I tried training on a subset, and achieved some impact.

Analogy is, by the way, a useful way to communicate models. What’s important in models are the conceptual causal relationships. If there’s another, more familiar model with the same structure, you can use it. For instance, the flow of electricity in wires can be analogized to the flow of water in pipes. Another, flawed, model is saying that the orbit of electrons around a nucleus is like the orbit of planets around a sun.
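(As an aside, here’s a rough sketch, in code, of what ‘the same structure’ amounts to. It’s purely illustrative: the domains, relation names, and mapping are invented for the example, not drawn from any particular formalism.)

```python
# Illustrative sketch: two domains represented as sets of causal relations,
# then a check of which relations carry over under an object-to-object mapping.
# All names here are invented for the example.

water_pipes = {
    ("pressure", "drives", "flow"),
    ("pipe_width", "limits", "flow"),
    ("pump", "supplies", "pressure"),
}

electric_circuit = {
    ("voltage", "drives", "current"),
    ("resistance", "limits", "current"),
    ("battery", "supplies", "voltage"),
}

# Map the familiar domain's objects onto the new domain's objects.
mapping = {"pressure": "voltage", "flow": "current",
           "pipe_width": "resistance", "pump": "battery"}

def translate(relation, mapping):
    """Rewrite a relation from the familiar domain in the new domain's terms."""
    a, verb, b = relation
    return (mapping.get(a, a), verb, mapping.get(b, b))

shared = {translate(r, mapping) for r in water_pipes} & electric_circuit
print(f"Relations that carry over: {len(shared)} of {len(electric_circuit)}")
# -> 3 of 3: the surface objects differ, but the causal relations line up
```

The surface features (pipes versus wires) don’t match at all; what makes the analogy work is that the relations do.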

So, why have I been blind to the use of analogies? Perhaps because I’m so familiar with them that I just assume others see the possibilities? Or maybe I’ve just got a huge blind spot!  Still, it’s a big miss on my part.

When you want learners to ‘get’ models (and I think we do), you can present them as diagrams. You can have people embody them through things like Gray’s gamestorming. And, of course, you can use analogies. We have to be careful; empirically, most folks aren’t good at generating them, because they focus too much on surface features. Yet, what’s necessary is sharing what cognitive scientists call ‘deep structure’, the important relationships that guide outcomes. People are good at using given analogies, but don’t always recognize them as useful unless prompted.

If, and it’s not a given, we have a familiar structure that happens to share the relationships of the model we’re trying to communicate, we can make an analogy! Though, there are nuances here too. For instance, Rand Spiro found that, when developing an understanding of muscle operation, a progression of analogies was needed to reach the final understanding!

Still, we shouldn’t ignore the possibilities of analogy. Some have argued that we fundamentally understand the world by bringing in prior models to explain. Which isn’t hard to countenance in a ‘predictive coding’ view of the world, that we’re actively trying to explain observations. Misconceptions, in turn, are typically explained by wrong models: applying a familiar model in new situations where it doesn’t hold. We have to diagnose and remediate those understandings, because folks don’t tend to replace their models, they patch them. Giving good models a priori, via analogy or otherwise, is a good remedy.

Analogy is a feature of our cognitive architecture and formal representations. It’s a useful way to communicate how the world works, when possible. Like with all things, of course, the nuances matter, but analogy and models are tools we have to facilitate understanding, if indeed we understand them. So let’s, eh?

Evidence-Informed Practitioner conference deals (today!) (Fri, 28 Feb 2025)
https://blog.learnlets.com/2025/02/evidence-informed-practitioner-conference-deals/

[Image: a mortar-boarded lightbulb on books, with the words "LDA Conference: L&D, the Evidence-Informed Practitioner, live online and asynchronous sessions April 7 - May 2, ldaccelerator.com"]

So, this is a wee bit not my normal post, but…I did want to let you know that today is the last day for the early bird pricing for the Evidence-Informed Practitioner conference we (the Learning Development Accelerator) are running come April. This won’t be my last post on it, but this is the last chance to get the best deal! So, I’ll entice you with some details, and give you a special deal (which doesn’t end today, but adds on to that). It’s too good not to let you know about the Evidence-Informed Practitioner conference deals.

So, first, the conference is a follow-on to the Learning Science Conference we held last fall. That was a great conference, but there was one repeated sentiment: “but how do we do this in practice?” A fair question!  And, frankly, a topic that’s gotten my mind going in other ways (stay tuned ;). So, we decided to offer a conference to address it.

First, the conference follows the well-received format we used for that last event. We have the important topics, with canned presentations beforehand, discussion forums to discuss them, and then live sessions. The presentations were great, and the emerging discussions were really insightful!

Then, we have top presenters, and I mean really top. People who’ve been there, done that, and in many cases wrote the book or built the company. Julie Dirksen, Dawn Snyder, Will Thalheimer, Lori Niles-Hoffman, Dave Ferguson, Emma Weber, Maarten Vansteenkiste, and Nigel Paine, along with Nidhi Sachdeva and Kat Koppett. These are folks we look to for insight, and it’s a real pleasure to bring them to you.

So, there’s an early bird discount that ends today! That’s 20% off. So you should rush off right now, but wait, there’s more! I get to offer you another 10% off, so that’d be 30% off if you go today. Or, you can use my code to get 10% off the regular price if you don’t get to act today. The secret password is EIP10CQ. That’s EIP (the conference acronym), 10 (percent), CQ (my initials).

I realize I should’ve mentioned this all before, but it’s not TOO late. Hope to see you there, it’ll be great (as my firstborn used to say)! Look, I don’t usually do such a promotion, but I really am excited to offer Evidence-Informed Practitioner conference deals. Hope to see you there!

Fads and foundations (Tue, 11 Feb 2025)
https://blog.learnlets.com/2025/02/fads-and-foundations/

Two recent things have prompted some reflection. For one, the LDA had another workshop with Emma Weber, in this case on transfer of learning. At the same time, Dave Snowden, on LinkedIn, was pointing to a post suggesting being wary of the latest management infatuation. How are they related? Well, to me it’s about fads and foundations.

So Emma’s workshop was about how to use coaching to facilitate post-event transfer. Her approach had a domain-independent coaching model. In it, the coaching is applied for roughly 30 minutes over a period of time, with at least a week between sessions. She was looking to drill into what people wanted to accomplish and keep them on track. Also, doing so without being an expert in the area of endeavor. In fact, to the contrary. Which I laud, with a caveat. As I’ve opined before, I think that we need domain-specific feedback until learners have a level of capability. They have to be able to know what they don’t know and acquire it. They also need to critique their own performance. (She believes that the course should get people to that level; I’m a bit more cautious. Should.)

Now, what the post suggested was that the big consulting companies had a pattern of boosting the latest management approach. They then indicate expertise, and get businesses to follow them. The consultants then move on, without checking to see whether the fad has led to any improvement. (A small plug here for using your friendly neighborhood consultant for a reality check before embarking on heavy investment.) This reminds me of Alex Edmans’ book May Contain Lies, where he demonstrated how many management books took a biased data set and used it to make sweeping generalizations that weren’t justified, nor checked for continuing success.

The link is that too often, folks will bring in a new executive, even CEO, who isn’t in their business but has had success elsewhere. A reliable pattern is that they will have learned some MBA spiel, like cost-cutting, and successfully applied it in a particular instance. (The ones who aren’t successful we don’t hear about.) Then, their approach doesn’t work in the new situation. Because it’s a new situation! They don’t have the foundational knowledge. Another recent item I saw described how a business had failed with a new CEO, and then had to hire another who knew the business to set it right. (If only I could remember where!)

The underlying message is that the world is contextual (see Brian Klaas’ Fluke). Without the knowledge of how the world works here, we’re liable to apply too-general approaches that aren’t matched to the current situation. When we acquire the contextual knowledge, we can then self-help. Yet, we do better when we know the situation. We need informed analysis and aligned interventions! This is something we can, and should, do.

What and why cognitive science? (Tue, 28 Jan 2025)
https://blog.learnlets.com/2025/01/what-and-why-cognitive-science/

I was on LinkedIn, and noted this list of influences in a profile: “complex systems, cybernetics, anthropology, sociology, neuroscience, (evolutionary) biology, information technology and human performance.” And, to me, that’s a redundancy. Why?

A while ago, I said “Departments of cognitive science tend to include psychologists, linguists, sociologists, anthropologists, philosophers, and, yes, neuroscientists. ” I missed artificial intelligence and computer science more generally. Really, it’s about everything that has to do with human thought, alone, or in aggregate. In a ‘post-cognitive’ era, we also recognize that thinking is not just in the head, but external. And it’s not just the formal reasoning, or lack thereof, but it’s personality (affect), and motivation (conation).

Cognitive science emerged as a way to bring different folks together who were thinking about thinking. Thus, that list above is, to me, all about cognitive science! And I get why folks might want to claim that they’re being integrative, but I’m saying “been there, done that”. Not me personally, to be clear, but rather that there’s a field doing precisely that. (Though I have pursued investigations across all of the above in my febrile pursuit of all things about applied cognitive science.)

Why should we care? Because we need to understand what’s been empirically shown about our thinking. If we want to develop solutions – individual, organizational, and societal –  to the pressing problems we face, we ought to do so in ways that are most aligned with how our brains work. To do otherwise is to invite inefficiencies, biases, and other maladaptive practices.

Part of being evidence-informed, in my mind, is doing things in ways that align with us. And there is lots of room for improvement. Which is why I love learning & development: these are the people who’ve got the most background, and opportunity, to work on these fronts. Yes, we need to liaise with user experience, and organizational development, and more, but we are (or should be) the ones who know most about learning, which in many ways is the key to thinking (about thinking).

So, I’ve argued before that maybe we need a Chief Cognitive Officer (or equivalent). That’s not Human Resources, by the way (which seems to be a misnomer along the lines of Human Capital). Instead, it’s aligning work to be most effective across all the org elements. Maybe now more than ever before! At least, that’s where my thinking keeps ending up. Yours?

Writing, again (Tue, 21 Jan 2025)
https://blog.learnlets.com/2025/01/writing-again/

So, I’m writing, again. Not a book (at least not initially ;), but something. I’m not sure exactly how it’ll manifest, but it’s emerged. Rather than share what I’m writing (too early), I’m reflecting a bit on the process.

As usual, I’m writing in Word. I’d like to use other platforms (Pages? Scrivener? Vellum?), but there are a couple of extenuating circumstances. For one, I’ve been using Word since I wrote my PhD thesis on the Mac II I bought for the purpose. I think that was Word 2.0, circa late 80’s. In other words, I’ve been using Word a long time! Then, the most important thing besides ‘styles‘ (formatting, not learning) is the ability to outline. Word has industrial-strength outlining, and, to use an over-used and over-emphatic point, I live and die by outlines.

I outline my plan before I start writing, pretty much always. Not for blog posts like this, but for anything of any real length beyond such a post. Anything with intermediate headings is almost guaranteed to be outlined. I tend to prefer well-structured narratives (at least for non-fiction?). The outline will likely change, of course. When my very first book was written, it pretty much followed the structure. Ever since then… My second book had me rearranging the structure as I typed. My most recent book got restructured after every time I shared it with my initial readers, until suddenly it gelled.

In this case, and not unlike most cases, I move things around as I go. This should be a section all its own. That is superfluous to need. This other goes better here than where I originally put it. And so on. I do take a pass through to reconcile any gaps or transitions, though I try to remedy those as I go.  The goal is to do a coherent treatment of whatever the topic is.

I throw resources in as I go. That is, if I find myself referring to a concept, I put a reminder in a References or Resources section at the end to grab a reference later. I have a separate (ever-growing) file of references for that purpose. I may not always include the reference in the document (currently I’m trying to keep the prose lean), but I want folks to have a resource at least.

I also jump around, a bit. Mostly I proceed from ‘go to whoa’, but occasionally I realize something I want to include, and put a note at the appropriate place. That sometimes ends up being prose, until I realize I need to go back to where I was ;). I hope that it leads to a coherent flow. Of course, as above, I do reread sections, and I try to give a final read before I pass on to whatever next step is coming. Typically, that means sending to someone to see if I’m on track or off the rails.

I’m also pondering whether to retrofit diagrams. Sometimes I’ve put them in as I go. At other times, I go back and fill them in. I do love me a good diagram, for the reasons Larkin & Simon articulated (Connie Malamed is doing a good job on visuals over at LinkedIn this month). Sometimes I edit the ones I have as I recognize improvements, sometimes I create new ones, sometimes I throw existing ones in. It’s when I think they’ll help, but I can think of several I probably should make.

The above holds true for pretty much all writing I do beyond these posts. This is for me, first, after all! Otherwise, I solicit feedback (which I don’t always get; I think folks trust me too much, at least for shorter things). I’m sure others work differently. Still, these are my thoughts on writing, again. I welcome your reflections!

Getting smarter (Tue, 14 Jan 2025)
https://blog.learnlets.com/2025/01/getting-smarter/

A number of years ago now, I analyzed the corporate market for a particular approach. Not normally something I do (not a tool/market analyst), but at the time it made sense. My recommendation, at the end of the day, was the market wasn’t ready for the product. I am inclined to think that the answer would be different today. Maybe we are getting smarter?

First, why me? A couple of reasons. For one, I’m independent. You (should) know that you’ll get an unbiased (expert) opinion. Second, this product was something quite closely related to things I do know about, that is, learning experiences that are educationally sound. Third, the asker was not only a well-known proponent of quality learning, but knew I was also a fan of the work. So, while I’m not an analyst, few others would’ve really understood the product’s value proposition, and I do know the tools market at a useful level. I knew there was nothing else on the market like it, and the things that were closest I also knew (from my authoring simulation games work, as in my first book, and the research reports for the Learning Guild).

The product itself allowed you to author deep learning experiences. That is, where you immerse yourself in authentic tasks, with expert support and feedback. Learning tasks that align with performance tasks are the best practice environments, and in this case were augmented with resources available at the point of need. The main problem was that they required an understanding of deep learning to be able to successfully author. In many cases, the company ended up doing the design despite offering workshops about the underlying principles. Similarly, the industrial-strength branching simulation tools I knew then struggled to survive.

And that was my reason, then, to suggest that the market wasn’t ready. I didn’t think enough corporate trainers, let alone the managers and funding decision-makers, would get the value proposition. There still are many who are ‘accidental’ instructional designers, and more so then. The question, then, is whether such a tool could now succeed. And I’m more positive now.

I think we are seeing greater interest in learning science. The big societies have put it on their roadmaps, and our own little LDA learning science conference was well received. Similarly, we’re seeing more books on learning science (including my own), and more attention to same.  I think more folks are looking for tools that make it easy to do the right thing. Yes, we’re also confronting the AI hype, but I think after the backlash we’ll start thinking again about good, not just cheap and fast. I not only hope, but I think there’s evidence we are getting smarter and more focused on quality. Fingers crossed!

Looking forward (Tue, 31 Dec 2024)
https://blog.learnlets.com/2024/12/looking-forward-2/

[Image: Woman on the ocean, peering into the distance.]

Last week, I expressed my gratitude for folks from this past year. That’s looking back, so it’s time to gaze a touch ahead. With some thoughts on the whole idea! So here’s looking forward to 2025. (Really? 25 years into this new century? Wow!)

First, I’m reminded of a talk I heard once. The speaker, who, if memory serves, had written a book about predicting the future, explained why it was so hard. His point was that, yes, there are trends and trajectories, but he found that there was always that unexpected twist. So you could expect X, but with some unexpected twist. For instance, I don’t think anyone a year ago really expected Generative AI to become such a ‘thing’.

There was also the time that someone went back and looked at some predictions of the coming year, and evaluated them. That didn’t turn out so well, including for me! While I have opinions, they’re just that. They may be grounded in theory and 4+ decades of experience, but they’re still pretty much guesswork, for the reason above.

What I have done, instead, for a number of years now is try to do something different. That is, talk about what I think we should see. (Or to put it another way, what I’d like to see. ;). Which hasn’t changed much, somewhat sadly. I do think we’ve seen a continuing rise of interest in learning science, but it’s been mitigated by the emergence of ways to do cheaper and faster. (A topic I riffed on for the LDA Blog.) When there’s pressure to do work faster, it’s hard to fight for good.

So, doing good design is a continued passion for me. However, in the conversations around the Learning Science conference we ran late this year, something else emerged that I think is worthy of attention. Many folks were looking for ways to do learning science. That is, resolving the practical challenges in implementing the principles. That, I think, is an interesting topic. Moreover, it’s an important one.

I have to be cautious. When I taught interface design, I deliberately pushed for more cognition than programming. My audience was software engineers, so I erred on the side of getting them thinking about thinking. Which, I think, is right. I gave practical assignments and feedback. (I’d do better now.) I think you have to push further, because folks will backslide and you want them as far as you can get them.

On the other hand, you can’t push folks beyond what they can do. You need to have practical answers to the challenges they’ll face in making the change. In the case of user experience, their pushback was internal. Here, I think it’s more external. Designers want to do good design, generally. It’s the situation pragmatics that are the barrier here.

If I want people to pay more attention to learning science, I have to find a way to make it doable in the real world. While I’m finding more nuances, which interests me, I have to think of others. Someone railed that there are too many industry pundits who complain about bad practices (mea culpa), instead of cheering folks on and telling them they can do better. And I think we need both, but I think it’s also incumbent on us to talk about what to do, practically.

Fortunately, I have not only principle but experience doing this in the real world. Also, we’ve talked to some folks along the way. And we’ll do more. We need to find that sweet spot (including ‘forgiveness is easier than permission’!) where folks can be doing good while doing well.  So that’s my intention for the year. With, of course, the caveat above! That’s what I’m looking forward to. You?

The enemy of the good (Tue, 10 Dec 2024)
https://blog.learnlets.com/2024/12/the-enemy-of-the-good/

We frequently hear that ‘perfection is the enemy of the good’. And that may well be true. However, I want to suggest that there’s another enemy that plagues us as learning experience designers. We may be trying to do good, but there are barriers. These are worthy of explicit discussion.

You also hear about the holy trinity of engineering: cheap, fast, or good; pick two. We have real world pressures that want us to do things efficiently. For instance, we have lots of claims that generative AI will allow us to generate more learning faster. Thus, we can do more with less. Which isn’t a bad thing…if what we produce is good enough. If we’re doing good, I’ll suggest, then we can worry about fast and cheap. But doing bad faster and cheaper isn’t a good thing! Which brings us to the second issue.

What is our definition of ‘good’? It appears that, too often, good is if people ‘like’ it. Which isn’t a bad thing, it’s even the first level in the Kirkpatrick-Katzell model: asking what people think of the experience. One small problem: the correlation between what people think of an experience, and its actual impact, is .09 (Salas, et al, 2012). That’s zero with a rounding error! What it means is that people’s ratings of an experience and its actual impact aren’t meaningfully related. It could be highly rated and ineffective, or highly rated and effective. At core, you can’t tell by the rating.
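(To put that number in perspective, this is just the standard statistical reading of a correlation that size, nothing beyond what the reported figure implies:)

```latex
r = 0.09 \quad\Rightarrow\quad r^{2} = 0.0081 \approx 0.8\%
```

That is, reaction ratings account for less than one percent of the variance in actual impact; the rest is driven by something other than whether people liked it.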

What should be ‘good’? The general intent of a learning intervention (or any intervention, really) is to have an impact! If we’re providing learning, it should yield a new ability to ‘do’. There are a multitude of problems here. For one, we don’t evaluate performance, so how would we know if our intervention is having an impact? Have learners acquired new abilities that are persisting in the workplace and leading to the necessary organizational change? Who knows? For another, folks don’t have realistic expectations about what it takes to have an impact. We’ve devolved to a state where if we build it, it must be good. Which isn’t a sound basis for determining outcomes.

There is, of course, a perfectly good reason to evaluate people’s affective experience of the learning. If we’re designing experiences, having it be ‘hard fun’ means we’ve optimized the engagement. This is fine, but only after we’ve established efficacy. If we’re not having a learning impact in terms of new abilities to perform, what people think about it isn’t of use.

Look, I’d prefer us to be in the situation where perfection is the enemy of the good! That’d mean we’re actually doing good. Yet, in our industry, too often we don’t have any idea whether we are or not. We’re not measuring ‘good’, so we’re not designing for it. If we measured impact first, then experience, we could get overly focused on perfection. That’d be a good problem to have, I reckon. Right now, however, we’re only focused on fast and cheap. We won’t get ‘good’ until we insist upon it from and for ourselves. So, let’s, shall we?

Convincing stakeholders (Tue, 03 Dec 2024)
https://blog.learnlets.com/2024/12/convincing-stakeholders/

As could be expected (in retrospect ;), a recurrent theme in the discussions from our recent Learning Science Conference was how to deal with objections. For instance, folks who believe myths, or don’t understand learning. Of course, we don’t measure, amongst other things. However, we also have mistaken expectations about our endeavors. That’s worth addressing. So, here I’m talking about convincing stakeholders.

To be clear, I’m not talking about myths. Already addressed that. But is there something to be taken away? I suggested (and practiced in my book on myths in our industry) that we need to treat people with respect. I suggest that we need to:

  • Acknowledge the appeal
  • Also address what could be the downsides
  • Then, look to the research
  • Finally, and importantly, provide an alternative

The open question is whether this also applies to talking learning.

In general, when trying to convince folks why we need to shift our expectations about learning, I suggest being prepared with a suite of stories. I recognize that different approaches will work in different circumstances. So, I’ve suggested we should have to hand:

  • The theory
  • The data/research
  • A personal illustrative anecdote
  • Solicit and use one of their personal anecdotes
  • A case study
  • A case study of what competitors are doing

Then, we use the one we think works best with this stakeholder in this situation.

Can we put these together? I think we can, and perhaps should. We can acknowledge the appeal of the current approach. E.g., it’s not costing too much, and we have faith it’s working. We should also reveal the potential flaws if we don’t remedy the situation: we’re not actually moving any particular needle. Then we can examine the situation: here we draw upon one of the approaches from the second list. Finally, we offer an alternative: that if we do good learning design, we can actually influence the organization in positive ways!

This, I suggest, is how we might approach convincing stakeholders. And, let me strongly urge, we need to! Currently there are far too many who believe that learning is the outcome of an event. That is, if we send people off to a training event, they’ll come back with new skills. Yet, learning science (and data, when we bother) tells us this isn’t what happens. People may like it, but there’s no persistent change. Instead, learning requires a plan and a journey that develops learners over time. We know how to do good learning design, we just have to do it. Further, we have to have the resources and understanding to do so. We can work on the former, but we should work on the latter, too.

Across Contexts (Tue, 26 Nov 2024)
https://blog.learnlets.com/2024/11/across-contexts/

(Have I talked about looking across contexts for learning before? I looked and couldn’t find it. Though I’m pretty good about sharing diagrams?!? So, here it is; if it’s a repeat, please bear with me.)

In our recent learning science conference, one topic that came up was about contexts. That is, I suggest the contexts we see across examples and practice define the space of transfer. We know that contextual performance is better than abstract (cf. Bransford’s work with the Cognition and Technology Group at Vanderbilt). The natural question is how to choose contexts. The answer, I suggest, is ad hoc: choose the minimal set of contexts that spans the space of transfer. What we’re talking about is a set, chosen across contexts, that supports the best learning.

[Diagram: A cloud of all possible applications, with an oval of correct applications inside it. Within that, some clustered 'o' characters near each other, and a character 'A' further away. Then 'x' characters spaced more evenly around the oval, with the A inside the spanned space.]

So, in talks I’ve used the diagram to say that if you choose the set of contexts represented by the ‘o’s, you’ll be unlikely to transfer to A, whereas if you choose the ‘x’s, you’re much more likely. Let me make that concrete: let’s talk negotiation (something we’re all likely to experience). If all your contexts are about vendors (the ‘o’s), you may not apply the principles to negotiating with a customer, A. If, however, you have contexts negotiating with vendors, customers, maybe even employers (‘x’s), you’re more likely to transfer to other situations. (Though your employer might not like it! ;)

The question that was asked was how to choose the set. You can be algorithmic about it. If you could measure all dimensions of transfer, and ensure you’re progressing from simple to complex along those, you’d be doing the scientific best. It might lead you to choose too many, however. It may be that you can choose a suite based upon a more heuristic approach to coverage. Here I mean picking ones that provide some substantive coverage based upon expertise (say, from your SME or supervisors of performance). I suspect that you’ll have to make your best first guess and then test to see if you’re getting appropriate transfer, regardless.

It’s important to ensure that the set is minimal. You don’t want too many contexts to make the experience onerous. So pick a set that spans the space, but also is slim. The right set will illuminate the ways in which things can vary without being too large. Another criterion is to have interesting contexts. You are, I’ll suggest, free to exaggerate them a little to make them interesting if they’re not inherently so.
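(As a rough sketch of the ‘algorithmic’ option above: a greedy selection that keeps adding whichever candidate context covers the most still-uncovered dimensions of transfer. The candidate contexts and dimensions here are invented placeholders; in practice they’d come from your task analysis, SMEs, or supervisors of performance.)

```python
# Illustrative sketch of picking a slim-but-spanning set of practice contexts.
# The candidates and the transfer dimensions they cover are made-up placeholders.

candidates = {
    "negotiate_with_vendor":   {"price", "timeline", "ongoing_relationship"},
    "negotiate_with_customer": {"price", "scope", "ongoing_relationship"},
    "negotiate_with_employer": {"salary", "scope", "timeline"},
    "negotiate_car_purchase":  {"price"},
}

def pick_spanning_set(candidates):
    """Greedily choose contexts until every transfer dimension is covered."""
    to_cover = set().union(*candidates.values())
    remaining = dict(candidates)
    chosen = []
    while to_cover and remaining:
        # Take the context that covers the most still-uncovered dimensions.
        best = max(remaining, key=lambda c: len(remaining[c] & to_cover))
        if not remaining[best] & to_cover:
            break  # nothing left adds coverage
        chosen.append(best)
        to_cover -= remaining.pop(best)
    return chosen

print(pick_spanning_set(candidates))
# -> ['negotiate_with_vendor', 'negotiate_with_employer']
```

Note that the narrowest candidate never gets picked; a couple of well-chosen contexts can span the space, which is the slim-but-spanning idea in miniature.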

You may also need some contexts where the right answer is not to apply what’s being learned. What I mean is that while it could seem appropriate to extend whatever’s being learned to this situation, you shouldn’t. Some ideas invite over-generalization, and you’ll need to help people learn where those limits are.

Note that the contexts are those across both examples and practice. So, learners will see some contexts in examples, then others in practice. It may be (if it’s complex, or infrequent, or costly) that you need to have lots of practice, and this isn’t a worry. Still, making sure you’re covering the right swath across contexts will support achieving the impact in all appropriate situations.

I’m less aware of research on the spread of contexts for transfer (PhD topic, anyone?), and welcome pointers. Still, cognitive theory suggests that this all makes sense. It does to me, how about you?
