Learnlets

Clark Quinn’s Learnings about Learning

Standing up…

3 February 2026 by Clark 5 Comments

…and I won’t back down. Ok, so this is a little off my usual thread, but it does have some learning in it. What I’m talking about is using your attention and your money as a way to express your values. It’s what I’m increasingly doing, and there’re lessons in it. So let’s talk about standing up for what you believe in.

It may be that I've stood too much on principle in the past, and paid the price. I left a (probably) secure position at a university to come back to the US to be closer to our aging parents. A job at what was positioned as a secure startup appeared to be a good choice…but I didn't properly account for ego and greed. I was even a bit cheeky about a possible position, to my long-term shame. Consulting then, I joke, went from a euphemism for 'unemployed' to a way of life. I'm fortunate that, despite my lack of business nous, my curiosity and inclination to share learnings have proven moderately valuable. Somehow, this hasn't been enough to dissuade me.

As I theoretically get wiser, I'm being more forthright. I'm relinquishing my accounts on platforms that have demonstrated a lack of accountability, for instance. I've left a few places in the past few years. I stay on LinkedIn, because it's not awful (though getting worse), and it's the place where folks connect for business. I'm on a few other social networks: one built to stay independent, and one that, so far, seems to have good principles. That latter one I'm willing to abandon if that changes.

I'm also avoiding technologies with misrepresentation, and calling out such claims. Not always, of course; I want to educate, not punish. Still, I strive to let what science tells us serve as a guide, not what folks want you to believe. Their intentions may be simply misguided, or worse, they may not care. It's important to be careful, which is why we (Matt Richter and I, for the LDA) wrote the research checklist, for instance. (May require membership, but it's free!) I even avoided indulging in an opportunity to watch an activity I enjoy, because it was part of a trend I think is harmful overall (e.g. supporting increasing compartmentalization).

I’m also shifting my purchasing. I’m trying to shop more local, and use sources that aren’t aligned with the most problematic providers. This isn’t always easy, as the ‘long tail’ means certain things are hard to come by. There are consequences, including paying more, and doing with less. Tradeoffs.

Similarly, I try to do business only with those whose approaches I favor. For instance, I've avoided positions where I receive compensation for promoting a product, because that would bias my recommendations. I (perhaps wrongly) believe that having that unbiased opinion (and stating when I have conflicts) is of value. I am now working with Elevator 9, but that's because they have demonstrated that they care about learning science.

None of this is perfect. For one, there are barriers to completely shifting. Some services you just can’t get without aligning with one platform or another. Certain products are basically just impossible to source any other way. Not everyone you know and care about will go along. You do what you can, and live with the results.

There’s learning from this. It’s harder than not. I’ve learned that trusting what people say, particularly those with vested interests, isn’t a good bet unless they’ve earned your trust in other ways first. Acquisitions, for one, rarely go the way that the acquirers promise! Also, it’s pretty obvious that this stance is an effort that not everyone can, or is willing to, make. There’s risk, for instance. On the other hand, it’s rewarding. You do feel better that you’re doing things to support what you believe.

Note that I'm being relatively opaque about my intentions. I think they're pretty obvious, but still, the principles hold regardless: vote with your attention and your dollars. Align your actions with your values. Standing up for what you believe in is a way to show what you believe, so others can see where you stand. It's a way of learning 'out loud', I suppose. Or maybe 'living out loud'. Still, I won't back down. What think you?

(And now, back to your regularly scheduled posts. BTW, my intent is to keep Tuesdays for my thoughts; if I’m touting something I think you should know about, I’ll try to keep to Thursdays. And rare. ;)

Ideas we could do without

27 January 2026 by Clark Leave a Comment

Saw a post on LinkedIn from a colleague, ranting about how we are regularly putting old wine in new bottles. I do believe we’re getting deeper into design and strategy, but I also agree. Similarly, I’ve seen a regular feature on a newsletter talking about terms we can do without. So, I’m combining the two here. Not surprisingly, I’m channeling previous complaints (as a commenter made mention of), but this is the first time combining them into ideas we could do without.

Microlearning. As I've said before, the problem is that two different things are regularly meant by the term: spaced learning or performance support. Both are good things, but lumping them under one label constitutes a problem. For one, they have different design processes and goals, so using the same term for two different things risks confusion. I like the idea of emphasizing conciseness, but…we can call it minimalism, eh?

Workflow learning. This is problematic because it implies learning, yet, as I've repeated, you can't learn 'in the workflow'. My argument rests on the fact that learning is really action and reflection, and reflection breaks the workflow. I reckon this could be definitional, as some folks might argue that such reflection is part of the workflow, but as with microlearning, they're often really talking about performance support. So, it's another term with wrong usage, or at least ambiguous provenance. Let's talk performance support, or learning from the workflow.

Mobile. This may seem odd, given that I've been talking and writing about mobile at least since my first book on the topic, more than a decade ago! (Notably, both books are now out of print. Indicative?) Yet, I still receive requests from developers to make my mobile apps (not what I do). Also, Google declared they were going 'mobile first' over a decade ago. Really, mobile has kind of just merged into digital solutions, I would suggest. Sure, we get folks asking us to use the app, but that to me is frustrating. It shouldn't matter whether I'm using the app or a website; I have the same goals, largely. Yes, there are some location-specific things, and we (still) aren't taking good advantage of the contextual capabilities of mobile devices, but mobile is really moot. It's about augmenting our thinking. And, separately, taking advantage of context.

Unlearning. I'm adding this after originally writing this post, because it just emerged again, literally two days after a really nice 'takedown' by Tom McDowell, who's developed a real capability for research translation. In short, our brains can't unlearn. That is, we don't forget things, so we need to build a new, alternative response to a previously learned approach. Which means that solutions designed for 'unlearning' won't achieve the necessary outcome. Thus, this isn't just a nice shortcut; it creates impressions that can lead folks astray. Let's dump the phrase completely. Please?

I’ll add a new one: AI. What?

AI. As I’ve mentioned, I’ve been a big fan of artificial intelligence (AI) for literally decades. So, why am I struggling? I admit I’m getting overwhelmed when people say “AI” and mean generative AI. Generative AI is, conceptually, a small subset of AI. Sure, it’s huge right now, but that’s largely hype driven by money. It’s not real in a meaningful sense. I wish people could and would be clear, like “I’m going to call it AI, but I’m talking about generative AI and large language models (LLMs) in particular.” Which kind of undermines the hype, but what’s wrong with that? (Except for the purveyors, of course.) Sure, we should be treating all our digital endeavors similarly in strategy, e.g. as Lori Niles Hoffman’s new book points out, but AI is just one of the tools we should be tapping into.

Do I think my rant will change anything? Of course not! There’s money to be made, after all. Also, no one pays much attention to my rants here anyway ;). Still, a chance to get this off my metaphorical chest. So those are my ‘ideas we could do without’. What are yours?

Age or experience?

23 December 2025 by Clark Leave a Comment

One of the things that has been a recurring theme across things I've been looking at lately is experience. Too often we confound age with experience. And, of course, often it's experience we should be talking about instead. So, a brief rant on age or experience.

First, I'll bring up the 'generations' myth. It's appealing, as our brains like buckets for things. We're kinda wired that way. The only problem is that generations as a concept has been looked at and debunked. Heck, in Ancient Greek days they were complaining that 'kids just have no respect'! And if you think about it, thinking that someone in Los Angeles, CA, of a certain age has more in common with someone in Nepal of the same age than with another Angeleno of a different age is kinda ridiculous.

And, those ‘defining’ events? They affect every conscious person! And it’s so context dependent. A local event may not mean much to you, unless it affects you somehow, and then you share more with everyone else so affected. There’s actually a simpler explanation. Say, for instance, that “young folks want classes while old folks don’t”. That’s explainable by stage of life: when you’re young you need credentials, but later on you can point to your experience.

People share values, gain motivation from the same underlying factors (expressed differently across cultures and personalities), and more. Just look at the research on self-determination theory! Attributing to age rather than explaining by experience is a mistake. So, for instance, my kids, who arguably fit the label 'digital natives', still come to me (decreasingly, I'll admit) for tech problems.

Then, there are many things that change as you develop in a domain. For instance, in our Learning Science Conference, my colleague Matt Richter was talking about feedback, and very clearly pointed out how what counts as useful feedback changes as you gain experience. This holds true for examples, too: the type of useful example changes. Also for practice: with more experience, you need more challenge.

Which, as we further see, is how we go wrong. We do the ‘one size fits all’, not recognizing that things need to change. To be fair, we also do the wrong practice (knowledge test rather than application to problems), give the wrong feedback to begin with (right/wrong), the list goes on. But even when we’re trying to do it right, we forget things like adapting for initial and developing experience. Yet, it’s a factor for instance in how much practice you need, how much spacing, etc.

This problem extends more broadly, too. We hear it in hiring (age discrimination). Of course, that's only one problem. For example, gender, race, physical and neurological differences, and more are also present. Sadly. Okay, soapbox: DEI, done right, leads to better outcomes! Actually, that's got an evidence base, so probably more than soapbox. Still. So, consider experience as one of the factors distinguishing individuals. Folks can't control their age, but they can determine their experience. So use it!

Analyzing analysis

9 December 2025 by Clark 1 Comment

Another reflection, triggered by my visit to DevLearn. One of the things that matters, and that we don't discuss enough, is analysis. That is, starting up front to determine what we need! There are nuances here, and I'm not a total expert (paging Dawn Snyder), but certain things are obvious. So let's take some time analyzing analysis.

Analysis is the first part of the process. Yes, there’re the organizing and managing bits, but the process starts with analysis, whether ADDIE, SAM, LLAMA, or any other acronym. You need to determine what’s going on, what’s the need, and what’s the appropriate remedy.

One of the first things to note is that not everything L&D does calls for a course. As is widely noted (e.g. here), there are lots of reasons courses aren't the only answer. The real trigger should be a need. That is, there's a new skillset required to do this thing we've identified as wrong or necessary. Or, there's something we're doing, but badly. At core, there are two situations: the one where we need to be, and the one where we are. The gap between them is what we want to remedy.

Then, it's a matter of determining why we're not where we want to be. The reason is, there are different interventions for different problems, as Guy Wallace talks about in his tome. It might be a lack of resources, or people get rewarded for doing X, even though it's Y they're supposed to be doing. These, by the way, aren't things a course will fix! That's why you do this analysis: so you don't build a solution where said solution isn't actually the answer.

When it is a situation where knowledge in the world, or in the head, will help, then we can jump into action. Of course, we need a clear definition of what it is people need to be able to do, under what conditions, etc. BTW, what we need are performance objectives, not 'learning' objectives. That is, it's about doing. Which is why, if the circumstances support it, we should be providing job aids, not courses! You'll usually find that job aids are cheaper to build than courses. If the task is performed rarely, or so frequently that you might lose track of a step, memory will play a role, and external memory is valuable in many such circumstances.

When you’ve determined that a course is needed, you can develop that. HOWEVER, you need certain things from the analysis phase here too. In short, you need to understand the actual performance. That includes what the performance should be, and how you can tell. Essentially, you need to know the decisions people must make to deliver the required outcomes. Which involves knowing the models that describe how the world works in this particular area, what ways people go wrong and why, and why people should care. This is where you need your subject matter experts (SMEs).  Then you can build your practices that align, and the models and examples, and then the hook and closing, and…

Whatever it is, ideally there's a metric that says this is what's needed. You design to that metric, and then test until you achieve it. If you're burning through resources faster than you're closing in on it, you can consciously evaluate. Is the lower level OK? Can we get more resources? Should we abandon ship? Deciding consciously is better than just going 'til you run out of time and/or money.
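To make that loop concrete, here's a minimal sketch in Python; the function names and the notion of a numeric 'budget' are hypothetical, just to illustrate the 'design to a metric, test, and consciously evaluate' logic rather than prescribe an implementation.

```python
# Hypothetical sketch: design to a metric, test until it's met, and evaluate
# consciously when resources run low (names and structure are illustrative).
def develop_to_metric(target, budget, design, test, revise):
    solution = design()
    while budget > 0:
        score = test(solution)              # measure against the agreed metric
        if score >= target:
            return solution                 # metric achieved: ship it
        solution = revise(solution, score)  # iterate on the design
        budget -= 1                         # each test/revise cycle costs resources
    # Out of resources before hitting the metric: decide deliberately.
    # Options: accept the lower level, seek more resources, or abandon ship.
    raise RuntimeError("budget exhausted before reaching the target metric")
```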

Analysis is a necessary first step. What is not is responding with acquiescence to a ‘we need a course on X’ request. Do you trust them to know that a course on X solves their problem? (Not the way to bet.) You can, and should, say, “yes, and…let’s dig in and make sure we’re solving the right problem”. Analysis is, properly, the way to start looking at problems. You understand what the gap is, then the root cause, and then align an intervention, or interventions, to address it. By analyzing analysis, we can figure out what we have to do, and why.

And, yes, I just gave a talk on designing in the real world, and you may have to infer much of the above when resources are constrained, but at least you know what you need to come away with.

Making the case

4 November 2025 by Clark Leave a Comment

I was asked, recently, about how to get execs to look more favorably on L&D initiatives. And, I discover, I’ve talked about this before, more than 15 years ago. And we’re still having the conversation. So, it appears we’re still struggling with making the case. Maybe there’s another way?

So, there were two messages. Briefly, I heard this:

the difficulty lies in shifting behaviours and mindsets around how people perceive the L&D role and its function(s); particularly when advocating for a transition towards evidence-based approaches and best practices.

Then this response:

[we] are just considered a bunch of lowly content creators who are given SMEs to create courses with no real access to end learners and with no scope of reaching out to anyone else. We are supposed to create all singing and dancing content which will hopefully change behaviour and make some impact.

Both are absolutely tragic situations! We should not be having to fight to be using evidence-informed approaches, and we shouldn’t be expected to create courses without having an opportunity to do research (conversations and more). Why would you want to do things in a vacuum according to outdated beliefs? It’s maniacal.

Now, my earlier screed posited making sure that the executives were aware of the tradeoffs. In general, the model I believe has been validated is one that says people need to see the alternatives before choosing one. In this case, they really aren't aware of the costs, largely for one reason: folks don't measure the impact of learning interventions! That's not true everywhere, of course, but just asking if people liked it is worthless, and even asking if they thought it had an impact has pretty much zero correlation with actual value. You have to do more. Our colleague Will Thalheimer has been one of the foremost proponents of this. In his recent tome, addressed to org execs, he argues why you should. But that's still the theoretical argument.

Sadly, if you don't measure, you don't have evidence. And, I'll argue, you can't look elsewhere, because I've tried. I have regularly looked for articles that cite research on whether training investments pay off. Beer and colleagues mentioned a meta-analysis showing only 10% of investments yielded a return, but…they didn't cite the study and haven't responded to a request for more data. So, I've been trying to think of another way.

Recently, it occurred to me that the measurement itself might be a mechanism. So, ATD had data on the use of evaluation, and most everybody was saying they did Kirkpatrick Level 1 (did they like it), but as mentioned, that's not useful. A third did Kirkpatrick Level 2, which checks whether learners can perform after the course. Too often, however, that can be knowledge checks, not actual performance. Only an eighth actually looked for change in workplace behavior (Level 3), and almost no one checked whether there was an org impact (Level 4). One quirk is that this totals more than 100%, so clearly some folks were doing more than one level.

However, if we take the final two and add them, 13% and 3%, we get 16%, which means at best, 84% of folks aren’t measuring. Which means they’re likely not getting any results. So, a cynical view would say that 84% of efforts aren’t returning value! Now, to caveats. For one, the data is old; I don’t have an exact date but it precedes my book on strategy, so it’s at least before 2014. And, of course, we could be doing better (though the above quotes might argue otherwise). Also, maybe some of those unmeasured approaches actually are working. Who knows?
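To make the arithmetic explicit, here's a minimal sketch; the Level 1 and Level 2 figures are my rough stand-ins for 'most everybody' and 'a third', while the 13% and 3% come from the numbers above.

```python
# Back-of-the-envelope version of the argument above (approximate figures).
levels = {
    "L1 reaction":   0.95,  # "most everybody" -- assumed stand-in value
    "L2 learning":   0.33,  # "a third" -- assumed stand-in value
    "L3 behavior":   0.13,  # "an eighth", cited as 13%
    "L4 org impact": 0.03,  # "almost no one", cited as 3%
}

beyond_the_course = levels["L3 behavior"] + levels["L4 org impact"]  # 0.16
unmeasured = 1 - beyond_the_course                                   # 0.84

print(f"At best {beyond_the_course:.0%} look for workplace or org impact,")
print(f"so at least {unmeasured:.0%} have no evidence their efforts return value.")
```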

Still, I take this as a strong case that we’re still wasting money on L&D. Now, I’ve argued that you should have a collection of arguments: data, theory, examples, your personal experience, their personal experience, and perhaps what the competition is doing. Then, you present what works for them at the moment. Or you can (and should) do stealth evaluation. Find performance data, work with eager adopters, whatever it takes. But worst case, you might use the above argument to show that it’s not being measured, and, as the saying goes, “what’s measured matters”. I’ll suggest that this may be one way of making the case. I welcome hearing others, or better yet, actual real research! But we have to find some traction to get better, for ourselves and our colleagues.

Transforming from knowledge to performance

16 September 2025 by Clark Leave a Comment

As I've mentioned, I'm working with a startup looking at extending training through small LIFTs. The problem is that most training is 'event' based, where learning happens in one concentrated time. Which is fine for performing right afterwards. However, much of what we train for are things that may or may not happen soon. What we want is to go from knowledge after the event to actually performing in new ways after the event, possibly a long time later. We need retention from the learning to the situation, and transfer to all appropriate (and no inappropriate) situations. Thus, we need to think differently. And, as I suggested, we're looking at supporting people not just with formal learning, but beyond, developing their ability over time. We really want to be transforming from knowledge to performance. So, what does that look like?

As usual, when I’m supposed to be sleeping is one of the times I end up noodling things over. And, so it was some nights ago. I was thinking about (as I’m wont to do) the cognitive roles that we need. I talk about practice, and models, and examples, and more recently, generative activities. But that’s formal learning, and we have a good evidence base for that. But what about going forward? What sorts of activities make sense?

Here I’m going out of my comfort zone. Yes, I’ve been doing some reading about coaching, particularly domain-independent vs domain-specific coaching. Now, here I don’t necessarily know what the research says specifically, but I do see the convergence of a variety of different models. So, I can make inferences. And post them here to get corrected!

[Diagram: early, mid, and late stages. Reflection (personal, conceptual) and reactivation (reconceptualization, recontextualization, reapplication) sit in the early stage. Planning (initial, at the intersection of early and mid; revision in mid) and barriers (internal, external) sit in mid. Impact (internal, at the boundary of mid and late; external) and survey sit in late.]

As you might expect, I made a diagram to help me understand. So, I reckon there's an early, mid, and late stage of development of capability. Formal learning should really be about getting you ready to apply.

That early phase includes reflection (really, a generative activity), which can be personal (à la scripts) or conceptual (schemas). Also reactivation: that is, seeing different ways of looking at it (new models), more examples in context, and of course more practice. (Retrieval practice, of course, where you're applying the knowledge.)

Then, in the mid phase, your learners are applying, but to real situations, not simulations. Their initial plan for how to apply the knowledge might be part of the end of the early stage, but then it's time to apply. Which could (should?) lead to revisions of the plan, and to reflecting on any barriers. Those barriers could be internal (their own understanding or hangups) or external (lack of resources, situations, tools, etc.). The former are grounds for discussion, the latter for action on the part of the org!

Then, at the late stage, learners should be looking at the impact. They can reflect on the impact on them, which could also be a mid-phase action, but ultimately you want to see if they're having an impact overall. Then, of course, you may want to survey learners about the learning experience itself. While it's all data, the org impact is useful data for evaluating what's going on and how it's going, and the survey can help you continue to improve either this or your next initiative.
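Pulling those pieces together, here's a rough sketch of the stages and activities as a simple data structure; the groupings are just my reading of the diagram and description above, purely illustrative rather than a formal specification.

```python
# Illustrative only: the staged model described above, as a plain dictionary.
stages = {
    "early": {
        "reflection":   ["personal (scripts)", "conceptual (schemas)"],
        "reactivation": ["reconceptualization", "recontextualization", "reapplication"],
        "planning":     ["initial plan (at the boundary with mid)"],
    },
    "mid": {
        "planning": ["revise the plan"],
        "barriers": ["internal (understanding, hangups)", "external (resources, tools)"],
    },
    "late": {
        "impact": ["internal (at the boundary with mid)", "external / organizational"],
        "survey": ["the learning experience itself"],
    },
}

for stage, activities in stages.items():
    print(stage, "->", ", ".join(activities))  # lists the activity names per stage
```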

Those're my initial thoughts on transforming from knowledge to performance. There's some overlap, no doubt; e.g. you could continue sending reapplications if there aren't frequent opportunities in the real world. Likewise, your learners should be assessing impact as part of deciding whether to revise a plan. Still, this seems to make sense in the first instance, at least to me. (Addressing the 'when', how much and what spacing, is what I'll be talking about at DevLearn. ;) Now, it's over to you. What have I got wrong, what am I missing, …?

Learning science on tap

11 September 2025 by Clark Leave a Comment

In the interest of the continuation of Quinnovation, Learnlets, and me, this is a solicitation post. If it’s not for you, kindly ignore. However, it may be for your boss; if so, please pass it on! 

Do you run an L&D department, or make L&D decisions, and don’t have sufficient learning science background? You know, you get asked to make decisions that involve learning – responding to vendors, stakeholders asking “why”, etc – and you’re not sure how to respond. That’s not uncommon! While you know how to select technologies, design solutions, create strategies, etc in other areas, you don’t necessarily know how to do that with an enlightened view of how we think, work, and learn. L&D is unique because it deals with learning – skills, social, informal, and more. And your school experience is not a good guide. How do you cope? Learning science on tap!

Let me offer this solution, specifically Clark Quinn, Ph.D., on tap. There are reasons why: I’ve been recognized for my depth of knowledge and breadth of experience in translating learning science into practical terms. That includes writing books, keynoting, awards, and, of course, consulting.  I’ve applied that background for literally decades in the design of solutions: games, mobile, strategy, processes, policies, and more. So, that’s available. For instance, you could send me something that needs a learning science perspective – an RFP, a memo, an organizational initiative, and I’ll break it down from a learning science perspective, and provide you with same. Or we can talk on a call. What’s more, as I’m wont to do, I’ll provide the underlying thinking. That is, you learn as you go, too! (Just how I roll.)

Of course, you don’t have to take my advice. You’ll have it, and can factor it into your thinking. And, I can adapt my thinking to specific constraints. I am known to come up with better ideas than had been proposed initially. But it’s up to you. I’ll give you my feedback, and you can do with it as you will. This service is for those that can’t come up with that advice on their own, and it’s an important perspective. What I’ll suggest as recommendations will be grounded in evidence-based approaches. I’ll research anything I need to know and don’t (no extra charge), so I learn too. But I have been involved in thinking at most levels and areas of an organization, in a multitude of roles. 

I won’t be an employee (nor want to become one). And, I’m not generating new things (that’s a different engagement, we can talk about it), but I’ll review and opine, to your needs. So, I won’t write an RFP or a whitepaper for you; I won’t design a learning experience; nor will I read an article and summarize it for you. Those’d be different engagements. But I’ll review an RFP or whitepaper (incoming or outgoing) for the necessary learning science. I will review the rules and practices around such a design.  If someone sends you an article and asks your opinion, I’ll give you the perspective on that. In particular, I’ll help evaluate any claims that you’re faced with, again either coming from inside or outside.

In short, I’m your learning science advisor. Anything you need. Of course you’ll also get any other thoughts my experience provides: how to deal with issues or people, possible solutions, and more. Comes with the territory.

I also know to respect confidentiality. Heck, my IP has been used to train LLMs, and that doesn’t sit well with me. I will also likely want to write up any learning I attain. I can anonymize it or profile you, your choice. Obviously, I won’t share anything proprietary. And my advice is yours, and you can choose to acknowledge me or keep my participation out of it; I really don’t care. 

I’ve, over time, learned to be efficient. One of the benefits of knowing how our minds work is that I know what we’re not good at, and have developed practices to ensure that I don’t fall down on commitments. I have my own project management approach, which, coupled with my natural “just do it” inclination, means that you won’t be waiting weeks for a response. I’ll commit to 48 hours max on anything less than ebook length, and as folks who are using me in other ways (*cough* LDA and Elevator 9 *cough*) will tell you, I tend to do things in a matter of hours if it’s not too long. 

So, what would such an engagement entail? I’d like to keep it simple and fair. I reckon there’s anywhere from 3 to 10 such things a month. Some will be short, some will be longer. Some months more, some less. My initial ask is $1K per month, and an initial $500 retainer (just to make sure payment systems work, and that’ll cover a call to set the context). If you want to sign up for a year, it’s $10K (9999.99 if necessary to stay under a cutoff ;). Either of us can terminate at any time; in the case of a year purchase, I’ll prorate. What I do for you is yours, what I know and learn is mine. I’ll prod you weekly to remind you to take advantage, and you don’t have to. (Heck, you can always think of it as supporting your friendly neighborhood research translator!)

This may not be you, but if it is, think through the tradeoffs. No overhead – taxes, benefits, etc – the cost is the cost. What you get is yours and your department’s. It’s an investment in learning, for that matter, because you will have the opportunity to improve your understanding as we go. My goal in this (and every) engagement is to remove the need for me in the loop, and learning about learning isn’t just for those developing learning, it’s a good practice for everyone. It’s even a competitive advantage.

Oh, one other thing. I reckon, what with my other commitments, I can only take on 10 such relationships. So, first come, first served. Learning science on tap. Your move! You can reach out here.

We now return you to your regularly scheduled day, already in progress.

Knowledge or ability?

9 September 2025 by Clark Leave a Comment

As in the last post, I've been judging the iSpring Course Contest (over now, of course). And, having finished, one other thing I've noticed is a clear distinction between 'knowing' and 'doing'. We're seeing lots of interest in skills, yet the courses are, with one exception, really assuming that if you know about it, you'll do it right. Which isn't a safe assumption! Are you trying to develop knowledge or ability? I'll suggest you want the latter. And you can do it!

So, in 9 of the 10 cases, the questions are essentially about knowing. Some of them better than others, e.g. some seem to follow Patti Shank’s advice about how to write better multiple choice questions. That is, for instance, reasonably balanced prose describing the alternatives, and only 3 options. Not all follow it, of course.

The problem is that knowing about something isn’t the same as knowing how to do it. So, for instance, knowing that you should calibrate after changing the reagent isn’t the same as remembering to do it. We’ve all probably experienced this ourselves. They pretty much all had quizzes, as required, but most were just testing if you recalled the elements of the course. Not good enough!

What the one course I laud did was make the final quiz basically you applying the knowledge in a situation. You weren't asked about the situation, but instead chose how to respond. The questions were linked, each continuing the story, so it was really a linear scenario. Which, I realize, can be just a series of mini-scenarios! Still, you dragged your response from a list of responses. They weren't all that challenging to choose between, as the alternatives were pretty clearly wrong, but for good reasons, reflecting the common mistakes. This is the way!

I think some designers were aspiring to this, as they did put the learner into a situation. However, they then asked learners to classify the answer, rather than actually make a decision about action to take, e.g. a mini-scenario. There is an art to doing this well (hence my workshop in two days)! Putting people into a context to choose their actions like they’ll have to do in the real world is the important practice. Of course, mentored live performance is better. Or simulations (tuned to games, of course ;). Even branching scenarios. But mini-scenarios are easily doable within your existing practice.
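To make the shape of a mini-scenario concrete, here's a minimal sketch; the content reuses the calibration example from above, and all fields and wording are hypothetical illustrations rather than anyone's actual course.

```python
# Illustrative sketch of a mini-scenario: a situation, decision options, and
# feedback tied to common mistakes (content is made up for illustration).
mini_scenario = {
    "situation": "Readings are drifting right after you swapped in a new reagent lot.",
    "prompt": "What do you do first?",
    "options": [
        {"choice": "Re-run the sample and report the new value",
         "correct": False,
         "feedback": "Common mistake: re-running doesn't address the root cause."},
        {"choice": "Recalibrate the instrument for the new reagent lot",
         "correct": True,
         "feedback": "Right: calibration is needed after changing the reagent."},
        {"choice": "Escalate to the vendor for a service call",
         "correct": False,
         "feedback": "Premature: escalating without basic checks wastes time."},
    ],
}

# Pick out the intended response, e.g. for scoring or feedback display.
best = next(option for option in mini_scenario["options"] if option["correct"])
print(best["choice"], "->", best["feedback"])
```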

The question of knowledge or ability is easily answered. In how many cases will the ability to recite knowledge versus make decisions be the defining success factor for your organization? I’ll suggest that making better decisions will be the differentiator your organization needs. The ability to write better mini-scenarios seems to me to be the best investment you can make to have your interventions actually achieve an impact. And if you’re not doing that, why bother?

Is ‘average’ good enough?

26 August 2025 by Clark Leave a Comment

As this is my place to 'think out loud', here's yet another thought that occurred to me: is 'average' good enough? And just what am I talking about? Well, LLMs are, by and large, trained on vast corpora. Essentially, they're averaging what is known. They're creating summaries of what's out there, based upon what's out there. (Which, BTW, suggests that it's going to get worse, as they process their own summaries! ;) But, should we be looking to the 'average'?

In certain instances, I think that’s right. If you’re below average in understanding, learning from the average is likely to lift you up. You can move from below average to, well, average. Can you go further? If you’re in well-defined spaces, like mathematics, or even programming, what LLMs know may well be better than average. Not as good as a real expert, but you can raise your game. Er, that is, if you really know how to learn.

Using these systems seems to become a mental crutch if you don't actually do the thinking. While above-average people seem to be able to use the systems well, those below average don't seem to learn. If you used it to provide knowledge, and then put that knowledge into practice, and got feedback (so, for instance, experimenting), you could fine-tune your performance (not as effectively as having someone provide feedback, but perhaps sufficiently). However, this requires knowing how to learn, and the evidence here is also that we don't do that well.

So, generative AI models give you average answers. Except, not always. They hallucinate (and always will, if this makes sense). For instance, they’ll happily support learning styles, because that’s a zombie idea that’s wrong but won’t die. They can even make stuff up, and don’t know and can’t admit to it. If you call them on it, they’ll go back and try again, and maybe get it right. Still, you really should have an ‘expert’ in the loop. Which may be you, of course.

Look, I get that they can facilitate speed. Though that would just seem to lead your employer to expect more from you. Would that be accompanied by more money? Ok, I'm getting a bit out of my lane here, but I'm not inclined to think so. But is faster better?

Also, 'average' worries me. As I've written, Todd Rose wrote a book called The End of Average that is truly insightful. Indeed, it's one of those books that makes you see the world in a different way, and that's high praise. The point being that the average washes out what's distinctive. Averaging removes the nuances, the details, as does summarization. Ideally, you should be learning from the best, not the average, since learning is social (as Mark Britz likes to point out).

Sure, it can know the average of top thoughts, but what’s better is having those top thinkers. If they’re disagreeing, that’s better for dialog, but not summarization. In truth, I’d rather learn from a Wikipedia page put together by people than a Gen AI summary, because I don’t think we can trust GenAI summaries as much as socially constructed understanding. And it’s not the same thing.

So, I’ll suggest ‘average’ isn’t nearly good enough in most cases. We want people who know, and can do. I don’t mind if folks find GenAI useful, but I want them to use it as support, not as a solution. Hey, there’s a lot that can be done with regular AI in many instances, and Retrieval Augmented Generation (RAG) systems offer some promise of improvement for GenAI, but still not perfect outcomes. And, still, all the other problems (IP, business models, and…). So, where’ve I gone wrong?

Note, I should be putting references in here, but I’ve read a lot lately and not done a good job of saving the links. Mea culpa. Guess you’ll just have to trust me, or not. 

Training Organization Fails

19 August 2025 by Clark Leave a Comment

I've worked with a lot of organizations that train others. I've consulted for them, spoken to them, and of course written and spoken for them. (And, of course, others!) And I've seen that they have a recurring problem. Over the years, it has occurred to me that these failures stem from a pattern that's understandable, and also avoidable. So I want to talk about how a training organization fails. (And realize that most organizations should be learning organizations, so this is a bigger plea.)

The problem stems from the orgs' offering. They offer training. Often, certification is linked. And folks need this for continuing education requirements. What folks are increasingly realizing is that much of the learning being offered is now findable on the web. For free. Which means that the companies aren't seeing the repeat business. Even if required, they're not seeing loyalty. And I think there's a simple reason why.

My explanation for this is that the orgs are focusing on training, not on performance solutions. People don't want training for training's sake, by and large. Sure, they need continuing education in some instances, so they'll continue (until those requirements change, at least). Folks'll take courses in the latest bizbuzz, in lieu of any other source, of course. (That's currently generative artificial intelligence, generically called AI; before that, as an article aptly pointed out, it was the metaverse, or crypto, or Web 3.0, …)

What would get people to do more than attend the necessary or trendy courses? The evidence is that folks persist when they find value. If you’re providing real value, they will come. So what does that take? I posit that a full solution would be comprised of three things: skill development, performance support, and community.

Part 1: Actual learning

The first problem, of course, could be their learning design. Too often, organizations are falling prey to the same problems that plague other organizational learning: bad design. They offer information instead of practice. Sure, they get good reviews, but folks aren't leaving capable of doing something new. That's not true of all, of course (I recently engaged with an organization with really good learning design), but event-based learning doesn't work.

What should happen is that the orgs target specific competencies, have mental models, examples, and meaningful practice. I’ve talked a lot about good learning design, and have worked with others on the same (c.f. Serious eLearning Manifesto). Still, it seems to remain a surprise to many organizations.

Further, learning has to extend beyond the 'event' model. That is, we need to space out practice with feedback. That's neglected, though there are solutions now, and more soon to be available. (Elevator 9, cough cough. ;) Thus, what we're talking about is real skill development. That's something people would care about. While it's nice to have folks say they like it, it's better if you actually demonstrate impact.

Part 2: Performance support

Of course, equipping learners with skills isn’t a total solution to need. If you really want to support people succeeding, you need more than just the skills. Folks need tools, too. In fact, your skill development should be built to include the tools. Yet, too often when I ask, such orgs admit that this is an area they don’t address.

There are times when courses don't make sense. There are cognitive limits to what we can do, and we've reliably built tools to work around those limits. This can range from things performed rarely (so courses can't help), through information that's too volatile or arbitrary, to things done so frequently that we may forget whether we've taken a step. There are many situations in pretty much any endeavor where tools make sense. And providing good ones to complement the training, and in fact using those tools as part of the training, is a great way to provide additional value.

You can even make these tools an additional revenue stream, separate from the courses, or of course as part of them. Still, folks want solutions, not just skill development. It’s not about what you do for them, but about who they become through you (see Kathy Sierra’s Badass!).

Part 3: Community

The final piece of the picture is connecting people with others. There are several reasons to do this. For one, folks can get answers that courses and tools are too coarse to address. For another, they can help one another. There’s a whole literature on communities of practice. Sure, there are societies in most areas of practice, but they’re frequently not fulfilling all these needs (and they’re targets of this strategic analysis too). These orgs can offer courses, conferences, and readings, but do they have tools for people? And are they finding ways for people to connect? It’s about learning together.

I’ve learned the hard way that it takes a certain set of skills to develop and maintain a community. Which doesn’t mean you shouldn’t do it. When it reaches critical mass (that is, becomes self-correcting), the benefits to the members are great. Moreover, the dialog can point to the next offerings; your market’s right there!

There’s more, of course. Each of these areas drills down into considerable depth. Still, it’s worth addressing systematically. If you’re an org offering learning as a business, you need to consider this. Similarly, if you’re an L&D unit in an org, this is a roadmap for you as well. If you’re a startup and want to become a learning organization, this is the core of your strategy, too. It’s the revolution L&D needs ;). Not doing this is a suite of training organization fails.

My claim, and I’m willing to be wrong, is that you have to get all of this right. In this era of self-help available online, what matters is creating a full solution. Anything else and you’ll be a commodity. And that, I suggest, is not where you want to be. Look, this is true for L&D as a whole, but it’s particularly important, I suggest, for training companies that want to not just survive, but thrive in this era of internet capabilities.
