Learnlets


Clark Quinn’s Learnings about Learning

Transforming from knowledge to performance

16 September 2025 by Clark

As I’ve mentioned, I’m working with a startup looking at extending training through small LIFTs. The problem is that most training is ‘event’ based, where learning happens in a concentrated period. Which is fine for performing right after. However, much of what we train for are things that may or may not happen soon. What we want is to go from knowledge gained at the event to actually performing in new ways, possibly long after the event. We need retention from the learning to the situation, and transfer to all appropriate (and no inappropriate) situations. Thus, we need to think differently. And, as I suggested, we’re looking at supporting people not just with formal learning, but beyond, to developing their ability over time. We really want to be transforming from knowledge to performance. So, what’s that look like?

As usual, when I’m supposed to be sleeping is one of the times I end up noodling things over. And, so it was some nights ago. I was thinking about (as I’m wont to do) the cognitive roles that we need. I talk about practice, and models, and examples, and more recently, generative activities. But that’s formal learning, and we have a good evidence base for that. But what about going forward? What sorts of activities make sense?

Here I’m going out of my comfort zone. Yes, I’ve been doing some reading about coaching, particularly domain-independent vs domain-specific coaching. Now, here I don’t necessarily know what the research says specifically, but I do see the convergence of a variety of different models. So, I can make inferences. And post them here to get corrected!

As you might expect, I made a diagram to help me understand. [The diagram shows stages of early, mid, and late: reflection (personal, conceptual) and reactivation (reconceptualization, recontextualization, reapplication) in early; planning (initial at the intersection of early and mid, revision in mid) and barriers (internal, external) in mid; impact (internal at the boundary of mid and late, external) and survey in late.] So, I reckon there’s an early, mid, and late stage of development of capability. Formal learning should really be about getting you ready to apply.

That is, the early phase includes reflection (really, a generative activity), which can be personal (à la scripts) or conceptual (schemas). Also reactivation: that is, seeing different ways of looking at it (new models), more examples in context, and of course more practice. (Retrieval practice, of course, where you’re applying the knowledge.)

Then, in mid-phase, your learners are applying, but to real situations, not simulations. Their initial plan on how to apply the knowledge might be part of the end of the early stage, but then it’s time to apply. Which could (should?) lead to revisions of the plan, and to reflecting on any barriers. Those barriers could be internal (their own understanding or hangups), or external (lack of resources, situations, tools, etc). The former are grounds for discussion, the latter for action on the part of the org!

Then, at the late stage, learners should be looking at the impact. They can reflect on the impact on them, which could also be a mid-phase action, but ultimately you want to see if they’re having an impact overall. Then, of course, you might want to survey learners about the learning experience itself. While it’s all data, the org impact is useful data to evaluate what’s going on and how it’s going, and the survey can help you continue to improve either this or your next initiative.
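Purely to make this concrete for myself, here’s a rough sketch of the trajectory as a simple data structure. It’s my own illustration, with labels I’ve invented; not a validated taxonomy or anything that’s been built.

    # Illustrative sketch of the knowledge-to-performance trajectory.
    # Stage and activity labels are my own invention, not a validated taxonomy.
    TRAJECTORY = {
        "early": {
            "reflection": ["personal", "conceptual"],    # scripts vs. schemas
            "reactivation": ["reconceptualization",      # new models
                             "recontextualization",      # more examples in context
                             "reapplication"],           # more (retrieval) practice
            "planning": ["initial"],                     # straddles early and mid
        },
        "mid": {
            "planning": ["revision"],
            "barriers": ["internal", "external"],        # discussion vs. org action
        },
        "late": {
            "impact": ["internal", "external"],          # on the learner, on the org
            "survey": ["learning experience"],
        },
    }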

Those’re my initial thoughts on transforming from knowledge to performance. There’s some overlap, no doubt, e.g. you could continue sending reapplications if there aren’t frequent opportunities in the real world. Likewise, your learners should be assessing impact as part of deciding whether to revise a plan. Still, this seems to make sense in the first instance, at least to me. (Addressing the ‘when’, how much and what spacing, is what I’ll be talking about at DevLearn. ;) Now, it’s over to you. What have I got wrong, am missing, …?

Knowledge or ability?

9 September 2025 by Clark

As in the last post, I’ve been judging the iSpring Course Contest (over, of course). And, having finished, one other thing I’ve noticed is a clear distinction between ‘knowing’ and ‘doing’. We’re seeing lots of interest in skills, yet the courses are, with one exception, really assuming that if you know about it, you’ll do it right. Which isn’t a safe assumption! Are you trying to develop knowledge or ability? I’ll suggest you want the latter. And you can do it!

So, in 9 of the 10 cases, the questions are essentially about knowing. Some of them better than others, e.g. some seem to follow Patti Shank’s advice about how to write better multiple choice questions. That is, for instance, reasonably balanced prose describing the alternatives, and only 3 options. Not all follow it, of course.

The problem is that knowing about something isn’t the same as knowing how to do it. So, for instance, knowing that you should calibrate after changing the reagent isn’t the same as remembering to do it. We’ve all probably experienced this ourselves. They pretty much all had quizzes, as required, but most were just testing if you recalled the elements of the course. Not good enough!

What the one course did that I laud was that the final quiz was basically you applying the knowledge in a situation. You weren’t asked to identify what the situation was, but instead chose how to respond. The questions were linked, each continuing the story, so it was really a linear scenario. Which I realize can be just a series of mini-scenarios! Still, you dragged your response from a list of responses. They weren’t all that challenging to choose between, as the alternatives were pretty clearly wrong, but they were wrong for good reasons, reflecting the common mistakes. This is the way!

I think some designers were aspiring to this, as they did put the learner into a situation. However, they then asked learners to classify the answer, rather than actually make a decision about action to take, e.g. a mini-scenario. There is an art to doing this well (hence my workshop in two days)! Putting people into a context to choose their actions like they’ll have to do in the real world is the important practice. Of course, mentored live performance is better. Or simulations (tuned to games, of course ;). Even branching scenarios. But mini-scenarios are easily doable within your existing practice.
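For illustration (my own invention, not from any contest entry), here’s roughly how a mini-scenario item might be structured so the learner makes a decision in context, with feedback tied to the underlying mistake rather than just ‘right’ or ‘wrong’. The content reuses my reagent/calibration example from above; all names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Option:
        action: str      # a decision the learner could take
        correct: bool
        feedback: str    # why this action does or doesn't work

    @dataclass
    class MiniScenario:
        situation: str   # the context the learner is placed in
        prompt: str      # asks for a decision, not recall of facts
        options: list    # list of Option

    # Hypothetical content; a subject-matter expert would write the real thing.
    item = MiniScenario(
        situation="You've just changed the reagent in the analyzer, and a sample is waiting.",
        prompt="What do you do next?",
        options=[
            Option("Run the waiting sample immediately", False,
                   "Skipping calibration after a reagent change risks invalid results."),
            Option("Calibrate the analyzer, then run the sample", True,
                   "Right: a reagent change requires recalibration before any samples."),
            Option("Log the change and wait for the next scheduled calibration", False,
                   "Logging matters, but results produced before recalibration aren't trustworthy."),
        ],
    )

    def mark(scenario, chosen_index):
        """Auto-mark the decision and return the diagnostic feedback."""
        option = scenario.options[chosen_index]
        return option.correct, option.feedback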

The question of knowledge or ability is easily answered. In how many cases will the ability to recite knowledge versus make decisions be the defining success factor for your organization? I’ll suggest that making better decisions will be the differentiator your organization needs. The ability to write better mini-scenarios seems to me to be the best investment you can make to have your interventions actually achieve an impact. And if you’re not doing that, why bother?

What’s In It For Them?

2 September 2025 by Clark

I’m judging some submissions from the iSpring conference, and noticing a trend. And, of course, it’s not in the requirements (which focus on using all the capabilities of their tool, not surprisingly). It’s also not in the evaluation criteria. Yet, it’s something I obviously care about. (I mean, I basically wrote a book that was about it as half of the whole picture!) I’m talking about addressing the ‘what’s in it for them’ for the learners.

So, two things to start with. For one, the evaluation does ask “Does the course maintain interest throughout?” So that’s the other half of the book, but…it doesn’t address the first half. Ok, many times you see the designers deal with it implicitly in the objectives, saying what you’ll be able to do. (Even, sometimes, in terms you will care about!) But that’s not enough.

What these courses seem to assume (and this is prevalent in much that I see) is that you’ve come to the course because you’ve interest in the topic. Which may be the case, if they’re already practitioners. Where it’s not appropriate is when it’s been assigned by someone else. And, overall, you probably shouldn’t assume the former. Unless you’re just hanging it out there for anyone who’s interested (and who can afford that?).

So, you should be addressing, up front, why the learner should care. What’s the context that makes this course of value and of interest? If you (as the learner) are a likely victim, er, audience for this course, what lets you know? Again, it’s not in the requirements, but I certainly wish it were pretty much habitual. There’s one case where it’s partly done, in that they start with the scenario and a question, but it takes some time to get there. This should be the very first thing learners see. Before objectives, before you say what the course will entail. Why should they pay attention to any of that? You haven’t made it visceral. And motivation helps you learn better.

So, please, make it a habit to hook your learners from the get-go. Show them the ‘what’s in it for them’ up front. They’ll pay more attention to everything else you do, and that leads to better outcomes. Which is what we all want.

Top 10 Learning Tools 2025

14 August 2025 by Clark

Every year, the inimitable Jane Hart collects what people say are their top 10 tools for learning. The results are always intriguing, for instance, last year AI really jumped up the list. You can vote using this form, or email your list to her via the address on that page. I’ve participated every year I’ve known about it, and do so again. Here’s my list. Realize this is for ‘learning’, not formal education per se. It’s whatever makes sense for you.

Writing

I write, a lot. It’s one way I make sense of things. So…Microsoft Word remains my go-to tool. Less and less so, of course. I’ve been using Google Docs to collaborate with others quite a bit, and am currently using Apple’s Pages for that purpose. Still, I think of Word as my ‘go-to’, at least for now. I don’t like Microsoft, and am trying to wean myself away, but I really, really need industrial-strength outlining, and no one else has measured up.

Apple’s Notes needs a mention. I use it, a lot. Several things are pinned to the top (including my SoMe canned responses, and shopping lists). I also share recipes with family members (on Apple devices), take notes on books and the like, and keep a list of ‘to consume’ (books, movies). I also use Notability for biz notes, but it’s not as ubiquitous, and I may just shift everything to Notes as there’ve been an increasing number of ‘offers’ to upgrade. Yuck.

And, of course, WordPress for this blog. Here’s where I share preliminary thoughts that end up appearing in articles, presentations, or books. It’s a way to share thinking and get feedback.

Diagramming

I’m still using OmniGraffle. I tried using Google’s Draw, and Apple’s Freeform, but… OmniGraffle’s big positive is its user interface. It works the way I want to think about it. Sure, it’s probably changed my thinking to adapt to it too, but from the get-go I found using it to be sweet. In fact, as I’ve recounted, I immediately redid some diagrams in it that I’d created in other ways previously, just because it was so elegant. The downsides are not only that it’s Mac-only (I work with many other folks), but that it’s not collaborative. Diagramming is one of the ways I make sense of things.

Presentation

Apple’s Keynote remains my preferred presentation tool. I continue to use it to draft presentations. It defaults to my ‘Quinnovation’ theme, tho’ for reasons (working with others, handouts w/o color, builds, etc) I will use a plain white theme. I’ve even built a deck of diagram builds, so I can paste them into presos and have them to hand rather than having to remake them each time. It’s another way to share.

Connection

Apple Mail, for email, is an absolute necessity. I have to stay in touch with folks, and mail’s critical to coordinate and share.

I use Safari all the time as my browser, tho’ occasionally I need Chrome compatibility, at which time I use Brave; Chrome-compatible but without Google’s intrusiveness. The browser takes me to Wikipedia, a regular trusted source for looking things up.

Zoom remains my ‘goto’ virtual meeting tool (all my meetings are virtual these days!). I of course use Microsoft’s Teams (but only through the browser now, was able to turf the app), and Google Meet, but only as others request. Of course, connecting with others is critical to learning.

Wow, I’m running out of time and space. Let’s see: Slack is a coordination tool I use a lot with the LDA, and Elevator 9. It’s also a way to share thinking, so it’s a learning tool too.

There’s more, so I guess I’ll use my last slot and aggregate my Social Media tools. That includes LinkedIn, Bluesky, and Mastodon. All three get notification of blog posts, but other than that each has its separate uses. LinkedIn is for biz connections, and reading what others are posting. Bluesky is mostly what Twitter used to be (before it became Xitter): fun, with quantity. Mastodon’s more restrained in growth, but the underlying platform is really resistant to political/business corruption.

That’s all I can think of. I welcome hearing your thoughts and seeing the results.

The ‘right’ level

5 August 2025 by Clark

So, I know I’ve talked about this before (not least, here), but it seems to continue to persist. What I’m talking about is the continuing interest in neuroscience for L&D. And, as has been said by others, it’s the wrong level of analysis. What, then, is the ‘right’ level? Here’re my thoughts, and I welcome yours.

This is not to say neuroscience isn’t valuable. It objectively is. We gain insights that bolster some views, and nuance others. That’s important, for sure. We find out about mirror neurons, important for social learning. And, for instance, we can find that dopamine ramps up more for preferred motivators, and orients us in those directions. That’s interesting. It also suggests that we should make sure we’re involving people’s motivation for learning.

However, my point is that we know this already. Cognitive science tells us this. So, for instance, at the neural level, learning is about reinforcing patterns, strengthening connections between neurons at an aggregate level. That’s great. However, how we do that is by triggering patterns in conjunction, to strengthen them. How do we trigger patterns? With words, images, etc. Things that mean something. That’s cognitive!

There’s a level above, too, the social level. Here, we are presented with what others think. Which is useful to understand. But, for learning, we have to translate back to the cognitive level. That is, we need to think about how others interpreted the same signs, and what that means for our own interpretations. Social learning is valuable, but…while we enact it publicly, our understanding of why and how will depend on what we know.

For instance, brainstorming. Without a cognitive understanding, we won’t know how to do it right. We can learn, empirically, that we get better results when we think alone first before converging (and other aspects, like avoiding premature evaluation). Why? When we get to the cognitive analysis, we recognize that if we haven’t generated our own ideas first, others’ ideas can constrain our thinking.

Sure, I’m biased. I was steeped in the cognitive perspective. Yet, when I look at what works and why, I see the meaningful analysis coming from the cognitive level. Likewise, when I see people tout ‘neuro’ and ‘brain-based’, etc, all the results I hear are really cognitive ones. Certainly, ones that cognitive science has already blessed.

So, I keep learning (another recommendation from cognitive science ;). And I have no doubt that we’ll learn things from neuroscience as that field matures. Still, for good prescriptions for learning design, cognitive is the ‘right’ level for analysis. Which means it’s the right level to study and understand. Please, ensure you do understand learning science before you design for others. That’s so you’ll create experiences that honor our learners by providing learning that works: learning that’s meaningful and effective. Which is really what we should be about. Those are my thoughts, what are yours?

Auto-marked generative?

29 July 2025 by Clark

As I continue to explore learning science, and get ever-deeper, one idea came to me that I had to check out. So, we’re recognizing the difference between elaboration (getting material into long-term memory) and retrieval (getting it out). They’re different, and yet both valuable. However, generative (not Generative AI, btw) activities typically have learners create their own understandings, with the goal of having them reprocess the information. Which makes them labor-intensive to evaluate. Sure, you could have GenAI evaluate and respond, but that’s problematic for several reasons. Is there another way? Can you have auto-marked generative activities?

Increasingly, I’m hearing more about generative activities from educators. These involve elaborative processing, where learners express the material in their own way. I argue that this can be either connecting it to personal experiences, or connecting it to prior knowledge (playing some semantics here ;). The goal, however, is to deepen and extend the patterns across neural activity, increasing the likelihood of their activation.

Whether prose, diagram, or mindmap (yes, a form of diagram, but…), these are free-form, and thus need review. Someone needs to look at them, to ascertain whether they’re right or whether they represent a significant misunderstanding. I remember when Kathy Fisher (of semantic networking fame, and the SemNet software) talked about how she asked students how water got from the digestive to the excretory system, and they (many?) ended up positing in their mind-maps an extra tube connecting the two. (Fun fact: no such tube exists; water is absorbed into the blood, and then filtered out via the kidneys.) Of course, with this evidence it’s easy to diagnose misconceptions, but at the cost of requiring human review.

I was thinking about writing retrieval practice mini-scenarios, and was led to wonder whether you could do the same for generative activities. That is, present alternatives, perhaps of the most common misconceptions, and have learners choose between different representations. One advantage, then, would be the ability to auto-mark understanding. It seems to me that they’ll still need to process each representation, to be able to choose one, so they’re doing processing. It could be a mindmap, diagram, or prose restatement. You’d also be able to diagnose, and remediate, misunderstandings.

For example, you could ask:

How does water get from the digestive to the excretory system:

    • There’s a direct connection between the two, known as the aqueduct.
    • Water is absorbed into the blood and then filtered out via the kidneys.
    • There’s an organ that processes water from the former to the latter.

(A rough conceptualization; I’m sure a physiologist could do better!)
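Just to sketch how such an item could be auto-marked and used to remediate (again, my own rough illustration, not a validated design), each alternative representation can carry its own diagnosis:

    # Illustrative only: an auto-marked 'generative' item where learners choose among
    # representations of the concept rather than authoring their own.
    WATER_ITEM = {
        "prompt": "How does water get from the digestive to the excretory system?",
        "representations": [
            {"text": "There's a direct connection between the two, known as the aqueduct.",
             "correct": False,
             "diagnosis": "The 'extra tube' misconception Fisher saw in students' mind-maps."},
            {"text": "Water is absorbed into the blood and then filtered out via the kidneys.",
             "correct": True,
             "diagnosis": None},
            {"text": "There's an organ that processes water from the former to the latter.",
             "correct": False,
             "diagnosis": "A vague intermediary; no single organ plays that role."},
        ],
    }

    def mark(item, choice_index):
        """Return whether the chosen representation is correct, plus any diagnosis to remediate."""
        rep = item["representations"][choice_index]
        return rep["correct"], rep["diagnosis"]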

I thought that perhaps I could ask someone who talks about cognitive processing, researches instructional strategies, and in particular talks about generative activity. Professor Rich Mayer, who Ruth Clark introduced to us at the Learning Development Accelerator, was kind enough to respond, and we had a Zoom chat. Not to put words in his mouth, but my understanding was that he agreed this was a plausible model. I freely offer the idea up for anyone to research (including you, Rich!). Unless such research already exists, in which case please point me to the journal articles or the like.

There’s no telling whether this is useful, of course. Are auto-marked generative activities possible and plausible? Still, better to get the idea out there than not; it may end up being useful! Which, of course, is the ultimate goal. Thoughts?

Context and models

22 July 2025 by Clark

One of the things I’ve recognized is that we don’t pay enough attention to context. It turns out to be a really important factor in cognition, as our long-term memory interacts with the current context to determine our interpretation. And, as such, it makes our interpretations very ’emergent’. Thus, our training needs to ensure that we’re likely to make the right interpretation and so choose the right action. Do we do this well? And can artificial intelligence (AI), specifically generative AI (GenAI), help? Here’re some thoughts on context and models.

So, we’ve gone from symbolic models to sub-symbolic ones as we’ve moved to a ‘post-cognitive’ interpretation of our thinking. What’s been realized is that we’re not the formal logical reasoning beings that we’d like to think. Instead, we’re very much assembling our understanding on the fly as an interaction between context and memory. In fact, our emergent memory can be altered by the context, as Beth Loftus’ research demonstrated. Which means that, if we want specific interpretations and reactions (e.g. making decisions under uncertainty), we should be careful to ensure that we provide training across a suitable suite of contexts.

Now, active inference models of cognition suggest that we’re actively building models of how the world works. Thus, we’re abstracting across experiences to generate ever-more accurate explanations. Research on mental models suggests that they’re incomplete, not completely accurate, and, arguably most importantly, hard to get rid of if they’re wrong. Thus, providing good models beforehand is important, and work by John Sweller further suggests that examples showing models in context benefit learning. You can present the model, but ultimately the learner must ‘own’ it. So, it’s important to know the models and their range of applicability to facilitate that abstraction.

What is important to know, however, is that GenAI doesn’t build models of the world. This was an important (and, sadly, not self-generated) realization for me. The implication, however, is clear. I have maintained that GenAI can’t understand context, and thus can’t generate suitable practice environments. Which, of course, is to the good for designers, since it leaves them a role ;). Importantly, however, this framing also suggests that GenAI also can’t choose an appropriate suite of contexts for practice, since it doesn’t understand models and how they’re applicable (and when not). (Another designer role!)

I am all for using technology to complement our own cognition. However, that entails knowing what the true affordances of the technology are, and also what it can’t do. So, GenAI can help think of great settings for practice. Along with a person (an expert actually) to vet the suggestions, of course. It can think of things we might forget, or ones we haven’t thought of yet. It can, of course, also create ones that aren’t realistic. There’re potentially great opportunities, but we have to know what matters, and what doesn’t. Context and models matter. GenAI can’t understand them. You can take it from there.

From knowledge to performance

15 July 2025 by Clark

For reasons, I’ve been looking at multiple-choice questions (MCQs). Of course, for writing them right, you should look to Patti Shank’s book Write Better Multiple-Choice Questions.  And there’s clearly a need!  Why? Because when it comes to writing meaningful MCQs, I’m wanting to move us from knowledge to performance. And the vast number of questions I found didn’t do that.

To start, I’ll point, as I often do, to Pooja Agarwal’s research (plays to my bias ;). She found that asking high-level questions (e.g. application questions, or mini-scenarios as I like to term them) leads to the ability to answer high-level questions (e.g., to do). What wasn’t necessary were low-level knowledge questions. She tested low alone, high alone, and low + high. What she found was that to pass high-level tests, you needed high-level questions. Further, low questions didn’t add anything. I’ll also suggest that our needs, for our learners and our organizations, are the ability to apply knowledge in high-level ways.

Yet, when I look at what’s out there, I continually see knowledge questions. They violate, btw, many principles of good multiple-choice questions (hence Patti’s book ;). These questions often have silly or obvious alternatives to the right answer. They include wrong-length responses, and too many of them (3 is ideal, usually, including the right answer). We also see a lack of feedback, just ‘right’ or ‘wrong’, not anything meaningful. We also see too many questions, or incomplete coverage, and arbitrary criteria (why 80%?). Then, too, the absolutes (never/always, etc), which aren’t the way to go. Perhaps worst, they don’t always focus on anything meaningful, but query random information that was in no way signaled as important.

Now, I suppose I can’t say that knowledge questions should always be avoided. There might be diagnostic reasons to include them (e.g. to find out why learners are getting something wrong). I’d suggest, however, that such questions are way overused. Moreover, we can do better. It’s even essentially easy (though not effortless).

What we have learners do is what’s critical for their effective learning. If we care (and we should), that means we need to make sure that what they do leads to the outcomes our organizations need. Which means that we need lots of practice. Deliberate practice, with desirable difficulty, spaced out over time. We need reactivation, for sure. But what we do to reactivate dictates what we’ll be able to do. If we ask people knowledge questions, they’ll be able to answer knowledge questions. But that has been shown not to lead to their ability to apply that knowledge to make decisions: solve problems, design solutions, generate better practices.

So, we can do better. We must do better. That is, if we want to actually assist our organizations. If we’re talking skilling (up-, re-, etc), we’re talking high-level questions. On the way, perhaps (and recommended), to more rigorous assessment (branching scenarios, sims, mentored practice, coaching, etc). Regardless, we want what we have learners do to be meaningful. When we’re moving from knowledge to performance, it’s critical. And that’s what I believe we should be doing.

(BTW, technology’s an asset, but not a solution. As I like to say:

If you get the design right, there are lots of ways to implement it; if you don’t get the design right, it doesn’t matter how you implement it.)

Continually learning

8 July 2025 by Clark

I’ve been advising Elevator 9 on learning science. Now, while I advise companies via consulting, this is a different picture. For one, they’re keen to bake learning science into the core, which is rare and (in my mind) valuable. It’s also a learning opportunity for me. I’m watching all the things a startup has to deal with that I’ve avoided (I didn’t get the entrepreneurial gene). It’s also turning out to have a really interesting revelation, which is worth exploring. I like continually learning, and this is just such an opportunity.

To start, I’ve advised lots of companies over the years. This includes on learning design, product design, market strategy, and more. Of course, with me you always get more than anticipated (like it or not ;), because I’ve eclectic interests. I also collect models, and when they match, you’ll hear about it. (To be fair, most clients have welcomed my additional insights; it’s an extra bonus of working with me! :) It’s also fun, since I also educate folks as I go along (“working with you is like going to graduate school”). Rarely, however, have I been locked into the development. I come in, give good advice, and get out. Here, it’s not the same.

I’m always a sponge, learning as well as sharing. Here, however, I’ve had involvement for a longer time: from their first no-code version through to serious platform development (in the User Acceptance Testing phase, which means we’re about to launch; exciting!). From CEO David Grad, through COO Page Chen, and then all the folks that have been added from tech, to sales and marketing, UI, and more, I’ve usually been at least peripherally involved and exposed. It’s fascinating, and I’m really learning the depths that each element takes, and of course it’s far more than my naive ideas had initially conceived.

There are two major elements to their solution. One is wrapping extended reactivation around training events. The second is taking the collected data and making it available as evidence of the learning trajectory. My role is essentially in the first; for one, there are lots of nuances going into the quantity and spacing of learning. While there’s good guidance, we’re making our best principled decisions, and then we’ll refine through testing. I’m also guiding about what those reactivation activities should be. We are extending learning, not quite to the continual, but certainly to the necessary proficiency.
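To give a flavor of what ‘quantity and spacing’ decisions look like, here’s a minimal sketch of one common heuristic: expanding intervals between reactivations. This is purely illustrative; the counts, gaps, and multiplier are placeholder parameters, not Elevator 9’s actual schedule.

    from datetime import date, timedelta

    def reactivation_schedule(event_day, count=4, first_gap_days=2, multiplier=2.0):
        """Expanding-interval spacing: each gap between reactivations is longer than the last.
        All parameters are placeholders, to be refined through testing."""
        schedule, gap, day = [], first_gap_days, event_day
        for _ in range(count):
            day = day + timedelta(days=round(gap))
            schedule.append(day)
            gap *= multiplier
        return schedule

    # e.g. a training event on 1 Sept gets reactivations with gaps of roughly 2, 4, 8, and 16 days
    print(reactivation_schedule(date(2025, 9, 1)))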

This is where it’s getting interesting. I realized the other day that most of what learning science talks about is formal learning: practice before performance. Yet, here, we’re actually moving into applying the learning in the workplace, and having learners look at the impact they’re having. In many ways, this looks more like coaching. That is, we’re covering the full trajectory. Which means we have to draw on principles beyond just formal learning. This is serious fun! Our data collection, as a consequence, goes beyond just the cognitive outcomes to also look at how the experience is developing.

Sure, there are tradeoffs. The market demands that we incorporate artificial intelligence, and they’re not immune to the advantages. We’re also finding that, pragmatically, the implications can get complex really fast, and that we have to make some simplifying assumptions. Of course, they also need to develop a minimum viable product first, after which they’ll see what direction extensions go. It’s not the ideal I would envision, but it’s also a solution that’s going to really meet what’s needed.

So, I’m continually learning, and enjoying the journey. We’ll see, of course, if we can penetrate awareness with the solution, which should be viable, and also handle the general difficulties that bedevil many startups. Still, it’s a great opportunity for me to be involved in, and similarly it’s one that can address real organizational needs.

Where’s quality?

1 July 2025 by Clark

I get it, when you’ve a hammer, the whole world looks like a nail. Moreover, there’s money on the table, and it’d be a shame not to grab onto it. Still, there’s also integrity. And, frankly, I fear that we’re going down the wrong path. So I’ll rail again, by asking “where’s quality?”

So, a colleague recently provided a link to a report by a well-known analyst. In the report, they call for an AI revolution for L&D. And, yes, I do believe L&D needs a revolution; I wrote a whole book about it. However, I fear that the direction under advisement is focusing on the wrong thing. So here’s what the initial post summarized about the article:

* Despite significant investment, many companies are utilizing outdated learning models that do not deliver substantial business impact.

* Learning needs to be dynamic, personalized, and focused on enablement.

* Chief Learning Officers (CLOs) should re-establish themselves as leaders within the enterprise, focusing not just on learning but on employee enablement.

* Artificial intelligence (AI) offers the potential to speed up content creation, lower costs, and improve operational efficiency, which allows Learning and Development (L&D) to adopt a wider and more strategic role.

Do you see anything wrong with this? I actually agree  with the first point, and probably the third. However, I think we can make a strong case that the second is not the primary issue. And very clearly the fourth point identifies what’s wrong in the second, at least before the last phrase.

So, first, when we invoke learning, we should be very careful to do it right. There are claims that up to 90% of our investment in training is going to waste. However, it’s not because our learning designs aren’t ‘dynamic, personalized, and focused on enablement’, it’s because our learning isn’t designed according to what research says works. Now, our learning needs change as our abilities improve. We start knowing what we need and why. There’re also times when performance support can be more effective than courses. Courses can still be valid, if they’re done well.

That’s the point I continue to make: I maintain that we’ll save more money and have more impact if we focus on good learning design before we invest in fancy technology. That includes AI. We want meaningful practice (which I suggest is still a role for designers, as AI doesn’t understand context), not information dump. Knowledge <> ability to perform. What we need is practice of doing. At least for novices. But beyond that, only effective self-learners will be truly able to leverage information on their own to learn. Even social learning gets better when we understand learning.

So, learning needs to be evidence-informed, first. Then, and only then, can it be dynamic, personalized, etc. Even knowing when and how to use AI as performance support counts (a more valid role, tho’ there needs to be scrutiny of the advice somehow, as AIs can give bad advice). Sure, CLOs do need to be leaders in the enterprise, but that comes from understanding cognition and learning, and then using those to better enable innovation as well as optimizing performance. Enablement’s fine as a premise, but it’s got to come from understanding. For instance, you can’t get employees contributing just because you put in AI, you need to create a learning culture. (Putting AI into a Miranda organization isn’t going to magically fix the problem.)

Let me be clear: my argument is not Gen AI bad vs Gen AI good. No, it’s learning science involved versus not. I am fine if we start using AI, Gen or otherwise, but after we’ve made sure we’re doing the right things first. Let me pose a hypothetical: for $30K, would you rather have 3 courses or 10? What if those 3 courses were designed to actually have an impact, versus 10 that are pretty and full of information, but won’t move a single meaningful needle for the organization? Sure, I’ve made up the numbers, but the reality is that we’re talking about achieving real outcomes versus making folks feel good; I’ll suggest “it’s pretty and people like it” is no substitute for improving the outcome.

This makes the last line above more problematic: we don’t need to speed up content creation. Content dump <> learning. Lowering costs and improving efficiency is all good, but after you’ve ensured adequate effectiveness. And no one seems to be talking about that. That’s why I’m asking “where’s quality?” It’s not being discussed, because AI is the next shiny object: “there’s plenty of money to be made”. Anyone else sensing a bubble? And that’s without even considering IP ethics, environmental impact, security, and VC funding. The business model is still up in the air. Hence, my question. Your thoughts?

As an aside, there’s a quote in the paper that illustrates their lack of deep understanding: “As our attention spans shorten”. Ahem. While there’s a credible argument made by Gloria Mark, I still suggest it’s not a change in our cognitive architecture, but instead availability and familiarity. We can still disappear for hours into a novel, movie, or game. It’s a fallacious basis for an argument.

Truth in advertising: I was tempted to title this “WTAH”, but…I decided that might be too incendiary ;). Hence, “Where’s quality?” Still, you can imagine my mood while reading and then writing this.
