Learnlets


Clark Quinn’s Learnings about Learning

Is ‘average’ good enough?

26 August 2025 by Clark

As this is my place to ‘think out loud’, here’s yet another thought that occurred to me: is ‘average’ good enough? And just what am I talking about? Well, LLMs are, by and large, trained on vast corpora. Essentially, they’re averaging what is known. They’re creating summaries of what’s out there, based upon what’s out there. (Which, BTW, suggests that they’re going to get worse, as they process their own summaries! ;) But should we be looking to the ‘average’?
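To make that parenthetical concrete, here’s a toy sketch, entirely my own illustration (the 0.9 factor is an assumption standing in for the detail lost to each round of summarization): regenerate a corpus from its own summary statistics a few times, and the variance, i.e. the nuance, collapses.

```python
import random

# A toy illustration (not a claim about any actual training pipeline):
# each "generation" fits summary statistics to the previous corpus and
# regenerates from them. The 0.9 factor is an assumption standing in
# for the detail lost in summarization; with it, the variance (the
# nuance) shrinks generation over generation.

def summarize_and_regenerate(corpus, n=1000):
    mean = sum(corpus) / len(corpus)
    std = (sum((x - mean) ** 2 for x in corpus) / len(corpus)) ** 0.5
    # Regenerate from the "average" of the corpus, minus some detail.
    return [random.gauss(mean, std * 0.9) for _ in range(n)]

corpus = [random.gauss(0.0, 1.0) for _ in range(1000)]
for gen in range(1, 6):
    corpus = summarize_and_regenerate(corpus)
    mean = sum(corpus) / len(corpus)
    var = sum((x - mean) ** 2 for x in corpus) / len(corpus)
    print(f"generation {gen}: variance ~ {var:.3f}")
```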

In certain instances, I think that’s right. If you’re below average in understanding, learning from the average is likely to lift you up. You can move from below average to, well, average. Can you go further? If you’re in well-defined spaces, like mathematics, or even programming, what LLMs know may well be better than average. Not as good as a real expert, but you can raise your game. Er, that is, if you really know how to learn.

Using these systems seems to become a mental crutch if you don’t actually do the thinking. While above-average people seem to be able to use the systems well, those below average don’t seem to learn. If you used it to provide knowledge, then put that knowledge into practice and got feedback (so, for instance, experimenting), you could fine-tune your performance (not as effectively as having someone provide feedback, but perhaps sufficiently). However, this requires knowing how to learn, and the evidence here is also that we don’t do that well.

So, generative AI models give you average answers. Except, not always. They hallucinate (and, if the argument holds, always will). For instance, they’ll happily support learning styles, because that’s a zombie idea that’s wrong but won’t die. They can even make stuff up, and don’t know it and can’t admit to it. If you call them on it, they’ll go back and try again, and maybe get it right. Still, you really should have an ‘expert’ in the loop. Which may be you, of course.

Look, I get that they can facilitate speed. Though that would just seem to lead your employer to expect more from you. Would that be accompanied by more money? Ok, I’m getting a bit out of my lane here, but I’m not inclined to think so. But is faster better?

Also, ‘average’ worries me. As I’ve written, Todd Rose wrote a book called The End of Average that is truly insightful. Indeed, it’s one of those books that makes you see the world in a different way, and that’s high praise. The point being that averaging removes the quality: the nuances, the details, just as summarization does. Ideally, if learning is social (as Mark Britz likes to point out), you should be learning from the best, not the average.

Sure, it can know the average of top thoughts, but what’s better is having those top thinkers. If they’re disagreeing, that’s better for dialog, but not summarization. In truth, I’d rather learn from a Wikipedia page put together by people than a Gen AI summary, because I don’t think we can trust GenAI summaries as much as socially constructed understanding. And it’s not the same thing.

So, I’ll suggest ‘average’ isn’t nearly good enough in most cases. We want people who know, and can do. I don’t mind if folks find GenAI useful, but I want them to use it as support, not as a solution. Hey, there’s a lot that can be done with regular AI in many instances, and Retrieval Augmented Generation (RAG) systems offer some promise of improvement for GenAI, but still not perfect outcomes. And, still, all the other problems (IP, business models, and…). So, where’ve I gone wrong?
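For a sense of what RAG adds, here’s a minimal sketch of the pattern (my illustration, not any particular product: the word-overlap scorer stands in for a real embedding search, and call_llm is a hypothetical placeholder): answers get grounded in retrieved sources rather than the model’s ‘average’.

```python
# A minimal sketch of the RAG pattern (my illustration): ground the
# model's answer in retrieved passages rather than its "averaged"
# training data. The word-overlap scorer stands in for a real
# embedding search; call_llm is a hypothetical placeholder.

def score(query: str, passage: str) -> float:
    """Crude relevance: fraction of query words found in the passage."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    sources = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return ("Answer using ONLY the sources below; if they don't cover "
            f"the question, say so.\nSources:\n{sources}\n\nQuestion: {query}")

corpus = [
    "Spaced practice with feedback beats massed practice for retention.",
    "Learning styles lack empirical support as a basis for design.",
    "Worked examples help novices more than unguided problem solving.",
]
print(build_prompt("Do learning styles work?", corpus))
# answer = call_llm(build_prompt(...))   # hypothetical model call
```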

Note, I should be putting references in here, but I’ve read a lot lately and not done a good job of saving the links. Mea culpa. Guess you’ll just have to trust me, or not. 

Training Organization Fails

19 August 2025 by Clark

I’ve worked with a lot of organizations that train others. I’ve consulted to them, spoken to them, and of course written and spoken for them. (And, of course, others!) And I’ve seen that they have a reliable problem. Over the years, it’s occurred to me that these failures stem from a pattern that’s understandable, and also avoidable. So I want to talk about how a training organization fails. (And realize that most organizations should be learning organizations, so this is a bigger plea.)

The problem stems from the orgs’ offering. They offer training. Often, certification is linked. And folks need this for continuing education requirements. What folks are increasingly realizing is that much of the learning on offer is now findable on the web. For free. Which means that the companies aren’t seeing the repeat business. Even if the training’s required, they’re not seeing loyalty. And I think there’s a simple reason why.

My explanation for this is that the orgs are focusing on training, not on performance solutions. People don’t want training for training’s sake, by and large. Sure, they need continuing education in some instances, so they’ll continue (until those requirements change, at least). Folks’ll take courses in the latest bizbuzz, in lieu of any other source, of course. (That’s currently Generative Artificial Intelligence, generically called AI; before that, as an article aptly pointed out, it was the metaverse, or crypto, or Web 3.0, …)

What would get people to do more than attend the necessary or trendy courses? The evidence is that folks persist when they find value. If you’re providing real value, they will come. So what does that take? I posit that a full solution comprises three things: skill development, performance support, and community.

Part 1: Actual learning

The first problem, of course, could be their learning design. Too often, organizations are falling prey to the same problem that plagues other organizational learning: bad design. They offer information instead of practice. Sure, they get good reviews, but folks aren’t leaving capable of doing something new. That’s not true of all, of course (I recently engaged with an organization with really good learning design), but event-based learning doesn’t work.

What should happen is that the orgs target specific competencies, and provide mental models, examples, and meaningful practice. I’ve talked a lot about good learning design, and have worked with others on the same (cf. the Serious eLearning Manifesto). Still, it seems to remain a surprise to many organizations.

Further, learning has to extend beyond the ‘event’ model. That is, we need to space out practice with feedback. That’s neglected, though there are solutions now, with more soon to be available. (Elevator 9, cough cough. ;) Thus, what we’re talking about is real skill development. That’s something people would care about. While it’s nice to have folks say they like it, it’s better if you can actually demonstrate impact.

Part 2: Performance support

Of course, equipping learners with skills isn’t a total solution to the need. If you really want to support people succeeding, you need more than just the skills. Folks need tools, too. In fact, your skill development should be built to include the tools. Yet, too often when I ask, such orgs admit that this is an area they don’t address.

There are times when courses don’t make sense. There are cognitive limits to what we can do, and we’ve reliably built ways to support our flaws. This can range from things performed rarely (so courses can’t help), through information that’s too volatile or arbitrary, to things done so frequently that we may forget whether we’ve taken a step. There are many situations in pretty much any endeavor where tools make sense. And providing good ones to complement the training, and in fact using those tools as part of the training, is a great way to provide additional value.

You can even make these tools an additional revenue stream, separate from the courses, or of course as part of them. Still, folks want solutions, not just skill development. It’s not about what you do for them, but about who they become through you (see Kathy Sierra’s Badass!).

Part 3: Community

The final piece of the picture is connecting people with others. There are several reasons to do this. For one, folks can get answers that courses and tools are too coarse to address. For another, they can help one another. There’s a whole literature on communities of practice. Sure, there are societies in most areas of practice, but they’re frequently not fulfilling all these needs (and they’re targets of this strategic analysis too). These orgs can offer courses, conferences, and readings, but do they have tools for people? And are they finding ways for people to connect? It’s about learning together.

I’ve learned the hard way that it takes a certain set of skills to develop and maintain a community. Which doesn’t mean you shouldn’t do it. When it reaches critical mass (that is, becomes self-correcting), the benefits to the members are great. Moreover, the dialog can point to the next offerings; your market’s right there!

There’s more, of course. Each of these areas drills down into considerable depth. Still, it’s worth addressing systematically. If you’re an org offering learning as a business, you need to consider this. Similarly, if you’re an L&D unit in an org, this is a roadmap for you as well. If you’re a startup and want to become a learning organization, this is the core of your strategy, too. It’s the revolution L&D needs ;). Not doing this yields a suite of training organization fails.

My claim, and I’m willing to be wrong, is that you have to get all of this right. In this era of self-help available online, what matters is creating a full solution. Anything else and you’ll be a commodity. And that, I suggest, is not where you want to be. Look, this is true for L&D as a whole, but it’s particularly important, I suggest, for training companies that want to not just survive, but thrive in this era of internet capabilities.

Beyond Design

12 August 2025 by Clark

When you look at the full design process, I admit to a bias. Using Analysis-Design-Development-Implementation-Evaluation (ADDIE; though I prefer more iterative models: SAM, LLAMA, …), I focus early. There are two reasons why, and I really should address them. So let’s talk about going beyond ‘design’, and why my bias might exist. (It pays to be a bit reflective, or defensive?, from time to time.)

I do believe that it’s important to get the first parts right. I’ve quipped before that if you get the design right, there are lots of ways to implement it. To do that, you need to get the analysis and design right. So I focus there. And, to be sure, there’s enough detail there to suit (or befuddle) most. Also, lots of ways we go wrong, so there’s suitable room for improvement. It’s easy, and useful, to focus there.

Another reason is that implementation, as implied in the quip, can vary. If you have the resources, need, and motivation, you can build simulation-driven experiences, maybe even VR. There are different ways to do this, depending. And those ways change over time. For instance, a reliable tool was Authorware, and then Flash, and now we can build pretty fancy experiences in most authoring tools. It’s a craft thing, not a design thing.

Implementation does matter. How you roll things out is an issue. As Jay Cross & Lance Dublin made clear in Implementing eLearning, you need to treat interventions as organizational change. That includes vision, and incentives, and communication, and support, and… And there’s a lot to be learned there. Julie Dirksen addresses much in her new book Talk to the Elephant about how things might go awry, and how you can avoid the perils.

Finally, there’s evaluation. Here, our colleague Will Thalheimer leads the way, with his Learning Transfer Evaluation Model (LTEM). His book, Performance-Focused Learner Surveys, comes closest to presenting the whole model. Too often, we basically do what’s been asked, and don’t ask more than smile sheets at best. When, to be professional, we should have metrics that we’re shooting to achieve, and then test and tune until we achieve them.

Of course, there’re also my predilections. I find analysis and design, particularly the latter, to be most intellectually interesting. Perhaps it’s my fascination with cognition, which looks at both the product and process of design. My particular interest is in doing two things: elegantly integrating cognitive and ‘emotional‘ elements, and doing so in the best ways possible, pushing the boundaries while respecting the constraints under which we endeavor. I want to change the system in the long term, but I recognize that’s not likely to happen without small changes first.

So, while I do look beyond design, that’s my more common focus. I think it’s the area where we’re liable to get the best traction. Ok, so I do say that measurement is probably our biggest lever for change, but we’ll achieve the biggest impact by making the smallest changes that improve our outcomes the most. Of course, we have to be measuring so that we know the impact!

Overall, we do need the whole picture. I do address it all, but with a bias. There are others who look at the whole process. The aforementioned Julie, for one. Her former boss and one of our great role-models, Michael Allen, for another. Jane Bozarth channels research that goes up and down the chain. And, of course, folks who look at parts. Mirjam Neelen & Paul Kirschner, Connie Malamed, Patti Shank, they all consider the whole, but tend to have areas of focus, with considerable overlap. Then we go beyond, to performance support and social, and look to people like Mark Britz, Marc Rosenberg, Jay Cross, Guy Wallace, Nigel Paine, Harold Jarche, Charles Jennings, and more.

All to the good, we benefit from different perspectives. It’s hard to get your mind around it all, but if you start small, with your area, it’s easy to begin to see connections, and work out a path. Get your design right, but go beyond design as well to get that right (or make sure it’s being done right to not undermine the design ;). So say I, what say you?

Continually learning

8 July 2025 by Clark

I’ve been advising Elevator 9 on learning science. Now, while I advise companies via consulting, this is a different picture. For one, they’re keen to bake learning science into the core, which is rare and (in my mind) valuable. It’s also a learning opportunity for me. I’m watching all the things a startup has to deal with that I’ve avoided (I didn’t get the entrepreneurial gene). It’s also turning out to yield a really interesting revelation, which is worth exploring. I like continually learning, and this is just such an opportunity.

To start, I’ve advised lots of companies over the years. This includes on learning design, product design, market strategy, and more. Of course, with me you always get more than anticipated (like it or not ;), because I have eclectic interests. I also collect models, and when they match, you’ll hear about it. (To be fair, most clients have welcomed my additional insights; it’s an extra bonus of working with me! :) It’s also fun, since I also educate folks as I go along (“working with you is like going to graduate school”). Rarely, however, have I been embedded in the development. I come in, give good advice, and get out. Here, it’s not the same.

I’m always a sponge, learning as well as sharing. Here, however, I’ve been involved for a longer time: from their first no-code version through to now-serious platform development (in the User Acceptance Testing phase, which means we’re about to launch; exciting!). From CEO David Grad, through COO Page Chen, and then all the folks that have been added across tech, sales and marketing, UI, and more, I’ve usually been at least peripherally involved and exposed. It’s fascinating, and I’m really learning the depths that each element takes; of course it’s far more than my naive ideas had initially conceived.

There are two major elements to their solution. One is wrapping extended reactivation around training events. The second is taking the collected data and making it available as evidence of the learning trajectory. My role is essentially in the first; for one, there are lots of nuances going into the quantity and spacing of learning. While there’s good guidance, we’re making our best principled decisions, and then we’ll refine through testing. I’m also guiding about what those reactivation activities should be. We are extending learning, not quite to the continual, but certainly to the necessary proficiency.

This is where it’s getting interesting. I realized the other day that most of what learning science talks about is formal learning: practice before performance. Yet, here, we’re actually moving into applying the learning in the workplace, and having learners look at the impact they’re having. In many ways, this looks more like coaching. That is, we’re covering the full trajectory. Which means we have to base our principles on more than just formal learning. This is serious fun! Our data collection, as a consequence, goes beyond just the cognitive outcomes, and also looks at how the experience is developing.

Sure, there are tradeoffs. The market demands that we incorporate artificial intelligence, and they’re not immune to the advantages. We’re also finding that, pragmatically, the implications can get complex really fast, and that we have to make some simplifying assumptions. Of course, they also need to develop a minimum viable product first, after which they’ll see what direction extensions go. It’s not the ideal I would envision, but it’s also a solution that’s going to really meet what’s needed.

So, I’m continually learning, and enjoying the journey. We’ll see, of course, if we can penetrate awareness with the solution, which should be viable, and also handle the general difficulties that bedevil many startups. Still, it’s a great opportunity for me to be involved in, and similarly it’s one that can address real organizational needs.

Where’s quality?

1 July 2025 by Clark

I get it, when you’ve a hammer, the whole world looks like a nail. Moreover, there’s money on the table, and it’d be a shame not to grab onto it. Still, there’s also integrity. And, frankly, I fear that we’re going down the wrong path. So I’ll rail again, by asking “where’s quality?”

So, a colleague recently provided a link to a report by a well-known analyst. In the report, they call for an AI revolution for L&D. And, yes, I do believe L&D needs a revolution, I wrote a whole book about it. However, I fear that the direction under advisement is focusing on the wrong thing. So here’s what the initial post summarized about the article:

* Despite significant investment, many companies are utilizing outdated learning models that do not deliver substantial business impact.

* Learning needs to be dynamic, personalized, and focused on enablement.

* Chief Learning Officers (CLOs) should re-establish themselves as leaders within the enterprise, focusing not just on learning but on employee enablement.

* Artificial intelligence (AI) offers the potential to speed up content creation, lower costs, and improve operational efficiency, which allows Learning and Development (L&D) to adopt a wider and more strategic role.

Do you see anything wrong with this? I actually agree with the first point, and probably the third. However, I think we can make a strong case that the second is not the primary issue. And, very clearly, the fourth point identifies what’s wrong with the second, at least up until its last phrase.

So, first, when we invoke learning, we should be very careful to do it right. There are claims that up to 90% of our investment in training is going to waste. However, it’s not because our learning designs aren’t ‘dynamic, personalized, and focused on enablement’, it’s because our learning isn’t designed according to what research says works. Now, our learning needs change as our abilities improve. We start knowing what we need and why. There’re also times when performance support can be more effective than courses. Courses can still be valid, if they’re done well.

That’s the point I continue to make: I maintain that we’ll save more money and have more impact if we focus on good learning design before we invest in fancy technology. That includes AI. We want meaningful practice (which I suggest is still a role for designers, as AI doesn’t understand context), not information dump. Knowledge ≠ ability to perform. What we need is practice in doing. At least for novices. Beyond that, only effective self-learners will truly be able to leverage information on their own to learn. Even social learning gets better when we understand learning.

So, learning needs to be evidence-informed, first. Then, and only then, can it be dynamic, personalized, etc. Even knowing when and how to use AI as performance support counts (a more valid role, tho’ there needs to be scrutiny of the advice somehow, as AIs can give bad advice). Sure, CLO’s do need to be leaders in the enterprise, but that comes from understanding cognition and learning, and then using those to better enable innovation as well as optimizing performance. Enablement’s fine as a premise, but it’s got to come from understanding. For instance, you can’t get employees contributing just because you put in AI, you need to create a learning culture. (Putting AI into a Miranda organization isn’t going to magically fix the problem.)

Let me be clear: my argument is not Gen AI bad vs Gen AI good. No, it’s learning science involved versus not. I am fine if we start using AI, Gen or otherwise, but after we’ve made sure we’re doing the right things first. Let me pose a hypothetical: for $30K, would you rather have 3 courses or 10? What if those 3 courses were designed to actually have an impact, versus 10 that are pretty and full of information, but won’t move a single meaningful needle for the organization? Sure, I’ve made up the numbers, but the reality is that we’re talking about achieving real outcomes versus making folks feel good; I’ll suggest “it’s pretty and people like it” is no substitute for improving the outcome.

This makes that last point above more problematic: we don’t need to speed up content creation. Content dump ≠ learning. Lowering costs and improving efficiency is all good, but only after you’ve ensured adequate effectiveness. And no one seems to be talking about that. That’s why I’m asking “where’s quality?” It’s not being discussed, because AI is the next shiny object: “there’s plenty of money to be made”. Anyone else sensing a bubble? And that’s without even considering IP ethics, environmental impact, security, and VC funding. The business model is still up in the air. Hence, my question. Your thoughts?

As an aside, there’s a quote in the paper that illustrates their lack of deep understanding: “As our attention spans shorten”. Ahem. While there’s a credible argument made by Gloria Mark, I still suggest it’s not a change in our cognitive architecture, but instead availability and familiarity. We can still disappear for hours into a novel, movie, or game. It’s a fallacious basis for an argument.

Truth in advertising: I was tempted to title this “WTAH”, but…I decided that might be too incendiary ;). Hence, “Where’s quality?” Still, you can imagine my mood while reading and then writing this.

In praise of reminders

17 June 2025 by Clark

I have a statement that I actively recite to people: If I promise to do something, and it doesn’t get into a device, we never had the conversation. I’m not trying to be coy or problematic, there are sound reasons for this. It’s part of distributed cognition, and augmenting ourselves. It’s also part of a bigger picture, but here I am in praise of reminders.

Scheduling by the clock is relatively new from a historical perspective. We used to use the sun, and that was enough. As we engaged in more abstract and group activities, we needed better coordination. We invented clocks and time as a way to accomplish this. For instance, train schedules.

It’s an artifact of our creation, thus biologically secondary. We have to teach kids to tell time! Yet, we’re now beholden to it (even if we muck about with it, e.g. changing time twice a year, in conflict with research on the best outcomes for us). We created an external system to help us work better. However, it’s not well-aligned with our cognitive architecture, as we don’t naturally have instincts to recognize time.

We work better with external reminders. So, we have bells ringing to signal that it’s time to go to another class, or to attend worship. Similar to, but different from, other auditory signals (that don’t depend on our spatial attention) such as horns, buzzers, sirens, and the like. They can draw our attention to something that we should attend to. Which is a good thing!

I, for one, became a big fan of the Palm Pilot (I could only justify a III when I left academia, for complicated reasons). Having a personal device on which I could add and edit things like reminders on a date/time calendar fundamentally altered my effectiveness. Before, I could miss things if I disappeared into a creative streak on a presentation, paper, diagram, etc. With this, I could be interrupted and alerted that I had an appointment for something: call, meeting, etc. I automatically attach alerts to all my calendar entries.

Granted, I pushed myself to see just how effective I could make myself. Thus, I actively cultivated my address book, notes, and reminders as well as my calendar (and still do). But this is one area that’s really continued to support my ability to meet commitments. Something I immodestly pride myself on delivering. I hate to have to apologize for missing a commitment! (I’ll add multiple reminders to critical things!) Which doesn’t mean you shouldn’t actively avoid all the unnecessary events people would like to add to your calendar, but that’s just self-preservation!

Again, reminders are just one aspect of augmenting ourselves. There are many tools we can use – creating representations, externalizing knowledge, … – but this one in particular has been a big key to improving my ability to deliver. So I am in praise of reminders, as one of the tools we can, and should, use. What helps you?

(And now I’ll tick the box on my weekly reminder to write a blog post!)

Locus of intelligence

6 May 2025 by Clark

I’m not a curmudgeon, or even anti-AI (artificial intelligence). To the contrary! Yet, I find myself in a bit of a rebellion in this ‘generative‘ AI era. And I’m wondering why. The hype, of course, bugs me. But it occurs to me that a core problem may reside in where we put the locus of intelligence. Let me try to make it clear.

In the early days of the computer (even before my time!), the commands were to load memory into registers, conduct boolean operations on them, and display the results. The commands to do so were at the machine level. We went a level above with a translation of those machine instructions into somewhat more comprehensible terms: assembly language. As we went along, we moved more and more toward putting the onus on the machine. This was because we had more processor cycles, better software, etc. We’re largely to the point where we can stipulate what we want, and the machine will code it!

There are limits. When Apple released the Newton, they tried to put the onus on the machine to read human writing. In short, it didn’t work. Palm’s Pilots succeeded because Jeff Hawkins went for Graffiti as the language, which shared the responsibility between person and processor. Nowadays we can do speech and text recognition, but there are still limitations. Yes, we have made advances in technology, but some of it’s done by distributing to non-local machines, and there are still instances where it fails.

I think of this when I think of prompt engineering. We’ve trained LLMs with vast quantities of information. But, to get it out, you have to ask in the right way! Which seems like a case of having us adapt to the system instead of vice versa. You have to give them heaps more context than a person would need, and they still can hallucinate.
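To illustrate the asymmetry (a sketch of my own; the template and field names are my invention, not any standard): a colleague needs one sentence, while the model tends to need scaffolding like this.

```python
# Illustration only: the scaffolding ("prompt engineering") these
# systems tend to need, versus what you'd ask a colleague. The field
# names are my own invention, not any standard.

def engineered_prompt(role: str, task: str, audience: str,
                      constraints: str, example: str) -> str:
    """Wrap a simple request in the context an LLM typically needs."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Audience: {audience}\n"
        f"Constraints: {constraints}\n"
        f"Example of the desired output:\n{example}\n"
    )

# To a colleague: "Can you draft an objective for the forklift module?"
print(engineered_prompt(
    role="an instructional designer versed in learning science",
    task="draft one measurable objective for a forklift safety module",
    audience="warehouse staff, novices",
    constraints="one sentence, observable behavior, no jargon",
    example="Given a loaded pallet, the learner can demonstrate a safe lift.",
))
```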

I’m reminded of a fictional exchange I recently read (of course I can’t find it now), where the AI user is being advised to know the domain before asking the AI. When the user queries why they would need the AI if they know the domain, they’re told they’re training the AI!

As people investigate AI usage, one of the results is that your initial intelligence indicates how much use you’ll get out of this version of AI. If you’re already a critical thinker, it’s a good augment. If you’re not, it doesn’t help (and may hinder).

Sure, I have problems with the business models (much not being accounted for: environmental cost, IP licensing, security, VC boosting). But I’m more worried about people depending too much on these systems without truly understanding what the limitations are. The responsible folks I know advocating for AI always suggest having a person in the loop. Which is problematic if you’re giving such systems agency; it’ll be too late if they do something wrong!

I think experimenting is fine. I think it’s also still too early to place a bet on a long-term relationship with any provider. I’m seeing more and more AI tools, e.g. content recommenders, simulation avatars, and the like. Like with the LMS, when anyone who could program a database would build one, I’m seeing everyone wanting to get in on the goldrush. I fear that many will end up losing their shirts. Which is, I suppose, the way of the world.

I continue to be a big fan of augmenting ourselves with technology. I still think we need to consider AI a tool, not a partner. It’s nowhere near being our intellectual equal. It may know more, but it still has limitations overall. I want to develop, and celebrate our intelligence. I laud our partnership with technologies that augment what we do well with what we don’t. It’s why mobile became so big, why AI has already been beneficial, and why generative AI will find its place. It’s just that we can’t allow the hype to blind us to the real locus of intelligence: us.

Small changes with big impact

8 April 2025 by Clark

In the reality stakes, I recognize that people aren’t likely to throw out their whole approach. Instead, they make the small changes with big impact. Then, of course, they should use success to leverage the opportunity to do more. You can bring in a full evaluation of everything you do against the latest fad, but those tend to be expensive and out of date by the time they’re done. Wherever you are, there’s room for improvement. How do you get there? By understanding how we think, work, and learn.

So, one of the things I’ve done, repeatedly across clients, is look at what they’re doing (including outputs and process). I have tended to do this in a lightweight approach, because I know most folks are sensitive to costs, and want to get the biggest bang for the buck. I’ve done so for content, for design practices, for market opportunities, and more.

To do so means I go through materials, whether products, processes, or plans, to understand the experience and look for ways to improve it. Then, we prioritize those potential opportunities. I then bring my independent observations together for a discussion on what’s useful and necessary. Of course, we always find things that don’t meet those criteria. My concluding reports typically state the goals, the current context, the applicable principles, and recommendations. I’m also happy to work with folks to see how it works out and what tweaks may be of use. Which isn’t every engagement, but it’s not infrequent.

One of the robust outcomes, for what it’s worth, is that folks get insights they (and I) didn’t expect! That may be because I’ve been an interdisciplinary mongrel, with interests in many things, or possibly because the cognitive foundations provide a basis to address most anything. Regardless, I’ve found opportunities to improve in pretty much all situations. These are at every level from how to implement a field to collect information to an assessment of the viability of a go-to-market strategy.

In short, looking at things from the perspective of how our brains work provides insights into the ways in which we’ve violated that alignment. Further, it’s a reliable phenomenon that pretty much everything we do has opportunities to improve. Sure, not all such moves will be worth the effort, or they may conflict with what folks have learned to live with. Still, there are pretty much guaranteed to be valuable changes that can be made. At least, that’s been my experience, and my clients’.

What I’m really doing is a cognitive/learning audit. Basically, it’s about going through the cognitive processing cycle repeatedly through an experience. That experience can be the learner’s, the designer’s, the purchaser’s, or more. Usually, all of the above! However, what you want to do is to minimize the barriers, and maximize the value. What are the user’s goals, what’s perceived, what’s considered, what’s processed, and what happens next?

There are benefits to having been actively investigating our minds for a number of decades now. I know the principles, I know how to apply them, and I also work in the real world. Also, perhaps against my own self-interest, I look to find ways to do it as easily and inexpensively as possible. I know organizations have limitations. Still, pretty much everyone benefits when you look for small changes with big impact. How about you?

Why the EIP Conference

1 April 2025 by Clark

On my walk today, I was pondering the Evidence-informed Practitioner (EIP) conference (rapidly approaching, hence the top-of-mind positioning). And I was looking at it in a different way. Not completely, but enough. So, I thought I’d share those thoughts with you, as a possible answer to “why the EIP conference?”

To start, the conference was created to fill the gap articulated at our Learning Science conference. To wit, “this is all well and good, but how do we do it in practice?” Which, as I’ve opined, is a fair question. And we resolved to answer that. 

I started with pondering, while perambulating, about the faculty. We’ve assembled folks who’ve been there, done that, know the underpinnings, and are articulate at sharing. Sure, we could ask people to submit proposals, but instead we went out and searched for the folks we thought would do this best. 

My cogitations went further. What would be the best way for folks to get the answers they need? And, of course, the best is mentored live practice…like most learning would be. And, like most learning, that’s not necessarily practical to organize nor affordable. So, what’s the next best thing?

You could do uni courses in it all. You could read books about it all. Or, you could have a focused design. That is, first you have the best folks available create presentations about it. Then, have discussion forums available to answer the questions that arise. With the presenters participating. Finally, you have live sessions at accessible times to consolidate the content and discussions. Again, with the presenters hosting. 

That last is what we’ve actually done. That’s what my reflection told me; this is pretty much the best way to get practical advice you can put into practice right away, and refine it. At least, the best value. From the time the videos are available ’til the live sessions, you have a chance to put what’s relevant to you into practice – that is, try it out – and have experts around to share what you’ve learned and answer the emergent questions!  

Let’s be clear. Most confs have presentations and time to talk to the presenters, but not the time between presentations and scheduled discussion to try things out. Here, between my co-director Matt Richter and myself, we created a pedagogy that works. 

Further, I got to choose the curriculum, starting with what most folks do (design courses), and then branching out from there: first, the barriers, then forward to analysis, and back to evaluation. Then we go broader, talking about extending learning via motivation and coaching, resources for continuing to learn, and technology, and move to ‘not learning’, via performance support. Finally, we move on to org-spanning issues including innovation and culture.

This is the right stuff to know, and an almost ideal way to learn it, in a practical format. It’s all asynchronous, so you can do it on your own schedule, except for the live sessions; those are each offered at two different times to increase the likelihood that you can attend the ones you want to. Of course, they’re all taped as well.

But wait, there’s more! (Always wanted to say that. ;) If you order now, using the code EIP10CQ, you get 10% off! That makes a great deal become exceptional! Ok, so I’m laying it on a bit thick, but we really did try to make this the gala event of the season, and a valuable learning experience. So, I hope to see you there. Anyways, that’s my answer to why the EIP conference.

Applied learning science

18 March 2025 by Clark

One of my favorite things to do is to help people apply the cognitive and learning sciences (under realistic constraints). That can be to their practices, processes, or products, via consulting, workshops, writing, and more. One thing I’ve been doing over the past few years is just this, for a particular entity. I was found via a workshop, and ended up coming on as an advisor. They’re now about ready to go live, and it’s time for me to tell you what they’re doing, why, and how. So here’s an application of applied learning science.

It starts with a problem, as many good solutions do. The issue is that, in L&D, too often they’re delivering live sessions to address a particular situation. Whether someone’s said “we need a course on this”, or there’s been a deep analysis, at some point they’ve pulled people together. It could be a day, several days in a row, or even spaced out every other week, every month, what have you. And we know that, by and large, this isn’t going to lead to change!

Research on learning tells us, quite strongly, that to achieve a persistent new ability to ‘do’, we need to strengthen the learning over time. New information gets forgotten after only a day or two, according to the forgetting curve! So, we need to reactivate the learning. That can be reconceptualization, recontextualization, or reapplication. It can also be reflection, and even planning, and evaluation.
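To make that concrete, here’s a toy, Ebbinghaus-style sketch (the decay rate and reactivation boost are my illustrative assumptions, not anyone’s validated model) of why spaced reactivation matters:

```python
import math

# Toy forgetting curve: retention decays exponentially with time since
# the last review; each reactivation increases "stability", slowing
# later forgetting. All numbers are illustrative assumptions.

def retention(days_since_review: float, stability: float) -> float:
    return math.exp(-days_since_review / stability)

def simulate(review_days: list[int], horizon: int = 30) -> None:
    stability = 2.0      # new material fades within a day or two
    last_review = 0
    for day in range(horizon + 1):
        if day in review_days:
            stability *= 2.5   # reactivation: forgetting slows
            last_review = day
        if day % 5 == 0:
            r = retention(day - last_review, stability)
            print(f"day {day:2d}: retention ~ {r:.2f}")

simulate(review_days=[])          # a single event: rapid decay
print("---")
simulate(review_days=[2, 7, 16])  # spaced reactivations: retention holds
```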

However, it’s been tough to do this reactivation. It typically requires finagling, and faces objections: not just from the learners, but also the stakeholders! Such interventions need to be small but effective. That’s what this solution does. Other approaches have been tried, and some other solutions do exist, but this one has a couple of advantages. For one, a clear focus: it’s not doing other things, except reactivating learning.

Ok, one other thing: it’s also collecting data. Too often, there’s no way to know if the learning’s effective. Even if there’s intent, it’s hard to get approval. So, this solution not only reactivates learning as mentioned, it tracks the responses. In practical ways.

What’s been my role? That’s the other thing; we’re applying this in ways that reflect what learning science tells us. Ok, we have to make some inferences, which we’re testing, but we’re starting from good principles. So, I’m advising on the spacing of the learning and the content of the reactivations. We call those prompts, which ask learners to respond. These prompts then gather into small chunks called LIFTs (Learning Interventions Fueling Transformation). (Everyone’s gotta have an acronym, after all, and this plays along with the company name, Elevator 9 ;). The sequence of LIFTs makes a learning journey.

What’s important is how many we need, and how frequently we deliver them. It’s dependent on some factors, so we’re asking about those too: frequency of application, complexity, importance, and prior experience. Hopefully, in clear and useful ways. They’re actively looking for companies that are keen to help us refine this, too (in return for the usual considerations ;).
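As a purely hypothetical sketch of how those four factors might drive a schedule (the function, weights, and scales are my invention, not Elevator 9’s actual algorithm):

```python
# Hypothetical sketch only: maps the four factors above onto how many
# reactivation prompts to send, and when. Not the product's algorithm.

def reactivation_schedule(frequency_of_application: int,  # 1 (rare) .. 5 (daily)
                          complexity: int,                # 1 (simple) .. 5 (complex)
                          importance: int,                # 1 (nice) .. 5 (critical)
                          prior_experience: int           # 1 (novice) .. 5 (expert)
                          ) -> list[int]:
    """Days (after the live event) on which to deliver prompts."""
    # Complex, important, unfamiliar, rarely applied material plausibly
    # needs more reactivations...
    count = max(2, 2 + complexity + importance
                - prior_experience - frequency_of_application // 2)
    # ...on an expanding schedule, with each gap growing.
    days, day, gap = [], 0, 2
    for _ in range(count):
        day += gap
        days.append(day)
        gap = round(gap * 1.8)
    return days

print(reactivation_schedule(frequency_of_application=1, complexity=4,
                            importance=5, prior_experience=2))
# -> [2, 6, 13, 26, 49, 90, 164, 297, 536]
```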

The end result is a product that easily supplements your live events. Your learners get reactivations, and you get data. Importantly, you get better outcomes from your interventions. This capability is possible; the goal is just to make it easy to do. Moreover, it’s a solution that not only embodies but shares the underlying learning science, improving you as it does your learners. Win-win! I generally don’t tout solutions, but this one has actively put learning science (tempered by reality, to be sure) at the forefront. Applied learning science, and technology, the way it ought to be done. It’s been an honor to work with them!
