Learnlets

Clark Quinn’s Learnings about Learning

Search Results for: quip

Training Organization Fails

19 August 2025 by Clark

I’ve worked with a lot of organizations that train others. I’ve consulted for them, spoken to them, and of course written for them. (And, of course, for others!) And I’ve seen that they have a recurring problem. Over the years, it has occurred to me that these failures stem from a pattern that’s understandable, and also avoidable. So I want to talk about how a training organization fails. (And, realize, most organizations should be learning organizations, so this is a bigger plea.)

The problem stems from the orgs’ offering. They offer training. Often, certification is linked, and folks need this for continuing education. What folks are increasingly realizing is that much of the learning on offer is now findable on the web. For free. Which means the companies aren’t seeing the repeat business. Even where training is required, they’re not seeing loyalty. And I think there’s a simple reason why.

My explanation is that the orgs are focusing on training, not on performance solutions. People don’t want training for training’s sake, by and large. Sure, they need continuing education in some instances, so they’ll continue (until those requirements change, at least). Folks’ll take courses in the latest bizbuzz, in lieu of any other source, of course. (That’s currently Generative Artificial Intelligence, generically called AI; before that, as an article aptly pointed out, it was the metaverse, or crypto, or Web 3.0, …)

What would get people to do more than attend the necessary or trendy courses? The evidence is that folks persist when they find value. If you’re providing real value, they will come. So what does that take? I posit that a full solution comprises three things: skill development, performance support, and community.

Part 1: Actual learning

The first problem, of course, could be their learning design. Too often, these organizations fall prey to the same problem that plagues other organizational learning: bad design. They offer information instead of practice. Sure, they get good reviews, but folks aren’t leaving capable of doing something new. That’s not true of all, of course (I recently engaged with an organization with really good learning design), but event-based learning doesn’t work.

What should happen is that the orgs target specific competencies, have mental models, examples, and meaningful practice. I’ve talked a lot about good learning design, and have worked with others on the same (c.f. Serious eLearning Manifesto). Still, it seems to remain a surprise to many organizations.

Further, learning has to extend beyond the ‘event’ model. That is, we need to space out practice with feedback. That’s neglected, though there are solutions now, and more soon to be available. (Elevator 9, cough cough. ;) Thus, what we’re talking about is real skill development. That’s something people would care about. While it’s nice to have folks say they like it, it’s better if you can actually demonstrate impact.
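
The spacing idea is simple enough to sketch. Here’s a minimal, illustrative schedule generator for spaced reactivation; the starting gap and the doubling multiplier are my own assumptions for illustration, not numbers from any particular research or product:

```python
from datetime import date, timedelta

def spaced_schedule(start, reviews=4, first_gap_days=2, multiplier=2):
    """Expanding-interval practice schedule: each reactivation
    is spaced further from the previous one than the last gap."""
    sessions, gap, current = [], first_gap_days, start
    for _ in range(reviews):
        current += timedelta(days=gap)
        sessions.append(current)
        gap *= multiplier  # widen the spacing each cycle
    return sessions

# Reactivations at +2, +6, +14, and +30 days after the initial event
for d in spaced_schedule(date(2025, 1, 1)):
    print(d)
```

The point isn’t the particular numbers; it’s that practice extends past the event, with widening gaps, rather than ending when the course does.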

Part 2: Performance support

Of course, equipping learners with skills isn’t a total solution to the need. If you really want to support people succeeding, you need more than just the skills. Folks need tools, too. In fact, your skill development should be built to include the tools. Yet, too often when I ask, such orgs admit that this is an area they don’t address.

There are times when courses don’t make sense. There are cognitive limits to what we can do, and we’ve reliably built ways to support our flaws. This can range from things performed rarely (so courses can’t help), through information that’s too volatile or arbitrary, to things done so frequently that we may forget whether we’ve taken a step. There are many situations in pretty much any endeavor where tools make sense. And providing good ones to complement the training, and in fact using those tools as part of the training, is a great way to provide additional value.

You can even make these tools an additional revenue stream, separate from the courses, or of course as part of them. Still, folks want solutions, not just skill development. It’s not about what you do for them, but about who they become through you (see Kathy Sierra’s Badass!).

Part 3: Community

The final piece of the picture is connecting people with others. There are several reasons to do this. For one, folks can get answers that courses and tools are too coarse to address. For another, they can help one another. There’s a whole literature on communities of practice. Sure, there are societies in most areas of practice, but they’re frequently not fulfilling all these needs (and they’re targets of this strategic analysis too). These orgs can offer courses, conferences, and readings, but do they have tools for people? And are they finding ways for people to connect? It’s about learning together.

I’ve learned the hard way that it takes a certain set of skills to develop and maintain a community. Which doesn’t mean you shouldn’t do it. When it reaches critical mass (that is, becomes self-correcting), the benefits to the members are great. Moreover, the dialog can point to the next offerings; your market’s right there!

There’s more, of course. Each of these areas drills down into considerable depth. Still, it’s worth addressing systematically. If you’re an org offering learning as a business, you need to consider this. Similarly, if you’re an L&D unit in an org, this is a roadmap for you as well. If you’re a startup and want to become a learning organization, this is the core of your strategy, too. It’s the revolution L&D needs ;). Not doing this is a suite of training organization fails.

My claim, and I’m willing to be wrong, is that you have to get all of this right. In this era of self-help available online, what matters is creating a full solution. Anything else and you’ll be a commodity. And that, I suggest, is not where you want to be. Look, this is true for L&D as a whole, but it’s particularly important, I suggest, for training companies that want to not just survive, but thrive in this era of internet capabilities.

Beyond Design

12 August 2025 by Clark

When you look at the full design process, I admit to a bias. Using Analysis-Design-Development-Implementation-Evaluation, ADDIE, (though I prefer more iterative models: SAM, LLAMA, …), I focus early. There are a couple of reasons why, and I really should address them. So let’s talk beyond ‘design’ and why my bias might exist. (It pays to be a bit reflective, or defensive?, from time to time.)

I do believe that it’s important to get the first parts right. I’ve quipped before that if you get the design right, there are lots of ways to implement it. To do that, you need to get the analysis and design right. So I focus there. And, to be sure, there’s enough detail there to suit (or befuddle) most. Also, lots of ways we go wrong, so there’s suitable room for improvement. It’s easy, and useful, to focus there.

Another reason is that implementation, as implied in the quip, can vary. If you have the resources, need, and motivation, you can build simulation-driven experiences, maybe even VR. There are different ways to do this, depending. And those ways change over time. For instance, a reliable tool was Authorware, and then Flash, and now we can build pretty fancy experiences in most authoring tools. It’s a craft thing, not a design thing.

Implementation does matter. How you roll things out is an issue. As Jay Cross & Lance Dublin made clear in Implementing eLearning, you need to treat interventions as organizational change. That includes vision, and incentives, and communication, and support, and… And there’s a lot to be learned there. Julie Dirksen addresses much in her new book Talk to the Elephant about how things might go awry, and how you can avoid the perils.

Finally, there’s evaluation. Here, our colleague Will Thalheimer leads the way, with his Learning Transfer Evaluation Model (LTEM). His book, Performance Focused Learner Surveys comes closest to presenting the whole model. Too often, we basically do what’s been asked, and don’t ask more than smile sheets at best. When, to be professional, we should have metrics that we’re shooting to achieve, and then test and tune until we achieve them.

Of course, there’re also my predilections. I find analysis and design, particularly the latter, to be most intellectually interesting. Perhaps it’s my fascination with cognition, which encompasses both the product and process of design. My particular interest is in doing two things: elegantly integrating cognitive and ‘emotional’ elements, and doing so in the best ways possible that push the boundaries but not the constraints under which we endeavor. I want to change the system in the long term, but I recognize that’s not likely to happen without small changes first.

So, while I do look beyond design, that’s my more common focus. I think it’s the area where we’re liable to get the best traction. Ok, so I do say that measurement is probably our biggest lever for change, but we’ll achieve the biggest impact by making the smallest changes that improve our outcomes the most. Of course, we have to be measuring so that we know the impact!

Overall, we do need the whole picture. I do address it all, but with a bias. There are others who look at the whole process. The aforementioned Julie, for one. Her former boss and one of our great role-models, Michael Allen, for another. Jane Bozarth channels research that goes up and down the chain. And, of course, folks who look at parts. Mirjam Neelen & Paul Kirschner, Connie Malamed, Patti Shank, they all consider the whole, but tend to have areas of focus, with considerable overlap. Then we go beyond, to performance support and social, and look to people like Mark Britz, Marc Rosenberg, Jay Cross, Guy Wallace, Nigel Paine, Harold Jarche, Charles Jennings, and more.

All to the good, we benefit from different perspectives. It’s hard to get your mind around it all, but if you start small, with your area, it’s easy to begin to see connections, and work out a path. Get your design right, but go beyond design as well to get that right (or make sure it’s being done right to not undermine the design ;). So say I, what say you?

From knowledge to performance

15 July 2025 by Clark

For reasons, I’ve been looking at multiple-choice questions (MCQs). Of course, for writing them right, you should look to Patti Shank’s book Write Better Multiple-Choice Questions.  And there’s clearly a need!  Why? Because when it comes to writing meaningful MCQs, I’m wanting to move us from knowledge to performance. And the vast number of questions I found didn’t do that.

To start, I’ll point, as I often do, to Pooja Agarwal’s research (plays to my bias ;). She found that asking high-level questions (e.g., application questions, or mini-scenarios as I like to term them) leads to the ability to answer high-level questions (e.g., to do). What wasn’t necessary were low-level knowledge questions. She tested low alone, high alone, and low + high. What she found was that to pass high tests, you needed high questions. Further, low questions didn’t add anything. I’ll also suggest that our needs, for our learners and our organizations, are for the ability to apply knowledge in high-level ways.

Yet, when I look at what’s out there, I continually see knowledge questions. They violate, btw, many principles of good multiple-choice questions (hence Patti’s book ;). These questions often have silly or obvious alternatives to the right answer. They include wrong-length responses, and too many options (3 is usually ideal, including the right answer). We also see a lack of feedback, just ‘right’ or ‘wrong’, not anything meaningful. We also see too many questions, or incomplete coverage, and arbitrary criteria (why 80%?). Then, too, the absolutes (never/always, etc.), which aren’t the way to go. Perhaps worst, they don’t always focus on anything meaningful, but query random information that was in no way signaled as important.
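
To make those principles concrete, here’s a minimal sketch of a mini-scenario question as a data structure, with checks for the points above (three options, exactly one correct, meaningful feedback per choice). The field names and validation rules are my own illustration, not from Patti’s book:

```python
from dataclasses import dataclass

@dataclass
class Option:
    text: str
    feedback: str        # meaningful feedback, not just 'right'/'wrong'
    correct: bool = False

@dataclass
class MiniScenario:
    stem: str            # a decision in context, not a recall prompt
    options: list        # distractors should reflect common misconceptions

    def validate(self):
        assert len(self.options) == 3, "3 options is usually ideal"
        assert sum(o.correct for o in self.options) == 1, "exactly one right answer"
        assert all(o.feedback for o in self.options), "every choice needs feedback"
        return True

q = MiniScenario(
    stem="A stakeholder asks for a course on X. What do you do first?",
    options=[
        Option("Build the course they asked for.",
               "Without analysis, you may solve the wrong problem."),
        Option("Ask what tells them it's needed and how success will be measured.",
               "Right: identify the performance gap before designing.", correct=True),
        Option("Survey learners on what content they'd like.",
               "Preference doesn't establish the performance need."),
    ],
)
print(q.validate())  # True
```

Note that the stem poses a decision, and each distractor encodes a plausible mistake with feedback explaining why it falls short; that’s the shift from knowledge-checking to application.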

Now, I suppose I can’t say that knowledge questions should be avoided. There might be reasons to ensure they’re there for diagnostic reasons (e.g., why learners are getting this wrong). I’d suggest, however, that such questions are way overused. Moreover, we can do better. It’s even essentially easy (though not effortless).

What we have learners do is what’s critical for their effective learning. If we care (and we should), that means we need to make sure that what they do leads to the outcomes our organizations need. Which means that we need lots of practice. Deliberate practice, with desirable difficulty, spaced out over time. We need reactivation, for sure. But what we do to reactivate dictates what we’ll be able to do. If we ask people knowledge questions, they’ll be able to answer knowledge questions. But that has been shown to not lead to their ability to apply that knowledge to make decisions: solve problems, design solutions, generate better practices.

So, we can do better. We must do better. That is, if we want to actually assist our organizations. If we’re talking skilling (up-, re-, etc.), we’re talking high-level questions. On the way, perhaps (and recommended), to more rigorous assessment (branching scenarios, sims, mentored practice, coaching, etc.). Regardless, we want what we have learners do to be meaningful. When we’re moving from knowledge to performance, it’s critical. And that’s what I believe we should be doing.

(BTW, technology’s an asset, but not a solution. As I like to say:

If you get the design right, there are lots of ways to implement it; if you don’t get the design right, it doesn’t matter how you implement it. )

Learning Science Conference 2024

15 October 2024 by Clark

I believe, quite strongly, that the most important foundation anyone in L&D can have is understanding how learning really works. If you’re going to intervene to improve people’s ability to perform, you ought to know how learning actually happens! Which is why we’ve created the Learning Science Conference 2024.

We have some of the most respected translators of learning science research to practice. Presenters are Ruth Clark, Paul Kirschner, Will Thalheimer, Patti Shank, Nidhi Sachdeva, as well as Matt Richter and myself. They’ll be providing a curated curriculum of sessions. These are admittedly some of our advisors to the Learning Development Accelerator, but that’s because they’ve reliably demonstrated the ability to do the research, and then to communicate the results of theirs and others’ work in terms of the implications for practice. They know what’s right and real, and make that clear.

The conference is a hybrid model; we present the necessary concepts asynchronously, starting later this month. Then, from 11–15 November, we’ll have live online sessions led by the presenters. These are at two different times to accommodate as much of the globe as we can! In these live sessions we’ll discuss the implications and workshop issues raised by attendees. We will record the sessions in case you can’t make it. I’ll note, however, that participating is a chance to get your particular questions answered! Of course, we’ll have discussion forums too.

We’ve worked hard to make this the most valuable grounding you can get, as we’ve deliberately chosen the topics that we think everyone needs to comprehend. I suggest there’s something there for everyone, regardless of level. We’re covering the research and implications around the foundations of learning, practices for design and evaluation, issues of emotion and motivation, barriers and myths, even informal and social learning. It’s the content you need to do right by your stakeholders.

Our intent is that you’ll leave equipped to be the evidence-based L&D practitioner our industry needs. I hope you’ll take advantage of this opportunity, and hope to see you at the Learning Science Conference 2024.

Marathons and Sprints

3 September 2024 by Clark

Besides Kahneman’s Thinking, Fast and Slow, I’ve also talked about fast and slow innovation. Fast is where you have a specific problem to solve, or product to design, or thing to research, and you do so. Slow is the innovation that happens because you create opportunities for new ideas to flourish: making it safe, keeping the ‘adjacent possible’ open, facilitating creative friction, etc. Similarly, in my writing, I use both marathons and sprints. What do I mean?

So, I tend to have reasonably long time-frames for writing. I now blog once a week, and I tend to queue these up a week or two in advance. My books, of course, when I’m working on them, have deadlines months ahead. Presentations, too, are a form of communication. Overall, I tend to have months between proposals and when I have to deliver them. Occasionally, I’m asked for something on a short time frame, but even that’s several days.

And, in my life, I tend to have time (typically, in the morning) to respond to short term requirements, and also time to nick away at the longer term requirements. I’ve become relatively good at leaving projects open to contribute to them as I can. So, largely, this is the ‘marathon’ life. That is, I take care of details, and then take time to polish off the bigger projects. Which, I acknowledge, is a luxury. The tradeoff is that I haven’t had a secure income for most of the past 2.5 decades ;).

What also happens is that, at some point in my nicking away at a project, it comes together. The picture that’s been gestating finally emerges. Then, I tend to suddenly find myself grinding it out. It could be a chapter, a book, a presentation, or just an article, but ultimately it takes shape. That said, for my most recent tome, an iterative process emerged. I kept sending out the latest version to someone else, and rearranging it based upon their feedback. That is, until I realized that the latest rearrangement felt truly right, and I was done!

This varies, of course. Sometimes I’m asked for something short term, and then I tend to fall back on things I’ve already thought through. This blog, as I’ve mentioned in many ways, forces me to think through things (looking to keep it fed and not repeat myself too much). I don’t mind this, as it still forces me to rearticulate, which often forces me to rethink, which is a good thing! In my reprocessing, I’m not only cementing my understanding, but frequently deepening it!

Overall, however, this cycle of marathons and sprints works. The longer term processing provides the basis for the short-term sprints. As it is, I’m usually as productive as anyone else (possibly more), yet it seems like there’s a lot of time of me just musing. Percolation (fermentation, incubation, pick your metaphor) is a good thing! As a reflection, this strikes me as right. It also strikes me as a prescription: break things up, ensure you have enough time for the big things, and take time to reflect. It works for me! And, I realize, it’s contrary to much of organizational life, which to me says more about organizational life than how you (should) think.

(BTW, in real life, I was always better the longer I had to run; I was usually the slowest person in my phys ed classes in sprints! At least on land…) 

The easy answer

16 July 2024 by Clark

In working on something, I’m looking at the likely steps people take. Of course, I’m listing them from easiest to most useful (with the hope that folks understand they should take the latter). However, it’s making me think that, too often, people are looking at the easy answer, not the most accurate one. Because they really don’t know the problem. When does the easy answer make sense? Are we letting ourselves off the hook too much?

So, for instance, in learning we really should do analysis when someone asks for something. “We need a course on X.” “Ok, what tells you that you need this, and how will we know when it’s worked?” In a quick family convo, we established that this sort of un-analytical request is made all the time:

  • “Why isn’t my plant blooming?” (It’s not the season.)
  • “Fix this code.” (The input’s broken, not the code.)
  • …

Yet, people actually don’t do this up-front analysis. Why? It’s harder, it takes more time, it slows things down, it costs more. Besides, we know what the problem is.

Except, we don’t know what the problem is. Too often, the question or request is making some assumptions about the state of the world that may not be true. It may be the right answer, but it may not. Ensuring that you’ve identified the problem correctly is the first part of the design process, and you should diverge on exploration before you converge on a solution. That’s the double diamond, where you first explore the problem, before you explore a solution.

Perhaps counter-intuitively, this is more efficient. Why? Because you’re not expending resources solving the wrong problem. Are you sure you’ve gotten it right? How do you know when to take the easier path? If you know the answer you need, you’re better equipped to choose the level of solution you need. If you don’t know the question, however, and make assumptions about the root cause, you can go off the rails. And, end up spending effort you didn’t need to.

Look, I live in the real world. I have to take shortcuts (heck, I’m lazy ;). And I do. However, I like to do that when I know the answer, and know that the outcome is good enough to meet the need. I’ll go for the easy answer, if I know it’ll solve the problem well enough. But I can’t if I don’t know the question or problem, and just assume. And we know what happens when we ass-u-me.

A Learning Science Conference?

9 July 2024 by Clark

In our field of learning design (aka instructional design), it’s too frequently the case that folks don’t actually know the underlying learning science that guides processes, policies, and practices. Is this a problem? If it is, what is the remedy?

Consider that you wouldn’t want an electrician who didn’t understand the principles of electricity. Such a person might not understand, for instance, the importance of grounding, leaving open the possibility of burning down the house.

So, too, with learning. If you don’t understand learning science, you might not understand why learning styles are a waste of money, the lack of value of information alone, or why you should make alternatives to the right answer reflect typical misconceptions. There’s lots more: models, context, and feedback are also among the topics whose nuances most folks don’t understand.

If you don’t understand learning science, you waste money. You are likely to design ineffective learning, wasting time and effort. Or you might expend unnecessary effort on things that don’t have an impact. Overall, it’s a path to the poorhouse.

Of course, there are other reasons why we don’t have the impact we should: mismatched expectations on costs and time, SME recalcitrance and hubris, and more. Still, you’re better equipped to counter these problems if you can justify your stance from sound research.

The way to address this, of course, also isn’t necessarily easy. You might read a book, though some can mislead you. And, you still don’t get answers if you have questions. Or, you could pay for a degree, but those can be quite expensive and ineffective. Too frequently they spend time on process and not enough on principles.

There’s another option, one we’re providing. What if you could get the core essentials curated for their relevance? Further, this content is provided for you asynchronously, buttressed by the opportunity for meaningful interaction, in a tight time frame (at different times depending on your location)? Then, the presentation is by some of the most important names in the field, individuals who’ve reliably demonstrated an ability to translate academic research into comprehensible principles? And, finally, this is delivered at an appropriate cost? Does that sound like a valuable proposition?

I’d like to invite you to the Learning Science Conference, put on by the Learning Development Accelerator. Faculty already agreed include Ruth Clark (co-author of eLearning & The Science of Instruction), myself (author of Learning Science for Instructional Designers), Matt Richter (co-director of the Thiagi group), and Nidhi Sachdeva (faculty at University of Toronto). The curriculum covers 9 of the most important elements of learning science including learning, myths and barriers, motivation, informal and social learning, media, and evaluation.

This event is designed to leave you with the foundations necessary to be able to design learning experiences that are both engaging and effective, as well as dealing with the expected roadblocks to success. Frankly, we see little else that’s as comprehensive and practical. We hope to see you there!

Getting Engagement Right

9 November 2023 by Clark

I’m on record stating that I think learning experience design (LXD) is the elegant integration of learning science and engagement. In addition, I’ve looked at both. Amongst the things that stand out for me are that there are an increasing suite of resources for learning science. For one, I have my own book on the topic! There are other good ones too. However, on the flip side, for engagement, I didn’t find much. I had intended to write an LXD book, but then ATD asked for the learning science one. Once it was done, however, I quickly realized that I wanted to write the complement. Thus, Make It Meaningful was born. It’s available, but I’m also running a workshop on the topic, starting this coming week! Four weekly 2 hour meetings, at the convenient time of noon ET. It’s all about getting engagement right. So, what’s covered?

For the first week, there’s an overview of the importance of engagement, and how to set the ‘hook’. We’ll briefly review the reasons why to consider the engagement side (and trust me, this is something you want to do). Then we’ll talk about the first step, getting learners to a motivated state to begin the learning. We’ll look at barriers to success as well, and what to do.

In the second week, we’ll talk about ‘landing’ the experience. Once the hook is set, it doesn’t mean you’ve got them through the experience. Instead, there’s much to do to maintain that motivation. In addition, you want to ensure that anxiety doesn’t overwhelm the learning, and you want to build confidence. We’ll talk about principles as well as heuristics.

In the third week, we dig into what this means in practical terms. What is an engaging introduction? What about the models and examples? The critical element is the practice that learners perform. We’ll talk about how aligning the practice with the desired objectives while making a compelling context is necessary, but doable.

In the last week, we’ll talk about making a design process that can reliably deliver on learning experience. We’ll take a generic design process and go through the changes that ensure both an effective learning design and an engaging experience. We’ll work from analysis, through specification, and on to evaluation (we won’t talk much about implementation, because of my quip that getting the design right leaves lots of ways to create the solution, and not doing so renders everything else extraneous).

Sure, you can just buy the book, and that’s ok. I’m all about getting the word out, and getting better learning happening for our learners. However, in the workshop, not only do you get the book, but we’ll work through the ideas systematically, put them into practice, and address the individual questions you may have. Look, getting engagement right is an advanced topic, but it’s increasingly what will differentiate our solutions from the knowledge ones that come from typical approaches, no matter how technologically augmented. This stuff matters! So, I hope to see you there.

DnD n LnD

31 October 2023 by Clark

Last Friday, I joined in on a Dungeons & Dragons (DnD) campaign. This wasn’t just gratuitous fun, however, but was explicitly run to connect to Learning & Development folks (LnD). Organized by the Training, Learning, and Development Community (a competitor to LDA? I have bias. ;), there was both some preliminary guidance, and outcomes. I was privileged to play a role, and while not an official part of the followup (happening this week), I thought I’d share my reflections.

So, first, my DnD history. I played a few times while in college, but… I gave it up when a favorite character of mine was killed by an evil trap (that was really too advanced for our party). I’ve played a lot of RPGs since then, with a lot of similarities to the formal DnD games (tho’ the actual ones are too complex). Recently, with guidance from offspring two, our family is getting back into it (with a prompt from a Shakespeare and DnD skit at the local Renaissance Faire).

Then, I’ve been into games for learning since my first job out of college, programming educational computer games. It also became the catalyst for my ongoing exploration of engagement to accompany my interest in cognition/learning, design, and technology. The intersection of which is where I’ve pretty much stayed (in a variety of roles), since then! (And, led to my first book on how to do same.)

Also, about DnD. It’s a game where you create a character. There are lots of details. For one, your characteristics: strength, dexterity, wisdom, intelligence, and more. Those combine with lots of attributes (such as race & role). Then, there’s lots of elaboration: backstory, equipment, and more. This can alter during the game, where your abilities also rise. This adds complexity to support ongoing engagement. (I heard one team has been going for over 40 years!)

Characters created by the players are then set loose in a campaign (a setting, precipitating story, and potential details). A Dungeon Master (Keegan Long-Wheeler, in our case) runs the game, writing it and managing the details. Outcomes happen probabilistically by rolling dice. Computers can play a role. For one, through apps that handle details like rolling the dice. Then folks create games that reflect pre-written campaigns.
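
That dice mechanic is simple enough to sketch. Here’s a simplified d20 ability check; the real DnD rules add advantage, critical rolls, and much more, so this is just the core probability, with names of my own choosing:

```python
import random

def ability_check(modifier, dc, rng):
    """Simplified d20 check: roll 1d20, add the character's
    modifier, and succeed by meeting or beating the DC."""
    return rng.randint(1, 20) + modifier >= dc

# With a +3 modifier against DC 15, a raw roll of 12+ succeeds:
# that's 9 of 20 faces, a 45% chance. Simulate to confirm:
rng = random.Random(1)
wins = sum(ability_check(3, 15, rng) for _ in range(10_000))
print(round(wins / 10_000, 2))
```

Part of what makes this engaging is exactly that uncertainty: the modifier rewards character development, but the roll keeps outcomes from being a foregone conclusion.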

One important thing, to me, is that the players organize and make decisions together. We were a group who didn’t necessarily know each other, and we were playing under time constraints. This meant we didn’t have the dialog and choices that might typically emerge in such playing. Yet, we managed a successful engagement in the hour+ we were playing. And had fun!

I was an early advocate of games for learning. To be clear, not the tarted up drill and kill we were mostly doing, but inspired by adventure games. John Carroll had written about this back in the day, I found out. However, I’d already seen adventure games having the potential to be a basis for learning. Adventure games naturally require exploring. In them, you’re putting clues together to choose actions to overcome obstacles. Which, really, is good learning practice! That is, making decisions in context in games is good practice for making decisions in performance situations. Okay, with the caveat that you should design the game so that the decisions are naturally embedded.

The complexity of DnD is a bit much, in my mind, for LnD, but…the design! The underlying principles of designing campaigns bear some relation to designing learning experiences. I believe designing engaging learning may be harder than designing learning or games, but we do have good principles. I do believe learning can, and should, be ‘hard fun’. Heck, it’s the topic of my most recent tome! (I believe learning should be the elegant integration of learning science with engagement.)

This has been an opportunity to reflect a bit on the underlying structure of games, and what makes them work. That’s always a happy time for me. So, I’m curious what you see about the links between games and learning!

Two steps for L&D

6 June 2023 by Clark Leave a Comment

In a conversation, we were discussing how L&D fares. Badly, of course, but we were looking at why. One of the problems is that L&D folks don’t have credibility. Another was that they don’t measure. I didn’t raise it in that conversation, but it’s come up before (in another conversation, in fact) that they’re also not being strategic. Overall, there are two steps L&D can take to really make an impact.

Now, I joke that L&D isn’t doing well what it’s supposed to be doing, and isn’t doing enough. My first complaint is that we’re not doing a good job. In the second conversation, upskilling came up as an important trend. My take is that it’s all well and good to want to do it, but if you really want persistent new skill development, you have to do it right! That is, shooting for retention and transfer. Which will be, by the way, the topic of my presentation at DevLearn this year, I’ve just found out. It’s also the topic of the Missing LXD workshop (coming in Asia-Pacific times this July/Aug), linking that learning science grounding to engagement as well.

I’ve argued that the most important thing L&D can do is start measuring, because it will point out what works (and what doesn’t). That’s a barrier that came up in the first conversation: how do we move people forward in their measurement? We were talking about little steps; if they’re doing learner surveys (c.f. Thalheimer), let’s encourage them to move to surveying some time after the event. If they’re doing that, let’s also have them ask supervisors. Etc.

So, this is a necessary step. It’s not enough, of course. You might throw courses at things where they don’t make sense, e.g. where performance support would work better. Measurement should tell you that, in that a course isn’t working, but it won’t necessarily point you directly to performance support. Still, measurement is a step along the way. There’s another step, however.

The second thing I argue we should do is start looking at going beyond courses. Not just performance support, but here I’m talking about informal and social learning, e.g. innovation. There are both principled and practical reasons for this. The principled reason is that innovation is learning; you don’t know the answer when you start. Thus, knowing how learning works provides a good basis for assisting here. The practical reason is it gives a way for L&D to contribute to the most important part of organizational success. Instead of being an appendage that can be cut when times are tough, L&D can be facilitating the survival and thrival strategies that will keep the organization agile.

Of course, we’re running a workshop on this as well. I’m not touting it just because it’s on offer; I’m behind it because it’s something I organized specifically because it’s so important! We’ll cover the gamut, from individual learning skills to team and organizational success. We’ll also cover strategy. Importantly, we have some of the best people in the world to assist! I’ve managed to convince Harold Jarche, Emma Weber, Kat Koppett, and Mark Britz (each of whom alone would be worth the price of entry!), on top of myself and Matt Richter. Because it’s the Learning Development Accelerator, it will be evidence-based. It’ll also be interactive and practically focused.

Look, there are lots of things you can do. There are some things you should do. There are two steps for L&D to take, and you have the opportunity to get on top of each. You can do it any way you want, of course, but please, please start making these moves!
