Learnlets

Clark Quinn’s Learnings about Learning

Gamification or…

10 May 2022 by Clark 1 Comment

On my walk yesterday, I was reflecting on our You Oughta Know session with Christy Tucker (a great session, as usual), who talked about scenarios. It got me pondering, in particular, about different interpretations of ‘gamification’. As I dictated a note to myself as I walked (probably looking like one of those folks who holds phone calls on their perambulations), I found myself discussing the differences between two approaches. So here’re some thoughts on gamification or the alternative.

To start, let’s say we have a learning goal. For instance, how to deal with customers. A typical approach would be, after an initial course, to stream out questions about different aspects of the principles. For this, you might give points for correct answers. Once you answer n correctly, you get X points (10, 100, 1000, whatever); 2n gets you 2X points, or maybe 3X. These points may entitle you to prizes: swag, time off, an office party. Pretty typical gamification stuff.
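The points scheme above can be sketched as a tiny function. This is a minimal illustration only; the threshold n, point value X, and block multiplier are arbitrary assumptions, not a prescription.

```python
# A minimal sketch of the points scheme described above. The threshold n,
# point value x, and multiplier are illustrative assumptions, not a spec.
def points_earned(correct_answers: int, n: int = 10, x: int = 100,
                  multiplier: int = 2) -> int:
    """Award x points for the first block of n correct answers,
    with each subsequent block worth multiplier * x."""
    blocks = correct_answers // n
    if blocks == 0:
        return 0
    # first block earns x; each later block earns multiplier * x
    return x + (blocks - 1) * multiplier * x

print(points_earned(25))  # two full blocks: 100 + 200 = 300
```

The point isn’t the arithmetic, of course; it’s that the whole mechanism is about accumulation, not capability.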

Then, consider an alternative: they do successively more challenging scenarios. That is, initially it’s an easy customer with a straightforward problem. Then, it’s a mix of more difficult customers with simple problems and easy customers with more difficult problems. Finally, you’re dealing with difficult customers  and difficult problems. Along the way, you give badges for successive levels of customer difficulty, and similarly for handling increasing levels of difficulty of problems.

Which of these is easier to implement? Will one or the other lead to better handling of customers? Which will lead to long-term engagement of your employees? Of course, these are extremes. You can have the questions in the ‘prize’ situation get steadily more challenging. They can even be written as ‘mini-scenarios’. You can mix in scenarios with knowledge questions.

What I want to suggest, however, is that not doing the latter, the scenarios, will keep any initiative from having the biggest impact. They’re competency-based, providing explicit levels of capability. They’re also a chance to practice when it doesn’t matter, before it does.

This shouldn’t stand alone. Of course there should be coaching, and increasing responsibility, and more. It’s not about just the formal learning. Extending the learning experience should include both formal and informal mechanisms. The point I want to make, however, is that having learners perform in practice the way they’ll need to perform when it matters is the best preparation. Yes, you need knowledge (the stuff that, increasingly, AI can handle), but then you need meaningful practice.

Of course, if it’s something you do frequently after the learning experience, coaching may be enough. However, if aspects of it are rare but important, scenarios are the important reactivation practice that will keep skills tuned. So, that’s my take on gamification or alternates. How would you fine-tune my response?

Why L&D isn’t better

3 May 2022 by Clark 1 Comment

As I’ve noted before, someone on LinkedIn asked a question, and it’s prompting a reply. In this case, the question was in response to my previous post on superstitions (for new L&D practitioners). He asked “How did we even get here?” I’ve talked before about the sorry state of our industry, but haven’t really shared my thinking on why this is the case. My short response was that it’s complex. Here’s the longer response, trying to answer why L&D isn’t better.

First, I think we’re suffering from some mistaken beliefs. In particular, that presenting information will lead to behavior change. As I’ve noted before, I think this is a legacy of our beliefs that we’re formal logical reasoners. That is, if we were such beings (we’re not), this would likely be true. We’d respond to information by changing how we act. Instead, of course, we don’t change our behavior without practice, reinforcement, etc.

Another contributor, I suggest, is a belief that if we can perform, we can teach. We can, therefore, take the best performer, and turn them into a trainer. Which is mistaken for a couple of reasons. For one, expertise is compiled away, and isn’t accessible. Estimates suggest around 70% of what experts do, they literally can’t tell us. It’s also a mistake to think that just anyone can teach. There’re specific skills that need to go into it.

Of course, we’re not aware of our flaws. We don’t measure, by and large. Even when we do, we too often measure the wrong things.  So, we see the bad practice of just looking at what learners think of the experience. Which has little correlation with the actual impact. We seldom look to see if the learning has actually changed any behavior, let alone whether it’s now at an acceptable level.

I do think we also still see the effects of 9/11. When we didn’t want to travel, we went to elearning. Rapid eLearning tools emerged to make it fast to take the PPTs and PDFs from the previous courses and put them onscreen with an added quiz. This has led to expectations that courses can be churned out quickly. Indeed, except that these ‘courses’ won’t have any impact!

One other factor is that our stakeholders neither know nor care. They know they need to invest in learning, so they do. It’s a cost center, not a driver of business success. No one is (yet) calling us on the carpet to justify our success. That’s changing, however. I just would like for us to be proactive, not reactive. Moreover, there’s a bigger opportunity on tap, not only to help the organization execute on the things that it needs to do, but also to facilitate the new knowledge the org will need.

In short, we don’t seem to know what learning is, and we’re blind to the fact that our approaches aren’t useful. These, of course, are all premises I’ve addressed in my call to Revolutionize L&D. I still think there’s a meaningful role for L&D to play, but we have to lift our game. That’s my explanation of why L&D isn’t better; what’s yours?

Superstitions for New Practitioners

26 April 2022 by Clark 3 Comments

It’s become obvious (even to me) that there are a host of teachers moving to L&D. There are also a number of initiatives to support them. Naturally, I wondered what I could do to assist. With my reputation as a cynic apparently well-secured, I’m choosing to call out some bad behaviors. So here are some superstitions for new practitioners to watch out for!

As background, these aren’t the myths that I discuss in my book on the topic. That would be too obvious. Instead, I’m drawing on the superstitions from the same tome, that is, things people practice without necessarily being aware of them, let alone espousing them. No, these manifest through behaviors and expectations rather than explicit exhortation.

  • Giving people information will lead them to change. While we know this isn’t true, it still seems to be prevalent. I’ve argued before about why I think this exists, but what matters is what it leads to. That is, information dump and knowledge-test courses. What we need instead is not just a rationale, but also practice, and then ongoing support for the change.
  • If it looks like school, it’s learning. We’ve all been to school, and thus we all know what learning looks like, right? Except many school practices are only useful for passing tests, not for actually solving real problems and meeting real goals. (Only two things wrong: the curriculum and the pedagogy, otherwise school’s fine.) It, however, creates barriers when you’re trying to create learning that actually works. Have people look at the things they learned outside of school (sports, hobbies, crafts, etc) for clues.
  • People’s opinion is a useful metric for success. Too often, we just ask ‘did you like it’. Or, perhaps, ‘do you think it was valuable’. While the latter is better than the former, it’s still not good enough. The correlation between people’s evaluation of the learning and the actual impact is essentially zero, at least for novices. You need more rigorous criteria, and then test against them.
  • A request for a course is a sufficient rationale to make one. A frequent occurrence is a business unit asking for a course. There’s a performance problem (or just the perception of one), and therefore a course is the answer. The only problem is that there can be many reasons for a performance problem that have nothing to do with knowledge or skill gaps. You should determine what the performance gap is (to the level you’ll know when it’s fixed), and the cause.  Only when the cause is a skill gap does a course really make sense.
  • A course is always the best answer. See above; there are lots of reasons why performance may not be up to scratch: lack of resources, wrong incentives, bad messaging, the list goes on. As Harless famously said, “Inside every fat course there’s a thin job aid crying to get out.” Many times we can put knowledge in the world, which makes sense because it’s actually hard to get information and skills reliably in the head.
  • You can develop meaningful learning in a couple of weeks. The rise of rapid elearning tools and a lack of understanding of learning has led to the situation where someone will be handed a stack of PPTs and PDFs and a rapid authoring tool and expected to turn out a course. Which goes back to the first two problems. While it might take that long to get just a first version, you’re not done. Because…
  • You don’t need to test and tune. There’s this naive expectation in the industry that if you build it, it is good. Yet the variability of people, the uncertainty of the approach, and more, suggest that courses should be trialed, evaluated, and revised until actually achieving the necessary change. Beware the ‘build and release’ approach to learning design, and err on the side of iterative and agile.

This isn’t a definitive list, but hopefully it’ll help address some of the worst practices in the industry. If you’re wary of these superstitions for new practitioners, you’ll likely have a more successful career. Fingers crossed and good luck!

There’s some overlap here with my messages to CXOs 1 and 2, but with a different target.

Pre-order for Make It Meaningful now available

21 April 2022 by Clark 3 Comments

I’m happy to report that the ebook version of my next tome, Make It Meaningful: Taking Learning Design From Instructional to Transformational, is now available for pre-order! Why should you care?  Here’s a pass at explaining, and you can decide whether a pre-order for Make It Meaningful  makes sense for you.

Why this book?

Here’s the marketing blurb:

Learning Experience Design is, as author Clark Quinn puts it, about “the elegant integration of learning science with engagement”. While there are increasing resources available on the learning science side, the other side is somewhat neglected. Having written one of the books on the learning science side, Clark has undertaken to write the other half. The book is grounded in his early experience writing learning games, then researching cognition and engagement, and ongoing exploration and application of learning, technology, and design to creating solutions and strategies. It covers the underlying principles including surprise, story, and emotion and pulls them together to create a coherent approach. The book also covers not just the principles, but the implications for both learning elements and a design process. With concise prose and concrete examples, this book provides the framework to take your learning experience designs from instructional to transformational!

I hope that suggests why I think it’s important. Further, here are the short versions of what some early readers had to say:

“…the right emotional engagement tactics can be effective, desirable difficulties. The book explains why and how, with good examples.”
Patti Shank, PhD Author of Write Better Multiple-Choice Questions to Assess Learning

“… the notion of engagement, and its true meaning, is like the mysterious fifth element waiting to be discovered and summoned through three words in this book: Make. It. Meaningful.”
Zsolt Olah, Senior Learning Technologist, Amazon

“…systematically reveals the secret sauce for creating impactful learning experiences…brings to light the missing emotional design dimension that separates instructional design from LXD. Highly recommended..!”
Les Howles, Co-Author, Designing the Online Learning Experience

“As a fan of Clark Quinn’s books, I’m happy to announce this is another winner. Make It Meaningful closes a gaping hole in instructional design models by showing how to address the emotions in learning design.”
Connie Malamed, Publisher of theelearningcoach.com

Going a wee bit further…

What’s included

There are two sections, the first on principles, the second on practice. Initially I cover a bit of basics about learning, how to ‘hook’ people, then how to extend the experience, and some tips and tricks. In the subsequent section I consider the implications for the different elements of learning design: introduction, concepts, examples, practice, and closing, and then the amendments to your design process to incorporate the necessary elements. Thus, I’m trying to be thorough.

Who this is for

This is a book for those who already know the basics of science-grounded design, and are looking to take their learning experience design to the next level. It’s about addressing the emotional side. To be sure, it  also  makes mention of the cognitive essentials, but it is first and foremost focused on the emotional side.

What else should you know?

This is the first offering from the Learning Development Accelerator (LDA)  offshoot, LDA Press. (Note: as Editor-In-Chief, I’m biased.) In my own words:

LDA Press, an imprint of the Learning & Development Accelerator, is a boutique publisher focusing on evidence-informed titles that fill needed gaps in the literature while offering authors the relationship they deserve.

Hopefully, my experience with publishers (as author and consultant) is a good start. Then, the rigor of academic training in writing and reading should provide a reasonable expectation of quality. Additionally, I’m also looking to make the prose comprehensible. Finally, we’ve engaged professional copy-editing. We’ll see how that plays out, but so far it seems like we’re on track. Also, we’re actively soliciting additional needed works.

A further point: we’re keeping costs low. Thus, print copies of Meaningful will be $22.99 (discounted for LDA members), and the ebook is only $10.99 (also discounted for LDA members), plus there’s a special discount for pre-orders! The book releases 16 May, both ebook and print, but the latter may take a while since orders will only be available on that date.

I think this book is needed, and immodestly believe it’s one I’m capable of writing. At any rate, now you know you can make a pre-order for Make It Meaningful. Whether that makes sense for you is something only you can determine. We now return you to your regularly scheduled blog…

The Wrong Bucket Lists

19 April 2022 by Clark 2 Comments

Our brains like to categorize things; in fact, we can’t really not do it. This has many benefits: we can better predict outcomes when we can categorize the situation, we can respond in appropriate ways via shared conceptualizations, and so on. It also has some downsides: stereotyping, for one. I reckon there’re tradeoffs, of course. But we also have to worry: when we over-use categorization, we risk making the wrong bucket lists.

Our desire for simplification and categorization is manifest. The continued interest in reading one’s horoscope, for instance. And the continued success of personality typings, despite the evidence of their lack of utility. Other than the Big 5 or HEXACO, the rest are problematic at best. I’m just reading Annie Murphy Paul’s  The Cult of Personality Testing  (the predecessor to her  The  Extended Mind) and hearing abuses like Rorschach tests being used in child custody decisions is really horrific. Similarly, to hear that people are being denied employment based on their color (not race, but their ‘color’ on a particular test, blue or orange) isn’t new and continues (as does the other, sadly).  Most of these tests don’t stand up to scientific scrutiny!

This is the explanation of learning styles, too, another myth that won’t die. Generations similarly. We like to have simplification. Further, there are times it’s useful. For example, recording your blood type can prevent potentially life-threatening complications. Having a basis to adapt learning, such as people’s performance (success or failure), also. Even more so if additional factors are added, such as confidence. Yet, we can overdo it. We might over-categorize, and miss important nuances.

Todd Rose’s  The End of Average made an excellent case for not trying to conform people into one bucket. In it, he points out that when we assign a single grade for complex performance, we miss important nuances. For instance, if you get it wrong,  why did you get it wrong? It matters in terms of the feedback we might give you. If you had one misconception instead of another, you should get different feedback than if you had the other.

How do we reconcile this? There’re benefits to simplifications, and risks. We have to be careful to simplify as much as we can, and no simpler. Which isn’t an easy task to undertake. The best recommendation I can make is to be mindful of the risks when you do simplify. Maybe start more broadly, and then winnow down? Explicitly consider the risks and costs as well as the benefits and savings. For instance, we’re using learner personas in a project. Many times, these personas can differ on important dimensions, and characterize the audience space in ways that a simple ‘the learner’ can’t capture.

Overall, we want to make sure we’re only using simplifications and categorizations in ways that are both helpful  and scrutable. When we do so, we can avoid the wrong bucket lists. That should be our goal, after all.

Sensitivities and Sensibilities

12 April 2022 by Clark 2 Comments

We are currently experiencing a crisis of communication. While this is true of our nation and arguably the world, it’s also true in our little world of L&D. Recently, there have been at least four different ‘spats’ about things. While I don’t want to address the specifics of any of them, what I do want to do is talk about how we engage. So here’s a post on sensitivities and sensibilities.

First, let me be clear: I’ve some social issues. I’m an introvert, and also miss social cues. I also have a bad habit of speaking before I’ve done the knowledge-check: is this true, kind, and necessary? Subtlety and diplomacy aren’t my strong suit. I continue to be a work in progress. Still, I never intentionally hurt anyone, at least not anyone who hasn’t demonstrated a reliable propensity to violate norms that I feel are minimum. I continue to try to refine my responses.

There are two issues, to me: what we should say, and how we should say it. For instance, I think when someone says something wrong, we need to educate. Initially, we need to evaluate the reason. It could be that they don’t know any better. Or it could be that they’re deliberately trying to mislead.

Let’s also realize we’re emotional animals. If I’m attacked, for instance, I’m likely to blame myself, even when the fault isn’t mine. Others are highly unlikely to accept blame, and instead lash out. We are affected by our current context; we are more critical if we’re tired or otherwise upset, and conversely more tolerant if rested and content.

I’m also aware that we have no insight into where someone’s coming from. We can guess, but we really don’t know. I really learned this when I was suffering from a pinched nerve in my back; I have more sympathy now, since I’ve come to recognize I don’t know what anyone else is living with.

So, I’m trying to come up with some principles about how to respond. For instance, when I write posts about things I think are misguided or misleading, I call out the problems, but not the person; e.g., I don’t link to the post. I’m not trying to shame anyone, and instead want to educate the market. I think this is a general principle of feedback: don’t attack the person, attack the behavior.

Also, if you’re concerned about something, ask first. Assume good intentions. How you ask matters as well. The same principle above applies: ask about the behavior. I’m impressed with those who worry about the asker. If the ask seems a bit harsh, they wonder whether the asker might be struggling. That’s a very thoughtful response.

There’s a caveat on all this: if folks continue to promote something that’s demonstrably wrong, after notification, they should get called out. Here in the US, the first amendment says we can say whatever, but it doesn’t mean we’re free from the consequences of what we say. (You can’t yell ‘fire’ in a crowded theatre if there isn’t one!) Similarly, if you continue to promote, say, a debunked personality test, you can be called out. ;)

So this is my first draft on sensitivities and sensibilities. Assume good intent. Ask first. Educate the individual and the market. Don’t attack the person, but the behavior. I’m sure I’m missing situations, conditions, additional constraints, etc. Let me know.

Confidence and Correctness

5 April 2022 by Clark Leave a Comment

Not surprisingly, I am prompted regularly to ponder new things. (Too often, in the wee hours of the morning…) In this case, I realize I haven’t given a lot of thought to the role of confidence  (PDF). It’s a big thing in the system my co-author on an AI and ID paper, Markus Bernhardt, represents, so I realized it’s time to think  about it some more. Here are some thoughts on confidence and correctness.

The idea is that it matters whether you get it right, or not, and whether you’re confident, or not. That is, they interact (creating the familiar four-quadrant model). You can be wrong and unconfident (lo/no), wrong and confident (hi/no), right and unconfident (lo/yay), and right and confident (hi/yay). Those are arguably importantly different, in particular for what they imply about what sort of intervention makes sense.

I was pondering what this suggests for interventions. I turned it 90 degrees to the left, to put low/no to the left, or beginning spot, and hi/yay to the right, and the other two in-between.  Simplified, my view is that if you’re wrong and not confident, you don’t know it. If you’re wrong and believe you know it, you’re at a potential teachable moment. When you’re right, but not confident, you’re ready for more practice. If you’re right and confident, it may be time to move on.

Which suggests, looking back at my previous exploration of worked examples, that the very first thing to do is to provide worked examples if they’re new. At some point, you give them practice. If they get it right but aren’t confident, you give more practice at roughly the same level. If they’re wrong but confident, you give them feedback (and arguably ramp them backwards). Eventually they’re getting it right  and confident, and at that point you move on (except for some spaced and varied reactivation).
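The progression above can be sketched as a simple decision rule. This is a hedged illustration of the mapping just described; the function name and intervention labels are invented for the sketch, and no particular adaptive system is implied.

```python
# A sketch of the intervention logic described above, mapping the four
# confidence-by-correctness quadrants to a plausible next step.
def next_intervention(correct: bool, confident: bool) -> str:
    if not correct and not confident:   # lo/no: they don't know it yet
        return "worked example"
    if not correct and confident:       # hi/no: teachable moment
        return "feedback, then easier practice"
    if correct and not confident:       # lo/yay: needs consolidation
        return "more practice at the same level"
    return "move on (with spaced reactivation)"  # hi/yay

print(next_intervention(correct=False, confident=True))
# feedback, then easier practice
```

In a real adaptive system this rule would of course be refined over multiple attempts, not applied to a single response.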

Assessing confidence is an extra step, but there seems to be a valid reason to incorporate it in your learning design. The benefits of being able to more accurately target your interventions, at least in an adaptive system, suggest that the effort is worth it. That’s my initial thinking on confidence and correctness. What’s yours?

My Personal Knowledge Management Approach

29 March 2022 by Clark Leave a Comment

Last week, in our Learning Development Accelerator You Oughta Know session, we had Harold Jarche as a guest. Harold’s known for many things, but in particular his approach to continual learning. Amongst the things he shared was a collection of others’ approaches. I checked and I hadn’t made a contribution! So with no further ado, here’s my personal knowledge management approach.

First, Harold’s Personal Knowledge Management (PKM) model has three components: seek, sense, and share. Seeking is about information coming in, that is, what you’re looking for and the feeds you track. It can be in any conceivable channel, and one of the important things is that it’s your seeking. Then, you make sense of what comes in, finding ways to comprehend and make use of it. The final step is to share back out the sense you’ve made. It’s a notion of contributing back. Importantly, it’s not necessary that anybody consumes what you share; the fact that you’ve prepared it for others is part of the benefit you receive.

Seek

Most seeking is two-fold, and mine’s no exception. First of all there’s the ‘as needed’ searches for specific information. Here I typically use DuckDuckGo as my search engine, and often end up at Wikipedia. With much experience, I trust it.  If there are multiple hits and not a definitive one, I’ll scan the sources as well as the title, and likely open several. Then I review them until I’m happy.

The second part is the feeds. I have a number of blogs I’m subscribed to. There are also the people I follow on Twitter. On LinkedIn, a while ago I actively removed all my follows on my connections, and only retained ones for folks I trust. As I add new people, I similarly make a selection of those I know to trust, and ones who look interesting from a role, domain, location, or other diversity factor.  An important element is to be active in selecting feeds, and even review your selections from time to time.

Sense

Sometimes, I’m looking for a specific answer, and it gets put into my work. Other times, it’s about processing something I’ve come across. It may lead me to diagramming, or writing up something, frequently both (as here). Diagramming is about trying to come to grips with conceptual relationships by mapping them to spatial ones. Writing is about creating a narrative around it.

Another thing I do is apply knowledge, that is put it into action. This can be in a design, or in writing something up. This is different than just writing, for me. That is, I’m not just explaining it, I’m using it in a solution.

Share

To share, I do things like blog, give presentations and workshops, and write books. I also write articles, and sometimes just RT. Harold mentioned, during the session, that sharing should be more than just passing it on, but also adding value. However, I do sometimes just like or share things, thinking spreading it to a different audience is value. If you’re not too prolific in your output, I reckon the selected shares add value. Of course, in general if I pass things on I do try to make a note, such as when sharing someone else’s blog post that I thought particularly valuable.

So that’s my process. It’s evolving, of course. We talked about how our approaches have changed; we’ve both dropped the quantity of posts, for instance. We’re also continually updating our tools, too. I’ve previously noted how comments that used to appear on my blog now appear on LinkedIn.

To be fair, it’s also worth noting that this approach scales. So workgroups and communities can take a similar approach to continually processing information. Harold’s done it in orgs, and it factors nicely into social learning as well. One attendee immediately thought about how it could be used in training sessions!

So that’s a rough cut at my PKM process. I invite you to reflect on yours, and share it with Harold as well!

I discuss PKM in both my Revolutionize L&D book, and my Learning Science book.

Emphasis and Effort

22 March 2022 by Clark 1 Comment

For reasons that aren’t quite clear (even to me), I was thinking about where, on a continuum, do L&D elements fit? Where does performance support go? Formal learning? Informal learning? I began to think that it depends on what focus you’re thinking of. So here’re some nascent thoughts on emphasis and effort.

To start with, I generally think of formal learning as the starting point. For instance, in thinking about performance & development (as an alternative to learning & development), I put training first. Similarly, in my strategy work, I likewise suggest the first step is to make learning science more central to training. Here, the order is:

  • Formal Learning
  • Performance Support
  • Informal Learning

I’m looking as much at where we typically start. This may well be because training is always the first line of response (throw training at it!). Also perhaps because it’s familiar (it looks like school).

However, in another cut at it, I started with performance support. Here, I was thinking more about the utility to achieve goals rather than the way L&D allocates resources. That is, from a performer’s perspective, if the answer can be in the world, it should. I can use a tool to achieve my goal rather than have to take a course. Still, taking a pre-digested course is easier than having to work together to collaborate and solve it. Of course, if someone else has the answer, just asking and getting it is easier than working to create an unknown answer. (So, do I need to separate out communication from collaboration? Hmm…) Thus, the list here might be:

  • Performance Support
  • Formal Learning
  • Informal Learning

However, if I look at it from the effort required from L&D, a new order emerges. Here, formal learning is hardest. That is, if you’re doing it right. To successfully get a persistent change in the way someone does something is harder than even facilitating informal learning, and performance support is easiest. Not saying that any are trivial, mind you; designing good job aids isn’t easy, it’s just not as hard as designing a whole course. Then the list comes out like this:

  • Performance Support
  • Informal Learning
  • Formal Learning

I guess there isn’t one answer. To do this successfully, however, requires an understanding of how to do all of the above, and then applying them as priorities demand. If you have expert performers, you’ll do something different than if you have high turnover. If you’re doing something complex, your design strategies may differ from something important. However, you do need to know the tradeoffs in emphasis and effort to make the right calls. Am I missing something important here?

Working with SMEs

15 March 2022 by Clark 1 Comment

In a recent post, I talked about how expertise is compiled away, and the impact on designing learning and documentation. Someone, of course, asked how you then work with SMEs to get the necessary information. Connie Malamed, one of our recognized research translators, has recently written about getting tacit knowledge, but I also want to address the more usual process. I thought I’d written about it somewhere, but I can’t find it. So here are some thoughts on working with SMEs.

First, I’ve heard from several folks experienced in this that any one SME may not have both necessary elements. One element is having a good model to guide the performance. The second element is the ability to articulate that model. Their solution is to work with SMEs in groups. Guy Wallace (Eppic), Roger Schank (Socratic Arts), and Norman Bier (CMU) have all mentioned to me that they’ve found utility in getting SMEs together as a group and having that knowledge negotiation unpack the necessary learnings. They’re all folks worth listening to. You have to manage the process right, of course, but if you can do it, it’s useful.

I suggest that you also want several different types of SMEs. You want not only top performers and theoretical experts, but also just-past novices (a point also attributable to Guy) and supervisors. Theorists can give you models, while top performers can talk about the practical implications. Novices can let you know what they found hard to understand, and supervisors provide insight into what performers typically do wrong. All are helpful information for different parts of the learning.

Another trick I use is to focus on decisions. I argue that making better decisions will matter more to organizations than the ability to recite knowledge. SMEs do have access to all the knowledge they learned, and it’s easy for them to focus on that. That’s where you get ‘info dump’ courses and bullet point-laden slides. By using decisions as a focus, you cut through the excess knowledge. “What decisions will they make differently/better as a result of this knowledge?” is a helpful question.

You can use questionnaires as well. Asking specifically about the elements (models, misconceptions, consequences) can be a good preliminary step before you actually talk to them. Or have a template for content for them to fill out. Any guidance and structure helps keep them focused.

Another preparatory step is to create a draft proposal of the information. You’ll likely be getting a dump of PDFs and PPTs. Process that material, and make your first, best guess. It’s easier to critique than generate, so if you’re willing to be wrong (and why not), you can have them shoot holes in what you did. You’ll have focused on decisions (right?), and as they fix it, you’ll have biased them toward action.

Of course, you want to ensure you test for confirmation. You should circulate what you have learned, and get validation. You’ll need clear objectives that operationalize your learnings. You then should prototype and test what you’ve developed to see if it actually changes behavior in useful ways. Ensure that your focus actually leads to the necessary change.

There are other elements you want from SMEs, such as their personal interest. However, it’s critical that you get them to focus on behavior change. It’s not easy, but it’s part of the job. Working with SMEs correctly is key to designing learning experiences that address real needs. These are my thoughts, what are yours?
