Learnlets

Clark Quinn’s Learnings about Learning

Engage, yea or nay?

17 February 2015 by Clark 10 Comments

In a recent chat, a colleague I respect said the word ‘engagement’ was anathema.  This surprised me, as I’ve been quite outspoken about the need for engagement (for one small example, writing a book about it!).  It may be that the conflict is definitional, for it appeared that my colleague and another respondent viewed engagement as bloating the content, and that’s not what I mean at all. So I thought I’d lay out what I mean when I say engaging, and why I think it’s crucial.

Let’s be clear what I don’t mean.  If you think engagement means adding in extra stuff, we’re using very different definitions.  It’s not about tarting up uninteresting stuff with ‘fun’ (e.g. racing-themed window dressing on a knowledge test).  It’s not about putting in unnecessary, unrelated imagery, sounds, or anything else.  Heck, the research of Richard Mayer at UCSB shows this actually hinders learning!

So what do I mean?  For one thing, stripping away any ‘nice to have’ or otherwise unnecessary info.  Lean is engaging!  You have to focus on what really will help the learners, and then deliver it in ways that they get.

You need contextualized practice.  Engaging is making the context meaningful to the learners.  You need contextualization (e.g. John Bransford’s research on anchored instruction), but arbitrary contexts aren’t as good as intrinsically interesting ones.  This isn’t window dressing, since you need to be doing it anyway; just do it well. And in a minimal style (as de Saint-Exupéry said: “Perfection is finally attained not when there is no longer anything to add but when there is no longer anything to take away”).

You want compelling examples. We know that examples lead to better learning (à la, for instance, John Sweller’s work on cognitive load), but again, making them meaningful to the learners is critical. This isn’t window dressing either, as we need examples regardless, but they’re better if they’re well told as intrinsically interesting stories.

Finally, we need to introduce the learning.  Too often we do this in ways where the learner doesn’t get the WIIFM (What’s In It For Me).  Learners learn better when they’re emotionally open to the content instead of uninterested. This may add a wee bit more, but we can offset it by getting rid of the usual introductory stuff.  And it’s worth it.

Now, let’s be clear: this is for when we’ve deemed formal learning necessary. When the audience is practitioners who know what they need and why it’s important, then giving them ‘just the facts’, i.e. performance support, is sufficient.  But if it’s new skills they need, when you need a learning experience, then you want to make it engaging. Not extrinsically, but intrinsically.  And that’s not more in quantity, it’s not bloated; it’s more in quality: minimal in content and maximal in immersion.

Engaging learning is a good thing, a better thing than not, the right thing.  I’m hoping it’s just definitional, because I can’t see the contrary argument unless there’s confusion over what I mean.  Anyone?

The Grail of Effective and Engaging Learning Experiences

10 February 2015 by Clark 2 Comments

There’s a considerable gap between what we could be doing, and what we are doing.  When you look at what’s out there, we see that there are several ways in which we fall short of the mark.  While there are many dimensions that could be considered, for the sake of simplicity let’s characterize the two important ones as the effectiveness of our learning and the engagement of the experience.  And I want to characterize where we are and where we could be, and the gaps we need to bridge.

If we map the space, we see that the lower left is the space of low engagement and low effectiveness.  Too much elearning resides there.  Now, to be fair, it’s easy to add engaging media and production values, so the space of typical elearning does span from low to high engagement. Moving up the diagram, however, towards increasing effectiveness, is an area that’s less populated.  The red line separates the undesirable areas from the space we’d like to start hitting, where we begin to have some modicum of both effectiveness and engagement, moving towards the upper right.  This space is relatively sparsely populated, I’m afraid.  And while there are instances of content that do increase the effectiveness, there’s little that really hits the ultimate goal, the holy grail, where a fully integrated effective and engaging experience is achieved.

How do we move in the right direction? I’ve talked before about trying to hit the sweet spot of maximal effectiveness within pragmatic constraints.  Certainly from an effectiveness standpoint, you should be looking at the components of the Serious eLearning Manifesto.  To get effective learning, you need a number of elements, for instance:

  • meaningful practice: practice aligned with the real world task
  • contextualized practice: learning across contexts that support transfer
  • sustained practice: sufficient and increasingly challenging practice to develop the skills to the necessary level
  • spaced practice: practice spread out over  time (brains need sleep to learn more than a certain threshold)
  • real world consequences providing feedback  coupled with scaffolded  reflection
  • model-based guidance: the best guide for practice is a conceptual basis (not rote information)
  • appropriate examples: that show the concepts being applied in context

Some of these elements also contribute to engagement, along with others.  Components include:

  • learning-centered  contexts: problems learners recognize as important
  • learner-centered contexts: problems  learners want to solve
  • emotionally engaging introductions: hooking learners in viscerally as well as cognitively
  • adapted challenge: ramping up the challenge appropriately to avoid both boredom and frustration
  • unpredictability: maintaining the learner’s attention through  surprise
  • meaningfulness: learners playing roles they want to be in
  • drama and/or humor

The integration of these elements was the underlying premise behind Engaging Learning, my book on integrating effectiveness and engagement, specifically on making meaningful practice, e.g. serious games.  Serious games are one way to achieve this end, by contextualizing practice as decisions in a meaningful environment and using a game engine to adapt the challenge and provide essentially unlimited practice.

Other approaches achieve much of this effectiveness in different ways. Branching scenarios are powerful approximations to this by showing consequences in context but with limited replay, and so are constructivist and problem-based learning pedagogies. This may sound daunting, but with practice, and some shortcuts, this is doable.

For example, Socratic Arts has a powerful online pedagogy that leverages media and a constructivist pedagogy in a relatively simple framework. The learner is given ‘assignments’ that mirror real world tasks, via emails or videos of characters playing roles such as a boss.  The outputs required similarly mimic the work products you might find in this area. Scaffolding is available in a couple of ways. For one, there are guidelines. And videos of experts and documents are available as resources, to support the learner in getting the best outcome.  While it’s low on fancy visual design, it’s effective because it’s closely aligned to the needed skills post-learning.  And the cognitive challenge is pitched at the right level to engage the intellect, if not the aesthetics.  This is a cost-effective balance.

The work I did with the Wadhwani Foundation hit a slightly different spot in trying to get to the grail.  I didn’t have the ability to work quite as tightly with the SMEs from the get-go, and we didn’t have the ability to simulate the hands-on tasks as well as we’d like,  but we did our best to infer real tasks and used low-tech simulations and scenarios to make it effective.  We did use more media, animations and contextualized videos, to make the experience more engaging and effective as well.

The point being that we can start making learning more effective and engaging in practical ways. We need to make it effective, or why bother?  We should make it engaging, to optimize the outcomes and not insult our learners. And we can.  So why don’t we?

Agile Bay Area #LNDMeetup Mindmap

5 February 2015 by Clark 1 Comment

I’ve been interested in process, so I attended this month’s Bay Area Learning Design Meetup that showcased LinkedIn’s work on Agile using Scrum for learning design. It was very nice of them to share the specifics of their process, and while there were more details than time permitted to cover, it was a great beginning to understand the differences.

Basically, a backlog is kept of potential new projects.  They’re prioritized and a subset is chosen as the basis of the sprint and put on the board.  Then for two weeks they work on hitting the elements on the board, with a daily standup meeting to present where they’re at and synchronize.  At the end they demo to the stakeholders and reflect.  As part of the reflection, they’re supposed to change something for the next iteration.
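The selection step described above — take prioritized items from the backlog until the sprint is full — can be sketched roughly like this (a hedged illustration: the item names, point values, and capacity parameter are invented for the example, not LinkedIn’s actual data):

```python
# Illustrative sketch only: choosing a sprint's worth of work from a
# prioritized backlog, capped at a rough capacity in story points.
# All names and numbers here are invented for the example.

def plan_sprint(backlog, capacity=100):
    """Take items in priority order, skipping any that would exceed capacity."""
    sprint, used = [], 0
    for name, points in backlog:
        if used + points <= capacity:
            sprint.append(name)
            used += points
    return sprint, used

backlog = [  # already in priority order
    ("compliance elearning module", 40),
    ("sales job aid", 15),
    ("onboarding video script", 30),
    ("quiz revision", 25),      # skipped: would push the total past 100
    ("style guide update", 10),
]

chosen, total = plan_sprint(backlog)
# 95 points used; 'quiz revision' stays in the backlog for a later sprint
```

The point isn’t the algorithm, which is trivially greedy here; it’s that a sprint can mix deliverable types (elearning, job aids, whatever) under one empirical capacity number.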

There are different roles: a product owner who’s the ‘client’ in a sense (and a liaison to whoever may be the end client); a Scrum master who’s responsible for facilitating the group through the steps; and then the team, which should be small but at least represent all the roles necessary to execute whatever is being accomplished.

When I asked about scope, they said that they’ve found they can do about 100 story points (which are empirical) in a sprint, and they may distribute that across some elearning, some job aids, whatever.  They didn’t seem too eager to try to quantify that relative to other known metrics, and I understand it’s hard, particularly in the time they had.  Here’s the Mindmap:


Allen Interactions also discussed their SAM model (which I know and like), but my mind map didn’t match their usual diagram (only briefly shown at the end) too well, and I ran out of time trying to remedy that. It’s better just to look at the diagram ;).

 

It’s the process, silly!

14 January 2015 by Clark Leave a Comment

So yesterday, I went off on some of the subtleties in elearning that are being missed.  This is tied to last week’s posts about how we’re not treating elearning seriously enough.  And part of it is in the knowledge and skills of the designers, but it’s also in the process. Or, to put it another way, we should be using steps and tools that align with the type of learning we need. And that includes ADDIE, though the problem isn’t inherent to it.

So what do I mean?  For one, I’m a fan of Michael Allen’s Successive Approximation Model (SAM), which iterates several times (tho’ heuristically, and it could be better tied to a criterion).  Given that people are far less predictable than, say, concrete, fields like interface design have long known that testing and refinement need to be included.  ADDIE isn’t inherently linear, certainly as it has evolved, but in many ways it makes it easy to treat design as a one-pass process.

Another issue, to me, is structuring the format of your intermediate representations so that it’s hard to do aught but come up with useful information.  So, for instance, in recent work I’ve emphasized that a preliminary output is a competency doc that includes (among other things) the objectives (and measures), models, and common misconceptions.  This has evolved from a similar document I use in (learning) game design.

You then need to capture your initial learning flow. This is what Dick &  Carey call your instructional strategy, but to me it’s the overall experience of the learner, including addressing the anxieties learners may feel, raising their interest and motivation, and systematically building their confidence.  The anxieties or emotional barriers to learning may well be worth capturing at the same time as the competencies, it occurs to me (learning out loud ;).

It also helps if your tools don’t interfere with your goals.  It should be easy to create animations that help illustrate models (for the concept) and tell stories (for examples).  These can be any media tools, of course. The most important tools are the ones you use to create meaningful practice. These should allow you to create mini-, linear-, and branching-scenarios (at least).  They should have alternative feedback for every wrong answer. And they should support contextualizing the practice activity. Note that this does  not mean tarted up drill and kill with gratuitous ‘themes’ (race cars, game shows).  It means having learners make meaningful decisions and act on them in ways like they’d act in the real world (click on buttons for tech, choose dialog alternatives for interpersonal interactions, drag tools to a workbench or adjust controls for lab stuff, etc).
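As a hedged sketch of what such a practice tool might capture (the scenario content and field names are hypothetical, not any vendor’s actual format), a branching-scenario node with distinct feedback for every wrong answer could look like:

```python
# Hypothetical sketch: a branching-scenario decision node where each
# option carries its own consequence and leads somewhere different.
# Content and field names are invented for illustration.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Option:
    text: str
    correct: bool = False
    feedback: str = ""               # consequence shown for *this* choice
    next_node: Optional[str] = None  # branching: where this choice leads

@dataclass
class DecisionNode:
    node_id: str
    context: str                     # the situation the learner is placed in
    options: list = field(default_factory=list)

node = DecisionNode(
    node_id="angry_customer",
    context="A customer calls, upset that their order shipped late.",
    options=[
        Option("Apologize and offer a remedy", correct=True,
               feedback="The customer calms down and stays on the line.",
               next_node="resolution"),
        Option("Explain the shipping policy",
               feedback="They grow angrier: the policy isn't their problem.",
               next_node="escalation"),
        Option("Transfer the call",
               feedback="They hang up; you've lost the chance to recover.",
               next_node="lost_customer"),
    ],
)

# Every wrong option has its own consequence, not a generic "Incorrect".
assert all(opt.feedback for opt in node.options)
```

The structure is the point: per-option feedback and per-option destinations are what distinguish a meaningful decision from a tarted-up knowledge quiz.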

Putting in place processes that only use formal learning when it makes sense,  and then doing it right when it does make sense, is key to putting L&D on a path to relevancy.   Cranking out courses on demand, focusing on measures like cost/butt/seat, adding rote knowledge quizzes to SME knowledge dumps, etc are instead continuing down the garden path to oblivion. Are you ready to get scientific and strategic about your learning  design?

The subtleties

13 January 2015 by Clark Leave a Comment

I recently opined that good learning design was complex, really perhaps close to rocket science.  And I suggested that a consequent problem was that the nuances are subtle.  It occurs to me that perhaps discussing some example problems will help make this point more clear.

Without being exhaustive, there are several consistent problems I see in the elearning content I review:

  • The wrong focus. Seriously, the outcomes for the class aren’t meaningful!  They are about information or knowledge, not skill.  Which leads to no meaningful change in behavior, and more importantly, in outcomes. I don’t want to learn about X, I want to learn how to  do  X!
  • Lack of motivating introductions.  People are expected to give a hoot  about this information, but no one helps them understand why it’s important?  Learners should be assisted to viscerally ‘get’ why this is important,  and helped to see how it connects to the rest of the world.  Instead we get some boring drone about how this is really important.  Connect it to the world and let me see the context!
  • Information focused or arbitrary content presentations. To get the type of flexible problem-solving organizations need, people need mental models about why  and how  to do it this way, not just the rote steps.  Yet too often I see arbitrary lists of information accompanied  by a rote knowledge test.  As if that’s gonna stick.
  • A lack of examples, or trivial ones.  Examples need to show a context, the barriers, and how the content model provides guidance about how to succeed (and when it won’t).  Instead we get fluffy stories that don’t connect to the model or show its application in context.  Which means it’s not going to support transfer (and if you don’t know what I’m talking about, you’re not ready to be doing design)!
  • Meaningless and insufficient practice.  Instead of asking learners to make decisions like they will be making in the workplace (and this is my hint for the  first  thing to focus on fixing), we ask rote knowledge questions. Which isn’t going to make a bit of difference.
  • Nonsensical alternatives to the right answer.  I regularly ask of audiences “how many of you have ever taken a quiz where the alternatives to the right answer are so silly or dumb that you didn’t need to know anything to pass?”  And  everyone raises their hand.  What possible benefit does that have?  It insults the learner’s intelligence, it wastes their time, and it has no impact on learning.
  • Undistinguished feedback. Even if you do have an alternative that’s aligned with a misconception, it seems like there’s an industry-wide conspiracy to ensure that there’s only one response for all the wrong answers. If you’ve differentiated the wrong answers based upon how learners go wrong, you should be addressing them individually.

The list goes on.  Further, any one of these can severely impact the learning outcomes, and I typically see  all of these!

These are really  just the flip side of the elements of good design I’ve touted in previous posts (such as this series).  I mean, when I look at most elearning content, it’s like the authors have no idea how we really learn, how our brains work.  Would you design a tire for a car without knowing how one works?  Would you design a cover for a computer without knowing what it looks like?  Yet it appears that’s what we’re doing in most elearning. And it’s time to put a stop to it.  As a first step, have a look at the Serious eLearning Manifesto, specifically the 22 design principles.

Let me be clear, this is just the surface.  Again, learning engineering is complex stuff.  We’ve hardly touched on engagement, spacing, and more.    This may seem like a lot, but this is really the boiled-down version!  If it’s too much, you’re in the wrong job.

Shiny objects and real impact

9 January 2015 by Clark 2 Comments

Yesterday I went off about how learning design should be done right and it’s not easy.  In a conversation two days ago, I was talking to a group that was  supporting several initiatives in adaptive learning, and I wondered if this was a good idea.

Adaptive learning is desirable.  If learners come from different initial abilities, learn at different rates, and have different availability, the learning should adapt.  It should skip things you already know, work at your pace, and provide extra practice if the learning experience needs to be extended.  (And, BTW, I’m not talking learning styles.)  And this is worthwhile, if the content you are starting with is good.  And even then, is it really necessary? To explain, here’s an analogy:

I have heard it said that the innovations behind the latest drugs are, in many cases, unnecessary, and so are the extra costs (and profits for the drug companies).  The claim is that the new drugs aren’t any more effective than the existing treatments if the existing ones were used properly.  The point being that people don’t take the drugs as prescribed (being irregular, missing doses, not continuing past the point they feel better, etc.), and if they did, the new drugs wouldn’t be any better.  (As a side note, it would appear that focusing on improving patients’ drug-taking protocols, such as with a mobile app, would be a sound strategy.)  This isn’t true in all cases, but even in some it makes a point.

The analogy here is that using all the fancy capabilities: tarted up templates for simple questions, 3D virtual worlds, even adaptive learning, might not be needed if we did better learning design!  Now, that’s not to say we couldn’t add value with using the right technology at the right points, but as I’ve quipped in the past: if you get the design right, there are  lots of ways to implement it.  And, as a corollary, if you don’t get the design right, it doesn’t matter how you implement it.

We do need to work on improving our learning design, first, rather than worrying about the latest shiny objects. Don’t get me wrong, I  love  the shiny objects, but that’s with the assumption that we’re getting the basics right.  That was my assumption ’til I hit the real world and found out what’s happening. So let’s please get the basics right, and then worry about leveraging the technology on  top of a strong foundation.

Maybe it is rocket science!

8 January 2015 by Clark 11 Comments

As I’ve been working with the Foundation over the past 6 months I’ve had the occasion to review a wide variety of elearning, more specifically in the vocational and education space, but my experience mirrors that from the corporate space: most of it isn’t very good.  I realize that’s a harsh pronouncement, but I fear that it’s all too true; most of the elearning I see will have very little impact.  And I’m becoming ever more convinced that what I’ve quipped  in the past is true:

Quality design is hard to distinguish from well-produced but under-designed content.

And here’s the thing: I’m beginning to think that this is not just a problem with the vendors, tools, etc., but that it’s more fundamental.  Let me elaborate.

There’s a continual problem of bad elearning, and yet I hear people lauding certain examples, awards are granted, tools are touted, and processes promoted.  Yet what I see really isn’t that good. Sure, there are exceptions, but that’s the problem, they’re exceptions!  And while I (and others, including the instigators of the Serious eLearning Manifesto) try to raise the bar, it seems to be an uphill fight.

Good learning design is rigorous. There’s significant effort just in getting the right objectives: finding the right SME, working with them and not taking what they say verbatim, etc.  Then there’s working to establish the right model and communicating it, making meaningful practice, and using media correctly.  And, at the same time, successfully fending off the forces of fable (learning styles, generations, etc.).

So, when it comes to the standard  tradeoff    –  fast, cheap, or good, pick two – we’re ignoring ‘good’.  And  I think a fundamental problem is  that everyone ‘knows’  what learning is, and they’re not being astute consumers.  If it looks good, presents content, has some interaction, and some assessment, it’s learning, right?  NOT!  But stakeholders don’t know, we don’t worry enough about quality in our metrics (quantity per time is not a quality metric), and we don’t invest enough in learning.

I’m reminded of a thesis that says medicos reengineered their status in society consciously.  They went from being thought of ‘quacks’ and ‘sawbones’ to an almost reverential status today by a process of making the process of becoming a doctor quite rigorous.  I’m tempted to suggest that we need to do the same thing.

Good learning design is complex.  People don’t have predictable properties as does concrete.  Understanding the necessary distinctions to do the right things is complex.  Executing the processes to successfully design, refine, and deliver a learning experience that leads to an outcome is a complicated engineering endeavor.  Maybe we do have to treat it like rocket science.

Creating learning should be considered a highly valuable outcome: you are helping people achieve their goals.  But if you really aren’t, you’re perpetrating malpractice!  I’m getting stroppy, I realize, but it’s only because I care and I’m concerned.  We have  got to raise our game, and I’m seriously concerned with the perception of our work, our own knowledge, and our associated processes.

If you agree, (and if you don’t, please do let me know in the comments),  here’s my very serious question because I’m running out of ideas: how do we get awareness of the nuances of good learning design out there?

 

Quinn-Thalheimer: Tools, ADDIE, and Limitations on Design

23 December 2014 by Clark 2 Comments

A few months back, the esteemed Dr. Will Thalheimer encouraged me to join him in a blog dialog, and we posted the first one on whom L&D has responsibility to.  And while we took the content seriously, I can’t say our approach was similarly so.  We decided to continue, and here’s the second in the series, this time trying to look at what might be hindering the opportunity for design to get better.  And again, a serious convo leavened with a somewhat demented touch:

Clark:

Will, we‘ve suffered Fear and Loathing on the Exhibition Floor at the state of the elearning industry before, but I think it‘s worth looking at some causes and maybe even some remedies.  What is the root cause of our suffering?  I‘ll suggest it‘s not massive consumption of heinous chemicals, but instead think that we might want to look to our tools and methods.

For instance, rapid elearning tools make it easy to take PPTs and PDFs, add a quiz, and toss the resulting knowledge test and dump over to the LMS to lead to no impact on the organization.  Oh, the horror!  On the other hand, processes like ADDIE make it easy to take a waterfall approach to elearning, mistakenly trusting that ‘if you include the elements, it is good‘ without understanding the nuances of what makes the elements work.  Where do you see the devil in the details?

Will:

Clark my friend, you ask tough questions! This one gives me Panic, creeping up my spine like the first rising vibes of an acid frenzy. First, just to be precise—because that‘s what us research pedants do—if this fear and loathing stayed in Vegas, it might be okay, but as we‘ve commiserated before, it‘s also in Orlando, San Francisco, Chicago, Boston, San Antonio, Alexandria, and Saratoga Springs. What are the causes of our debauchery? I once made a list—all the leverage points that prompt us to do what we do in the workplace learning-and-performance field.

First, before I harp on the points of darkness, let me twist my head 360 and defend ADDIE. To me, ADDIE is just a project-management tool. It‘s an empty baseball dugout. We can add high-schoolers, Poughkeepsie State freshmen, or the 2014 Red Sox and we‘d create terrible results. Alternatively, we could add World-Series champions to the dugout and create something beautiful and effective. Yes, we often use ADDIE stupidly, as a linear checklist, without truly doing good E-valuation, without really insisting on effectiveness, but this recklessness, I don‘t think, is hardwired into the ADDIE framework—except maybe the linear, non-iterative connotation that only a minor-leaguer would value. I‘m open to being wrong—iterate me!

Clark:

Your defense of ADDIE is admirable, but is the fact that it‘s misused perhaps reason enough to dismiss it? If your tool makes it easy to lead you astray, like the alluring temptation of a forgetful haze, is it perhaps better to toss it in a bowl and torch it rather than fight it? Wouldn‘t the Successive Approximation Model be a better formulation to guide design?

Certainly the user experience field, which parallels ours in many ways and leads in some, has moved to iterative approaches specifically to help align efforts to demonstrably successful approaches. Similarly, I get ‘the fear‘ and worry about our tools. Like the demon rum, the temptations to do what is easy with certain tools may serve as a barrier to a more effective application of the inherent capability. While you can do good things with bad tools (and vice versa), perhaps it‘s the garden path we too easily tread and end up on the rocks. Not that I have a clear idea (and no, it‘s not the ether) of how tools would be configured to more closely support meaningful processing and application, but it‘s arguably a collection worth assembling. Like the bats that have suddenly appeared…

Will:

I‘m in complete agreement that we need to avoid models that send the wrong messages. One thing most people don‘t understand about human behavior is that we humans are almost all reactive—only proactive in bits and spurts. For this discussion, this has meaning because many of our models, many of our tools, and many of our traditions generate cues that trigger the wrong thinking and the wrong actions in us workplace learning-and-performance professionals. Let‘s get ADDIE out of the way so we can talk about these other treacherous triggers. I will stipulate that ADDIE does tend to send the message that instructional design should take a linear, non-iterative approach. But what‘s more salient about ADDIE than linearity and non-iteration is that we ought to engage in Analysis, Design, Development, Implementation, and Evaluation. Those aren‘t bad messages to send. It‘s worth an empirical test to determine whether ADDIE, if well taught, would automatically trigger linear non-iteration. It just might. Yet, even if it did, would the cost of this poor messaging overshadow the benefit of the beneficial ADDIE triggers? It‘s a good debate. And I commend those folks—like our comrade Michael Allen—for pointing out the potential for danger with ADDIE. Clark, I‘ll let you expound on rapid authoring tools, but I‘m sure we‘re in agreement there. They seem to push us to think wrongly about instructional design.

Clark:

I spent a lot of time looking at design methods across different areas – software engineering, architecture, industrial design, graphic design, the list goes on – as a way to look for the best in design (just as I‘ve looked across engagement disciplines, learning approaches, and more; I can be kinda, er, obsessive).   I found that some folks have 3 step models, some 4, some 5. There‘s nothing magic about ADDIE as ‘the‘ five steps (though having *a* structure is of course a good idea).  I also looked at interface design, which has arguably the most alignment with what elearning design is about, and they‘ve avoided some serious side effects by focusing on models that put the important elements up front, so they talk about participatory design, and situated design, and iterative design as the focus, not the content of the steps. They have steps, but the focus is on an evaluative design process. I‘d argue that‘s your empirical design (that or the fumes are getting to me).  So I think the way you present the model does influence the implementation. If advertising has moved from fear motivation to aspirational motivation (cf. Sachs‘ Winning the Story Wars), so too might we want to focus on the inspirations.

Will:

Yes, let‘s get back to tools. Here‘s a pet peeve of mine. None of our authoring tools—as far as I can tell—prompt instructional designers to utilize the spacing effect or subscription learning. Indeed, most of them encourage—through subconscious triggering—a learning-as-an-event mindset.

For our readers who haven‘t heard of the spacing effect, it is one of the most robust findings in the learning research. It shows that repetitions that are spaced more widely in time support learners in remembering. Subscription learning is the idea that we can provide learners with learning events of very short duration (less than 5 or 10 minutes), and thread those events over time, preferably utilizing the spacing effect.

Do you see the same thing with these tools—that they push us to see learning as a longer-than-necessary bong hit, when tiny puffs might work better?
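To make the spacing idea concrete, here’s a minimal sketch of a subscription-style schedule with expanding gaps (the interval values are invented for illustration; the research doesn’t prescribe one fixed sequence):

```python
# Illustrative only: threading short learning events over time with
# expanding intervals, rather than one long "event". The gap values
# are made up for the example.
from datetime import date, timedelta

def spaced_schedule(start, gaps_days=(1, 3, 7, 14, 30)):
    """Return the dates on which each brief learning event is delivered."""
    when, schedule = start, [start]
    for gap in gaps_days:
        when = when + timedelta(days=gap)
        schedule.append(when)
    return schedule

events = spaced_schedule(date(2015, 1, 5))
# Six brief events, each spaced further apart than the last.
```

An authoring tool built around this mindset would treat the schedule, not the single course, as the unit of design.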

Clark:

Now we’re into some good stuff!  Yes, absolutely; our tools have largely focused on the event model, and made it easy to do simple assessments.  Not simple good assessments, just simple ones. It’s as if they think designers don’t know what they need.  And, as the success of our colleague Cammy Bean’s book The Accidental Instructional Designer shows, they may be right.  Yet I’d rather have a power tool that’s incrementally explorable, but scaffolds good learning, than one that ceilings out just when we’re getting to somewhere interesting. Where are the templates for spaced learning, as you aptly point out?  Where are the tools to make two-step assessments (first tell us which is right, then why it’s right, as Tom Reeves has pointed us to)?  Where are more branching-scenario tools?  Such capabilities tend to hover at the top end of some tools, unused. I guess what I’m saying is that the tools aren’t helping us lift our game, and while we shouldn’t blame the tools, tools that pointed the right way would help.  And we need it (and a drink!).

Will:

Should we blame the toolmakers then? Or how about blaming ourselves as thought leaders? Perhaps we‘ve failed to persuade! Now we‘re on to fear and self-loathing…Help me Clark! Or, here‘s another idea. How about you and I raise $5 million in venture capital and we‘ll build our own tool? Seriously, it‘s a sad sign about the state of the workplace learning market that no one has filled the need. Says to me that (1) either the vast cadre of professionals don‘t really understand the value, (2) the capitalists who might fund such a venture don‘t think the vast cadre really understand the value, or (3) the vast cadre are so unsuccessful in persuading their own stakeholders that the truth about effectiveness doesn‘t really matter. When we get our tool built, how about we call it Vastcadre? Help me Clark! Can‘t you help me Clark? Please get this discussion back on track…What else have you seen that keeps us ineffective?

Clark:

Gotta hand it to Michael Allen, putting his money where his mouth is and building ZebraZapps.  Whether that’s the answer is a topic for another day.  Or night.  Or…  So what else keeps us ineffective?  I’ll suggest that we’re focusing on the wrong things.  In addition to our design processes and our tools, we’re not measuring the right things.  If we’re focused on how much it costs per bum in seat per hour, we’re missing the point.  We should be measuring the impact of our learning.  It’s about whether we’re decreasing sales times, increasing sales success, solving problems faster, raising customer satisfaction.  If we look at what we’re trying to impact, then we’ll check to see if our approaches are working, and we’ll get to more effective methods.  We’ve got to cut through the haze and smoke (open up the window, sucker, and let some air into this room), and start focusing with heightened awareness on moving some needles.

So there you have it.  Should we continue our wayward ways?

Why L&D?

17 December 2014 by Clark 3 Comments

One of the concerns I hear is whether L&D still has a role.  The litany is that they’re so far out of touch with their organization, and with science, that it’s probably better to let them die an unnatural death than to try to save them.  The prevailing attitude of this extreme view is that the Enterprise Social Network is the natural successor to the LMS, and it’s going to come from operations or IT rather than L&D.  And, given that I’m on record suggesting that we revolutionize L&D rather than ignore it, it makes sense to justify why.  And while I’ve had other arguments, a really good one comes from my thesis advisor, Don Norman.

Don’s on a new mission, something he calls DesignX: scaling up design processes to deal with “complex socio-technological systems”.  He recently wrote an article about why DesignX that also makes a good case for L&D.  Before I get there, however, I want to point out two other facets of his argument.

The first is that design often has to go beyond science.  That is, you use science when you can, but when you can’t, you use inferences from theory, intuition, and more to fill in the gaps, hoping you’ll find out later (from subsequent science, or your own data) that it was the right choice.  I’ve often had to do this in my designs, where, for instance, I think research hasn’t gone quite far enough in understanding engagement.  I’m not in a research position now, so I can’t do the research myself, but I continue to look at what can be useful.  And this is true of moving L&D forward: while we have some good directions and examples, we’re still ahead of the documented research.  He points out that systems science and service thinking are science-based, but suggests design needs to come in beyond those approaches.  To the extent L&D can, it should draw from science, but also from theory, and keep moving forward regardless.

His other important point, to me, is that he is talking about systems.  He points out that design as a craft works well on simple problems, but he wants to scale design to the level of systemic solutions.  A noble goal, and an approach L&D needs to consider as well.  We have to go beyond point solutions – training, job aids, etc. – to performance ecosystems, and this won’t come without a different mindset.

Perhaps the most interesting point, the one that triggered this post, however, was on why designers are needed.  His point is that others focus on efficiency and effectiveness, but he argued that designers have empathy for the users as well.  And I think this is really important.  As I used to say to the budding software engineers I was teaching interface design to: “don’t trust your intuition, you don’t think like normal people”.  Similarly, the reason I want L&D in the equation is that they (should) be the ones who really understand how we think, work, and learn, and consequently they should be the ones facilitating performance and development.  It takes empathy with users to facilitate them through change, to help them deal with the fears and anxieties that come with new systems, and to understand what a good learning culture is and help foster it.

Who else would you want guiding an organization in achieving effectiveness in a humane way?  So Don’s provided, to me, a good point on why we might still want L&D (well, P&D really ;) in the organization.  Well, as long as they’re also addressing the bigger picture and not just pushing info dumps and knowledge tests.  Does this make sense to you?

#itashare #revolutionizelnd

Challenges in engaging learning

16 December 2014 by Clark 2 Comments

I’ve been working on moving a team to deeper learning design.  The goal is to practice what I preach, and make sure that the learning design is competency-aligned, activity-based, and model-driven.  Yet, doing it in a pragmatic way.

And this hasn’t been without its challenges.  I presented my vision to the team, we worked out a process, and we started coaching the team during development.  In retrospect, this wasn’t proactive enough.  There were a few other hiccups.

We’re currently engaged in a much tighter cycle of development and revision, and now feel we’re getting close to the level of effectiveness and engagement we need.  Whether a) it’s really better, and b) we can replicate and scale it, is still an open question.

At core are a few elements.  For one, a rabid focus on what learners are doing is key.  What do they need to be able to do, and in what contexts do they need to do it?

The competency-alignment focus is on the key tasks that they have to do in the workplace, and making sure we’re preparing them across pre-class, in-class, and post-class activities to develop that ability.  A key focus is having them make the decision in the learning experience that they’ll have to make afterward.

I’m also pushing very hard on making sure that there are models behind the decisions.  I’m trying hard to avoid arbitrary categorizations, and find the principles that drove those categorizations.

Note that all this is not easy.  Getting the models is hard when the resources provided don’t include that information.  Avoiding presenting just knowledge and definitions is hard work.  The tools we use make certain interactions easy, and others not so easy.  We have to map meaningful decisions onto what the tools support.  We end up making tradeoffs, as we all do.  It’s good, but not as good as it could be.  We’ll get better, but we also want to run in a practical fashion.

There are more elements to weave in: layering on some general business skills is embryonic.  Our use of examples needs to get more systematic, as does our alignment of learning goals to practice activities.  And we’re struggling to strike a slightly less didactic and earnest tone; I haven’t worked hard enough on pushing in a bit of humor, tho’ we are ramping up some exaggeration.  There’s only so much you can focus on at one time.

We’ll be running some student tests next week before presenting to the founder.  I’m feeling mildly confident that we’ve got a decent take on quality learning design with suitable production value, but there’s the barrier that the nuances of learning design are subtle.  Fingers crossed.

I still believe that, with practice, this becomes habit and easier.  We’ll see.
