Learnlets

Clark Quinn’s Learnings about Learning

Search Results for: engagement

The subtleties

13 January 2015 by Clark

I recently opined that good learning design is complex, perhaps really close to rocket science.  And I suggested that a consequent problem is that the nuances are subtle.  It occurs to me that discussing some example problems will help make this point clearer.

Without being exhaustive, there are several consistent problems I see in the elearning content I review:

  • The wrong focus. Seriously, the outcomes for the class aren’t meaningful!  They are about information or knowledge, not skill.  Which leads to no meaningful change in behavior, and more importantly, in outcomes. I don’t want to learn about X, I want to learn how to  do  X!
  • Lack of motivating introductions.  People are expected to give a hoot about this information, but no one helps them understand why it’s important.  Learners should be helped to viscerally ‘get’ why this is important, and to see how it connects to the rest of the world.  Instead we get some boring drone about how this is really important.  Connect it to the world and let me see the context!
  • Information focused or arbitrary content presentations. To get the type of flexible problem-solving organizations need, people need mental models about why  and how  to do it this way, not just the rote steps.  Yet too often I see arbitrary lists of information accompanied  by a rote knowledge test.  As if that’s gonna stick.
  • A lack of examples, or trivial ones.  Examples need to show a context, the barriers, and how the content model provides guidance about how to succeed (and when it won’t).  Instead we get fluffy stories that don’t connect to the model and show the application to the context.  Which means it’s not going to support transfer (and if you don’t know what I’m talking about, you’re not ready to be doing design)!
  • Meaningless and insufficient practice.  Instead of asking learners to make decisions like they will be making in the workplace (and this is my hint for the  first  thing to focus on fixing), we ask rote knowledge questions. Which isn’t going to make a bit of difference.
  • Nonsensical alternatives to the right answer.  I regularly ask of audiences “how many of you have ever taken a quiz where the alternatives to the right answer are so silly or dumb that you didn’t need to know anything to pass?”  And  everyone raises their hand.  What possible benefit does that have?  It insults the learner’s intelligence, it wastes their time, and it has no impact on learning.
  • Undistinguished feedback. Even if you do have an alternative that’s aligned with a misconception, it seems like there’s an industry-wide conspiracy to ensure that there’s only one response for all the wrong answers. If you’ve discriminated meaningful differences to the right answer based upon how they go wrong, you should be addressing them individually.
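As an illustration of that last point: here’s a minimal sketch, in Python with wholly invented content, of how a quiz item might carry distinct feedback for each misconception-aligned distractor, rather than one generic response shared across all the wrong answers:

```python
# Hypothetical quiz item: each wrong answer maps to a specific
# misconception, and each gets its own feedback (not one generic
# "Incorrect, try again" shared across all distractors).
question = {
    "stem": "A customer objects that the product is too expensive. What first?",
    "choices": {
        "a": "Immediately offer a discount",
        "b": "Ask what they are comparing the price against",
        "c": "Restate the feature list",
    },
    "answer": "b",
    "feedback": {
        "a": "Discounting first concedes the frame; you haven't learned "
             "what 'expensive' means to them.",
        "b": "Right: price objections are comparisons; surface the "
             "comparison before responding.",
        "c": "Features don't address value relative to the alternative "
             "they have in mind.",
    },
}

def respond(q, choice):
    """Return (is_correct, feedback specific to the chosen option)."""
    return choice == q["answer"], q["feedback"][choice]
```

The point isn’t the data structure, it’s the discipline: if a distractor encodes a specific way of going wrong, the feedback addressing that specific error has to live somewhere.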

The list goes on.  Further, any one of these can severely impact the learning outcomes, and I typically see  all of these!

These are really  just the flip side of the elements of good design I’ve touted in previous posts (such as this series).  I mean, when I look at most elearning content, it’s like the authors have no idea how we really learn, how our brains work.  Would you design a tire for a car without knowing how one works?  Would you design a cover for a computer without knowing what it looks like?  Yet it appears that’s what we’re doing in most elearning. And it’s time to put a stop to it.  As a first step, have a look at the Serious eLearning Manifesto, specifically the 22 design principles.

Let me be clear, this is just the surface.  Again, learning engineering is complex stuff.  We’ve hardly touched on engagement, spacing, and more.    This may seem like a lot, but this is really the boiled-down version!  If it’s too much, you’re in the wrong job.

Quinn-Thalheimer: Tools, ADDIE, and Limitations on Design

23 December 2014 by Clark

A few months back, the esteemed Dr. Will Thalheimer encouraged me to join him in a blog dialog, and we posted the first one on whom L&D has responsibility to.  And while we took the content seriously, I can’t say our approach was similarly serious.  We decided to continue, and here’s the second in the series, this time trying to look at what might be hindering the opportunity for design to get better.  And again, a serious convo leavened with a somewhat demented touch:

Clark:

Will, we’ve suffered Fear and Loathing on the Exhibition Floor at the state of the elearning industry before, but I think it’s worth looking at some causes and maybe even some remedies.  What is the root cause of our suffering?  I’ll suggest it’s not massive consumption of heinous chemicals, but instead think that we might want to look to our tools and methods.

For instance, rapid elearning tools make it easy to take PPTs and PDFs, add a quiz, and toss the resulting knowledge test and dump it over to the LMS, leading to no impact on the organization.  Oh, the horror!  On the other hand, processes like ADDIE make it easy to take a waterfall approach to elearning, mistakenly trusting that ‘if you include the elements, it is good’ without understanding the nuances of what makes the elements work.  Where do you see the devil in the details?

Will:

Clark my friend, you ask tough questions! This one gives me Panic, creeping up my spine like the first rising vibes of an acid frenzy. First, just to be precise—because that’s what we research pedants do—if this fear and loathing stayed in Vegas, it might be okay, but as we’ve commiserated before, it’s also in Orlando, San Francisco, Chicago, Boston, San Antonio, Alexandria, and Saratoga Springs. What are the causes of our debauchery? I once made a list—all the leverage points that prompt us to do what we do in the workplace learning-and-performance field.

First, before I harp on the points of darkness, let me twist my head 360 and defend ADDIE. To me, ADDIE is just a project-management tool. It’s an empty baseball dugout. We can add high-schoolers, Poughkeepsie State freshmen, or the 2014 Red Sox and we’d create terrible results. Alternatively, we could add World Series champions to the dugout and create something beautiful and effective. Yes, we often use ADDIE stupidly, as a linear checklist, without truly doing good E-valuation, without really insisting on effectiveness, but this recklessness, I don’t think, is hardwired into the ADDIE framework—except maybe the linear, non-iterative connotation that only a minor-leaguer would value. I’m open to being wrong—iterate me!

Clark:

Your defense of ADDIE is admirable, but is the fact that it’s misused perhaps reason enough to dismiss it? If your tool makes it easy to lead you astray, like the alluring temptation of a forgetful haze, is it perhaps better to toss it in a bowl and torch it rather than fight it? Wouldn’t the Successive Approximation Model be a better formulation to guide design?

Certainly the user experience field, which parallels ours in many ways and leads in some, has moved to iterative approaches specifically to help align efforts to demonstrably successful approaches. Similarly, I get ‘the fear’ and worry about our tools. Like the demon rum, the temptations to do what is easy with certain tools may serve as a barrier to a more effective application of the inherent capability. While you can do good things with bad tools (and vice versa), perhaps it’s the garden path we too easily tread and end up on the rocks. Not that I have a clear idea (and no, it’s not the ether) of how tools would be configured to more closely support meaningful processing and application, but it’s arguably a collection worth assembling. Like the bats that have suddenly appeared…

Will:

I’m in complete agreement that we need to avoid models that send the wrong messages. One thing most people don’t understand about human behavior is that we humans are almost all reactive—only proactive in bits and spurts. For this discussion, this has meaning because many of our models, many of our tools, and many of our traditions generate cues that trigger the wrong thinking and the wrong actions in us workplace learning-and-performance professionals. Let’s get ADDIE out of the way so we can talk about these other treacherous triggers. I will stipulate that ADDIE does tend to send the message that instructional design should take a linear, non-iterative approach. But what’s more salient about ADDIE than linearity and non-iteration is that we ought to engage in Analysis, Design, Development, Implementation, and Evaluation. Those aren’t bad messages to send. It’s worth an empirical test to determine whether ADDIE, if well taught, would automatically trigger linear non-iteration. It just might. Yet, even if it did, would the cost of this poor messaging overshadow the benefit of the beneficial ADDIE triggers? It’s a good debate. And I commend those folks—like our comrade Michael Allen—for pointing out the potential for danger with ADDIE. Clark, I’ll let you expound on rapid authoring tools, but I’m sure we’re in agreement there. They seem to push us to think wrongly about instructional design.

Clark:

I spent a lot of time looking at design methods across different areas – software engineering, architecture, industrial design, graphic design, the list goes on – as a way to look for the best in design (just as I’ve looked across engagement disciplines, learning approaches, and more; I can be kinda, er, obsessive).  I found that some folks have 3-step models, some 4, some 5. There’s nothing magic about ADDIE as ‘the’ five steps (though having *a* structure is of course a good idea).  I also looked at interface design, which has arguably the most alignment with what elearning design is about, and they’ve avoided some serious side effects by focusing on models that put the important elements up front, so they talk about participatory design, and situated design, and iterative design as the focus, not the content of the steps. They have steps, but the focus is on an evaluative design process. I’d argue that’s your empirical design (that or the fumes are getting to me).  So I think the way you present the model does influence the implementation. If advertising has moved from fear motivation to aspirational motivation (cf. Sachs’s Winning the Story Wars), so too might we want to focus on the inspirations.

Will:

Yes, let’s get back to tools. Here’s a pet peeve of mine. None of our authoring tools—as far as I can tell—prompt instructional designers to utilize the spacing effect or subscription learning. Indeed, most of them encourage—through subconscious triggering—a learning-as-an-event mindset.

For our readers who haven’t heard of the spacing effect, it is one of the most robust findings in the learning research. It shows that repetitions that are spaced more widely in time support learners in remembering. Subscription learning is the idea that we can provide learners with learning events of very short duration (less than 5 or 10 minutes), and thread those events over time, preferably utilizing the spacing effect.
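Purely as an illustration (the numbers here are placeholders, not researched intervals), an expanding spacing schedule for threading short learning events over time might be sketched in Python as:

```python
from datetime import date, timedelta

def expanding_schedule(start, repetitions=5, first_gap_days=1, factor=2):
    """Sketch of spaced repetitions: each gap doubles (1, 2, 4, 8... days).
    The parameters are illustrative; real spacing should be tuned to
    task complexity and how long retention needs to last."""
    when, gap = start, first_gap_days
    out = []
    for _ in range(repetitions):
        out.append(when)
        when = when + timedelta(days=gap)
        gap *= factor
    return out

# Five sub-10-minute "subscription learning" events, spread over ~2 weeks
dates = expanding_schedule(date(2015, 1, 5))
```

Note how even this toy schedule breaks the learning-as-an-event frame: the design unit is a thread of small events over weeks, not one sitting.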

Do you see the same thing with these tools—that they push us to see learning as a longer-than-necessary bong hit, when tiny puffs might work better?

Clark:

Now we’re into some good stuff!  Yes, absolutely; our tools have largely focused on the event model, and made it easy to do simple assessments.  Not simple good assessments, just simple ones. It’s as if they think designers don’t know what they need.  And, as the success of our colleague Cammy Bean’s book The Accidental Instructional Designer shows, they may be right.  Yet I’d rather have a power tool that’s incrementally explorable, but scaffolds good learning, than one that ceilings out just when we’re getting to somewhere interesting. Where are the templates for spaced learning, as you aptly point out?  Where are the tools to make two-step assessments (first tell us which is right, then why it’s right, as Tom Reeves has pointed us to)?  Where are more branching-scenario tools?  They tend to hover at the top end of some tools, unused. I guess what I’m saying is that the tools aren’t helping us lift our game, and while we shouldn’t blame the tools, tools that pointed the right way would help.  And we need it (and a drink!).

Will:

Should we blame the toolmakers then? Or how about blaming ourselves as thought leaders? Perhaps we’ve failed to persuade! Now we’re on to fear and self-loathing… Help me Clark! Or, here’s another idea. How about you and I raise $5 million in venture capital and we’ll build our own tool? Seriously, it’s a sad sign about the state of the workplace learning market that no one has filled the need. Says to me that either (1) the vast cadre of professionals don’t really understand the value, (2) the capitalists who might fund such a venture don’t think the vast cadre really understand the value, or (3) the vast cadre are so unsuccessful in persuading their own stakeholders that the truth about effectiveness doesn’t really matter. When we get our tool built, how about we call it Vastcadre? Help me Clark! Kent you help me Clark? Please get this discussion back on track… What else have you seen that keeps us ineffective?

Clark:

Gotta hand it to Michael Allen, putting his money where his mouth is, and building ZebraZapps.  Whether that’s the answer is a topic for another day.  Or night.  Or…  So what else keeps us ineffective?  I’ll suggest that we’re focusing on the wrong things.  In addition to our design processes, and our tools, we’re not measuring the right things. If we’re focused on how much it costs per bum in seat per hour, we’re missing the point. We should be measuring the impact of our learning.  It’s about whether we’re decreasing sales times, increasing sales success, solving problems faster, raising customer satisfaction.  If we look at what we’re trying to impact, then we’re going to check to see if our approaches are working, and we’ll get to more effective methods.  We’ve got to cut through the haze and smoke (open up what window, sucker, and let some air into this room), and start focusing with heightened awareness on moving some needles.

So there you have it.  Should we continue our wayward ways?

Why L&D?

17 December 2014 by Clark

One of the concerns I hear is whether L&D still has a role.  The litany is  that  they’re so far out of touch with their organization, and science, that it’s probably  better to let them die an unnatural death than to try to save them. The prevailing attitude of this extreme view is that the Enterprise Social Network is the natural successor to the LMS, and it’s going to come from operations or IT rather than L&D.  And, given that I’m on record suggesting that we revolutionize L&D rather than ignoring it, it makes sense to justify why.  And while I’ve had other arguments, a really good argument comes from my thesis advisor, Don Norman.

Don’s on a new mission, something he calls DesignX, which is scaling up design processes to deal with “complex socio-technological systems”.  And he recently wrote an article about why DesignX is needed that makes a good case for L&D as well.  Before I get there, however, I want to point out two other facets of his argument.

The first is that often design has to go beyond science. That is, you use science when you can, and when you can’t, you use theory, inference, intuition, and more to fill in the gaps, hoping you’ll find out later (based upon later science, or your own data) that it was the right choice.  I’ve often had to do this in my designs, where, for instance, I think research hasn’t gone quite far enough in understanding engagement.  I’m not in a research position as of now, so I can’t do the research myself, but I continue to look at what can be useful.  And this is true of moving L&D forward. While we have some good directions and examples, we’re still ahead of documented research.  He points out that system science and service thinking are science-based, but suggests design needs to come in beyond those approaches.  To the extent L&D can, it should draw from science, but also theory, and keep moving forward regardless.

His other important point, to me, is that he is talking about systems.  He points out that design as a craft works well on simple areas, but where he wants to scale design is to the level of systemic solutions.  A noble goal, and here too I think this is an approach L&D needs to consider.  We have to go beyond point solutions – training, job aids, etc. – to performance ecosystems, and this won’t come without a different mindset.

Perhaps the most interesting point, the one that triggered this post, however, was on why designers are needed.  His point is that others have focused on efficiency and effectiveness, but he argued that designers have empathy for the users as well.  And I think this is really important.  As I used to say to the budding software engineers I was teaching interface design: “don’t trust your intuition, you don’t think like normal people”.  And similarly, the reason I want L&D in the equation is that they (should) be the ones who really understand how we think, work, and learn, and consequently they should be the ones facilitating performance and development. It takes empathy with users to facilitate them through change, to help them deal with the fears and anxieties of new systems, and to understand what a good learning culture is and help foster it.

Who else would you want to be guiding an organization in achieving effectiveness in a humane way?  So Don’s provided, to me, a good point on why we might still want L&D (well, P&D really ;) in the organization. Well, as long as they’re also addressing the bigger picture and not just pushing info dump and knowledge test.  Does this make sense to you?

#itashare #revolutionizelnd

Challenges in engaging learning

16 December 2014 by Clark

I’ve been working on moving a team to deeper learning design.  The goal is to practice what I preach, and make sure that the learning design is competency-aligned, activity-based, and model-driven.  Yet, doing it in a pragmatic way.

And this hasn’t been without its challenges.  I presented my vision to the team, we worked out a process, and started coaching the team during development.  In retrospect, this wasn’t proactive enough.  There were a few other hiccups.

We’re currently engaged in a much tighter cycle of development and revision, and now feel we’re getting close to the level of effectiveness and engagement we need.  Whether a) it’s really better, and b) we can replicate and scale it, is an open question.

At core are a few elements. For one, a rabid focus on what learners are  doing is key.  What do they need to be able to do, and what contexts do they need to do it in?

The competency-alignment focus is on the key tasks that they have to do in the workplace, and making sure we’re preparing them across pre-class, in-class, and post-class activities to develop that ability.  A key focus is having them make the decision in the learning experience that they’ll have to make afterward.

I’m also pushing very hard on making sure that there are models behind the decisions.  I’m trying hard to avoid arbitrary categorizations, and find the principles that drove those categorizations.

Note that all this is  not easy.  Getting the models is hard when the resources  provided don’t include that information.  Avoiding presenting just knowledge and definitions is hard work.  The tools we use make certain interactions easy, and other ones not so easy.  We have to map meaningful decisions into what the tools support.  We end up making  tradeoffs, as do we all.  It’s good, but not as good as it could be.  We’ll get better, but we do want to run in a practical fashion as well.

There are more elements to weave in: layering on some general biz skills is embryonic.  Our use of examples needs to get more systematic.  As does our alignment of learning goal to practice activity.    And we’re struggling to have a slightly less didactic and earnest tone;  I haven’t worked hard enough on pushing a bit of humor in, tho’ we are ramping up some exaggeration.  There’s only so much you can focus on at one time.

We’ll be running some student tests next week before presenting to the founder.  Feeling mildly confident that we’ve gotten a decent take on quality learning design with suitable production value, but there is the barrier that the nuances of learning design are  subtle. Fingers crossed.

I still believe that, with practice, this becomes habit and easier.  We’ll see.

Taking note

6 November 2014 by Clark

A colleague pointed me to this article that posited the benefits of digital note-taking.  While I agree, I want to take it further.  There are some non-obvious factors in note taking.

As the article points out, there are numerous benefits possible by taking notes digitally.  They can be saved and reviewed, have text and/or sketches and/or images (even video too), be shared, revised, elaborated with audio both to add to notes and to read back the prose, and more.  Auto-correct is also valuable.  And I absolutely believe all this is valuable.  But there’s more.

One thing the article touched on is the value of structure.  Whether in outlines, where indents capture relationships, or similarly in networks, capturing that structure means valuable processing by the note-taker. Interestingly, graphical frameworks can support cycles or cross-references in the structure better than outlines can (I was once challenged with the claim that mindmaps offer no additional value over outlines, and this is one area where they are superior).
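To put that structural difference in concrete terms: an outline is a tree, so each topic hangs off exactly one parent, while a mindmap-style network can also hold cross-links between branches. A toy Python sketch, with invented note topics:

```python
# An outline is a tree: each node appears under exactly one parent.
outline = {
    "notes": ["structure", "paraphrase"],
    "structure": ["outlines", "networks"],
}

# A network adds cross-references a tree can't express:
# "paraphrase" linking to "networks" joins two separate branches.
network = {
    "notes": ["structure", "paraphrase"],
    "structure": ["outlines", "networks"],
    "paraphrase": ["networks"],   # cross-reference across branches
}

def has_cross_link(graph):
    """True if some node is reachable from more than one parent --
    the kind of structure an outline (a strict tree) cannot hold."""
    parents = {}
    for src, targets in graph.items():
        for t in targets:
            parents.setdefault(t, set()).add(src)
    return any(len(p) > 1 for p in parents.values())
```

The outline fails the cross-link test; the network passes it, which is exactly the extra expressiveness the parenthetical above is pointing at.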

However, as the article noted, research has shown that taking verbatim notes doesn’t help. You have to actively reprocess the information, extracting structure through outlines or networks, and paraphrasing what you hear instead of parroting it. This is the real value of note taking.  You need to be actively engaged.

Note-taking  also helps keep that engagement. The mindmaps that I frequently post started as a way for me to listen better.   My brain can be somewhat lateral (an understatement; a benefit for Quinnovating, but a problem for listening to presentations), and if someone says something interesting, by the time I’ve explored the thought and returned, I’ve lost the plot. Mindmapping was a way to occupy enough extra cognitive overhead to keep my mind from sparking off.  It just so happens that  when I posted one, it drew significant interest (read: hits), and so I’ve continued it for me, the audience, and the events.

Interestingly, the benefit of note taking can persist even if the notes aren’t reviewed; the act of note-taking, with the extra processing in paraphrasing, is valuable in itself.  I once asked an audience how many took notes, and many hands went up. I then asked how many read the notes afterwards, and far fewer hands did.  Yet that’s not a bad thing!

So, take notes that reprocess the information presented.  Then, review them if useful.  But give yourself the benefit of the processing, if nothing else.

#DevLearn Schedule

24 October 2014 by Clark

As usual, I will be at DevLearn (in Las Vegas) this next week, and welcome meeting up with you there.  There  is a lot going on.  Here’re the things I’m involved in:

  • On Tuesday, I’m running an all day workshop on eLearning Strategy. (Hint: it’s really a Revolutionize L&D  workshop  ;).  I’m pleasantly surprised at how many folks will be there!
  • On Wednesday at 1:15 (right after lunch), I’ll be speaking on the design approach  I’m leading  at the Wadhwani Foundation, where we’re trying to integrate learning science with pragmatic execution.  It’s at least partly a Serious eLearning Manifesto session.
  • On Wednesday at 2:45, I’ll be part of a panel on mlearning with my fellow mLearnCon advisory board members Robert Gadd, Sarah Gilbert, and Chad Udell, chaired by conference program director David Kelly.

Of course, there’s much more. A few things I’m looking forward to:

  • The  keynotes:
    •  Neil deGrasse Tyson, a fave for his witty support of science
    • Beau Lotto talking about perception
    • Belinda Parmar talking about women in tech (a burning issue right now)
  • DemoFest, all the great examples people are bringing
  • and, of course, the networking opportunities

DevLearn is probably my favorite conference of the year: learning focused, technologically advanced, well organized, and with the right people.  If you can’t make it this year, you might want to put it on your calendar for another!

Align, deepen, and space

8 July 2014 by Clark

I was asked, in regards to the Serious eLearning Manifesto, how people could begin to realize the potential of eLearning.  I riffed on this once before, but I want to spin it a different way.  The key is making meaningful practice.  And there are three components: align it, deepen it, and space it.

First, align it. What do I mean here?  I mean make sure that your learning objective, what they’re learning, is aligned to a real change in the business. Something you know that, if they improve, will have an impact on a measurable business outcome.  This means two things, underneath. First, it has to be something that, if people do it differently and better, will solve a problem in what the organization is trying to do.  Second, it has to be something that benefits from learning.  If it’s not a cognitive skill shift, it should be about using a tool, or be replaced with using a tool. Only use a course when a course makes sense, and make sure that course is addressing a real need.

Second, deepen it.  Abstract practice and knowledge tests are both less effective than practice that puts the learner in a context like the one they’ll be facing in the workplace, and has them make the same decisions they’ll need to be making after the learning experience.  Contextualize it, and exaggerate the context (in appropriate ways) to raise the level of interest and importance closer to the level of engagement that will be involved in live performance.  Make sure that the challenge is sufficient, too, by having alternatives that are seductive unless you really understand. Reliable misconceptions are great distractors, by the way.  And have sufficient practice that leads from their beginning ability to the final ability they need, and continues until they can’t get it wrong (not just until they get it right; that’s amateur hour).
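That ‘can’t get it wrong’ criterion amounts to a mastery loop rather than a one-shot quiz. A Python sketch, where the required streak of three correct answers in a row is an arbitrary placeholder, not a research-backed threshold:

```python
def practice_to_mastery(items, attempt, streak_needed=3):
    """Keep drawing practice items until `streak_needed` consecutive
    correct responses; `attempt(item)` returns True on a correct answer.
    The streak length is an illustrative stand-in, not a researched value."""
    streak, used = 0, 0
    while streak < streak_needed:
        item = items[used % len(items)]   # cycle through the item pool
        used += 1
        streak = streak + 1 if attempt(item) else 0   # a miss resets the streak
    return used  # total practice attempts needed

# e.g. a learner who misses the 2nd item, then answers correctly
responses = iter([True, False, True, True, True])
count = practice_to_mastery(["q1", "q2", "q3"], lambda _: next(responses))
```

The design point is the reset: getting it right once isn’t the exit condition; sustained correct performance is.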

Here’s where the third, space it, can come in.  Will Thalheimer has written a superb document (PDF) explaining the need for spacing. You can space out the complexity of development, and sufficient practice, but we need to practice, rest (read: sleep), and then practice some more. Any meaningful learning really can’t be done in one go, but has to be spread.  How much? As Will explains, that depends on how complex the task is, and how often the task will be performed and the gaps in between, but it’s a fair bit. Which is why I say learning  should be expensive.

After these three steps, you’ll want to only include the resources that will lead to success, provide models and examples that will support success, etc, but I believe that, regardless,  learners with good practice are likely to get more out of the learning experience than any other action you can take. So start with good practice, please!

Manifesting in practice extremis

26 March 2014 by Clark

Yesterday, I posted about what we might like to see from folks, by role, in terms of the Manifesto.  The other question to be answered is how to do this in the typical current situation where there’s little support for doing things differently.  Let me take a worst-case scenario and try to take a very practical approach. This isn’t an answer for the pulpit, but is for the folks who put all this in the ‘too hard’ basket.

So, worst case: you’re going to still get a shower of PPTs and PDFs and be expected to make a course out of it, maybe (if you’re lucky) with a bit of SME access.  And no one cares if it makes a difference, it’s just “do this”.  And, first, you have my deepest sympathies. We’re hoping the manifesto changes this, but sometimes we have to start with where you live, eh?  Recognize that the following is not PoliticallyCorrect™; I’m going outside the principled response to give you an initial kickstart.

The short version is that you’ve got to put meaningful practice in there.  You need an experience that sets up a story, requires a choice using the knowledge, and lets the learner see the consequences.  That’s the thing that has the most impact, and you’ll want several.  This will have far more impact than a knowledge test.  To do that isn’t too complex.

The very first thing you need to do when you’ve parsed that content is to figure out what, at core, the person who’s going to have this experience should be able to do differently.  What performance aren’t they doing now?  This is problematic, because sometimes the problem isn’t a performance problem, but here I’m assuming you don’t have that leeway. So you’ll have to do some inference.  Yes, it’s a bit more thinking, but you already have to pull out knowledge, so it’s not that different (and gets easier with practice).

Say you’ve gotten product data.  How would they use that?  To sell?  To address objections? To troubleshoot?  Maybe it’s process information you’re working on. What would they do with that? Recognize problems? Take the next step?  Given information on workplace behavior problems? Let them determine whether grey areas exist, or coach people.

You’ll need to make a believable context and precipitating situation, and then ask them to respond. Make it challenging, so that the situation isn’t clear, and the alternatives are plausible ways the learner could go wrong.  The SME can help here.  Make the scenario they’re facing, and the decisions they must make, as representative of the types of problems they’ll be facing as you can.  And try to have the story play out, e.g. have the consequences of their choice be presented before they get the right answer or feedback about why it’s wrong. There are good reasons for this, but the short version is that it helps them learn to read the situation when it’s real.
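The sequencing matters enough to spell out: choice first, then consequence, only then instructional feedback. A Python sketch with a wholly invented scenario:

```python
# Hypothetical scenario step: the learner sees the *consequence* of a
# choice first, and only then the instructional feedback, so they
# practice reading the situation as it will unfold in reality.
step = {
    "situation": "Mid-demo, the client asks a pointed security question "
                 "you can't answer.",
    "choices": {
        "bluff": {
            "consequence": "The client probes further; your answer "
                           "unravels and trust drops.",
            "feedback": "Bluffing risks credibility; the misconception "
                        "is that any answer beats none.",
            "correct": False,
        },
        "defer": {
            "consequence": "The client nods; you follow up next day "
                           "with the security lead's answer.",
            "feedback": "Right: deferring with a concrete follow-up "
                        "preserves trust.",
            "correct": True,
        },
    },
}

def play(step, choice):
    """Return the messages in order (consequence, then feedback) plus
    whether the choice was correct."""
    c = step["choices"][choice]
    return [c["consequence"], c["feedback"]], c["correct"]
```

Note that even the wrong choice gets a consequence that plays out in the story world before any “that was wrong, because…” appears.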

Let’s be clear, this is really just better multiple choice question design!  I say that so you see you’re not going beyond what you already do, you’re just taking a slightly different tack to it.  The point is to work within the parameters of content and questions (for now!), and yet get better outcomes.

Ideally, you’ll find all the plausible application scenarios, and be able to write multiple questions.  If there’s any knowledge they  have to know cold, you might have to also test that knowledge, but consider designing a job aid.  (Even if it’s not tested and revised, which it should be, it’s a start on the path.)

There’s more, but that’s a start (more in my next post). Focus on meaningful practice first.  Dress it up. Exaggerate it. But if you put good practice in their path, that’s probably the most valuable change to start with.  There’re lots of steps from there, basically turning it into a learning experience:  making everything less dense, more minimal, more focused on performance, adding in more meaningfulness.  And redoing concept, example, introduction, etc.  But the first thing, valuable practice, engages many of the eight values that form the core of the Manifesto: performance focused, meaningful to learners, engagement-driven, authentic contexts, realistic decisions, and real world consequences.

I’ve argued elsewhere that doing better elearning doesn’t take longer, and I believe it.  Start here, and start talking about what you’re doing with your colleagues, bosses, what have you.  Sign on to the Manifesto, and let them know  why. And let me know how it goes.

Manifesting in principle

25 March 2014 by Clark 1 Comment

The launch of the Manifesto has surfaced at least a couple of issues that are worth addressing. The first asks who the manifesto is for, and what should they do differently.  That’s a principled response.  The second is just  how to work differently in the existing situations where the emphasis is on speed.  That’s a more pragmatic response.  There are not necessarily easy answers, but I’ll try.  Today I’ll address the first question, and tomorrow the second.

To the first point, what should the impact be on different sectors?  Will Thalheimer (a fellow instigator) laid out some points here.  My thoughts are related:

  • Tool vendors should ensure that their tools can support designers interested in these elements. In particular, in addition to presentation of multimedia content, there needs to be: a)  the ability to provide separate feedback for different choices, b) the ability to have scenario interactions whereby learners can take multistep decision paths mimicking real experiences, and c) the ability to get the necessary evaluation feedback. In reality, the tools aren’t the limitation, though some may make it more challenging than others. The real issue is in the design.
  • We’d like custom content houses (aka elearning solution providers) to try to get their clients to allow them to work against these principles, and then do so. Of course, we’d like them to do so regardless!  I’ve argued in the past that better design doesn’t take longer.  Of course, we realize that clients may not be willing to pay for testing and revision, but that’s the second part…
  • …we’d like purchasers of custom content to ask that their learning experiences meet these standards, and expect and allow in contracts for appropriate processes.  If you’re going to pay for it, get real  value!  Purchasers need to become aware that not meeting these standards increases the likelihood that any intervention will be of little use.
  • Similarly, if you’re buying pre-made content (aka shelfware), you should check to see if it also meets these standards.  It’s certainly possible!
  • Managers and executives, whether purchasing or overseeing in-house teams, ideally will be insisting that these standards be met.  They should start revising processes both external (e.g. RFPs) and internal (templates, checklists and reviews) to start meeting these criteria.
  • And designers and developers should start building this into their solutions (within their constraints) while beginning to promote the longer term picture.
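Point (b) in the tool-vendor bullet above deserves unpacking: a multistep decision path means each choice leads to a new situation rather than straight to feedback, so decisions compound the way they do in real work. A hedged sketch (node names and content entirely hypothetical, not any vendor’s format):

```python
# Hypothetical scenario graph for a multistep decision path: each node is a
# situation, and each choice routes to the next node. Terminal nodes (no
# choices) carry the consequence of the path taken.
nodes = {
    "start": {
        "situation": "A rollout is slipping; the sponsor asks for status.",
        "choices": {
            "hide the slip": "escalation",
            "report it with a recovery plan": "recovery",
        },
    },
    "escalation": {
        "situation": "The slip surfaces later and trust erodes.",
        "choices": {},
    },
    "recovery": {
        "situation": "The sponsor trims scope and the date holds.",
        "choices": {},
    },
}


def walk(path):
    """Follow a sequence of choices through the scenario graph,
    returning each situation encountered along the way."""
    node, seen = "start", []
    for pick in path:
        seen.append(nodes[node]["situation"])
        node = nodes[node]["choices"][pick]
    seen.append(nodes[node]["situation"])
    return seen


print(walk(["report it with a recovery plan"]))
```

Any authoring tool that can store this kind of graph, and attach distinct feedback to each branch, meets the capabilities the bullet asks for; as the bullet notes, the harder part is designing the branches, not supporting them.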

Of course, we realize that there are real world challenges. The first is that the internal elearning unit will have to be working with the business units on taking a richer and more meaningful approach.  Those units may not be ready to consider this!  The ‘order taker’ mentality has become rife in the industry, and it’s hard for an L&D unit to suddenly change the rules of engagement.  It will take some education around the workplace, but ensuring that the efforts really lead to meaningful change makes it critical.

The second caveat is that not all of these elements will be addressable from day 1.  While we’d love that to be the case, we recognize that some things will be easier than others.  Focusing on meaningful objectives and, relatedly, meaningful practice are the first two priorities.  (While I suspect my colleagues might instead champion measurement, I’m hopeful that making practice more meaningful will drive better outcomes. Then there’ll be a natural desire to check the impact.) Once the meaningful focus is accomplished, trimming extraneous content becomes easier.

The goal is to hit the core eight values first, as these are the biggest gaps we see, and integrate many of the principles: performance focused, meaningful to learners, individualized challenges, engagement-driven, authentic contexts, realistic decisions, real-world consequences, and spaced practice.  With those, you’ve got a real start on making a difference.  And that’s what we’re about, eh?  We hope you’ll sign on!

Aligning with us

12 March 2014 by Clark Leave a Comment

The main complaint I think I have about the things L&D does isn’t so much that it’s still mired in the industrial age of plan, prepare, and execute, but that it’s just not aligned with how we think, learn, and perform, certainly not for information age organizations.  There are very interesting rethinks in all these areas, and our practices are not aligned.

So, for example, the evidence is that our thinking is not the formal logical thinking that underpins our assumptions of support.  Recent work paints a very different picture of how we think.  We abstract meaning but don’t handle concrete details well, have trouble doing complex thinking and focusing attention, and our thinking is very much influenced by context and the tools we use.

This suggests that we should be looking much more at contextual performance support and providing models, saving formal learning for cases when we really need a significant shift in our understanding and how that plays out in practice.

Similarly, we learn better when we’re emotionally engaged, when we’re equipped with explanatory and predictive models, and when we practice in rich contexts.    We learn better when our misunderstandings are understood, when our practice adjusts for how we are performing, and feedback is individual and richly tied to conceptual models.  We also learn better  together, and when our learning to learn skills are also well honed.

Consequently, our learning similarly needs support in attention, rich models, emotional engagement, and deeply contextualized practice with specific feedback.  Our learning isn’t a result of a knowledge dump and a test, and yet that’s most of what we see.

And not only do we learn better together, we work better together.  The creative side of our work is enhanced significantly when we are paired with diverse others in a culture of support, and can run experiments.  And it helps if we understand how our work contributes, and we’re empowered to pursue our goals.

This isn’t a hierarchical management model, it’s about leadership, and culture, and infrastructure.  We need bottom-up contributions and support, not top-down imposition of policies and rigid definitions.

Overall, the way organizations need to work requires aligning all the elements to work with us the way our minds operate.  If we want to optimize outcomes, we need to align both performance  and  innovation.  Shall we?
