Learnlets


Clark Quinn’s Learnings about Learning

Monday Broken ID Series: Perfect Practice

1 March 2009 by Clark 1 Comment

Previous Series Post | Next Series Post
This is one in a series of thoughts on some broken areas of ID that I’m posting for Mondays. I intend to provide insight into many ways much of instructional design fails, and some pointers to avoid the problems. The point is not to say ‘bad designer’, but instead to point out how to do good design.

Really, the key to learning is the practice. Learners have to apply knowledge, in the form of skills, to really internalize and ‘own’ the learning. Knowledge recitation, in the absence of application, leads to what cognitive science calls ‘inert knowledge’: knowledge that can be recited back, but isn’t activated in appropriate contexts.

What we see, unfortunately, is too much knowledge testing, and not meaningful application. We see meaningless questions checking whether people can recite memorized facts, with no application of those facts to solve problems. We see alternatives to the right answer that are so obviously wrong that we can pass the test without learning anything! And we see feedback that isn’t specific to the learner’s deficit. In short, we waste our time and the learner’s.

What we want is appropriate challenge, contextualized performance, meaningful tasks, appropriate feedback, and more.

First, we should have picked meaningful objectives that indicate what learners can do, in what context, to what level, and now we design the practice to determine whether they can do it. Of course, we may need some intermediate tasks to develop their skills at an appropriate pace, providing scaffolding to simplify the task until it’s mastered.

We can scaffold in a variety of ways. We can provide tasks with simplified data first, that don’t get complicated by other factors. We can provide problems with parts already worked, so learners can accomplish the component skills separately and then combine them. We can provide support tools such as checklists or flowcharts to assist, and gradually remove them until the learner is capable.

We do need to balance the level of challenge, so that the task gets difficult at the right rate for the learner: too easy, and the learner is bored; too hard, and the learner is frustrated. Don’t make it too easy! If it matters, ensure they know it (and if it doesn’t, why are you bothering?).

The trick lies not only in the inherent nature of the task, but often in the alternatives to the right answer. Learners don’t (generally) make random mistakes; they make patterned mistakes that represent inappropriate models they perceive as appropriate. We should choose alternatives to the right answer that represent these misconceptions.

Consequently, we need to provide specific feedback for that particular misconception. That’s why any quiz tool that only has one response for all the wrong answers should be tossed out; it’s worthless.

We need to ensure that the setting for the task is of interest to the learner. The contexts we choose should set up problems that the learner viscerally understands are important, and that they are interested in.

We also need to remember, as mentioned with examples, that the contexts seen across both examples and practice determine the space of transfer, so that still needs to be kept in mind.

The elements listed here are the elements that make effective practice, but also those that make engaging experiences (hence, the book). That is, games. While the best practice is individually mentored real performance, that doesn’t scale well, and the consequences can be costly. The next best practice, I argue, is simulated performance, tuned into a game (not turned, tuned). While model-driven simulations are ideal for a variety of reasons (essentially infinite replay, novelty, adaptive challenge), they can be simplified to branching or linear scenarios. If nothing else, just write better multiple-choice questions!

Note that, here, practice encompasses formative and summative assessment. In either case, the learner’s performing; it’s just whether you evaluate and record that performance to determine what the learner is capable of. I reckon assessment should always be formative, helping the learner understand what they know. And summative assessment, in my mind, has to be tied back to the learning objectives, seeing if they can now do what they need to be able to do that’s different.

If you create meaningful, challenging, contextualized performance, you create effective practice. And that’s key to behavior change, and learning. So practice making perfect practice, because practice makes perfect.

Designing Learning

28 February 2009 by Clark 2 Comments

Another way to think about what I was talking about yesterday, in revisiting the training department, is taking a broader view. I was thinking about it as Learning Design, a view that incorporates instructional design, information design, and experience design.

I’m leery of the term instructional design, as that label has been tarnished by too many cookie-cutter examples and rote approaches to make me feel comfortable (see my Broken ID series). However, real instructional design theory (particularly when it’s cognitive-, social-, and constructivist-aware) is great stuff (e.g. Merrill, Reigeluth, Keller, et al); it’s just that most of it’s been neutered in interpretation. The point being, really understanding how people learn is critical. And that includes Cross’ informal learning. We need to go beyond just the formal courses, and provide ways for people to self-help, and group-help.

However, it’s not enough. There’s also understanding information design. Now, instructional designers who really know what they’re doing will say, yes, we take a step back and look at the larger picture, and sometimes it’s job aids, not courses. But I mean more here. I’m talking about, when you do sites, job aids, or more, including the information architecture, information mapping, visual design, and more, to really communicate, and support the need to navigate. I see reasonable instructional design undone by bad interface design (and, of course, vice versa).

Now, how much would you pay for that? But wait, there’s more! A third component is the experience design. That is, viewing it not from a skill-transferral perspective, but instead from the emotional view. Is the learner engaged, motivated, challenged, and left fulfilled? I reckon that’s largely ignored, yet myriad evidence points us to the realization that the emotional connection matters.

We want to integrate the above. Putting a different spin on it, it’s about the intersection of the cognitive, affective, conative, and social components of facilitating organizational performance. We want to do the least we can to achieve that, and we want to support working alone and together.

There’s both a top-down and a bottom-up component to this. At the bottom, we’re analyzing how to meet learner needs, whether it’s fully wrapped with motivation, or just the necessary information, or providing the opportunity to work with others to answer the question. It’s about infusing our design approaches with a richer picture, respecting our learners’ time, interests, and needs.

At the top, however, it’s looking at an organizational structure that supports people and leverages technology to optimize the ability of the individuals and groups to execute against the vision and mission. From this perspective, it’s about learning/performance, technology, and business.

And it’s likely not something you can, or should, do on your own. It’s too hard to be objective when you’re in the middle of it, and the breadth of knowledge to be brought to bear is far-reaching. As I said yesterday, what I reckon is needed is a major revisit of the organizational approach to learning. With partners we’ve been seeing it, and doing it, but we reckon there’s more that needs to be done. Are you ready to step up to the plate and redesign your learning?

Monday Broken ID Series: Examples

22 February 2009 by Clark 2 Comments

Previous Series Post | Next Series Post

This is one in a series of thoughts on some broken areas of ID that I’m posting for Mondays. I intend to provide insight into many ways much of instructional design fails, and some pointers to avoid the problems. The point is not to say ‘bad designer’, but instead to point out how to do good design.

I see several recurring problems with examples, and they aren’t even the deepest problems. Examples tend to be mixed in with the concept instead of separate, if they exist at all. Then, when they do exist, too often they’re cookie-cutter examples that don’t include the elements that make examples successful, let alone make them intrinsically interesting; yet we know what these elements are!

Conceptually, examples are applications of the concept in a context. That is, we have a problem in a particular setting, and we want to use the model as a guide to solving the problem. Note that the choice of examples is important. The broader the transfer space, that is, the more general the skills, the more you want examples that differ in many respects. Learners generalize the concept from the examples, and the extent to which they’ll generalize to all appropriate situations depends on the breadth of contexts they’ve seen (across both examples and practice). You need to ensure that the contexts the learner sees are as broadly disparate as possible.

Note that we should also be choosing problems and contexts that are of interest to the audience.   Going beyond just the cognitive role, we should be trying to tap into the motivational and engagement factors.   Factor that into the example design as well!

Now, we know that examples have to show the steps that were taken. They have to have specific steps from beginning to end. And, I add, those steps have to refer back to the concept that guides the presentation. You can’t just say “first you do this, then you do this”, etc.; you have to say “first, using the model, you do this, and then the model says to do that”. You need to show the steps, and the intermediate work products. Annotating them is really important.

And that annotation is not just the steps, but also the underlying thought processes. The problem is, experts don’t even have access to their thought processes anymore! Yet their thinking really works along lines like “well, I could’ve done A, but because of X I thought B was a better approach; and then I could have done C, but because of Y I tried D”, etc. The point being, there are a lot of contextual clues that they evaluate that aren’t even conscious, yet these clues are really important for learners. (BTW, this is one of the many reasons I recommend comics in elearning; thought bubbles are great for cognitive annotation.)

Another valuable component is showing mistakes and backtracking. This is a hard one to get your mind around, and yet it’s powerful both cognitively and emotionally. First, if experts model the behavior perfectly, then when learners try and make mistakes, they may turn off emotionally (“I’m having trouble, and it looks so easy; I must not be good at this”). In reality, experts make mistakes all the time, and learners need to know that. It keeps you from losing them altogether!

Cognitively it’s valuable, too. When experts show backtracking and repair, they’re modeling the meta-skills that are part of the expertise. Unpacking that self-monitoring helps learners internalize the ‘check your answer’ component that’s part of expert performance. This takes more work on the part of the designer, as we had with the concept, but if the content is important (otherwise, why are you building a course?), it’s worth doing right.

Finally, I believe it’s important to convey the example as a story. Our brains are wired to comprehend stories, and a good narrative has better uptake. Having a protagonist documenting the context and problem, and then solving it with the model to achieve meaningful outcomes, is more interesting, and consequently more memorable. We can use a variety of media to tell stories, from prose, through audio (think mobile and podcasts) and narrated slideshow, to animation or video. Comics are another channel. Stories are also useful for conveying the underlying thought processes, via thought bubbles or reflective narration (“What was I thinking?…”).

So, please do good examples.   Be exemplary!

The ‘Least Assistance’ Principle

20 February 2009 by Clark 10 Comments

While I agree vehemently with most of a post by Lars Hyland, he said one thing I slightly disagree with, and I want to elaborate on it.   He was disagreeing with   “buying rapid development tools to bash out ill formed ‘e-learning’ to an audience that will not only be unimpressed but also none the wiser – or more productive”, a point I want to nuance.   I agree with not using rapid elearning to create courses for novices, but there is a role for bashing out courses for another audience, the practitioner.   And there’s something deeper here to tease out.

I want to bring up John Carroll’s minimalist instruction, and highly recommend it to you. He focused on a) meaningful tasks, b) getting to active learning quickly, c) including error recognition & recovery, and d) making learning activities self-contained (a lot like games, actually). In The Nurnberg Funnel, he documented how this design led to 25 cards, one per learning goal, that beat a 94-page traditionally designed manual hands-down in outcomes.

Another way to think about it is something Jim Spohrer mentioned to me once. Now, Jim’s been an Apple Fellow, and is leading research at IBM’s Almaden Research Center.   He really cares and likes to help people, but he’s very busy.   So he adopted a ‘least assistance’ principle, where he would ask himself what’s the least he can do to get this person going, because there was more to do and more people to help than he was able to keep up with.   And I think it is a useful way to think about supporting learning.

This sounds a lot like performance support, and that’s definitely a mind-set we need to adopt. When Harold Jarche and Jay Cross talk about the death of the training department, they’re talking about not focusing on courses, and instead taking a broader, performance perspective.   Obviously, we want to talk about portals of resources, but we also need to recognize that there are formal learning situations that don’t require the full formality.

We develop full courses to incorporate motivation, practice, all the things non-self-directed learners need. But there are times when we need to provide new information and skills to self-directed learners. When we’re talking to practitioners who are good at their job, know what they’re doing and why, and know that they need this information and how they’ll apply it, we can strip away a lot of the window dressing. We can just provide support to an SME so that their talk presents the relevant bits in a streamlined and effective way, and let them loose. That, to me, is the role of rapid elearning.

It’s not for novices, but it’s effective, and more efficient. In this economic climate, we don’t have the luxury of fully developing courses for every need. Moreover, in any climate, we shouldn’t give people what they don’t need; instead we need to focus on what the ‘least assistance’ we can give them is.

In many cases, the least assistance we can give is self-help, which is why I believe social learning tools are one of the best investments that can be made. The answer may well be ‘out there’, and rather than having learning designers try to track it down and capture it, the learner can send out the need, and there’s a good chance an answer will come back! There’s a lot to making such an environment work; it’s not the case that ‘if you build it, they will learn’, but it’s still going to fill a sweet spot in the performance ecosystem that may not be addressed as of now.

Don’t look for everything you can do in one situation, unless you’re flush with too much time and resources (in which case, watch out!); instead look for the least you can do that will get the job done, so you can do more for everybody. It’s likely that’s more to their taste, anyway. And that’s enough from me on that!

Monday Broken ID Series: Concept Presentation

15 February 2009 by Clark 9 Comments

Previous Series Post | Next Series Post

This is one in a series of thoughts on some broken areas of ID that I’m posting for Mondays. The intention is to provide insight into many ways much of instructional design fails, and some pointers to avoid the problems. The point is not to say ‘bad designer’, but instead to point out how to do better design.

At some point (typically, after the introduction) we need to present the concept. The concept is the key to the learning, really. While we’ve derived our ultimate alignment from the performance objective, the concept provides the underlying framework to guide one’s performance. We use the framework to provide feedback that helps the learner understand why their behavior was wrong, both within the learning experience and, ideally, beyond it, as the learner uses the model to continue to develop their performance. Except that, too often, we don’t provide the concept in a useful way.

What we too often see is a presentation of a rote procedure, without the underlying justification. In business, we’ll teach a process. In software, we’ll see feature/function presentations (literally going item by item through the menus!). We’ll see tutorials to achieve a particular goal without presenting an underlying model. And that’s broken.

We need models! The reason is that people create mental models to explain the world. People aren’t very good at remembering rote things (our brains are really good at pattern matching, but not rote memorization). We can fake it, but it’s just crazy to have people memorize rote things unless it’s something they absolutely have to know cold (medical terminology is an example, as are emergency checklists for flights). By and large, very little of what we need to know needs to be memorized.

Instead, what people need are models. Models are powerful, because they have explanatory and predictive power. If you forget a step in a procedure, but know the model driving the performance, you can regenerate the missing step. With software, for instance, if you present the model, and several examples where the way to do something is derived from the model, and then you have the learner use inferences from the model to do a couple of tasks, you might be saved from having to present the whole system.

People will build models, so if you don’t give them one, it’s quite likely that the one they do build will be wrong. And bad models are very hard to extinguish, because we patch them rather than replace them. It puts more responsibility on the designer to get the model, as, for reasons mentioned before, our SMEs may not be able to help us; but get it we must. Realize that every procedure, piece of software, or behavior has a model that drives why it should be done in a particular way, and find it. Then we need to communicate it.

Multiple models help! To communicate a model most effectively, we should communicate it in several ways. Models are more memorable than rote material, but we need to facilitate internalization. Prose is certainly one tool we can and should use (carefully; it’s way too easy to overwrite), but we should look at other ways to communicate it as well.

Multiple representations help in several ways.   First, they increase the likelihood that a learner will comprehend the model, and then have a path to comprehend the other representations.   Second, the multiple representations increase the number of paths to activate a model in a relevant context.   Finally, multiple representations increase the likelihood that one can map closely to the problem and facilitate a solution.

Multiple representations are, unfortunately, sometimes difficult to generate (more so than finding the original model). However, we should always be able to at least generate a diagram. This is because the model should have conceptual relationships, and these can be mapped to spatial relationships. There’s some creativity involved, but that’s the fun part anyway!

Yes, doing good instructional design does take more work, but anything worth doing is worth doing well. On a related, but important, note, unfortunately the difference between broken ID and good ID is subtle. You may have to explain it (I literally have had to), but if you know what you’re doing and why, you should be able to. And having developed a powerful representation increases the power and success of the learning, and consequently the performance. Which is, of course, our goal. So, go forth and conceptualize!

Pacing

10 February 2009 by Clark 9 Comments

We recently finished watching a video series called Kamichu (we like anime). It’s a remarkably cute series about a middle school girl who finds out she’s a god (apparently in the Shinto belief system). There are some subtle digs at cultural artifacts like politicians, sweet explorations of the difficulties of romance, and funny running gags. I recommend it, but the thoughts it prompted are what I’m talking about here.

One of the interesting things about the show is its speed. Each episode unfolds at its own leisurely pace, with soft musical backgrounds, and no laugh tracks. Our (only recently) Disney-watching kids, now experienced with laugh tracks and frantic pacing, were enchanted. It made me think about taking time to develop an atmosphere, the time taken to really develop a mood. Good movies do that, though less and less.

I’d recently been reflecting on pacing in music as well, regarding Pink Floyd. They similarly take the time to build the tension to make their musical flourishes. As did the landmark Who’s Next album. (OK, so my musical tastes indicate my age. Still, the pacing matters.)

Serendipitously, I also just read an intriguing post about the history of addiction. It starts off talking about how we used to listen to music, hearing our favorite pieces only infrequently, and likely badly. Similarly, getting together for conversations and fun was time-consuming. The post then goes on to cover the rise, and fall, of opiates (legal for many years), and finally suggests that technology is our new addiction, and that we still haven’t figured out what is and isn’t appropriate with technology. It’s long, but very interesting.

I’ve gone off before about slow learning, and I think this is another facet. Not only are we rushing too much in our performance, our development processes, and the amount of time we devote to learning, we’re also not properly setting the stage. I’ve been quick myself, but some of the best speakers seem to take their time getting to the point. I think there’s a lot to process here, and perhaps a lot to learn. We have less patience, and I think it’s affecting our confidence to take the time to do things properly. If we don’t, we risk it not working. If we do take our time, we run the risk of costing a bit more money.

In business, increasingly, I think we need to slow down and think a little, and the end result will end up being at least as fast, but also better quality. I think that’s the wise decision; what do you think?

Monday Broken ID Series: The Introduction

8 February 2009 by Clark 3 Comments

First Series Post | Next Series Post

This is one in a series of thoughts on some broken areas of ID that I’m posting for Mondays. The intention is to provide insight into many ways much of instructional design fails, and some pointers to avoid the problems. The point is not to say ‘bad designer’, but instead to point out how to do better design.

One of the first things learners see is the introduction to the content; it’s the first place they can be disappointed, and all too often they are. They are given objectives that don’t matter to them, they’re told what they’re going to see in dull terms; it’s all aversive rather than interesting. Which is a wonderful way to start a learning experience, eh?

What we want to do is bring in the emotion! Almost all of instructional design is about the cognitive part, yet the motivational part is often just as important. And we’ve got to go beyond simplistic views of what that means.

Even cognitive science recognizes that there’s more to the mind than the cognitive aspect, and includes the affective and conative as well. Affective covers your learner’s characteristics and styles. Conative is the interesting bit: the intention to learn, which includes things like motivation to learn, anxiety about learning, etc.

I’ve gone off before about learning styles, and the short answer is to a) use the right media for the message, and b) provide help for learners. However, addressing motivation and anxiety is a different, and important, thing. We want to assist their motivation, which happens by helping the learner connect this experience to themselves and their goals. And we want to reduce their anxiety to an appropriate level (people perform better under a little pressure), by helping manage their expectations.

To help with motivation, there are a couple of things to do. We know that learners learn better when we activate relevant information up front (it helps associate the new information with existing information). I maintain that we want to extend that, and open them up emotionally too. And I believe it should be done first. I think we need to indicate the consequences of the knowledge, either negative for not having the information, or positive for having it. I think the consequences can be exaggerated, to increase the emotional impact, within bounds, and it can be done dramatically (see Michael Allen’s Flight Safety video) or humorously. I’ve used comic strips to begin elearning sections (we don’t use comics enough)!

There are nuances here: it has to be specific to the situation, not just an unrelated exaggeration. Done well, it can incorporate the cognitive association activation as well! But hook them emotionally, and the information will stick better. Too often in the learning I see, there’s not just little, but essentially no, addressing of why this information is important to the learner, and that’s got to be job number 1, or we risk wasting the rest of the effort.

Then we come to objectives, and here I nod in the direction of Will Thalheimer, who’s said this better than I: the objectives we show to the learner are not the ones we use to design! Too often, there’s a section in the cookie-cutter template for objectives, and we slap in the ones we’re designing to. Wrong, bad designer, no Twinkie™. We (should) use objectives [previous post] to align what we’re doing to the real need, but the learners don’t want to know about our metrics. The objectives for them need to be rewritten in a WIIFM (What’s In It For Me) framework. They should get objectives that let them know what they’ll be able to do that they can’t do now, that they care about!

Another thing that helps, and now we’re onto anxiety more, is addressing expectations. Stephanie Burns showed that of people who set out to accomplish a goal, those who succeeded were those who managed their expectations appropriately. Similarly, when I run workshops, I find I get fewer concerns when I lay out what’s going to happen and why, rather than just barging ahead. If people don’t know what to expect, or expect it’ll be X (e.g. entertaining) and there’s some Y (e.g. hard work), they get frustrated or concerned by the mismatch. They can get upset in particular if one aspect is difficult and they feel like they’re floundering. Making sure that expectations are set appropriately helps learners feel like they’re in sync with what’s happening, and maintains their confidence.

A role that’s cognitive as well as motivational, and one we don’t fill enough, is contextualizing what’s happening. Too often, learning is conducted in a vacuum. Yet Charles Reigeluth’s Elaboration Theory suggests drilling down, and I say contextualize the learning in the larger context of what’s happening in the world. Even if we’re learning about some minor medical procedure, we can talk about how health care is a major issue, and getting it right is one of the components of making it effective and efficient. Or somesuch, but you can quickly connect what they’re learning to the real world, and you should. It’ll again help associate relevant knowledge and increase the effectiveness of the message by connecting what’s happening now to what’s really important.

And, I’ll finally add, no pre-tests, unless it’s to let the learners test out. I’ve talked about that before, so I’ll merely point you to my previous screed.

So, introduce your learners appropriately to the learning, get them cognitively and emotionally ready for the learning experience, and you won’t be throwing away all the effort to develop what follows the introduction; you’ll be maximizing it. And that’s what you want, in the end: for that learning to stick.

(New) Monday Broken ID Series: Objectives

1 February 2009 by Clark 9 Comments

Next series post

This is the first in a series of thoughts on some broken areas of ID that I will be posting for Mondays. The intention is to provide insight into many ways much of instructional design fails, and some pointers to avoid the problems. The point is not to say ‘bad designer’, but instead to point out how to do better design.

The way I’ve seen many learning solutions go awry is right at the beginning, focusing on the wrong objective. Too often the objective is focused on rote knowledge, whether it’s facts, procedures, or canned statements. What we see is knowledge dump, or as I’ve heard it called: show up and throw up. Then, the associated assessment is similarly regurgitation of what you’ve just heard. The reasons this happens, and why it doesn’t work, are both firmly rooted in the way our brains work.

First, our brains are really bad at rote remembering. We’re really good at pattern-matching, and extracting underlying meaning. That’s why we use external aids like calendars. Heck, if it’s rote knowledge, don’t make them memorize it; let them look it up, or automate it. OK, in the rare case where they do have to know it, we can address that, but we overuse this approach. And that’s due to the second reason.

Experts don’t know how they do what they do, by and large. Our brains ‘compile’ information; expertise implies becoming so practiced that the process is inaccessible to conscious thought (ask an expert concert pianist to describe what they’re doing while playing, and their performance falls apart). We found this out in the ’80s, when we built so-called ‘expert systems’ to do what experts said they did. When the systems didn’t work, we went back and looked at what the experts were really doing, and there was essentially zero correlation between what they said they did and what they actually did.

What happens, then, is that our Subject Matter Experts (SMEs) do recall what they studied, and toss that out. They’ll dump a bunch of relevant knowledge on the designer, and the good little designer will develop a course around what the SME tells them. So, we see objectives like:

Be able to cite common objections to our product.

What’s needed is to focus on more meaningful outcomes. Dave Ferguson has written a nice post defending Bloom’s skill taxonomies, and he’s largely right when saying that focusing on what people actually do with the knowledge is critical. However, I find it simpler to distinguish, à la Van Merrienboer, between the knowledge the learner needs and the complex decisions they apply that knowledge to, with the emphasis on the latter. So, I’d like to see objectives more like:

Be able to counter customer objections to our product.

The nuances may seem subtle, but the difference is important.

How does a designer do that? SMEs are not the easiest folks to work with in this regard. I’ve found it useful to turn the conversation to focus on the things that the learner needs to be able to do after the learning experience. That is, ask them what decisions learners need to be able to make that they can’t make now. Not what they need to know, but what they need to be able to do.

And, I argue, what will likely make the difference going forward will be skills: things that learners can do differently, not just what they know. I recall a case where an organization was not just looking for the learners to understand the organizational values, but to act in accordance with them (and what that meant). That’s what I’m talking about!

When it comes to capturing objectives, I’m perfectly happy with Mager’s format of specifying who the audience is, what they need to be able to do, and a way to determine that they’re successfully performing. From there, you can work backwards to the assessment, and then to the concept, examples, and practice that will develop the skills to pass the assessment.

There’s another step, really, before this, and that’s determining what decisions learners need to make differently or better to impact the bottom line, i.e. choosing objectives that will affect the organization in important ways, but that’s another topic for another day.

Doing good objectives is both a skill that can be learned and a process that can be supported. You should be doing both. Starting from the right objective makes everything else flow well; if you start on the wrong foot, everything else you do is wasted. Get your objectives right, and get your learning going!

Tools and tradeoffs

28 January 2009 by Clark 2 Comments

[Image: Old Site]

I’ve been busy updating my website. The previous version was done by hand in an old version of Adobe’s DreamWeaver, and while it was very light and minimal, it wasn’t very ‘elegant’. For instance, I’d had one problem that really bugged me and hadn’t been able to fix (though recently I managed to beat it into submission). I had several options: continue to maintain it, pay someone to do a better job, or find some tool that makes it easy to make reasonable sites. I got my mitts on a copy of RealMac’s RapidWeaver, and started to play around.

RapidWeaver uses templates: there are quite a few included, and you can pay for more. I wasn’t completely happy with any, but by systematic exploration (aka messing around), I managed to make one I was happy with. (Recognize that the small size of the screenshots can make the old one look plausible, but it was a bit space-wasting; e.g. it’s still readable at 50%!) I haven’t dived into the actual design behind the themes, as that takes me somewhere I don’t want to go. Still, when I’d find things I thought it couldn’t do, I’d look deeper and find it could. It took quite a few attempts to get things the way I liked them, but it’s mostly quite clean. Yes, I could delve into CSS and PHP and really get a handle on it, but that’s not the best investment of my time, and then I could’ve stuck with DreamWeaver. It’s enough that I understand what they do, without getting into the syntax of a specific site.

[Image: New Site]

The interesting thing to consider here, however, is the tradeoffs. I wanted a decent starting point, with the application handling all the background work when I changed things around (like maintaining the navigation bar, adding the cookie crumbs, etc.). I didn’t want to have to tweak everything myself. If I were a professional web designer, I’d want power tools; if I were an amateur, I’d want hand-holding. As it is, I want something in between. RapidWeaver does a relatively elegant job of providing simplicity upfront but letting you open up the hood and mess about inside. I had to get deep into the program to get done some things I wanted to get done, but its output is better than I was getting on my own. Note that I don’t like how its built-in ‘text and image’ pages look, so I went to HTML pages (which I can handle).

The more general lesson is that there are no right answers, only tradeoffs.   Ideally, you get more power as you take on more learning.   Andrea diSessa termed this ‘incremental advantage’, where well-designed tool environments give you more power as a direct outcome of your willingness to explore.   HyperCard had this, as you could start with just draw tools, but then explore fields, buttons, and backgrounds (before you hit the ‘HyperTalk’ programming language wall).

There’s been notable progress in providing power tools (though too many people don’t even know about the concept of ‘styles’), but there’s still a pretty linear relationship between learning and power.   For example, as I have mentioned before, everyone wants the full game development tool that doesn’t require programming, though I argue it can’t exist.   It’s nice (and all too rare) when you get an elegant segue from templates through to being able to open up the underpinnings.

Understanding the tradeoff between ease of use and power is important in bringing knowledge, information, and tools to your learners, as well as your own learning tools.   You’ll want good defaults, and then the ability to customize.   Some of our tools are still not doing a good job of that, and the tutorials still tend to be focused on either product features or rote procedures, instead of helping you understand the software model underneath.   We could do a lot better!

Back to your user goals: you’ve got to know what you’re trying to do, how much you’re willing to learn about it, and live within what that gives you. As for the new website: put on your ‘potential customer’ goggles, prepared with what you’d want to know, and have a look; I welcome feedback to improve it!

Less than words

22 January 2009 by Clark 8 Comments

Yesterday, while I was posting on how words could be transcended by presentation, there was an ongoing twitfest on terms that have become overused and, consequently, meaningless.   It started when Jane Bozarth asked what ‘instructionally sound’ meant, then Cammy Bean chimed in with ‘rich’, Steve Sorden added ‘robust’, and it went downhill from there.

I responded to Jane’s initial query that instructionally sound cynically meant following the ID cookie cutter, but ideally meant following what’s known about how people learn.   I similarly tried to distinguish the hyped version of engaging (gratuitous media use) from a more principled one (challenging, contextualized, meaningful, etc).   (I had to do the latter, given I’ve got the word engaging in my book title.)

Other overused terms mentioned include: adaptive, brain-based, game-like, comprehensive, interactive, compelling, & robust. Yet behind most of these are important concepts (OK, game-like is hype, and Daniel Willingham’s put a bucket of cold water on brain-based). I should’ve added ‘personalized’ when a demo of an elearning authoring suite I sat through yesterday could capture the learner’s name and use it to print a ‘personalized’ certificate at the end.

And that’s the problem: important concepts are co-opted for marketing by using the most trivially qualifying meaning of the term to justify touting it as an instance.   Similarly, clicking to move on is, apparently, interactive.   Ahem.   It’s like the marketers don’t want to give us any credit for having a brain. (Though, sadly, from what I see, there does seem to be some lack of awareness of the deeper principles behind learning.)   I invoke the Cluetrain, and ask elearning vendors to get on board.

So, before you listen to the next pitch from a vendor, get your Official eLearning Buzzword Bingo™ card, make sure you know what the terms mean, and challenge them to ensure that they a) really understand the concept, and b) really have the capability. You win when you catch them out; a smarter market is a better market. OK, let’s play!
