Learnlets


Clark Quinn’s Learnings about Learning


Transformative Experience Design

28 March 2009 by Clark 6 Comments

As part of the continual rethink about what I offer and to whom (e.g. training department rethinks to managers, directors, and VPs; experience design reviews/refines to learning teams), my thoughts on learning experience design took a leap. I've argued that the skills in Engaging Learning (my book) are the ones that are critical for Pine & Gilmore's next step beyond their experience economy, the transformative experience economy. But I've started to think deeper.

John Seely Brown challenged us at the Learning Irregulars meeting with the idea that what fundamentally makes a difference is a 'questing disposition' found in certain active learning communities. This manifests as an orientation to experimentation and learning. My curiosity was whether it could be developed, as I'm loath to think that the 10% who learn despite schooling :) is a fixed ceiling; I believe that more and better learning has a chance to change our world for the better.

I hadn't finished the article he subsequently sent me (coming soon), but it drove me back to some early thinking on attitude change. I recognize that just learning skills isn't enough: a truly transformative experience needs to result, subjectively, in a changed worldview, a feeling of new perspectives. This could be a change in attitude, a new competency, or a fundamental change in perspective.

Which brings me back to looking at myth and ritual, something I tried to get my mind around before. I was looking for the Complete Idiot's Guide to Ritual, and the closest thing I could find is Rappaport's Ritual and Religion in the Making of Humanity, which is almost impenetrably dense (and I'm trained and practiced at reading academic prose!). However, the takeaway is that ritual is hard to design; most artificial attempts fail miserably.

Others have suggested that transformation is at its core about movement, which takes me back to ritual. Both a search on transformation and a Twitter response brought that element to the surface. The other element that the search found was spirituality (not just religious). Which is not surprising, but not necessarily useful.

Naturally, I fall back to thinking from the perspective of creating an experience that will yield that transformational aesthetic, but it's grounded in intuition rather than any explicit guidance. Still, I think there's something necessary in the perspective that skills alone aren't enough; as I said before, our barriers may be as much attitude or motivation as knowledge and skills.

I’ve skimmed ahead in JSB’s article, and can see I need a followup post, but in the interim, I’d welcome your thoughts on designing truly transformative experiences, not just learning experiences.

Monday Broken ID Series: Process

22 March 2009 by Clark 5 Comments

Previous Series Post

This is the last formal post in a series of thoughts on some broken areas of ID that I’ve been posting for Mondays.   The intention is to provide insight into many ways much of instructional design fails, and some pointers to avoid the problems. The point is not to say ‘bad designer’, but instead to point out how to do better design.

We’ve been talking about lots of ways instructional design can be wrong, but if that’s the case, the process we’re using must be broken too.   If we’re seeing cookie-cutter instructional design, we must not be starting from the right point, and we must be going about it wrong.

Realize that the difference between really good instructional design, and ordinary or worse, is subtle.   Way too often I’ve had the opportunity to view seemingly well-produced elearning that I’ve been able to dismantle systematically and thoroughly.   The folks were trying to do a good job, and companies had paid good money and thought they got their money’s worth.   But they really hadn’t.

It’d be easy to blame the problems on tight budgets and schedules, but that’s a cop-out.   Good instructional design doesn’t come from big budgets or unlimited timeframes, it comes from knowing what you’re doing.   And it’s not following the processes that are widely promoted and taught.

You know what I’m talking about – the A-word, that five letter epithet – ADDIE.   Analysis, Design, Development, Implementation, and Evaluation.   A good idea, with good steps, but with bad implementation.   Let me take the radical extreme: we’re better off tossing out the whole thing rather than continue to allow the abominations committed under that banner.

OK, now what am I really talking about? I was given a chance to look at an organization's documentation of their design process. It was full of taxonomies, and process, and all the ID elements. And it led to boring, bloated content. That's what you get if you follow all the procedures without a deep understanding of the underpinnings that make the elements work, without knowing what can be finessed based upon the audience, and without adding the emotional elements that instructional design largely leaves out (with the grateful exception of Keller's ARCS model).

The problem is that more people are doing design than have sufficient background, as Cammy Bean’s survey noted.   Not that you have to have a degree, but you do have to have the learning background to understand the elements behind the processes.   Folks are asked to become elearning designers and yet haven’t really had the appropriate training.

Blind adherence to ADDIE will, I think, lead to more boring elearning than someone creative following their best instincts about how to get people to learn. Again, Cathy Moore's Action Mapping is a pretty good shortcut that I'll suggest will lead to better outcomes than ADDIE.

Which isn't to say that following ADDIE when you know what you're doing, and have a concern for the emotional and aesthetic side (or a team with same), won't yield a good result; it will. Without that understanding, following ADDIE will likely yield something pretty close to effective, but it's so likely to be undermined by a lack of engagement that there's severe cause for worry.

And, worse, there's little in there to ensure that the real need is met: nothing asking the designer to go beyond what the SME and client tell you and ensure that the behavior change is really what's needed. The Human Performance Improvement model actually does a better job at that, as far as I can tell.

It's not hard to fix the problem. Start by finding out what significant decision-making change will impact the organization or individual, and work backward from there, as the previous posts have indicated. I don't mean to bash ADDIE, as it's conceptually sound from a cognitive perspective; it just doesn't extend far enough pragmatically in terms of focusing on the right thing, and it errs too much on the side of caution instead of focusing on the learner experience. It's not clear to me that ADDIE will even advocate a job aid when that's all that's needed (and I'm willing to be wrong).

Our goal is to make meaningful change, and that’s what we need to do.   I hope this series will enable you to do more meaningful design.   There may be more posts, but I’ve exhausted my initial thoughts, so we’ll see how it goes.

Cultural success

21 March 2009 by Clark Leave a Comment

I’ve been a wee bit busy this week, engaged on two different initiatives involved in improving what the organizations are doing. The interesting bit was that there were two widely different cultures, and yet each was successful.   How could that be?

Normally, we look at the elements of successful learning cultures as providing safety and reward for contributing, acceptance of diversity, and other dimensions.   It’s easy to imagine that this results in a relatively homogeneous outcome, which, while certainly desirable, might seem bland.   However, the two juxtaposed experiences demonstrated that this is definitely not the case.

In one, there's definitely a feeling of responsible progress, but it's a very supportive environment; while there's gentle teasing, it's a very warm and fuzzy place, as the leader self-describes it. This leader has some clear ideas, but is very collaborative in getting input on what goals to choose, and more so on how to get there. It's necessary in the community in which they play, but it works. People are clear about where they're going, and feel supported in getting there in reasonable steps.

The other culture is similarly committed to quality, but the leader has a much different personality. Instead of warm and fuzzy, there's much more attitude and edge. The comments are more pointed, but the edge is even more self-directed than other-directed, and is taken as well as given. It's more lively, probably not quite as 'safe', but also probably a bit more fun. It's probably more suited to the entrepreneurial nature of the organization than the previous, more institutional approach.

Yet both are in continual processes of improvement; in both cases my role was to add the outside knowledge of learning and technology in their self-evaluation.   It’s a pleasure to work with organizations that are serious about improvement, and eager to include the necessary input to get there.

My take-home is that there are lots of different ways organizations can be functional, as well as dysfunctional.   It doesn’t take much more than commitment to move from the latter to the former, and the leader’s style can be different, as long as it’s consistent, appropriate, and successful.   Definitely a nice thing to learn.

Meeting unreasonable needs

20 March 2009 by Clark 3 Comments

I was contacted yesterday by a relatively new ID person who was in a tough spot. This person understood the principles of Tony Karrer's "Before You Ask" post, as the situation was well laid out. Some help was asked for, with no expectation other than, perhaps, a thoughtful reply; the circumstances were quite clear.

The situation is that this person is the support for an LMS across multiple geographic locations. The ID was hired to do 'training' on the system, but access to SMEs is limited at best, and the uses in the different contexts are different enough that a course model isn't a viable solution; yet this person wasn't clear on what alternatives to take: "I am beginning to think that the position is flawed in its design."

For what it’s worth, here’s what I replied (slightly modified for clarity and anonymity):

First, I’d offer a pointer to John Carroll’s minimalist instruction (via “The Nurnberg Funnel”).   He taught a word processing system via a set of cards that trumped the instructionally designed manual by focusing on the learners’ existing knowledge and goals.   It’d be one way to ‘teach photography’ instead of ‘the camera’.

Of course, I also recommend teaching ‘the model’, not the software *nor* the task. That is, what is the LMS’s underlying model, and how does it lead you to predict how to do x, y, and z.   If you can teach the model, and through a couple of examples and practice get them to be able to infer how to do other tasks, you’ve minimized ‘training’ and maximized their long-term success.   Your lack of access to SMEs means you have to become one, however, I reckon.   Doing good ID does mean more responsibility on the designer in any case.   Sorry.

On top of either approach (common tasks, or model-based learning), consider that your role is to put out some basic materials (don't think training, think job aids), and then serve as a 'consultant'. Have them come to you to ask how to do things, and either create FAQs or more job aids, depending on their need and your assessment of the value proposition of either. So don't think your only solution is 'training'.

Also consider gestating a 'community' to surround your wiki, and grow it into a self-help resource that people can get into to the level they can handle. Have a discussion board where people can post questions. You'll be busy at first, but if they find value, it can grow to be self-sustaining. People will often self-help, if it's easy enough.

BTW, another organization had some success many years ago starting with a central office, bringing in and training local ‘champions’ who gradually moved the locus of responsibility back to their unit.   Of course, they got buy-in to do so, but you might try to work with your early adopters and help them become the local resources.

Overall, don’t try to accomplish everything with ‘the course’, but look to the broader range of performance ecosystem components (if you’ve followed my blog, you know I’m talking job aids, ecommunity, etc) and balance your efforts appropriately.

The response was that this was, indeed, helpful. I feel for the person having to do a particular role when the 'received wisdom' about how to do it is at odds with what really is useful, and who is under-resourced to boot. A too-frequent situation, and probably not decreasing, sigh. But taking the broader performance perspective is a useful framework I also found valuable in another recent engagement, professional development for teachers. Don't just worry about getting them the basics; develop them as practitioners, even into experts, as well. Moreover, help them help themselves!

This is just the type of situation where taking a step back and looking at what is being done can yield ways to rethink, or even just fine-tune the approach.   I typically find that it’s the case that there *are* such opportunities, and it’s an easy path to better outcomes.   Of course, I also find that years of experience and a wealth of relevant frameworks makes that easier ;).   What is your experience in adapting to circumstances and improving situations?

A wee bit o’ experience…

11 March 2009 by Clark 1 Comment

A personal reflection, read if you’d like a little insight into what I do, why and what I’ve done.

Reading an article in Game Developer about some of the Bay Area history of the video game industry has made me reflective.   As an undergrad (back before there really were programs in instructional technology) I saw the link between computers and learning, and it’s been my life ever since.   I designed my own major, and got to be part of a project where we used email to conduct classroom discussion, in 1978!

Having called all around the country to find a job doing computers and learning,   I arrived in the Bay Area as a ‘wet behind the ears’ uni graduate to design and program ‘educational’ computer games.   I liked it; I said my job was making computers sing and dance.   I was responsible for FaceMaker, Creature Creator, and Spellicopter (among others) back in 81-82.   (So, I’ve been designing ‘serious games’, though these were pretty un-serious, for getting close to 30 years!)

I watched the first Silicon Valley gold rush, as the success of the first few home computers and software had every snake oil salesman promising that they could do it too.   The crash inevitably happened, and while some good companies managed to emerge out of the ashes, some were trashed as well.   Still, it was an exciting time, with real innovation happening (and lots of it in games; in addition to the first ‘drag and drop’ showing up in Bill Budge’s Pinball Construction Set, I put windows into FaceMaker!).

I went back to grad school for a PhD in applied cog sci (with Don Norman), because I had questions about how best to design learning (and I'd always been an AI groupie :).   I did a relatively straightforward thesis, not technical but focused on training meta-cognitive skills, a persistent (and, I argue, important) interest.   I looked at all forms of learning; not just cognitive but behavioral, ID, constructivist, connectionist, social, even machine learning.   I was also getting steeped in applying cognitive science to the design of systems, and of course hanging around the latest/coolest tech.   On the side, I worked part-time at San Diego State University's Center for Research on Mathematics and Science Education working with Kathy Fisher and her application SemNet.

My next stop was the University of Pittsburgh’s Learning Research & Development Center for a post-doctoral fellowship working on a project about mental models of science through manipulable systems, and on the side I designed a game that exercised my dissertation research on analogy (and published on it).   This was around 1990, so I’d put a pretty good stake in the ground about computer games for deep thinking.

In 1991 I headed to the Antipodes, taking up a faculty position at UNSW in the School of Computer Science, teaching interface design, but quickly getting into learning technology again. I was asked to supervise a project designing a game to help kids who grow up without parents learn to live on their own. This was a very serious game (these kids can die because they don't know how to be independent), around 1993. As soon as I found out about CGIs (the first 'state'-maintaining web technology), we ported it to the web (circa 1995), where you can still play it (the tech's old, but the design's still relevant).

I did a couple other game-related projects, but also experimented in several other areas.   For one, as a result of looking at design processes,   I supervised the development of a web-based performance support system for usability, as well as meta-cognitive training and some adaptive learning stuff.

I joined a government-sponsored initiative on online learning, determining how to run an internet university, but the initiative lost out to politics. I jumped to another, and got involved in developing an online course that was too far ahead of the market (this would be about 1996-1997). The design was lean, engaging, and challenging, I believe (I shared responsibility), and they're looking at resurrecting it now, more than 10 years later! I returned to the US to lead an R&D project developing an intelligent learning system based on learning objects that adapted to learner characteristics (hence my strong opinions on learning styles), which we got up and running in 2001 before that gold rush went bust. Since then, I've been an independent consultant.

It's been interesting watching the excitement around serious games. Starting with Prensky, and then Aldrich, Gee, and now a deluge, there's been a growing awareness and interest; now there are multiple conferences on the topics, and new initiatives all the time. The folks in it now bring new sensibilities, and it's nice to see that the potential is finally being realized. While I've not been in the thick of it, I've quietly continued to work, think, and write on the issue (thanks to clients, my book, and the eLearning Guild's research reports). Fortunately, I've kept from being pigeonholed, and have been allowed to explore and be active in other areas, like mobile, advanced design, performance support, content models, and strategy.

The nice thing about my background is that it generalizes to many relevant tasks: usability and user experience design and information design are just two, in addition to the work I cited, so I can play in many relevant places, and not only keep up with but also generate new ideas. My early technology experience and geeky curiosity keep me up on the capabilities of the new tools, and allow me to quickly determine their fundamental learning capabilities. Working on real projects, meeting real needs, and abstracting to the larger picture have given me the ability to add value across a range of areas and needs. I find that I'm able to quickly come in and identify opportunities for improvement, pretty much without exception, at levels from products, through processes, to strategy. And I'm less liable to succumb to fads, perhaps because I've seen so many of them.

I’m incredibly lucky and grateful to be able to work in the field that is my passion, and still getting to work on cool and cutting edge projects, adding value.   You’ll keep seeing me do so, and if you’ve an appetite for pushing the boundaries, give me a holler!

Focusing on the Do: Moore’s Action Mapping

4 March 2009 by Clark 6 Comments

Cathy Moore has a lovely post with a slideshow that talks about using action mapping to design better elearning, and it’s a really nice approach.   While I don’t know from Action Mapping (tm?), I do know that the approach taken avoids the typical mistakes and focuses on the same thing I advocate: what do people need to be able to do?

The presentation rightly points out the problems with knowledge dump, and instead focuses on the business goal first, and then asks you to map out what the learner would need to be able to do to achieve that business goal.   That’s the point I was making in my ‘objectives‘ post of the Broken ID series.

Cathy nicely elaborates on that point, going directly to practice that has them doing the task, as close as possible to the real task.   Finally, she has you bring in the minimum information needed to allow them to do the task.   This is really a great ‘least assistance‘ approach!

Now, it doesn't talk about examples or models (though those could fit under the minimum-information principle, above), nor about introducing the topic, so I'd want to ensure that the learners are engaged in the learning experience up front, and provide a model to guide their performance in the task. What this does, however, is give you a framework and set of steps that really focuses on the important elements and avoids the typical approach that is knowledge-full and value-light. Recommended.
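
For the structurally minded, here's a minimal sketch of how such a map might be represented. This is my illustration, not Cathy's own format; all the names and the example content are hypothetical:

```python
# A minimal sketch of an action map as a data structure: business goal first,
# then the actions that achieve it, then practice close to the real task,
# then only the minimum information needed to support that practice.
# Illustrative only; not Cathy Moore's own format.

from dataclasses import dataclass, field

@dataclass
class Action:
    behavior: str                                           # what the learner must be able to do
    practice: list[str] = field(default_factory=list)       # realistic practice activities
    minimum_info: list[str] = field(default_factory=list)   # just-enough supporting content

@dataclass
class ActionMap:
    business_goal: str                                      # the measurable outcome served
    actions: list[Action] = field(default_factory=list)

# Hypothetical example: everything hangs off the goal; no content without an action.
order_map = ActionMap(
    business_goal="Reduce order-entry errors by 20%",
    actions=[
        Action(
            behavior="Verify customer details before submitting an order",
            practice=["Scenario: spot the mismatched shipping address"],
            minimum_info=["Checklist of fields that must match"],
        ),
    ],
)
```

The point the structure enforces is the one in the slideshow: you can't add content that doesn't trace back, through an action, to the business goal.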

Monday Broken ID Series: Perfect Practice

1 March 2009 by Clark 1 Comment

Previous Series Post | Next Series Post

This is one in a series of thoughts on some broken areas of ID that I'm posting for Mondays. I intend to provide insight into many ways much of instructional design fails, and some pointers to avoid the problems. The point is not to say 'bad designer', but instead to point out how to do good design.

Really, the key to learning is the practice. Learners have to apply knowledge, in the form of skills, to really internalize and 'own' the learning. Knowledge recitation, in the absence of application, leads to what cognitive science calls 'inert knowledge': knowledge that can be recited back, but isn't activated in appropriate contexts.

What we see, unfortunately, is too much knowledge testing, and not meaningful application. We see meaningless questions checking whether people can recite back memorized facts, and no application of those facts to solve problems. We see alternatives to the right answer that are so obviously wrong that we can pass the test without learning anything! And we see feedback that's not specific to the deficit. In short, we waste our time and the learner's.

What we want is appropriate challenge, contextualized performance, meaningful tasks, appropriate feedback, and more.

First, we should have picked meaningful objectives that indicate what they can do, in what context, to what level, and now we design the practice to determine whether they can do it. Of course, we may need to have some intermediate tasks to develop their skills at an appropriate pace, providing scaffolding to simplify the task until it's mastered.

We can scaffold in a variety of ways. We can provide tasks with simplified data first, that don't get complicated with other factors. We can provide problems with parts worked, so learners can accomplish the component skills separately and then combine. We can provide support tools such as checklists or flowcharts to assist, and gradually remove them until the learner is capable.

We do need to balance the level of challenge, so that the task gets difficult at the right rate for the learner: too easy, and the learner is bored; too hard, and the learner is frustrated. Don't make it too easy! If it matters, ensure they know it (and if it doesn't, why are you bothering?).

The trick is not only the inherent nature of the task; many times it's a matter of the alternatives to the right answer. Learners don't (generally) make random mistakes; they make patterned mistakes that represent inappropriate models they perceive as appropriate. We should choose alternatives to the right answer that represent these misconceptions.

Consequently, we need to provide specific feedback for that particular misconception. That's why any quiz tool that only has one response for all the wrong answers should be tossed out; it's worthless.
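
To make that concrete, here's a minimal sketch, with hypothetical names and content, of a question item where each distractor carries the misconception it represents and its own targeted feedback, rather than one generic response for every wrong answer:

```python
# A minimal, hypothetical sketch: each wrong answer (distractor) maps to a
# patterned misconception and gets its own feedback, instead of one generic
# "Incorrect" for all wrong answers.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Choice:
    text: str
    correct: bool = False
    misconception: Optional[str] = None  # the inappropriate model this distractor represents
    feedback: str = ""                   # feedback specific to that misconception

item_choices = [
    Choice("Raise the price", correct=True,
           feedback="Right: demand here is inelastic, so revenue rises."),
    Choice("Lower the price",
           misconception="Believes lower prices always increase revenue",
           feedback="That only works when demand is elastic; check elasticity first."),
    Choice("Leave the price unchanged",
           misconception="Avoids deciding under uncertainty",
           feedback="You have the elasticity estimate; use it to act."),
]
```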

We need to ensure that the setting for the task is of interest to the learner. The contexts we choose should set up problems that the learner viscerally understands are important, and that they are interested in.

We also need to remember, as mentioned with examples, that the contexts seen across both examples and practice determine the space of transfer, so that still needs to be kept in mind.

The elements listed here are the elements that make effective practice, but also those that make engaging experiences (hence, the book). That is, games. While the best practice is individually mentored real performance, that doesn't scale well, and the consequences can be costly. The next best practice, I argue, is simulated performance, tuned into a game (not turned, tuned). While model-driven simulations are ideal for a variety of reasons (essentially infinite replay, novelty, adaptive challenge), they can be simplified to branching or linear scenarios. If nothing else, just write better multiple choice questions!
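
As a sketch of the simplified end of that spectrum, here's one hypothetical way to represent a branching scenario; a linear scenario is just the special case with one option per node:

```python
# A minimal, hypothetical branching-scenario structure: each node sets a
# contextualized situation, and each option carries its consequence and a
# pointer to the next node (None ends the scenario).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Option:
    text: str
    consequence: str           # feedback the learner sees for this choice
    next_node: Optional[str]   # id of the following node, or None to end

@dataclass
class Node:
    situation: str             # the problem the learner faces, in context
    options: list[Option]

# Nodes "recover" and "escalate" are elided for brevity.
scenario = {
    "start": Node(
        situation="A key client emails, angry about a missed deadline.",
        options=[
            Option("Apologize and propose a recovery plan",
                   "The client cools down and asks for specifics.", "recover"),
            Option("Point out it was another team's fault",
                   "The client escalates to your manager.", "escalate"),
        ],
    ),
}
```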

Note that, here, practice encompasses both formative and summative assessment. In either case the learner's performing; it's just whether you evaluate and record that performance to determine what the learner is capable of. I reckon assessment should always be formative, helping the learner understand what they know. And summative assessment, in my mind, has to be tied back to the learning objectives, seeing if they can now do what they need to be able to do that's different.

If you make meaningful, challenging, contextualized performance, you make effective practice. And that's key to behavior change, and learning. So practice making perfect practice, because practice makes perfect.

Designing Learning

28 February 2009 by Clark 2 Comments

Another way to think about what I was talking about yesterday in revisiting the training department is taking a broader view.   I was thinking about it as Learning Design, a view that incorporates instructional design, information design and experience design.

I'm leery of the term instructional design, as that label has been tarnished with too many cookie cutter examples and rote approaches to make me feel comfortable (see my Broken ID series). However, real instructional design theory (particularly when it's cognitive-, social-, and constructivist-aware) is great stuff (e.g. Merrill, Reigeluth, Keller, et al); it's just that most of it's been neutered in interpretation. The point being, really understanding how people learn is critical. And that includes Cross' informal learning. We need to go beyond just the formal courses, and provide ways for people to self-help, and group-help.

However, it's not enough. There's also understanding information design. Now, instructional designers who really know what they're doing will say, yes, we take a step back and look at the larger picture, and sometimes it's job aids, not courses. But I mean more here: when you do sites, job aids, or more, include the information architecture, information mapping, visual design, and more, to really communicate and support the need to navigate. I see reasonable instructional design undone by bad interface design (and, of course, vice-versa).

Now, how much would you pay for that? But wait, there's more! A third component is the experience design. That is, viewing it not from a skill-transferral perspective, but instead from the emotional view. Is the learner engaged, motivated, challenged, and left feeling fulfilled? I reckon that's largely ignored, yet myriad evidence points us to the realization that the emotional connection matters.

We want to integrate the above. Putting a different spin on it, it's about the intersection of the cognitive, affective, conative, and social components of facilitating organizational performance. We want to do the least we can to achieve that, and we want to support working alone and together.

There's both a top-down and bottom-up component to this. At the bottom, we're analyzing how to meet learner needs, whether it's fully wrapped with motivation, or just the necessary information, or providing the opportunity to work with others to answer the question. It's about infusing our design approaches with a richer picture, respecting our learners' time, interests, and needs.

At the top, however, it's looking at an organizational structure that supports people and leverages technology to optimize the ability of the individuals and groups to execute against the vision and mission. From this perspective, it's about learning/performance, technology, and business.

And it's likely not something you can, or should, do on your own. It's too hard to be objective when you're in the middle of it, and the breadth of knowledge to be brought to bear is far-reaching. As I said yesterday, what I reckon is needed is a major revisit of the organizational approach to learning. With partners we've been seeing it, and doing it, but we reckon there's more that needs to be done. Are you ready to step up to the plate and redesign your learning?

Revisiting the Training Department

27 February 2009 by Clark 1 Comment

Harold Jarche and Jay Cross have been talking about rethinking the training department, and I have to agree. In principle, if there is a 'training' department, it needs to be coupled with a 'performance' department and a 'social learning' department, all under an organizational learning & performance umbrella.

What's wrong with a training department? Several things you'll probably recognize: all problems have one answer ('a course'); no relationships to the groups providing the myriad portals; no relationship to anyone doing any sort of social learning; no 'big picture' comprehension of the organization's needs; and typically the courses aren't that great either!

To put it another way, it's not working for the organizational constituencies. The novices aren't being served, because the courses are too focused on knowledge rather than skills, aren't sufficiently motivating to engage them, and are used even when job aids would do. The practitioners aren't getting, or able to find, the information they need, and have trouble getting access to expert knowledge. And experts aren't able to collaborate with each other, or to work effectively with practitioners to solve problems. Epic fail, as they say. OK, so that's a 'straw man', but I'll suggest it's all too frequent.

The goal is a team serving the entire learnscape: looking at it holistically, matching needs to tools, nurturing communities, leveraging content overlap, and creating a performance-focused ecosystem. I've argued before that such an approach is really the only sustainable way to support an organization. However, that's typically not what we see.

Instead, we tend to see different training groups making courses in their silos, with no links between their content (despite the natural relationships), often no link to content in portals, no systematic support for collaboration, and overall no focus on long-term development of individuals and capabilities.

So, how do we get there from here? That's not an easy answer, because (and this isn't just consultant-speak) it depends on where the particular organization is at, what makes sense as a particular end version, and what metrics are meaningful to the organization. There are systematic ways to assess an organization (Jay, Harold, and I have drafted just such an instrument), and processes to follow to come up with recommendations for what you do tomorrow, next month, and next year.

The outcome should be a plan, a strategy, to move towards that goal. The path differs, as the starting points are organization-specific. One way to do it is DIY, if you've got the time; it's cheaper, but more error-prone. The fast track is to bring in assistance and take advantage of a high-value, lightweight infusion of the best thinking to set the course. No points for guessing my recommendation. But with the economic crisis and organizational 'efficiencies', can you afford to stick to the old ineffective path?

Monday Broken ID Series: Examples

22 February 2009 by Clark 2 Comments

Previous Series Post | Next Series Post

This is one in a series of thoughts on some broken areas of ID that I'm posting for Mondays. I intend to provide insight into many ways much of instructional design fails, and some pointers to avoid the problems. The point is not to say 'bad designer', but instead to point out how to do good design.

I see several reliable problems with examples, and they aren't even the deepest problems. Examples tend to be mixed in with the concept, instead of separate, if they exist at all. Then, when they do exist, too often they're cookie-cutter examples that don't delve into the elements that make examples successful, let alone are intrinsically interesting; yet we know what these elements are!

Conceptually, examples are applications of the concept in a context. That is, we have a problem in a particular setting, and we want to use the model as a guide to solving the problem. Note that the choice of examples is important. The broader the transfer space, that is, the more general the skills, the more you want examples that differ in many respects. Learners generalize the concept from the examples, and the extent to which they'll generalize to all appropriate situations depends on the breadth of contexts they've seen (across both examples and practice). You need to ensure that the contexts the learner sees are as broadly disparate as possible.

Note that we should also be choosing problems and contexts that are of interest to the audience.   Going beyond just the cognitive role, we should be trying to tap into the motivational and engagement factors.   Factor that into the example design as well!

Now, we know that examples have to show the steps that were taken. They have to have specific steps from beginning to end. And, I add, those steps have to refer back to the concept that guides the presentation. You can't just say "first you do this, then you do this", etc.; you have to say "first, using the model, you do this, and then the model says to do that". You need to show the steps, and the intermediate work products. Annotating them is really important.

And that annotation is not just the steps, but also the underlying thought processes. The problem is, experts don't even have access to their thought processes anymore! Yet their thinking really works along lines like "well, I could've done A, but because of X I thought B was a better approach, and then I could have done C, but because of Y I tried D", etc. The point being, there are a lot of contextual clues that they evaluate that aren't even conscious, yet these clues are really important for learners. (BTW, this is one of the many reasons I recommend comics in elearning; thought bubbles are great for cognitive annotation.)

Another valuable component is showing mistakes and backtracking. This is a hard one to get your mind around, and yet it's powerful both cognitively and emotionally. If experts model the behavior perfectly, then when learners try and make mistakes, they may turn off emotionally ("I'm having trouble, and it looks so easy, I must not be good at this"). In reality, experts make mistakes all the time, and learners need to know that. It keeps you from losing them altogether!

Cognitively it's valuable, too. When experts show backtracking and repair, they're modeling the meta-skills that are part of the expertise. Unpacking that self-monitoring helps learners internalize the 'check your answer' component that's part of expert performance. This takes more work on the part of the designer, like we had with the concept, but if the content is important (otherwise, why are you building a course?), it's worth doing right.
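
Pulling those elements together, here's a minimal sketch of a worked-example step that carries the model reference, the think-aloud reasoning, and an optional misstep-and-repair. The structure and the example content are my own hypothetical illustration:

```python
# A minimal, hypothetical worked-example structure: each step records the
# action, the model element motivating it, the expert's think-aloud
# (including alternatives considered), and any misstep shown and repaired.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    action: str                    # what is done at this step
    model_reference: str           # the concept/model element guiding the step
    think_aloud: str               # underlying reasoning, incl. rejected options
    misstep: Optional[str] = None  # an error made and caught, if demonstrated

example_steps = [
    Step(action="Estimate the worst-case load before picking a beam",
         model_reference="Size from worst-case load (model, step 1)",
         think_aloud="I could start from the parts catalog, but the model says load first."),
    Step(action="Recompute after spotting a unit mismatch",
         model_reference="Check-your-answer meta-skill",
         think_aloud="kN and lbf don't mix; backtrack and redo the estimate.",
         misstep="Initially mixed metric and imperial units."),
]
```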

Finally, I believe it's important to convey the example as a story. Our brains are wired to comprehend stories, and a good narrative has better uptake. Having a protagonist documenting the context and problem, and then solving it with the model to achieve meaningful outcomes, is more interesting, and consequently more memorable. We can use a variety of media to tell stories, from prose, through audio (think mobile and podcasts) and narrated slideshow, animation, or video. Comics are another channel. Stories also are useful for conveying the underlying thought processes, via thought bubbles or reflective narration ("What was I thinking?…").

So, please do good examples.   Be exemplary!
