Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

30 June 2015

SME Brains

Clark @ 8:10 am

As I push for better learning design, I’m regularly reminded that working with subject matter experts (SMEs) is critical, and problematic.  What makes SMEs experts has implications that are challenging, but it also offers a uniquely valuable perspective.  I want to review some of those challenges and opportunities in one go.

One of the artifacts of how our brains work is that we compile knowledge away.  We start off with conscious awareness of what we’re supposed to be doing, and apply it in context.  As we practice, however, our expertise becomes chunked up, and increasingly automatic. As it does so, some of the elements that get compiled away are no longer available to conscious inspection. As Richard Clark of the Cognitive Technology Lab at USC lets us know, about 70% of what SMEs do isn’t available to their conscious minds.  Or, to put it another way, they literally can’t tell us what they do!

On the other hand, they have pretty good access to what they know. They can cite all the knowledge they have at hand. They can talk about the facts and the concepts, but not the decisions.  And, to be fair, many of them aren’t really good at the concepts either, at least not from the perspective of being able to articulate a model that is of use in the learning process.

The problem then becomes a combination of finding a good SME and working with them in a useful way, starting with getting meaningful objectives. And while there are quite rigorous approaches (e.g. Cognitive Task Analysis), in general we need more heuristic ones.

My recommendation, grounded in Sid Meier’s statement that “good games are a series of interesting decisions” and the recognition that making better decisions is likely to be the most valuable outcome of learning, is to focus rabidly on decisions.  When SMEs start talking about “they need to know X” and “they need to know Y”, the response is to ask leading questions like “what decisions do they need to be able to make that they can’t make now?” and “how does X or Y actually lead them to make better decisions?”

Your end goal here is to winnow the knowledge away and get to the models that will make a difference to the learner’s ability to act.  And when you’re pressed by a certification body to represent what the SME tells you, you may need to push back.  I even advocate anticipating what the models and decisions are likely to be, and getting the SME to critique and improve them, rather than letting them start with a blank slate. This does require some smarts on the part of the designer, but when it works, it leverages the fact that it’s easier to critique than to generate.

SMEs are also potentially valuable in the ways that they recognize where learners go wrong, particularly if they train others.  Most of the time, mistakes aren’t random, but are based upon some inappropriate model.  Ideally, you have access to these reliable mistakes, and the reasons why they’re made. Your SMEs should be able to help here. They should know the ways in which non-experts fail.  It may be the case that some SMEs aren’t as good as others here, so again, as with finding ones that have access to the models, you need to be selective.

This is related to one of the two ways SMEs are your ally.  Ideally, you’re equipped with stories: great failures and great successes. These form the basis of your examples, and ideally come in the form of a story. An SME should have some examples of both that they can spin, and you can use them to build up an example. This may well be part of your process for getting the concepts and practice down, but you need to get these case studies.

There’s one other way that SMEs can help. The fact that they are experts means that they somehow find the topic fascinating or rewarding enough to spend the requisite time to acquire expertise. You can, and should, tap into that. Find out what makes this particular field interesting, and use that as a way to communicate the intrinsic interest to learners. Are they playing detective, problem-solver, or protector? Find the appeal, and then build it into the practice stories you ask learners to engage in.

Working with SMEs isn’t easy, but it is critical. Understanding what they can do, and where their intrinsic barriers lie, gives you a better handle on getting what you need to assist learners in being able to perform.  Those are some of my tips; what have you found that works?

9 June 2015

Content/Practice Ratio?

Clark @ 6:06 am

I end up seeing a lot of different elearning. And, I have to say, despite my frequent disparagement, it’s usually well-written; the problem seems to be in the starting objectives.  But compared to learning that really has an impact (medical, flight, or military training, for instance), it seems woefully under-practiced.

So, I’d roughly (and generously) estimate that the typical ratio is around 80:20 for content:practice.  And, in the context of moving from ‘getting it right’ to ‘not getting it wrong’, that seems woefully inadequate.  So, two questions: do we just need more practice, or do we also have too much content?  I’ll put my money on the latter, that is: both.

To start, in most of the elearning I see (even stuff I’ve had a role in, for reasons out of my control), the practice isn’t enough.  Of course, it’s largely wrong, being focused on reciting knowledge as opposed to making decisions, but there also just isn’t enough of it.  That’s ok if you know they’ll be applying it right away, but that usually isn’t the case.  We really don’t scaffold the learner from their initial capability, through more and more complex scenarios, until they’re at the level of ability we want: making the decisions they need to be making in the workplace with enough flexibility and confidence, and with sufficient retention until it’s actually needed.  Of course, it shouldn’t be the event model, and that practice should be spaced over time.  Yes, designing practice is harder than just delivering content, but it’s not that much harder to develop more than just to develop some.
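Since spacing practice over time is itself a design task, one way to sketch it is an expanding-interval schedule. To be clear, this is purely illustrative: the function name, the doubling scheme, and the numbers are my own assumptions, not a prescription from the spacing research.

```python
from datetime import date, timedelta

def spaced_schedule(start, sessions=5, first_gap=1, factor=2):
    """Return practice dates with expanding gaps (1, 2, 4, ... days).

    A toy expanding-interval scheme; real spacing decisions should be
    tuned to how long retention needs to last, not a fixed doubling.
    """
    dates, gap, current = [], first_gap, start
    for _ in range(sessions):
        current += timedelta(days=gap)
        dates.append(current)
        gap *= factor
    return dates

# A June 1 kickoff yields practice sessions on June 2, 4, 8, 16, and July 2.
schedule = spaced_schedule(date(2015, 6, 1))
```

The point isn’t the particular numbers; it’s that distributing even this modest amount of practice beats cramming it all into a single event.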

However, I’ll argue we’re also delivering too much content.  I’ve suggested in the past that I can rewrite most content to be 40%–60% shorter than it starts (including my own; it takes me two passes).  Learners appreciate it.  We want a concise model, and some streamlined examples, but then we should get them practicing.  And then let the practice drive them to the content.  You don’t have to prepackage it as much, either; you can give them some source materials that they’ll be motivated to use, and even some guidance (read: job aids) on how to perform.

And, yes, this is a tradeoff: how do we find a balance that both yields the outcomes we need but doesn’t blow out the budget?  It’s an issue, but I suggest that, once you get in the habit, it’s not that much more costly.  And it’s much more justifiable, when you get to the point of actually measuring your impact.  Which many orgs aren’t doing yet.  And, of course, we should.

The point is that I think our ratio should really be 50:50, if not 20:80, for content:practice.  That’s if it matters, but if it doesn’t, why are you bothering? And if it does, shouldn’t it be done right?  What ratios do you see? And what ratios do you think make sense?

2 June 2015

Model responses

Clark @ 8:12 am

I was thinking about how to make meaningful practice, and I had a thought that was tied to some previous work that I may not have shared here.  So allow me to do that now.

Ideally, our practice has us performing in ways that are like the ways we perform in the real world.  While it is possible to make alternatives available that represent different decisions, sometimes there are nuances that require us to respond in richer ways. I’m talking about things like writing up an RFP, or a response letter, or creating a presentation, or responding to a live query. And while these are desirable things, they’re hard to evaluate.

The problem is that our technology for evaluating freeform text is limited, let alone anything more complex.  While there are tools like latent semantic analysis that can be developed to read text, they’re complex to develop, and they won’t work on spoken responses, let alone spreadsheets or slide decks (common forms of business communication).  Ideally, people would evaluate them, but that’s not a very scalable solution if you’re talking about mentors, and even peer review can be challenging for asynchronous learning.
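To give a flavor of why automated comparison is both feasible and limited, here’s a crude word-overlap check between a learner response and a model response. This is a far simpler stand-in than latent semantic analysis (no dimensionality reduction, no synonym handling), and the function name and example texts are invented for illustration:

```python
import math
from collections import Counter

def similarity(text_a, text_b):
    """Cosine similarity over raw word counts: 1.0 = identical word use, 0.0 = no shared words."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[word] * b[word] for word in a)
    norm = math.sqrt(sum(n * n for n in a.values())) * math.sqrt(sum(n * n for n in b.values()))
    return dot / norm if norm else 0.0

model_response = "open with a clear statement then support it with two examples"
learner_response = "open with a statement and support it with examples"
score = similarity(model_response, learner_response)  # high overlap, but blind to nuance
```

A learner could paraphrase well and score poorly, or parrot keywords and score well, which is exactly why richer responses resist automated grading.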

An alternative is to have the learner evaluate themselves.  We did this in a course on speaking, where learners ultimately dialed into an answering machine, listened to a question, and then spoke their responses.  What they could then do was listen to a model response as well as their own.  Further, we could provide a guide, an evaluation rubric, to direct the learner in evaluating their response with respect to the model response (e.g. “did you remember to include a statement and examples?”).

This would work with more complex items, too.  “Here’s a model spreadsheet (or slide deck, or document); how does it compare to yours?”  This is very similar to the types of social processing you’d get in a group, where you see how someone else responded to the assignment, and then evaluate.

This isn’t something you’d likely do straight off; you’d probably scaffold the learning with simple tasks first.  For instance, in the example I’m talking about we first had them recognize well- and poorly-structured responses, then create them from components, and finally create them in text before having them call into the answering machine. Even then, they first responded to questions they knew they were going to get before tasks where they didn’t know the questions.  But this approach serves as an enriching practice on the way to live performance.

There is another benefit besides allowing the learner to practice in richer ways and still get feedback. In the process of evaluating the model response and using an evaluation rubric, the learner internalizes the criteria and the process of evaluation, becoming a self-evaluator and consequently a self-improving learner.  That is, they use a rubric to evaluate their response and the model response. As they go forward, that rubric can serve to continue to guide as they move out into a performance situation.
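The rubric itself can be as simple as a handful of yes/no criteria the learner answers about their own response after reviewing the model one. A minimal sketch, with criteria invented for illustration:

```python
# Hypothetical rubric: yes/no criteria a learner applies to their own
# response after comparing it with the model response.
rubric = [
    "Did you open with a clear statement?",
    "Did you include at least two supporting examples?",
    "Did you close by restating the key point?",
]

def self_evaluate(answers):
    """answers: one True/False per rubric criterion; returns (criteria met, total)."""
    if len(answers) != len(rubric):
        raise ValueError("answer every criterion")
    return sum(answers), len(rubric)

met, total = self_evaluate([True, False, True])  # learner met 2 of 3 criteria
```

The score matters less than the act of applying the criteria: that’s what gets internalized.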

There are times where this may be problematic, but increasingly we can and should mix media and use technology to help us close the gap between the learning practice and the performance context. We can prompt, record learner answers, and then play back theirs and the model response with an evaluation guide.  Or we can give them a document template and criteria, take their response, and ask them to evaluate theirs and another, again with a rubric.  This is richer practice and helps shift the learning burden to the learner, helping them become self-learners.   I reckon it’s a good thing. I’ll suggest that you consider this as another tool in your repertoire of ways to create meaningful practice. What do you think?

26 May 2015

Evolutionary versus revolutionary prototyping

Clark @ 8:14 am

At a recent meeting, one of my colleagues mentioned that increasingly people weren’t throwing away prototypes.  Which prompted reflection, since I have been a staunch advocate for revolutionary prototyping (and here I’m not talking about “the” Revolution ;).

When I used to teach user-centered design, the tools for creating interfaces were complex. The mantras were test early, test often, and I advocated Double Double P’s (Postpone Programming, Prefer Paper; an idea I first grabbed from Rob Phillips then at Curtin).  The reason was that if you started building too early in the design phase, you’d have too much invested to throw things away if they weren’t working.

These days, with agile programming, we see sprints producing working code, which then gets elaborated in subsequent sprints.  And the tools make it fairly easy to work at a high level, so it doesn’t take too much effort to produce something. So maybe we can make things that we can throw out if they’re wrong.

Ok, confession time, I have to say that I don’t quite see how this maps to elearning.  We have sprints, but how do you have a workable learning experience and then elaborate it?  On the other hand, I know Michael Allen’s doing it with SAM and Megan Torrance just had an article on it, but I’m not clear whether they’re talking storyboard, and then coded prototype, or…

Now that I think about it, I think it’d be good to document the core practice mechanic, and perhaps the core animation, and maybe the spread of examples.  I’m big on interim representations, and perhaps we’re talking the same thing. And if not, well, please educate me!

I guess the point is that I’m still keen on being willing to change course if we’ve somehow gotten it wrong.  Small representations are good, and increasing fidelity is fine, and so I suppose it’s okay if we don’t throw out prototypes often, as long as we do when we need to.  Am I making sense, or what am I missing?

12 May 2015

David McCandless #CALDC3 Keynote Mindmap

Clark @ 6:27 pm

David McCandless gave a graphically and conceptually insightful talk on the power of visualization at Callidus Cloud Connections.  He demonstrated the power of insight by tapping into the power of our pattern-matching cognitive architecture.

5 May 2015

Pushing back

Clark @ 7:46 am

In a recent debate with my colleague on the Kirkpatrick model, our host/referee asked me whether I’d push back on a request for a course. Being cheeky, I said yes, but of course I know it’s harder than that.  And I’ve been mulling the question, and trying to think of a perhaps more pragmatic (and diplomatic ;) approach.  So here’s a cut at it.

The goal is not to stop with just ‘yes’, but to follow up.  The technique is to drill in for more information under the guise of ensuring you’re making the right course. Of course, really you’re trying to determine whether there is a need for a course at all, or whether a job aid or checklist will do instead, and if so, what’s critical to success.  To do this, you need to ask some pointed questions while maintaining a professional and helpful demeanor.

You might, then, ask something like “what’s the problem you’re trying to solve?” or “what will the folks taking this course be able to do that they’re not doing now?”  The point is to start focusing on the real performance gap that you’re addressing (and unmasking it if they don’t really know).  You want to keep away from the information they think needs to be in the head, and focus in on what decisions people need to make that they can’t make now.

Experts can’t tell you what they actually do, or at least about 70% of it, so you need to drill in more about behaviors; at this point you’re really trying to find out what’s not happening that should be.  You can use the excuse that “I just want to make sure we do the right course” if there’s some push back on your inquiries, and you may also have to stand up for your requirements on the basis that you have expertise in your area, and they have to respect that just as you respect their expertise in theirs (cf. Jon Aleckson’s MindMeld).

If what you discover does end up being about information, you might ask “how fast will this information be changing?” and “how much of this is critical to making better decisions?”  It’s hard to get information into the head; it’s a futile effort if it’ll be out of date soon, and an expensive one if it’s large amounts of arbitrary detail. It’s also easy to think that information will be helpful (the nice-to-know as well as the must-know), but really you should be looking to put information in the world if you can. There are times when it has to be in the head, but not as often as your stakeholders and SMEs think.  Focus on what people will do differently.

You also want to ask “how will we know the course is working?”  You can ask about what change would be observed, and should talk about how you will measure it.  Again, there could be pushback, but you need to be prepared to stick to your guns.  If it isn’t going to lead to some measurable delta, they haven’t really thought it through.  You can help them here, doing some business consulting on ROI for them. And here it’s not a guise; you really are being helpful.

So I think the answer can be ‘yes’, but that’s not the end of the conversation. And this is the path to start demonstrating that you are about the business.  This may be the path that starts making your contribution to the organization strategic. You’ll have to be about more than efficiency metrics (cost/seat/hour; “may as well weigh ’em”) and about how you’re actually impacting the business. And that’s a good thing.  Viva la Revolucion!

30 April 2015

Activities for Integrating Learning

Clark @ 8:11 am

I’ve been working on a learning design that integrates developing social media skills with developing specific competencies, aligned with real work.  It’s an interesting integration, and I drafted a pedagogy that I believe accomplishes the task.  It draws heavily on the notion of activity-based learning.  For your consideration.

The learning process is broken up into a series of activities. Each activity starts with giving the learning teams a deliverable they have to create, with a deadline an appropriate distance out.  There are criteria they have to meet, and the challenge is chosen such that it’s within their reach, but out of their grasp.  That is, they’ll have to learn some things to accomplish it.

As they work on the deliverable, they’re supported. They may have resources available to review, ideally curated (and, across the curricula, their responsibility for curating their own resources is developed as part of handing off the responsibility for learning to learn).  There may be people available for questions, and they’re also being actively watched and coached (less as they go on).

Now, ideally the goal would be a real deliverable that would have an impact on the organization.  That, however, takes a fair bit of support to make a worthwhile investment. Depending on the ability of the learners, you may start with challenges that are like, but not necessarily, real challenges, such as evaluating a case study or working on a simulation.  The costs of mentoring go up with the consequences of the action, but so do the benefits, so it’s likely that the curriculum will get closer to live tasks as it progresses.

At the deadline, the deliverables are shared for peer review, presumably with other teams. In this instance, there is a deliberate intention to have more than one team, as part of the development of the social capabilities. Reviewing others’ work, initially with evaluation heuristics, is part of internalizing the monitoring criteria, on the path to becoming a self-monitoring and self-improving learner. Similarly, the freedom to share work for evaluation is a valuable move on the path to a learning culture.  Expert review will follow, to finalize the learning outcomes.

The intent is also that the conversations and collaborations be happening in a social media platform. This is part of helping the teams (and the organization) acquire social media competencies.  Sharing, working together, accessing resources, etc. are being used in the platform just as they are used for work. At the end, at least, they are being used for work!

This has emerged as a design that develops both specific work competencies and social competencies in an integrated way.  Of course, the proof is when there’s a chance to run it, but in the spirit of working out loud…your thoughts welcome.

28 April 2015

Got Game?

Clark @ 8:15 am

Why should you, as a learning designer, take a game design workshop?  What is the relationship between games and learning?  I want to suggest that there are very important reasons why you should.

Just so you don’t think I’m the only one saying it, in the decade since I wrote the book Engaging Learning: Designing e-Learning Simulation Games, there have been a large variety of books on the topic. Clark Aldrich has written three, at last count. James Paul Gee has pointed out how the semantic features of games match the way our brains learn, as has David Williamson Shaffer.  People like Kurt Squire, Constance Steinkuehler, Henry Jenkins, and Sasha Barab have been strong advocates of games for learning. And of course Karl Kapp has a recent book on the topic.  You could also argue that Raph Koster’s A Theory of Fun is another vote, given that his premise is that fun is learning. So I’m not alone in this.

But more specifically, why get steeped in it?  I want to give you three reasons: understanding engagement, understanding practice, and understanding design.  Not that you don’t know these, but I’ll suggest that there are depths you’re not yet incorporating into your learning, and you could and should.  After all, learning should be ‘hard fun’.

The difference between a simulation and a game is pretty straightforward.  A simulation is just a model of the world, and it can be in any legal state and be taken to any other.  A self-motivated and effective self-learner can use that to discover what they need to know.  But for specific learning purposes, we put that simulation into an initial state, and ask the learner to take it to a goal state, and we’ve chosen those so that they can’t do it until they understand the relationships we want them to understand. That’s what I call a scenario, and we typically wrap a story around it to motivate the goal.  We can tune that into a game.  Yes, we turn it into a game, but by tuning.
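The simulation/scenario distinction can be made concrete with a toy sketch. The state names and the API below are entirely my own invention, just to show that a scenario is nothing more than a simulation plus a chosen initial state and goal state:

```python
class Simulation:
    """A bare model of a world: states and the legal moves between them."""
    def __init__(self, transitions):
        self.transitions = transitions  # dict: state -> set of reachable states

    def step(self, state, next_state):
        if next_state not in self.transitions.get(state, set()):
            raise ValueError(f"can't go from {state} to {next_state}")
        return next_state

class Scenario:
    """The same simulation, pinned to an initial state and a goal state."""
    def __init__(self, sim, initial, goal):
        self.sim, self.state, self.goal = sim, initial, goal

    def act(self, next_state):
        self.state = self.sim.step(self.state, next_state)
        return self.state == self.goal  # True once the learner reaches the goal

# A trivial three-state world: the learner must get from "novice" to "competent".
world = Simulation({"novice": {"practicing"}, "practicing": {"competent"}})
run = Scenario(world, initial="novice", goal="competent")
```

The tuning (story, stakes, pacing) lives on top of this skeleton; the skeleton itself is just constrained exploration toward a goal.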

And that’s the important point about engagement. We can’t call it a game; only our players can tell us whether it’s a game or not. To achieve that goal, we have to understand what motivates our learners, what they care about, and figure out how to integrate that into the learning.  It’s about designing not a learning event, but a learning experience.  And, by studying how games achieve that, we can learn how to take our learning from mundane to meaningful.  Whether or not we have the resources and desire to build actual games, we can learn valuable lessons to apply to any of our learning design. It’s the emotional element most ID leaves behind.

I also maintain that, next to mentored live practice, games are the best thing going (and individual mentoring doesn’t scale well, while live practice can be expensive both to develop and particularly when mistakes are made).  Games build upon that by providing deep practice: embedding important decisions in a context that makes the experience as meaningful as when it really counts.  We use game techniques to heighten and deepen the experience, which makes it closer to live practice, reducing transfer distance. And we can provide repeated practice.  Again, even if we’re not able to implement full game engines, there are many important lessons to take to designing other learning experiences: how to design better multiple choice questions, the value of branching scenarios, and more.  Practical improvements that will increase engagement and improve outcomes.

Finally, game designers use design processes that have a lot to offer to formal learning design. Their practices in terms of information collection (analysis), prototyping and refinement, and evaluation are advanced by the simple requirement that their output is such that people will actually pay for the experience.  There are valuable elements that can be transferred to learning design even if you aren’t expecting to have an outcome so valuable you can charge for it.

As professionals, it behooves us to look to other fields with implications that could influence and improve our outcomes. Interface design, graphic design, software engineering, and more are all relevant areas to explore. So is game design, and it’s arguably the most relevant one we can.

So, if you’re interested in tapping into this, I encourage you to consider the game design workshop I’ll be running for the ATD Atlanta chapter on the 3rd of June. Their price is fair even if you’re not a chapter member, and it’s a great deal if you are.  Further, it’s a tried and tested format that’s been well received since I first started offering it. The night before, I’ll be busting myths at the chapter meeting.  I hope I’ll see you there!

21 April 2015

Why models matter

Clark @ 7:52 am

In the industrial age, you really didn’t need to understand why you were doing what you were doing, you were just supposed to do it.  At the management level, you supervised behavior, but you didn’t really set strategy. It was only at the top level where you used the basic principles of business to run your organization.  That was then, this is now.

Things are moving faster, competitors are able to counter your advances in months, there’s more information, and this isn’t decreasing.  You really need to be more agile to deal with uncertainty, and you need to continually innovate.   And I want to suggest that this advantage comes from having a conceptual understanding, a model of what’s happening.

There are responses we can train: specific ways of acting in context.  These aren’t what are most valuable any more.  Experts, with vast experience responding in different situations, abstract models that guide what they do, consciously or unconsciously (the latter is a problem, as it makes the models harder to get at; experts can’t tell you 70% of what they actually do!).  Most people, however, are in the novice-to-practitioner range, and they’re not necessarily ready to adapt to changes unless we prepare them.

What gives us the ability to react are having models that explain the underlying causal relations as we best understand them, and then support in applying those models in different contexts.  If we have models, and see how those models guide performance in context A, then B, and then we practice applying it in context C and D (with model-based feedback), we gradually develop a more flexible ability to respond. It’s not subconscious, like experts, but we can figure it out.

So, for instance, if we have the rationale behind a sales process, how it connects to the customer’s mental needs and the current status, we can adapt it to different customers.  If we understand the mechanisms of medical contamination, we can adapt to new vectors.  If we understand the structure of a cyber system, we can anticipate security threats. The point is that making inferences on models is a more powerful basis than trying to adapt a rote procedure without knowing the basis.

I recognize that I talk a lot in concepts, e.g. these blog posts and diagrams, but there’s a principled reason: I’m trying to give you a flexible basis, models, to apply to your own situation.  That’s what I do in my own thinking, and it’s what I apply in my consulting.  I am a collector of models, so that I have more tools to apply to solving my own or others’ problems.  (BTW, I use concept and model relatively interchangeably, if that helps clarify anything.)

It’s also a sound basis for innovation.  Two related models (ahem) of creativity say that new ideas are either the combination of two different models or an evolution of an existing one.  Our brains are pattern matchers, and the more we observe a pattern, the more likely it will remind us of something, a model. The more models we have to match, the more likely we are to find one that maps. Or one that activates another.

Consequently, it’s also one of the things I push as a key improvement to learning design. In addition to meaningful practice, give the concept behind it, the why, in the form of a model. I encourage you to look for the models behind what you do, the models in what you’re presented, and the models in what your learners are asked to do.

It’s a good basis for design, for problem-solving, and for learning.  That, to me, is a big opportunity.

15 April 2015

Cyborg Thinking: Cognition, Context, and Complementation

Clark @ 8:25 am

I’m writing a chapter about mobile trends, and one of the things I’m concluding with are the different ways we need to think to take advantage of mobile. The first one emerged as I wrote and kind of surprised me, but I think there’s merit.

The notion is one I’ve talked about before, about how what our brains do well, and what mobile devices do well, are complementary. That is, our brains are powerful pattern matchers, but have a hard time remembering rote information, particularly arbitrary or complicated details.  Digital technology is the exact opposite. So, that complementation whenever or wherever we are is quite valuable.

Consider chess.  When computers first played against humans, they didn’t do well.  As computers became more powerful, however, they finally beat the world champion. But they didn’t do it the way humans do; they did it by very different means: they couldn’t evaluate positions well, but they could calculate many more turns ahead and use simple heuristics to determine whether those were good plays.  Sheer computational ability eventually trumped the familiar pattern approach.  Now, however, there’s a new type of competition, where a person and a computer team up and play against another similar team. The interesting result is that the winner is not the best chess player, nor the best computer program, but the player who knows best how to leverage a chess companion.

Now map this to mobile: we want to design the best complement for our cognition. We want to end up having the best cyborg synergy, where our solution does the best job of leaving to the system what it does well, and leaving to the person the things we do well. It’s maybe only a slight shift in perspective, but it is a different view than designing to be, say, easy to use. The point is to have the best partnership available.

This isn’t just true for mobile, of course, it should be the goal of all digital design.  The specific capability of mobile, using sensors to do things because of when and where we are, though, adds unique opportunities, and that has to figure into thinking as well.  As does, of course, a focus on minimalism, and thinking about content in a new way: not as a medium for presentation, but as a medium for augmentation: to complement the world, not subsume it.

It’s my thinking that this focus on augmenting our cognition and our context with content that’s complementary is the way to optimize the uses of mobile. What’s your thinking?
