Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

30 June 2015

SME Brains

Clark @ 8:10 am

As I push for better learning design, I’m regularly reminded that working with subject matter experts (SMEs) is critical, and problematic.  What makes someone an SME creates challenges, but it also offers a uniquely valuable perspective.  I want to review some of those challenges and opportunities in one go.

One of the artifacts of how our brain works is that we compile knowledge away.  We start off with conscious awareness of what we’re supposed to be doing, and apply it in context.  As we practice, however, our expertise becomes chunked up, and increasingly automatic. As it does so, some of the elements get compiled away and are no longer available to conscious inspection. As Richard Clark of the Cognitive Technology Lab at USC lets us know, about 70% of what SMEs do isn’t available to their conscious mind.  Or, to put it another way, they literally can’t tell us what they do!

On the other hand, they have pretty good access to what they know. They can cite all the knowledge they have to hand. They can talk about the facts and the concepts, but not the decisions.  And, to be fair, many of them aren’t really good at the concepts, at least not from the perspective of being able to articulate a model that is of use in the learning process.

The problem then becomes a combination of both finding a good SME, and working with them in a useful way to get meaningful objectives, to start. And while there are quite rigorous ways (e.g. Cognitive Task Analysis), in general we need more heuristic approaches.

My recommendation, grounded in Sid Meier’s statement that “good games are a series of interesting decisions” and the recognition that making better decisions is likely to be the most valuable outcome of learning, is to focus rabidly on decisions.  When SMEs start talking about “they need to know X” and “they need to know Y”, the response is to ask leading questions like “what decisions do they need to be able to make that they don’t make now?” and “how does X or Y actually lead them to make better decisions?”

Your end goal here is to winnow the knowledge away and get to the models that will make a difference to the learner’s ability to act.  And when you’re pressed by a certification body to represent everything the SME tells you, you may need to push back.  I even advocate anticipating what the models and decisions are likely to be, and getting the SME to critique and improve them, rather than letting them start with a blank slate. This does require some smarts on the part of the designer, but when it works, it leverages the fact that it’s easier to critique than generate.

SMEs are also potentially valuable for recognizing where learners go wrong, particularly if they do some training themselves.  Most of the time, mistakes aren’t random, but are based upon some inappropriate models.  Ideally, you have access to these reliable mistakes, and the reasons why they’re made. Your SMEs should be able to help here. They should know the ways in which non-experts fail.  Some SMEs aren’t as good as others here, so again, as with access to the models, you need to be selective.

This is related to one of the two ways SMEs are your ally: stories.  Ideally, you’re equipped with great failures and great successes, which form the basis of your examples. An SME should have some of both that they can spin, and that you can use to build up an example. This may well be part of your process to get the concepts and practice down, but you do need to get these case studies.

There’s one other way that SMEs can help. The fact that they are experts means they somehow find the topic fascinating or rewarding enough to spend the requisite time to acquire expertise. You can, and should, tap into that. Find out what makes this particular field interesting, and use that as a way to communicate the intrinsic interest to learners. Are they playing detective, problem-solver, or protector? Figure out the appeal, and then build that into the practice stories you ask learners to engage in.

Working with SMEs isn’t easy, but it is critical. Understanding what they can do, and where their intrinsic barriers lie, gives you a better handle on getting what you need to help learners perform.  Those are some of my tips; what have you found that works?

26 June 2015

Personal processing

Clark @ 7:48 am

I was thinking about a talk on mobile I’m going to be giving, and realized that mobile is really about personal processing. Many of the things you can do at your desktop you can do with your mobile, even a wearable: answering calls, responding to texts.  Ok, so responding to email, looking up information, and more might require the phone for a keyboard (I confess to not being a big Siri user, mea culpa), but it’s still where/when/ever.

So the question then became “what doesn’t make sense on a mobile?”. And my thought was that industrial-strength processing doesn’t make sense on a mobile.  Processor-intensive work: video editing, 3D rendering, things that require either big screens or lots of CPU.  So, for instance, while word processing isn’t really CPU intensive, for some reason mobile word processors don’t seamlessly integrate outlining.  Yet I require outlining for large-scale writing, like book chapters or whole books. I don’t do 3D or video processing, but those would count too.

One of the major appeals of mobile is having versatile digital capabilities, the rote/complex complement to our pattern-matching brains (I really wanted to call my mobile book ‘augmenting learning’), with us at all times.  It makes us more effective.  And for many things – all those things we do with mobile such as looking up info, navigating, remembering things, snapping pictures, calculating tips – that’s plenty of screen and processing grunt.  It’s for personal use.

Sure, we’ll get more powerful capabilities (they’re touting multitasking on tablets now), and the boundaries will blur, but I still think there’ll be the things we do when we’re on the go, and the things we’ll stop and be reflective about.  We’ll continue to explore, but I think the things we do on the wrist or in the hand will naturally be different than those we do seated.   Our brains work in active and reflective modes, and our cognitive augment will similarly complement those needs.  We’ll have personal processing, and then we’ll have powerful processing. And that’s a good thing, I think. What think you?

 

23 June 2015

The Learning Styles Zombie

Clark @ 7:37 am

It’s June, and June is Learning Styles month for the Debunker’s Club.  Now, I’ve gone off on Learning Styles before (here, here, here, and here), but  it’s been a while, and they refuse to die. They’re like zombies, coming to eat your brain!

Let’s be clear: it’s patently obvious that learners differ.  They differ in how they work, what they pay attention to, how they like to interact, and more. Surely, it makes sense to adapt the learning to their style, so that we’re optimizing their outcome, right?

Er, no.  There is no consistent evidence that adapting to learning styles works.  Hal Pashler and colleagues, in a study commissioned for Psychological Science in the Public Interest (read: a non-partisan, unbiased, truly independent work), found (PDF) that there was no evidence that adapting to learning styles worked. They did a meta-analysis of the research out there, and reached this conclusion with statistical rigor.  That is, some studies showed positive effects and some showed negative, but across the body of studies suitably rigorous to be worth evaluating, there was no evidence that trying to adapt learning to learner characteristics had a definitive impact.
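
To make the “across the body of studies” point concrete, here’s a toy sketch (in Python, with made-up numbers, not Pashler’s data) of how a meta-analysis pools mixed results: individual studies can swing positive or negative, yet the inverse-variance weighted average can still sit indistinguishably near zero.

```python
# Toy meta-analysis: pool hypothetical per-study effect sizes with
# inverse-variance (fixed-effect) weighting. All numbers are illustrative.
effects = [0.30, -0.25, 0.10, -0.15, 0.05]   # hypothetical study effect sizes
variances = [0.04, 0.05, 0.03, 0.06, 0.02]   # hypothetical sampling variances

weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
# A confidence interval straddling zero means the body of evidence offers no
# support for the adaptation effect, even though single studies seemed to.
```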

At least part of the problem is that the instruments people use to characterize learning styles are flawed.  Surely, if learners differ, we can identify how?  Not with psychometric validity (that means instruments that stand up to statistical analysis). A commissioned study in the UK (like the one above: independent, etc.) led by Coffield evaluated a representative sample of instruments (including the ubiquitous MBTI, Kolb, and more), and found (PDF) only one that met all four standards of psychometric validity. And that one was a simple, single-dimension instrument.

So, what’s a learning designer to do?  Several things: first, design for what is being learned. Use the best learning design to accomplish the goal. Then, if the learner has trouble with that approach, provide help.  Second, do use a variety of ways of supporting comprehension.  The variety is good, even if the evidence to do so based upon learning style isn’t.  (So, for example, 4MAT isn’t bad, it’s just not based upon sound science, and why you’d want to pay to use a heuristic approach when you can do that for free is beyond me.)

Learners do differ, and we want them to succeed. The best way to do that is good learning experience design. We do have evidence that problem-based and emotionally aware learning design helps.  We know we need to start with meaningful objectives, create deep practice, ground in good models, and support with rich examples, while addressing motivation, confidence, and anxiety.  And using different media maintains attention and increases the likelihood of comprehension.  Do good learning design, and please don’t feed the zombie.


18 June 2015

Why Work Out Loud? (for #wolweek)

Clark @ 8:06 am

Why should one work out loud (aka Show Your Work)?  Certainly, there are risks involved.  You could be wrong.  You could have to share a mistake. Others might steal your ideas.  So why would anyone want to be Working Out Loud?  Because the risks are trumped by the benefits.

Working out loud is all about being transparent about what you’re doing.  The benefits of this are multiple. First, others know what you’re doing, and can help. They can provide pointers to useful information, they can provide tips about what worked, and didn’t, for them, and they’re better prepared for what will be forthcoming.

Those risks? If you’re wrong, you can find out before it’s too late.  If you share a mistake, others don’t have to make the same one.  If you put your ideas out there, they’re on record if someone tries to steal them.  And if someone else uses your good work, it’s to the general benefit.

Now, there are times when this can be bad. If you’re in a Miranda organization, where anything you say can be held against you, it may not be safe to share.  If your employer will take what you know and then let you go (without realizing, of course, that there’s more there), it’s not safe.  Not all organizations are ready for sharing your work.

Organizations, however, should be interested in creating an environment where working out loud is safe.  When folks share their work, the organization benefits.  People know what others are working on. They can help one another.  The organization learns faster.  Make it safe to share mistakes, not for the sake of the mistake, but for the lesson learned; so no one else has to make the same mistake!

It’s not quite enough to just show your work, however; you really want to ‘narrate’ your work. So working out loud is not just about showing what you’re doing, but also explaining why.  Letting others see why you’re doing what you’re doing helps them either improve your thinking or learn from it.  So not just your work output improves; your continuing ability to work gets better too!

You can blog your thoughts, microblog what you’re looking at, or make your interim representations available as collaborative documents; there are many ways to make your work transparent. This blog, Learnlets, exists for just that purpose of thinking out loud: so I can get feedback and input, and others can benefit.  Yeah, there are risks (I have seen my blog purloined without attribution), but the benefits outweigh the risks.  That’s as an independent, but imagine if an organization made it safe to share; the whole organization learns faster. And that’s the key to the continual innovation that will be the only sustainable differentiator.

Organizations that work together effectively are organizations that will thrive.  So there are personal benefits and organizational benefits.  And I personally think this is a role for L&D (this is part of the goal of the Revolution). So, work out loud about your efforts to work out loud!

#itashare

17 June 2015

Embrace Plan B

Clark @ 7:56 am

The past two weeks, I’ve been on the road (hence the paucity of posts).  And they’ve been great opportunities to engage around interesting topics, but also have provided some learning opportunities (ahem).  The title of this post, by the way, came from m’lady, who was quoting what a senior Girl Scout said was the biggest lesson she learned from her leader, “to embrace Plan B” ;).

So two weeks ago I was visiting a client working on upping their learning game. This is a challenge in a production environment, but as I discussed many times in posts over the second half of 2014 and some this year, I think there are some serious actions that can be taken.  What is needed are better ways to work with SMEs, better constraints around what makes useful content, and perhaps most importantly what makes meaningful interaction and practice.  I firmly believe that there are practical ways to get serious elearning going without radical change, though some initial hiccups will be experienced.

This past week I spoke twice. First on a broad spectrum of learning directions to a group that was doing distance learning and wanted to take a step back and review what they’d been doing and look for improvement opportunities. I covered deeper learning, social learning, meta-learning, and more. Then I went beyond and talked about 70:20:10, measurement, games and simulations, mlearning, the performance ecosystem, and more.  I then moved on to a separate (and delightful) event in Vancouver to promote the Revolution.

It was the transition between the two events last week that threw me. Plan A was to fly back home on Tuesday, and then fly on to Vancouver on Wednesday morning.  But, well, life happened.  My flights were delayed (thanks, American) both going to and coming back from the first engagement, and in each case the first leg was late enough that I missed the connection. On the way out I just got in later than I expected (leading to 4.5 hours of sleep before a long and detailed presentation).  But on the way back, I missed the last connecting flight home.  And this had several consequences.

So, instead of spending Tuesday night in my own bed, and repacking for the next day, I spent the night in the Dallas/Fort Worth airport.  Since they blamed it on weather (tho’ if the incoming flight had been on time, it might’ve gotten out in time to avoid the storm), they didn’t have any obligation to provide accommodation, but there were cots and blankets available. I tried to pull into a dark and quiet place, but most of the good ones were taken already. I found a boarding gate that was out of the way, but it was bright and loud.  I gave up after an hour or so and headed off to another area, where I found a lounge where I could pull together a couple of armchairs and managed to doze for 2.5 or so hours, before getting up and on the hunt for some breakfast.  Lesson: if something’s not working, change!

I caught a flight back home in just enough time to catch the next one up to Vancouver. The problem was, I wasn’t able to swap out my clothes, so I was desperately in need of some laundry.  Upon arriving, I threw one of the shirts, socks, etc into a sink and gave them a wash and hung them up. (I also took a shower, which was not only a necessity after a rough night but a great way to gather myself and feel a bit more human).  The next morning, as I went to put on the shirt, I found a stain!  I couldn’t get up in front of all those people with a stained shirt. Plan B was out the door. Also, the other shirt had acquired one too!  Plan C on the dust heap. Now what?  Fortunately, my presentation was in the afternoon, but I needed to do something.

So I went downstairs and found a souvenir shop in the hotel, but the shirts were all a wee bit too loud.  I didn’t really want to pander to the crowd quite so egregiously. I asked at the hotel desk if there was a place I could buy a shirt within walking distance, and indeed there was.  I was well and truly on Plan D by this time. So I hiked on out to a store and fortunately found another shirt I could throw on.  Lesson: keep changing!

I actually made the story part of my presentation.  I made the point that, just as in my case, organizations need not only optimal execution of their plans, but also the ability to innovate when the plan isn’t working.  And L&D can (and should) play a role in this.  So, help your people be prepared to create and embrace Plan B (and C and… however many adaptations they need).

And one other lesson for me: be better prepared for tight connections to go awry!

9 June 2015

Content/Practice Ratio?

Clark @ 6:06 am

I end up seeing a lot of different elearning. And, I have to say, despite my frequent disparagement, it’s usually well written; the problem seems to be in the starting objectives.  But compared to learning that really has an impact (medical, flight, or military training, for instance), it seems woefully under-practiced.

So, I’d roughly (and generously) estimate that the ratio is around 80:20 for content:practice.  And, in the context of moving from ‘getting it right’ to ‘not getting it wrong’, that seems woefully inadequate.  So, two questions: do we just need more practice, or do we also have too much content? I’ll put my money on the latter, that is: both.

To start, in most of the elearning I see (even stuff I’ve had a role in, for reasons out of my control), the practice isn’t enough.  Of course, it’s largely wrong, being focused on reciting knowledge as opposed to making decisions, but beyond that there just isn’t enough of it.  That’s ok if you know they’ll be applying it right away, but that usually isn’t the case.  We really don’t scaffold the learner from their initial capability, through more and more complex scenarios, until they’re at the level of ability we want: performing the decisions they need to be making in the workplace with enough flexibility and confidence, and with sufficient retention until it’s actually needed.  Of course, it shouldn’t be the event model, and that practice should be spaced over time.  Yes, designing practice is harder than just delivering content, but it’s not that much harder to develop more than just to develop some.

However, I’ll argue we’re also delivering too much content.  I’ve suggested in the past that I can rewrite most content to be 40% – 60% shorter than it starts (including my own; it takes me two passes).  Learners appreciate it.  We want a concise model and some streamlined examples, but then we should get them practicing, and let the practice drive them to the content.  You don’t have to prepackage it as much, either; you can give them some source materials that they’ll be motivated to use, and even some guidance (read: job aids) on how to perform.

And, yes, this is a tradeoff: how do we find a balance that both yields the outcomes we need but doesn’t blow out the budget?  It’s an issue, but I suggest that, once you get in the habit, it’s not that much more costly.  And it’s much more justifiable, when you get to the point of actually measuring your impact.  Which many orgs aren’t doing yet.  And, of course, we should.

The point is that I think our ratio should really be 50:50, if not 20:80, for content:practice.  That’s if it matters; and if it doesn’t, why are you bothering? And if it does, shouldn’t it be done right?  What ratios do you see? And what ratios do you think make sense?

3 June 2015

Disrupting Education

Clark @ 6:01 am

The following was prompted by a discussion on how education has the potential to be disrupted.  And I don’t disagree, but I don’t see the disruptive forces marshaling that I think it will take.  Some thoughts I lobbed in another forum (lightly edited):

Mark Warschauer, in his great book Learning in the Cloud (which has nothing to do with ‘the cloud’), pointed out that there are only 3 things wrong with public education: the curricula, the pedagogy, and the way they use tech; other than that they’re fine. Ahem. And much of what I’ve read about disruption seems flawed in substantial ways.

I’ve seen the for-profit institutions, and they’re flawed because even if they did understand learning (and they don’t seem to), they’re handicapped: they have to dance to the ridiculous requirements of accrediting bodies. Those bodies don’t understand why SMEs aren’t a good source of objectives, so the learning goals are not useful to the workplace. It’s not the profit requirement per se, because you could do good learning, but you have to start with good objectives, and then understand the nuances that make learning effective. WGU is at least being somewhat disruptive on the objectives.

MOOCs don’t yet have a clear business model; right now they’re subsidized by either public institutions or business experiments.  And the pedagogy doesn’t really scale well: their objectives also tend to be knowledge-based, and to have a meaningful outcome they’d need to be application-based, and you can’t really evaluate that at scale (unless you get *really* nuanced about peer review, but even then you need some scrutiny that just doesn’t scale). For example, just because you learn to do AI programming doesn’t mean you’re ready to be an AI programmer.  That’s the xMOOCs; the cMOOCs have their own problems with expectations around self-learning skills.  Lovely dream, but it’s not the world I live in, at least yet.

As for things like the Khan Academy, well, it’s a nice learning adjunct, and they’re moving to a more complete learning experience, but they’re still largely tied to the existing curricula (e.g. doing what Jonassen railed against: the problems we give kids in schools bear no relation to the problems they’ll face in the real world).

The totally missed opportunity is the possibility of layering 21C skills across all of this in a systematic and developable way. If we could get better curricula, focused on developing applicable skills and meta-skills, with a powerful pedagogy, in a pragmatically deliverable way…

Lots of room for disruption, but it’s really a bigger effort than I’ve yet seen anyone willing to take on. And yet, if you did it right, you’d have an essentially unassailable barrier to entry: real learning done at scale. However, I’m inclined to think that it’s more plausible in the countries that increasingly ‘get’ that higher ed is an investment in the future of a country, are making it free, and could make it a ‘man on the moon’ program. I’m willing, even eager, to be wrong on this, so please let me know what you think!

2 June 2015

Model responses

Clark @ 8:12 am

I was thinking about how to make meaningful practice, and I had a thought that was tied to some previous work that I may not have shared here.  So allow me to do that now.

Ideally, our practice has us performing in ways that are like the ways we perform in the real world.  While it is possible to make alternatives available that represent different decisions, sometimes there are nuances that require us to respond in richer ways. I’m talking about things like writing up an RFP, or a response letter, or creating a presentation, or responding to a live query. And while these are desirable things, they’re hard to evaluate.

The problem is that our technology to evaluate freeform text is limited, let alone anything more complex.  While there are tools like latent semantic analysis that can be developed to read text, they’re complex to develop, and they won’t work on spoken responses, let alone spreadsheets or slide decks (common forms of business communication).  Ideally, people would evaluate them, but that’s not a very scalable solution if you’re talking about mentors, and even peer review can be challenging for asynchronous learning.
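
For the curious, here’s a minimal sketch of what an LSA-style comparison might look like in Python with scikit-learn; the reference corpus, the responses, and the tiny latent space are all illustrative assumptions, and a real system would need a far larger training corpus (and still wouldn’t handle speech, spreadsheets, or slide decks).

```python
# Sketch: score a learner's free-text answer against a model response by
# projecting both into a small latent semantic space. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reference_corpus = [  # hypothetical reference answers used to build the space
    "State the recommendation first, then support it with concrete examples.",
    "Open with the decision you want, back it with evidence, and restate it.",
    "Summarise the situation, list the options, and argue for one of them.",
    "Describe the audience, tailor the message, and end with a clear ask.",
]
model_response = ("State your recommendation up front, give two or three "
                  "examples that support it, and close by restating the ask.")
learner_response = ("I'd lead with my recommendation, add a couple of examples "
                    "from recent projects, and finish by repeating the request.")

tfidf = TfidfVectorizer(stop_words="english").fit_transform(
    reference_corpus + [model_response, learner_response])
# Reduce to a tiny latent space; real systems use far larger corpora/dimensions.
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

score = cosine_similarity(lsa[-2:-1], lsa[-1:])[0, 0]
print(f"Semantic similarity to the model response: {score:.2f}")
```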

An alternative is to have the learner evaluate themselves.  We did this in a course on speaking, where learners ultimately dialed into an answering machine, listened to a question, and then spoke their responses.  They could then listen to a model response as well as their own.  Further, we could provide a guide, an evaluation rubric, to help the learner evaluate their response with respect to the model response (e.g. “did you remember to include a statement and examples?”).
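
As a rough illustration of that rubric-guided self-check, something as simple as the sketch below would do; the criteria are hypothetical, not the ones used in the course described above.

```python
# Hypothetical self-evaluation rubric presented after the learner has heard
# both their own recording and the model response. Criteria are illustrative.
rubric = [
    "Did you open with a clear statement of your position?",
    "Did you include at least two supporting examples?",
    "Did you close by restating the decision or action you want?",
]

def self_evaluate(criteria):
    """Walk the learner through each criterion and collect yes/no ratings."""
    ratings = {}
    for criterion in criteria:
        answer = input(f"{criterion} (y/n): ").strip().lower()
        ratings[criterion] = answer.startswith("y")
    return ratings

if __name__ == "__main__":
    results = self_evaluate(rubric)
    missed = [c for c, ok in results.items() if not ok]
    if missed:
        print("Listen to the model response again and note how it handles:")
        for criterion in missed:
            print(f" - {criterion}")
    else:
        print("Good: your self-check covered every rubric point.")
```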

This would work with more complex items, too.  “Here’s a model spreadsheet (or slide deck, or document); how does it compare to yours?”  This is very similar to the types of social processing you’d get in a group, where you see how someone else responded to the assignment, and then evaluate.

This isn’t something you’d likely do straight off; you’d probably scaffold the learning with simple tasks first.  For instance, in the example I’m talking about we first had them recognize well- and poorly-structured responses, then create them from components, and finally create them in text before having them call into the answering machine. Even then, they first responded to questions they knew they were going to get before tasks where they didn’t know the questions.  But this approach serves as an enriching practice on the way to live performance.

There is another benefit besides allowing the learner to practice in richer ways and still get feedback. In the process of evaluating the model response against their own with an evaluation rubric, the learner internalizes the criteria and the process of evaluation, becoming a self-evaluator and consequently a self-improving learner.  That is, they use a rubric to evaluate their response and the model response. As they go forward, that rubric can continue to guide them as they move out into a performance situation.

There are times where this may be problematic, but increasingly we can and should mix media and use technology to help us close the gap between the learning practice and the performance context. We can prompt, record learner answers, and then play back theirs and the model response with an evaluation guide.  Or we can give them a document template and criteria, take their response, and ask them to evaluate theirs and another, again with a rubric.  This is richer practice and helps shift the learning burden to the learner, helping them become self-learners.   I reckon it’s a good thing. I’ll suggest that you consider this as another tool in your repertoire of ways to create meaningful practice. What do you think?

27 May 2015

Attention to connections

Clark @ 8:16 am

A colleague was describing his journey, and attributed much of his success (rightly) to his core skills, including his creativity. I was resonating with his list until I got to ‘attention to detail’, and it got me thinking.

Attention to detail is good, right?  We want people to sweat the nuances, and I certainly am inspired by folks who do that. But there are times when I don’t want to be responsible for the details. To be sure, these are times when it doesn’t make sense to have me do the details. For example, once I’ve helped a client work out a strategy, the implementation really largely should be on them, and I might take some spot reviews (far better than just helping them start and abandoning them).

So I wondered about what the alternative would be. Now the obvious thought is lack of attention to detail, which might initially be negative, but could there be a positive connotation?  What came to me was attention to connections. That is, seeing how what’s being considered might map to a particular conceptual model, or a related field. Seeing how it’s contextualized, and bringing together solutions.    Seeing the forest, not the trees.

I’m inclined to think that there are benefits to those who see connections, just as there is a need for those who can plug away at the details.  And it’s probably contextual; some folks will be one in one area and the other in another.  For example, there are times I’m too detail-oriented (e.g. fighting for conceptual clarity), and times when I’m missing connections (particularly in reading the politics of a situation).  And vice-versa: times when I’m not detail-oriented enough, and yet very good at seeing connections.

They’re probably not ends of a spectrum, either, as I’ve moved away from that view in practical matters (hmm, wonder what that implies about the Big 5?). Take introvert and extrovert: from a learning perspective it’s about how well you learn on your own versus how well you learn with others, and you could be good or bad at either or both.  Similarly here, you could be able to do both (as with my colleague: he’s one of the smartest folks I know, demonstrably innovative and connecting as well as able to sweat the details, whether writing code or composing music).

Or maybe this is all a post-hoc justification for wanting to play out at the conceptual frontier, but I’m not going to apologize for that.  It seems to work…

26 May 2015

Evolutionary versus revolutionary prototyping

Clark @ 8:14 am

At a recent meeting, one of my colleagues mentioned that increasingly people weren’t throwing away prototypes.  Which prompted reflection, since I have been a staunch advocate for revolutionary prototyping (and here I’m not talking about “the” Revolution ;).

When I used to teach user-centered design, the tools for creating interfaces were complex. The mantras were test early, test often, and I advocated Double Double P’s (Postpone Programming, Prefer Paper; an idea I first grabbed from Rob Phillips then at Curtin).  The reason was that if you started building too early in the design phase, you’d have too much invested to throw things away if they weren’t working.

These days, with agile programming, we see sprints producing working code, which then gets elaborated in subsequent sprints.  And the tools make it fairly easy to work at a high level, so it doesn’t take too much effort to produce something. So maybe we can make things that we can throw out if they’re wrong.

Ok, confession time: I have to say that I don’t quite see how this maps to elearning.  We have sprints, but how do you have a workable learning experience and then elaborate it?  On the other hand, I know Michael Allen’s doing it with SAM and Megan Torrance just had an article on it, but I’m not clear whether they’re talking storyboard and then coded prototype, or…

Now that I think about it, I think it’d be good to document the core practice mechanic, and perhaps the core animation, and maybe the spread of examples.  I’m big on interim representations, and perhaps we’re talking the same thing. And if not, well, please educate me!

I guess the point is that I’m still keen on being willing to change course if we’ve somehow gotten it wrong.  Small representations are good, and increasing fidelity is fine, and so I suppose it’s okay if we don’t throw out prototypes often, as long as we do when we need to.  Am I making sense, or what am I missing?

