Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

17 December 2014

Why L&D?

Clark @ 8:33 am

One of the concerns I hear is whether L&D still has a role.  The litany is that they’re so far out of touch with their organization, and science, that it’s probably  better to let them die an unnatural death than to try to save them. The prevailing attitude of this extreme view is that the Enterprise Social Network is the natural successor to the LMS, and it’s going to come from operations or IT rather than L&D.  And, given that I’m on record suggesting that we revolutionize L&D rather than ignoring it, it makes sense to justify why.  And while I’ve had other arguments, a really good argument comes from my thesis advisor, Don Norman.

Don’s on a new mission, something he calls DesignX, which is scaling up design processes to deal with “complex socio-technological systems”.   And he recently wrote an article about why DesignX that also makes a good case for L&D.  Before I get there, however, I want to point out two other facets of his argument.

The first is that often design has to go beyond science. That is, while you use science when you can, when you can’t you use inferences from theory, intuition, and more to fill in the gaps, hoping you’ll find out later (based upon subsequent science, or your own data) that it was the right choice.  I’ve often had to do this in my designs, where, for instance, I think research hasn’t gone quite far enough in understanding engagement.  I’m not in a research position at the moment, so I can’t do the research myself, but I continue to look at what can be useful.  And this is true of moving L&D forward: while we have some good directions and examples, we’re still ahead of the documented research.  He points out that system science and service thinking are science-based, but suggests design needs to go beyond those approaches.  To the extent L&D can, it should draw from science, but also from theory, and keep moving forward regardless.

His other important point, to me, is that he is talking about systems.  He points out that design as a craft works well on simple problems, but he wants to scale design to the level of systemic solutions.  A noble goal, and one I think L&D needs to consider as well.  We have to go beyond point solutions – training, job aids, etc. – to performance ecosystems, and this won’t come without a different mindset.

Perhaps the most interesting point, however, the one that triggered this post, was about why designers are needed.  Others focus on efficiency and effectiveness, he argued, but designers bring empathy for the users as well.  And I think this is really important.  As I used to say to the budding software engineers I was teaching interface design to: “don’t trust your intuition, you don’t think like normal people”.  Similarly, the reason I want L&D in the equation is that they should be the ones who really understand how we think, work, and learn, and consequently they should be the ones facilitating performance and development. It takes empathy with users to facilitate them through change, to help them deal with the fears and anxieties that come with new systems, and to understand what a good learning culture is and help foster it.

Who else would you want to be guiding an organization in achieving effectiveness in a humane way?   So Don’s provided, to me, a good argument for why we might still want L&D (well, P&D really ;) in the organization. Well, as long as they’re also addressing the bigger picture and not just pushing info dump and knowledge test.  Does this make sense to you?

#itashare #revolutionizelnd

16 December 2014

Challenges in engaging learning

Clark @ 8:05 am

I’ve been working on moving a team to deeper learning design.  The goal is to practice what I preach, and make sure that the learning design is competency-aligned, activity-based, and model-driven.  Yet, doing it in a pragmatic way.

And this hasn’t been without its challenges.  I presented my vision to the team, we worked out a process, and started coaching the team during development.  In retrospect, this wasn’t proactive enough.  There were a few other hiccups.

We’re currently engaged in a much tighter cycle of development and revision, and now feel we’re getting close to the level of effectiveness and engagement we need.  Whether a) it’s really better, and b) we can replicate it, let alone scale it, is still an open question.

At core are a few elements. For one, a rabid focus on what learners are doing is key.  What do they need to be able to do, and what contexts do they need to do it in?

The competency-alignment focus is on the key tasks that they have to do in the workplace, and making sure we’re preparing them across pre-class, in-class, and post-class activities to develop that ability.  A key focus is having them make the decision in the learning experience that they’ll have to make afterward.

I’m also pushing very hard on making sure that there are models behind the decisions.  I’m trying hard to avoid arbitrary categorizations, and find the principles that drove those categorizations.

Note that all this is not easy.  Getting the models is hard when the resources provided don’t include that information.  Avoiding presenting just knowledge and definitions is hard work.  The tools we use make certain interactions easy, and other ones not so easy.  We have to map meaningful decisions into what the tools support.  We end up making  tradeoffs, as do we all.  It’s good, but not as good as it could be.  We’ll get better, but we do want to run in a practical fashion as well.

There are more elements to weave in: layering on some general biz skills is embryonic.  Our use of examples needs to get more systematic.  As does our alignment of learning goal to practice activity.  And we’re struggling to have a slightly less didactic and earnest tone; I haven’t worked hard enough on pushing a bit of humor in, tho’ we are ramping up some exaggeration.  There’s only so much you can focus on at one time.

We’ll be running some student tests next week before presenting to the founder.  Feeling mildly confident that we’ve gotten a decent take on quality learning design with suitable production value, but there is the barrier that the nuances of learning design are subtle. Fingers crossed.

I still believe that, with practice, this becomes habit and easier.  We’ll see.

4 December 2014

Getting Models

Clark @ 8:25 am

In trying to shift from a traditional elearning approach to a more enlightened one, a deeper one, you are really talking about viewing things differently, which is non-trivial. And then, even if you know you want to do better, you still need some associated skills. Take, for example, models.

I’ve argued before that models are a better basis for action, for making better decisions.  Arbitrary knowledge is hard to recollect, and consequently brittle.  We need a coherent foundation upon which to base decisions, and arbitrary information doesn’t help.  If I see a ‘click to learn more’, for instance, I have a good clue that someone’s presenting arbitrary information.  However, as I concluded in the models article, “It’s not always there, nor even easily inferable.”  Which is a problem that I’ve been wrestling with.  So here’re my interim thoughts.

Others have counseled that not just any Subject Matter Expert (SME) will do.  They may be able to teach material with their stories and experience, and they can certainly do the work, but they may not have a conscious model available to guide novices.  So I’ve heard that you have to find one who does. If you don’t, and you don’t have good source material, you’re going to have to do the work yourself.  You might be able to find a model in a helpful place like Wikipedia (and please join us in donating to help keep it going), but otherwise you’re going to have to do the hard yards.

Say you’re wrestling with a list of things, like attacks on networks, or impacts on blood pressure.  There is a laundry list of them, and there may seem to be no central order.  So what do you do?  Well, in these cases where I don’t have one, I make one.

For instance, in attacks on networks, it seems that the inherent structure of the network provides an overarching framework for vulnerabilities.  Networks can be attacked digitally through password cracking or software vulnerabilities.  The data streams could also be hacked either physically connecting to wires or intercepting wireless signals.  Socially, you can trick people into doing wrong things too.  Similarly with blood pressure, the nature of the system tells us that constricted or less flexible vessels (e.g. from aging) will increase blood pressure. Decreased volume in the system will decrease, etc.
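To make the contrast concrete, here’s a minimal sketch (the category and item names are mine, purely for illustration, not from any security curriculum): the same laundry list of attacks, first flat, then hung off the structure of the system being attacked.

```python
# The "laundry list": arbitrary items with no apparent order.
flat_list = [
    "password cracking",
    "software vulnerabilities",
    "wiretapping",
    "wireless interception",
    "social engineering",
]

# The same items organized by where the network can be attacked:
# the digital systems, the data in transit, or the people using it.
model = {
    "digital": ["password cracking", "software vulnerabilities"],
    "data stream": ["wiretapping", "wireless interception"],
    "social": ["social engineering"],
}

# The model covers exactly the same content, but now each item hangs
# off a principle a learner can use to infer cases that weren't taught.
assert sorted(sum(model.values(), [])) == sorted(flat_list)
```

The point of the sketch is only the shape: nothing new is added, yet the structured version gives learners something to reason from rather than memorize.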

The point is, I’m using the inherent structure to provide a framework that wasn’t given. Is it more than the minimum?  Yes.  But I’ll argue that if you want the information to be available when necessary, or rather that learners will be able to make the right decisions, this is the most valuable thing you can do. And it might take less effort overall, as you can teach the model and support making good inferences more efficiently than teaching all the use cases.

And is this a sufficient approach?  I can’t say that; I haven’t spent enough time on other content. So at this point treat it like a heuristic.  However, it gives you something you can at least take to a SME and have them critique and improve it (which is easier than trying to extract a model whole-cloth ;).

Now there might also be the case that there just isn’t an organizing principle (I’m willing to concede that, for now…). Then, you may  need simply to ask your learners to do some meaningful processing on the material.  Look, if you’re presenting it, then you’re expecting them to remember it. Presenting arbitrary information isn’t going to do that. If they need to remember it, have them process it.  Otherwise, why present it at all?

Now, this is only necessary when you’re trying to do formal learning; it might be that you don’t have to get it in folks’ heads and can put it in the world instead. Do that if you can.   But I believe that what will make a bigger difference for learners, for performers, will be the ability to make better decisions. And, in our increasingly turbulent times, that will come from models, not rote information.  So please, if you’re doing formal learning, do it right, and get the models you need. Beg, borrow, steal, or make them, but get them.  Please?

25 November 2014

Transformative Experiences

Clark @ 8:05 am

Last week I had the pleasure of keynoting Charles Sturt University’s annual Education conference.  They’re in the process of rethinking what their learning experience should be, and I talked about the changes we’re trying to make at the Wadhwani Foundation.

I was reminded of previous conversations about learning experience design and the transformative experience.   And I have argued in the past that what would make an optimal value proposition (yes, I used that phrase) in a learning market would be to offer a transformative learning experience.  Note that this is not just about the formal learning experience, but has two additional components.

Now, it does start with a killer learning experience.  That is, activity-based, competency-driven, model-guided, with lean and compelling content.  Learners need role-plays and simulations to be immersed in practice, and scaffolded with reflection to develop their flexible ability to apply these abilities going forward.  But wait, there’s more!

As a complement, there needs to be a focus on developing the learner as well as their skills. That is, layering on the 21st Century skills: the ability to communicate, lead, problem-solve, analyze, learn, and more.  These need to be included and developed across the learning experience.  So learners not only get the skills they need to succeed now, but to adapt as things change.

The third element is to be a partner in their success.  That is, don’t give them a chance to sink or swim on the basis of the content, but to look for ways in which learners might be struggling with other issues, and work hard to ensure they succeed.

I reckon that anyone capable of developing and delivering on this model provides a model that others can only emulate, not improve upon.  We’re working on the first two initially at the Foundation, and hopefully we’ll get to the latter soon.  But I reckon it’d be great if this were the model all were aspiring to.  Here’s hoping!

11 November 2014

Learning Problem-solving

Clark @ 8:33 am

While I loved his presentation, his advocacy for science, and his style, I had a problem with one thing Neil deGrasse Tyson said during his talk. Now, he’s working on getting deeper into learning, but this wasn’t off the cuff, this was his presentation (and he says he doesn’t say things publicly until he’s ready). So while it may be that he skipped the details, I can’t. (He’s an astrophysicist, I’m the cognitive engineer ;)

His statement, as I recall it, was that math wires brains to solve problems. And yes, it can, but with two caveats.  There’s an old canard that they used to teach Latin because it taught you how to think, and it didn’t actually work that way. Learning Latin taught you Latin, not how to think or learn, unless something else happened.   Having Latin isn’t a bad thing, but it’s not obviously a part of a modern curriculum.

Similarly, doing math problems isn’t necessarily going to teach you how to do more general problem-solving.  Particularly doing the type of abstract math problems that are the basis of No Child Left Untested, er Behind.  What you’ll learn is how to do abstract math problems, which isn’t part of most job descriptions these days.  Now, if you want to learn to solve meaningful math problems, you have to be given meaningful math problems, as the late David Jonassen told us.  And the feedback has to include the problem-solving process, not just the math!

Moreover, if you want to generalize to other problem-solving, like science or engineering, you need explicit scaffolding to reflect on the process and the generality across domains.  So you  need some problem-solving in other domains to abstract and generalize across.  Otherwise, you’ll get good at solving real world math problems, which is necessary but not sufficient.  I remember my child’s 2nd grade teacher who was talking about the process they emphasized for writing – draft, get feedback, review, refine – and I pointed out that was good for other domains as well: math, drawing, etc.  I saw the light go on.  And that’s the point, generalizing is valuable  in learning, and facilitating that generalization is valuable in teaching.

I laud the efforts to help folks understand why math and science are important, but you can’t let people go away thinking that doing abstract math problems is a valuable activity.  Let’s get the details right, and really accelerate our outcomes.

5 November 2014

#DevLearn 14 Reflections

Clark @ 9:57 am

This past week I was at the always great DevLearn conference, the biggest and arguably best yet.  There were some hiccups in my attendance, as several blocks of time were taken up with various commitments both work and personal, so for instance I didn’t really get a chance to peruse the expo at all.  Yet I attended keynotes and sessions, as well as presenting, and hobnobbed with folks both familiar and new.

The keynotes were arguably even better than before, and a high bar had already been set.

Neil deGrasse Tyson was eloquent and passionate about the need for science and the lack of match between school and life.    I had a quibble about his statement that doing math teaches problem-solving, as it takes the right type of problems (and Common Core is a step in the right direction) and it takes explicit scaffolding.  Still, his message was powerful and well-communicated. He also made an unexpected connection between Women’s Liberation and the decline of school quality that I hadn’t considered.

Beau Lotto also spoke, linking how our past experience alters our perception to necessary changes in learning.  While I was familiar with the beginning point of perception (a fundamental part of cognitive science, my doctoral field), he took it in very interesting and useful direction in an engaging and inspiring way.  His take-home message: teach not how to see but how to look, was succinct and apt.

Finally, Belinda Parmar took on the challenge of women in technology, and documented how small changes can make a big difference. Given the madness of #gamergate, the discussion was a useful reminder of inequity in many fields and for many.  She left lots of time to have a meaningful discussion about the issues, a nice touch.

Owing to the commitments, both personal and speaking, I didn’t get to see many sessions. I had the usual mix of good ones, and a not-so-good one (though I admit my criteria are kind of high).  I like that the Guild balances known speakers and topics with taking some chances on both.  I also note that most of the known speakers are folks I respect who continue to think ahead and bring new perspectives, even if in a track representing their work.  As a consequence, the overall quality is always very high.

And the associated events continue to improve.  The DemoFest was almost too big this year: so many examples that it’s hard to know where to start; you want to be fair and see them all, but it’s just too monumental.  Of course, the Guild had a guide that grouped them, so you could drill down into the ones you wanted to see.  The expo reception was a success as well, and the various snack breaks suited the opportunity to mingle.  I kept missing the ice cream, but perhaps that’s for the best.

I was pleased to have the biggest turnout yet for a workshop, and take the interest in elearning strategy as an indicator that the revolution is taking hold.  The attendees were faced with the breadth of things to consider across advanced ID, performance support, eCommunity, backend integration, decoupled delivery, and then were led through the process of identifying elements and steps in the strategy.  The informal feedback was that, while daunted by the scope, they were excited by the potential and recognizing the need to begin.  The fact that the Guild is holding the Learning Ecosystem conference and their release of a new and quite good white paper by Marc Rosenberg and Steve Foreman are further evidence that awareness is growing.   Marc and Steve carve up the world a little differently than I do, but we say similar things about what’s important.

I am also pleased that Mobile interest continues to grow, as evidenced by the large audience at our mobile panel, where I was joined by other mLearnCon advisory board members Robert Gadd, Sarah Gilbert, and Chad Udell.  They provide nicely differing viewpoints, with Sarah representing the irreverent designer, Robert the pragmatic systems perspective, and Chad the advanced technology view, to complement my more conceptual approach.  We largely agree, but represent different ways of communicating and thinking about the topic. (Sarah and I will be joined by Nick Floro for ATD’s mLearnNow event in New Orleans next week).

I also talked about trying to change the pedagogy of elearning at the Wadhwani Foundation: the approach we’re taking and the challenges we face.  The goal I’m involved in is job skilling, and consequently there’s a real need and a real opportunity.  What I’m fighting for is meaningful practice as the way to achieve real outcomes.  We have some positive steps and some missteps, but I think we have the chance to have a real impact. It’s a work in progress, and fingers crossed.

So what did I learn?  The good news is that the audience is getting smarter, wanting more depth in their approaches and breadth in what they address. The bad news appears to be that the view of ‘information dump & knowledge test = learning’ is still all too prevalent. We’re making progress, but too slowly (ok, so perhaps patience isn’t my strong suit ;).  If you haven’t, please do check out the Serious eLearning Manifesto to get some guidance about what I’m talking about (with my colleagues Michael Allen, Julie Dirksen, and Will Thalheimer).  And now there’s an app for that!

If you want to get your mind around the forefront of learning technology, at least in the organizational space, DevLearn is the place to be.

28 October 2014

Cognitive prostheses

Clark @ 8:05 am

While our cognitive architecture has incredible capabilities (how else could we come up with advances such as Mystery Science Theater 3000?), it also has limitations. The same adaptive capabilities that let us cope with information overload in both familiar and new ways also lead to some systematic flaws. And it led me to think about the ways in which we support these limitations, as they have implications for designing solutions for our organizations.

The first limit is at the sensory level. Our mind actually processes pretty much all the visual and auditory sensory data that arrives, but it disappears pretty quickly (within milliseconds) except for what we attend to. Basically, your brain fills in the rest (which leaves open the opportunity to make mistakes). What do we do? We’ve created tools that allow us to capture things accurately: cameras and audio recorders. These allow us to capture the context exactly, not as our memory reconstructs it.

A second limitation is our ‘working’ memory. We can’t hold too much in mind at one time. We ‘chunk’ information together as we learn it, and can then hold more total information at one time. Also, the format of working memory largely is ‘verbal’. Consequently, using tools like diagramming, outlines, or mindmaps add structure to our knowledge and support our ability to work on it.

Another limitation of our working memory is that it doesn’t support complex calculations with many intermediate steps. Consequently we need ways to deal with this. External representations (as above), such as recording intermediate steps, work, but we can also build tools that offload that processing, such as calculators. Wizards, or interactive dialog tools, are another form of calculator.

Processing information in short-term memory can lead to it being retained in long-term memory. Here the storage is almost unlimited in time and scope, but it’s hard to get things in there, and they aren’t remembered exactly, but instead by meaning. Consequently, models are a better learning strategy than rote learning. And external resources, like the ability to look up or search for information, are far better than trying to get it all in the head.

Similarly, external support for when we do have to do things by rote is a good idea. So, support for process is useful and the reason why checklists have been a ubiquitous and useful way to get more accurate execution.

In execution, we have a few flaws too. We’re heavily biased to solve new problems in the ways we’ve solved previous problems (even if that’s not the best approach). We’re also likely to use tools in familiar ways and miss new ways to use tools to solve problems. There are ways to prompt lateral thinking at appropriate times, and we can both make such support available and even trigger it when there are contextual clues.

We’re also biased to prematurely converge on an answer (intuition) rather than seek to challenge our findings. Access to data and support for capturing and invoking alternative ways of thinking are more likely to prevent such mistakes.

Overall, our use of more formal logical thinking fatigues quickly. Scaffolding help like the above decreases the likelihood of a mistake and increases the likelihood of an optimal outcome.

When you look at performance gaps, you should look to such approaches first, and look to putting information in the head last. This more closely aligns our support efforts with how our brains really think, work, and learn. This isn’t a complete list, I’m sure, but it’s a useful beginning.

24 October 2014

#DevLearn Schedule

Clark @ 8:30 am

As usual, I will be at DevLearn (in Las Vegas) this next week, and welcome meeting up with you there.  There is a lot going on.  Here’re the things I’m involved in:

  • On Tuesday, I’m running an all day workshop on eLearning Strategy. (Hint: it’s really a Revolutionize L&D workshop ;).  I’m pleasantly surprised at how many folks will be there!
  • On Wednesday at 1:15 (right after lunch), I’ll be speaking on the design approach I’m leading at the Wadhwani Foundation, where we’re trying to integrate learning science with pragmatic execution.  It’s at least partly a Serious eLearning Manifesto session.
  • On Wednesday at 2:45, I’ll be part of a panel on mlearning with my fellow mLearnCon advisory board members Robert Gadd, Sarah Gilbert, and Chad Udell, chaired by conference program director David Kelly.

Of course, there’s much more. A few things I’m looking forward to:

  • The keynotes:
    •  Neil DeGrasse Tyson, a fave for his witty support of science
    • Beau Lotto talking about perception
    • Belinda Parmar talking about women in tech (a burning issue right now)
  • DemoFest, all the great examples people are bringing
  • and, of course, the networking opportunities

DevLearn is probably my favorite conference of the year: learning focused, technologically advanced, well organized, and with the right people.  If you can’t make it this year, you might want to put it on your calendar for another!

14 October 2014

Types of meaningful processing

Clark @ 8:21 am

In a previous post, I argued for different types and ratios of worthwhile learning activities. I’ve been thinking about this (and working on it) quite a bit lately. I know there are other resources that I should know about (pointers welcome), but I’m currently wrestling with several types of situations and wanted to share my thinking. This is aside from scenarios/simulations (e.g. games), which are the first, best learning practice you can engage in, of course. What I’m looking for is ways to get learners to do processing in ways that will assist their ability to do.  This isn’t recitation, but application.

So one situation is where the learner has to execute the right procedure. This seems easy, but the problem is that they’re liable to get it right in practice yet still get it wrong in real situations. An idea I had heard of before, but that was reiterated through Socratic Arts (Roger Schank & cohorts), was to have learners observe (e.g. via video) someone performing the procedure and identify whether it was done right or not. For many routine but important tasks (e.g. sanitation), this is a more challenging task than just doing it right. It has learners monitor the process, and then they can turn that on themselves to become self-monitoring.  If the selection of mistakes is broad enough, they’ll have experience that will transfer to their whole performance.

Another task that I faced earlier was the situation where people had to interpret guidelines to make a decision. Typically, the extreme cases are obvious, and instructors argue that they all are, but in reality there are many ambiguous situations.  Here, as I’ve argued before, the thing to do is have folks work in groups and be presented with increasingly ambiguous situations. What emerges from the discussion is usually a rich unpacking of the elements.  This processing of the rules in context exposes the underlying issues in important ways.

Another type of task is helping people understand applying models to make decisions. Rather than present them with the models, I’m again looking for more meaningful processing.  Eventually I’ll expect learners to make decisions with them, but as a scaffolding step, I’m asking them to interpret the models in terms of their recommendations for use.  So before I have them engage in scenarios, I’ll ask them to use the models to create, say, a guide to how to use that information. To diagnose, to remedy, to put in place initial protections.  At other times, I’ll have them derive subsequent processes from the theoretical model.

One other example I recall came from a paper that Tom Reeves wrote (and I can’t find) in which he had learners pick from a number of options that indicated problems or actions to take. The interesting difference was that there was then a followup question about why: every choice had two stages, decision and then rationale. This is a very clever way to see whether learners are not just getting the right answer but understand why it’s right.  I wonder if any of the authoring tools on the market right now include such a template!
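As a sketch of how such a two-stage template might be structured (this is my own hypothetical rendering, not Reeves’ actual design, and the item content is invented), each question pairs a decision with a rationale, and full credit requires getting both right:

```python
# A hypothetical two-stage question item: decision first, then rationale.
two_stage_item = {
    "stem": "The server rejects logins after a patch. What do you do first?",
    "decision": {
        "options": ["roll back the patch", "restart the server", "check the logs"],
        "correct": "check the logs",
    },
    "rationale": {
        "options": [
            "logs localize the failure before you change anything",
            "restarting usually clears transient errors",
            "rollbacks are always safest",
        ],
        "correct": "logs localize the failure before you change anything",
    },
}

def score(item, decision, rationale):
    """Credit only when both the answer and the reason for it are right."""
    return (decision == item["decision"]["correct"]
            and rationale == item["rationale"]["correct"])

# The right answer with the wrong reason doesn't count...
assert not score(two_stage_item, "check the logs", "rollbacks are always safest")
# ...only the right answer for the right reason does.
assert score(two_stage_item, "check the logs",
             "logs localize the failure before you change anything")
```

The design choice this illustrates is the point of the original idea: separating the decision from its justification lets you distinguish understanding from lucky guessing.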

I know there are more categories of learning and associated tasks that require useful processing (towards do, not know, mind you ;), but here are a couple that are ‘top of mind’ right now. Thoughts?

1 October 2014

Constructive vs instructive

Clark @ 8:11 am

A commenter on last week’s post asked an implicit question that caused me to think. The issue was whether the solutions I was proposing have the learners be self-directed or whether it was ‘push’ learning.  And I reckon there’s a bit of both, but I’m fighting for more of a constructivist approach than the instructivist model.

I’ve argued in the past for a more active learning, and I think the argument for pure instructivism sets up a straw man (Feuerzeig argued for guided discovery back in ’85!).  Obviously, I think that pure exploration is doomed to failure, as we know that learners can stay in one small corner of a search space without support (hence the coaching in Quest).  However, a completely guided experience doesn’t ‘stick’ as well, either.

Another factor is our target learners.  In my experience, more constructivist approaches can be disturbing to learners who have had more instructivist approaches.  And the learners we are dealing with haven’t been that successful in school, and typically need a lot of scaffolding.

Yet our goals are fairly pragmatic overall (and in general we should be looking for ways to be pragmatic in more of our learning). We’re focused on meaningful skills, so we should leverage this.

In this case, I’m moving the design to more and more “here’s a goal, here’re some resources” type of approach where the goal is to generate a work-related integration (requiring relevant cognitive processing).  Even if it’s conceptual material, I want learners to be doing this, and of course the main focus is on real contextualized practice.

I’m pushing a very activity-based pedagogy (and curriculum). Yes, the tasks are designed, but learners are expected to take some responsibility for processing the information to produce outputs. The longer-term goal is to increase the challenge and variety as we go through the curriculum, developing learners’ ability to learn to learn and to adapt as well. Make sense?
