Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

27 January 2016

Reactivating Learning

Clark @ 8:10 am

(I looked because I’m sure I’ve talked about this before, but apparently not a full post, so here we go.)

If we want our learning to stick, it needs to be spaced out over time. But what sorts of things will accomplish this?  I like to think of three types, all different forms of reactivating learning.

Reactivating learning is important. At a neural level, we’re generating patterns of activation in conjunction, which strengthens the relationships between these patterns, increasing the likelihood that they’ll get activated when relevant. That’s why context helps as well as concept (e.g. don’t just provide abstract knowledge).  And I’ll suggest there are 3 major categories of reactivation to consider:

Reconceptualization: here we’re talking about presenting a different conceptual model that explains the same phenomena. Particularly if the learners have had some meaningful activity from your initial learning or through their work, showing a different way of thinking about the problem is helpful. I like to link it to Rand Spiro’s Cognitive Flexibility Theory: having more ways to represent the underlying model provides more ways to understand the concept in the first place, a greater likelihood that one of the representations will get activated when there’s a problem to be solved, and a greater likelihood that it will in turn activate the other model(s), so one of them leads to a solution.  So, you might think of electrical circuits like water flowing in pipes, or think about electron flow, and either could be useful.  It can be as simple as a new diagram, an animation, or just a small prose recitation.

Recontextualization: here we’re showing another example. We’re showing how the concept plays out in a new context, which gives a greater base from which to abstract and comprehend the underlying principle, and provides a new reference that might match a situation they could actually see.  To process it, you’re reactivating the concept representation, comprehending the context, and observing how the concept was used to generate a solution to this situation.  A good example, with a challenging situation that the learner recognizes, a clear goal, and cognitive annotation showing the underlying thinking, will serve to strengthen the learning.  A graphic novel format would be fun, or a story, or video; anything that captures the story, thinking, and outcome would work.

Reapplication: this is the best, where instead of consuming a concept model or an example, we actually provide a new practice problem. This should require retrieving the underlying concept, comprehending the context, determining how the model predicts what will happen under particular perturbations, and figuring out which actions will lead to the desired outcomes.  Practice makes perfect, as they say, and so this should ideally be the emphasis in reactivation.  It might be as simple as a multiple-choice question, though a scenario would in many instances be better, and a sim/game would of course be outstanding.
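The spacing these reactivations depend on is easy to operationalize. A minimal sketch: the interval lengths below are my own illustrative assumption, not a prescription, and the assignment of reactivation types to slots is arbitrary.

```python
from datetime import date, timedelta

def reactivation_schedule(start, gaps_in_days=(2, 7, 21, 60)):
    """Dates on which to reactivate learning after a formal event.

    The expanding gaps are purely illustrative; the point is only that
    reactivation is spaced out over time rather than massed.
    """
    return [start + timedelta(days=g) for g in gaps_in_days]

# Each slot could use any of the three reactivation types above.
kinds = ["reconceptualize", "recontextualize", "reapply", "reapply"]
for when, kind in zip(reactivation_schedule(date(2016, 2, 1)), kinds):
    print(when.isoformat(), kind)
```

In practice you’d weight the later slots toward reapplication, since practice is where the emphasis should be.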

All of these serve as reactivation. Reactivation, as I’ve pointed out, is a necessary part of learning.  When you don’t have enough chance to practice in the workplace, but it’s important that you have the ability when you need it (and try to avoid putting it in the head if you can), reactivation is a critical tool in your arsenal.

19 January 2016

Performance Detective

Clark @ 8:15 am

I was on a case. I’m a performance detective, and that’s what I do.  Someone wasn’t performing the way they were supposed to, and it was my job to figure out why. My client thought he knew. They always do.  But I had to figure it out myself.  Like always.

Before I hit the bricks, I hit the books. Look, there’s no point watching anyone if you don’t know what you’re looking for.  What’s this mug supposed to be doing?  So I read up. What’s the job?  What’s the goal?  How do you know when it’s going well? These are questions, and I need answers. So I check it out.  Even better, if I can find numbers.  Can’t always, as some folks don’t really get the value.  Suckers.

Then I had to get a move on.  You need what you find from the background, but you can’t trust it.  There could be many reasons why this palooka isn’t up to scratch. Everyone wants to throw a course at it.  And that may not be the problem.  If it isn’t a skill problem, it’s not likely a course is going to help.  You’re wasting money.

The mug might not believe it’s important. Or not want to do it a particular way. There’re lots of reasons not to do it the way someone wants. It could be harder, with no obvious benefit.  If you don’t make it clear, why would they?  People aren’t always dumb, it just seems that way.

Or they might not have what they need.  Too often, some well-intentioned but under-aware designer wants to put some arbitrary information in their heads.  Which is hard. And usually worthless.  Put it in the world. Have it to hand.  They may need a tool, not a knowledge dump.

Or, indeed, they may not be capable. A course could be the answer. Not just a course, of course. It needs more. Coaching, and practice. Lots of practice.  They may really be out of their depth, and dumping knowledge on them is only going to keep them drowning.

It’s not always easy. It may not be a simple answer. There can be multiple problems. It can be all of the above.  Or any combination. And that’s why they bring me in. To get the right answer, not the easy answer. And certainly not the wrong answer.

So I had to go find out what was really going on.  That’s what detectives do. They watch. They investigate. They study.  That’s what I do. I want the straight dope. If you can’t do the hard yards, you’re in the wrong job.  I love the job. And I’m good at it.

So I watched. And sure enough, there it was. Obvious, really. In retrospect. But you wouldn’t have figured it out if you hadn’t looked.  It’s not my job to fix it.  I told the client what I found.  That’s it.  Not my circus, not my monkeys. Get an architect to come up with a solution. I find the problem, and report. That’s what I do.

This quite literally came from a dream I had, and my subsequent thoughts when I woke up.  And when I first conceived it, I wasn’t thinking about the performance detective role that Charles Jennings, Jos Arets, and Vivian Heijnen include as one of five in their new 70:20:10 book, but there is a nice resonance.  Hopefully my ‘hard boiled’ prose isn’t too ‘on the nose’!  More importantly, what did I miss? I welcome your thoughts and feedback.

14 January 2016

10 years!?!?

Clark @ 8:08 am

A comment on my earliest blog post (thanks, Henrik) made me realize that this post marks 10 years of blogging. Yes, my first post came out on January 14th, 2006.  This is my 1,200th post (I forced one in yesterday to be the 1,199th so I could say that ;), yow!  That’s 120 a year, or about one every 3 days.  And, I am happy to add, there have been 2,542 comments (just more than 2 per post), so thanks to you for weighing in.

It’s funny: when I started, I can’t really say it was more than an experiment.  I had no idea where it would lead, or how.  It’s had its challenges, continuing to find topics, but it’s been helpful.  It’s forced me to deliberately consider things I otherwise might not have, just to try to keep up the momentum.

I confess I originally had a goal of 5 a week (one per business day), but even then I was happy if I got 2-3. I’m gobsmacked at my colleague Harold who seems to put out a post every day.  I can’t quite do that. My goal has moderated to be 2 a week (very occasionally I live with 1 per week, but other weeks like when I’m at conferences I might have 3 if there are lots of keynotes to mind map).  Typically it’s Tuesday and Wednesday, for no good reason.

I also try to have something new to say every time. It’s hard, but forcing myself to find something to talk about has led to me thinking about lots of things, and therefore being ready to bring them to bear on behalf of clients.  I think out loud relatively freely (particularly with the popularity of Work and Learn Out Loud and Show Your Work).  And it’s a way to share my diagrams, another way to ‘think out loud’.  And I admit that I don’t share some things that are either proprietary (until I can anonymize them) or something I’m planning on doing something with.

And I’ve also resisted commercializing this.  Obviously I’ve avoided the offers to exchange links or blog posts that include links for SEO stuff, but I’ve even, rightly or wrongly, not allowed ads.  While it is the official Quinnovation blog, it’s been my belief that sharing my thinking is the best way to help me get interest in what I have to offer (extensive experience mapping a wide variety of concepts onto specific client contexts to yield innovative yet practical and successful solutions).  I haven’t (yet) followed a formula to drive business traffic, and only occasionally mention my upcoming events (though hopefully that’s a public service :).  There’re other places to track that.

I’m also pretty lax about looking at the metrics. I pop by Google Analytics weekly to see what sort of traffic I get (pretty steady), but I haven’t tried to see what might improve it.  This is, largely, for me.  And for you, if your interests run this way. So welcome, and here’s to another 10 years!  Who knows what there will be to talk about then…or even next week!

12 January 2016

Working wiser?

Clark @ 8:03 am

Noodling:  I’ve been thinking about Working Smarter, a topic I took up over four years ago.  And while I still think there’s too little talk about it, I wondered also about pushing it further.  I also talked in the past about an interest in wisdom, and what that would mean for learning.  So what happens when they come together?

Working smarter, of course, means recognizing how we really think, work, and learn, and aligning our processes and tools accordingly. That includes recognizing that we do use external representations, and ensuring that the ones we want in the world are there, and we also support people being able to create their own. It means tapping into the power of people, and creating ways for them to get together and support one another through both communication and collaboration.  And, of course, it means using Serious learning design.

But what, then, does working ‘wiser’ mean?  I like Sternberg’s model of wisdom, as it’s actionable (other models are not quite specific enough).  It talks about taking into account several levels of caring about others, several time scales, several levels of action, and all influenced by an awareness of values.  So how do we work that into practices and tools?

Well, pragmatically, we can provide rubrics for evaluating ideas that include consideration of others inside and outside your circles of acquaintance, short- and long-term timeframes, and the impacts on existing states of affairs, ultimately focusing on the common good. So we can have job aids that provide guidance, or bake it into our templates.  These, too, can be shown in collaboration tools, so the outputs will reflect these values.  But there’s another approach.

But, at core, it’s really about what you value, and that becomes about culture.  What values does the organization care about?  Do employees know about the organization’s ultimate goal and role?  Is it about short-term shareholder return, or some contribution to society?  I’m reminded about the old statements about whether you’re about selling candles or providing light.  And do employees know how what they do fits in?

It’s pretty clear that the values implicit in steps to make workplaces more effective are really about making workplaces more humane, that is: respecting our inherent nature.  And movements like this, that provide real meaning, ongoing support, freedom of approach, and time for reflection, are to me about working not just smarter but also wiser.

We can work smarter with tools and practices, but I think we can work better, wiser, with an enlightened approach to who we are working with and how we work to deliver real value to not only customers but to society.  And, moreover, I think that doing so would yield better organizational outcomes.

Ok, so have I gone off the edge of the hazy cosmic jive?  I am a native Californian, after all, but I’m thinking that this makes real business sense.  I think we can do this, and that the outputs will be better too, in all respects.  No one says it’d be easy, but my suspicion is it’d be worthwhile.

31 December 2015

2015 Reflections

Clark @ 8:02 am

It’s the end of the year, and given that I’m an advocate for the benefits of reflection, I suppose I better practice what I preach. So what am I thinking I learned as a consequence of this past year?  Several things come to mind (and I reserve the right for more things to percolate out, but those will be my 2016 posts, right? :):

  1. The Revolution is real: the evidence mounts that there is a need for change in L&D, and when those steps are taken, good things happen. The latest Towards Maturity report shows that the steps taken by their top-performing organizations are very much about aligning with business, focusing on performance, and more.  Similarly, Chief Learning Officer’s Learning Elite Survey points to making links across the organization and measuring outcomes.  The data supports the principled observation.
  2. The barriers are real: there is continuing resistance to the most obvious changes. 70:20:10, for instance, continues to get challenged on nonsensical issues like the exactness of the numbers!?!?  The fact that a Learning Management System is not a strategy still doesn’t seem to have penetrated.  And so we’re seeing other business units take on the needs for performance support, social media, and ongoing learning. Which is bad news for L&D, I reckon.
  3. Learning design is rocket science: (or should be). The perpetration of so much bad elearning continues to be demonstrated at exhibition halls around the globe.  It’s demonstrably true that tarted-up information presentation and a knowledge test isn’t going to lead to meaningful behavior change, but we’re still thrusting people into positions without background and giving them tools that are oriented toward content presentation.  Somehow we need to do better. Still pushing the Serious eLearning Manifesto.
  4. Mobile is well on its way: we’re seeing mobile becoming mainstream, and this is a good thing. While we still hear the drum beating to put courses on a phone, we’re also seeing that call being ignored. We’re instead seeing real needs being met, and new opportunities being explored.  There’s still a ways to go, but here’s to a continuing awareness of good mobile design.
  5. Gamification is still being confounded: people aren’t really making clear conceptual differences around games. We’re still seeing linear scenarios confounded with branching, we’re seeing gamification confounded with serious games, and more.  Some of these are because the concepts are complex, and some because of vested interests.
  6. Games seem to be reemerging: while the interest in games became mainstream circa 2010 or so, there hasn’t been a real sea change in their use.  However, it quietly feels like folks are beginning to get their minds around Immersive Learning Simulations, aka Serious Games.  There’s still a ways to go in really understanding the critical design elements, but the tools are getting better and making them more accessible in at least some formats.
  7. Design is becoming a ‘thing’: all the hype around Design Thinking is leading to a greater concern about design, and this is a good thing. Unfortunately there will probably be some hype from which clarity will need to be discerned, but at least the overall awareness raising is a good step.
  8. Learning to learn seems to have emerged: years ago the late great Jay Cross and I and some colleagues put together the Meta-Learning Lab, and it was way too early (like so much I touch :p). However, his passing has raised the term again, and there’s much more resonance. I don’t think it’s necessarily a thing yet, but it has far greater traction than it did at the time.
  9. Systems are coming: I’ve been arguing for the underpinnings, e.g. content systems.  And I’m (finally) beginning to see more interest in that, and other components are advancing as well: data (e.g. the great work Ellen Wagner and team have been doing on Predictive Analytics), algorithms (all the new adaptive learning systems), etc. I’m keen to think what tags are necessary to support the ability to leverage open educational resources as part of such systems.
  10. Greater inputs into learning: we’ve seen learning folks get interested in behavior change, habits, and more.  I’m thinking we’re going to go further. Areas I’m interested in include myth and ritual, powerful shapers of culture and behavior. And we’re drawing on greater inputs into the processes as well (see 7, above).  I hope this continues, as part of learning to learn is to look to related areas and models.

Obviously, these are things I care about.  I’m fortunate to be able to work in a field that I enjoy and believe has real potential to contribute.  And just fair warning, I’m working on a few areas in several ways.  You’ll see more about learning design and the future of work sometime in the near future. And rather than generally agitate, I’m putting together two specific programs – one on (e)learning quality and one on L&D strategy – that are intended to be comprehensive approaches.  Stay tuned.

That’s my short list, I’m sure more will emerge.  In the meantime, I hope you had a great 2015, and that your 2016 is your best year yet.

10 December 2015

Scenarios and Conceptual Clarity

Clark @ 6:02 am

I recently came across an article ostensibly about branching scenarios, but somehow the discussion largely missed the point.  Ok, so I can be a stickler for conceptual clarity, but I think it’s important to distinguish between different types of scenarios and their relative strengths and weaknesses.

So in my book Engaging Learning, I was looking to talk about how to make engaging learning experiences.  I was pushing games (and still do) and how to design them, but I also wanted to acknowledge the various approximations thereto.  So in it, I characterized the differences between what I called mini-scenarios, linear scenarios, and contingent scenarios (this latter is what’s traditionally called branching scenarios).  These are all approximations to full games, with various tradeoffs.

At core, let me be clear, is the need to put learners in situations where they need to make decisions. The goal is to have those decisions closely mimic the decisions they need to make after the learning experience. There’s a context (aka the story setting), and then a specific situation triggers the need to make a decision.  And we can deliver this in a number of ways. The ideal is a simulation-driven (aka model-driven or engine-driven) experience.  There’s a model of the world underneath that calculates the outcomes of your action and determines whether you’ve yet achieved success (or failure), or generates a new opportunity to act.  We can (and should) tune this into a serious game.  This gives us deep experience, but the model-building is challenging and there are short cuts.

In mini-scenarios, you put the learner in a setting with a situation that precipitates a decision.  Just one, and then there’s feedback.  You could use video, a graphic novel format, or just prose, but the game problem is a setting and a situation, leading to choices. Similarly, you could have them respond by selecting option A, B, or C, or pointing to the right answer, or whatever.  It stops there. Which is the weakness, because in the real world the consequences are typically more complex than this, and it’s nice if the learning experience reflects that reality.  Still, it’s better than a knowledge test.  Really, these are just better-written multiple-choice questions, but that’s at least a start!

Linear scenarios are a bit more complex. There are a series of game problems in the same context, but whatever the player chooses, the right decision is ultimately made, leading to the next problem. You use some sort of sleight of hand, such as “a supervisor catches the mistake and rectifies it, informing you…” to make it all ok.  Or, you can terminate out and have to restart if you make the wrong decision at any point. These are a step up in terms of showing the more complex consequences, but are a bit unrealistic.  There’s some learning power here, but not as much as is possible. I have used them as sort of multiple mini-scenarios with content in between, and the same story is used for the next choice, which at least made a nice flow. Cathy Moore suggests these are valuable for novices, and I think it’s also useful if everyone needs to receive the same ‘test’ in some accreditation environment to be fair and balanced (though in a competency-based world they’d be better off with the full game).

Then there’s the full branching scenario (which I called contingent scenarios in the book, because the consequences and even new decisions are contingent on your choices).  That is, you see different opportunities depending on your choice. If you make one decision, the subsequent ones are different.  If you don’t shut down the network right away, for instance, the consequences are different (perhaps a breach) than if you do (you get the VP mad).  This, of course, is much more like the real world.  The only difference between this and a serious game is that the contingencies in the world are hard-wired in the branches, not captured in a separate model (rules and variables). This is easier, but it gets tough to track if you have too many branches. And the lack of an engine limits the replay and ability to have randomness. Of course, you can make several of these.
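To make the hard-wiring concrete, a branching scenario can be sketched as a small graph of nodes, where the next situation is contingent on each choice (a serious game would instead compute the next state from rules and variables). The network-breach setup echoes the example above, but the node names and text here are invented:

```python
# A hypothetical hard-wired branching scenario. Each node pairs a
# situation with choices, and each choice names a different next node,
# so subsequent decisions are contingent on earlier ones.
scenario = {
    "alert": {
        "text": "Monitoring flags unusual network traffic. What do you do?",
        "choices": {"shut it down": "vp_angry", "keep watching": "breach"},
    },
    "vp_angry": {
        "text": "No breach, but the VP is furious about the outage.",
        "choices": {},
    },
    "breach": {
        "text": "Attackers exfiltrate data while you watch.",
        "choices": {},
    },
}

def play(start, decisions):
    """Walk the branches for a sequence of decisions; the contingencies
    live in the structure itself, not in a separate model."""
    node = scenario[start]
    for choice in decisions:
        node = scenario[node["choices"][choice]]
    return node["text"]
```

So `play("alert", ["shut it down"])` and `play("alert", ["keep watching"])` land in different situations, which is exactly what distinguishes this from a linear scenario. The cost is also visible: every path must be authored by hand, which is why deep trees get hard to track.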

So the problem I had with the article that triggered this post is that their generic model looked like a mini-scenario, and nowhere did they show the full concept of a real branching scenario. Further, their example was really a linear scenario, not a branching scenario.  And I realize this may seem like an ‘angels dancing on the head of a pin’ argument, but I think it’s important to make distinctions when they affect the learning outcome, so you can more clearly make a choice that reflects the goal you are trying to achieve.

To their credit, the fact that they were pushing for contextualized decision making at all is a major win, so I don’t want to quibble too much.  Moving our learning practice/assessment/activity to more contextualized performance is a good thing.  Still, I hope this elaboration is useful for getting more nuanced solutions.  Learning design really can’t be treated as a paint-by-numbers exercise; you really should know what you’re doing!

2 December 2015

Useful cognitive overhead

Clark @ 8:02 am

As I’ve reported before, I started mind mapping keynotes not as a way of filling the blog, but to listen better.  That is, without the extra requirement of processing the talk into a structure, my mind was (too) free to go wandering. I only posted the maps because I thought I should do something with them!  And I’ve realized there’s another way I leverage cognitive overhead.

As background, I diagram.  It’s one of the methods I use to reflect.  A famous cognitive science article talked about how diagrams are representations that map conceptual relationships to spatial ones, to use the power of our visual system to facilitate comprehension. And that’s what I do, take something I’m trying to understand, some new thoughts I have, and get concrete about them.  If I can map them out, I feel like I’ve got my mind around them.

I use them to communicate, too. You’ve seen them here in my blog (or will if you browse around a bit), and in my presentations.  Naturally, they’re a large part of my workshops too, and even reports and papers.  As I believe models composed of concepts are powerful tools for understanding the world, I naturally want to convey them to support people in applying them themselves.

Now, what I realized (as I was diagramming) is that the way I diagram actually leverages cognitive overhead in a productive way. I use a diagramming tool (Omnigraffle if you must know, expensive but works well for me) to create them, and there’s some overhead in getting the diagram components sized, and located, and connected, and colored, and…  And in so doing, I’m allowing time for my thoughts to coalesce.

It doesn’t work with paper, because it’s hard to edit, and what comes out isn’t usually right at first.  I move things around, break them up, rethink the elements.  I can use a whiteboard, though usually to communicate a diagram already conceived.  Sometimes I can capture new thinking there, since at least it’s easy to edit a whiteboard; flip charts, which aren’t, are consequently more problematic.

So I was unconsciously leveraging the affordances of the tool to help allow my thinking to ferment/percolate/incubate (pick your metaphor).  Another similar approach is to seed a question you want to answer or a thought you want to ponder before some activity like driving, showering, jogging, or the like.  Our unconscious brain works powerfully in the background, given the right fodder.  So hopefully this gives you some mental fodder too.

1 December 2015

Templates and tools

Clark @ 8:06 am

A colleague who I like and respect recently tweeted: “I can’t be the only L&D person who shudders when I hear the word ‘template'”, and I felt vulnerable because I’ve recently been talking about templates.   To be fair, I have a different meaning than most of what’s called a ‘template’, so I thought perhaps I should explain.

Let’s be clear: what’s typically referred to as a template is usually a simple screen type for a rapid authoring tool.  That is, it allows you to easily fill in the information and generate a particular type of interaction: drag-and-drop, multiple-choice, etc.  And this can be useful when you’ve got well-designed activities but want to easily develop them.  But they’re not a substitute for good design, and can make it easy to do bad design too. Worse are those skins that add gratuitous visual elements (e.g. a ‘racing’ theme) to a series of questions in some deluded view that such window dressing has any impact on anything.

So what am I talking about?  I’m talking about templates that help reinforce the depth of learning science around the elements: templates for introductions that ask for the emotional opener, the drill-down from the larger context, and so on; for practices that are contextualized, meaningful to the learner, with differentiated response options and specific feedback; and more.  This could be done in other ways, such as a checklist, but putting it into the place where you’re developing strikes me as a better driver ;).  Particularly if it is embedded in the house ‘style’, so that the look and feel is tightly coupled to the learner experience.

Atul Gawande, in his brilliant The Checklist Manifesto, points out that there are gaps in our mental processing that mean we can skip steps and forget to coordinate.  Whether the guidelines are in a template or a process tool like a checklist, it helps to have cognitive facilitation.  So what I’m talking about is not a template that says how it’s to look, but instead what it should contain. There are ways to combine intrinsically motivating openings with initial practice, for instance.

Templates don’t have to stifle creativity; they can serve to improve quality instead.  As big a fan as I am of creativity, I also recognize that we can end up with less than optimal results if there isn’t some rigor in our approach.  (Systematic creativity is not an oxymoron!)  In fact, systematicity in the creative process can help optimize the outcomes. So however you want to scaffold quality and creativity, whether through templates or other tools, I do implore you to put in place support to ensure the best outcomes for you and your audience.

24 November 2015

CERTainly room for improvement

Clark @ 8:08 am

As mentioned before, I’ve become a member of my local Community Emergency Response Team (CERT), as in the case of disaster, the official first-responders (police, fire, and paramedics) will be overwhelmed.  And it’s a good group, with a lot of excellent efforts in processes and tools as well as drills.  Still, of course, there’s  room for improvement.  I encountered one such at our last meeting, and I think it’s an interesting case study.

So one of the things you’re supposed to do in conducting search and rescue is to go from building to building assessing damage and looking for people to help.  And one of the useful things to do is to mark the status of the search and the outcomes, so no one wastes effort on an already explored building. While the marking is covered in training and there’re support tools to help you remember,  ideally it’d be memorable, so that you  can regenerate the information and don’t have to look it up.

The design for the marking is pretty clear: you first make a diagonal slash when you start investigating a building, and then you make a crossing slash when you’ve made your assessment. And specific information is to be recorded in each quarter of the resulting X: left, right, top, and bottom.  (Note that the US standard set by FEMA doesn’t correspond to the international standard from the International Search & Rescue Advisory Group, interestingly).

However, when we brought it up in a recent meeting (and they’re very good about revisiting things that quickly fade from memory), it was obvious that most people couldn’t recall what goes where. And when I heard what the standard was, I realized it didn’t have a memorable structure.  So, here are the four things to record:

  • the group who goes in
  • when the group completes
  • what hazards may exist
  • and how many people and what condition they’re in*

So how would you map these to the quadrants?  In one sense it doesn’t matter, as long as there’s a sensible rationale behind the mapping. One sign that there’s not?  You can’t remember what goes where.

Our local team leader was able to recall that the order is: left – group, top – completion, right – hazards, and bottom – people.  However, this seems to me to be less than  memorable, so let me explain.

To me, wherever you put the ‘in’, left or top, the ‘out’ ought to be opposite. And given our natural reading flow, the group going in makes sense on the left, and coming out ought to go on the right.  In – out.  Then it’s relatively arbitrary where hazards and people go.  I’d make a case that top-of-mind should be the hazards found, to warn others, but that the people are the bottom line (see what I did there?).  I could easily make a case for the reverse, but either would be a mnemonic to support remembering.  Instead, as far as I can tell, it’s completely arbitrary. Now, if it’s not arbitrary and there is a rationale, it’d help to share that!
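For reference, the standard mapping (as our team leader recalled it) can be laid out as a simple lookup. The parenthetical glosses are my own suggested mnemonics, not part of the standard:

```python
# The FEMA X-code quadrants as covered in our training; the parenthetical
# mnemonics are my own proposed rationale, not official.
x_marking = {
    "left":   "team identifier (who went in)",
    "top":    "date/time the search was completed",
    "right":  "hazards found (to warn others)",
    "bottom": "victims: how many, and their condition (the bottom line)",
}

def read_marking(quadrant):
    """Look up what a given quadrant of the search X records."""
    return x_marking[quadrant]

for quadrant in ("left", "top", "right", "bottom"):
    print(f"{quadrant}: {read_marking(quadrant)}")
```

Even as a table it’s easier to retain than as four disconnected facts, which is rather the point.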

The point being, to help people remember things that are in some sense arbitrary, make a story that makes it memorable. Sure, I can look it up, assuming that the lookup book they handed out stays in the pocket in my special backpack.  (And I’m likely to remember now, because of all this additional processing, but that’s not what happens in the training.)  However, making it regenerable from some structure gives you a much better chance of having it to hand. Either a model or a story is better than arbitrary, and one’s possible with a rewrite, but as it is, there’s neither.

So there’s a lesson in design to be had, I reckon, and I hope you’ll put it to use.

* (black or dead, red or needing immediate treatment for life-threatening issues, yellow or needing non-urgent treatment, and green or ok)

13 November 2015

Learning and frameworks

Clark @ 8:13 am

There’s recently been a spate of attacks on 70:20:10 and moving beyond courses, and I have to admit I just don’t get it.  So I thought it’s time to set out why I think these approaches make sense.

Let's start with what we know about how we learn. Learning is action and reflection.  Instruction (education, training) is designed action and guided reflection.  That's why, by the way, an information dump and a knowledge test isn't a learning solution.  People need to actively apply the information.

And it can’t follow an ‘event’ model, as learning is spaced out over time. Our brains can only accommodate so much (read: very little) learning at any one time.  There needs to be ongoing facilitation after a formal learning experience – coaching over time and stretch assignments – to help cement and accelerate the learning experience.

Now, this can be something L&D does formally, but at some point formal has to let go (not least for pragmatic reasons) and it becomes the responsibility of the individual and the community. It shifts from formal coaching to informal mentoring, personal exploration, and feedback from colleagues and fellow practitioners.  It's impractical for L&D to take on this full responsibility; instead, its role becomes facilitating mentoring, communication, and collaboration.

That’s where the 70:20:10 framework comes in.  Leaving that mentoring and collaboration to chance is a mistake, because it’s demonstrably the case that people don’t necessarily have good self-learning skills.  And if we foster self-learning skills, we can accelerate the learning outcomes for the organization. Addressing the skills and culture for learning, personally and collectively, is a valuable contribution that L&D should seize. And it’s not about controlling it all, but making an environment that’s conducive, and facilitating the component skills.

Further, some people seem to get their knickers in a twist about the numbers, and I'm not sure why that is.  People seem comfortable with the Pareto Principle, for instance (aka the 80/20 rule), and it's the same. In both cases it's not the exact numbers that matter, but the concept. For the Pareto Principle it's recognizing that some large fraction of outcomes comes from a small fraction of inputs.  For the 70:20:10 framework, it's recognizing that much of what you apply as your expertise comes from things other than courses.  And tired old clichés like "you wouldn't want a doctor who didn't have training" ignore that you'd also not want a doctor who didn't continue learning through internships and practice.  It's not denying the 10, it's augmenting it.

And this is really what Modern Workplace Learning is about: looking beyond the course.  The course is one important, but ultimately small, piece of being a practitioner, and organizations can no longer afford to ignore the rest of the learning picture.  Of course, there's also the whole innovation side, and performance support for when learning doesn't have to happen at all, both of which L&D should also facilitate (cue the L&D Revolution), but getting the learning right by looking at the bigger picture of how we really learn is critical.

I welcome debate on this, but pragmatically, if you think about how you learned what you do, you should recognize that much of it came from other than courses. Beyond Education, the other two E's have been characterized as Exposure and Experience: doing the task in the company of others (learning socially), and learning from the outcomes of actually applying the knowledge in context and making mistakes.  That's real learning, and the recognition that it should not be left to chance is how these frameworks help raise awareness and provide an opportunity for L&D to become more relevant to the organization.  And that, I strongly believe, is a valuable outcome. So, what do you think?
