Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

21 April 2015

Why models matter

Clark @ 7:52 am

In the industrial age, you really didn’t need to understand why you were doing what you were doing; you were just supposed to do it.  At the management level, you supervised behavior, but you didn’t really set strategy. Only at the top level did you use the basic principles of business to run your organization.  That was then; this is now.

Things are moving faster, competitors can counter your advances in months, there’s more information than ever, and none of this is decreasing.  You really need to be more agile to deal with uncertainty, and you need to continually innovate.   And I want to suggest that this advantage comes from having a conceptual understanding, a model of what’s happening.

There are responses we can train: specific ways of acting in context.  But these aren’t what’s most valuable any more.  Experts, with vast experience responding in different situations, abstract models that guide what they do, consciously or unconsciously (the latter is a problem, as it makes the models harder to get at; experts can’t tell you 70% of what they actually do!).  Most people, however, are in the novice-to-practitioner range, and they’re not necessarily ready to adapt to changes unless we prepare them.

What gives us the ability to react is having models that explain the underlying causal relations as best we understand them, plus support in applying those models in different contexts.  If we have models, see how those models guide performance in context A and then B, and then practice applying them in contexts C and D (with model-based feedback), we gradually develop a more flexible ability to respond. It’s not subconscious, as it is for experts, but we can figure it out.

So, for instance, if we have the rationale behind a sales process, how it connects to the customer’s needs and current status, we can adapt it to different customers.  If we understand the mechanisms of medical contamination, we can adapt to new vectors.  If we understand the structure of a cyber system, we can anticipate security threats. The point is that making inferences from models is a more powerful basis than trying to adapt a rote procedure without knowing its rationale.

I recognize that I talk a lot in concepts, e.g. these blog posts and diagrams, but there’s a principled reason: I’m trying to give you a flexible basis, models, to apply to your own situation.  That’s what I do in my own thinking, and it’s what I apply in my consulting.  I am a collector of models, so that I have more tools to apply to solving my own or others’ problems.  (BTW, I use ‘concept’ and ‘model’ relatively interchangeably, if that helps clarify anything.)

It’s also a sound basis for innovation.  Two related models (ahem) of creativity say that new ideas are either the combination of two different models or an evolution of an existing one.  Our brains are pattern matchers, and the more we observe a pattern, the more likely it is to remind us of something: a model. The more models we have to match against, the more likely we are to find one that maps. Or one that activates another.

Consequently, it’s also one of the things I push as a key improvement to learning design. In addition to meaningful practice, give the concept behind it, the why, in the form of a model. I encourage you to look for the models behind what you do, the models in what you’re presented, and the models in what your learners are asked to do.

It’s a good basis for design, for problem-solving, and for learning.  That, to me, is a big opportunity.

14 April 2015

Defining Microlearning?

Clark @ 8:32 am

Last week on the #chat2lrn twitter chat, the topic was microlearning. It was apparently prompted by this post by Tom Spiglanin, which does a pretty good job of defining it, but some conceptual confusion showed up in the chat that makes it clear there’s some work to be done.  I reckon there may be a role for the label and even the concept, but I wanted to take a stab at what it is and isn’t, at least in principle.

So the big point to me is the word ‘learning’.  A number of people opined about accessing a how-to video, and let’s be clear: learning doesn’t have to come from that.   You could follow the steps and get the job done, yet have to access it again the next time the need arises. Just as I can look up the specs on the resolution of my computer screen, use that information, but have to look it up again next time.  So it could be just performance support, and that’s a good thing, but it’s not learning.  It suits the notion of micro content, but again, it’s about getting the job done, not developing new skills.

Another interpretation was little bits of components of learning (examples, practice) delivered over time. That is learning, but it’s not microlearning; it’s distributed learning, and the overall learning experience is macro (and much more effective than the massed, event-based model).  Again, a good thing, but not (to me) microlearning.  This is what Will Thalheimer calls subscription learning.

So, then, if these aren’t microlearning, what is?  To me, microlearning has to be a small but complete learning experience, and this is non-trivial.  To be a full learning experience, it requires a model, examples, and practice.  This could work with very small learnings (I use an example of media roles in my mobile design workshops).  I think there’s a better model, however.

To explain, let me digress. When we create formal learning, we typically take learners away from their workplace (physically or virtually) and then create contextualized practice. That is, we may present concepts and examples (ideally beforehand via a blended approach, or less effectively within the learning event), and then we create practice scenarios. This is hard work. Another alternative is more efficient.

Here, we layer the learning on top of the work learners are already doing.  Now, why isn’t this performance support? Because we’re not just helping them get the job done; we’re explicitly turning it into a learning event by not only scaffolding the performance, but also layering on a minimal amount of conceptual material that links what they’re doing to a model. We (should) do this in examples and feedback on practice; now we can do it around real work. We can because (via mobile or instrumented systems) we know where learners are and what they’re doing, and we can build content to take advantage of that.  It’s always been a promise of performance support systems that they could deliver learning on top of helping the outcome, but it’s as yet seldom seen.
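
To make that concrete, here’s a minimal, purely hypothetical sketch (in Python) of the layering idea: given a task the system knows the learner is performing, it serves the performance-support step and, for a learner in learning mode, appends a small piece of conceptual content tying the step to the underlying model. The task registry, content, and names here are all illustrative assumptions, not any particular product’s design.

```python
# Hypothetical sketch of layering learning on top of performance support.
# All task IDs, content, and names are illustrative.

TASK_SUPPORT = {
    "qualify_lead": {
        "step": "Confirm the customer's budget and timeline before demoing.",
        "model_link": (
            "Why: qualifying maps the customer's current status onto the "
            "sales model, so you can adapt the pitch rather than recite it."
        ),
    },
}

def support_response(task_id: str, learning_mode: bool) -> str:
    """Return the job aid for a detected task; in learning mode, layer on
    minimal conceptual content linked to the model, turning performance
    support into a small but complete learning moment."""
    entry = TASK_SUPPORT[task_id]
    if learning_mode:
        return entry["step"] + "\n" + entry["model_link"]
    return entry["step"]  # pure performance support: just get the job done

print(support_response("qualify_lead", learning_mode=True))
```

The design point is the toggle: the same contextual delivery serves pure performance support or microlearning, depending on whether the conceptual layer is added.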

And the focus on minimalism is good, too.  We overwrite and overproduce, adding in lots that’s not essential.  Cf. Carroll’s Nurnberg Funnel or Moore’s Action Mapping.  And even for non-mobile, minimalism makes sense (as I tout under the banner of the Least Assistance Principle).  That is, it’s really not rude to ask people (or yourself, as a designer) “what’s the least I can do for you?”  Because that’s what people generally prefer: give me the answer and let me get back to work!

Microlearning as a phrase has probably become current (he says, cynically) because elearning providers are touting it to sell the ability of their tools to deliver to mobile.   But it can also be a watchword to emphasize thinking about performance support, learning ‘in context’, and minimalism.  So I think we may want to continue to use it, but I suggest it’s worthwhile to be very clear what we mean by it. It’s not courses on a phone (that’s mobile elearning), and it’s not spaced-out learning; it’s small but useful full learning experiences that fit, by size of objective or context, ‘in the moment’.  At least, that’s my take; what’s yours?

8 April 2015

Starting from the end

Clark @ 8:20 am

Week before last, Will Thalheimer and I had another one of our ‘debates’, this time on the Kirkpatrick model (read the comments, too!).  We followed up last week with a live debate.  And in the course of it I said something that I want to reiterate and extend.

The reason I like the Kirkpatrick model is that it emphasizes one thing I see the industry failing to do.  Properly applied (see below), it starts with the measurable change you need to see in the organization, and you work backwards from there: back to the behavior change you need in the workplace to move that measure, and from there to the changes in training and/or resources that will create that behavior change.  The important point is starting with a business metric.  Not ‘we need a course on this’, but instead: “what business goal are we trying to impact?”

Note: the solution can just be a tool; it doesn’t always have to be learning.  For example, if what people need to access accurately are the specific product features of one of a multitude of solutions in rapid flux (financial packages, electronic hardware, …), trying to keep that information accurately ‘in the head’ is an exercise in futility; you’re better off putting it ‘in the world’.  (Which is why I want to change from Learning & Development to Performance & Development: it’s not about learning, it’s about doing!)

The problems with Kirkpatrick are several.  For one, even he admitted he numbered it wrong: the starting point is numbered ‘four’, which misleads people.  So we get the phenomenon where people do stage 1, sometimes stage 2, rarely stage 3, and stage 4 almost never, according to ATD research.  And stage 1, as Will rightly points out, is essentially worthless, because the correlation between what learners think of the learning and the actual impact is essentially zero!  Finally, too often Kirkpatrick is wrongly treated as evaluating only training (even the language on the site, as the link above will show you, talks only about training). It should be about the impact of an intervention, whatever the means (see above).  And impact is what the Kirkpatrick model properly is about, as I opined in the blog debate.

So, in the live debate, I said I’d be happy with any other model that focused on working backwards. And I was reminded that, well, I proposed just that a while ago!  The blog post is the short version, but I also wrote this rather longer and more rigorous paper (PDF), and I’m inclined to think it’s one of my more important contributions to design (to date ;). It’s a fairly thorough look at the design process, where we go wrong (owing to our cognitive architecture), and a proposal for an alternative approach based upon sound principles.   I welcome your thoughts!

7 April 2015

Labeling 70:20:10

Clark @ 8:42 am

In the Debunker Club, a couple of folks went off on the 70:20:10 model, and it prompted some thoughts.  I thought I’d share them.

If you’re not familiar with 70:20:10, it’s a framework for thinking about workplace learning that suggests we need to recognize that the opportunity is about much more than courses. If you ask people how they learned to do the things they do in the workplace, the responses suggest that somewhere around 10% came from formal learning, 20% from informal coaching and such, and about 70% from trial and error.  Note the emphasis on the fact that these numbers aren’t exact; they’re just an indication (though considerable evidence, from a variety of sources, suggests that the contribution of formal learning is somewhere between 5 and 20%).

Now, some people complain that the numbers can’t be right, since no real measurement comes out in such perfectly round figures. To be fair, they’ve been fighting against the perversion of Dale’s Cone, where someone bolted on bogus numbers that have permeated learning for decades and can’t seem to be exterminated. It’s like zombies!  So I suspect they’re overly sensitive to round numbers.

And I like the model!  I’ve used it to frame some of my work, as a framework to think about what else we can do to support performance: coaching and mentoring, facilitating social interaction, providing challenge goals, supporting reflection, etc.  And again to justify accelerating organizational outcomes.

The retort I hear is that “it’s not about the numbers”, and I agree.  It’s just a tool to help shake people out of the thought that a course is the only solution to all needs.  And, outside the learning community, people get it.  I have heard that, over presentations to hundreds of audiences of executives and managers, they all recognize that the contributions to their success came largely from sources other than courses.

However, if it’s not about the numbers, maybe calling it the 70:20:10 model is a problem.  I really like Jane Hart’s diagram about Modern Workplace Learning as another way to look at it, though I really want to go beyond learning, too.  Performance support may achieve outcomes in ways that don’t require or deliver any learning, and that’s okay: there are times when it’s better to have knowledge in the world than in the head.

So, I like the 70:20:10 framework, but recognize that the label may be a barrier. I’m just looking for any tools I can use to help people start thinking ‘outside the course’.  I welcome suggestions!

2 April 2015

Measurement?

Clark @ 10:55 am

Sorry for the lack of posts this week: Monday was shot while I migrated my old machine to a new one (yay!), Tuesday was shot with catching up, and Wednesday was shot with lost internet and trying to migrate the lad to my old machine.  So today I realize I haven’t posted all week (though you got extra from me last week ;).  So here’s one reflection on the conference last week.

First, if you haven’t seen it, you should check out the debate I had with the good Dr. Will Thalheimer over at his blog about the Kirkpatrick model.  He’s upset with it because it’s not permeated by learning, and I argue that its role is impact, not learning design (see my diagram at the end).  Great comments, too! We’ll be doing a hangout on it on Friday the 3rd of April.

The other interesting thing that happened: on the first day I was cornered three times for deep conversations on measurement. This is a good thing, mostly, but one in particular was worth a review.  The discussion in that last one centered on whether measurement was needed for most initiatives, and I argued yes, but with a caveat.

There was an implicit assumption that for many things measurement isn’t needed. In particular, for informal learning, when we’ve got folks successfully developed as effective self-learners and a good culture, we don’t need to measure. And I agree, though we might want to track (via something like the xAPI) to see what things are effective or not.
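
As a sketch of what such tracking might look like, here’s a minimal xAPI statement (actor, verb, object) posted to a Learning Record Store. The LRS URL, credentials, and activity IDs below are placeholders, not a real endpoint; in practice the verbs and activity IDs would come from your own vocabulary.

```python
# Minimal sketch: recording that someone used a resource, via xAPI.
# The endpoint, credentials, and IDs are placeholders.
import requests

statement = {
    "actor": {"mbox": "mailto:pat@example.com", "name": "Pat"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "http://example.com/resources/troubleshooting-guide",
        "definition": {"name": {"en-US": "Troubleshooting guide"}},
    },
}

response = requests.post(
    "https://lrs.example.com/xapi/statements",   # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("username", "password"),               # placeholder credentials
)
response.raise_for_status()
```

Aggregating statements like this over time is what would let you see which informal resources actually correlate with effective performance.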

However, I did still think that any formal intervention, whether courses, performance support, or even specific social initiatives, should be measured. First, how else are you going to tune it to get it right? Second, don’t you want to attach the outcome to the intervention? I mean, if you’re doing performance consulting, there should be a gap you’re trying to address, or why are you bothering?  If there is a gap, you have a natural metric.

I am pleased to see the interest in measurement, and I hope we can start getting some conceptual clarity, some good case studies, and really help make our learning initiatives into strategic contributions to the organization.  Right?

24 March 2015

Tech Limits?

Clark @ 8:26 am

A couple of times last year, firms with some exciting learning tools approached me to talk about the market.  And in both cases, I had to advise them that there were some barriers they’d have to address. That was brought home to me in another conversation, and it makes me worry about the state of our industry.

So the first tool is based upon a really sound pedagogy that is consonant with my activity-based learning approach.  The basis is giving learners assignments very much like those they’ll need to accomplish in the workplace, and then resourcing them to succeed.  They wanted to make it easy for others to create these better learning designs (as part of a campaign for better learning). The only problem was that you had to learn the design approach as well as the tool. Their interface wasn’t ready for prime time, but the real barrier was getting people to adopt a new tool. I indicated some of the barriers, and they’re reconsidering (while continuing to develop content against this model as a service).

The second tool supports virtual role plays in a powerful way, with smart agents that react authentically. They, too, wanted to provide an authoring tool to create these experiences.  And again, my realistic assessment of the market was that people would have trouble understanding the tool.  They decided to continue developing the experiences as a service.

Now, these are somewhat esoteric designs, though the former should be the basis of our learning experiences, and the latter would be a powerful addition supporting a very common and important type of interaction.  The more surprising, and disappointing, issue came up in a conversation earlier this year with a proponent of a more familiar tool.

Without being specific (I’ve not received permission to disclose the details of any of the above), this person indicated that when training a popular and fairly straightforward tool, the biggest barrier wasn’t the underlying software model. I was expecting that too much of the training was based upon rote assignments without an underlying model, and that is the case, but there was a more fundamental barrier: too many potential users just didn’t have sufficient computer skills!  And I’m not talking about programming code; fundamental understandings of files and ‘styles’ and other core computing concepts just weren’t present in sufficient quantities in these would-be authors. Seriously!

Now, I’ve complained before that we’re not taking learning design seriously, but obviously the problem is compounded by a lack of fundamental computer skills.  Folks, this is elearning, not chalk-and-talk.  If you struggle to add new apps to your computer, or to find files, you’re not ready to be an elearning developer.

I admit that I struggle to see how folks can assume that, without knowledge of design or knowledge of technology, they can still be elearning designers and developers. These tools are scaffolding that allows your designs to be developed. They don’t do the design, nor will they magically cover for a lack of tech literacy.

So, let’s get realistic.  Learn about learning design, and get comfortable with tech, or please, please, don’t do elearning.  And I promise not to do music, architecture, finance, and everything else I’m not qualified to do.  Fair enough?

18 March 2015

Giving it away, or worse

Clark @ 7:58 am

The other day, I was wondering about the possibility of removing mandatory courses.  Ok, maybe not mandated compliance courses, but any others.  And then a colleague took it further, and I like it.  So what are we talking about?

I was thinking that, if you give people a meaningful mission (à la Dan Pink’s Drive), learners (assuming reasonable self-learning skills, a separate topic) would take responsibility for the learning they needed.  We could have courses around, or await their desires and point them to outside resources, etc., unless the content is specifically internal.  That is, we become much more pull (from the user) than push (from us).

However, my colleague Mark Britz took it further.  He argued that instead of not making them go, we’d charge them what it costs to provide the learning!  That is, if folks wanted training or webinars or…, they’d pay for the privilege.  As he put it, if people started making fewer requests for elearning, being cautious about signing up, and so on: “I couldn’t be happier!”

His point is that it would drive people to more workflow learning, more social and shared learning, etc.  And that’s a good thing.   I might couple that with some way to make sure they knew how to work, play, and learn well together, but it’s the different view that provides a needed jumpstart.

It’s a refreshing twist on the ‘if we build it, it is good’ mentality, and it really helps focus the L&D unit on doing things that will significantly improve outcomes for others.  If you can make a meaningful impact, people will be willing to pay for your assistance.  You want change?  You’ll pay, but it’ll be worth it.

If we’re going to kick off a revolution, we need to rethink what we’re about and how we’re doing it.  Mark’s upended view is a necessary kick in the status quo to get us to think anew about what we’re doing and why.

I recommend you read his original post.

3 March 2015

On the road again

Clark @ 7:42 am

Well, some more travels are imminent, so I thought I’d update you on where the Quinnovation road show would be on tour this spring:

  • March 9-10 I’ll be collaborating with Sarah Gilbert and Nick Floro to deliver ATD’s mLearnNow event in Miami on mobile
  • On the 11th I’ll be at a private event talking the Revolution to a select group outside Denver
  • Come the 18th I’ll be inciting the revolution at the ATD Golden Gate chapter meeting here in the Bay Area
  • On the 25th-27th, I’ll be in Orlando again instigating at the eLearning Guild’s Learning Solutions conference
  • May 7-8 I’ll be kicking up my heels about the revolution for the eLearning Symposium in Austin
  • I’ll be stumping for the revolution at another vendor event in Las Vegas, May 12-13
  • And June 2-3 I’ll be myth-smashing for ATD Atlanta, and then workshopping game design

So, if you’re at one of these, do come up and introduce yourself and say hello!

25 February 2015

mLearning more than mobile elearning?

Clark @ 6:17 am

Someone tweeted about their mobile learning credo, and mentioned the typical ‘mlearning is elearning, extended’ view, which I rejected, as I believe mlearning is much more (and so should elearning be).  And then I thought about it some more.  So I’ll lay out my thinking, and see what you think.

I have been touting that mLearning could and should be focused, as should P&D, on anything that helps us achieve our goals better. Mobile, paper, computers, voodoo, whatever technology works.  Certainly in organizations.  And this yields some interesting implications.

So, for instance, this would include performance support and social networks.  Anything that requires understanding how people work and learn would be fair game. I was worried about whether that fit some operational aspects like IT and manufacturing processes, but I think I’ve got that sorted: UI folks would work on external products and any internal software development, but beyond that, helping folks use tools and processes belongs to those of us who facilitate organizational performance and development.  So we, and mlearning, are about any of those uses.

But the person, despite seeming to come from a vendor to organizations, not schools, could be talking about schools instead, and I wondered whether mLearning for schools, definitionally, really is only about supporting learning.  And I can see the case for that: mlearning in education is about using mobile to help people learn, not perform.  It’s about collaboration, for sure, and tools to assist.

Note I’m not making the case for schools as they are; a curriculum rethink definitely needs to accompany using technology in schools in many ways.  Koreen Pagano wrote this nice post separating Common Core teaching from assessment, which goes along with my beliefs about the value of problem solving.  And I also laud Roger Schank‘s views, such as his classic example questioning the value (or not) of teaching the binomial theorem.

But then, mobile should be a tool in learning, so it can work as a channel for content, but also for communication, capture, and compute (the 4C’s of mlearning).  And there’s the emergent capability of contextual support (the 5th C, arising from combinations of the first four).  So this view would argue that mlearning can be used for performance support in accomplishing a meaningful task that’s part of a learning experience.

That takes me back to mlearning being more than just mobile elearning, as Jason Haag has aptly distinguished.  Sure, mobile elearning can be a subset of mlearning, but it’s not the whole picture. Does this make sense to you?

11 February 2015

Rethinking Redux

Clark @ 9:04 am

Last week I wrote about Rethinking, how we might want and need to revise our approaches, and showed a few examples of folks thinking out of the box and upending our cherished viewpoints.  I discovered another one (much closer to ‘home’) and tweeted it out, only to get a pointer to yet another.  I think it’s worth looking at these two examples, which help make the point that maybe it’s time to rethink some of our cherished beliefs and practices.

The first was a pointer from a conversation I had with the proprietor of an organization with a new mobile-based coaching engine.  Among the things touted was that much of our thinking about feedback appears to be wrong.  I was given a reference and found an article that indeed upends our beliefs about the benefits of feedback.

The article investigates performance reviews, and finds them lacking, citing one study that found:

“a meta-analysis of 607 studies of performance evaluations and concluded that at least 30% of the performance reviews ended up in decreased employee performance.”

Thirty percent ending up in decreased performance?  And that’s not counting the others that are merely neutral.  That’s a pretty bad outcome!  Worse, the Society for Human Resource Management is cited as stating  “90% of performance appraisals are painful and don’t work“.  In short, one of the most common performance instruments is flawed.

As a consequence of tweeting this out, a respondent pointed to another article he was reminded of.  This one upends the notion that we’re good at rating others’ behavior: “research has demonstrated that each of us is a disturbingly unreliable rater of other people’s performance”.  That is, 360-degree reviews, manager reviews, etc., are fundamentally based upon review by others, and we’re demonstrably bad at it.  The responses given have reliable biases that make the data invalid.

As a consequence, again, we cannot continue as we are:

“we must first stop, take stock, and admit to ourselves that the systems we currently use to reveal our people only obscure them”

This is just like learning styles: there’s no reliable data that they work, and the measurement instruments used are flawed. In short, one of the primary tools for organizational improvement is fundamentally broken.  We’re using industrial-age tools in an information age.

What’s a company to do?  The first article quoted Josh Bersin as saying “companies need to focus very heavily on ‘collaboration, professional development, coaching and empowering people to do great things’“.  This is the message of the Internet Time Alliance and an outflow of the Coherent Organization model and the L&D Revolution.  There are alternatives that are more respectful of how people really think, work, and learn, and consequently more effective.  Are you ready to rethink?

#itashare
