Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

21 April 2015

Why models matter

Clark @ 7:52 am

In the industrial age, you really didn't need to understand why you were doing what you were doing; you were just supposed to do it.  At the management level, you supervised behavior, but you didn't really set strategy. It was only at the top level that you used the basic principles of business to run your organization.  That was then; this is now.

Things are moving faster, competitors can counter your advances in months, there's ever more information, and none of this is slowing down.  You really need to be more agile to deal with uncertainty, and you need to continually innovate.  And I want to suggest that this advantage comes from having a conceptual understanding, a model of what's happening.

There are responses we can train: specific ways of acting in context.  These aren't what's most valuable any more.  Experts, with vast experience responding in different situations, abstract models that guide what they do, consciously or unconsciously (the latter is a problem, as it makes the models harder to get at: experts can't tell you about 70% of what they actually do!).  Most people, however, are in the novice-to-practitioner range, and they're not necessarily ready to adapt to changes unless we prepare them.

What gives us the ability to react is having models that explain the underlying causal relations as best we understand them, and then support in applying those models in different contexts.  If we have models, and see how those models guide performance in context A, then B, and then we practice applying them in contexts C and D (with model-based feedback), we gradually develop a more flexible ability to respond. It's not subconscious, as it is for experts, but we can figure it out.

So, for instance, if we have the rationale behind a sales process, how it connects to the customer's needs and current situation, we can adapt it to different customers.  If we understand the mechanisms of medical contamination, we can adapt to new vectors.  If we understand the structure of a cyber system, we can anticipate security threats. The point is that making inferences from models is a more powerful basis than trying to adapt a rote procedure without knowing the basis.

I recognize that I talk a lot in concepts, e.g. these blog posts and diagrams, but there's a principled reason: I'm trying to give you a flexible basis, models, to apply to your own situation.  That's what I do in my own thinking, and it's what I apply in my consulting.  I am a collector of models, so that I have more tools to apply to solving my own or others' problems.  (BTW, I use concept and model relatively interchangeably, if that helps clarify anything.)

It’s also a sound basis for innovation.  Two related models (ahem) of creativity say that new ideas are either the combination of two different models or an evolution of an existing one.  Our brains are pattern matchers, and the more we observe a pattern, the more likely it will remind us of something, a model. The more models we have to match, the more likely we are to find one that maps. Or one that activates another.

Consequently, it's also one of the things I push as a key improvement to learning design. In addition to meaningful practice, give the concept behind it, the why, in the form of a model. I encourage you to look for the models behind what you do, the models in what you're presented, and the models in what your learners are asked to do.

It’s a good basis for design, for problem-solving, and for learning.  That, to me, is a big opportunity.

15 April 2015

Cyborg Thinking: Cognition, Context, and Complementation

Clark @ 8:25 am

I'm writing a chapter about mobile trends, and one of the things I'm concluding with is the different ways we need to think to take advantage of mobile. The first one emerged as I wrote and kind of surprised me, but I think there's merit to it.

The notion is one I've talked about before: what our brains do well and what mobile devices do well are complementary. That is, our brains are powerful pattern matchers, but have a hard time remembering rote information, particularly arbitrary or complicated details.  Digital technology is the exact opposite. So that complementation, whenever and wherever we are, is quite valuable.

Consider chess.  When computers first played against humans, they didn't do well.  As computers became more powerful, however, they finally beat the world champion. But they didn't do it the way humans do; they did it by very different means: they couldn't evaluate positions as well, but they could calculate many more turns ahead and use simple heuristics to determine whether those lines were good plays.  The sheer computational ability eventually trumped the familiar pattern approach.  Now, however, there's a new type of competition, where a person and a computer team up to play against another such team. The interesting result is that the winner is neither the best chess player nor the best computer program, but the player who knows best how to leverage a chess companion.
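To make that contrast concrete, here's a minimal sketch (in Python) of the machine's approach: search deep and score the leaves with a cheap heuristic. The Position methods used here (material_balance, is_terminal, legal_moves, apply) are hypothetical stand-ins for a real engine's internals, not any particular program's API.

```python
# Minimal minimax sketch: depth beats pattern matching. The Position
# methods are hypothetical stand-ins, not a real chess library's API.

def evaluate(position):
    # A deliberately simple heuristic: just count material.
    return position.material_balance()

def minimax(position, depth, maximizing):
    # At the search horizon, fall back on the cheap heuristic.
    if depth == 0 or position.is_terminal():
        return evaluate(position)
    scores = [minimax(position.apply(move), depth - 1, not maximizing)
              for move in position.legal_moves()]
    # The machine's edge is depth: it can afford to examine every
    # branch several turns ahead, where a human relies on patterns.
    return max(scores) if maximizing else min(scores)
```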

Now map this to mobile: we want to design the best complement for our cognition. We want to end up having the best cyborg synergy, where our solution does the best job of leaving to the system what it does well, and leaving to the person the things we do well. It’s maybe only a slight shift in perspective, but it is a different view than designing to be, say, easy to use. The point is to have the best partnership available.

This isn’t just true for mobile, of course, it should be the goal of all digital design.  The specific capability of mobile, using sensors to do things because of when and where we are, though, adds unique opportunities, and that has to figure into thinking as well.  As does, of course, a focus on minimalism, and thinking about content in a new way: not as a medium for presentation, but as a medium for augmentation: to complement the world, not subsume it.

It’s my thinking that this focus on augmenting our cognition and our context with content that’s complementary is the way to optimize the uses of mobile. What’s your thinking?

14 April 2015

Defining Microlearning?

Clark @ 8:32 am

Last week on the #chat2lrn twitter chat, the topic was microlearning. It was apparently prompted by this post by Tom Spiglanin which does a pretty good job of defining it, but some conceptual confusion showed up in the chat that makes it clear there’s some work to be done.  I reckon there may be a role for the label and even the concept, but I wanted to take a stab at what it is and isn’t, at least on principle.

So the big point to me is the word 'learning'.  A number of people opined about accessing a how-to video, and let's be clear: learning doesn't have to come from that.  You could follow the steps and get the job done, and yet have to access it again the next time the need arises. Just like I can look up the specs on the resolution of my computer screen, use that information, but have to look it up again next time.  So it could be just performance support, and that's a good thing, but it's not learning.  It suits the notion of micro content, but again, it's about getting the job done, not developing new skills.

Another interpretation was little bits of components of learning (examples, practice) delivered over time. That is learning, but it's not microlearning. It's distributed learning, but the overall learning experience is macro (and much more effective than the massed, event-based model).  Again, a good thing, but not (to me) microlearning.  This is what Will Thalheimer calls subscription learning.

So, then, if these aren’t microlearning, what is?  To me, microlearning has to be a small but complete learning experience, and this is non-trivial.  To be a full learning experience, this requires a model, examples, and practice.  This could work with very small learnings (I use an example of media roles in my mobile design workshops).  I think there’s a better model, however.

To explain, let me digress. When we create formal learning, we typically take learners away from their workplace (physically or virtually), and then create contextualized practice. That is, we may present concepts and examples (pre- via blended, ideally, or less effectively in the learning event), and then we create practice scenarios. This is hard work. Another alternative is more efficient.

Here, we layer the learning on top of the work learners are already doing.  Now, why isn’t this performance support? Because we’re not just helping them get the job done, we’re explicitly turning this into a learning event by not only scaffolding the performance, but layering on a minimal amount of conceptual material that links what they’re doing to a model. We (should) do this in examples and feedback on practice, now we can do it around real work. We can because (via mobile or instrumented systems) we know where they are and what they’re doing, and we can build content to do this.  It’s always been a promise of performance support systems that they could do learning on top of helping the outcome, but it’s as yet seldom seen.
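As a thought experiment (a sketch of the idea only: the task ids, content, and deliver function are invented, not any real system), the mechanics might look something like this: a detected work task triggers the performance support, plus a small model-based layer on top.

```python
# Hypothetical sketch: layering learning on top of real work. A task
# event (from a mobile sensor or an instrumented system) triggers both
# performance support and a small conceptual nugget tied to a model.

NUGGETS = {
    # task id -> (performance support, model-based learning layer)
    "close_sale": (
        "Checklist: confirm need, budget, and timeline.",
        "Why: each step maps to a stage in the customer's decision "
        "process, so skipping one risks losing the deal.",
    ),
}

def deliver(content):
    print(content)  # stand-in for a push notification or in-app panel

def on_task_detected(task_id):
    support, model_layer = NUGGETS.get(task_id, (None, None))
    if support:
        deliver(support)      # gets the job done: performance support
    if model_layer:
        deliver(model_layer)  # links the doing to a model: learning

on_task_detected("close_sale")
```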

And the focus on minimalism is good, too.  We overwrite and overproduce, adding in lots that's not essential.  Cf. Carroll's Nurnberg Funnel or Moore's Action Mapping.  And even for non-mobile, minimalism makes sense (as I tout under the banner of the Least Assistance Principle).  That is, it's really not rude to ask people (or yourself as a designer) "what's the least I can do for you?"  Because that's what people generally really prefer: give me the answer and let me get back to work!

Microlearning as a phrase has probably become current (he says, cynically) because elearning providers are touting it to sell the ability of their tools to now deliver to mobile.  But it can also be a watchword to emphasize thinking about performance support, learning 'in context', and minimalism.  So I think we may want to continue to use it, but I suggest it's worthwhile to be very clear what we mean by it. It's not courses on a phone (mobile elearning), and it's not spaced-out learning; it's small but complete learning experiences that fit, by size of objective or context, 'in the moment'.  At least, that's my take; what's yours?

25 March 2015

Tom Wujec #LSCon Keynote Mindmap

Clark @ 7:02 am

Tom Wujec gave a discursive and well-illustrated talk about how changes in technology are changing industry, ultimately homing in on creativity.  Despite a misstep in mentioning Kolb's invalid learning styles instrument, it was entertaining and intriguing.

[Mindmap image]

10 March 2015

Design Thinking?

Clark @ 6:14 am

There's been quite a bit of flurry about Design Thinking of late (including the most recent #lrnchat), and I'm trying to get my head around what's unique about it.  The Wikipedia entry linked above helps clarify the intent, but is there any there there?

It helps to understand that I've been steeped in design approaches since at least the '80s. Herb Simon's Sciences of the Artificial argued, essentially, that design is the quintessential human activity. And my grad school experience was in a research lab focused on interface design.  Process was critical, and when I was subsequently teaching interface design, I was tracking new initiatives like situated design and participatory design, anthropological efforts designed to get closer to the 'customer'.

In addition to being somewhat obsessive about learning how people learn, and as a confirmed geek continually exploring new technology, I also got interested in design processes beyond interface design. As my passion was designing learning technology solutions to meet real needs, I explored other design approaches to look for universals.  Along the way I looked at industrial, graphic, architectural, software, and other design disciplines.  I also read the psychological research on our cognitive limitations and design approaches.  (I made a small bit of my career out of bringing the advances of HCI, which was further along in process, to ed tech.)

The reason I mention this is that the elements of Design Thinking (being open-minded, diverging before converging, using teams, empathy for the customer, etc.) all strike me as just good design. It's not obvious to me whether it gets into the nuances (e.g. the steps in the Wikipedia article don't let me see whether they do things like ensure that everyone takes time to brainstorm on their own before coming together, an important step to prevent groupthink), but at the granularity I've seen, it seems quite good.  You mean everyone isn't already both aware of and using this?  Apparently not.

So in that respect, Design Thinking is a win.  If adding a label to a systematized compendium of good practices will raise awareness, I’m all for it.  And I’m willing to have my consciousness raised that there’s more to it, because as a proponent of design, I’m glad to see that folks are taking steps to help design get better and will be thrilled if it adds something new.


17 February 2015

Engage, yea or nay?

Clark @ 8:20 am

In a recent chat, a colleague I respect said the word 'engagement' was anathema.  This surprised me, as I've been quite outspoken about the need for engagement (for one small example, I wrote a book about it!).  It may be that the conflict is definitional, for it appeared that my colleague and another respondent viewed engagement as bloating the content, and that's not what I mean at all. So I thought I'd lay out what I mean when I say engaging, and why I think it's crucial.

Let's be clear what I don't mean.  If you think engagement means adding in extra stuff, we're using very different definitions of engagement.  It's not about tarting up uninteresting stuff with 'fun' (e.g. racing-themed window dressing on a knowledge test).  It's not about putting in unnecessary unrelated imagery, sounds, or anything else.  Heck, the research of Dick Mayer at UCSB shows this actually hinders learning!

So what do I mean?  For one thing, stripping away anything 'nice to have' or unnecessary.  Lean is engaging!  You have to focus on what will really help the learners, and present it in ways that they get.  And then work on that 'in ways that they get' bit.

You need contextualized practice.  Engaging is making the context meaningful to the learners.  You need contextualization (e.g. John Bransford's research on anchored instruction), but arbitrary contextualization isn't as good as intrinsically interesting contexts.  This isn't window dressing, since you need to be doing it anyway, but do it. And in a minimal style (as de Saint-Exupéry said: "Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away…").

You want compelling examples. We know that examples lead to better learning (à la, for instance, John Sweller's work on cognitive load), but again, making them meaningful to the learners is critical. This isn't window dressing, as we need them, but they're better if they're well told as intrinsically interesting stories.

Finally, we need to introduce the learning.  Too often we do this in ways that the learner doesn’t get the WIIFM (What’s In It For Me).  Learners learn better when they’re emotionally open to the content instead of uninterested. This may be a wee bit more, but we can account for this by getting rid of the usual introductory stuff.  And it’s worth it.

Now, let's be clear, this is for when we've deemed formal learning necessary. When the audience is practitioners who know what they need and why it's important, then giving them 'just the facts', performance support, is sufficient.  But if it's new skills they need, when you need a learning experience, then you want to make it engaging. Not extrinsically, but intrinsically.  And that's not more in quantity, it's not bloated; it's more in quality: minimal in content and maximal in immersion.

Engaging learning is a good thing, a better thing than not, the right thing.  I’m hoping it’s just definitional, because I can’t see the contrary argument unless there’s confusion over what I mean.  Anyone?

10 February 2015

The Grail of Effective and Engaging Learning Experiences

Clark @ 8:08 am

There's a considerable gap between what we could be doing and what we are doing.  When you look at what's out there, we see that there are several ways in which we fall short of the mark.  While there are many dimensions that could be considered, for the sake of simplicity let's characterize the two important ones as the effectiveness of our learning and the engagement of the experience.  And I want to characterize where we are, where we could be, and the gaps we need to bridge.

[Diagram: effectiveness vs. engagement]

If we map the space, we see that the lower left is the space of low engagement and low effectiveness.  Too much elearning resides there.  Now, to be fair, it's easy to add engaging media and production values, so the space of typical elearning does span from low to high engagement. Moving up the diagram, however, towards increasing effectiveness, is an area that's less populated.  The red line separates the undesirable areas from the space we'd like to start hitting, where we begin to have some modicum of both effectiveness and engagement, moving towards the upper right.  This space is relatively sparsely populated, I'm afraid.  And while there are instances of content that do increase the effectiveness, there's little that really hits the ultimate goal, the holy grail, where a fully integrated effective and engaging experience is achieved.

How do we move in the right direction? I’ve talked before about trying to hit the sweet spot of maximal effectiveness within pragmatic constraints.  Certainly from an effectiveness standpoint, you should be looking at the components of the Serious eLearning Manifesto.  To get effective learning, you need a number of elements, for instance:

  • meaningful practice: practice aligned with the real world task
  • contextualized practice: learning across contexts that support transfer
  • sustained practice: sufficient and increasingly challenging practice to develop the skills to the necessary level
  • spaced practice: practice spread out over time (brains need sleep to learn more than a certain threshold; see the sketch after this list)
  • real world consequences providing feedback coupled with scaffolded reflection
  • model-based guidance: the best guide for practice is a conceptual basis (not rote information)
  • appropriate examples: that show the concepts being applied in context
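To make 'spaced practice' concrete, here's a minimal sketch, assuming a simple expanding-interval schedule (the doubling gap is illustrative, not a validated prescription):

```python
# Minimal spaced-practice sketch: expanding intervals between practice
# sessions. The doubling schedule is illustrative only.
from datetime import date, timedelta

def practice_schedule(start, sessions=5, first_gap_days=1):
    """Yield practice dates with a gap that doubles each session."""
    when, gap = start, first_gap_days
    for _ in range(sessions):
        yield when
        when += timedelta(days=gap)
        gap *= 2  # spread the practice out over time

for day in practice_schedule(date(2015, 2, 10)):
    print(day)  # 02-10, 02-11, 02-13, 02-17, 02-25
```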

Some of these elements also contribute to engagement, along with others.  Components include:

  • learning-centered contexts: problems learners recognize as important
  • learner-centered contexts: problems learners want to solve
  • emotionally engaging introductions: hooking learners in viscerally as well as cognitively
  • adapted challenge: ramping up the challenge appropriately to avoid both boredom and frustration
  • unpredictability: maintaining the learner’s attention through surprise
  • meaningfulness: learners playing roles they want to be in
  • drama and/or humor

The integration of these elements was the underlying premise behind Engaging Learning, my book on integrating effectiveness and engagement, specifically on making meaningful practice, e.g. serious games.  Serious games are one way to achieve this end, by contextualizing practice as decisions in a meaningful environment and using a game engine to adapt the challenge and providing essentially unlimited practice.

Other approaches achieve much of this effectiveness in different ways. Branching scenarios are powerful approximations to this by showing consequences in context but with limited replay, and so are constructivist and problem-based learning pedagogies. This may sound daunting, but with practice, and some shortcuts, this is doable.

For example, Socratic Arts has a powerful online pedagogy that leverages media and a constructivist pedagogy in a relatively simple framework. The learner is given 'assignments' that mirror real world tasks, via emails or videos of characters playing roles such as a boss.  The outputs required similarly mimic work products you might find in this area. Scaffolding is available in a couple of ways: there are guidelines about the assignments, and videos of experts and documents are available as resources, to support the learner in getting the best outcome.  While it's low on fancy visual design, it's effective because it's closely aligned to the needed skills post-learning.  And the cognitive challenge is pitched at the right level to engage the intellect, if not the aesthetics.  This is a cost-effective balance.

The work I did with the Wadhwani Foundation hit a slightly different spot in trying to get to the grail.  I didn’t have the ability to work quite as tightly with the SMEs from the get-go, and we didn’t have the ability to simulate the hands-on tasks as well as we’d like,  but we did our best to infer real tasks and used low-tech simulations and scenarios to make it effective.  We did use more media, animations and contextualized videos, to make the experience more engaging and effective as well.

The point being that we can start making learning more effective and engaging in practical ways. We need to make it effective, or why bother?  We should make it engaging, to optimize the outcomes and not insult our learners. And we can.  So why don’t we?

5 February 2015

Agile Bay Area #LNDMeetup Mindmap

Clark @ 8:05 am

I’ve been interested in process, so I attended this month’s Bay Area Learning Design Meetup that showcased LinkedIn’s work on Agile using Scrum for learning design. It was very nice of them to share the specifics of their process, and while there were more details than time permitted to cover, it was a great beginning to understand the differences.

Basically, a backlog is kept of potential new projects.  They’re prioritized and a subset is chosen as the basis of the sprint and put on the board.  Then for two weeks they work on hitting the elements on the board, with a daily standup meeting to present where they’re at and synchronize.  At the end they demo to the stakeholders and reflect.  As part of the reflection, they’re supposed to change something for the next iteration.
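As a rough sketch (the names and numbers here are hypothetical illustrations, not LinkedIn's actual tooling), that loop might be modeled like this, using the roughly 100-story-point capacity they mention below:

```python
# Rough sketch of the sprint loop as described; all project names and
# estimates are hypothetical.

def daily_standup(board):
    pass  # each person reports where they're at and what's blocked

def demo_to_stakeholders(board):
    pass  # show the sprint's outputs and gather feedback

def run_sprint(backlog, capacity=100):
    """One two-week iteration: plan, work, demo, reflect."""
    # Planning: fill the board from the prioritized backlog.
    board, used = [], 0
    for item, points in backlog:
        if used + points <= capacity:
            board.append(item)
            used += points
    for _day in range(10):          # two working weeks
        daily_standup(board)
    demo_to_stakeholders(board)
    # Reflection: commit to changing one thing next iteration.
    return board, "timebox standups to ten minutes"

board, change = run_sprint([("compliance course", 60),
                            ("sales job aid", 25),
                            ("onboarding video", 30)])
print(board)  # ['compliance course', 'sales job aid']: 85 of 100 points
```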

There are different roles: a product owner, who's the 'client' in a sense (and has a relationship to whoever may be the end client); a Scrum master, who's responsible for facilitating the group through the steps; and then the team, which should be small but at least represent all the roles necessary to execute whatever is being accomplished.

When I asked about scope, they said that they’ve found they can do about 100 story points (which are empirical) in a sprint, and they may distribute that across some elearning, some job aids, whatever.  They didn’t seem too eager to try to quantify that relative to other known metrics, and I understand it’s hard, particularly in the time they had.  Here’s the Mindmap:

[Mindmap image]


Allen Interactions also discussed their SAM process (which I know and like), but the mindmap didn't match too well to their usual diagram (only briefly shown at the end), and I ran out of time trying to remedy it. It's better just to look at the diagram ;).


14 January 2015

It’s the process, silly!

Clark @ 8:32 am

So yesterday I went off on some of the subtleties in elearning that are being missed.  This is tied to last week's posts about how we're not treating elearning seriously enough.  And part of it is in the knowledge and skills of the designers, but it's also in the process. Or, to put it another way, we should be using steps and tools that align with the type of learning we need. And I don't mean ADDIE is the problem, at least not inherently.

So what do I mean?  For one, I'm a fan of Michael Allen's Successive Approximation Model (SAM), which iterates several times (tho' heuristically, and it could be better tied to a criterion).  Given that people are far less predictable than, say, concrete, fields like interface design have long known that testing and refinement need to be included.  ADDIE isn't inherently linear, certainly as it has evolved, but in many ways it makes it easy to treat design as a one-pass process.

Another issue, to me, is to structure the format of your intermediate representations so that it's hard to do aught but come up with useful information.  So, for instance, in recent work I've emphasized that a preliminary output is a competency doc that includes (among other things) the objectives (and measures), models, and common misconceptions.  This has evolved from a similar document I use in (learning) game design.
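For illustration only (the example content is invented, and the fields are just my reading of the above, not a formal standard), such a competency doc might capture something like:

```python
# Hypothetical competency doc structure; fields follow the post's list
# (objectives and measures, models, misconceptions), content invented.
competency_doc = {
    "objective": "Defuse an irate customer call through to resolution",
    "measure": "Resolves 4 of 5 scenario calls without escalation",
    "models": ["stages of emotional de-escalation"],
    "misconceptions": [
        "Apologizing means admitting fault",
        "Explaining policy faster calms the caller",
    ],
}
```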

You then need to capture your initial learning flow. This is what Dick & Carey call your instructional strategy, but to me it’s the overall experience of the learner, including addressing the anxieties learners may feel, raising their interest and motivation, and systematically building their confidence.  The anxieties or emotional barriers to learning may well be worth capturing at the same time as the competencies, it occurs to me (learning out loud ;).

It also helps if your tools don't interfere with your goals.  It should be easy to create animations that help illustrate models (for the concept) and tell stories (for examples).  These can be any media tools, of course. The most important tools are the ones you use to create meaningful practice. These should allow you to create mini-, linear-, and branching-scenarios (at least).  They should support alternative feedback for every wrong answer. And they should support contextualizing the practice activity. Note that this does not mean tarted-up drill-and-kill with gratuitous 'themes' (race cars, game shows).  It means having learners make meaningful decisions and act on them in the ways they'd act in the real world (click on buttons for tech, choose dialog alternatives for interpersonal interactions, drag tools to a workbench or adjust controls for lab stuff, etc.).
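As a minimal sketch (data only; the scenario content is invented, and a real tool would wrap this in a player), a branching-scenario node with distinct feedback for every alternative might look like:

```python
# Hypothetical branching-scenario node: each option carries its own
# feedback, tied to the thinking behind it, and its own consequence
# (the next node), rather than one generic "Incorrect, try again".
node = {
    "situation": "The client balks at the quoted price.",
    "options": [
        {"choice": "Offer an immediate discount",
         "feedback": "Discounting first signals the price was padded.",
         "next": "node_discount_spiral"},
        {"choice": "Ask what's driving the budget concern",
         "feedback": "Right: probing the need keeps value in focus.",
         "next": "node_value_discussion"},
        {"choice": "Restate the feature list",
         "feedback": "Repeating features ignores the stated objection.",
         "next": "node_client_disengages"},
    ],
}
```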

Putting in place processes that only use formal learning when it makes sense, and then doing it right when it does make sense, is key to putting L&D on a path to relevancy.   Cranking out courses on demand, focusing on measures like cost/butt/seat, adding rote knowledge quizzes to SME knowledge dumps, etc are instead continuing down the garden path to oblivion. Are you ready to get scientific and strategic about your learning design?

13 January 2015

The subtleties

Clark @ 8:06 am

I recently opined that good learning design was complex, really perhaps close to rocket science.  And I suggested that a consequent problem was that the nuances are subtle.  It occurs to me that perhaps discussing some example problems will help make this point more clear.

Without being exhaustive, there are several consistent problems I see in the elearning content I review:

  • The wrong focus. Seriously, the outcomes for the class aren’t meaningful!  They are about information or knowledge, not skill.  Which leads to no meaningful change in behavior, and more importantly, in outcomes. I don’t want to learn about X, I want to learn how to do X!
  • Lack of motivating introductions.  People are expected to give a hoot about this information, but no one helps them understand why it's important.  Learners should be assisted to viscerally 'get' why this is important, and helped to see how it connects to the rest of the world.  Instead we get some boring drone about how this is really important.  Connect it to the world and let me see the context!
  • Information focused or arbitrary content presentations. To get the type of flexible problem-solving organizations need, people need mental models about why and how to do it this way, not just the rote steps.  Yet too often I see arbitrary lists of information accompanied by a rote knowledge test.  As if that’s gonna stick.
  • A lack of examples, or trivial ones.  Examples need to show a context, the barriers, and how the content model provides guidance about how to succeed (and when it won't).  Instead we get fluffy stories that don't connect to the model or show the application to the context.  Which means it's not going to support transfer (and if you don't know what I'm talking about, you're not ready to be doing design)!
  • Meaningless and insufficient practice.  Instead of asking learners to make decisions like they will be making in the workplace (and this is my hint for the first thing to focus on fixing), we ask rote knowledge questions. Which isn’t going to make a bit of difference.
  • Nonsensical alternatives to the right answer.  I regularly ask of audiences “how many of you have ever taken a quiz where the alternatives to the right answer are so silly or dumb that you didn’t need to know anything to pass?”  And everyone raises their hand.  What possible benefit does that have?  It insults the learner’s intelligence, it wastes their time, and it has no impact on learning.
  • Undistinguished feedback. Even if you do have an alternative that's aligned with a misconception, it seems like there's an industry-wide conspiracy to ensure that there's only one response for all the wrong answers. If you've discriminated meaningful alternatives to the right answer based upon how learners go wrong, you should be addressing them individually.

The list goes on.  Further, any one of these can severely impact the learning outcomes, and I typically see all of these!

These are really  just the flip side of the elements of good design I’ve touted in previous posts (such as this series). I mean, when I look at most elearning content, it’s like the authors have no idea how we really learn, how our brains work.  Would you design a tire for a car without knowing how one works?  Would you design a cover for a computer without knowing what it looks like?  Yet it appears that’s what we’re doing in most elearning. And it’s time to put a stop to it.  As a first step, have a look at the Serious eLearning Manifesto, specifically the 22 design principles.

Let me be clear, this is just the surface.  Again, learning engineering is complex stuff.  We’ve hardly touched on engagement, spacing, and more.   This may seem like a lot, but this is really the boiled-down version!  If it’s too much, you’re in the wrong job.
