Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

4 May 2016

Learning in Context

Clark @ 8:09 am

In a recent guest post, I wrote about the importance of context in learning. And for a featured session at the upcoming FocusOn Learning event, I'll be talking about performance support in context.  But there was a recent question about how you'd do it in a particular environment, and that got me thinking about the necessary requirements.

As context (ahem), there are already context-sensitive systems. I helped lead the design of one where a complex device was instrumented and consequently there were many indicators about the current status of the device. This trend is increasing.  And there are tools to build context-sensitive help systems around enterprise software, whether purchased or home-grown. And there are also context-sensitive systems that track your location on mobile and allow you to use that to trigger a variety of actions.

Now, to be clear, these are already in use for performance support, but how do we take advantage of them for learning? Moreover, can we go beyond ‘location’-specific learning?  I think we can, if we rethink.

So first, we obviously can use those same systems to deliver specific learning. We can have a rich model of learning around a system, so a detailed competency map, and then with a rich profile of the learner we can know what they know and don’t, and then when they’re at a point where there’s a gap between their knowledge and the desired, we can trigger some additional information. It’s in context, at a ‘teachable moment’, so it doesn’t necessarily have to be assessed.
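To make the idea concrete, here's a minimal sketch of that gap-check logic in Python. The competency map, levels, and nugget ids are all hypothetical, and a real system would draw the profile from an LMS or learner record store; this just shows the trigger decision:

```python
# Hypothetical sketch: trigger a learning nugget at a 'teachable moment'
# when the learner's profile shows a gap against the competency map.

COMPETENCY_MAP = {
    "calibrate_sensor": {"required_level": 3, "nugget": "calibration-walkthrough"},
    "export_report": {"required_level": 2, "nugget": "reporting-basics"},
}

def nugget_for(task, learner_profile):
    """Return a learning nugget id if the learner has a gap for this task."""
    entry = COMPETENCY_MAP.get(task)
    if entry is None:
        return None  # no learning model for this task
    level = learner_profile.get(task, 0)
    if level < entry["required_level"]:
        return entry["nugget"]  # teachable moment: deliver in context
    return None  # no gap, no interruption

profile = {"calibrate_sensor": 1, "export_report": 2}
assert nugget_for("calibrate_sensor", profile) == "calibration-walkthrough"
assert nugget_for("export_report", profile) is None
```

The point of the sketch is the shape of the decision: content is only pushed when the context (the task at hand) intersects a known gap, which is why it doesn't necessarily need separate assessment.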

This would typically be on top of performance support, as they're still learning and we don't want to risk a mistake. Or we could give them a little chance to try it out and get it wrong, without the action actually being executed, and then give them feedback and the right answer to perform.  We'd have to be clear, however, about why learning is needed in addition to the right answer: is this something that really needs to be learned?

I want to go a wee bit further, though; can we build it around what the learner is doing?  How could we know?  Besides increasingly complex sensor logic, we can use when they are: what's on their calendar?  If it's tagged appropriately, we can know at least what they're supposed to be doing.  And we can develop not only specific system skills, but more general business skills: negotiation, running meetings, problem-solving/trouble-shooting, design, and more.
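A rough sketch of that calendar-tag idea, with made-up tags and prompts (a real implementation would read tags from the calendar system, e.g. event categories):

```python
# Hypothetical sketch: use calendar entry tags to infer what the learner
# is about to do, and offer a matching development prompt beforehand.

PROMPTS = {
    "negotiation": "Before you start: know your walk-away point.",
    "meeting": "Tip: open with the decision you need, not the background.",
}

def prompt_for(calendar_entry):
    """Return the first development prompt matching the entry's tags."""
    for tag in calendar_entry.get("tags", []):
        if tag in PROMPTS:
            return PROMPTS[tag]
    return None  # nothing relevant tagged; stay quiet

entry = {"title": "Vendor call", "tags": ["negotiation"]}
assert prompt_for(entry) == PROMPTS["negotiation"]
```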

The point is that our learners are in contexts all the time.  Rather than take them away to learn, can we develop learning that wraps around what they’re doing? Increasingly we can, and in richer and richer ways. We can tap into the situational motivation to accomplish the task in the moment, and the existing parameters, to make ordinary tasks into learning opportunities. And that more ubiquitous, continuous development is more naturally matched to how we learn.

26 April 2016

Learning in context

Clark @ 8:10 am

In preparation for the upcoming FocusOn Learning Conference, where I’ll be running a workshop about cognitive science for L&D, not just for learning but also for mobile and performance support, I was thinking about how  context can be leveraged to provide more optimal learning and performance.  Naturally, I had to diagram it, so let me talk through it, and you let me know what you think.

What we tend to do, as a default, is to take people away from work, provide the learning resources away from the context, then create a context to practice in. There are coaching resources, but not necessarily the performance resources.  (And I'm not even mentioning the typical lack of sufficient practice.) And this makes sense when the consequences of making a mistake on the task are irreversible and costly, e.g. medicine or transportation.  But that's not as often as we think. And there's an alternative.

We can wrap the learning around the context. Our individual is in the world, and performing the task. There can be coaching (particularly at the start, and then gradually removed as the individual moves to acceptable competence). There are also performance resources – job aids, checklists, etc – in the environment. There also can be learning resources, so the individual can continue to self-develop, particularly in the increasingly likely situation that the task has some ambiguity or novelty in it. Of course, that only works if we have a learner capable of self-learning (hint hint).

The problems with always taking people away from their jobs are multiple:

  • it is costly to interrupt their performance
  • it can be costly to create the artificial context
  • the learning has a lower likelihood to make it back to the workplace

Our brains don’t learn in an event model, they learn in little bits over time. It’s more natural, more effective, to dribble the learning out at the moment of need, the learnable moment.  We have the capability, now, to be more aware of the learner, to deliver support in the moment, and develop learners over time. The way their brains actually learn.  And we should be doing this.  It’s more effective as well as more efficient.  It requires moving out of our comfort zone; we know the classroom, we know training.  However, we now also know that the effectiveness of classroom training can be very limited.

We have the ability to start making learning effective as well as efficient. Shouldn’t we do so?

15 March 2016

Context Rules

Clark @ 8:15 am

I was watching a blab (a video chat tool) about the upcoming FocusOn Learning, a new event from the eLearning Guild. This conference combines their previous mLearnCon and Performance Support Symposium with the addition of video.  The previous events have been great, and I’ll of course be there (offering a workshop on cognition for mobile, a mobile learning 101 session, and one on the topic of this post). Listening to folks talk about the conference led me to ponder the connection, and something struck me.

I find it kind of misleading that it's FocusOn Learning, given that performance support, mobile, and even video typically are more about acting in the moment than developing over time.  Mobile device use tends to be more about quick access than extended experience.  Performance support is more about augmenting our cognitive capabilities. Video (as opposed to animation or images or graphics, and similar to photos) is about showing how things happen in situ (I note that this is my distinction, and they may well include animation in their definition of video, caveat emptor).  The unifying element to me is context.

So, mobile is a platform.  It’s a computational medium, and as such is the same sort of computational augment that a desktop is.  Except that it can be with you. Moreover, it can have sensors, so not just providing computational capabilities where you are, but because of when and where you are.

Performance support is about providing a cognitive augment. It can be any medium – paper, audio, digital – but it’s about providing support for the gaps in our mental capabilities.  Our architecture is powerful, but has limitations, and we can provide support to minimize those problems. It’s about support in the moment, that is, in context.

And video, like photos, inherently captures context.  Unlike an animation that represents conceptual distinctions separated from the real world along one or more dimensions, a video accurately captures what the camera sees happening.  It’s again about context.

And the interesting thing to me is that we can support performance in the moment, whether with a lookup table or a how-to video, without learning necessarily happening. And that's OK!  It's also possible to use context to support learning, and in fact it can take less material to augment a real context than to create the artificial context that so much of learning requires.

What excited me was that there was a discussion about AR and AI. And these, to me, are also about context.  Augmented Reality layers information on top of your current context. And the way you start doing contextually relevant content delivery is with rules tied to content descriptors (content systems), and such rules are really part of an intelligently adaptive system.
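A tiny sketch of what "rules tied to content descriptors" might look like; the descriptors and matching rule here are illustrative assumptions, not any particular system's API:

```python
# Hypothetical sketch: rules over content descriptors (metadata) select
# contextually relevant content -- the seed of an adaptive delivery system.

CONTENT = [
    {"id": "c1", "topic": "repair", "device": "pump-300", "format": "video"},
    {"id": "c2", "topic": "repair", "device": "pump-500", "format": "checklist"},
]

def select(context):
    """Return ids of content whose descriptors match every key in the context."""
    return [c["id"] for c in CONTENT
            if all(c.get(k) == v for k, v in context.items())]

assert select({"device": "pump-300"}) == ["c1"]
assert select({"topic": "repair"}) == ["c1", "c2"]
```

The design point is that the intelligence lives in the rules and the metadata, not in the content itself, which is what makes the same pool of content reusable across contexts.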

So I’m inclined to think this conference is about leveraging context in intelligent ways. Or that it can be, will be, and should be. Your mileage may vary ;).

16 February 2016

Litmos Guest Blog Series

Clark @ 8:09 am

As I did with Learnnovators, with Litmos I've also done a series of posts, in this case a year's worth.  Unlike the other series, which was focused on deeper eLearning design, they're not linked thematically and instead cover a wide range of topics that were mutually agreed as being personally interesting and of interest to their audience.

So, we have posts on:

  1. Blended Learning
  2. Performance Support
  3. mLearning: Part 1 and Part 2
  4. Advanced Instructional Design
  5. Games and Gamification
  6. Courses in the Ecosystem
  7. L&D and the Bigger Picture
  8. Measurement
  9. Reviewing Design Processes
  10. New Learning Technologies
  11. Collaboration
  12. Meta-Learning

If any of these topics are of interest, I welcome you to check them out.

 

27 January 2016

Reactivating Learning

Clark @ 8:10 am

(I looked because I’m sure I’ve talked about this before, but apparently not a full post, so here we go.)

If we want our learning to stick, it needs to be spaced out over time. But what sorts of things will accomplish this?  I like to think of three types, all different forms of reactivating learning.

Reactivating learning is important. At a neural level, we’re generating patterns of activation in conjunction, which strengthens the relationships between these patterns, increasing the likelihood that they’ll get activated when relevant. That’s why context helps as well as concept (e.g. don’t just provide abstract knowledge).  And I’ll suggest there are 3 major categories of reactivation to consider:

Reconceptualization: here we're talking about presenting a different conceptual model that explains the same phenomena. Particularly if the learners have had some meaningful activity since your initial learning or through their work, showing a different way of thinking about the problem is helpful. I like to link it to Rand Spiro's Cognitive Flexibility Theory, and explain that having more ways to represent the underlying model provides more ways to understand the concept to begin with, a greater likelihood that one of the representations will get activated when there's a problem to be solved, and a greater chance that it will activate the other model(s), so there's a better shot at finding one that leads to a solution.  So, you might think of electrical circuits like water flowing in pipes, or think about electron flow, and either could be useful.  It can be as simple as a new diagram, animation, or just a short prose description.

Recontextualization: here we're showing another example. We're showing how the concept plays out in a new context, which gives a greater base from which to abstract and comprehend the underlying principle, and provides a new reference that might match a situation they could actually encounter.   To process it, you're reactivating the concept representation, comprehending the context, and observing how the concept was used to generate a solution to this situation.  A good example, with a challenging situation that the learner recognizes, a clear goal, and cognitive annotation showing the underlying thinking, will serve to strengthen the learning.  A graphic novel format would be fun, or a story, or video; anything that captures the story, thinking, and outcome would work.

Reapplication: this is the best, where instead of consuming a concept model or an example, we actually provide a new practice problem. This should require retrieving the underlying concept, comprehending the context, and determining how the model predicts what will happen to particular perturbations and figuring out which will lead to the desired outcomes.  Practice makes perfect, as they say, and so this should ideally be the emphasis in reactivation.  It might be as simple as a multiple-choice question, though a scenario in many instances would be better, and a sim/game would of course be outstanding.

All of these serve as reactivation. Reactivation, as I’ve pointed out, is a necessary part of learning.  When you don’t have enough chance to practice in the workplace, but it’s important that you have the ability when you need it (and try to avoid putting it in the head if you can), reactivation is a critical tool in your arsenal.

31 December 2015

2015 Reflections

Clark @ 8:02 am

It’s the end of the year, and given that I’m an advocate for the benefits of reflection, I suppose I better practice what I preach. So what am I thinking I learned as a consequence of this past year?  Several things come to mind (and I reserve the right for more things to percolate out, but those will be my 2016 posts, right? :):

  1. The Revolution is real: the evidence mounts that there is a need for change in L&D, and when those steps are taken, good things happen. The latest Towards Maturity report shows that the steps taken by their top-performing organizations are very much about aligning with business, focusing on performance, and more.  Similarly, Chief Learning Officer's Learning Elite Survey points to making links across the organization and measuring outcomes.  The data supports the principled observation.
  2. The barriers are real: there is continuing resistance to the most obvious changes. 70:20:10, for instance, continues to get challenged on nonsensical issues like the exactness of the numbers!?!?  The fact that a Learning Management System is not a strategy still doesn’t seem to have penetrated.  And so we’re similarly seeing that other business units are taking on the needs for performance support, social media, and ongoing learning. Which is bad news for L&D, I reckon.
  3. Learning design is rocket science: (or should be). The perpetration of so much bad elearning continues to be demonstrated at exhibition halls around the globe.  It's demonstrably true that a tarted-up information presentation and knowledge test isn't going to lead to meaningful behavior change, but we're still thrusting people into positions without background and giving them tools that are oriented at content presentation.  Somehow we need to do better. Still pushing the Serious eLearning Manifesto.
  4. Mobile is well on its way: we're seeing mobile becoming mainstream, and this is a good thing. While we still hear the drum beating to put courses on a phone, we're also seeing that call being ignored. We're instead seeing real needs being met, and new opportunities being explored.  There's still a ways to go, but here's to a continuing awareness of good mobile design.
  5. Gamification is still being confounded: people aren’t really making clear conceptual differences around games. We’re still seeing linear scenarios confounded with branching, we’re seeing gamification confounded with serious games, and more.  Some of these are because the concepts are complex, and some because of vested interests.
  6. Games seem to be reemerging: while the interest in games became mainstream circa 2010, there hasn't been a real sea change in their use.  However, it's quietly feeling like folks are beginning to get their minds around Immersive Learning Simulations, aka Serious Games.   There's still a ways to go in really understanding the critical design elements, but the tools are getting better and making them more accessible in at least some formats.
  7. Design is becoming a ‘thing’: all the hype around Design Thinking is leading to a greater concern about design, and this is a good thing. Unfortunately there will probably be some hype from which clarity will have to be discerned, but at least the overall awareness raising is a good step.
  8. Learning to learn seems to have emerged: years ago the late great Jay Cross and I and some colleagues put together the Meta-Learning Lab, and it was way too early (like so much I touch :p). However, his passing has raised the term again, and there's much more resonance. I don't think it's necessarily a thing yet, but there's far greater resonance than we had at the time.
  9. Systems are coming: I’ve been arguing for the underpinnings, e.g. content systems.  And I’m (finally) beginning to see more interest in that, and other components are advancing as well: data (e.g. the great work Ellen Wagner and team have been doing on Predictive Analytics), algorithms (all the new adaptive learning systems), etc. I’m keen to think what tags are necessary to support the ability to leverage open educational resources as part of such systems.
  10. Greater inputs into learning: we’ve seen learning folks get interested in behavior change, habits, and more.  I’m thinking we’re going to go further. Areas I’m interested in include myth and ritual, powerful shapers of culture and behavior. And we’re drawing on greater inputs into the processes as well (see 7, above).  I hope this continues, as part of learning to learn is to look to related areas and models.

Obviously, these are things I care about.  I’m fortunate to be able to work in a field that I enjoy and believe has real potential to contribute.  And just fair warning, I’m working on a few areas in several ways.  You’ll see more about learning design and the future of work sometime in the near future. And rather than generally agitate, I’m putting together two specific programs – one on (e)learning quality and one on L&D strategy – that are intended to be comprehensive approaches.  Stay tuned.

That’s my short list, I’m sure more will emerge.  In the meantime, I hope you had a great 2015, and that your 2016 is your best year yet.

27 October 2015

Showing the World

Clark @ 8:03 am

One of the positive results of investigations into making work more effective has been the notion of transparency, which manifests as either working and learning ‘out loud‘, or in calls to Show Your Work.  In these cases, it’s so people can know what you’re doing, and either provide useful feedback or learn from you.  However, a recent chat in the L&D Revolution group on LinkedIn on Augmented Reality (AR) surfaced another idea.

We were talking about how AR could be used to show how to do things, providing information for instance on how to repair a machine. This has already been seen in examples by BMW, for instance. But I started thinking about how it could be used to support education, and took it a bit further.

So many years ago, Jim Spohrer proposed WorldBoard, a way to annotate the world. It was like the WWW, but it was location specific, so you could have specific information about a place at the place.  And it was a good idea that got some initial traction but obviously didn’t continue.

The point, however, would be to ‘expose’ the world. In particular, given my emphasis on the value of models, I’d love to have models exposed. Imagine what we could display:

  • the physiology of an animal we're looking at, or the flows of energy in an ecosystem
  • the architectural or engineering features of a building or structure
  • the flows of materials through a manufacturing system
  • the operation of complex devices

The list goes on. I’ve argued before that we should expose our learning designs as a way to hand over learning control to learners, developing their meta-learning skills. I think if we could expose how things work and the thinking behind them, we’d be boosting STEM in a big way.

We could go further, annotating exhibits and performances as well.  And it could be auditory as well, so you might not need to have glasses, or you could just hold up the camera and see the annotations on the screen. You could of course turn them on or off, and choose which filters you want.
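As a toy sketch of such annotation layers with user-selectable filters (the place ids, layer names, and notes are made up; a real system would key annotations to geolocation or image recognition):

```python
# Hypothetical sketch: location-keyed annotations with user-selectable
# layer filters, in the spirit of WorldBoard-style world annotation.

ANNOTATIONS = [
    {"place": "bridge-7", "layer": "engineering", "note": "Truss load paths"},
    {"place": "bridge-7", "layer": "history", "note": "Built 1932"},
]

def visible(place, enabled_layers):
    """Return notes at this place for the layers the user has turned on."""
    return [a["note"] for a in ANNOTATIONS
            if a["place"] == place and a["layer"] in enabled_layers]

# Only the layers the user opts into are shown.
assert visible("bridge-7", {"engineering"}) == ["Truss load paths"]
assert visible("bridge-7", {"engineering", "history"}) == [
    "Truss load paths", "Built 1932"]
```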

The systems exist: Layar commercially, ARIS in the open source space (with different capabilities).  The hard part is the common frameworks: agreeing what and how, etc.   However, the possibility of really raising understanding is very much an opportunity.  Making the workings of the world visible seems to me to be a very intriguing way to leverage the power we now hold in our hand. Ok, so this is ‘out there’, but I hope we might see this flourishing quickly.  What am I missing?

13 October 2015

Supporting our Brains

Clark @ 8:29 am

One of the ways I've been thinking about the role mobile can play in design is thinking about how our brains work, and don't.  It came out of both mobile and the cognitive science for learning workshop I gave at the recent DevLearn.  This applies more broadly to performance support in general, so I thought I'd share where my thinking is going.

To begin with, our cognitive architecture is demonstrably awesome; just look at your surroundings and recognize that your clothing, housing, technology, and more are the product of human ingenuity.  We have formidable capabilities to predict, plan, and work together to accomplish significant goals.  That said, there's no one all-singing, all-dancing architecture out there (yet), and every approach has weak points. Technology, for instance, is bad at pattern-matching and meaning-making, two things we're really pretty good at.  On the flip side, we have some flaws too. So what I've done here is to outline those flaws, and how we've created tools to get around the limitations.  And to me, these are principles for design:

(Table of cognitive limitations and support tools.)

So, for instance, our senses capture incoming signals in a sensory store, which has the interesting property of almost unlimited capacity, but only for a very short time. There is no way all of it can get into our working memory, so what we attend to is what we have access to; we can't accurately recall everything we perceive.  However, technology (camera, microphone, sensors) can record it all perfectly. So making capture capabilities available is a powerful support.

Similarly, our attention is limited, so if we're focused in one place, we may forget or miss something else.  However, we can program reminders or notifications that help us recall important events we don't want to miss, or draw our attention where needed.

The limits on working memory (you may have heard of the famous 7±2, which really is <5) mean we can’t hold too much in our brains at once, such as interim results of complex calculations.  However, we can have calculators that can do such processing for us. We also have limited ability to carry information around for the same reasons, but we can create external representations (such as notes or scribbles) that can hold those thoughts for us.  Spreadsheets, outlines, and diagramming tools allow us to take our interim thoughts and record them for further processing.

We also have trouble remembering things accurately. Our long term memory tends to remember meaning, not particular details. However, technology can remember arbitrary and abstract information completely. What we need are ways to look up that information, or search for it. Portals and lookup tables trump trying to put that information into our heads.

We also have a tendency to skip steps. We have some randomness in our architecture (a benefit: if we sometimes do it differently, and occasionally that’s better, we have a learning opportunity), but this means that we don’t execute perfectly.  However, we can use process supports like checklists.  Atul Gawande wrote a fabulous book on the topic that I can recommend.

Other phenomena include that previous experience can bias us in particular directions, but we can put supports in place that provide lateral prompts. We can also prematurely evaluate a solution rather than checking to verify it's the best; data can be used to help us be aware.  And we can trust our intuition too much, and we wear down, so we don't always make the best decisions.  Templates, for example, are a tool that can help us focus on the important elements.

This is just the result of several iterations, and I think more is needed (e.g. about data to prevent premature convergence), but to me it’s an interesting alternate approach to consider where and how we might support people, particularly in situations that are new and as yet untested.  So what do you think?

6 October 2015

Mobile Time

Clark @ 8:05 am

At the recent DevLearn conference, David Kelly spoke about his experiences with the Apple Watch.  Because I don’t have one yet, I was interested in his reflections.  There were a number of things, but what came through for me (and other reviews I’ve read) is that the time scale is a factor.

Now, first, I don't have one because, as with technology in general, I don't typically acquire anything in particular until I know how it's going to make me more effective.  I may have told this story before, but for instance I wasn't interested in acquiring an iPad when they were first announced (“I'm not a content consumer“). By the time they were available, however, I'd heard enough about how it would make me more productive (as a content creator) that I got one the first day it was available.

So too with the watch. I don’t get a lot of notifications, so that isn’t a real benefit.   The ability to be navigated subtly around towns sounds nice, and to check on certain things.  Overall, however, I haven’t really found the tipping-point use-case.  However, one thing he said triggered a thought.

He was talking about how it had reduced the number of times he accessed his phone, and I'd heard that from others, but here it struck a different chord. It made me realize it's about time frames. I'm trying to make useful conceptual distinctions between devices to help designers figure out the best match of capability to need. So I came up with what seemed an interesting way to look at it.

(Various usage times by category: wearable, pocketable, baggable.)

This is similar to the way I'd seen Palm talk about the difference between laptops and mobile: I was thinking about the time you spend using your devices.  The watch (a wearable) is accessed quickly for small bits of information.  A pocketable (e.g. a phone) is used for a number of seconds up to a few minutes.  And a tablet (a baggable) tends to get accessed for longer uses (a laptop doesn't count).  Folks may well have all 3, but they use them for different things.

Sure, there are variations, (you can watch a movie on a phone, for instance; phone calls could be considerably longer), but by and large I suspect that the time of access you need will be a determining factor (it’s also tied to both battery life and screen size). Another way to look at it would be the amount of information you need to make a decision about what to do, e.g. for cognitive work.
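One way to operationalize the distinction, as a sketch: map expected interaction time to a device category. The cutoffs here are assumed for illustration, not empirical:

```python
# Hypothetical sketch: pick a target device class from the expected
# access duration, per the wearable / pocketable / baggable distinction.
# Thresholds are illustrative assumptions, not measured values.

def device_for(seconds):
    """Map expected interaction time (seconds) to a device category."""
    if seconds < 10:
        return "wearable"    # quick glances: small bits of information
    if seconds < 300:
        return "pocketable"  # seconds up to a few minutes: phone
    return "baggable"        # longer sessions: tablet

assert device_for(5) == "wearable"
assert device_for(60) == "pocketable"
assert device_for(900) == "baggable"
```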

Not sure this is useful, but it was a reflection and I do like to share those. I welcome your feedback!

3 September 2015

Designing mLearning in Korean

Clark @ 8:12 am

It actually happened a while ago, but I was pleased to learn that Designing mLearning has been translated into Korean.  That's kind of a nice thing to have happen!  A slightly different visual treatment, presumably appropriate to the market. Who knows, maybe I'll get a chance to visit instead of just transferring through the airport.  Anyways, just had to share ;).

(Cover of the Korean edition of Designing mLearning.)
