Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

27 January 2016

Reactivating Learning

Clark @ 8:10 am

(I looked because I’m sure I’ve talked about this before, but apparently not a full post, so here we go.)

If we want our learning to stick, it needs to be spaced out over time. But what sorts of things will accomplish this?  I like to think of three types, all different forms of reactivating learning.

Reactivating learning is important. At a neural level, we’re regenerating associated patterns of activation together, which strengthens the relationships between those patterns and increases the likelihood that they’ll get activated when relevant. That’s why context helps as well as concept (e.g. don’t just provide abstract knowledge).  And I’ll suggest there are three major categories of reactivation to consider:

Reconceptualization: here we’re talking about presenting a different conceptual model that explains the same phenomena. Particularly if learners have had some meaningful activity, whether from your initial learning experience or through their work, showing a different way of thinking about the problem is helpful. I like to link it to Rand Spiro’s Cognitive Flexibility Theory: having more ways to represent the underlying model provides more ways to understand the concept in the first place, a greater likelihood that one of the representations will get activated when there’s a problem to be solved, and a greater chance that it will in turn activate the other model(s), so one of them leads to a solution.  So, you might think of electrical circuits as water flowing in pipes, or think about electron flow, and either could be useful.  It can be as simple as a new diagram, an animation, or just a short prose explanation.

Recontextualization: here we’re showing another example. We’re showing how the concept plays out in a new context, which gives a greater base from which to abstract and comprehend the underlying principle, and provides a new reference that might match a situation learners could actually encounter.   To process it, you’re reactivating the concept representation, comprehending the context, and observing how the concept was used to generate a solution for this situation.  A good example, with a challenging situation that the learner recognizes, a clear goal, and cognitive annotation showing the underlying thinking, will serve to strengthen the learning.  A graphic novel format would be fun, or a story, or a video; anything that captures the situation, the thinking, and the outcome would work.

Reapplication: this is the best, where instead of consuming a conceptual model or an example, learners actually get a new practice problem. This should require retrieving the underlying concept, comprehending the context, determining how the model predicts the effects of particular perturbations, and figuring out which will lead to the desired outcome.  Practice makes perfect, as they say, so this should ideally be the emphasis in reactivation.  It might be as simple as a multiple-choice question, though a scenario would in many instances be better, and a sim/game would of course be outstanding.
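To make the spacing concrete, here’s a minimal sketch in Python of scheduling the three kinds of reactivation at expanding intervals after the initial learning event; the specific delays are illustrative assumptions, not a prescription:

    from datetime import date, timedelta

    # Hypothetical expanding schedule: each reactivation lands further out than
    # the last, pairing a delay (in days) with one of the three categories above.
    REACTIVATION_PLAN = [
        (3, "reconceptualization"),    # present a different model of the concept
        (10, "recontextualization"),   # show a worked example in a new context
        (30, "reapplication"),         # pose a new practice problem
    ]

    def schedule_reactivations(initial_learning):
        """Return (date, activity) pairs spaced out after the initial learning."""
        return [(initial_learning + timedelta(days=days), activity)
                for days, activity in REACTIVATION_PLAN]

    for when, activity in schedule_reactivations(date(2016, 1, 27)):
        print(when.isoformat(), activity)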

All of these serve as reactivation. Reactivation, as I’ve pointed out, is a necessary part of learning.  When you don’t have enough chance to practice in the workplace, but it’s important that you have the ability when you need it (and try to avoid putting it in the head if you can), reactivation is a critical tool in your arsenal.

14 January 2016

10 years!?!?

Clark @ 8:08 am

A comment on my earliest blog post (thanks, Henrik) made me realize that this post will mark 10 years of blogging. Yes, my first post came out on January 14th, 2006.  This will be my 1,200th post (I forced one in yesterday to be the 1,199th so I could say that ;), yow!  That’s 120 a year, or about one every three days.  And, I am happy to add, 2,542 comments (just more than 2 per post), so thanks to you for weighing in.

It’s funny: when I started, I can’t really say it was more than an experiment.  I had no idea where it would lead, or how.  It’s had its challenges, continuing to find topics, but it’s been helpful.  It’s forced me to deliberately consider things I otherwise might not have, just to try to keep up the momentum.

I confess I originally had a goal of 5 a week (one per business day), but even then I was happy if I got 2-3. I’m gobsmacked at my colleague Harold who seems to put out a post every day.  I can’t quite do that. My goal has moderated to be 2 a week (very occasionally I live with 1 per week, but other weeks like when I’m at conferences I might have 3 if there are lots of keynotes to mind map).  Typically it’s Tuesday and Wednesday, for no good reason.

I also try to have something new to say every time. It’s hard, but forcing myself to find something to talk about has led me to think about lots of things, and therefore to be ready to bring them to bear on behalf of clients.  I think out loud relatively freely (particularly with the popularity of Work and Learn Out Loud and Show Your Work).  And it’s a way to share my diagrams, another way to ‘think out loud’.  And I admit that I don’t share some things that are either proprietary (until I can anonymize them) or that I’m planning on doing something with.

And I’ve also resisted commercializing this.  Obviously I’ve avoided the offers to exchange links or blog posts that include links for SEO stuff, but I’ve even, rightly or wrongly, not allowed ads.  While it is the official Quinnovation blog, it’s been my belief that sharing my thinking is the best way to help me get interest in what I have to offer (extensive experience mapping a wide variety of concepts onto specific client contexts to yield innovative yet practical and successful solutions).  I haven’t (yet) followed a formula to drive business traffic, and only occasionally mention my upcoming events (though hopefully that’s a public service :).  There’re other places to track that.

I’m also pretty lax about looking at the metrics. I do pop by Google Analytics weekly to see what sort of traffic I get (pretty steady), but I haven’t tried to see what might improve it.  This is, largely, for me.  And for you, if your interests run this way. So welcome, and here’s to another 10 years!  Who knows what there will be to talk about then…or even next week!

31 December 2015

2015 Reflections

Clark @ 8:02 am

It’s the end of the year, and given that I’m an advocate for the benefits of reflection, I suppose I better practice what I preach. So what am I thinking I learned as a consequence of this past year?  Several things come to mind (and I reserve the right for more things to percolate out, but those will be my 2016 posts, right? :):

  1. The Revolution is real: the evidence mounts that there is a need for change in L&D, and when those steps are taken, good things happen. The latest Towards Maturity report shows that the steps taken by their top-performing organizations are very much about aligning with business, focusing on performance, and more.  Similarly, Chief Learning Officer’s Learning Elite Survey points to making links across the organization and measuring outcomes.  The data supports the principled observation.
  2. The barriers are real: there is continuing resistance to the most obvious changes. 70:20:10, for instance, continues to get challenged on nonsensical issues like the exactness of the numbers!?!?  The fact that a Learning Management System is not a strategy still doesn’t seem to have penetrated.  And so we’re similarly seeing that other business units are taking on the needs for performance support, social media, and ongoing learning. Which is bad news for L&D, I reckon.
  3. Learning design is rocket science (or should be): the perpetration of so much bad elearning continues to be demonstrated at exhibition halls around the globe.  It’s demonstrably true that tarted-up information presentation and a knowledge test aren’t going to lead to meaningful behavior change, but we’re still thrusting people into positions without background and giving them tools oriented at content presentation.  Somehow we need to do better. Still pushing the Serious eLearning Manifesto.
  4. Mobile is well on its way: we’re seeing mobile becoming mainstream, and this is a good thing. While we still hear the drum beating to put courses on a phone, we’re also seeing that call being ignored. We’re instead seeing real needs being met, and new opportunities being explored.  There’s still a ways to go, but here’s to a continuing awareness of good mobile design.
  5. Gamification is still being confounded: people aren’t really making clear conceptual differences around games. We’re still seeing linear scenarios confounded with branching, we’re seeing gamification confounded with serious games, and more.  Some of these are because the concepts are complex, and some because of vested interests.
  6. Games  seem to be reemerging: while the interest in games became mainstream circa 2010 or so, there hasn’t been a real sea change in their use.  However, it’s quietly feeling like folks are beginning to get their minds around Immersive Learning Simulations, aka Serious Games.   There’s still ways to go in really understanding the critical design elements, but the tools are getting better and making them more accessible in at least some formats.
  7. Design is becoming a ‘thing’: all the hype around Design Thinking is leading to a greater concern about design, and this is a good thing. Unfortunately there will probably be some hype to sift through before clarity emerges, but at least the overall awareness raising is a good step.
  8. Learning to learn seems to have emerged: years ago the late, great Jay Cross, some colleagues, and I put together the Meta-Learning Lab, and it was way too early (like so much I touch :p). However, his passing has raised the term again, and there’s much more resonance. I don’t think it’s necessarily a thing yet, but there’s far greater resonance than we had at the time.
  9. Systems are coming: I’ve been arguing for the underpinnings, e.g. content systems.  And I’m (finally) beginning to see more interest in that, and other components are advancing as well: data (e.g. the great work Ellen Wagner and team have been doing on Predictive Analytics), algorithms (all the new adaptive learning systems), etc. I’m keen to think what tags are necessary to support the ability to leverage open educational resources as part of such systems.
  10. Greater inputs into learning: we’ve seen learning folks get interested in behavior change, habits, and more.  I’m thinking we’re going to go further. Areas I’m interested in include myth and ritual, powerful shapers of culture and behavior. And we’re drawing on greater inputs into the processes as well (see 7, above).  I hope this continues, as part of learning to learn is to look to related areas and models.

Obviously, these are things I care about.  I’m fortunate to be able to work in a field that I enjoy and believe has real potential to contribute.  And just fair warning, I’m working on a few areas in several ways.  You’ll see more about learning design and the future of work sometime in the near future. And rather than generally agitate, I’m putting together two specific programs – one on (e)learning quality and one on L&D strategy – that are intended to be comprehensive approaches.  Stay tuned.

That’s my short list, I’m sure more will emerge.  In the meantime, I hope you had a great 2015, and that your 2016 is your best year yet.

27 October 2015

Showing the World

Clark @ 8:03 am

One of the positive results of investigations into making work more effective has been the notion of transparency, which manifests as either working and learning ‘out loud‘, or in calls to Show Your Work.  In these cases, it’s so people can know what you’re doing, and either provide useful feedback or learn from you.  However, a recent chat in the L&D Revolution group on LinkedIn on Augmented Reality (AR) surfaced another idea.

We were talking about how AR could be used to show how to do things, providing information for instance on how to repair a machine. This has already been seen in examples by BMW, for instance. But I started thinking about how it could be used to support education, and took it a bit further.

So many years ago, Jim Spohrer proposed WorldBoard, a way to annotate the world. It was like the WWW, but it was location specific, so you could have specific information about a place at the place.  And it was a good idea that got some initial traction but obviously didn’t continue.

The point, however, would be to ‘expose’ the world. In particular, given my emphasis on the value of models, I’d love to have models exposed. Imagine what we could display:

  • the physiology of an animal we’re looking at, or the flows of energy in an ecosystem
  • the architectural or engineering features of a building or structure
  • the flows of materials through a manufacturing system
  • the operation of complex devices

The list goes on. I’ve argued before that we should expose our learning designs as a way to hand over learning control to learners, developing their meta-learning skills. I think if we could expose how things work and the thinking behind them, we’d be boosting STEM in a big way.

We could go further, annotating exhibits and performances as well.  And it could be auditory as well, so you might not need to have glasses, or you could just hold up the camera and see the annotations on the screen. You could of course turn them on or off, and choose which filters you want.
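To make the idea concrete, here’s a minimal sketch of what a location-anchored annotation might look like, with viewers choosing which layers (filters) to turn on. The field names are my own invention, not Layar’s or ARIS’s actual formats:

    from dataclasses import dataclass, field

    @dataclass
    class WorldAnnotation:
        """A hypothetical record for 'information about a place, at the place'."""
        lat: float
        lon: float
        title: str
        layer: str                # e.g. "engineering", "ecosystem", "architecture"
        content_url: str          # diagram, animation, or audio exposing the model
        audio_only: bool = False  # auditory annotations need no glasses
        tags: list = field(default_factory=list)

    def visible(annotations, active_layers):
        """Show only annotations whose layer the viewer has turned on."""
        return [a for a in annotations if a.layer in active_layers]

    bridge = WorldAnnotation(37.8199, -122.4783, "Suspension forces at work",
                             layer="engineering",
                             content_url="https://example.org/bridge-model",
                             tags=["STEM", "structures"])
    print(visible([bridge], {"engineering"}))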

The systems exist: Layar commercially, ARIS in the open source space (with different capabilities).  The hard part is the common frameworks: agreeing on what and how, etc.   However, the possibility of really raising understanding is very much an opportunity.  Making the workings of the world visible seems to me a very intriguing way to leverage the power we now hold in our hands. Ok, so this is ‘out there’, but I hope we might see it flourish quickly.  What am I missing?

13 October 2015

Supporting our Brains

Clark @ 8:29 am

One of the ways I’ve been thinking about the role mobile can play in design is by considering how our brains work, and don’t.  It came out of both mobile and the cognitive science for learning workshop I gave at the recent DevLearn.  This applies more broadly to performance support in general, so I thought I’d share where my thinking is going.

To begin with, our cognitive architecture is demonstrably awesome; just look at your surroundings and recognize that your clothing, housing, technology, and more are the product of human ingenuity.  We have formidable capabilities to predict, plan, and work together to accomplish significant goals.  That said, there’s no one all-singing, all-dancing architecture out there (yet), and every such approach has weak points. Technology, for instance, is bad at pattern-matching and meaning-making, two things we’re really pretty good at.  On the flip side, we have some flaws too. So what I’ve done here is to outline the flaws, and how we’ve created tools to get around those limitations.  And to me, these are principles for design:

[Table: cognitive limitations and support tools]

So, for instance, our senses capture incoming signals in a sensory store, which has the interesting property of nearly unlimited capacity, but only for a very short time. There’s no way all of it can get into working memory, so what we attend to is what we have access to, and we can’t recall everything we perceive accurately.  However, technology (camera, microphone, sensors) can record it all perfectly, so making capture capabilities available is a powerful support.

Similarly, our attention is limited, so if we’re focused in one place, we may forget or miss something else.  However, we can program reminders or notifications that help us recall important events we don’t want to miss, or draw our attention where needed.

The limits on working memory (you may have heard of the famous 7±2, which really is <5) mean we can’t hold too much in our brains at once, such as interim results of complex calculations.  However, we can have calculators that can do such processing for us. We also have limited ability to carry information around for the same reasons, but we can create external representations (such as notes or scribbles) that can hold those thoughts for us.  Spreadsheets, outlines, and diagramming tools allow us to take our interim thoughts and record them for further processing.

We also have trouble remembering things accurately. Our long term memory tends to remember meaning, not particular details. However, technology can remember arbitrary and abstract information completely. What we need are ways to look up that information, or search for it. Portals and lookup tables trump trying to put that information into our heads.

We also have a tendency to skip steps. We have some randomness in our architecture (a benefit: if we sometimes do it differently, and occasionally that’s better, we have a learning opportunity), but this means that we don’t execute perfectly.  However, we can use process supports like checklists.  Atul Gawande wrote a fabulous book on the topic that I can recommend.

There are other phenomena too. Previous experience can bias us in particular directions, but we can put supports in place that provide lateral prompts. We can also prematurely evaluate a solution rather than checking to verify it’s the best; data can be used to help us be aware.  And we can trust our intuition too much, and we can wear down, so we don’t always make the best decisions.  Templates, for example, are a tool that can help us focus on the important elements.
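Since the table itself is an image, here’s a rough reconstruction of the mapping as a simple lookup, drawn only from the prose above; a sketch of the idea, not the original table:

    # Cognitive limitation -> example support tools, per the discussion above.
    COGNITIVE_SUPPORTS = {
        "sensory store decays quickly":            ["camera", "microphone", "sensors"],
        "attention is limited":                    ["reminders", "notifications"],
        "working memory is small (<5 items)":      ["calculators", "notes", "spreadsheets",
                                                    "outlines", "diagramming tools"],
        "long-term memory keeps gist, not detail": ["portals", "lookup tables", "search"],
        "we skip steps":                           ["checklists"],
        "prior experience biases us":              ["lateral prompts", "data"],
        "overtrusted intuition and fatigue":       ["templates"],
    }

    for limitation, supports in COGNITIVE_SUPPORTS.items():
        print(f"{limitation}: {', '.join(supports)}")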

This is just the result of several iterations, and I think more is needed (e.g. about data to prevent premature convergence), but to me it’s an interesting alternate approach to consider where and how we might support people, particularly in situations that are new and as yet untested.  So what do you think?

6 October 2015

Mobile Time

Clark @ 8:05 am

At the recent DevLearn conference, David Kelly spoke about his experiences with the Apple Watch.  Because I don’t have one yet, I was interested in his reflections.  There were a number of things, but what came through for me (and other reviews I’ve read) is that the time scale is a factor.

Now, first, I don’t have one because, as with technology in general, I don’t typically acquire anything in particular until I know how it’s going to make me more effective.  I may have told this story before, but for instance I wasn’t interested in acquiring an iPad when they were first announced (“I’m not a content consumer”). By the time they were available, however, I’d heard enough about how it would make me more productive (as a content creator) that I got one the first day it was available.

So too with the watch. I don’t get a lot of notifications, so that isn’t a real benefit.   The ability to be subtly navigated around town sounds nice, as does checking on certain things at a glance.  Overall, however, I haven’t really found the tipping-point use case.  But one thing he said triggered a thought.

He was talking about how it had reduced the number of times he accessed his phone, and I’d heard that from others, but here it struck a different chord. It made me realize it’s about time frames. I’m trying to make useful conceptual distinctions between devices to help designers figure out the best match of capability to need. So I came up with what seemed an interesting way to look at it.

[Chart: various usage times by category: wearable, pocketable, baggable]

This is similar to the way I’d seen Palm talk about the difference between laptops and mobile: I was thinking about the time you spend using your devices.  The watch (a wearable) is accessed quickly for small bits of information.  A pocketable (e.g. a phone) is used for a number of seconds up to a few minutes.  And a tablet tends to get accessed for longer uses (a laptop doesn’t count).  Folks may well have all 3, but they use them for different things.

Sure, there are variations, (you can watch a movie on a phone, for instance; phone calls could be considerably longer), but by and large I suspect that the time of access you need will be a determining factor (it’s also tied to both battery life and screen size). Another way to look at it would be the amount of information you need to make a decision about what to do, e.g. for cognitive work.
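As a rough illustration of that heuristic (the thresholds are my guesses, not measured values):

    def device_class(expected_seconds):
        """Map expected interaction time to a device category.

        Illustrative thresholds: a wearable for a quick glance, a pocketable
        (phone) for seconds up to a few minutes, a baggable (tablet) beyond that.
        """
        if expected_seconds < 10:
            return "wearable"
        if expected_seconds <= 180:
            return "pocketable"
        return "baggable"

    print(device_class(5))     # glance at a notification
    print(device_class(90))    # check directions or do a quick lookup
    print(device_class(1200))  # settle in with a document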

Not sure this is useful, but it was a reflection and I do like to share those. I welcome your feedback!

3 September 2015

Designing mLearning in Korean

Clark @ 8:12 am

It actually happened a while ago, but I was pleased to learn that Designing mLearning has been translated into Korean.  That’s kind of a nice thing to have happen!  A slightly different visual treatment, presumably  appropriate to the market. Who knows,  maybe I’ll get a chance to visit instead of just transferring through the airport.  Anyways, just had to share ;).

[Image: cover of the Korean edition of Designing mLearning]

18 August 2015

Where in the world is…

Clark @ 8:09 am

It’s time for another game of Where’s Clark?  As usual, I’ll be somewhat peripatetic this fall, but more broadly scoped than usual:

  • First I’ll be hitting Shenzhen, China at the end of August to talk advanced mlearning for a private event.
  • Then I’ll be hitting the always excellent DevLearn in Las Vegas at the end of September to run a workshop on learning science for design (you should want to attend!) and give a session on content engineering.
  • At the beginning of November I’ll be at LearnTech Asia in Singapore, with an impressive lineup of fellow speakers to again sing the praises of reforming L&D.

Yes, it’s quite the whirl, but with this itinerary I should be somewhere near you almost anywhere you are in the world. (Or engage me to show up at your locale!) I hope to see you at one event or another before the year is out.

 

26 June 2015

Personal processing

Clark @ 7:48 am

I was thinking about a talk on mobile I’m going to be giving, and realized that mobile is really about personal processing. Many of the things you can do at your desktop you can do with your mobile, even a wearable: answering calls, responding to texts.  Ok, so responding to email, looking up information, and more might require the phone for a keyboard (I confess to not being a big Siri user, mea culpa), but it’s still where/when/ever.

So the question then became “what doesn’t make sense on a mobile”. And my thought was that industrial strength processing doesn’t make sense on a mobile.  Processor intensive work: video editing, 3D rendering, things that require either big screens or lots of CPU.  So, for instance, while word processing isn’t really CPU intensive, for some reason mobile word processors don’t seamlessly integrate outlining.  Yet I require outlining for big scale writing, book chapters or whole books. I don’t do 3D or video processing, but that would count too.

One of the major appeals of mobile is having versatile digital capabilities (the rote/complex complement to our pattern-matching brains; I really wanted to call my mobile book ‘Augmenting Learning’) with us at all times.  It makes us more effective.  And for many things – all those things we do with mobile such as looking up info, navigating, remembering things, snapping pictures, calculating tips – that’s plenty of screen and processing grunt.  It’s for personal use.

Sure, we’ll get more powerful capabilities (they’re touting multitasking on tablets now), and the boundaries will blur, but I still think there’ll be the things we do when we’re on the go, and the things we’ll stop and be reflective about.  We’ll continue to explore, but I think the things we do on the wrist or in the hand will naturally be different than those we do seated.   Our brains work in active and reflective modes, and our cognitive augment will similarly complement those needs.  We’ll have personal processing, and then we’ll have powerful processing. And that’s a good thing, I think. What think you?

 

23 April 2015

Personal Mobile Mastery

Clark @ 8:29 am

A conversation with a colleague prompted a reflection.  The topic was personal learning, and in looking for my intersections (beyond my love of meta-learning), I looked at my books. The Revolution isn’t an obvious match, nor is games (though trust me, I could make them work ;), but a more obvious match was mlearning. So the question is, how do we do personal knowledge mastery with mobile?

Let’s get the obvious out of the way. Most of what you do on the desktop, particularly social networking, is doable on a mobile device.  And you can use search engines and reference tools just the same. You can find how-to videos as well. Is there more?

First, of course, are all the things to make yourself more ‘effective’.  Take the four key original apps on the Palm Pilot, for instance: the calendar to remind you of events or check availability, to-do checklists to remember commitments, memos to take notes for reference, and the contact list to reach people.  Which isn’t really learning, but it’s valuable to learn to be good at these.

Then there are the things you do because of where you are.  Navigation to somewhere, or finding what’s around you, are the obvious choices. Those are things you won’t necessarily learn from, but they make you more effective.  But they can also help educate you. You can look at where you are on a map and see what’s around you, or identify the thing on the map that lies in a given direction (“oh, that’s the Quinnsitute” or “there’s Mount Clark” or whatever), and have a chance of identifying a prominence you can actually see.

And you can use those social media tools as before, but now because of where or when you are. You can snap pictures of something and send them around to ask how it could help you. Of course, you can snap pictures or video for later recollection, and contribute them to a blog post for reflection.  And take notes by text or audio, or even by sketching or diagramming. The notes people take for themselves at conferences, for instance, get shared and are valuable not just for the sharer, but for all attendees.

Certainly searching on things you don’t understand, or seeing if you can get a translation when there’s an unknown language, are also options.  You can learn what something means, and avoid making mistakes.

‘When’ you are, e.g. context based upon what you’re doing at the time, is a little less developed.  You’d have to have rich tagging around your calendar to signal what it is you’re doing for a system to be able to leverage that information, but I reckon we can get there if and when we want.
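A toy sketch of what that rich calendar tagging might look like, so a system could infer what you’re doing right now; the tag vocabulary and field names are invented purely for illustration:

    from datetime import datetime

    # A hypothetical calendar entry carrying activity tags a system could leverage.
    events = [{
        "title": "Client design review",
        "start": datetime(2015, 4, 23, 10, 0),
        "end":   datetime(2015, 4, 23, 11, 0),
        "tags":  ["meeting", "design-review", "client"],
    }]

    def current_activity(events, now):
        """Return the tags of whatever event spans 'now', if any."""
        for event in events:
            if event["start"] <= now <= event["end"]:
                return event["tags"]
        return []

    print(current_activity(events, datetime(2015, 4, 23, 10, 30)))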

I’m not a big fan of ‘learning’ on a mobile device (maybe a tablet in transit or something, but not courses on a phone).  On the other hand, I am a big fan of self-learning on a phone, using your phone to make you smarter. These are embryonic thoughts, so I welcome feedback.   Being more contextually aware, both in the moment and over time, is a worthwhile opportunity, one we can and should look to advance.  I don’t think there’s much there yet, though tools like ARIS are going to help change that. And that’ll be good.

 

