Learnlets

Clark Quinn’s Learnings about Learning

Search Results for: top tools

2015 Reflections

31 December 2015 by Clark 3 Comments

It’s the end of the year, and given that I’m an advocate for the benefits of reflection, I suppose I’d better practice what I preach. So what do I think I learned over this past year? Several things come to mind (and I reserve the right for more things to percolate out, but those will be my 2016 posts, right? :):

  1. The Revolution is real: the evidence mounts that there is a need for change in L&D, and when those steps are taken, good things happen. The latest Towards Maturity report shows that the steps taken by their top-performing organizations are very much about aligning with business, focusing on performance, and more.  Similarly, Chief Learning Officer‘s Learning Elite Survey points to making links across the organization and measuring outcomes.  The data supports the principled observation.
  2. The barriers are real: there is continuing resistance to the most obvious changes. 70:20:10, for instance, continues to get challenged on nonsensical issues like the exactness of the numbers!?!?  The fact that a Learning Management System is not a strategy still doesn’t seem to have penetrated.  And so we’re seeing that other business units are taking on the needs for performance support, social media, and ongoing learning. Which is bad news for L&D, I reckon.
  3. Learning design is rocket science: (or should be). The perpetration of so much bad elearning continues to be demonstrated in exhibition halls around the globe.  It’s demonstrably true that a tarted-up information presentation and knowledge test isn’t going to lead to meaningful behavior change, but we’re still thrusting people into positions without background and giving them tools oriented toward content presentation.  Somehow we need to do better. I’m still pushing the Serious eLearning Manifesto.
  4. Mobile is well on its way: we’re seeing mobile becoming mainstream, and this is a good thing. While we still hear the drum beating to put courses on a phone, we’re also seeing that call being ignored. We’re instead seeing real needs being met, and new opportunities being explored.  There’s still a ways to go, but here’s to a continuing awareness of good mobile design.
  5. Gamification is still being confounded: people aren’t really drawing clear conceptual distinctions around games. We’re still seeing linear scenarios confounded with branching ones, gamification confounded with serious games, and more.  Some of this is because the concepts are complex, and some because of vested interests.
  6. Games seem to be reemerging: while interest in games became mainstream circa 2010, there hasn’t been a real sea change in their use.  However, it quietly feels like folks are beginning to get their minds around Immersive Learning Simulations, aka Serious Games.   There’s still a ways to go in really understanding the critical design elements, but the tools are getting better, making them more accessible in at least some formats.
  7. Design is becoming a ‘thing’: all the hype around Design Thinking is leading to a greater concern about design, and this is a good thing. Unfortunately there will probably be some hype to sift through before clarity emerges, but at least the overall awareness-raising is a good step.
  8. Learning to learn seems to have emerged: years ago the late, great Jay Cross and I and some colleagues put together the Meta-Learning Lab, and it was way too early (like so much I touch :p). However, his passing has raised the term again, and there’s much more resonance. I don’t think it’s necessarily a thing yet, but there’s far greater resonance than we had at the time.
  9. Systems are coming: I’ve been arguing for the underpinnings, e.g. content systems.  And I’m (finally) beginning to see more interest in that, and other components are advancing as well: data (e.g. the great work Ellen Wagner and team have been doing on Predictive Analytics), algorithms (all the new adaptive learning systems), etc. I’m keen to think about what tags are necessary to support the ability to leverage open educational resources as part of such systems.
  10. Greater inputs into learning: we’ve seen learning folks get interested in behavior change, habits, and more.  I’m thinking we’re going to go further. Areas I’m interested in include myth and ritual, powerful shapers of culture and behavior. And we’re drawing on greater inputs into the processes as well (see 7, above).  I hope this continues, as part of learning to learn is to look to related areas and models.

Obviously, these are things I care about.  I’m fortunate to be able to work in a field that I enjoy and believe has real potential to contribute.  And just fair warning, I’m working on a few areas in several ways.  You’ll see more about learning design and the future of work sometime in the near future. And rather than generally agitate, I’m putting together two specific programs – one on (e)learning quality and one on L&D strategy – that are intended to be comprehensive approaches.  Stay tuned.

That’s my short list; I’m sure more will emerge.  In the meantime, I hope you had a great 2015, and that your 2016 is your best year yet.

Filed Under: design, games, meta-learning, mobile, social, strategy, technology

CERTainly room for improvement

24 November 2015 by Clark 3 Comments

As mentioned before, I’ve become a member of my local Community Emergency Response Team (CERT), since in the case of disaster the official first responders (police, fire, and paramedics) will be overwhelmed.  And it’s a good group, with a lot of excellent effort in processes and tools as well as drills.  Still, of course, there’s room for improvement.  I encountered one such opportunity at our last meeting, and I think it’s an interesting case study.

So one of the things you’re supposed to do in conducting search and rescue is to go from building to building, assessing damage and looking for people to help.  And one of the useful things to do is to mark the status of the search and the outcomes, so no one wastes effort on an already-explored building. While the marking is covered in training and there are support tools to help you remember, ideally it’d be memorable, so that you can regenerate the information and don’t have to look it up.

The design for the marking is pretty clear: you first make a diagonal slash when you start investigating a building, and then you make a crossing slash when you’ve made your assessment. And specific information is to be recorded in each quarter of the resulting X: left, right, top, and bottom.  (Note that the US standard set by FEMA doesn’t correspond to the international standard from the International Search & Rescue Advisory Group, interestingly).

However, when we brought it up in a recent meeting (and they’re very good about revisiting things that quickly fade from memory), it was obvious that most people couldn’t recall what goes where. And when I heard what the standard was, I realized it didn’t have a memorable structure.  So, here are the four things to record:

  • the group who goes in
  • when the group completes
  • what hazards may exist
  • and how many people and what condition they’re in*

So how would you map these to the quadrants?  In one sense it doesn’t matter, as long as there’s a sensible rationale behind the mapping. One sign that there’s not?  You can’t remember what goes where.

Our local team leader was able to recall that the order is: left – group, top – completion, right – hazards, and bottom – people.  However, this seems to me to be less than  memorable, so let me explain.

To me, wherever you put the ‘in’, left or top, the ‘coming out’ ought to be opposite. And given our natural flow, the group going in makes sense on the left, and coming out ought to go on the right.  In – out.  Then it’s relatively arbitrary where hazards and people go.  I’d make a case that top-of-mind should be the hazards found, to warn others, but that the people are the bottom line (see what I did there?).  I could easily make a case for the reverse, but either would be a mnemonic to support remembering.  Instead, as far as I can tell, it’s completely arbitrary. Now, if it’s not arbitrary and there is a rationale, it’d help to share it!
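To make the proposal concrete, here’s a minimal sketch of the marking as a data structure (Python, purely illustrative; the class and field names are mine, and the quadrant layout is my proposed mnemonic, not the FEMA standard; the triage codes are from the footnote below):

```python
from dataclasses import dataclass
from enum import Enum

class Triage(Enum):
    """Victim status codes (see the footnote at the end of this post)."""
    BLACK = "dead"
    RED = "needs immediate treatment for life-threatening issues"
    YELLOW = "needs non-urgent treatment"
    GREEN = "ok"

@dataclass
class SearchMarking:
    """One X-code marking, one field per quadrant.

    Layout follows the mnemonic proposed above, NOT the actual standard:
    in on the left, out on the right, hazards top-of-mind, and people
    as the bottom line.
    """
    left_group_in: str                  # the group who went in
    right_completed: str                # when the group completed
    top_hazards: str                    # hazards found, to warn others
    bottom_people: dict[Triage, int]    # how many people, and their condition

# e.g. SearchMarking("CERT team 7", "14:30", "gas leak", {Triage.GREEN: 2})
```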

The point being: to help people remember things that are in some sense arbitrary, make a story that makes them memorable. Sure, I can look it up, assuming that the lookup book they handed out stays in the pocket of my special backpack.  (And I’m likely to remember now, because of all this additional processing, but that’s not what happens in the training.)  However, making it regenerable from some structure gives you a much better chance of having it to hand. Either a model or a story is better than arbitrary, and one’s possible with a rewrite, but as it is, there’s neither.

So there’s a lesson in design to be had, I reckon, and I hope you’ll put it to use.

* (black = dead; red = needs immediate treatment for life-threatening issues; yellow = needs non-urgent treatment; green = ok)

Filed Under: design, strategy

A Competent Competency Process

4 November 2015 by Clark 3 Comments

In looking at ways to improve the design of courses, the starting point is good objectives. As a consequence, I’ve been enthused about the notion of competencies as a way to put the focus on what people do, not what they know. So how do we do this systematically, reliably, and repeatably?

Let’s be clear: there are times we need knowledge-level objectives. In medicine, or any other field where responses need to be quick and accurate, we need a very constrained vocabulary. So drilling in the exact meanings of words is valuable, as an example. Though ideally that’s coupled with using that language to set context or make decisions. So “we know it’s the right medial collateral ligament, prep for the surgery” could serve as a context, or we could have a choice to operate on the left or right ventricle as a decision point. As van Merriënboer’s 4 Component Instructional Design points out, we need to separate out the knowledge from the complex problems we apply it to. Still, I suggest that what’s likely to make a difference to individuals and organizations is the ability to make better decisions, not recite rote knowledge.

So how do we get competencies when we want them? The problem, as I’ve talked about before, is that SMEs don’t have access to 70% of what they actually do; it’s compiled away. We therefore need good processes, so I’ve talked to a couple of educational institutions doing competencies to see what could be learned. And it’s clear that while there’s no turnkey approach, what’s emerging is a process with some specific elements.

One thing is that if you’re trying to cover a whole college-level course, you’ve got to break it up. Break the top level down into a handful of competencies. Then continue to take each of those apart, perhaps another level, ‘til you have a reasonable scope. This is heuristic, of course, but with a focus on ‘do’, you have a good likelihood of getting there.
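As a sketch of that recursive breakdown (illustrative only; the class, field names, and example competencies are mine, not a standard):

```python
from dataclasses import dataclass, field

@dataclass
class Competency:
    """A 'do' statement, broken down recursively until the scope is reasonable."""
    statement: str                                  # what learners should be able to DO
    parts: list["Competency"] = field(default_factory=list)

    def designable_units(self):
        """Leaf competencies: the ones you'd actually design activities for."""
        if not self.parts:
            yield self
            return
        for part in self.parts:
            yield from part.designable_units()

# Hypothetical course-level example:
course = Competency("design effective elearning", parts=[
    Competency("write meaningful objectives", parts=[
        Competency("draft a Mager-style objective"),
    ]),
    Competency("design practice aligned to objectives"),
])
```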

One of the things I’ve heard across various entities trying to get meaningful objectives is to work with more than one SME. If you can get several, you have a better chance of triangulating on the right outcomes and objectives. They may well disagree about the knowledge, but if you manage the process right (emphasize ‘do’; lather, rinse, repeat), you should be able to get them to converge. It may take some education, and you may have to let them get the

Not just any SME will do. Two things are really valuable: on-the-ground experience to know what needs to be done (and what doesn’t), and the ability to identify and articulate the models that guide the performance. Some instructors, for instance, can teach to a text but aren’t truly masters of the content, nor experienced practitioners. Multiple SMEs help, but the better the SME, the better the outcome.

I believe you want to ensure that you’re getting both the right things and all the things. I’ve recommended to a client triangulating not just with SMEs, but with practitioners (or, rather, the managers of the roles the learners will be engaged in), and any other reliable stakeholders. The point is to get input from practice as well as theory, identifying the models that support proper behavior, and the misconceptions that underpin where people go wrong.

Once you have a clear idea of the things people need to be able to do, you can then identify the language for the competencies. I’m not a fan of Bloom’s taxonomy (unwieldy, hard to apply reliably), but I am a fan of Mager-style definitions (action, context, metric).
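A Mager-style definition is structured enough to capture directly. Here’s a minimal sketch (the field names, rendering, and example strings are mine):

```python
from dataclasses import dataclass

@dataclass
class MagerObjective:
    """Mager-style objective: observable action, performance context, success metric."""
    action: str   # e.g. "triage incoming support requests" (hypothetical)
    context: str  # e.g. "given a live queue of customer reports"
    metric: str   # e.g. "routing at least 95% to the right team"

    def render(self):
        return f"{self.context}, the learner will {self.action}, {self.metric}."
```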

After this is done, you can identify the knowledge needed, and perhaps create objectives for that, but to me the focus is on the ‘do’, the competencies. This is very much aligned with an activity-based learning model, whereby you design the activities that align with the competencies before you decide on the content.

So, this is what I’m inferring. There would be good tools and templates you could design to go with this, identifying competencies, misconceptions, and at the same time also getting stories and motivations. (An exercise left for the reader. ;) The overall goal, however, of getting meaningful objectives is key to getting good learning design. Any nuances I’m missing?

Filed Under: design, strategy

Supporting our Brains

13 October 2015 by Clark 3 Comments

One of the ways I’ve been thinking about the role mobile can play in design is thinking about how our brains work, and don’t.  It came out of both mobile and the cognitive science for learning workshop I gave at the recent DevLearn.  This applies more broadly to performance support in general, so I thought I’d share where my thinking is going.

To begin with, our cognitive architecture is demonstrably awesome; just look at your surroundings and recognize that your clothing, housing, technology, and more are the products of human ingenuity.  We have formidable capabilities to predict, plan, and work together to accomplish significant goals.  That said, there’s no one all-singing, all-dancing architecture out there (yet), and every approach has weak points. Technology, for instance, is bad at pattern-matching and meaning-making, two things we’re really pretty good at.  On the flip side, we have some flaws too. So what I’ve done here is to outline the flaws, and how we’ve created tools to get around those limitations.  And to me, these are principles for design:

[Table: cognitive limitations and the support tools that address them]

So, for instance, our senses capture incoming signals in a sensory store, which has the interesting property of almost unlimited capacity, but for only a very short time. There’s no way all of it can get into our working memory; what we attend to is what we have access to, so we can’t accurately recall everything we perceive.  However, technology (camera, microphone, sensors) can record it all perfectly. So making capture capabilities available is a powerful support.

Similarly, our attention is limited, so if we’re focused in one place, we may forget or miss something else.  However, we can program reminders or notifications that help us recall important events we don’t want to miss, or draw our attention where needed.

The limits on working memory (you may have heard of the famous 7±2, which really is fewer than 5) mean we can’t hold too much in our heads at once, such as the interim results of complex calculations.  However, calculators can do such processing for us. We also have limited ability to carry information around, for the same reasons, but we can create external representations (such as notes or scribbles) to hold those thoughts for us.  Spreadsheets, outlines, and diagramming tools allow us to record our interim thoughts for further processing.

We also have trouble remembering things accurately. Our long-term memory tends to retain meaning, not particular details. However, technology can remember arbitrary and abstract information completely. What we need are ways to look that information up, or search for it. Portals and lookup tables trump trying to put that information into our heads.

We also have a tendency to skip steps. We have some randomness in our architecture (a benefit: if we sometimes do things differently, and occasionally that’s better, we have a learning opportunity), but it means we don’t execute perfectly.  However, we can use process supports like checklists.  Atul Gawande wrote a fabulous book on the topic, The Checklist Manifesto, that I can recommend.

Other phenomena: previous experience can bias us in particular directions, but we can put supports in place to provide lateral prompts. We can also prematurely evaluate a solution rather than checking to verify it’s the best; data can be used to help us be aware.  And we can trust our intuition too much, and we can wear down, so we don’t always make the best decisions.  Templates, for example, are a tool that can help us focus on the important elements.
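Restating the table behind this post as a simple lookup (my paraphrase of the rows discussed above):

```python
# Each cognitive limitation discussed above, paired with the kind of
# tool support that compensates for it.
COGNITIVE_SUPPORTS = {
    "sensory store: huge capacity, very short duration": "capture tools (camera, microphone, sensors)",
    "limited attention": "reminders and notifications",
    "small working memory (7±2, really <5)": "calculators; external notes, spreadsheets, diagrams",
    "long-term memory keeps meaning, not detail": "portals, lookup tables, search",
    "tendency to skip steps": "checklists and process supports",
    "bias from previous experience": "lateral prompts",
    "premature evaluation, fatigue, over-trusted intuition": "data checks and templates",
}
```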

This is just the result of several iterations, and I think more is needed (e.g. about data to prevent premature convergence), but to me it’s an interesting alternate approach to consider where and how we might support people, particularly in situations that are new and as yet untested.  So what do you think?

Filed Under: design, meta-learning, mobile

Agile?

17 September 2015 by Clark 6 Comments

Last Friday’s #GuildChat was on Agile Development.  The topic is interesting to me, because like with Design Thinking, it seems like well-known practices with a new branding. So as I did then, I’ll lay out what I see and hope others will enlighten me.

As context, during grad school I was in a research group focused on user-centered system design, which included design, processes, and more. I subsequently taught interface design (aka Human Computer Interaction, or HCI) for a number of years (while continuing to research learning technology), and made a practice of advocating the best practices from HCI to the ed tech community.  What was current at the time were iterative, situated, collaborative, and participatory design processes, so I was pretty familiar with the principles, and a fan. That is: really understand the context, design and test frequently, and work in teams with your customers.

Fast forward a couple of decades, and the Agile Manifesto puts a stake in the ground for software engineering. And we see a focus on releasable code, but again with principles of iteration and testing, team work, and tight customer involvement.  Michael Allen was enthused enough to use it as a spark that led to the Serious eLearning Manifesto.

That inspiration has clearly (and finally) now moved to learning design. Whether it’s Allen’s SAM or Ger Driesen’s Agile Learning Manifesto, we’re seeing a call for rethinking the old waterfall model of design.  And this is a good thing (only decades late ;).  Certainly we know that working together is better than working alone (if you manage the process right ;), so the collaboration part is a win.

And we certainly need change.  The existing approach we too often see involves a designer being given some documents, access to a SME (if lucky), and instructions to create a course on X.  Sure, there are tools and templates, but they’re focused on making particular interactions easier, not on ensuring better learning design. And the designer works alone, doing the design and development in one pass. There are likely to be review checkpoints, but there’s little testing.  There are variations on this, including perhaps an initial collaboration meeting, some SME review, or a storyboard before development commences, but too often it’s largely an independent one-way flow, and this isn’t good.

The underlying issue is that waterfall models, where you specify the requirements in advance and then design, develop, and implement, just don’t work. The problem is that the human brain is pretty much the most complex thing in existence, and when we determine a priori what will work, we don’t take into account the fact that, Heisenberg-like, what we implement will change the system. Iterative development and testing allow the specs to change after initial experience.  Several issues arise with this, however.

For one, there’s a question about the right size and scope of a deliverable.  Learning experiences, while typically overwritten, do have some structure that keeps them from yielding intermediately useful results. I was curious about what made sense; to me it seemed that you could develop your final practice first as a deliverable, and then fill in the required earlier practice and content resources. That seemed similar to what was offered up during the chat in response to my question.

The other is scoping and budgeting the process. I often ask, when talking about game design, how you know when to stop iterating. The usual (and wrong) answer is when you run out of time or money. The right answer is when you’ve hit your metrics: the ones you should set before you begin, which determine the parameters of a solution (and which can be consciously reconsidered as part of the process).  The typical answer, particularly for those concerned with controlling costs, is something like a heuristic choice of three iterations.  Drawing on some other work in software process, I’d recommend creating estimates, and then reviewing them afterwards. In the software case, people got much better at estimating, and that could be a valuable extension.  But it shouldn’t be any more difficult to estimate, certainly with some experience, than existing methods.
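In code terms, the difference between the wrong and right stopping rules might look like this (a sketch; `evaluate` and `revise` are placeholders for your own testing and redesign steps, and the metric names are whatever you set up front):

```python
def iterate_design(evaluate, revise, targets, budget=3):
    """Iterate until measured outcomes hit the pre-set targets, with the
    budget cap (the common 'three iterations' heuristic) only as a fallback.

    evaluate() returns {metric_name: measured_value}; targets maps each
    metric_name to its required value. Both are hypothetical stand-ins.
    """
    results = {}
    for _ in range(budget):
        results = evaluate()
        if all(results.get(name, 0) >= need for name, need in targets.items()):
            break  # stop because you hit your metrics, not because money ran out
        revise(results)
    return results
```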

Ok, so I may be a bit jaded about new brandings on what should already be good practice, but I think anything that helps us focus on developing in ways that lead to quality outcomes is a good thing.  I encourage you to work more collaboratively, develop and test more iteratively, and work on discrete chunks. Your stakeholders should be glad you did.

 

Filed Under: design, strategy

Designing Learning Like Professionals

12 August 2015 by Clark 4 Comments

I’m increasingly realizing that the ways we design and develop content are part of the reason why we’re not getting the respect we deserve.  Our brains are arguably the most complex things in the known universe, yet we don’t treat our discipline as the science it is.  We need to start combining experience design with learning engineering to really start delivering solutions.

To truly design learning, we need to understand learning science.  And this does not mean paying attention to so-called ‘brain science’. There is legitimate brain science (c.f. Medina, Willingham), and then there’s a lot of smoke.

For instance, there’re sound cognitive reasons why information dump and knowledge test won’t lead to learning.  Information that’s not applied doesn’t stick, and application that’s not sufficient doesn’t stick. And it won’t transfer well if you don’t have appropriate contexts across examples and practice.  The list goes on.

What it takes is understanding our brains: the different components, the processes, how learning proceeds, and what interferes.  And we need to look at the right levels; lots of neuroscience is not relevant at the higher level where our thinking happens.  And much about that is still under debate (just google ‘consciousness‘ :).

What we do have are robust theories about learning that pretty comprehensively integrate the empirical data.  More importantly, we have lots of ‘take home’ lessons about what does, and doesn’t, work.  But just following a template isn’t sufficient.  There are gaps where we have to use our best inferences, based upon models, to fill in.

The point I’m trying to make is that we have to stop treating designing learning as something anyone can do.  The notion that we can have tools that make it so anyone can design learning has to be squelched. We need to go back to taking pride in our work, and designing learning that matches how our brains work. Otherwise, we are guilty of malpractice. So please, please, start designing in coherence with what we know about how people learn.

If you’re interested in learning more, I’ll be running a learning science for design workshop at DevLearn, and would love to see you there.

Filed Under: design, meta-learning, strategy

Symbiosis

20 May 2015 by Clark Leave a Comment

One of the themes I’ve been strumming in presentations is one where we complement what we do well with tools that do well the things we don’t. A colleague reminded me that JCR Licklider wrote of this decades ago (and I’ve similarly followed the premise from the writings of Vannevar Bush, Doug Engelbart, and Don Norman, among others).

We’re already seeing this.  Chess has changed from people playing people, through people playing computers and computers playing computers, to computer–human pairs playing other computer–human pairs. The best competitors aren’t the best chess players or the best programs, but the best pairs: the player and computer that best know how to work together.

The implications are to stop trying to put everything in the head, and to start designing systems that complement us in ways that ensure the combination is the optimal solution to the problem being confronted. Working backwards, we should decide what portion should be handled by the computer and what by the person (or team), then design the resources, and then train the humans to use those resources in context to achieve the goals.

Of course, this is only in the case of known problems, the ‘optimal execution’ phase of organizational learning. We similarly want to have the right complements to support the ‘continual innovation’ phase as well. What that means is that we have to be providing tools for people to communicate, collaborate, create representations, access and analyze data, and more. We need to support ways for people to draw upon and contribute to their communities of practice from their work teams. We need to facilitate the formation of work teams, and make sure that this process of interaction is provided with just the right amount of friction.

Just like a tire, interaction requires friction. Too little and you go skidding out of control. Too much, and you impede progress. People need to interact constructively to get the best outcomes. Much is known about productive interaction, though little enough seems to make its way into practice.

Our design approaches need to cover the complete ecosystem, everything from courses and resources to tools and playgrounds. And it starts by looking at distributed cognition, recognizing that thinking isn’t done just in the head, but in the world, across people and tools. Let’s get out and start playing instead of staying in old trenches.

Filed Under: strategy, technology

Personal Mobile Mastery

23 April 2015 by Clark Leave a Comment

A conversation with a colleague prompted a reflection.  The topic was personal learning, and in looking for my intersections (beyond my love of meta-learning), I looked at my books. The Revolution isn’t an obvious match, nor are games (though trust me, I could make them work ;), but a more obvious match was mlearning. So the question is: how do we do personal knowledge mastery with mobile?

Let’s get the obvious out of the way. Most of what you do on the desktop, particularly social networking, is doable on a mobile device.  And you can use search engines and reference tools just the same. You can find how-to videos as well. Is there more?

First, of course, are all the things that make you more ‘effective’.  Take the four key original apps on the Palm Pilot, for instance: the calendar to remind you of events or check availability, to-do checklists to remember commitments, memos to take notes for reference, and the contact list to reach people.  Which isn’t really learning, but it’s valuable to learn to be good at these.

Then there are the things you can do because of where you are.  Navigating somewhere, or finding what’s around you, are the obvious choices. Those are things you won’t necessarily learn from, but they make you more effective.  But they can also help educate you. You can look at where you are on a map and see what’s around you, or identify the thing on the map that’s in a particular direction (“oh, that’s the Quinnsitute” or “there’s Mount Clark” or whatever), and have a chance of identifying a prominence you can see.

And you can use those social media tools as before, but you can also use them because of where or when you are. You can snap pictures of something and send them around to ask for help. Of course, you can capture photos or video for later recollection and reflection, and contribute them to a blog post.  And take notes by text or audio, or even sketching or diagramming. The notes people take for themselves at conferences, for instance, get shared and are valuable not just for the sharer, but for all attendees.

Certainly searching for things you don’t understand or, when there’s an unknown language, seeing if you can get a translation, are also options.  You can learn what something means, and avoid making mistakes.

Support based on when you are, e.g. on what you’re doing at the moment, is a little less developed.  You’d have to have rich tagging around your calendar to signal what it is you’re doing for a system to be able to leverage that information, but I reckon we can get there if and when we want.
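For instance (everything below is invented for illustration), rich tags on a calendar entry might let a system infer the activity and offer matched support:

```python
# Hypothetical tagged calendar entry; the tag vocabulary is invented.
event = {
    "title": "Kickoff call with new client",
    "start": "2015-04-23T10:00",
    "tags": ["sales", "kickoff"],
}

def support_for(event):
    """Sketch: map activity tags to in-the-moment support resources."""
    offers = {
        "sales": ["pricing sheet"],
        "kickoff": ["kickoff checklist", "discovery question prompts"],
    }
    return [item for tag in event["tags"] for item in offers.get(tag, [])]

print(support_for(event))
# ['pricing sheet', 'kickoff checklist', 'discovery question prompts']
```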

I’m not a big fan of ‘learning’ on a mobile device (maybe a tablet in transit, but not courses on a phone).  On the other hand, I am a big fan of self-learning on a phone: using your phone to make you smarter. These are embryonic thoughts, so I welcome feedback.   Being more contextually aware, both in the moment and over time, is a worthwhile opportunity, one we can and should look to advance.  There’s much yet to do, though tools like ARIS are going to help change that. And that’ll be good.

 

Filed Under: meta-learning, mobile

Why models matter

21 April 2015 by Clark 2 Comments

In the industrial age, you really didn’t need to understand why you were doing what you were doing, you were just supposed to do it.  At the management level, you supervised behavior, but you didn’t really set strategy. It was only at the top level where you used the basic principles of business to run your organization.  That was then, this is now.

Things are moving faster, competitors are able to counter your advances in months, there’s more information, and this isn’t decreasing.  You really need to be more agile to deal with uncertainty, and you need to continually innovate.   And I want to suggest that this advantage comes from having a conceptual understanding, a model of what’s happening.

There are responses we can train: specific ways of acting in context.  These aren’t what’s most valuable any more.  Experts, with vast experience responding in different situations, abstract models that guide what they do, consciously or unconsciously (the latter is a problem, as it makes the models harder to get at; experts can’t tell you 70% of what they actually do!).  Most people, however, are in the novice-to-practitioner range, and they’re not necessarily ready to adapt to changes unless we prepare them.

What gives us the ability to react is having models that explain the underlying causal relations as we best understand them, and then support in applying those models in different contexts.  If we have models, and see how those models guide performance in context A, then B, and then we practice applying them in contexts C and D (with model-based feedback), we gradually develop a more flexible ability to respond. It’s not subconscious, as it is for experts, but we can figure it out.

So, for instance, if we have the rationale behind a sales process, how it connects to the customer’s mental needs and the current status, we can adapt it to different customers.  If we understand the mechanisms of medical contamination, we can adapt to new vectors.  If we understand the structure of a cyber system, we can anticipate security threats. The point is that making inferences on models is a more powerful basis than trying to adapt a rote procedure without knowing the basis.

I recognize that I talk a lot in concepts, e.g. these blog posts and diagrams, but there’s a principled reason: I’m trying to give you a flexible basis, models, to apply to your own situation.  That’s what I do in my own thinking, and it’s what I apply in my consulting.  I am a collector of models, so that I have more tools to apply to solving my own or others’ problems.   (BTW, I use concept and model relatively interchangeably, if that helps clarify anything.)

It’s also a sound basis for innovation.  Two related models (ahem) of creativity say that new ideas are either the combination of two different models or an evolution of an existing one.  Our brains are pattern matchers, and the more we observe a pattern, the more likely it will remind us of something, a model. The more models we have to match, the more likely we are to find one that maps. Or one that activates another.

Consequently, it’s also one of the things I push as a key improvement to learning design. In addition to meaningful practice, give the concept behind it, the why, in the form of a model. I encourage you to look for the models behind what you do, the models in what you’re presented, and the models in what your learners are asked to do.

It’s a good basis for design, for problem-solving, and for learning.  That, to me, is a big opportunity.

Filed Under: design, meta-learning, strategy

Defining Microlearning?

14 April 2015 by Clark 8 Comments

Last week on the #chat2lrn twitter chat, the topic was microlearning. It was apparently prompted by this post by Tom Spiglanin, which does a pretty good job of defining it, but some conceptual confusion showed up in the chat that makes it clear there’s work to be done.  I reckon there may be a role for the label and even the concept, but I wanted to take a stab at what it is and isn’t, at least in principle.

So the big point to me is the word ‘learning’.  A number of people opined about accessing a how-to video, and let’s be clear: learning doesn’t have to come from that.   You could follow the steps and get the job done, and yet need to access it again the next time. Just like I can look up the specs on the resolution of my computer screen, use that information, but have to look it up again next time.  So it could be just performance support, and that’s a good thing, but it’s not learning.  It suits the notion of micro content, but again, it’s about getting the job done, not developing new skills.

Another interpretation was little bits of components of learning (examples, practice) delivered over time. That is learning, but it’s not microlearning; it’s distributed learning, and the overall learning experience is macro (and much more effective than the massed, event-based model).  Again, a good thing, but not (to me) microlearning.  This is what Will Thalheimer calls subscription learning.

So, then, if these aren’t microlearning, what is?  To me, microlearning has to be a small but complete learning experience, and this is non-trivial.  To be a full learning experience, this requires a model, examples, and practice.  This could work with very small learnings (I use an example of media roles in my mobile design workshops).  I think there’s a better model, however.

To explain, let me digress. When we create formal learning, we typically take learners away from their workplace (physically or virtually), and then create contextualized practice. That is, we may present concepts and examples (pre- via blended, ideally, or less effectively in the learning event), and then we create practice scenarios. This is hard work. Another alternative is more efficient.

Here, we layer the learning on top of the work learners are already doing.  Now, why isn’t this performance support? Because we’re not just helping them get the job done; we’re explicitly turning it into a learning event by not only scaffolding the performance, but layering on a minimal amount of conceptual material that links what they’re doing to a model. We (should) do this in examples and feedback on practice; now we can do it around real work. We can because (via mobile or instrumented systems) we know where learners are and what they’re doing, and we can build content to suit.  It’s always been a promise of performance support systems that they could deliver learning on top of helping the outcome, but it’s as yet seldom seen.
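As a sketch of the distinction (the task key and content strings are invented): performance support returns just the job aid, while the learning layer adds the minimal conceptual link to a model:

```python
def layer_learning(task):
    """Given what the performer is doing (known via mobile sensors or an
    instrumented system), return the job aid plus a minimal conceptual layer.
    All content here is invented for illustration.
    """
    job_aids = {"code_expense": "step-by-step expense coding walkthrough"}
    models = {"code_expense": "why expense categories map to cost centers (the model)"}
    return {
        "support": job_aids.get(task),  # performance support: gets the job done
        "concept": models.get(task),    # the added layer that turns it into learning
    }

# layer_learning("code_expense") returns both the aid and the model;
# plain performance support would return only the former.
```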

And the focus on minimalism is good, too.  We overwrite and overproduce, adding in lots that’s not essential.  Cf. Carroll’s Nurnberg Funnel or Moore’s Action Mapping.  And even for non-mobile, minimalism makes sense (as I tout under the banner of the Least Assistance Principle).  That is, it’s really not rude to ask people (or yourself as a designer) “what’s the least I can do for you?”  Because that’s what people generally really prefer: give me the answer and let me get back to work!

Microlearning as a phrase has probably become current (he says, cynically) because elearning providers are touting it to sell their tools’ new ability to deliver to mobile.   But it can also be a watchword to emphasize thinking about performance support, learning ‘in context’, and minimalism.  So I think we may want to continue to use it, but I suggest it’s worthwhile to be very clear what we mean by it. It’s not courses on a phone (mobile elearning), and it’s not spaced-out learning; it’s small but useful full learning experiences that can fit, by size of objective or context, ‘in the moment’.  At least, that’s my take; what’s yours?

Filed Under: design, mobile, strategy

