Learnlets

Clark Quinn’s Learnings about Learning

Search Results for: design

Direct Instruction and Learning Experience Design

30 July 2019 by Clark

After my previous article on direct instruction versus guided discovery, some discussion mentioned Engelmann’s Direct Instruction (DI). And, something again pointed me to the most comprehensive survey of educational effects. So, I tracked both of these down, and found some interesting results that both supported, and confounded, my learning. Ultimately, of course, it expanded my understanding, which is always my desire. So it’s time to think a bit deeper about Direct Instruction and Learning Experience Design.

Engelmann’s Direct Instruction is very scripted. It is rigorous in its goals, and elicits a high rate of responses from learners. Empirically, DI has great success, with some complaints about lack of teacher flexibility. It strikes me as very good for developing core skills like reading and maths. I was worried about the intersection of many responses per minute and more complex tasks, though it appears that’s an issue that has been addressed. I couldn’t find the paper that makes that case, however.

Another direction, however, proved fruitful.  John Hattie, an educational researcher, collected and conducted reviews of 800+ meta-analyses to look at what worked (and didn’t) in education.  It’s a monumental work, collected in his book Visible Learning. I’d heard of it before, but hadn’t tracked it down. It was time.

And it’s impressive in breadth  and depth.  This is arguably the single most important work in education. And it opened my eyes in several ways.  To illustrate, let me collect for you the top (>.4)  impacts found, which have some really interesting implications:

  • Reciprocal teaching (.74)
  • Providing feedback (.72)
  • Teaching student self-verbalization (.67)
  • Meta-cognition strategies (.67)
  • Direct instruction (.59)
  • Mastery learning (.57)
  • Challenging goals (.56)
  • Frequent/effects of testing (.46)
  • Behavioral organizers (.41)

Reciprocal teaching and meta-cognition strategies coming out highly is a great outcome. And of course I am not surprised to see the importance of feedback. I have to say that I was surprised to see direct instruction and mastery learning coming out so high. So what’s going on? It’s related to what I mentioned in the aforementioned article, about just what the definition of DI is.

So, Hattie says: “…what the critics mean by direct instruction is didactic teacher-led talking from the front…” And, indeed, that’s my fear of using the label. He goes on to point out the major steps of DI (in my words):

  1. Have clear learning objectives: what should the learner be able to  do?
  2. Clear success criteria (which to me is part of 1)
  3. Engagement: an emotional ‘hook’
  4. A clear pedagogy: info (models & examples), modeling, checking for understanding
  5. Guided practice
  6. Closure of the learning experience
  7. Reactivation: spaced and varied practice

And, of course, this is pretty much everything I argue for as being key to successful learning experience design. And, as I suspected, DI is not what the label would lead you to believe (which I  do think is a problem).  As I mentioned in a subsequent post, I’ve synthesized my approach across many elements, integrating the emotional elements along with effective education practice (see the alignment).  There’s so much more here, but it’s a very interesting result. Direct Instruction and Learning Experience Design have a really nice alignment.

And a perfect opportunity to remind you that I’ll be offering a Learning Experience Design workshop at DevLearn, which will include the results of my continuing investigation (over decades) to create an approach that’s doable and works. Hope to see you there!

Redesigning Learning Design

16 January 2019 by Clark

Of late, a lot of my work has been designing learning design. Helping orgs transition their existing design processes to ones that will actually have an impact. That is, someone’s got a learning design process, but they want to improve it. One idea, of course, is to replace it with some validated design process. Another approach, much less disruptive, is to find opportunities to fine tune the design. The idea is to find the minimal set of changes that will yield the maximal benefit. So what are the likely inflection points?  Where am I finding those spots for redesigning?  It’s about good learning.

Starting at the top, one place where organizations go wrong right off the bat is the initial analysis for a course. There’s the ‘give us a course on this’, but even if there’s a decent analysis the process can go awry. Side-stepping the big issue of performance consulting (do a reality check: is this truly a case for a course), we get into working to create the objectives. It’s about how you work with SMEs. Understanding what they can,  and can’t, do well means you have the opportunity to ensure that you get the right objectives to design to.

From there, the most meaningful and valuable step is to focus on the practice. What are you having learners  do, and how can you change that?  Helping your designers switch to good  assessment writing is going to be useful. It’s nuanced, so the questions don’t  seem that different from typical ones, but they’re much more focused for success.

Of course, to support good application of the content to develop abilities, you need the right content! Again, getting designers to understand the nuances that distinguish useful examples from mere stories isn’t hard, but it’s rarely done. Similarly, knowing why you want models, and not just presentations about the concept, isn’t fully realized.

Of course, making it an emotionally compelling experience has learning impact as well. Yet too often we see the elements just juxtaposed instead of integrated. There  are systematic ways to align the engagement and the learning, but they’re not understood.

A final note is knowing when to have someone work alone, and when some collaboration will help. It’s not a lot, but collaboration at the right time (if it happens at all) can make a valuable contribution to the quality of the outcome.

I’ve provided many resources about better learning design, from my 7 step program white paper  to  my deeper elearning series for Learnnovators.  And I’ve a white paper about redesigning as well. And, of course, if you’re interested in doing this organizationally, I’d welcome hearing from you!

One other resource will be my upcoming workshop at the Learning Solutions conference on March 25 in Orlando, where we’ll spend a day working on learning experience design, integrating engagement and learning science.  Of course, you’ll be responsible for taking the learnings back to your learning process, but you’ll have the ammunition for redesigning.  I’d welcome seeing you there!

From instructor to designer & facilitator

26 December 2018 by Clark

Someone on Quora asked me about the instructor role:
How would the role of a teacher change in this modern online learning world?
While I posted an answer there, I thought I’d post it here too:

I see two major roles in that of the ‘teacher’: the designer of learning experiences (pre), and the facilitator of same (during/post). I think the design changes by returning to natural learning approaches, an apprenticeship model (c.f. Cognitive Apprenticeship). Our wetware hasn’t changed, so we want to use technology as an augment. Tech can make it easier to follow such a design paradigm.

The in-class role moves from presentation to facilitation. Ideally we have content and check, as well as any preliminary experiences, done in a ‘flipped model’. Leveling-up learners to a baseline happens before engaging in the key learning activities. Major activities can be solo if the material is more dedicated to training, but ideally are social, particularly when complex understandings are required (as they mostly are).

The role of the teacher is to check in on group discussions and projects, and bring out important lessons from the report-backs. We extend the learning by expanding understandings into more contexts, documenting the resulting applied understandings, or both, to create a unified understanding.

Application-based instruction is the focus, having learners do things with the learning, not just recite it. The design role is to create a sequence of preparation, meaningful engagement, and knowledge consolidation that’s a learning experience. The facilitation role is to help bring out misconceptions and important hints and tips to lead to learner success.

This really is true face-to-face as well, but technology offers us tools to take the drudge work out of the experience and end up having the facilitation role be focused on the most valuable aspects. That’s my take, at any rate.

And what’s your take?

(And this may be my only post this week; happy holidays everyone!)

Designing with science

17 July 2018 by Clark

How should we design? It’s all well and good to spout principles, but putting them into practice is another thing. While we always would like to follow learning science, it doesn’t always have all the answers we need. I was thinking about this with a project I’m working on, and it occurred to me that there might be some confusion. So I thought I’d share how I like to think and go about it, and see what you think.

So, first of all, you should go with the science. There are good principles around in a variety of forms.  Some good guidance comes in books such as:

  • e-Learning and the Science of Instruction (Clark & Mayer)
  • Design for How People Learn (Dirksen)
  • the Make it Learnable series (Shank)
  • and less directly but no less applicably, Michael Allen’s Guide to eLearning

There’s also ATD’s Science of Learning topic (with some good and some less good stuff). And the 3 Star Learning site. Both of these, of course, aren’t as comprehensive as a book. And, of course, you can also go right to the journals, like Instructional Science, the Journal of the Learning Sciences, and the like, if you are fluent in academese. For that matter, I’ve a video course about Deeper Instructional Design, i.e. a design approach with learning science ‘baked in’.

But what I was thinking of is what happens when they don’t address the specific concern you are wondering about. The second approach I recommend is theory. In particular, Cognitive Apprenticeship (my favorite model; Collins & Brown), or other theories like Elaboration Theory (Reigeluth), Pebble in a Pond (Merrill), or 4 Component ID (Van Merriënboer). They’re based on empirical data, but pulled together, and you can often make inferences in between the principles. Or, arguably more modern, something from Jonassen on problem-based learning or other more social constructivist approaches. While the next step is arguably better, in the real world you want a scrutable approach, but one that gets you moving forward the fastest.

Finally, you test. If science and theory can’t provide the answer, you can wing it, but it’s better to set up an experiment. Ideally, with your sample population. So, for instance, you don’t know whether to cast the learner in the simulation game as a consultant to many orgs, or in a role in one org with many situations. There’re tradeoffs: in the former it’s easier to provide multiple contexts for practice, but the latter may be more closely aligned with job performance. You can test it, and see what learners think about the experience. Of course, it may be that in the process of just designing both you have some insight. And that’s ok.

And, if you’re a reflective practitioner (and we should be), you might share your findings.  What did you learn?  Learning science advances to the extent that we continue to explore and test.  Speaking of which, how does this approach match with what you do?

Silly Design

3 July 2018 by Clark

Time for a brief rant on interface designs. Here we’re talking about two different situations, one device and one interface. And, hopefully, we can extract some lessons, because these are just silly design decisions.

First up is our OXO timer. And it’s a good timer, and gets lots of use. Tea, rice, lots of things. And, sometimes, a few things at a time. As you can see, there’re 3 timers. And, as far as I know, we’ve only used at most two at a time. So what’s the problem?

Well, there’re different beeps signaling the end of different timers. And that’s a good thing. Mostly.  But there’s one very very silly design decision here. Let me tell you that one has one beep, one has two beeps, and one has three. So, guess which number of beeps goes to which timer?  You can see they’re numbered…

Got your guess? It’d be sensible, of course, if the one beep went with the first timer, and two beeps went with the second. But you  know we’re not going there!  Nope, the first timer has two beeps. The second timer has 3 beeps. And the 3rd timer, of course, has one.

It’s a principle called ‘mapping’ (see Don Norman’s essential reading for anyone who designs for people:  The Design of Everyday Things). In it, you make the mapping logical, so for instance between the number of the timer and the number of beeps.  How could you get this wrong? (Cliche cue: you had  one job…)
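To make the mapping point concrete, here’s a toy sketch. The beep assignments are from the timer above; the ‘fix’ is simply Norman’s natural mapping:

```python
# The timer's actual, arbitrary mapping, as described above:
actual_beeps = {1: 2, 2: 3, 3: 1}  # timer number -> number of beeps

# A natural mapping, per Norman: the signal mirrors the label,
# so the user has nothing to memorize.
def natural_beeps(timer_number):
    return timer_number
```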

On to the second of today’s contestants, the iTunes interface. Now, everyone likes to bash iTunes: either it’s a bad design for what it’s doing, or it shouldn’t be trying to do too many things. I’m not going there today, I’m going off on something else.

I’ve always managed the files on the iPad through iTunes. It used to be straightforward, but they changed it. Of course. There are also other ways to do it: AirDrop & iFiles being two. And, frankly, they’re both somewhat confusing. But that’s not my concern today. The new way I use is only a slight modification of the old way, which is why I use it. And it works. But there’s a funny little hiccup…

So, there are two ways to bring up a list of things on your iPad.  For one, you select it from the device picture at the top (to the right of the forward/back arrows), and you see a list of things you can access/adjust: music, movies, etc. As you see to the left.

On the other hand (to the right), you select it from a list of devices, and you get the drop-down you see to the right. Note that the lists aren’t the same.

Wait, they’re not the same?  No, only one has “File Sharing”!  So, you have to remember which way to access the device before you can choose to add a file.  This is just silly!  Only recently have I started remembering which way works (bad design, BTW, trusting to memory), and before that I had to explore. It’s not much, just an extra click, but it’s unnecessary memory load.

The overhead isn’t much, to be clear, but it’s still irritating. Why, why would you have two different ways to access the device, and not have the same information come up?  It’s just silly!  Moreover, it violates a principle. Here, the principle is consistency (and, arguably, affordances). When you access a device, you expect to be able to manipulate the device. And you don’t expect that two different ways to get to what should be the same place would yield two different suites of information. (And don’t even get me started about the stupid inconsistencies between the mobile and web app versions of LinkedIn!)

That is, unless you’ve communicated a clear model about why the one way is different than the other. But that’s not there. It’s a seemingly arbitrary list. We operate on models, but there’s no obvious way to discriminate between these two, so the models are random. Choosing the device, either way, is supposed to access the device. That’s the affordance. Unless you convey clearly why these are different.

This holds for learning too. Interface folks argued that Gloria Gery’s Electronic Performance Support Systems were really making up for bad design. And so, too, is much training. Don argued in his  The Invisible Computer that UI should be up front in product design, because they could catch the design decisions that would make it more difficult to use. I want to argue that it’s the same with the training folks: they should be up front in product  or service design to catch decisions that will confuse the audience and require extra support costs.

Design, learning or product/service, works best when it aligns with how our brains work. If we own that knowledge, we can then lobby to apply it, and help make our organizations more successful. If we can make happier users, and lower support costs, we should. And as Kathy Sierra suggests, really it’s all about learning.

 

Designing a game

12 June 2018 by Clark

When I was a young academic in Australia, a colleague asked me if I would talk to some folks about a game. He knew that I had designed games before returning to grad school, and had subsequently done one on my thesis research. This group, the Australian Children’s Welfare Agency, had an ‘After Care’ project to assist kids  who needed to live independently. They’d spent their budget on a video, comic book, and a poster, but now realized that the kids would play games at the Care centers. I had a talented student who wanted to do a meaningful honours project, and so I agreed.

Following best principles, we talked not only to the project leaders and the counselors, but more broadly. We weren’t allowed to talk to youth ‘in care’ (for obvious reasons), but they did get us access to some recent graduates. They gave us great insights, and later they playtested the prototype for fine-tuning.

One of the lessons from this was important. The counselors told us that what these kids needed was to learn to shop and cook. While I could have made a game for that, when we talked to the kids we learned that there was more. (My claim: you can’t give me a learning objective I can’t make a game for, though I reserve the right to raise the objective in a taxonomic sense.) They said what was important were the chains. That is, you could get money while you looked for a job, but they wouldn’t hand you the money; they’d deposit it in a bank account. BUT, to get that, you needed ID. To get that, however, you needed references. And so on. So that was the critical focus.
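Those chains are essentially a prerequisite graph. A minimal sketch; the items come from the story above, but the structure and names are my own illustration:

```python
# A minimal sketch of the 'chains': a prerequisite graph. The items come
# from the story above; the structure and names are my own illustration.
prereqs = {
    "benefit money": ["bank account"],
    "bank account": ["ID"],
    "ID": ["references"],
    "references": [],
}

def steps_to(goal, prereqs):
    """List everything you must obtain, in order, to reach the goal."""
    order = []
    def visit(item):
        for need in prereqs[item]:
            visit(need)
        if item not in order:
            order.append(item)
    visit(goal)
    return order

# Discovering this ordering was the real learning objective.
print(steps_to("benefit money", prereqs))
```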

I taught my interface design students HyperCard, to have a simple language to prototype in. This meant that we had an environment that we knew games could be built in.  My student did most of the programming, under my direction.  When that wasn’t quite sufficient to finish the development, I used some grant money to hire her for the summer to finish it.

The resulting play was good, but the design was lacking (neither my student nor I were graphic designers). I ended up going with the project team leaders to get philanthropic funding to add graphics. (Which introduced bugs I had to fix.) They also had it ported to the PC, which ended up being a mistake. Their hired gun used a platform with an entirely different underlying model and wasn’t able to translate it appropriately. Ouch.


The resulting game had some specific design features:

  • It was exploratory, in that the player had to wander around and try to survive.
  • It was built upon a simple simulation engine, which supported replay.
  • There were variables, like health and hunger and sleep that would get worse over time, driving action.
  • The audience was low literacy, so we used graphics to convey variable states, interface elements, and location.
  • Success was difficult. Jobs were difficult to obtain, and better jobs were even harder. And, of course, you had to discover the chains.
  • There was coaching: if you were struggling, the game would offer you the opportunity for a hint. If you continued to struggle, eventually you’d get the hint anyway.
  • There was also a help system, where the basics were laid out.
  • There were random events, like getting (or losing) money, or having drugs or sex. (We were trying to save lives, and didn’t worry about upsetting the wowsers.)
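The simulation engine and coaching behavior described above can be sketched in a few lines. This is a minimal illustration with hypothetical names and thresholds; the original, of course, was written in HyperCard:

```python
# A minimal sketch of the simulation engine described above. All names and
# numbers are hypothetical; the original game was built in HyperCard.

class SurvivalSim:
    HINT_THRESHOLD = 3   # failed attempts before a hint is offered
    FORCE_THRESHOLD = 6  # failed attempts before the hint is shown anyway

    def __init__(self):
        # Variables that worsen over time, driving the player to act.
        self.state = {"health": 100, "hunger": 0, "fatigue": 0}
        self.failed_attempts = 0

    def tick(self):
        """Advance one turn: needs degrade, and neglect costs health."""
        self.state["hunger"] += 5
        self.state["fatigue"] += 3
        if self.state["hunger"] > 80 or self.state["fatigue"] > 80:
            self.state["health"] -= 10

    def attempt(self, succeeded):
        """Track struggle so the game can offer, then force, a hint."""
        if succeeded:
            self.failed_attempts = 0
            return None
        self.failed_attempts += 1
        if self.failed_attempts >= self.FORCE_THRESHOLD:
            return "show_hint"   # struggled too long: give the hint anyway
        if self.failed_attempts >= self.HINT_THRESHOLD:
            return "offer_hint"  # struggling: offer a hint
        return None
```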

There was more, but this characterized some of the important elements.  In reflecting upon the experience, I realized the alignment between effective education and engaging experiences that means you can, and should, make learning  hard fun.  I wrote a journal article (with my student) that captured what I will  suggest are critical realizations (still!).

They held an event to launch the entire project, including the game (and they gave me a really nice sweater, and Dana something too ;).  Tomorrow, I’ll pass on some of the subsequent outcomes.

SMEs for Design

25 April 2018 by Clark

In thinking through my design checklist, I was pondering how information comes from SMEs, and the role it plays in learning design. And it occurred to me visually, so of course I diagrammed it.

The problem with getting design guidance from SMEs is that they literally can’t tell us what they do!  The way our brains work, our expertise gets compiled away. While they  can tell us what they know (and they do!), it’s hard to get what really needs to be understood.  So we need a process.

(Diagram: mapping SME questions to ID elements.) My key is to focus on the decisions that learners will be able to make that they can’t make now. I reckon what’s going to help organizations is not what people know, but how they can apply that to problems to make better choices. And we need SMEs who can articulate that. Which isn’t all SMEs!

That  also means that we need models. Information that helps guide learners’ performance while they compile away their expertise. Conceptual  models  are the key here; causal relationships that can explain what  did  happen or predict what  will happen, so we can choose the outcomes we want. And again, not all SMEs may be able to do  this part.

There’s also other useful information SMEs can give us. For one, they can tell us where learners go wrong. Typically, those errors aren’t random, but come from bringing in the wrong model. That makes sense if you’re not fully on top of the learning. And, again, we may need more than one SME, as sometimes the theoretical expert (the one who can give us models and/or decisions) isn’t as in tune with what happens in the field, and we may need the supervisor of those performers.

Then, of course, there are the war stories. We need examples of wins (and losses).  Ideally, compelling ones (or we may have to exaggerate). They should  be (or end up) in the form of stories, to facilitate processing (our brains are wired to parse stories).  Of course, after we’re done they should refer to the models, and show the underlying thinking, but that may be our role (and if that’s hard, maybe we either have the wrong story or the wrong model).

Finally, there’s one other way experts can assist us. They’ve found this topic interesting enough to spend the years necessary to  be the experts.  Find out why they find it so fascinating!  Then of course, bake that in.

And it makes sense to gather the information from experts in this order. However, for learning, this information plays roles in different places.  To flip it around, our:

  • introductions need to manifest that intrinsic interest (what will the learners be able to do that they care about?)
  • concepts need to present those models
  • examples need to capture those stories
  • practice needs to embed the decisions, and
  • practice needs to provide opportunities to exhibit those misconceptions before they matter
  • closings may also reference that intrinsic interest, closing out the emotional experience

That’s the way I look at it.  Does this make sense to you? What am I missing?

 

 

Tools and Design

11 April 2018 by Clark

I’ve often complained about how the tools we have make it easy to do bad design. They make it easy to put PPTs and PDFs on the screen and add a quiz. And not that that’s not so, but I decided to look at it from the other direction, and I found that instructive. So here’re some thoughts on tools.

Authoring tools, in general, are oriented on a ‘page’ metaphor; they’re designed to provide a sequence of pages. The pages can contain a variety of media: text, audio, video.  And then there are special pages, the ones where you can interact.  And, of course, these interactions are the critical point for learning. It’s when you have to act, to  do, that you retrieve and apply knowledge, that learning really happens.

What’s critical is what you do. If it’s just answering knowledge questions, it’s not so good. If it’s just ‘click to see more’, it’s pretty bad. The critical element is being faced with a decision about an action to take, then applying the knowledge to discriminate between the alternatives, and making a decision. The learner has to commit! Now, if I’m complaining about the tools making it easy to do bad things, what would be good things?

That was my thinking: what would be ideal for tools to support? I reasoned that the interactions should represent things we do in the real world. Which, of course, are things like filling in forms, writing documents, filling out spreadsheets, filming things, making things. And these are all done through typical interactions like drag, drop, click, and more.

Which made me realize that the tools aren’t the problem!  Well, mostly; click to see more is still problematic.  Deciding between courses of action can be done as just a better multiple choice question, or via any common form of interaction: drag/drop, reorder, image click, etc. Of course, branching scenarios are good too, for so-called soft skills (which are increasingly the important things), but tools are supporting those as well.  The challenge  isn’t inherent in the tool design.  The challenge is in our thinking!

As someone recently commented to me, the problem isn’t the tools, it’s the mindset. If you’re thinking about information dump and knowledge test, you can do that. If you’re thinking about putting people into situations to make decisions like the ones they’ll need to make, you can do that. And, of course, provide supporting materials to be able to make those decisions.

I reckon the tool vendors are still focused on content and a quiz, but the support is there to do learning designs that will really have an impact.  We may have to be a bit creative, but the capability is on tap. It’s up to (all of) us to create design processes that focus on the important aspects.  As I’ve said before, if you get the design right, there are  lots of ways to implement it!

Evil design?

6 June 2017 by Clark

This is a rant, but it’s coupled with lessons.  

I’ve been away, and one side effect was a lack of internet bandwidth at the residence.  In the first day I’d used up a fifth of the allocation for the whole time (> 5 days)!  So, I determined to do all I could to cut my internet usage while away from the office.  The consequences of that have been heinous, and  on the principle of “it’s ok to lose, but don’t lose the lesson”, I want to share what I learned.  I don’t think it was evil, but it well could’ve been, and in other instances it might be.

So, to start, I’m an Apple fan. It started when I followed the developments at Xerox with SmallTalk and the Alto as an outgrowth of Alan Kay’s Dynabook work. Then the Apple Lisa was announced, and I knew this was the path I was interested in. I did my graduate study in a lab that was focused on usability, and my advisor was consulting to Apple, so when the Mac came out I finally justified a computer to write my PhD thesis on. And over the years, while they’ve made mistakes (canceling HyperCard), I’ve enjoyed their focus on making me more productive. So when I say that they’ve driven me to almost homicidal fury, I want you to understand how extreme that is!

I’d turned on iCloud, Apple’s cloud-based storage.  Innocently, I’d ticked the ‘desktop/documents’ syncing (don’t).  Now, with  every other such system that I know of, it’s stored locally *and* duplicated on the cloud.  That is, it’s a backup. That was my mental model.  And that model was reinforced:  I’d been able to access my files even when offline.  So, worried about the bandwidth of syncing to the cloud, I turned it off.

When I did, there was a warning that  said something to the effect of: “you’ll lose your desktop/documents”.  And, I admit, I didn’t interpret that literally (see: model, above).  I figured it would disconnect their syncing. Or I’d lose the cloud version. Because, who would actually steal the files from your hard drive, right?

Well, Apple DID!  Gone. With an option to have them transferred, but….

I turned it back on, but didn’t want to not have internet, so I turned it off again but ticked the box that said to copy the files to my hard drive.  COPY BACK MY OWN @##$%^& FILES!  (See fury, above.)   Of course, it started, and then said “finishing”.  For 5 days!  And I could see that my files weren’t coming back in any meaningful rate. But there was work  to do!

The support guy I reached had a suggestion that really didn’t work. I did try to drag my entire documents folder from the iCloud drive to my hard drive, but it said it was making the estimate of how long, and hung on that for a day and a half. Not helpful.

In the meantime, I started copying over the files I needed to do work. And I continued to generate new ones that reflected what I was working on. Which meant that the folders in the cloud, and the ones on my hard drive that I had copied over, weren’t in sync any longer. And I have a lot of folders in my documents folder. Writing, diagrams, client files, lots of important information!

I admit I made some decisions in my panic that weren’t optimal.  However, after returning I called Apple again, and they admitted that I’d have to manually copy stuff back.  This has taken hours of my time, and hours yet to go!

Lessons learned

So, there are several learnings from this. First, this is bad design. It’s frankly evil to take someone’s hard drive files after making it easy to establish the initial relationship. Now, I don’t think Apple’s intention was to hurt me this way, they just made a bad decision (I hope; an argument could be made that this was of the “lock them in and then jack them up” variety, but that’s contrary to most of their policies so I discount it). Others, however, do make these decisions (e.g. providers of internet and cable from whom you can only get a 1- or 2-year price, which will then ramp up; unless you remember to check and change, you’ll end up paying them more than you should until you notice and do something about it). Caveat emptor.

Second, models are important and can be used for or against you. We do  create models about how things work and use evidence to convince ourselves of their validity (with a bit of confirmation bias). The learning lesson is to provide good models.  The warning is to check your models when there’s a financial stake that could take advantage of them for someone else’s gain!

And the importance of models for working and performing is clear. Helping people get good models is an important boost to successful performance!  They’re not necessarily easy to find (experts don’t have access to 70% of what they do), but there are ways to develop them, and you’ll be improving your outcomes if you do.

Finally, until Apple changes their policy, if you’re a Mac and iCloud user I  strongly recommend you avoid the iCloud option to include Desktop and Documents in the cloud unless you can guarantee that you won’t have a bandwidth blockage.  I like the idea of backing my documents to the cloud, but not when I can’t turn it off without losing files. It’s a bad policy that has unexpected consequences to user expectations, and frankly violates my rights to  my data.

We now return you to our regularly scheduled blog topics.

 

Designing Microlearning

10 May 2017 by Clark 6 Comments

Yesterday, I clarified what I meant about microlearning. Earlier, I wrote about designing microlearning, but what I was really talking about was the design of spaced learning. So how should you design the type of microlearning I really feel is valuable?

To set the stage, here we’re talking about layering learning on top of performance in a context. However, it’s more than just performance support. Performance support would be providing a set of steps (in whatever form: a series of static photos, video, etc.) or supporting those steps (a checklist, lookup table, etc.).  And again, that’s a good thing, but microlearning, I contend, is more.

To make it learning, what you really need is to support developing an understanding of the rationale behind the steps, so that learners can adapt the steps to different situations. Yes, you can do this in performance support as well, but here we’re talking about models.

What (causal) models give us is a way to explain what has happened and predict what will happen.  When we make these available around performing a task, we unpack the rationale. We want to provide an understanding behind the rote steps, to support adapting the process in different situations. We also provide a basis for regenerating missing steps.

Now, we can also provide examples, e.g. how the model plays out in different contexts. If what the learner is doing now can change under certain circumstances, elaborating how the model guides performing differently in different contexts provides the ability to transfer that understanding.

The design process, then, would be to identify the model guiding the performance (e.g. why we do things in this order), which might be an interplay between structural constraints (we have to remove this screw first because…) and causal ones (this is the chemical that catalyzes the process).  We then need to determine how to represent that model.

Once we’ve identified the task and the associated models, we then need to make them available in the context. And here’s why I’m excited about augmented reality: it’s an obvious way to make the model visible. Quite simply, it can be layered on top of the task itself!  Imagine that the workings behind what you’re doing are available if you want them. That you can explore more as you wish, or not, and simply accept the magic ;).

The actual task is the practice, but I’m suggesting that providing a model explaining why it’s done this way is the minimum, and providing examples for a representative sample of other appropriate contexts lends support when it’s a richer performance.  Delivered, to be clear, in the context itself. This, I think, is what really constitutes microlearning.  So what say you?
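To make the idea concrete, here’s a minimal sketch of how the content for such a microlearning layer might be structured. All the names (`Step`, `layer_for`, the sample task) are hypothetical illustrations, not a prescribed implementation: each step carries its action, the model-based rationale behind it, and context-specific examples, and a delivery layer picks what to show alongside the performed step.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One step in a task, annotated with the model behind it."""
    action: str                                    # what the performer does
    rationale: str                                 # why (the causal/structural model)
    examples: dict = field(default_factory=dict)   # context -> how the step adapts

def layer_for(step: Step, context: str = None) -> dict:
    """Build the learning layer shown alongside a performed step:
    always the rationale, plus a context-specific example if one applies."""
    layer = {"action": step.action, "why": step.rationale}
    if context and context in step.examples:
        layer["in_this_context"] = step.examples[context]
    return layer

# Hypothetical maintenance task step
unscrew = Step(
    action="Remove the retaining screw first",
    rationale="The housing is under tension; removing the screw first prevents cracking.",
    examples={"older models": "Use the T5 driver; the screw is recessed."},
)

print(layer_for(unscrew, context="older models"))
```

The point of the structure is that the rationale travels with the step, so an AR or mobile delivery layer can surface the model on demand rather than forcing it on the performer.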
