Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

30 September 2014

Types and proportions of learning activities?

Clark @ 8:49 am

I’ve been on quite the roll of late, calling out some bad practices and calling for learning science. And it occurs to me that there could be some pushback.  So let me be clear: by and large, the types of learning that are needed are not info dump and knowledge test.  What does that mean? Let’s break it down.

First, let me suggest that what’s going to make a difference to organizations is not better fact-remembering. There are times when fact remembering is needed, such as medical vocabulary (my go-to example). When that needs to happen, tarted-up drill-and-kill (e.g. quiz show templates) is the way to do it.   Getting people to remember rote facts or arbitrary things (like part names) is very difficult. And largely unnecessary if people can look it up, e.g. the information is in the world (or can be).  There are some things that need to be known cold, e.g. emergency procedures, hence the tremendous emphasis on drills in aviation and the military. Other than that, put it in the world, not the head.  Lookup tables, info sheets, etc. are the solution.  And I’ll argue that the need for this is less than 5-10% of the time.

So what is useful?  I’ll argue that what is useful is making better decisions.  That is, the ability to explain what’s happened and react, or predict what will happen and make the right choice as a consequence.  This comes from model-based reasoning.  What sort of learning helps model-based reasoning? Two types, in a simple framework. You need to process the models to help them be comprehended, and use them in context to make decisions, with the consequences providing feedback.  Yes, there likely will be some content presentation, but it’s not everything; instead it’s the core model with examples of how it plays out in context. That is, annotated diagrams or narrated animations for the models; comic books, cartoons, or videos for the examples.  Media, not bullet points.

The processing that helps make models stick includes having learners generate products: giving them data or outcomes and having them develop explanatory models. They can produce summary charts and tables that serve as decision aids. They can create syntheses and recommendations.  This really leads to internalization and ownership, but it may be more time-consuming than worthwhile. The other approach is to have learners make predictions using the models, explaining things.  Worst case, they can answer questions about what this model implies in particular contexts.  So this is a knowledge question, but not a “is this an X or a Y”, but rather “you have to achieve Z, would you use approach X, or approach Y”.

Most importantly, you need people to use the models to make decisions like they’ll be making in the workplace.  That means scenarios and simulations.  Yes, a mini-scenario of one question is essentially a multiple choice (though better written with a context and a decision), but really things tend to be bundled up, and you at least need branching scenarios. A series of these might be enough if the task isn’t too complex, but if it’s somewhat complex, it might be worth creating a model-based simulation and giving the learners lots of goals with it (read: serious game).
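The branching structure described above can be sketched as a simple graph of decision nodes. This is a hypothetical illustration (the node names, situations, and fields are all invented, not any particular authoring tool’s format): each choice leads to a new situation, so consequences accumulate rather than triggering a right/wrong buzzer.

```python
# Hypothetical sketch: a branching scenario as a graph of decision nodes.
# All node names, situations, and choices are invented for illustration.
scenario = {
    "start": {
        "situation": "A client reports the rollout is behind schedule.",
        "choices": {
            "escalate": "blamed",      # each choice leads to a consequence node
            "diagnose": "root_cause",
        },
    },
    "blamed": {
        "situation": "The team gets defensive; you learn nothing new.",
        "choices": {},                 # a consequence, not a 'wrong answer' buzzer
    },
    "root_cause": {
        "situation": "You trace the delay to unclear requirements.",
        "choices": {},
    },
}

def play(start_key, decisions):
    """Walk the scenario graph following a learner's sequence of decisions."""
    node_key = start_key
    path = [node_key]
    for choice in decisions:
        node_key = scenario[node_key]["choices"][choice]
        path.append(node_key)
    return path

# One learner's path through the scenario:
path = play("start", ["diagnose"])  # ['start', 'root_cause']
```

A model-based simulation generalizes this: instead of hand-authored branches, the next state is computed from an underlying model, so the same engine can serve many goals.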

And, don’t forget, if it matters (and why are you bothering if it doesn’t), you need them to practice until they can’t get it wrong.  And you need to be facilitating reflection.  The alternatives to the right answer should reflect ways learners often go wrong, and address them individually. “No, that’s not correct, try again” is a really rude way to respond to learner actions.  Connect their actions to the model!
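One way to picture feedback connected to the model: a sketch (the question, options, and feedback text are all invented for illustration) where each wrong alternative names the misconception behind it and ties the response back to the model, instead of a bare “try again”.

```python
# Hypothetical sketch: alternatives keyed to misconceptions, each with
# model-based feedback. All content here is invented for illustration.
question = {
    "prompt": "Sales dipped right after the price change. What do you do first?",
    "options": {
        "a": {"correct": True,
              "feedback": "Right: the demand model predicts a short-term dip; "
                          "check elasticity before reacting."},
        "b": {"correct": False,
              "feedback": "This assumes the dip is permanent. The model says "
                          "initial dips often recover; look at the trend first."},
        "c": {"correct": False,
              "feedback": "Reversing the change immediately ignores the model's "
                          "lag effect, so you'd never learn the true elasticity."},
    },
}

def respond(choice):
    """Return (is_correct, model-connected feedback) for a learner's choice."""
    option = question["options"][choice]
    return option["correct"], option["feedback"]

is_correct, feedback = respond("b")  # wrong, but the feedback names why
```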

What this also implies is that learning is much more practice than content presentation.  Presenting content and drilling knowledge (particularly in about an 80/20 ratio) is essentially a waste of time.  Meaningful practice should be more than half the time.  And you should consider putting the practice up front and driving them to the content, as opposed to presenting the content first.  Make the task make the content meaningful.

Yes, I’m making these numbers up, but they’re a framework for thinking. You should be having lots of meaningful practice.  There’s essentially no role for bullet points or prose and simplistic quizzes, very little role for tarted-up quizzes, and lots of role for media on the content side and branching scenarios and model-driven interactions on the interaction side.  This is kind of an inverse of the tools and outputs I see.  Hence my continuing campaign for better learning.  Make sense?

24 September 2014

Better Learning in the Real World

Clark @ 8:25 am

I tout the value of learning science and good design.  And yet, I also recognize that to do it to the full extent is beyond most people’s abilities.  In my own work, I’m not resourced to do it the way I would and should do it. So how can we strike a balance?  I believe that we need to use smart heuristics instead of the full process.

I have been talking to a few different people recently who basically are resourced to do it the right way.  They talk about getting the right SMEs (e.g. with sufficient depth to develop models), using a cognitive task analysis process to get the objectives, aligning the processing activities to the type of learning objective, developing appropriate materials and rich simulations, and testing the learning and using feedback to refine the product, all before final release.  That’s great, and I laud them.  Unfortunately, the cost to get a team capable of doing this, and the time schedule to do it right, doesn’t fit the situation I’m usually in (nor the one most of you are in).  To be fair, if it really matters (e.g. lives depend on it or you’re going to sell it), you really do need to do this (as medical, aviation, and military training usually do).

But what if you’ve a team that’s not composed of PhDs in the learning sciences, your development resources are tied to the usual tools, your budgets are far more stringent, and your schedules are likewise constrained? Do you have to abandon hope?  My claim is no.

[Law of diminishing returns curve]

I believe that a smart, heuristic approach is plausible.  Using the typical ‘law of diminishing returns’ curve (and the shape of this curve is open to debate), I suggest that it’s plausible that there is a sweet spot of design processes that gives you a high amount of value for a pragmatic investment of time and resources.  Conceptually, I believe you can get good outcomes with some steps that tap into the core of learning science without following it to the letter.  Learning is a probabilistic game, overall, so we’re taking a small tradeoff in probability to meet real world constraints.

What are these steps? Instead of doing a full cognitive task analysis, we’ll do our best guess of meaningful activities before getting feedback from the SME.  We’ll switch the emphasis from knowledge test to mini- and branching-scenarios for practice tasks, or we’ll have them take information resources and use them to generate work products (charts, tables, analyses) as processing.  We’ll try to anticipate the models,  and ask for misconceptions & stories to build in.  And we’ll align pre-, in-, and post-class activities in a pragmatic way.  Finally, we’ll do a learning equivalent of heuristic evaluation, not do a full scientifically valid test, but we’ll run it by the SMEs and fix their (legitimate) complaints, then run it with some students and fix the observed flaws.

In short, what we’re doing here is approximating the full process, with some smart guesses in place of full validation.  There’s no expectation that the outcome will be as good as we’d like, but it’s going to be a lot better than throwing quizzes on content. And we can do it with a smart team that aren’t learning scientists but are informed, on a longer but still reasonable schedule.

I believe we can create transformative learning under real world constraints.  At least, I’ll claim this approach is far more justifiable than the too oft-seen approach of info dump and knowledge test. What say you?

23 September 2014

Design like a pro

Clark @ 8:20 am

In other fields of endeavor, there is a science behind the approaches.  In civil engineering, it’s the properties of materials.  In aviation, it’s aeronautical engineering.  In medicine, it’s medical science.  If you’re going to be a professional in your field, you have to know the science.  So, two questions: is there a science of learning, and is it used?  The answers appear to be yes and no.  And yet, if you’re going to be a learning designer or engineer, you should know the science and be using it.

There is a science of learning, and it’s increasingly easy to find.  That’s the premise behind the Serious eLearning Manifesto, for instance (read it, sign it, use it!).  You could read Julie Dirksen’s Design for How People Learn as a very good interpretation of the science.  The Pittsburgh Science of Learning Center is compiling research to provide guidance about learning if you want a fuller scientific treatment.  Or read Bransford et al.’s summary of the science, How People Learn, a very rich overview.  And Hess & Saxberg’s recent Breakthrough Leadership in the Digital Age: Using Learning Science to Reboot Schooling is both a call for why and some guidance on how.

Among the things we know are that rote and abstract information isn’t retained, knowledge test doesn’t mean ability to do, getting it right once doesn’t mean it’s known, the list goes on.  Yet, somehow, we see elearning tools like ‘click to learn more’ (er, less), tarted up quiz show templates to drill knowledge, easy ways to take content and add quizzes to them, and more.  We see elearning that’s arbitrary info dump and simplistic knowledge test.  Which will have a negligible impact on anything meaningful.

We’re focused on speed and cost efficiencies, not on learning outcomes, and that’s not professional.  Look, if you’re going to do design, do it right.   Anything less is really malpractice!

17 September 2014

Learning in 2024 #LRN2024

Clark @ 8:14 am

The eLearning Guild is celebrating its 10th year, and is using the opportunity to reflect on what learning will look like 10 years from now.  While I couldn’t participate in the twitter chat they held, I optimistically weighed in: “learning in 2024 will look like individualized personal mentoring via augmented reality, AI, and the network”.  However, I thought I would elaborate in line with a series of followup posts leveraging the #lrn2024 hashtag.  The twitter chat had a series of questions, so I’ll address them here (with a caveat: our learning really hasn’t changed, as our wetware hasn’t evolved in the past decade and won’t in the next; it’s our support of learning I’m referring to here):

1. How has learning changed in the last 10 years (from the perspective of the learner)?

I reckon the learner has seen a significant move to more elearning instead of an almost complete dependence on face-to-face events.  And I reckon most learners have begun to use technology in their own ways to get answers, whether via the Google, or social networks like Facebook and LinkedIn.  And I expect they’re seeing more media such as videos and animations, and may even be creating their own. I also expect that the elearning they’re seeing is not particularly good, nor improving, if not actually decreasing in quality.  I expect they’re seeing more info dump/knowledge test, more and more ‘click to learn more’, more tarted-up drill-and-kill.  For which we should apologize!

2. What is the most significant change technology has made to organizational learning in the past decade?

I reckon there are two significant changes that have happened. One is rather subtle as yet, but will be profound, and that is the ability to track more activity, mine more data, and gain more insights. The ExperienceAPI coupled with analytics is a huge opportunity.  The other is the rise of social networks.  The ability to stay more tightly coupled with colleagues, sharing information and collaborating, has really become mainstream in our lives, and is going to have a big impact on our organizations.  Working ‘out loud’, showing our work, and working together is a critical inflection point in bringing learning back into the workflow in a natural way and away from the ‘event’ model.
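For the curious, an Experience API statement is at heart just an “actor verb object” record sent to a Learning Record Store. A minimal sketch (the names and activity IDs here are made up for illustration, though the verb URI is a standard ADL one):

```python
import json

# Minimal Experience API (xAPI) statement: actor, verb, object.
# Names and activity IDs are invented; the verb URI is from ADL's registry.
statement = {
    "actor": {
        "name": "Pat Learner",
        "mbox": "mailto:pat@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/branching-scenario-1",
        "definition": {"name": {"en-US": "Branching scenario: client rollout"}},
    },
}

# Statements are serialized as JSON and POSTed to a Learning Record Store,
# where analytics can mine them for patterns across activities.
payload = json.dumps(statement)
```

The point for analytics is that the same simple triple can describe far more than course completions: reading a resource, asking a colleague, trying a simulation — activity outside the ‘event’ model becomes trackable.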

3. What are the most significant challenges facing organizational learning today?

The most significant challenge is the status quo: the belief that an information-oriented event model has any relationship to meaningful outcomes.  This plays out in so many ways: order-taking for courses, equating information with skills, being concerned with speed and quantity instead of quality of outcomes, not measuring the impact, the list goes on.   We’ve become self-deluded that an LMS and a rapid elearning tool mean you’re doing something worthwhile, when that’s profoundly wrong.  L&D needs a revolution.

4. What technologies will have the greatest impact on learning in the next decade? Why?

The short answer is mobile.  Mobile is the catalyst for change. So many other technologies go through the hype cycle: initial over-excitement, crash, and then a gradual resurgence (c.f. virtual worlds), but mobile has been resistant for the simple reason that there’s so much value proposition.  The cognitive augmentation that digital technology provides, available whenever and wherever you are, clearly has benefits, and it’s not courses!  It will naturally incorporate augmented reality with the variety of new devices we’re seeing, and be contextualized as well.  We’re seeing a richer picture of how technology can support us in being effective, and L&D can facilitate these other activities as a way to move to a more strategic and valuable role in the organization.  As above, so will new tracking and analysis tools, and social networks.  I’ll add that simulations/serious games are an opportunity that is yet to really be capitalized on.  (There are reasons I wrote those books :)

5. What new skills will professionals need to develop to support learning in the future?

As I wrote (PDF), the new skills that are necessary fall into two major categories: performance consulting and interaction facilitation.  We need to not design courses until we’ve ascertained that no other approach will work, so we need to get down to the real problems. We should hope that the answer comes from the network when it can, design performance support solutions when it can’t, and reserve courses for only when it absolutely has to be in the head. To get good outcomes from the network takes facilitation, and I think facilitation is a good model for promoting innovation, supporting coaching and mentoring, and helping individuals develop self-learning skills.  So the ability to get at the root causes of problems, choose between solutions, and measure the impact is key for the first part; understanding what skills are needed by individuals (whether performers or mentors/coaches/leaders), and how to develop them, is the key new addition for the second.

6. What will learning look like in the year 2024?

Ideally, it would look like an ‘always on’ mentoring solution, so the experience is that of someone always with you to watch your performance and provide just the right guidance to help you perform in the moment and develop you over time. Learning will be layered on to your activities, and only occasionally will require some special events but mostly will be wrapped around your life in a supportive way.  Some of this will be system-delivered, and some will come from the network, but it should feel like you’re being cared for in the most efficacious way.

In closing, I note that, unfortunately, my Revolution book and the Manifesto were both driven by a sense of frustration around the lack of meaningful change in L&D. Hopefully, they’re riding or catalyzing the needed change, but in a cynical mood I might believe that things won’t change nearly as much as I’d hope. I also remember a talk (cleverly titled: Predict Anything but the Future :) that said that the future does tend to come as an informed basis would predict, with an unexpected twist, so it’ll be interesting to discover what that twist will be.

16 September 2014

On the Road Fall 2014

Clark @ 8:05 am

Fall always seems to be a busy time, and I reckon it’s worthwhile to let you know where I’ll be in case you might be there too! Coming up are a couple of different events that you might be interested in:

September 28-30 I’ll be at the Future of Talent retreat  at the Marconi Center up the coast from San Francisco. It’s a lovely spot with a limited number of participants who will go deep on what’s coming in the Talent world. I’ll be talking up the Revolution, of course.

October 28-31 I’ll be at the eLearning Guild’s DevLearn in Las Vegas (always a great event; if you’re into elearning you should be there).  I’ll be running a Revolution workshop (I believe there are still a few spots), part of  a mobile panel, and talking about how we are going about addressing the challenges of learning design at the Wadhwani Foundation.

November 12-13 I’ll be part of the mLearnNow event in New Orleans (well, that’s what I call it, they call it LearnNow mobile blah blah blah ;).  Again, there are some slots still available.  I’m honored to be co-presenting with Sarah Gilbert and Nick Floro (with Justin Brusino pulling strings in the background), and we’re working hard to make sure it should be a really great deep dive into mlearning.  (And, New Orleans!)

There may be one more opportunity, so if anyone in Sydney wants to talk, consider Nov 21.

Hope to cross paths with you at one or more of these places!

10 September 2014

Learning Engineering

Clark @ 8:37 am

Last week I had the opportunity to attend the inaugural meeting of the Global Learning Council.  While not really global in either sense (little representation from overseas, or from segments other than higher ed), it was a chance to refresh myself in some rigor around learning sciences. And one thing that struck me was folks talking about learning engineering.

If we take the analogy from regular science and engineering, we are talking about taking the research from the learning sciences, and applying it to the design of solutions.  And this sounds like a good thing, with some caveats.  When talking about the Serious eLearning Manifesto, for example, we’re talking about principles that should be embedded in your learning design approach.

While the intention was not to provide coverage of learning science, several points emerged at one point or another as research-based outcomes to be desired. For one, the value of models in learning.  Another was, of course, the value of spacing practice. The list goes on.  The focus of the engineering, however, is different.

While it wasn’t an explicit topic of the talks, it emerged in several side conversations: the focus is on design processes and tools that increase the likelihood of creating effective learning practices.  This includes doing a suitable job of creating aligned outcomes through processes of working with SMEs, identifying misconceptions to be addressed, ensuring activities are designed that have learners appropriately processing and applying information, an appropriate spread of examples, and more.

Of course, developing an accurate course for any topic is a thorough exercise.  Which is desirable, but not always pragmatic.  While the full rigor of science would go as far as adaptive intelligent tutoring systems, the amount of work to do so can be prohibitive under pragmatic constraints.  It takes a high importance and large potential audience to do this for other than research purposes.

In other cases, we use heuristics.  Sometimes we go too far: just dumping information and adding a quiz is often seen, though that’s got little likelihood of having any impact.  Even if we do create appropriate practice, we might only have learners practice until they get it right, not until they can’t get it wrong.
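The “can’t get it wrong” criterion can be sketched as a simple mastery loop, under my own illustrative assumption that mastery means a streak of consecutive correct responses rather than a single success:

```python
# Hypothetical sketch: mastery as "can't get it wrong", modeled as requiring
# several consecutive correct responses. The streak threshold is illustrative.
def practice_to_mastery(answer_fn, streak_needed=3, max_items=100):
    """Present items until the learner answers `streak_needed` in a row correctly.

    answer_fn: callable returning True if the learner answered this item correctly.
    Returns (attempts_used, final_streak).
    """
    streak, attempts = 0, 0
    while streak < streak_needed and attempts < max_items:
        attempts += 1
        if answer_fn():
            streak += 1
        else:
            streak = 0  # one slip resets the streak: mastery means reliability
    return attempts, streak

# A learner who always answers correctly reaches mastery in exactly 3 attempts;
# "until they get it right" would have stopped after 1.
attempts, streak = practice_to_mastery(lambda: True)
```

Contrast with a stop-at-first-success rule, which declares victory on a possibly lucky guess: the streak requirement is what distinguishes “got it right once” from “can’t get it wrong”.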

Finding the balance point is an ongoing effort. I reckon that the elements of good design are a starting point, but you need processes that are manageable, repeatable, and scalable.  You need structures to help, including representations that support identifying key elements and make it difficult to ignore the important ones.  You ideally have aligned tools that make it easy to do the right things.

And if this is what Learning Engineering can be, systematically applying learning science to design, I reckon there’s also a study of learning science engineering, aligning not just the learning, but the design process, with how we think, work, and learn.  And maybe then there’s a learning architecture as well – where just as an architect designs the basic look and feel of the halls & rooms and the engineers build them – that designs the curriculum approach and the pedagogy, but the learning engineers follow through on those principles for developing courses.

Is learning engineering an alternative to instructional design?  I’m wondering if the focus on engineering rather than design (applied science, rather than art) and learning rather than instruction (outcomes, not process), is a better characterization.  What do you think?

9 September 2014

Emotional connection

Clark @ 8:38 am

I was just at my high school reunion, and despite initial doubts, I had a great time. And it made me wonder why.  These are people I haven’t seen in a long time (in some cases, for decades!).  How is it we could reconnect so easily and generate powerful emotions?

I don’t have any obvious answers.  Now, you have to understand that this was a subset of the whole class. My graduating class was around 900 folks, and only around 200 were at this event, so it’s a non-representative sample.  We had friends who brought in friends, and it consequently followed a bit of ‘degrees of separation’, so there was likely to be greater affinity.

Second, despite being a ‘suburb’ of a major metropolitan center, my hometown has a real ‘small-town’ feel, as we’re geographically isolated and had a more focused employment situation (we were a harbor town).  And we were relatively ethnically diverse, lower on the socio-economic status (this was not Beverly Hills), and consequently shared some ‘scrappy underdog’ spirit.

So what was it like?  Not just in my opinion, but in most accounts it was a great event!  People were hugging, laughing, dancing, and more.  There was sharing, and celebration or commiseration, of life’s travails.  People reconnected with friends that they’d lost contact with, and strengthened ties with those who had been less tight. We also shared thoughts for those who couldn’t join for pragmatic reasons, and memorialized those who were no longer with us.

Interestingly, this was largely organized through Facebook, which, despite not being intended as an organizing tool, sufficed to allow us to reconnect before the event through posts to the group.  People who couldn’t come shared thoughts, others talked about their experiences.  There was a lot of preparation. And perhaps because it was this select group, the sharing was very positive.  And the effort to organize was volunteer; the individuals doing it in that spirit set a tone for the rest of the event.

I wonder, though, if one of the main reasons this worked so well was the strength of the emotional connections.  The teenage years are when you make some of your first emotional connections with friends, and some of these friendships had been established even earlier (e.g. the two friends I’d reconnected with had become three musketeers in junior high, and I’d known one since kindergarten).  The emotional turbulence of puberty likely only heightened it.

We’d also shared the ups and downs of high school together, and as in other cases the relationships take advantage of the strengths of shared experiences.  We’d survived the high school experience together, and had ties through sports, clubs, or events that tightened the connections.

It’s not clear to me that this is really replicable, though I have long advocated that there are reasons to address the emotional components of events such as learning.  Helping find shared ground, and working together to achieve goals, are both elements of team building, and we should look to them when we can.  And positive spirits shown and reflected help.

High school is a tough time: bodily changes, finding one’s self, tough decisions, and more.  I suspect most of us, at least those of us with sufficient empathy to care, struggle to navigate the desire to be oneself and to be accepted.  It’s not an easy journey. The ability to successfully navigate it, and to have found others who help and share the journey, creates lifelong bonds.

A true friend, to me, is one who you can not see for years or even decades, and when you’re together again it’s like no time has passed in your ability to communicate with authenticity and, yes, passion.  I hope that you have or can find, if not at a reunion then somewhere, that true connection.

3 September 2014


Clark @ 8:03 am

As a fan of comics and animations (read: cartoons) in learning, I was pleased to see a small mention of comics in a twitter discussion (triggered by this post). When I lauded the claim, I was asked what I think of machinima, and I had to think for a bit.  My feelings are mixed, so it’s probably worth it to think them through out loud.

So, first, machinima are animations made by using characters in 3D virtual worlds or computer games.  They share the look and feel of whatever platform is used, which can range from cartoon-like to quite complex.  Similarly, their speed can range from quite slow to pretty fast.

One particularly attractive feature, which I hadn’t really thought of, is that they may be an easy way to create animation.  As Karl Kapp (professor at Bloomsburg University and clear thinker on games, virtual worlds, etc.) mentioned in the exchange, they can be great for inexpensively creating animations. And that’s a good thing, if you get the animations you want to use.

My concern has to do with the output of the animations. Many times, I find the complexity of computer graphics includes too much unnecessary detail.  And when surfing the web for some examples, I found ones where the dialog was too slow (which I’ve seen in other animation forms as well, I confess).  So I worry about matching the detail of the output to the need, despite the cost savings.

Now, as Karl also mentioned, they’re good for procedural tasks. This certainly could be true, as the extra detail would help contextualize. However, is it better than a video?  Certainly, if you can expand or contract the scale so you’re seeing it at the necessary level of detail, rather than the single fixed view that video provides. So for minute details, this could be really good!

As the original respondent suggested, it’s better to be there (e.g. in game) rather than watch, and I’d certainly agree to that, as you can negotiate some of the other issues that might be confusing.  And of course social learning adds value in and of itself.

So, the question is, when is machinima useful?  I wouldn’t want to use it just because of cost; if you’re not getting the right characteristics, it might be a false economy. If it’s producing output within a range of acceptability at a reasonable cost, or really capturing the affordances of virtual worlds, I think it makes sense.  And I’m willing to be wrong. What are your thoughts?

