Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

21 June 2016

eLearning Process Survey results!

Clark @ 8:05 am

So, a few weeks ago I ran a survey asking about elearning processes*, and it’s time to look at the results (I’ve closed it).  eLearning process is something I’m suggesting is ripe for change, and I thought it appropriate to see what people thought.  Some caveats: it’s self-selected, it’s limited (23 respondents), and it’s arguably readers of this blog or folks pointed to it by others, so it’s a select group.  With those caveats, what did we see?

The first question looked at how we align our efforts with business needs. The alternatives were ‘providing what’s asked for’ (e.g. taking orders), ‘getting from SMEs’, and ‘using a process’.  These are clearly in ascending order of appropriateness. Order taking doesn’t allow for seeing whether a course is needed, and SMEs can’t tell you what they actually do. Creating a process to ensure a course is the best solution (as opposed to a job aid or going to the network), and then getting the real performance needs (by triangulating), is optimal.  What we see, however, is that only a bit more than 20% are actually getting this right from the get-go, and almost 80% are failing at one of the two points along the way.

The second question asked how the assessments were aligned with the need. The options ranged from ‘developing from good sources’, through ‘we test knowledge’ and ‘they have to get it right’, to ‘sufficient spaced contextualized practice’, e.g. ’til they can’t get it wrong.  The clear need, if we’re bothering to develop learning, is to ensure that they can do it at the end.  Doing it ‘until they get it right’ isn’t sufficient to develop a new ability to do. And we see more than 40% are focusing on using the existing content! Now, the alternatives were not totally orthogonal (e.g. you could have the first response and any of the others), so interpreting this is somewhat problematic.  I assumed people would know to choose the lowest option in the list if they could, and I don’t know that (a flaw in the survey design).  Still, it’s pleasing to see that almost 30% are doing sufficient practice, but that’s only a wee bit ahead of those who say they’re just testing knowledge!  So it’s still a concern.

The third question looked at the feedback provided. The options included ‘right or wrong’, ‘provides the right answer’, and ‘indication for each wrong answer’.  I’ve been railing against one piece of feedback for all the wrong answers for years now, and it’s important. The wrong-answer alternatives shouldn’t be random, but instead should represent the ways learners typically get it wrong (based upon misconceptions).  It’s nice (and I admit somewhat surprising) that almost 40% are actually providing feedback that addresses each wrong answer. That’s a very positive outcome.  However, that it’s not even half is still kind of concerning.

The fourth question dug into the issue of examples.  There are nuanced details about examples, and here I was picking up on a few of these. The options ranged from ‘having’, through ‘coming from SMEs’ and ‘illustrate the concept and context’, to ‘showing the underlying thinking’.  Again, obviously the latter is the best.  It turns out that experts don’t typically show the underlying cognition, and yet it’s really valuable for the learning. We see that we are getting the link of concept to context clear, and together with showing thinking we’re nabbing roughly 70% of the examples, so that’s a positive sign.

The fifth question asked about concepts.  Concepts are (or should be) the models that guide performance in the contexts seen across examples and practice (and the basis for the aforementioned feedback). The alternatives ranged from ‘using good content’ and ‘working with SMEs’ to ‘determining the underlying model’.  It’s the latter that is indicated as the basis for making better decisions going forward.  (I suggest that what will help orgs is not the ability to receive knowledge, but to make better decisions.)  And we see over 30% going to those models, but still a high percentage taking the presentations from the SMEs. Which isn’t totally inappropriate, as they do have access to what they learned. I’m somewhat concerned overall that much of ID seems to talk about practice and ‘content’, lumping intros and concepts and examples and closing all together into the latter (without suitable differentiation), so this was better than expected.

The sixth question tapped into the emotional side of learning: engagement. The options were ‘giving learners what they need’, ‘a good look’, ‘gamification’, and ‘tapping into intrinsic motivation’.  I’ve been a big proponent of intrinsic motivation (heck, I effectively wrote a book on it ;), and not gamification. I think an appealing visual design helps, but just ‘giving them what they need’ isn’t sufficient for novices: they need the emotional component too. For practitioners, of course, not so much.  I’m pleased that no one talked about gamification (yet the success of companies that sell ‘tart up’ templates suggests that this isn’t the norm). Still, more than a third are going to intrinsic motivation, which is heartening. There’s a ways to go, but some folks are hearing the message.

The last question got into measurement.  We should be evaluating what we do. Ideally, we start from a business metric we need to address and work backward. That’s typically not seen. The questions basically covered the Kirkpatrick model, working from ‘smile sheets’, through ‘testing after the learning experience’ and ‘checking changes in workplace behavior’, to ‘tuning until impacting org metrics’.  I was pleasantly surprised to see over a third doing the latter; my results don’t parallel what I’ve seen elsewhere. I’m dismayed, of course, that over 20% are still just asking learners, which we know in general isn’t of particular use.

This was a set of questions deliberately digging into areas where I think elearning falls down, and (at least with this group of respondents) it’s not as good as I’d hoped, but not as bad as I feared.  Still, I’d suggest there’s room for improvement, given the caveats above about who the likely respondents are.  It’s not a representative sample, I’d suspect.

Clearly, there are ways to do well, but it’s not trivial. I’m arguing that we can do good elearning without breaking the bank, but it requires an understanding of the inflection points of the design process where small changes can yield important results. And it requires an understanding of the deeper elements to develop the necessary tools and support. I have been working with several organizations to make these improvements, but it’s well past time to get serious about learning, and start having a real impact.

So over to you: do you see this as a realistic assessment of where we are? And do you take the overall results as indicating a healthy industry, or an industry that needs to go beyond haphazard approaches and start practicing Learning Engineering?

*And, let me say, thanks very much to those respondents who bothered to take the time to respond.  It was quick, but still, the effort was completely appreciated.

 

16 June 2016

John Black #ICELW Keynote Mindmap

Clark @ 8:06 am

Professor John Black of Columbia University gave a fascinating talk about how games can leverage “embodied cognition” to achieve deeper learning. The notion is that through physical enaction you get richer activation, supporting deeper learning.  It obviously triggered lots of thoughts (mine are the ones in the bubbles :). Lots to ponder.

1 June 2016

The Quinnovation eLearning Process Survey

Clark @ 8:08 am

In the interests of understanding where the market is, I’m looking to benchmark where organizations are. Sure, there are other data points, but I have my own questions I would like to get answered. So I’ve created a quick survey of seven questions (thanks, SurveyMonkey) I’d love for you to fill out.

My interest is in finding out about the processes used in designing and delivering elearning. While I’ve my own impressions, I thought it would be nice to bolster them with data. So here we are.
 
And I’m not asking what org you’re working for, because I’d appreciate honest answers. Please feel free to respond and circulate to those you know in other organizations (but try to only have one person from your org fill it out).

This is an experiment (hey, that’s what innovation is all about ;), so we’ll see how it goes. I’ll report out what happens when responses start petering out (or when I hit my 100 response cap ;). I welcome your comments or questions as well. Thanks!


31 May 2016

Where do comics/cartoons fit?

Clark @ 8:07 am

I’ve regularly suggested that you want to use the right media for the task, and there are specific cognitive properties of media that help determine the answer.  One important dimension is context versus concept, and another is dynamic versus static.  But I realized I needed to extend it.

To start with, concepts are relationships, such as diagrams (as this one is!).  Whereas context is the actual setting. For one, you want to abstract away; for the other, you want to be concrete.  Similarly, some relationships, and settings, are static, whereas others are dynamic. Obviously, here we’re talking static relationships, but if we wanted to illustrate some chemical process, we might need an animation.

So, for contextualization, we can use a photo capturing the real setting. Unless, of course, it’s dynamic and we need a video. Similarly, if we need conceptual relationships, we use a diagram, unless again if it’s dynamic and we need an animation. (By animation, I mean a dynamic diagram, not a cartoon, just as a video is a dynamic recording of a live setting, not a cartoon.)

Audio’s a funny case, in that it can be static as text or dynamic as audio. The needs change depending on where you need your attention represented: you can’t (and shouldn’t) put static text on a dynamic visual, and you can’t use video if the attention can’t be visually distracted. Audio is valuable when you can’t take your eyes away (e.g. the audio guidance on a GPS, “now turn left”).

Note that there are halfway points. You can capture a sequence of static images in lieu of a video (think narrated slide show).  Similarly, a diagram could be shown in multiple states.  And this is all ignoring interactives.  But there’s a particular place I want to go, hinted above.

I was reflecting that comics (static) and cartoons (dynamic) are instances that don’t naturally fall out of my characterization, and realized I needed a way to consider them.  I posit that comics/cartoons are halfway between context and concept.  They strip away unnecessary context, so that it’s easier to see what’s important, and have the potential (via, say, thought balloons) to annotate the world with the concept.  So they’re semi-conceptual, and semi-contextual.  I’ve regularly argued that we don’t use them often enough for a number of reasons, and it’s important to think where they fit.

This is my proposal: that they help focus attention on important elements without unnecessary details and the ability to elaborate (as well as the rest of the benefits: familiarity, bandwidth, etc).  So, what do you say?  Does this fit and make sense?  Are you going to use more comics/graphic novels/cartoons?

26 May 2016

Heading in the right direction

Clark @ 8:06 am

Most of our educational approaches – K12, Higher Ed, and organizational – are fundamentally wrong.  What I see in schools, classrooms, and corporations are information presentation and knowledge testing.  Which isn’t bad in and of itself, except that it won’t lead to new abilities to do!  And this bothers me.

As a consequence, I took a stand trying to create a curriculum that wasn’t about content, but instead about action.  I elaborated it in some subsequent posts, trying to make clear that the activities could be connected and social, so that you could be developing something over time, and also that the output of the activity produced products – both the work and thoughts on the work – that serve as a portfolio.

I just was reading and saw some lovely synergistic thoughts that inspire me that there’s hope. For one, Paul Tough apparently wrote a book on the non-cognitive aspects of successful learners, How Children Succeed, and then followed it up with Helping Children Succeed, which digs into the ignored ‘how’.  His point is that elements like ‘grit’ that have been (rightly) touted aren’t developed in the same way cognitive skills are, and yet they can be developed. I haven’t read his book (yet), but in exploring an interview with him, I found out about Expeditionary Learning.

And what Expeditionary Learning has, I’m happy to discover, is an approach based upon deeply immersive projects that integrate curricula and require the learning traits recognized as important.  Tough’s point is that the environment matters, and here are schools that are restructured to be learning environments with learning cultures.  They’re social, facilitated, with meaningful goals, and real challenges. This is about learning, not testing.  “A teacher’s primary task is to help students overcome their fears and discover they can do more than they think they can.”

And I similarly came across an article by Benjamin Riley, who’s been pilloried as the poster-child against personalization.  And he embraces that from a particular stance, that learning should be personalized by teachers, not technology.  He goes further, talking about having teachers understand learning science, becoming learning engineers.  He also emphasizes social aspects.

Both of these approaches indicate a shift from content regurgitation to meaningful social action, in ways that reflect what’s known about how we think, work, and learn. It’s way past time, but it doesn’t mean we shouldn’t keep striving to do better. I’ll argue that in higher ed and in organizations, we should also become more aware of learning science, and on meaningful activity.  I encourage you to read the short interview and article, and think about where you see leverage to improve learning.  I’m happy to help!

4 May 2016

Learning in Context

Clark @ 8:09 am

In a recent guest post, I wrote about the importance of context in learning. And for a featured session at the upcoming FocusOn Learning event, I’ll be talking about performance support in context.  But there was a recent question about how you’d do it in a particular environment, and that got me thinking about the necessary requirements.

As context (ahem), there are already context-sensitive systems. I helped lead the design of one where a complex device was instrumented and consequently there were many indicators about the current status of the device. This trend is increasing.  And there are tools to build context-sensitive help systems around enterprise software, whether purchased or home-grown. And there are also context-sensitive systems that track your location on mobile and allow you to use that to trigger a variety of actions.

Now, to be clear, these are already in use for performance support, but how do we take advantage of them for learning? Moreover, can we go beyond ‘location’-specific learning?  I think we can, if we rethink.

So first, we obviously can use those same systems to deliver specific learning. We can have a rich model of learning around a system, such as a detailed competency map, and then with a rich profile of the learner we can know what they know and don’t. Then, when they’re at a point where there’s a gap between their knowledge and the desired state, we can trigger some additional information. It’s in context, at a ‘teachable moment’, so it doesn’t necessarily have to be assessed.
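To make the gap-triggering idea concrete, here’s a minimal sketch. The competency map, resource catalog, and profile structure are all invented for illustration; no particular system works exactly this way:

```python
# Hypothetical data: which competencies each task context exercises,
# and a micro-learning resource per competency.
COMPETENCY_MAP = {
    "calibrate_sensor": {"zeroing", "drift_check"},
    "replace_filter": {"lockout", "seal_inspection"},
}

RESOURCES = {
    "drift_check": "video: why drift matters and how to spot it",
    "seal_inspection": "job aid: three-point seal inspection",
}

def teachable_moments(context, learner_profile):
    """Return resources for competencies the task needs but the learner lacks."""
    needed = COMPETENCY_MAP.get(context, set())
    gaps = needed - learner_profile["mastered"]
    return [RESOURCES[c] for c in gaps if c in RESOURCES]

# A learner who has mastered zeroing but not drift checking gets
# the drift resource surfaced at the moment of need.
profile = {"mastered": {"zeroing", "lockout"}}
print(teachable_moments("calibrate_sensor", profile))
```

The point of the sketch is only that the trigger is a set difference between what the context demands and what the profile records, delivered in the moment rather than assessed.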

This would be on top of performance support, typically, as they’re still learning so we don’t want to risk a mistake. Or we could have a little chance to try it out and get it wrong that doesn’t actually get executed, and then give them feedback and the right answer to perform.  We’d have to be clear, however, about why learning is needed in addition to the right answer: is this something that really needs to be learned?

I want to go a wee bit further, though: can we build it around what the learner is doing?  How could we know?  Besides increasingly complex sensor logic, we can use when they are: what’s on their calendar?  If it’s tagged appropriately, we can know at least what they’re supposed to be doing.  And we can develop not only specific system skills, but more general business skills: negotiation, running meetings, problem-solving/trouble-shooting, design, and more.
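As a toy illustration of the calendar idea: assuming events carry a simple skill tag, and a just-in-time prompt exists per skill (both invented here), the lookup is straightforward:

```python
# Hypothetical prompts keyed by the skill tag on a calendar event.
SKILL_PROMPTS = {
    "negotiation": "Before you start: what's your walk-away point?",
    "meeting": "Tip: state the decision you need in the first minute.",
}

def current_prompt(calendar_events, now):
    """If a tagged event is underway at `now`, return a just-in-time prompt."""
    for event in calendar_events:
        if event["start"] <= now < event["end"]:
            return SKILL_PROMPTS.get(event.get("tag"))
    return None

# During a 9-10 negotiation, the negotiation prompt surfaces.
events = [{"start": 9, "end": 10, "tag": "negotiation"}]
print(current_prompt(events, 9.5))
```

In practice the event structure, tagging discipline, and delivery channel would all need real design; the sketch just shows that the calendar supplies the context the sensors can’t.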

The point is that our learners are in contexts all the time.  Rather than take them away to learn, can we develop learning that wraps around what they’re doing? Increasingly we can, and in richer and richer ways. We can tap into the situational motivation to accomplish the task in the moment, and the existing parameters, to make ordinary tasks into learning opportunities. And that more ubiquitous, continuous development is more naturally matched to how we learn.

26 April 2016

Learning in context

Clark @ 8:10 am

In preparation for the upcoming FocusOn Learning Conference, where I’ll be running a workshop about cognitive science for L&D, not just for learning but also for mobile and performance support, I was thinking about how context can be leveraged to provide more optimal learning and performance.  Naturally, I had to diagram it, so let me talk through it, and you let me know what you think.

What we tend to do, as a default, is to take people away from work, provide the learning resources away from the context, then create a context to practice in. There are coaching resources, but not necessarily the performance resources.  (And I’m not even mentioning the typical lack of sufficient practice.) And this makes sense when the consequences of making a mistake on the task are irreversible and costly, e.g. medicine, transportation.  But that’s not as often as we think. And there’s an alternative.

We can wrap the learning around the context. Our individual is in the world, and performing the task. There can be coaching (particularly at the start, and then gradually removed as the individual moves to acceptable competence). There are also performance resources – job aids, checklists, etc – in the environment. There also can be learning resources, so the individual can continue to self-develop, particularly in the increasingly likely situation that the task has some ambiguity or novelty in it. Of course, that only works if we have a learner capable of self learning (hint hint).

The problems with always taking people away from their jobs are multiple:

  • it is costly to interrupt their performance
  • it can be costly to create the artificial context
  • the learning has a lower likelihood to make it back to the workplace

Our brains don’t learn in an event model, they learn in little bits over time. It’s more natural, more effective, to dribble the learning out at the moment of need, the learnable moment.  We have the capability, now, to be more aware of the learner, to deliver support in the moment, and develop learners over time. The way their brains actually learn.  And we should be doing this.  It’s more effective as well as more efficient.  It requires moving out of our comfort zone; we know the classroom, we know training.  However, we now also know that the effectiveness of classroom training can be very limited.

We have the ability to start making learning effective as well as efficient. Shouldn’t we do so?

20 April 2016

Deeper Learning Reading List

Clark @ 8:10 am

So, for my last post, I had the Revolution Reading List, and it occurred to me that I’ve been reading a bit about deeper learning design as well, so I thought I’d offer some pointers here too.

The starting point would be Julie Dirksen’s Design For How People Learn (already in its 2nd edition). It’s a very good interpretation of learning research applied to design, and very readable.

A new book that’s very good is Make It Stick, by Peter Brown, Henry Roediger III, and Mark McDaniel, the first being a writer who’s worked with the two scientists to turn learning research into 10 principles.

And let me mention two Ruth Clark books. One with Dick Mayer from UCSB, e-Learning and the Science of Instruction, that focuses on the use of media.  A second with Frank Nguyen and the wise John Sweller, Efficiency in Learning, focuses on cognitive load (which has many implications, including some overlap with the first).

Patti Schank has come out with a concise compilation of research called The Science of Learning that’s available to ATD members. Short and focused with her usual rigor.  If you’re not an ATD member, you can read her blog posts that contributed (click ‘View All’).

Dorian Peters’s book on Interface Design for Learning also has some good learning principles as well as interface design guidance.  It’s not the same for learning as for doing.

Of course, a classic is a compilation of research by a blue-ribbon team led by John Bransford, How People Learn (online or downloadable).  Voluminous, but pretty much state of the art.

Another classic is the Cognitive Apprenticeship model of Allen Collins & John Seely Brown. A holistic model abstracted across some seminal work, and quite readable.

The Science of Learning Center has an academic integration of research to instruction theory by Ken Koedinger, et al, The Knowledge-Learning-Instruction Framework, that’s freely available as a PDF.

I’d be remiss if I don’t point out the Serious eLearning Manifesto, which has 22 research principles underneath the 8 values that differentiate serious elearning from typical versions.  If you buy in, please sign on!

And, of course, I can point you to my own series for Learnnovators on Deeper ID.

So there you go with some good material to get you going. We need to do better at elearning, treating it with the importance it deserves.  These don’t necessarily tell you how to redevelop your learning design processes, but you know who can help you with that.  What’s on your list?

6 April 2016

A complex look at task assignments

Clark @ 8:09 am

I was thinking (one morning at 4AM, when I was wishing I was asleep) about designing assignment structures that matched my activity-based learning model.  And a model emerged that I managed to recall when I finally did get up.  I’ve been workshopping it a bit since, tuning some details. No claim that it’s there yet, by the way.

And I’ll be the first to acknowledge that it’s complex, as the diagram represents, but let me tease it apart for you and see if it makes sense. I’m trying to integrate meaningful tasks, meta-learning, and collaboration.  And there are remaining issues, but let’s get to the model first.

So, it starts by assigning the learners a task to create an artefact. (Spelling intended to convey that it’s not a typical artifact, but instead a created object for learning purposes.) It could be a presentation, a video, a document, or what have you.  The learner is also supposed to annotate their rationale for the resulting design as well.  And, at least initially, there’s a guide to principles for creating an artefact of this type.  There could even be a model presentation.

The instructor then reviews these outputs, and assigns the student several others to review.  Here it’s represented as 2 others, but it could be 4. The point is that the group size is the constraining factor.
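One simple way to keep such an assignment balanced is a rotation, so every learner reviews the same number of peers and every artefact receives the same number of reviews. This sketch is just one possible scheme, not anything prescribed by the model:

```python
def assign_reviews(learners, k=2):
    """Each learner reviews the next k learners in a fixed rotation.

    Every learner gives exactly k reviews and every artefact
    receives exactly k reviews, with no self-review.
    """
    n = len(learners)
    return {
        learner: [learners[(i + offset) % n] for offset in range(1, k + 1)]
        for i, learner in enumerate(learners)
    }

print(assign_reviews(["ana", "ben", "caro", "dev"], k=2))
# e.g. ana reviews ben and caro, ben reviews caro and dev, and so on
```

A rotation is the simplest balanced scheme; in practice one might also want to shuffle the learner order, or group by similarity as the post suggests for the later collaboration step.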

And, again at least initially, there’s a rubric for evaluating the artefacts to support the learner. There could even be a video of a model evaluation. The learner writes reviews of the two artefacts, and annotates the underlying thinking that accompanies and emerges.  And the instructor reviews the reviews, and provides feedback.

Then, the learner joins with other learners to create a joint output, intended to be better than each individual submission.  Initially, at least, the learners will likely be grouped with others that are similar.  This step might seem counterintuitive: while ultimately the assignments will be to widely different artefacts, initially the assignment is lighter to allow time to come to grips with the actual process of collaborating (again with a guide, at least initially). Finally, the final artefacts are evaluated, perhaps even shared with all.

Several points to make about this.  As indicated, the support is gradually faded. While another task might use another artefact, so the guides and rubrics will change, the working-together guide can first move to higher and higher levels (e.g. starting with “everyone contributes to the plan”, and ultimately getting to “look to ensure that all are being heard”) and then be gradually removed. And the assignment to different groups goes from alike to as widely disparate as possible. And the tasks should eventually get back to the same type of artefact, developing those 21 C skills about different representations and ways of working.  The model is designed more for a long-term learning experience than a one-off event model (which we should be avoiding anyway).

The artefacts and the notes are evidence for the instructor to look at the learner’s understanding and find a basis to understand not only their domain knowledge (and gaps), but also their understanding of the 21st Century Skills (e.g. the artefact-creation process, and working and researching and…), and their learning-to-learn skills. Moreover, if collaborative tools are used for the co-generation of the final artefact, there are traces of the contribution of each learner to serve as further evidence.

Of course, this could continue. If it’s a complex artefact (such as a product design, not just a presentation), there could be several revisions.  This is just a core structure.  And note that this is not for every assignment. This is a major project; around or in conjunction with it, smaller things like formative assessment of component skills and presentation of models may occur.

What emerges is that the learners are learning about the meta-cognitive aspects of artefact design, through the guides. They are also meta-learning in their reflections (which may also be scaffolded). And, of course, the overall approach is designed to get the valuable cognitive processing necessary to learning.

There are some unresolved issues here.  For one, it could appear to be heavy load on the instructor. It’s essentially impossible to auto-mark the artefacts, though the peer review could remove some of the load, requiring only oversight. For another, it’s hard to fit into a particular time-frame. So, for instance, this could take more than a week if you give a few days for each section.  Finally, there’s the issue of assessing individual understanding.

I think this represents an integration of a wide spread of desirable features in a learning experience. It’s a model to shoot for, though it’s likely that not all elements will initially be integrated. And, as yet, there’s no LMS that’s going to track the artefact creation across courses and support all aspects of this.  It’s a first draft, and I welcome feedback!

 

23 March 2016

Activity-Based Learning

Clark @ 8:12 am

On a recent conversation with some Up to All of Us colleagues, I was reminded about my ‘reimagining learning‘ model. The conversation was about fractals and learning, and how most tools (e.g. the LMS) don’t reflect the conversational nature of learning.  And I was thinking again about how we need to shift our thinking, and how we can reframe it.

I’d pointed one colleague to Diana Laurillard’s model of conversational learning, as it does reflect a more iterative model of learning with ongoing cycle of action and reflection. And it occurred to me that I hadn’t conveyed what the learner’s experience with the activity curriculum would look like. It’s implicit, but not explicit.

Of course, it’s a series of activities (as opposed to a series of content), but it’s about the product of those activities.  The learner (alone or together) creates a response to a challenge, perhaps accessing relevant content as part of the process, and additionally annotates the thinking behind it.

This is then viewed by peers and/or a mentor, who provide feedback to the learner. As a nuance, there should be guidance for that feedback, so that it explicitly represents the concept(s) that should guide the performance. The subsequent activity could be to revise the product, or move along to something else.

The point being that the learner is engaged in a meaningful assignment (the activity should be contextualized), and actively reflecting. The subsequent activity, as the Laurillard model suggests, should reflect what the learner’s actions have demonstrated.

It’s very much the social cognition benefits I’ve talked about before, in creating and then getting feedback on that representation.  The learner’s creating and reflecting, and that provides a rich basis for understanding where they are at.

Again, my purpose here is to help make it clear that a curriculum properly should be about doing, not knowing.  And this is why I believe that there must be people in the loop. And while much of that burden might be placed on the other learners (if you have a synchronous cohort model), or even on the learner with guidance on generating their own feedback, with rubrics for evaluation, you still benefit from oversight in case the understanding gets off track.

We can do a lot to improve asynchronous learning, but we should not neglect social when we can take advantage of it. So, are you wanting to improve your learning?
