Learnlets


Clark Quinn’s Learnings about Learning

Working wiser?

12 January 2016 by Clark

Noodling:   I’ve been thinking about Working Smarter, a topic I took up over four years ago.  And while I still think there’s too little talk about it, I wondered also about pushing it further.  I also talked in the past about an interest in wisdom, and what that would mean for learning.  So what happens when they come together?

Working smarter, of course, means recognizing how we  really think, work, and learn, and aligning our processes and tools accordingly. That includes recognizing that we  do use external representations, and ensuring that the ones we want in the world are there, and we also support people being able to create their own. It means tapping into the power of people, and creating ways for them to get together and support one another through both communication and collaboration.  And, of course, it means using Serious learning design.

But what, then, does working  ‘wiser’ mean?  I like Sternberg’s model of wisdom, as it’s actionable (other models are not quite specific enough).  It talks about taking into account several levels of  caring about others, several time scales, several levels of action, and all influenced by an awareness of values.  So how do we work that into practices and tools?

Well, pragmatically, we can provide rubrics for evaluating ideas that include consideration of others inside and outside your circle of acquaintance, short- and long-term timeframes, and the impacts on existing states of affairs, ultimately focusing on the common good. So we can have job aids that provide guidance, or bake it into our templates. These, too, can be shown in collaboration tools, so the outputs will reflect these values. But there’s another approach.

But, at core, it’s really about what you value, and that becomes about culture. What values does the organization care about? Do employees know the organization’s ultimate goal and role? Is it about short-term shareholder return, or some contribution to society? I’m reminded of the old statements about whether you’re selling candles or providing light. And do employees know how what they do fits in?

It’s pretty clear that the values implicit in steps to make workplaces more effective are really about making workplaces more humane, that is, respecting our inherent nature. And movements like this, which provide real meaning, ongoing support, freedom of approach, and time for reflection, are to me about working not just smarter but also wiser.

We can work smarter with tools and practices, but I think we can work better, wiser, with an enlightened approach to who we are working with and how we work to deliver real value not only to customers but to society. And, moreover, I think that doing so would yield better organizational outcomes.

Ok, so have I gone off the edge of the hazy cosmic jive?  I am a native Californian, after all, but I’m thinking that this makes real business sense.  I think we can do this, and that the outputs will be better too, in all respects.  No one says it’d be easy, but my suspicion is it’d be worthwhile.

2015 Reflections

31 December 2015 by Clark

It’s the end of the year, and given that I’m an advocate for the benefits of reflection, I suppose I’d better practice what I preach. So what am I thinking I learned as a consequence of this past year? Several things come to mind (and I reserve the right for more things to percolate out, but those will be my 2016 posts, right? :):

  1. The Revolution is real: the evidence mounts that there is a need for change in L&D, and when those steps are taken, good things happen. The latest Towards Maturity report shows that the steps taken by their top-performing organizations are very much about aligning with business, focusing on performance, and more. Similarly, Chief Learning Officer’s Learning Elite survey points to making links across the organization and measuring outcomes. The data supports the principled observation.
  2. The barriers are real: there is continuing resistance to the most obvious changes. 70:20:10, for instance, continues to get challenged on nonsensical issues like the exactness of the numbers!? The fact that a Learning Management System is not a strategy still doesn’t seem to have penetrated. And so we’re seeing other business units taking on the needs for performance support, social media, and ongoing learning. Which is bad news for L&D, I reckon.
  3. Learning design is rocket science (or should be): the perpetration of so much bad elearning continues to be demonstrated in exhibition halls around the globe. It’s demonstrably true that tarted-up information presentation and a knowledge test isn’t going to lead to meaningful behavior change, but we’re still thrusting people into positions without background and giving them tools oriented at content presentation. Somehow we need to do better. Still pushing the Serious eLearning Manifesto.
  4. Mobile is well on its way: we’re seeing mobile becoming mainstream, and this is a good thing. While we still hear the drum beating to put courses on a phone, we’re also seeing that call being ignored. We’re instead seeing real needs being met, and new opportunities being explored. There’s still a ways to go, but here’s to a continuing awareness of good mobile design.
  5. Gamification is still being confounded: people aren’t really making clear conceptual differences around games. We’re still seeing linear scenarios confounded with branching, we’re seeing gamification confounded with serious games, and more.  Some of these are because the concepts are complex, and some because of vested interests.
  6. Games seem to be reemerging: while interest in games became mainstream circa 2010, there hasn’t been a real sea change in their use. However, it quietly feels like folks are beginning to get their minds around Immersive Learning Simulations, aka Serious Games. There’s still a ways to go in really understanding the critical design elements, but the tools are getting better, making them more accessible in at least some formats.
  7. Design is becoming a ‘thing’: all the hype around Design Thinking is leading to a greater concern about design, and this is a good thing. Unfortunately there will probably be some hype to see through before clarity emerges, but at least the overall awareness-raising is a good step.
  8. Learning to learn seems to have emerged: years ago the late, great Jay Cross and I and some colleagues put together the Meta-Learning Lab, and it was way too early (like so much I touch :p). However, his passing has raised the term again, and there’s much more resonance. I don’t think it’s necessarily a thing yet, but there’s far greater resonance than we had at the time.
  9. Systems are coming: I’ve been arguing for the underpinnings, e.g. content systems. And I’m (finally) beginning to see more interest in that, and other components are advancing as well: data (e.g. the great work Ellen Wagner and team have been doing on Predictive Analytics), algorithms (all the new adaptive learning systems), etc. I’m keen to think about what tags are necessary to support the ability to leverage open educational resources as part of such systems.
  10. Greater inputs into learning: we’ve seen learning folks get interested in behavior change, habits, and more.  I’m thinking we’re going to go further. Areas I’m interested in include myth and ritual, powerful shapers of culture and behavior. And we’re drawing on greater inputs into the processes as well (see 7, above).  I hope this continues, as part of learning to learn is to look to related areas and models.

Obviously, these are things I care about.  I’m fortunate to be able to work in a field that I enjoy and believe has real potential to contribute.  And just fair warning, I’m working on a few areas  in several ways.  You’ll see more about learning design and the future of work sometime in the near future. And rather than generally agitate, I’m putting together two specific programs – one on (e)learning quality and one on L&D strategy – that are intended to be comprehensive approaches.  Stay tuned.

That’s my short list; I’m sure more will emerge. In the meantime, I hope you had a great 2015, and that your 2016 is your best year yet.

Scenarios and Conceptual Clarity

10 December 2015 by Clark

I recently came across an article ostensibly about branching scenarios, but somehow the discussion largely missed the point.  Ok, so I can be a stickler for conceptual clarity, but I think it’s important to distinguish between different types of scenarios and their relative strengths and weaknesses.

So in my book  Engaging Learning, I was looking to talk about how to make engaging learning experiences.  I was pushing games (and still do) and how to design them, but I also wanted to acknowledge the various approximations thereto.  So in it, I characterized the differences between what I called mini-scenarios, linear scenarios, and contingent scenarios (this latter is what’s traditionally called branching scenarios).  These are all approximations to full games, with various tradeoffs.

At core, let me be clear, is the need to put learners in situations where they need to make decisions. The goal is to have those decisions closely mimic the decisions they need to make after the learning experience. There’s a context (aka the story setting), and then a specific situation triggers the need to make a decision. And we can deliver this in a number of ways. The ideal is a simulation-driven (aka model-driven or engine-driven) experience. There’s a model of the world underneath that calculates the outcomes of your action and determines whether you’ve yet achieved success (or failure), or generates a new opportunity to act. We can (and should) tune this into a serious game. This gives us deep experience, but the model-building is challenging, and there are shortcuts.

In mini-scenarios, you put the learner in a setting with a situation that precipitates a decision. Just one, and then there’s feedback. You could use video, a graphic novel format, or just prose, but the game problem is a setting and a situation, leading to choices. Similarly, you could have them respond by selecting option A, B, or C, or pointing to the right answer, or whatever. It stops there. Which is the weakness, because in the real world the consequences are typically more complex than this, and it’s nice if the learning experience reflects that reality. Still, it’s better than a knowledge test. Really, these are just better-written multiple-choice questions, but that’s at least a start!

Linear scenarios are a bit more complex. There is a series of game problems in the same context, but whatever the player chooses, the right decision is ultimately made, leading to the next problem. You use some sort of sleight of hand, such as “a supervisor catches the mistake and rectifies it, informing you…”, to make it all ok. Or, you can terminate and have to restart if you make the wrong decision at any point. These are a step up in terms of showing the more complex consequences, but are a bit unrealistic. There’s some learning power here, but not as much as is possible. I have used them as a sort of multiple mini-scenarios with content in between, where the same story is used for the next choice, which at least made a nice flow. Cathy Moore suggests these are valuable for novices, and I think they’re also useful if everyone needs to receive the same ‘test’ in some accreditation environment to be fair and balanced (though in a competency-based world they’d be better off with the full game).

Then there’s the full branching scenario (which I called contingent scenarios in the book, because the consequences and even the new decisions are contingent on your choices). That is, you see different opportunities depending on your choice. If you make one decision, the subsequent ones are different. If you don’t shut down the network right away, for instance, the consequences are different (perhaps a breach) than if you do (you get the VP mad). This, of course, is much more like the real world. The only difference between this and a serious game is that the contingencies in the world are hard-wired in the branches, not captured in a separate model (rules and variables). This is easier, but it gets tough to track if you have too many branches. And the lack of an engine limits the replay and the ability to have randomness. Of course, you can make several of these.
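To make the structural distinction concrete, here’s a minimal sketch (in Python, with purely hypothetical names, riffing on the network example above) of a contingent/branching scenario as hard-wired decision nodes; a serious game would instead compute the next situation from a separate model of rules and variables. A mini-scenario is just a single node, and a linear scenario is a chain where every choice funnels into the same next node.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One decision point: a situation in the story context, plus choices."""
    situation: str
    choices: dict = field(default_factory=dict)  # choice label -> next Node
    feedback: str = ""

# A tiny contingent (branching) scenario: the consequences, and even the
# decisions you subsequently face, depend on earlier choices.
breach = Node("Traffic spikes overnight; data is exfiltrated.", feedback="Containment failed.")
angry_vp = Node("The network is down and the VP is fuming.", feedback="Contained, at a political cost.")
start = Node(
    "You spot anomalous traffic on the network. What do you do?",
    choices={
        "Shut down the network now": angry_vp,
        "Keep monitoring for a while": breach,
    },
)

def run(node):
    """Walk the hard-wired branches; a serious game engine would instead
    generate the next situation from rules and variables."""
    while node.choices:
        print(node.situation)
        labels = list(node.choices)
        for i, label in enumerate(labels, 1):
            print(f"  {i}. {label}")
        node = node.choices[labels[int(input("> ")) - 1]]
    print(node.situation, node.feedback)

# run(start)  # uncomment for an interactive walk-through
```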

So the problem I had with the article that triggered this post is that their generic model looked like a mini-scenario, and nowhere did they show the full concept of a real branching scenario. Further, their example was really a linear scenario, not a branching scenario. And I realize this may seem like debating how many angels can dance on the head of a pin, but I think it’s important to make distinctions when they affect the learning outcome, so you can more clearly make a choice that reflects the goal you’re trying to achieve.

To their credit, that they were pushing for contextualized decision-making at all is a major win, so I don’t want to quibble too much. Moving our learning practice/assessment/activity to more contextualized performance is a good thing. Still, I hope this elaboration is useful in getting to more nuanced solutions. Learning design really can’t be treated as a paint-by-numbers exercise; you really should know what you’re doing!

Useful cognitive overhead

2 December 2015 by Clark

As I’ve reported before, I started mind mapping keynotes not as a way of filling the blog, but to listen better. That is, without the extra requirement of processing the talk into a structure, my mind was (too) free to go wandering. I only posted the maps because I thought I should do something with them! And I’ve realized there’s another way I leverage cognitive overhead.

As background, I diagram.  It’s one of the methods I use to reflect.  A famous cognitive science article talked about how diagrams are representations that map conceptual relationships to spatial ones, to use the power of our visual system to facilitate comprehension. And that’s what I do, take something I’m trying to understand, some new thoughts I have, and get concrete about them.  If I can map them out, I feel like I’ve got my mind around them.

I use them to communicate, too. You’ve seen them here in my blog (or will if you browse around a bit), and in my presentations.  Naturally, they’re a large part of my workshops too, and even reports and papers.  As I believe models composed of concepts are powerful tools for understanding the world, I naturally want to convey them to support people in applying them themselves.

Now, what I realized (as I was diagramming) is that the way I diagram actually leverages cognitive overhead in a productive way. I use a diagramming tool (OmniGraffle, if you must know: expensive, but it works well for me) to create them, and there’s some overhead in getting the diagram components sized, and located, and connected, and colored, and… And in so doing, I’m allowing time for my thoughts to coalesce.

It doesn’t work with paper, because paper is hard to edit, and what comes out usually isn’t right at first: I move things around, break them up, rethink the elements. I can use a whiteboard, but usually to communicate a diagram already conceived; sometimes I can capture new thinking there, since a whiteboard is at least easy to edit. Flip charts, which aren’t, are consequently more problematic.

So I was unconsciously leveraging the affordances of the tool to help allow my thinking to ferment/percolate/incubate (pick your metaphor).  Another similar approach is to seed a question you want to answer or a thought you want to ponder before some activity like driving, showering, jogging, or the like.  Our unconscious brain works powerfully in the background, given the right fodder.  So hopefully this gives you some mental fodder too.

Templates and tools

1 December 2015 by Clark

A colleague who I like and respect recently tweeted: “I can’t be the only L&D person who shudders when I hear the word ‘template'”, and I felt vulnerable because I’ve recently been talking about templates.   To be fair, I have a different meaning than most of what’s called a ‘template’, so I thought perhaps I should explain.

Let’s be clear: what’s typically referred to as a template is usually a simple screen type for a rapid authoring tool.  That is, it allows you to easily fill in the information and generate a particular type of interaction: drag-and-drop, multiple-choice, etc.  And this can be useful when you’ve got well-designed activities but want to easily develop them.  But they’re not a substitute for good design, and can make it easy to do bad design too. Worse are those skins that add gratuitous visual elements (e.g. a ‘racing’ theme) to a series of questions in some deluded view that such window dressing has any impact on anything.

So what am I talking about? I’m talking about templates that help reinforce the depth of learning science around each element. Templates for introductions that prompt for the emotional opener, the drill-down from the larger context, and so on; templates for practice that prompt for contextualization, meaningfulness to the learner, differentiated response options, specific feedback, and the like. This could be done in other ways, such as a checklist, but putting it into the place where you’re developing strikes me as a better driver ;). Particularly if it’s embedded in the house ‘style’, so that the look and feel is tightly coupled to the learner experience.

Atul Gawande, in his brilliant The Checklist Manifesto, points out that there are gaps in our mental processing that mean we can skip steps and forget to coordinate. Whether the guidelines are in a template or a process tool like a checklist, it helps to have cognitive facilitation. So what I’m talking about is not a template that says how the result should look, but what it should contain. There are ways to combine intrinsically motivating openings with initial practice, for instance.
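As a rough illustration of the idea (a minimal sketch in Python; the field names and prompts are hypothetical, not from any actual authoring tool), such a template captures what an element should contain and can double as a checklist over a draft:

```python
# A content-first practice template: prompts for what the element should
# contain, not how it should look. Illustrative only.
PRACTICE_TEMPLATE = {
    "context": "What realistic setting frames the decision?",
    "decision": "What choice must the learner make?",
    "options": "Are the wrong options drawn from common misconceptions?",
    "feedback": "Does each option get specific, consequence-based feedback?",
}

def gaps(element, template):
    """Checklist-style report: which prompts has the designer left unanswered?"""
    return [prompt for key, prompt in template.items() if not element.get(key)]

draft = {"context": "A customer calls, irate about a billing error."}
print(gaps(draft, PRACTICE_TEMPLATE))  # -> the three unanswered prompts
```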

Templates don’t have to stifle creativity; they can serve to improve quality instead. As big a fan as I am of creativity, I also recognize that we can end up with less-than-optimal outcomes if there isn’t some rigor in our approach. (Systematic creativity is not an oxymoron!) In fact, systematicity in the creative process can help optimize the outcomes. So however you want to scaffold quality and creativity, whether through templates or other tools, I do implore you to put in place support to ensure the best outcomes for you and your audience.

CERTainly room for improvement

24 November 2015 by Clark

As mentioned before, I’ve become a member of my local Community Emergency Response Team (CERT), since in the case of disaster the official first responders (police, fire, and paramedics) will be overwhelmed. And it’s a good group, with a lot of excellent effort in processes and tools as well as drills. Still, of course, there’s room for improvement. I encountered one such opportunity at our last meeting, and I think it’s an interesting case study.

So one of the things you’re supposed to do in conducting search and rescue is to go from building to building, assessing damage and looking for people to help. And one of the useful things to do is to mark the status of the search and the outcomes, so no one wastes effort on an already-explored building. While the marking is covered in training and there are support tools to help you remember, ideally it’d be memorable, so that you can regenerate the information and don’t have to look it up.

The design for the marking is pretty clear: you first make a diagonal slash when you start investigating a building, and then you make a crossing slash  when you’ve made your assessment. And  specific information is to be recorded in each quarter of the resulting X: left, right, top, and bottom.  (Note that the US standard set by FEMA doesn’t correspond to the international standard from the  International Search & Rescue Advisory Group, interestingly).

However, when we brought it up in a recent meeting (and they’re very good about revisiting things that quickly fade from memory), it was obvious that most people couldn’t recall what goes where. And when I heard what the standard was, I realized it didn’t have a memorable structure.  So, here are the four things to record:

  • the group who goes in
  • when the group completes
  • what hazards may exist
  • and how many people and what condition they’re in*

So how would you map these to the quadrants? And in one sense it doesn’t matter, as long as there’s a sensible rationale behind the mapping. One sign that there’s not? You can’t remember what goes where.

Our local team leader was able to recall the order: left – group, top – completion, right – hazards, and bottom – people. However, this seems to me to be less than memorable, so let me explain.
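For reference, that recalled mapping as a quick sketch (illustrative only; consult the official FEMA CERT materials for the authoritative marking standard):

```python
# The X-code quadrants as recalled above; illustrative, not authoritative.
X_CODE = {
    "left": "group/team that went in",
    "top": "when the search was completed",
    "right": "hazards found",
    "bottom": "number of people and their condition",
}
```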

To me, wherever you put ‘going in’ (left or top), ‘coming out’ ought to be opposite. And given our natural flow, the group going in makes sense on the left, and coming out ought to go on the right. In – out. Then it’s relatively arbitrary where hazards and people go. I’d make a case that top-of-mind should be the hazards found, to warn others, while the people are the bottom line (see what I did there?). I could easily make a case for the reverse, but either would be a mnemonic to support remembering. Instead, as far as I can tell, it’s completely arbitrary. Now, if it’s not arbitrary and there is a rationale, it’d help to share that!

The point being, to help people remember things that are in some sense arbitrary, make a story that renders them memorable. Sure, I can look it up, assuming the lookup book they handed out stays in the pocket of my special backpack. (And I’m likely to remember now, because of all this additional processing, but that’s not what happens in the training.) However, making it regenerable from some structure gives you a much better chance of having it to hand. Either a model or a story is better than arbitrary, and one’s possible with a rewrite, but as it is, there’s neither.

So there’s a lesson in design to be had, I reckon, and I hope you’ll put it to use.

* (black or dead, red or needing immediate treatment for life-threatening issues, yellow or needing non-urgent treatment, and green or ok)

Learning and frameworks

13 November 2015 by Clark

There’s recently been a spate of attacks on 70:20:10 and moving beyond courses, and I have to admit I just don’t get it.  So I thought it’s time to set out why I think these approaches make sense.

Let’s start with what we know about how we learn. Learning is action and reflection. Instruction (education, training) is designed action and guided reflection. That’s why, by the way, an information dump and knowledge test isn’t a learning solution. People need to actively apply the information.

And it can’t follow an ‘event’ model, as learning is spaced out over time. Our brains can only accommodate so much (read: very little) learning at any one time. There needs to be ongoing facilitation after a formal learning experience – coaching over time and stretch assignments – to help cement and accelerate the learning.

Now, this can be something L&D does formally, but at some point the formal has to let go (not least for pragmatic reasons) and it becomes the responsibility of the individual and the community. It shifts from formal coaching to informal mentoring, personal exploration, and feedback from colleagues and fellow practitioners. It’s impractical for L&D to take on this full responsibility; instead, its role becomes one of facilitating mentoring, communication, and collaboration.

That’s where the 70:20:10 framework comes in.  Leaving that mentoring and collaboration to chance is a mistake, because it’s demonstrably the case that people don’t necessarily have good self-learning skills.  And if we foster self-learning skills, we can accelerate the learning outcomes for the organization. Addressing the skills and culture for learning, personally and collectively, is a valuable contribution that L&D should seize. And it’s not about controlling it all, but making an environment that’s conducive, and facilitating the component skills.

Further, some people seem to get their knickers in a twist about the numbers, and I’m not sure why that is. People seem comfortable with the Pareto Principle (aka the 80/20 rule), for instance, and it’s the same thing. In both cases it’s not the exact numbers that matter, but the concept. For the Pareto Principle, it’s recognizing that some large fraction of outcomes comes from a small fraction of inputs. For the 70:20:10 framework, it’s recognizing that much of what you apply as your expertise comes from things other than courses. And tired old clichés about how you “wouldn’t want a doctor who didn’t have training” don’t reflect that you’d also not want a doctor who didn’t continue learning through internships and practice. It’s not denying the 10, it’s augmenting it.

And this is really what Modern Workplace Learning is about: looking beyond the course. The course is one important, but ultimately small, piece of being a practitioner, and organizations can no longer afford to ignore the rest of the learning picture. Of course, there’s also the whole innovation side, and performance support for when learning doesn’t have to happen, which L&D should also facilitate (cue the L&D Revolution), but getting the learning right by looking at the bigger picture of how we really learn is critical.

I welcome debate on this, but pragmatically, if you think about how you learned what you do, you should recognize that much of it came from other than courses. Beyond Education, the other two E’s have been characterized as Exposure and Experience: doing the task in the company of others (learning socially), and learning from the outcomes of actually applying the knowledge in context and making mistakes. That’s real learning, and the recognition that it should not be left to chance is how these frameworks help raise awareness and provide an opportunity for L&D to become more relevant to the organization. And that, I strongly believe, is a valuable outcome. So, what do you think?

Levels of Design

11 November 2015 by Clark

In a recent conversation, we were talking about the Kirkpatrick model, and a colleague had an interesting perspective that hadn’t really struck me overtly. Kirkpatrick is widely (not widely enough, and wrongly) used as an evaluation tool, but he talked about using it as a design tool, and that perspective made clear for me a problem with our approaches.

So, there’s a lot of debate about the Kirkpatrick model, whether it helps or hinders the movement towards good learning. I think it’s misrepresented (including by its own progenitors, though they’re working on that ;), and while I’m open to new tools I think it does a nice job of framing a fairly simple but important idea. The goal is to start with the end in mind.

And the evidence is that it’s not being used well. The most common implementation of the model is level 1, which isn’t of use (the correlation between learner reaction and actual impact is .09, essentially zero within rounding error). Level 2 drops to a third of organizations, and it falls from there. And this is broken.

The point, and this is emphasized by the ‘design’ perspective, is that you are supposed to start with level 4 and work back. What’s the measurable indicator in the organization that isn’t up to snuff, and what behavior (level 3) would likely impact that? And how do we change that behavior (level 2)? And here’s where it can go beyond training: that intervention might be a job aid, or access to a network (which hasn’t featured much in the promotion of the model).

To be fair, the proponents do argue you should be starting at level 4, but with the numbering (which Don admits he might have got wrong) and the emphasis on evaluation, it doesn’t hit you up front. Using it as a design tool, however, would emphasize the point.

So here’s to thinking of learning design as working backwards from a problem, not forwards from a request. And, of course, to better learning design overall.

Under the ‘Content’ Cover

10 November 2015 by Clark

Too often I see instructional design training and tools, in addition to talking about ‘objectives’ and ‘assessment’ (which I tend to call ‘practice’, for hopefully obvious reasons), talking about ‘content’. And I think that simplification is a path to bad learning design. It fails to emphasize the nuances, and that’s a bad thing.

The elements of content should be an introduction to the learning experience, a presentation of the concept(s), examples that illustrate applying the concept in context, and a closing of the experience. Each of these has component parts that, when addressed, contribute to the likelihood of a good learning outcome. Ignoring them, however, is likely to lead to a lack of impact.

The problem is that our cognitive architecture is prone to mistakes in execution. We’re bad at remembering bits and pieces, and we naturally can skip steps. That’s why we create external tools like checklists and templates to support good design. So if we’re not scaffolding here, we run the risk of creating content that may be well written, but isn’t well designed.

And we see this all too often: elearning that’s content-heavy and learning-light. It may have good production values, with a consistent look and feel, elegant prose, and great images, but it also tends to have too much rote information, too little in the way of concepts, sparse and unilluminating examples, and no real emotional ‘hook’.

Instead, we could be using checklists or templates to ensure we get the right elements. We could have support for designing introductions, concepts, examples, and closings (and better support for good practice too ;). It doesn’t have to be built into an authoring tool, but it certainly should be manifest in the development tools for interim representations.

There are other reasons to be a bit more granular, such as flexible content that supports repurposing for delivery in the moment, and adaptive learning, but overall the real reason is good design. It doesn’t have to be granular, but it does have to explicitly consider the elements that contribute to learning, and get those right. Right?

A Competent Competency Process

4 November 2015 by Clark

In the process of looking at ways to improve the design of courses, the starting point is good objectives. And as a consequence, I’ve been enthused about the notion of competencies as a way to put the focus on what people do, not what they know. So how do we do this systematically, reliably, and repeatably?

Let’s be clear: there are times we need knowledge-level objectives. In medicine, or any other field where responses need to be quick and accurate, we need a very constrained vocabulary, so drilling in the exact meanings of words is valuable, as an example. Though ideally that’s coupled with using that language to set context or make decisions. So “we know it’s the right medial collateral ligament, prep for the surgery” could serve as a context, or we could have a choice to operate on the left or right ventricle as a decision point. As Van Merriënboer’s 4 Component Instructional Design points out, we need to separate out the knowledge from the complex problems we apply it to. Still, I suggest that what’s likely to make a difference to individuals and organizations is the ability to make better decisions, not recite rote knowledge.

So how do we get competencies when we want them? The problem, as I’ve talked about before, is that SMEs don’t have access to 70% of what they actually do; it’s compiled away. We then need good processes, so I’ve talked to a couple of educational institutions doing competencies to see what could be learned. And it’s clear that while there’s no turnkey approach, what’s emerging is a process with some specific elements.

One thing is that if you’re trying to cover a whole college-level course, you’ve got to break it up. Break the top level down into a handful of competencies. Then you continue to take each of those apart, perhaps for another level, ’til you have a reasonable scope. This is heuristic, of course, but with a focus on ‘do’, you have a good likelihood of getting there.

One of the things I’ve heard across various entities trying to get meaningful objectives is working with more than one SME. If you can get several, you have a better chance of triangulating on the right outcomes and objectives. They may well disagree about the knowledge, but if you manage the process right (emphasize ‘do’, lather, rinse, repeat), you should be able to get them to converge. It may take some education, and you may have to let them get the

Not just any SMEs will do. Two things are really valuable: on-the-ground experience to know what needs to be done (and what doesn’t), and the ability to identify and articulate the models that guide the performance. Some instructors, for instance, can teach to a text but aren’t truly masters of the content, nor experienced practitioners. Multiple SMEs help, but the better the SME, the better the outcome.

I believe you want to ensure that you’re getting both the right things and all the things. I’ve recommended to a client triangulating not just with SMEs, but with practitioners (or, rather, the managers of the roles the learners will be engaged in), and any other reliable stakeholders. The point is to get input from the practice as well as the theory, identifying the models that support proper behavior, and the misconceptions that underpin where people go wrong.

Once you have a clear idea of the things people need to be able to do, you can then identify the language for the competencies. I’m not a fan of Bloom’s (unwieldy, hard to reliably apply), but I am a fan of Mager-style definitions (action, context, metric).

After this is done, you can identify the knowledge needed, and perhaps create objectives for that, but to me the focus is on the ‘do’, the competencies. This is very much aligned with an activity-based learning model, whereby you immediately design the activities that align with the competencies before you decide on the content.
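As a minimal sketch of that flow (hypothetical field names and example content, not a standard schema), a Mager-style competency can be captured as data and used to draft the aligned activity before any content decisions:

```python
from dataclasses import dataclass

@dataclass
class Competency:
    """Mager-style objective: an observable action, the context it's
    performed in, and the metric for 'good enough'. Illustrative only."""
    action: str
    context: str
    metric: str

triage = Competency(
    action="prioritize incoming support tickets",
    context="given a queue of mixed-severity tickets",
    metric="all critical tickets flagged within five minutes",
)

# Activity-based design: draft the practice activity straight from the
# competency, before deciding what content is needed to support it.
print(f"Scenario: {triage.context}, the learner must {triage.action}; "
      f"pass if {triage.metric}.")
```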

So, this is what I’m inferring. There would be good tools and templates you could design to go with this, identifying competencies and misconceptions, and at the same time also getting stories and motivations. (An exercise left for the reader. ;) The overall goal, however, of getting meaningful objectives is key to good learning design. Any nuances I’m missing?
