Learnlets


Clark Quinn’s Learnings about Learning

Archives for 2015

It’s the process, silly!

14 January 2015 by Clark

So yesterday, I went off on some of the subtleties in elearning that are being missed.  This is tied to last week’s posts about how we’re not treating elearning seriously enough.  Part of it is in the knowledge and skills of the designers, but it’s also in the process. Or, to put it another way, we should be using steps and tools that align with the type of learning we need. And I don’t mean ADDIE, though it’s not inherently at fault.

So what do I mean?  For one, I’m a fan of Michael Allen’s Successive Approximation Model (SAM), which iterates several times (tho’ heuristically, and it could be better tied to a criterion).  Given that people are far less predictable than, say, concrete, fields like interface design have long known that testing and refinement need to be included.  ADDIE isn’t inherently linear, certainly as it has evolved, but in many ways it lends itself to becoming a one-pass process.

Another issue, to me, is to structure the format of your intermediate representations so that they make it hard to do aught but come up with useful information.  So, for instance, in recent work I’ve emphasized that a preliminary output is a competency doc that includes (among other things) the objectives (and measures), models, and common misconceptions.  This has evolved from a similar document I use in (learning) game design.
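
To make the idea concrete, here’s a minimal sketch of what such a competency doc might capture as a data structure. The field names and the sample content are my own illustration, not Quinn’s actual template:

```python
from dataclasses import dataclass, field

@dataclass
class Objective:
    behavior: str  # what the learner should be able to *do*
    measure: str   # how success is assessed

@dataclass
class CompetencyDoc:
    """Hypothetical competency doc: objectives (with measures),
    models, and common misconceptions, per the post."""
    objectives: list[Objective] = field(default_factory=list)
    models: list[str] = field(default_factory=list)
    misconceptions: list[str] = field(default_factory=list)

doc = CompetencyDoc(
    objectives=[Objective(
        behavior="Triage an incoming support ticket by severity",
        measure="Correctly routes 9 of 10 sample tickets")],
    models=["Severity = impact x urgency"],
    misconceptions=["The loudest customer has the highest severity"],
)
print(len(doc.objectives), len(doc.misconceptions))  # → 1 1
```

The value of a structure like this is that it’s hard to fill in without surfacing models and misconceptions — which is exactly the constraint on intermediate representations the paragraph argues for.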

You then need to capture your initial learning flow. This is what Dick &  Carey call your instructional strategy, but to me it’s the overall experience of the learner, including addressing the anxieties learners may feel, raising their interest and motivation, and systematically building their confidence.  The anxieties or emotional barriers to learning may well be worth capturing at the same time as the competencies, it occurs to me (learning out loud ;).

It also helps if your tools don’t interfere with your goals.  It should be easy to create animations that help illustrate models (for the concept) and tell stories (for examples).  These can be any media tools, of course. The most important tools are the ones you use to create meaningful practice. These should allow you to create mini-, linear-, and branching-scenarios (at least).  They should have alternative feedback for every wrong answer. And they should support contextualizing the practice activity. Note that this does  not mean tarted up drill and kill with gratuitous ‘themes’ (race cars, game shows).  It means having learners make meaningful decisions and act on them in ways like they’d act in the real world (click on buttons for tech, choose dialog alternatives for interpersonal interactions, drag tools to a workbench or adjust controls for lab stuff, etc).
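
As a sketch of the “alternative feedback for every wrong answer” point: a mini-scenario where each distractor carries its own feedback, tied to the misconception it represents. The scenario content and names here are invented for illustration:

```python
# Hypothetical mini-scenario: each wrong option carries feedback
# addressing the specific misconception behind it.
scenario = {
    "context": "A customer reports the app crashes on login.",
    "prompt": "What do you do first?",
    "options": {
        "a": ("Ask for the exact error message and steps to reproduce",
              None),  # correct answer: no remedial feedback needed
        "b": ("Restart the server immediately",
              "Acting before diagnosing: you may mask the cause and lose evidence."),
        "c": ("Escalate to engineering right away",
              "Escalating without basic triage wastes specialist time."),
    },
}

def give_feedback(choice: str) -> str:
    text, feedback = scenario["options"][choice]
    return "Correct: " + text if feedback is None else feedback

print(give_feedback("b"))
```

Storing the feedback alongside each distractor, rather than one generic “wrong, try again” message, is what makes the individualized responses cheap to author.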

Putting in place processes that only use formal learning when it makes sense,  and then doing it right when it does make sense, is key to putting L&D on a path to relevancy.   Cranking out courses on demand, focusing on measures like cost/butt/seat, adding rote knowledge quizzes to SME knowledge dumps, etc are instead continuing down the garden path to oblivion. Are you ready to get scientific and strategic about your learning  design?

The subtleties

13 January 2015 by Clark

I recently opined that good learning design was complex, really perhaps close to rocket science.  And I suggested that a consequent problem was that the nuances are subtle.  It occurs to me that perhaps discussing some example problems will help make this point more clear.

Without being exhaustive, there are several consistent problems I see in the elearning content I review:

  • The wrong focus. Seriously, the outcomes for the class aren’t meaningful!  They are about information or knowledge, not skill.  Which leads to no meaningful change in behavior, and more importantly, in outcomes. I don’t want to learn about X, I want to learn how to  do  X!
  • Lack of motivating introductions.  People are expected to give a hoot about this information, but no one helps them understand why it’s important.  Learners should be assisted to viscerally ‘get’ why this is important, and helped to see how it connects to the rest of the world.  Instead we get some boring drone about how this is really important.  Connect it to the world and let me see the context!
  • Information focused or arbitrary content presentations. To get the type of flexible problem-solving organizations need, people need mental models about why  and how  to do it this way, not just the rote steps.  Yet too often I see arbitrary lists of information accompanied  by a rote knowledge test.  As if that’s gonna stick.
  • A lack of examples, or trivial ones.  Examples need to show a context, the barriers, and how the content model provides guidance about how to succeed (and when it won’t).  Instead we get fluffy stories that neither connect to the model nor show its application to the context.  Which means it’s not going to support transfer (and if you don’t know what I’m talking about, you’re not ready to be doing design)!
  • Meaningless and insufficient practice.  Instead of asking learners to make decisions like they will be making in the workplace (and this is my hint for the  first  thing to focus on fixing), we ask rote knowledge questions. Which isn’t going to make a bit of difference.
  • Nonsensical alternatives to the right answer.  I regularly ask of audiences “how many of you have ever taken a quiz where the alternatives to the right answer are so silly or dumb that you didn’t need to know anything to pass?”  And  everyone raises their hand.  What possible benefit does that have?  It insults the learner’s intelligence, it wastes their time, and it has no impact on learning.
  • Undistinguished feedback. Even if you do have an alternative that’s aligned with a misconception, it seems like there’s an industry-wide conspiracy to ensure that there’s only one response for all the wrong answers. If you’ve discriminated meaningful differences to the right answer based upon how they go wrong, you should be addressing them individually.

The list goes on.  Further, any one of these can severely impact the learning outcomes, and I typically see  all of these!

These are really  just the flip side of the elements of good design I’ve touted in previous posts (such as this series).  I mean, when I look at most elearning content, it’s like the authors have no idea how we really learn, how our brains work.  Would you design a tire for a car without knowing how one works?  Would you design a cover for a computer without knowing what it looks like?  Yet it appears that’s what we’re doing in most elearning. And it’s time to put a stop to it.  As a first step, have a look at the Serious eLearning Manifesto, specifically the 22 design principles.

Let me be clear, this is just the surface.  Again, learning engineering is complex stuff.  We’ve hardly touched on engagement, spacing, and more.    This may seem like a lot, but this is really the boiled-down version!  If it’s too much, you’re in the wrong job.

Shiny objects and real impact

9 January 2015 by Clark

Yesterday I went off about how learning design should be done right and it’s not easy.  In a conversation two days ago, I was talking to a group that was  supporting several initiatives in adaptive learning, and I wondered if this was a good idea.

Adaptive learning is desirable.  If learners come from different initial abilities, learn at different rates, and have different availability, the learning should adapt.  It should skip things you already know, work at your pace, and provide extra practice if the learning experience is extended.  (And, BTW, I’m not talking learning styles.)  And this is worthwhile, if the content you are starting with is good.  But even then, is it really necessary?  To explain, here’s an analogy:
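
One way to picture the kind of adaptation described — skip what’s known, give extra practice until a criterion is met — is a toy mastery loop. This is entirely my own illustration, not any particular adaptive product’s algorithm:

```python
# Toy mastery-based adaptation: skip items the learner already knows,
# then keep practicing the misses until an accuracy criterion is met.
def adaptive_session(items, known, answer, criterion=0.8, max_rounds=10):
    """items: item ids; known: already-mastered ids;
    answer(item) -> bool simulates the learner's response."""
    queue = [i for i in items if i not in known]  # skip what's known
    for _ in range(max_rounds):
        if not queue:
            return True
        results = {i: answer(i) for i in queue}
        if sum(results.values()) / len(queue) >= criterion:
            return True
        # extra practice only on the items missed this round
        queue = [i for i, ok in results.items() if not ok]
    return False

# Simulated learner who has mastered item 4 and gets items 1-3 right:
print(adaptive_session([1, 2, 3, 4], known={4}, answer=lambda i: i <= 3))  # → True
```

Even this toy version shows the point of the post: the loop only adds value if the items and feedback inside it are well designed in the first place.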

I have heard it said that the innovations behind the latest drugs are, in many cases, unnecessary, and the extra costs (and profits for the drug companies) could be avoided. The claim is that the new drugs aren’t any more effective than the existing treatments if those were used properly.  The point being that people don’t take the drugs as prescribed (being irregular, missing doses, not continuing past the point they feel better, etc.), and if they did, the new drugs wouldn’t offer an advantage.  (As a side note, it would appear that focusing on improving patient drug-taking protocols would be a sound strategy, such as using a mobile app.)  This isn’t true in all cases, but even in some it makes a point.

The analogy here is that using all the fancy capabilities: tarted up templates for simple questions, 3D virtual worlds, even adaptive learning, might not be needed if we did better learning design!  Now, that’s not to say we couldn’t add value with using the right technology at the right points, but as I’ve quipped in the past: if you get the design right, there are  lots of ways to implement it.  And, as a corollary, if you don’t get the design right, it doesn’t matter how you implement it.

We do need to work on improving our learning design, first, rather than worrying about the latest shiny objects. Don’t get me wrong, I  love  the shiny objects, but that’s with the assumption that we’re getting the basics right.  That was my assumption ’til I hit the real world and found out what’s happening. So let’s please get the basics right, and then worry about leveraging the technology on  top of a strong foundation.

Maybe it is rocket science!

8 January 2015 by Clark

As I’ve been working with the Foundation over the past 6 months I’ve had the occasion to review a wide variety of elearning, more specifically in the vocational and education space, but my experience mirrors that from the corporate space: most of it isn’t very good.  I realize that’s a harsh pronouncement, but I fear that it’s all too true; most of the elearning I see will have very little impact.  And I’m becoming ever more convinced that what I’ve quipped  in the past is true:

Quality design is hard to distinguish from well-produced but under-designed content.

And here’s the thing: I’m beginning to think that this is not just a problem with the vendors, tools, etc., but that it’s more fundamental.  Let me elaborate.

There’s a continual problem of bad elearning, and yet I hear people lauding certain examples, awards are granted, tools are touted, and processes promoted.  Yet what I see really isn’t that good. Sure, there are exceptions, but that’s the problem, they’re exceptions!  And while I (and others, including the instigators of the Serious eLearning Manifesto) try to raise the bar, it seems to be an uphill fight.

Good learning design is rigorous. There’s significant effort just in getting the right objectives, e.g. finding the right SME, working with them and not taking what they say verbatim, etc.  Then there’s working to establish the right model and communicate it, making meaningful practice, and using media correctly.  All while successfully fending off the forces of fable (learning styles, generations, etc.).

So, when it comes to the standard  tradeoff    –  fast, cheap, or good, pick two – we’re ignoring ‘good’.  And  I think a fundamental problem is  that everyone ‘knows’  what learning is, and they’re not being astute consumers.  If it looks good, presents content, has some interaction, and some assessment, it’s learning, right?  NOT!  But stakeholders don’t know, we don’t worry enough about quality in our metrics (quantity per time is not a quality metric), and we don’t invest enough in learning.

I’m reminded of a thesis that says medicos consciously reengineered their status in society.  They went from being thought of as ‘quacks’ and ‘sawbones’ to an almost reverential status today by deliberately making the path to becoming a doctor quite rigorous.  I’m tempted to suggest that we need to do the same thing.

Good learning design is complex.  People don’t have predictable properties the way concrete does.  Understanding the necessary distinctions to do the right things is complex.  Executing the processes to successfully design, refine, and deliver a learning experience that leads to an outcome is a complicated engineering endeavor.  Maybe we do have to treat it like rocket science.

Creating learning should be considered a highly valuable outcome: you are helping people achieve their goals.  But if you really aren’t, you’re perpetrating malpractice!  I’m getting stroppy, I realize, but it’s only because I care and I’m concerned.  We have  got to raise our game, and I’m seriously concerned with the perception of our work, our own knowledge, and our associated processes.

If you agree, (and if you don’t, please do let me know in the comments),  here’s my very serious question because I’m running out of ideas: how do we get awareness of the nuances of good learning design out there?

 
