Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

27 December 2017

Pernicious problems

Clark @ 8:05 AM

I’m using a standard for organizational learning quality as part of another task. Why, or for whom, doesn’t matter. What does matter is that there are two items in their standard which indicate we still haven’t overcome some pernicious problems. And we need to!

So, for the first one, this is in their standard for developing learning solutions:

Uses blended models that appeal to a variety of learning styles.

Do you see the problem here? Learning styles are debunked! There’s no meaningful and valid instrument to measure them, and no evidence that adapting instruction to them is of any use. Appealing to them is a waste of time and effort. Design for the learning instead! Yet here is a standards body lending legitimacy to the myth.

The second one is also problematic, in their standard for evaluation:

Reports typical L&D metrics such as Kirkpatrick levels, experimental models, pre- and post-tests and utility analyses.

This one’s a little harder to see. Think about it, though: pre- and post-test measures aren’t good measures. What you’re measuring is a delta, and the problem is that you would expect a delta; almost any intervention produces one, so it doesn’t really tell you anything. And if post-training performance isn’t up to scratch, the size of the gain is beside the point. What you want instead is to confirm that learners reach an objectively set level of performance. Can they now perform? Or how many can? Doing pre/post is like norm-referenced assessment (e.g. grading on a curve) when you should be doing criterion-referenced assessment.
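To make the distinction concrete, here’s a minimal sketch with entirely made-up scores: the pre/post delta looks healthy even when not a single learner clears an objectively set performance bar.

```python
# Illustrative sketch with hypothetical data: a positive pre/post delta
# versus a criterion-referenced check. All numbers are invented.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical assessment scores for one cohort, out of 100.
pre_scores = [35, 40, 30, 45, 38]
post_scores = [55, 60, 52, 65, 58]

# Pre/post delta: almost any training yields a positive delta,
# so the gain by itself tells us little.
delta = mean(post_scores) - mean(pre_scores)

# Criterion-referenced check: did learners reach an objectively set
# performance level (say, 80/100 needed to actually do the job)?
CRITERION = 80
passed = sum(score >= CRITERION for score in post_scores)
pass_rate = passed / len(post_scores)

print(f"Average gain: {delta:.1f} points")   # looks impressive...
print(f"Meet criterion: {pass_rate:.0%}")    # ...yet no one can perform
```

With these invented numbers the average gain is about 20 points, yet 0% of the cohort meets the criterion — exactly the gap between measuring a delta and measuring readiness to perform.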

And this is from an organization that purports to certify L&D quality! Both items come from their base level of operation, which means this is considered acceptable. It’s evidence that our problems aren’t just in practice; they’re pernicious, present in the mindset of even the supposed experts. Is it any wonder the industry is having trouble? And I haven’t rigorously reviewed the standard, I was merely using it (I wonder what I’d find if I did?).

Maybe I’m being too harsh. Maybe the wording doesn’t imply what I think it does.  But I’ll suggest that we need a bit more rigor, a bit more attention to science in what we do. What have I missed?

3 Comments »

  1. Exactly!

    Comment by Guy W. Wallace — 27 December 2017 @ 8:13 AM

  2. Appears they are eating their own dog food so to speak and it tastes… well, like dog food.

    Comment by Mark — 27 December 2017 @ 7:04 PM

  3. You are not the one missing anything; until they get out of “training” metrics as measures of success, the words won’t change. We need to refocus them on performance and operational metrics to get them to see real ROI. As for learning styles… when faced with that discussion, I have found it not worth the energy, and instead have asked the questions that bring us to the best delivery modality for the content and client (and sometimes it works too!).

    Comment by William Ryan — 29 December 2017 @ 2:27 PM
