A few years ago, I had a ‘debate’ with Will Thalheimer about the Kirkpatrick model (you can read it here). In short, he didn’t like it and I did, each for our own reasons. However, the situation has changed, and it’s worth revisiting the issue of evaluation.
In the debate, I was lauding how Kirkpatrick starts with the biz problem and works backwards. Will countered that the model didn’t really evaluate learning. I replied that its role wasn’t evaluating the effectiveness of the learning design on the learning outcome; it was assessing the impact of the learning outcome on the organizational outcome.
Fortunately, this discussion is now resolved. Will, to his credit, has released his own model (while unearthing the origins of Kirkpatrick’s work in Katzell’s). His model is more robust, with 8 levels. This isn’t overwhelming, as you can ignore some. Helpfully, there are indicators as to what’s useful and what’s not!
It’s not perfect. Kirkpatrick (or Katzell? :) can relatively easily be used for other interventions (incentives, job aids, … tho’ you might not know it from the promotional material). It’s not so obvious how to do so with his new model. However, I reckon it’s more robust for evaluating learning interventions. (Caveat: I may be biased, as I provided feedback.) And should he have numbered the levels in reverse? Kirkpatrick himself admitted that might’ve been a better idea.
Evaluation is critical. We do some, but not enough. (Smile sheets, level 1, where we ask learners what they think of the experience, have essentially zero correlation with outcomes.) We need to do a better job of evaluating our impact (not just our efficiency). This is a more targeted model, and I draw it to your attention.