In a couple of articles, the notion that we should be measuring our impact on the business is called out. And being one who says just that, I feel obligated to respond. So let’s get clear on what I’m saying and why. It’s about what to evaluate, why, and possibly when.
So, in the original article by my colleague Will Thalheimer, he calls the claim that we should focus on business impact 'dangerous'! To be fair (I know Will, and we had a comment exchange), he's saying that there are important metrics we should be paying attention to about what we do and how we do it. And no argument! Of course we have to be professional in what we do. The claim isn't that the business measure is all we need to pay attention to, and he acknowledges that later. Further, he does say we need to avoid what he calls 'vanity metrics': measures of just how efficient we are. And I think we do need to look at efficiency, but only after we know we're doing something worthwhile.
The second article is a bit more off kilter. It seems to ignore the value of business metrics altogether. It talks about competencies and audience, but not about impacting the business. Again, the author raises the importance of being professional, but still seems to be in the 'if we do good design, it is good' camp, without checking whether the design is addressing something real.
Why does this matter? Partly because, empirically, what the profession measures are what Will called 'vanity' measures. I put it another way: they're efficiency metrics. How much per seat per hour? How many people are served per L&D employee? And what do we compare these to? Industry benchmarks. And I'm not saying these aren't important, ultimately. Yes, we should be frugal with our resources. We should even, ultimately, ensure that the cost to improve isn't more than the problem costs! But…
The big problem is that we've no idea if that butt in that seat for that hour is doing any good for the org. We don't know whether the competency addresses a gap that's keeping the org from succeeding! I'm saying we need to focus on the business imperatives because right now we aren't!
And then, yes, let's focus on whether our learning interventions are good. Do we have the best practices, the least amount of content that still works, etc.? Then we can ask whether we're efficient. But if we only measure efficiency, we end up taking PDFs and PPTs and throwing them up on the screen. If we're lucky, with a quiz. And this is not going to have an impact.
So I’m advocating the focus on business metrics because that’s part of a performance consulting process to create meaningful impacts. Not in lieu of the stuff Will and the other author are advocating, but in addition. It’s all too easy to worry about good design, and miss that there’s no meaningful impact.
Our business partners will not be impressed if we're designing efficient, and even effective, learning that isn't doing anything. Our solutions need to be targeted at a real problem and address it. That's why I'll continue to say things like "As a discipline, we must look at the metrics that really matter… not to us but to the business we serve." Then we also need to be professional. Will's right that we don't do enough to assure our effectiveness, and focus only on efficiency. But it takes it all, impact + effectiveness + efficiency, and I think it's dangerous to say otherwise. So what say you?
Guy Wallace says
Exactly. Always measure/evaluate for Effectiveness 1st … and Efficiency 2nd.
Clark says
Guy, thanks for weighing in, and I think you know what I mean (heck, your Lean ISD is a guide here), but… I'll say impact 1st, effectiveness 2nd, and efficiency 3rd. Why? Because we could be effective at a learning objective that doesn't do anything for the org, and be efficient at it, too. And it's still not what we should be doing. And I'd extend it: design for these, and then measure/evaluate. Make sense?
Will Thalheimer says
Clark, I’m always delighted to hear your thinking, especially when it challenges mine! Really!
I'm still learning about learning evaluation (it is an immensely complex area of endeavor), but I'm beginning to think about learning evaluation with a different frame. One I've missed before, or maybe one that hasn't arisen before. I'm thinking that evaluation is done to help us make decisions. Indeed, there are different players who must make different decisions. In short (and I will elaborate on this in the future), we should evaluate to help us make decisions. So, to determine what to evaluate, we have to determine what decisions we need to make.
I don't think labels like "efficiency" and "impact" are that helpful. Anyway, I know I'm on my own here, so maybe I'm just suffering from early-onset dementia, but I'll try to keep working on this to see if I get any traction.
Clark says
Will, thanks for weighing in. And I agree, it is complex. What seems simple up top unpacks into considerable nuances. That's why you get paid the big bucks! :) Seriously, I do think impact and efficiency are important concepts, as well as effectiveness. Sometime we need to sit down around a whiteboard and beat this into submission. Maybe Orlando in March?