I continue to get asked about social learning metrics. Until we get around to a whitepaper or something on metrics, here’re some thoughts:
Frankly, the problem with Kirkpatrick (much as with LMSs and ADDIE, *drink*) is not in the concept, but in the execution. As he would say, stopping at level 1 or 2 is worthless; you need to start with level 4 and work back. This is true whether you're talking about formal learning, informal learning, or whatever. That said, I don't feel you have to be anal about levels 1-3; it's level 4 that matters, though making the link plausibly strengthens your case. And I also like what I heard added at a client meeting: level 0, are they even taking the course/accessing the system? But I digress…
So, let’s say you are interested in seeing what social media can do for your organization: what are you not seeing but need to? If you’re putting a social media system into a call center, maybe you want reduced time to problem solution, fewer customer return calls on the same problem, etc. If it’s going into an operations group, maybe you want more service or product ideas. What is it you’re trying to achieve? What would indicate the innovation that you’re looking to spark?
Then, you need to find ways to measure those outcomes. You have three basic decisions to make in terms of a strategic initiative:
- it’s working, yay, let’s keep it
- hmm, it’s kinda working, but we need to tweak it
- uh oh, this is bleeding money, let’s kill it
You should set parameters before you launch the initiative that indicate the thresholds you are talking about. The keep and kill thresholds likely have to do with the costs versus the benefits. You may change those parameters on inspection of the results at any time, but at least you are doing it consciously. And gradually your patience will, or should, fade. Eventually you end up with either a keep or kill decision.
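The keep/tweak/kill logic above can be sketched as a small decision function. To be clear, this is just an illustrative sketch: the function name, the benefit-to-cost framing, and the threshold values are my assumptions, not anything prescribed in the post; the point is only that the thresholds are set explicitly, before launch.

```python
# Hypothetical sketch of the keep / tweak / kill review described above.
# Thresholds are benefit-to-cost ratios chosen BEFORE launch; the specific
# values (1.5 and 0.75) are illustrative assumptions, not recommendations.

def review_initiative(benefit, cost, keep_ratio=1.5, kill_ratio=0.75):
    """Return a keep/tweak/kill decision from measured benefit and cost."""
    ratio = benefit / cost
    if ratio >= keep_ratio:
        return "keep"    # it's working, keep it
    if ratio >= kill_ratio:
        return "tweak"   # kinda working; adjust and re-measure
    return "kill"        # bleeding money; shut it down

# Example quarterly review with made-up numbers:
print(review_initiative(benefit=120_000, cost=60_000))  # keep (ratio 2.0)
print(review_initiative(benefit=50_000, cost=60_000))   # tweak (~0.83)
print(review_initiative(benefit=30_000, cost=60_000))   # kill (0.5)
```

Changing the parameters later is fine, as the post says, so long as you do it consciously; here that would mean explicitly passing new `keep_ratio`/`kill_ratio` values at the next review rather than silently moving the goalposts.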
Frankly, even activity is a metric. A vendor of a social media system uses that as a metric for billing (though I don’t think that two touches a month constitutes a meaningful interaction by a user), and if people are talking productively and getting value, you’ve got at least an argument that intangible benefits are being generated. You could couple that with subjective evaluation of value, but overall I would like to argue for more meaningful outcomes.
And don’t think that you have to have only one metric. Depending on the size of the initiative and the different silos being integrated, you might have more. You might check not only key business metrics, but look for impacts on retention and morale as well, if the benefits of improving work environments are to be believed (and I do believe them). And, of course, there’s more than the installation and measuring: the tweaking, for instance, could involve messaging, culture, interface design, or more.
Metrics for informal learning aren’t rocket science; they’re a matter of mapping good principles into specific contexts. Your organization needs to find ways to facilitate social learning, since, as so many say, innovation outcomes are the key differentiator going forward. You should be experimenting, but against impacts you’d like to see, not just on faith.
Communications Forum says
Social media can be difficult to quantify for us. Sometimes it’s enough just to be engaged in a conversation, other times you want to measure the behaviors that are being encouraged, and sometimes you just want to get the word out. ROI regarding social media is kind of like ROI on communications–it’s hard to get objective and quantifiable numbers that translate into actions/behaviors. I like your idea that even activity is a metric.
Kelly Meeker says
It’s easier to measure ROI if you’re funneling all the conversation to one place – whether a webpage, twitter profile or social network of some kind. It’s incredibly difficult to track engagement across multiple platforms and figure out which platform is driving the best performance improvement or time saved. A lot of the time, it’s nothing more than a gut feeling, which is not easy to explain to management. I look forward to the development of measurement tools that track how a user engages across networks – of course, better single sign on services would be a good first step.
Jay Cross says
Clark, one of the things I like about your approach is something that’s often overlooked by evaluation efforts: iterations over time. You make it clear that measurement leads to action that leads to renewed measurement that leads to new action, and so on. This is vital. Improvement is not yes/no; it’s incremental. Evaluate, tweak, evaluate, tweak, evaluate, tweak, ad infinitum.