In our LDA Forum, someone posted a question about applying Cathy Moore’s Action Mapping to soft skills, like improving team dynamics. Now, they’re specifically asking about a) people with experience, and b) in the context of not-for-profits, so…I’m not a good candidate to respond. However, it does raise a more common problem: how do you train things that are more ephemeral, like leadership or communication? My short answer is “break it down”. What do I mean? Here are some thoughts, and I welcome feedback!
Many moons ago, I co-wrote a paper on evaluating social media impacts. There are the usual metrics, like ‘engagement’. That is, are people using the system? Of course, for companies charging for their platform, this could be as infrequent as a person accessing it once a month. More practically, however, it should be a person hitting it at least several times a week, or even several times a day! If you’re communicating, cooperating, and collaborating, you really should be interacting at a fair frequency.
I, on the other hand, argued for more detailed implications. If you’re putting it into a sales team, you should expect not only more messages, but also more sales success, shorter sales cycles, etc. So you can get more detailed. These days, you can do even more, and have the system actually tag what the messages are about and count them. You can go deeper.
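(For the curious, here’s a rough sketch of what that “tag and count” idea could look like. The topics, keywords, and simple keyword-matching are purely illustrative assumptions on my part; a real platform would presumably use something smarter, like a trained classifier.)

```python
from collections import Counter

# Hypothetical topics and keywords for a sales team; purely illustrative.
TOPIC_KEYWORDS = {
    "pricing": ["price", "quote", "discount"],
    "objections": ["concern", "pushback", "competitor"],
    "closing": ["contract", "signature", "close"],
}

def tag_message(text):
    """Return every topic whose keywords appear in the message text."""
    text = text.lower()
    return [topic for topic, words in TOPIC_KEYWORDS.items()
            if any(word in text for word in words)]

def count_topics(messages):
    """Aggregate topic counts across a set of messages."""
    counts = Counter()
    for message in messages:
        counts.update(tag_message(message))
    return counts

# A few made-up messages, just to show the shape of the output.
print(count_topics([
    "Can we offer a discount to close this deal this quarter?",
    "The prospect raised a concern about a competitor's quote.",
    "Contract is out for signature.",
]))
# -> Counter({'pricing': 2, 'closing': 2, 'objections': 1})
```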
Which is what I think is the answer here. What skills do you want? For an innovation demo with Upside Learning, I argued we should break it down. That includes how to work out loud, how to provide feedback, and how to run group meetings. (I’m just reading Alex Edmans’ May Contain Lies, and it contains a lot of detail about how to consider data and evidence.) We can look for more granular evidence. Even for skills like team dynamics, you should be looking at what makes for good dynamics: things like making it safe yet accountable, providing feedback on the behavior rather than the person, valuing diversity, etc. There should be specific skills you want to develop, and assess. These, then, become the skills you design your learning to develop. You are, basically, creating a curriculum of the various skills that comprise the aggregated topic.
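(To make the “curriculum of skills” idea a bit more tangible, here’s a minimal sketch. The component skills and observable behaviors are my own assumed examples, not a definitive decomposition of team dynamics.)

```python
# Hypothetical breakdown of an aggregate topic into assessable component skills.
# The skills and behavioral indicators below are illustrative assumptions only.
team_dynamics_curriculum = {
    "safety with accountability": [
        "invites dissenting views in meetings",
        "follows up on commitments without blame",
    ],
    "behavior-focused feedback": [
        "describes the specific behavior and its impact",
        "avoids judgments about the person",
    ],
    "valuing diversity": [
        "solicits input from quieter team members",
        "credits ideas to their originators",
    ],
}

# Each observable behavior is something you can design practice for and assess.
for skill, indicators in team_dynamics_curriculum.items():
    print(f"Skill: {skill}")
    for indicator in indicators:
        print(f"  assess & practice: {indicator}")
```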
It may be that you assess a priori, and discover that only some of the skills are missing in your teams. That upfront analysis should happen regardless, but it’s too infrequent. The interlocutor here also mentioned the audience complaining about the time required for analysis. Yep, that’s a problem. I reckon you have to sell the whole package: analyzing, designing, and evaluating for impact on performance, not just for some vague improvement. Yet, compared to throwing money away? Targeting your intervention efforts seems like a logical sell. If only we lived in a rational world, eh?
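(And a tiny sketch of how such an a priori assessment might be used to target only the missing skills; the scores and threshold here are made up purely for illustration.)

```python
# Hypothetical per-team assessment scores on the component skills (0-1 scale).
team_scores = {
    "safety with accountability": 0.82,
    "behavior-focused feedback": 0.55,
    "valuing diversity": 0.74,
}

THRESHOLD = 0.7  # assumed cut-off for "good enough"
gaps = [skill for skill, score in team_scores.items() if score < THRESHOLD]
print("Target development at:", gaps)  # -> ['behavior-focused feedback']
```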
Still, overall, I think that these broad programs break down into specific skills that can be targeted and developed. And we should. Let’s not settle for vague intentions and explanations, and consequently no outcomes. Let’s do the work, break it down, and develop actual skills. That, at least, is my take; I welcome hearing yours!
Rich James says
I agree with your analysis. The challenge is that, in general, we have a poor understanding of what enables these “ephemeral” qualities. So while we might be able to measure certain outcomes or actions, we are not good at creating the social environment that produces them. As learning professionals, we have to know enough to counter the pet theories and conventional wisdom that clients hold.
Then we have to convince them that the hard work of self-awareness and accountability is worth it. Not easy. Theories around psychological safety, self-determination, cognitive diversity, and cooperation are useful, but applying them demands significant ongoing attention and effort. So we muddle along with “awareness training” that only the most highly motivated embrace and build on.
Clark says
Rich, sadly I agree. It’s hard work to understand an area, drill down, and create meaningful engagement. Far easier to just ‘buy in’ to someone’s model that’s ‘well sold’ or popular, though not necessarily accurate. We really need to raise the understanding of what learning really takes, instead of letting ourselves continue in this ‘take orders’ framework that has no impact. Measuring is the first step, I reckon.