As has become a pattern, someone recently asked me how to evaluate soft skills. Without being an expert on soft skills or evaluation, I tried to answer on principle, thinking about the types of observable data you should expect to find. That yielded an initial answer. Then I watched an interesting video of a lecture by a scholar and consultant, which elaborated the challenges, so there’s a longer answer too. Here’s an extended riff on evaluating soft skills.
I started by wondering what performance outcomes you would expect for soft skills, coupled with how you could find evidence of those observable differences. As a short answer, I suggested that there should be 3(+) outcomes from effective soft skills training.
0) The learner should be able to perform in soft skills scenarios (cf. Will Thalheimer’s LTEM). This is the most obvious: put them in the situation and ask them to perform. This is the bit that gets re-addressed further down.
1) The learner should be aware of an improvement in their ability to perform. However, asking immediately can lead to a misapprehension of ability. So, as Will Thalheimer advises in his Performance-Focused Smile Sheets, ask them three months later. Also, ask about behavior, not knowledge, e.g. “Are you using the <> model in your work, and do you notice an improvement in your ability?”
2) The ‘customers’ of the learner should notice the improvement. Depending on whether that’s internal or external, it might show up (at least in aggregate) either in 360 evaluation scores or in some observable metric like customer satisfaction scores. This data may be harder to collect, but of course it’s also more valuable.
3) Finally, their supervisors/managers should notice the improvement, whether observationally or empirically. They should not only be prepared to support the change over time, but also be asked to look for evidence (including as a basis to fine-tune performance).
Taken together, triangulating across these sources should be a way to establish validity.
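To make that triangulation concrete, here’s a minimal sketch of what pulling the three evidence sources together might look like. The measure names and the numbers are purely illustrative assumptions, not real data; the point is simply that converging movement across delayed self-report, customer metrics, and supervisor ratings is what gives you confidence.

```python
# Illustrative sketch only: hypothetical pre/post scores for the three evidence
# sources discussed above (delayed self-report, customer satisfaction,
# supervisor/360 ratings). None of these values are real data.
from statistics import mean

evidence = {
    "self_report_3_months": {"pre": [2.8, 3.1, 2.9], "post": [3.9, 4.2, 4.0]},
    "customer_satisfaction": {"pre": [3.4, 3.6, 3.5], "post": [3.8, 4.1, 3.9]},
    "supervisor_rating":     {"pre": [3.0, 3.2, 2.9], "post": [3.7, 3.8, 4.0]},
}

def improvement(scores: dict) -> float:
    """Average post-training score minus average pre-training score."""
    return mean(scores["post"]) - mean(scores["pre"])

# Triangulation here just means looking for consistent movement across sources:
# if all three shift in the same direction, the evidence converges.
gains = {source: improvement(scores) for source, scores in evidence.items()}
converges = all(gain > 0 for gain in gains.values())

for source, gain in gains.items():
    print(f"{source}: {gain:+.2f}")
print("Converging evidence of improvement:", converges)
```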
Now, extending this, Guy Wallace tweeted a link to a lecture by Neil Rackham. In it, Neil makes the case that universities need to shift to teaching core skills, in particular the 4 C’s: critical thinking, creativity, communication, and collaboration. He also points out how hard these are to evaluate without the labor-intensive effort of an individual observing performance. This is a point others have made: these skills have hard-to-observe criteria.
There’s some argument about so-called 21C skills, and yet I can agree that these four things would be good. The question is how to assess them reliably. Rackham argues that perhaps AI can help here. Perhaps, but at this point I’d argue for two things. First, help students self-evaluate (which has the benefit of them understanding what’s involved). Second, instrument the environments (for instance, with xAPI) in which these activities are performed. There will be data records that can be matched to behaviors, initially for human evaluation, but perhaps ultimately for machine evaluation.
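As a rough sketch of what instrumenting with xAPI can look like: each tracked behavior (a comment on a shared draft, a peer review, a contribution to a group document) becomes a statement sent to a Learning Record Store. The endpoint, credentials, learner, and activity below are hypothetical placeholders, not any particular product’s setup; “commented” is one of the standard ADL verbs.

```python
# Minimal xAPI instrumentation sketch. The LRS endpoint, credentials, actor,
# and activity IDs are hypothetical placeholders for illustration.
import requests  # pip install requests

LRS_ENDPOINT = "https://lrs.example.org/xapi/statements"  # hypothetical LRS
LRS_AUTH = ("lrs_key", "lrs_secret")                      # hypothetical credentials

statement = {
    "actor": {
        "name": "Pat Learner",
        "mbox": "mailto:pat.learner@example.org",
    },
    "verb": {
        # A standard ADL verb; any verb IRI mapping to an observable
        # collaboration behavior would do.
        "id": "http://adlnet.gov/expapi/verbs/commented",
        "display": {"en-US": "commented"},
    },
    "object": {
        "id": "https://example.org/activities/group-project/design-doc",
        "definition": {"name": {"en-US": "Group project design document"}},
    },
}

# Post the statement to the LRS; the accumulated records can later be matched
# to the collaboration or critical-thinking behaviors you want to evaluate.
response = requests.post(
    LRS_ENDPOINT,
    json=statement,
    auth=LRS_AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()
```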
Of course, this requires assigning meaningful activities that necessarily involve creativity, critical thinking, communication, and/or collaboration. This means project-based work, and I’ve long argued that you can’t learn such skills without a domain. Actually, to create transferable versions, you’d need to develop the skills across domains.
When I teach, I prefer to give group projects that require these skills. It was, indeed, hard to mark these extra skills, but I found that scaffolding them (e.g. a ‘how to collaborate’ document) facilitated good outcomes. Being explicit about best thinking practices isn’t only a good idea, it’s a demonstrably useful approach in general.
So I think developing these skills is important, and that means we need ways of evaluating soft skills. We know it when we see it, but it’s hard to find the opportunity to observe; if we can assign it, however, we can evaluate and develop these skills more readily. That, I think, is a desirable goal. What think you?