Our brains like to categorize things; in fact, we can’t really not do it. This has many benefits: we can better predict outcomes when we can categorize the situation, we can respond in appropriate ways via shared conceptualizations, and so on. It also has some downsides: stereotyping, for one. There’re tradeoffs, of course. But we also have to worry about over-using categorization; when we do, we risk making the wrong bucket lists.
Our desire for simplification and categorization is manifest. The continued interest in reading one’s horoscope, for instance. And the continued success of personality typing, despite the evidence of its lack of utility. Other than the Big 5 or HEXACO, the rest are problematic at best. I’m just reading Annie Murphy Paul’s The Cult of Personality Testing (the predecessor to her The Extended Mind), and hearing of abuses like Rorschach tests being used in child custody decisions is truly horrific. Similarly, hearing that people are being denied employment based on their color (not race, but their ‘color’ on a particular test, blue or orange) isn’t new, and it continues (as does the other, sadly). Most of these tests don’t stand up to scientific scrutiny!
This explains the appeal of learning styles, too, another myth that won’t die. Generations, similarly. We like simplification. Further, there are times it’s useful. For example, recording your blood type can prevent potentially life-threatening complications. So can having a basis to adapt learning, such as people’s performance (success or failure). Even more so if additional factors are added, such as confidence. Yet we can overdo it. We might over-categorize, and miss important nuances.
Todd Rose’s The End of Average made an excellent case for not trying to force people into one bucket. In it, he points out that when we assign a single grade for complex performance, we miss important nuances. For instance, if you get something wrong, why did you get it wrong? It matters in terms of the feedback we might give you: one misconception calls for different feedback than another.
How do we reconcile this? There’re benefits to simplification, and risks. We have to be careful to make things as simple as we can, but no simpler. Which isn’t an easy task. The best recommendation I can make is to be mindful of the risks when you do simplify. Maybe start more broadly, and then winnow down? Explicitly consider the risks and costs as well as the benefits and savings. For instance, we’re using learner personas in a project. These personas can differ on important dimensions, and characterize the audience space in ways that a simple ‘the learner’ can’t capture.
Overall, we want to make sure we’re only using simplifications and categorizations in ways that are both helpful and scrutable. When we do so, we can avoid the wrong bucket lists. That should be our goal, after all.
Christine Bernat says
You mention a great deal about the “myth” of learning styles. While I admit that the notion has been overdone, I can’t help but think that they exist to some extent. People are different. I once designed two courses: one on using Excel for an accounting department and one on using Quark Express for a graphic design department. The two student groups could not have been more different. We actually had to redesign the classes because they wanted to learn so differently. The Excel group wanted step-by-step instructions with a manual. The Quark Express group wanted very brief overviews of functions and then a lot of time to play with the software. I’m primarily a technical writer, and I don’t think I’d even have a job if engineers and computer programmers liked to write. They generally hate writing. They want to deal with math and coding. Don’t you even consider the characteristics of the learners when you design training? I do, as more analytic vs. creative, etc.
Clark says
Christine, as I’ve mentioned in the section on learning styles in my myths book, no one’s arguing that learners don’t differ. What has been robustly demonstrated is a) that we can’t reliably identify learner differences in approach to learning, and b) that there’s no evidence that adapting to their styles matters. So, yes, a) is a ‘yet’: there may eventually be a robust learning differentiator, but essentially none of the existing ones are psychometrically valid. It may well be that this is because we’re so context-dependent, and change depending on what we’re learning, our previous experience, time of day, motivation, phase of the moon, etc. This also doesn’t mean that we don’t need to design for the audience. As you point out, there are audiences that are self-selected as to their preferences, strengths, etc. We should take that into account. But also, there’re clear indications that we should design for the learning; that is, learning to operate a tractor requires different approaches than learning to deal with difficult customers. I’ll suggest that the differences between the outcomes you were trying to achieve also played a role in the type of pedagogy that ended up being useful. There likely were still valuable elements like concepts, examples, and practice, but the type of practice differs massively between using Quark Express to be creative and using Excel to achieve specific numeric goals.