Well, it turns out I was wrong. I like to believe it doesn’t happen very often, but I do have to acknowledge it when I am. Let me start from the worst, and then qualify it all over the place ;).
In the latest Scientific American Mind, there is an article on The Pluses of Getting It Wrong (first couple paragraphs available here). In short, people remember better if they first try to access knowledge that they don’t have, before they are presented with the to-be-learned knowledge. That argues that pre-tests, which I previously claimed are learner-abusive, may have real learning benefits. This result is new, but apparently real. You empirically have better recall for knowledge if you tried to access it, even though you know you don’t have it. My cognitive science-based explanation is that the search in some ways exercises appropriate associations that make the subsequent knowledge stick better.
Now, I could try to argue against the relevance of the phenomenon, as it’s focused on knowledge recovery which is not applied, and may still lead to ‘inert knowledge’ (where you may ‘know it’, but you don’t activate it in relevant situations). However, it is plausible that this is true for application as well. Roger Schank has argued that you have to fail before you can learn. (Certainly I reckon that’s true with overconfident learners ;). That is, if you try to solve a problem that you aren’t prepared for, the learning outcome may be better than if you don’t. Yet I don’t think it’s useful to deny this result, and instead I want to think about what it might mean for still creating a non-aversive learner experience.
I still believe that giving learners a test they know they can’t pass at best seems to waste their time, and at worst may actually cause some negative affect like lack of self-esteem. Obviously, we could and should let them know that we are doing this for the larger picture learning outcome. But can we make the experience more ‘positive’ and engaging?
I think we can do more. I think we can put the mental ‘reach’ in the form of problem-based learning (this may explain the effectiveness of PBL), and ask learners to solve the problem. That is, put the ‘task’ in a context where the learner can both recognize the relevance of the problem and is interested in it. Once learners recognize they can’t solve the problem, they’re motivated to learn the material. And they should be better prepared mentally for the learning, according to this result. While it *is*, in a sense, a pre-test, it’s one that is connected to the world, is applied, and consequently is less aversive. And, yes, you should still ensure that it is known that this is done to achieve a better outcome.
Now, I can’t guarantee that the results found for knowledge generalize to application, but I do know that, by and large, rote knowledge is not going to be the competitive edge for organizations. So I’d rather err on the side of caution and have the learners do the mental ‘reach’ for the answer, but I do want it to be as close as possible to the reach they’ll do when they really are facing a problem. If there is rote knowledge that genuinely must be known (and please, do ensure there really is, don’t just take the client’s or SME’s word for it), then you may want to take this approach for that knowledge too, but I’m (still) pushing for knowledge application, even in our pre-tests.
So, I think there’s a revision to make to the type of introduction you use for the content: present the problem, or type of problem, that learners will be asked to solve later, and encourage them to have an initial go at it before the concepts, examples, etc. are presented. It’s a pre-test, but of a more meaningful and engaging kind. I’d love to see any experimental investigation of this, by the way.
Maybe renaming the “pre-test” a “hook” would be more accurate to its purpose. A hook is something to draw the learner in. I’ve always associated a pre-test with an evaluation for assessing before-and-after knowledge. Pre-tests or level-tests can be seen by learners as abusive or irrelevant, just something that has to get done. But PBL motivates learners to take on new ideas. It (should) put students in the mindset that they can achieve some relevant knowledge (if it is done well).
Interesting. This does make sense. However, I think careful strategies would need to be in place to match the ‘space prep’ with the realignment during delivery and activation. Seems to me that this would be a nearly precise chemical preparation, sort of like reserving a parking spot. Without careful strategic matching, that careful preparation could be for nothing – and you’ll be prepping the right spot later anyway, after wasting a reaction prepping a slot for something that isn’t filled.
I too, would love to see some experimental investigation. I worry that interpretation of the phenomenon might tend to propel things that are just not that useful. I love that you are making the distinction between pre-test questions and problems / challenges. I think there is an appreciable difference between the cognition involved in answering a trivia question and applying what you know to make a decision to solve a problem.
Jane McGinnis says
I love this article! I’ve occasionally used pre-testing, and found it effective in certain classes, but have not seen it impact students in an abusive way. I’ve never observed a loss of confidence or sense of “I’m stupid” among students. The benefit I have experienced is one of allowing the student to make an effective assessment of the training benefits received, as they compare the feeling of having real information and an indicated path to the pre-class status before information was received. That’s a wordy way to say “they feel they have learned something today.”
Blair Rorani says
1. What should we teach?
Macro (what competencies should be in this ‘training’ curriculum?) vs. micro (what do I already know how to do in relation to this topic?)
2. When should we teach it?
Learning for the first time (here’s everything we think you should learn) vs. learning as a result of coaching and development (here is what you should learn/brush up on)
3. How should we teach it?
Courses (do module 1 then module 2 etc) vs. learning experiences (work on this realistic project, acquire target skills, have just-in-time teaching as support if you’re stuck)
I think there is a difference between a pre-test and failure before you learn.
Pre-test is the top down, ‘this is what we know you need to know to do X’ and should be based on competencies identified. Fail-to-learn is more about the experience itself – ‘what can I be challenged to do that I can safely fail at doing so my brain knows it needs to learn something new?’ Then cue just-in-time teaching.
Pre-tests are about skills being the focus of the teaching. That’s true for the stakeholder. For the learner, accomplishing a realistic goal/mission should be the focus, and failure, learning, and recovery are simply part of achieving that mission.
Pre-test ‘Can you fly to the moon and back?’ > Course ‘here’s how to fly to the moon …’
Fail-to-learn ‘Pretend you’re flying to the moon, doing everything right and then – Houston, we have a problem – the blah blah breaks and you need to build a new one using some tape and a radio and a cup – or you die’. Here’s some just-in-time teaching on how to do that.
So bring the two together – build failure into the course and treat that as the pre-test. There are no pre-tests in life, only failure, teaching, recovery from failure (I’d like to see my scores on the marriage and parenting pre-test – I might not have met the pre-requisites) :)
Pre-assessments can be used to determine skill gaps and where I need to pitch the learning. Many of our learners will have life or work experience in our subject matter already. It’s rare that learners come to us with absolutely no knowledge of the subject at all. So pre-assessments can also be used to acknowledge existing knowledge and skills. Like Jane, I’ve found that they can often be confidence building. It’s got a lot to do with how you frame it – as long as you don’t force a learner to disclose their results to the rest of the class or call them losers if they don’t do very well – I can’t see how it can be abusive. Another benefit is that you can compare the pre- and post- assessments for ROI purposes and training evaluation.
@Steve: I agree. In Australia, it is recommended that you use a minimum of three assessment methods. I like to think of it as:
1) Can they do it? (skill assessment: eg. observation, demonstration & explanation, inspection of a finished product, etc)
2) Do they know why they have to do it that way? (theory assessment: eg. short answer, verbal explanation)
3) Can they think on their feet if something unexpected happens? (cognitive assessment: role-play, simulation, case study, etc)
And we’re also expected to assess whether they actually apply it on the job or not (workplace assessment, stats analysis, supervisor review, etc).
Gary H says
We currently use pre-tests as a pre-requisite to a few in-person certification courses. We are concerned that people will get discouraged, but are using the test as a screening mechanism to ensure they know enough to be successful before they come to an in-person class. We tell them up front what topics are included in the pre-test and even give them a list of references and recommended training. I think people use the pre-test as a learning tool, but I don’t have any evidence to support it. If they don’t pass, they know they need more preparation. We try to use problem-based questions in the pre-test. After reading this, I’m definitely going to ask people if the pre-test helped them or scared them.
Good way to look at it, Gary. I never really looked at the test as a scary mechanism, but I suppose there is some appreciable effect on confidence (which in itself is an interesting mechanism to couple with activities and reflection).
We’ve tried to move to pre-test as an optional tailoring mechanism that also provides the opportunity to test-out of the course activities.
I’m hesitant to support pre- and post-test delta comparisons as a reasonable measure of effectiveness. If this measure didn’t tend to become the primary/only checkbox indicating that evaluation had taken place, I’d be less apprehensive. Improvement reflected in short-term recall is one measure, but it’s not all measures. And in many cases it is the least important measure of effectiveness and impact, if it is important at all.
I also lean towards divorcing folks from the traditional notion of a test. Some folks really get it and build assessments and challenges based on problems and authentic context. Sadly, most do not. For pre- and post-tests, trivial pursuit is an annoyance for the learner and a waste of time for the person assembling the ‘test’. Temporarily changing the lexicon until we get a stronger contingent of exemplars seems to make a lot of sense to me.
Skill gauges, challenges, and gating activities seem like powerful alternatives to tests. It’s semantics… I know. But there’s plenty of rewiring to be done to weed out the bad habits and bad examples.
We built a self-paced course for the USMC on heavy machine guns. This was for folks who were incidental gunners (you’re a supply guy, your gunner is taken out of action, and you are in the $h^t – what do you do now?). What a fun course to work on.
We were required to include a post-test as an aggregate measure. But we did some cool things with the sections and activities. These were context- and problem-based, and they required 100% mastery to continue. Most of it was procedural, and the activities were chunked relatively well (according to our evaluation data). We combined what we viewed as meaningful interactions that required placing parts in the right proximity (drag and drop), immediately chained into expanding pie menus that evaluated choices (place this part notch forward or notch back, rotate it, rotate it how far, what are you thinking about / concerned about when you perform this step). Each step provided feedback, and the procedure chains also provided an after-action report that told the learner what they got right and wrong. They could also play back the proper procedure when they were finished. These would have been great to offer at the beginning of each section (see how well you do without any instruction).
Lots of opportunity.
Such great thoughts reflected here. I like the way you folks are using pre-tests: giving learners the ability to prepare, letting them know why the test is there, and not doing it just for deltas. (I was intrigued by using the delta for the learner, but I’d much rather have a clear new ability – that is, a pure objective measure that you can or can’t do X – than a delta.) I’m willing to be wrong.
For that matter, helping an overconfident learner find out what they don’t know they don’t know might be useful from a motivation side ;).
Great feedback, thanks!