Good formal learning consists of an engaging introduction, rich presentation of concepts, annotated examples, and meaningful practice, all aligned to cognitive skills. As user-generated online content proliferates, publishers and online schools are feeling the pressure, particularly as MOOCs come into play, with (decreasingly) government-funded institutions offering online content and courses for free. Are we seeing the demise of for-profit institutions and publishers?
I will suggest that there’s one thing that is harder to get out of the user-generated content environment, and that’s meaningful practice. I recall hearing of, but haven’t yet seen, a repository of such practice that poses a serious threat. Yes, there are learning object repositories, but they’re not yet populated with a rich suite of contextualized practice.
Writing good assessments is hard. Principles of good practice include meaningful decisions, alternatives that represent reliable misconceptions, relevant contexts, believable dialog, and more. They must be aligned to the objectives, and ideally have an increasing level of challenge.
There are some technical issues as well. Extensions that are high value include problem generators and randomness in the order of options (challenging attempts to ‘game’ the assessment). A greater variety of response options for novelty isn’t bad either, and automarking is desirable for at least a subset of assessment.
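To make the idea of a problem generator with randomized option order concrete, here is a minimal sketch (my own illustration, not from the post; the item format and the distractor rules are hypothetical). It generates a parameterized arithmetic item, builds distractors from plausible slips, and shuffles the option order so learners can’t memorize positions:

```python
import random

def generate_item(rng=random):
    """Generate one arithmetic practice item with shuffled options."""
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    correct = a * b
    # Distractors model plausible slips (hypothetical misconceptions):
    # adding instead of multiplying, or an off-by-one factor.
    distractors = {a + b, (a - 1) * b, a * (b + 1)} - {correct}
    options = [correct] + list(distractors)
    rng.shuffle(options)  # randomize order to resist 'gaming' by position
    return {
        "stem": f"What is {a} x {b}?",
        "options": options,
        "answer_index": options.index(correct),
    }

item = generate_item()
print(item["stem"], item["options"])
```

Because the parameters and option order vary per call, the same template yields many surface-distinct items, and automarking falls out for free by comparing the learner’s choice against `answer_index`.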
I don’t want to preclude essays or other interpretive work like presentations or media content, but these are likely to require human evaluation, even with peer marking. Writing evaluation rubrics is also a challenge for untrained designers or experts.
While SMEs can write content and even examples (if they grasp pedagogical principles and are in touch with the underlying thinking), writing good assessments is another matter.
I’ve an inkling that writing meaningful assessments, particularly leveraging interactive technology like immersive simulation games, is an area where skills are still going to be needed. Aligning and evaluating the assessment, and providing scrutable justification for the assessment attributes (e.g. accreditation) is going to continue to be a role for some time.
We may need to move accreditation from knowledge to skills (a current problem in many accreditation bodies), but I think we need, and can have, a better process for determining, developing, and assessing certain core skills, and particularly so-called 21st century skills. I think there will continue to be a role for doing so, even if we make it possible to develop the necessary understanding in any way the learner chooses.
As is not unusual, I’m thinking out loud, so I welcome your thoughts and feedback.
Jane Hart says
Hi Clark – agree very much on accrediting skills rather than knowledge, as the latter so quickly gets out of date. That’s why Harold and I are helping LPI to accredit new skills for the learning profession – and very 21st century ones too http://www.learningandperformanceinstitute.com/diplomaworkplacecollaboration.htm Jane
Ara Ohanian says
Clark, you raise a really good point about what happens after we learn. And you’re right that there is not enough meaningful practice out there. Certainly nothing that can be automated. Whether it is setting an exam question and marking it, or requiring a stretch activity at work and assessing performance against it, human agency will always be required to understand and feed back on the nuance of activity and help the learner understand where they have succeeded and where they need extra work.
Jeff Walter says
As I read your post I thought of the difference between educating and training. To me, educating is about enabling someone to achieve a higher level of enlightenment on a subject matter, while training is about helping them acquire and internalize information and/or techniques.
Many of your points are spot on in the context of educating someone, but when the focus is training, I believe it can be highly automated.