Learnlets

Clark Quinn’s Learnings about Learning

Getting strategic

28 June 2010 by Clark 3 Comments

Was on a call with my Internet Time Alliance colleagues, and we were talking about how to help organizations make the transition from delivering courses to supporting the full performance ecosystem. Jane Hart has had a recent series on what she calls ‘performance consulting’, and it’s a good way to look at things from a broader perspective. She was about to give a presentation, and we were talking through her slides.

Charles Jennings pointed out that the layer above her slides, aimed at the Learning and Development group, was missing a ‘governance’ role, which he’s been thinking about quite a bit. The point being that someone needs to be assisting in the strategic role of ensuring the coverage addresses the broad needs of the organization, not just courses.

Harold Jarche pointed out that just mimicking the Human Performance Technology (HPT) approach (e.g. ISPI) would miss the same things it misses.   I’ve been a fan of HPT since it goes beyond ADDIE in considering other potential sources of problems than just skills (e.g. performance support, incentives), but Harold’s right that it doesn’t inherently cover social learning, let alone engagement.

Jay Cross reminded us that we can’t just ignore the fact that the L&D perspective is strongly focused on compliance and other such needs. They have LMSs, and if we try to say that’s irrelevant we’ll be ignored as being out of touch. The fact is that there is a role for formal learning, it’s just not everything.

My takeaway was that we need a combined approach to help folks understand the bigger picture. From Rand Spiro’s Cognitive Flexibility Theory, we need to provide multiple models to increase the likelihood that the audience will find one that resonates. Whether it’s the continuum from novice through practitioner to expert, Jane & Harold’s 5 types of org learning (e.g. FSL, IOL, GDL, PDL, & ASL), or Jay’s point about continual change meaning formal methods aren’t sufficient, there are multiple ways it helps to think about the full spectrum of learning design. It’s also important to point out how supporting these is critical to the organization, and that it’s a way to take a strategic role and increase relevance to the organization.

Similarly, there are some sticks available, such as increasing irrelevancy if the L&D department does not take on this role. If L&D allows IT or operations to take it over, a) it won’t be run as well as if learning folks are involved, and b) those groups will be the ones seen as providing the necessary performance infrastructure and adding value to the enterprise.

Finally, what’s also needed is a suite of tools and processes to move forward. It’s clear to us that there are systematic ways to augment existing approaches to move in this new direction, but it may not be obvious to those who would want to change what they should start doing differently. We talked about ‘layers’ of extension of operation, starting with adding engagement to the design of learning experiences, and incorporating performance support and eCommunity into the potential solution quiver. Next steps include considering Knowledge Management and Organizational Development. Governance also needs to work its way into the mix. My arrows include mobile and deeper content models in addition to the others.

Quite simply, it has to start with the first step: analysis of the problems. For example, if the answer is changing quickly, if the audience is expert, or if it’s easier to connect to the right person than to develop content, facilitating communication may be a better solution than developing content. It helps to have the tools available in the infrastructure, a platform approach, which is why we advocate having a portal system and social networking in place in the organization, so you don’t have to build a whole infrastructure when you see the need. The learning processes will have to be richer than existing ones, and that will require new tools, I reckon. However, it will also require a new attitude and initiative.

The L&D group may not be the right group for the message, it may have to go higher (as Charles and Jay continue to suggest), but we’re looking to figure out how to help folks wherever they may be.   The final solution, however, has to be that some group that understands learning is facilitating the learning function in the organization at a systemic level. That’s the goal. How your organization gets there will depend on where you’re at, and many other factors, but that’s what any organization that wants to succeed in this time of increasing change will have to achieve.   Get it on your radar now, and figure out how you’re going to get there!

7 questions from the University of Wisconsin-Stout ID Program

22 June 2010 by Clark Leave a Comment

The University of Wisconsin-Stout Online Professional Development’s Instructional Design program regularly asks someone to answer a series of questions from their students. I think these sorts of efforts are worthwhile for seeing a variety of different ideas, and consequently I agreed. Here’re the questions and my answers as presented to the students:

Learning Design Evangelist Clark Quinn Answers Questions
June 2010

1. Are there any critical gaps in knowledge that you frequently encounter in the ID industry?

Clark:

Several: The first is folks who only know the surface level of ID, not understanding the nuances of the components of learning (examples, concepts, etc), and consequently creating ineffective designs without even being aware. This is, of course, not the fault of those who’ve taken formal training, but many designers are transported from face-to-face training without adequate preparation. A related problem is the focus on the ‘event’ model, where learning is a massed event, which we know is one of the least effective mechanisms to lead to long-term retention.

Another gap is a focus on the course, without taking a step back and analyzing whether the performance gap is caused by attitude, motivation or other issues besides skills and knowledge. The Human Performance Technology approach (ala ISPI) is a necessary analysis before ADDIE, but it’s too infrequently seen.

The last is the lack of consideration of the emotional (read: affective and conative) side of instructional design. Most ID only focuses on the cognitive side, and despite the efforts of folks like John Keller, Michael Allen, and Cathy Moore, among others, we’re not seeing sufficient consideration of engagement.

2. In a world where technology changes daily, do you feel we place too much emphasis on the latest and greatest delivery method? Do you foresee a future where higher education is delivered primarily through distance learning?

Clark:

Yes, we do see ‘crushes’ on the latest technology, whereas we should be focusing on the key affordances and matching technologies appropriately to need. I’m a strong proponent of the potential of new technologies to create new opportunities, but very much focused first on the learning outcomes we need to achieve. Which is why I have complicated feelings about the future of higher ed. In a time of increasing change, I think that the new role of higher education will increasingly be to develop the ability to learn. The domain will be a vehicle, but not the end goal. Which could be largely independent of place, but I liked the old role of new and independent mentorship beyond family and community, and always felt that there was a socializing role that university provides. I’m not quite sure how that could play out via technology mediation, but I do note the increasing role of social media.

3. Is there an elearning authoring tool you would endorse?

Clark:

Paper and pencil. Seriously. I wrote many years ago of a design heuristic, the double double P’s: postpone programming, and prefer paper. An associated mantra of mine is “if you get the design right, there are lots of ways to implement it; if you don’t get the design right, it doesn’t matter how you implement it”. Consequently, I prefer the cheapest forms of prototyping, and rapid cycles of iteration, and you can do a lot with post-it notes (e.g. the Pictive technique from interface design).

4. What impact, if any, do you think that the shortened attention span habits dictated by most social media will have on e-learning?

Clark:

I think that you should be very careful about media-manufactured trends. Our wetware hasn’t changed, just our tolerance of certain behaviors. We’ve always had short attention spans; it’s just that our schooling forced us to mask it. We’ve also always been quite capable of multi-tasking (ask any single parent), but it does degrade performance on each task, or cause the task to take longer. (Other seriously misconstrued ideas include digital natives, learning styles, and generational differences.)

I think we should look to learning that optimizes what’s known about how we learn (and see Daniel Willingham for a very apt critique of brain-based learning), which includes smaller chunks over a longer period of time. That’s just one component of a more enlightened learning experience predicated on a longer-term relationship with a learner.

5. Is there current research that shows whether employers view fully online degree programs any differently than traditional degree programs? Do employers care that an applicant may not have attended any face-to-face classes while earning an advanced degree?

Clark:

Frankly, this is research I haven’t really tracked. I do know recent research shows that online is better than face-to-face, but that’s most likely due to quality of design (instructors aren’t necessarily experts in learning design, sadly) rather than the medium.

6. What skills are critical to the survival of a new ID professional? What skills must be focused upon in the first three critical years of business?

Clark:

The skills that are necessary are much more pragmatic than conceptual. While I’d love to say “knowledge of learning theory”, and “enlightened design”, I think in the initial stages proper time/project management will probably pay off more immediately. Also, the ability to know what rules to break and when. That said, I think you absolutely need the domain knowledge, but street smarts are equally valuable.

However, the core one is the ability to learn effectively and efficiently. I argue that the best investment a business could make is not to take learning skills for granted but document them, assess them, and develop them. Personally, I’ll say the same: the best investment you can make is in your ability to learn continuously, eagerly, even joyously.

7. What areas of growth do you see in the ID market?

Clark:

With lots of caveats, because I’m involved in many of these areas:

Right now I’m seeing growth in the social learning space. Understanding and taking advantage of social learning is trendy, but it offers the potential for real learning outcomes as well. Naturally, the only problem is separating the snake oil from the real value. My involvement in the Internet Time Alliance is indicative of my belief in its importance.

I think the whole ‘cloud’/web-based delivery area is seeing some interesting growth too, with everything from rich internet applications to collaborative authoring. The opportunities of web 3.0 and semantic technologies are still a ways off, but I think the time is right to start laying the foundations (caveat, I generally find I’m several years ahead of the market in predicting when the time is ripe).

An area where I’m fortunately seeing a small uptick is engagement: the use of games and scenarios. Having a book out on the topic makes it gratifying to see the growth finally taking off.

And mobile is finally taking off! Having just left the first biz-focused mobile learning conference, I was thrilled to see the amount of excitement and progress. (Snake oil disclaimer: I’ve been on the stump for years, and finally have a book coming out on the topic. :)

I’ve been podcasted!

6 June 2010 by Clark Leave a Comment

Rob Penn, CEO of SuddenlySmart (makers of SmartBuilder, one of the new breed of authoring tools), interviewed me last fall about engaging learning: game design, simulations, etc.   It followed one by Professor Allison Rossett of SDSU (also available at the site).

I always find it hard to listen to myself (my voice sounds much better in my head :), and the audio is a little murky, but I hit the usual important notes about focusing on decisions that learners need to be able to make, getting challenge right, capturing misconceptions, and more.

Rob also gets me to discriminate between simulations, scenarios, and games (simulations are just models, scenarios have an initial state and a goal state learners should get to, and you can tune a scenario into a game), and I also elaborate on how you go from multiple choice, through branching scenarios, to full simulation-driven engines (jumping off from Rob’s question instead of first answering it, mea culpa!).
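
Purely as an illustration of that progression (my own sketch, not anything from the podcast, and the scenario content is invented), a branching scenario can be thought of as a graph of decision nodes, where a simulation-driven engine would compute the next state from a model instead of following fixed links:

# Hypothetical sketch: a branching scenario as a graph of decision nodes.
# A full simulation engine would compute the next state from a model
# instead of using these fixed links.

SCENARIO = {
    "start": {
        "situation": "A customer's new device won't connect to the network.",
        "options": {
            "Blame the network provider": "dead_end",        # captures a misconception
            "Walk through the connection settings": "solved",
        },
    },
    "dead_end": {"situation": "The customer hangs up, frustrated.", "options": {}},
    "solved": {"situation": "A misconfigured setting is found and fixed.", "options": {}},
}


def step(node_id: str, decision: str) -> str:
    """Return the id of the node that the given decision leads to."""
    return SCENARIO[node_id]["options"][decision]


# Walking one path through the branches; tuning context, challenge, and
# feedback is what turns a scenario like this into a game.
print(SCENARIO[step("start", "Walk through the connection settings")]["situation"])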

Feedback welcome!

Training Book Reviews

14 May 2010 by Clark 2 Comments

The eminent Jane Bozarth has started a   new site called Training Book Reviews.   Despite the unfortunate name, I think it’s a great idea: a site for book reviews for those of us passionate about solving workplace performance needs.   While submitting new reviews would be great, she notes:

share a few hundred words

1) on a favorite, must-own title, or maybe even

2) of criticism about a venerated work that has perhaps developed an undeserved glow

In the interest of sparking your participation (for instance, someone should write a glowing review of Engaging Learning :), here’s a contribution:

More than 20 years ago now, Donald Norman released what subsequently became the first of a series of books on design. My copy is titled The Psychology of Everyday Things (he liked the acronym POET), but based upon feedback it was renamed The Design of Everyday Things, as it really was a fundamental treatise on design. And it has become a classic. (Disclaimer: he was my PhD advisor while he was writing this book.)

Have you ever burned yourself trying to get the shower water flow and temperature right? Had trouble figuring out which knob turns on a particular burner on the stove? Pushed on a door that pulls, or vice versa? Don explains why. The book looks at how our minds interact with the world, how we use the clues that our current environment provides, coupled with our prior experience, to figure out how to do things, and how designers violate those expectations in ways that reliably lead to frustration. While Don’s work on design had started with human-computer interaction and user-centered design, this book is much more general. Quite simply, you will find that you look at everyday things (shower controls, door handles, and more) in a whole new way.

The understanding of how we understand the world is not just for furniture designers, or interface designers, but is a critical component of how learning designers need to think.   While his subsequent books, including Things That Make Us Smart and Emotional Design, add deeper cognition and engagement (respectively) and more, the core understanding from this first book provides a foundation that you can (and should) apply directly.

Short, pointed, and clear, this book will have you nodding your head in agreement when you recognize the frustrations you didn’t even know you were experiencing.   It will, quite simply, change the way you look at the world, and improve your ability to design learning experiences. A must read.

A case for the LMS?

6 May 2010 by Clark 6 Comments

My Internet Time Alliance colleagues Harold Jarche and Jane Hart have been (rightly) eviscerating the LMS.   Harold put up a post that the “LMS is no longer the centre of the universe“,   while Jane asked “what is the future of the LMS“.   Both of them are recognizing the point I make about the scope of learning in thinking about performance: it’s more than just courses, it’s the whole ecosystem.

I think that, before we completely abandon the LMS (and that’s not necessarily what they advocate), we should examine the key capabilities an LMS provides and determine whether that role can be taken up elsewhere or how it can manifest in the broader system.   I see two key functions an LMS provides.

The first role is to provide access to courses: there’s one place where learners can go to sign up for face-to-face courses, or access online courses (whether to sign up and then attend a synchronous event or to complete an asynchronous one). Providing access to courses is a good thing, as there are situations where formal learning is the appropriate approach.

A second role is to track learner usage and completion of courses. Again, ascertaining an individual’s capabilities is valuable, whether it be by programmed assessment, 360 evaluation or otherwise.   Linking these interventions back to organizational outcomes is also valuable to determine whether the original objectives were appropriate and whether the intervention needs modification.   (BTW, I’m definitely assuming for the sake of the argument that there’s an enlightened analysis focusing on meaningful workplace objectives and an enlightened design combining cognitive and emotional design into a minimal and engaging experience).

Other capabilities – authoring, communications, etc – are secondary, really.   There are other ways to get those functions, so focusing on the core affordances is the appropriate perspective.

How do you provide learners with the ability to access courses?   The LMS model is that the learner comes to the LMS.   That’s a course-centric model. In a performance ecosystem model, we should have a learner performance-centric view, where courses, communities, resources (e.g. job aids, media files), etc are aligned to their interests, roles, and tasks.   Really, performers should have custom portals!
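
To make the contrast concrete, here’s a minimal sketch (the roles and resources are hypothetical, and this isn’t a description of any particular product) of a performance-centric view: the portal assembles courses, community, and job aids around the performer’s role and current task, rather than sending them to a course catalogue.

# Hypothetical sketch: a performer-centric portal view keyed by role and task,
# pulling together formal courses, job aids, and community in one place.

PORTAL = {
    ("field_tech", "install"): {
        "job_aids": ["install checklist", "wiring diagram"],
        "courses": ["safety refresher (formal)"],
        "community": "field-tech forum",
    },
    ("field_tech", "troubleshoot"): {
        "job_aids": ["diagnostic flowchart"],
        "courses": [],
        "community": "field-tech forum",
    },
}

EMPTY = {"job_aids": [], "courses": [], "community": None}


def portal_view(role: str, task: str) -> dict:
    """Return the resources aligned to this performer's current role and task."""
    return PORTAL.get((role, task), EMPTY)


print(portal_view("field_tech", "troubleshoot"))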

Similarly, tracking performance should cross courses, use of resources, and community actions to look for opportunities to facilitate.   We want to find ways to assist people in using the environment successfully, to augment the elements of the ecosystem, and to align it to the performance needs.   This is a bigger problem, but an LMS isn’t going to solve it.

All this argues, as Jane suggests in a followup post on A Transition Path to the Future, that “It may be that you want to retain it in some cut-down form, or it may be that it is providing no real value at all, and it is a barrier to ‘learning'”.     Harold similarly says in his followup post on Identifying a Collaboration Platform, that you “minimize use of the LMS”.

You could make access to formal learning available through a portal, but I think there’s an argument to have a tool for those responsible for formal learning to manage it. However, it probably should not be a performer-facing interface.

The big problem I see is that it’s too easy for the learning function in an organization to take the easy path and focus on formal learning, and an LMS may be an enabler. If you follow the Pareto pattern Jay Cross (another ITA colleague) points to, where we spend 80% of our money on formal learning, the source of only 20% of the value people obtain in the workplace, you may have misplaced priorities.

It is likely that the first tool you should buy is a collaboration platform, as Harold’s suggesting, and LMS capability is an afterthought or addition, rather than the core need.   Truly, once people are up and performing, they need tools for accessing resources and each other. That infrastructure, like plumbing or electricity or air, is probably the most important (and potentially the best value) investment you can make.

Yes, you need to prepare the ground to seed, feed, weed, and breed the outcome, but the benefits are not only in the output, but also the demonstrable investment in employee value and success.   Let an LMS be a functional tool, not an enabler of mis-focused energy, and certainly not the core of your learning technology investment.   Look at the bigger picture, and budget accordingly.

Reflections on Web 2.0 Expo

4 May 2010 by Clark Leave a Comment

Last October I toured the expo associated with O’Reilly’s Web 2.0 Conference, and had the chance again this week. Somehow, it didn’t feel as vibrant. Still, there were some interesting developments.

A couple of companies I talked about last time were there, including Blue Kiwi (who I didn’t visit this time) and Vignette (who I did visit, unintentionally). I was talking to OpenText for quite a while before it came up that they’d acquired Vignette! Naturally, their DNA is content management, but user-generated content is content, after all. I also talked to Social Text, seeing if they supported user-generation of video (no).

Also, I’d been pinged by the CEO of MangoSpring via the social software for the conference (which didn’t obviously give me a way of pinging back!?!?), so I stopped by the booth for their product, Engage. Which has the predictable mix of capabilities and is (at least initially) totally internally focused.

The internal focus was refreshing, because much of the expo felt marketing focused, without much focus on the ClueTrain of a two-way authentic discussion.

I also was intrigued to see Microsoft showing the Fuse team rather than SharePoint. Fuse seemed to be largely developing internal social media capabilities (enhancing Outlook) and some developer interfaces, but apparently they also do some customer work. They were also touting a beta for accessing Microsoft Office docs collaboratively through Facebook. Trying to counter Google Docs, I reckon, but will Facebook appeal to the biz crowd?

One of the questions I was asking was about tracking the potential benefits of social media in the enterprise, particularly the outcomes of informal learning: rate of problem solving, products and services generated, etc. Engage has, like Spigit, an idea tool, but no one had a clear answer. Likely it will have to be developed for the group being supported (tho’ I’d like a more generic one if I could).

Nothing earth-shattering, some maturation, still a bit of hype but some more reasoned approaches overall.

Situated Learning Styles

2 May 2010 by Clark 1 Comment

I’ve been thrust back into learning styles, and saw an interesting relationship that bears repeating. Now you should know I’ve been highly critical of learning styles for at least a decade; not because I think there’s anything wrong with the concept, but because the instruments are flawed, and the implications for learning design are questionable.

This is not just my opinion; two separate research reports buttress these positions. A report from the UK surveyed 13 major and representative learning style instruments and found that all of them raised psychometric concerns. In the US, Hal Pashler led a team that concluded there was no evidence that adapting instruction to learning styles made a difference.

Yet it seems obvious that learners differ, and different learning pedagogies would affect different learners differently. Regardless, using the best media for the message and an enlightened learning pedagogy seems best.

Even the simple question of whether to match learners to their style, or challenge them against their style, has gone unanswered. One of the issues has been that much of the learning styles work has focused on cognitive aspects, yet cognitive science also recognizes two other areas, affective and conative: who you are as a learner, and your intentions to learn.

These two aspects, in particular the latter, could have an effect on learners. The affective side, typically considered to be your personality, is best characterized by the Big 5 work consolidating all the different personality characteristics into a unified set. It is easy to see that elements like openness and conscientiousness would have a positive effect on learning outcomes, and neuroticism could have a negative one.

Similarly, your intention to learn would have an impact. I typically think of this as your motivation to learn (whether from an intrinsic interest, a desire for achievement, or any other reason) moderated by any anxiety about learning (again, regardless of whether it comes from performance concerns, embarrassment, or another issue). It is this latter, in particular, that manifests in several instruments of interest. Naturally, I’m also sympathetic to learning skills, e.g. learning to learn and domain-independent skills.

In the UK study, two relatively highly regarded instruments were one coming from Entwistle’s program of research and another by Vermunt. Both result in four characterizations of learners: roughly, undirected learners, surface or reproducing learners, strategic or application learners, and meaning/deep learners. Nicely, the work by Entwistle and Vermunt is funded research and not proprietary, and their work, instruments, and prescriptions are open.

I admit that any time I see a four element model, I’m inclined to want to put it into a quadrant model. And the emergent model from these three (each of which does include issues of motivation as well as learner skills) very much reminds me of the Situational Leadership model.

The situational leadership model talks about characterizing individual employees and adapting your leadership (really, coaching) to their stage. It has two dimensions: whether the employee needs task support and whether they need motivational support. In short, you tell unmotivated and unskilled employees what to do while trying to motivate them, to get them to the stage where they’re willing but unskilled, and then you skill them. When they’re skilled but uncertain you support their confidence, and finally you just get out of their way!

This seems to me to be directly analogous to the learning models. If you choose two dimensions, needing learning-skills support and needing motivational support, you could come up with a nice two-way model that provides useful prescriptions for learning. In particular, it seems to me to address the issue of when you match a learner’s style and when you challenge it: you match until the learner is confident, and then you challenge, both to broaden their capabilities and to keep them engaged.

So, in keeping with the UK study’s finding that most purveyors of instruments sell them and have no reason to work together, I suppose what I ought to do is create a learning assessment instrument and associated prescriptions of my own, label the categories, brand it, and flog it. How about:

Buy: for those not into it, get them doing it
Try: for those willing, get them to develop their learning skills and support the value thereof
My: have them apply those learning skills to their goals and take ownership of the skills
Fly: set them free and resource them

I reckon I’ll have to call it the Quinnstrument!
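
Tongue in cheek, but the two-dimension mapping is easy to make concrete. Here’s a minimal sketch, with the quadrant assignments being my own reading of the descriptions above rather than anything validated, that classifies a learner by whether they need learning-skills support and whether they need motivational support:

# Hypothetical sketch of the quadrant model above: two support dimensions
# mapped to the four tongue-in-cheek labels. The mapping is one possible
# reading of the descriptions, not a validated instrument.

def quinnstrument(needs_skill_support: bool, needs_motivation_support: bool) -> str:
    """Classify a learner into one of the four quadrants."""
    if needs_skill_support and needs_motivation_support:
        return "Buy"  # not into it: get them doing it
    if needs_skill_support:
        return "Try"  # willing: develop their learning skills
    if needs_motivation_support:
        return "My"   # capable but not owning it: build ownership
    return "Fly"      # set them free and resource them


print(quinnstrument(needs_skill_support=True, needs_motivation_support=False))  # Try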

Ok, I’m not serious about flogging it, but I do think that we can start looking at learning skills, and the conative/intention to learn as important components of learning.   Would you buy that?

Reflections on ISPI 2010

23 April 2010 by Clark 2 Comments

Early in the year, I gave a presentation online to the Massachusetts chapter of ISPI (the International Society for Performance Improvement), and they rewarded me with a membership. A nice gesture, I figured, but little more (being a continent away). To my benefit, I was very wrong. The ISPI organization gave each chapter a free registration to their international conference, which happens to be in San Francisco this year (just a BART trip away), and I won! (While my proximity may have been a factor, I’m not going to do aught but be very grateful and feel that the Mass chapter can call on me anytime.) Given that I just won a copy of GPS software for my iPhone (after seemingly never winning anything), I reckon I should buy a lottery ticket!

Now, it probably helps to explain that I’ve been eager to attend an ISPI conference for quite a while. I’m quite attracted to the HPT (Human Performance Technology) framework, and I’m ever curious. I even considered submitting to the conference to get a chance to attend, but their submission processes seemed so onerous that I gave up. So, I was thrilled to get a chance to finally visit.

Having completed the experience, I have a few   reflections. I think there’s a lot to like about what they do, I have some very serious concerns, and I wish we could somehow reconcile the too-many organizations covering the same spaces.

I mentioned I’m a fan of the HPT approach. There are a couple of things to like, including that they start by analyzing the performance gaps and causes, and are willing to consider approaches other than courses. They also emphasize a systems approach, which I can really get behind. There were some worrying signs, however.

For instance, I attended a talk on Communities of Practice, but was dismayed to hear discussion of monitoring, managing, and controlling instead of nurturing and facilitation. While there may need to be management buy-in, it comes from emergent value, not exec-dictated outcomes the group should achieve!

Another presentation talked about the Control System Model of Management. Maybe it was my mistake to come to OD presentations at ISPI, but it’s this area I’m interested in via my involvement in the Internet Time Alliance. There did end up being transparency and contribution, but it was almost brought in by stealth, as opposed to being an explicit declaration of culture.

On the other hand, there were some positive signs.   They had enlightened keynotes, e.g. one talking about Appreciative Inquiry and positive psychology that I found inspiring, and I attended another on improv focusing on accepting the ‘offer’ in a conversation.   And, of course, Thiagi and others talked about story and games.

One surprise was that the technology awareness seems low for a group with technology in their prized approach. Some noticed the lack of tweets from the conference, and there wasn’t much of an overall technology presence (I saw no other iPads, for instance). I challenged one of the editors of their handbook, Volume 1 (which I previously complained didn’t have enough on informal learning and engagement), about the lack of coverage of mobile learning, and he opined that mobile was just a “delivery channel”. To be fair, he’s a very smart and engaging character, and when I mentioned context-sensitivity, he was quite open to the idea.

I attended Guy Wallace‘s presentation on Enterprise Process Performance Improvement, and liked the structure, but reckon that it might be harder to follow in more knowledge-oriented industries. It was a pleasure to finally meet Guy, and we had a delightful conversation on these issues and more, with some concurrence on the thoughts above. Given that he was a multiple honoree at the conference, there is clearly hope for the organization to broaden their focus.

Overall, I had mixed feelings. While I like their rigor and research base, and they are incorporating some of the newer positive approaches, it appears to me that they’re still very much mired in the old hierarchical style of management.     Given the small sample, I reckon you should determine for yourself. I can clearly say I was grateful for the experience, and had some great conversations, heard some good presentations, and learned. What more can you ask for?

Designing for an uncertain world

17 April 2010 by Clark 9 Comments

My problem with the formal models of instructional design (e.g. ADDIE for process), is that most are based upon a flawed premise.   The premise is that the world is predictable and understandable, so that we can capture the ‘right’ behavior and train it.   Which, I think, is a naive assumption, at least in this day and age.   So why do I think so, and what do I think we can (and should) do about it?   (Note: I let my argument lead where it must, and find I go quite beyond my intended suggestion of a broader learning design.   Fair warning!)

The world is inherently chaotic. At a finite granularity, it is reasonably predictable, but overall it’s chaotic. Dave Snowden’s Cynefin model, recommending various approaches depending on the relative complexity of the situation, provides a top-level strategy for action, but doesn’t provide predictions about how to support learning, and I think we need more.   However, most of our design models are predicated on knowing what we need people to do, and developing learning to deliver that capability.   Which is wrong; if we can define it at that fine a granularity, we bloody well ought to automate it.   Why have people do rote things?

It’s a bad idea to have people do rote things, because they don’t, can’t do them well.   It’s in the nature of our cognitive architecture to have some randomness.   And it’s beneath us to be trained to do something repetitive, to do something that doesn’t respect and take advantage of the great capacity of our brains.   Instead, we should be doing pattern-matching and decision-making.   Now, there are levels of this, and we should match the performer to the task, but as I heard Barry Schwartz eloquently say recently, even the most mundane seeming jobs require some real decision making, and in many cases that’s not within the purview of   training.

And, top-down rigid structures with one person doing the thinking for many will no longer work.   Businesses increasingly complexify things but that eventually fails, as Clay Shirky has noted, and   adaptive approaches are likely to be more fruitful, as Harold Jarche has pointed out.   People are going to be far better equipped to deal with unpredictable change if they have internalized a set of organizational values and a powerful set of models to apply than by any possible amount of rote training.

Now think about learning design. Starting with the objectives, the notion of Mager, where you define the context and performance, is getting more difficult. Increasingly you have more complicated nuances that you can’t anticipate. Our products and services are more complex, and yet we need a more seamless execution. Try, for example, debugging problems between a hardware device and a network service provider; if you’re trying to provide a total customer experience, the old “it’s the other guy’s fault” just isn’t going to cut it. Yes, we could make our objectives higher and higher, e.g. “recognize and solve the customer’s problem in a contextually appropriate way”, but I think we’re getting out of the realms of training.

We are seeing richer design models. Van Merrienboer’s 4 Component ID, for instance, breaks learning up into the knowledge we need, and the complex problems we need to apply that knowledge to.   David Metcalf talks about learning theory mashups as ways to incorporate new technologies, which is, at least, a good interim step and possibly the necessary approach. Still, I’m looking for something deeper.   I want to find a curriculum that focuses on dealing with ambiguity, helping us bring models and an iterative and collaborative approach.   A pedagogy that looks at slow development over time and rich and engaging experience.   And a design process that recognizes how we use tools and work with others in the world as a part of a larger vision of cognition, problem-solving, and design.

We have to look at the entire performance ecosystem as the context, including the technology affordances, learning culture, organizational goals, and the immediate context.   We have to look at the learner, not stopping at their knowledge and experience, but also including their passions, who they can connect to, their current context (including technology, location, current activity), and goals.   And then we need to find a way to suggest, as Wayne Hodgins would have it, the right stuff, e.g. the right content or capability, at the right time, in the right way, …

An appropriate approach has to integrate theories as disparate as distributed cognition, the appropriateness of spaced practice, minimalism, and more.   We probably need to start iteratively, with the long term development of learning, and similarly opportunistic performance support, and then see how we intermingle those together.

Overall, however, this is how we go beyond intervention to augmentation.   Clive Thompson, in a recent Wired column, draws from a recent “man+computer” chess competition to conclude “serious cognitive advantages accrue to those who are best at thinking alongside machines”.   We can accessorize our brains, but I’m wanting to look at the other side, how can we systematically support people to be effectively supported by machines?   That’s a different twist on technology support for performance, and one that requires thinking about what the technology can do, but also how we develop people to be able to take advantage.   A mutual accommodation will happen, but just as with learning to learn, we shouldn’t assume ‘ability to perform with technology augmentation’.   We need to design the technology/human system to work together, and develop both so that the overall system is equipped to work in an uncertain world.

I realize I’ve gone quite beyond just instructional design.   At this point, I don’t even have a label for what I’m talking about, but I do think that the argument that has emerged (admittedly, flowing out from somewhere that wasn’t consciously accessible until it appeared on the page!) is food for thought.   I welcome your reactions, as I contemplate mine.

Mea Culpa and Rethink on Pre-tests

31 March 2010 by Clark 9 Comments

Well, it turns out I was wrong.   I like to believe it doesn’t happen very often, but I do have to acknowledge it when I am. Let me start from the worst, and then qualify it all over the place ;).

In the latest Scientific American Mind, there is an article on The Pluses of Getting It Wrong (first couple paragraphs available here). In short, people remember better if they first try to access knowledge that they don’t have, before they are presented with the to-be-learned knowledge.   That argues that pre-tests, which I previously claimed are learner-abusive, may have real learning benefits.   This result is new, but apparently real.   You empirically have better recall for knowledge if you tried to access it, even though you know you don’t have it.   My cognitive science-based explanation is that the search in some ways exercises appropriate associations that make the subsequent knowledge stick better.

Now, I could try to argue against the relevance of the phenomenon, as it’s focused on knowledge recovery which is not applied, and may still lead to ‘inert knowledge’ (where you may ‘know it’, but you don’t activate it in relevant situations).   However, it is plausible that this is true for application as well.   Roger Schank has argued that you have to fail before you can learn. (Certainly I reckon that’s true with overconfident learners ;). That is, if you try to solve a problem that you aren’t prepared for, the learning outcome may be better than if you don’t.   Yet I don’t think it’s useful to deny this result, and instead I want to think about what it might mean for still creating a non-aversive learner experience.

I still believe that giving learners a test they know they can’t pass at best seems to waste their time, and at worst may actually cause some negative affect like lack of self-esteem.   Obviously, we could and should let them know that we are doing this for the larger picture learning outcome.   But can we make the experience more ‘positive’ and engaging?

I think we can do more. I think we can put the mental ‘reach’ in the form of problem-based learning (this may explain the effectiveness of PBL), and ask learners to solve the problem. That is, put the ‘task’ in a context where the learner can both recognize the relevance of the problem and is interested in it.   Once learners recognize they can’t solve the problem, they’re motivated to learn the material.   And they should be better prepared mentally for the learning, according to this result. While it *is*, in a sense, a pre-test, it’s one that is connected to the world, is applied, and consequently is less aversive.   And, yes, you should still ensure that it is known that this is done to achieve a better outcome.

Now, I can’t guarantee that the results found for knowledge generalize to application, but I do know that, by and large, rote knowledge is not going to be the competitive edge for organizations. So I’d rather err on the side of caution and have the learners do the mental ‘reach’ for the answer, but I want it to be as close as possible to the reach they’ll do when they really are facing a problem. If there is a genuine need for rote knowledge (and please, do ensure there really is, don’t just take the client’s or SME’s word for it), then you may want to take this approach for that knowledge too, but I’m (still) pushing for knowledge application, even in our pre-tests.

So, I think there’s a revision to the type of introduction you use to the content: present the problem, or the type of problem, learners will be asked to solve later, and encourage them to have an initial go at it before the concept, examples, etc. are presented. It’s a pre-test, but of a more meaningful and engaging kind. I’d love to see any experimental investigation of this, by the way.
