I wrote a screed on this topic over at eLearn Mag, which I highly recommend. In short:
Better design takes no more time* and yields better outcomes
(*after an initial transition period).
I look forward to your thoughts!
A post by Ellen Wagner got me thinking about what I’m really looking for in an interactivity solution. She was bringing some clarity to the Adobe Flash – HTML 5 debate, pointing out that HTML 5 is not yet a standard, and emphasizing some moves by Adobe to make Flash more open. Whether I agree or not, I realized my desire is not to choose one or the other, but instead it’s to find a solution!
The opportunity I’ve talked about before is a channel for publishers to move to a new era. The title of my blog is learnlets, based upon a claim I made almost two decades ago: in the future there will be lots of small interactive learning experiences (learnlets) that will teach you anything you want to know, including how to make small interactive learning experiences. That’s still a dream I have, but we’re now capable of realizing it, and there are some nuances that come from thinking about it in the current context.
Publishers produce books, but with the technology augments that they produce (ancillary or companion sites), they have most of the components needed to put meaningful problems (read: scenarios) in the mix and resource around those to create real learning experiences. With a market channel for those learning experiences (something like an app store), where it could go out to anyone’s device (tablets would be ideal), individuals could develop their own learning path, and for formal education we’d remove the burden of books (it pains me to watch my kids lug their own weight in books off to school!) and lift the learning.
What’s necessary, besides the devices and the market (and we’re getting those), is a meaningful interactivity standard. Flash has had performance issues, and HTML 5 may not be quite ready for prime time (and I have not yet been convinced of its ability to handle simulation-driven interactions). I don’t really care which one ends up ‘winning’; I just want a standard that allows me to deliver static (e.g. text, graphics), dynamic (video/audio/animations), and interactive content in a package that I can download and interact with! It doesn’t have to report back, as we’d likely have other ways to assess outcomes (though reporting wouldn’t be a bad thing).
I think that if we can lift our learning design to match the quality of our devices, and have the market to deliver those learning experiences where and when desired, we’ll have the opportunity to lift ourselves to another level.
I like jogging (ok, more like plodding), as it’s a time I can queue up some questions to think about and then take them on the road to get some insights. In addition to some great thoughts on my presentation for the Innovations in eLearning Symposium, and my workshop at the mLearn Conference, I thought about LMS and social media.
I was reflecting on what I liked about Q2Learning’s model for system support, where a variety of things can be aggregated to achieve a competency: a course, a meeting, a project, etc. It occurred to me to think that if someone can decide what goes together to create a course, why shouldn’t the community itself decide?
It goes further: I got to design my own undergraduate major. I took a bunch of things I’d done, and some things I thought augmented those activities to create a coherent body of study on what was then termed Computer-Based Education (UCSD didn’t have a program in it back then), and submitted it as a proposal. The Provost vetted it, and I was on my way. Isn’t that a model that could be replicated? Can’t we have folks propose their course of study?
I started thinking about networks moving toward becoming communities by defining component skills and proposed paths for achieving those skills, while also supporting proposals for other paths. Really, it’s about the community deciding how to help individuals move to the center, but with some explicit steps rather than implicit ones.
The learning organization role would be then one of facilitating this process of developing roles, competencies and curricula. It would certainly be a way of addressing the decreasing half-life of knowledge, by having it continually updated by the community in which those roles and skills made sense.
In this way, a community would co-create its learning paths in a dynamic interchange between goals and tasks. And an LMS would then be a networking tool with the ability to manage the discussions, resources, and paths to competency, as well as a learner’s record. It would be more organic and coupled in a robust feedback loop, not externalized, abstracted, filtered, and returned in ways that may diminish the value.
The learning organization would be dispersed as members of the constituent communities, helping develop the components of the competency path in concert with the members, adding in their value and nurturing development.
The thinking hasn’t gone far beyond this yet, but I have to say that it seems to approach an appropriate blend between the value of bringing in a real understanding of knowledge (the role of a learning organization) and the dynamic co-development of understanding that characterizes a community. Does this make sense to you?
Mobile is coming at us hard and fast. Announcements of changes in the marketplace (HP acquiring Palm in just the past week), and new devices (Google passing on the Nexus One to tout the next gen system), are coming fast and furious. The devices are out there (mobile is outselling the desktop) and the workforce is increasingly mobile (72%, according to IDC!). The question is, how do you get on top of taking advantage of mobile devices for organizational (and personal) learning?
I confess, I’m a design guy. I like to look at problems and create solutions. And I like to think that 30-odd years of practice and reflection on learning design (investigating myriad design practices, looking at design models, etc) provides some reason to think I’ve developed a wee bit of expertise on the topic.
I’m also a geek and I love tech toys. I’ve also been extremely enamored of the potential for mobile learning since Marcia Conner asked me to write a little screed on the topic 10 years ago now. It got me thinking in ways that haven’t stopped, so that I’ve been thinking and doing mobile for the past decade, and am awaiting feedback on the draft of a book on the topic for Pfeiffer.
Mobile is the killer app for deep reasons, and not surprisingly, my focus is on mobile design. As I say “if you get the design right, there are lots of ways to implement it; if you don’t get the design right it doesn’t matter how you implement it!” Design is the key. There are two things I’ve found out about mobile design:
You’ve heard me talk here before about some, e.g. the 4 C’s.
Naturally, it’s best if you work with these models a bit to really internalize them and see how they guide new opportunities to meet learning and performance needs for your folks. That’s why I’m pleased that I have the chance to offer a mobile design workshop at the eLearning Guild’s mobile learning conference, mLearnCon.
This is going to be an interactive and fun way to incorporate mobile learning into your repertoire of solution tools. Not to worry, we’ll contextualize design as well, talking about the devices, the market trends, the tools, and the organizational issues, but the focus is going to be, as it should, on design. It’s also in one of my favorite towns: San Diego, and of course there’s the rest of the conference to put the icing on the cake. I hope I’ll see you there, and get a chance to work with you on this exciting new area.
I’ve been thrust back into learning styles, and saw an interesting relationship that bears repeating. Now you should know I’ve been highly critical of learning styles for at least a decade; not because I think there’s anything wrong with the concept, but because the instruments are flawed, and the implications for learning design are questionable.
This is not just my opinion; two separate research reports buttress these positions. A report from the UK surveyed 13 major and representative learning style instruments and found all with some psychometric questions. In the US, Hal Pashler led a team that concluded that there was no evidence that adapting instruction to learning styles made a difference.
Yet it seems obvious that learners differ, and different learning pedagogies would affect different learners differently. Regardless, using the best media for the message and an enlightened learning pedagogy seems best.
Even the simple question of whether to match learners to their style, or challenge them against their style, has gone unanswered. One of the issues has been that much of the learning styles work has focused on cognitive aspects, yet cognitive science also recognizes two other areas: affective and conative, that is, who you are as a learner and your intention to learn.
These two aspects, in particular the latter, could have an effect on learners. Affective, typically considered to be your personality, is best characterized by the Big 5 work to consolidate all the different personality characteristics into a unified set. It is easy to see that elements like openness and conscientiousness would have a positive effect on learning outcomes, and neuroticism could have a negative one.
Similarly, your intention to learn would have an impact. I typically think of this as your motivation to learn (whether from an intrinsic interest, a desire for achievement, or any other reason) moderated by any anxiety about learning (again, regardless of whether it stems from performance concerns, embarrassment, or another issue). It is this latter, in particular, that manifests in several instruments of interest. Naturally, I’m also sympathetic to learning skills, e.g. learning to learn and domain-independent skills.
In the UK study, two relatively highly regarded instruments were those coming from Entwistle’s program of research, and another by Vermunt. Both result in four characterizations of learners: roughly undirected learners, surface or reproducing learners, strategic or application learners, and meaning/deep learners. Nicely, the work by Entwistle and Vermunt is funded research and not proprietary, and their work, instruments, and prescriptions are open.
I admit that any time I see a four element model, I’m inclined to want to put it into a quadrant model. And the emergent model from these two (each of which does include issues of motivation as well as learner skills) very much reminds me of the Situational Leadership model.
The situational leadership model talks about characterizing individual employees and adapting your leadership (really, coaching) to their stage. They have two dimensions: whether the learner needs task support and whether they need motivational support. In short, you tell unmotivated and unskilled employees what to do, but try to motivate them to get them to the stage where they’re willing but unskilled and skill them. When they’re still skilled but uncertain you support their confidence, and finally you just get out of their way!
This seems to me to be directly analogous to the learning models. If you choose two dimensions, needing learning skills support and needing motivational support, you could come up with a nice two-way model that provides useful prescriptions for learning. In particular, it seems to me to address the issue of when you match a learner’s style and when you challenge it: you match until the learner is confident, and then you challenge to both broaden their capabilities and keep them engaged.
So, keeping with the UK study’s finding that most purveyors of instruments sell them and have no reason to work together, I suppose what I ought to do is create a learning assessment instrument and associated prescriptions of my own, label the categories, brand it, and flog it. How about:
Buy: for those not into it, get them doing it
Try: for those willing, get them to develop their learning skills and support the value thereof
My: have them apply those learning skills to their goals and take ownership of the skills
Fly: set them free and resource them
I reckon I’ll have to call it the Quinnstrument!
Ok, I’m not serious about flogging it, but I do think that we can start looking at learning skills, and the conative/intention to learn as important components of learning. Would you buy that?
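For the fun of it, the quadrant reading of those four categories can be sketched as a tiny decision rule. This is purely my illustration, treating the two dimensions as ‘needs learning-skills support’ and ‘needs motivational support’; the function name and flags are invented for the sketch:

```python
def prescribe(needs_skill_support: bool, needs_motivation_support: bool) -> str:
    """Map a learner's two support needs onto the four (tongue-in-cheek) categories."""
    if needs_skill_support and needs_motivation_support:
        return "Buy"   # not into it: get them doing it
    if needs_skill_support:
        return "Try"   # willing: develop their learning skills
    if needs_motivation_support:
        return "My"    # skilled but uncertain: support ownership and confidence
    return "Fly"       # set them free and resource them
```

So a learner needing both kinds of support lands in ‘Buy’, and one needing neither is ready to ‘Fly’.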
by Clark 2 Comments
Early in the year, I gave a presentation online to the Massachusetts chapter of ISPI (the International Society for Performance Improvement), and they rewarded me with a membership. A nice gesture, I figured, but little more (only a continent away). To my benefit, I was very wrong. The ISPI organization gave each chapter a free registration to their international conference, which happens to be in San Francisco this year (just a BART trip away), and I won! (While my proximity may have been a factor, I’m not going to do aught but be very grateful and feel that the Mass chapter can call on me anytime.) Given that I just won a copy of GPS software for my iPhone (after seemingly never winning anything), I reckon I should buy a lottery ticket!
Now, it probably helps to explain that I’ve been eager to attend an ISPI conference for quite a while. I’m quite attracted to the HPT (Human Performance Technology) framework, and I’m ever curious. I even considered submitting to the conference to get a chance to attend, but their submission processes seemed so onerous that I gave up. So, I was thrilled to get a chance to finally visit.
Having completed the experience, I have a few reflections. I think there’s a lot to like about what they do, I have some very serious concerns, and I wish we could somehow reconcile the too-many organizations covering the same spaces.
I mentioned I’m a fan of the HPT approach. There are a couple of things to like, including that they start by analyzing the performance gaps and causes, and are willing to consider approaches other than courses. They also emphasize a systems approach, which I can really get behind. There were some worrying signs, however.
For instance, I attended a talk on Communities of Practice, but was dismayed to hear discussion of monitoring, managing, and controlling instead of nurturing and facilitation. While there may need to be management buy-in, it comes from emergent value, not exec-dictated outcomes the group should achieve!
Another presentation talked about the Control System Model of Management. Maybe it was my mistake to attend OD presentations at ISPI, but it’s this area I’m interested in via my involvement in the Internet Time Alliance. There did end up being transparency and contribution, but it was almost brought in by stealth, rather than as an explicit declaration of culture.
On the other hand, there were some positive signs. They had enlightened keynotes, e.g. one talking about Appreciative Inquiry and positive psychology that I found inspiring, and I attended another on improv focusing on accepting the ‘offer’ in a conversation. And, of course, Thiagi and others talked about story and games.
One surprise was that the technology awareness seems low for a group with technology in their prized approach. Some noticed the lack of tweets from the conference, and there wasn’t much of an overall technology presence (I saw no other iPads, for instance). I challenged one of the editors of their handbook, Volume 1 (which I previously complained didn’t have enough on informal learning and engagement) about the lack of coverage of mobile learning, and he opined that mobile was just a “delivery channel”. To be fair, he’s a very smart and engaging character, and when I mentioned context-sensitivity, he was quite open to the idea.
I attended Guy Wallace‘s presentation on Enterprise Process Performance Improvement, and liked the structure, but reckon that it might be harder to follow in more knowledge-oriented industries. It was a pleasure to finally meet Guy, and we had a delightful conversation on these issues and more, with some concurrence on the thoughts above. Given that he was a multiple honoree at the conference, there is clearly hope for the organization to broaden its focus.
Overall, I had mixed feelings. While I like their rigor and research base, and they are incorporating some of the newer positive approaches, it appears to me that they’re still very much mired in the old hierarchical style of management. Given the small sample, I reckon you should determine for yourself. I can clearly say I was grateful for the experience, and had some great conversations, heard some good presentations, and learned. What more can you ask for?
My problem with the formal models of instructional design (e.g. ADDIE for process), is that most are based upon a flawed premise. The premise is that the world is predictable and understandable, so that we can capture the ‘right’ behavior and train it. Which, I think, is a naive assumption, at least in this day and age. So why do I think so, and what do I think we can (and should) do about it? (Note: I let my argument lead where it must, and find I go quite beyond my intended suggestion of a broader learning design. Fair warning!)
The world is inherently chaotic. At a finite granularity, it is reasonably predictable, but overall it’s chaotic. Dave Snowden’s Cynefin model, recommending various approaches depending on the relative complexity of the situation, provides a top-level strategy for action, but doesn’t provide predictions about how to support learning, and I think we need more. However, most of our design models are predicated on knowing what we need people to do, and developing learning to deliver that capability. Which is wrong; if we can define it at that fine a granularity, we bloody well ought to automate it. Why have people do rote things?
It’s a bad idea to have people do rote things, because they don’t, and can’t, do them well. It’s in the nature of our cognitive architecture to have some randomness. And it’s beneath us to be trained to do something repetitive, something that doesn’t respect and take advantage of the great capacity of our brains. Instead, we should be doing pattern-matching and decision-making. Now, there are levels of this, and we should match the performer to the task, but as I heard Barry Schwartz eloquently say recently, even the most mundane-seeming jobs require some real decision making, and in many cases that’s not within the purview of training.
And, top-down rigid structures with one person doing the thinking for many will no longer work. Businesses increasingly complexify things but that eventually fails, as Clay Shirky has noted, and adaptive approaches are likely to be more fruitful, as Harold Jarche has pointed out. People are going to be far better equipped to deal with unpredictable change if they have internalized a set of organizational values and a powerful set of models to apply than by any possible amount of rote training.
Now think about learning design. Starting with the objectives, the notion of Mager, where you define the context and performance, is getting more difficult. Increasingly you have more complicated nuances that you can’t anticipate. Our products and services are more complex, and yet we need more seamless execution. Consider, for example, trying to debug problems between a hardware device and a network service provider; if you’re trying to provide a total customer experience, the old “it’s the other guy’s fault” just isn’t going to cut it. Yes, we could make our objectives higher and higher, e.g. “recognize and solve the customer’s problem in a contextually appropriate way”, but I think we’re getting out of the realms of training.
We are seeing richer design models. Van Merrienboer’s 4 Component ID, for instance, breaks learning up into the knowledge we need, and the complex problems we need to apply that knowledge to. David Metcalf talks about learning theory mashups as ways to incorporate new technologies, which is, at least, a good interim step and possibly the necessary approach. Still, I’m looking for something deeper. I want to find a curriculum that focuses on dealing with ambiguity, helping us bring models and an iterative and collaborative approach. A pedagogy that looks at slow development over time and rich and engaging experience. And a design process that recognizes how we use tools and work with others in the world as a part of a larger vision of cognition, problem-solving, and design.
We have to look at the entire performance ecosystem as the context, including the technology affordances, learning culture, organizational goals, and the immediate context. We have to look at the learner, not stopping at their knowledge and experience, but also including their passions, who they can connect to, their current context (including technology, location, current activity), and goals. And then we need to find a way to suggest, as Wayne Hodgins would have it, the right stuff, e.g. the right content or capability, at the right time, in the right way, …
An appropriate approach has to integrate theories as disparate as distributed cognition, the appropriateness of spaced practice, minimalism, and more. We probably need to start iteratively, with the long term development of learning, and similarly opportunistic performance support, and then see how we intermingle those together.
Overall, however, this is how we go beyond intervention to augmentation. Clive Thompson, in a recent Wired column, draws from a recent “man+computer” chess competition to conclude “serious cognitive advantages accrue to those who are best at thinking alongside machines”. We can accessorize our brains, but I’m wanting to look at the other side, how can we systematically support people to be effectively supported by machines? That’s a different twist on technology support for performance, and one that requires thinking about what the technology can do, but also how we develop people to be able to take advantage. A mutual accommodation will happen, but just as with learning to learn, we shouldn’t assume ‘ability to perform with technology augmentation’. We need to design the technology/human system to work together, and develop both so that the overall system is equipped to work in an uncertain world.
I realize I’ve gone quite beyond just instructional design. At this point, I don’t even have a label for what I’m talking about, but I do think that the argument that has emerged (admittedly, flowing out from somewhere that wasn’t consciously accessible until it appeared on the page!) is food for thought. I welcome your reactions, as I contemplate mine.
I’m usually a late adopter of new technology, largely because I’m frugal. I don’t like to spend money until I know just what the value is that I will be getting. So, when I heard about the iPad, I wasn’t one of those who signed up in advance. Which isn’t to say that I didn’t have a case of techno-lust; I am a geek, a boy who loves his toys. And, after all, I am on the stump about mobile learning.
So, I followed the developments closely. I looked at the specs, and I tracked the software app announcements. And I reflected a lot about the potential learning applications of this new platform.
The decision
What I didn’t expect was to get transfixed by a new possibility: that this device could provide a new capability to me, that of a laptop replacement. When I travel, I use my laptop to work; I write, I diagram, I create presentations, and catch up on email. The iPad, however, was announced as coming with (or having available) software for word processing (Pages), diagramming (OmniGraffle), presentations (Keynote), and email (Mail). It would also, when in range of WiFi, do standard web stuff like browse the web and use Twitter.
It began to look like maybe this device did have a justifiable case, such that m’lady was agreeable. There were some considerations: did I need the 3G version, which would come later, and how much memory (16, 32, or 64 GB)? Given that I already have an iPhone, which would meet immediate email, twitter, and/or web needs when not in WiFi range, I figured I could go with the first one coming out. However, my iPhone at 16 GB is already half full, and I’d likely be adding more apps and documents, so I thought I better go for 32 GB (I also figured with aggressive memory management, I could skip the 64 GB version). So my decision was made, with one problem.
The purchase
I hadn’t signed up for delivery, and now that deadline was being pushed out. And, I had a trip planned before the next shipping date. Now that I’d decided I could use it as a laptop substitute, I already wanted it. I wasn’t frantic, and I hate to wait in lines, so I wasn’t going to queue up at the Apple store. However, I did discover that other Apple retailers would have them, particularly BestBuy, which has a nearby store. So my plan was made: I would swing by there just around opening time, and if there wasn’t a huge queue, I’d see if they had any left. I wasn’t particularly optimistic.
So, after breakfast on the 3rd, I headed out in time to get there 5 minutes before they opened, and while there was a small queue, it wasn’t too bad. I checked it out, and a guy told me that they’d been handing out tickets for the iPad, and they seemed to have plenty. They didn’t come out again before the doors opened, but I knew I’d have my answer, one way or another, in a few minutes. And lo and behold, they had stacks of iPads. My transaction was complete within 7 minutes of the door opening, and I had my new device! And once I tweeted this outcome, I very shortly thereafter had several requests for this blog post! (The Apple lady in the BestBuy said the same thing happened with the iPhone releases: queues at the Apple store, and walk-in service at the BestBuy; now you, good reader, are in on the secret.)
I also had to accessorize. BestBuy didn’t have the case, but I got a neoprene one with a pocket. Then Apple did have the case when I called, so I swung by late in the day. The place was packed but they also had iPads left! I also got the display adapter so I can present from it, and AppleCare.
The experience
Now it was time to play. I got it home, connected it to my Mac, and started setting it up. One almost immediate surprise was that it wasn’t charging. It turns out that’s not uncommon: you need a relatively powerful USB port to both power and synch, and I guess my old laptop isn’t up to the job. However, it was fully charged, and I got 2 days of intermittent use before it got close to needing a charge. Still, it’s a bit of a pain to swap the cable between synching and charging… On the upside, it was recognized right away and synched just fine. I made a mistake and synched everything (all photos, music, etc.) when I really just wanted the limited set I had on the iPhone, but I was able to rectify that.
And then it was time for software. I’m as frugal with software as with hardware, so while I took some interesting free new iPad apps (solitaire, I confess, as well as weather and news, a calculator, etc.), I was more picky with paid software. I did get Pages and Keynote (not Numbers, as I’m not a spreadsheet jockey, tho’ I may well get it if I start getting a lot of Excel sheets). I also got a PDF reader, GoodReader, as I had an immediate need. So far I’ve held off of OmniGraffle (which I *love* on the Mac), as it’s surprisingly expensive and the first reviews suggested it may have speed and interface problems. I’ll keep tracking the reviews, as I have faith in the company. I’m looking for a good note taking option that will handle both text and sketches and/or quick diagrams, but the one or two examples I’ve found have had mixed reviews.
Finally, the ultimate question is how does it work. And the answer is, very very well. The battery life is almost phenomenal. The display is superb. The overall user experience is compelling. Tweetdeck (Twitter), Safari (browser), and Mail look great and work effectively. And it’s pretty functional; I’m touch-typing this with the onscreen keyboard in Pages on the plane, with the case folded to hold the iPad in landscape, tilted so I can use the onscreen keyboard and still see the screen.
There’s still some software to come (diagramming, note-taking with sketches), and accessories (maybe bluetooth keyboard), but it’s working well for me already. (Update: it was a battle to get this posted!) I’m on a short 3 day trip with just the iPad and iPhone, laptop-less, and we’ll see how it goes. (I do have a pad of paper and a pen. :) Stay tuned!
Well, it turns out I was wrong. I like to believe it doesn’t happen very often, but I do have to acknowledge it when I am. Let me start from the worst, and then qualify it all over the place ;).
In the latest Scientific American Mind, there is an article on The Pluses of Getting It Wrong (first couple paragraphs available here). In short, people remember better if they first try to access knowledge that they don’t have, before they are presented with the to-be-learned knowledge. That argues that pre-tests, which I previously claimed are learner-abusive, may have real learning benefits. This result is new, but apparently real. You empirically have better recall for knowledge if you tried to access it, even though you know you don’t have it. My cognitive science-based explanation is that the search in some ways exercises appropriate associations that make the subsequent knowledge stick better.
Now, I could try to argue against the relevance of the phenomenon, as it’s focused on knowledge recovery which is not applied, and may still lead to ‘inert knowledge’ (where you may ‘know it’, but you don’t activate it in relevant situations). However, it is plausible that this is true for application as well. Roger Schank has argued that you have to fail before you can learn. (Certainly I reckon that’s true with overconfident learners ;). That is, if you try to solve a problem that you aren’t prepared for, the learning outcome may be better than if you don’t. Yet I don’t think it’s useful to deny this result, and instead I want to think about what it might mean for still creating a non-aversive learner experience.
I still believe that giving learners a test they know they can’t pass at best seems to waste their time, and at worst may actually cause some negative affect like lack of self-esteem. Obviously, we could and should let them know that we are doing this for the larger picture learning outcome. But can we make the experience more ‘positive’ and engaging?
I think we can do more. I think we can put the mental ‘reach’ in the form of problem-based learning (this may explain the effectiveness of PBL), and ask learners to solve the problem. That is, put the ‘task’ in a context where the learner can both recognize the relevance of the problem and is interested in it. Once learners recognize they can’t solve the problem, they’re motivated to learn the material. And they should be better prepared mentally for the learning, according to this result. While it *is*, in a sense, a pre-test, it’s one that is connected to the world, is applied, and consequently is less aversive. And, yes, you should still ensure that it is known that this is done to achieve a better outcome.
Now, I can’t guarantee that the results found for knowledge generalize to application, but I do know that, by and large, rote knowledge is not going to be the competitive edge for organizations. So I’d rather err on the side of caution and have the learners do the mental ‘reach’ for the answer, but I do want it to be as close as possible to the reach they’ll do when they really are facing a problem. If there is a real need for rote knowledge (and please, do ensure there really is; don’t just take the client’s or SME’s word for it), then you may want to take this approach for that knowledge too, but I’m (still) pushing for knowledge application, even in our pre-tests.
So, I think there's a revision to make to the type of introduction you use for the content: present the problem, or the type of problem, learners will be asked to solve later, and encourage them to have an initial go at it before the concepts, examples, etc. are presented. It's a pre-test, but of a more meaningful and engaging kind. I'd love to see some experimental investigation of this, by the way.
At the eLearning Guild's Learning Solutions conference this week, Jean Marripodi convinced Steve Acheson and me to host a debate on the viability of ADDIE in her ID Zone. While both of us can see both sides of ADDIE, Steve uses it, so I was left to take the contrary position (aligning well with my 'genial malcontent' nature).
This was not a serious debate, in the mold of the Oxford Debating Society or anything; instead we'd agreed that we were going to go for controversy and fun in equal measures. This was about making an entertaining and informative event, not a scientific exploration. And in that, I think we succeeded (you can review the tweet stream from attendees and some subsequent conversation). Rather than recap the debate (Gina Minks has a short piece in her overall summary of the day), I'll recap the points:
Steve showed how he does take responsibility, putting evaluation in the middle and using it more flexibly. He uses Dick & Carey’s model to start with, ensuring that a course is the right solution. The fact that the initial ‘course, job aid, other problem’ analysis is not included, however, is a concern.
It also came out that having a process is a powerful argument against those who might try to press unreasonable production constraints on you. If a VP wants it done in an unreasonable time frame, or doesn't want to allow you to question the assumption that a course is needed, you have grounds to push back ("it's in our process"), particularly in a process-driven organization. You do want a process.
The obvious question came up about what would be used in place of ADDIE. I believe that ADDIE as a checklist would be a nice accompaniment to both a more encompassing and a more learning-centric approach. For the former, I showed the HPT model as a representation of a design approach considering courses as part of a larger picture. For the latter, I suggested that a focus on learning experience design would be appropriate.
Using an HPT-like approach first, to ensure that a course is the right solution, is necessary. Then, I’d focus on working backwards from the needed change (Michael Allen talked about using sketches as lightweight prototypes at the conference, and first drawing the last activity the user engaged in) thinking about creating a learning experience that develops the learner’s capability. Finally, I’d be inclined to use ADDIE as a checklist to ensure all the important components are considered, once I’d drafted an initial design (or several). ADDIE certainly may be useful in taking that design forward, through development, implementation and evaluation.
I think ADDIE falls apart most in the initial analysis, which is not broad enough, and in the design process: e.g. most ID processes neglect the emotional side of the equation, despite the availability of Keller's ARCS model (which wasn't even in the TIP database!). Good users, like Steve, take responsibility for reframing it practically, but I'm not confident that even a majority of ADDIE use is so enabled. Consequently, I worry that ADDIE does more harm than good. It ensures the minimum, but it essentially prevents inspiration.
I'm willing to be wrong, but I've been looking at the debate from both sides for a long time. While I know that PowerPoint doesn't kill people, people kill people, and the same is true of ADDIE, the continued reliance on it is problematic. We probably need a replacement: one that starts with a broader analysis, and then provides guidance across job aid development, course development and more, with iterative and situated design at its core, informed by the recognition of the emotional nature of human use. Anyone have one to hand? Thoughts on the above?