General Stan McChrystal gave an inspiring and insightful talk about adapting to change, based upon his experience.
Clark Quinn’s Learnings about Learning
by Clark 2 Comments
I had the good fortune to be invited to the Future of Work event that was held here in Silicon Valley two weeks ago, and there were four breakouts, one of which was on learning and knowledge management. You can guess which one I was on (though tempted by the leadership and culture one; there was overlap).
Within that breakout the activity was to pick four topics and further break out. The issue was meeting workplace needs given the changing nature of work, and I suggested that perhaps the biggest need was to focus on skills that hold true across domains, so-called meta-cognitive skills, and learning to learn (a total surprise, right?). That was one that people were interested in, so that’s what we discussed.
We broke down learning into some component elements. We talked about how your beliefs about learning (your epistemological stance) mattered, as well as your intention to learn, and how effective you were at learning alone and with others. It also matters how well you use tools and external representations, as well as your persistence.
What emerged was that learning skills shouldn’t be taken for granted. Consequently, one of the attendees suggested that perhaps, along with IQ and EQ, we should be looking at people’s LQ, or learning quotient. I just saw an advertisement that said EQ (they called it EI, Emotional Intelligence, probably to avoid trademark infringement) was better than IQ because you can improve your EQ scores. However, the evidence suggests you can improve your LQ scores too.
A decade ago now, Jay Cross and I were pushing the meta-learning lab, and I still think Jay was right claiming that meta-learning might be your best investment. So, are you aware of how you learn? Have you improved how you learn? Can you help others learn more effectively? I believe the answer is yes, and we not only can, but should.
A number of the Change Agents Worldwide resonated with an image of the 50 reasons not to change at work that was overlaid on an image of Doug Engelbart. The point, of course, is that all these reasons not to change provide a significant barrier to success. We decided that we’d line up and start a blog carnival with each of us addressing one. I decided to address number 45:
45. We’re doing all right as it is.
First, do you have evidence for that, or are you in denial? It’s easy to not want to change because it’ll be hard, so it’s easier to say that things are OK. It is particularly easy if you aren’t really checking! L&D, for instance, has been notoriously bad about seeing whether their interventions are actually achieving any impact (around 3% reporting that they actually go to level 4 on Kirkpatrick’s scale).
If you are checking, what are your benchmarks? Are you measuring just execution, or are you including innovation? Because, as I’ve said before, optimal execution is only going to let you survive; to thrive you’ll need continual innovation. If you’re doing what your strategy says, are you looking out for disruptive forces, and creating your own? You should be checking to see if you’re striking that balance of being agile enough as well as productive enough.
Finally, is “all right” really good enough? Shouldn’t we be shooting for the best, not just good enough? Do you think your competitors are sitting complacent? We really should be looking for every edge. That doesn’t come from believing we’re doing all right. We should be looking for continual improvement.
Yes, it’s harder not to believe we’re doing all right as it is, but your curiosity should be driving you forward regardless. Your organization, if it isn’t continually learning, is declining. Are you really doing all right? If your definition of ‘all right’ is that you are continually curious and moving forward with experimentation, then I reckon so.
Finally! Revolutionize Learning & Development (Performance and Innovation Strategy for the Information Age) is now shipping, (and has been available on Kindle for a couple of weeks), so at last you should be able to get your mitts on it if you’re so inclined. And, if you’re at all involved in trying to make your organization successful, I will immodestly suggest you might want to be so inclined.
Just as background, it documents my claim that organizational L&D isn’t doing what it could and should be doing, and what it is doing, it is doing badly. Then it lays out elements that should be considered, what it would look like if it were going well, and how you might get there. While the exact strategy for any one organization will be dependent on where they are and the nature of the business, there are some frameworks to help you apply those to your business. The goal is to move to Performance & Development, coupling optimal execution with continual innovation.
If you’re curious but not yet ready to dig in, let me mention a couple of things:
And, if you’re going to be at ASTD’s International Conference in DC next week, I will be there and:
So, check it out, and see if it makes sense to you. Or you can just go ahead and get it. I hope to see you in DC, and welcome your feedback!
Towards Maturity is a UK-based but global initiative looking at organizations’ use of technology for learning. While not as well known in the US, they’ve been conducting benchmarking research on what organizations are doing, and trying to provide guidance as well. I even put their model as an appendix in the forthcoming book on reforming L&D. So I was intrigued to see the new report they have just released.
The report, a survey of 2000 folks in a variety of positions in organizations, asks what they think about elearning. It covers several aspects of how people learn: when, where, how, and their opinion of elearning. It’s also presented in an appealing, infographic-like style.
What intrigued me was the last section: are L&D teams tuned into the learner voice? The results are telling. This section juxtaposes what the report heard from learners versus what L&D reported in a previous study. Picking out just a few:
This is indicative of a big disconnect between L&D and the people they serve. This is why we need the revolution! There’s lots more interesting stuff in this report, so I strongly recommend you check it out.
A number of years ago I wrote a series on design heuristics that emerged by looking at our cognitive limitations and practices from other fields. One of the practices I covered briefly in one of the posts was egoless design, and a recent conversation reminded me of it.
The context for this is talking about how to improve our designs. One of the things from Watts Humphrey’s work on software design was that if we don’t scrutinize our own work, we’ll have blindspots that we’re unaware of. With regular peer review, he substantially improved code quality outcomes. Egoless programming was all about getting our ego out of the way while we worked.
This applies to instructional design as well. Too often we have to crank it out, and we don’t test it to see if it’s working. Instead, if it’s finished, it’s deemed good. How do we know? It’s very clear that there are a lot of beliefs and practices about design that are wrong. Otherwise, we wouldn’t have this problem with elearning avoidance. There’s too much bad elearning out there. What can we do?
One of the things we could, and should do, is design reviews. Just like code reviews, we should get other eyes looking at our work. We should share our work at things like DemoFest, we should measure ourselves against quality criteria, and we should get expert reviews. And, we should set performance metrics and measure against them!
Of course, that alone isn’t good enough. We have to redesign our processes once we’ve identified the flaws, to structure things so that it’s hard to do bad design, and doing good design flows naturally. And then iterate.
If you don’t think your work is good enough to share, you’re not doing good enough work. And that needs to change. Get started: get feedback and assistance in moving forward. Just hearing talks about good design isn’t a bad start, but it’s not enough. You’ve got to look at what you are doing, get specifically relevant feedback, and then get assistance in redesigning your design processes. Or you won’t know your own limitations. It’s time to get serious about your elearning; do it as if it matters. If not, why do it at all?
My latest tome, Revolutionize Learning & Development: Performance and Innovation Strategy for the Information Age is out. Well, sort of. What I mean is that it’s now available on Amazon for pre-order. Actually, it’s been for a while, but I wanted to wait until there was some there there, and now there’s the ‘look inside’ stuff so you can see the cover, back cover (with endorsements!), table of contents, sample pages, and more. Ok, so I’m excited!
What I’ve tried to do is make the case for dragging L&D into the 21st Century, and then provide an onramp. As I’ve been saying, my short take is that L&D isn’t doing what it could and should be doing, and what it is doing, it is doing badly. But I don’t believe complaining alone is particularly helpful, so I’m trying to put in place what I think will help as well. The major components are:
By itself, it’s not the whole answer, for several reasons. First, it can’t be: I can’t know all the different situations you face, so I can’t have a roadmap forward for everyone. Instead, I suppose you could think of it as a guidebook (stretching metaphors), offering suggestions that you’ll have to sequence into your own path. Second, we don’t know it all yet. We’re still exploring many of these areas. For example, culture change is not a recipe, it’s a process. Third, I’m not sure any one person can know all the answers in such a big field. So, fourth, to practice what I’m preaching, there should be a community pushing this, creating the answers together.
A couple of things on that last part, the first one is a request. The community will need to be in place by the time the book is shipping. The question is where to host it. I don’t intend to build a separate community for it on the book site, as there are plenty of places to do this. Google groups, Yahoo groups, LinkedIn…the list goes on. It can’t be proprietary (e.g. you have to be a paid member to play). Ideally it’d have collaborative tools to create resources, but I reckon that can be accommodated via links. What do you folks think would be a good choice?
The second part of the community bit is that I’m very grateful to many people who’ve helped or contributed. Practitioner friends and colleagues provided the five case studies I’ve had the pleasure to host. Two pioneers shared their thoughts. The folks at ASTD have been great collaborators, both in helping me with resources and in helping me get the message out. A number of other friends and colleagues took the time to read an early version and write endorsements. And I’ve learned together with so many of you by attending events together, hearing you speak, reading your writings, and having you provide feedback on my thoughts, whether by talking or writing to me after hearing me speak, or commenting on my scribblings here.
The book isn’t perfect, because I have thought of a number of ways it could be improved since I provided the manuscript, but I have stuck to the mantra that at some point it’s better out than still being polished. This book came from frustration that we can be doing so much better, and we’re not. I didn’t grow up thinking “I’m going to be a revolutionary”, but I can’t not see what I see and not say something. We can be doing so much better than we are. And so I had to be willing to just get the word out, imperfect. It wasn’t (isn’t) clear that I’m the best person to call this out, but someone needs to!
That said, I have worked really hard to have the right pieces in place. I’ve collected and integrated what I think are the necessary frameworks, provided case studies and a workplace scenario, and some tools to work forward. I have done my best to provide a short and cogent kickstart to moving forward.
Just to let you know that I’m starting my push. I’ll be presenting on the book at ASTD’s ICE conference, and doing some webinars. Bryan Austin of GameOn Learning interviewed me on my thoughts in this direction. I do believe in the message, and that it at least needs to be heard. I think it’s really the necessary message for L&D (in it, you’ll find out why I’m suggesting we need to shift to P&D!). Forewarned! I look forward to your feedback.
In my last post, I wrote about the first step you should take to move to Serious eLearning, which was making deeper practice. Particularly under the constraints of not rocking the boat. Here I want to talk about where you go from there. There are several followup steps you should take after (hopefully) success at the beginning. My big three are: aligning with the practice, extending the practice, and evaluating what is being done.
1. So, if you took the advice to make more meaningful and applied practice within the constraints of many existing workplaces (order-taking, content dump, ‘just do it’), you next want to be creating content aligned with helping the learner succeed at the practice. Once you have those practice questions, you should trim all that material to just what they’ll need to be able to make those decisions.
This also means stripping away unnecessary content, jettisoning the nice-to-know, trimming down the prose (we overwrite). By stripping away the content, you can work in more practice and still meet the (nonsensical) criteria of time in seat. And you’ll have to fight the forces of ‘it has to be in there’, but it’s a worthy fight, and part of the education of the organization that needs to occur.
Get some war stories from your SMEs while you’re working (or fighting) with them. Those should be your examples, and guide your practice design. But if you can’t, you’ll just have to do the best you can. Make the introduction help learners see what they’ll be able to do afterwards. All this fits within the standard format, so you should be able to get away with it and still be taking a stab at improving what you’re doing.
2. The second step is to extend practice. I mean this in two ways. For one, massed practice dissipates quickly, and you want practice spaced out over time. This may be a somewhat hard sell, yet it’s really required for learning to stick (another part of the organization’s education). You should be developing some extra content at development time for streaming out over time. Breaking up your course so that the hour of seat time is 30 or 40 minutes up front, followed by 20 or 30 minutes of practice spread out over days, will make learning stick far better than massing it all at once. And if it matters, you should (if it doesn’t, why bother?).
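As a rough illustration of that split, here’s a small sketch. The 60/40 division and the 2/5/10-day gaps are my own illustrative assumptions, not a prescription from the post:

```python
from datetime import date, timedelta

def spaced_schedule(start, total_minutes, upfront_fraction=0.6,
                    gap_days=(2, 5, 10)):
    """Split a course's seat time into an upfront session plus
    follow-up practice sessions spread out over later days.

    The split fraction and day gaps are illustrative defaults only.
    """
    upfront = round(total_minutes * upfront_fraction)
    per_followup = (total_minutes - upfront) // len(gap_days)
    sessions = [(start, upfront)]
    for gap in gap_days:
        sessions.append((start + timedelta(days=gap), per_followup))
    return sessions

# An hour-long course: ~36 minutes up front, then three short
# spaced practice sessions over the following days.
for when, minutes in spaced_schedule(date(2014, 6, 2), 60):
    print(when, minutes, "min")
```

The point isn’t the arithmetic, of course; it’s that the followups are designed up front, at development time, rather than bolted on later.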
The second way to extend it is to work on the meaningfulness of your practice. Ideally, practice would be deep: simulations, or at least scenarios. The situations that will most define company success are, I will suggest, in complex contexts. To deal with those, you need practice in complex contexts: serious games, or at least scenarios. And don’t make them boring; exaggerate, so that the practice is as motivating as the real world situation is. Ultimately, you’d like learners creating solutions to real world problems, like creating business deliverables or performing in immersive environments, not answering multiple choice questions! And extend the experience socially: whether just reflecting on the experience together or, better yet, collaborative problem solving.
3. Finally, you should start measuring what you’re doing in important ways. This, too, will require educating your organization. But you shouldn’t assume your first efforts are working. You want to start with the change in the business that needs improving (e.g. performance consulting and Kirkpatrick level 4), then figure out what performance by individuals would lead to that business change, and then develop your learning objectives and practice to get people able to do that performance. And then measure whether they can, and whether it leads to performance changes in the workplace, and ultimately changes in the business metrics. This will require working with the business units to get their data, but ultimately that’s how you become strategic.
Of course, you should be measuring your own work, and similarly whether your interventions are as efficient as possible. But those should only happen after you’re having an impact. Measuring your efficiency (“our costs per seat time are at the industry average”) without knowing whether you have an impact is delusional. Are your estimates of time to accomplish accurate? Are you using resources efficiently? Are people finding your experiences to be ‘hard fun’? These matter only after the question: “are we meeting the organization’s needs?”
So, between the previous post and this, hopefully you have some concrete ideas about how even in the most constrained circumstances you can start improving your learning design. And the Manifesto supporting principles go into more depth on this, if you need help. So, does this provide some guidance on how to get started? Ready to sign on? And, perhaps more importantly, what further questions do you have?
Yesterday, I posted about what we might like to see from folks, by role, in terms of the Manifesto. The other question to be answered is how to do this in the typical current situation where there’s little support for doing things differently. Let me take a worst-case scenario and try to take a very practical approach. This isn’t an answer for the pulpit, but is for the folks who put all this in the ‘too hard’ basket.
So, worst case: you’re going to still get a shower of PPTs and PDFs and be expected to make a course out of it, maybe (if you’re lucky) with a bit of SME access. And no one cares if it makes a difference, it’s just “do this”. And, first, you have my deepest sympathies. We’re hoping the manifesto changes this, but sometimes we have to start with where you live, eh? Recognize that the following is not PoliticallyCorrect™; I’m going outside the principled response to give you an initial kickstart.
The short version is that you’ve got to put meaningful practice in there. You need an experience that sets up a story, requires a choice using the knowledge, and lets the learner see the consequences. That’s the thing that has the most impact, and you’ll want several. This will have far more impact than a knowledge test. To do that isn’t too complex.
The very first thing you need to do when you’ve parsed that content is to figure out what, at core, the person who’s going to have this experience should be able to do differently. What performance aren’t they doing now? This is problematic, because sometimes the problem isn’t a performance problem, but here I’m assuming you don’t have that leeway. So you’ll have to do some inference. Yes, it’s a bit more thinking, but you already have to pull out knowledge, so it’s not that different (and gets easier with practice).
Say you’ve gotten product data. How would they use it? To sell? To address objections? To troubleshoot? Maybe it’s process information you’re working on. What would they do with that? Recognize problems? Take the next step? If you’re given information on workplace behavior problems, have them determine whether grey areas exist, or coach people.
You’ll need to make a believable context and a precipitating situation, and then ask them to respond. Make it challenging, so that the situation isn’t clear-cut, and the alternatives are plausible ways the learner could go wrong. The SME can help here. Make the scenario they’re facing, and the decisions they must make, as representative of the types of problems they’ll be facing as you can. And try to have the story play out, e.g. the consequences of their choice presented before they get the right answer or feedback about why it’s wrong. There are good reasons for this, but the short version is that it helps them learn to read the situation when it’s real.
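To make the structure concrete, here’s a minimal sketch of such a scenario question as a data structure. The class names and the sales example are mine and purely illustrative; the key move is that responding to a choice surfaces the consequence before the right/wrong feedback:

```python
from dataclasses import dataclass

@dataclass
class Choice:
    text: str
    consequence: str      # the story playing out: what the learner sees first
    feedback: str         # why the choice was right or wrong, shown after
    correct: bool = False

@dataclass
class Scenario:
    context: str          # believable setting and precipitating situation
    prompt: str
    choices: list

    def respond(self, index):
        """Return the consequence before the feedback, so learners
        practice reading the situation rather than gaming the quiz."""
        choice = self.choices[index]
        return [choice.consequence, choice.feedback]

# Illustrative (invented) example: handling a pricing objection.
scenario = Scenario(
    context="A longtime customer balks at the new license price.",
    prompt="What do you do first?",
    choices=[
        Choice("Offer an immediate discount",
               "The customer accepts, and expects a discount every renewal.",
               "Discounting first trains customers to push back on price."),
        Choice("Ask what's driving the concern",
               "The customer explains their budget shrank this quarter.",
               "Surfacing the real constraint opens better options.",
               correct=True),
    ],
)
print(scenario.respond(1)[0])
```

Nothing here requires special tooling; the same story/choice/consequence shape can be authored in any standard elearning tool’s question editor.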
Let’s be clear, this is really just better multiple choice question design! I say that so you see you’re not going beyond what you already do, you’re just taking a slightly different tack to it. The point is to work within the parameters of content and questions (for now!), and yet get better outcomes.
Ideally, you’ll find all the plausible application scenarios, and be able to write multiple questions. If there’s any knowledge they have to know cold, you might have to also test that knowledge, but consider designing a job aid. (Even if it’s not tested and revised, which it should be, it’s a start on the path.)
There’s more, but that’s a start (more in my next post). Focus on meaningful practice first. Dress it up. Exaggerate it. But if you put good practice in their path, that’s probably the most valuable change to start with. There’re lots of steps from there, basically turning it into a learning experience: making everything less dense, more minimal, more focused on performance, adding in more meaningfulness. And redoing concept, example, introduction, etc. But the first thing, valuable practice, engages many of the eight values that form the core of the Manifesto: performance focused, meaningful to learners, engagement-driven, authentic contexts, realistic decisions, and real world consequences.
I’ve argued elsewhere that doing better elearning doesn’t take longer, and I believe it. Start here, and start talking about what you’re doing with your colleagues, bosses, what have you. Sign on to the Manifesto, and let them know why. And let me know how it goes.