Learnlets


Clark Quinn’s Learnings about Learning

70:20:10 and the Learning Curve

27 January 2015 by Clark 4 Comments

My colleague Charles Jennings recently posted on the value of autonomous learning (worth reading!), sparked by  a diagram provided by another ITA colleague, Jane Hart (that I also thought was insightful). In Charles’ post he also included an IBM diagram that triggered some associations.

So, in IBM's diagram, they talked about: the access phase, where learning is separate; the integration phase, where learning is 'enabled' by work; and the on-demand phase, where learning is 'embedded'. They talked about 'point solutions' (read: courses) for access, then blended models for integration, and dynamic models for on-demand. The point was that the closer learning is to the work, the more value.

However, I was reminded of Fitts & Posner's model of skill acquisition, which has three phases: cognitive, associative, and autonomous. The first, cognitive, is when you benefit from formal instruction: giving you models and practice opportunities to map actions to an explicit framework. (Note that this assumes a good formal learning design, not rote information and a knowledge test!) Then there's an associative stage where that explicit framework is supported in being contextualized and compiled away. Finally, in the autonomous stage, the learner continues to improve through continual practice.

I was initially reminded of Norman & Rumelhart’s accretion, restructuring, and tuning learning mechanisms, but it’s not quite right. Still, you could think of accreting the cognitive and explicitly semantic knowledge, then restructuring that into coarse skills that don’t require  as much conscious effort, until it becomes a matter of tuning a finely automated skill.

[Image: 70:20:10 mapped against a hypothetical learning curve]

This, to me, maps more closely to 70:20:10: you can see the formal (the 10) playing a role to kick off the semantic part of the learning, then coaching and mentoring (the 20) supporting the integration or association of the skills, and then the 70 (practice, reflection, and personal knowledge mastery, including informal social learning) taking over. In the diagram above, I mapped this against a hypothetical improvement curve.

Of course, it’s not quite this clean. While the formal often does kick off the learning, the role of coaching/mentoring and the personal learning are typically intermingled (though the role shifts from mentor to mentee ;). And, of course, the ratios in 70:20:10 are only a framework for rethinking investment, not a prescription about how you apply the numbers.  And I may well have the curve wrong (this is too flat for the normal power law of learning), but I wanted to emphasize that the 10 only has a small role to play in moving performance from zero to some minimal level, that mentoring and coaching really help improve performance, and that ongoing development requires a supportive environment.
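
(For reference, the power law of practice has the rough form

$$T(n) = A + B\,n^{-\alpha}$$

where \(T(n)\) is the time to perform the task after \(n\) practice trials, \(A\) is the asymptotic floor, \(B\) is the initial gap above it, and \(\alpha\) is the learning rate: steep gains early, then a long slow tail.)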

I think it’s important to understand how we learn, so we can align our uses of technology to support them in productive ways. As this suggests, if you care about organizational performance, you are going to want to support more than the course, as well as doing the course right.  (Hence the revolution. :)

#itashare

Wearables?

21 January 2015 by Clark 3 Comments

In a discussion last week, I suggested that  the  things I was excited about included wearables. Sure enough, someone asked if I’d written anything about it, and I haven’t, much. So here are some initial thoughts.

I admit I was not a Google Glass 'Explorer' (and now the program has ended). While tempted to experiment, I tend not to spend money until I see how the device is really going to make me more productive. For instance, when the iPad was first announced, I didn't want one. Between the time it was announced and the time it was available, however, I figured out how I'd use it to produce, not just consume. I got one the first day it came out. By the same rationale, I got a Palm Pilot pretty early on, and it made me much more effective. I haven't gotten a wrist health band, on the other hand, though I don't think they're bad ideas; they're just not what I need.

The point being that I want to see a clear value proposition before I spend my hard-earned money. So what am I thinking in regards to wearables? And what wearables do I mean? I am talking wrist devices, specifically. (I may eventually warm up to glasses as well, when what they can do is more augmented reality than what they do now.) Why wrist devices? That's what I'm wrestling with; what follows is my attempt at a more intuitive assessment.

Part of it, at least, is that it’s with me all the time, but in an unobtrusive way.  It supports a quick flick of the wrist instead of pulling out a whole phone. So it can do that ‘smallest info’ in an easy way. And, more importantly, I think it can bring things to my attention more subtly than can a phone.  I don’t need a loud ringing!

I admit that I'm keen on a more mixed-initiative relationship than I currently have with my technology. I use my smartphone to get things I need, and it can alert me to things that I've indicated I'm interested in, such as events that I want an audio alert for. And, of course, incoming calls. But what about things that my systems come up with on their own? This is increasingly possible, and again desirable. Using context, and if a system had some understanding of my goals, it might be able to be proactive. So imagine you're out and about, and your watch reminds you that while you were here you wanted to pick up something nearby, providing the item and location. Or it prompts you to prep for that upcoming meeting, providing some minimal but useful info. Note that this is not, largely, what's currently on offer. We already have geofencing to do some of this, but right now you largely have to pull out your phone, or have it make a fairly intrusive noise to be heard from your pocket or purse.
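
(To make that geofencing idea concrete, here's a minimal sketch of the trigger logic I have in mind. It's purely illustrative: the errand, coordinates, and radius are made up, and a real implementation would use the platform's location APIs rather than a bare function like this.)

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters (haversine formula)."""
    r = 6371000  # Earth's mean radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical goal list: errands tied to places rather than times.
goals = [
    {"item": "pick up the dry cleaning", "lat": 37.7793, "lon": -122.4193, "radius_m": 200},
]

def nearby_reminders(cur_lat, cur_lon):
    """On each location update, surface any goal within its geofence --
    the kind of subtle, glanceable nudge a wrist device could deliver."""
    return [g["item"] for g in goals
            if distance_m(cur_lat, cur_lon, g["lat"], g["lon"]) <= g["radius_m"]]

print(nearby_reminders(37.7790, -122.4190))  # -> ['pick up the dry cleaning']
```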

So, two things about this: one, why the watch and not the phone; and the other, why not the glasses? The watch form factor is, to me, a more accessible interface to serve as an interactive companion. As I suggested, pulling the phone out of the pocket, turning it on, and going through the security check (even just my fingerprint) adds more of an overhead than I necessarily want. If I can have something less intrusive, even as part of a system and not fully capable on its own, that's OK. Why not glasses? I guess it's just that they seem more unnatural. I am accustomed to having information on my wrist, and while I wear glasses, I want them to be invisible to me. I would love to have a heads-up display at times, but all the time would seem to get annoying. I'll stretch and suggest that the empirical result that most folks have stopped wearing them most of the time bears out my story.

Why not a ring, or a pendant, or? A ring seems to have too small an interface area. A pendant isn't easily observable. My wrist is easy for a glance (hence, watches). Why not a whole forearm console? If I need that much interface, I can always pull out my phone. Or jump to my tablet. Maybe I will eventually want an iBracer, but I'm not yet convinced. A forearm holster for my iPhone? Hmmm…maybe too geeky.

So, reflecting on all this, it appears I'm thinking about tradeoffs of utility versus intrusion. A wrist device seems to fit a sweet spot in an ecosystem of tech: the quick glance on the wrist, pocket access on the phone, and then various tradeoffs of size and weight for real productivity between tablets and laptops.

Of course, the real issue is whether there's sufficient information available through the watch to make a value proposition. Is there enough that's easy to get to that doesn't require a phone? Check the temperature? Take a (voice) note? Get a reminder, take a call, check your location? My instinct is that there is. There are times I'd be happy not to have to take my phone (to the store, to a party) if I could take calls on my wrist, do minimal note taking and checking, and navigate. From the business perspective, there's also performance support, whether push or pull. I don't see it for courses, but for just-in-time… And contextual.

This is all just thinking aloud at this point.  I’m contemplating the iWatch but don’t have enough information as of yet.  And I may not feel the benefits outweigh the costs. We’ll see.

Getting strategic means getting scientific

20 January 2015 by Clark Leave a Comment

I've been on a rant about learning design for a few posts, but I ended up talking about how creating a better process is part of getting strategic. The point was that our learning design has to embody what's known about how we learn, i.e. a learning engineering. And it occurs to me that getting our processes structured to align with how we work is part of a bigger picture of how our strategies have to similarly be informed.

So, as part of the L&D Revolution I argue we need to have, I’m suggesting organizations, and consequently L&D, need to be aligned with how we think, work, and learn. So our formal learning initiatives (used only when they are really needed) need to be based upon learning science. And performance support similarly needs to reflect how we process information, and, importantly, things we don’t do well and need support for.  The argument for informal and social learning similarly comes from our natural approaches, and similarly needs to provide facilitation for where things can and do go wrong.

And, recursively, L&D’s processes need to similarly reflect what we do, and don’t, do well.  So,  just as we should provide support for performers to execute, communicate, collaborate, and continue to improve (why L&D needs to become P&D), we need to make sure that we practice what we preach.  And a scientific method means we need to measure what we’re doing, not just efficiency, but effectiveness.

It’s time that L&D gets out of the amateur approach, and starts getting professional. Which means understanding the organization’s goals, rejecting requests that are nonsensical, examining what we do, using technology in sophisticated ways (*cough* content engineering *cough*), and more.  We need to know about how we think, work, and learn, and apply it to what we do. We’re about people, after all, so it’s about time we understood the science in our field, and quit thinking that our existing practices (largely from an industrial age) are inherently  relevant. We must be scrutable, and that means we must scrutinize.  Time to  get to work.

#itashare

It’s the process, silly!

14 January 2015 by Clark Leave a Comment

So yesterday, I went off on some of the subtleties in elearning that are being missed. This is tied to last week's posts about how we're not treating elearning seriously enough. Part of it is in the knowledge and skills of the designers, but it's also in the process. Or, to put it another way, we should be using steps and tools that align with the type of learning we need. And yes, that implicates ADDIE, though not inherently.

So what  do I mean?  For one, I’m a fan of Michael Allen’s Successive Approximation Model (SAM), which iterates several times (tho’ heuristically, and it could be better tied to a criterion).  Given that people are far less predictable than, say, concrete, fields like interface design have long known that testing and refinement need to be included.  ADDIE isn’t inherently linear, certainly as it has evolved, but in many ways it makes it easy to make it a one-pass process.

Another issue, to me, is to structure the format of your intermediate representations so that they make it hard to do aught but come up with useful information. So, for instance, in recent work I've emphasized that a preliminary output is a competency doc that includes (among other things) the objectives (and measures), models, and common misconceptions. This has evolved from a similar document I use in (learning) game design.

You then need to capture your initial learning flow. This is what Dick &  Carey call your instructional strategy, but to me it’s the overall experience of the learner, including addressing the anxieties learners may feel, raising their interest and motivation, and systematically building their confidence.  The anxieties or emotional barriers to learning may well be worth capturing at the same time as the competencies, it occurs to me (learning out loud ;).

It also helps if your tools don’t interfere with your goals.  It should be easy to create animations that help illustrate models (for the concept) and tell stories (for examples).  These can be any media tools, of course. The most important tools are the ones you use to create meaningful practice. These should allow you to create mini-, linear-, and branching-scenarios (at least).  They should have alternative feedback for every wrong answer. And they should support contextualizing the practice activity. Note that this does  not mean tarted up drill and kill with gratuitous ‘themes’ (race cars, game shows).  It means having learners make meaningful decisions and act on them in ways like they’d act in the real world (click on buttons for tech, choose dialog alternatives for interpersonal interactions, drag tools to a workbench or adjust controls for lab stuff, etc).
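
(To illustrate the structure, not any particular tool's format: a branching scenario is just nodes of context, with choices that each carry their own feedback and lead somewhere. A minimal sketch, with invented content:)

```python
from dataclasses import dataclass, field

@dataclass
class Choice:
    text: str       # the decision the learner can make
    feedback: str   # feedback specific to this choice, keyed to why it's right or wrong
    next_node: str  # where the decision leads in the scenario

@dataclass
class Node:
    situation: str  # the context the learner is acting in
    choices: list[Choice] = field(default_factory=list)

# Illustrative fragment: every wrong answer gets its own feedback,
# tied to the likely misconception, not a generic "Incorrect, try again."
scenario = {
    "start": Node(
        situation="A customer reports that the update broke their login.",
        choices=[
            Choice("Tell them to reinstall the app.",
                   "Reinstalling skips diagnosis; you may burn their goodwill.",
                   "escalate"),
            Choice("Ask what error message they're seeing.",
                   "Right: gather evidence before acting.",
                   "diagnose"),
        ],
    ),
}
```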

Putting in place processes that only use formal learning when it makes sense, and then doing it right when it does make sense, is key to putting L&D on a path to relevancy. Cranking out courses on demand, focusing on measures like cost/butt/seat, adding rote knowledge quizzes to SME knowledge dumps, etc., is instead continuing down the garden path to oblivion. Are you ready to get scientific and strategic about your learning design?

The subtleties

13 January 2015 by Clark Leave a Comment

I recently opined that good learning design was complex, really perhaps close to rocket science.  And I suggested that a consequent problem was that the nuances are subtle.  It occurs to me that perhaps discussing some example problems will help make this point more clear.

Without being exhaustive, there are several consistent problems I see in the elearning content I review:

  • The wrong focus. Seriously, the outcomes for the class aren’t meaningful!  They are about information or knowledge, not skill.  Which leads to no meaningful change in behavior, and more importantly, in outcomes. I don’t want to learn about X, I want to learn how to  do  X!
  • Lack of motivating introductions. People are expected to give a hoot about this information, but no one helps them understand why it's important. Learners should be assisted to viscerally 'get' why this is important, and helped to see how it connects to the rest of the world. Instead we get some boring drone about how this is really important. Connect it to the world and let me see the context!
  • Information focused or arbitrary content presentations. To get the type of flexible problem-solving organizations need, people need mental models about why  and how  to do it this way, not just the rote steps.  Yet too often I see arbitrary lists of information accompanied  by a rote knowledge test.  As if that’s gonna stick.
  • A lack of examples, or trivial ones.  Examples need to show a context, the barriers, and how the content model provides guidance about how to succeed (and when it won’t).  Instead we get fluffy stories that don’t connect to the model and show the application to the context.  Which means it’s not going to support transfer (and if you don’t know what I’m talking about, you’re not ready to be doing design)!
  • Meaningless and insufficient practice.  Instead of asking learners to make decisions like they will be making in the workplace (and this is my hint for the  first  thing to focus on fixing), we ask rote knowledge questions. Which isn’t going to make a bit of difference.
  • Nonsensical alternatives to the right answer.  I regularly ask of audiences “how many of you have ever taken a quiz where the alternatives to the right answer are so silly or dumb that you didn’t need to know anything to pass?”  And  everyone raises their hand.  What possible benefit does that have?  It insults the learner’s intelligence, it wastes their time, and it has no impact on learning.
  • Undistinguished feedback. Even if you do have an alternative that's aligned with a misconception, it seems like there's an industry-wide conspiracy to ensure that there's only one response for all the wrong answers. If you've discriminated meaningful alternatives to the right answer based upon how learners go wrong, you should be addressing them individually.

The list goes on.  Further, any one of these can severely impact the learning outcomes, and I typically see  all of these!

These are really  just the flip side of the elements of good design I’ve touted in previous posts (such as this series).  I mean, when I look at most elearning content, it’s like the authors have no idea how we really learn, how our brains work.  Would you design a tire for a car without knowing how one works?  Would you design a cover for a computer without knowing what it looks like?  Yet it appears that’s what we’re doing in most elearning. And it’s time to put a stop to it.  As a first step, have a look at the Serious eLearning Manifesto, specifically the 22 design principles.

Let me be clear, this is just the surface.  Again, learning engineering is complex stuff.  We’ve hardly touched on engagement, spacing, and more.    This may seem like a lot, but this is really the boiled-down version!  If it’s too much, you’re in the wrong job.

Shiny objects and real impact

9 January 2015 by Clark 2 Comments

Yesterday I went off about how learning design should be done right and it’s not easy.  In a conversation two days ago, I was talking to a group that was  supporting several initiatives in adaptive learning, and I wondered if this was a good idea.

Adaptive learning is desirable. If learners come from different initial abilities, learn at different rates, and have different availability, the learning should adapt. It should skip things you already know, work at your pace, and provide extra practice if the learning experience is extended. (And, BTW, I'm not talking learning styles.) And this is worthwhile, if the content you are starting with is good. And even then, is it really necessary? To explain, here's an analogy:

I have heard it said that the innovations in the latest drugs are, in many cases, unnecessary, and the extra costs (and profits for the drug companies) shouldn't be needed. The claim is that the new drugs aren't any more effective than the existing treatments, if those were used properly. The point being that people don't take the drugs as prescribed (being irregular, missing doses, not continuing past the point they feel better, etc.), and if they did, the new drugs wouldn't be needed. (As a side note, it would appear that focusing on improving patient drug-taking protocols would be a sound strategy, such as using a mobile app.) This isn't true in all cases, but even in some it makes a point.

The analogy here is that using all the fancy capabilities: tarted up templates for simple questions, 3D virtual worlds, even adaptive learning, might not be needed if we did better learning design!  Now, that’s not to say we couldn’t add value with using the right technology at the right points, but as I’ve quipped in the past: if you get the design right, there are  lots of ways to implement it.  And, as a corollary, if you don’t get the design right, it doesn’t matter how you implement it.

We do need to work on improving our learning design, first, rather than worrying about the latest shiny objects. Don’t get me wrong, I  love  the shiny objects, but that’s with the assumption that we’re getting the basics right.  That was my assumption ’til I hit the real world and found out what’s happening. So let’s please get the basics right, and then worry about leveraging the technology on  top of a strong foundation.

Maybe it is rocket science!

8 January 2015 by Clark 11 Comments

As I’ve been working with the Foundation over the past 6 months I’ve had the occasion to review a wide variety of elearning, more specifically in the vocational and education space, but my experience mirrors that from the corporate space: most of it isn’t very good.  I realize that’s a harsh pronouncement, but I fear that it’s all too true; most of the elearning I see will have very little impact.  And I’m becoming ever more convinced that what I’ve quipped  in the past is true:

Quality design is hard to distinguish from well-produced but under-designed content.

And here’s the thing: I’m beginning to think that this is not just a problem with the vendors, tools, etc., but that it’s more fundamental.  Let me elaborate.

There’s a continual problem of bad elearning, and yet I hear people lauding certain examples, awards are granted, tools are touted, and processes promoted.  Yet what I see really isn’t that good. Sure, there are exceptions, but that’s the problem, they’re exceptions!  And while I (and others, including the instigators of the Serious eLearning Manifesto) try to raise the bar, it seems to be an uphill fight.

Good learning design is rigorous. There's significant effort just in getting the right objectives: finding the right SME, working with them and not taking what they say verbatim, etc. Then working to establish the right model and communicating it, making meaningful practice, using media correctly. And at the same time, successfully fending off the forces of fable (learning styles, generations, etc.).

So, when it comes to the standard  tradeoff    –  fast, cheap, or good, pick two – we’re ignoring ‘good’.  And  I think a fundamental problem is  that everyone ‘knows’  what learning is, and they’re not being astute consumers.  If it looks good, presents content, has some interaction, and some assessment, it’s learning, right?  NOT!  But stakeholders don’t know, we don’t worry enough about quality in our metrics (quantity per time is not a quality metric), and we don’t invest enough in learning.

I'm reminded of a thesis that says medicos consciously reengineered their status in society. They went from being thought of as 'quacks' and 'sawbones' to an almost reverential status today by making the process of becoming a doctor quite rigorous. I'm tempted to suggest that we need to do the same thing.

Good learning design is complex.  People don’t have predictable properties as does concrete.  Understanding the necessary distinctions to do the right things is complex.  Executing the processes to successfully design, refine, and deliver a learning experience that leads to an outcome is a complicated engineering endeavor.  Maybe we do have to treat it like rocket science.

Creating learning should be considered a highly valuable outcome: you are helping people achieve their goals.  But if you really aren’t, you’re perpetrating malpractice!  I’m getting stroppy, I realize, but it’s only because I care and I’m concerned.  We have  got to raise our game, and I’m seriously concerned with the perception of our work, our own knowledge, and our associated processes.

If you agree, (and if you don’t, please do let me know in the comments),  here’s my very serious question because I’m running out of ideas: how do we get awareness of the nuances of good learning design out there?


Reflections on 15 years

31 December 2014 by Clark 2 Comments

For Inside Learning & Technologies' 50th edition, a number of us were asked to provide reflections on what has changed over the past 15 years. This was pretty much the period in which I'd returned to the US and taken up with what was kind of a startup, which led to my life as a consultant. As an end-of-year piece, I have permission to post that article here:

15 years ago, I had just taken a step away from academia and government-sponsored initiatives to a new position leading a team in what was effectively a startup. I was excited about the prospect of taking the latest learning science to the needs of the corporate world. My thoughts were along the lines of “here, where we have money for meaningful initiatives, surely we can do something spectacular”. And it turns out that the answer is both yes and no.

The technology we had then was pretty powerful, and that has only increased in the past 15 years. We had software that let us leverage the power of the internet, and reasonable processing power in our computers. The Palm Pilot had already made mobile a possibility as well. So the technology was no longer a barrier, even then.

And what amazing developments we have seen! The ability to create rendered worlds accessible through a dedicated application and now just a browser is truly an impressive capability. Regardless of whether we overestimated the value proposition, it is still quite the technology feat. And similarly, the ability to communicate via voice and video allows us to connect people in ways once only dreamed of.

We also have rich new ways to interact from microblogs to wikis (collaborative documents). These capabilities are improved by transcending proximity and synchronicity. We can work together without worrying about where the solution is hosted, or where our colleagues are located. Social media allow us to tap into the power of people working together.

The improvements in mobile capabilities are also worth noting. We have gone from hype to hyphens, where a limited monochrome handheld has given way to powerful high-resolution, full-color, multi-channel, always-connected, sensor-rich devices. We can pretty much deliver anything anywhere we want, and that fulfills Arthur C. Clarke's famous proposition that any sufficiently advanced technology is indistinguishable from magic.

Coupled with our technological improvements are advances in our understanding of how we think, work, and learn. We now have recognition about how we act in the world, about how we work with others, and how we best learn. We have information age understandings that illustrate why industrial age methods are not appropriate.

It is not truly new, but reaching mainstream awareness in the last decade and more is the recognition that the model of our thinking as formal and logical is being updated. While we can work in such ways, it is the exception rather than the rule. Such thinking is effortful and it turns out both that we avoid it and there is a limit to how much deep thinking one can do in a day. Instead, we use our intuition beyond where we should, and while this is generally okay, it helps to understand our limitations and design around them.

There is also a spreading awareness of how much our thinking is externalized in the world, and how much we use technology to support us being effective. We have recognized the power of external support for thinking, through tools such as checklists and wizards. We do this pretty naturally, and the benefits from good design of technology greatly facilitate our ability to think.

There is also recognition that the model of individual innovation is broken, and that working together is far superior to working alone. The notion of the lone genius disappearing and coming back with the answer has been replaced by iterations on top of previous work by teams. When people work together in effective ways, in a supportive environment, the outcomes will be better. While this is not easy to effect in many circumstances, we know the practices and culture elements we need, and it is our commitment to get there, not our understanding, that is the barrier.

Finally, our approaches to learning are better informed now. We know that being emotionally engaged is a valued component in moving to learning experience design. We understand the role of models in supporting more flexible performance. We also have evidence of the value of performing in context. It is not news that information dump and knowledge test do not lead to meaningful skill acquisition, and it is increasingly clear that meaningful practice can. It is also increasingly clear that, as things move faster, meaningful skills – the ability to make better decisions – are what is going to provide the sustainable differentiator for organizations.

So imagine my dismay in finding that the approaches we are using in organizations are largely still rooted in approaches from yesteryear. While we have had rich technology opportunities to combine with our enlightened understanding, that is not what we are seeing. What we see is still expectations that it is done in-the-head, top-down, with information dump and meaningless assessment that is not tied to organizational outcomes. And while it is not working, demonstrably, there seems little impetus to change.

Truly, there has been little change in our underlying models in 15 years. While the technology is flashier, the buzz words have mutated, and some of the faces have changed, we are still following myths like learning styles and generational differences, we are still using ‘spray and pray’ methods in learning, we are still not taking on performance support and social learning, and perhaps most distressingly, we are still not measuring what matters.

Sure, the reasons are complex. There are lots of examples of the old approaches, the tools and practices are aligned with bad learning practices, the shared metrics reflect efficiency instead of effectiveness, … the list goes on. Yet a learning & development (L&D) unit unengaged with the business units it supports is not sustainable, and consequently the lack of change is unjustifiable.

And the need is now more than ever. The rate of change is increasing, and organizations now need not just to be effective; they have to become agile. There is no longer time to plan, prepare, and execute; the need is to continually adapt. Organizations need to learn faster than the competition.

The opportunities are big. The critical component for organizations to thrive is to couple optimal execution (the result of training and performance support) with continual innovation (which does not come from training). Instead, imagine an L&D unit that is working with business units to drive interventions that affect key KPIs. Consider an L&D unit that is responsible for facilitating the interactions that are leading to new solutions, new products and services, and better relationships with customers. That is the L&D we need to see!

The path forward is not easy but it is systematic and doable. A vision of a 'performance ecosystem' – a rich suite of tools to support success that surround the performer and are aligned with how they think, work, and learn – provides an endpoint to start towards. Every organization's path will be different, but a good start is to start doing formal learning right, begin looking at performance support, and commence working on the social media infrastructure.

An associated focus is building a meaningful infrastructure (hint: one all-singing, all-dancing LMS is not the answer). A strategy to get there is a companion effort. And, ultimately, a learning culture will be needed. Yet these are not just necessary components for L&D; they are the necessary components for a successful organization, one agile enough to adapt to the increasing rate of change we are facing.

And here is the first step: L&D has to become a learning organization. Mantras like ‘work out loud’, ‘fail fast’, and ‘reflect’ have to become part of the L&D culture. L&D has to start experimenting and learning from the experiments. Let us ensure that the past 15 years are a hibernation we emerge from, not the beginning of the end.

Here’s to change for the better.  May 2015 be the  best year yet!

Quinn-Thalheimer: Tools, ADDIE, and Limitations on Design

23 December 2014 by Clark 2 Comments

A few months back, the esteemed Dr. Will Thalheimer encouraged me to join him in a blog dialog, and we posted the first one, on who L&D has responsibility to. And while we took the content seriously, I can't say our approach was similarly serious. We decided to continue, and here's the second in the series, this time trying to look at what might be hindering the opportunity for design to get better. Again, a serious convo leavened with a somewhat demented touch:

Clark:

Will, we've suffered Fear and Loathing on the Exhibition Floor at the state of the elearning industry before, but I think it's worth looking at some causes and maybe even some remedies. What is the root cause of our suffering? I'll suggest it's not massive consumption of heinous chemicals; instead, I think we might want to look to our tools and methods.

For instance, rapid elearning tools make it easy to take PPTs and PDFs, add a quiz, and toss the resulting knowledge dump and test over to the LMS, to lead to no impact on the organization. Oh, the horror! On the other hand, processes like ADDIE make it easy to take a waterfall approach to elearning, mistakenly trusting that 'if you include the elements, it is good' without understanding the nuances of what makes the elements work. Where do you see the devil in the details?

Will:

Clark my friend, you ask tough questions! This one gives me Panic, creeping up my spine like the first rising vibes of an acid frenzy. First, just to be precise—because that's what us research pedants do—if this fear and loathing stayed in Vegas, it might be okay, but as we've commiserated before, it's also in Orlando, San Francisco, Chicago, Boston, San Antonio, Alexandria, and Saratoga Springs. What are the causes of our debauchery? I once made a list—all the leverage points that prompt us to do what we do in the workplace learning-and-performance field.

First, before I harp on the points of darkness, let me twist my head 360 and defend ADDIE. To me, ADDIE is just a project-management tool. It's an empty baseball dugout. We could add high-schoolers, Poughkeepsie State freshmen, or the 2014 Red Sox and we'd create terrible results. Alternatively, we could add World Series champions to the dugout and create something beautiful and effective. Yes, we often use ADDIE stupidly, as a linear checklist, without truly doing good E-valuation, without really insisting on effectiveness, but this recklessness, I don't think, is hardwired into the ADDIE framework—except maybe the linear, non-iterative connotation that only a minor-leaguer would value. I'm open to being wrong—iterate me!

Clark:

Your defense of ADDIE is admirable, but is the fact that it's misused perhaps reason enough to dismiss it? If your tool makes it easy to lead you astray, like the alluring temptation of a forgetful haze, is it perhaps better to toss it in a bowl and torch it than to fight it? Wouldn't the Successive Approximation Model be a better formulation to guide design?

Certainly the user experience field, which parallels ours in many ways and leads in some, has moved to iterative approaches specifically to align efforts with demonstrably successful approaches. Similarly, I get 'the fear' and worry about our tools. Like the demon rum, the temptation to do what is easy with certain tools may serve as a barrier to a more effective application of their inherent capability. While you can do good things with bad tools (and vice versa), perhaps it's the garden path we too easily tread, and we end up on the rocks. Not that I have a clear idea (and no, it's not the ether) of how tools would be configured to more closely support meaningful processing and application, but it's arguably a collection worth assembling. Like the bats that have suddenly appeared…

Will:

I'm in complete agreement that we need to avoid models that send the wrong messages. One thing most people don't understand about human behavior is that we humans are almost all reactive—only proactive in bits and spurts. For this discussion, this has meaning because many of our models, many of our tools, and many of our traditions generate cues that trigger the wrong thinking and the wrong actions in us workplace learning-and-performance professionals. Let's get ADDIE out of the way so we can talk about these other treacherous triggers. I will stipulate that ADDIE does tend to send the message that instructional design should take a linear, non-iterative approach. But what's more salient about ADDIE than linearity and non-iteration is that we ought to engage in Analysis, Design, Development, Implementation, and Evaluation. Those aren't bad messages to send. It's worth an empirical test to determine whether ADDIE, if well taught, would automatically trigger linear non-iteration. It just might. Yet, even if it did, would the cost of this poor messaging overshadow the benefit of the beneficial ADDIE triggers? It's a good debate. And I commend those folks—like our comrade Michael Allen—for pointing out the potential for danger with ADDIE. Clark, I'll let you expound on rapid authoring tools, but I'm sure we're in agreement there. They seem to push us to think wrongly about instructional design.

Clark:

I spent a lot of time looking at design methods across different areas – software engineering, architecture, industrial design, graphic design, the list goes on – as a way to look for the best in design (just as I've looked across engagement disciplines, learning approaches, and more; I can be kinda, er, obsessive). I found that some folks have 3-step models, some 4, some 5. There's nothing magic about ADDIE as 'the' five steps (though having *a* structure is of course a good idea). I also looked at interface design, which has arguably the most alignment with what elearning design is about, and they've avoided some serious side effects by focusing on models that put the important elements up front: they talk about participatory design, and situated design, and iterative design as the focus, not the content of the steps. They have steps, but the focus is on an evaluative design process. I'd argue that's your empirical design (that, or the fumes are getting to me). So I think the way you present the model does influence the implementation. If advertising has moved from fear motivation to aspirational motivation (cf. Sachs' Winning the Story Wars), so too might we want to focus on the inspirations.

Will:

Yes, let's get back to tools. Here's a pet peeve of mine. None of our authoring tools—as far as I can tell—prompt instructional designers to utilize the spacing effect or subscription learning. Indeed, most of them encourage—through subconscious triggering—a learning-as-an-event mindset.

For our readers who haven't heard of the spacing effect, it is one of the most robust findings in the learning research. It shows that repetitions that are spaced more widely in time support learners in remembering. Subscription learning is the idea that we can provide learners with learning events of very short duration (less than 5 or 10 minutes), and thread those events over time, preferably utilizing the spacing effect.

Do you see the same thing with these tools—that they push us to see learning as a longer-than-necessary bong hit, when tiny puffs might work better?
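
(An aside from me, to make Will's 'threading events over time' concrete: below is a minimal sketch of a subscription-learning scheduler with expanding intervals. The gap sequence is an assumption for illustration, not a research-validated spacing schedule.)

```python
from datetime import date, timedelta

# Expanding gaps (in days) between repetitions of the same short lesson.
# These particular numbers are illustrative assumptions, not prescription.
GAPS = [1, 3, 7, 14, 30]

def subscription_schedule(lessons, start):
    """Thread brief (sub-10-minute) lessons over time, each repeated
    at widening intervals, instead of one long learning event."""
    events = []
    for offset, lesson in enumerate(lessons):
        day = start + timedelta(days=offset)  # stagger first exposures
        events.append((day, lesson))
        for gap in GAPS:
            day += timedelta(days=gap)
            events.append((day, lesson))
    return sorted(events)

for when, what in subscription_schedule(
        ["handling objections", "needs analysis"], date(2015, 1, 5)):
    print(when.isoformat(), what)
```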

Clark:

Now we're into some good stuff! Yes, absolutely; our tools have largely focused on the event model, and made it easy to do simple assessments. Not simple good assessments, just simple ones. It's as if they think designers don't know what they need. And, as the success of our colleague Cammy Bean's book The Accidental Instructional Designer shows, they may be right. Yet I'd rather have a power tool that's incrementally explorable, but scaffolds good learning, than one that ceilings out just when we're getting to somewhere interesting. Where are the templates for spaced learning, as you aptly point out? Where are the tools to make two-step assessments (first tell us which is right, then why it's right, as Tom Reeves has pointed us to)? Where are more branching-scenario tools? They tend to hover at the top end of some tools, unused. I guess what I'm saying is that the tools aren't helping us lift our game, and while we shouldn't blame the tools, tools that pointed the right way would help. And we need it (and a drink!).
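
(Another aside: a minimal sketch of what a two-step assessment item might look like as a structure, with feedback for every option; first ask which action is right, then ask why. The question content is invented for illustration.)

```python
# Two-step assessment: step one asks which action is right; step two
# asks why it's right. Every option carries its own feedback.
two_step_item = {
    "stem": "The demo crashes five minutes before the client call. What first?",
    "which": {
        "options": {
            "a": ("Restart everything and hope.", "Hope isn't a diagnosis."),
            "b": ("Reproduce the crash and read the error.", "Right: evidence first."),
            "c": ("Cancel the call.", "Premature; you don't know the scope yet."),
        },
        "answer": "b",
    },
    "why": {
        "options": {
            "a": ("Because errors always recur.", "They don't; that's the misconception."),
            "b": ("Because the fix should follow the evidence.", "Right: that's the model."),
        },
        "answer": "b",
    },
}
```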

Will:

Should we blame the toolmakers then? Or how about blaming ourselves as thought leaders? Perhaps we've failed to persuade! Now we're on to fear and self-loathing… Help me Clark! Or, here's another idea: how about you and I raise $5 million in venture capital and we'll build our own tool? Seriously, it's a sad sign about the state of the workplace learning market that no one has filled the need. It says to me that (1) either the vast cadre of professionals don't really understand the value, or (2) the capitalists who might fund such a venture don't think the vast cadre really understands the value, or (3) the vast cadre is so unsuccessful in persuading their own stakeholders that truth about effectiveness doesn't really matter. When we get our tool built, how about we call it Vastcadre? Help me Clark! Kent you help me Clark? Please get this discussion back on track… What else have you seen that keeps us ineffective?

Clark:

Gotta hand it to Michael Allen, putting his money where his mouth is, and building ZebraZapps. Whether that's the answer is a topic for another day. Or night. Or… So what else keeps us ineffective? I'll suggest that we're focusing on the wrong things. In addition to our design processes and our tools, we're not measuring the right things. If we're focused on how much it costs per bum in seat per hour, we're missing the point. We should be measuring the impact of our learning. It's about whether we're decreasing sales times, increasing sales success, solving problems faster, raising customer satisfaction. If we look at what we're trying to impact, then we're going to check to see if our approaches are working, and we'll get to more effective methods. We've got to cut through the haze and smoke (open up what window, sucker, and let some air into this room), and start focusing with heightened awareness on moving some needles.

So there you have it.  Should we continue our wayward ways?

Why L&D?

17 December 2014 by Clark 3 Comments

One of the concerns I hear is whether L&D still has a role.  The litany is  that  they’re so far out of touch with their organization, and science, that it’s probably  better to let them die an unnatural death than to try to save them. The prevailing attitude of this extreme view is that the Enterprise Social Network is the natural successor to the LMS, and it’s going to come from operations or IT rather than L&D.  And, given that I’m on record suggesting that we revolutionize L&D rather than ignoring it, it makes sense to justify why.  And while I’ve had other arguments, a really good argument comes from my thesis advisor, Don Norman.

Don's on a new mission, something he calls DesignX, which is scaling up design processes to deal with "complex socio-technological systems". And he recently wrote an article about why DesignX that makes a good case for L&D as well. Before I get there, however, I want to point out two other facets of his argument.

The first is that often design has to go beyond science. That is, while you use science when you can, when you can't you use inferences from theory, intuition, and more to fill in the gaps, which you hope you'll find out later (based upon subsequent science, or your own data) was the right choice. I've often had to do this in my designs, where, for instance, I think research hasn't gone quite far enough in understanding engagement. I'm not in a research position as of now, so I can't do the research myself, but I continue to look at what can be useful. And this is true of moving L&D forward: while we have some good directions and examples, we're still ahead of documented research. He points out that system science and service thinking are science-based, but suggests design needs to come in beyond those approaches. To the extent L&D can, it should draw from science, but also theory, and keep moving forward regardless.

His other important point, to me, is that he is talking about systems. He points out that design as a craft works well on simple areas, but where he wants to scale design is to the level of systemic solutions. A noble goal, and here too I think this is an approach L&D needs to consider. We have to go beyond point solutions – training, job aids, etc. – to performance ecosystems, and this won't come without a different mindset.

Perhaps the most interesting point, though, the one that triggered this post, was on why designers are needed. His point is that others focus on efficiency and effectiveness, but he argued that designers have empathy for the users as well. And I think this is really important. As I used to say to the budding software engineers I was teaching interface design to: "don't trust your intuition, you don't think like normal people". And similarly, the reason I want L&D in the equation is that they (should) be the ones who really understand how we think, work, and learn, and consequently they should be the ones facilitating performance and development. It takes an empathy with users to facilitate them through change, to help them deal with the fears and anxieties of new systems, and to understand what a good learning culture is and help foster it.

Who else would you want guiding an organization in achieving effectiveness in a humane way? So Don's provided, to me, a good point on why we might still want L&D (well, P&D really ;) in the organization. Well, as long as they're also addressing the bigger picture and not just pushing info dump and knowledge test. Does this make sense to you?

#itashare #revolutionizelnd
