Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

31 December 2014

Reflections on 15 years

Clark @ 7:32 AM

For Inside Learning & Technologies’ 50th edition, a number of us were asked to provide reflections on what has changed over the past 15 years. This was pretty much the period in which I’d returned to the US and taken up with what was effectively a startup, which led to my life as a consultant. As an end-of-year piece, I have permission to post that article here:

15 years ago, I had just taken a step away from academia and government-sponsored initiatives to a new position leading a team in what was effectively a startup. I was excited about the prospect of taking the latest learning science to the needs of the corporate world. My thoughts were along the lines of “here, where we have money for meaningful initiatives, surely we can do something spectacular”. And it turns out that the answer is both yes and no.

The technology we had then was pretty powerful, and that has only increased in the past 15 years. We had software that let us leverage the power of the internet, and reasonable processing power in our computers. The Palm Pilot had already made mobile a possibility as well. So the technology was no longer a barrier, even then.

And what amazing developments we have seen! The ability to create rendered worlds accessible through a dedicated application and now just a browser is truly an impressive capability. Regardless of whether we overestimated the value proposition, it is still quite the technology feat. And similarly, the ability to communicate via voice and video allows us to connect people in ways once only dreamed of.

We also have rich new ways to interact from microblogs to wikis (collaborative documents). These capabilities are improved by transcending proximity and synchronicity. We can work together without worrying about where the solution is hosted, or where our colleagues are located. Social media allow us to tap into the power of people working together.

The improvements in mobile capabilities are also worth noting. We have gone from hype to hyphens, where a limited monochrome handheld has given way to powerful high-resolution full-color multi-channel always-connected sensor-rich devices. We can pretty much deliver anything anywhere we want, and that fulfills Arthur C. Clarke’s famous proposition that any sufficiently advanced technology is indistinguishable from magic.

Coupled with our technological improvements are advances in our understanding of how we think, work, and learn. We now have recognition about how we act in the world, about how we work with others, and how we best learn. We have information age understandings that illustrate why industrial age methods are not appropriate.

It is not truly new, but reaching mainstream awareness in the last decade and more is the recognition that the model of our thinking as formal and logical is being updated. While we can work in such ways, it is the exception rather than the rule. Such thinking is effortful, and it turns out both that we avoid it and that there is a limit to how much deep thinking one can do in a day. Instead, we use our intuition beyond where we should, and while this is generally okay, it helps to understand our limitations and design around them.

There is also a spreading awareness of how much our thinking is externalized in the world, and how much we use technology to support us being effective. We have recognized the power of external support for thinking, through tools such as checklists and wizards. We do this pretty naturally, and the benefits from good design of technology greatly facilitate our ability to think.

There is also recognition that the model of individual innovation is broken, and that working together is far superior to working alone. The notion of the lone genius disappearing and coming back with the answer has been replaced by teams iterating on top of previous work. When people work together in effective ways, in a supportive environment, the outcomes will be better. While this is not easy to effect in many circumstances, we know the practices and culture elements we need; the barrier is our commitment to getting there, not our understanding.

Finally, our approaches to learning are better informed now. We know that emotional engagement is a valued component in moving to learning experience design. We understand the role of models in supporting more flexible performance. We also have evidence of the value of performing in context. It is not news that information dump and knowledge test do not lead to meaningful skill acquisition, and it is increasingly clear that meaningful practice can. It is also increasingly clear that, as things move faster, meaningful skills – the ability to make better decisions – are what will provide the sustainable differentiator for organizations.

So imagine my dismay in finding that the approaches we are using in organizations are largely still rooted in approaches from yesteryear. While we have had rich technology opportunities to combine with our enlightened understanding, that is not what we are seeing. What we see is still expectations that it is done in-the-head, top-down, with information dump and meaningless assessment that is not tied to organizational outcomes. And while it is not working, demonstrably, there seems little impetus to change.

Truly, there has been little change in our underlying models in 15 years. While the technology is flashier, the buzz words have mutated, and some of the faces have changed, we are still following myths like learning styles and generational differences, we are still using ‘spray and pray’ methods in learning, we are still not taking on performance support and social learning, and perhaps most distressingly, we are still not measuring what matters.

Sure, the reasons are complex. There are lots of examples of the old approaches, the tools and practices are aligned with bad learning practices, the shared metrics reflect efficiency instead of effectiveness, … the list goes on. Yet a learning & development (L&D) unit unengaged with the business units it supports is not sustainable, and consequently the lack of change is unjustifiable.

And the need is greater now than ever. The rate of change is increasing, and organizations now need not just to be effective; they have to become agile. There is no longer time to plan, prepare, and execute; the need is to continually adapt. Organizations need to learn faster than the competition.

The opportunities are big. The critical component for organizations to thrive is to couple optimal execution (the result of training and performance support) with continual innovation (which does not come from training). Imagine an L&D unit that is working with business units to drive interventions that move key performance indicators. Consider an L&D unit that is responsible for facilitating the interactions that lead to new solutions, new products and services, and better relationships with customers. That is the L&D we need to see!

The path forward is not easy, but it is systematic and doable. A vision of a ‘performance ecosystem’ – a rich suite of tools that surround the performer, support success, and are aligned with how we think, work, and learn – provides an endpoint to work towards. Every organization’s path will be different, but a good first step is to start doing formal learning right, begin looking at performance support, and commence working on the social media infrastructure.

An associated focus is building a meaningful infrastructure (hint: one all-singing, all-dancing LMS is not the answer). A strategy to get there is a companion effort. And, ultimately, a learning culture will be necessary. These are not just necessary components for L&D; they are the necessary components for a successful organization, one agile enough to adapt to the increasing rate of change we are facing.

And here is the first step: L&D has to become a learning organization. Mantras like ‘work out loud’, ‘fail fast’, and ‘reflect’ have to become part of the L&D culture. L&D has to start experimenting and learning from the experiments. Let us ensure that the past 15 years are a hibernation we emerge from, not the beginning of the end.

Here’s to change for the better.  May 2015 be the best year yet!

23 December 2014

Quinn-Thalheimer: Tools, ADDIE, and Limitations on Design

Clark @ 8:24 AM

A few months back, the esteemed Dr. Will Thalheimer encouraged me to join him in a blog dialog, and we posted the first one on who L&D had responsibility to. And while we took the content seriously, I can’t say our approach was similarly serious. We decided to continue, and here’s the second in the series, this time trying to look at what might be hindering the opportunity for design to get better. And again, a serious convo leavened with a somewhat demented touch:

Clark:

Will, we’ve suffered Fear and Loathing on the Exhibition Floor at the state of the elearning industry before, but I think it’s worth looking at some causes and maybe even some remedies.  What is the root cause of our suffering?  I’ll suggest it’s not massive consumption of heinous chemicals, but instead think that we might want to look to our tools and methods.

For instance, rapid elearning tools make it easy to take PPTs and PDFs, add a quiz, and toss the resulting knowledge test and dump over to the LMS to lead to no impact on the organization.  Oh, the horror!  On the other hand, processes like ADDIE make it easy to take a waterfall approach to elearning, mistakenly trusting that ‘if you include the elements, it is good’ without understanding the nuances of what makes the elements work.  Where do you see the devil in the details?

Will:

Clark my friend, you ask tough questions! This one gives me Panic, creeping up my spine like the first rising vibes of an acid frenzy. First, just to be precise—because that’s what us research pedants do—if this fear and loathing stayed in Vegas, it might be okay, but as we’ve commiserated before, it’s also in Orlando, San Francisco, Chicago, Boston, San Antonio, Alexandria, and Saratoga Springs. What are the causes of our debauchery? I once made a list—all the leverage points that prompt us to do what we do in the workplace learning-and-performance field.

First, before I harp on the points of darkness, let me twist my head 360 and defend ADDIE. To me, ADDIE is just a project-management tool. It’s an empty baseball dugout. We can add high-schoolers, Poughkeepsie State freshmen, or the 2014 Red Sox and we’d create terrible results. Alternatively, we could add World-Series champions to the dugout and create something beautiful and effective. Yes, we often use ADDIE stupidly, as a linear checklist, without truly doing good E-valuation, without really insisting on effectiveness, but this recklessness, I don’t think, is hardwired into the ADDIE framework—except maybe the linear, non-iterative connotation that only a minor-leaguer would value. I’m open to being wrong—iterate me!

Clark:

Your defense of ADDIE is admirable, but is the fact that it’s misused perhaps reason enough to dismiss it? If your tool makes it easy to lead you astray, like the alluring temptation of a forgetful haze, is it perhaps better to toss it in a bowl and torch it rather than fight it? Wouldn’t the Successive Approximation Method be a better formulation to guide design?

Certainly the user experience field, which parallels ours in many ways and leads in some, has moved to iterative approaches specifically to help align efforts to demonstrably successful approaches. Similarly, I get ‘the fear’ and worry about our tools. Like the demon rum, the temptations to do what is easy with certain tools may serve as a barrier to a more effective application of the inherent capability. While you can do good things with bad tools (and vice versa), perhaps it’s the garden path we too easily tread and end up on the rocks. Not that I have a clear idea (and no, it’s not the ether) of how tools would be configured to more closely support meaningful processing and application, but it’s arguably a collection worth assembling. Like the bats that have suddenly appeared…

Will:

I’m in complete agreement that we need to avoid models that send the wrong messages. One thing most people don’t understand about human behavior is that we humans are almost all reactive—only proactive in bits and spurts. For this discussion, this has meaning because many of our models, many of our tools, and many of our traditions generate cues that trigger the wrong thinking and the wrong actions in us workplace learning-and-performance professionals. Let’s get ADDIE out of the way so we can talk about these other treacherous triggers. I will stipulate that ADDIE does tend to send the message that instructional design should take a linear, non-iterative approach. But what’s more salient about ADDIE than linearity and non-iteration is that we ought to engage in Analysis, Design, Development, Implementation, and Evaluation. Those aren’t bad messages to send. It’s worth an empirical test to determine whether ADDIE, if well taught, would automatically trigger linear non-iteration. It just might. Yet, even if it did, would the cost of this poor messaging overshadow the benefit of the beneficial ADDIE triggers? It’s a good debate. And I commend those folks—like our comrade Michael Allen—for pointing out the potential for danger with ADDIE. Clark, I’ll let you expound on rapid authoring tools, but I’m sure we’re in agreement there. They seem to push us to think wrongly about instructional design.

Clark:

I spent a lot of time looking at design methods across different areas – software engineering, architecture, industrial design, graphic design, the list goes on – as a way to look for the best in design (just as I’ve looked across engagement disciplines, learning approaches, and more; I can be kinda, er, obsessive). I found that some folks have 3-step models, some 4, some 5. There’s nothing magic about ADDIE as ‘the’ five steps (though having *a* structure is of course a good idea). I also looked at interface design, which has arguably the most alignment with what elearning design is about, and they’ve avoided some serious side effects by focusing on models that put the important elements up front: they talk about participatory design, situated design, and iterative design as the focus, not the content of the steps. They have steps, but the focus is on an evaluative design process. I’d argue that’s your empirical design (that or the fumes are getting to me). So I think the way you present the model does influence the implementation. If advertising has moved from fear motivation to aspirational motivation (cf. Sachs’ Winning the Story Wars), so too might we want to focus on the inspirations.

Will:

Yes, let’s get back to tools. Here’s a pet peeve of mine. None of our authoring tools—as far as I can tell—prompt instructional designers to utilize the spacing effect or subscription learning. Indeed, most of them encourage—through subconscious triggering—a learning-as-an-event mindset.

For our readers who haven’t heard of the spacing effect, it is one of the most robust findings in the learning research. It shows that repetitions that are spaced more widely in time support learners in remembering. Subscription learning is the idea that we can provide learners with learning events of very short duration (less than 5 or 10 minutes), and thread those events over time, preferably utilizing the spacing effect.
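In scheduling terms, spacing is easy to operationalize. Here’s a sketch of an expanding-interval schedule for a thread of short subscription-learning events (the first gap and the doubling factor are hypothetical illustration, not values prescribed by the research):

```python
from datetime import date, timedelta

def spaced_schedule(start, repetitions, first_gap_days=1, factor=2.0):
    """Return dates for spaced repetitions with expanding gaps.

    Illustrative only: the initial gap and growth factor are
    hypothetical parameters, not research-prescribed values.
    """
    dates = []
    gap = float(first_gap_days)
    current = start
    for _ in range(repetitions):
        # Each repetition lands after a gap roughly double the last one
        current = current + timedelta(days=round(gap))
        dates.append(current)
        gap *= factor
    return dates

# Five short "subscription learning" touchpoints after a 1 December event
schedule = spaced_schedule(date(2014, 12, 1), 5)
```

The point isn’t the particular numbers; it’s that a tool could generate and deliver such a thread automatically, rather than treating learning as a single event.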

Do you see the same thing with these tools—that they push us to see learning as a longer-than-necessary bong hit, when tiny puffs might work better?

Clark:

Now we’re into some good stuff! Yes, absolutely; our tools have largely focused on the event model, and made it easy to do simple assessments. Not simple good assessments, just simple ones. It’s as if they think designers don’t know what they need. And, as the success of our colleague Cammy Bean’s book The Accidental Instructional Designer shows, they may be right. Yet I’d rather have a power tool that’s incrementally explorable but scaffolds good learning than one that ceilings out just when we’re getting somewhere interesting. Where are the templates for spaced learning, as you aptly point out? Where are the tools to make two-step assessments (first tell us which is right, then why it’s right, as Tom Reeves has pointed us to)? Where are more branching-scenario tools? They tend to hover at the top end of some tools, unused. I guess what I’m saying is that the tools aren’t helping us lift our game, and while we shouldn’t blame the tools, tools that pointed the right way would help. And we need it (and a drink!).
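(As a parenthetical sketch for the curious: a two-step item of the sort just described might be structured like this. The field names and scoring are my own illustration, not any authoring tool’s format.)

```python
# A two-step assessment item: the learner first picks the answer,
# then picks the rationale for it. Both steps count. Illustrative
# structure only -- not a real tool's data model.

item = {
    "question": "Which approach best builds a new skill?",
    "options": ["Information dump plus quiz", "Spaced practice in context"],
    "answer": 1,
    "rationales": [
        "Seeing the content once is enough to change behavior.",
        "Retrieving and applying knowledge in context builds durable skill.",
    ],
    "why": 1,
}

def score_two_step(item, chosen_answer, chosen_rationale):
    """Score both steps: one point for the what, one for the why."""
    return int(chosen_answer == item["answer"]) + int(chosen_rationale == item["why"])
```

Scoring the rationale separately surfaces learners who guess the right answer for the wrong reason, which a one-step multiple-choice item hides.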

Will:

Should we blame the toolmakers then? Or how about blaming ourselves as thought leaders? Perhaps we’ve failed to persuade! Now we’re on to fear and self-loathing… Help me Clark! Or, here’s another idea. How about you and I raise $5 million in venture capital and we’ll build our own tool? Seriously, it’s a sad sign about the state of the workplace learning market that no one has filled the need. Says to me that either (1) the vast cadre of professionals don’t really understand the value, or (2) the capitalists who might fund such a venture don’t think the vast cadre really understand the value, or (3) the vast cadre are so unsuccessful in persuading their own stakeholders that the truth about effectiveness doesn’t really matter. When we get our tool built, how about we call it Vastcadre? Help me Clark! Kent you help me Clark? Please get this discussion back on track… What else have you seen that keeps us ineffective?

Clark:

Gotta hand it to Michael Allen, putting his money where his mouth is, and building ZebraZapps. Whether that’s the answer is a topic for another day. Or night. Or… So what else keeps us ineffective? I’ll suggest that we’re focusing on the wrong things. In addition to our design processes and our tools, we’re not measuring the right things. If we’re focused on how much it costs per bum in seat per hour, we’re missing the point. We should be measuring the impact of our learning. It’s about whether we’re decreasing sales times, increasing sales success, solving problems faster, raising customer satisfaction. If we look at what we’re trying to impact, then we’re going to check to see if our approaches are working, and we’ll get to more effective methods. We’ve got to cut through the haze and smoke (open up the window, sucker, and let some air into this room), and start focusing with heightened awareness on moving some needles.

So there you have it.  Should we continue our wayward ways?

17 December 2014

Why L&D?

Clark @ 8:33 AM

One of the concerns I hear is whether L&D still has a role. The litany is that they’re so far out of touch with their organizations, and with the science, that it’s probably better to let them die an unnatural death than to try to save them. The prevailing attitude of this extreme view is that the Enterprise Social Network is the natural successor to the LMS, and it’s going to come from operations or IT rather than L&D. And, given that I’m on record suggesting that we revolutionize L&D rather than ignoring it, it makes sense to justify why. And while I’ve had other arguments, a really good one comes from my thesis advisor, Don Norman.

Don’s on a new mission, something he calls DesignX, which is scaling up design processes to deal with “complex socio-technological systems”. And he recently wrote an article about why DesignX that also makes a good case for L&D. Before I get there, however, I want to point out two other facets of his argument.

The first is that often design has to go beyond science. That is, while you use science when you can, when you can’t you use theory inferences, intuition, and more to fill in the gaps, which you hope you’ll find out later (based upon later science, or your own data) was the right choice.  I’ve often had to do this in my designs, where, for instance, I think research hasn’t gone quite far enough in understanding engagement.  I’m not in a research position as of now, so I can’t do the research myself, but I continue to look at what can be useful.  And this is true of moving L&D forward. While we have some good directions and examples, we’re still ahead of documented research.  He points out that system science and service thinking are science based, but suggests design needs to come in beyond those approaches.   To the extent L&D can, it should draw from science, but also theory and keep moving forward regardless.

His other important point, to me, is that he is talking about systems. He points out that design as a craft works well on simple problems, but where he wants to scale design is to the level of systemic solutions. A noble goal, and here too I think this is an approach L&D needs to consider. We have to go beyond point solutions – training, job aids, etc. – to performance ecosystems, and this won’t come without a different mindset.

Perhaps the most interesting point, the one that triggered this post, however, was about why designers are needed. His point is that others have focused on efficiency and effectiveness, but he argued that designers have empathy for the users as well. And I think this is really important. As I used to say to the budding software engineers I was teaching interface design to: “don’t trust your intuition, you don’t think like normal people”. And similarly, the reason I want L&D in the equation is that they (should) be the ones who really understand how we think, work, and learn, and consequently they should be the ones facilitating performance and development. It takes empathy with users to facilitate them through change, to help them deal with the fears and anxieties of new systems, and to understand what a good learning culture is and help foster it.

Who else would you want guiding an organization in achieving effectiveness in a humane way? So Don’s provided, to me, a good point on why we might still want L&D (well, P&D really ;) in the organization. Well, as long as they’re also addressing the bigger picture and not just pushing info dump and knowledge test. Does this make sense to you?

#itashare #revolutionizelnd

16 December 2014

Challenges in engaging learning

Clark @ 8:05 AM

I’ve been working on moving a team to deeper learning design.  The goal is to practice what I preach, and make sure that the learning design is competency-aligned, activity-based, and model-driven.  Yet, doing it in a pragmatic way.

And this hasn’t been without its challenges. I presented my vision to the team, we worked out a process, and we started coaching the team during development. In retrospect, this wasn’t proactive enough. There were a few other hiccups.

We’re currently engaged in a much tighter cycle of development and revision, and now feel we’re getting close to the level of effectiveness and engagement we need. Whether a) it’s really better, and b) we can replicate it, let alone scale it, is an open question.

At core are a few elements. For one, a rabid focus on what learners are doing is key.  What do they need to be able to do, and what contexts do they need to do it in?

The competency-alignment focus is on the key tasks that they have to do in the workplace, and making sure we’re preparing them across pre-class, in-class, and post-class activities to develop that ability.  A key focus is having them make the decision in the learning experience that they’ll have to make afterward.

I’m also pushing very hard on making sure that there are models behind the decisions.  I’m trying hard to avoid arbitrary categorizations, and find the principles that drove those categorizations.

Note that all this is not easy.  Getting the models is hard when the resources provided don’t include that information.  Avoiding presenting just knowledge and definitions is hard work.  The tools we use make certain interactions easy, and other ones not so easy.  We have to map meaningful decisions into what the tools support.  We end up making  tradeoffs, as do we all.  It’s good, but not as good as it could be.  We’ll get better, but we do want to run in a practical fashion as well.

There are more elements to weave in: layering on some general biz skills is embryonic.  Our use of examples needs to get more systematic.  As does our alignment of learning goal to practice activity.  And we’re struggling to have a slightly less didactic and earnest tone; I haven’t worked hard enough on pushing a bit of humor in, tho’ we are ramping up some exaggeration.  There’s only so much you can focus on at one time.

We’ll be running some student tests next week before presenting to the founder.  Feeling mildly confident that we’ve gotten a decent take on quality learning design with suitable production value, but there is the barrier that the nuances of learning design are subtle. Fingers crossed.

I still believe that, with practice, this becomes habit and easier.  We’ll see.

9 December 2014

My thoughts on tech and training

Clark @ 8:27 AM

The eLearning Guild,  in queuing up interest in their Learning Solutions/Performance Ecosystem conference, asked for some thoughts on the role of technology and training.  And, of course, I obliged.  You can see them here.

In short, I said that technology can augment what we already do, serving to fill in gaps between what we desired and what we could deliver, and it also gave us some transformative capabilities.  That is, we can make the face to face time more effective, extend the learning beyond the classroom, and move the classroom beyond the physical space.

The real key, a theme I find myself thumping more and more often, is that we can’t keep using technology in ineffective ways. We need to use technology in ways that align with how we think, work, and learn. And that’s all too rare. We can do amazing things if we muster the will and resources, do the due diligence on what would be a principled approach, and then do the cycles of development and iteration to get the solution working as it should.

Again, the full thoughts can be found on their blog.

 

4 December 2014

Getting Models

Clark @ 8:25 AM

In trying to shift from a traditional elearning approach to a more enlightened one, a deeper one, you are really talking about viewing things differently, which is non-trivial. And then, even if you know you want to do better, you still need some associated skills. Take, for example, models.

I’ve argued before that models are a better basis for action, for making better decisions. Arbitrary knowledge is hard to recollect, and consequently brittle. We need a coherent foundation upon which to base our decisions, and arbitrary information doesn’t help. If I see a ‘click to learn more’, for instance, I have a good clue that someone’s presenting arbitrary information. However, as I concluded in the models article, “It’s not always there, nor even easily inferable.” Which is a problem that I’ve been wrestling with. So here’re my interim thoughts.

Others have counseled that not just any Subject Matter Expert (SME) will do. They may be able to teach material with their stories and experience, and they can certainly do the work, but they may not have a conscious model available to guide novices. So I’ve heard that you have to find a capable one. If you don’t, and you don’t have good source material, you’re going to have to do the work yourself. You might be able to find a model in a helpful place like Wikipedia (and please join us in donating to help keep it going), but otherwise you’re going to have to do the hard yards.

Say you’re wrestling with a list of things, like attacks on networks, or impacts on blood pressure.  There is a laundry list of them, and there may seem to be no central order.  So what do you do?  Well, in these cases where I don’t have one, I make one.

For instance, with attacks on networks, the inherent structure of the network provides an overarching framework for vulnerabilities. Networks can be attacked digitally, through password cracking or software vulnerabilities. The data streams can also be hacked, either by physically connecting to wires or by intercepting wireless signals. Socially, you can trick people into doing the wrong things too. Similarly with blood pressure: the nature of the system tells us that constricted or less flexible vessels (e.g. from aging) will increase blood pressure, decreased volume in the system will decrease it, and so on.
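As a rough sketch of what this structure-derived model might look like in practice, here the laundry list of attacks is grouped by the part of the system each one exploits (the category names and the classifier are my own illustration, not an authoritative taxonomy):

```python
# Organize a laundry list of attacks by the structure of the system
# they exploit, rather than as an arbitrary flat list. The grouping
# paraphrases the examples above and is illustrative, not exhaustive.

attack_model = {
    "digital (hosts and software)": ["password cracking", "software vulnerabilities"],
    "data in transit": ["physical wire taps", "wireless interception"],
    "social (people)": ["phishing and other social engineering"],
}

def classify(attack):
    """Return the structural category a given attack falls under."""
    for category, attacks in attack_model.items():
        if attack in attacks:
            return category
    return "unclassified"
```

The framework does the remembering: a learner who holds the three-part structure can place a new attack they have never seen, which a memorized flat list can’t support.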

The point is, I’m using the inherent structure to provide a framework that wasn’t given. Is it more than the minimum?  Yes.  But I’ll argue that if you want the information to be available when necessary, or rather that learners will be able to make the right decisions, this is the most valuable thing you can do. And it might take less effort overall, as you can teach the model and support making good inferences more efficiently than teaching all the use cases.

And is this a sufficient approach?  I can’t say that; I haven’t spent enough time on other content. So at this point treat it like a heuristic.  However, it gives you something you can at least take to a SME and have them critique and improve it (which is easier than trying to extract a model whole-cloth ;).

Now there might also be the case that there just isn’t an organizing principle (I’m willing to concede that, for now…). Then you may simply need to ask your learners to do some meaningful processing on the material. Look, if you’re presenting it, you’re expecting them to remember it. Presenting arbitrary information isn’t going to accomplish that. If they need to remember it, have them process it. Otherwise, why present it at all?

Now, this is only necessary when you’re trying to do formal learning; it might be that you don’t have to get it into folks’ heads and can put it in the world. Do that if you can. But I believe that what will make a bigger difference for learners, for performers, will be the ability to make better decisions. And in our increasingly turbulent times, that will come from models, not rote information. So please, if you’re doing formal learning, do it right, and get the models you need. Beg, borrow, steal, or make them, but get them. Please?
