Learnlets

Clark Quinn’s Learnings about Learning

Search Results for: engagement

Complexity in Learning Design

21 September 2021 by Clark

I recently mentioned that one of the problems with research is that things are more interconnected than we think. This is particularly true with cognitive research. While we can make distinctions that simplify things in useful ways (e.g. the human information processing system model*), the underlying picture is of a more interactive system. Which underpins why it makes sense to talk about Learning Experience Design (LXD) and not just instructional design. We need to accommodate complexity in learning design. (* Which I talk about in Chapter 2 of my learning science book, and in my workshops on the same topic through the Allen Academy.)

We’re recognizing that our cognition is more than just in our head. Marcia Conner, in her book Learn More Now, mentioned how neuropeptides pass information around the body. Similarly, Annie Murphy Paul’s The Extended Mind talks about moving cognition (and learning) into the world. In my Make It Meaningful workshops (online or F2F at DevLearn 19 Oct), I focus on how to address the emotional component of learning. In short, learning is about more than just information dump and knowledge test.

Scientifically, we’re finding there are lots of complex interactions between the current context, our prior experience, and our cognitive architecture. We’re much more ‘situated’ in the moment than the rational beings we’d like to believe we are. Behavioral economics and Daniel Kahneman’s research have made this abundantly clear. We try to avoid the hard mental work using shortcuts that work sometimes, but not others. (Understanding when they work is an important component of this.)

We get good traction from learning science and instructional design approaches, for sure. There are good prescriptions (that we often ignore, for reasons above) about what to do and how. So, we should follow them. However, we need more. Which is why I tout LXD  Strategy! We need to account for complexity in learning design approaches.

For one, our design processes need to be iterative. We’ll make our best first guess, but it won’t be right, and we’ll need to tune. The incorporation of agile approaches, whether SAM or LLAMA or even just iterative ADDIE, reflects this. We need to evaluate and refine our designs to match the fact that our audience is more complex than we thought.

Our design also needs to think about the emotional experience as well as the cognitive experience. We want our design processes to systematically incorporate humor, safety, motivation, and more. Have we tuned the challenge enough, and how will we know?  Have we appropriately incorporated story? Are our graphics aligned or adding to cognitive load? There are lots of elements that factor in.

Our design process has to accommodate SMEs who literally can’t access what they do. It also has to account for learner interests, not just knowledge. We need to know what interim deliverables to produce, what processes to use for evaluation, when we shouldn’t be working solo, and which tools we need. Most importantly, we have to do this in a practical way, under real-world resource constraints.

Which is why we need to address this strategically. Too many design processes are carry-over from industrial approaches: one person, one tool, and a waterfall process. We need to do better. There’s complexity in learning design, both on the part of our learners, and ourselves as designers. Leveraging what we know about cognitive science can provide us with structures and approaches that accommodate these factors. That’s only true, however, if we are aware and actively address it. I’m happy to help, but can only do so if you reach out. (You know how to find me. ;) Here’s to effective and engaging  learning!

Iterating and evaluating

7 September 2021 by Clark

I’ve argued before about the need for evaluation in our work. This occurs summatively, where we’re looking beyond smile sheets to actually determine the impact of our efforts. However, it also should work formatively, where we’re seeing if we’re getting closer. Yet there are some ways in which we go off track. So I want to talk about iterating and evaluating our learning initiatives.

Let’s start by talking about our design processes. The 800 lb gorilla of ADDIE has shifted from a waterfall model to a more iterative approach. Yet it still brings baggage. Of late, more agile and iterative approaches have emerged, not least Michael Allen’s SAM and Megan Torrance’s LLAMA. Agile approaches, where we’re exploring, make more sense when designing for people, with their inherent complexity.

Agile approaches work on the basis of creating, basically, Minimum Viable Products, and then iterating. We evaluate each iteration. That is, we check to see what needs to be improved, and what is good enough. However, when are we done?

In my workshops, when talking about iteration, I like to ask the audience this question. Frequently, the answer is “when we run out of time and money”. That’s an understandable answer, but I maintain it’s the  wrong answer.

If we iterate until we run out of time and money, we don’t know that we’ve actually met our goals. As I explained about social media metrics (and it applies here too), you should be iterating until you achieve the metrics you’ve set. That means you know what you’re trying to do!

Which requires, of course, that you set metrics about what your solution should achieve. That could include usability and engagement (which come before and after, respectively), but most critically ‘impact’. Is this learning initiative solving the problem we designed it to address? Which also means you need to have a discussion of why you’re building it, and how you’ll know it’s working.

Of course, if you’re running out of time and money faster than you’re getting close to your goal, you have to decide whether to relax your standards, or apply for more resources, or abandon your work, or…but at least you’re doing so consciously. Even a conscious compromise is still better than heuristically deciding that, say, three iterations is arbitrarily appropriate.
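The iterate-until-metrics logic above can be sketched in a few lines. (This is just an illustrative sketch: the metric names, thresholds, budget figure, and the stand-in `evaluate` function are all my assumptions, not a prescription.)

```python
# Sketch: iterate until target metrics are met, and treat running out of
# budget as a conscious decision point, not a stopping rule.

def evaluate(design):
    """Stand-in for real formative evaluation (pilots, expert review)."""
    return {"impact": design * 0.2, "usability": design * 0.25}

TARGETS = {"impact": 0.8, "usability": 0.9}  # set these up front!
BUDGET = 10  # iterations we can afford (a proxy for time/money)

design, iteration = 1, 0
while iteration < BUDGET:
    iteration += 1
    scores = evaluate(design)
    if all(scores[m] >= TARGETS[m] for m in TARGETS):
        print(f"Done after {iteration} iterations: targets met.")
        break
    design += 1  # refine: our best next guess
else:
    # Budget exhausted before targets met: now consciously decide to
    # relax standards, seek more resources, or abandon the effort.
    print("Budget exhausted; decide consciously how to proceed.")
```

The point of the `else` branch is exactly the argument above: exhausting the budget doesn’t mean you’re done, it means you have a decision to make.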

I do recognize that this isn’t our current situation, and changing it isn’t easy. We’re still asked to make slide decks look good, or create a course on X, etc. Ultimately, however, our professionalism will ask us to do better. Be ready. Eventually, your CFO should care about the return on your expenditures, and it’ll be nice to have a real answer. So, iterating and evaluating  should  be your long term approach. Right?

Making it Meaningful

31 August 2021 by Clark

I volunteer for our local Community Emergency Response Team (CERT), and have learned lots of worthwhile things. On a call, our local organizer mentioned that she was leading a section of an upcoming train-the-trainers event, and was dreading trying to make it interesting. Of course I opened my big yap and said that’s something I’m focusing on, and offered to help. She took me up on it, and it was a nice case study in making it meaningful.

Now, I have a claim that you can’t give me a topic that I can’t create a game for. I’m now modifying that to ‘you can’t give me a topic I can’t make meaningful’.  She’d mentioned her topic was emergency preparedness, and while she thought it was a dull topic, I was convinced we could do it. I mentioned that the key was making it visceral.

I had personal experience; last summer our neighbor was spreading the rumor that we were going to have to evacuate owing to a fire over the ridge. (Turns out, my neighbor was wrong.) I started running around gathering sleeping bags, coats, dog crate, etc. Clearly, I was thinking about shelter. When I texted m’lady, she asked about passports, birth certificates, etc. Doh!

However, even without that personal example, there’s a clear hook. When I mentioned that, she mentioned that when you’re in a panic, your brain shuts down some and it’s really critical to be prepared. However, she mentioned that someone else was taking that bit, and her real topic was different types of disasters. Yet my example had already got her thinking, and she started talking about different people being familiar with an earthquake (here in California).

I thought of how, when talking with scattered colleagues, they exclaim about how earthquakes are scary, and I remind them that every place has its hazards. In the midwest it could be tornados or floods. On the east coast it’s hurricanes. Etc. The point being that everyone has some experience. Tapping into that, talking about consequences, is a great hook.

That’s the point, really. To get people willing to invest in learning, you have to help people see that they do need it. (Also, that they don’t know it now, and that this experience will change that.) You need to be engaged in making it meaningful!

Again, in my mind learning experience design (LXD) is about the elegant integration of learning science with engagement. You need to understand both. I’ve got a book and a workshop on learning science, and I’ve a workshop at DevLearn on the engagement side. I’ve also got a forthcoming book and an online workshop coming for more on engagement. Stay tuned!

More Marketing Malarkey

10 August 2021 by Clark

As has become all too common, someone decided to point me to some posts for their organization. Apparently, interest was sparked by a previous post of mine where I’d complained about  microlearning. While this one  does a (slightly) better job talking about  microlearning, it is riddled with other problems. So here’s yet another post about  more marketing malarkey.

First, I don’t hate microlearning; there are legitimate reasons to keep content small. It can get rid of the bloat that comes from contentitis, for one. There are solid reasons to err on the side of performance support as well. Most importantly, perhaps, there’s also the benefit of spacing learning to increase the likelihood of it being available. The thing that concerns me is that all these things are different, and take different design approaches.

Others have gone beyond just the two types I mention. One of the posts cited a colleague’s more nuanced presentation about small content, pointing out four different ways to use microlearning (though interestingly, five were cited in the referenced presentation). My problem, in this case, wasn’t the push for microlearning (there were some meaningful distinctions, though no actual mention of how they require different design). Instead, it was the presence of myths.

One of the two posts opened with this statement: “The appetite of our employees is not the same therefore, we must not provide them the same bland food (for thought).” This seems a bit of a mashup. Our employees aren’t the same, so they need different things? That’s personalization, no? However, the conversation goes on to say: “It’s time to put together an appetizing platter and create learning opportunities that are useful and valuable.” Which seems to argue for engagement. Thus, it seems like it’s instead arguing that people need more engaging content. Yes, that’s true too. But what’s that got to do with our employees not having the same appetite? It seems to be swinging towards the digital native myth, that employees now need more engaging things.

This is bolstered by a later quote: “When training becomes overwhelming and creates stress, a bite-sized approach will encourage learning.” If training becomes overwhelming and stressful, it does suggest a redesign. However, my inclination would be to suggest that ramping up the WIIFM and engagement is the solution. A bite-sized approach, by itself, isn’t a solution to engagement. Small wrong or dull content isn’t a solution for dull or wrong content.

This gets worse in the other post. There were two things wrong here. The first one is pretty blatant:

There are numerous resources that suggest our attention spans are shrinking. Some might even claim we now have an average attention span of only 8 seconds, which equals that of a goldfish.

There are, of course, no such resources pointed to. Moreover, the sources that originally proposed this have been debunked. This is actually the ‘cover story’ myth of my recent book on myths! In it, I point out that the attention-span myth came from a misinterpreted study, and that our cognitive architecture doesn’t change that fast. (With citations.) Using this ‘mythtake’ to justify microlearning is just wrong. We’re segueing into tawdry marketing malarkey here.

This isn’t the only problem with this post, however. A second one emerges when there’s an (unjustified) claim that learning should have 3E’s: Entertaining, Enlightening, and Engaging. I do agree with Engaging (per the title of my first book); however, there’s a problem with it, and with the other ones. So, for Entertaining, this is the followup: “advocates the concept of learning through a sequence of smaller, focused modules.” Why is smaller inherently more entertaining? Also, in general, learning doesn’t work as well when it’s just ‘fun’, unless it’s “hard fun”.

Enlightening isn’t any better. I do believe learning should be enlightening, although particularly for organizational learning it should be transformative in terms of enhancing an individual’s ability to  perform. Just being enlightened doesn’t guarantee that. The followup says: “Repetition, practice, and reinforcement can increase knowledge.” Er, yes, but that’s just good design. There’s nothing unique to microlearning about that.

Most importantly, the definition for Engaging is “A program journey can be spaced enough that combats forgetting curve.” That is spacing! Which isn’t a bad thing (see above), but not your typical interpretation of engaging. This is really confused!
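Spacing itself, for what it’s worth, is straightforward to operationalize. A minimal sketch of the idea (the decay constant and the expanding intervals here are illustrative assumptions, not validated parameters):

```python
import math

# Sketch: spaced practice combats the forgetting curve by re-presenting
# content as retention decays. Values below are purely illustrative.

def retention(days_since_review, stability=2.0):
    """Ebbinghaus-style exponential decay: R = e^(-t/S)."""
    return math.exp(-days_since_review / stability)

def expanding_schedule(reviews=5, first_gap=1, factor=2):
    """Review days with expanding gaps: 1, 2, 4, 8, ... days apart."""
    day, gap, days = 0, first_gap, []
    for _ in range(reviews):
        day += gap
        days.append(day)
        gap *= factor
    return days

schedule = expanding_schedule()  # [1, 3, 7, 15, 31]
```

Which is to say: spacing is a scheduling decision, a legitimate design technique in its own right, but it has nothing to do with ‘engaging’ as normally understood.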

Further, I didn’t even need to fully parse these two posts. Even on a superficial examination, they fail the ‘sniff test’. In general, you should be avoiding folks that toss around this sort of fluffy biz buzz, but even more so when they totally confound a reasonable interpretation of these concepts. This is just more marketing malarkey. Caveat emptor.

(Vendors, please please please stop with the under-informed marketing, and present helpful posts. Our industry is already suffering from too many myths. There’s possibly a short-term benefit, however the trend seems to be that people are paying more attention to learning science. Thus, in the long run I reckon it undermines your credibility. While taking them down is fun and hopefully educational, I’d rather be writing about new opportunities, not remedying the old.  If you don’t have enough learning science expertise to do so, I can help: books, workshops, and/or writing and editing services.)


Doing Gamification Wrong

22 June 2021 by Clark

As I’ve said before, I’m not a fan of ‘gamification’. Certainly for formal learning, where I think intrinsic motivation is a better area to focus on than extrinsic. (Yes, there are times it makes sense, like tarting up rote memory development, but it’s under-considered and over-used.) Outside of formal learning, it’s clear that it works in certain places. However, we need to be cautious in considering it a panacea. In a recent instance, I actually think it’s definitely misapplied. So here’s an example of doing gamification wrong.

This came to me via a LinkedIn message where the correspondent pointed me to their recent blog article. (BTW, I don’t usually respond to these, but if I do, you’re going to run the risk that I poke holes. 😈) In the article, they were talking about using gamification to build organizational engagement. Interestingly, even in their own article, they were pointing to other useful directions unknowingly!

The problem, as claimed, is that working remotely can remove engagement. Which is plausible. The suggestion, however, was that gamification was the solution. Which I suggest is a patch on a more fundamental problem. The issue was a daily huddle, and this quote summarizes the problem: “there is zero to little accountability of engagement and participation.” Their solution: add points to these things. Let me suggest that’s wrong.

What facilitates engagement is a sense of purpose and belonging. That is, recognizing that what one does contributes to the unit, and the unit contributes to the organization, and the organization contributes to society. Getting those lined up and clear is a great way to build meaningful engagement. Interestingly, even in the article they quote: “to build true engagement, people often need to feel like they are contributing to something bigger than themselves.” Right! So how does gamification help? That seems to be trying to patch a  lack of purpose. As I’ve argued before, the transformation is not digital first, it’s people first.

They segue off to microlearning, without (of course) defining it. They ended up meaning spaced learning (as opposed to performance support). Which, again, isn’t gamification, but they push it in there. Again, wrongly. They do mention a successful instance, where Google got 100% compliance on travel expenses, but that’s very different from company engagement. It’s got to be the right application.

Overall, gamification by extrinsic motivation can work under the right circumstances, but it’s not a solution to all that ails an organization. There are ways and times, but it’s all too easy to be doing gamification wrong. ‘Tis better to fix a broken culture than to patch it. Patching is, at best, a temporary solution. This is certainly an example.


Update on my events

17 June 2021 by Clark

In January I posted about my upcoming webinars (now past), workshops, etc. As things open up again (yay, vaccines), some upcoming events will be happening live!  And, of course, virtual. In fact, one starts next week! So I thought it time to update you on the things I’ll be doing. Then we’ll get back to my regular posts ;). So here’s an update on my events.

First, starting next week, is the Learning Development Conference, by the Learning Development Accelerator (caveat: I’m on their advisory board). Last year, it was an experiment. They did several things very well: it was focused on evidence-based approaches, it created timings that worked for a broad section of the world’s populace (e.g. live sessions were offered twice, once early, once late), and it had asynchronous content as well as synchronous. It also had ways to maintain contact and discussions. As a result, it was a success, leading to the Accelerator and this second event.

It runs for six weeks, and first I’ve got an asynchronous course on learning science (a subset of the bigger one I do as a blended workshop for HR.com/Allen Academy). I’m also doing two live sessions (at different times) on some of the new results from cognitive science. I’m already dobbed in for one debate, and they’ll likely call on me for more. There’s also a suite of the top names in evidence-based L&D appearing, doing live and/or asynchronous content.

Second, at the end of August, I’ll be speaking at ATD’s International Conference and Exposition. This is a live event in Salt Lake City. (My first since the pandemic!) Of course I’m speaking on learning science, the topic of my book with them. There could even be a book-signing event! If you don’t know ATD’s ICE, it’s huge, both a blessing and a curse. Lots of quality content (ok, mostly ;), almost too many people to find your friends, but lots of new friends to make, with broad coverage. Also, a big exposition (maybe smaller this year ;).

Third, I’ll be at the Learning Guild’s DevLearn again this year. This has always been one of the best conferences because the Guild runs good events (caveat: I’m their first Guild Master). They want it to grow, of course, but as yet it’s still reasonably sized, and with quality content. For one, I’ll be speaking on learning science implications.

I’ll  also be running a pre-conference workshop on Making Learning Meaningful. And this is, I suggest, truly of interest. I’ve been seeing more and more examples of well-designed content that’s still lacking in engagement, and this workshop is all about that. It’s an area I’ve been actively exploring and synthesizing into practical implications. Like in the series I did on the topic here, I cover how to hook initial interest, then maintain it through the experience. Also considered are the implications for the elements of learning, and a process to make it practical.

I recommend all three (or I wouldn’t be inclined to speak at them). So that’s the current update on my events. Hope to see you at one or another!

New recommended readings

8 June 2021 by Clark

Of late, I’ve been reading quite a lot, and I’m finding some very interesting books. Not all have immediate take-homes, but I want to introduce a few to you with some notes. Not all will be relevant, but all are interesting and even important. I’ll also update my list of recommended readings. So here are my new recommended readings. (With Amazon Associates links: support your friendly neighborhood consultants.)

First, of course, I have to point out my own Learning Science for Instructional Designers. A self-serving pitch confounded with an overload of self-importance? Let me explain. I am perhaps overly confident that it does what it says, but others have said nice things. I really did design it to be the absolute minimum reading that you need to have a scrutable foundation for your choices. Whether it succeeds is an open question, so check out some of what others are saying. As to self-serving, unless you write an absolute mass best-seller, the money you make off books is trivial. In my experience, you make more money giving it away to potential clients as a better business card than you do on sales. The typically few hundred dollars I get a year for each book aren’t going to solve my financial woes! Instead, it’s just part of my campaign to improve our practices.

So, the first book I want to recommend is Annie Murphy Paul’s The Extended Mind. She writes about new facets of cognition that open up a whole area for our understanding. Written by a journalist, it is compelling reading. Backed by science, it’s valuable as well. In the areas I know and have talked about, e.g. emergent and distributed cognition, she gets it right, which leads me to believe the rest is similarly spot on. (Also her previous track record: I mind-mapped her talk on learning myths at a Learning Solutions conference.) Well-illustrated with examples and research, she covers embodied cognition, situated cognition, and socially distributed cognition, all important. Moreover, there are solid implications for the redesign of instruction. I’ll be writing a full review later, but here’s an initial recommendation on an important and interesting read.

I’ll also alert you to Tania Luna’s and LeeAnn Renninger’s Surprise. This is an interesting and fun book that, instead of focusing on learning effectiveness, looks at the engagement side. As their subtitle suggests, it’s about how to Embrace the Unpredictable and Engineer the Unexpected. While the first bit of that is useful personally, it’s the latter that provides lots of guidance about how to take our learning from events to experiences. Using solid research on what makes experiences memorable (hint: surprise!) and illustrative anecdotes, they point out systematic steps that can be used to improve outcomes. It’s going to affect my Make It Meaningful work!

Then, without too many direct implications, but intrinsically interesting, is Lisa Feldman Barrett’s How Emotions Are Made. Recommended to me, this book is more for the cog sci groupie, but it does a couple of interesting things. First, it creates a more detailed yet still accessible explanation of the implications of Karl Friston’s Free Energy Theory. Barrett talks about how those predictions are working constantly and at many levels in a way that provides some insights. Second, she then uses that framework to debunk the existing models of emotions. The experiments with people recognizing facial expressions of emotion get explained in a way that makes clear that emotions are not the fundamental elements we think they are. Instead, emotions are social constructs! Which undermines, BTW, all the facial recognition of emotion work.

I also was pointed to Tim Harford’s The Data Detective, and I do think it’s a well done work about how to interpret statistical claims. It didn’t grip me quite as viscerally as the aforementioned books, but I think that’s because I (over-)trust my background in data and statistics. It is a really well done read about some simple but useful rules for how to be a more careful reviewer of statistical claims. While focused on parsing the broader picture of societal claims (and social media hype), it is relevant to evaluating learning science as well.

I hope you find my new recommended readings of interest and value. Now, what are you recommending to me? (He says, with great trepidation. ;)

How to be an elearning expert

1 June 2021 by Clark

I was asked (and have been a time or two before): “What’s the one most important thing you’d like to tell [someone aspiring] to be [a] successful Ed Tech industry leader?” Of course there wasn’t just one ;). Still, looking at colleagues who I think fit that characterization, I find some commonalities that are worth sharing. So here’s one take on how to be an elearning expert.

Let’s start with that ‘one thing’. Which is challenging, since it’s more than one thing! Still, I boiled it down into two components: know your stuff, and let people know. That really is the core. So let’s unpack that some more. The first thing is to establish credibility. Which means demonstrating that you track and promote the right stuff.

Some folks have created a model that they tout. Cathy Moore has Action Mapping, Harold Jarche has PKM, Con Gottfredson has the 5 moments of need, and so on. It’s good having a model, if it’s a good, useful one (there are people who push models that are hype or ill-conceived at best). Note that it’s not necessarily the case that these folks are just known for this model, and most of these folks can talk knowledgeably about much more, but ‘owning’ a model that is useful is a great place to be. (I occasionally regret that I haven’t done a good job of branding my models.) They understand their model and its contribution, it’s a useful one, and therefore they contribute validly that way and are rightly recognized.

Another approach like this is owning a particular domain. Whether gaming (e.g. Karl Kapp), visuals (Connie Malamed), design (Michael Allen), mixed realities (Ann Rollins), AI (Donald Clark), informal (Jane Hart), evaluation (Will Thalheimer), management (Matt Richter), and so on, they have deep experience and a great conceptual grasp in a particular area. Again, they can and do speak outside this area, but when they talk about these topics in particular, what they say is worthy of your attention!

Then there are other folks who don’t necessarily have a single model, but instead reliably represent good science. Julie Dirksen, Patti Shank, Jane Bozarth, Mirjam Neelen, and others have established a reputation for knowing the learning science and interpreting it in accurate, comprehensible, and useful ways.

The second point is that these folks write and talk about their models and/or approaches. They’re out there, communicating. It’s about reliably saying the important things again and again (always with a new twist). A reputation doesn’t just emerge whole-cloth, it’s built step by step. They also practice what they preach, and have done the work so they can talk about it. They talk the talk and walk the walk. Further, you can check what they say.

So how to start? There are two clear implications. Obviously, you have to Know. Your. Stuff! Know learning, know design, know engagement, know tech. Further, know what it means in practice! You can focus deeply in one area, or generate one useful and new model, or have a broad background, but it can’t just be in one thing. It’s not just all your health content for one provider. What you’re presenting needs to be representative and transferable. Further, you need to keep up to date, so that means continually learning: reading, watching, listening.

Second, it’s about sharing. Writing and speaking are the two obvious ways. Sure, you can host a channel: podcast, vlog, blog, but if you’re hosting other folks, you’re seen as well connected but not necessarily as the expert. Further, I reckon you have to be able to write and speak (and pretty much all of these folks do both well). So, start by speaking at small events, and get feedback to improve. Study good presentation style. Then start submitting for events like the Learning Guild, ATD, or LDA (caveats on all of these owing to various relationships, but I think they’re all scrutable). I once wrote about how to read and write proposals, and I think my guidance is still valid.

Similarly, write. Learning Solutions or eLearn Mag are two places to put stuff that’s sensibly rigorous but written for practitioners. Take feedback to heart, and deliberately improve. Make sure you’re presenting value, not pitching anything. What conferences and magazines say about not selling, that your clear approach is what sells, is absolutely true.

Also, make sure that you have a unique ‘voice’. No one needs the same things others are saying, at least in the same way. Have a perspective, your own take. Your brand is not only what you say, but how you say it.

A related comment: track some related fields. Most of the folks I think of as experts have some other area they draw inspiration from. UX/UI, anthropology, software engineering: there are many fields, and finding insight in a related one benefits our field and keeps you fresh.

Oh, one other thing. You have to have integrity. People have to be able to trust what you say. If you push something for which you have a private benefit, or something that’s trendy but not real, you will lose whatever credibility you’ve carefully built up. Don’t squander it!

So that’s my take on how to be an elearning expert. Now, what have I missed?

Overworked IDs

25 May 2021 by Clark

I was asked a somewhat challenging question the other day, and it led me to reflect. As usual, I’m sharing that with you. The question was: “How can IDs keep up with everything, and feel competent and confident in our work?” It’s not a trivial question! So I’ll share my response for overworked IDs.

There was considerable context behind the question. My interlocutor weighed in with her tasks:  

“sometimes I wonder how to best juggle everything that my role requires: project management, design and ux/ui skills, basic coding, dealing with timelines and SMEs and managers. Don’t forget task analysis and needs assessment skills, making content accessible and engaging. And staying on top of a variety of software.”

I recognize that this is the life of overworked IDs, particularly if you’re the lone ID (which isn’t infrequent), or are expected to handle course development on your own. Yet it is a lot of different competencies. In work with IBSTPI, where we’re defining competencies, we’re recognizing that different folks cut up roles differently. Regardless, many folks carry competency requirements that in other orgs are handled by separate teams. So what’s a person to do?

My response focused on a couple of things. First, there are the expectations that have emerged. After 9/11, when we were avoiding travel, there was a push for elearning. And, with the usual push for efficiency, rapid elearning became the vogue. That is, tools that made it easy to take PDFs and PPTs and put them up online with a quiz. It looked like lectures, so it must be learning, right?

One of the responses, then, is to manage expectations. In fact, a recent post addressed the gap between what we know and what orgs should know. We need to reset expectations.

As part of that, we need to create better expectations about what learning is. That was what drove the Serious eLearning Manifesto [elearningmanifesto.org], where we tried to distinguish between typical elearning and serious elearning. Our focus should shift so that our first response isn’t a course!

As to what is needed to feel competent and confident, I’ve been arguing there are three strands. For one (not surprisingly ;), I think IDs need to know learning science. This includes being able to fill gaps in, and update, instructional design prescriptions, and also to push back against bad recommendations. (Besides the book, this has been the subject of the course I run for HR.com via Allen Academy, will be the focus of my presentation at ATD ICE this summer, and is also my asynchronous course for the LDC conference.)

Second, I believe a concomitant element is understanding true engagement. Here I mean going beyond trivial approaches like tarting up drill-and-kill and gamification, and getting to making it meaningful. (I’ve run a workshop on that through the LDA, and it will be the topic of my workshop at DevLearn this fall.)

The final element is a performance ecosystem mindset. That is, thinking beyond the course: first to performance support, still on the optimal-execution side of the equation. Then we move to informal learning, facilitating learning. Read: continual innovation! This may seem like more competencies to add on, but the goal is to reduce the emphasis (and workload) on courses, and build an organization that continues to learn. I address this in the Revolutionize L&D book, and also in my mobile course for Allen Interactions (a mobile mindset is, really, a performance ecosystem mindset!).

If you’re on top of these, you should be prepared to do your job with competence and confidence. Yes, you still have to navigate organizational expectations, but you’re better equipped to do so. I’ll also suggest you stay tuned for further efforts to make these frameworks accessible.

So, those are my responses to overworked IDs. Sorry, no magic bullets, I’m afraid (because ‘magic’ isn’t a thing, sad as that may be). Hopefully, however, this is a basis upon which to build. That’s my take, at any rate; I welcome hearing how you’d respond.

Deep learning and expertise

20 April 2021 by Clark 3 Comments

A colleague asked “is anyone talking about how deep learning requires time, attention, and focus?” He was concerned with “the trend that tells us everything must be short.” He asked if I’d written anything, and I realized I really haven’t. Well, I did make a call for “slow learning” once upon a time, but it’s probably worth doing it again. So here’s a riff on deep learning and expertise.

First, what do we mean by deep learning? Here, I’m suggesting that the goal of deep learning is expertise: we’ve automated enough of the component elements that we can use our conscious processes to make expert judgments in addressing performance requirements. This could be following a process, making strategic decisions such as diagnoses and prescriptions, and more. It can also require developing pre-conscious responses, such as the way we train airline pilots to respond to emergencies.

Now, these responses can vary in their degree of transfer. Making decisions about how to fix a piece of machinery that’s misbehaving is different than deciding how to prioritize new product improvements. The former is more specific, the latter more generic. Yet there are certain things that are relevant to both.

Another issue is how often the performance is needed. You can develop expertise much more quickly with lots of opportunities to apply the knowledge. It’s more challenging to achieve when there aren’t as many times it’s relevant in the course of your workflow. The aforementioned pilots are training for situations they hope never to see!

Before we get there, however, there’s one other issue to address: how much has to go in the head, and how much can be in the world? In general, getting information into the head is hard (if we’re doing it right), and we should try to avoid it when possible. I argue for backwards design, starting with what the performance looks like if we’ve focused on IA (intelligence augmentation), that is, looking for the ideal combination of smarts between technology (loosely defined) and our heads. As Joe Harless famously said, “Inside every fat course there’s a thin job aid crying to get out.”

Once we’ve determined that we need human expertise, we also need to acknowledge that it takes time! I put it this way: the strengthening of connections (which is what learning is at the neural level) can only proceed so far in any one day before the strengthening function fatigues; you literally need sleep before you can learn more. So developing connections that are strong enough to be triggered appropriately is going to have to be spaced out over time.
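To make the spacing point concrete, here’s a toy sketch (my illustration, not a prescription from the post) of an expanding-interval schedule, a common heuristic in spaced-practice tools; the specific interval rule and numbers are assumptions:

```python
# Toy illustration of spaced practice: intervals roughly double between
# sessions (an assumed heuristic, not a validated prescription).

def practice_schedule(sessions: int, first_gap_days: int = 1) -> list[int]:
    """Return day offsets for each practice session, starting at day 0."""
    days = [0]
    gap = first_gap_days
    for _ in range(sessions - 1):
        days.append(days[-1] + gap)
        gap *= 2  # expand the interval as the connections strengthen
    return days

print(practice_schedule(6))  # [0, 1, 3, 7, 15, 31]
```

Even this simple rule shows the point: six practice sessions span a month, not an afternoon, which is exactly why strong connections can’t be built in one sitting.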

This does depend on the pre-existing knowledge of the learner, but it was Anders Ericsson who posited the approximately 10K hours of practice to achieve expertise. That’s both not quite accurate and not quite what he said, but as a rule of thumb it may be helpful. The important thing is that not just any practice will work. It takes what he called ‘deliberate practice’, that is, the right next thing for this learner. Continued over time; as the learner’s ability increases, new practice foci are necessary.

All that can’t come from a course (no one is going to sit through 10,000 hours!). Instead, if we follow the intent of the 70:20:10 framework, it’s going to take some initial courses, then coaching, with stretch assignments and feedback, and joining a relevant community of practice, and…

We also can’t assume that our learners will develop this as efficiently as possible. Unless we’ve trained them to be good self-learners, it will take guided learning across their experience, even if only at particular points; most people pursuing a sport, hobby, or what have you eventually take a course to get past their own limitations and accelerate development.

The short answer is that deep expertise doesn’t, and can’t, come from a short learning experience. It comes from an extended learning experience, with spaced, deliberate, and varied practice with feedback. If you want expertise, know what it takes and do it. That’s true whether you’re doing it for yourself or you’re in charge of it for others. Deep learning and expertise come with hard work. (Also, let’s make that ‘hard fun’ ;).
