Learnlets

Clark Quinn’s Learnings about Learning

Debating debates

17 January 2023 by Clark Leave a Comment

This is the year, at the LDA, of unpacking thinking (the broader view of my previous ‘exposure’). The idea is to find ways to dig a bit into the underlying rationale for decisions, to show the issues and choices that underlie design decisions. How to do that? Last year we had the You Oughta Know series of interviews with folks who represent some important ideas. This year we’re trying something new, using debates to show tradeoffs. Is this a good idea? Here’s the case for debating debates.

First, showing underlying thinking is helpful. For one, you can look at Alan Schoenfeld’s work on showing his thinking as portrayed in Collins & Brown’s Cognitive Apprenticeship. Similarly, the benefits are clear in the worked examples research of John Sweller. While it’s fine to see the results, if you’re trying to internalize the thinking, having it made explicit is helpful.

Debates are a tried and tested approach to issues. They require folks to explore both sides. Even if there’s already a reconciliation, I feel it’s worth having the debate to unpack the thinking behind the positions. Then, the resolution comes from an informed position.

Moreover, they can be fun! As I recalled here, in an earlier debate we agreed to that end. Similarly, in some of the debates I had with Will Thalheimer (e.g. here), we were deliberately a bit over-the-top in our discussions. The intent is to continue to pursue the fun as well as exposing the thinking. It is part of the brand, after all ;).

As always, we can end up being wrong. However, we believe it’s better to err on the side of principled steps. We’ll find out. So that’s the result of debating debates. What positions would you put up?

Critical ID/LXD Differences?

14 June 2022 by Clark 4 Comments

I’ve argued both that Learning Experience Design (LXD) is an improvement on Instructional Design (ID), and that LXD is the elegant integration of learning science with engagement. However, that doesn’t really unpack the critical ID/LXD differences. I think it’s worth looking at those important distinctions both in principle and in practice. Here, I’m talking about extensions to what’s probably already in place.

Principle

In principle, I think it’s the engagement part that separates the two. True, proper ID shouldn’t ignore it. However, it has received too little attention. For instance, only one ID theorist, John Keller, has really looked at those elements. Overall, it’s too easy to focus purely on the cognitive. (Worse, of course, is a focus purely on knowledge, which really isn’t good ID.)

I suggest that this manifests in two ways. First, you need an initial emotional ‘hook’ to gain the learner’s commitment to the learning experience, even before we open them up cognitively (though, of course, the two are linked)! Then, we need to manage emotions throughout the experience. We want to do things like keep the challenge balanced, keep anxiety low enough not to interfere, build confidence, etc.

We have tools we can use, like story, exaggeration, humor, and more to assist us in these endeavors. At core, however, what we’re focusing on is making it a true ‘experience’, not just an instructional event. Ideally, we’d like to be transformational, leaving learners equipped with new skills and the awareness thereof.

Practice

What does this mean in practice? A number of things. For one, it takes creativity to consider ways in which to address emotions. There are research results and guidance, but you’ll still want to exercise some exploration. Which also means you have to be iterative, with testing. I understand that this is immediately scary, thinking about costs. However, when you stop trying to use courses for everything, you’ll have more resources to do courses right. For that matter, you’ll actually be achieving outcomes, which is a justification for the effort.

Our design process needs to start gathering different information. We need to get performance objectives; what people actually need to do, not just what they need to know. You really can’t develop people if you’re not having them perform and getting feedback. You also need to understand why this is needed, why it’s important, and why it’s interesting. It is, at least to the subject matter experts who’ve invested the time to be experts in this…

Your process also needs to have those creative breaks. These are far better if they’re collaborative, at least at the times when you’re ideating. While ideally you have a team working together on an ongoing basis, in many cases that may be problematic. I suggest getting together at least at the ideating stage, and then after testing to review findings.

You’ll also want to be testing against criteria. At the analysis stage, you should design criteria that will determine when you’re ‘done’. When you run out of time and money is not the right answer! Test usability first, then effectiveness, and then engagement. Yes, you want to quantify engagement. It doesn’t have to be ‘adrenaline in the blood’ or even galvanic skin response; subjective evaluations by your learners are just fine. If you are running out of time and money before you’re achieving your metrics, you can adjust them, but now you’re doing it consciously, not implicitly.

I’m sure there’s more that I’m missing, but these strike me as some critical ID/LXD differences. There are differences in principle, which yield differences in practice. What are your thoughts?

Superstitions for New Practitioners

26 April 2022 by Clark 3 Comments

It’s become obvious (even to me) that there are a host of teachers moving to L&D. There are also a number of initiatives to support them. Naturally, I wondered what I could do to assist. With my reputation as a cynic apparently well-secured, I’m choosing to call out some bad behaviors. So here are some superstitions for new practitioners to watch out for!

As background, these aren’t the myths that I discuss in my book on the topic. That would be too obvious. Instead, I’m drawing on the superstitions from the same tome, that is, things that people practice without necessarily being aware of them, let alone espousing them. No, these manifest through behaviors and expectations rather than explicit exhortation.

  • Giving people information will lead them to change. While we know this isn’t true, it still seems to be prevalent. I’ve argued before about why I think this belief exists, but what matters is what it leads to: information dump and knowledge-test courses. What we need instead is not just a rationale, but also practice and then ongoing support for the change.
  • If it looks like school, it’s learning. We’ve all been to school, and thus we all know what learning looks like, right? Except many school practices are only useful for passing tests, not for actually solving real problems and meeting real goals. (Only two things wrong: the curriculum and the pedagogy; otherwise school’s fine.) This belief, however, creates barriers when you’re trying to create learning that actually works. Have people look at the things they learned outside of school (sports, hobbies, crafts, etc.) for clues.
  • People’s opinions are a useful metric for success. Too often, we just ask ‘did you like it’, or perhaps ‘do you think it was valuable’. While the latter is better than the former, it’s still not good enough. The correlation between people’s evaluation of the learning and the actual impact is essentially zero, at least for novices. You need more rigorous criteria, and then you need to test until you achieve them.
  • A request for a course is a sufficient rationale to make one. A frequent occurrence is a business unit asking for a course. There’s a performance problem (or just the perception of one), and therefore a course is the answer. The only problem is that there can be many reasons for a performance problem that have nothing to do with knowledge or skill gaps. You should determine what the performance gap is (to the level you’ll know when it’s fixed), and the cause.  Only when the cause is a skill gap does a course really make sense.
  • A course is always the best answer. See above; there are  lots  of reasons why performance may not be up to scratch: lack of resources, wrong incentives, bad messaging, the list goes on. As Harless famously said, “Inside every fat course there‘s a thin job aid crying to get out.” Many times we can put knowledge in the world, which makes sense because it’s actually  hard to get information and skills reliably in the head.
  • You can develop meaningful learning in a couple of weeks. The rise of rapid elearning tools and a lack of understanding of learning has led to the situation where someone will be handed a stack of PPTs and PDFs and a rapid authoring tool and expected to turn out a course. Which goes back to the first two problems. While it might take that long to get just a first version, you’re not done. Because…
  • You don’t need to test and tune. There’s this naive expectation in the industry that if you build it, it is good. Yet the variability of people, the uncertainty of the approach, and more, suggest that courses should be trialed, evaluated, and revised until actually achieving the necessary change. Beware the ‘build and release’ approach to learning design, and err on the side of iterative and agile.

This isn’t a definitive list, but hopefully it’ll help address some of the worst practices in the industry. If you’re wary of these superstitions for new practitioners, you’ll likely have a more successful career. Fingers crossed and good luck!

There’s some overlap here with my messages to CxOs 1 and 2, but with a different target.

Tech Thoughts

28 October 2021 by Clark Leave a Comment

I’m late with a post this week, owing to several factors, all relating to technology. I hadn’t quite pulled together a complete lesson, but by writing down these tech thoughts, I got there. (A further argument for the benefits of reflection.)

It started with upgrading our phones. We needed to (I tend to hand mine down, but really we both needed an upgrade this time). Of course there are hiccups, particularly since m’lady’s was so old that it couldn’t do the amazing thing mine had done. What happened with mine was that you just put the old phone and the new phone together and the new one just sucks the data off the old one and then the old one asks if you want to wipe it clean!  That’s some serious user experience. Something we should look more to in our LXD, so that we’re doing proper backwards design and we have the right combination of tools and learning to make performance relatively effortless.

Then another thing was quite interesting. An individual linked to me on the basis of a citation in a book. I didn’t know the book, so he sent me a picture of the page. He also asked if I could read Dutch. Sadly, no. However, I had recently upgraded my OS, and when I opened the picture, I noticed I could click on the text. Of. The. Picture!  I could select all the text (my OS was doing OCR on the picture live!), and then I could paste into Google Translate (another amazing feat) and it recognized it as Dutch and translated it into English. Whoa!

On the flip side, owing to the unusually heavy rain (for California), first our internet went out, and then the power. Fortunately both were working by the next morning. However, after that my backup drives kept dismounting and I couldn’t execute a backup reliably. I thought it might be the laptop, and I did a couple of increasingly difficult remedial measures. Nope. Had the drives been damaged by the power outage? Power outages aren’t quite new around here (we’re a bit up a hillside, and squirrels regularly blow the transformer), yet it hadn’t happened before.

Then I was on a Zoom call, and I started having hiccups in the microphone and camera. Even typing. What? When I switched to the laptop camera, it was all good. All the things, drives, microphone, external monitor, are connected by an external hub. The hub had gone wonky! Instead of having to replace drives, I picked up a new hub last nite, and all’s good now. Phew!

I guess my take-home tech thought is that we’re making true a promise I’ve mentioned when talking mobile: we really do have magic. (Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic.) We can do magical things like talk at a distance, have demons do tasks on our behalf (picture text transcribing and translation), etc. On the other hand, when it doesn’t work, it can be hard to identify the problem! Overall, it’s a win. Well, when it’s designed right! Which involves testing and tuning. As Dion Hinchcliffe put it: “Seamless #cx is now table stakes.” So let’s get designing, testing, and tuning, and make magical experiences.

Complexity in Learning Design

21 September 2021 by Clark Leave a Comment

I recently mentioned that one of the problems with research is that things are more interconnected than we think. This is particularly true with cognitive research. While we can make distinctions that simplify things in useful ways (e.g. the human information processing system model*), the underlying picture is of a more interactive system. Which underpins why it makes sense to talk about Learning Experience Design (LXD) and not just instructional design. We need to accommodate complexity in learning design. (* Which I talk about in Chapter 2 of my learning science book, and in my workshops on the same topic through the Allen Academy.)

We’re recognizing that our cognition is more than just in our head. Marcia Conner, in her book Learn More Now, mentioned how neuropeptides pass information around the body. Similarly, Annie Murphy Paul’s The Extended Mind talks about moving cognition (and learning) into the world. In my Make It Meaningful workshops (online or F2F at DevLearn 19 Oct), I focus on how to address the emotional component of learning. In short, learning is about more than just information dump and knowledge test.

Scientifically, we’re finding there are lots of complex interactions between the current context, our prior experience, and our cognitive architecture. We’re much more ‘situated’ in the moment than the rational beings we’d like to believe we are. Behavioral economics and Daniel Kahneman’s research have made this abundantly clear. We try to avoid hard mental work using shortcuts that work sometimes, but not others. (Understanding when is an important component of this.)

We get good traction from learning science and instructional design approaches, for sure. There are good prescriptions (that we often ignore, for reasons above) about what to do and how. So, we should follow them. However, we need more. Which is why I tout LXD  Strategy! We need to account for complexity in learning design approaches.

For one, our design processes need to be iterative. We’ll make our best first guess, but it won’t be right, and we’ll need to tune. The incorporation of agile approaches, whether SAM or LLAMA or even just iterative ADDIE, reflects this. We need to evaluate and refine our designs to match the fact that our audience is more complex than we thought.

Our design also needs to think about the emotional experience as well as the cognitive experience. We want our design processes to systematically incorporate humor, safety, motivation, and more. Have we tuned the challenge enough, and how will we know?  Have we appropriately incorporated story? Are our graphics aligned or adding to cognitive load? There are lots of elements that factor in.

Our design process has to accommodate SMEs who literally can’t access what they do. Also learner interests, not just knowledge. We need to know what interim deliverables we’ll produce, what processes we’ll use for evaluation, when we shouldn’t be working solo, and what tools we need. Most importantly, we have to do this in a practical way, under real-world resource constraints.

Which is why we need to address this strategically. Too many design processes are carry-overs from industrial approaches: one person, one tool, and a waterfall process. We need to do better. There’s complexity in learning design, both on the part of our learners and of ourselves as designers. Leveraging what we know about cognitive science can provide us with structures and approaches that accommodate these factors. That’s only true, however, if we are aware of it and actively address it. I’m happy to help, but can only do so if you reach out. (You know how to find me. ;) Here’s to effective and engaging learning!

Overworked IDs

25 May 2021 by Clark 2 Comments

I was asked a somewhat challenging question the other day, and it led me to reflect. As usual, I’m sharing that with you. The question was: “How can IDs keep up with everything, and feel competent and confident in our work?” It’s not a trivial question! So I’ll share my response to overworked IDs.

There was considerable context behind the question. My interlocutor weighed in with her tasks:  

“sometimes I wonder how to best juggle everything that my role requires: project management, design and ux/ui skills, basic coding, dealing with timelines and SMEs and managers. Don’t forget task analysis and needs assessment skills, making content accessible and engaging. And staying on top of a variety of software.”

I recognize that this is the life of overworked IDs, particularly if you’re the lone ID (which isn’t infrequent), or expected to handle course development on your own. Yet that is a lot of different competencies. In work with IBSTPI, where we’re defining competencies, we’re recognizing that different folks cut up roles differently. Regardless, many folks carry competency requirements that in other orgs are handled by different teams. So what’s a person to do?

My response focused on a couple of things. First, there’re the expectations that have emerged. After 9/11, when we were avoiding travel, there was a push for elearning. And, with the usual push for efficiency, rapid elearning became the vogue. That is, tools that made it easy to take PDFs and PPTs and put them up online with a quiz. It looked like lectures, so it must be learning, right?

One of the responses, then, is to manage expectations. In fact, a recent post addressed the gap between what we know and what orgs should know. We need to reset expectations.

As part of that, we need to create better expectations about what learning is. That was what drove the Serious eLearning Manifesto [elearningmanifesto.org], where we tried to distinguish between typical elearning and serious elearning. Our focus should shift to where our first response isn’t a course!

As to what is needed to feel competent and confident, I’ve been arguing there are three strands. For one (not surprisingly ;), I think IDs need to know learning science. This includes being able to fill in the gaps in, and update, instructional design prescriptions, and also being able to push back against bad recommendations. (Besides the book, this has been the subject of the course I run for HR.com via Allen Academy, will be the focus of my presentation at ATD ICE this summer, and is also the subject of my asynchronous course for the LDC conference.)

Second, I believe a concomitant element is understanding true engagement. Here I mean going beyond trivial approaches like tarting-up drill-and-kill, and gamification, and getting into making it meaningful. (I’ve run a workshop on that through the LDA, and it will be the topic of my workshop at DevLearn this fall.)

The final element is a performance ecosystem mindset. That is, thinking beyond the course: first to performance support, still on the optimal execution side of the equation. Then we move to informal learning, facilitating learning. Read: continual innovation! This may seem like more competencies to add on, but the goal is to reduce the emphasis (and workload) on courses, and build an organization that continues to learn. I address this in the  Revolutionize L&D book, and also my mobile course for Allen Interactions (a mobile mindset is, really, a performance ecosystem mindset!).

If you’re on top of these, you should be prepared to do your job with competence and confidence. Yes, you still have to navigate organizational expectations, but you’re better equipped to do so. I’ll also suggest you stay tuned for further efforts to make these frameworks accessible.

So, there’re my responses to overworked IDs. Sorry, no magic bullets, I’m afraid (because ‘magic’ isn’t a thing, sad as that may be). Hopefully, however, it’s a basis upon which to build. That’s my take, at any rate; I welcome hearing how you’d respond.

A message to CxOs 2: about org learning myths

11 May 2021 by Clark 2 Comments

When I wrote my last post on a message to CxOs about L&D myths, I got some pushback. Which, for the record, is a good thing; one of us will learn something. To my claim that L&D is often its own worst enemy, the counter was that there are folks in L&D who get it, but fight upward against wrong beliefs. Which absolutely is true as well. So, let’s also talk about what CxOs need to know about the org learning myths they may believe.

First, however, I do want to say that there is evidence that L&D isn’t doing as well as it could and should. This comes from a variety of sources. However, the question is where the blame lies. My previous post talked about how L&D deludes itself, but there are reasons to also believe in unfair expectations. So here’s the other side.

  1. If it looks like schooling… I used this same one against L&D, but it’s also the case that CxOs may believe it. Further, they could be happy if that’s the case. Which would be a shame, just as I pointed out in the other case. Lectures, information dump & knowledge test, and content presentation in general don’t lead to meaningful change in behavior in the absence of activity. Designed action and guided reflection, which looks a lot more like a lab or studio than a classroom, is what we want.
  2. SMEs know what needs to be learned. Research tells us to the contrary; experts don’t have conscious access to around 70% of what they  do (tho’ they do have access to what they know). Just accepting what a SME says and making content around that is likely to lead to a content dump and lack of behavior change. Instead, trust (and ensure) that your designers know more about learning than the SME, and have practices to help ameliorate the problem.
  3. The only thing that matters is keeping costs low.  This might seem to be the case, but it reflects a view that org learning is a necessary evil, not an investment. If we’re facing increasing change, as the pundits would have it, we need to adapt. That means reskilling. And effective reskilling isn’t about the cheapest approach, but the most effective for the money. Lots of things done in the name of learning (see above) are a waste of time and money. Look for impact first.
  4. Courses are the answer to performance issues.  I was regaled with a tale about how sales folks and execs were  insisting that customers wanted training. Without evaluating that claim. I’ll state a different claim: customers want solutions. If it’s persistent skills, yes, training’s the answer. However, a client found that customers were much happier with how-to videos than training for most of the situations. It’s a much more complex story.
  5. Learning stops at the classroom. This story, too, is more complex. One of the reasons Charles Jennings was touting 70:20:10 was not the numbers, but that it was a way to get execs to realize that only the bare beginning comes from courses, if that. There’s ongoing coaching with stretch assignments and feedback, and interacting with other practitioners… don’t assume a course solves a problem. A colleague mentioned how her org realized that it couldn’t create a course without also creating manager training; otherwise they’d undermine the outcomes instead of reinforcing them.
  6. We’ve invested in an LMS, that’s all we need. That’s what the LMS vendors want you to believe ;)! Seriously, if all you’re doing is courses, this could be true, but I’m hoping the above has convinced you that courses alone aren’t the answer.
  7. Customers want training. Back to an earlier statement: customers want solutions. It is cool to go away to training and get smothered in good food and perks. However, it’s also known that sometimes that goes to the manager, not the person who’ll actually be doing the work! Also, training can’t solve certain types of problems. There are many types of problems customers encounter, and they have different types of solutions. Videos may be better for things that occur infrequently, onboard help or job aids may meet other needs too unusual to be able to predict for training, etc. We don’t want to make customers happy, we want to make them successful!
  8. We need ways to categorize people. It’s a natural human thing to categorize, including people. So if someone creates an appealing categorization that promises utility, hey, that sounds like a good investment. Except there are many problems! People aren’t easy to categorize, instruments struggle to be reliable, and vested interests will prey upon the unwary. Anyone can create a categorization scheme, but validating it, and having it be useful, are both surprisingly big hurdles. Asking people questions about their behavior tends to be flawed for complex reasons. And using such tools for important decisions like hiring and tracking has proven to be unethical. Caveat emptor.
  9. Bandwagons are made to be jumped on. Face it, we’re always looking for new and better solutions. When someone links some new research to a better outcome, it’s exciting. There’s a problem, however: we often fall prey to arguments that appear to be new, but really aren’t. For instance, all the ‘neuro’ stuff unpacks to some pretty ordinary predictions we’ve had for yonks. Further, there are real benefits to machine learning and even artificial intelligence, yet there’s also a lot of smoke to complement the sizzle. Don’t get misled. Do a skeptical analysis. This holds doubly true for technology objects. It’s like a cargo cult: whatever has come down the pike must be a new gift from those magic technologists! Yet this is really just another bandwagon. Sure, Augmented Reality and Virtual Reality have some real potential. They’re also being way overused. This is predictable (cf. PowerPoint presentations in Second Life), but ideally is avoided. Instead, find the key affordances – what the technology uniquely provides – and match the capability to the need. Again, be skeptical.

My point here is that there can be misconceptions about learning  within  L&D, but it can also be outside perspectives that are flawed. So hopefully, I’ve now addressed both. I don’t claim that this is a necessary and complete set, just certain things that are worth noting. These are org learning myths that are worth trying to overcome, or so I think. I welcome your thoughts!

Reflowable text thinking

17 February 2021 by Clark 1 Comment

Ok, I know I just talked about this, but something happened to sharpen my understanding. Recently, a colleague responsible for managing a product mentioned that she was aware that people were “not used to reflowable text”. And, frankly, that surprised me, but it also explains the problems I’ve railed about in the past. Because reflowable text thinking is key to moving beyond hardwired formatting to separating content from description.

As I’ve bemoaned before, the notion of people hardcoding the way a page looks drives me nuts. If you want to change anything (and I frequently find ways to improve things), it’s very hard to do. It takes a lot of fussing. And yet, there are still tools that are just for doing detailed page layout. This comes from the days of print, when you had to hand-set the lead type into a page to produce a newspaper and the like. But we’re not there anymore.

Too, I’ve had an advantage. I had the opportunity to learn to use a word processor very early on. While I was a college student, I had vi, the Unix visual text editor, to write with, and LaTeX to specify the visual details (I was glad to abandon my typewriter!). Then, I got a Mac II and Microsoft Word (2.0) to write my PhD thesis. This was a boon, because I could write, and define things like margins and what headings look like. And, automagically, my paper came out from the printer (ultimately, I had to tweak a few things) ready to pass the library lady with her ruler.

The point was that I was not fussing about how each page looked; I was instead specifying things like:

  • that a top-level header required a page break beforehand (e.g. starting a new chapter),
  • that the next-level header was left-justified,
  • that a heading should always be printed with the next paragraph or line of text,
  • and so on.

And when it was printed, it looked right. If I changed paper size, or margins, or what have you, it adapted.  
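
For instance, here’s a minimal LaTeX sketch of that kind of declaration (a hypothetical illustration of the idea, not my actual thesis preamble; the titlesec package is just one way to state how headings behave):

    \documentclass{report}
    % Declare layout once, centrally; the content itself never mentions it.
    \usepackage[margin=1.5in]{geometry}  % change margins here and every page reflows
    \usepackage[raggedright]{titlesec}   % headings set left-justified, as declared

    \begin{document}
    \chapter{Introduction}  % top-level header: the report class starts it on a new page
    \section{Background}    % next-level header: formatted per the declarations above
    Body text simply flows. Change the paper size or margins in the preamble,
    and everything adapts, with headings kept with the text that follows them.
    \end{document}

Change one line in the preamble and the whole document reformats itself; that’s the power of specifying behavior rather than hand-tuning appearance.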

That’s separating out what I’m saying from how it behaves across screens, devices, printers, etc. And that was useful for the web, mobile, and more. It’s responsive design. And, it’s the key to moving our content and experiences forward.

It’s about describing behaviors, instead of hand-coding them. And having them refer to centralized descriptions. Which is a lot like coding, having new objects inherit the properties of their predecessors. And, it’s about Web 3.0, the semantic web.

Look, this seems to be something not all folks can get their minds around. I hope that’s not so, that it’s learnable. Because we have to come to grips with this. It’s already happening across the business in pretty much every other area. We can’t lag; we need reflowable text thinking, because our audience needs flexible content. When we can gain considerable power at the expense of some rethinking, that’s a fair tradeoff, in my mind. I welcome your thoughts.

Five trends for 2021

15 December 2020 by Clark 2 Comments

As frequently happens, I get asked for my predictions. And, of course, I have reservations. Here’s a video that provides the qualifications, and five trends for 2021 that I’d expect, or like, to see.

And the script:


Hi, I’m Clark Quinn, of Quinnovation (a boutique learning experience design strategy consultancy). I was recently asked about what trends I thought would be seen next year.

Two relevant quotes to set the stage. For one, Alan Kay famously said “the best way to predict the future is to invent it.” So I tend to talk about trends we should see. The other is “never predict anything, particularly the future.” I heard an expert talk about having looked at predictions and outcomes, and the noticeable trend is that things went as expected, with one unforeseen twist. So, expecting I’ll get it wrong, here are some trends I’m either expecting or keen to see:

The first trend I’m seeing, and think will continue, is an emphasis on learning science. And that’s all to the good! Admittedly, I’m part of this, what with running a course on learning science and having a forthcoming book on the topic. But I’m seeing more and more people talking about it, and it’s not all hype, and even mostly right! There are more books, the Learning Guild’s regular research reports are good, and the launch of an event this past summer and an associated new society focused on evidence-based learning (the Learning Development Accelerator) are all signs of growing momentum.

Second, when there’s a lot of hype about something, it tends to be followed by a backlash. This may be farther out than 2021, but with all the buzz about AI, I think we might see some more awareness of limitations. Yes, it can do some very useful things, but it also isn’t a panacea. We’re seeing a growing awareness of the problems with bias in data sets, the limitations of ungrounded knowledge, and concerns about the human costs.

Third, on a related note, I expect more emphasis on the importance of meaningful practice. This comes from learning science, but also the focus on engagement. Thus, the push for Short Sims, and better-written multiple choice questions, and in general a focus on ‘do’, not know. Hopefully, we’ll see tool vendors aligning their content and assessment capabilities towards designing scenarios and contextualized practice, along with specific feedback for each wrong answer and support for reflection.

Fourth, I hope for a push towards content systems as well. This, too, may not be in the short term, but ultimately we have to realize that hardwiring experiences may make sense for formal systems, but not for adaptive learning. LXPs are a good move here, even if misnamed (really, they’re smart portals, not learning experience platforms). Ultimately, we’ll be better off if we can deliver content by description and rules, like a recommendation system, rather than by having to handcraft content to create a ‘one-size-fits-all’ solution.

Finally, I think that our collaboration tools haven’t lived up to the promise of technology. They’re very much oriented towards particular modes, instead of supporting really rich interaction. This, too, is more long term, but we really should be able to talk together while working to create representations that capture our evolving thinking. Easily and elegantly! There’s real opportunity here to engage multiple representations in an elegant suite.

So there you have it: a wishful list of five trends for 2021. What do you expect, or hope, to see?

The plusses and minuses of learning science research

25 August 2020 by Clark 1 Comment

A person who I find quite insightful (and occasionally inciteful ;) is Donald Clark. He built and sold Epic, an elearning company, and now he leads a learning AI company, Wildfire. He’s knowledgeable (for instance, having read up on and summarized centuries of learning theorists), willing to call out bad learning, and he’s funny. And so, when he reported on a new study, I of course looked into it. And I find that it points out the plusses and minuses of learning science research.

To be clear, this is about his product, so there’s a vested interest. However, he’s got integrity; he’s not going to sully his reputation with a bad study. And, it’s a good study. It rightly demonstrates an important point. It’s just that it stops short of what we need for full  learning.

So, his product does something pretty amazing. You give it content, and it can not only answer questions about the content (as, for instance, some chat tools do), but also turn the tables and ask you questions about the content. That is, it can serve as a sort of tutor. Which is all to the good.

What it can’t do, of course, is design meaningful practice. As Van Merriënboer’s Four Component Instructional Design (4C/ID) points out, you need to know the information, and you also need practice applying it. And I reckon we’re still far from that. So, while this is part of a whole solution (and Donald knows this), it’s not the full solution. He’s subsequently let me know it can do language tasks, which is impressive. I’m thinking more of contextualized scenarios, however.

The study demonstrates, as you might expect, that breaking up a video into reasonable chunks, and having system-generated questions asked in between, led to 61% better retrieval, going from getting 8 to 14 questions right. That is a big improvement. It’s also impressive, since it’s generating those questions from video! That is, it parses the video, establishes a transcript, and then uses that to generate a knowledge base. Very cool.

And it’s a well-designed study. It’s got a control group, and a  reasonable number of subjects. It uses the same test material, for an AB comparison. Presumably, the video chunking was done by hand, into four pieces. The chunking and break might account for the difference, which wasn’t controlled for, but it’s still a big improvement. Granted, we know that watching a video alone isn’t necessarily going to improve retention (except, perhaps, over some other non-interactive way of dumping content). But still, this is good as it’s an improvement and a lot of work was saved.

What I quibble about, however, is the nature of the retrieval. The types of questions liable to be asked (and it’s not indicated) are knowledge questions. As suggested above, knowledge is a necessary component. But using that knowledge to make decisions in context is typically what our goals are. And to achieve such goals, you basically have to practice making decisions in context. (Interestingly, the topic here was equality and diversity, a topic he has complained about!)

Knowledge about a topic isn’t likely to impact your ability to apply it. What will make a difference is actually doing things about it, like calling it out, having consequences, and actively working to remedy imbalances. And that requires separate practice. Which he’s acknowledged in the past, and he rightly points out that his solution means you can devote more resources to that end.

Thus, the plus of learning science research is that we nibble away at the questions we need to answer, and find answers to the questions we ask. The minus, of course, is that we’re not necessarily asking the most important questions. It’d be easy to see this and say: “we’ve improved retention, and we’re done.” However, that won’t necessarily lead to reducing the behaviors being learned about, or to building the ability to deal with them. There are plusses and minuses to learning science research, and we need to know its strengths, and limitations, when we hear it.
