Learnlets


Clark Quinn’s Learnings about Learning

Break it down!

2 July 2024 by Clark 2 Comments

In our LDA Forum, someone posted a question asking about using Cathy Moore’s Action Mapping for soft skills, like improving team dynamics. Now, they’re specifically asking about a) people with experience, and b) the context of not-for-profits, so…I’m not a good candidate to respond. However, what it does raise is a more common problem: how do you train things that are more ephemeral, like leadership or communication? My short answer is “break it down”. What do I mean? Here’re some thoughts, and I welcome feedback!

Many moons ago, I co-wrote a paper on evaluating social media impacts. There are the usual metrics, like ‘engagement’. That is, are people using the system? Of course, for companies charging for their platform, this could be as infrequent as a person accessing it once a month. More practically, however, it should be a person hitting it at least several times a week, or even several times a day! If you’re communicating, cooperating, and collaborating, you really should be interacting at a fair frequency.

I, on the other hand, argued for more detailed implications. If you’re putting it into a sales team, you should expect not only messages, but also more sales success, shorter sales cycles, etc. So you can get more detailed. These days, you can do even more, and have the system actually tag what the messages are about and count them. You can go deeper.
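To make that concrete, here’s a rough sketch in Python (the message data and topic tags are entirely made up for illustration) of counting what messages are about, alongside the basic interaction-frequency check:

```python
from collections import Counter
from datetime import datetime

# Hypothetical message records: (author, timestamp, topic tag).
# A real platform would supply these; the tags here are invented for illustration.
messages = [
    ("dana", datetime(2024, 6, 3), "deal-update"),
    ("dana", datetime(2024, 6, 4), "pricing-question"),
    ("lee",  datetime(2024, 6, 4), "deal-update"),
    ("lee",  datetime(2024, 6, 10), "customer-feedback"),
]

# The deeper metric: what are people actually talking about?
topic_counts = Counter(tag for _, _, tag in messages)
print(topic_counts.most_common())
# [('deal-update', 2), ('pricing-question', 1), ('customer-feedback', 1)]

# The basic engagement metric: how often is each person interacting per week?
timestamps = [ts for _, ts, _ in messages]
weeks = max((max(timestamps) - min(timestamps)).days / 7, 1)
for person, count in Counter(author for author, _, _ in messages).items():
    print(f"{person}: {count / weeks:.1f} interactions/week")
```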

Which is what I think is the answer here. What skills do you want? For an innovation demo with Upside Learning, I argued we should break it down. That includes how to work out loud, how to provide feedback, and how to run group meetings. (I’m just reading Alex Edmans’s May Contain Lies, and it contains a lot of detail about how to consider data and evidence.) We can look for more granular evidence. Even for skills like team dynamics, you should be looking at what makes for good dynamics: things like making it safe yet accountable, providing feedback on the behavior, not the person, valuing diversity, etc. There should be specific skills you want to develop, and assess. These, then, become the skills you design your learning to accomplish. You are, basically, creating a curriculum of the various skills that comprise the aggregated topic.
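If it helps, here’s one purely illustrative way to capture such a breakdown; the sub-skills and observable behaviors below are examples, not a validated competency model:

```python
# Illustrative only: map a broad topic to specific, observable sub-skills
# you could design practice for and assess.
curriculum = {
    "team dynamics": [
        {"skill": "safety with accountability",
         "observable": "raises concerns in meetings and follows through on commitments"},
        {"skill": "feedback on behavior, not the person",
         "observable": "feedback names a specific action and its impact"},
        {"skill": "valuing diversity",
         "observable": "invites and builds on dissenting views"},
    ],
}

# Each entry becomes a design target: something to practice, and something to assess.
for entry in curriculum["team dynamics"]:
    print(f"- {entry['skill']}: assess via \"{entry['observable']}\"")
```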

It may be that you assess a priori, and discover that only some are missing in your teams. That upfront analysis should happen regardless, but is too infrequent. The interlocutor here also mentioned the audience complaining about the time for analysis. Yep, that’s a problem. Reckon you have to sell the whole package: analyzing, designing, and evaluating for impact on performance, not just some improvement. Yet, compared to throwing money away? Seems like targeting intervention efforts should be a logical sell. If only we lived in a rational world, eh?

Still, overall, I think that these broad programs break down into specific skills that can be targeted and developed. And we should. Let’s not settle for vague intentions, explanations, and consequently no outcomes. Let’s do the work, break it down, and develop actual skills. That, at least, is my take; I welcome hearing yours!

Diving or surfacing?

25 June 2024 by Clark Leave a Comment

In my regular questing, one of the phenomena I continue to explore is design. Investigation, for instance, reveals that, contrary to recommendations, designers approach practice more pragmatically. That’s something I’ve been experiencing both in my work with clients and in recent endeavors. So, reflecting: are folks diving or surfacing, and which should they be?

The original issue is how designers design. If you look at recommendations, they typically suggest starting with the top-level conceptualization and working down, such as Jesse James Garrett’s Information Architecture approach (PDF of the Elements of User Experience; note that he puts the highest level of conceptualization at the bottom and argues to work up). Empirically, however, designers switch between top-down and bottom-up. What do I do?

Well, it of course depends on the project. Many times (and, ideally), I’m brought in early, to help conceptualize the strategy, leveraging learning science, design, organizational context, and more. I tend to lead the project’s top-level description, creating a ‘blueprint’ of where to go. From there, more pragmatic approaches make sense (e.g. bringing in developers). Then, I’m checking on progress, not doing the implementation. I suppose that’s like an architect. That is, my role is to stay at the top-level.

In other instances, I’m doing more. I frequently collaborate with the team to develop a solution. Or, sometimes, I get concrete to help communicate the vision that the blueprint documents. Which, when working with an unfamiliar team, isn’t unusual; that ‘telepathy’ only comes with getting to know folks ;).

In those other instances, I too will find that pragmatic constraints influence the overarching conceptualization, and work back up to see how the guidelines need to be adapted to account for the particular instance. Or we need to disconnect from the details to remember what our original objective is. This isn’t a problem! In general, we should expect that ongoing development unearths realities that weren’t visible from above, and vice versa. We may have good general principles (e.g. from learning science), but then we need to adapt them to our circumstances, which are unlikely to exactly match. In general, we need to abstract the best principles, and then de- and re-contextualize.

I find that while it’s harder work to wrestle with the details (more pay for IDs! ;), it’s very worthwhile. What’s developed is better as a result of testing and refining. In fact, this is a good argument about why we should iterate (and build it into our timelines and budgets). It’s hubris to assume that ‘if we build it, it is good’. So, let’s not assume we can either be diving or surfacing, but instead recognize we should cycle between them. Start at the top and work down, but then regularly check back up too!

Learning Debt?

18 June 2024 by Clark Leave a Comment

In our LDA conversation with David Irons, User Experience (UX) Strategist, for our Think Like A… series, he mentioned a concept I hadn’t really considered. The concept is ‘design debt’, as an extension of the idea of ‘tech debt’. I was familiar with the latter, but hadn’t thought of it from the UX side. Nor from the LXD side! Could we have a learning debt?

So, tech debt is the delta between what good technology design would suggest, and what we do to get products out the door. For instance, using a sorting algorithm that’s quicker with small numbers of entries but doesn’t handle volume. The accrued debt only gets paid back once you go back and redesign. Which, too often, doesn’t happen, and the debt accumulates. The problems can make it difficult to expand capabilities, or can keep performance from scaling. I think of how Apple OS updates occasionally don’t really add new features but instead fix the internals. (Hasn’t seemed to happen as much lately?)
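To make the sorting example concrete, here’s a generic sketch (not any particular product’s code) of the kind of expedient choice that’s fine at launch and becomes debt at volume:

```python
import random
import time

def quick_ship_sort(items):
    """Insertion sort: perfectly fine for the handful of entries at launch,
    but O(n^2) -- the expedient choice that becomes tech debt as volume grows."""
    items = list(items)
    for i in range(1, len(items)):
        j = i
        while j > 0 and items[j - 1] > items[j]:
            items[j - 1], items[j] = items[j], items[j - 1]
            j -= 1
    return items

data = [random.random() for _ in range(5000)]

start = time.perf_counter()
quick_ship_sort(data)
mid = time.perf_counter()
sorted(data)  # 'paying back the debt': a scalable O(n log n) approach
end = time.perf_counter()

print(f"shortcut: {mid - start:.2f}s  vs  redesigned: {end - mid:.4f}s")
```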

Design debt is the UX equivalent. We see expedient shortcuts or gaps in the UX design, for instance. As Ward Cunningham, an agile proponent, says:

Design debt is all the good design concepts of solutions that you skipped in order to reach short-term goals. It’s all the corners you cut during or after the design stage, the moments when somebody said: “Forget it, let’s do it the simpler way, the users will make do.”

It’s a real thing. You may experience it when entering a phone number into a field, and then hear it’s not in the proper format (though there was no prior information about what the required format is). That’s bad design, and could (and should) be fixed.
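A minimal sketch of the fix (the field and format are invented for illustration): be forgiving about what the user types, and when you do have to complain, say what you expected:

```python
import re

PHONE_FORMAT = "123-456-7890"  # the expected format is just an example

def normalize_phone(raw: str) -> str:
    """Accept whatever punctuation the user types; only complain (and state
    the expected format) if the digits themselves don't add up."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) != 10:
        raise ValueError(f"Please enter a 10-digit number, e.g. {PHONE_FORMAT} (got {raw!r})")
    return f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"

print(normalize_phone("(415) 555 0199"))  # -> 415-555-0199
print(normalize_phone("415.555.0199"))    # -> 415-555-0199
```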

This could be true in learning, too. We could have ‘learning debt’. When we make practice (and I should note, for previous and future posts, that includes any assessment where learners apply the knowledge we’ve provided) about knowledge instead of application of knowledge, for instance, we’re creating a gap between what they’ve learned to do and what they need to do. That’s a problem. Or when we put in content because someone insists it has to be there, rather than a designer deciding it’s necessary for the learning. Which adds to cognitive load and undermines learning!

How often do we go back and improve our courses? If we’re offering workshops or some other instruction, we can adapt. When we create elearning, however, we tend to release it and forget it. When I ask audiences if they have any legacy courses that are out of date and unused but still hanging around their LMS, everyone raises their hands. We may update courses whose info has changed, but how many times do we go back and redo asynchronous courses because we’ve tracked them and have evidence they’re not working sufficiently? Yes, I acknowledge it happens, but not often enough. (*cough* We don’t evaluate our courses sufficiently nor appropriately. *cough*)

Ok, so everyone makes tradeoffs. However, which ones should we make? The evidence suggests erring on the side of better practice and less content. Prototyping and testing is another step we can take to remove debt up front. With UX, gaps in design early on cost more to fix later. We don’t typically go back and fix, but we can and should. Better yet, test and fix before it goes live. Another way to think about it is that learning debt is money wasted. Build, run, and not learn, or build, test, and refine until learning happens?

There are debts we can sustain, and ones we can’t. And shouldn’t. When our learning doesn’t even happen, that’s not sustainable. Our Minimum Viable Product has to be at least viable. Too often, it’s not. Let’s ensure that viable means it achieves an outcome, eh? It might not be optimal improvement, or as minimal in time as possible, but at least it’s achieving an outcome. That’s better than releasing a useless product (despite no one knowing), even if we get paid (internally or externally). What am I missing?

 

Reflecting on adaptive learning technology

11 June 2024 by Clark 1 Comment

My last real job before becoming independent (long story ;) was leading a team developing an adaptive learning platform. The underlying proposition was the basis for a topic I identified as one of my themes. Thinking about it in the current context, I realize there are some new twists. So here I’m reflecting on adaptive learning technology.

So, my premise for the past couple of decades is to decouple what learners see from how it’s delivered. That is, have discrete learning ‘objects’, and then pull them together to create the experience. I’ve argued elsewhere that the right granularity is by learning role: concepts are separate from examples, from practice, etc. (I had team members participating in the standards process.) The adaptive platform was going to use these learning objects to customize the sequence for different learners. This was both within a particular learning objective, and across a map of the entire task hierarchy.
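A minimal sketch of that decoupling (the names and fields are illustrative assumptions, not the platform’s actual schema): discrete objects tagged by learning role and objective, assembled into a sequence at delivery time:

```python
from dataclasses import dataclass

# Illustrative only: field names and roles are assumptions, not the platform's schema.
@dataclass
class LearningObject:
    role: str         # 'concept' | 'example' | 'practice' ...
    objective: str    # which objective in the task hierarchy it serves
    content_ref: str  # pointer to the actual asset, kept separate from delivery

def assemble(objective, pool, order=("concept", "example", "practice")):
    """Pull the objects for one objective into a default sequence; an adaptive
    engine would vary this order (and selection) per learner instead."""
    for role in order:
        for obj in pool:
            if obj.objective == objective and obj.role == role:
                yield obj

pool = [
    LearningObject("practice", "give-feedback", "scenario-07"),
    LearningObject("concept",  "give-feedback", "model-sbi"),
    LearningObject("example",  "give-feedback", "video-12"),
]
print([o.content_ref for o in assemble("give-feedback", pool)])
# -> ['model-sbi', 'video-12', 'scenario-07']
```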

The way the platform was going to operate was typical of intelligent tutoring systems, with a twist. We had a model of the learner, and a model of the pedagogy, but not an explicit model of expertise. Instead, the expertise was intrinsic to the task hierarchy. This was easier to develop, though unlikely to be as effective. Still, it was scalable, and with good learning science behind the programming, it should do a good job.

Moreover, we were going to then have machine learning, over time, improve the model. With enough people using the system, we would be able to collect data to refine the parameters of the teaching model. We could possibly be collecting valuable learning science evidence as well.

One of the barriers was developing content to our specific model. Yet I believed then, and still now, that if you developed it to a standard, it should be interoperable. (We’re glossing over lots of other inside arguments, such as whether smart object or smart system, how to add parameters, etc.) That was decades ago, and our approach was blindsided by politics and greed (long sordid story best regaled privately over libations). While subsequent systems have used a similar approach (*cough* Knewton *cough*), there’s not an open market, nor does SCORM or xAPI specifically provide the necessary standard.

Artificial intelligence (AI) has changed over time. While evolutionary, it appears revolutionary in what we’ve seen recently. Is there anything there for our purposes? I want to suggest no. Tom Reamy, author of Deep Text, argues that hybrids of symbolic and sub-symbolic AI (generative AI is an instance of the latter) have potential, and that’s what we were doing. Systems trained on the internet or other corpuses of images and/or text aren’t going to provide the necessary guidance. If you had a sufficient quantity of data about learning experiences with the characteristics of your own system, you could do it, but if it exists it’s proprietary.

For adaptive learning about tasks (not knowledge; a performance focus means we’re talking about ‘do’, not ‘know’), you need to focus on tasks. That isn’t something AI really understands, as it doesn’t really have a way to comprehend context. You can tell it, but it doesn’t necessarily know learning science either (ChatGPT can still promote learning styles!). And, I don’t think we have enough training data to train a machine learning system to do a good job of adapting learning. I suppose you could use learning science to generate a training set, but why? Why not just embed it in rules, and have the rules generate recommendations (part of our algorithm was a way to handle this)? And, as said, once you start running you will eventually have enough data to start tuning the rules.
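Here’s a toy version of what ‘embed it in rules’ might look like; the thresholds and rule set are invented for illustration, and the platform’s actual algorithm was richer:

```python
# A minimal sketch, not the platform's algorithm: pedagogy rules expressed directly
# in code, picking a learner's next step from their recent performance.
# The thresholds and rule set are illustrative assumptions.

def next_step(learner):
    """learner: dict with 'recent_scores' (list of 0..1) and 'attempts' (int)."""
    scores = learner["recent_scores"]
    if not scores:
        return "concept"                      # nothing attempted yet: introduce the model
    avg = sum(scores) / len(scores)
    if avg < 0.5 and learner["attempts"] >= 2:
        return "example"                      # struggling after practice: show a worked example
    if avg < 0.8:
        return "practice"                     # partial mastery: more contextualized practice
    return "advance"                          # mastered: move up the task hierarchy

print(next_step({"recent_scores": [], "attempts": 0}))          # concept
print(next_step({"recent_scores": [0.3, 0.4], "attempts": 3}))  # example
print(next_step({"recent_scores": [0.9, 0.85], "attempts": 2})) # advance
```

And, matching the point above, once real usage accrues those thresholds stop being guesses and become parameters you can tune against the data.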

Look, I can see using generative AI to provide text, or images, but not sequencing, at least not without a rich model. Can AI generate adaptive plans? I’m skeptical. It can do it for knowledge, for sure, generating a semantic tree. However, I don’t yet see how it can decide what application of that knowledge means, systematically. Happy to be wrong, but until I’m presented with a mechanism, I’m sticking to explicit learning rules. So, where am I wrong?

What I’m up to

4 June 2024 by Clark Leave a Comment

Ok, so it’s been a wee bit too much about me (my books, themes), yet it occurs to me that I should document what I’m doing. (Which I’ve done before, but this is looking forward, too.) Not just for me (though it helps ;), but because I realized my thinking, beyond the books, is actually getting spread out in various places. So, here’s what I’m up to…

Mostly, it’s centering around applying the cognitive and learning sciences to the design of solutions. In a variety of ways, of course. I’ve been working with Upside Learning, serving as their Chief Learning Strategist. They want to do more than pay lip service to learning science (which I laud). I’m working with them on evangelism, internal development, and more. I’m also working with Elevator 9, in this case as advisor. They’re a platform solution to complement live events, again doing so in alignment with our brains. I’m also serving as co-director of the Learning Development Accelerator. That’s a society focused on evidence-informed L&D, and we explore what this approach means in practice. In each, I’ve been advancing my own understanding, and sharing the learnings.

So, at LDA, you can find our podcasts, blog posts (some of which are free to air!), and some programs (some likewise). For members, we’re running some internal programs as well. I’ve been pleased to augment my previous program on You Oughta Know with this year’s YOK Practitioner, where I get to interview some really amazing people. Then there’s also the Think Like A…series, where we talk to representatives of adjacent fields we (should) be plagiarizing. Then there are workshops, and we’re always developing more things.

At Elevator 9, while most of the work is behind the scenes, I did author, and David Grad (the CEO) read and taped, a series of ‘liftologies’. These are short videos talking about the learning science that goes into their offering. When they redo the website, they’ll be easy to find, but right now they’re visible through the E9 LinkedIn page posts.

Upside Learning, on the other hand, has been proactive. They do a podcast with the CEO, Amit Garg (yes, I’ve been on it). They have a blog (and I’ve written some for them). I’ve also done some quick videos on myths. In addition, I’ve written some of their ebooks (topics like impact, microlearning, scenarios). And, of course, some webinars as well. These continue.

All this in conjunction with continuing as Quinnovation! I continue with a few clients, on a limited basis. These, of course, are not public, though the thoughts can percolate out (e.g. in this blog). I’m still doing some events, mostly virtually. For instance, I’ll be talking about the alignment between effective education and engaging events at LXDCon on Tues the 11th (at 7 AM PT). I’ll also be at DevLearn and Learning 2024.

That’s all I can think of at the moment. There’s more in the offing, of course. But for now, that’s what I’m up to. This blog may be (more than) enough, but the other sites prompt different thinking. They’re worth knowing about on their own, too!  If you’re interested, these are places to either become evidence-based, apply it, or get it done. Obviously, it’s something I think is important for our industry. (As is knowing the human information processing loop, which I’ve made freely available.) Whatever you do, however you do it, please do avoid the myths and apply the science.

My Themes

28 May 2024 by Clark Leave a Comment

While there’s a high correspondence between my books and what I believe, it’s not one to one. While there’s overlap, there are also unique (outsider?) perspectives. So, as much for me as for you, here’re my themes. It’s about applying what we know about cognition and learning. That also includes the emotional side. Moreover, we also need to apply it to the design process. That is, we, as designers, are applying, but also subject to, what’s known about how we think, work, and learn. That’s led to a variety of things that are covered here.

It starts with a core focus on learning. Which starts with the core of cognition, the human information processing loop, but goes beyond. I think that core, by the way, is a critical thing that really everyone who designs for people (and that’s everyone) should know. (Made a video freely available to that end.) The phenomena that arise from and augment that architecture play a role here. It covers, by the way, material in two of my books: the learning science one and my myths one. It’s also the basis for my participation in the Serious eLearning Manifesto. It’s about us applying, correctly, what’s known about creating learning experiences that lead to real outcomes. I still think my focus on activity-based learning is an important way to think about creating experiences.

A complement to that is my focus on engagement in learning. Here I’m reflecting what’s been studied about making experiences engaging, across games, theatre and film, fiction, flow, and more. The first manifestation was in my book on designing serious games, but it’s morphed. My latest book is a complement to the learning science side and as a generalization of those early principles on games. I’ll be talking about this at LXDCon.

However, when we talk about performance, the picture broadens. (A topic I’ll be discussing for Upside Learning.) Marc Rosenberg talked about going beyond (e)learning, and Jay Cross wrote about informal learning. I like to think of an ecosystem approach to meeting the full suite of performance needs. This includes not just courses, but also performance support. However, it also goes further, talking about innovation as well. As I like to say, when you’re doing research, design, trouble-shooting, etc, you don’t know the answer when you begin, so it too is learning. I tried to capture this in my book on where L&D should go.

An older theme, about mobile, is in some sense no longer relevant. Mobile (for corporations and universities) has become ubiquitous and is part of the performance ecosystem. In fact, part of the recognition of the ecosystem perspective came from thinking about mobile, and realizing that it’s least about courses on a phone, and about so much more. The frameworks I created then – augmenting learning, performance support, social/informal, and context-specific – however, strike me as still worthwhile to consider. It’s really about the alignment of technology with our minds, which includes interface design and more.

Thus, implicit in the ecosystem perspective is technology. One thing we lag in is being smart about our systems. While web spinners have been using tagged content and rules, we typically still create experiences hard-coded in their delivery. We thus neglect content engineering, and similarly content management (e.g. the lifecycle). I was on this theme a number of years ago (content and context), but it’s sadly still relevant. I think the advent of generative AI may get folks to start thinking more about discrete content for adaptive delivery, but I’d still use a different approach to implement it.

Again, it’s the application of how we think, work, and learn to the design of solutions. In my case, for performance outcomes for individuals and organizations. Not sure what my next theme will be (or whether there’ll even be one; these are still all too relevant). I’m not sure this is comprehensive, so hopefully this first stab will give me time to think about it more!

About my books

21 May 2024 by Clark 2 Comments

So, I’ve written about writing books, what makes a good book, and updated on mine (now a bit out of date). I thought it was maybe time to lay out their gestation and raison d’être. (I was also interviewed for a podcast, vidcast really, recently on the four newest, which brought back memories.) So here’re some brief thoughts on my books.

My first book, Engaging Learning, came from the fact that a) I’d designed and developed a lot of learning games, and b) I’d been an academic who had reflected and written on the principles and process. Thus, it made sense to write it. Plus, a) I was an independent and it seemed like a good idea, and b) the publisher wanted one (the time was right). In it, I laid out some principles for learning, engagement, and the intersection. Then I laid out a systematic process, and closed with some thoughts on the future. Like all my books, I tried to focus on the cognitive principles and not the technology (which was changing rapidly then, and still is). It went out of print, but I got the rights back and have rereleased it (with a new cover) for cheap on Amazon.

I wanted to write what became my fourth book as the next screed. However, my publisher wanted a book on mobile (market timing). Basically, they said I could do the next one if I did this first. I had been involved in mlearning courtesy of Judy Brown and David Metcalfe, but I thought they should write it. Judy declined, and David reminded me that he had already written one. Still, my publisher and I thought there was room for a different perspective, and I wrote Designing mLearning. I recognized that the way we use mobile doesn’t mesh well with ‘courses on a phone’, and instead framed several categories of how we could use it. I reckon those categories are still relevant as ways to think about technology! Again, republished by me.

Before I could get to the next book, I was asked by one of their other brands if I could write a mobile book for higher education. The original promise was that it’d be just a rewrite of the previous, and we allocated a month. Hah! I did deliver a manuscript, but asked them not to publish it. We agreed to try again, and The Mobile Academy was the result. It looks at different ways mobile can augment university actions, with supporting the classroom as only one facet. This too was out of print but I’ve republished.

Finally, I could write the book I thought the industry needed, Revolutionize Learning & Development. Inspired by Marc Rosenberg’s Beyond eLearning and Jay Cross’s Informal Learning, this book synthesizes a performance and technology-enabled push for an ecosystem perspective. It may have been ahead of its time, but it’s still in print. More importantly, I believe it’s still relevant and even more pressing! Other books have complemented the message, but I still think it’s worth a read. Ok, so I’m biased, but I still hear good feedback ;). My editor suggested ATD as a co-publisher, and I was impressed with their work on marketing (long story).

Based upon the successes of those books (I like to believe), and an obvious need in our field, ATD asked for a book on the myths that plague our industry. Here I thought Will Thalheimer, having started the Debunkers Club, would be a better choice. He, however, declined, thinking it probably wasn’t a good business decision (which is likely true; not much call for keynotes or consulting on myths). So, I researched and wrote Millennials, Goldfish & Other Training Misconceptions. In it, I talked about 16 myths (disproved beliefs), 5 superstitions (things folks won’t admit to but that emerge anyway), and 16 misconceptions (love/hate things). For each, I tried to lay out the appeal and the reality. For the bad practices, I suggest what to do instead. For the misconceptions, I try to identify when they make sense. In all cases I didn’t put down exhaustive references, but instead the most indicative. ATD did a great job with the book design, having an artist take my intro comic ideas for each and illustrate them, and making a memorable cover. (They even submitted it to a design competition, where it came close to winning!)

After the success of that tome, ATD came back and wanted a book on learning science. They’d previously asked me to edit the definitive tome, and while it was appealing, I didn’t want to herd cats. Despite their assurances, I declined. This, however, could be my own simple digest, so I agreed. Thus, Learning Science for Instructional Designers emerged. There are other books with different approaches that are good, but I do think I’ve managed to make salient the critical points from learning science that impact our designs. Frankly, I think it goes beyond instructional designers (really, parents, teachers, relatives, mentors and coaches, even yourself are designing instruction), but they convinced me to stick with the title.

Now, I view Learning Experience Design as the elegant integration of learning science with engagement. My learning science book, along with others, does a good job of laying out the first part. But I felt that, other than game design books (including mine!), there wasn’t enough on the engagement side. So, I wanted a complement to that last book (though it can augment others). I wrote Make It Meaningful as that complement. In it, I resurrected the framework from my first book, but used it to go across learning design. (Really, games are just good practice, but there are other elements.) I also updated my thinking since then, talking about both the initial hook and maintaining engagement through to the end. I present both principles and practical tips, and talk about the impact on your standard learning elements. In an addition I think is important, I also talk about how to take your usual design process and incorporate the necessary steps to create experiences, not just instruction. I do want you to create transformational experiences!

So, that’s where I’m at. You can see my recommended readings here (which likely need an update). Sometimes people ask “what’s your next book”, and my true answer at this point is “I don’t know.” Suggestions? Something that I’m qualified to write about, that there’s not already enough out there about, and that’s a pressing need? I welcome your thoughts!

An outside perspective

14 May 2024 by Clark 1 Comment

Someone reached out to me for a case study on addressing a workplace problem. I was willing, but there’s a small problem: I’ve never had to address a workplace learning problem. At least, not in the way most people expect. Instead, I provide an outside perspective. What’s that mean?

So, first of all, I don’t come from an instructional design (ID) background. I did get some exposure to educational approaches when I designed my own undergraduate degree in Computer-Based Education. Yet, there weren’t any ID courses where I was a student. As a graduate student, I took psychology courses on learning. I also read Reigeluth’s survey of ID design approaches. Further, I got a chance to interview the gracious and wise David Merrill. But, again, no formal ID courses were on tap.

On the flip side, I was in a vibrant program that was developing a cognitive science degree, and read everything on learning I could find: behavioral, cognitive, social, neural, even machine learning! I was in my post-doc as they were forming the learning science approach, too, and I was at a relevant institution. Still, no ID. So, I do have deep learning roots, just not ID.

Then, after the post-doc, I taught. That is, I practiced learning design, and continued reading and talking ID, and attending relevant conferences. Just not a formal ID course. Then I joined a small startup to design an adaptive learning platform, and then started consulting, but I never held an internally facing workplace learning role.

What that means is that I bring an ‘outside’ perspective to L&D. Which, I think, isn’t a bad thing. I’ve helped firms meet realistic goals in innovative ways, courtesy of not having my thinking pre-constrained. I’ve been able to interpret learning science in practical terms, and infer what ID says (also, I’ve read it and reflected in context on it). So, I’ve talked L&D design, and ID improvements, but from the view of an outsider.

Many times outsiders can bring new perspectives. And, they can be ignorant of all the contextual details. Thus, it’s really important to ask and establish those constraints, and then to be sensitive to the ones that they didn’t mention. (One of the benefits of the court jester was to reframe things in ways that showed the humor in the hidden assumptions.) Still, I’m not apologizing. I think the background I’ve acquired is useful to people who need to meet real goals, and have a decent track record in doing so. I welcome your thoughts on whether an outside perspective is of benefit.

AI as a System

7 May 2024 by Clark Leave a Comment

At the recent Learning Guild‘s Learning & HR Tech Solutions conference, the hot topic was AI. To the point where I was thinking we need a ‘contrarian’ AI event! Not that I’m against AI (I mean, I’ve been an AI groupie for literal decades); I think it’s got well-documented upsides. (And downsides.) Just right now, however, I feel that there’s too much hype, and I’m waiting for the trough of disillusionment to hit, à la the Gartner hype cycle. In the meantime, though, I really liked what Markus Bernhardt was saying in his session. It’s about viewing AI as a system, though of course I had a pedantic update to it ;).

So, Markus’ point was that we should separate out the data from the processing method. He presented a simple model for thinking about AI that I liked. In it, he proposed three pieces that I paraphrase:

  • the information you use as your basis
  • the process you use with the information
  • and the output you achieve

Of course, I had a quibble, and ended up diagramming my own way of thinking about it. Really, it only adds one thing to his model, an input! Why?

So I have the AI system containing the process and the data it operates on. I like that separation, because you can use the same process on other data, or vice versa. As the opening keynote speaker, Maurice Conti, pointed out, the AI’s not biased, the data is. Having good data is important. As is having the right process to achieve the results you want (that is, a good match between problem and process; the results are the results ;). Are you generating or discriminating, for instance? Then, Markus said, you get the output, which could be prose, an image, a decision, …

However, I felt that it’s important to also talk about the input. What you input determines the output. With different queries, for instance, you can get different results. That’s what prompt engineering is all about! Moreover, your output can then be part of the input in an iterative step (particularly if your system retains some history). Thus, thinking about the input separately is, to me, a useful conceptual extension.
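For the conceptually inclined, here’s a toy rendering of that framing (the retrieval ‘process’ is deliberately trivial and only for illustration): a system couples a process with its data, takes an input, produces an output, and the output can feed the next input:

```python
from dataclasses import dataclass
from typing import Any, Callable

# A minimal sketch of the four-part framing: an AI system couples a process with
# the data it operates on; inputs go in, outputs come out, and an output can be
# fed back as part of the next input. The 'retrieve' process is a toy stand-in.

@dataclass
class AISystem:
    data: Any                            # what the system is trained/grounded on
    process: Callable[[Any, Any], Any]   # how it operates on data + input

    def run(self, user_input):
        return self.process(self.data, user_input)

def retrieve(data, query):
    """Toy process: return the stored snippet sharing the most words with the query."""
    words = set(query.lower().split())
    return max(data, key=lambda s: len(words & set(s.lower().split())))

system = AISystem(
    data=["spaced practice beats cramming", "learning styles are a myth"],
    process=retrieve,
)
out = system.run("does spacing out practice help?")
print(out)                              # the output...
print(system.run(out + " and why?"))    # ...can become part of the next input (iteration)
```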

It may seem a trivial addition, but I think it helps to think about how to design inputs. Just as we align process with data for the task, we need to make sure that the input matches the process to get the best output. So, maybe I’m overcomplicating thinking about AI as a system. What say you?

Engaged and/or Effective

30 April 2024 by Clark Leave a Comment

[Quadrant diagram of effectiveness by engagement: neither is an info dump, engaged alone is a trivial pursuit, effective alone is boring work, unless it’s also engaging, in which case it’s hard fun.]

I’ve regularly talked about how learning can, and should, be ‘hard fun’. Yet, I haven’t really talked about each, effectiveness and engagement, independently. Of course, there’s a quadrant map that separately talks about engaged and/or effective. Let me remedy the lack!

The lack of either engagement or effectiveness is relatively rare, thankfully. You do see it when under-skilled and under-resourced folks are making a course. For instance, giving SMEs authoring tools, or dumping a bunch of PPTs and PDFs on an inexperienced instructional designer. Or when folks won’t spend enough to even get production values, let alone actual effectiveness. What you get is information dump (because, research tells us, experts don’t have access to what they actually do), but not with professional polish. It’s ‘content’ without distinction. More importantly, if there is practice, it’s on knowledge retrieval rather than knowledge application. Which leads to what in cognitive science is called ‘inert knowledge’. It may be remembered, but it won’t be used when relevant.

We also see a lot of ‘tarted up’ information dump. Here, there are good production values. It looks nice, because it’s well-produced. However, it’s still information (usually with a quiz). Here, folks know a bit about visual design, and use tools and templates that make it look good. They may even have experienced designers on staff, but…time and cost expectations keep folks from doing the right thing. It could also be a lack of understanding of the importance of challenging contextual practice. That’s all too common, too! It’s still a trivial pursuit.

Quite simply, learning needs to be effective. If it’s not, we’re wasting money. Now, that’s been shown to be the case in many ways. Over the years, we’ve heard estimates that only 10-15% of our training efforts are working. Which means we’re wasting 85-90% of our investment. Yet we know what leads to good learning (e.g. the Serious eLearning Manifesto). Learning science gives us good guidelines, but we still see too much information dump. Yet, if it’s not engaging, learners aren’t likely to commit appropriately, and we’re not optimizing the outcomes. It just seems like work.

When we understand the necessary alignment between engagement and effectiveness, however, we truly can deliver ‘hard fun’. That alignment is what my research and design efforts yielded. It was also the core of my book on serious game design and my most recent tome on making learning meaningful. (The latter is really a complement to my learning science book, and an attempt to bring both together to do learning experience design.)

It’s not necessarily easy to generate ‘hard fun’, nor is it the cheapest option. However, it gets easier with practice (like most things), and it’s the most cost-effective option. That is, if you truly want results. But if you don’t, why are you bothering? There are requirements, like making sure you have a real learning need, but that should be true, regardless. You shouldn’t be asking about engaged and/or effective, you should be shooting for both. Right?
