Learnlets

Clark Quinn’s Learnings about Learning

The easy answer

16 July 2024 by Clark Leave a Comment

In working on something, I’m looking at the likely steps people take. Of course, I’m listing them from easiest to most useful (with the hope that folks understand they should take the latter). However, it’s making me think that, too often, people are looking at the easy answer, not the most accurate one. Because they really don’t know the problem. When does the easy answer make sense? Are we letting ourselves off the hook too much?

So, for instance, in learning we really should do analysis when someone asks for something. “We need a course on X.” “Ok, what tells you that you need this, and how will we know when it’s worked?” In a quick family convo, we established that this sort of un-analytical request is made all the time:

  • “Why isn’t my plant blooming?” (It’s not the season.)
  • “Fix this code.” (The input’s broken, not the code.)
  • …

Yet, people actually don’t do this up-front analysis. Why? It’s harder, it takes more time, it slows things down, it costs more. Besides, we know what the problem is.

Except, we don’t know what the problem is. Too often, the question or request is making some assumptions about the state of the world that may not be true. It may be the right answer, but it may not. Ensuring that you’ve identified the problem correctly is the first part of the design process, and you should diverge on exploration before you converge on a solution. That’s the double diamond, where you first explore the problem, before you explore a solution.

Perhaps counter-intuitively, this is more efficient. Why? Because you’re not expending resources solving the wrong problem. Are you sure you’ve gotten it right? How do you know when to take the easier path? If you know the answer you need, you’re better equipped to choose the level of solution you need. If you don’t know the question, however, and make assumptions about the root cause, you can go off the rails. And, end up spending effort you didn’t need to.

Look, I live in the real world. I have to take shortcuts (heck, I’m lazy ;). And I do. However, I like to do that when I know the answer, and know that the outcome is good enough to meet the need. I’ll go for the easy answer, if I know it’ll solve the problem well enough. But I can’t if I don’t know the question or problem, and just assume. And we know what happens when we ass-u-me.

Break it down!

2 July 2024 by Clark 2 Comments

In our LDA Forum, someone posted a question asking about taking Cathy Moore’s Action Mapping for soft skills, like improving team dynamics. Now, they’re specifically asking about a) people with experience, and b) in the context of not-for-profits, so…I’m not a good candidate to respond. However, what it does raise is a more common problem: how do you train things that are more ephemeral, like, for instance, leadership or communication? My short answer is “break it down”. What do I mean? Here’re some thoughts, and I welcome feedback!

Many moons ago, I co-wrote a paper on evaluating social media impacts. There are the usual metrics, like ‘engagement’. That is, are people using the system? Of course, for companies charging for their platform, this could be as infrequent as a person accessing it once a month. More practically, however, it should be a person hitting it at least several times a week, or even several times a day! If you’re communicating, cooperating, and collaborating, you really should be interacting at a fair frequency.

I, on the other hand, argued for more detailed implications. If you’re putting it into a sales team, you should expect not only messages, but more success on sales, shorter sales cycles, etc. So you can get more detailed. These days, you can do even more, and have the system actually tag what the messages are about and count them. You can go deeper.

Which is what I think is the answer here. What skills do you want? For an innovation demo with Upside Learning, I argued we should break it down. That includes how to work out loud, and how to provide feedback, and how to run group meetings. (I’m just reading Alex Edmans’ May Contain Lies, and it contains a lot of details about how to consider data and evidence.) We can look for more granular evidence. Even for skills like team dynamics, you should be looking at what makes good dynamics. So, things like making it safe yet accountable, providing feedback on behavior not on the person, valuing diversity, etc. There should be specific skills you want to develop, and assess. These, then, become the skills you design your learning to accomplish. You are, basically, creating a curriculum of the various skills that comprise the aggregated topic.

It may be that you assess a priori, and discover that only some are missing in your teams. That upfront analysis should happen regardless, but is too infrequent. The interlocutor here also mentioned the audience complaining about the time for analysis. Yep, that’s a problem. Reckon you have to sell the whole package: analyzing, designing, and evaluating for impact on performance, not just some improvement. Yet, compared to throwing money away? Seems like targeting intervention efforts should be a logical sell. If only we lived in a rational world, eh?

Still, overall, I think that these broad programs break down into specific skills that can be targeted and developed. And, we should. Let’s not get away with vague intentions, explanations, and consequently no outcomes. Let’s do the work, break it down, and develop actual skills. That, at least, is my take; I welcome hearing yours!

Diving or surfacing?

25 June 2024 by Clark Leave a Comment

In my regular questing, one of the phenomena I continue to explore is design. Investigating, for instance, reveals that, contrary to recommendations, designers approach practice more pragmatically. That’s something I’ve been experiencing both in my work with clients and recent endeavors. So, reflecting: are folks, and should they be, diving or surfacing?

The original issue is how designers design. If you look at recommendations, they typically suggest starting at the top-level conceptualization and working down, such as Jesse James Garrett’s Information Architecture approach (PDF of the Elements of User Experience; note that he puts the highest level of conceptualization at the bottom and argues to work up). Empirically, however, designers switch between top-down and bottom-up. What do I do?

Well, it of course depends on the project. Many times (and, ideally), I’m brought in early, to help conceptualize the strategy, leveraging learning science, design, organizational context, and more. I tend to lead the project’s top-level description, creating a ‘blueprint’ of where to go. From there, more pragmatic approaches make sense (e.g. bringing in developers). Then, I’m checking on progress, not doing the implementation. I suppose that’s like an architect. That is, my role is to stay at the top-level.

In other instances, I’m doing more. I frequently collaborate with the team to develop a solution. Or, sometimes, I get concrete to help communicate the vision that the blueprint documents. Which, in working with an unfamiliar team, isn’t unusual. That ‘telepathy’ comes with getting to know folks ;).

In those other instances, I, too, will find that pragmatic constraints influence the overarching conceptualization, and work back up to see how the guidelines need to be adapted to account for the particular instance. Or we need to disconnect from the details to remember what our original objective is. This isn’t a problem! In general, we should expect that ongoing development unearths realities that weren’t visible from above, and vice versa. We may have good general principles (e.g. from learning science), but then we need to adapt them to our circumstances, which are unlikely to exactly match. In general, we need to abstract the best principles, and then de- and re-contextualize.

I find that while it’s harder work to wrestle with the details (more pay for IDs! ;), it’s very worthwhile. What’s developed is better as a result of testing and refining. In fact, this is a good argument about why we should iterate (and build it into our timelines and budgets). It’s hubris to assume that ‘if we build it, it is good’. So, let’s not assume we can either be diving or surfacing, but instead recognize we should cycle between them. Start at the top and work down, but then regularly check back up too!

Learning Debt?

18 June 2024 by Clark Leave a Comment

In our LDA conversation with David Irons, User Experience (UX) Strategist, for our Think Like A… series, he mentioned a concept I hadn’t really considered. The concept is ‘design debt’, as an extension of the idea of ‘tech debt’. I was familiar with the latter, but hadn’t thought of it from the UX side. Nor the LXD side! Could we have a learning debt?

So, tech debt is that delta between what good technology design would suggest, and what we do to get products out the door. So, for instance, using an algorithm for sorting that’s quicker with small numbers of entries but doesn’t handle volume. The accrued debt only gets paid back once you go back and redesign. Which, too often, doesn’t happen, and the debt accumulates. The problems can make it difficult to expand capabilities, or can keep performance from scaling. I think of how Apple OS updates occasionally don’t really add new features but instead fix the internals. (Hasn’t seemed to happen as much lately?)
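To make the sorting example concrete, here’s a minimal, purely hypothetical sketch (the function names and the sample data are my invention, not anything from the post): the quick-to-ship version is fine for a handful of entries, and ‘paying back the debt’ means redesigning around something that scales.

```typescript
// Hypothetical "ship it" version: insertion sort is simple and fine for a
// handful of entries, but its O(n^2) behavior is the accrued debt once the
// data grows.
function sortScoresQuickAndDirty(scores: number[]): number[] {
  const result = [...scores];
  for (let i = 1; i < result.length; i++) {
    const current = result[i];
    let j = i - 1;
    while (j >= 0 && result[j] > current) {
      result[j + 1] = result[j];
      j--;
    }
    result[j + 1] = current;
  }
  return result;
}

// "Paying back the debt": redesign around something that scales; here, simply
// the built-in O(n log n) sort with a numeric comparator.
function sortScoresAtScale(scores: number[]): number[] {
  return [...scores].sort((a, b) => a - b);
}

// Both give the same answer; only one stays fast at volume.
console.log(sortScoresQuickAndDirty([42, 7, 19]));
console.log(sortScoresAtScale([42, 7, 19]));
```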

Design debt is the UX equivalent. We see expedient shortcuts or gaps in the UX design, for instance. As Ward Cunningham, an agile proponent, says:

Design debt is all the good design concepts of solutions that you skipped in order to reach short-term goals. It’s all the corners you cut during or after the design stage, the moments when somebody said: “Forget it, let’s do it the simpler way, the users will make do.”

It’s a real thing. You may experience it when entering a phone number into a field, and then being told it’s not in the proper format (though there was no prior information about what the required format is). That’s bad design, and could (and should) be fixed.
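As a purely illustrative fix (a hypothetical sketch that assumes a US-style 10-digit number; the function name and message are mine, not from the post), the field could normalize whatever the user typed and only complain, stating the expected format, when the input genuinely can’t be used:

```typescript
// Hypothetical sketch: accept friendly input by normalizing it, rather than
// rejecting it after the fact with no hint about the required format.
// Assumes a US-style 10-digit number purely for illustration.
function normalizeUsPhone(raw: string): string | null {
  const digits = raw.replace(/\D/g, "");     // strip "(", ")", "-", ".", spaces, etc.
  const national =
    digits.length === 11 && digits.startsWith("1")
      ? digits.slice(1)                      // tolerate a leading country code
      : digits;
  return national.length === 10 ? national : null;
}

// Example usage: only genuinely unusable input triggers a message, and that
// message actually states the expected format.
const entered = "(555) 123-4567";
const normalized = normalizeUsPhone(entered);
console.log(
  normalized ?? "Please enter a 10-digit phone number, e.g. 555-123-4567"
);
```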

This could be true in learning, too. We could have ‘learning debt’. When we make practice (and I should note for previous and future posts that this includes any assessment where learners apply the knowledge we’ve provided) about knowledge instead of application of knowledge, for instance, we’re creating a gap between what they’ve learned to do and what they need to do. That’s a problem. Or when we put in content because someone insists it has to be there rather than a designer deciding it’s necessary for the learning. Which adds to cognitive load and undermines learning!

How often do we go back and improve our courses? If we’re offering workshops or some other instruction, we can adapt. When we create elearning, however, we tend to release it and forget it. When I ask audiences if they have any legacy courses that are out of date and unused but still hanging around their LMS, everyone raises their hands. We may update courses whose info has changed, but how many times do we go back and redo asynchronous courses because we’ve tracked them and have evidence that they’re not working sufficiently? Yes, I acknowledge it happens, but not often enough. (*cough* We don’t evaluate our courses sufficiently nor appropriately. *cough*)

Ok, so everyone makes tradeoffs. However, which ones should we make? The evidence suggests erring on the side of better practice and less content. Prototyping and testing is another step we can take to remove debt up front. With UX, gaps in design early on cost more to fix later. We don’t typically go back and fix them, but we can and should. Better yet, test and fix before it goes live. Another way to think about it is that learning debt is money wasted. Build, run, and not learn, or build, test, and refine until learning happens?

There are debts we can sustain, and ones we can’t. And shouldn’t. When our learning doesn’t even happen, that’s not sustainable. Our Minimum Viable Product has to be at least viable. Too often, it’s not. Let’s ensure that viable means it achieves an outcome, eh? It might not be optimal improvement, or as minimal in time as possible, but at least it’s achieving an outcome. That’s better than releasing a useless product (despite no one knowing), even if we get paid (internally or externally). What am I missing?

 

About my books

21 May 2024 by Clark 2 Comments

So, I’ve written about writing books, what makes a good book, and updated on mine (now a bit out of date). I thought it was maybe time to lay out their gestation and raison d’être. (I was also interviewed for a podcast, vidcast really, recently on the four newest, which brought back memories.) So here’re some brief thoughts on my books.

My first book, Engaging Learning, came from the fact that a) I’d designed and developed a lot of learning games, and b) I’d been an academic and reflected and written on the principles and process. Thus, it made sense to write it. Plus, a) I was an independent and it seemed like a good idea, and b) the publisher wanted one (the time was right). In it, I laid out some principles for learning, engagement, and the intersection. Then I laid out a systematic process, and closed with some thoughts on the future. Like all my books, I tried to focus on the cognitive principles and not the technology (which was changing rapidly then and continues to). It went out of print, but I got the rights back and have rereleased it (with a new cover) for cheap on Amazon.

I wanted to write what became my fourth book as the next screed. However, my publisher wanted a book on mobile (market timing). Basically, they said I could do the next one if I did this first. I had been involved in mlearning courtesy of Judy Brown and David Metcalfe, but I thought they should write it. Judy declined, and David reminded me that he had written one. Still, my publisher and I thought there was room for a different perspective, and I wrote Designing mLearning. I recognized that the way we use mobile doesn’t mesh well with ‘courses on a phone’, and instead framed several categories of how we could use them. I reckon those categories are still relevant as ways to think about technology! Again, republished by me.

Before I could get to the next book, I was asked by one of their other brands if I could write a mobile book for higher education. The original promise was that it’d be just a rewrite of the previous one, and we allocated a month. Hah! I did deliver a manuscript, but asked them not to publish it. We agreed to try again, and The Mobile Academy was the result. It looks at different ways mobile can augment university actions, with supporting the classroom as only one facet. This too was out of print, but I’ve republished it.

Finally, I could write the book I thought the industry needed, Revolutionize Learning & Development. Inspired by Marc Rosenberg’s Beyond eLearning and Jay Cross’s Informal Learning, this book synthesizes a performance and technology-enabled push for an ecosystem perspective. It may have been ahead of its time, but it’s still in print. More importantly, I believe it’s still relevant and even more pressing! Other books have complemented the message, but I still think it’s worth a read. Ok, so I’m biased, but I still hear good feedback ;). My editor suggested ATD as a co-publisher, and I was impressed with their work on marketing (long story).

Based upon the successes of those books (I like to believe), and an obvious need in our field, ATD asked for a book on the myths that plague our industry. Here I thought Will Thalheimer, having started the Debunkers Club, would be a better choice. He, however, declined, thinking it probably wasn’t a good business decision (which is likely true; not much call for keynotes or consulting on myths). So, I researched and wrote Millennials, Goldfish & Other Training Misconceptions. In it, I talked about 16 myths (disproved beliefs), 5 superstitions (things folks won’t admit to but that emerge anyway), and 16 misconceptions (love/hate things). For each, I tried to lay out the appeal and the reality. For the bad practices, I suggest what to do instead. For the misconceptions, I try to identify when they make sense. In all cases I didn’t put down exhaustive references, but instead the most indicative. ATD did a great job with the book design, having an artist take my intro comic ideas for each and illustrate them, and making a memorable cover. (They even submitted it to a design competition, where it came close to winning!)

After the success of that tome, ATD came back and wanted a book on learning science. They’d previously asked me to edit the definitive tome, and while it was appealing, I didn’t want to herd cats. Despite their assurances, I declined. This, however, could be my own simple digest, so I agreed. Thus, Learning Science for Instructional Designers emerged. There are other books with different approaches that are good, but I do think I’ve managed to make salient the critical points from learning science that impact our designs. Frankly, I think it goes beyond instructional designers (really, parents, teachers, relatives, mentors and coaches, even yourself are designing instruction), but they convinced me to stick with the title.

Now, I view Learning Experience Design as the elegant integration of learning science with engagement. My learning science book, along with others, does a good job of laying out the first part. But I felt that, other than game design books (including mine!), there wasn’t enough on the engagement side. So, I wanted a complement to that last book (though it can augment others). I wrote Make It Meaningful as that complement. In it, I resurrected the framework from my first book, but use it to go across learning design. (Really, games are just good practice, but there are other elements). I also updated my thinking since then, talking about both the initial hook and maintaining engagement through to the end. I present both principles and practical tips, and talk about the impact on your standard learning elements. In an addition I think is important, I also talk about how to take your usual design process, and incorporate the necessary steps to create experiences, not just instruction. I do want you to create transformational experiences!

So, that’s where I’m at. You can see my recommended readings here (which likely need an update). Sometimes people ask “what’s your next book?”, and my true answer at this point is “I don’t know.” Suggestions? Something that I’m qualified to write about, that there’s not already enough out there about, and that addresses a pressing need? I welcome your thoughts!

An outside perspective

14 May 2024 by Clark 1 Comment

Someone reached out to me for a case study on addressing a workplace problem. I was willing, but there’s a small problem: I’ve never had to address a workplace learning problem. At least, not in the way most people expect. Instead, I provide an outside perspective. What’s that mean?

So, first of all, I don’t come from an instructional design (ID) background. I did get some exposure to educational approaches when I designed my own undergraduate degree in Computer-Based Education. Yet, there weren’t any ID courses where I was a student. As a graduate student, I took psychology courses on learning. I also read Reigeluth’s survey of ID design approaches. Further, I got a chance to interview the gracious and wise David Merrill. But, again, no formal ID courses were on tap.

On the flip side, I was in a vibrant program that was developing a cognitive science degree, and read everything on learning I could find: behavioral, cognitive, social, neural, even machine learning! I was in my post-doc as they were forming the learning science approach, too, and I was at a relevant institution. Still, no ID. So, I do have deep learning roots, just not ID.

Then, after the post-doc, I taught. That is, I practiced learning design, and continued reading and talking ID, and attending relevant conferences. Just not a formal ID course. Then I joined a small startup to design an adaptive learning platform, and then started consulting, but I never held an internally facing workplace learning role.

What that means is that I bring an ‘outside’ perspective to L&D. Which, I think, isn’t a bad thing. I’ve helped firms meet realistic goals in innovative ways, courtesy of not having my thinking pre-constrained. I’ve been able to interpret learning science in practical terms, and infer what ID says (also, I’ve read it and reflected in context on it). So, I’ve talked L&D design, and ID improvements, but from the view of an outsider.

Many times outsiders can bring new perspectives. And, they can be ignorant of all the contextual details. Thus, it’s really important to ask and establish those constraints, and then to be sensitive to the ones that they didn’t mention. (One of the benefits of the court jester was to reframe things in ways that showed the humor in the hidden assumptions.) Still, I’m not apologizing. I think the background I’ve acquired is useful to people who need to meet real goals, and have a decent track record in doing so. I welcome your thoughts on whether an outside perspective is of benefit.

More on coaching

16 April 2024 by Clark Leave a Comment

Recently, the LDA had a debate about coaching, following on the podcast interview. The wise Emma Weber represented the pro argument, while the LDA’s own Matt Richter was con. (Note that these are false divides; we explore the topic for the sake of unpacking issues.) Superb moderating from Kat Koppett was a bonus! As the discussion went on, it uncovered more on coaching, without yielding any finality (for reasons we’ll explore).

So, one of the problems emerged immediately: definitions. Matt pushed a bit on the ‘like sport’ notion, where coaching has lots of specific knowledge, while Emma was more on the domain-independent side of coaching. What emerged was that different people have different definitions. Some folks (like me) put coaching further on the domain-dependent side, with mentoring being the more abstract. However, it’s clear others view coaching as the more advanced and deeper side.

This divide isn’t new, but it does provide some barriers, not least to research! As that issue came up, Kat pointed us to a study that began by saying “However, the coaching research suggests a large variety of processes and outcomes, lacking clarity on the primary psychological dimensions most impacted.” Their meta-analysis suggested that “executive coaching is a powerful instrument for organizations to support positive change and personal development.” Which is a good thing, for sure. Their definition does seem to err more on the general side, which is interesting. And, to my own understanding, an important lesson.

One issue that stuck with me was thinking through the range of development. After the formal learning experience, I think there’re times when folks need to be observed, and provided some feedback as they perform. It became clear that the domain-independent model wants the learner to recognize for themselves when they’re not doing well and need to ask for assistance. Yet, a crucial inflection point is making that transition, and I believe that folks aren’t there right away. Similarly, we may not have the resources to add in all the complexities to a particular model for this task initially. So, we expect coaches (read: supervisors and managers) to help develop understanding. Maybe that’s not coaching, by definition, but it’s a task.

I’ll agree at some point you can start guiding folks to their own improvements, but I suspect that only comes when some base level of understanding is reached. We should be clear about this type of interaction as well as the one advocated for coaching! Similarly, we need clarity on labelling! We didn’t end up coming to any finality on that, sadly.

An issue I hadn’t thought about, but which became important in the discussion, is the issue of appropriate coaching. Clearly, some approaches to coaching don’t work. Knowing when you can expect the coachee to be capable of domain-independent coaching would be one important criterion. Knowing how to ask questions appropriately is another. My concern here is that there are a fair few models about coaching, and with the terminological and empirical barriers, how do you determine the best methods? If we’re to be evidence-based, how can we be?

I can’t say we came to any conclusions, but I do feel we unpacked more of the issues, and did give ourselves some guidance as to what to do when, even if we don’t have agreed-upon names for it all yet. Coaching is important, of both types. The data from that study shows coaching can help. We know also that extending the learning experience through feedback on performance helps. We just need to figure out how best to combine them so we know more about coaching. Those are my thoughts, at least; I look forward to yours.

Being proactive?

9 April 2024 by Clark Leave a Comment

On a recent edition of the Learning Development Accelerator‘s Think Like A… series, I interviewed Kevin Wheeler. He represented, in our discussion, the role of talent in the organization. Now, I’ve been talking the organizational perspective for a while. Despite that, amongst the pearls of wisdom he dropped was one that really resonated. It had to do with the forces that are gathering, and his suggestion was that L&D should start being proactive.

He was actually talking about talent and L&D in conjunction. One of his points is that we’re two sides of the same coin. There’s a decision about ‘build vs buy’ when meeting the needs of the organization. In this case, L&D is the ‘build’ while talent is the ‘buy’. His metaphor of a ‘supply chain’ for thinking about talent is apt; his point is to be looking to the sources of talent.

However, what struck me was his perspective that both haven’t been proactive enough. He sees talent & learning as being too reactive to needs, instead of looking ahead and making plans. For instance, what skills are necessary to cope with the emergence of generative AI? What do you need? Do you have the foundations in the org, or will you need to bring in new capabilities? He envisions an executive role that encompasses both L&D and talent, responsible for ensuring that the org is forward-looking in skills and meeting them.

This aligns nicely with the current focus on ‘upskilling’, as everyone’s going nuts trying to figure out what skills, and how to develop or acquire them, at scale. Thinking ahead might not anticipate every revolution, but it’s clear that the foundational technology base has mutated, and that these new capabilities are likely to stick around. The revolution may be over (guesses on that?), but there’s certain to be evolution, likely rapid! How do you cope?

I think there’s strong evidence that L&D has been too reactive – order-taking – and that there are several ways we can be more strategic. That includes being proactive, as well as having a richer suite of solutions instead of courses über alles. It’s also about taking ownership of innovation by practicing it internally, as well. Listening to Kevin was a great opportunity to think about the bigger picture of what we do.

BTW, with the clear caveat that I’m a co-director, we really are trying to make what appears in the LDA be of value. There’re no vendors, it’s all evidence-based principles and practices for L&D. We invite you to check us out. 

Impactful decisions

2 April 2024 by Clark 1 Comment

I’ve been talking about impact in a variety of ways, and have also posited that decisions are key. I really haven’t put them together, so perhaps it’s time ;). So here’re some thoughts on impactful decisions.

To start with, I’ve suggested that what will make a difference to orgs, going forward (particularly in this age of genAI), is the ability to make better decisions. That is, either ones we’re not making right now, or new ones we need to be able to make. When we’re moving away from us doing knowledge tasks (e.g. remembering arbitrary bits of information), our value is going to be in pattern-matching and meaning-making. When faced with a customer’s problem, we’ll need to match it to a solution. We need to look at a market, and discern new products and approaches. As new technologies emerge, we’ll have to discern the possibilities. What makes us special is the ability to apply frameworks or models to situations despite the varying contexts. That’s making decisions.

To do this, there are several steps. What are the situations and decisions that need to be made? We should automate rote decisions. So then we’ll be dealing with recognizing situations, determining models, using them to make predictions of consequences, and choosing the right one. We need to figure out what those situations are, what the barriers to success are, and what can be in the world versus what needs to be in the head. Or, for that matter, what we can solve in another way!

We also need to determine how we’ll know when we’ve succeeded. That is, what’s the observable measure that says we’re doing it right? It frequently can be triggered by a gap in performance. It’s more than “our sales aren’t up to scratch”, but specifics: time to close? success rate? Similarly for errors, or customer service ratings, etc. It needs to be tangible and concrete. Or it can be a new performance we need. However, we need some way to know what the level is now and what it should be, so we can work to address it.

I note that it may feel ephemeral: “we need more innovation”, or “we need greater collaboration”, or… Still, these can be broken down. Are people feeling safe? Are they sharing progress? Is constructive feedback being shared? Are they collaborating? There are metrics we can see around these components, and they may not be exhaustive, but they’re indicative.

Then, we need to design to develop those capabilities. We should be designing the complements to our brain, and then developing our learning interventions. Doing it right is important! That means using models (see above) and examples (models in context), and then appropriate practice, with all the nuances: context, challenge, spacing, variation, feedback… So, first the analysis, then the design. Then…

The final component is evaluation. We first need to see if people are able to make these decisions appropriately, then whether they’re doing so, and whether that’s leading to the needed change. We need to be measuring to see if we’re getting things right after our intervention, whether it’s translating to the workplace, and whether it’s leading to the necessary change.

When we put these together, in alignment, we get measurable improvement. That’s what we want, making impactful decisions. Don’t trust to chance, do it by design!

Misplaced organizational focus?

26 March 2024 by Clark 3 Comments

Conjunctions are interesting learning opportunities. When two things provide different facets, particularly on something you’ve been thinking about, it’s serendipitous. In this case, two widely different readings triggered some reflections asking whether perhaps we’ve a misplaced organizational focus.

So, I’ve been a bit concerned about the rabid interest in generative AI. Not that I think it’s inherently bad, despite its flaws. Instead, my concern is the uses it’s put to. If you think about the classic engineering proposition – cheap, fast, or good; pick 2 – you know you can apply AI to any of the areas. Always, however, it seems that the focus is on cheap and fast. Which concerns me. There’s substantial evidence that our L&D efforts aren’t having an impact. Thus, doing bad faster and cheaper is still bad!

Part of this, it seemed to me, stems from a rabid focus on short-term returns. I read The Japan That Can Say No many moons ago, and became convinced that a purely financial focus isn’t in the long-term interests of organizations. Now, there’s reinforcement!

First, in Australian news was a report about how a famous economist was rethinking the role of economics. While I didn’t agree with all of it, one aspect that resonated was captured in these bits:

“…we have largely stopped thinking about ethics and about what constitutes human well-being. We are technocrats who focus on efficiency…We often equate well-being with money or consumption, missing much of what matters to people.”

The juxtaposition happened with this quote aggregated by Learnnovators and posted to LinkedIn:

” The early signals of what A.I. can do should compel us to think differently about ourselves as a species. …Those skills are ones we all possess and can improve, yet they have never been properly valued in our economy or prioritized in our education and training…”
– Aneesh Raman, VP, Workforce Expert at LinkedIn & Maria Flynn, President & CEO of Jobs for the Future (JFF)

The overlap, to me, has to do with the undervaluing of what humans bring to the economic table. Efficiency isn’t the only good. Pushing L&D to do ‘box ticking’ learning design faster and cheaper isn’t consonant with recognizing what gives our work meaning. Besides undervaluing what learning design could and should be, it’s disrespectful to the learners and the organization.

I think that what’s driving organizations should be how they contribute to society as a whole. The means to that end is creating an internal environment conducive to supporting people, individually and collectively, to contribute their best in ways that respect what we offer. There are things technology can do that, frankly, we as people shouldn’t. Similarly, there are things we can do that we shouldn’t abrogate. To paraphrase the meme, I don’t want people doing the menial tasks while leaving the creativity to machines.

A holistic synergy, each doing what they do best to augment the other, alone and together, is optimal. Our economics should support that as well, and to the extent our structures don’t, it may be time to rethink them. Otherwise, it’s a misplaced organizational focus. Thoughts?
