meta-learning Archives - Learnlets
Clark Quinn's learnings about learning
https://blog.learnlets.com/category/meta-learning/

Contextual Leadership
https://blog.learnlets.com/2025/02/contextual-leadership/
Tue, 25 Feb 2025

I’m not a leadership guru by any means. In fact, having read Pfeffer’s Leadership BS, I’m more of a cynic. However, I have been learning a bit from my LDA co-director Matt Richter (as well as from David Grad, CEO of E9 and leadership coach). Matt’s a fan of Keith Grint, the UK historian, who talks about how you need to make decisions differently in different situations. His approach reminds me of another, so here I’m looking at contextual leadership.

Grint talks about three situations:

  • Tame: where things are known, and you just manage
  • Wicked: where things are fluid, and you need to lead a team to address them
  • Critical: where things are urgent, and you need to make a decision

The point is that a leader needs to address each objective in a way appropriate to the type of circumstance they’re facing. Makes sense. We know these different situations arise.

What this reminds me of is Dave Snowden’s Cynefin framework (he’s very clear not to call it a model). Again, I’m not au fait with the nuances, but I’ve been a fan of the big picture. The main thing, to me, is the different situations he posits. Those include:

  • Clear: we have known solutions
  • Complicated: likewise, but requires certain expertise for success
  • Complex: systems which require systematic exploration
  • Chaotic: here you just have to do something

As I understand it, the goal is to move things from chaotic and complex to complicated or clear. (There’s a fifth area in the framework, confusion, but again I’m focusing on the big picture versus nuances.)

So, let’s do a mapping. Here, I posit, tame equates to clear and complicated, wicked is complex, and critical is chaotic. Clearly, there’s a time element in critical that doesn’t necessarily apply in the Cynefin framework. Still, despite some differences, one similarity emerges.

The important thing in both frameworks is that you can’t use the same approach for all problems. You have to recognize the type of situation, and use the appropriate approach. If it’s critical, you need to get expert advice and make a choice. If it’s not, but it’s new or uncertain, you assign (and lead) a team to investigate. This, to me, is really innovation.

The tame/clear, to me, is something that can and likely should be automated. People shouldn’t be doing rote things; that’s for machines. Increasingly, I’m seeing that we’re now getting computers to do much of the ‘complicated’ too, rightly or wrongly. We can do it right, of course, but there are times when human pattern-matching is superior, and we always need oversight.

The interesting areas are the complex and chaotic. Those are areas where I reckon there continue to be roles for people. Perhaps that’s where we should be focusing our efforts. Not everyone needs to be a leader every time, but it’s quite likely that most everyone’s potentially going to be pulled into the decision-making in a wicked or complex situation. How we manage those will be critical, and that’s about managing process to get the best out of the group. That’s something I’ve been looking at for a long time (there’s a reason my company is called Quinnovation ;), particularly the aspects that lead to the most effective outcomes.

So, we can automate the banal, manage the process right in innovating, and be decisive when things are time-critical. Further, we can select and/or develop people to be able to do this. This is what leadership should be, as well as, of course, creating the culture that the group will exist in. Getting the decision-making bit right, though, builds some of the trust that is necessary to accomplish that last bit. Those are my musings, what are yours?

Our (post) cognitive nature?
https://blog.learnlets.com/2025/02/our-post-cognitive-nature/
Tue, 18 Feb 2025

A regular commenter (by email) has taken me to task about my recent post on cognitive science. Which is fair; I’m open to criticism, and I can always learn more! Yet, I feel that the complaint isn’t actually fair. So I raise the debate here about our (post) cognitive nature. I welcome feedback!

So, the gist of the discussion is whether I’m positing a reductionist and mechanistic account of cognition. I argue, basically, that we are ‘meat’. That is, that our cognition is grounded in our physiology, and that there’s nothing ephemeral about our cognition. There is no ineffable element to our existence. To be clear, my correspondent isn’t claiming a metaphysical element either, it’s more nuanced than that.

What I am missing, supposedly, is the situated nature of our cognition. We are very much a product of our action, is the claim. Which I don’t dispute, except that I will maintain there has to be some impact on our cognitive architecture. Channeling Paul Kirschner, learning is a change in long-term memory, which implies the existence of the latter. For instance, I argued strongly against a view that all that we store from events is the emotional outcome. If that were the case, we’d have nothing with which to recreate the experience, yet we can recount at least some of the specifics. More emotional content means more recall, typically.

The accusation is that I’m being too computational, in that even if I go sub-symbolic, I’m still leveraging a computational model of the world. Whereas I believe that our thinking isn’t formally logical (as I’ve stated, repeatedly). Instead, we build inaccurate and incomplete models of the world (having shifted from formal mental models to a more predictive-coding view of the world). Further, those models are instantiated in consciousness in conjunction with the current context, which means they’re not the same each time.

Which is where I get pilloried. Since we haven’t (yet) explained consciousness, there must be something more than the physical elements. At least, that’s the argument as I understand it, and it’s not clear I do. Yet, to me, this sort of attitude seems to suggest that consciousness is beyond comprehension, and maybe even beyond matter. Which I can’t countenance.

So, that’s where the discussion is currently. Am I still a cognitivist, or am I a post-cognitivist? I’m oversimplifying, because it’s been the subject of a number of exchanges, without resolution as yet. This may trigger more discussion ;). No worries, discussion and even debate are how we learn!

The garden path
https://blog.learnlets.com/2025/02/the-garden-path/
Tue, 04 Feb 2025

Twice recently, I’ve seen glorious stories of how things could be. And, to be fair, I’ve been guilty myself; I have pursued and purveyed rosy stories. Yet, as I recognize more of the world’s challenges – randomness, illogic, bias, money, and more – I begin to question myself. What is it about the garden path?

The usual story is something along the lines of ‘first this happens, and it leads to this, …, and then this wonderful thing happens.’ The transitions sound plausible; they could happen! The causal story continues from good outcome to next good outcome, until we get the inevitable results. And, if we’re not careful, we might miss the problem.

There’s also the chance that the transitions won’t happen. Brian Klaas’ Fluke is one story that illustrates the role chance plays. Randomly, things don’t go as planned. Julie Dirksen’s Talk to the Elephant talks about the ways our systems and people themselves go awry. There are many things that stand in the way of things working out the way you expect or even intend. As has been said, never predict anything, particularly the future. I once heard an analysis that says the trends you observe do tend to continue, but something unexpected always flips them from where you thought things would go.

Another issue is the underlying assumptions. Often, they’re more unlikely than they seem. Will Y happen because X happened (e.g. will this person get the job offer because they rode up in the elevator with the hiring manager)? Do you even accept the premise of the assumption? Just because someone tells you that the sky is green, are you going to believe them when your own experience may differ?

There are benign situations, and then some that are not. When I have told such stories, I (sadly) believed them. I have been an idealist (and in many ways still am), so I inferred a world where things worked as planned. (I have learned better, for instance watching a promising enterprise be undermined by ego and greed.) Then there’re the more insidious ones, when someone’s telling a story to convince you to do something that is less likely than is portrayed. In either case, whether innocently naive or venally misleading, such stories prevail upon the gullible. And, of course, I’ve been a victim on the receiving end as well.

What’s my point (he asks himself)? I guess that it’s to be wary of such stories. Don’t tread along the rosy trail portrayed without some assessment of the probabilities. Ask yourself whether the final outcome is as plausible as the starting point would suggest. There’s lots of room for distraction as you tread the garden path. Beware of claims that all will follow the same path!

What and why cognitive science?
https://blog.learnlets.com/2025/01/what-and-why-cognitive-science/
Tue, 28 Jan 2025

I was on LinkedIn, and noted this list of influences in a profile: “complex systems, cybernetics, anthropology, sociology, neuroscience, (evolutionary) biology, information technology and human performance.” And, to me, that’s a redundancy. Why?

A while ago, I said “Departments of cognitive science tend to include psychologists, linguists, sociologists, anthropologists, philosophers, and, yes, neuroscientists.” I missed artificial intelligence and computer science more generally. Really, it’s about everything that has to do with human thought, alone or in aggregate. In a ‘post-cognitive’ era, we also recognize that thinking is not just in the head, but external. And it’s not just formal reasoning, or lack thereof; it’s also personality (affect) and motivation (conation).

Cognitive science emerged as a way to bring different folks together who were thinking about thinking. Thus, that list above is, to me, all about cognitive science! And I get why folks might want to claim that they’re being integrative, but I’m saying “been there, done that”. Not me personally, to be clear, but rather that there’s a field doing precisely that. (Though I have pursued investigations across all of the above in my febrile pursuit of all things about applied cognitive science.)

Why should we care? Because we need to understand what’s been empirically shown about our thinking. If we want to develop solutions – individual, organizational, and societal –  to the pressing problems we face, we ought to do so in ways that are most aligned with how our brains work. To do otherwise is to invite inefficiencies, biases, and other maladaptive practices.

Part of being evidence-informed, in my mind, is doing things in ways that align with us. And there is lots of room for improvement. Which is why I love learning & development: these are the people who’ve got the most background, and opportunity, to work on these fronts. Yes, we need to liaise with user experience, and organizational development, and more, but we are (or should be) the ones who know most about learning, which in many ways is the key to thinking (about thinking).

So, I’ve argued before that maybe we need a Chief Cognitive Officer (or equivalent). That’s not Human Resources, by the way (which seems to be a misnomer along the lines of Human Capital). Instead, it’s aligning work to be most effective across all the org elements. Maybe now more than ever before! At least, that’s where my thinking keeps ending up. Yours?

Writing, again
https://blog.learnlets.com/2025/01/writing-again/
Tue, 21 Jan 2025

So, I’m writing, again. Not a book (at least not initially ;), but something. I’m not sure exactly how it’ll manifest, but it’s emerged. Rather than share what I’m writing (too early), I’m reflecting a bit on the process.

As usual, I’m writing in Word. I’d like to use other platforms (Pages? Scrivener? Vellum?), but there are a couple of extenuating circumstances. For one, I’ve been using Word since I wrote my PhD thesis on the Mac II I bought for the purpose. I think that was Word 2.0, circa the late ’80s. In other words, I’ve been using Word a long time! Then, the most important thing besides ‘styles’ (formatting, not learning) is the ability to outline. Word has industrial-strength outlining, and, to use an over-used and over-emphatic phrase, I live and die by outlines.

I outline my plan before I start writing, pretty much always. Not for blog posts like this, but for anything of any real length beyond such a post. Anything with intermediate headings is almost guaranteed to be outlined. I tend to prefer well-structured narratives (at least for non-fiction?). The outline likely will change, of course. When I wrote my very first book, it pretty much followed the structure. Ever since then… My second book had me rearranging the structure as I typed. My most recent book got restructured every time I shared it with my initial readers, until suddenly it gelled.

In this case, and not unlike most cases, I move things around as I go: this should be a section all its own; that is superfluous to need; this other goes better here than where I originally put it. And so on. I do take a pass through to reconcile any gaps or transitions, though I try to remedy those as I go. The goal is to do a coherent treatment of whatever the topic is.

I throw resources in as I go. That is, if I find myself referring to a concept, I put a reminder in a References or Resources section at the end to grab a reference later. I have a separate (ever-growing) file of references for that purpose. I may not always include the reference in the document (currently I’m trying to keep the prose lean), but I want folks to have a resource at least.

I also jump around, a bit. Mostly I proceed from ‘go to whoa’, but occasionally I realize something I want to include, and put a note at the appropriate place. That sometimes ends up being prose, until I realize I need to go back to where I was ;). I hope that it leads to a coherent flow. Of course, as above, I do reread sections, and I try to give a final read before I pass on to whatever next step is coming. Typically, that means sending to someone to see if I’m on track or off the rails.

I also am pondering that I may retrofit with diagrams. Sometimes I’ve put them in as I go. At other times, I go back and fill them in. I do love me a good diagram, for the reasons Larkin & Simon articulated (Connie Malamed is doing a good job on visuals over at LinkedIn this month). Sometimes I edit the ones I have as I recognize improvements, sometimes I create new ones, sometimes I throw existing ones in. It’s when I think they’ll help, but I can think of several I probably should make.

The above holds true for pretty much all writing I do beyond these posts. This is for me, first, after all! Otherwise, I solicit feedback (which I don’t always get; I think folks trust me too much, at least for shorter things). I’m sure others work differently. Still, these are my thoughts on writing, again. I welcome your reflections!

They’re ripping you off
https://blog.learnlets.com/2025/01/theyre-ripping-you-off/
Tue, 07 Jan 2025

Ok, so I am grateful. But there may also be times to rant. (Maybe I’m grateful for getting it off my chest?) I’m seeing a continual rise in how folks are looking to take advantage of me, and you. And I don’t like it. So, here are some of the ways they’re ripping you off!

So, first, it’s the rise in attempts to defraud you. That can be scams, phishing, or more. As I was creating this post, this was a repost on Bluesky:

Robocalls are seeing a massive increase lately. Keep in mind that efforts to stop caller-ID spoofing have largely had no real effect, because callers now use “throw away” numbers that verify correctly and then are abandoned after days or even hours. In fact, if you get an “unknown caller” on your phone, it’s likely NOT a spam call, because spammers can now so easily not bother spoofing or blocking their numbers, they just keep switching to different “legit” numbers that spam blocks usually don’t detect.

Email phishing is on the rise, and much of it now is bypassing SPF and DKIM checks (that Google and other large mailers started requiring for bulk mailings) due to techniques such as DKIM replay and a range of other methods. Fake PayPal invoices are flooding the Net, and they often are passing those checks meant to block them. It’s reported that many of these are coming from Microsoft’s Outlook, with forged PayPal email addresses. Easiest way to detect these is to look at the phone number they want you to call if you have a question — and if it’s not the legit PayPal customer service number you know it’s not really from PayPal. Getting you to call the scammers on the phone is the basis of the entire scheme.

It’s all getting worse, not better. – From Lauren Weinstein, Lauren.vortex.com

Another one is Google Calendar announcements, and recently DocuSign frauds. Plus, of course, the continual fake invoices for McAfee, etc. I don’t know about you, but the earlier scam of pretending to be someone on LinkedIn has returned. I’m seeing a renewal of folks saying that I have an interesting profile, or that I’d be a good match for their company’s new initiative. Without knowing anything about me, of course.

Worse, I’m now seeing at least the former showing up on Bluesky (so I’m keeping Mastodon around; quinnovator on both), and even on Academia.edu! I hear about some attempts to crack down on the factories where they house (and exploit) folks to do this. Which, of course, just drives such activities to smaller and harder-to-find operations. The tools are getting more powerful, making it easier.

The one that really gets me is the increasing use of our data to train language models. I was first alerted when a tool (no longer freely available) allowed me to check one of the AI engines. Sure enough, this blog was a (minuscule) percentage of it. As the license on this blog indicates, I’m ok with my posts being fodder. Er, only if you aren’t making money, share alike, and provide attribution! Which isn’t the case; I haven’t had contact nor seen remuneration.

This is happening to you, too. As they say, if you’re not paying, you’re the product. If you use Generative AI (e.g. ChatGPT), you’re likely having your prompts tracked, and any materials you upload are fair game. Many of the big tools (e.g. Microsoft’s) that connect to the internet are also taking your data. Some may make not taking your data the default, but others don’t. In short, your data is being used. Sure, it may be a fair exchange, but how do you know?

In short, they’re ripping you off. They’re ripping us off! And, we can passively accept it, or fight. I do. I report phishing, I block folks on social media, and I tick every box I can find saying you can’t have my data. Do we need more? I like that the EU has put out a statement on privacy rights. Hopefully, we’ll see more such initiatives. The efforts won’t stop (shareholder returns are at stake, after all), but I think we can and should stand up for our rights. What say you?

Looking forward
https://blog.learnlets.com/2024/12/looking-forward-2/
Tue, 31 Dec 2024

Last week, I expressed my gratitude for folks from this past year. That’s looking back, so it’s time to gaze a touch ahead. With some thoughts on the whole idea! So here’s looking forward to 2025. (Really? 25 years into this new century? Wow!)

First, I’m reminded of a talk I heard once. The speaker, who, if memory serves, had written a book about predicting the future, explained why it was so hard. His point was that, yes, there are trends and trajectories, but he found that there was always that unexpected twist. So you could expect X, but with some unexpected twist. For instance, I don’t think anyone a year ago really expected Generative AI to become such a ‘thing’.

There was also the time that someone went back and looked at some predictions of the coming year, and evaluated them. That didn’t turn out so well, including for me! While I have opinions, they’re just that. They may be grounded in theory and 4+ decades of experience, but they’re still pretty much guesswork, for the reason above.

What I have done, instead, for a number of years now is try to do something different. That is, talk about what I think we should see. (Or to put it another way, what I’d like to see. ;) Which hasn’t changed much, somewhat sadly. I do think we’ve seen a continuing rise of interest in learning science, but it’s been undercut by the emergence of ways to do things cheaper and faster. (A topic I riffed on for the LDA Blog.) When there’s pressure to do work faster, it’s hard to fight for good.

So, doing good design is a continued passion for me. However, in the conversations around the Learning Science conference we ran late this year, something else emerged that I think is worthy of attention. Many folks were looking for ways to do learning science. That is, resolving the practical challenges in implementing the principles. That, I think, is an interesting topic. Moreover, it’s an important one.

I have to be cautious. When I taught interface design, I deliberately pushed for more cognition than programming. My audience was software engineers, so I erred on the side of getting them thinking about thinking. Which, I think, is right. I gave practical assignments and feedback. (I’d do better now.) I think you have to push further, because folks will backslide and you want them as far along as you can get them.

On the other hand, you can’t push folks beyond what they can do. You need to have practical answers to the challenges they’ll face in making the change. In the case of user experience, their pushback was internal. Here, I think it’s more external. Designers want to do good design, generally. It’s the situational pragmatics that are the barrier here.

If I want people to pay more attention to learning science, I have to find a way to make it doable in the real world. While I’m finding more nuances, which interests me, I have to think of others. Someone railed that there are too many industry pundits who complain about the bad practices (mea culpa), instead of cheering folks on, telling them they can do better. And I think we need both, but I think it’s also incumbent on us to talk about what to do, practically.

Fortunately, I have not only principle but experience doing this in the real world. Also, we’ve talked to some folks along the way. And we’ll do more. We need to find that sweet spot (including ‘forgiveness is easier than permission’!) where folks can be doing good while doing well.  So that’s my intention for the year. With, of course, the caveat above! That’s what I’m looking forward to. You?

Gratitude
https://blog.learnlets.com/2024/12/gratitude/
Tue, 24 Dec 2024

While I’ve another post I’m meaning to write, it’s not the time ;). For now, it’s time to express gratitude. Research says actually listing the things you’re grateful for improves your mood! So, time to explore what I have to be grateful for. (And I’m being positive here. ;)

One of the good things happened in the first half of the year. I had the pleasure of continuing my relationship with the folks at Upside Learning. Amit Garg continued to support learning science through his deeply grounded perspective, which led to a number of good things. One was the continual flow of marketing ideas from Isha Sood. There were a plethora of steps around publicizing the benefits of learning science. We did webinars, presentations, videos, and more, causing me to think afresh. Another was working with Vidya Rajagopal to bake learning science into their design practices. She prodded me about the pragmatic constraints and we collaborated on generating new ideas about how to succeed.

Speaking of proselytizing learning science, I was engaged in many activities for the Learning Development Accelerator (LDA). With my co-director Matthew Richter, and the team, we ran a wide variety of activities. While some were members-only, others were publicly available or separate events. For instance, the Learning Science Conference was an opportunity to explore the underlying concepts and research results. We greatly benefitted from the excellent presenters, from whom we learned much (as did I in particular!). Stay tuned for the follow-up!

I’m also grateful for those who participated in a couple of the programs the LDA ran. Both the Think Like A… and the You Oughta Know: Practitioner series drew upon folks who enlarged our perspectives on related fields and doing the work. Likewise with the debates. Of course, the LDA members are also always inquiring about the nuances. The lists are long, but you know who you are; heartfelt thanks!

I also had the chance to continue my involvement with Elevator 9. I learned a lot as the focus moved from a ‘no code’ developed solution to developing a serious platform. A benefit was seeing David Grad’s passion and smart focus coupled with Page Chen’s learning background and practical experience. It was a pleasure to work with both of them, and we plan to be able to tell you more early in the next year!

Of course, Quinnovation had its own work to do, and I had some really great experiences working with folks on their projects. We looked at the contexts and goals, and figured out steps to proceed along the path. I’m grateful, as I always learn a lot working with folks, and getting the chance to meld my background with their situations and expertise to craft viable solutions. Of course, I welcome hearing if I can assist you in the coming year!

I also did lots of interviews via podcasts, which are enlightening. The many smart hosts ask interesting questions, prompting me to think (and, regularly, rethink). These were coupled with articles for Upside, LDA, and more. I found out that one article back in January for Training Journal was their most read article of that month! Like my blogging here, these are further opportunities that cause me to reprocess my previous thinking.

I’m sure there’re more folks I’m forgetting. Mea culpa, and thanks!

In all, I’ve got a lot to be grateful for. As the research says, I find it boosting my mood as I write. So thanks to the folks above who helped me continue to explore the opportunities and solutions. I’ve much to have gratitude for, and that is the best thing of all. May you, too, have much to be grateful for, and may the holidays and the new year bring you more.

Uniqueness
https://blog.learnlets.com/2024/12/uniqueness/
Tue, 17 Dec 2024

In a conversation yesterday, we were talking about what works in presenting yourself (in this case, for a job). I mentioned that in the US you perhaps have to overpromise, whereas my experience in Oz (coloured, as it is, by its Brit origins ;) was that you underpromise. The latter worked well for me, because I believe I tend to err on the side of quiet; I don’t like boasts. I was suggesting, in this case, that you need to present what makes you unique for a particular situation. Thinking further, I think I do value uniqueness. What do I mean?

So, to get a (proper?) Ph.D., you are expected to make a unique contribution to understanding. Consider our knowledge as a giant ball; what a thesis does is push out one tiny bump. The goal is something no one else has done. For instance, for my Ph.D., I broke analogy up into a different set of steps, and measured performance. My specification of steps was unique, but that wasn’t the contribution (in my mind, at least). What I also did was try training to improve those processes (four of the six, for reasons), and it did impact a couple, with good reasons not to have impacted the others. It wasn’t earth-shattering, by any means (I suspect no one cites my thesis!), but it was a contribution. (And, of course, it grounded me in the literature and practices.)

When I think of folks I respect, in many cases it’s because they have made a unique contribution. By the way, I suppose I should be clear: unique isn’t enough; it has to be a positive contribution (which can include ruling out things). It’s like innovation: not just an idea, but a good one! So, for instance, Will Thalheimer’s been a proponent of evidence-informed practices, but his unique contribution is LTEM. So too with Patti Shank and multiple choice questions, Michael Allen with SAM, Harold Jarche with PKM, etc. I’m kind of thinking right now that Julie Dirksen’s new book is what’s really new! I am inclined to think that new syntheses are also valuable.

For instance, my own books on myths and learning science are really syntheses, not new ideas. (Maybe my mobile books too?) Reflecting, I think that the three books I wanted to publish, my first on games, my fourth on L&D strategy, and my most recent on engagement (channeling the core from the first book), are more unique contributions. Though I will self-servingly and possibly wrongly suggest my ways of thinking about contexts, models, and more are innovations. Like Allen’s CCAF (Context – Challenge – Activity – Feedback), perhaps.

Which isn’t to say that syntheses organizing things in new and more comprehensible ways aren’t also a contribution. In addition to (immodestly) my afore-mentioned books in that category, I think of folks like Connie Malamed, Christy Tucker, Matthew Richter, Ruth Clark, Jane Bozarth, etc. These folks do a great job of taking received wisdom and collating and organizing it so as to be comprehensible. And I could be giving too short shrift in some cases.

My stance is that I don’t see enough ‘uniqueness’. Original ideas are few and far between. Which may be expected, but we have to be careful. There are a lot more touted ideas than there are good ones. What really is different? What’s worth paying attention to? It’s not an easy question, and I may be too harsh. There is a role for providing different perspectives on existing things, to increase the likelihood that people hear of them. But those should be new perspectives. I’m not interested in hearing the same ideas from different folks. So, does this make sense, or am I being too harsh?

By the way, I suspect that there are more ideas than we actually hear about. I know people can be hesitant about sharing them for a variety of reasons. If you’ve got an idea, share it with someone! If they get excited, it may well be new and worthwhile. Take a chance, we may all benefit.

Beyond Learning Science?
https://blog.learnlets.com/2024/11/beyond-learning-science/
Tue, 19 Nov 2024

The good news is, the Learning Science Conference has gone well. The content we (the Learning Development Accelerator, aka LDA) hosted from our stellar faculty was a win. We’ve had lively discussions in the forum. And the face to face sessions were great! The conference continues, as the content will be there (including recordings of the live sessions). The open question is: what next? My short answer is going beyond learning science.

So, the conference was about what’s known in learning science. We had topics about the foundations, limitations, media, myths, informal/social, desirable difficulty, applications, and assessment/evaluation. What, however, comes next? Where do you go from a foundation in learning science?

My answer is to figure out what it means! There are lots of practices in L&D that are grounded in learning science, but go from there to application. My initial list looks like this:

  1. Instructional design. Knowing the science is good, but how do you put it into a process?
  2. Modalities. When you’re doing formal learning, you can still do it face to face, virtually, online, or blended. What are the tradeoffs, and when does each make sense?
  3. Performance consulting. We know there are things where formal learning doesn’t make sense. We want to identify gaps and root causes to determine the right intervention.
  4. Performance support. If you determine job aids are the answer, how do you design, develop, and evaluate them? How do they interact with formal learning?
  5. Innovation. This could (and should; editorial soapbox) be an area for L&D to contribute. What’s involved?
  6. Diversity. While this is tied to innovation, it’s a worthy topic on its own. And I don’t just mean compliance.
  7. Technology. There are lots of technologies, what are their learning affordances? XR, AI, the list goes on.
  8. Ecosystem. How do you put the approaches together into a coherent solution for performance? If you don’t have an ‘all singing, all dancing’ solution, what’s the alternative?
  9. Strategy. There’s a pretty clear vision of where you want to be. Then, there’s where you are now. How do you get from here to there?

I’m not saying this is the curriculum for a follow-up; I’m saying these are my first thoughts. This is what I think follows beyond learning science. There are obviously other ways we could and should go. These are my ideas, and I don’t assume they’re right. What do you think should be the follow-on? (Hint: this is likely what next year’s conference will be about. ;)
