Learnlets

Clark Quinn’s Learnings about Learning

Impactful decisions

2 April 2024 by Clark

I’ve been talking about impact in a variety of ways, and have also posited that decisions are key. I really haven’t put them together, so perhaps it’s time ;). So here’re some thoughts on impactful decisions.

To start with, I’ve suggested that what will make a difference to orgs, going forward (particularly in this age of genAI), is the ability to make better decisions. That is, either ones we’re not making right now, or new ones we need to be able to make. As we move away from doing knowledge tasks (e.g. remembering arbitrary bits of information), our value is going to be in pattern-matching and meaning-making. When faced with a customer’s problem, we’ll need to match it to a solution. We need to look at a market and discern new products and approaches. As new technologies emerge, we’ll have to discern the possibilities. What makes us special is the ability to apply frameworks or models to situations despite the varying contexts. That’s making decisions.

To do this, there are several steps. What are the situations and decisions that need to be made? We should automate rote decisions. So then we’ll be dealing with recognizing situations, determining models, using them to make predictions of consequences, and choosing the right one. We need to figure out what those situations are, identify the barriers to success, and determine what can be in the world and what needs to be in the head. Or, for that matter, what we can solve in another way!

We also need to determine how we’ll know when we’ve succeeded. That is, what’s the observable measure that says we’re doing it right? It frequently can be triggered by a gap in performance. It’s more than “our sales aren’t up to scratch”; we need specifics: time to close? Success rate? Similarly for errors, customer service ratings, etc. It needs to be tangible and concrete. Or it can be a new performance we need. However, we need some way to know what the level is now and what it should be, so we can work to address it.
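
As a concrete (and invented) illustration of pinning the gap down, a few lines of code suffice: compare where each observable measure is now with where it should be. The metrics and numbers below are hypothetical.

```python
# A minimal sketch of making a performance gap tangible; metrics and
# numbers are invented. Note: for some measures (like time to close),
# "better" means lower, so read the gap's sign accordingly.
current = {"time_to_close_days": 14, "close_rate": 0.22}
target = {"time_to_close_days": 9, "close_rate": 0.30}

for metric in current:
    gap = target[metric] - current[metric]
    print(f"{metric}: now {current[metric]}, target {target[metric]}, gap {gap:+.2f}")
```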

I note that it may feel ephemeral: “we need more innovation”, or “we need greater collaboration”, or… Still, these can be broken down. Are people feeling safe? Are they sharing progress? Is constructive feedback being shared? Are they collaborating? There are metrics we can see around these components, and they may not be exhaustive, but they’re indicative.

Then, we need to design to develop those capabilities. We should be designing the complements to our brain, and then developing our learning interventions. Doing it right is important! That means using models (see above) and examples (models in context), and then appropriate practice, with all the nuances: context, challenge, spacing, variation, feedback…  So, first the analysis, then the design. Then…

The final component is evaluation. We first need to see if people are able to make these decisions appropriately, then whether they’re doing so, and whether that’s leading to the needed change. That is, after our intervention we need to measure whether we’re getting things right, whether it’s translating to the workplace, and whether it’s leading to the necessary change.

When we put these together, in alignment, we get measurable improvement. That’s what we want, making impactful decisions. Don’t trust to chance, do it by design!

Engineering solutions

19 March 2024 by Clark

Every once in a while, I wonder what I’m doing (ok, not so infrequently ;). And it’s easy to think it’s about applying what’s known about learning to the design of solutions. However, it’s more. It is about applying scientific results to designing improvements, but it’s broader than learning, and not just individual. Here are some reflections on engineering solutions.

As I’ve probably regaled you with before, I was designing and programming educational computer games, and asking questions like “should we use spacebar and return, or number keys, to navigate through menus?” (This was a long time ago.) I came across an article that argued for ‘cognitive engineering’: applying what we knew about how we think to the design of systems. Intuitively, I understood that this also applied to the design of learning. I ended up studying with the author of the article, getting a grounding in what was, effectively, ‘applied cognitive science’.

Now, my focus on games has been on them as learning solutions, and that includes scenarios and simulation-driven experiences. But, when looking for solutions, I realize that learning isn’t always the answer. Many times, for instance, we are better off with ‘distributed’ cognition. That is, putting the answer in the world instead of in our heads. This is broader than learning, and invokes cognitive science. Also, quite frankly, many problems are just based in bad interface designs! Thus, we can’t stop at learning. We truly are more about performance than learning.

In a sense, we’re engineers: applying learning and cognitive science to the design of solutions (just as chemical engineering applies chemistry). Interestingly, the term ‘learning engineering’ has another definition. This one talks about using the benefits of engineering approaches, such as data and technology-at-scale, to design solutions. For instance, making adaptive systems requires integrating content management, artificial intelligence, learning design, and more.
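
To make that integration concrete, here’s a deliberately tiny sketch; the content items, learner model, and selection rule are all invented for illustration. A real adaptive system is far richer, but the moving parts are the same.

```python
# A toy adaptive-sequencing sketch: tagged content (content management),
# mastery estimates (a learner model), and a selection rule (the 'AI').
# All names and numbers are invented for illustration.
content = {
    "feedback_intro":    {"skill": "feedback", "difficulty": 1},
    "feedback_practice": {"skill": "feedback", "difficulty": 2},
    "spacing_intro":     {"skill": "spacing",  "difficulty": 1},
}

learner = {"feedback": 0.8, "spacing": 0.2}  # estimated mastery per skill

def next_item(learner, content):
    # Pick an item addressing the least-mastered skill, easiest first.
    weakest = min(learner, key=learner.get)
    candidates = [(v["difficulty"], k) for k, v in content.items()
                  if v["skill"] == weakest]
    return min(candidates)[1]

print(next_item(learner, content))  # 'spacing_intro'
```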

Historically, our initial efforts in technology-facilitated learning did take teams. The technology wasn’t advanced enough, and it took learning designers, software engineers, interface designers, and more to generate solutions like PLATO, intelligent tutoring systems, and the like. I’ve argued that Web 1.0 took the integration of the tech, content design, and more, which usually was more than one person could handle. Now, we’ve created powerful tools that allow anyone to create content. Which may be a problem! The teams used to ensure quality. Hopefully, the shift back comes with a focus on process.

We can apply cognitive science to our own design processes. We’ve evolved many tools to keep us from making predictable mistakes: design processes, checklists, etc. I’ll suggest that the tools that make it easy to produce content haven’t been scaffolded with support to do the right thing. (In fact, good design makes it hard to do bad things, but our authoring tools have been almost the opposite!) There’s some hope that the additional complexity will focus us back on quality instead of being a tool for quantity. I’m not completely optimistic in the short term, but eventually we may find that tools that let us focus on knowledge aren’t the answer.

I’m thinking we will start looking at how we can use tools to help us do good design. You know the old engineering mantra: good, fast, and cheap, pick 2. Well, I am always on about ‘good’. How do we make that an ongoing factor? Can we put in constraints so it’s hard to do bad design? Hmm… An interesting premise that I’ve just now resurrected for myself. (One more reason to blog!) What’re your thoughts?

Where are we at?

28 November 2023 by Clark

I was talking with a colleague, and he was opining about where he sees our industry. I, on the other hand, had some different, and some similar, thoughts. I know there are regular reports on L&D trends, with greater or lesser accuracy. However, he was, and I similarly am, looking slightly larger than just “ok, we’re now enthused about generative AI”. Yes, and, what’s that a signal of? What’s the context? Where are we at?

When I’m optimistic, I think I see signs of an awakening awareness. There are more books on learning science, for instance. (That may be more publishers and people looking for exposure, but I remain hopeful.)  I see a higher level of interest in ‘evidence-based’. This is all to the good (if true). That is, we could and should be beginning to look at how and why to use technology to facilitate learning appropriately.

On the cynical side, of course, is other evidence. For example, the interest in generative AI seems to be about ways to reduce costs. That’s not really what we should be looking at. We should be freeing up time to focus on the more important things, instead of just being able to produce more ‘content’ with even less investment. The ‘cargo cult’ enthusiasm about VR, AR, AI, etc. still seems to be about chasing the latest shiny object.

As an aside, I’ll still argue that investing in understanding learning and better design will have a better payoff than any tech without that foundation. No matter what the vendors will tell you!  You can have an impact, though of course you risk having a previous lack of impact exposed…

So, his point was that more and more leaders of L&D are realizing they need that foundation. I’d welcome this (see optimism, above ;). Similarly, I argue that if Pine & Gilmore are right (in The Experience Economy) about what’s next, we should be the ones to drive the Transformation Economy (experiences that transform you). Still, is this a reliable move in the field? I still see folks who come in from other areas of the biz to lead learning, but don’t understand it. I’ll also cite the phenomenon that when folks come into a new role, they need to be seen to be doing something. While getting their minds around learning would be a good step, I fear that too many see it as just management & leadership, not domain knowledge. Which, reliably, doesn’t work. Ahem.

Explaining the present, let alone predicting the future, is challenging. (“Never predict anything, particularly the future!”) Yet, it would help to sort out whether there is (finally) the necessary awakening. In general, I’ll remain optimistic, and continue to push for learning science, evidence, and more. That’s my take. What’s yours? Where are we at?

A brief AI overview?

7 November 2023 by Clark

At the recent and always worthwhile DevLearn conference, I was part of the panel on Artificial Intelligence (AI). Now, I’m not an AI practitioner, but I have been an AI groupie for, well, decades. So I’ve seen a lot of the history, and (probably mistakenly) think I have some perspective. So I figured I’d share my thoughts, giving a brief AI overview.

Just as background, I took an AI course as an undergrad, to start. Given the focus on thinking and tech (two passions), it was a natural. I regularly met my friend for lunch after college to chat about what was happening. When I went to grad school, while I was with a different advisor, I was in the same lab as David Rumelhart. That happened to be just at the time he was leading his grad students on the work that precipitated the revolution to neural nets. There was a lot of discussion of different ways to represent thinking. I also got to attend an AI retreat, sponsored by MIT, and met folks like John McCarthy, Ed Feigenbaum, Marvin Minsky, Dan Dennett, and more! Then, as a faculty member in computer science, I had a fair affiliation with the AI group. So, some exposure.

So, first, AI is about using computer technology to model intelligence. Usually human intelligence, as a cognitive science tool, but occasionally just to do smart things by any means possible. Further, I feel reasonably safe to say that there are two major divisions in AI: symbolic and sub-symbolic. The former dominated AI for several decades, and this is where a system does formal reasoning through rules. Such systems do generate productive results (e.g. chatbots, expert systems), but ultimately don’t do a good job of reflecting how people really think. (We’re not formal logical reasoners!)
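
For flavor, here’s a minimal sketch of the symbolic approach: explicit if-then rules, applied by forward chaining. The domain and rules are invented for illustration.

```python
# Symbolic AI in miniature: formal reasoning over explicit rules.
facts = {"fever", "cough"}

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

# Forward-chain: keep firing any rule whose conditions are all satisfied.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # includes 'flu_suspected' and 'recommend_rest' (set order varies)
```

This also illustrates the brittleness: a case the rules don’t anticipate gets no answer at all.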

As a consequence, sub-symbolic approaches emerged that tried new architectures to do smart things. Neural nets ended up showing good results. They find use in a couple of different ways. One is to set them loose on some data and see what they detect. Such systems can detect patterns we don’t, and that’s proven useful (what’s known as unsupervised learning).
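
A minimal sketch of that “set them loose on some data” idea, with invented points; scikit-learn’s k-means is one standard way to let a system find groupings without any labels.

```python
# Unsupervised learning in miniature: cluster unlabeled data points.
from sklearn.cluster import KMeans

points = [[1.0, 1.0], [1.2, 0.9], [8.0, 8.0], [8.1, 7.9], [0.9, 1.1], [7.9, 8.2]]
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(model.labels_)  # e.g. [0 0 1 1 0 1]: two patterns found, no labels given
```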

The other is to give them a ‘training set’ (also known as supervised learning), a body of data about inputs and decisions. You provide the inputs, and give feedback on the decisions until they make them in the same way. Then they generalize to decisions that they haven’t had training on. It’s also the basis of what’s now called generative AI: programs that are trained on a large body of prose or images, and can generate plausible outputs of same. Which is what we’re now seeing with ChatGPT, DALL-E, etc. Which has proven quite exciting.
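
And a matching sketch of supervised learning, again with an invented training set: a small neural net gets inputs and feedback on its decisions, then generalizes to a case it wasn’t trained on.

```python
# Supervised learning in miniature: train on labeled examples, then
# predict an unseen case. Features and labels are invented.
from sklearn.neural_network import MLPClassifier

# Inputs: (hours_of_practice, prior_score); label: 1 = passed, 0 = failed.
X = [[1, 40], [2, 55], [8, 80], [9, 90], [3, 50], [7, 85]]
y = [0, 0, 1, 1, 0, 1]

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)                   # feedback until its decisions match ours
print(net.predict([[6, 75]]))   # generalizes to an example it never saw
```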

There are issues of concern with each. Symbolic systems work well in well-defined realms, but are brittle at the edges. In supervised learning, the legacy data sets frequently have biases, and thus the resulting systems inherit those biases! (For instance, housing loan data have shown bias.) Such systems also don’t understand what they’re saying. So generative AI systems can happily tout learning styles from the corpus of data they’ve ingested, despite scientific evidence to the contrary.

There are issues in intellectual property, when the data sources receive neither acknowledgement nor recompense. (For instance, this blog has been used to train a sold product, yet I haven’t received a scintilla of return.) People may lose jobs if they’re currently doing something that AI can replace. While that’s not inherently bad (that is, don’t have people do boring rote stuff), it needs to be done in a way that doesn’t leave those folks destitute. There should be re-skilling support. There are also climate costs from the massive power requirements of such systems. Finally, such systems are being put to use in bad ways (e.g. fakes). It’s not surprising, but we really should develop the guardrails before these tools reach release.

To be fair, there are some great opportunities out there. Generative AI can produce some ideas you might not have thought of. The only problem is that some of them may be bad. Which brings me to my final point. I’m more a fan of Augmenting Intellect (à la Engelbart) than I am of Artificial Intelligence. Such systems can serve as a great thinking partner! That is, they support thinking, but they also need scrutiny. Note that there can be combinations, such as hybrids of unsupervised and supervised, and symbolic with sub-symbolic.

With the right policies, AI can be such a partner. Without same, however, we open the doors to substantial risks. (And, a few days after first drafting this, the US Gov announced an approach!) I think having a brief AI overview provides a basis for thinking usefully about how to use these systems successfully. We need to be aware to avoid the potential problems. I hope this helps, and welcome your corrections, concerns, and questions.

What does it take to leave?

24 October 2023 by Clark

I did it, I finally left. I’m not happy about it, but it had to happen. (Actually, it happened some weeks ago.) So, what does it take to leave?

I’m talking about Twitter (oh, yeah, ‘X’, as in what’s been done to it), by the way. I’d been on there a fair bit. Having tossed my account, I can’t see when my first tweet was, but it was at least 2009. How do I know? Because that’s when I was recruited to help start #lrnchat, an early tweetchat that was still going as recently as this past summer! I became an enthusiast and active participant.

And, let me be clear, it’s sad. I built friendships there with folks long before I met them in person. And I learned so much, and got so much help. I like to tell the story about when I posted a query about some software, and got a response…from the guy who wrote it! For many years, it was a great resource, both personal and professional!

So, what happened? Make no mistake, it was the takeover by Elon Musk. Twitter went downhill from there, with hiccups but overall steadily. The removal of support, the politics, the stupid approaches to monetization, the bad actors, it all added up. Finally, I couldn’t take it any more. Vote with your feet. (And yes, I’m mindful of Jane Bozarth’s admonition: “worth every cent it cost you”. Yep, it was free, and that was unexpected and perhaps couldn’t be expected to last. However, I tolerated the ads, so there was a biz basis!)

Perhaps it’s like being an ex-smoker, but it riles me to see media still citing X posts in their articles. I want to yell “it’s dead, what you hear are no longer valid opinions”. I get that it’s hard, and lots of folks are still there, but… It had become, and I hear that it continues to be, an increasing swamp of bad information. Not a good source!

So where am I now? There isn’t yet an obvious solution. I’m trying out Mastodon and Bluesky. If you’re there, connect! I find the former to be more intimate. The latter is closer to Twitter, but I’m not yet really seeing my ‘tribe’ there. I am posting these to both (I think). I’m finding LinkedIn to be more of an interaction location lately as well, though it’s also become a bit spammy. #sideeffects? I keep Facebook for personal things, not biz, and I’m not on Instagram. I also won’t go on Threads or TikTok.

So, what does it take to leave? I guess when the focus turns from facilitating enlightening conversation at a reasonable exchange, to monetization and ego. When there’s interference in clean discourse, and opposition to benign facilitation. And, yes, I’m not naive enough to believe in total philanthropy (tho’ it happens), but there are levels that are tolerable and then there’s going to a ridiculous extreme. Wish I had $44B to lose! I know I’m not the only one wishing those who’ve earned riches would focus on libraries and other benevolent activities instead of ego-shots into space, but this is the world we’ve built. Here’s to positive change in alignment with how people really think, work, and learn.

Top 10 tools for Learning 2023

31 August 2023 by Clark

Somehow I missed colleague Jane Hart’s annual survey of top 10 tools for learning ’til just today, yet it’s the last day! I’ve participated in the past, and find it a valuable chance for reflection on my own, as well as seeing the results come out. So here’s my (belated) list of top 10 tools for learning 2023.

I’m using Harold Jarche’s Personal Knowledge Mastery framework for learning here. His categories of seek (search and feed), sense (interpret), and share (closely or broadly) seem like an interesting and relevant way to organize my tools.

Seek

I subscribe to blog posts via email, and I use Feedblitz because it’s also the way people sign up for Learnlets. I finally started paying so they don’t show gross ads (you can now sign up safely; they lie when they say they have ‘brand-safe’ ads), and fortunately my mail client removes images (for safety, unless I ask), so I don’t see them.

I’m also continuing to explore Mastodon (@quinnovator@sfba.social). It has its problems (e.g. hard to find others, smaller overall population), but I do find the conversations to be richer.

I’m similarly experimenting with Discord. It’s a place where I can generally communicate with colleagues.

I’m using Slack as a way to stay in touch, and I regularly learn from it, too. Like the previous two, it’s both seek and share, of course.

Of course, web surfing is still a regular activity. I’ve been using DuckDuckGo as a search engine instead of more famous ones, as I like the privacy policies better.

Sense

I still use Graffle as a diagramming tool (Mac only). Though I’m intrigued to try Apple’s Freeform, in recent cases I’ve been editing old diagrams to update them, and it’s hard to switch.

Apple’s Keynote is also still my ‘go-to’ presentation maker, e.g. for my LDA activities. I occasionally have to use or output to PowerPoint, but for me, Keynote is the more elegant tool.

I also continue to use Microsoft’s Word as a writing tool. I’ve messed with Apple’s Pages, but…it doesn’t transfer over, and some colleagues need Word. Plus, that outlining is still critical.

Share

My blog (e.g. what you’re reading ;) is still my best sharing tool, so WordPress remains a top learning tool.

LinkedIn has risen to replace Twitter (which I now minimize my use of, owing to the regressive policies that continue to emerge). It’s where I not only auto-post these screeds, but respond to others.

As a closing note, I know a lot of people are using generative AI tools as thinking partners. I’ve avoided that for several reasons. For one, it’s clear that they’ve used others’ work to build them, yet there’s no benefit to the folks whose work has been purloined. There are also mistakes. Probably wrongly, but I still trust my brain first. So there’re my top 10 tools for learning 2023.

Don’t use AI unsupervised!

8 August 2023 by Clark

A recent post on LinkedIn tagged me in. In it, the author was decrying a post by our platform host, which mentioned Learning Styles. The post, as with several others, asks experts to weigh in. Which, I’ll suggest, is a broken model. Here’s my take on why I say don’t use AI unsupervised.

As a beginning, learning styles isn’t a thing. We have instruments, but they don’t stand up to psychometric scrutiny. Further, reliable research evaluating whether they have a measurable impact comes up saying ‘no’. So, despite fervent (and misguided) support, folks shouldn’t promote learning styles as a basis to adapt to. Yet that’s exactly what the article was suggesting!

So, as I’ve mentioned previously, you can’t trust the output of an LLM. They’re designed to string together sentences from the most probable thing to say next. Further, they’ve been trained, essentially, on the internet. Which entails all the guff as well as the good stuff. So what can come out of its ‘mouth’ has a problematically high likelihood of being utter bugwash (technical term).
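
To see “most probable next thing” in miniature, here’s a toy bigram model over a two-sentence corpus invented for the purpose. Real LLMs use transformers over vast swathes of the internet, but the core move, continuing with whatever is statistically likely regardless of truth, is the same.

```python
# A toy next-word model: count which word follows which, then continue
# with the most frequent follower. No notion of truth is involved.
from collections import Counter, defaultdict

corpus = "learning styles are popular . learning styles lack evidence .".split()
nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

# After 'styles', 'are' and 'lack' are equally likely; the model just
# picks one. Plausible continuation, zero fact-checking.
print(nexts["styles"].most_common(1))  # [('are', 1)] (ties break by first seen)
```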

In this case, LinkedIn (shamefully) is having AI write articles, and then circulating them for expert feedback. To me that’s wrong for two reasons. Each is bad enough in its own right, but together they’re really inexcusable.

The first reason is that they’ve a problematically high likelihood of saying something that’s utter bugwash! That gets out there, without scrutiny, obviously. Which, to me, doesn’t reflect well on LinkedIn for being willing to publicly demonstrate that they don’t review what they provide. Their unwillingness to interfere with obvious scams is bad enough, but this really seems expedient at best.

Worse, they’re asking so-called ‘experts’ to comment on it. I’ve had several requests to comment, and when I review them, they aren’t suitable for comment. Moreover, asking folks to do this on generated content, for free, is really asking for free work. Sure, we comment on each other’s posts. That’s part of community, helping everyone learn. And folks are contributing (mostly) their best thoughts. Willing, also, to get corrections and learn. (Ok, there’s blatant marketing and scams, but what keeps us there is community.) But when the hosting platform generates its own post, in ways that aren’t scrutable, and then invites people to improve it, it’s not community, it’s exploitation.

Simply, you can’t trust the output of LLMs. In general, you shouldn’t trust the output of anything, including other people, without some vetting. Some folks have earned the right to be trusted for what they say, including my own personal list of research translators. Then,  you shouldn’t ask people to comment on unscrutinized work. Even your own, unless it’s the product of legitimate thought! (For instance, I usually reread my posts, but it is hopefully also clear it’s just me thinking out loud.)

So, please don’t use AI unsupervised, or at least until you’ve done testing. For instance, you might put policies and procedures into a system, but then test the answers across a suite of potential questions. You probably can’t anticipate them all, but you can do a representative sample. Similarly, don’t trust content or questions generated by AI. Maybe we’ll solve the problem of veracity and clarity, but we haven’t yet. We can do one or the other, but not both. So, don’t use AI unsupervised!
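
Here’s one hedged sketch of what that testing might look like: run a representative suite of questions through the system and flag answers that miss required facts. `ask_policy_bot` is a hypothetical wrapper around whatever Q&A system you’ve built; the questions and expected answers are invented.

```python
# A minimal pre-release test harness for an AI answer system.
# `ask_policy_bot` is hypothetical; swap in your own system's call.
test_suite = [
    ("How many vacation days do new hires get?", "15"),
    ("Can I expense home internet?", "No"),
]

def evaluate(ask_policy_bot):
    failures = []
    for question, must_contain in test_suite:
        answer = ask_policy_bot(question)
        if must_contain.lower() not in answer.lower():
            failures.append((question, answer))
    return failures

# An empty result is a necessary, not sufficient, bar for release:
# failures = evaluate(my_bot); review each failure by hand.
```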

Web 3.0 and system-generated content

20 June 2023 by Clark

Not quite 15 years ago, I proposed that Web 3.0 would be system-generated content. There was talk about the semantic web, where we started tagging things, even auto-tagging, and then operating on chunks by rules connecting tags, not hard wiring. I think, however, that we’ve reached a new interpretation of Web 3.0 and system-generated content.

Back then, I postulated that Web 1.0 was producer-generated content. That is, the only folks who could put up content had all the skills. So, teams (or the rare individual) who could manage the writing, the technical specification, and the technical implementation. They had to put prose into html and then host it on the web with all the server requirements. These were the ones who controlled what was out there. Pages were static.

Then, CGIs came along, and folks could maintain state. This enabled some companies to make tools that could handle the backend, and so individuals could create. There were forms where you could type in content, and the system could handle posting it to the web (e.g. this blog!). So, most anyone could be a web creator. Social media emerged (with all the associated good and bad). This was Web 2.0, user-generated content.

I saw the next step as system-generated content. Here, I meant small chunks of (human-generated) content linked together on the fly by rules. This is, indeed, what we see in many sites. For instance, when you see recommendations, they’re based upon your actions and statistical inferences from a database of previous actions. Rules pull up content descriptions by tags and present them together.
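
As a sketch of that rule-driven assembly (my reading of it, with invented items and tags), the ‘rules’ can be as simple as ranking content by tag overlap with a user’s inferred interests.

```python
# System-generated content, old style: rules pull human-authored chunks
# together by tags. Items and tags are invented for illustration.
content = [
    {"title": "Intro to feedback", "tags": {"design", "basics"}},
    {"title": "Spacing effects", "tags": {"design", "research"}},
    {"title": "Mobile delivery", "tags": {"mobile"}},
]

def recommend(user_tags, items):
    # Rank items by how many tags they share with the user's interests.
    scored = [(len(item["tags"] & user_tags), item["title"]) for item in items]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

print(recommend({"design"}, content))  # ['Spacing effects', 'Intro to feedback']
```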

There is another interpretation of Web 3.0, which is where systems are disaggregated. So, your content isn’t hosted in one place, but is distributed (cf. Mastodon or blockchain). Here, the content and content creation are not under the control of one provider. This disaggregation undermines unified control; it’s really a political issue with a technical solution.

However, we now see a new form of system-generated content. I’ll be clear, this isn’t what I foresaw (though, post-hoc, it could be inferred). That is, generative AI is taking semantics to a new level. It’s generating content based upon previous content. That’s different than what I meant, but it is an extension. It has positives and negatives, as did the previous approaches.

Ethics, ultimately, plays a role in how these play out. As they say, PowerPoint doesn’t kill people, bad design does. So, too, with these technologies. While I laud exploration, I also champion keeping experimentation in check. That is, nicely sandboxing such experimentation until we understand it and can have appropriate safeguards in place. As it is, we don’t yet understand the copyright implications, for one. I note that this blog was contributing to Google’s C4 training data (according to a tool I can no longer find), for instance. Also, everyone using ChatGPT has to assume that their queries are data.

I think we’re seeing system-generated content in a very new way. It’s exciting in terms of work automation, and scary in terms of the trustworthiness of the output. I’m erring on the side of not using such tools, for now. I’m fortunate that I work in a place of people paying me for my expertise. Thus, I will continue to rely on my own interpretation of what others say, not on an aggregation tool. Of course, people could generate stuff and say it’s from me; that’s Web 3.0 and system-generated content. Do be careful out there!

The Role of a Storyboard?

23 May 2023 by Clark

In a recent conversation, the issue of storyboards came up. It was in the larger context of representations for content development. In particular, for communicating with stakeholders (read: clients ;). The concern was how do you communicate the content versus representing the experience. So, the open question is what is the role of a storyboard?

So, there are (at least) several elements that are important in creating a learning experience. One is, of course, the domain. What are the performance objectives? Moreover, how are you providing content support? Specifically, what models are you using, and what examples? Then, of course, there’s the practice. Ideally, practice aligns with the performance objectives, and the models and examples support the practice.

There’s also the experience flow. How are you hooking learners up front? We are concerned with balancing the quantity of content with the actual practice, keeping engagement throughout.

In both cases, we need to communicate these to the clients. Too often, of course, clients raise concerns about making sure ‘the content’ is covered. In many cases, they really don’t understand that less is better, and have a large amount they’ve heard from subject matter experts (SMEs). Not knowing, of course, that SMEs have access to what they know about the domain, but less about what they do! Thus, they’re looking to make sure the content’s covered.

There will also be concern about the experience. This, likely not ideally, comes after assurance about the content. I personally have experienced situations where stakeholders say ‘ok’ to a storyboard, but then balk at the resulting experience. Some (surprisingly high) proportion of folks can’t infer an experience from a storyboard.  This has been echoed by others.

The question is what is the role of a storyboard. In game design, there is a (dynamic) design document that captures everything as it develops. Is this the right representation to communicate the experience? It  communicates to developers, but is it good for clients? I argue that we want more iterative representations, for instance getting sign-off on what we’ve heard from the analysis and documenting what will be the focus of the design. We also want to separate out the domain from the experience.

Overall, I advocate representing the experience, for instance in (mocked up) screenshots with narration to represent a sample interaction. That can accompany the storyboard, but when folks have to sign-off on an experience, and they can’t get it from the usual representations, you’ll need an augment. I wonder whether we should fight against presenting the content that’ll be covered.

We should show the objectives, models, and examples, but fight against content ‘coverage’. Cathy Moore’s ‘Action Mapping’ does a good job of arguing that the minimum needed to achieve success on appropriate performance tasks is a good goal. I agree, as does learning science. The role of a storyboard is to capture development for developers. It may not be the right communication tool for stakeholders. I welcome your thoughts.

Misconceptions?

28 February 2023 by Clark

Several books ago, I was asked to talk about myths in our industry. I ended up addressing myths, superstitions, and misconceptions. While the myths persist, the misconceptions propagate, aided by marketing hype. They may not be as damaging, but they are also a money-sink, and contribute to our industry’s lack of progress. How do we address them?

The distinctions I make for the 3 categories are, I think, pretty clear. Myths are beliefs that folks will willingly proclaim, but are contrary to research. This includes learning styles, attention span of a goldfish, millennials/generations, and more (references in this PDF, if you care). Superstitions are beliefs that don’t get explicit support, but manifest in the work we do. For example, that new information will lead to behavior change. We may not even be aware of the problems with these! The last category is misconceptions. They’re nuanced, and there are times when they make sense, and times they don’t.

The problem with the latter category is that folks will eagerly adopt, or avoid, these topics without understanding the nuances. They may miss opportunities to leverage the benefits, or perhaps more worrying, they’ll spend on an incompletely-understood premise. In the book, I covered 16 of them:

70:20:10
Microlearning
Problem-Based Learning
7 – 38 – 55
Kirkpatrick
NeuroX/BrainX
Social Learning
UnLearning
Brainstorming
Gamification
Meta-Learning
Humor in Learning
mLearning
The Experience API
Bloom’s Taxonomy
Learning Management Systems

On reflection, I might move ‘unlearning’ to myths, but I’d certainly add to this list. Concepts like immersive learning, workflow learning, and Learning Experience Platforms (LXPs) are some that are touted without clarity. As a consequence, people can be spending money without necessarily achieving any real outputs. To be clear, there is real value in these concepts, just not in all conceptions thereof. The labels themselves can be misleading!

In several of my roles, I’m working to address these, but the open question is “how?” How can we illuminate the necessary understanding in ways that penetrate the hype? I truly do not know. I’ve written here and spoken and written elsewhere on previous concepts, to little impact (microlearning continues to be touted without clarity, for instance). At this point, I’m open to suggestions. Perhaps, like with myths, it’s just persistent messaging and ongoing education. However, not being known for my patience (a flaw in my character ;), I’d welcome any other ideas!
