Learnlets

Clark Quinn’s Learnings about Learning

In praise of reminders

17 June 2025 by Clark

I have a statement that I actively recite to people: if I promise to do something and it doesn't get into a device, we never had the conversation. I'm not trying to be coy or difficult; there are sound reasons for this. It's part of distributed cognition, and augmenting ourselves. It's also part of a bigger picture, but here I am in praise of reminders.

Scheduling by the clock is relatively new from a historical perspective. We used to use the sun, and that was enough. As we engaged in more abstract and group activities, we needed better coordination, so we invented clocks and standardized time to accomplish this. Train schedules, for instance.

It’s an artifact of our creation, thus biologically secondary. We have to teach kids to tell time! Yet, we’re now beholden to it (even if we muck about with it, e.g. changing time twice a year, in conflict with research on the best outcomes for us). We created an external system to help us work better. However, it’s not well-aligned with our cognitive architecture, as we don’t naturally have instincts to recognize time.

We work better with external reminders. So, we have bells ringing to signal that it's time to go to another course, or to attend worship. These are similar to, but different from, other auditory signals (which don't depend on our spatial attention) such as horns, buzzers, sirens, and the like. They can draw our attention to something we should attend to. Which is a good thing!

I, for one, became a big fan of the Palm Pilot (I could only justify a III when I left academia, for complicated reasons). Having a personal device on which I could add and edit reminders tied to a date/time calendar fundamentally altered my effectiveness. Before, I could miss things if I disappeared into a creative streak on a presentation, paper, diagram, etc. With this, I could be interrupted and alerted that I had an appointment: a call, a meeting, etc. I automatically attach alerts to all my calendar entries.

Granted, I pushed myself to see just how effective I could make myself. Thus, I actively cultivated my address book, notes, and reminders as well as my calendar (and still do). But this is one area that's really continued to support my ability to meet commitments, something I immodestly pride myself on delivering. I hate having to apologize for missing a commitment! (I'll add multiple reminders to critical things!) Which doesn't mean you shouldn't actively avoid all the unnecessary events people would like to add to your calendar; that's just self-preservation!

Again, reminders are just one aspect of augmenting ourselves. There are many tools we can use – creating representations, externalizing knowledge, … – but this one in particular has been a big key to improving my ability to deliver. So I am in praise of reminders, as one of the tools we can, and should, use. What helps you?

(And now I’ll tick the box on my weekly reminder to write a blog post!)

Software engineer vs programmer

20 May 2025 by Clark

If you go online, you'll find many articles that talk about the difference in roles between software engineers and programmers. In short, the former have formal training and background, and, at least in this day and age, oversee coding from a more holistic perspective. Programmers, on the other hand, do just that: write code. Now, I served in a school of computer science for a wonderful period of my life. Granted, my role was teaching interface design (and researching ed tech). Still, I had exposure to both sides. My distinction between software engineer vs programmer, however, is much more visceral.

Early in my consulting career, I was asked to partner with a company to develop learning. The topic was project management for non-project managers. They chose me because of my game design experience as well as my learning science background. The company that contracted me was largely focused on visual design; for instance, the owner also taught classes on it. Moreover, their most recent project was an illustrated book on the fauna of a fictitious world in the Star Wars universe. He also had a team of folks back in India. Our solution was a linear scenario, quite visual, set in outer space, both because of their team's experience and because the audience was engineers.

After the success of the project, the client came back and asked for a game to accompany the learning experience. Hey, no problem, it’s not like we’ve already addressed the learning objectives or anything! Still, I like games! This was going to be fun. So I dug in, cobbling together a game design. We used the same characters from the previous experience, but now focused on making project management decisions and dealing with different personality types (the subtext was, don’t be a difficult person to work with).

The core mechanic was:

  • choose the next project
  • assess any problem
  • find the responsible person
  • ask (appropriately) for the fix

Of course, the various parameters (rates of problems, stage of development and therefore the responsible person, stage and scope of the project) were all going to need tuning. In addition, we wanted the first n problems to involve good people, so learners could master the details before beginning to deal with more difficult personality types.

So, from my development docs, they hired a Flash programmer to build the game. And…when we tried to iterate, we got more bugs instead of improvement. This happened twice. I realized the coders were hard-wiring the parameters throughout the code, which meant that to tune a single value, they had to search the whole codebase and change it everywhere. Now, for those who know, this is incredibly bad programming. It was workable for a small Flash animation, but it didn't scale to a full game program.

We had a discussion, and they finally procured someone who actually understood the use of constants, someone with more than just a programming background. Suddenly, tweaks were returning with a short turnaround, and we could tune the experience! Thus, we were able to create a game that actually was fun. We never really got to know whether it was effective, because they hadn't set any metrics for impact, but they were happy and touted the game in several venues. We took that as a positive outcome ;).
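The underlying coding issue is easy to sketch. A hedged illustration (in Python rather than the original Flash/ActionScript; the function names, values, and threshold are invented for illustration, not the actual game's code):

```python
# Bad: "magic numbers" hard-wired at each point of use. Every tuning
# pass means hunting for every copy of each value in the code.
def spawn_problem_bad(stage):
    if stage > 3:        # why 3? the same threshold is repeated elsewhere...
        return 0.25      # problem rate, also duplicated in other functions
    return 0.10

# Better: named constants defined once, so tuning the experience
# means changing a single place.
DIFFICULT_STAGE_THRESHOLD = 3
LATE_PROBLEM_RATE = 0.25
EARLY_PROBLEM_RATE = 0.10

def spawn_problem(stage):
    if stage > DIFFICULT_STAGE_THRESHOLD:
        return LATE_PROBLEM_RATE
    return EARLY_PROBLEM_RATE
```

Both versions behave identically today; the difference only shows up when you iterate, which is exactly when we got bugs instead of improvements.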

The take-home lesson, of course, is if you need tuning (and, for anything of sufficient size and user-facing, you will), you need someone who understands proper code structures. I’ll always ask for someone who understands software engineering, not just a programmer. There’s a reason that a) they’re known as ‘cowboy coders’, and b) there’s software process! That’s my personal definition of a software engineer vs programmer, and I realize it’s out of date in this era of increasingly complex software. Still, the value of structure and process isn’t restricted to software, and is ever more important, eh?

The illusion of (my) competency

13 May 2025 by Clark

I have a cobbled-together tech environment. My monitor and printer are years old, and my laptop is relatively new. To make them fit together, particularly with the limited ports of a laptop, I've a hub. Then, a few weeks ago, things went wonky, in weird ways. I still don't know it all, but there are some lessons here, not least the illusion of (my) competency.

So, the first symptom was my microphone suddenly not working. I'd gotten a reasonable one, but it died (and apparently wasn't good enough anyway), so my co-director in the LDA, Matt Richter, got me a nice one. It was this fancy, heavy one that stopped working. It's USB, and has LEDs, which are blue normally but green when in use. Except when I wanted to use it, it stopped working and blinked between blue and green. Hold that thought.

Then, the camera in my monitor stopped working. I have one in my laptop that’s better/newer, but it’s a bit away (I prefer to work on the big monitor, of course). I use the laptop camera when I want to look impressive and show my book case beside me ;). But I prefer the monitor camera for work meetings and the like. Yet, I couldn’t. Now hold that thought, too!

The icing on the proverbial cake came when my backup drive stopped backing up. It would start, and say it was finding problems, but then would time out with an error. I tried to run diagnostics, which said it was corrupt. I also got SMART results saying it was fine. All very confusing.

Finally, I decided I had to contact Apple (fortunately Matt’s also insisted I keep up my service plan). After a lot of shenanigans that I won’t bore you with, the question came: is my hub powered? As it was, the answer’s no. If memory serves (dodgy proposition), I got it for free when Amazon was experimenting with providing things to people who wrote reviews. Which would be a weird issue, as I’d been using it for several years this way.

Still, I went and ordered a powered hub. Then I ordered another, realizing I wanted the faster version. BTW, I had to pick up both, because the first was fulfilled before I could cancel it, so the solution was to pick up both and return the first. Which took an extra 5 minutes, apparently to defend against fraud. Why we can't have nice things… (Lesson: allow people to cancel orders via chat or online; don't make them come in to pick up and immediately return.)

While I was dealing with the hub, I decided to call the maker of the microphone (remember?). I’d called them before and left a message, and also emailed, to no reply. (Lesson: return your customer queries.) I was leaving a voicemail when I got a call from them, so I switched. It turns out that the blinking means the mic’s muted. There’s a knob on the back that’s ‘gain’, BUT it turns out it’s also a button, and if you push it, it mutes the mic. Now, I’d gone to the site and the mic instructions, and it doesn’t talk about that at all! I mentioned it to the person on the phone, and they said that they’d tried to get it changed, to no avail. (Lesson: put all the information about operation in the <expletive deleted> documents!) Problem fixed.

Then, when I installed the powered hub, the video camera in the monitor started working again. It’s not clear why it suddenly stopped, but…it was fixed. The lesson here, and for me (tho’ feel free to take it to heart), is probably never to trust an unpowered hub to be sufficient. However, and in my defense, it had been working for months if not years. (And, if you could’ve told me that, I don’t want to hear it.)

Finally, we get back to the drive. The powered hub didn't fix it. I spent an unreasonable amount of time trying to run diagnostics, both Apple's and the manufacturer's. Finally, I delisted it from the backup software, erased it, then reintroduced it. And, voilà, it's working! Not sure what the problem was. (Lesson: provide more feedback to the user on what's going on.)

As a weird aside, I asked for a support call, but when it came, it shut off my wifi calling. I live in a slight depression and have a bad signal, so I use wifi. But by turning it off, they ensured that the call would get dropped. I can't imagine why they would do such a thing, but I've had real trouble getting calls from them, and this time I just happened to see that the wifi calling had stopped at the time of the call. I had to use chat. Very puzzling, and unresolved.

All told, this took way too much time, and while I learned one lesson, too much of it was the result of bad design, not my incompetence. As Don Norman said in The Design of Everyday Things, if it's difficult to use, blame the designer, not the user. We know how to do better; we just don't do it frequently enough. (I also recommend Kathy Sierra's Badass as a guiding light.) I'm willing to assume responsibility for my culpability, and admit to the illusion of (my) competency. But I also recognize that I'm not stupid, and better design would've limited my frustration and time-wasting.

Locus of intelligence

6 May 2025 by Clark

I’m not a curmudgeon, or even anti-AI (artificial intelligence). To the contrary! Yet, I find myself in a bit of a rebellion in this ‘generative‘ AI era. And I’m wondering why. The hype, of course, bugs me. But it occurs to me that a core problem may reside in where we put the locus of intelligence. Let me try to make it clear.

In the early days of the computer (even before my time!), the commands were to load memory into registers, conduct boolean operations on them, and display the results. The commands to do so were at the machine level. We went a level above that by translating those machine instructions into somewhat more comprehensible terms: assembly language. As we went along, we put more and more of the onus on the machine, because we had more processor cycles, better software, etc. We're largely to the point where we can stipulate what we want, and the machine will code it!

There are limits. When Apple released the Newton, they tried to put the onus on the machine to read human writing. In short, it didn’t work. Palm’s Pilots succeeded because Jeff Hawkins went for Graffiti as the language, which shared the responsibility between person and processor. Nowadays we can do speech and text recognition, but there are still limitations. Yes, we have made advances in technology, but some of it’s done by distributing to non-local machines, and there are still instances where it fails.

I think of this when I think of prompt engineering. We’ve trained LLMs with vast quantities of information. But, to get it out, you have to ask in the right way! Which seems like a case of having us adapt to the system instead of vice versa. You have to give them heaps more context than a person would need, and they still can hallucinate.

I’m reminded of a fictional exchange I recently read (of course I can’t find it now), where the AI user is being advised to know the domain before asking the AI. When the user queries why they would need the AI if they know the domain, they’re told they’re training the AI!

As people investigate AI usage, one of the results is that your initial intelligence indicates how much use you’ll get out of this version of AI. If you’re already a critical thinker, it’s a good augment. If you’re not, it doesn’t help (and may hinder).

Sure, I have problems with the business models (much not being accounted for: environmental cost, IP licensing, security, VC boosting). But I’m more worried about people depending too much on these systems without truly understanding what the limitations are. The responsible folks I know advocating for AI always suggest having a person in the loop. Which is problematic if you’re giving such systems agency; it’ll be too late if they do something wrong!

I think experimenting is fine. I think it’s also still too early to place a bet on a long-term relationship with any provider. I’m seeing more and more AI tools, e.g. content recommenders, simulation avatars, and the like. Like with the LMS, when anyone who could program a database would build one, I’m seeing everyone wanting to get in on the goldrush. I fear that many will end up losing their shirts. Which is, I suppose, the way of the world.

I continue to be a big fan of augmenting ourselves with technology. I still think we need to consider AI a tool, not a partner. It's nowhere near being our intellectual equal. It may know more, but it still has limitations overall. I want to develop, and celebrate, our intelligence. I laud our partnership with technologies that augment what we do well with what we don't. It's why mobile became so big, why AI has already been beneficial, and why generative AI will find its place. It's just that we can't allow the hype to blind us to the real locus of intelligence: us.

Applied learning science

18 March 2025 by Clark

One of my favorite things to do is to help people apply the cognitive and learning sciences (under realistic constraints). That can be to their practices, processes, or products, via consulting, workshops, writing, and more. One thing I've done over the past few years is to do this for a particular entity. I was found via a workshop, and ended up coming on as an advisor. They're now about ready to go live, and it's time for me to tell you what they're doing, why, and how. So here's an application of applied learning science.

It starts with a problem, as many good solutions do. The issue is that, in L&D, too often we're delivering live sessions to address a particular situation. Whether someone's said "we need a course on this", or there's been a deep analysis, at some point people have been pulled together. It could be a day, several days in a row, or even sessions spaced out every other week, every month, what have you. And we know that, by and large, this isn't going to lead to change!

Research on learning tells us, quite strongly, that to achieve a persistent new ability to ‘do’, we need to strengthen the learning over time. New information gets forgotten after only a day or two, according to the forgetting curve! So, we need to reactivate the learning. That can be reconceptualization, recontextualization, or reapplication. It can also be reflection, and even planning, and evaluation.
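The effect of reactivation is easy to see in the standard idealization of the forgetting curve, R = e^(−t/S), where t is elapsed time and S ("stability") grows with each successful review. A minimal sketch; the doubling of stability per review is an illustrative assumption, not a research-calibrated value:

```python
import math

def retention(days_elapsed, stability):
    """Idealized forgetting curve: R = e^(-t/S).

    'stability' (S) grows with each successful reactivation, so the
    same delay costs less retention after each review."""
    return math.exp(-days_elapsed / stability)

# Without reactivation, most of the gain is gone within a day or two.
fresh = retention(2, stability=1.0)   # ~0.14 of the material after two days

# Each reactivation (here, roughly doubling stability) flattens the curve.
for review, s in enumerate([1.0, 2.0, 4.0, 8.0], start=1):
    print(f"after review {review}: retention at 2 days = {retention(2, s):.2f}")
```

The exact numbers don't matter; the shape does: without spaced reactivation, new information decays fast, and each reactivation slows the decay.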

However, it's been tough to do this reactivation. It typically requires finagling, and faces objections, not just from the learners but also from the stakeholders! Such interventions need to be small but effective. That's what this solution does. Other approaches have been tried, and some other solutions do exist, but this one has a couple of advantages. For one, a clear focus: it's not doing other things, just reactivating learning.

Ok, one other thing: it's also collecting data. Too often, there's no way to know if the learning's effective. Even if there's intent, it's hard to get approval. So, this solution not only reactivates learning as mentioned, it tracks the responses. In practical ways.

What's been my role? That's the other thing; we're applying this in ways that reflect what learning science tells us. Ok, we have to make some inferences, which we're testing, but we're starting from good principles. So, I'm advising on the spacing of the learning and the content of the reactivation. We call these prompts; they ask learners to respond. These prompts then gather into small chunks called LIFTs (Learning Interventions Fueling Transformation). (Everyone's gotta have an acronym, after all, and this plays along with the company name, Elevator 9 ;). The sequence of LIFTs makes a learning journey.

What's important is how many we need, and how frequently we deliver them. That depends on several factors, so we're asking about those too: frequency of application, complexity, importance, and prior experience. Hopefully, in clear and useful ways. They're actively looking for companies that are keen to help us refine this, too (in return for the usual considerations ;).
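To make the factor idea concrete, here's a hypothetical sketch of how such a scheduler might space reactivations. The weights and the expanding-interval rule are my invention for illustration, not Elevator 9's actual model:

```python
def reactivation_intervals(n_lifts, base_days=2.0,
                           complexity=1.0, importance=1.0,
                           prior_experience=0.0):
    """Return days-after-the-live-event for each reactivation prompt.

    More complex or important material gets tighter early spacing;
    more prior experience lets intervals stretch sooner."""
    intervals = []
    gap = base_days / (complexity * importance)
    gap *= 1.0 + prior_experience   # experienced learners start wider
    elapsed = 0.0
    for _ in range(n_lifts):
        elapsed += gap
        intervals.append(round(elapsed, 1))
        gap *= 2                    # expanding schedule
    return intervals
```

For example, `reactivation_intervals(4)` yields an expanding schedule of [2.0, 6.0, 14.0, 30.0] days, while raising `complexity` tightens the early gaps.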

The end result is a product that easily supplements your live events. Your learners get reactivations, and you get data. Importantly, you get better outcomes from your interventions. This capability is possible; the goal is just to make it easy to do. Moreover, it's a solution that not only embodies but shares the underlying learning science, improving you as it does your learners. Win-win! I generally don't tout solutions, but this one has actively put learning science (tempered by reality, to be sure) at the forefront. Applied learning science, and technology, the way it ought to be done. It's been an honor to work with them!

Getting smarter

14 January 2025 by Clark

A number of years ago now, I analyzed the corporate market for a particular approach. Not normally something I do (I'm not a tool/market analyst), but at the time it made sense. My recommendation, at the end of the day, was that the market wasn't ready for the product. I am inclined to think that the answer would be different today. Maybe we are getting smarter?

First, why me? A couple of reasons. For one, I'm independent. You (should) know that you'll get an unbiased (expert) opinion. Second, this product was something quite closely related to things I do know about, that is, learning experiences that are educationally sound. Third, the asker was not only a well-known proponent of quality learning, but knew I was also a fan of the work. So, while I'm not an analyst, few analysts would've really understood the product's value proposition, and I do know the tools market at a useful level. I knew there was nothing else on the market like it, and I also knew the things that were closest (from my authoring of simulation games, as in my first book, and the research reports for the Learning Guild).

The product itself allowed you to author deep learning experiences. That is, where you immerse yourself in authentic tasks, with expert support and feedback. Learning tasks that align with performance tasks are the best practice environments, and in this case were augmented with resources available at the point of need. The main problem was that they required an understanding of deep learning to be able to successfully author. In many cases, the company ended up doing the design despite offering workshops about the underlying principles. Similarly, the industrial-strength branching simulation tools I knew then struggled to survive.

And that was my reason, then, to suggest that the market wasn’t ready. I didn’t think enough corporate trainers, let alone the managers and funding decision-makers, would get the value proposition. There still are many who are ‘accidental’ instructional designers, and more so then. The question, then, is whether such a tool could now succeed. And I’m more positive now.

I think we are seeing greater interest in learning science. The big societies have put it on their roadmaps, and our own little LDA learning science conference was well received. Similarly, we’re seeing more books on learning science (including my own), and more attention to same.  I think more folks are looking for tools that make it easy to do the right thing. Yes, we’re also confronting the AI hype, but I think after the backlash we’ll start thinking again about good, not just cheap and fast. I not only hope, but I think there’s evidence we are getting smarter and more focused on quality. Fingers crossed!

Taking a higher perspective

12 November 2024 by Clark

A number of years ago, I did some consulting for a training organization. The issue was that they didn't seem to have a sustained relationship with their folks. And this has seemed to me like an obvious and solvable problem. However, I may be missing something, so perhaps you can help me in taking a higher perspective.

In this particular instance, they provided training in particular areas. That is, folks would attend their courses and then, at least theoretically, be able to perform in new ways. Yet, they felt that folks didn't necessarily sustain allegiance to them or their offerings.

I asked what else they offered.  From the perspective of a performer, I’m not there to learn! Instead, I’m there to acquire new skills so I can perform better. And, if we take to heart what performance consulting has to say, there’re also resources such as job aids. These lead to success where learning isn’t even necessary. There’s more, too.

We can go further, of course. What about community? If you’re focused on a particular area of performance, would it make sense to be connected to others in the same endeavor? I’ll suggest that it’s likely. As folks develop in ability, they need to start interacting with others.

This organization wasn't alone, of course. I've engaged with a number of organizations over the years that faced the same issue. (Whether they knew it or not.) In fact, I suspect it's more prevalent than we acknowledge. Particularly in this era of information available online, how do you generate a sustained relationship?

It seems to me that if we’re taking a higher perspective, we’ll realize that courses are just a component of a full development ecosystem. Of course, there are lots of issues involved: finding ways to curate or create all the elements, content management, platform choice and integration, and more. Still, this seems to me to be at least part of the answer. So, what am I missing?

 

What L&D resources do we use?

29 October 2024 by Clark

This isn't a rhetorical question. I truly do want to hear your thoughts on the resources needed to successfully execute our L&D responsibilities. Note that by resources in this particular case, I'm not talking about courses (e.g., skill development), nor community. I'm specifically asking about the information resources, such as overviews, and in particular the tools, we use to do our job. So I'm asking: what L&D resources do we need?

A diagram with spaces for strategy, analysis, design, development, implementation, and evaluation, as well as topics of interest; elements include tools, information resources, overviews, and diagrams, with some examples populating the spaces.

I'm not going to ask this cold, of course. I've thought about it a bit myself, creating an initial framework (click on the image to see it larger). Ironically, considering my stance, it's based around ADDIE. That's because I believe the elements are right, just that it's not a good basis for a design process. However, I do think we may need different tools for the stages of analysis, design, development, implementation, and evaluation, even if we don't invoke them in a waterfall process. I also have categories for overarching strategy, and for specific learning topics. These are spaces in which resources can reside.

There are also several different types of resources I've created categories for. One is an overview of the particular spaces I indicate above. Another is information resources, which drill into a particular approach or more. These can be in any format: text or video, typically. Because I'm weird for diagrams, I have them separately, but they'd likely be a type of info resource. Importantly, one is tools. Here I'm thinking of the performance support tools we use: templates, checklists, decision trees, lookup tables. These are the things I'm a bit focused on.

Of course, this is for evidence-based practices. There are plenty of extant frameworks that are convenient, and cited, but not well-grounded. I am looking for the tools you trust and use to accomplish meaningful solutions to real problems. The ones that provide support for excellent execution. In addition to the things listed above, how about processes? Frameworks? Models? What enables you to be successful?

Obviously, but importantly, this isn't done! That is, I put my first best thoughts out there, but I know that there's much more. More will come to me (it already has; I've revised the diagram a couple of times), but I'm hoping more will come from you too. That includes the types of resources and spaces, as well as particular instances.

The goal is to think about the resources we have and use. I welcome you weighing in, via comments on the blog or wherever you see this post, to let me know which ones you find essential to successful execution. I'd really like to know what L&D resources we use. Please take a minute or two and weigh in with your top and essential tools. Thanks!

Top 10 Tools for Learning 2024

27 August 2024 by Clark

Once again, the inimitable Jane Hart is running her Top 10 Tools for Learning survey. The insights are valuable, not least because they point out how much of our learning comes from other than formal learning. So, here are my Top 10 Tools for Learning 2024, in no particular order:

Google Docs. I write, a lot. And, increasingly, I want others to weigh in. I am cranky that I have to choose a tool instead of just going to one place to collaborate, and I struggle with the file structure of Drive, but the feature set within Docs is good enough to support collaborative writing. And collaborative work in general is something I strongly advocate for. Collective intelligence, as Nigel Paine refers to it. For my own writing, however – articles, books – I still use…

Microsoft Word. I’m not a big fan of the parent company (they have glommed on to the current plan for subscriptions, which makes financial sense but is a bad customer experience), and it’s not the writing tool that Scrivener is, but I’m so familiar with it (started using circa 1988) and the outlining is industrial strength (a feature I love and need). It’s the start of most of my writing.

Apple Freeform.  I still use Omnigraffle, but I’m keen to support free tools, and this one’s proprietary format isn’t any worse than any others. I could use Google Draw, I suppose, particularly when collaborating, but somehow folks don’t seem to collaborate as much around diagrams. Hmm…

WordPress. This is the tool I use to write these blog posts. It’s a way for me to organize my thinking. Yes, it’s writing too, but it’s for different types of writing (shorter, more ‘in the moment’ thoughts). While the comments here are fewer, they still do come. Announcements get auto-posted to LinkedIn, Mastodon, & Bluesky.

LinkedIn. These days, this is where I get more comments than I do on my blog. Plus, we use it to write and talk about the Learning Development Accelerator and Elevator 9. I follow some folks, and connect with lots. It remains my primary business networking tool. Feel free to connect with me (if you're in L&D strategy ;).

Mastodon & Bluesky. Yes, this counts as two, but I use them very similarly. Since the demise of Twitter (eX), I’ve looked for an alternative, and regularly stay with these two. They’re (slightly) different; Mastodon seems a bit more thoughtful, Bluesky is more dynamic, but they’re both ways to stay in touch with what people are thinking, largely outside the L&D space. Still haven’t found all my peeps there, but I’m Quinnovator (of course) on both.

News Apps/Sites. I’m also learning via news apps, again staying up with what’s happening in the larger world. So, I get Yahoo News because one email is there. Also, I check some sites regularly: ABC (Australia, not US), BBC, and Apple News (because it’s on my iPad). I’m counting this as one because otherwise it’d overwhelm my count.

Apple Mail. I subscribe to a few newsletters, mostly on learning science, and some blogs. They come in email (directly or via Feedblitz). This is all part of Harold Jarche’s Personal Knowledge Mastery elements of Seek – Sense – Share, and these are updated regularly but are part of the seek. Some of the writing I do is the sharing. Making sense is the above writing, diagramming, and…

Apple Keynote. Creating presentations for webinars, workshops, speaking engagements such as keynotes, and the like is another way I make sense of the world. So, having a good tool to create them is critical, and Keynote works more the way I think than PowerPoint does.

So that’s it, my 10. It may not work for Jane’s categorization (sorry!), but it captures the way I think about it. Please do share yours, too! (There are more ways than writing a post, so find the one that works for you.)

 

Reflecting on adaptive learning technology

11 June 2024 by Clark

My last real job before becoming independent (long story ;) was leading a team developing an adaptive learning platform. The underlying proposition was the basis for a topic I identified as one of my themes. Thinking about it in the current context, I realize that there're some new twists. So here I'm reflecting on adaptive learning technology.

So, my premise for the past couple of decades has been to decouple what learners see from how it’s delivered. That is, have discrete learning ‘objects’, and then pull them together to create the experience. I’ve argued elsewhere that the right granularity was by learning role: concepts are separate from examples, from practice, etc. (I had team members participating in the standards process.) The adaptive platform was going to use these learning objects to customize the sequence for different learners, both within a particular learning objective and across a map of the entire task hierarchy.
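To make the decoupling concrete, here’s a minimal sketch (not the actual platform’s code; all names and the example objective are hypothetical) of content broken into role-tagged objects that get pulled together at delivery time:

```python
from dataclasses import dataclass

# Hypothetical sketch: content stored as discrete objects tagged by
# learning role, assembled into an experience only at delivery time.
ROLES = ("concept", "example", "practice")

@dataclass
class LearningObject:
    objective: str   # which learning objective this object serves
    role: str        # "concept", "example", or "practice"
    content: str

def assemble(objects, objective, role_order=ROLES):
    """Pull together one experience for an objective, ordered by role."""
    pool = [o for o in objects if o.objective == objective]
    return sorted(pool, key=lambda o: role_order.index(o.role))

library = [
    LearningObject("negotiation", "practice", "Role-play scenario"),
    LearningObject("negotiation", "concept", "BATNA explained"),
    LearningObject("negotiation", "example", "Worked case study"),
]
sequence = assemble(library, "negotiation")
print([o.role for o in sequence])  # ['concept', 'example', 'practice']
```

The point of the structure is that the same library could be re-sequenced per learner without re-authoring anything.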

The way the platform was going to operate was typical in intelligent tutoring systems, with a twist. We had a model of the learner, and a model of the pedagogy, but not an explicit model of expertise. Instead, the expertise was intrinsic to the task hierarchy. This was easier to develop, though unlikely to be as effective. Still, it was scalable, and using good learning science behind the programming, it should do a good job.
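As a rough illustration of that twist (again a hypothetical sketch, not the platform’s implementation; task names and the mastery threshold are invented), the expertise lives in the task hierarchy itself, while the learner model just tracks estimated mastery:

```python
# Hypothetical sketch: no explicit expertise model. Expertise is implicit
# in the task hierarchy; the learner model holds mastery estimates (0..1).
task_hierarchy = {
    "negotiate a deal": ["prepare a BATNA", "make an opening offer"],
    "prepare a BATNA": [],
    "make an opening offer": [],
}

learner_model = {task: 0.0 for task in task_hierarchy}

def next_task(hierarchy, learner, threshold=0.8):
    """Pedagogy rule: work on the first subtask not yet mastered."""
    for task, subtasks in hierarchy.items():
        for sub in subtasks or [task]:
            if learner[sub] < threshold:
                return sub
    return None  # everything mastered

learner_model["prepare a BATNA"] = 0.9  # mastered via prior performance
print(next_task(task_hierarchy, learner_model))  # make an opening offer
```

The pedagogy model here is a single rule; the real system had many more, but the shape is the same: hierarchy plus learner state in, next activity out.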

Moreover, we were going to then have machine learning, over time, improve the model. With enough people using the system, we would be able to collect data to refine the parameters of the teaching model. We could possibly be collecting valuable learning science evidence as well.

One of the barriers was developing content to our specific model. Yet I believed then, and still now, that if you developed it to a standard, it should be interoperable. (We’re glossing over lots of other inside arguments, such as whether smart object or smart system, how to add parameters, etc.) That was decades ago, and our approach was blindsided by politics and greed (long sordid story best regaled privately over libations). While subsequent systems have used a similar approach (*cough* Knewton *cough*), there’s not an open market, nor does SCORM or xAPI specifically provide the necessary standard.

Artificial intelligence (AI) has changed over time. While evolutionary, it appears revolutionary in what we’ve seen recently. Is there anything there for our purposes? I want to suggest no. Tom Reamy, author of Deep Text, argues that hybrids of symbolic and sub-symbolic AI (generative AI is an instance of the latter) have potential, and that’s what we were doing. Systems trained on the internet or other corpuses of images and/or text aren’t going to provide the necessary guidance. If you had a sufficient quantity of data about learning experiences with the characteristics of your own system, you could do it, but if it exists it’s proprietary.

For adaptive learning about tasks (not knowledge; a performance focus means we’re talking about ‘do’, not know), the focus has to be on tasks. That isn’t something AI really understands, as it doesn’t have a way to comprehend context. You can tell it, but it doesn’t necessarily know learning science either (ChatGPT can still promote learning styles!). And I don’t think we have enough training data to train a machine learning system to do a good job of adapting learning. I suppose you could use learning science to generate a training set, but why? Why not just embed it in rules, and have the rules generate recommendations (part of our algorithm was a way to handle this)? And, as said, once the system is running you will eventually have enough data to start tuning the rules.
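What “embed it in rules” can look like, as a minimal hypothetical sketch (these particular rules are my illustration of the idea, not the actual algorithm): explicit learning-science rules map recent performance to the next content role, with no trained model required.

```python
# Hypothetical sketch: explicit rules, not a trained model, generate the
# recommendation. The rules themselves encode the learning science.
def recommend(role_done, success):
    """Given the role just completed and whether it went well, pick next."""
    if role_done == "concept":
        return "example"      # show the idea in use before doing
    if role_done == "example":
        return "practice"     # move from watching to doing
    if role_done == "practice":
        # keep practicing on success; re-scaffold with an example on failure
        return "practice" if success else "example"
    return "concept"          # default entry point

print(recommend("practice", success=False))  # example
```

Because the rules are explicit, the parameters (when to re-scaffold, how much practice) are exactly the things usage data could later tune.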

Look, I can see using generative AI to provide text, or images, but not sequencing, at least not without a rich model. Can AI generate adaptive plans? I’m skeptical. It can do it for knowledge, for sure, generating a semantic tree. However, I don’t yet see how it can decide what application of that knowledge means, systematically. Happy to be wrong, but until I’m presented with a mechanism, I’m sticking to explicit learning rules. So, where am I wrong?
