Learnlets

Clark Quinn’s Learnings about Learning

Beyond LLMs

2 December 2025 by Clark

So, I was recently at the DevLearn conference, and it was, as always, fun. Though, as you might expect, there was non-stop discussion of AI. Of course, even on the panel I was on (with about 130 other Guild Masters; hyperbole, me?) it was termed AI tho’ everyone was talking Large Language Models (LLMs). In preparation, I started thinking about LLMs and their architecture. What I have realized (and argued) is that people are misusing LLMs. What became clear to me, though, is why. And I realized there’s another, and probably better, approach. So let’s talk going beyond LLMs.

As background, you tune LLMs (and the architecture, whether applied to video, audio, or language) to a particular task. Using text/language as an example, their goal is to create good-sounding language. And they’ve become very good at it. As has been said, they create what sound like good answers. (They’re not, as hype has it, revolutionary, just evolutionary, but they appear to be new.)

I made that point on the panel, asking the audience how many thought LLMs made good answers, and there was a reasonable response. Then I asked how many thought they made what sounded like good answers. My point was that they’re not the same. So, they don’t necessarily make good design! (Diane Elkins pointed out that they’re trained on the average, so they create average output. If you’re below average, they’re good; if you’re above average, they’ll do worse than you would.) I ranted that tech-enabled bad design is still bad design!

However, I’ve been a fan of predictive coding, as it poses a plausible model of cognition. Then I heard about active inference. And, in a quick search, found out that together, they’re much closer to actual thinking. In particular, combined, they approach artificial general intelligence (AGI for short, a label wrongly applied to our current capabilities). I admit that I haven’t gone fully into the math, but conceptually, they build a model of the world (as we do). Moreover, they learn, and keep learning. That is, they’re not training on a set of language statements to learn language; they’re building explanations of how the world works.
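To make that concrete (purely as illustration, not the actual math), here’s a toy sketch of the two ideas: a predictive-coding update that shrinks prediction error, and an active-inference-flavored choice of action to minimize expected surprise. Every name and number here is invented for illustration; real models use proper generative models and variational inference.

```python
# Toy illustration only: an agent keeps a belief about a hidden quantity,
# predicts its observations, and nudges the belief by the prediction error
# (the predictive-coding part). The action-selection step gives the
# active-inference flavor: pick the action whose predicted outcome deviates
# least from what the agent currently believes (least "surprise").
import random


class ToyAgent:
    def __init__(self, belief=0.0, learning_rate=0.2):
        self.belief = belief              # current estimate of the hidden state
        self.learning_rate = learning_rate

    def update(self, observation):
        """Predictive-coding step: move the belief toward the observation
        in proportion to the prediction error."""
        prediction_error = observation - self.belief
        self.belief += self.learning_rate * prediction_error
        return prediction_error

    def choose_action(self, candidate_actions, predicted_outcome):
        """Active-inference-flavored step: choose the action expected to
        produce the least surprising outcome."""
        return min(candidate_actions,
                   key=lambda a: abs(predicted_outcome(a) - self.belief))


if __name__ == "__main__":
    hidden_state = 5.0                    # the world the agent never sees directly
    agent = ToyAgent()
    for step in range(10):
        observation = hidden_state + random.gauss(0, 0.5)   # noisy sensing
        error = agent.update(observation)
        print(f"step {step:2d}  belief={agent.belief:.2f}  error={error:+.2f}")
```

The key contrast with LLM training: the belief is about the world itself, and it keeps updating with every observation, rather than being frozen after training on language.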

I think that when we want really good systems to know about a domain (say business strategy) and provide good guidance, this is the type of architecture we need. What I said there, and will say again here, is that this is where we should be applying our efforts. We’re not there yet, and I’m not sure how far the models have evolved. On the other hand, if we were applying the resources going to LLMs… Look, I’m not saying there aren’t roles for LLMs, but too often they’re being used inappropriately. I think we can do better when we go beyond LLMs. You heard it here first ;).

Rethinking AI and education

25 November 2025 by Clark

I read a post today about Artificial Intelligence and education, and it prompted some thoughts. Including causing me to rethink a post I made earlier! It’s all about using AI to assist with instruction, and it’s probably worth sharing my thinking, to see what you all think. So here are some thoughts on rethinking AI and education.

The post Carl Hendrick wrote talked (at length ;) about how AI is being used to support education. We know that using AI to create answers keeps you from doing the hard work of learning. So, for instance, the trend for students to use AI to write assignments, etc., undermines learning. However, it becomes an ‘us against them’ battle to try to create assignments that require students to think. So far, pretty much the best approach I’ve heard means that you ultimately have to spend some time talking to students one on one. Which doesn’t scale well.

Carl was getting philosophical, about trends and what teaching vs learning means. And it’s an important point: learning is a process, teaching is an intervention, ideally to facilitate that process. If we take Geary’s evolutionary learning model, we pretty much need instruction for certain topics. But it led me to question the whole premise.

The conversation largely rode on ChatGPT (and, implicitly, other Large Language Models). These models are created to generate plausible language. Not correct answers, note. And, they do it so well that there’s been a revolution in the hype (not the substance, mind you). And what concerns me is that LLMs aren’t able to really ‘know’ anything. In my previous post, I posited that we could perhaps combine ‘agents’ (in some way that’d be secure) to create a tutoring model. But I wonder if that’s the right way.

I am thinking about efforts to generate models that, instead of generating plausible language, do ‘knowing’. That is, modeling the predictive coding accounts of the brain. It might be hard to get them to the right level, but at least they’d understand. If you think back to the Intelligent Tutoring Systems of the past, they built deep models of expertise in the domain. Could systems learn this, instead of interpreting language ‘about’ this? Coupling such a system with a teaching engine (maybe one that learns what instruction really is) might yield a real tutor.

Carl’s point about the nature of teaching is that it’s much more than providing answers. In the experiment he was citing, they carefully built a tutor that they had to tune to do real teaching, fighting the natural predilection of such systems to provide answers that sound like correct ones. That, ultimately, sounds wrong.

(I’m not going into the curriculum and assessment, by the way; I don’t know that what they’re teaching is actually useful in this day and age. Was it knowledge about physics, or actual ability to use it? There are robust results that students who learn formal physics still make bad predictions, such as, after a semester, still thinking that a ball dropped from a plane lands directly under where it was released!)

My point is that trying to make LLMs be teachers may be using AI in the wrong way. Sure, ITSs don’t scale well, but could we build an engine that learns a domain (rather than handcrafting it) and one that learns to teach, and then scale that? My argument is that it’s not language fluency that matters, it’s pedagogical fluency. That’s how I’m rethinking AI and education. Your thoughts?

Still the myths

11 November 2025 by Clark

Over on LinkedIn, about the last site worth visiting (I do use Bluesky and Mastodon, but they’re not really ‘sites’ so much as channels), I am still seeing posts from people believing in misinformation. Learning styles, generations, attention spans dropping, etc.: all these things that aren’t valid are still being touted. Despite our debunking efforts, it’s still the myths!

To be fair, we do seem to be seeing a bit of an ‘anti-science’ movement. Which would be not only silly, but sad! Sure, there are problems with science, but it still beats every other process we have. Anecdotes don’t surpass real evidence, and personal opinion isn’t superior to what proper research tells us.

For one, as Naomi Oreskes makes clear in her book Why Trust Science, what makes science work isn’t just the process. So, yes, scientists conduct experiments, and others review them, and it’s a collective decision to publish them. And, yes, bad papers are still submitted (I used to serve on editorial boards, and my rejection rate was about 95%; but it was a good journal ;). Also, it’s hard to bring in new viewpoints. What Oreskes points out, however, and aptly, is that over time, these processes advance our understanding. We may have fits and starts, but over the long game we win. For instance, how are you able to read this offering of mine, over miles and minutes? Because science.

So, science denial is counter-productive, but it exists. Gale Sinatra and colleague Barbara Hofer, in their book Science Denial, outline how this happens. Based upon research into the situation, they document how our minds have biases, and how we can be swayed. We also have our own beliefs, and our tendency toward confirmation bias means we only look for evidence that supports our views. Fortunately, they discuss ways to address these problems, but we need to put some of these into place (as with Brian Klaas’ recommendations for fighting corruption).

We have good data that there are things we should avoid. There really isn’t any psychometrically valid instrument for learning styles, and no evidence that we should use them if there were. Categorizing people by generations is, basically, a form of stereotyping. Our attention spans can engage for hours, even, as we play games, read novels, watch movies, etc. And so on!

Sometimes, it feels hopeless. But I look and see that we’re getting more attention to learning science. It really is about communication, and it seems we’re (slowly) making headway. So, I’ll keep keeping on (heck, I wrote the book!), despite ‘still the myths’. Hopefully, there’ll be fewer of them, and less belief in them, over time. Fingers crossed!

Is ‘average’ good enough?

26 August 2025 by Clark

As this is my place to ‘think out loud’, here’s yet another thought that occurred to me: is ‘average’ good enough? And just what am I talking about? Well, LLMs are, by and large, trained on vast corpora. Essentially, they’re averaging what is known, creating summaries of what’s out there, based upon what’s out there. (Which, BTW, suggests that it’s going to get worse, as they process their own summaries! ;) But should we be looking to the ‘average’?

In certain instances, I think that’s right. If you’re below average in understanding, learning from the average is likely to lift you up. You can move from below average to, well, average. Can you go further? If you’re in well-defined spaces, like mathematics, or even programming, what LLMs know may well be better than average. Not as good as a real expert, but you can raise your game. Er, that is, if you really know how to learn.

Using these systems seems to become a mental crutch, if you don’t actually do the thinking. While above-average people seem to be able to use the systems well, those below average don’t seem to learn. If you used them to provide knowledge, and then put that knowledge into practice and got feedback (so, for instance, experimenting), you could fine-tune your performance (not as effectively as having someone provide feedback, but perhaps sufficiently). However, this requires knowing how to learn, and the evidence here is also that we don’t do that well.

So, generative AI models give you average answers. Except, not always. They hallucinate (and always will, given how they work). For instance, they’ll happily support learning styles, because that’s a zombie idea that’s wrong but won’t die. They can even make stuff up, and they don’t know it and can’t admit to it. If you call them on it, they’ll go back and try again, and maybe get it right. Still, you really should have an ‘expert’ in the loop. Which may be you, of course.

Look, I get that they can facilitate speed. Though that would just seem to lead your employer to expect more from you. Would that be accompanied by more money? Ok, I’m getting a bit out of my lane here, but I’m not inclined to think so. And is faster better?

Also, ‘average’ worries me. As I’ve written, Todd Rose wrote a book called The End of Average that is truly insightful. Indeed, it’s one of those books that makes you see the world in a different way, and that’s high praise. The point being that averaging washes out the quality. Averaging removes the nuances, the details, as does summarization. Ideally, if learning is social (as Mark Britz likes to point out), you should be learning from the best, not the average.

Sure, it can know the average of top thoughts, but what’s better is having those top thinkers. If they’re disagreeing, that’s better for dialog, but not summarization. In truth, I’d rather learn from a Wikipedia page put together by people than a Gen AI summary, because I don’t think we can trust GenAI summaries as much as socially constructed understanding. And it’s not the same thing.

So, I’ll suggest ‘average’ isn’t nearly good enough in most cases. We want people who know, and can do. I don’t mind if folks find GenAI useful, but I want them to use it as support, not as a solution. Hey, there’s a lot that can be done with regular AI in many instances, and Retrieval Augmented Generation (RAG) systems offer some promise of improvement for GenAI, but still not perfect outcomes. And, still, all the other problems (IP, business models, and…). So, where’ve I gone wrong?
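As an aside, for anyone unfamiliar with the RAG pattern I just mentioned, here’s a minimal conceptual sketch: retrieve the most relevant curated passages, then ground the generator in them rather than in its ‘average’. The word-overlap scoring is a crude stand-in (real systems use embedding similarity), and the actual LLM call is deliberately left out; nothing here is any particular product’s API.

```python
# Minimal sketch of the retrieve-then-generate pattern. The scoring is a crude
# word-overlap stand-in (real systems use embedding similarity), and the final
# LLM call is deliberately omitted: build_prompt() just shows how retrieved
# sources get prepended so the generator answers from them, not its "average".

def score(query, passage):
    """Crude relevance score: number of shared words."""
    return len(set(query.lower().split()) & set(passage.lower().split()))


def retrieve(query, corpus, k=2):
    """Return the k passages that best match the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]


def build_prompt(query, passages):
    """Ground the generator in the retrieved sources."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    corpus = [
        "There is no psychometrically valid instrument for learning styles.",
        "Spaced practice improves long-term retention.",
        "Attention spans have not collapsed; engagement depends on design.",
    ]
    question = "Do learning styles improve outcomes?"
    print(build_prompt(question, retrieve(question, corpus)))
```

The curation of that corpus, of course, is where the human expertise still has to come in.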

Note, I should be putting references in here, but I’ve read a lot lately and not done a good job of saving the links. Mea culpa. Guess you’ll just have to trust me, or not. 

Top 10 Learning Tools 2025

14 August 2025 by Clark

Every year, the inimitable Jane Hart collects what people say are their top 10 tools for learning. The results are always intriguing; for instance, last year AI really jumped up the list. You can vote using this form, or email your list to her via the address on that page. I’ve participated every year I’ve known about it, and am doing so again. Here’s my list. Realize this is for ‘learning’, not formal education per se. It’s whatever makes sense for you.

Writing

I write, a lot. It’s one way of making sense of things. So… Microsoft Word remains my go-to tool, though less and less so. I’ve been using Google Docs to collaborate with others quite a bit, and am currently using Apple’s Pages for that purpose. Still, I think of Word as my ‘go-to’ tool, at least for now. I don’t like Microsoft, and am trying to wean myself away, but I really really need industrial-strength outlining, and no one else has measured up.

Apple’s Notes needs a mention. I use it, a lot. Several things are pinned to the top (including my SoMe canned responses, and shopping lists). I also share recipes with family members (on Apple devices), take notes on books and the like, and keep a list of ‘to consume’ (books, movies). I also use Notability for biz notes, but it’s not as ubiquitous, and I may just shift everything to Notes as there’ve been an increasing number of ‘offers’ to upgrade. Yuck.

And, of course, WordPress for this blog. Here’s where I share preliminary thoughts that end up appearing in articles, presentations, or books. It’s a way to share thinking and get feedback.

Diagramming

I’m still using OmniGraffle. I tried using Google’s Draw, and Apple’s Freeform, but… OmniGraffle’s big positive is its user interface. It works the way I want to think about it. Sure, it’s probably changed my thinking to adapt to it too, but from the get-go I found using it to be sweet. In fact, as I’ve recounted, I immediately redid some diagrams in it that I’d created in other ways previously, just because it was so elegant. The downsides are not only that it’s Mac-only (I work with many other folks), but that it’s not collaborative. Diagramming is one of the ways I make sense of things.

Presentation

Apple’s Keynote remains my preferred presentation tool. I continue to use it to draft presentations. It defaults to my ‘Quinnovation’ theme, tho’ for reasons (working with others, handouts w/o color, builds, etc.) I will use a plain white theme. I’ve even built a deck of diagram builds, so I have them to hand to paste into presos rather than having to remake them each time. It’s another way to share.

Connection

Apple Mail, for email, is an absolute necessity. I have to stay in touch with folks, and mail’s critical to coordinate and share.

I use Safari all the time as my browser, tho’ occasionally I need Chrome compatibility, at which time I use Brave: Chrome-compatible but without Google’s intrusiveness. Either takes me to Wikipedia, a regular trusted source for looking things up.

Zoom remains my ‘go-to’ virtual meeting tool (all my meetings are virtual these days!). I of course use Microsoft’s Teams (but only through the browser now; I was able to turf the app), and Google Meet, but only as others request. Of course, connecting with others is critical to learning.

Wow, I’m running out of time and space. Let’s see: Slack is a coordination tool I use a lot with the LDA, and Elevator 9. It’s also a way to share thinking, so it’s a learning tool too.

There’s more, so I guess I’ll use my last slot and aggregate my social media tools. That includes LinkedIn, Bluesky, and Mastodon. All three get notification of blog posts, but other than that each has its separate uses. LinkedIn is for biz connections, and reading what others are posting. Bluesky is mostly what Twitter used to be (before it became Xitter): fun, with quantity. Mastodon’s more restrained in growth, but the underlying platform is really resistant to political/business corruption.

That’s all I can think of. I welcome hearing your thoughts and seeing the results.

Beyond Design

12 August 2025 by Clark

When you look at the full design process, I admit to a bias. Using Analysis-Design-Development-Implementation-Evaluation (ADDIE; though I prefer more iterative models: SAM, LLAMA, …), I focus early. There are two reasons why, and I really should address them. So let’s talk beyond ‘design’ and why my bias might exist. (It pays to be a bit reflective, or defensive?, from time to time.)

I do believe that it’s important to get the first parts right. I’ve quipped before that if you get the design right, there are lots of ways to implement it. To do that, you need to get the analysis and design right. So I focus there. And, to be sure, there’s enough detail there to suit (or befuddle) most. Also, lots of ways we go wrong, so there’s suitable room for improvement. It’s easy, and useful, to focus there.

Another reason is that implementation, as implied in the quip, can vary. If you have the resources, need, and motivation, you can build simulation-driven experiences, maybe even VR. There are different ways to do this, depending. And those ways change over time. For instance, a reliable tool was Authorware, and then Flash, and now we can build pretty fancy experiences in most authoring tools. It’s a craft thing, not a design thing.

Implementation does matter. How you roll things out is an issue. As Jay Cross & Lance Dublin made clear in Implementing eLearning, you need to treat interventions as organizational change. That includes vision, and incentives, and communication, and support, and… And there’s a lot to be learned there. Julie Dirksen addresses much in her new book Talk to the Elephant about how things might go awry, and how you can avoid the perils.

Finally, there’s evaluation. Here, our colleague Will Thalheimer leads the way, with his Learning Transfer Evaluation Model (LTEM). His book, Performance-Focused Learner Surveys, comes closest to presenting the whole model. Too often, we basically do what’s been asked, and don’t ask for more than smile sheets at best. When, to be professional, we should have metrics that we’re shooting to achieve, and then test and tune until we achieve them.

Of course, there are also my predilections. I find analysis and design, particularly the latter, to be most intellectually interesting. Perhaps it’s my fascination with cognition, which covers both the product and process of design. My particular interest is in doing two things: elegantly integrating cognitive and ‘emotional‘ elements, and doing so in the best ways possible, pushing the boundaries while respecting the constraints under which we endeavor. I want to change the system in the long term, but I recognize that’s not likely to happen without small changes first.

So, while I do look beyond design, that’s my more common focus. I think it’s the area where we’re liable to get the best traction. Ok, so I do say that measurement is probably our biggest lever for change, but we’ll achieve the biggest impact by making the smallest changes that improve our outcomes the most. Of course, we have to be measuring so that we know the impact!

Overall, we do need the whole picture. I do address it all, but with a bias. There are others who look at the whole process. The aforementioned Julie, for one. Her former boss and one of our great role-models, Michael Allen, for another. Jane Bozarth channels research that goes up and down the chain. And, of course, folks who look at parts. Mirjam Neelen & Paul Kirschner, Connie Malamed, Patti Shank, they all consider the whole, but tend to have areas of focus, with considerable overlap. Then we go beyond, to performance support and social, and look to people like Mark Britz, Marc Rosenberg, Jay Cross, Guy Wallace, Nigel Paine, Harold Jarche, Charles Jennings, and more.

All to the good, we benefit from different perspectives. It’s hard to get your mind around it all, but if you start small, with your area, it’s easy to begin to see connections, and work out a path. Get your design right, but go beyond design as well to get that right (or make sure it’s being done right to not undermine the design ;). So say I, what say you?

Auto-marked generative?

29 July 2025 by Clark

As I continue to explore learning science, and get ever deeper, one idea came to me that I had to check out. So, we’re recognizing the difference between elaboration (getting material into long-term memory) and retrieval (getting it out). They’re different, and yet both valuable. However, generative (not Generative AI, btw) activities typically have learners create their own understandings, with the goal of having them reprocess the information. Which makes them labor-intensive to evaluate. Sure, you could have GenAI evaluate and respond, but that’s problematic for several reasons. Is there another way? Can you have auto-marked generative activities?

Increasingly, I’m hearing more about generative activities from educators. These involve elaborative processing, where learners express the material in their own way. I argue that this can be either connecting it to personal experiences, or connecting it to prior knowledge (playing some semantics here ;). The goal, however, is to deepen and extend the patterns across neural activity, increasing the likelihood of their activation.

Whether prose, diagram, or mindmap (yes, a form of diagram, but…), these are free-form, and thus need review. Someone needs to look at them, to ascertain whether they’re right or whether they represent a significant misunderstanding. I remember when Kathy Fisher (of semantic networking fame and the SemNet software) talked about how she asked students how water got from the digestive to the excretory system, and they (many?) ended up positing in their mind-maps an extra tube connecting the two. (Fun fact: no such tube exists; water is absorbed into the blood, and then filtered out via the kidneys.) Of course, with this evidence it’s easy to diagnose misconceptions, but only at the cost of considerable human interaction.

I was thinking about writing retrieval practice mini-scenarios, and was led to wonder whether you could do the same for generative activities. That is, present alternatives, perhaps representing the most common misconceptions, and have learners choose between the different representations. One advantage, then, would be the ability to auto-mark understanding. It seems to me that they’ll still need to process each representation to be able to choose one, so they’re still doing the processing. It could be a mindmap, diagram, or prose restatement. You’d also be able to diagnose, and remediate, misunderstandings.

For example, you could ask:

How does water get from the digestive to the excretory system:

    • There’s a direct connection between the two, known as the aqueduct.
    • Water is absorbed into the blood and then filtered out via the kidneys.
    • There’s an organ that processes water from the former to the latter.

(A rough conceptualization; I’m sure a physiologist could do better!)
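To make the auto-marking concrete, here’s a minimal sketch of how such an item might be represented, with each alternative tagged as correct or as a known misconception with its own remediation. The field names and the mark() helper are purely illustrative, not any particular tool’s format.

```python
# Hypothetical representation of an auto-markable generative item: each option
# is a candidate representation (prose here; it could just as well reference a
# diagram or mindmap image), flagged as correct or tied to the misconception it
# embodies, with remediation attached. Field names are invented for illustration.

WATER_ITEM = {
    "prompt": "How does water get from the digestive to the excretory system?",
    "options": [
        {"text": "There's a direct connection between the two, known as the aqueduct.",
         "correct": False,
         "remediation": "No such tube exists; revisit how absorption into the blood works."},
        {"text": "Water is absorbed into the blood and then filtered out via the kidneys.",
         "correct": True,
         "remediation": None},
        {"text": "There's an organ that processes water from the former to the latter.",
         "correct": False,
         "remediation": "No single intermediate organ does this; the blood carries the water."},
    ],
}


def mark(item, chosen_index):
    """Auto-mark the learner's choice; return correctness plus any remediation."""
    option = item["options"][chosen_index]
    return option["correct"], option["remediation"]


if __name__ == "__main__":
    correct, feedback = mark(WATER_ITEM, 0)
    print(correct, feedback)   # False, plus the misconception-specific remediation
```

The point is that the misconception diagnosis rides along with the choice, so remediation can be immediate without a human in the loop.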

I thought that perhaps I could ask someone who talks about cognitive processing, researches instructional strategies, and in particular addresses generative activity. Professor Rich Mayer, whom Ruth Clark introduced to us at the Learning Development Accelerator, was kind enough to respond, and we had a Zoom chat. Not to put words in his mouth, but it was my understanding that he agreed this was a plausible model. I freely invite anyone to research this (including you, Rich!). Unless such research is extant, in which case please point me to the existing journal articles or the like.

There’s no telling whether this is useful, of course. Are auto-marked generative activities possible and plausible? Still, better to get the idea out there than not; it may end up being useful! Which, of course, is the ultimate goal. Thoughts?

Continually learning

8 July 2025 by Clark

I’ve been advising Elevator 9 on learning science. Now, while I advise companies via consulting, this is a different picture. For one, they’re keen to bake learning science into the core, which is rare and (in my mind) valuable. It’s also a learning opportunity for me. I’m watching all the things a startup has to deal with that I’ve avoided (I didn’t get the entrepreneurial gene). It’s also yielding a really interesting revelation, which is worth exploring. I like continually learning, and this is just such an opportunity.

To start, I’ve advised lots of companies over the years. This includes on learning design, product design, market strategy, and more. Of course, with me you always get more than anticipated (like it or not ;), because I have eclectic interests. I also collect models, and when they match, you’ll hear about it. (To be fair, most clients have welcomed my additional insights; it’s an extra bonus of working with me! :) It’s also fun, since I also educate folks as I go along (“working with you is like going to graduate school”). Rarely, however, have I been embedded in the development. I come in, give good advice, and get out. Here, it’s not the same.

I’m always a sponge, learning as well as sharing. Here, however, I’ve been involved for a longer time: from their first no-code version through to serious platform development (now in the User Acceptance Testing phase, which means we’re about to launch; exciting!). From CEO David Grad, through COO Page Chen, and then all the folks who have been added from tech, to sales and marketing, UI, and more, I’ve usually been at least peripherally involved and exposed. It’s fascinating, and I’m really learning the depths that each element takes; of course it’s far more than my naive ideas had initially conceived.

There are two major elements to their solution. One is wrapping extended reactivation around training events. The second is taking the collected data and making it available as evidence of the learning trajectory. My role is essentially in the first; for one, there are lots of nuances going into the quantity and spacing of learning. While there’s good guidance, we’re making our best principled decisions, and then we’ll refine through testing. I’m also guiding about what those reactivation activities should be. We are extending learning, not quite to the continual, but certainly to the necessary proficiency.
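To give a flavor of the kind of spacing decisions involved, here’s a sketch of an expanding-interval reactivation schedule. The base interval and growth factor are placeholder numbers chosen to illustrate the shape of the problem, not Elevator 9’s actual values; they’re exactly the sort of parameters you’d set on principle and then refine through testing.

```python
# Illustrative only: an expanding-interval reactivation schedule. The base
# interval and growth factor are placeholder values, the kind of parameters
# you'd set on principle and then refine through testing.
from datetime import date, timedelta


def reactivation_dates(training_day, count=4, base_days=2, growth=2.0):
    """Schedule `count` reactivations after a training event, with each gap
    roughly `growth` times longer than the previous one."""
    dates, gap, current = [], base_days, training_day
    for _ in range(count):
        current = current + timedelta(days=round(gap))
        dates.append(current)
        gap *= growth
    return dates


if __name__ == "__main__":
    for d in reactivation_dates(date(2025, 7, 1)):
        print(d)   # gaps of roughly 2, 4, 8, and 16 days after the event
```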

This is where it’s getting interesting. I realized the other day that most of what learning science talks about is formal learning: practice before performance. Yet, here, we’re actually moving into applying the learning in the workplace, and having learners look at the impact they’re having. In many ways, this looks more like coaching. That is, we’re covering the full trajectory. Which means we have to draw on principles beyond just formal learning. This is serious fun! Our data collection, as a consequence, goes beyond just the cognitive outcomes to also look at how the experience is developing.

Sure, there are tradeoffs. The market demands that we incorporate artificial intelligence, and they’re not immune to the advantages. We’re also finding that, pragmatically, the implications can get complex really fast, and that we have to make some simplifying assumptions. Of course, they also need to develop a minimum viable product first, after which they’ll see what direction extensions go. It’s not the ideal I would envision, but it’s also a solution that’s going to really meet what’s needed.

So, I’m continually learning, and enjoying the journey. We’ll see, of course, if we can penetrate awareness with the solution, which should be viable, and also handle the general difficulties that bedevil many startups. Still, it’s a great opportunity for me to be involved in, and similarly it’s one that can address real organizational needs.

In praise of reminders

17 June 2025 by Clark

I have a statement that I actively recite to people: if I promise to do something and it doesn’t get into a device, we never had the conversation. I’m not trying to be coy or problematic; there are sound reasons for this. It’s part of distributed cognition, and augmenting ourselves. It’s also part of a bigger picture, but here I am in praise of reminders.

Scheduling by the clock is relatively new from a historical perspective. We used to use the sun, and that was enough. As we engaged in more abstract and group activities, we needed better coordination. We invented clocks and time as a way to accomplish this. For instance, train schedules.

It’s an artifact of our creation, thus biologically secondary. We have to teach kids to tell time! Yet, we’re now beholden to it (even if we muck about with it, e.g. changing time twice a year, in conflict with research on the best outcomes for us). We created an external system to help us work better. However, it’s not well-aligned with our cognitive architecture, as we don’t naturally have instincts to recognize time.

We work better with external reminders. So, we have bells ringing to signal it’s time to go to another course, or to attend worship. Similar to, but different from, other auditory signals (which don’t depend on our spatial attention) such as horns, buzzers, sirens, and the like. They can draw our attention to something that we should attend to. Which is a good thing!

I, for one, became a big fan of the Palm Pilot (I could only justify a III when I left academia, for complicated reasons). Having a personal device on which I could add and edit things like reminders on a date/time calendar fundamentally altered my effectiveness. Before, I could miss things if I disappeared into a creative streak on a presentation, paper, diagram, etc. With this, I could be interrupted and alerted that I had an appointment for something: a call, meeting, etc. I automatically attach alerts to all my calendar entries.

Granted, I pushed myself to see just how effective I could make myself. Thus, I actively cultivated my address book, notes, and reminders as well as my calendar (and still do). But this is one area that’s really continued to support my ability to meet commitments. Something I immodestly pride myself on delivering. I hate to have to apologize for missing a commitment! (I’ll add multiple reminders to critical things!) Which doesn’t mean you shouldn’t actively avoid all the unnecessary events people would like to add to your calendar; but that’s just self-preservation!

Again, reminders are just one aspect of augmenting ourselves. There are many tools we can use – creating representations, externalizing knowledge, … – but this one in particular has been a big key to improving my ability to deliver. So I am in praise of reminders, as one of the tools we can, and should, use. What helps you?

(And now I’ll tick the box on my weekly reminder to write a blog post!)

Software engineer vs programmer

20 May 2025 by Clark

If you go online, you’ll find many articles that talk about the difference in roles between software engineers and programmers. In short, the former have formal training and background. And, at least in this day and age, they oversee coding from a more holistic perspective. Programmers, on the other hand, do just that: make code. Now, I served in a school of computer science for a wonderful period of my life. Granted, my role was teaching interface design (and researching ed tech). Still, I had exposure to both sides. My distinction between software engineer vs programmer, however, is much more visceral.

Early in my consulting career, I was asked to partner with a company to develop learning. The topic was project management for non-project managers. They chose me because of my game design experience as well as my learning science background. The company that contracted me was largely focused on visual design; for instance, the owner also taught classes on that. Moreover, their most recent project was a book on the fauna of a fictitious world in the Star Wars universe (with illustrations). He also had a team of folks back in India. Our solution was a linear scenario, quite visual, set in outer space, both because of their team’s experience and the audience of engineers.

After the success of the project, the client came back and asked for a game to accompany the learning experience. Hey, no problem, it’s not like we’ve already addressed the learning objectives or anything! Still, I like games! This was going to be fun. So I dug in, cobbling together a game design. We used the same characters from the previous experience, but now focused on making project management decisions and dealing with different personality types (the subtext was, don’t be a difficult person to work with).

The core mechanic was:

  • choose the next project
  • assess any problem
  • find the responsible person
  • ask (appropriately) for the fix

Of course, the various rates of problems, the stage of development (and therefore the responsible person), and the stage and scope of the project were all going to need tuning. In addition, we wanted the first n problems to involve agreeable people, so learners could master the details before beginning to deal with more difficult personality types.

So, from my development docs, they hired a Flash programmer to build the game. And… when we tried to iterate, we got more bugs instead of improvement. This happened twice. I realized the coders were hard-wiring the parameters throughout the code, which meant that if you wanted to tune a value, they had to hunt through the code to change every occurrence. Now, for those who know, this is incredibly bad programming. It might not be untoward for a small Flash animation, but it didn’t scale to a full game program.

We had a discussion, and they finally procured someone who actually understood the use of constants, someone with more than just a programming background. Suddenly, tweaks were returning with short turnaround, and we could tune the experience! Thus, we were able to create a game that actually was fun. We didn’t really get to know whether it was effective, because they hadn’t set any metrics for impact, but they were happy and touted the game in several venues. We took that as a positive outcome ;).
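For those who want the difference in a nutshell, here’s a sketch (in Python rather than the original Flash/ActionScript, with hypothetical parameter names): hard-wiring scatters the same magic numbers through the code, while naming them once as constants makes each tuning pass a one-line edit.

```python
# Hard-wiring: the same magic numbers sprinkled through the logic, so every
# tuning pass means hunting down each occurrence (and missing some).
def spawn_problem_hardwired(stage):
    if stage > 3:                                   # magic number
        return {"difficulty": 0.8, "rate": 0.25}    # more magic numbers
    return {"difficulty": 0.4, "rate": 0.25}


# Named constants in one place: each tweak is a single edit, which is what
# made the short-turnaround tuning cycles possible.
EASY_STAGE_LIMIT = 3     # stage after which difficult personalities appear
PROBLEM_RATE = 0.25      # chance of a problem per project step
EASY_DIFFICULTY = 0.4
HARD_DIFFICULTY = 0.8


def spawn_problem(stage):
    difficulty = HARD_DIFFICULTY if stage > EASY_STAGE_LIMIT else EASY_DIFFICULTY
    return {"difficulty": difficulty, "rate": PROBLEM_RATE}
```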

The take-home lesson, of course, is that if you need tuning (and, for anything of sufficient size that’s user-facing, you will), you need someone who understands proper code structure. I’ll always ask for someone who understands software engineering, not just a programmer. There’s a reason that a) the latter are known as ‘cowboy coders’, and b) there’s software process! That’s my personal distinction between a software engineer vs a programmer, and I realize it’s out of date in this era of increasingly complex software. Still, the value of structure and process isn’t restricted to software, and is ever more important, eh?
