Learnlets


Clark Quinn’s Learnings about Learning

Context and models

22 July 2025 by Clark

One of the things I’ve recognized is that we don’t pay enough attention to context. It turns out to be a really important factor in cognition, as our long-term memory interacts with the current context to determine our interpretation. As such, our interpretations are very ’emergent’. Thus, our training needs to ensure that we’re likely to make the right interpretation and so choose the right action. Do we do this well? And can artificial intelligence (AI), specifically generative AI (GenAI), help? Here’re some thoughts on context and models.

So, we’ve gone from symbolic models to sub-symbolic ones as we’ve moved to a ‘post-cognitive’ interpretation of our thinking. What’s been realized is that we’re not the formal logical reasoning beings we’d like to think we are. Instead, we’re very much assembling our understanding on the fly as an interaction between context and memory. In fact, our emergent memory can be altered by the context, as Beth Loftus’ research demonstrated. Which means that, if we want specific interpretations and reactions (e.g. making decisions under uncertainty), we should be careful to ensure that we provide training across a suitable suite of contexts.

Now, active inference models of cognition suggest that we’re actively building models of how the world works. Thus, we’re abstracting across experiences to generate ever-more accurate explanations. Research on mental models suggests that they’re incomplete, not completely accurate, and, arguably most importantly, hard to get rid of if they’re wrong. Thus, providing good models beforehand is important, and work by John Sweller further suggests that examples showing models in context benefit learning. You can present the model, but ultimately the learner must ‘own’ it. So, it’s important to know the models and their range of applicability to facilitate that abstraction.

What is important to know, however, is that GenAI doesn’t build models of the world. This was an important (and, sadly, not self-generated) realization for me. The implication, however, is clear. I have maintained that GenAI can’t understand context, and thus can’t generate suitable practice environments. Which, of course, is to the good for designers, since it leaves them a role ;). Importantly, however, this framing also suggests that GenAI can’t choose an appropriate suite of contexts for practice, since it doesn’t understand models and how they’re applicable (and when not). (Another designer role!)

I am all for using technology to complement our own cognition. However, that entails knowing what the true affordances of the technology are, and also what it can’t do. So, GenAI can help think of great settings for practice. Along with a person (an expert actually) to vet the suggestions, of course. It can think of things we might forget, or ones we haven’t thought of yet. It can, of course, also create ones that aren’t realistic. There’re potentially great opportunities, but we have to know what matters, and what doesn’t. Context and models matter. GenAI can’t understand them. You can take it from there.

From knowledge to performance

15 July 2025 by Clark

For reasons, I’ve been looking at multiple-choice questions (MCQs). Of course, for writing them right, you should look to Patti Shank’s book Write Better Multiple-Choice Questions. And there’s clearly a need! Why? Because when it comes to writing meaningful MCQs, I want to move us from knowledge to performance. And the vast number of questions I found didn’t do that.

To start, I’ll point, as I often do, to Pooja Agarwal’s research (it plays to my bias ;). She found that asking high-level questions (e.g. application questions, or mini-scenarios as I like to term them) leads to the ability to answer high-level questions (that is, to do). What wasn’t necessary were low-level knowledge questions. She tested low alone, high alone, and low + high. What she found was that to pass high-level tests, you needed high-level questions. Further, low-level questions didn’t add anything. I’ll also suggest that our needs, for our learners and our organizations, are the ability to apply knowledge in high-level ways.

Yet, when I look at what’s out there, I continually see knowledge questions. They violate, btw, many principles of good multiple-choice questions (hence Patti’s book ;). These questions often have silly or obvious alternatives to the right answer. They include responses of the wrong length, and too many of them (three is usually ideal, including the right answer). We also see a lack of feedback, just ‘right’ or ‘wrong’, not anything meaningful. We also see too many questions, or incomplete coverage, and arbitrary criteria (why 80%?). Then, too, there are the absolutes (never/always, etc.), which aren’t the way to go. Perhaps worst, they don’t always focus on anything meaningful, but query random information that was in no way signaled as important.

Now, I suppose I can’t say that knowledge questions should be avoided. There might be reasons to ensure they’re there for diagnostic purposes (e.g. why are learners getting this wrong?). I’d suggest, however, that such questions are way overused. Moreover, we can do better. It’s even essentially easy (though not effortless).

What we have learners do is what’s critical for their effective learning. If we care (and we should), that means we need to make sure that what they do leads to the outcomes our organizations need. Which means that we need lots of practice. Deliberate practice, with desirable difficulty, spaced out over time. We need reactivation, for sure. But what we do to reactivate dictates what we’ll be able to do. If we ask people knowledge questions, they’ll be able to answer knowledge questions. But that has been shown not to lead to their ability to apply that knowledge to make decisions: solve problems, design solutions, generate better practices.

So, we can do better. We must do better. That is, if we want to actually assist our organizations. If we’re talking skilling (up-, re-, etc), we’re talking high-level questions. On the way, perhaps (and recommended), to more rigorous assessment (branching scenarios, sims, mentored practice, coaching, etc.). Regardless, we want what we have learners do to be meaningful. When we’re moving from knowledge to performance, it’s critical. And that’s what I believe we should be doing.

(BTW, technology’s an asset, but not a solution. As I like to say:

If you get the design right, there are lots of ways to implement it; if you don’t get the design right, it doesn’t matter how you implement it. )

Continually learning

8 July 2025 by Clark

I’ve been advising Elevator 9 on learning science. Now, while I advise companies via consulting, this is a different picture. For one, they’re keen to bake learning science into the core, which is rare and (in my mind) valuable. It’s also a learning opportunity for me. I’m watching all the things a startup has to deal with that I’ve avoided (I didn’t get the entrepreneurial gene). It’s also turning out to yield a really interesting revelation, which is worth exploring. I like continually learning, and this is just such an opportunity.

To start, I’ve advised lots of companies over the years. This includes advice on learning design, product design, market strategy, and more. Of course, with me you always get more than anticipated (like it or not ;), because I have eclectic interests. I also collect models, and when they match, you’ll hear about it. (To be fair, most clients have welcomed my additional insights; it’s an extra bonus of working with me! :) It’s also fun, since I also educate folks as I go along (“working with you is like going to graduate school”). Rarely, however, have I been locked into the development. I come in, give good advice, and get out. Here, it’s not the same.

I’m always a sponge, learning as well as sharing. Here, however, I’ve had involvement for a longer time, from their first no-code version to now serious platform development (in the User Acceptance Testing phase, which means we’re about to launch; exciting!). From CEO David Grad, through COO Page Chen, and then all the folks that have been added across tech, sales and marketing, UI, and more, I’ve usually been at least peripherally involved and exposed. It’s fascinating, and I’m really learning the depths each element takes; of course it’s far more than my naive ideas had initially conceived.

There are two major elements to their solution. One is wrapping extended reactivation around training events. The second is taking the collected data and making it available as evidence of the learning trajectory. My role is essentially in the first; for one, there are lots of nuances going into the quantity and spacing of learning. While there’s good guidance, we’re making our best principled decisions, and then we’ll refine through testing. I’m also guiding about what those reactivation activities should be. We are extending learning, not quite to the continual, but certainly to the necessary proficiency.
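As one concrete (and purely hypothetical) illustration of the kind of spacing decisions involved, here’s a minimal sketch of an expanding-interval reactivation schedule. The function, parameter names, and numbers are my own assumptions for illustration, not Elevator 9’s actual algorithm; these are exactly the sorts of values you’d refine through testing.

```typescript
// Hypothetical sketch: schedule reactivation touches at expanding intervals
// after a training event. All parameter values are assumptions to be tuned.
function planReactivations(
  trainingDate: Date,
  touches = 4,        // number of reactivation events (assumed)
  baseGapDays = 2,    // days before the first touch (assumed)
  growthFactor = 2.5  // how much each successive gap expands (assumed)
): Date[] {
  const touchDates: Date[] = [];
  let offsetDays = 0;
  let gap = baseGapDays;
  for (let i = 0; i < touches; i++) {
    offsetDays += gap;
    const d = new Date(trainingDate);
    d.setDate(d.getDate() + Math.round(offsetDays));
    touchDates.push(d);
    gap *= growthFactor; // later touches are spaced further apart
  }
  return touchDates;
}

// Example: a session on 1 Aug 2025 yields touches roughly at +2, +7, +20, and +51 days.
console.log(planReactivations(new Date("2025-08-01")));
```

Even in a toy like this, small changes to the gap or growth factor shift the whole trajectory, which is why principled defaults plus testing, rather than a single “right” answer, is the realistic approach.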

This is where it’s getting interesting. I realized the other day that most of what learning science talks about is formal learning: practice before performance. Yet, here, we’re actually moving into applying the learning in the workplace, and having learners look at the impact they’re having. In many ways, this looks more like coaching. That is, we’re covering the full trajectory. Which means we have to draw on principles beyond just formal learning. This is serious fun! Our data collection, as a consequence, goes beyond just the cognitive outcomes to also look at how the experience is developing.

Sure, there are tradeoffs. The market demands that we incorporate artificial intelligence, and they’re not immune to the advantages. We’re also finding that, pragmatically, the implications can get complex really fast, and that we have to make some simplifying assumptions. Of course, they also need to develop a minimum viable product first, after which they’ll see what direction extensions go. It’s not the ideal I would envision, but it’s also a solution that’s going to really meet what’s needed.

So, I’m continually learning, and enjoying the journey. We’ll see, of course, if we can penetrate awareness with the solution, which should be viable, and also handle the general difficulties that bedevil many startups. Still, it’s a great opportunity for me to be involved in, and similarly it’s one that can address real organizational needs.

Where’s quality?

1 July 2025 by Clark

I get it, when you’ve a hammer, the whole world looks like a nail. Moreover, there’s money on the table, and it’d be a shame not to grab onto it. Still, there’s also integrity. And, frankly, I fear that we’re going down the wrong path. So I’ll rail again, by asking “where’s quality?”

So, a colleague recently provided a link to a report by a well-known analyst. In the report, they call for an AI revolution for L&D. And, yes, I do believe L&D needs a revolution; I wrote a whole book about it. However, I fear that the direction under advisement is focusing on the wrong thing. So here’s what the initial post summarized about the article:

* Despite significant investment, many companies are utilizing outdated learning models that do not deliver substantial business impact.

* Learning needs to be dynamic, personalized, and focused on enablement.

* Chief Learning Officers (CLOs) should re-establish themselves as leaders within the enterprise, focusing not just on learning but on employee enablement.

* Artificial intelligence (AI) offers the potential to speed up content creation, lower costs, and improve operational efficiency, which allows Learning and Development (L&D) to adopt a wider and more strategic role.

Do you see anything wrong with this? I actually agree  with the first point, and probably the third. However, I think we can make a strong case that the second is not the primary issue. And very clearly the fourth point identifies what’s wrong in the second, at least before the last phrase.

So, first, when we invoke learning, we should be very careful to do it right. There are claims that up to 90% of our investment in training is going to waste. However, it’s not because our learning designs aren’t ‘dynamic, personalized, and focused on enablement’, it’s because our learning isn’t designed according to what research says works. Now, our learning needs change as our abilities improve. We start knowing what we need and why. There’re also times when performance support can be more effective than courses. Courses can still be valid, if they’re done well.

That’s the point I continue to make: I maintain that we’ll save more money and have more impact if we focus on good learning design before we invest in fancy technology. That includes AI. We want meaningful practice (which I suggest is still a role for designers, as AI doesn’t understand context), not information dump. Knowledge ≠ ability to perform. What we need is practice of doing. At least for novices. But beyond that, only effective self-learners will be truly able to leverage information on their own to learn. Even social learning gets better when we understand learning.

So, learning needs to be evidence-informed, first. Then, and only then, can it be dynamic, personalized, etc. Even knowing when and how to use AI as performance support counts (a more valid role, tho’ there needs to be scrutiny of the advice somehow, as AIs can give bad advice). Sure, CLOs do need to be leaders in the enterprise, but that comes from understanding cognition and learning, and then using those to better enable innovation as well as optimizing performance. Enablement’s fine as a premise, but it’s got to come from understanding. For instance, you can’t get employees contributing just because you put in AI; you need to create a learning culture. (Putting AI into a Miranda organization isn’t going to magically fix the problem.)

Let me be clear: my argument is not Gen AI bad vs Gen AI good. No, it’s learning science involved versus not. I am fine if we start using AI, Gen or otherwise, but after we’ve made sure we’re doing the right things first. Let me pose a hypothetical: for $30K, would you rather have 3 courses or 10? What if those 3 courses were designed to actually have an impact, versus 10 that are pretty and full of information, but won’t move a single meaningful needle for the organization? Sure, I’ve made up the numbers, but the reality is that we’re talking about achieving real outcomes versus making folks feel good; I’ll suggest “it’s pretty and people like it” is no substitute for improving the outcome.

This makes the last point above more problematic: we don’t need to speed up content creation. Content dump ≠ learning. Lowering costs and improving efficiency are all good, but after you’ve ensured adequate effectiveness. And no one seems to be talking about that. That’s why I’m asking “where’s quality?” It’s not being discussed, because AI is the next shiny object: “there’s plenty of money to be made”. Anyone else sensing a bubble? And that’s without even considering IP ethics, environmental impact, security, and VC funding. The business model is still up in the air. Hence, my question. Your thoughts?

As an aside, there’s a quote in the paper that illustrates their lack of deep understanding: “As our attention spans shorten”. Ahem. While there’s a credible argument made by Gloria Mark, I still suggest it’s not a change in our cognitive architecture, but instead availability and familiarity. We can still disappear for hours into a novel, movie, or game. It’s a fallacious basis for an argument.

Truth in advertising: I was tempted to title this “WTAH”, but…I decided that might be too incendiary ;). Hence, “Where’s quality?” Still, you can imagine my mood while reading and then writing this.

Writing for learning

24 June 2025 by Clark

We write for lots of reasons. It’s all about communication, but different purposes call for different writing. Just within books, the language in a thriller should be different than in thoughtful stories. Writing for ads is different than writing for science. And, writing for learning is different than writing for other purposes. What am I talking about?

What research tells us, as Ruth Clark lets us know, is that we learn better from conversational language. Formal language, such as in an encyclopedia or a textbook, doesn’t work for elearning, nor for how an instructor talks to an audience. You want to be informal, personal, and more. Yet too often our prose is tedious.

Dialog, in particular, should be authentic to the speaker. I quail when I see characters spouting language straight out of an instructional manual or, worse, a marketing spiel. Good character development goes beyond stereotypes and develops some personality. This should come through in their language. Writing dialog, then, isn’t something most designers have been trained in. Which means that designers shouldn’t write dialog, or at least should get external support, whether training or even just peer review.

Writing for learning needs to be clear, of course. It also needs to be accurate. And yet, it shouldn’t be onerous to read. If there are barriers to comprehension, you’re putting unnecessary obstacles in the way of your learning outcome. Really, you’re managing cognitive load. Obtuse language impedes processing, and learning is processing-intensive enough!

I’ve talked before about the importance of emotion in learning, for motivation, keeping anxiety under control, building confidence, and more. Writing is one of the most compact forms of media for communicating, and so we want our language to address these issues as well. Conversational language helps reduce anxiety by being familiar, and shows relatedness, part of the Self-Determination Theory of motivation. When folks believe we care about them, they’re more inclined to succumb to our ministrations.

Writing for learning is one of the elements necessary for the appropriate use of media. We should use the right media for the message (with a caveat about the value of novelty), and then we should apply the right media correctly. That is, ensuring we apply the appropriate expertise. We can make changes, such as my common example of Ken Burns’ compelling use of still images in his video documentary of the Civil War, but even then there are accommodations. In short, writing for learning has some particular constraints, and we as designers should be aware of them.

There’s more, of course. What you write in an introduction is different from what you present about a model, from the narrative for an example, and from the instructions versus the description of the context for retrieval practice, etc. Knowing what the role is, and the appropriate writing, becomes habit with experience, but like all learning, models and feedback help accelerate the path there. You need to know not just what to write, but how and when. Those are my thoughts; what are yours?

In praise of reminders

17 June 2025 by Clark

I have a statement that I actively recite to people: If I promise to do something, and it doesn’t get into a device, we never had the conversation. I’m not trying to be coy or problematic; there are sound reasons for this. It’s part of distributed cognition, and augmenting ourselves. It’s also part of a bigger picture, but here I am in praise of reminders.

Scheduling by the clock is relatively new from a historical perspective. We used to use the sun, and that was enough. As we engaged in more abstract and group activities, we needed better coordination. We invented clocks and time as a way to accomplish this. For instance, train schedules.

It’s an artifact of our creation, thus biologically secondary. We have to teach kids to tell time! Yet, we’re now beholden to it (even if we muck about with it, e.g. changing time twice a year, in conflict with research on the best outcomes for us). We created an external system to help us work better. However, it’s not well-aligned with our cognitive architecture, as we don’t naturally have instincts to recognize time.

We work better with external reminders. So, we have bells ringing to signal it’s time to go to another course, or to attend worship. Similar to, but different than other auditory signals (that don’t depend on our spatial attention) such as horns, buzzers, sirens, and the like. They can draw our attention to something that we should attend to. Which is a good thing!

I, for one, became a big fan of the Palm Pilot (I could only justify a III when I left academia, for complicated reasons). Having a personal device on which I could add and edit things like reminders in a date/time calendar fundamentally altered my effectiveness. Before, I could miss things if I disappeared into a creative streak on a presentation, paper, diagram, etc. With this, I could be interrupted and alerted that I had an appointment for something: a call, meeting, etc. I automatically attach alerts to all my calendar entries.

Granted, I pushed myself to see just how effective I could make myself. Thus, I actively cultivated my address book, notes, and reminders as well as my calendar (and still do). But this is one area that’s really continued to support my ability to meet commitments. Something I immodestly pride myself on delivering. I hate to have to apologize for missing a commitment! (I’ll add multiple reminders to critical things!) Which doesn’t mean you shouldn’t actively avoid all the unnecessary events people would like to add to your calendar, but that’s just self-preservation!

Again, reminders are just one aspect of augmenting ourselves. There are many tools we can use – creating representations, externalizing knowledge, … – but this one in particular has been a big key to improving my ability to deliver. So I am in praise of reminders, as one of the tools we can, and should, use. What helps you?

(And now I’ll tick the box on my weekly reminder to write a blog post!)

Expert in the loop

10 June 2025 by Clark

A couple of recent occurrences have prodded me to think. (Dangerous, I know!) In this case, generative AI continues to generate ;) hype and concern in close to equal measure. Which means it dominates conversations, including one I had recently with Markus Bernhardt. Then, there was a post by Simon Terry that said something related, but that doesn’t completely align. So, some thoughts arguing for having an expert in the loop.

First, Markus is a neighbor as well as an AI strategist of renown, and I’m grateful we can regularly converse. (And usually about AI!) His depth and practical experience in guiding organizations complements my long-standing fascination with AI. One item in particular was of note. We were discussing how you need a person to vet what comes out of Generative AI. And it became clear that it can’t just be anybody. It takes someone with expertise in the area to be able to determine if what’s said is true.

That would suggest that the AI is redundant. However, there are limitations to our cognition. As I’ve recounted numerous times, technology does well what we don’t, and vice-versa. So, we use tools. One of the things we do is unconsciously forget aspects of solutions that we could benefit from. Hence, for instance, checklists. In this case, Generative AI can be a thinking partner in that it can spin up a lot of ideas. (Ignoring, for the moment, issues like intellectual property and environmental costs, of course.) They may not be all good, or even accurate, but…they may be things we hadn’t recalled or even thought of. Which would be a nice complement to our thinking. It requires our expertise, but it’s a plausible role.

Now, Simon was talking about how ‘human in the loop’ perpetuates a view of humans as cogs in a machine. And I get it. I, too, worry about having people riding herd on AI. That is, for instance, AI doing the creative work, and humans taking responsibility. That’s broken. But, having AI as a thinking partner, with a human generating ideas with AI, and taking responsibility for the accuracy as well as the creativity, doesn’t seem to be problematic. (And I may be wrong, these are preliminary thoughts!)

Still, I think that just a ‘human in the loop’ could be wrong. Having an expert in the loop, as Markus suggested, may be a more appropriate situation. He pointed out a couple of ways Generative AIs can introduce errors, and it’s a known problem. We have to have a person in the loop, but who? As I recounted recently, are we just training the AI? Still, I can see a case being made that this is the right way to use AI. Not as an agent (acting on its own, *shudder*), but as a partner. Thoughts?

What does ‘evidence-informed’ mean?

3 June 2025 by Clark

We colloquially tout the Learning Development Accelerator as a society for ‘evidence-based’ practice. Or, more accurately, as ‘evidence-informed’, as Mirjam Neelen & Paul Kirschner advise us in their tome. But, what does ‘evidence-informed’ mean, in practice? Does everything you do have to align with what research tells us? What’s the practical interpretation? So, I have an admission to make.

To start, if you go to the LDA site (I just did), it says: “Explores and encourages research-aligned practices”. That is a noble goal, to be sure. Let’s be clear, however: research doesn’t cover all our particular situations. In fact, it’s unlikely to cover any of our specific situations. Much of the research we use is done on psychology undergraduates, and frequently for education purposes, e.g. K12 or higher ed. Which means it’s indicative of our general cognitive processing, but not our specific situations.

There is research on organizational learning, to be sure. It’s not always pristine laboratory conditions, as it may well be meeting real-world needs. Of course, we do see some A/B-type studies. Still, while legitimate, they’re not likely to be our particular situation. That is, our particular audience, our specific learning objectives, our timeline, our urgency, etc.

So what does one do? We must abstract the underlying principles, and reinstantiate for our circumstances. There are good overall principles, such as the benefit of generative activities and spaced retrieval practice. The nature of these, of course, such as choosing the right activities (Thiagi & Matt have a whole book on this!), and the right parameters for retrieval (we’re asking for that at Elevator9), means that we have to customize. Which means we have to test and tune. We can’t expect to get it right the first time. (Though, we’ll get better over time.)

There will be times when we’re doing something that’s far enough away that we’re kind of making it up as we go along. (An area I love, as it requires considering all the models I’ve mentally collected over the years.) Then, we may find good examples to use as guidance. Someone’s tried something, and it worked for them. If you look at the LDA Research Checklist, for instance, you’ll see that replicated research is desirable. Well, that’s the ideal. We live in the real world, however. BTW, this is a good reason to share what you learn (you may have to anonymize it, for sure): so others benefit.

So, and this is where I make an admission, there will be times where we don’t have adequate guardrails. There are times when we have only some examples, or basically we’re wading into new areas. Then, we are free, with a caveat: we can’t do what’s been shown to be wrong. For instance, learning styles. Or attention-span of a goldfish. Or any of the other myths. My take, and I require this for LDA Press as well, is that we ask for the evidence-base, but we require that submissions not violate what’s known.

So, evidence-based, research-aligned, etc, at least means avoiding what has been shown not to work. It starts from using the best evidence available to guide design, and then testing (which research also tells us to do!). Why? Because we get better outcomes. We do know that not following research is unlikely to have an impact. Learning design is, at core, a probabilistic game. Increasing the likelihood of a real impact should be what we’re about. Doing so on the basis of research is a faster and more reliable path to having an impact. Ultimately, the answer to the question “what does ‘evidence-informed’ mean?” is better outcomes. Who doesn’t want that?

What sorts of activities?

27 May 2025 by Clark

When we do learning, we must be active. That is, it’s not enough to receive information. (Unless we’re already actively practicing, and attending presentations serves as reflection.) We must do! Then the question becomes one of doing ‘what’. I’m seeing too many of the wrong sorts of things in play, so it’s worth asking: what sorts of activities should we be doing?

Cognitively, we need to perceive information to get it into working memory. From there, to get into long-term memory – and be useful – we need to elaborate and practice retrieval. Elaboration is the process whereby we strengthen connections between the new material and the familiar. This increases the likelihood of activation in context. Then, we need to practice retrieving the knowledge for use. This strengthens our ability to retrieve and apply as we need.

One thing to note is that research shows that we don’t need to practice retrieving the fact-based knowledge before practicing retrieval for actual use. Our goal for organizational learning is to use information to make meaningful decisions. Better fact-recall isn’t likely to be what will help your organization thrive. Instead, what matters is acquiring the new skills that will define the ability to adapt.

For elaboration, what we increasingly hear is about ‘generative‘ learning activities. These are when you’re taking new information, and processing it more deeply. It can involve rephrasing, visualizing, and of course connecting it to your prior experience. These activities help strengthen the information into long-term memory.

An associated task is to practice using the information. That is, putting learners into situations where they need to use the new information to make decisions that they couldn’t before. The ideal situation, of course, is mentored live practice, but…there are limitations. Individual mentoring isn’t always cost-effective. Also, live practice may have consequences for wrong answers. In many cases, we use simulations. These can be programmed, or branching scenarios. Even mini-scenarios (e.g. better-written multiple-choice questions) are a good option.

What we don’t need are fact-check questions. As above, there’s no real benefit. They may make us feel good, but they aren’t inclined to make us better at using the information. There are lots of bad practices around this. We can just use knowledge questions, thinking we’re helping learning (and not). Worse, I’ve seen many cases where they’re asking for arbitrary bits of information that weren’t highlighted. Also, too often we’re presenting way more information than people can remember at one time (or at all).

So, if we’re to design effective learning, what sorts of activities we use is an important question. We don’t need fact-checks. We do benefit from processing, and retrieval. That’s worth practicing and performing. Review your work and look at what you’re having learners do. If it’s not elaboration and retrieval, you’re wasting learners’ time and your efforts. Why do that?


Software engineer vs programmer

20 May 2025 by Clark

If you go online, you’ll find many articles that talk about the difference in roles between software engineers and programmers. In short, the former have formal training and background. And, at least in this day and age, oversee coding from a more holistic perspective. Programmers, on the other hand, do just that: make code. Now, I served in a school of computer science for a wonderful period of my life. Granted, my role was teaching interface design (and researching ed tech). Still, I had exposure to both sides. My distinction between software engineer vs programmer, however, is much more visceral.

Early in my consulting career, I was asked to partner with a company to develop learning. The topic was project management for non-project managers. They chose me because of my game design experience as well as my learning science background. The company that contracted me was largely focused on visual design. For instance, the owner also was teaching classes on that. Moreover, their most recent project was a book on the fauna of a fictitious world in the Star Wars universe (with illustrations). He also had a team of folks back in India. Our solution was a linear scenario, quite visual, set in outer space, both because of the experience of their team and the audience of engineers.

After the success of the project, the client came back and asked for a game to accompany the learning experience. Hey, no problem, it’s not like we’ve already addressed the learning objectives or anything! Still, I like games! This was going to be fun. So I dug in, cobbling together a game design. We used the same characters from the previous experience, but now focused on making project management decisions and dealing with different personality types (the subtext was, don’t be a difficult person to work with).

The core mechanic was:

  • choose the next project
  • assess any problem
  • find the responsible person
  • ask (appropriately) for the fix

Of course, the various rates of problems, stage of development and therefore person, stage and scope of the project, were all going to need tuning. In addition, we wanted the first n problems to deal with good people, to master the details, before beginning to deal with more difficult personality types.

So, from my development docs, they hired a Flash programmer to build the game. And…when we tried to iterate, we got more bugs instead of improvement. This happened twice. I realized the coders were hard-wiring the parameters throughout the code, which meant that if you wanted to tune a value, they had to search through the code to change every occurrence. Now, for those who know, this is incredibly bad programming. That approach might not be untoward for a small Flash animation, but it didn’t scale to a full game program.

We had a discussion, and they finally procured someone who actually understood the use of constants, someone with more than just a programming background. Suddenly, tweaks were returning with a short turnaround, and we could tune the experience! Thus, we were able to create a game that actually was fun. We never really got to know whether it was effective, because they hadn’t set any metrics for impact, but they were happy and touted the game in several venues. We took that as a positive outcome ;).
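To make the contrast concrete, here’s a minimal sketch (in TypeScript rather than the original Flash/ActionScript, and with made-up names and values) of hard-wired ‘magic numbers’ versus tunable values centralized as named constants:

```typescript
// Illustrative only; not the actual game's code.

// The problem: the same tuning value hard-wired wherever it's used.
// Changing the problem rate means hunting down every occurrence.
function spawnProblemHardwired(tick: number): boolean {
  return tick % 12 === 0; // ...and 12 also lurks in other functions
}

// The fix: tuning parameters live in one place, so a designer's tweak
// is a one-line change instead of a code-wide search.
const TUNING = {
  PROBLEM_INTERVAL: 12,   // ticks between new problems (assumed value)
  EASY_PROBLEM_COUNT: 5,  // first n problems use agreeable characters (assumed value)
  MAX_ACTIVE_PROJECTS: 3, // assumed value
} as const;

function spawnProblem(tick: number): boolean {
  return tick % TUNING.PROBLEM_INTERVAL === 0;
}

function useEasyCharacter(problemsSoFar: number): boolean {
  return problemsSoFar < TUNING.EASY_PROBLEM_COUNT;
}
```

With values centralized like this, each tuning iteration is quick and low-risk, which is exactly what made the later rounds of tweaking workable.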

The take-home lesson, of course, is if you need tuning (and, for anything of sufficient size and user-facing, you will), you need someone who understands proper code structures. I’ll always ask for someone who understands software engineering, not just a programmer. There’s a reason that a) they’re known as ‘cowboy coders’, and b) there’s software process! That’s my personal definition of a software engineer vs programmer, and I realize it’s out of date in this era of increasingly complex software. Still, the value of structure and process isn’t restricted to software, and is ever more important, eh?
