Learnlets

Clark Quinn’s Learnings about Learning

Search Results for: groupie

Locus of intelligence

6 May 2025 by Clark

I’m not a curmudgeon, or even anti-AI (artificial intelligence). To the contrary! Yet, I find myself in a bit of a rebellion in this ‘generative’ AI era. And I’m wondering why. The hype, of course, bugs me. But it occurs to me that a core problem may reside in where we put the locus of intelligence. Let me try to make it clear.

In the early days of the computer (even before my time!), the commands were to load values from memory into registers, conduct Boolean operations on them, and display the results. The commands to do so were at the machine level. We went a level above with assembly language, a translation of those machine instructions into somewhat more comprehensible terms. As we went along, we put more and more of the onus on the machine, because we had more processor cycles, better software, etc. We’re largely to the point where we can stipulate what we want, and the machine will code it!

There are limits. When Apple released the Newton, they tried to put the onus on the machine to read human writing. In short, it didn’t work. Palm’s Pilots succeeded because Jeff Hawkins went for Graffiti as the language, which shared the responsibility between person and processor. Nowadays we can do speech and text recognition, but there are still limitations. Yes, we have made advances in technology, but some of that comes from offloading the work to non-local machines, and there are still instances where it fails.

I think of this when I think of prompt engineering. We’ve trained LLMs with vast quantities of information. But, to get it out, you have to ask in the right way! Which seems like a case of having us adapt to the system instead of vice versa. You have to give them heaps more context than a person would need, and they can still hallucinate.
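
To make the asymmetry concrete, here’s a minimal sketch: the same request, bare versus wrapped in all the context a colleague wouldn’t need. The query_llm function is a hypothetical stand-in for whichever provider you use, not a real API.

    def query_llm(prompt: str) -> str:
        ...  # hypothetical stand-in: imagine a call to your provider of choice

    bare = "Summarize the findings."

    engineered = (
        "You are an analyst reviewing a learning-science study. "
        "Audience: instructional designers without statistics training. "
        "Summarize the findings below in three bullet points, and flag "
        "any claims the data doesn't support.\n\n"
        "FINDINGS: ..."
    )
    # The second prompt reliably does better, which is the point: the burden
    # of making the exchange work sits with the person.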

I’m reminded of a fictional exchange I recently read (of course I can’t find it now), where the AI user is being advised to know the domain before asking the AI. When the user queries why they would need the AI if they know the domain, they’re told they’re training the AI!

As people investigate AI usage, one of the findings is that your existing intelligence indicates how much benefit you’ll get out of this version of AI. If you’re already a critical thinker, it’s a good augment. If you’re not, it doesn’t help (and may hinder).

Sure, I have problems with the business models (much not being accounted for: environmental cost, IP licensing, security, VC boosting). But I’m more worried about people depending too much on these systems without truly understanding what the limitations are. The responsible folks I know advocating for AI always suggest having a person in the loop. Which is problematic if you’re giving such systems agency; it’ll be too late if they do something wrong!

I think experimenting is fine. I think it’s also still too early to place a bet on a long-term relationship with any provider. I’m seeing more and more AI tools, e.g. content recommenders, simulation avatars, and the like. Like with the LMS, when anyone who could program a database would build one, I’m seeing everyone wanting to get in on the goldrush. I fear that many will end up losing their shirts. Which is, I suppose, the way of the world.

I continue to be a big fan of augmenting ourselves with technology. I still think we need to consider AI a tool, not a partner. It’s nowhere near being our intellectual equal. It may know more, but it still has limitations overall. I want to develop, and celebrate, our intelligence. I laud our partnership with technologies that augment what we do well with what we don’t. It’s why mobile became so big, why AI has already been beneficial, and why generative AI will find its place. It’s just that we can’t allow the hype to blind us to the real locus of intelligence: us.

Intelligent Tutoring via Models

22 April 2025 by Clark

Today I read that Anthropic has released Claude for Education (thanks, David ;). And, it triggered some thinking. So, I thought I’d share. I haven’t fully worked out my thoughts, so this is preliminary. Still, here’re some triggered reflections on Intelligent Tutoring via models.

[Image: intelligent tutoring system architecture, with an AI underpinning; learner, tutoring, and content models; and a user-system interface.]

So, as I’ve mentioned, I’ve been an AI groupie. Which includes tracking the AI and education field, since that’s the natural intersection of my interests. Way back when, Stellan Ohlsson abstracted the core elements of an intelligent tutoring system (ITS), which include a student (learner) model, a domain (expert on the content) model, and an instruction (tutoring) model. So, a student with a problem takes an action, and then we see what an expert in the domain would do. From that basis, the pedagogy determines what to do next. They’ve been built, they work in research, and they’ve even been successfully deployed in the real world (see Carnegie Learning).
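
For the code-minded, here’s a minimal sketch of that loop; all the class details and the toy ‘domain’ are my own illustrative assumptions, not any particular system’s design.

    class DomainModel:
        """Expert knowledge: what a competent solver would produce."""
        def expert_answer(self, problem):
            return sorted(problem)  # toy 'expertise': sort a list

    class LearnerModel:
        """Running estimate of the learner's mastery."""
        def __init__(self):
            self.correct, self.attempts = 0, 0
        def update(self, was_correct):
            self.attempts += 1
            self.correct += was_correct
        def mastery(self):
            return self.correct / max(self.attempts, 1)

    class TutoringModel:
        """Pedagogy: pick the next move from the diagnosis."""
        def next_move(self, learner, was_correct):
            if not was_correct:
                return "hint"  # remediate on error
            return "advance" if learner.mastery() > 0.8 else "practice"

    def tutor_turn(problem, learner_answer, domain, learner, pedagogy):
        was_correct = (learner_answer == domain.expert_answer(problem))
        learner.update(was_correct)
        return pedagogy.next_move(learner, was_correct)

    learner = LearnerModel()
    print(tutor_turn([3, 1, 2], [1, 2, 3], DomainModel(), learner, TutoringModel()))  # -> 'advance'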

Now, I’ve largely been pessimistic about the generative AI field, for several reasons. These include that it’s:

  • evolutionary, not revolutionary (more and more powerful processors using slight advances on algorithms yield a quantum bump)
  • predicated on theft and damage (IP and environmental issues)
  • likely to lead to ill use (laying off folks to reduce costs for shareholder returns)
  • based upon biz models boosted by VC funds and as yet still volatile (e.g. don’t pick your long-term partners yet)

Yet, I’ve been upbeat about AI overall, so it’s mostly the hype and the unresolved issues that are bugging me. So, seeing the features touted for this new system made me think of a potential way in which we might get the desired output. Which is how I (and we) should evolve.

As background, several decades back I was leading a team developing an adaptive learning system. The problem with ITSs is that the content model is hard to build: you have to capture how experts reason in the field, and then model that through symbolic rules. In this instance I had the team focus on the tutoring model instead, and used a content model based upon learning objects, with the relationships between them capturing the knowledge. Thus, you had to be careful in the content development. (This was an approach we got running. A commercial company subsequently brought it to market successfully a decade after our project. Of course, our project was burned to the ground by greed and ego.)

So, what I realized is that, with the right constraints, you could perhaps do an intelligent tutoring system. So, first, the learner model might be primed by a pre-test, but is built by learner actions. The content model could come from training on textbooks. You could do either symbolic processing of the prose (a task AI can do), or a machine learning (e.g. LLM) version built by training. Then, the tutoring model could be symbolic, capturing the best of our rules, or trained on a (procured, not stolen) database of interventions (something Kaplan was doing, for instance). (In our system, we wrote rules, but had parameters that could be tuned by machine learning over time to get better.)
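
Here’s a sketch of what that could look like, with invented specifics: the content model is just learning objects plus prerequisite relations, and the tutoring rule carries the kind of weights that machine learning could tune over time.

    # Content model: learning objects, with the knowledge captured in the
    # prerequisite relationships between them (all specifics invented).
    prerequisites = {
        "fractions": ["division"],
        "division": ["multiplication"],
        "multiplication": [],
    }

    # Tutoring-rule parameters: the bit that could be ML-tuned over time.
    weights = {"mastery": 0.5, "recency": 0.5}

    def next_object(mastery, last_seen):
        """Pick an unmastered object whose prerequisites are mastered."""
        def ready(obj):
            return all(mastery.get(p, 0.0) >= 0.8 for p in prerequisites[obj])
        candidates = [o for o in prerequisites
                      if mastery.get(o, 0.0) < 0.8 and ready(o)]
        # prefer nearly-mastered objects that haven't been seen recently
        return max(candidates,
                   key=lambda o: weights["mastery"] * mastery.get(o, 0.0)
                               - weights["recency"] * last_seen.get(o, 0),
                   default=None)

    print(next_object({"multiplication": 0.9}, {}))  # -> 'division'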

My thought was that, in short, we can start having cross-domain tutoring. We can have a good learning model, and use the auto-categorization of content. Now, this does raise the problem of knowledge versus skills, which I still worry about. (And, continue to look at.) Still, it appears that this particular solution is looking at the opportunity. I’ll be keen to see how it goes; maybe we can have learning support. If we blend this and a coaching engine…maybe the dream I articulated a long time ago might come to fruition.

AI as a System

7 May 2024 by Clark

At the recent Learning Guild’s Learning & HR Tech Solutions conference, the hot topic was AI. To the point where I was thinking we need a ‘contrarian’ AI event! Not that I’m against AI (I mean, I’ve been an AI groupie for literal decades); I think it’s got well-documented upsides. (And downsides.) Just right now, however, I feel that there’s too much hype, and I’m waiting for the trough of disillusionment to hit, à la the Gartner hype cycle. In the meantime, though, I really liked what Markus Bernhardt was saying in his session. It’s about viewing AI as a system, though of course I had a pedantic update to it ;).

So, Markus’ point was that we should separate the data from the processing method. He presented a simple model for thinking about AI that I liked, proposing three pieces, which I paraphrase:

  • the information you use as your basis
  • the process you use with the information
  • and the output you achieve

Of course, I had a quibble, and ended up diagramming my own way of thinking about it. Really, it only adds one thing to his model, an input! Why?

So I have the AI system containing the process and the data it operates on. I like that separation, because you can use the same process on other data, or vice versa. As the opening keynote speaker, Maurice Conti, pointed out, the AI’s not biased, the data is. Having good data is important. As is having the right process to achieve the results you want (that is, a good match between problem and process; the results are the results ;). Are you generating or discriminating, for instance? Then, Markus said, you get the output, which could be prose, an image, a decision, …

However, I felt that it’s important to also talk about the input. What you input determines the output. With different queries, for instance, you can get different results. That’s what prompt engineering is all about! Moreover, your output can then be part of the input in an iterative step (particularly if your system retains some history). Thus, thinking about the input separately is, to me, a useful conceptual extension.
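
Here’s a toy rendering of that framing (purely illustrative): the system pairs a process with the data it operates on, the input is what you feed it on each use, and the output is what comes back, possibly feeding the next input.

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class AISystem:
        data: Any                           # what the system was built on
        process: Callable[[Any, Any], Any]  # the method applied to data + input

        def run(self, user_input: Any) -> Any:
            # the added piece: the input, distinct from process and data
            return self.process(self.data, user_input)

    # the same process on different data (or vice versa) is a different system
    corpus = {"dog": "animal", "oak": "plant"}
    classifier = AISystem(corpus, lambda data, query: data.get(query, "unknown"))
    print(classifier.run("dog"))  # -> 'animal'; a different input changes the output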

It may seem a trivial addition, but I think it helps to think about how to design inputs. Just as we align process with data for task, we need to make sure that the input matches to the process to get the best output. So, maybe I’m overcomplicating thinking about AI as a system. What say you?

A brief AI overview?

7 November 2023 by Clark

At the recent and always worthwhile DevLearn conference, I was part of the panel on Artificial Intelligence (AI). Now, I’m not an AI practitioner, but I have been an AI groupie for, well, decades. So I’ve seen a lot of the history, and (probably mistakenly) think I have some perspective. So I figured I’d share my thoughts, giving a brief AI overview.

Just as background, I took an AI course as an undergrad, to start. Given the focus on thinking and tech (two passions), it’s a natural. I regularly met my friend for lunch after college to chat about what was happening. When I went to grad school, while I was with a different advisor, I was in the same lab as David Rumelhart. That happened to be just at the time he was leading his grad students on the work that precipitated the revolution to neural nets. There was a lot of discussion of different ways to represent thinking. I also got to attend an AI retreat, sponsored by MIT, and met folks like John McCarthy, Ed Feigenbaum, Marvin Minsky, Dan Dennett, and more! Then, as a faculty member in computer science, I had a fair affiliation with the AI group. So, some exposure.

So, first, AI is about using computer technology to model intelligence. Usually human intelligence, as a cognitive science tool, but occasionally just to do smart things by any means possible. Further, I feel reasonably safe to say that there are two major divisions in AI: symbolic and sub-symbolic. The former dominated AI for several decades, and this is where a system does formal reasoning through rules. Such systems do generate productive results (e.g. chatbots, expert systems), but ultimately don’t do a good job of reflecting how people really think. (We’re not formal logical reasoners!)

As a consequence, sub-symbolic approaches emerged, trying new architectures to do smart things in new ways. Neural nets ended up showing good results. They find use in a couple of different ways. One is to set them loose on some data, and see what they detect. Such systems can detect patterns we don’t, and that’s proven useful (what’s known as unsupervised learning).
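
As a toy illustration of that first, unsupervised, use (my example, not anyone’s production method): no labels, just data, and the algorithm finds the structure, here two clusters, on its own.

    def kmeans_1d(points, k=2, iters=10):
        """Cluster 1-D points into k groups, with no labels given."""
        centers = points[:k]  # naive initialization
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:  # assign each point to its nearest center
                clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
            centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]  # re-center on the means
        return centers

    print(kmeans_1d([1.0, 1.2, 0.8, 9.9, 10.1, 10.0]))  # -> [1.0, 10.0]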

The other is to give them a ‘training set’ (also known as supervised learning), a body of data about inputs and decisions. You provide the inputs, and give feedback on the decisions until they make them in the same way. Then they generalize to decisions that they haven’t had training on. It’s also the basis of what’s now called generative AI, programs that are trained on a large body of prose or images, and can generate plausible outputs of same. Which is what we’re now seeing with ChatGPT, DALL-E, etc. Which has proven quite exciting.
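
And a matching sketch of the supervised case, again a toy of my own: a perceptron gets inputs plus the right answers, nudges its weights on each error, and then generalizes to an input it never saw.

    def train(examples, epochs=20, lr=0.1):
        """Learn a linear decision rule from labeled examples."""
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), label in examples:  # the label is the feedback
                pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = label - pred            # nonzero only on a mistake
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    # learn 'large values -> class 1' from four labeled points
    w, b = train([((0, 0), 0), ((1, 0), 0), ((5, 5), 1), ((6, 4), 1)])
    print(1 if w[0] * 7 + w[1] * 6 + b > 0 else 0)  # unseen (7, 6) -> 1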

There are issues of concern with each. Symbolic systems work well in well-defined realms, but are brittle at the edges. In supervised learning, the legacy databases unfortunately often have biases, and thus the resulting systems have these biases too! (For instance, housing loan data have shown bias.) They also don’t understand what they’re saying. So generative AI systems can happily tout learning styles from the corpus of data they’ve ingested, despite scientific evidence to the contrary.

There are issues in intellectual property, when the data sources receive neither acknowledgement nor recompense. (For instance, this blog has been used for training a sold product, yet I haven’t received a scintilla of return.) People may lose jobs if they’re currently doing something that AI can replace. While that’s not bad in itself (that is, don’t have people do boring rote stuff), it needs to be done in a way that doesn’t leave those folks destitute. There should be re-skilling support. There are also climate costs from the massive power requirements of such systems. Finally, such systems are being put to use in bad ways (e.g. fakes). It’s not surprising, but we really should develop the guardrails before these tools reach release.

To be fair, there are some great opportunities out there. Generative AI can produce some ideas you might not have thought of. The only problem is that some of them may be bad. Which brings me to my final point. I’m more a fan of Augmenting Intellect (à la Engelbart) than I am of Artificial Intelligence. Such systems can serve as a great thinking partner! That is, they support thinking, but they also need scrutiny. Note that there can be combinations, such as hybrids of unsupervised and supervised, and symbolic with sub-symbolic.

With the right policies, AI can be such a partner. Without same, however, we open the doors to substantial risks. (And, a few days after first drafting this, the US Gov announced an approach!) I think having a brief AI overview provides a basis for thinking usefully about how to use these systems successfully. We need to be aware to avoid the potential problems. I hope this helps, and welcome your corrections, concerns, and questions.

Vale Roger Schank

3 February 2023 by Clark

I’d first heard of Roger Schank’s work as an AI ‘groupie’ during my college years. His contributions to cognitive science have been immense. He was a challenging personality and intellect, and yet he fought for the right things. He passed away yesterday, and he will be missed.

Roger’s work connected story to cognition. He first saw how we have expectations about events owing to his experience at a restaurant with an unusual approach. At Legal Seafoods (at the time) you paid before being served (more like fast food than a sit-down venue). Surprised, Roger realized that there must be cognitive structures for events that were similar to the proposed schemas for things. He investigated the phenomenon computationally, advancing artificial intelligence and cognitive science. Roger subsequently applied his thinking to education, writing Engines for Education (amongst other works), while leading a variety of efforts in using technology to support learning. He also railed against AI hype, accurately of course. I was a fan.

I heard Roger speak at a Cog Sci conference I attended to present part of my dissertation research. The controversy around his presentation caused the guest speaker, Stephen Jay Gould, to comment “you guys are weird”! His reputation preceded him; I had one of his PhD graduates on a team and he told me Roger was deliberately tough on them, saying “if you can survive me, you can survive anyone”.

I subsequently met up with Roger at several EdTech events hither and yon. In each he was his fiery, uncompromising self. Yet, he was also right. He was a bit of a contradiction: opinionated and unabashed, but also generous and committed to meaningful change. He also was a prodigious intellect; if you were as smart as him, I guess you had a reason to be self-confident. I got to know him a bit personally at those events, and then when he engaged me for advice to his company. He occasionally would reach out for advice, and would always offer the same.

He could be irritating in his deliberate lack of social graces, but he was willing to learn, and had a good heart. In return, I learned a lot from him, and use some of his examples in my presentations. It was an honor to have known him, and the world will be a little duller, and probably a little dumber, without him. Rest in peace.

New recommended readings

8 June 2021 by Clark

[Image: my near book shelf]

Of late, I’ve been reading quite a lot, and I’m finding some very interesting books. Not all have immediate take-homes, but I want to introduce a few to you with some notes. Not all will be relevant, but all are interesting and even important. I’ll also update my list of recommended readings. So here are my new recommended readings. (With Amazon Associates links: support your friendly neighborhood consultants.)

First, of course, I have to point out my own Learning Science for Instructional Designers. A self-serving pitch confounded with an overload of self-importance? Let me explain. I am perhaps overly confident that it does what it says, but others have said nice things. I really did design it to be the absolute minimum reading that you need to have a scrutable foundation for your choices. Whether it succeeds is an open question, so check out some of what others are saying. As to self-serving, unless you write an absolute mass best-seller, the money you make off books is trivial. In my experience, you make more money giving it away to potential clients as a better business card than you do on sales. The typically few hundred dollars I get a year for each book aren’t going to solve my financial woes! Instead, it’s just part of my campaign to improve our practices.

So, the first book I want to recommend is Annie Murphy Paul’s The Extended Mind. She writes about new facets of cognition that open up a whole area for our understanding. Written by a journalist, it is compelling reading. Backed by science, it’s valuable as well. In the areas I know and have talked about, e.g. emergent and distributed cognition, she gets it right, which leads me to believe the rest is similarly spot on. (Also, there’s her previous track record; I mind-mapped her talk on learning myths at a Learning Solutions conference.) Well-illustrated with examples and research, she covers embodied cognition, situated cognition, and socially distributed cognition, all important. Moreover, there’re solid implications for the redesign of instruction. I’ll be writing a full review later, but here’s an initial recommendation on an important and interesting read.

I’ll also alert you to Tania Luna’s and LeeAnn Renninger’s Surprise. This is an interesting and fun book that, instead of focusing on learning effectiveness, looks at the engagement side. As their subtitle suggests, it’s about how to Embrace the Unpredictable and Engineer the Unexpected. While the first bit of that is useful personally, it’s the latter that provides lots of guidance about how to take our learning from events to experiences. Using solid research on what makes experiences memorable (hint: surprise!) and illustrative anecdotes, they point out systematic steps that can be used to improve outcomes. It’s going to affect my Make It Meaningful work!

Then, without too many direct implications, but intrinsically interesting, is Lisa Feldman Barrett’s How Emotions Are Made. Recommended to me, this book is more for the cog sci groupie, but it does a couple of interesting things. First, it creates a more detailed yet still accessible explanation of the implications of Karl Friston’s Free Energy Theory. Barrett talks about how those predictions are working constantly and at many levels in a way that provides some insights. Second, she then uses that framework to debunk the existing models of emotions. The experiments with people recognizing facial expressions of emotion get explained in a way that makes clear that emotions are not the fundamental elements we think they are. Instead, emotions are social constructs! Which undermines, BTW, all the facial recognition of emotion work.

I also was pointed to Tim Harford’s The Data Detective, and I do think it’s a well done work about how to interpret statistical claims. It didn’t grip me quite as viscerally as the aforementioned books, but I think that’s because I (over-)trust my background in data and statistics. It is a really worthwhile read about some simple but useful rules for being a more careful reviewer of statistical claims. While focused on parsing the broader picture of societal claims (and social media hype), it is relevant to evaluating learning science as well.

I hope you find my new recommended readings of interest and value. Now, what are you recommending to me? (He says, with great trepidation. ;)

Artificial Intelligence or Intelligence Augmentation

12 April 2017 by Clark

In one of my networks, a recent conversation has been on Artificial Intelligence (AI) vs Intelligence Augmentation (IA). I’m a fan of both, but my focus is more on the IA side. It triggered some thoughts that I penned to them and thought I’d share here [notes to clarify inserted with square brackets like this]:

As context, I’m an AI ‘groupie’, and was a grad student at UCSD when Rumelhart and McClelland were coming up with PDP (parallel distributed processing, aka connectionist or neural networks). I personally was a wee bit enamored of genetic algorithms, another form of machine learning (but a bit easier to extract semantics from, or maybe just simpler for me to understand ;).

Ed Hutchins was talking about distributed cognition at the same time, and that remains a piece of my thinking about augmenting ourselves. We don’t do it all in our heads, so what can be in the world and what has to be in the head? [the IA bit, in the context of Doug Engelbart]

And yes, we were following fuzzy logic too (our school was definitely on the left-coast of AI ;). Symbolic logic was considered passé! Maybe that’s why Zadeh [progenitor of fuzzy logic] wasn’t more prominent here (making formal logic probabilistic may have seemed like patching a bad core premise)? And I managed (by hook and crook, courtesy of Don Norman ;) to attend an elite AI convocation held at an MIT retreat with folks like McCarthy, Dennett, Minsky, Feigenbaum, and other lights of both schools. (I think Newell was there, but I can’t state that for certain.) It was groupie heaven!

Similarly, it was the time of the emergence of ‘situated cognition’ too (a contentious debate, with proponents like Greeno and even Bill Clancey, while old-school symbolists like Anderson and Simon argued to the contrary). Which reminds me of Harnad’s Symbol Grounding problem, a much meatier objection to real AI than Dreyfus’s or the Chinese room concerns, in my opinion.

I do believe we ultimately will achieve machine consciousness, but it’s much further out than we think. We’ll have to understand our own consciousness first, and that’s going to be tough, MRI and other such research notwithstanding. And it may mean simulating our cognitive architecture on a sensor-equipped processor that must learn through experimentation and feedback as we do, e.g. taking a few years just to learn to speak! (“What would it take to build a baby” was a developmental psych assignment I foolishly attempted ;)

In the meantime, I agree with Roger Schank (I think he was at the retreat too) that most of what we’re seeing, e.g. Watson, is just fast search, or pattern-learning. It’s not really intelligent, even if it’s doing it like we do (the pattern learning). It’s useful, but it’s not intelligent.

And, philosophically, I agree with those who have stated that we must own the responsibility to choose what we take on and what we outsource. I’m all for self-driving vehicles, because the alternative is pretty bad (tho’ could we do better in driver training or licensing, like in Germany?). And I do want my doctor augmented by powerful rote operations that surpass our own abilities, and also by checklists and policies and procedures, anything that increases the likelihood of a good diagnosis and prescription. But I want my human doctor in the loop. We still haven’t achieved the integration of pattern-matching and exception-handling that our own cognitive processor provides.

AI and Learning

7 October 2015 by Clark

At the recent DevLearn, Donald Clark talked about AI in learning, and while I largely agreed with what he said, I had some thoughts and some quibbles. I discussed them with him, but I thought I’d record them here, not least as a basis for a further discussion.

Donald’s an interesting guy, very sharp and a voracious learner, and his posts are both insightful and inciteful (he doesn’t mince words ;). Having built and sold an elearning company, he’s now free to pursue what he believes in, and that’s currently the power of technology to teach us.

As background, I was an AI groupie out of college, and have stayed current with most of what’s happened. So I know a bit of the history of the rise of Intelligent Tutoring Systems, the problems with developing expert models, and current approaches like Knewton and Smart Sparrow. I haven’t been free to follow the latest developments as much as I’d like, but Donald gave a great overview.

He pointed to systems being on the verge of auto-parsing content and developing learning around it. He showed an example: dropping in a page about Las Vegas, the system created questions. He also showed how systems can adapt individually to the learner, and discussed how this would be able to provide individual tutoring without many limitations of teachers (cognitive bias, fatigue), and can not only personalize but self-improve and scale!

One of my short-term problems was that the questions auto-generated were about knowledge, not skills. While I do agree that knowledge is needed (à la van Merriënboer’s 4C/ID) as well as applying it, I think focusing on the latter first is the way to go.

This goes along with what Donald has rightly criticized as problems with multiple-choice questions. He points out how they’re largely used as knowledge tests, and I agree that’s wrong. But while there are better practice situations (read: simulations/scenarios/serious games), you can write multiple-choice questions as mini-scenarios and get good practice. However, it’s as yet an interesting research problem, to me, to try to get good scenario questions out of auto-parsing content.
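
To illustrate the distinction with an invented example, here’s the same topic as a knowledge check versus a mini-scenario; the decision-in-context in the second is exactly what an auto-parser would have to learn to generate.

    # A recall item: tests whether you know the procedure.
    knowledge_item = {
        "stem": "What is the first step in the complaint-handling procedure?",
        "options": ["Acknowledge", "Escalate", "Document"],
        "answer": "Acknowledge",
    }

    # A mini-scenario: sets a context and asks for a decision.
    mini_scenario = {
        "stem": ("A customer emails that your product deleted their data, "
                 "and they're furious. What do you do first?"),
        "options": [
            ("Acknowledge the problem and ask for details to reproduce it", True),
            ("Forward the email to engineering and wait", False),
            ("Point out that data loss is covered in the terms of service", False),
        ],
    }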

I naturally argued for a hybrid system, where we divvy up roles between computer and human based upon what we each do well, and he said that is what he is seeing in the companies he tracks (and funds, at least in some cases). A great principle.

The last bit that interested me was whether and how such systems could develop not only learning skills, but meta-learning or learning-to-learn skills. Real teachers can develop this (while admittedly it’s rare), and yet it’s likely to be the best investment. In my activity-based learning, I suggested that gradually learners should take over choosing their activities, to develop their ability to become self-learners. I’ve also suggested how it could be layered on top of regular learning experiences. I think this will be an interesting area for developing learning experiences that are scalable but truly develop learners for the coming times.

There’s more: pedagogical rules, content models, learner models, etc., but we’re finally getting close to being able to build these sorts of systems, and we should be aware of what the possibilities are, understand what’s required, and be on the lookout for both the good and the bad on tap. So, what say you?

Mac memories

21 January 2014 by Clark

This year is the 30th anniversary of the Macintosh, and my newspaper asked for memories.  I’ll point them to this post ;).

As context, I was programming for the educational computer game company, DesignWare. DesignWare had started out doing computer games to accompany K12 textbooks, but I (not alone) had been arguing for heading into the home market, and happened to run into Bill Bowman and David Seuss at a computer conference, who’d started Spinnaker to sell education software to the home market, and were looking for companies that could develop product. I told them to contact my CEO, and as a reward I got to do the first joint title, FaceMaker. When DesignWare created its own titles, I got to do Creature Creator and Spellicopter before I headed off to graduate school for my Ph.D. in what ended up being, effectively, applied cognitive science.

While I was at DesignWare, I had been a groupie of Artificial Intelligence and a nerd around all things cool in computers, so I was a fan of the work going on at Xerox Palo Alto Research Center (aka Parc), and followed along in Byte magazine. (I confess that, at the time, I was a bit young to have been aware of the mother of all demos by Doug Engelbart and the inspiration of the Parc work.) So I lusted after bitmap screens and mice, and the Lisa (the Mac predecessor).

My Ph.D. advisor, Donald Norman, had written about cognitive engineering, and the research lab I joined was very keen on interface design (leading to Don’s first mass-market and must-read book, The Psychology of Everyday Things, subsequently titled The Design of Everyday Things, and a compendium of writings called User-Centered System Design). He was, naturally, advising Apple. So while I dabbled in meta-learning, I was right there at the heart of thinking around interface design.

Naturally, if you cared about interface design, had designed engaging graphic interfaces, and had watched how badly the IBM PC botched the introduction of the work computer, you really wanted the Macintosh.  Command lines were for those who didn’t know better.  When the Macintosh first came out, however, I couldn’t justify the cost.  I had access to Unix machines and the power of the ARPANET.  (The reason I was originally ho-hum about the internet was that I’d been playing with Gopher and WAIS and USENET for years!)

I finally justified the purchase of a Mac II to write my PhD thesis on. I used Microsoft Word, and with the styles option was able to meet the rigorous requirements of the library for theses without having to pay someone to type it for me (a major victory in the small battles of academia!). I’ve been on a Macintosh ever since, and have survived the glories of iMacs and Duos (and the less-than-stellar Performa). And I’ve written books, created presentations, and brainstormed through diagrams in ways I just haven’t been able to on other platforms. My family is now also on Macs. When the alternative can be couched as the triumph of marketing over matter, there really has been little other choice. Happy 30th!

How I became a learning experience designer

25 January 2010 by Clark

Not meaning this to be a sudden spate of reflectiveness, given my last post on my experience with the web, but Cammy Bean has asked when folks became instructional designers, and it occurs to me to capture my rather twisted path in the hope of clarifying the filters I bring to thinking about design.

It starts as a kid; as Cammy relates, I didn’t grow up thinking I wanted to be a learning designer. Besides a serious several years being enchanted with submarines (still am, in theory, but realized I probably wouldn’t get along with the Navy, for my own flaws), I always wanted to have a big desk covered with cool technology, exploring new ideas. I wasn’t a computer geek back then (the computer club in high school sent off programs to the central office to run and received the printout a day or so later), but rather a science geek, reading Popular Science and spending hours on the floor looking at the explanatory diagrams in the World Book (I’m pretty clearly a visual conceptual learner :). And reading science fiction. I did have a bit of an applied bent, however, with a father who was an engineer and could fix anything, who helped my brother and me work on our cars and things.

When I got to UCSD (just the right distance from home, and near the beach), my ambition to be a marine biologist was extinguished as the bio courses were both rote memorization and cut-throat pre-med, neither of which inspired me (my mom was an emergency room nurse, and I realized early on that I wasn’t cut out for blood and gore). I took some computer science classes with a buddy and found I could do the thinking (what with, er, distractions, I wasn’t the most diligent student, but I still managed to get pretty good grades). I also got a job tutoring calculus, physics, and chemistry with the campus office for some extra cash, and took some learning classes. I got interested in artificial intelligence, too, and was a bit of a groupie around how we think, and really cool applications of technology.

I somehow got the job of computer support for the tutoring office, and that’s when a light went on about the possibilities of computers supporting learning. There wasn’t a degree program in place, but I found out my college allowed you to specify your own major, and I convinced Provost Stewart and two advisors (Mehan & Levin) to let me create my own program. Fortunately, I was able to leverage the education classes I’d taken for tutoring, the computer science classes I’d also taken, and actually got out faster than any program I’d already dabbled in! (And got to do that cool ’email for classroom discussion’ project with my advisors, in 1979!)

After calling around the country trying to find someone who needed a person interested in computers for learning, I finally got hooked up with Jim Schuyler, who had just started a company doing computer games to go along with textbook publishers’ offerings. I eventually managed to hook DesignWare up with Spinnaker to do a couple of home games for them before Jim had DesignWare start producing its own home games (I got to do two cool ones, FaceMaker and Spellicopter, as well as several others).

However, I had a hankering to go back to graduate school and get an advanced degree. As I wrestled with how to design the interfaces for games, I read an article calling for a ‘cognitive engineering’, and contacted the author about where I might study this. Donald Norman ended up letting me study with him.

The group was largely focused on human-computer interaction, but I maintained my passion for learning solutions. I did a relatively mainstream PhD, but while focusing on the general cognitive skill of analogical reasoning, I also attempted an intervention to improve that reasoning.

Though it was a cognitive group, I was eclectic, and looked at every form of learning. In addition to the cognitive theories that were in abundance, I took, and TA’d for, the behavioral learning courses. David Merrill was visiting nearby, and graciously allowed me to visit him for a discussion (as well as reading Reigeluth’s edited overview of instructional design theories). Michael Cole was a big fan of Vygotsky, and I was thereby steeped in the social learning theories. David Rumelhart and Jay McClelland were doing the connectionist/PDP work while I was a student, so I got that indoctrination as well. And, as an AI groupie, I even looked at machine learning!

I subsequently did a postdoc at the University of Pittsburgh’s Learning Research & Development Center, where I was further steeped in cognitive learning theory, before heading off to UNSW to teach interaction design and start doing my own research, which ended up being very much applied, essentially an action- or design-research approach. My subsequent activities have also been very much applications of a broad integration of learning theory into practical yet innovative design.

The point being, I never formally considered myself an instructional designer so much as a learning designer. Having worked on non-formal education in many ways, as well as teaching in higher education, my applications have crossed formal instruction and informal learning. As the interface design field was very much exploring subjective experiences at the time I was a graduate student, and from my game design experience, I very naturally internalized a focus on engaging learning, believing that learning can, and should, be hard fun.

I’ve synthesized the eclectic frameworks into a coherent learning design model that I can apply across technologies, and strongly believe that a solid grounding in conceptual frameworks, combined with experiences that span a range of technologies and learning outcomes, is the best preparation for a flexible ability to design experiences that are effective and engaging. Passionate as I am about learning, I do think we could do a better job of providing the education that’s needed to help make that happen, and I still look for ways to help others learn (one of my employees once said that working with me was like going to grad school, and I do try to educate clients, in addition to running workshops and continuing to speak).

And I’ve ended up, as I dreamed of, with a desk covered with cool technology where I get to explore new ideas: designing solutions that integrate the cutting edge of devices, tools, models, and frameworks, all to help people achieve their goals. I continue to think ahead about what new possibilities are out there, and work to improve what’s happening. I love learning experience design (and the associated strategic thinking to make it work), believe there’s at least some evidence that I do it pretty well, and hope to keep doing it myself and helping others do it better. Who’s up for some hard fun?
