Learnlets

Clark Quinn’s Learnings about Learning

Roger Schank #learntech2015 Keynote Mindmap

2 November 2015 by Clark

Roger gave a passionate, opinionated, irreverent, and spot-on talk to kick off LearnTechAsia. He covered the promise (or not) of AI, learning, stories, and the implications for education.

Non-invasive Brain Surgery

28 October 2015 by Clark

Changing behavior is hard. The brain is arguably the most complex thing in the known universe, and simplistic approaches aren’t likely to work. To rewire it, one approach is to try surgery. This is problematic for several reasons: it’s dangerous, it’s messy, and we really don’t understand enough about it. What’s a person to do?

Well, we do know that the brain can rewire itself, if we do it right. This is called learning. And if we design learning, e.g., instruction, we can potentially change the brain without surgery. However (and yes, this is my point), treating it as anything less than brain surgery (or rocket science) isn’t doing justice to what’s known and what’s to be done.

The list of ways to get it wrong is long. Information dump instead of skills practice. Massed practice instead of spaced. Rote knowledge assessment. Lack of emotional engagement. The list goes on. (Cue the Serious eLearning Manifesto.) In short, if you don’t know what you’re doing, you’re likely doing it wrong and are not going to have an effect. Sure, you’re not likely to kill anyone (unless you’re doing this where it matters), but you’ll waste money and time. Scandalous.

Again, the brain is complex, and consequently so is learning design. So why, in the name of sense and money, do we treat it as trivial? Why would anyone buy a story that we can achieve anything meaningful by taking content and adding a quiz (read: rapid eLearning)? As if a quiz is somehow going to make people do better. Who would believe that just anyone can present material and learning will occur? (Do you know the circumstances when that will work?) And really, throwing fuzzy objects around the room and ice-breakers will somehow make a difference? Please. If you can afford to throw money down the drain (OK, if you insist, throw it here ;), and don’t care if any meaningful change happens, I pity you, but I can’t condone it.

Let’s get real. Let’s be honest. There’s a lot (a lot) of things being done in the name of learning that are just nonsensical. I could laugh, if I didn’t care so much. But I care about learning. And we know what leads to learning. It’s not easy. It’s not even cheap. But it will work. It requires good analysis, and some creativity, and attention to detail, and even some testing and refinement, but we know how to do this.

So let’s stop pretending. Let’s stop paying lip-service. Let’s treat learning design as the true blend of art and science that it is. It’s not the last refuge of the untalented; it’s one of the most challenging, and rewarding, things a person can do. When it’s done right. So let’s do it right! We’re performing brain surgery, non-invasively, and we should be willing to do the hard yards to actually achieve success, and then reap the accolades.

OK, that’s my rant, trying to stop what’s being perpetrated and provide frameworks that might help change the game. What’s your take?

Supporting our Brains

13 October 2015 by Clark

One of the ways I’ve been thinking about the role mobile can play in design is to consider how our brains work, and don’t. This came out of both mobile and the cognitive science for learning workshop I gave at the recent DevLearn. It applies more broadly to performance support in general, so I thought I’d share where my thinking is going.

To begin with, our cognitive architecture is demonstrably awesome: just look at your surroundings and recognize that your clothing, housing, technology, and more are the products of human ingenuity. We have formidable capabilities to predict, plan, and work together to accomplish significant goals. Still, there’s no one all-singing, all-dancing architecture out there (yet), and every approach has weak points. Technology, for instance, is bad at pattern-matching and meaning-making, two things we’re really pretty good at. On the flip side, we have some flaws too. So what I’ve done here is outline those flaws, and the tools we’ve created to get around the limitations. And to me, these are principles for design:

[Table: cognitive limitations and support tools]

So, for instance, our senses capture incoming signals in a sensory store, which has the interesting property of nearly unlimited capacity, but for only a very short time. There’s no way all of it can get into working memory, so what we attend to is what we have access to, and we can’t accurately recall everything we perceive. However, technology (camera, microphone, sensors) can capture it all perfectly. So making capture capabilities available is a powerful support.

Similarly, our attention is limited, so if we’re focused in one place, we may forget or miss something else. However, we can program reminders or notifications that draw our attention where needed, or help us recall important events we don’t want to miss.

The limits on working memory (you may have heard of the famous 7 ± 2, though really it’s < 5) mean we can’t hold too much in our heads at once, such as the interim results of complex calculations. However, we have calculators that can do such processing for us. For the same reasons we have limited ability to carry information around, but we can create external representations (such as notes or scribbles) to hold those thoughts for us. Spreadsheets, outlines, and diagramming tools let us record our interim thoughts for further processing.

We also have trouble remembering things accurately. Our long term memory tends to remember meaning, not particular details. However, technology can remember arbitrary and abstract information completely. What we need are ways to look up that information, or search for it. Portals and lookup tables trump trying to put that information into our heads.

We also have a tendency to skip steps. There’s some randomness in our architecture (a benefit: if we sometimes do things differently, and occasionally that’s better, we have a learning opportunity), but it means we don’t execute perfectly. However, we can use process supports like checklists. Atul Gawande wrote a fabulous book on the topic that I can recommend.

Other phenomena include that previous experience can bias us in particular directions, but we can put supports in place to provide lateral prompts. We can also prematurely evaluate a solution rather than checking that it’s the best; data can be used to make us aware. And we can trust our intuition too much, and we wear down, so we don’t always make the best decisions. Templates, for example, are a tool that can help us focus on the important elements.
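To recap the mapping in one place, here’s a minimal sketch in Python. This is just my own summary of the table above, and the labels are mine, not a formal taxonomy:

```python
# My own recap of the limitation -> support-tool mapping above; labels are mine.
supports = {
    "sensory store (huge but fleeting)":      ["camera", "microphone", "sensors"],
    "limited attention":                      ["reminders", "notifications"],
    "limited working memory":                 ["calculators", "notes", "spreadsheets", "outlines"],
    "long-term memory (meaning, not detail)": ["portals", "lookup tables", "search"],
    "skipped steps":                          ["checklists"],
    "bias from prior experience":             ["lateral prompts"],
    "premature evaluation":                   ["data"],
    "over-trusted intuition, fatigue":        ["templates"],
}
```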

This is just the result of several iterations, and I think more is needed (e.g., about data to prevent premature convergence), but to me it’s an interesting alternate approach for considering where and how we might support people, particularly in situations that are new and as yet untested. So what do you think?

Learnnovators Deeper eLearning Series

8 October 2015 by Clark

For the past 6 months, Learnnovators has been hosting a series of posts I’ve done on Deeper eLearning Design that goes through the elements beyond traditional ID.  That is, reflecting on what’s known about how we learn and what that implies for the elements of learning. Too often, other than saying we need an objective and practice (and getting those wrong), we talk about ‘content’.  Basically, we don’t talk enough about the subtleties.

So here I’ve been getting into the nuances of each element, closing with an overview of changes that are implied for processes:

1. Deeper eLearning Design: Part 1 – The Starting Point: Good Objectives
2. Deeper eLearning Design: Part 2 – Practice Makes Perfect
3. Deeper eLearning Design: Part 3 – Concepts
4. Deeper eLearning Design: Part 4 – Examples
5. Deeper eLearning Design: Part 5 – Emotion
6. Deeper eLearning Design: Part 6 – Putting it All Together

I’ve put into these posts my best thinking around learning design. The final one’s been posted, so now I can collect the whole set here for your convenience.

And don’t forget the Serious eLearning Manifesto!  I hope you find this useful, and welcome your feedback.

AI and Learning

7 October 2015 by Clark

At the recent DevLearn, Donald Clark talked about AI in learning, and while I largely agreed with what he said, I had some thoughts and some quibbles. I discussed them with him, but I thought I’d record them here, not least as a basis for further discussion.

Donald’s an interesting guy, very sharp and a voracious learner, and his posts are both insightful and inciteful (he doesn’t mince words ;). Having built and sold an elearning company, he’s now free to pursue what he believes in, and currently that’s the power of technology to teach us.

As background, I was an AI groupie out of college, and have stayed current with most of what’s happened. So I know a bit of the history of the rise of Intelligent Tutoring Systems, the problems with developing expert models, and current approaches like Knewton and Smart Sparrow. I haven’t been free to follow the latest developments as much as I’d like, but Donald gave a great overview.

He pointed to systems being on the verge of auto-parsing content and developing learning around it. He showed an example that created questions from a dropped-in page about Las Vegas. He also showed how systems can adapt individually to the learner, and discussed how this could provide individual tutoring without many of the limitations of teachers (cognitive bias, fatigue), and could not only personalize but self-improve and scale!

One of my short-term problems was that the auto-generated questions were about knowledge, not skills. While I do agree that knowledge is needed (à la van Merriënboer’s 4C/ID) as well as applying it, I think focusing on the latter first is the way to go.

This goes along with what Donald has rightly criticized as problems with multiple-choice questions. He points out how they’re largely used as knowledge tests, and I agree that’s wrong. But while there are better practice situations (read: simulations/scenarios/serious games), you can write multiple-choice questions as mini-scenarios and get good practice. However, getting good scenario questions out of auto-parsed content is, to me, as yet an interesting research problem.
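To make the contrast concrete, here’s a made-up example (mine, not from Donald’s demo, and not any real tool’s output): the same content tested as a knowledge check versus recast as a mini-scenario.

```python
# Hypothetical illustration: the same content as a knowledge check vs. a mini-scenario.

knowledge_item = {
    "stem": "What is the target time for acknowledging an urgent support ticket?",
    "options": ["1 hour", "4 hours", "24 hours"],
    "answer": "1 hour",  # tests recall of a fact
}

mini_scenario_item = {
    "stem": ("It's 3pm. An 'urgent' ticket arrived 45 minutes ago, and you're "
             "mid-task on a report due tomorrow. What do you do?"),
    "options": [
        "Finish the report; the ticket can wait until the morning",
        "Pause the report and acknowledge the ticket now",
        "Forward the ticket to a colleague without comment",
    ],
    "answer_index": 1,  # tests the decision the fact is supposed to drive
}
```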

I naturally argued for a hybrid system, where we divvy up roles between computer and human based on what each does well, and he said that’s what he’s seeing in the companies he tracks (and, at least in some cases, funds). A great principle.

The last bit that interested me was whether and how such systems could develop not only learning skills, but meta-learning, or learning-to-learn, skills. Real teachers can develop this and modify it (though admittedly that’s rare), and yet it’s likely to be the best investment. In my activity-based learning approach, I suggested that learners should gradually take over choosing their activities, to develop their ability to become self-learners. I’ve also suggested how this could be layered on top of regular learning experiences. I think this will be an interesting area for developing learning experiences that are scalable but truly develop learners for the coming times.

There’s more: pedagogical rules, content models, learner models, etc., but we’re finally getting close to being able to build these sorts of systems, and we should be aware of the possibilities, understand what’s required, and be on the lookout for both the good and the bad on tap. So, what say you?

Connie Yowell #DevLearn Keynote Mindmap

30 September 2015 by Clark

Connie Yowell gave a passionate and informative presentation on the driving forces behind digital badges.

Looking forward on content

24 September 2015 by Clark

At DevLearn next week, I’ll be talking about content systems in session 109. The point is that instead of monolithic content, we want to start getting more granular, for more flexible delivery. And while there I’ll be talking about some of the options for how, here I want to make the case for why, in a simplified way.

As an experiment (gotta keep pushing the envelope in a myriad of ways), I’ve created a video, and I want to see if I can embed it. Fingers crossed. Your feedback is welcome, as always.

Agile?

17 September 2015 by Clark

Last Friday’s #GuildChat was on Agile Development. The topic is interesting to me because, as with Design Thinking, it seems like well-known practices with new branding. So, as I did then, I’ll lay out what I see and hope others will enlighten me.

As context, during grad school I was in a research group focused on user-centered system design, which included design, processes, and more. I subsequently taught interface design (aka human-computer interaction, or HCI) for a number of years (while continuing to research learning technology), and made a practice of advocating best practices from HCI to the ed tech community. What was current at the time were iterative, situated, collaborative, and participatory design processes, so I was pretty familiar with the principles, and a fan. That is: really understand the context, design and test frequently, and work in teams with your customers.

Fast forward a couple of decades, and the Agile Manifesto put a stake in the ground for software engineering. There we see a focus on releasable code, but again with principles of iteration and testing, teamwork, and tight customer involvement. Michael Allen was enthused enough to use it as a spark that led to the Serious eLearning Manifesto.

That inspiration has clearly (and finally) now moved to learning design. Whether it’s Allen’s SAM or Ger Driesen’s Agile Learning Manifesto, we’re seeing a call for rethinking the old waterfall model of design. And this is a good thing (only decades late ;). Certainly we know that working together is better than working alone (if you manage the process right ;), so the collaboration part is a win.

And we certainly need change. The existing approaches we too often see involve a designer being given some documents, access to a SME (if lucky), and told to create a course on X. Sure, there are tools and templates, but they’re focused on making particular interactions easier, not on ensuring better learning design. And the designer works alone, doing the design and development in one pass. There are likely to be review checkpoints, but there’s little testing. There are variations on this, including perhaps an initial collaboration meeting, some SME review, or a storyboard before development commences, but too often it’s largely an independent, one-way flow, and this isn’t good.

The underlying issue is that waterfall models, where you specify the requirements in advance and then design, develop, and implement, just don’t work. The problem is that the human brain is pretty much the most complex thing in existence, and when we determine a priori what will work, we don’t take into account the fact that, Heisenberg-like, what we implement will change the system. Iterative development and testing allow the specs to change after initial experience. Several issues arise with this, however.

For one, there’s a question about the right size and scope of a deliverable. Learning experiences, while typically overwritten, do have some stricture that keeps them from having intermediately useful results. I was curious about what made sense; to me it seemed you could develop your final practice first as a deliverable, then fill in the required earlier practice and content resources, and this seemed similar to what was offered up during the chat in response to my question.

The other is scoping and budgeting the process. I often ask, when talking about game design, how you know when to stop iterating. The usual (and wrong) answer is when you run out of time or money. The right answer is when you’ve hit your metrics: the ones you should set before you begin, that determine the parameters of a solution (and they can be consciously reconsidered as part of the process). The typical answer, particularly for those concerned with controlling costs, is something like a heuristic choice of three iterations. Drawing on some other work in software process, I’d recommend creating estimates, but then reviewing them afterward; in the software case, people got much better at estimating, and that could be a valuable extension. And it shouldn’t be any more difficult to estimate, certainly with some experience, than existing methods.
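As a minimal sketch of that stopping rule (my own illustration; the metric, target, and cap are hypothetical stand-ins), iteration stops when the pre-set metric is hit, with the budget acting as a cap rather than the criterion:

```python
# Hypothetical sketch: stop iterating on metrics, not on time or money.

def iterate_design(prototype, evaluate, revise, target=0.85, max_iterations=3):
    """Refine until the pre-set success metric (e.g., the share of test
    learners reaching the performance criterion) meets the target."""
    for iteration in range(max_iterations):
        score = evaluate(prototype)           # test with real learners
        if score >= target:                   # metrics hit: a principled stop
            return prototype, score, iteration
        prototype = revise(prototype, score)  # fold findings into the next pass
    # Budget cap reached before target: a conscious flag, not a silent stop.
    return prototype, evaluate(prototype), max_iterations
```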

OK, so I may be a bit jaded about new brandings of what should already be good practice, but I think anything that helps us focus on developing in ways that lead to quality outcomes is a good thing. I encourage you to work more collaboratively, develop and test more iteratively, and work on discrete chunks. Your stakeholders should be glad you did.

Accreditation and Compliance Craziness

8 September 2015 by Clark

A continuing bane of my existence is the ongoing requirements that are put in place for a variety of things. Two in particular are related and worth noting: accreditation and compliance. The way they’re typically construed is barking mad, and we can (and need to) do better.

To start with accreditation: it sounds like a good thing, making sure that someone issuing some sort of certification has the proper procedures in place. And, done right, it would be. However, what we currently see is that the body basically says you have to take what the Subject Matter Expert (SME) says as gospel. And this is problematic.

The root of the problem is that SMEs don’t have access to around 70% of what they do, as research at the University of Southern California’s Cognitive Technology group has documented. They do, of course, have access to all they ‘know’. So it’s easy for them to say what learners should know, but not what learners should actually be able to do. Some experts are better than others at articulating this, but the process is opaque to this nuance.

So unless the certification process is willing to allow the issuing institution the flexibility to use a process to drill down into the actual ‘do’, you’re going to get knowledge-focused courses that don’t actually achieve important outcomes. You could do things like incorporating those who depend on the practitioners, and/or using a replicable, grounded process with SMEs that helps them work out what the core objectives need to be: meaningful ones, à la competencies. And a shoutout to Western Governors University for somehow being accredited while using competencies!

Compliance is, arguably, worse. Somehow, the amount of time you spend is the important determining factor. Not what you can do at the end, but that you’ve done something for an hour. The notion that time spent relates to ability at this level of granularity is outright maniacal. Time would matter, differently for different folks, but only if you’re doing the right thing, and there’s no stricture for that. Instead, if you’ve been subjected to an hour of information, that somehow is going to change your behavior. As if.

Again, competencies would make sense. Determine what you need people to be able to do, and then assess that. If it takes them 30 minutes, that’s OK. If it takes them 5 hours, well, that’s what it takes to be compliant.
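A minimal sketch of the contrast (the names, cutoffs, and numbers are hypothetical, mine alone): seat-time compliance counts the clock, while competency-based compliance checks what people can actually do.

```python
# Hypothetical contrast: seat-time compliance vs. competency-based compliance.

def compliant_by_seat_time(minutes_logged: int) -> bool:
    # The status quo: an hour of exposure counts, regardless of ability.
    return minutes_logged >= 60

def compliant_by_competency(demonstrated: dict[str, float],
                            required: dict[str, float]) -> bool:
    # What matters: each required skill assessed to criterion, whether
    # reaching it took 30 minutes or 5 hours.
    return all(demonstrated.get(skill, 0.0) >= cutoff
               for skill, cutoff in required.items())
```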

I’d like to be wrong, but I’ve seen personal instances of both of these while working with clients. I’d really like to find a point of leverage to address this. How can we start having processes that identify the necessary skills, and then use those to determine ability, not time or arbitrary authority? Where can we start to make this necessary change?

3 C’s of Engaging Practice

26 August 2015 by Clark

In thinking through what makes experiences engaging, and in particular what makes practice engaging, I riffed on some core elements. The three terms I came up with were Challenge, Choices, & Consequences. And I realized I had a nice little alliteration going, so I’m going to elaborate and see if it makes sense to me (and you).

In general, good practice has the learner make decisions in context. This has to be more than just recognizing the correct knowledge option and providing ‘right’ or ‘wrong’ feedback. The right decision has to be made in a plausible situation with plausible alternatives, and the right feedback has to be provided.

So, the first thing is that there has to be a situation the learner ‘gets’ is important. It’s meaningful to them and to their stakeholders, and they want to get it right. It has to be clear there’s a real decision with outcomes that matter. And the difficulty has to be adjusted to their level of ability: if it’s too easy, they’re bored and little learning occurs; if it’s too difficult, it’s frustrating and again little learning occurs. However, with a meaningful story and the right level of difficulty, we have the appropriate challenge.

Then, we have to have the right alternatives to select from. Some of the challenge comes from having a real decision where you can recognize that making the wrong choice would be problematic. But the alternatives must require an appropriate level of discrimination. Alternatives so obvious or silly that they can be ruled out aren’t going to lead to any learning. Instead, they need to be the ways learners reliably go wrong, representing misconceptions. The benefits are several: you can find out what learners really know (or don’t), you have the chance to address the misconceptions, and this assists in having the right level of challenge. So you must have the right choices.

Finally, once the choice is made, you need to have feedback. Rather than immediately having some external voice opine ‘yes’ or ‘no’, let the learner see the consequences of that choice. This is important for two reasons. First, it closes the emotional experience: you see what happens, wrapping up the experience. Second, it shows how things work in the world, exposing the causal relationships and assisting the learner’s understanding. Then you can provide feedback (or not, if you’re embedding this single decision in a scenario or game where other choices are precipitated by this one). So the final element is consequences.
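Pulling the three C’s together, here’s a minimal data-structure sketch (my own illustration, not a spec or a real tool’s schema) of a single practice decision: a meaningful situation at the right difficulty (challenge), plausible alternatives keyed to misconceptions (choices), and what-happens-next feedback (consequences).

```python
# Minimal, hypothetical sketch of one practice decision built on the three C's.
from dataclasses import dataclass, field

@dataclass
class Choice:
    action: str          # a plausible decision in the situation
    misconception: str   # the reliable way learners go wrong ("" if correct)
    consequence: str     # what happens next, shown before any 'right/wrong'
    correct: bool = False

@dataclass
class PracticeDecision:
    situation: str       # a story the learner 'gets' matters (challenge)
    difficulty: int      # tuned to learner ability (challenge)
    choices: list[Choice] = field(default_factory=list)  # the choices

decision = PracticeDecision(
    situation=("A key client emails that the deliverable you sent "
               "misses their brief. How do you respond?"),
    difficulty=2,
    choices=[
        Choice("Defend the work point by point",
               "treats the complaint as a debate to win",
               "The client escalates to your manager."),
        Choice("Ask clarifying questions about the gap",
               "",
               "The client explains, and you agree on a quick revision.",
               correct=True),
        Choice("Redo the deliverable from scratch immediately",
               "acts before diagnosing the actual gap",
               "You burn a week and miss the brief a second time."),
    ],
)
```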

While this isn’t complete, I think it’s a nice shorthand to guide the design of meaningful and engaging practice. What do you think?
