Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

8 October 2015

Learnnovators Deeper eLearning Series

Clark @ 8:08 am

For the past 6 months, Learnnovators has been hosting a series of posts I’ve done on Deeper eLearning Design, going through the elements beyond traditional ID. That is, reflecting on what’s known about how we learn, and what that implies for the elements of a learning experience. Too often, other than saying we need an objective and practice (and getting those wrong), we talk about ‘content’. Basically, we don’t talk enough about the subtleties.

So here I’ve been getting into the nuances of each element, closing with an overview of changes that are implied for processes:

1. Deeper eLearning Design: Part 1 – The Starting Point: Good Objectives
2. Deeper eLearning Design: Part 2 – Practice Makes Perfect
3. Deeper eLearning Design: Part 3 – Concepts
4. Deeper eLearning Design: Part 4 – Examples
5. Deeper eLearning Design: Part 5 – Emotion
6. Deeper eLearning Design: Part 6 – Putting it All Together

I’ve put my best thinking around learning design into these posts. The final one’s been posted, so now I can collect the whole set here for your convenience.

And don’t forget the Serious eLearning Manifesto!  I hope you find this useful, and welcome your feedback.

7 October 2015

AI and Learning

Clark @ 8:10 am

At the recent DevLearn, Donald Clark talked about AI in learning, and while I largely agreed with what he said, I had some thoughts and some quibbles. I discussed them with him, but I thought I’d record them here, not least as a basis for a further discussion.

Donald’s an interesting guy, very sharp and a voracious learner, and his posts are both insightful and inciteful (he doesn’t mince words ;). Having built and sold an elearning company, he’s now free to pursue what he believes, and currently that’s the power of technology to teach us.

As background, I was an AI groupie out of college, and have stayed current with most of what’s happened. And it helps to know a bit of the history: the rise of Intelligent Tutoring Systems, the problems with developing expert models, and current approaches like Knewton and Smart Sparrow. I haven’t been free to follow the latest developments as much as I’d like, but Donald gave a great overview.

He pointed to systems being on the verge of auto-parsing content and developing learning around it. He showed an example that created questions from a dropped-in page about Las Vegas. He also showed how systems can adapt individually to the learner, and discussed how this could provide individual tutoring without many of the limitations of human teachers (cognitive bias, fatigue), and can not only personalize but self-improve and scale!
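
To make the auto-parsing idea concrete, here’s a deliberately naive sketch of my own (a toy, not Donald’s example or any real product) that generates fill-in-the-blank questions from raw text; it also hints at why such output tends to stay at the knowledge level:

```python
import random
import re

def cloze_questions(text, num_questions=3, seed=0):
    """Generate naive fill-in-the-blank questions from raw text."""
    random.seed(seed)
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    questions = []
    for sentence in random.sample(sentences, min(num_questions, len(sentences))):
        # Blank a capitalized word as a crude stand-in for entity detection
        candidates = [w.strip(".,;:") for w in sentence.split() if w[0].isupper()]
        if not candidates:
            continue
        answer = random.choice(candidates)
        questions.append({"stem": sentence.replace(answer, "_____", 1),
                          "answer": answer})
    return questions

sample = ("Las Vegas is the largest city in Nevada. "
          "The Strip hosts many famous casinos. "
          "Tourism drives the local economy.")
for q in cloze_questions(sample):
    print(q["stem"], "->", q["answer"])
```

Real systems use proper parsing and distractor generation, but the questions are still keyed to facts stated in the text.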

One of my short-term concerns was that the auto-generated questions were about knowledge, not skills. While I do agree that knowledge is needed (à la van Merriënboer’s 4C/ID) as well as applying it, I think focusing on the latter first is the way to go.

This goes along with what Donald has rightly criticized as problems with multiple-choice questions. He points out how they’re largely used as knowledge tests, and I agree that’s wrong, but while there are better practice situations (read: simulations/scenarios/serious games), you can write multiple-choice questions as mini-scenarios and get good practice. However, to me it’s still an interesting research problem to get good scenario questions out of auto-parsed content.

I naturally argued for a hybrid system, where we divvy up roles between computer and human based upon what we each do well, and he said that is what he is seeing in the companies he tracks (and funds, at least in some cases).  A great principle.

The last bit that interested me was whether and how such systems could develop not only learning skills, but meta-learning or learning-to-learn skills. Real teachers can develop and refine these (though admittedly it’s rare), and yet it’s likely to be the best investment. In my activity-based learning proposal, I suggested that learners should gradually take over choosing their activities, to develop their ability to become self-learners. I’ve also suggested how this could be layered on top of regular learning experiences. I think this will be an interesting area for developing learning experiences that are scalable but truly develop learners for the coming times.

There’s more: pedagogical rules, content models, learner models, etc., but we’re finally getting close to being able to build these sorts of systems, and we should be aware of the possibilities, understand what’s required, and be on the lookout for both the good and the bad on tap. So, what say you?
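
For the curious, here’s a minimal, hypothetical sketch of that triad in code (all class names, numbers, and the matching rule are my own illustrative assumptions, not any particular vendor’s design): a content model, a learner model updated from performance, and a pedagogical rule choosing the next activity.

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:              # content model: what can be practiced
    skill: str
    difficulty: float           # 0.0 (easy) .. 1.0 (hard)
    prompt: str

@dataclass
class LearnerModel:             # learner model: estimated mastery per skill
    mastery: dict = field(default_factory=dict)

    def update(self, skill, correct, rate=0.2):
        # Nudge the mastery estimate toward the observed performance
        prior = self.mastery.get(skill, 0.5)
        self.mastery[skill] = prior + rate * ((1.0 if correct else 0.0) - prior)

def next_item(learner, content):
    # Pedagogical rule: pick the item whose difficulty best matches
    # current mastery, keeping the challenge neither boring nor brutal
    return min(content, key=lambda item:
               abs(item.difficulty - learner.mastery.get(item.skill, 0.5)))

items = [ContentItem("negotiation", 0.4, "Handle a simple price objection"),
         ContentItem("negotiation", 0.8, "Rescue a stalled multi-party deal")]
learner = LearnerModel()
print(next_item(learner, items).prompt)   # the easier item, for a new learner
learner.update("negotiation", correct=True)
```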

30 September 2015

Connie Yowell #DevLearn Keynote Mindmap

Clark @ 4:58 pm

Connie Yowell gave a passionate and informative presentation on the driving forces behind digital badges.

24 September 2015

Looking forward on content

Clark @ 8:04 am

At DevLearn next week, I’ll be talking about content systems in session 109. The point is that instead of monolithic content, we want to start getting more granular, for more flexible delivery. And while there I’ll be talking about some of the options for how, here I want to make the case for why, in a simplified way.
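
As a rough illustration of what ‘more granular’ can mean (a sketch of my own, with invented field names, not any specific system), think of content as small, tagged chunks that delivery logic can assemble on demand, rather than a monolithic course:

```python
# Content as small, tagged chunks instead of a monolithic course
chunks = [
    {"id": "neg-concept-01", "type": "concept", "topic": "negotiation",
     "formats": ["text", "audio"], "contexts": ["sales", "procurement"]},
    {"id": "neg-example-01", "type": "example", "topic": "negotiation",
     "formats": ["video"], "contexts": ["sales"]},
]

def select_chunks(chunks, topic, context, fmt):
    # Delivery assembles just the pieces this learner and situation need
    return [c for c in chunks
            if c["topic"] == topic
            and context in c["contexts"]
            and fmt in c["formats"]]

print(select_chunks(chunks, "negotiation", "sales", "text"))
```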

As an experiment (gotta keep pushing the envelope in a myriad of ways), I’ve created a video, and I want to see if I can embed it.  Fingers crossed.  Your feedback welcome, as always.


17 September 2015


Clark @ 8:03 am

Last Friday’s #GuildChat was on Agile Development. The topic is interesting to me because, as with Design Thinking, it seems like well-known practices under a new branding. So, as I did then, I’ll lay out what I see and hope others will enlighten me.

As context, during grad school I was in a research group focused on user-centered system design, which included design, processes, and more. I subsequently taught interface design (aka Human-Computer Interaction, or HCI) for a number of years (while continuing to research learning technology), and made a practice of advocating the best practices from HCI to the ed tech community. What was current at the time were iterative, situated, collaborative, and participatory design processes, so I was pretty familiar with the principles, and a fan. That is: really understand the context, design and test frequently, and work in teams with your customers.

Fast forward a couple of decades, and the Agile Manifesto puts a stake in the ground for software engineering. There we see a focus on releasable code, but again with principles of iteration and testing, teamwork, and tight customer involvement. Michael Allen was enthused enough to use it as a spark that led to the Serious eLearning Manifesto.

That inspiration has clearly (and finally) now moved to learning design. Whether it’s Allen’s SAM or Ger Driesen’s Agile Learning Manifesto, we’re seeing a call for rethinking the old waterfall model of design.  And this is a good thing (only decades late ;).  Certainly we know that working together is better than working alone (if you manage the process right ;), so the collaboration part is a win.

And we certainly need change. The existing approaches we too often see involve a designer being given some documents, access to an SME (if lucky), and instructions to create a course on X. Sure, there are tools and templates, but they’re focused on making particular interactions easier, not on ensuring better learning design. And the designer works alone, doing the design and development in one pass. There are likely to be review checkpoints, but there’s little testing. There are variations on this, perhaps an initial collaboration meeting, some SME review, or a storyboard before development commences, but too often it’s largely an independent, one-way flow, and this isn’t good.

The underlying issue is that waterfall models, where you specify the requirements in advance and then design, develop, and implement, just don’t work. The problem is that the human brain is pretty much the most complex thing in existence, and when we determine a priori what will work, we don’t take into account that, Heisenberg-like, what we implement will change the system. Iterative development and testing allow the specs to change after initial experience. Several issues arise with this, however.

For one, there’s a question about the right size and scope of a deliverable. Learning experiences, while typically overwritten, do have some stricture that keeps them from having intermediately useful results. I was curious about what made sense; to me it seemed that you could develop your final practice first as a deliverable, and then fill in the required earlier practice and content resources. That seemed similar to what was offered up during the chat in response to my question.

The other is scoping and budgeting the process. I often ask, when talking about game design, how you know when to stop iterating. The usual (and wrong) answer is when you run out of time or money. The right answer is when you’ve hit your metrics: the ones you should set before you begin, that determine the parameters of a solution (and they can be consciously reconsidered as part of the process). The typical answer, particularly from those concerned with controlling costs, is something like a heuristic choice of 3 iterations. Drawing on some other work in software process, I’d recommend creating estimates, but then reviewing them afterward. In the software case, people got much better at estimating, and that could be a valuable extension. And it shouldn’t be any more difficult to estimate, certainly with some experience, than existing methods.
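
To make that concrete, here’s a minimal sketch of metric-driven iteration (the function names, target score, and iteration counts are all illustrative assumptions, not a prescribed process): you stop when pre-set metrics are hit, and you record the estimate so it can be reviewed against actuals afterward.

```python
def iterate_design(build, evaluate, target_score=0.8, max_iterations=10):
    """Iterate build->test cycles until pre-set metrics are hit,
    treating the iteration budget as a recorded, reviewable estimate."""
    log = {"planned_iterations": 3}          # the up-front estimate
    prototype, score, iteration = None, 0.0, 0
    for iteration in range(1, max_iterations + 1):
        prototype = build(iteration)
        score = evaluate(prototype)          # e.g. learner success rate in a tryout
        if score >= target_score:            # stop on metrics, not exhaustion
            break
    log["actual_iterations"] = iteration     # review against the estimate later
    return prototype, score, log
```

Comparing planned to actual iterations across a few projects is what, in the software case, made people much better at estimating.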

Ok, so I may be a bit jaded about new brandings on what should already be good practice, but I think anything that helps us focus on developing in ways that lead to quality outcomes is a good thing.  I encourage you to work more collaboratively, develop and test more iteratively, and work on discrete chunks. Your stakeholders should be glad you did.


8 September 2015

Accreditation and Compliance Craziness

Clark @ 8:07 am

A continuing bane of my existence is the requirements that get put in place for a variety of things. Two in particular are related and worth noting: accreditation and compliance. The way they’re typically construed is barking mad, and we can (and need to) do better.

To start with accreditation: it sounds like a good thing, making sure that someone issuing some sort of certification has the proper procedures in place. And, done rightly, it would be. However, what we currently see is that, basically, the accrediting body says you have to take what the Subject Matter Expert (SME) says as gospel. And this is problematic.

The root of the problem is that SMEs don’t have conscious access to around 70% of what they do, as research at the University of Southern California’s Cognitive Technology group has documented. They do, of course, have access to all they ‘know’. So it’s easy for them to say what learners should know, but not what learners actually should be able to do. Some experts are better than others at articulating this, but the process is opaque to this nuance.

So unless the accreditation process is willing to allow the issuing institution the flexibility to use a process to drill down into the actual ‘do’, you’re going to get knowledge-focused courses that don’t achieve important outcomes. You could do things like incorporating those who depend on the practitioners, and/or using a replicable, grounded process with SMEs that helps them work out what the core objectives need to be: meaningful ones, à la competencies. And a shoutout to Western Governors University for somehow getting accredited using competencies!

Compliance is, arguably, worse. Somehow, the amount of time you spend is the determining factor: not what you can do at the end, but that you’ve done something for an hour. The notion that time spent relates to ability at this level of granularity is outright maniacal. Time would matter, differently for different folks, but only if you’re doing the right thing, and there’s no stricture for that. Instead, the assumption is that being subjected to an hour of information is somehow going to change your behavior. As if.

Again, competencies would make sense. Determine what you need people to be able to do, and then assess that. If it takes them 30 minutes, that’s OK. If it takes them 5 hours, well, that’s what it takes to be genuinely compliant.

I’d like to be wrong, but I’ve seen personal instances of both of these in working with clients. I’d really like to find a point of leverage to address this. How can we start having processes that develop the necessary skills, and then use those skills to determine ability, rather than time or arbitrary authority? Where can we start to make this necessary change?

26 August 2015

3 C’s of Engaging Practice

Clark @ 2:28 pm

In thinking through what makes experiences engaging, and in particular making practice engaging, I riffed on some core elements.   The three terms I came up with were Challenge, Choices, & Consequences. And I realized I had a nice little alliteration going, so I’m going to elaborate and see if it makes sense to me (and you).

In general, good practice has the learner making decisions in context. This has to be more than just recognizing the correct knowledge option and being told ‘right’ or ‘wrong’. The right decision has to be made, in a plausible situation with plausible alternatives, and the right feedback has to be provided.

So, the first thing is, there has to be a situation that the learner ‘gets’ is important. It’s meaningful to them and to their stakeholders, and they want to get it right. It has to be clear there’s a real decision that has outcomes that are important.  And the difficulty has to be adjusted to their level of ability. If it’s too easy, they’re bored and little learning occurs. If it’s too difficult, it’s frustrating and again little learning occurs.  However, with a meaningful story and the right level of difficulty, we have the appropriate challenge. 

Then, we have to have the right alternatives to select from. Some of the challenge comes from having a real decision where making the wrong choice would clearly be problematic. But the alternatives must require an appropriate level of discrimination. Alternatives so obvious or silly that they can be ruled out immediately aren’t going to lead to any learning. Instead, they need to be ways learners reliably go wrong, representing misconceptions. The benefits are several: you can find out what learners really know (or don’t), you have the chance to address those misconceptions, and it helps maintain the right level of challenge. So you must have the right choices.

Finally, once the choice is made, you need to provide feedback. Rather than immediately having some external voice opine ‘yes’ or ‘no’, let the learner see the consequences of that choice. This is important for two reasons. For one, it closes the emotional arc, as you see what happens, wrapping up the experience. Second, it shows how things work in the world, exposing the causal relationships and assisting the learner’s understanding. Then you can provide feedback (or not, if you’re embedding this single decision in a scenario or game where other choices are precipitated by this one). So, the final element is consequences.
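
To pull the three C’s together, here’s a hypothetical sketch of how a single mini-scenario decision might be encoded (the situation, options, and field names are all invented for illustration): a challenge the learner cares about, choices keyed to misconceptions, and consequences shown before any instructive feedback.

```python
# One mini-scenario decision: Challenge, Choices, Consequences
mini_scenario = {
    "challenge": ("A key client calls, upset that the demo crashed. "
                  "You have ten minutes before your next meeting."),
    "choices": [
        {"option": "Apologize and promise a fix by tomorrow",
         "misconception": "over-committing to appease",
         "consequence": ("The client calms down, but engineering can't "
                         "deliver by tomorrow, and trust erodes."),
         "correct": False},
        {"option": "Acknowledge the issue and set up a call with engineering",
         "misconception": None,
         "consequence": ("The client feels heard, and a realistic plan "
                         "emerges on the call."),
         "correct": True},
    ],
}

def respond(scenario, choice_index):
    """Show the consequence first; instructive feedback can follow later."""
    choice = scenario["choices"][choice_index]
    print(choice["consequence"])
    return choice["correct"]
```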

While this isn’t complete, I think it’s a nice shorthand to guide the design of meaningful and engaging practice. What do you think?

19 August 2015

Concrete and Contextual

Clark @ 8:38 am

I’m working on the learning science workshop I’m going to present at DevLearn next month. In thinking about how to represent the implications of designing to account for how we learn better when practice is concrete and sufficient contexts are used, I came up with this, which I wanted to share.

[Diagram: Concrete deliverables and multiple contexts]

The empirical data show that we learn better when our learning practice is contextualized. And if we want transfer, we should have practice in a spread of contexts, which will facilitate abstraction and application to all appropriate settings, not just the ones seen in the learning experience. If the space between our learning applications is too narrow, so too will our transfer be. So our activities need to be spread across a variety of contexts (and we should be having sufficient practice).

Then, for each activity, we should have a concrete outcome we’re looking for. Ideally, the learner is asked to produce a concrete deliverable that mimics the type of outcome we expect them to be able to create as a result of the learning (whether decision, work product, or..). Ideally we’re in a social situation where they’re working as a team (or not), and the work can be circulated for peer review. Regardless, there should then be expert oversight of the feedback.

With a focus on sufficient and meaningful practice, we’re more likely to design learning that will actually have an impact.  The goal is to have practice that is aligned with how our learning works (my current theme: aligning with how we think, work, and learn). Make sense?

18 August 2015

Where in the world is…

Clark @ 8:09 am

It’s time for another game of Where’s Clark?  As usual, I’ll be somewhat peripatetic this fall, but more broadly scoped than usual:

  • First I’ll be hitting Shenzhen, China at the end of August to talk advanced mlearning for a private event.
  • Then I’ll be hitting the always excellent DevLearn in Las Vegas at the end of September to run a workshop on learning science for design (you should want to attend!) and give a session on content engineering.
  • At the beginning of November I’ll be at LearnTech Asia in Singapore, with an impressive lineup of fellow speakers to again sing the praises of reforming L&D.

Yes, it’s quite the whirl, but with this itinerary I should be somewhere near you almost anywhere you are in the world. (Or engage me to show up at your locale!) I hope to see you at one event or another before the year is out.


12 August 2015

Designing Learning Like Professionals

Clark @ 8:31 am

I’m increasingly realizing that the ways we design and develop content are part of the reason why we’re not getting the respect we deserve. Our brains are arguably the most complex things in the known universe, yet we don’t treat our discipline as the science it is. We need to start combining experience design with learning engineering to really deliver solutions.

To truly design learning, we need to understand learning science. And this does not mean paying attention to so-called ‘brain science’. There is legitimate brain science (cf. Medina, Willingham), and then there’s a lot of smoke.

For instance, there’re sound cognitive reasons why information dump and knowledge test won’t lead to learning.  Information that’s not applied doesn’t stick, and application that’s not sufficient doesn’t stick. And it won’t transfer well if you don’t have appropriate contexts across examples and practice.  The list goes on.

What it takes is understanding our brains: the different components, the processes, how learning proceeds, and what interferes. And we need to look at the right levels; lots of neuroscience is not relevant at the higher level where our thinking happens. And much about that is still under debate (just google ‘consciousness’ :).

What we do have are robust theories about learning that pretty comprehensively integrate the empirical data. More importantly, we have lots of ‘take home’ lessons about what does, and doesn’t, work. But just following a template isn’t sufficient. There are gaps where we have to use our best inferences, based upon models, to fill in.

The point I’m trying to make is that we have to stop treating designing learning as something anyone can do.  The notion that we can have tools that make it so anyone can design learning has to be squelched. We need to go back to taking pride in our work, and designing learning that matches how our brains work. Otherwise, we are guilty of malpractice. So please, please, start designing in coherence with what we know about how people learn.

If you’re interested in learning more, I’ll be running a learning science for design workshop at DevLearn, and would love to see you there.
