Learnlets

Clark Quinn’s Learnings about Learning

Learning Design isn’t for the wimpy

24 September 2012 by Clark

I’ve had my head down on a major project, a bunch of upcoming speaking engagements, some writing I’ve agreed to do, and… (hence the relative paucity of blog posts). That project, however, has been interesting for a variety of reasons, and one really is worth sharing: ID isn’t easy. We’ve been given some content, and the job isn’t just to be good little IDs, take what they give us, and design instruction from it. We could do that, but it would be a disaster (indeed, that’s what we’re working from: a too-rote, too-knowledge-dump course). And that’s too often what I’ve seen done, and it’s wrong.

SMEs don’t know how they do what they do. Part of the process of becoming expert is compiling away the underlying thinking, so it moves from conscious to subconscious. So when the time comes to work with SMEs about what’s needed, they either a) make up stories about what they do, or b) resort to what they’ve learned (i.e. knowledge). It’s up to the ID to push back and unpack the models that guide performance. Yet that’s hard, particularly when the ID isn’t a domain expert and the SMEs have their own issues.

It takes a fair bit of common sense (remarkable for how uncommon it is), and a willingness to continually reframe what the expert says and twist it until it’s focused on how they make decisions. There are formal processes called Cognitive Task Analysis for when you need them, but a ‘discount CTA’ approach (analogous to Nielsen’s ‘discount usability’) would be appropriate in many cases. Such an approach includes getting some really good examples of both successes and failures of the task under consideration, and working hard to extract the principles that guide success. But IDs can’t be order takers; they have to be willing to fight to understand what decisions learners need to make that they can’t make now, and how to make those decisions.

It really helps to have either a deep background in the field or a broad one. You can get the former by teaching ID to an SME, or by having an ID work in a particular field for a long time. The latter works if you’re more in the ‘gun for hire’ mode; you then need a broad knowledge base that you can draw upon to make reasonable inferences. That’s what I typically do, as my deep expertise is in learning design, but fortunately I’m eternally curious (I used to lie on the floor with a volume of the World Book spread out in front of me). Model-based and systems thinking help immensely.

You really have to work hard, use your brain, draw upon real-world knowledge, and go to the mat with the material. If you’re not willing to do this, you’re not cut out to be a learning designer. There’s much more to it (understanding the way we learn, experience design, and so on), but this is part of the full picture.

Bob Mosher Keynote Mindmap #PSS12

13 September 2012 by Clark

Bob Mosher opened the Performance Support Symposium with a passionate keynote about Performance Support.  It strongly made the case for a blended approach, which I support.  As with mobile, the time is definitely now.


HyperCard reflections #hypercard25th

10 August 2012 by Clark

It’s coming up on the 25th anniversary of HyperCard, and I’m reminded of how big a role that application played in my thinking and working at the time. Developed by Bill Atkinson, it was really ‘programming for the masses’: a tool for the Macintosh that let folks easily build simple, and even complex, applications. I’d programmed in other environments: Algol, Pascal, Basic, Forth, and even a little Lisp, but this was a major step forward in simplicity and power.

[Screenshot: Voodoo Adventure]

A colleague of mine who was working at Claris suggested how cool this new tool was going to be, and I taught myself HyperCard while doing a postdoc at the University of Pittsburgh’s Learning Research and Development Center. I used it to prototype my ideas for a learning tool we could use in our research on children’s mental models of science. I then used it to program a game based upon my PhD research, embedding analogical reasoning puzzles into a game (Voodoo Adventure; see screenshot). I wrote it up and got it published as an investigation of how games could be used as cognitive research tools. To little attention, back in ’91 :).

While teaching HCI, I had my students use HyperCard to develop their interface solutions to my assignments. The intention was to let them focus more on design and less on syntax. I also reflected on how the interface embodied, to some degree, what Andy diSessa called ‘incremental advantage’: a property of an environment that rewards greater investment in understanding with greater power to control the system. HyperCard’s buttons, fields, and backgrounds provided this, right up to the next step into HyperTalk (which had the same property once you got into the programming). I also proposed that such an environment could support ‘discoverability’ (a concept I learned from Jean-Marc Robert), where the environment supports experimentation as a steady way of learning to use it. Another paper resulted.

I also used HyperCard to develop applications in my research. We used it to build Quest for Independence, a game that helped kids who grew up without parents (e.g. in foster care) learn to survive on their own. Similarly, we developed an HCI performance support tool. Both of these were later ported to the web as soon as CGIs came along that let the web retain state (you can still play Quest; as far as I know it was the first serious game you could play on the web).

The other ways HyperCard was used are well known (e.g. Myst), but it was a powerful tool for me personally, and I still miss having an easy environment for prototyping. I don’t program anymore (I add value in other ways), but I still remember it fondly, and would love to have it running on my iPad as well! Kudos to Bill and Apple for creating and releasing it; a shame it was eventually killed through neglect.

Shades of grey

7 August 2012 by Clark

In looking across several instances of training on official procedures, I regularly see that, despite bunches of regulations and guidelines, things are not black and white; there are myriad shades of grey. And I think there is probably a very reasonable way to deal with it. (Surely you didn’t think I was talking about a book!)

In these situations, there are typically cases that are very white and others that are very black, but most end up somewhere in the middle, with a fair degree of ambiguity. And the concerns of the governing body vary. In one instance, the body was more concerned that you’d done due diligence and could show a trail of the thinking that led to the decision. If you did that, you were OK, even if you ended up making the wrong decision. In another case, the concern was more about consistency and repeatability: you didn’t want to show bias.

However, the training doesn’t really reflect that. In many cases, the law is presented (in the official verbiage), you work through some examples, and you’re quizzed on the knowledge. You might even workshop a few examples. Typically, you’re expected to get the ‘right answer’.

I’d suggest that a better approach would be to give the learners a series of examples that are first workshopped in small groups, with their work brought back to the class. The important things are the way the discussion is facilitated and supported, and the choice of problems. First, I think they should be given the problems and the associated requirements, guidelines, or regulations. Period. No presentation beforehand, nothing except reactivating the relevance of this material to their real work.

[Diagram: examples chosen from the white and black ends into the grey]

I’m suggesting that the first problem they face be, essentially, ‘white’, and the second ‘black’ (or vice versa). The point is for them to see what the situation looks like when it’s very clear, and to get used to using the materials to make a determination. (This is likely what they’re going to be doing in real practice anyway!) At this point, the discussion facilitation is focused on helping them understand how the rules play out in the clear cases.

Then they start getting greyer cases, ones where there’s more ambiguity. Here, the focus of discussion facilitation shifts to emphasizing the subtext: ‘document your work’, ‘be consistent’, or whatever. The number of these cases will depend on how much practice the learners need. If the decisions are complex, relatively infrequent, or really important, they’ll need more practice.

This way, the learners are a) getting comfortable with the decisions, b) getting used to using the materials to make the decisions, and c) recognizing what’s really important.

I suspect this may be problematic for some of the SMEs, who may prefer to argue for right/wrong answers, but I think it reflects the reality when you unpack the thinking behind the way it plays out in practice. And I think that’s more important for the learners, and the training organization, to recognize.

Of course, since they work in groups, the most valuable way to support them afterward may be for them to have the contact details of the other members of their group, to call on when they face really tough decisions. That sort of collaboration may trump formal instruction anyway ;).


Quinnovation online and on the go

1 August 2012 by Clark

First, I have to tout that my article on content systems has been published in Learning Solutions magazine.   It complements my recent post on content and data.

Second, I’ll be presenting on mobile at the eLearning Guild’s Performance Support Symposium in September in Boston. I’d welcome seeing you there. I’ll also be doing a deeper ID session for Mass. ISPI while I’m there.

Third, I’ll be keynoting the MobilearnAsia conference in Singapore at the end of October.  It’s the first in the region, and if you’re in the neighborhood it should be a great way to get steeped in mobile.

Finally, I’ll be at the eLearning Guild’s DevLearn in November, presenting my mobile learning strategy workshop, among other things.

If you’re at one of these events, say “hi”!


Levels of eLearning Quality

31 July 2012 by Clark

Of late, I’ve been both reviewing eLearning and designing processes & templates. As I’ve said before, the differences between well-designed and well-produced eLearning are subtle, but important. Reading a forthcoming book that outlines the future but recounts the past, it occurred to me that it may be worthwhile to look at a continuum of possibilities.

For the sake of argument, let’s assume that the work is well-produced, and explore some levels of differentiation in quality of the learning design. So let’s talk about a lack of worthwhile objectives, lack of models, insufficient examples, insufficient practice, and lack of emotional connection.  These combine into several levels of quality.

The first level is where there aren’t any learning objectives, or at least no good ones. Here we’re talking about waffly objectives like ‘understand’, ‘know’, etc. Look, I’m not a behaviorist, but I think *when* you have formal learning goals (and that’s not as often as we deliver them), you bloody well ought to have some pretty meaningful descriptions around them. Instead, what we see is the all-too-frequent knowledge dump and knowledge test.

Which, by the way, is  a colossal waste of time and money.  Seriously, you are, er, throwing away money if that’s your learning solution. Rote knowledge dump and test reliably lead to no meaningful behavior change.  We even have a label for it in cognitive science: “inert knowledge”.

So let’s go beyond meaningless objectives and say we’re focused on outcomes that will make a difference. We’re OK from here, right? Er, no. It turns out there are several different ways we can go wrong. The first is to focus on rote procedures. You may want reliable execution, but increasingly the decisions are too complex to trust to a completely prescribed response. And if it’s totally predictable, you should automate it!

Otherwise, you have two options. You can provide sufficient practice, as they do with airline pilots and heart surgeons. Or, if lives aren’t on the line and failure isn’t as expensive as training, you should focus on model-based instruction, where you develop the performer’s understanding of what underlies the decisions about how to respond. The latter gives you a basis for reconstructing an appropriate response even if you forget the rote approach. I recommend it in general, of course.

Which brings up another way learning designs go wrong. Sufficient practice, as mentioned above, would mean repeating until you can’t get it wrong. What we tend to see, however, is practice until you get it right. And that isn’t sufficient. Of course, I’m talking real practice, not knowledge tests à la multiple-choice questions. Learners need to perform!

We don’t see sufficient examples, either. While we don’t want to overwhelm our learners, we do need sufficient contexts to abstract across. And it does not have to occur in just one day; indeed, it shouldn’t! We need to space the learning out for anything more than the most trivial of learning. Yet the ‘event’ model of learning crammed into one session is much of what we see.

The final way many designs fail is to ignore the emotional side of the equation. This manifests itself in several ways, including introductions, examples, and practice. Too often, introductions let you know what you’re about to endure, without considering why you should care. If you’re not communicating the value to the learner, why should they care? I reckon that if you don’t convey the WIIFM, you’d better not expect any meaningful outcomes. There are more nuances here (e.g. activating relevant knowledge), but this is the most egregious.

In examples and practice, too, the learner should see the relevance of what is being covered to what they know is important and what they care about. These are two important and separate things. What they see should be real situations where the knowledge being addressed plays a real role. And they should also care about the examples personally.

It’s hard to address all these elements, but aligning them is critical to achieving well-designed, not just well-produced, learning. Are you really making the necessary distinctions?

More slides please…

27 July 2012 by Clark

Really?  Yes.  Let me explain:

I’ve been reviewing some content for a government agency. This is exciting stuff: evaluating whether contract changes are valid. OK, it’s not exciting to me, but to the audience it’s important. And there’s a reliable pattern to the slide deck the instructor is supposed to use: large amounts of text.

Again, exciting stuff, right from the regulations.  But that’s important to this audience; I actually don’t have a problem with it. The problem is that it’s all crammed on one screen!  Why is this a problem?

It’s not a problem for printing. You wouldn’t want to waste paper, and trees, printing it out, so being dense in that form isn’t bad. No, it’s bad when it’s presented.

When it’s presented, there is some highlighting of the important things. But if you were to hear someone go over three wordy bullet points on one screen, you’d be hard pressed to follow. However, if you spread the same screen across three slides, one per bullet point, you’d manage cognitive load more appropriately. You’re using more screens but covering the same material in the same time; you’re just switching between screens to emphasize the separate points. And you don’t have to put each bullet point on a screen by itself: to help maintain context, you could keep the same text on every screen, with only the relevant point clear and the others greyed out or blurred.
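
To make the mechanics concrete, here’s a minimal sketch of generating that ‘same text, one clear point per screen’ deck with the python-pptx library. The title, bullet text, colors, and layout index are illustrative assumptions, not the agency’s actual deck.

```python
from pptx import Presentation
from pptx.util import Pt
from pptx.dml.color import RGBColor

# Hypothetical bullet text standing in for the regulatory verbiage.
BULLETS = [
    "First wordy point, straight from the regulations...",
    "Second wordy point...",
    "Third wordy point...",
]
BLACK = RGBColor(0x00, 0x00, 0x00)
GREY = RGBColor(0xBF, 0xBF, 0xBF)

prs = Presentation()
layout = prs.slide_layouts[1]  # 'Title and Content' in the default template

# One slide per point: all three bullets stay visible to preserve context,
# but only the bullet in focus is black; the others are greyed out.
for focus in range(len(BULLETS)):
    slide = prs.slides.add_slide(layout)
    slide.shapes.title.text = "Evaluating contract changes"
    frame = slide.placeholders[1].text_frame
    for i, text in enumerate(BULLETS):
        para = frame.paragraphs[0] if i == 0 else frame.add_paragraph()
        para.text = text
        para.font.size = Pt(20)
        para.font.color.rgb = BLACK if i == focus else GREY

prs.save("presenting_deck.pptx")  # keep the dense single slide in the print deck
```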

Hey, screens are cheap. In fact, they’re essentially  free!  Using more screens when presenting doesn’t cost any more.  Really!  You can address each point clearly, maintaining context but helping focus attention.  It’ll help the instructor too, not just the students.

OK, so there is one cost: maintaining a separate deck for printing and projecting adds some management overhead. But then, who’s better at policies and procedures than the government? More seriously, I often have a slide in my deck that’s a prose version of something I convey graphically, e.g. the five slides I use to present Brent Schlenker’s five ‘-ables’ of social media (findable, feedable, linkable, taggable, editable). In the presentation I have a slide with an image for each; for print, I hide those five and show the single text slide. It’s not that hard. The same principle could be used here: the full slide for printing, the three equivalents for presenting.

There are times when you want more slides. They’re simpler, more focused, and better support maintaining context and attention. Don’t scrimp on the slides. It’s better to have slides without so much text, but if you must have the text, space it out.

A game? Who says?

11 July 2012 by Clark

I just reviewed a paper submitted to a journal (one way to stay in touch with the latest developments). Throughout, the authors were doing research on the cognitive and motivational relationships in a game: they claimed it was a game, and proceeded on that assumption. And then the truth came out.

When designing and evaluating learning experiences, you really want to go beyond whether it’s effective or easy to use, and determine whether it’s engaging. Yes, you absolutely need to test usability first (if there’s a problem with the learning outcomes, is it the pedagogy or the interaction?), and then learning effectiveness. But ultimately, if you want it optimally tuned for success, pitched at the optimal learning level using meaningful activities, it should feel like a game. The business case is that the effectiveness will be optimized, and the tuning process to get there costs less than you think (if you’re doing it right). And the only real way to test it is subjectively: do the players think it’s a game?

If you create a learning experience and call it a game, but your learners don’t think it is, you undermine their motivation and your credibility. The bar can be relative (e.g. better than regular learning), as you may not have the resources to compete with commercial games, but it ought to be better than having to sit through a page-turner, or you’ve failed.

There are systematic ways to design games that achieve both meaningful engagement and effective educational practice. Heck, I wrote a whole book on the topic. It’s not magic, and while it requires tuning, it’s doable. And, as I’ve stated before: you can’t say it’s a game; only your players can tell you that.

So here were these folks doing research on a ‘game’. The punchline: “students, who started playing the game with high enthusiasm, started complaining after a short while, ‘this is not a game’, and stopped gameplay”.  Fail.

Seriously, if you’re going to make a game, make it demonstrably fun. Or it’s not a game, whether you say so or not.

Emergent & Semantic Learning

10 July 2012 by Clark

The last of the thoughts still percolating in my brain from #mlearncon finally emerged when I sat down to create a diagram to capture my thinking (one way I try to understand things is to write about them, but I also frequently diagram them to help me map the emerging conceptual relationships into spatial relationships).

[Diagram: semantic and emergent rules for content]

What I was thinking about was how to distinguish between emergent opportunities for driving learning experiences and semantic ones. When we built the Intellectricity© system, we had a batch of rules that guided how we sequenced the content, based upon research on learning (rather than hardwiring paths, which is what we mostly do now). We didn’t prescribe, we recommended, so learners could choose something else, e.g. the next best, or browse to what they wanted. As a consequence, we could also have a machine learning component that would troll the outcomes and improve the system over time.

And that’s the principle here, now that mainstream systems are capable of doing similar things. What you see in the diagram are semantic rules (made-up ones), explicitly making recommendations, ideally grounded in what’s empirically demonstrated in research. In places where research doesn’t stipulate, you could also make principled recommendations based upon the best theory. These rules would recommend objects to be pulled from a pool or cloud of available content.
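
To make that concrete, here’s a minimal sketch (in Python) of what hand-authored semantic sequencing rules over a content pool might look like. The rules, metadata, and names are invented for illustration; they stand in for empirically grounded principles, not Intellectricity or any shipping system.

```python
from dataclasses import dataclass

@dataclass
class ContentObject:
    id: str
    kind: str        # 'concept', 'example', 'practice', ...
    difficulty: int  # 1 (easy) .. 5 (hard)

@dataclass
class LearnerState:
    seen: set          # ids of objects already visited
    last_score: float  # 0..1 on the most recent practice attempt

def score(obj: ContentObject, learner: LearnerState) -> float:
    """Apply hand-authored semantic rules; higher score = stronger recommendation."""
    if obj.id in learner.seen:
        return 0.0
    s = 1.0
    # Rule: after a failed practice attempt, prefer another worked example.
    if learner.last_score < 0.5 and obj.kind == "example":
        s += 2.0
    # Rule: after success, prefer harder practice.
    if learner.last_score >= 0.5 and obj.kind == "practice":
        s += obj.difficulty
    return s

def recommend(pool: list, learner: LearnerState, n: int = 3) -> list:
    """Rank the pool, but return a short list: recommend, don't prescribe."""
    ranked = sorted(pool, key=lambda o: score(o, learner), reverse=True)
    return ranked[:n]

# Example: after a failed attempt, worked examples float to the top.
pool = [ContentObject("c1", "concept", 1),
        ContentObject("e1", "example", 2),
        ContentObject("p1", "practice", 3)]
print([o.id for o in recommend(pool, LearnerState(seen=set(), last_score=0.3))])
```

Returning a ranked short list rather than a single item preserves the ‘recommend, don’t prescribe’ principle, and the logged choices and outcomes are exactly the grist the analytics side needs.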

However, as you track outcomes, e.g. success on practice, and start examining the results with data analytics, you can start trolling for emergent patterns (the ones in the diagram are, again, made up). Here we might find confirmation (or the converse!) of the empirical rules, as well as patterns we can label semantically, and perhaps some that are genuinely new. Which helps explain the growing interest in analytics. And if you’re doing this across massive populations of learners, as is possible across institutions or within really big organizations, you’re talking about the ‘big data’ phenomenon that will provide the quantities necessary to start generating lots of these outcomes.

Another possibility is to deliberately set up situations where you randomly trial a couple of alternatives that address known research questions, and use this data opportunity to conduct your experiments. This way we can advance our learning more quickly using our own hypotheses, while we look for emergent information as well.
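
A sketch of how such an embedded trial might look, assuming hypothetical condition names and a simple logging sink. The deterministic hash keeps each learner in the same condition across sessions without storing extra state.

```python
import hashlib

CONDITIONS = ["worked_example_first", "practice_first"]  # hypothetical designs

def assign_condition(learner_id: str) -> str:
    """Deterministic assignment: same learner, same condition, every session."""
    digest = hashlib.sha256(learner_id.encode("utf-8")).hexdigest()
    return CONDITIONS[int(digest, 16) % len(CONDITIONS)]

def log_outcome(learner_id: str, outcome: float, sink: list) -> None:
    """Record (condition, outcome) pairs for the later analytics pass."""
    sink.append({"learner": learner_id,
                 "condition": assign_condition(learner_id),
                 "score": outcome})

# Usage: route each learner through the variant for their condition,
# then log practice outcomes; the analysis compares conditions.
results: list = []
log_outcome("learner-42", 0.85, results)
```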

Until the new patterns emerge, I recommend adapting on the basis of what we know, while trolling for opportunities to answer questions that emerge as you design, and looking for emergent patterns as well. We have the capability (OK, we had it over a decade ago, but now it’s on tap in mainstream solutions, not just bespoke systems), so now we need the will. This is the benefit of thinking about content as systems – models and architectures – not just as unitary files. Are you ready?


An integrating design?

27 June 2012 by Clark

In a panel at #mlearncon, we were asked how instructional designers could accommodate mobile. Now, I believe we really haven’t got our minds around a learning experience distributed across time, which is what our brains really require. I also think we still mistakenly treat performance support as separate from formal learning, and we don’t have a good way to integrate them.

I’ve advocated that we consider learning experience design, but increasingly I think we need performance experience design, where we look at the overall performance, figure out what needs to be in the head and what needs to be in the world, and design them concurrently. That is, we look at what the person knows how to do, what should be in their head, and what can be designed as support. ADDIE designs courses. HPT determines whether to build a job aid (when the gap is knowledge) or training (when the gap is a skill). I’m not convinced that either really looks at the total integration (and I’m willing to be wrong).

What was triggered in my brain, however, was that social constructivism might be a framework within which we could accomplish this. By thinking about what activities the learners would be engaged in, and how we’d support that performance with resources, and with other learners and performers as collaborators when appropriate, we might have a framework. My take on social constructivism has it looking at what can and should be co-owned by the learner, and how to get the learner there, and it naturally involves resources, other people, and skill development.

So you’d look at what needs to be done, think through the performance, and ask what resources (digital and human) would be there with the performer, what the gap is between your current learner and the performer you need, and how to develop an experience to achieve that end state. The open questions are what mental design process designers need going forward, and what overarching framework supports that process.

It’s closely related to my activity framework, which resonates nicely as it very much focuses on what you can do, and on resourcing that; but that framework is aimed at reframing education to make it skills-focused and to develop self-learners. This would require some additions that I’ll have to ponder further. But, as always, it’s about getting ideas out there to collect feedback. So, what say you?
