Learnlets


Clark Quinn’s Learnings about Learning

Evolutionary versus revolutionary prototyping

26 May 2015 by Clark 2 Comments

At a recent meeting, one of my colleagues mentioned that increasingly people weren’t throwing away prototypes.  Which prompted reflection, since I have been a staunch advocate for revolutionary prototyping (and here I’m  not talking about “the”  Revolution ;).

When I used to teach user-centered  design, the tools for  creating interfaces were complex. The mantras were test early, test often, and I advocated  Double Double P’s (Postpone Programming, Prefer Paper; an idea I first grabbed from Rob Phillips then at Curtin).  The reason was that if you  started building too early in the design phase, you’d have too much invested to throw things away if they weren’t working.

These days, with agile programming, we see sprints producing working code, which then gets elaborated in subsequent sprints.  And the tools make it fairly easy to work at a high level, so it doesn’t take too much effort to produce something. So maybe we can make things that we can throw out if they’re wrong.

Ok, confession time, I have to say that I don’t quite see how this maps to elearning.  We have sprints, but how do you have a workable learning experience and then elaborate it?  On the other hand, I know Michael Allen’s doing it with SAM and Megan Torrance just had an article on it, but I’m not clear whether they’re talking storyboard, and then coded prototype, or…

Now that I think about it, I think it’d be good to document the core practice mechanic, and perhaps the core animation, and maybe the spread of examples.  I’m big on interim representations, and perhaps we’re talking the same thing. And if not, well, please educate me!

I guess the point is that I'm still keen on being willing to change course if we've somehow gotten it wrong. Small representations are good, and increasing fidelity is fine, so I suppose it's okay if we don't throw out prototypes often, as long as we do when we need to. Am I making sense, or what am I missing?

David McCandless #CALDC3 Keynote Mindmap

12 May 2015 by Clark Leave a Comment

David McCandless gave a graphically and conceptually insightful talk on the power of visualization at Callidus Cloud Connections. He demonstrated how visualization yields insight by tapping into our pattern-matching cognitive architecture.

Pushing back

5 May 2015 by Clark 2 Comments

In a recent debate with my colleague on the Kirkpatrick model, our host/referee asked me whether I’d push back on a request for a course. Being cheeky, I said yes, but of course I know  it’s harder than that.  And I’ve been mulling the question, and trying  to think of a perhaps more pragmatic (and diplomatic ;) approach.  So here’s a cut at it.

The goal is not to stop at just 'yes', but to follow up. The technique is to drill in for more information under the guise of ensuring you're building the right course. Of course, what you're really trying to determine is whether there's a need for a course at all, or whether a job aid or checklist will do instead, and if so, what's critical to success. To do this, you need to ask some pointed questions while remaining professional and helpful.

You might, then, ask something like "what's the problem you're trying to solve?" or "what will the folks taking this course be able to do that they're not doing now?" The point is to start focusing on the real performance gap you're addressing (and unmasking whether they actually know what it is). You want to steer away from the information they think needs to be in the head, and focus on the decisions people should be able to make that they can't make now.

Experts can't tell you what they actually do, or at least about 70% of it, so you need to drill in more about behaviors; at this point you're really trying to find out what's not happening that should be. You can use the excuse that "I just want to make sure we build the right course" if there's some pushback on your inquiries, and you may also have to stand up for your requirements on the basis that you have expertise in your area and they have to respect that, just as you respect their expertise in theirs (cf. Jon Aleckson's MindMeld).

If what you discover does end up being about information, you might ask "how fast will this information be changing?" and "how much of this is critical to making better decisions?" It's hard to get information into the head, and it's a futile effort if it'll be out of date soon, and an expensive one if the amount is large and arbitrary. It's also easy to think that information will be helpful (the nice-to-know as well as the must-know), but really you should be looking to put information in the world if you can. There are times when it has to be in the head, but not as often as your stakeholders and SMEs think. Focus on what people will do differently.

You also want to ask "how will we know the course is working?" You can ask what change would be observed, and should talk about how you will measure it. Again, there could be pushback, but you need to be prepared to stick to your guns. If it isn't going to lead to some measurable delta, they haven't really thought it through. You can help them here, doing some business consulting on ROI for them. And here it's not a guise: you really are being helpful.

So I think the answer can be 'yes', but that's not the end of the conversation. And this is the path to start demonstrating that you are about business. This may be the path that starts making your contribution to the organization strategic. You'll have to be about more than efficiency metrics (cost/seat/hour; "may as well weigh 'em") and about how you're actually impacting the business. And that's a good thing. Viva la Revolucion!

Activities for Integrating Learning

30 April 2015 by Clark 2 Comments

I’ve been working on a learning design that integrates developing social media skills with developing specific competencies, aligned with real work.  It’s an interesting integration, and I drafted a pedagogy that I believe accomplishes the task.  It draws heavily on the notion of activity-based learning.  For your consideration.

Activity Model diagram

The learning process is broken up into a series of activities. Each activity starts with giving the learning teams a deliverable they have to create, with a deadline an appropriate distance out. There are criteria they have to meet, and the challenge is chosen so that it's within their reach, but out of their grasp. That is, they'll have to learn some things to accomplish it.

As they work on the deliverable, they’re supported. They may have resources available to review, ideally curated (and, across the curricula, their responsibility for curating their own resources is developed as part of handing off the responsibility for learning to learn).  There may be people available for questions, and they’re also being actively watched and coached (less as they go on).

Now, ideally the goal would be a real deliverable that would have an impact on the organization. That, however, takes a fair bit of support to make it a worthwhile investment. Depending on the ability of the learners, you may start with challenges that resemble real ones but aren't, such as evaluating a case study or working on a simulation. The costs of mentoring go up as the consequences of the actions do, but so do the benefits, so it's likely that the curriculum will similarly get closer to live tasks as it progresses.

At the deadline, the deliverables are shared for peer review, presumably with other teams. In this instance, there is a deliberate intention to have more than one team, as part of the development of the social capabilities. Reviewing others’ work, initially with evaluation heuristics, is part of internalizing the monitoring criteria, on the path to becoming a self-monitoring and self-improving learner. Similarly, the freedom to share work for evaluation is a valuable move on the path to a learning culture.  Expert review will follow, to finalize the learning outcomes.

The intent is also that the conversations and collaborations be happening in a social media platform. This is part of helping the teams (and the organization) acquire social media competencies.  Sharing, working together, accessing resources, etc. are being used in the platform just as they are used for work. At the end, at least, they are being used for work!

This has emerged as a design that develops both specific work competencies and social competencies in an integrated way.  Of course, the proof is when there’s a chance to run it, but in the spirit of working out loud…your thoughts welcome.

Got Game?

28 April 2015 by Clark 1 Comment

Why should you, as a learning designer, take a game design workshop?  What is the relationship between games and learning?  I want to suggest that there are  very  important reasons why you should.

Just so you don't think I'm the only one saying it, in the decade since I wrote the book Engaging Learning: Designing e-Learning Simulation Games, there have been a large variety of books on the topic. Clark Aldrich has written three, at last count. James Paul Gee has pointed out how the semantic features of games match the way our brains learn, as has David Williamson Shaeffer. People like Kurt Squire, Constance Steinkuhler, Henry Jenkins, and Sasha Barab have been strong advocates of games for learning. And of course Karl Kapp has a recent book on the topic. You could also argue that Raph Koster's A Theory of Fun is another vote, given that his premise is that fun is learning. So I'm not alone in this.

But more specifically, why get steeped in it?  And I want to give you three reasons: understanding engagement, understanding practice, and understanding design.  Not to say you don’t know these, but I’ll suggest that there are depths which you’re not yet incorporating into your learning, and  you could and should.  After all, learning  should be ‘hard fun’.

The difference between a simulation and a game is pretty straightforward.  A simulation is just a model of the world, and it can be in any legal state and be taken to any other.  A self-motivated and effective self-learner can use that to discover what they need to know.  But for specific learning purposes, we put that simulation into an initial state, and ask the learner to take it to a goal state, and we’ve chosen those so that they can’t do it until they understand the relationships we want them to understand. That’s what I call a scenario, and we typically wrap a story around it to motivate the goal.  We can tune that into a game.  Yes, we turn it into a game, but by tuning.
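
To make the distinction concrete, here's a minimal sketch in Python (all names hypothetical, not drawn from any real tool): the simulation is just a state model that can be in any legal state and move to any other, while a scenario pins it to a chosen initial state and a goal state the learner has to reach.

```python
# Hypothetical sketch: a simulation is states plus legal transitions;
# a scenario tunes it with a chosen start, a goal, and a motivating story.

class Simulation:
    """A model of the world: any legal state, any legal transition."""
    def __init__(self, transitions):
        # e.g. {"patient stable": {"give drug X": "patient improving"}, ...}
        self.transitions = transitions

    def step(self, state, action):
        """Return the resulting state, or stay put if the action isn't legal here."""
        return self.transitions.get(state, {}).get(action, state)


class Scenario:
    """The same simulation put to a learning purpose: start here, get there."""
    def __init__(self, simulation, initial_state, goal_state, story=""):
        self.simulation = simulation
        self.state = initial_state
        self.goal = goal_state
        self.story = story  # the narrative wrapper that motivates the goal

    def act(self, action):
        """Apply the learner's choice; True once they've reached the goal state."""
        self.state = self.simulation.step(self.state, action)
        return self.state == self.goal
```

The tuning, in these terms, is in choosing the initial and goal states (and the story around them) so the goal can't be reached without understanding the relationships that matter.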

And that's the important point about engagement. We can't call it a game; only our players can tell us whether it's a game or not. To achieve that goal, we have to understand what motivates our learners, what they care about, and figure out how to integrate that into the learning. It's about not designing a learning event, but designing a learning experience. And, by studying how games achieve that, we can learn how to take our learning from mundane to meaningful. Whether or not we have the resources and desire to build actual games, we can learn valuable lessons to apply to any of our learning design. It's the emotional element most ID leaves behind.

I also maintain that, next to mentored live practice, games are the best thing going (and individual mentoring doesn't scale well, while live practice can be expensive, both to develop and particularly when mistakes are made). Games build upon that by providing deep practice: embedding important decisions in a context that makes the experience as meaningful as when it really counts. We use game techniques to heighten and deepen the experience, which makes it closer to live practice, reducing transfer distance. And we can provide repeated practice. Again, even if we're not able to implement full game engines, there are many important lessons to take to designing other learning experiences: how to design better multiple choice questions, the value of branching scenarios, and more. Practical improvements that will increase engagement and improve outcomes.

Finally, game designers use design processes that have a lot to offer to formal learning design. Their practices in terms of information collection (analysis), prototyping and refinement, and evaluation are advanced by the simple requirement that their output is such that people will actually pay for the experience.  There are valuable elements that can be transferred to learning design even if you aren’t expecting to have an outcome so valuable you can charge for it.

As professionals, it behooves us to look to other fields with implications that could influence and improve our outcomes. Interface design, graphic design, software engineering, and more are all relevant areas to explore. So is game design, and it's arguably the most relevant of them all.

So, if you're interested in tapping into this, I encourage you to consider the game design workshop I'll be running for the ATD Atlanta chapter on the 3rd of June. Their price is fair even if you're not a chapter member, and it's a great deal if you are. Further, it's a tried and tested format that's been well received since I first started offering it. The night before, I'll be busting myths at the chapter meeting. I hope I'll see you there!

Why models matter

21 April 2015 by Clark 2 Comments

In the industrial age, you really didn’t need to understand why you were doing what you were doing, you were just supposed to do it.  At the management level, you supervised behavior, but you didn’t really set strategy. It was only at the top level where you used the basic principles of business to run your organization.  That was then, this is now.

Things are moving faster, competitors are able to counter your advances in months, there’s more information, and this isn’t decreasing.  You really need to be more agile to deal with uncertainty, and you need to continually innovate.   And I want to suggest that this advantage comes from having a conceptual understanding, a model of what’s happening.

There are responses we can train,  specific ways of acting in context.  These aren’t what are most valuable any more.  Experts, with vast experience responding in different situations, abstract models that guide what they do, consciously or unconsciously (this latter is a problem, as it makes it harder to get at; experts can’t tell you 70% of what they actually do!).  Most people, however, are in the novice to practitioner range, and they’re not necessarily ready to adapt to changes,  unless we prepare them.

What gives us the ability to react is having models that explain the underlying causal relations as we best understand them, and then support in applying those models in different contexts. If we have models, and see how those models guide performance in context A, then B, and then we practice applying them in contexts C and D (with model-based feedback), we gradually develop a more flexible ability to respond. It's not subconscious, as it is for experts, but we can figure it out.

So, for instance, if we have the rationale behind a sales process, how it connects to the customer’s mental needs and the current status, we can adapt it to different customers.  If we understand the mechanisms of medical contamination, we can adapt to new vectors.  If we understand the structure of a cyber system, we can anticipate security threats. The point is that making inferences on models is a more powerful basis than trying to adapt a rote procedure without knowing the basis.

I recognize that I talk a lot in concepts, e.g. these blog posts and diagrams, but there's a principled reason: I'm trying to give you a flexible basis, models, to apply to your own situation. That's what I do in my own thinking, and it's what I apply in my consulting. I am a collector of models, so that I have more tools to apply to solving my own or others' problems. (BTW, I use concept and model relatively interchangeably, if that helps clarify anything.)

It’s also a sound basis for innovation.  Two related models (ahem) of creativity say that new ideas are either the combination of two different models or an evolution of an existing one.  Our brains are pattern matchers, and the more we observe a pattern, the more likely it will remind us of something, a model. The more models we have to match, the more likely we are to find one that maps. Or one that activates another.

Consequently, it's also one of the things I push as a key improvement to learning design. In addition to meaningful practice, give the concept behind it, the why, in the form of a model. I encourage you to look for the models behind what you do, the models in what you're presented, and the models in what your learners are asked to do.

It’s a good basis for design, for problem-solving, and for learning.  That, to me, is a big opportunity.

Cyborg Thinking: Cognition, Context, and Complementation

15 April 2015 by Clark Leave a Comment

I'm writing a chapter about mobile trends, and one of the things I'm concluding with is the different ways we need to think to take advantage of mobile. The first one emerged as I wrote and kind of surprised me, but I think there's merit.

The notion is one I’ve talked about before, about how what our brains do well, and what mobile devices do well, are complementary. That is, our brains are powerful pattern matchers, but have a hard time remembering rote information, particularly arbitrary or complicated details.  Digital technology is the exact opposite. So, that complementation whenever or wherever we are is quite valuable.

Consider chess. When computers first played against humans, they didn't do well. As computers became more powerful, however, they finally beat the world champion. But they didn't do it the way humans do; they did it by very different means: they couldn't evaluate positions well, but they could calculate many more turns ahead and use simple heuristics to determine whether those were good plays. The sheer computational ability eventually trumped the familiar pattern approach. Now, however, there's a new type of competition, where a person and a computer team up and play against another similar team. The interesting result is that the winner is not the best chess player, nor the best computer program, but the player who knows best how to leverage a chess companion.
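
For the curious, here's a rough sketch (generic and hypothetical helper functions, not a real chess engine) of the machine's side of that story: search many turns ahead by brute force, scoring the leaf positions with a simple heuristic rather than recognizing rich patterns.

```python
# Hypothetical sketch of heuristic-plus-depth search (negamax form):
# the program's strength comes from looking many turns ahead and applying
# a cheap evaluation at the leaves, not from human-style pattern recognition.

def negamax(position, depth, legal_moves, apply_move, heuristic_score):
    """Score a position from the perspective of the side to move."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return heuristic_score(position)  # simple heuristic, no deep judgment
    best = float("-inf")
    for move in moves:
        # The opponent's best reply is our worst case, hence the negation.
        value = -negamax(apply_move(position, move), depth - 1,
                         legal_moves, apply_move, heuristic_score)
        best = max(best, value)
    return best
```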

Now map this to mobile: we want to design the best complement for our cognition. We want to end up having the best cyborg synergy, where our solution does the best job of leaving to the system what it does well, and leaving to the person the things we do well. It’s maybe only a slight shift in perspective, but it is a different view than designing to be, say, easy to use. The point is to have the best  partnership available.

This isn’t just true for mobile, of course, it should be the goal of all digital design.  The specific capability of mobile, using sensors to do things  because of when and where we are, though, adds unique opportunities, and that has to figure into thinking as well.  As does, of course, a focus on minimalism, and thinking about content in a new way: not as a medium for presentation, but as a medium for augmentation: to complement the world, not subsume it.

It’s my thinking that this focus on augmenting  our cognition and our context with content that’s complementary is the way to optimize the uses of mobile. What’s your thinking?

Defining Microlearning?

14 April 2015 by Clark 8 Comments

Last week on the #chat2lrn twitter chat, the topic was microlearning. It was apparently prompted by this post by Tom Spiglanin which does a pretty good job of defining it, but some conceptual confusion showed up in the chat that makes it clear there’s some work to be done.  I reckon there may be a role for the label and even the concept, but I wanted to take a stab at what it is and isn’t, at least on principle.

So the big point to me is the word 'learning'. A number of people opined about accessing a how-to video, and let's be clear: learning doesn't have to come from that. You could follow the steps and get the job done, yet have to access it again the next time you need it. Just like I can look up the specs on the resolution of my computer screen, use that information, but have to look it up again next time. So it could be just performance support, and that's a good thing, but it's not learning. It suits the notion of micro content, but again, it's about getting the job done, not developing new skills.

Another interpretation was little bits of components of learning (examples, practice) delivered over time. That is learning, but it's not microlearning. It's distributed learning, but the overall learning experience is macro (and much more effective than the massed event model). Again, a good thing, but not (to me) microlearning. This is what Will Thalheimer calls subscription learning.

So, then, if these aren’t microlearning, what is?  To me, microlearning has to be a small but complete learning experience, and this is non-trivial.  To be a full learning experience, this requires a model, examples, and practice.  This could work with very small learnings (I use an example of media roles in my mobile design workshops).  I think there’s a better model, however.

To explain, let me digress. When we create formal learning, we typically take learners away from their workplace (physically or virtually), and then create contextualized practice. That is, we may present concepts and examples (ideally beforehand via a blended approach, or less effectively within the learning event), and then create practice scenarios. This is hard work. Another alternative is more efficient.

Here, we layer the learning on top of the work learners are already doing.  Now, why isn’t this performance support? Because we’re not just helping them get the job done, we’re explicitly turning this into a learning event by not only scaffolding the performance, but layering on a minimal amount of conceptual material that links what they’re doing to a model. We (should) do this in examples and feedback on practice, now we can do it around real work. We can because (via mobile or instrumented systems) we know where they are and what they’re doing, and we can build content to do this.  It’s always been a promise of performance support systems that they could do learning on top of helping the outcome, but it’s as yet seldom seen.
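
As a thought experiment, a minimal sketch of that layering might look like the following (the task names and content here are invented purely for illustration): when the system detects what task the person is doing, it serves both the performance support and a small conceptual nugget tying the step back to the underlying model.

```python
# Hypothetical sketch: layering learning on top of work. Given a detected task,
# return the performance support (get the job done) plus a minimal conceptual
# link to the underlying model (turn the moment into a learning event).

CONTENT = {
    "approve_expense": {
        "support": "Check the receipt total against the policy limit before approving.",
        "concept": ("Approvals manage spend risk; the limit marks where the risk "
                    "outweighs the cost of a manual review."),
    },
}

def learning_layer(task_id, content=CONTENT):
    """Return support plus concept for a detected task, or None if unknown."""
    item = content.get(task_id)
    if item is None:
        return None  # unknown task: fall back to plain performance support
    return f"{item['support']}\n\nWhy it works: {item['concept']}"

print(learning_layer("approve_expense"))
```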

And the focus on minimalism is good, too.  We overwrite and overproduce, adding in lots that’s not essential.  C.f. Carroll’s Nurnberg Funnel or Moore’s Action Mapping.  And even for non-mobile, minimalism makes sense (as I tout under the banner of the Least Assistance Principle).  That is, it’s really not rude to ask people (or yourself as a designer) “what’s the least I can do for you?”  Because that’s what people generally really prefer: give me the answer and let me get back to work!

Microlearning as a phrase has probably become current (he says, cynically) because elearning providers are touting it to sell the ability of their tools to now deliver to mobile. But it can also be a watchword to emphasize thinking about performance support, learning 'in context', and minimalism. So I think we may want to continue to use it, but I suggest it's worthwhile to be very clear what we mean by it. It's not courses on a phone (mobile elearning), and it's not spaced-out learning; it's small but useful full learning experiences that fit, by size of objective or context, 'in the moment'. At least, that's my take; what's yours?

Tom Wujec #LSCon Keynote Mindmap

25 March 2015 by Clark 2 Comments

Tom Wujec gave a discursive and well illustrated talk about how changes in technology were changing industry, ultimately homing in on creativity.  Despite a misstep mentioning Kolb’s invalid learning styles instrument, it was entertaining and intriguing.

 

Design Thinking?

10 March 2015 by Clark 11 Comments

There's been quite a bit of flurry about Design Thinking of late (including the most recent #lrnchat), and I'm trying to get my head around what's unique about it. The wikipedia entry linked above helps clarify the intent, but is there any there there?

It helps to understand that I’ve been steeped in design approaches since at least the 80’s. Herb Simon’s Sciences of the Artificial argued, essentially, that design is the quintessential human activity. And my grad school experience was in a research lab focused on interface design.  Process was critical, and when I was subsequently teaching interface design, I was tracking new initiatives like situated design and participatory design, anthropological efforts designed to get closer to the ‘customer’.

In addition to being somewhat obsessive about learning how people learn, and as a confirmed geek continually exploring new technology, I also got interested in design processes beyond interface design. As my passion was designing learning technology solutions to meet real needs, I explored other design approaches to look for universals.  Along the way I looked at industrial, graphic, architectural, software, and other design disciplines.  I also read the psychological research on our cognitive limitations and design approaches.  (I made a small bit of my career on bringing the advances in HCI, which was more advanced in process, to ed tech.)

The reason I mention this is that the elements of Design Thinking (being open-minded, diverging before converging, using teams, empathy for the customer, etc.) all strike me as just good design. It's not obvious to me whether it gets into the nuances (e.g. the steps in the Wikipedia article don't let me see whether they do things like ensure that everyone takes time to brainstorm on their own before coming together, an important step to prevent groupthink), but at the granularity I've seen, it seems to be quite good. You mean everyone isn't already both aware of and using this? Apparently not.

So in that respect, Design Thinking is a win.  If adding a label to a systematized compendium of good practices will raise awareness, I’m all for it.  And I’m willing to have my consciousness raised that there’s more to it, because as a proponent of design, I’m glad to see that folks are taking steps to help design get better and will be thrilled  if it adds something new.

 
