Learnlets

Clark Quinn’s Learnings about Learning

Increasing our responsibility

9 April 2013 by Clark 4 Comments

I ranted a couple of weeks ago about how we need to move out of our complacency and make a positive change. As I sometimes do, I stumbled upon a diagram that characterizes the type of change I think we need to be considering.

The perspective riffs off the concept that the relative value of formal versus informal learning methods shifts as performers move from novice to expert. (And, as I’ve previously noted, what counts as in/formal changes depending on whether you’re the performer or the designer.) Too often, we tend to restrict our interventions to the formal side, yet there are lots of things we can be doing on the informal side.

Largely, however, I see learning and development (L&D) groups focusing exclusively on novices, or on beginning practitioners, and leaving practitioners and experts on their own. Even when they do address these more advanced audiences, they tend to use the ‘course’ as the vehicle, when it’s not really necessary. These audiences know what they need to know and just want that useful information; they don’t need the full preparation that novices do. Novices don’t know what they need to know nor why it’s important, so we provide all that in a course model. We can be much more telegraphic with advanced performers, and the value of social networks starts kicking in here too.

The point I’m trying to make is that we can, and should, take responsibility for the rest of the performers. We can assist their performance, hence the term we’ve been preferring in the Internet Time Alliance: performance consultant. This implies facilitating performance across the organizational roles, top to bottom and from beginner to expert.

I’d like to suggest that L&D groups need to become focused on facilitating organizational performance, which includes but is not limited to training. It’s going to benefit the organization, it’s going to lead to greater strategic contributions and associated value, and it’s an approach that will likely avert a long slow march to irrelevance and extinction. Better that the folks who understand how we learn and perform (and if you don’t, what are you waiting for?) take responsibility than have it devolve by default to business units and/or IT, eh?

#itashare

Games & Meaningful Interactivity

8 April 2013 by Clark 5 Comments

A colleague recently queried: “How would you support that Jeopardy type games (Quizzes, etc.) are not really games?”  And while I think I’ve discussed this before, I had a chance to noodle on it on a train trip.  I started diagramming, and came up with the following characterization.

I separated out two dimensions. The first is differentiating between knowledge and skills. I like how Van Merriënboer talks about the knowledge you need and the complex problems you apply that knowledge to. Here I’m separating ‘having’ knowledge from ‘using’ knowledge, focusing on application. And, no surprise, I’m very much on the side of using, or doing, not just knowing.

The second dimension is whether the learning is essentially very true to life, or exaggerated in some way.  Is it direct, or have we made some effort to make it engaging?

Now, for rote knowledge, if we’re contextualizing it we’re making it more applied (i.e. moving to the skills side), so to stay on the knowledge side really all we can do is use extrinsic motivation. We gamify knowledge tests (drill and kill) and make them into Jeopardy-style quiz shows. And while that’s useful in very limited circumstances, it is not what we (should) mean by a game. Flashy rote drill, using extrinsic motivation, is a fallback, a tactic of last resort. We can do better.

What we should mean by a game is taking practice scenarios and focusing on ramping up the intrinsic motivation, tuning the scenario into an engaging experience. We can use tools like exaggeration, humor, and drama, and techniques from game design, literature, and more, to make that practice more meaningful. We align it with the learners’ interests (and vice versa), making the experience compelling.
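To make the two dimensions a little more concrete, here’s a minimal sketch in Python of the characterization. The names (Content, Treatment, characterize) and quadrant labels are mine, purely for illustration; this isn’t a formal taxonomy, just the diagram restated as code.

    from enum import Enum

    class Content(Enum):
        KNOWLEDGE = "having knowledge"    # rote facts, terminology
        SKILL = "using knowledge"         # applying it to decisions in context

    class Treatment(Enum):
        DIRECT = "true to life"           # plain drill or plain scenario
        TUNED = "tuned for engagement"    # exaggeration, humor, drama, etc.

    def characterize(content: Content, treatment: Treatment) -> str:
        """Label a quadrant of the knowledge/skill x direct/tuned space."""
        if content is Content.KNOWLEDGE:
            if treatment is Treatment.TUNED:
                return "gamified quiz show: extrinsic motivation, a last resort"
            return "rote drill and knowledge test"
        if treatment is Treatment.TUNED:
            return "serious game: intrinsically motivating practice scenario"
        return "straight practice scenario: contextualized application"

    print(characterize(Content.SKILL, Treatment.TUNED))
    # -> serious game: intrinsically motivating practice scenario

The point of the sketch is simply that the Jeopardy-style quiz lives in the knowledge-plus-engagement quadrant, while what I’d call a real game lives in the skill-plus-engagement quadrant.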

Because, as the value chain suggests, tarting up rote knowledge (which is useful if that’s what we need, and sometimes it is important, e.g. medical terminology) is better than nothing, but not nearly as valuable as real practice via scenarios, and it’s even better if we tune that practice into a meaningful experience. Too often we err on the side of knowledge instead of skills, because it’s easy, because we’re not getting what we need from the SME, because that’s what our tools do, etc., but we should be focusing on skills, because that’s what’s going to make a difference to our learners and ultimately our organizations.

What we should be doing is focusing on making learners better able to do, moving to the skill side. Tarted-up quiz shows are not really games; they’re simplistic extrinsic response trainers. Real, serious games translate what Sid Meier said about games – "a series of interesting decisions" – into a meaningful experience: a series of important decisions. Practicing those decisions is what will make the difference you care about.

Signs of hope?

21 March 2013 by Clark Leave a Comment

Attending the SolutionsFest at the Learning Solutions conference last week, I saw, despite my earlier rant, signs of hope. A quick spot check revealed a number of approaches going above and beyond.

One direction I was pleased to see was a move to more performance support. I saw several solutions that were more focused on providing information as needed, or letting you  navigate to it, rather than thinking everything had to be ‘in the head’. This is definitely a promising sign.  They’re not hard to build, either.

The second promising sign was the use of scenarios. Several different solutions were focused on putting learners into contexts and asking them to perform. This is definitely the direction we need to see more of.  And, again, it’s not that hard to do!

One interesting takeaway was that the innovative solutions seemed to come more from small or internal groups rather than the big teams. Which only reinforces my overall concern with the industry as a whole. I wonder if it’s easier for small teams to adapt to the advice of folks like Michael Allen (no more boring elearning), Julie Dirksen (Design for How People Learn), and Will Thalheimer than it is for big teams, who not only have to change processes but also educate their customers.

This is an unscientific sample; I did a quick tour of the displays, but couldn’t see them all, as some were just too crowded. I also looked at them relatively briefly and didn’t make comprehensive notes, so this is just a read of my state of mind as I finished. It doesn’t ameliorate the overall concern, but it does provide some hope that things are changing in small pockets.

Yes, you do have to change

18 March 2013 by Clark 22 Comments

Of late, I’ve seen a disturbing trend. Not only are the purveyors of existing solutions preaching caution and steadiness, but it even seems like some of the ‘names’ of the field are talking in ways that make it easy to think the industry is largely doing OK. And I do not understand this, because it’s demonstrably wrong. The elearning industry, and the broader learning industry, is severely underperforming its potential (and I’m being diplomatic).

We know what leads to effective learning outcomes. And we’ve known it for decades (just because MOOCs are new doesn’t mean their pedagogies are): clear models, annotated examples, and most importantly deep and meaningful practice focused on significant skill shifts (let alone addressing the emotional side of the equation). Learners need to perform, repeatedly, with guidance, across more and more complex contexts until they achieve the level of performance they need. However, that’s nowhere near what we’re seeing.

What we see is knowledge dump and test, tarted up with trivial interactions. People will pass a test, but they will not have achieved the ability to affect any meaningful business outcomes. If it’s knowledge that performers need, create a job aid, not a ‘spray and pray’. And facilitate people in helping themselves. As things get more complex and move faster, there’s no way new course development can keep up with everything, even if it were a useful approach, and mostly it’s not.

We’re even measuring the wrong things. Cost per seat hour is secondary (at best). That’s ‘fine-tuning’, not the core issue. What’s primary is business impact. Are you measurably improving key performance indicators as outcomes?

And that’s assuming courses are all the learning unit should be doing; increasingly we recognize that courses are only a small proportion of what drives important business outcomes, and that the role needs to move from instructional designer to performance consultant. More emphasis can and should go to providing performance resources and facilitating useful interactions rather than creating courses. Think performance support first, then communities of practice, resorting to courses only as a last resort.
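As a rough caricature of that ordering (and only that; the ‘need’ categories and names here are mine, invented for illustration, not a formal model), a decision sketch might look like:

    def choose_intervention(need: str) -> str:
        """Rough heuristic: performance support first, social next,
        courses only when a significant skill shift genuinely requires one.
        The 'need' categories are illustrative placeholders."""
        if need == "information in the moment":
            return "job aid / performance support"
        if need == "novel or ambiguous problem":
            return "community of practice / social network"
        if need == "significant new skill for novices":
            return "course, with deep and meaningful practice"
        return "coaching or mentoring"

    print(choose_intervention("information in the moment"))
    # -> job aid / performance support

The order of the checks is the point: the course is the fall-through for the narrow case that genuinely needs it, not the default.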

Tools that turn PowerPoint presentations into page-turning content aren’t going to fix this, nor are tools that provide prettified drill-and-kill, nor ones that let you host and track courses. There are places for those, but they’re not the bulk of the opportunity, and they shouldn’t be the dominant solutions we see. There’s so much more: deeply engaging scenarios and simulation-driven interactions on the formal side, powerful job aid tools for performance support (particularly mobile), coaching and mentoring as a better solution than courses in many (most) cases, performer-focused portals of tools, underlying powerful content management suites, and rich social environments to support performers making each other smarter and more effective.

I’m not quite sure why the easy tools dominate the expo halls, except perhaps that anyone can build with them. More worrisome is that they can let designers off the hook in terms of thinking more deeply. We need to focus first on rich outcomes, and treat the tools as secondary.

While the industry congratulates itself on how it makes use of the latest technology, the lack of impact is driving it toward irrelevance. Learners tolerate the courses, at best. Operations groups and others are beginning to focus on the performance solutions available. Executives are beginning to hear the message that the old approach is a waste of resources.

Hiding your head in the sand isn’t going to cut it. The industry is going to have to change.  And that means you will have to change.  But you’re a professional in learning, right?  So lead the way.  The best way to change is to take that first step.

 

Daniel Coyle #LSCon Keynote Mindmap

14 March 2013 by Clark Leave a Comment

Daniel Coyle gave a wonderfully funny, passionate, and poignant keynote, talking about what leads to top performance. Naturally, I was thrilled to hear him tout the principles that I suggest make games such a powerful learning environment: challenge, tight feedback, and large amounts of engaging practice. With compelling stories to illustrate his points, he balanced humor and emotional impact to sell a powerful plea for better learning.

(Mindmap image)

Aaron Dignan #LSCon Keynote Mindmap

13 March 2013 by Clark 1 Comment

In a clever talk, Aaron Dignan used game theory to talk about how to improve the workplace.

(Mindmap image)

Barrier to scale?

11 March 2013 by Clark 2 Comments

I was part of a meeting about online learning for an institution, and something became clear to me.  We were discussing MOOCs (naturally, isn’t everyone?), and the opportunities for delivering quality learning online.  And that’s where I saw a conflict that suggested a fundamental barrier to scale.

When I think about quality learning, the core of it is, to me, about the learning activity or experience.  And that means meaningful problems with challenge and relevance, more closely resembling those found in the real world than ones typically taught in schools and training.  There’s more.

The xMOOCs that I’ve seen have a good focus on quality assessment aligned to the learning goal, but there’s a caveat. Their learning goals have largely been about cognitive skills, about how to ‘do’. And I’m a big fan of focusing on ‘do’, not ‘know’. But I recognize there’s more; beyond ‘do’, there’s ‘be’. That is, even if you have acquired skills in something like AI programming, that doesn’t mean you’re ready to be employed as an AI programmer. There’s much more: for instance, how to keep yourself up to date, how to work well with others, what the nature of AI projects is, etc.

It also came up that, when polled, a learned committee suggested the top things to learn were to lead, to work well on a team, to communicate, etc. These are almost never developed by working on abstract problems. In fact, I’d suggest that the best activities are meaningful, challenging, and collaborative. The power of social learning, of working together to hear other viewpoints, negotiate a shared understanding, and create a unique response to the challenge, is arguably the best way to learn.

Consequently, it occurs to me that you simply cannot make a quality learning experience that can be auto-assessed. It needs to be rich, mentored, scaffolded, and evaluated. Which means you have real trouble scaling a quality learning experience. Even with peer assessment, there’s some need for human intervention with every group’s process and product, let alone for generating the beneficial meta-learning that could come from this.

So, while there is real value to be had from MOOCs, like developing some foundational knowledge and skills, ultimately a valuable education will have to incorporate some mechanism for meaningful activities that develop the desired deep understanding. A tiered model, perhaps? This thinking is still embryonic, but it seems to me that this is a necessary step on the way to a real education in a domain.

Leaving Trails

6 March 2013 by Clark 1 Comment

So I was away for the weekend at a retreat with like-minded souls, Up to All of Us, thinking deeply about the issues that concern us. I walked away with some new and renewed friendships, relaxed, and with a few new thoughts. Two memes stuck with me, and the first was “leaving trails”.

For context, the event featured designers – graphic, industrial, visual – but mostly learning designers. In a session on supporting the growth of design awareness, we were being led through an exercise on body-storming (using role plays to work through issues), and one of the elements that surfaced was posting your designs on the walls in places where it’s hard to see others’ work. And I had two reactions to this, the first being that the ability to share work was a culture issue, but the other was a transparency issue.

The point that I brought up was that just seeing the work wasn’t enough; ideally you’d want to understand the thinking behind it (not just working out loud, but thinking out loud). That can come from a conversation around the work, but that’s not always possible (particularly if it’s a virtual wall).

And I thought the leader of the exercise, an eloquent and experienced designer, said that you couldn’t really annotate your thoughts about the work. Which I fundamentally disagreed with; but he then went on to talk about showing interim work, specs, etc. (and I’m filling in here with some inferences, because memory’s not perfect).

What emerged in my thinking was the phrase ‘leaving trails’: sharing not just your work, but the trajectories, constraints, and more. As I’ve argued before, I think showing the thinking behind decisions is going to be increasingly important at every level. At the workgroup level, individuals will be better able to collaborate if their (prior) work is detailed. Communities of practice similarly need such evidence. Another colleague also presented work on B Corps (benefit corporations), in which businesses move from shareholder returns to missions, and such transparency will be necessary there as well, just as for eGovernment. I reckon, what with Cluetrain, any org that isn’t being transparent enough will lose trust.

Of course, the comfort level in sharing gets back to the culture issue: people have to be safe to share their work and give and receive feedback in constructive ways to move forward. Which is really the subject of the next meme.

(NB: one of the principles of the event is Chatham House Rule, which basically says you can’t share personal details without prior approval, and I didn’t ask, so the perpetrators and victims shall remain nameless.)

Norman’s Design of Future Things

18 February 2013 by Clark Leave a Comment

Donald Norman’s book The Design of Everyday Things is a must-read for anyone who creates artifacts or interfaces for humans. This one continues in the same vein, but talks about the new technology that has emerged in the roughly 20 years since that book came out, and its implications. There are some interesting thoughts, though few hints for learning.

In the book, Don talks about how new technologies are increasingly smart, e.g. cars are almost self-driving (and since the book was published back in 2007, they’re now already on the cusp).  As a consequence, we have to start thinking deeply about when and where to automate, having technologies make decisions, versus when we’re in the loop.  And, in the latter case, when and how we’re kept alert (pilots lose attention trying to monitor an auto-pilot, even falling asleep).

The issue, he proposes, is the tenuous relationship between an aware partner and the human. He uses the relationship between a horse and rider as an example, talking about loose-rein control and close-rein control. Again, there are times the rider can be asleep (I recall a gent in an Irish pub bemoaning the passing of the days when “the horse knew the way home”).

He covers a range of data points from existing circumstances as well as experiments in new approaches, ranging from noise to crowd behavior. For noise, he looks at how the noises mechanical things made were clues to their state and operation, and how we’re losing those clues as we increasingly make things quiet. Engineers are even building noise back in as a feature where it has disappeared through technical sophistication. For crowd behavior, one example is how the removal of street signs in a couple of cities has reduced accidents.

At the end, he comes up with a set of design principles:

  1. Provide rich, complex, and natural signals
  2. Be predictable
  3. Provide a good conceptual model
  4. Make the output understandable
  5. Provide continual awareness, without annoyance
  6. Exploit natural mapping to make interaction understandable and effective

For learning, he talks about how robots that teach are one place in which such animated and embodied avatars make sense, whereas in many situations they’re more challenging. He talks about how they don’t need much mobility, can speak, and can be endearing. Not to replace teachers, but to supplement them. Certainly we have the software capability, but we have to wonder for what sort of system it makes sense to invest in actual embodiment versus just speaking from a mobile device or computer.

As an exercise, I looked at his design principles to see what might transfer over to the design of learning experiences.  The main issue is that in learning, we want the learner facing problems, focusing on the task of creating a solution with overt cognitive awareness, as opposed to an elegant, almost unconscious, accomplishment of a goal.  This suggests that rule 2, ‘be predictable’, might be good in non-critical areas of focus, but not in the main area.  The rest seem appropriate for learning experiences as well.
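As a way of recording that exercise, here’s a minimal sketch encoding the principles as a review checklist for a learning experience, with a flag for the one caveat above about ‘be predictable’. The structure and flag are purely illustrative, my own shorthand rather than anything from the book.

    # Norman's principles as a review checklist for a learning experience.
    # The 'transfers' flag records the caveat that 'be predictable' may not
    # suit the main area of learning focus, only the non-critical areas.
    PRINCIPLES = [
        ("Provide rich, complex, and natural signals", True),
        ("Be predictable", False),  # hold back in the core problem area
        ("Provide a good conceptual model", True),
        ("Make the output understandable", True),
        ("Provide continual awareness, without annoyance", True),
        ("Exploit natural mapping for understandable, effective interaction", True),
    ]

    for principle, transfers in PRINCIPLES:
        note = "" if transfers else "  (use only outside the main learning focus)"
        print(f"- {principle}{note}")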

This is a thoughtful book, weaving a number of elements together to capture a notion, not hammer home critical outcomes.  As such, it is not for the casual designer, but for those looking to take their design to the ‘next level’, or consider the directions that will be coming, and how we might prepare people for them. Just as Don proposed that the interface design folks should be part of the product design team in The Invisible Computer, so too should the product support specialists, sales training team, and customer training designers be part of the design team going forward, as the considerations of what people will have to learn to use new systems are increasingly a concern in the design of systems, not just products.

Performance support-ing learning

11 February 2013 by Clark 8 Comments

In a post last week, I mentioned how Gloria Gery’s original vision of performance support was not only supposed to help you in the moment, but also – at least in principle – to develop you over time. And yet I have yet to see it. So what am I talking about?

Let’s use an example. I think of the typical GPS as one of the purest models of performance support: it knows where you’re trying to go (since you tell it), and it helps you every step of the way. It can even adapt if you make a mistake. It will get you there.

However, the GPS will tell you nothing about the rationale it’s using to choose your route, which can seem different from the one you might have chosen on your own. Even if it offers you alternatives, or you specify preferences like ‘no toll roads’, the underlying reasoning isn’t clear. Yet this might be an opportunity for navigational learning (e.g. “this route has more lights, so we prefer the slightly longer one with fewer opportunities for stopping”).

Nor does it help you learn anything along the way: geography, political boundaries, even geology, although it could do any of these with only a thin veneer of extra work: “as we cross the river, we are also crossing the boundary between X county and Y; in 1643 the pressure between the two cities of X1 and Y1 jockeying for power led to this settlement that shared the water resource.”

It could go further, using this as an example of a greater phenomenon: “geographic features often serve as political boundaries, including mountains and rivers as well as oceans”. The latter would, in a sensible approach, only be used a few times (as the message, once known, could become annoying). And, ideally, you could choose what you wanted to learn about.
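As a thought experiment, here’s a minimal sketch of what such a layer on top of a routing engine might look like. Every name here (RouteStep, LearningLayer, the topic labels) is hypothetical, invented purely for illustration; it’s not any real GPS API.

    from dataclasses import dataclass, field

    @dataclass
    class RouteStep:
        instruction: str          # "Turn left onto River Rd"
        rationale: str = ""       # why the router chose this leg
        learning_note: str = ""   # optional geography/history tidbit
        topic: str = ""           # which learning topic the note belongs to

    @dataclass
    class LearningLayer:
        topics: set = field(default_factory=set)  # what the user opted in to learn
        max_repeats: int = 2                       # general points get old quickly
        _seen: dict = field(default_factory=dict)

        def narrate(self, step: RouteStep) -> str:
            """Fold rationale and (rationed) learning notes into the instruction."""
            parts = [step.instruction]
            if step.rationale:
                parts.append(step.rationale)
            if step.learning_note and step.topic in self.topics:
                shown = self._seen.get(step.topic, 0)
                if shown < self.max_repeats:
                    self._seen[step.topic] = shown + 1
                    parts.append(step.learning_note)
            return " ".join(parts)

    layer = LearningLayer(topics={"geography"})
    step = RouteStep("Cross the river on the Main St bridge",
                     rationale="(this route has fewer lights)",
                     learning_note="Rivers like this one often mark county lines.",
                     topic="geography")
    print(layer.narrate(step))

The frequency cap and the opt-in topics are the two caveats above in code form: the general message shouldn’t repeat forever, and you should get to choose what you want to learn about.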

This isn’t limited to GPS; it could apply to any instance of guided performance. Sometimes you might not care (e.g. I suspect most users of TurboTax don’t want to know about the nuances of the tax code; they just want it done!), but if you want people to understand the reasoning as a boost toward more expert performance, so they can start using that model to infer how to deal with things that fall outside the range of performance support, this is a missed opportunity.

The point is to have even our programs ‘thinking out loud‘, both to help us learn and to serve as a check on validity. Sure, this should be able to be shut off or customized, but the processing going on provides an opportunity for learning to happen in new and meaningful ways. The more we can couple the concept to the context, the more we can create learning that will really stick. And that is, or should be, the real goal.
