Learnlets


Clark Quinn’s Learnings about Learning


Meaningful and Experiential

6 August 2013 by Clark 1 Comment

At lunch last week with my colleague Jay Cross, we riffed on what the most important word is for learning in the organization. I chose ‘meaningful’; he went with ‘experiential’. They’re both important, but I thought I’d tease them out a bit.

I take meaningful in two senses: what you’re doing (read: learning) is directly related to your goals, and it’s something you care about. So, for example, in designing serious games, you want to focus on key skills, not on irrelevant material. And, to work as engagingly as possible, you also choose a context that the learner cares about. Taking something relatively abstract like coaching, you could be providing support for developing a coaching model (rather than using an existing one), or you could be figuring out how to help a person separate the person from the behavior (commenting on the latter is to be preferred). Similarly, it could be in the context of being a better accountant (uninteresting except to accountants), or it could be for sports (which might be of interest to a broader segment). The point is to focus on relevant skills in interesting contexts.

Now, I take experiential to have some overlap, but to address two senses as well: both the context you are learning in, and the nature of the learning experience. That is, you can be learning away from work, or in the work process, the latter being more ‘experiential’. And you can be learning by doing in either context, learning ‘how’, as opposed to learning ‘about’. I think there’s overlap in being contextually relevant, but the two are separate in the sense of being personally interesting and the learning being applied.

I mapped it out: lacking both the experiential and the meaningful yields disconnected content (which I see far too much of in workplace learning); providing knowledge of how (not about) makes it meaningful; providing activity-based learning makes it experiential; and of course the ultimate is the intersection of both.

I’m sure Jay would argue that if it’s experiential, learning through real work experience, it’s inherently meaningful. And I’d argue that if it’s suitably meaningful, it naturally has to be experiential. Yet overall I’m happy to take either one or both versus neither!

Evidence-based Design

5 June 2013 by Clark 1 Comment

In my last post, I asserted that we need evidence-based design for what we do. There are a number of sources for it. Of course, you could go do a Master’s or Ph.D. in cognition and learning, but there are shorter paths.

There are several good books out (and I believe at least one more is on the way) that summarize the implications of research for design. Ruth Clark has been a co-author on a couple, eLearning and the Science of Instruction and the subsequent Efficiency in Learning. Julie Dirksen’s Design for How People Learn is another good one. Michael Allen’s work on design is also recommended, e.g. his Guide to eLearning.

Will Thalheimer, Ruth, and Julie regularly write and talk about these things in forums other than books. Go listen to them! I try as well, though often filtered through games, mobile, or elsewhere. There are others, too.

A number of people run workshops on deeper design. I know I have one, and I’m sure others have them as well. Do try to make sure that it covers both cognitive and emotional elements, focusing on meaningful change.

There are gaps: we don’t have all the research we need, or at least it isn’t yet digested. The role of emotional engagement isn’t as well fleshed out as we’d like, and some of the research is frankly based on studies too small to give practical guidelines (cf. the consternation over serious game design that surrounded a recent post). Where we don’t have research, we have to make inferences from theoretical frameworks, but you should know those too. It’s better than going on ‘intuition’ or folk science.

Still, there’s no excuse to do un-engaging, over-written, and under-practiced learning.  Better design doesn’t take longer (with the caveat that there’s some initial hiccup ’til we make the change).  We have the knowledge, and the tools aren’t the barrier.  Let’s do better, please!

Integrating Meta-learning

29 May 2013 by Clark 2 Comments

There’s much talk about 21st Century skills, and rightly so: these skills are the necessary differentiators for individuals and organizations going forward. If they’re important, how do we incorporate them into systems, and track them? You can’t do them in a vacuum; they can only be brought out in the context of other topics. We can integrate them by hand, and individually assess them, but how do we address them in a technology-enabled world? In the context of a project, here’s where my thinking is going:

First, you have some domain activity you are having the learner engage in. It might be something in math, science, social studies, whatever (though ideally focused on applied knowledge). Then you give them an assignment, and it might have a number of characteristics: it might be social, e.g. working with others, or involve problem solving. You could choose many characteristics the task entails, e.g. from the SCANS competencies (using information technology, reasoning). That task is labeled with tags associated with the required competencies, and tracked via SCORM or, more appropriately, the Experience API. There may be more than two competencies per task, but we’ll stick with that model here.
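
To make the tagging concrete, here’s a minimal sketch (mine, not part of the project) of what one tracked assignment might look like as an Experience API statement. The activity ID, competency tags, and the ‘competencies’ extension key are hypothetical placeholders, not a standard vocabulary:

    # A minimal sketch in Python: one xAPI statement for a tagged assignment.
    # The activity ID, competency tags, and extension key are hypothetical.
    statement = {
        "actor": {"mbox": "mailto:learner@example.com", "name": "A. Learner"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-US": "completed"},
        },
        "object": {
            "id": "http://example.com/activities/ecosystem-field-report",
            "definition": {"name": {"en-US": "Ecosystem field report (science)"}},
        },
        "context": {
            "extensions": {
                # Tags for the competencies the task entails
                "http://example.com/xapi/extensions/competencies": [
                    "working-with-others",
                    "problem-solving",
                ],
            },
        },
        "result": {"success": True, "score": {"scaled": 0.85}},
    }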

So, when we then look across the topics the learner is engaging in, and the characteristics of the assignments, we can look for patterns across competencies. Is there a particular competency the learner struggles with or excels at? It’s somewhat indirect, but it’s at least one way of systematically embedding meta-learning skills and tracking them. And that’s a lot better than we’re doing now.
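
As a rough illustration of that roll-up, again with hypothetical tags and made-up results, one could aggregate performance by competency across assignments to surface the patterns:

    from collections import defaultdict

    # Hypothetical (assignment, competency tags, success) records pulled from tracking
    records = [
        ("math-budgeting", ["problem-solving", "using-information-technology"], True),
        ("science-field-report", ["working-with-others", "problem-solving"], False),
        ("history-debate", ["working-with-others", "reasoning"], True),
    ]

    totals = defaultdict(lambda: {"attempts": 0, "successes": 0})
    for _assignment, tags, success in records:
        for tag in tags:
            totals[tag]["attempts"] += 1
            totals[tag]["successes"] += int(success)

    # Flag competencies the learner may be struggling with or excelling at
    for tag, t in sorted(totals.items()):
        rate = t["successes"] / t["attempts"]
        print(f"{tag:35s} {t['successes']}/{t['attempts']} ({rate:.0%})")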

Remember the old educational computer games that said ‘develops problem solving skills’? That was misleading. Most of those games required problem-solving skills, but no real development of those skills was embedded. A skilled parent or teacher could raise discussion across the problems, but most of the games didn’t. But they could. Moreover, additional 21C resources could be made available for the assignments that require them, and there could be both programmatic and mentor intervention to develop these.

We need to specifically address meta-learning, and with technology we can get evidence.  And we should.  Now, my two questions are: does the concept make sense?  And does the diagram communicate it?

Assessing online assessments

9 May 2013 by Clark 3 Comments

Good formal learning consists of an engaging introduction, rich presentation of concepts, annotated examples, and meaningful practice, all aligned on cognitive skills. As we start seeing user-generated online content, publishers and online schools are feeling the pressure. Particularly as MOOCs come into play, with (decreasingly) government-funded institutions giving away online content and courses for free. Are we seeing the demise of for-profit institutions and publishers?

I will suggest that there’s one thing that is harder to get out of the user-generated content environment, and that’s meaningful practice. I recall hearing of, but haven’t yet seen, a seriously threatening repository of such. Yes, there are learning object repositories, but they’re not yet populated with a rich suite of contextualized practice.

Writing good assessments is hard. Principles of good practice include meaningful decisions, alternatives that represent reliable misconceptions, relevant contexts, believable dialog, and more. They must be aligned to the objectives, and ideally have an increasing level of challenge.

There are some technical issues as well. High-value extensions include problem generators and randomness in the order of options (challenging attempts to ‘game’ the assessment). A greater variety of response options for novelty isn’t bad either, and automarking is desirable for at least a subset of assessments.
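
As a small, hedged illustration of those two extensions, a problem generator plus randomized option order, here’s a sketch; the arithmetic template and distractors are made up purely for illustration:

    import random

    def generate_problem(rng: random.Random) -> dict:
        """Generate one parameterized question with shuffled options (illustrative only)."""
        a, b = rng.randint(12, 48), rng.randint(12, 48)
        correct = a + b
        # Real assessments would build distractors from reliable misconceptions;
        # these are just plausible slips for the sketch.
        options = [correct, correct + 10, correct - 1, correct + a]
        rng.shuffle(options)  # randomize order to resist 'gaming' the assessment
        return {
            "prompt": f"What is {a} + {b}?",
            "options": options,
            "answer_index": options.index(correct),
        }

    problem = generate_problem(random.Random())
    print(problem["prompt"], problem["options"])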

I don’t want to preclude essays or other interpretive work like presentations or media content, but they are likely to require human evaluation, even with peer marking. Writing evaluation rubrics is also a challenge for untrained designers or experts.

While SMEs can write content and even examples (if they grasp pedagogical principles and are in touch with the underlying thinking), writing good assessments is another matter.

I’ve an inkling that writing meaningful assessments, particularly leveraging interactive technology like immersive simulation games, is an area where skills are still going to be needed. Aligning and evaluating the assessment, and providing scrutable justification for its attributes (e.g. for accreditation), is going to continue to be a role for some time.

We may need to move accreditation from knowledge to skills (a current problem in many accreditation bodies), but I think we need, and can have, a better process for determining, developing, and assessing certain core skills, particularly so-called 21st century skills. I think there will continue to be a role for doing so, even if we make it possible to develop the necessary understanding in any way the learner chooses.

As is not unusual, I’m thinking out loud, so I welcome your thoughts and feedback.

Designing Higher Learning

29 April 2013 by Clark 6 Comments

I’ve been thinking a lot about the higher education situation, specifically for-profit universities. One of the things I see is that somehow no one’s really addressing the quality of the learning experience, and it seems like a huge blind spot.

I realize that in many cases they’re caught between a rock and a hard place. They want to keep costs down, and they’re heavily scrutinized. Consequently, they worry very much about having the right content. It’s vetted by Subject Matter Experts (SMEs), and has to be produced in a way that, increasingly, can serve face-to-face (F2F) or online delivery. And I think a big opportunity is missed. Even if they’re buying content from publishers, they are focused on content, not experience: both the experience for the learner, and developing the learner’s transferable and long-term skills.

First, SMEs can’t really tell you what learners need to be able to do. One of the side effects of expertise is that it gets compiled away, inaccessible to conscious access. Either SMEs make up what they think they do (which has little correlation with reality) or they resort to what they had to learn. Neither is a likely source of meaningful learning.

Even if you have an instructional designer in the equation, the likelihood that they’re knowledgeable enough and confident enough to work with SMEs to get the real outcomes/objectives is slim. Then, they also have to get the engagement right. Social engagement can go a good way toward enriching this, but it has to be around meaningful tasks.

And, what with the scrutiny, it takes a strong case to argue to the accrediting agencies that you’ve gone beyond what SMEs tell you to what’s really needed. It sounds good, but it’s a hard argument to make to an organization that’s been doing things a particular way for a long time.

Yet, these institutions also struggle with retention of students. The learners don’t find the experience relevant or engaging, and leave. If you took the real activity and made it meaningful in the right way, learners would be both more engaged and have better outcomes, but it’s a hard story to comprehend, and perhaps harder yet to implement.

Yet I will maintain that it’s both doable and necessary. I think that the institution that grasps this, and focuses on a killer learning experience, coupled with going the extra mile for learner success (analytics is proving to be a big help here), and developing students as learners (e.g., meta-learning skills) as well as performers, is going to have a defensible differentiator.

But then, I’m an optimist.

TweetDeck RIP

24 April 2013 by Clark 5 Comments

Twitter’s been an integral part of my social media existence for more than four years now, and owing to things like #lrnchat, I need to have good tools. I’ve played around with a number, but TweetDeck swept my enthusiasm for quite a while. And now it’s going, and I’m mad and sad.

To understand, you have to understand several things:

  • When you work across platforms, sometimes on a Mac, sometimes on an iPad, and sometimes on an iPhone, it’s a major benefit to have one tool that spans them all
  • If you’re doing something like monitoring a conference backchannel over several days, you  have to have columns
  • If you’re engaged in a 60 minute chat, you  have to have quick updates
  • And if you have to log in some of the times you want to use it, you’ll be less likely to participate

TweetDeck met all of these, barely. It was across platforms, but not well: TweetDeck on the iPad had degraded to pretty pathetic. It surprised me how it could be so good on the iPhone, and so bad on the iPad. Of course, they haven’t updated the iPad version in forever. I used to regularly harass them about it via tweets.

Twitter bought TweetDeck, which seemed like it could be a good thing, but it seemed to hamstring the teams, having them focus on the web version.  And now they’re getting rid of the apps completely.  That’s why I’m sad.

What’s worse, the reason TweetDeck is supposedly going away is that more and more people are using the Twitter app on iOS. Um, hello, TweetDeck on the iPad is broken! Of course they aren’t using it! And columns on the iPhone just don’t make a lot of difference. That’s why I’m mad: it’s not that it’s not in demand, it’s that they’ve killed it!

There has been no other cross-platform solution that meets all the needs above. None. HootSuite came close, but it didn’t update fast, last I checked. TweetBot was supposedly industrial strength, but it was iOS only. And Twitter’s own solution doesn’t support columns. There literally wasn’t an alternative. Even TweetDeck on the web will ‘time out’ and you need to log in again. It’s a barrier to go into your password keeper, enter its password, navigate to the entry, get the Twitter password, and go back and log in. Particularly when you’re dashing to join a chat.

It appears TweetBot now has a Mac solution, so I’ll be checking that out.  Fingers crossed.

Games & Meaningful Interactivity

8 April 2013 by Clark 5 Comments

A colleague recently queried: “How would you support that Jeopardy type games (Quizzes, etc.) are not really games?”  And while I think I’ve discussed this before, I had a chance to noodle on it on a train trip.  I started diagramming, and came up with the following characterization.

I separated out two dimensions. The first is differentiating between knowledge and skills. I like how Van Merriënboer talks about the knowledge you need and the complex problems you apply that knowledge to. Here I’m separating ‘having’ knowledge from ‘using’ knowledge, focusing on application. And, no surprise, I’m very much on the side of using, or doing, not just knowing.

The second dimension is whether the learning is essentially very true to life, or exaggerated in some way.  Is it direct, or have we made some effort to make it engaging?

Now, for rote knowledge, if we’re contextualizing it, we’re making it more applied (e.g. moving to the skills side), so really what we have to do is use extrinsic motivation. We gamify knowledge tests (drill and kill) and make them into Jeopardy-style quiz shows. And while that’s useful in very limited circumstances, it is not what we (should) mean by a game. Flashy rote drill, using extrinsic motivation, is a fall-back, a tactic of last resort. We can do better.

What we should mean by a game is taking practice scenarios and focusing on ramping up the intrinsic motivation, turning the scenario into an engaging experience. We can use tools like exaggeration, humor, drama, and techniques from game design, literature, and more, to make that practice more meaningful. We align it with the learner’s interests (and vice versa), making the experience compelling.

Because, as the value chain suggests, tarting up rote knowledge (which is useful if that’s what we need, and sometimes it’s important, e.g. medical terminology) is better than not doing it, but not nearly as valuable as real practice via scenarios, and even better if we tune it into a meaningful experience. Too often we err on the side of knowledge instead of skills, because it’s easy, because we’re not getting what we need from the SME, because that’s what our tools do, etc., but we should be focusing on skills, because that’s what’s going to make a difference to our learners and ultimately our organizations.

What we should do is focus on making learners better able to do, moving to the skill side. Tarted-up quiz shows are not really games; they’re simplistic extrinsic response trainers. Real, serious games translate what Sid Meier said about games, “a series of interesting decisions”, into a meaningful experience: a series of important decisions. Practicing those is what will make the difference you care about.

Yes, you do have to change

18 March 2013 by Clark 22 Comments

Of late, I’ve seen a disturbing trend. Not only are the purveyors of existing solutions preaching caution and steadiness, but it even seems like some of the ‘names’ of the field are talking in ways that make it easy to think the industry is largely doing OK. And I do not understand this, because it’s demonstrably wrong. The elearning industry, and the broader learning industry, is severely underperforming its potential (and I’m being diplomatic).

We know what leads to effective learning outcomes. And we’ve known it for decades (just because MOOCs are new doesn’t mean their pedagogies are): clear models, annotated examples, and most importantly deep and meaningful practice focused on significant skill shifts (let alone addressing the emotional side of the equation). Learners need to perform, repeatedly, with guidance, over more and more complex contexts until they achieve the level of performance they need. However, that’s nowhere near what we’re seeing.

What we see are knowledge dumps and tests tarted up with trivial interactions. People will pass a test, but they will not have achieved the ability to affect any meaningful business outcomes. If it’s knowledge that performers need, create a job aid, not a ‘spray and pray’. And facilitate people in self-helping. As things get more complex and move faster, there’s no way everything can be kept up with by new course development, even if it were a useful approach, and mostly it’s not.

We’re even  measuring  the wrong things.  Cost per seat hour is secondary (at best).  That’s ‘fine-tuning’, not the core issue.  What’s primary is business impact.  Are you measurably improving key performance indicators as outcomes?

And that’s assuming courses are all the learning unit should be doing, but increasingly we recognize that courses are only a small proportion of what makes for important business outcomes, and increasingly we’re recognizing that the role needs to move from instructional designer to performance consultant. More emphasis can and should be on providing performance resources and facilitating useful interactions rather than creating courses. Think performance support first, and communities of practice, only resorting to courses as a last resort.

Tools that make it easy to turn PowerPoint presentations into page-turning content aren’t going to fix this, nor are tools that provide prettified drill-and-kill, nor ones that let you host and track courses. There are places for those, but they’re not the bulk of the opportunity, and shouldn’t be the dominant solutions we see. There’s so much more: deeply engaging scenarios and simulation-driven interactions on the formal side, powerful job aid tools for performance support (particularly mobile), coaching and mentoring as a better solution than courses in many (most) cases, performer-focused portals of tools, underlying powerful content management suites, and rich social environments to support performers making each other smarter and more effective.

I’m not quite sure why the easy tools dominate the expo halls, except perhaps because anyone can build them.  More worrisome is that it can let designers off the hook in terms of thinking deeper.  We need to focus first on rich outcomes, and put the tools secondary.

While the industry congratulates itself on how it makes use of the latest technology, the lack of impact is driving it toward irrelevancy. Learners tolerate the courses, at best. Operations groups and others are beginning to focus on the performance solutions available. Executives are beginning to hear the message that the old approach is a waste of resources.

Hiding your head in the sand isn’t going to cut it. The industry is going to have to change.  And that means you will have to change.  But you’re a professional in learning, right?  So lead the way.  The best way to change is to take that first step.

 

Daniel Coyle #LSCon Keynote Mindmap

14 March 2013 by Clark Leave a Comment

Daniel Coyle gave a wonderfully funny, passionate, and poignant keynote, talking about what leads to top performance. Naturally, I was thrilled to hear him tout the principles that I suggest make games such a powerful learning environment: challenge, tight feedback, and large amounts of engaging practice. With compelling stories to illustrate his points, he balanced humor and emotional impact to sell a powerful plea for better learning.

[Mindmap of Daniel Coyle’s keynote]

Games do teach

27 February 2013 by Clark 4 Comments

I think Ruth Clark provides a great service in presenting what the research says on elearning, starting with her highly recommended book eLearning and the Science of Instruction. So it’s hard to want to quibble, but she put out what I think is a somewhat irresponsible post on games with the provocative title “Why Games Don’t Teach“. So it’s only fair that I raise my objections, though the comments also do a great job of pointing out the problems.

As many have pointed out, the title is needlessly confrontational.  It’s patently obvious games teach, simply by trying a popular game yourself and realizing quickly that there’s no way you’re going to achieve a competitive level of play without substantial practice.  As Raph Koster’s fun and valuable book  A Theory of Fun for Game Design aptly points out, the reason games succeed is that they do require learning.

So the real point Ruth is making is that research doesn’t show the value of games for learning, and that there are no guidelines from research for design. And there, too, she’s wrong. As Karl Kapp (author of Gamification) points out in his thoughtful and comprehensive comment, there are quite a few studies demonstrating this value (and he further elaborates on a study Ruth cites, countering her point). As far back as the 80s, frankly, Lepper and Cordova had a study demonstrating improvement from a game version of a math practice application. The evidence is there.

What’s more insidious, as Koreen Olbrish points out in her comment, is that the definition of learning is left open. Unfortunately, what Ruth’s talking about seems to be rote memorization, by and large. And we do know that tarting up drill and kill makes it more palatable (although we need to be quite certain that the information really does have to be ‘in the head’ rather than able to be ‘in the world’). But I maintain that rote fact remembering isn’t what’s going to make an organization successful; it’s making better decisions, and that’s where games will shine.

Games, properly used, are powerful tools for meaningful practice. They’re not complete learning experiences, but next to mentored live practice, they’re the best bet going. And principles for design? Going further, I believe that there are sound principles for design (heck, I wrote a book about it). It starts with a laser focus on the objectives, and the important ways people go wrong, and then creating environments where learners exercise those skills, making just the decisions they need to be able to make, in a meaningful context.

Yes, it requires good design. And it’s essentially the same basics of good learning design as anywhere else, and more, not other. The problem with research, and I welcome more and a taxonomy, is that research tries to whittle things down into minute elements, and games are inherently complex, as are the decisions they’re training. There are long-term projects to design environments and conduct the small elements of research, but we have good principles now, and can and should use a design-based research approach.

Overall, I think that it’s safe to say that:

  • games can and do teach
  • we have good principles on how to design them
  • and that more research wouldn’t be bad

However, I think the article really only makes the last point, and I think that’s a disservice. Your mileage may vary.
