Learnlets


Clark Quinn’s Learnings about Learning

Integrating Meta-learning

29 May 2013 by Clark 2 Comments

There’s much talk about 21st Century skills, and rightly so: these skills are the necessary differentiators for individuals and organizations going forward.  If they’re important, how do we incorporate them into systems, and track them?  You can’t address them in a vacuum; they can only be brought out in the context of other topics.  We can integrate them by hand, and individually assess them, but how do we address them in a technology-enabled world?  In the context of a project, here’s where my thinking is going:

[MetaLearning tagging diagram]

First, you have some domain activity you are having the learner engage in. It might be something in math, science, social studies, whatever (though ideally focused on applied knowledge). Then you give them an assignment, and it might have a number of characteristics: it might be social, e.g. working with others, or involve problem solving. You could choose many characteristics that the task entails, e.g. from the SCANS competencies (using information technology, reasoning).  That task is labeled with tags associated with the required competencies, and tracked via SCORM or, more appropriately, the Experience API.  A task may entail more than two, but we’ll stick with that model here.
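
As a rough sketch of how such tagging might be tracked, here’s what a competency-tagged statement could look like via the Experience API. The verb and activity type are standard ADL vocabulary, but the competency tag URIs, learner, and task are hypothetical placeholders, not a prescribed scheme.

```python
import json

# A minimal xAPI-style statement for a domain task tagged with two
# competencies (hypothetical URIs). In practice this would be POSTed
# to an LRS's /statements endpoint with authentication.
statement = {
    "actor": {"name": "Pat Learner", "mbox": "mailto:pat@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/ecosystem-design-task",
        "definition": {
            "name": {"en-US": "Design a balanced ecosystem"},
            "type": "http://adlnet.gov/expapi/activities/assessment",
        },
    },
    "result": {"success": True, "score": {"scaled": 0.8}},
    "context": {
        # One option: carry the competency tags as context activities.
        "contextActivities": {
            "category": [
                {"id": "http://example.com/competencies/problem-solving"},
                {"id": "http://example.com/competencies/working-with-others"},
            ]
        }
    },
}

print(json.dumps(statement, indent=2))
```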

[MetaLearning structure diagram]

So, when we then look across the topics the learner is engaging in, and the characteristics of the assignments, we can look for patterns across competencies. Is there a particular competency where the learner is struggling, or excelling?  It’s somewhat indirect, but it’s at least one way of systematically embedding meta-learning skills and tracking them.  And that’s a lot better than we’re doing now.
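
To make that pattern-spotting concrete, here’s a rough sketch of aggregating tagged results by competency and flagging ones that look troubling or strong. The data shape and thresholds are assumptions for illustration only.

```python
from collections import defaultdict

def competency_patterns(results, low=0.5, high=0.85):
    """Group task outcomes by competency tag and flag outliers.

    `results` is assumed to be a list of (competency_tags, scaled_score)
    pairs pulled from tracked statements; thresholds are illustrative.
    """
    by_competency = defaultdict(list)
    for tags, score in results:
        for tag in tags:
            by_competency[tag].append(score)

    report = {}
    for tag, scores in by_competency.items():
        avg = sum(scores) / len(scores)
        status = "troubling" if avg < low else "excelling" if avg > high else "on track"
        report[tag] = (round(avg, 2), status)
    return report

# Outcomes from assignments across several domains:
results = [
    (["problem-solving", "working-with-others"], 0.8),
    (["problem-solving"], 0.4),
    (["using-information-technology"], 0.9),
    (["problem-solving", "reasoning"], 0.45),
]
print(competency_patterns(results))
```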

Remember the old educational computer games that said ‘develops problem solving skills’?  That was misleading. Most of those games ‘required’ problem-solving skills, but no real development of said skills was embedded.  A skilled parent or teacher could raise discussion across the problems, but most of the games didn’t.  But they could. Moreover, additional 21C resources could be made available for the assignments that required them, and there could be programmatic or mentor intervention to develop these skills.

We need to specifically address meta-learning, and with technology we can get evidence.  And we should.  Now, my two questions are: does the concept make sense?  And does the diagram communicate it?

Extending Learning

23 May 2013 by Clark 9 Comments

At the just concluded ASTD International Conference and Exhibition, on exhibit were, finally, two instances of something that should’ve been obvious. And I’m not alone in having waited.

[Spaced practice diagram]

Several years ago, Dr. Will Thalheimer was touting a ‘learning follow-on’ solution, a mechanism to continue to reactivate knowledge after a learning experience. He’s talked about the spacing effect (even providing the basis for a diagram in Designing mLearning), drawing upon his experience as one of our best proponents of evidence-based learning design. We know that reactivation leads to better outcomes, whether seeing a re-representation of the concept, a new example (ideally in another context), or, most usefully, having more practice. I’m not aware of how the solution he was touting at the time fared, but as we really haven’t seen any significant awareness raising, I’m not optimistic.

However, at the conference were two separate examples of such systems. They worked differently, but that they exist at all is a positive outcome. Both used diagrams (e.g. the Ebbinghaus forgetting curve) to show how memory fades over time, and aptly so: the problem is real. If we just use the traditional event model, the learning is likely to be gone a few days later if it’s not immediately put into action, and immediate application doesn’t characterize many of our learning outcomes.

The solutions were different, of course. One used mobile technology to provide reminders and access to content. The other used the web. Both basically provided the same opportunity. I didn’t evaluate the relative costs, ease of integration, etc., but having such a capability is great. It’s something that folks could arrange for themselves, but as yet I haven’t really seen it, at least not in a systematic way.
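
As a sketch of the sort of thing folks could arrange for themselves, here’s one way an expanding reactivation schedule might be generated after a formal learning event. The specific intervals are illustrative, not a claim about what either vendor’s product does.

```python
from datetime import date, timedelta

def reactivation_schedule(event_end, intervals_days=(2, 7, 21, 60)):
    """Return dates on which to reactivate learning after a formal event.

    The expanding intervals are illustrative defaults; real spacing would
    be tuned to how critical the skill is and how quickly it decays.
    """
    return [event_end + timedelta(days=d) for d in intervals_days]

# Queue a prompt for each date: a re-representation of the concept,
# a new example in another context, or (ideally) more practice.
for when in reactivation_schedule(date(2013, 5, 23)):
    print(f"Send reactivation prompt on {when.isoformat()}")
```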

They’re still separate solutions, not integrated, but it’s reason for hope. It’s surprising no one’s baked it into their LMS, but there you go. At least we’re seeing the beginning of awareness, and hopefully we’ll get more.

Assessing online assessments

9 May 2013 by Clark 3 Comments

Good formal learning consists of an engaging introduction, rich presentation of concepts, annotated examples, and meaningful practice, all aligned on cognitive skills. As we start seeing user-generated online content, publishers and online schools are feeling the pressure. Particularly as MOOCs come into play, with (decreasingly) government-funded institutions giving away online content and courses for free. Are we seeing the demise of for-profit institutions and publishers?

I will suggest that there’s one thing that is harder to get out of the user-generated content environment, and that’s meaningful practice. I recall hearing of, but haven’t yet seen, a seriously threatening repository of such. Yes, there are learning object repositories, but they’re not yet populated with a rich suite of contextualized practice.

Writing good assessments is hard. Principles of good practice include meaningful decisions, alternatives that represent reliable misconceptions, relevant contexts, believable dialog, and more. They must be aligned to the objectives, and ideally have an increasing level of challenge.
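
One way to operationalize several of those principles is to have each alternative carry the misconception it represents, so feedback can address the underlying thinking and the item stays aligned to its objective. The structure below is a hypothetical sketch, not a standard format.

```python
# Hypothetical structure for a scenario-based item: each wrong alternative
# maps to a reliable misconception, enabling targeted feedback.
item = {
    "objective": "Choose an appropriate strategy for retaining new skills",
    "context": "Your team just finished a formal workshop; retention matters.",
    "stem": "What do you schedule next?",
    "options": [
        {"text": "Nothing; the workshop covered it", "correct": False,
         "misconception": "A single exposure is enough for retention"},
        {"text": "One massed review session tomorrow", "correct": False,
         "misconception": "Massed practice beats spaced practice"},
        {"text": "Short practice at expanding intervals", "correct": True,
         "feedback": "Spaced reactivation counters the forgetting curve"},
    ],
}
```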

There are some technical issues as well. Extensions that are high value include problem generators and randomizing the order of options (challenging attempts to ‘game’ the assessment). A greater variety of response options for novelty isn’t bad either, and auto-marking is desirable for at least a subset of assessments.
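
Here’s a rough sketch of those two extensions, option randomization and a problem generator, as a generic illustration not tied to any particular authoring tool.

```python
import random

def shuffled_options(options, rng=random):
    """Return a shuffled copy of the answer options, so learners can't
    game the assessment by memorizing option positions."""
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

def generate_rate_problem(rng=random):
    """Tiny problem generator: parameterized variants with an answer key
    so responses can be auto-marked."""
    distance = rng.randint(60, 300)  # km
    hours = rng.randint(2, 6)
    return {
        "stem": f"A delivery van covers {distance} km in {hours} hours. "
                f"What is its average speed in km/h?",
        "answer": round(distance / hours, 1),
    }

print(shuffled_options(["A", "B", "C", "D"]))
print(generate_rate_problem())
```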

I don’t want to preclude essays or other interpretive work like presentations or media content, but they are likely to require human evaluation, even with peer marking. Writing evaluation rubrics is also a challenge for untrained designers or experts.

While SMEs can write content and even examples (if they get pedagogical principles and are in touch with the underlying thinking), writing good assessments is another matter.

I’ve an inkling that writing meaningful assessments, particularly leveraging interactive technology like immersive simulation games, is an area where skills are still going to be needed. Aligning and evaluating the assessment, and providing scrutable justification of the assessment’s attributes (e.g. for accreditation), is going to continue to be a role for some time.

We may need to move accreditation from knowledge to skills (a current problem in many accreditation bodies), but I think we need, and can have, a better process for determining, developing, and assessing certain core skills, particularly so-called 21st century skills. I think there will continue to be a role for doing so, even if we make it possible to develop the necessary understanding in any way the learner chooses.

As is not unusual, I’m thinking out loud, so I welcome your thoughts and feedback.

Designing Higher Learning

29 April 2013 by Clark 6 Comments

I’ve been thinking a lot about the higher education situation, specifically for-profit universities. One of the things I see is that somehow no one’s really addressing the quality of the learning experience, and it seems like a huge blind spot.

I realize that in many cases they’re caught between a rock and a hard place. They want to keep costs down, and they’re heavily scrutinized.  Consequently, they worry very much about having the right content.  It’s vetted by Subject Matter Experts (SMEs), and has to be produced in a way that, increasingly, can serve face-to-face (F2F) or online delivery.  And I think a big opportunity is missed.  Even if they’re buying content from publishers, they are focused on content, not experience: both the experience for the learner, and the development of the learner’s transferable and long-term skills.

First, SMEs can’t really tell you what learners need to be able to do. One of the side effects of expertise is that it gets compiled away, inaccessible to conscious access.  Either SMEs make up what they think they do (which has little correlation with reality) or they resort to what they had to learn. Neither is a likely source of meaningful learning.

Even if you have an instructional designer in the equation, the likelihood that they’re knowledgeable enough and confident enough to work with SMEs to get the real outcomes/objectives is slim.  Then, they also have to get the engagement right.  Social engagement can go a good way to enriching this, but it has to be around meaningful tasks.

And, what with the scrutiny, it takes a strong case to argue to the accrediting agencies that you’ve gone beyond what SMEs tell you to what’s really needed. It sounds good, but it’s a hard argument to make to an organization that’s been doing things a particular way for a long time.

Yet, these institutions also struggle with retention of students.  The learners don’t find the experience relevant or engaging, and leave.  If you took the real activity, made it meaningful in the right way, learners would be both more engaged and have better outcomes, but it’s a hard story to comprehend, and perhaps harder yet to implement.

Yet I will maintain that it’s both doable and necessary.  I think that the institution that grasps this, focuses on a killer learning experience, goes the extra mile for learner success (analytics is proving to be a big help here), and develops students as learners (e.g., meta-learning skills) as well as performers, is going to have a defensible differentiator.

But then, I’m an optimist.

Sachs’ Winning the Story Wars

17 April 2013 by Clark 1 Comment

On a recommendation, I’ve been reading Jonah Sachs’ Winning the Story Wars.  While it’s ostensibly about marketing/advertising, which interests me not, I was intrigued by the possibility of understanding stories from a different perspective.  I was surprised to find that it offered much more.

The book does cover the history of advertising, going through some classic examples of old-style advertising, and using some surprisingly successful examples to elicit a new model.  Some personal stories and revelations make this more than a conceptual treatise.

The core premise  is turning your customer into a potential hero of an important journey.  You play the role of the mentor, providing the magic aid for them to accomplish a goal that they know they need, but for a variety of reasons may have avoided.  The journey is motivated from core values, a feature that resonates nicely with my personal quest for using technology to facilitate wisdom.

The book also provides, as one of the benefits, a nice overview of story, particularly the hero’s journey as synthesized by Joseph Campbell across many cultures and time periods.  If you find Campbell a tough read, as many do, this is a nicely digested version.  It talks in sensible ways about the resistance, and trials, and ultimate confrontation.

The obvious focus is on a new way to build your brand, tapping into higher purpose rather than the more negative fears of inadequacy.  So this book is valuable for those looking to market in a higher way.  And I do intend to rethink the Quinnovation site as a consequence.  But I suggest there’s more.

The notion of the individual being offered the opportunity to play a transformative role seems to be a useful framing for learning. We can, and should, be putting learners in meaningful practice roles, and those roles can come from learners’ deep motivators.  One of the heuristics in learning game design is Henry Jenkins’ “put the player in a role they’d like to be in”.  This provides a deeper grounding: put the learner in a role they aspire to be in.

I think this book provides not only practical marketing advice, but also guidance for personal journeys and learning.  I find the perspective of designing stories and roles based on personal values to be a great opportunity to do better design. I haven’t completely finished it yet, but I’ve already found enough value in the majority of it to recommend it to you.

Increasing our responsibility

9 April 2013 by Clark 4 Comments

[Informal learning diagram]

I ranted a couple of weeks ago about how we need to move out of our complacency and make a positive change.  As I sometimes do, I stumbled upon a diagram that characterizes the type of change I think we need to be considering.

The perspective riffs off the concept that the relative value of formal versus informal learning methods shifts as performers move from novice to expert. (And, as I’ve previously noted, what’s considered in/formal changes depending on whether you’re the performer or the designer.)  And, too often, we tend to restrict our interventions to the formal side, yet there are lots of things we can be doing on the informal side.

[Informal learning L&D roles diagram]

Largely, however, I see learning and development (L&D) groups focusing exclusively on novices or beginning practitioners, and leaving practitioners and experts on their own.  Even when they address these more advanced audiences, they tend to use the ‘course’ as the vehicle, when it’s not really necessary.  These audiences know what they need to know, and just want that useful information; they don’t need the full preparation that novices do.  Novices don’t know what they need to know nor why it’s important, so we provide all that in a course model.  We can be much more telegraphic with advanced performers, and the value of social networks starts kicking in here too.

The point I’m trying to make is that we can, and should, take responsibility for the rest of the performers. We  can assist their performance, hence the term we’ve been preferring in the Internet Time Alliance:  performance consultant.  This implies facilitating performance across the organizational roles, top to bottom and from beginner to expert.

I’d like to suggest that L&D groups need to become focused on facilitating organizational performance, which includes but is not limited to training.  It’s going to benefit the organization, it’s going to lead to greater strategic contributions and associated value, and it’s an approach that will likely preclude a long slow march to irrelevance and extinction.  Better the folks that understand how we learn and perform (and if you don’t, what are you waiting for?) take responsibility than having it devolve by default to business units and/or IT, eh?

#itashare

Games & Meaningful Interactivity

8 April 2013 by Clark 5 Comments

A colleague recently queried: “How would you support that Jeopardy type games (Quizzes, etc.) are not really games?”  And while I think I’ve discussed this before, I had a chance to noodle on it on a train trip.  I started diagramming, and came up with the following characterization.

[Game spaces diagram]

I separated out two dimensions. The first is differentiating between knowledge and skills.  I like how Van Merriënboer talks about the knowledge you need and the complex problems you apply that knowledge to.  Here I’m separating ‘having’ knowledge from ‘using’ knowledge, focusing on application.  And, no surprise, I’m very much on the side of using, or doing, not just knowing.

The second dimension is whether the learning is essentially very true to life, or exaggerated in some way.  Is it direct, or have we made some effort to make it engaging?

Now, for rote knowledge, if we’re contextualizing it, we’re making it more applied (e.g. moving to the skills side), so really all we have left is to use extrinsic motivation.  We gamify knowledge tests (drill and kill) and make them into Jeopardy-style quiz shows.  And while that’s useful in very limited circumstances, it is not what we (should) mean by a game.  Flashy rote drill, using extrinsic motivation, is a fallback, a tactic of last resort.  We can do better.

What we should mean by a game is taking practice scenarios and focusing on ramping up the intrinsic motivation, tuning the scenario into an engaging experience.  We can use tools like exaggeration, humor, drama, and techniques from game design, literature, and more, to make that practice more meaningful.  We align it with the learners’ interests (and vice versa), making the experience compelling.

Because, as the value chain suggests, tarting up rote knowledge (which is useful if that’s what we need, and sometimes it’s important, e.g. medical terminology) is better than nothing, but not nearly as valuable as real practice via scenarios, and even better if we tune it into a meaningful experience.  Too often we err on the side of knowledge instead of skills, because it’s easy, because we’re not getting what we need from the SME, because that’s what our tools do, etc., but we should be focusing on skills, because that’s what’s going to make a difference to our learners and ultimately our organizations.

What we should be focusing on is making learners better able to do, moving to the skill side. Tarted-up quiz shows are not really games; they’re simplistic extrinsic response trainers.  Real, serious games translate what Sid Meier said about games – “a series of interesting decisions” – into a meaningful experience: a series of important decisions.  Practicing those is what will make the difference you care about.

Signs of hope?

21 March 2013 by Clark Leave a Comment

Attending the SolutionsFest at the Learning Solutions conference last week, despite my earlier rant, I saw signs of hope.  A quick spot check revealed a number of approaches going above and beyond.

One direction I was pleased to see was a move to more performance support. I saw several solutions that were more focused on providing information as needed, or letting you  navigate to it, rather than thinking everything had to be ‘in the head’. This is definitely a promising sign.  They’re not hard to build, either.

The second promising sign was the use of scenarios. Several different solutions were focused on putting learners into contexts and asking them to perform. This is definitely the direction we need to see more of.  And, again, it’s not that hard to do!

One interesting takeaway was that the innovative solutions seemed to come more from small or internal groups rather than the big teams.  Which only reinforces my overall concern with the industry as a whole.  I wonder if it’s easier for small teams to adapt to advice of folks like Michael Allen (no more boring elearning), Julie Dirksen (Design for How People Learn) and Will Thalheimer than it is for big teams, who not only have to change processes but also educate their customers.

This is an unscientific sample; I did a quick tour of the displays, but couldn’t see all as there were some that were just too crowded.  I also looked at them relatively briefly and didn’t make comprehensive notes, so this is just a read of my state of mind as I finished.  It doesn’t ameliorate the overall concern, but it does provide some hope that things are changing in small pockets.

Yes, you do have to change

18 March 2013 by Clark 22 Comments

Of late, I’ve seen a disturbing trend.  Not only are the purveyors of existing solutions preaching caution and steadiness, but it even seems like some of the ‘names’ of the field are talking in ways that make it easy to think that the industry is largely doing ok.  And I do not understand this, because it’s demonstrably wrong.  The elearning industry, and the broader learning industry, is severely underperforming its potential (and I’m being diplomatic).

We know what leads to effective learning outcomes.  And we’ve known it for decades (just because MOOCs are new doesn’t mean their pedagogies are): clear models, annotated examples, and most importantly deep and meaningful practice focused on significant skill shifts (let alone addressing the emotional side of the equation).  Learners need to perform, repeatedly, with guidance, over more and more complex contexts until they achieve the level of performance they need.  However, that’s nowhere near what we’re seeing.

What we see is knowledge dump/test tarted up with trivial interactions.  People will pass a test, but they will not have achieved the ability to affect any meaningful business outcomes.  If it’s knowledge that performers need, create a job aid, not a ‘spray and pray’.  And facilitate people in helping themselves.  As things get more complex and move faster, there’s no way everything can be kept up with by new course development, even if it were a useful approach, and mostly it’s not.

We’re even  measuring  the wrong things.  Cost per seat hour is secondary (at best).  That’s ‘fine-tuning’, not the core issue.  What’s primary is business impact.  Are you measurably improving key performance indicators as outcomes?

And that’s assuming courses are all the learning unit should be doing, but increasingly we recognize that they’re only a small proportion of what drives important business outcomes, and that the role needs to move from instructional designer to performance consultant.  More emphasis can and should be on providing performance resources and facilitating useful interactions rather than creating courses.  Think performance support first, then communities of practice, only resorting to courses as a last resort.

Tools that make it easy to turn PowerPoint presentations into page-turning content aren’t going to fix this, nor are tools that provide prettified drill-and-kill, nor ones that let you host and track courses.  There are places for those, but they’re not the bulk of the opportunity, and they shouldn’t be the dominant solutions we see.  There’s so much more: deeply engaging scenarios and simulation-driven interactions on the formal side, powerful job aid tools for performance support (particularly mobile), coaching and mentoring as a better solution than courses in many (most) cases, performer-focused portals of tools, underlying powerful content management suites, and rich social environments to support performers making each other smarter and more effective.

I’m not quite sure why the easy tools dominate the expo halls, except perhaps because anyone can build them.  More worrisome is that it can let designers off the hook in terms of thinking deeper.  We need to focus first on rich outcomes, and put the tools secondary.

While the industry congratulates itself on how it makes use of the latest technology, the lack of impact is driving it toward irrelevancy.  Learners tolerate the courses, at best.  Operations groups and others are beginning to focus on the performance solutions available.  Executives are beginning to hear the message that the old approach is a waste of resources.

Hiding your head in the sand isn’t going to cut it. The industry is going to have to change.  And that means you will have to change.  But you’re a professional in learning, right?  So lead the way.  The best way to change is to take that first step.


Daniel Coyle #LSCon Keynote Mindmap

14 March 2013 by Clark Leave a Comment

Daniel Coyle gave a wonderfully funny, passionate, and poignant keynote, talking about what leads to top performance. Naturally, I was thrilled to hear him tout the principles that I suggest make games such a powerful learning environment: challenge, tight feedback, and large amounts of engaging practice. With compelling stories to illustrate his points, he balanced humor and emotional impact to sell a powerful plea for better learning.

[Keynote mindmap image]
